WorldWideScience

Sample records for ratios remain approximately

  1. A test of the mean density approximation for Lennard-Jones mixtures with large size ratios

    International Nuclear Information System (INIS)

    Ely, J.F.

    1986-01-01

The mean density approximation for mixture radial distribution functions plays a central role in modern corresponding-states theories. This approximation is reasonably accurate for systems that do not differ widely in size and energy ratios and which are nearly equimolar. As the size ratio increases, however, or if one approaches an infinite dilution of one of the components, the approximation becomes progressively worse, especially for the small molecule pair. In an attempt to better understand and improve this approximation, isothermal molecular dynamics simulations have been performed on a series of Lennard-Jones mixtures. Thermodynamic properties, including the mixture radial distribution functions, have been obtained at seven compositions ranging from 5 to 95 mol%. In all cases the size ratio was fixed at two, and three energy ratios were investigated, ε₂₂/ε₁₁ = 0.5, 1.0, and 1.5. The results of the simulations are compared with the mean density approximation, and a modification to integrals evaluated with the mean density approximation is proposed.

  2. On the Approximation Ratio of Lempel-Ziv Parsing

    DEFF Research Database (Denmark)

    Gagie, Travis; Navarro, Gonzalo; Prezza, Nicola

    2018-01-01

    in the text. Since computing b is NP-complete, a popular gold standard is z, the number of phrases in the Lempel-Ziv parse of the text, where phrases can be copied only from the left. While z can be computed in linear time, almost nothing has been known for decades about its approximation ratio with respect...
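
For intuition, the left-copy parse counted by z can be sketched in a few lines. This quadratic-time toy uses a non-self-referential variant (each phrase must occur entirely in the already-parsed prefix, or be a single new character); it is an illustration only, since production parsers achieve linear time with suffix structures:

```python
def lz_parse(text: str) -> list[str]:
    """Greedy left-to-right LZ parse (non-self-referential variant):
    each phrase is the longest substring starting at i that already
    occurs in text[:i], or a single new character if none does."""
    phrases, i = [], 0
    while i < len(text):
        length = 0
        # Extend the phrase while a copy of it exists to the left.
        while i + length < len(text) and text[i:i + length + 1] in text[:i]:
            length += 1
        phrases.append(text[i:i + max(length, 1)])
        i += max(length, 1)
    return phrases
```

For "abababab" this yields the four phrases a, b, ab, abab, so z = 4 under this variant.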

  3. Portfolio Optimization under Local-Stochastic Volatility: Coefficient Taylor Series Approximations & Implied Sharpe Ratio

    OpenAIRE

    Lorig, Matthew; Sircar, Ronnie

    2015-01-01

We study the finite horizon Merton portfolio optimization problem in a general local-stochastic volatility setting. Using model coefficient expansion techniques, we derive approximations for both the value function and the optimal investment strategy. We also analyze the `implied Sharpe ratio' and derive a series approximation for this quantity. The zeroth-order approximation of the value function and optimal investment strategy correspond to those obtained by Merton (1969) when the risky...
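
As a reference point for the zeroth-order term mentioned above, Merton's (1969) classical constant-volatility solution for CRRA (power) utility puts a constant fraction of wealth in the risky asset; a minimal sketch, with purely illustrative parameter values in the usage note:

```python
def merton_fraction(mu: float, r: float, sigma: float, gamma: float) -> float:
    """Merton (1969) optimal constant fraction of wealth in the risky
    asset: pi* = (mu - r) / (gamma * sigma**2), for drift mu, risk-free
    rate r, volatility sigma and relative risk aversion gamma."""
    return (mu - r) / (gamma * sigma ** 2)

def sharpe_ratio(mu: float, r: float, sigma: float) -> float:
    """Constant-parameter Sharpe ratio (mu - r) / sigma, which is what
    the 'implied Sharpe ratio' expansion reduces to at zeroth order."""
    return (mu - r) / sigma
```

For example, mu = 8%, r = 2%, sigma = 20% and gamma = 2 give pi* = 0.75, i.e. 75% of wealth in the risky asset.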

  4. Does the instantaneous wave-free ratio approximate the fractional flow reserve?

    NARCIS (Netherlands)

    Johnson, Nils P.; Kirkeeide, Richard L.; Asrress, Kaleab N.; Fearon, William F.; Lockie, Timothy; Marques, Koen M. J.; Pyxaras, Stylianos A.; Rolandi, M. Cristina; van 't Veer, Marcel; de Bruyne, Bernard; Piek, Jan J.; Pijls, Nico H. J.; Redwood, Simon; Siebes, Maria; Spaan, Jos A. E.; Gould, K. Lance

    2013-01-01

This study sought to examine the clinical performance of and theoretical basis for the instantaneous wave-free ratio (iFR) approximation to the fractional flow reserve (FFR). Recent work has proposed iFR as a vasodilation-free alternative to FFR for making mechanical revascularization decisions. Its...

  5. Approximated neutronic calculation for the tritium breeding ratio in fusion reactor blankets

    International Nuclear Information System (INIS)

    Santos, Raul dos

    1983-01-01

An approximated model for the calculation of the tritium breeding ratio in conceptual thermonuclear fusion reactor blankets is presented. This model makes use of the exponential absorption concept due to the Li-6(n,He-4)T and Li-7(n,n'He-4)T reactions. The results of this approximated method are compared with reference benchmarks which were generated by the nuclear codes ANISN (discrete ordinates) and MORSE (Monte Carlo method). The maximum deviation among the results has been around 10%. (Author) [pt]
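
The exponential-absorption idea can be illustrated with a one-group, slab-geometry sketch; the cross-section values in the usage note are invented for illustration, and the actual model resolves the two lithium reactions separately:

```python
from math import exp

def li6_tritium_fraction(sigma_li6: float, sigma_other: float,
                         thickness: float) -> float:
    """One-group exponential-absorption sketch: a neutron entering a
    slab blanket is attenuated as exp(-Sigma_t * x); of the neutrons
    absorbed over the slab thickness, the share producing tritium via
    Li-6(n,He-4)T is Li-6's share of the total macroscopic cross
    section.  Macroscopic cross sections in 1/cm, thickness in cm."""
    sigma_t = sigma_li6 + sigma_other
    absorbed = 1.0 - exp(-sigma_t * thickness)
    return absorbed * sigma_li6 / sigma_t
```

In a very thick blanket the absorbed fraction approaches 1, so the tritium yield per incident neutron approaches sigma_li6 / sigma_t.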

  6. Approximate Waveforms for Extreme-Mass-Ratio Inspirals: The Chimera Scheme

    International Nuclear Information System (INIS)

    Sopuerta, Carlos F; Yunes, Nicolás

    2012-01-01

We describe a new kludge scheme to model the dynamics of generic extreme-mass-ratio inspirals (EMRIs; stellar compact objects spiraling into a spinning supermassive black hole) and their gravitational-wave emission. The Chimera scheme is a hybrid method that combines tools from different approximation techniques in General Relativity: (i) A multipolar, post-Minkowskian expansion for the far-zone metric perturbation (the gravitational waveforms) and for the local prescription of the self-force; (ii) a post-Newtonian expansion for the computation of the multipole moments in terms of the trajectories; and (iii) a BH perturbation theory expansion when treating the trajectories as a sequence of self-adjusting Kerr geodesics. The EMRI trajectory is made out of Kerr geodesic fragments joined via the method of osculating elements as dictated by the multipolar post-Minkowskian radiation-reaction prescription. We implemented the proper coordinate mapping between Boyer-Lindquist coordinates, associated with the Kerr geodesics, and harmonic coordinates, associated with the multipolar post-Minkowskian decomposition. The Chimera scheme is thus a combination of approximations that can be used to model generic inspirals of systems with extreme to intermediate mass ratios, and hence, it can provide valuable information for future space-based gravitational-wave observatories, like LISA, and even for advanced ground detectors. The local character in time of our multipolar post-Minkowskian self-force makes this scheme amenable to the study of the possible appearance of transient resonances in generic inspirals.

  7. On badly approximable complex numbers

    DEFF Research Database (Denmark)

    Esdahl-Schou, Rune; Kristensen, S.

    We show that the set of complex numbers which are badly approximable by ratios of elements of , where has maximal Hausdorff dimension. In addition, the intersection of these sets is shown to have maximal dimension. The results remain true when the sets in question are intersected with a suitably...

  8. Using Approximate Bayesian Computation to infer sex ratios from acoustic data.

    Science.gov (United States)

    Lehnen, Lisa; Schorcht, Wigbert; Karst, Inken; Biedermann, Martin; Kerth, Gerald; Puechmaille, Sebastien J

    2018-01-01

    Population sex ratios are of high ecological relevance, but are challenging to determine in species lacking conspicuous external cues indicating their sex. Acoustic sexing is an option if vocalizations differ between sexes, but is precluded by overlapping distributions of the values of male and female vocalizations in many species. A method allowing the inference of sex ratios despite such an overlap will therefore greatly increase the information extractable from acoustic data. To meet this demand, we developed a novel approach using Approximate Bayesian Computation (ABC) to infer the sex ratio of populations from acoustic data. Additionally, parameters characterizing the male and female distribution of acoustic values (mean and standard deviation) are inferred. This information is then used to probabilistically assign a sex to a single acoustic signal. We furthermore develop a simpler means of sex ratio estimation based on the exclusion of calls from the overlap zone. Applying our methods to simulated data demonstrates that sex ratio and acoustic parameter characteristics of males and females are reliably inferred by the ABC approach. Applying both the ABC and the exclusion method to empirical datasets (echolocation calls recorded in colonies of lesser horseshoe bats, Rhinolophus hipposideros) provides similar sex ratios as molecular sexing. Our methods aim to facilitate evidence-based conservation, and to benefit scientists investigating ecological or conservation questions related to sex- or group specific behaviour across a wide range of organisms emitting acoustic signals. The developed methodology is non-invasive, low-cost and time-efficient, thus allowing the study of many sites and individuals. We provide an R-script for the easy application of the method and discuss potential future extensions and fields of applications. The script can be easily adapted to account for numerous biological systems by adjusting the type and number of groups to be
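
The core of the ABC approach can be sketched with rejection sampling. Everything below (the call-frequency means and standard deviations, the simulated "observed" data and the tolerance) is an invented toy setup, not the parameters of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'observed' data: 500 calls from a population whose true male
# proportion is 0.3; male and female call frequencies (kHz) overlap.
true_p, n_calls = 0.3, 500
is_male = rng.random(n_calls) < true_p
observed = np.where(is_male,
                    rng.normal(108.0, 1.5, n_calls),   # male calls
                    rng.normal(111.0, 1.5, n_calls))   # female calls
obs_stats = np.array([observed.mean(), observed.std()])

# ABC rejection: draw a candidate sex ratio from the prior, simulate a
# dataset under it, and accept the draw if the simulated summary
# statistics land close to the observed ones.
accepted = []
for _ in range(10000):
    p = rng.random()                          # prior: Uniform(0, 1)
    m = rng.random(n_calls) < p
    sim = np.where(m,
                   rng.normal(108.0, 1.5, n_calls),
                   rng.normal(111.0, 1.5, n_calls))
    sim_stats = np.array([sim.mean(), sim.std()])
    if np.linalg.norm(sim_stats - obs_stats) < 0.1:   # tolerance
        accepted.append(p)

# The accepted draws approximate the posterior of the sex ratio.
posterior_mean = float(np.mean(accepted))
```

With this setup the posterior mean lands near the true proportion of 0.3; shrinking the tolerance sharpens the posterior at the cost of the acceptance rate.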

  9. Log-Likelihood Ratio Calculation for Iterative Decoding on Rayleigh Fading Channels Using Padé Approximation

    Directory of Open Access Journals (Sweden)

    Gou Hosoya

    2013-01-01

Approximate calculation of the channel log-likelihood ratio (LLR) for wireless channels using Padé approximation is presented. The LLR is used as an input to iterative decoding for powerful error-correcting codes such as low-density parity-check (LDPC) codes or turbo codes. Due to the lack of knowledge of the channel state information of a wireless fading channel, such as an uncorrelated flat Rayleigh fading channel, calculation of the exact LLR for these channels is quite complicated for a practical implementation. Previous work, an LLR calculation using the Taylor approximation, quickly becomes inaccurate as the channel output moves away from the expansion point. This becomes a significant problem when a higher-order modulation scheme is employed. To overcome this problem, a new LLR approximation using Padé approximation, which expresses the original function as a ratio of two polynomials with the same total number of coefficients as the Taylor series and can accelerate the Taylor approximation, is devised. By applying the proposed approximation to iterative decoding of LDPC codes with several modulation schemes, we show the effectiveness of the proposed methods by simulation results and analysis based on density evolution.
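
The idea of replacing a Taylor polynomial by a rational function with the same total number of coefficients can be seen on a toy example: the [2/2] Padé approximant of exp(x) matches the order-4 Taylor polynomial's coefficient budget but is more accurate near the expansion point. Here exp(x) is only a stand-in; the paper builds its approximant for the actual channel LLR function.

```python
from math import exp

def taylor_exp(x: float, n: int = 4) -> float:
    """Order-n Taylor polynomial of exp(x) about x = 0."""
    term = total = 1.0
    for k in range(1, n + 1):
        term *= x / k
        total += term
    return total

def pade_exp_22(x: float) -> float:
    """[2/2] Pade approximant of exp(x): a ratio of two quadratics,
    five coefficients in total, matching the Taylor series to order 4."""
    return (1 + x / 2 + x * x / 12) / (1 - x / 2 + x * x / 12)
```

At x = 1 the Padé form is more than twice as accurate as the order-4 Taylor polynomial, despite using the same number of coefficients.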

  10. Errors due to the cylindrical cell approximation in lattice calculations

    Energy Technology Data Exchange (ETDEWEB)

    Newmarch, D A [Reactor Development Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1960-06-15

    It is shown that serious errors in fine structure calculations may arise through the use of the cylindrical cell approximation together with transport theory methods. The effect of this approximation is to overestimate the ratio of the flux in the moderator to the flux in the fuel. It is demonstrated that the use of the cylindrical cell approximation gives a flux in the moderator which is considerably higher than in the fuel, even when the cell dimensions in units of mean free path tend to zero; whereas, for the case of real cells (e.g. square or hexagonal), the flux ratio must tend to unity. It is also shown that, for cylindrical cells of any size, the ratio of the flux in the moderator to flux in the fuel tends to infinity as the total neutron cross section in the moderator tends to zero; whereas the ratio remains finite for real cells. (author)

  11. Novel surgical performance evaluation approximates Standardized Incidence Ratio with high accuracy at simple means.

    Science.gov (United States)

    Gabbay, Itay E; Gabbay, Uri

    2013-01-01

Excess adverse events may be attributable to poor surgical performance but also to case-mix, which is controlled for through the Standardized Incidence Ratio (SIR). SIR calculations can be complicated, resource-consuming, and unfeasible in some settings. This article suggests a novel method for SIR approximation. In order to evaluate a potential SIR surrogate measure we predefined acceptance criteria. We developed a new measure, the Approximate Risk Index (ARI). "Number Needed for Event" (NNE) is the theoretical number of patients needed "to produce" one adverse event. ARI is defined as the quotient of Ge, the group of patients theoretically needed to produce the observed events, by the total patients treated, Ga. Our evaluation compared 2500 surgical units and over 3 million heterogeneous-risk surgical patients generated through a computerized simulation. Surgical units' data were computed for SIR and ARI to evaluate compliance with the predefined criteria. Approximation was evaluated by correlation analysis and performance-prediction capability by Receiver Operating Characteristics (ROC) analysis. ARI strongly correlates with SIR (r² = 0.87, p < 0.001) and shows excellent prediction capability (AUC > 0.9, with 87% sensitivity and 91% specificity). ARI provides good approximation of SIR and excellent prediction capability. ARI is simple and cost-effective, as it requires thorough risk evaluation of only the adverse-event patients. ARI can provide a crucial screening and performance-evaluation quality-control tool. The ARI method may suit other clinical and epidemiological settings where a relatively small fraction of the entire population is affected. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
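
The abstract gives no formulas, so the sketch below is one plausible reconstruction of the definitions, with invented numbers in the usage note: SIR divides observed events by the expectation summed over all treated patients, while the ARI-style surrogate needs risk estimates only for the patients who actually had an event.

```python
def sir(observed_events: int, all_patient_risks: list[float]) -> float:
    """Standardized Incidence Ratio: observed adverse events divided by
    the expected count, i.e. the sum of every treated patient's risk."""
    return observed_events / sum(all_patient_risks)

def ari(event_patient_risks: list[float], total_treated: int) -> float:
    """Approximate Risk Index, one plausible reading of the abstract:
    sum the Number Needed for Event (NNE = 1/risk) over only the
    patients who had an adverse event, giving the group size Ge those
    events 'account for', and divide by the total treated, Ga."""
    g_e = sum(1.0 / r for r in event_patient_risks)
    return g_e / total_treated
```

When every patient carries risk 0.1 and exactly the expected 10 events occur among 100 patients, both measures equal 1 under this reading.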

  12. Gyromagnetic ratios of excited states in 198Pt; measurements and interacting boson approximation model calculations

    Science.gov (United States)

    Stuchbery, A. E.; Ryan, C. G.; Bolotin, H. H.; Morrison, I.; Sie, S. H.

    1981-07-01

The enhanced transient hyperfine field manifest at the nuclei of swiftly recoiling ions traversing magnetized ferromagnetic materials was utilized to measure the gyromagnetic ratios of the 2₁⁺, 2₂⁺ and 4₁⁺ states in 198Pt by the thin-foil technique. The states of interest were populated by Coulomb excitation using a beam of 220 MeV 58Ni ions. The results obtained were: g(2₁⁺) = 0.324 ± 0.026; g(2₂⁺) = 0.34 ± 0.06; g(4₁⁺) = 0.34 ± 0.06. In addition, these measurements served to discriminate between the otherwise essentially equally probable values previously reported for the E2/M1 ratio of the 2₂⁺ → 2₁⁺ transition in 198Pt. We also performed interacting boson approximation (IBA) model-based calculations in the O(6) limit symmetry, with and without inclusion of a small degree of symmetry breaking, and employed the M1 operator in both first and second order to obtain M1 selection rules and to calculate gyromagnetic ratios of levels. When O(6) symmetry is broken, there is a predicted departure from constancy of the g-factors which provides a good test of the nuclear wave function. Evaluative comparisons are made between these experimental and predicted g-factors.

  13. Improved Dutch Roll Approximation for Hypersonic Vehicle

    Directory of Open Access Journals (Sweden)

    Liang-Liang Yin

    2014-06-01

An improved Dutch roll approximation for hypersonic vehicles is presented. From the new approximation, the Dutch roll frequency is shown to be a function of the stability-axis yaw stability, and the Dutch roll damping is mainly affected by the roll damping ratio. In addition, an important parameter called the roll-to-yaw ratio is obtained to describe the Dutch roll mode. The solution shows that a large roll-to-yaw ratio is a general characteristic of hypersonic vehicles, which causes large errors in the conventional approximation. Predictions from the literal approximations derived in this paper are compared with actual numerical values for an example hypersonic vehicle; the results show the approximations work well, with errors below 10%.

  14. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  15. Alteration of the ground state by external magnetic fields. [External field, coupling constant ratio, static tree level approximation

    Energy Technology Data Exchange (ETDEWEB)

    Harrington, B J; Shepard, H K [New Hampshire Univ., Durham (USA). Dept. of Physics

    1976-03-22

    By fully exploiting the mathematical and physical analogy to the Ginzburg-Landau theory of superconductivity, a complete discussion of the ground state behavior of the four-dimensional Abelian Higgs model in the static tree level approximation is presented. It is shown that a sufficiently strong external magnetic field can alter the ground state of the theory by restoring a spontaneously broken symmetry, or by creating a qualitatively different 'vortex' state. The energetically favored ground state is explicitly determined as a function of the external field and the ratio between coupling constants of the theory.

  16. Stable strontium isotopic ratios from archaeological organic remains from the Thorsberg peat bog

    DEFF Research Database (Denmark)

    Nosch, Marie-Louise Bech; von Carnap-Bornheim, Claus; Grupe, Gisela

    2007-01-01

Pilot study analysing stable strontium isotopic ratios from Iron Age textile and leather finds from the Thorsberg peat bog.

  17. Risk approximation in decision making: approximative numeric abilities predict advantageous decisions under objective risk.

    Science.gov (United States)

    Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias

    2018-01-22

Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximative numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. Additionally, a risky decision-making task with exact numeric information was administered, and tasks measuring executive functions and exact numeric abilities, e.g., mental calculation and ratio-processing skills, were conducted. Approximative numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. Especially being able to make accurate risk estimations seemed to contribute to superior choices. We recommend approximation skills and approximate number processing to be a subject of future investigations on decision making under risk.

  18. MCA Vmean and the arterial lactate-to-pyruvate ratio correlate during rhythmic handgrip

    DEFF Research Database (Denmark)

    Rasmussen, Peter; Plomgaard, Peter; Krogh-Madsen, Rikke

    2006-01-01

Regulation of cerebral blood flow during physiological activation including exercise remains unknown but may be related to the arterial lactate-to-pyruvate (L/P) ratio. We evaluated whether an exercise-induced increase in middle cerebral artery mean velocity (MCA Vmean) relates to the arterial L/P ratio at two plasma lactate levels. MCA Vmean was determined by ultrasound Doppler sonography at rest, during 10 min of rhythmic handgrip exercise at approximately 65% of maximal voluntary contraction force, and during 20 min of recovery in seven healthy male volunteers during control and an approximately 15 mmol/l hyperglycemic clamp. Cerebral arteriovenous differences for metabolites were obtained by brachial artery and retrograde jugular venous catheterization. Control resting arterial lactate was 0.78 +/- 0.09 mmol/l (mean +/- SE) and pyruvate 55.7 +/- 12.0 micromol/l (L/P ratio 16.4 +/- 1...)

  19. Application of a simplified calculation for full-wave microtremor H/ V spectral ratio based on the diffuse field approximation to identify underground velocity structures

    Science.gov (United States)

    Wu, Hao; Masaki, Kazuaki; Irikura, Kojiro; Sánchez-Sesma, Francisco José

    2017-12-01

Under the diffuse field approximation, the full-wave (FW) microtremor H/V spectral ratio (H/V) is modeled as the square root of the ratio of the sum of imaginary parts of the Green's function of the horizontal components to that of the vertical one. For a given layered medium, the FW H/V can be well approximated with only the surface-wave (SW) H/V of the "cap-layered" medium, which consists of the given layered medium and a new larger-velocity half-space (cap layer) at large depth. Because the contribution of surface waves can be obtained simply by the residue theorem, the computation of the SW H/V of the cap-layered medium is faster than that of the FW H/V evaluated by the discrete wavenumber method and the contour integration method. The simplified computation of SW H/V was then applied to identify the underground velocity structures at six KiK-net strong-motion stations. The inverted underground velocity structures were used to evaluate FW H/Vs, which were consistent with the SW H/Vs of the corresponding cap-layered media. A previous study on surface-wave H/Vs, derived under a distributed-surface-sources assumption and a fixed Rayleigh-to-Love amplitude ratio for horizontal motions, showed good agreement with the SW H/Vs of our study. The consistency between observed and theoretical spectral ratios, such as the earthquake-motion H/V spectral ratio and the spectral ratio of horizontal motions between the surface and the bottom of a borehole, indicated that the underground velocity structures identified from the SW H/V of the cap-layered medium were well resolved by the new method.
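
The diffuse-field relation described in the first sentence can be written compactly as follows, with G_ii the same-point Green's function components (1, 2 horizontal, 3 vertical) and omega the angular frequency; this is just a transcription of the verbal definition:

```latex
\frac{H}{V}(\omega) =
  \sqrt{\frac{\operatorname{Im} G_{11}(\omega) + \operatorname{Im} G_{22}(\omega)}
             {\operatorname{Im} G_{33}(\omega)}}
```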

  20. The modified signed likelihood statistic and saddlepoint approximations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1992-01-01

SUMMARY: For a number of tests in exponential families we show that the use of a normal approximation to the modified signed likelihood ratio statistic r* is equivalent to the use of a saddlepoint approximation. This is also true in a large deviation region where the signed likelihood ratio statistic r is of order √n. © 1992 Biometrika Trust.
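
For context, the modified signed likelihood ratio statistic has the standard Barndorff-Nielsen form below, where r is the signed root of the likelihood ratio statistic and u is an adjustment term whose exact definition depends on the model; this form is standard background, not taken from the abstract:

```latex
r^{*} = r + \frac{1}{r}\,\log\!\left(\frac{u}{r}\right),
\qquad
r = \operatorname{sign}(\hat{\theta} - \theta)\,
    \sqrt{2\{\ell(\hat{\theta}) - \ell(\theta)\}}
```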

  1. Breeding sex ratio and population size of loggerhead turtles from Southwestern Florida.

    Directory of Open Access Journals (Sweden)

    Jacob A Lasala

Species that display temperature-dependent sex determination are at risk as a result of increasing global temperatures. For marine turtles, high incubation temperatures can skew sex ratios towards females. There are concerns that temperature increases may result in highly female-biased offspring sex ratios, which would drive a future sex-ratio skew. Studying the sex ratios of adults in the ocean is logistically very difficult because individuals are widely distributed and males, which remain in the ocean, are inaccessible. Breeding sex ratios (BSR) are sought as a functional alternative to study adult sex ratios. One way to examine BSR is to determine the number of males that contribute to nests. Our goal was to evaluate the BSR for loggerhead turtles (Caretta caretta) nesting along the eastern Gulf of Mexico in Florida from 2013-2015, encompassing three nesting seasons. We genotyped 64 nesting females (approximately 28% of all turtles nesting at that time) and up to 20 hatchlings from their nests (n = 989) using 7 polymorphic microsatellite markers. We identified multiple paternal contributions in 70% of the nests analyzed and 126 individual males. The breeding sex ratio was approximately 1 female for every 2.5 males. We did not find repeat males in any of our nests. The sex ratio and lack of repeating males were surprising given female-biased primary sex ratios. We hypothesize that females mate offshore of their nesting beaches as well as en route. We recommend further comparisons of subsequent nesting events and of other beaches, as it is imperative to establish baseline breeding sex ratios to understand how growing populations behave before extreme environmental effects are evident.

  2. Sex ratios

    OpenAIRE

    West, Stuart A; Reece, S E; Sheldon, Ben C

    2002-01-01

    Sex ratio theory attempts to explain variation at all levels (species, population, individual, brood) in the proportion of offspring that are male (the sex ratio). In many cases this work has been extremely successful, providing qualitative and even quantitative explanations of sex ratio variation. However, this is not always the situation, and one of the greatest remaining problems is explaining broad taxonomic patterns. Specifically, why do different organisms show so ...

  3. The log-linear return approximation, bubbles, and predictability

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten

We study in detail the log-linear return approximation introduced by Campbell and Shiller (1988a). First, we derive an upper bound for the mean approximation error, given stationarity of the log dividend-price ratio. Next, we simulate various rational bubbles which have explosive conditional expec...

  4. The Log-Linear Return Approximation, Bubbles, and Predictability

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten

    2012-01-01

    We study in detail the log-linear return approximation introduced by Campbell and Shiller (1988a). First, we derive an upper bound for the mean approximation error, given stationarity of the log dividend-price ratio. Next, we simulate various rational bubbles which have explosive conditional expe...

  5. Approximate Likelihood

    CERN Multimedia

    CERN. Geneva

    2015-01-01

Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
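
The classifier-to-likelihood-ratio step described above rests on the density-ratio trick: for balanced classes an optimal classifier score s(x) estimates p1/(p0+p1), so s/(1-s) recovers the likelihood ratio. Below is a toy numpy check on a case where the exact answer is known; the Gaussian means and the tiny logistic-regression trainer are illustrative choices, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Samples under two hypotheses: unit Gaussians with means 0 (H0) and
# 1 (H1).  The exact log-likelihood ratio is log p1(x)/p0(x) = x - 0.5,
# so the classifier-based approximation can be checked directly.
x0 = rng.normal(0.0, 1.0, 50000)
x1 = rng.normal(1.0, 1.0, 50000)
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros_like(x0), np.ones_like(x1)])

# Tiny logistic-regression "classifier" trained by gradient descent.
w = b = 0.0
for _ in range(500):
    s = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= np.mean((s - y) * x)
    b -= np.mean(s - y)

def approx_llr(xs):
    """Density-ratio trick: the classifier score s estimates
    p1/(p0 + p1) for balanced classes, so log(s/(1-s)) -- the logit,
    here simply w*x + b -- approximates the log-likelihood ratio."""
    return w * np.asarray(xs, dtype=float) + b
```

After training, w is close to 1 and b close to -0.5, so the learned logit tracks the exact log-likelihood ratio x - 0.5.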

  6. Ultrafast Approximation for Phylogenetic Bootstrap

    NARCIS (Netherlands)

    Bui Quang Minh, [No Value; Nguyen, Thi; von Haeseler, Arndt

    Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and

  7. Microbial enterotypes, inferred by the prevotella-to-bacteroides ratio, remained stable during a 6-month randomized controlled diet intervention with the new nordic diet

    DEFF Research Database (Denmark)

    Roager, Henrik Munch; Licht, Tine Rask; Poulsen, Sanne

    2014-01-01

It has been suggested that the human gut microbiota can be divided into enterotypes based on the abundance of specific bacterial groups; however, the biological significance and stability of these enterotypes remain unresolved. Here, we demonstrated that subjects (n = 62) 18 to 65 years old maintained their enterotype during a 6-month randomized, controlled dietary intervention, where the effect of consuming a diet in accord with the new Nordic diet (NND) recommendations, as opposed to consuming the average Danish diet (ADD), on the gut microbiota was investigated. In this study, subjects (with and without stratification according to P/B ratio) did...

  8. Approximating Preemptive Stochastic Scheduling

    OpenAIRE

    Megow Nicole; Vredeveld Tjark

    2009-01-01

    We present constant approximative policies for preemptive stochastic scheduling. We derive policies with a guaranteed performance ratio of 2 for scheduling jobs with release dates on identical parallel machines subject to minimizing the sum of weighted completion times. Our policies as well as their analysis apply also to the recently introduced more general model of stochastic online scheduling. The performance guarantee we give matches the best result known for the corresponding determinist...

  9. A Study on the Effects of Compression Ratio, Engine Speed and Equivalence Ratio on HCCI Combustion of DME

    DEFF Research Database (Denmark)

    Pedersen, Troels Dyhr; Schramm, Jesper

    2007-01-01

An experimental study has been carried out on the homogeneous charge compression ignition (HCCI) combustion of dimethyl ether (DME). The study was performed as a parameter variation of engine speed and compression ratio at excess air ratios of approximately 2.5, 3 and 4. The compression ratio was...

  10. Precise isotope ratio and multielement determination in prehistoric and historic human skeletal remains by HR-ICPMS - a novel application to shed light onto anthropological and archaeological questions

    International Nuclear Information System (INIS)

    Watkins, M.

    2000-12-01

    The primary aim of the presented work was the analytical setup for fast (including high sample throughput for statistical evaluation), precise and accurate measurement of strontium isotope ratios using HR-ICPMS (high-resolution inductively coupled plasma mass spectrometry) and their application to ancient human skeletal remains from different localities for the reconstruction of migration processes. Soils and plants are in isotopic equilibrium with local source rock and show therefore the same isotopic ratios for strontium (87Sr/86Sr). Dietary strontium incorporation varies for different body materials (teeth, muscle, bone, etc.) and repository periods depend on the different strontium turnover rates. Accordingly, strontium isotope analysis can provide important data for studying human or animal migration and mobility. An important issue will be addressed: the problem of strontium isotope ratio measurement reliability and the problem of post-mortem alterations. Thus a basic part of this interdisciplinary project is dealing with the systematic evaluation of diagenetic changes of the microstructure in human bone samples - including sample uptake and preparation. Different invasive histological techniques will be applied for further clarification. Newly developed chemical methods give us the opportunity to obtain details on ancient population mobility also in skeletal series of extreme fragmentary character, which usually restricts the macro-morphological approach. Since it is evident that strontium in teeth is only incorporated during childhood whereas strontium uptake in bones is constant, an intra-individual comparison of bone and teeth samples will answer the question whether teeth are indeed 'archives of the childhood'. 
The introduction of an analytical system allowing online matrix separation by High Performance Ion Chromatography (HPIC) and subsequent measurement of strontium isotope ratios by means of HR-ICPMS is presented, optimized and established as a method.

  11. Precise Isotope ratio and multielement determination in prehistoric and historic human skeletal remains by HR-ICPMS - a novel application to shed light onto anthropological and archaeological questions

    International Nuclear Information System (INIS)

    Watkins, M.

    2000-12-01

    The primary aim of the presented work was the development of an analytical setup for fast (including high sample throughput for statistical evaluation), precise and accurate measurement of strontium isotope ratios by HR-ICPMS (high-resolution inductively coupled plasma mass spectrometry) and its application to ancient human skeletal remains from different localities for the reconstruction of migration processes. Soils and plants are in isotopic equilibrium with the local source rock and therefore show the same strontium isotopic ratio (87Sr/86Sr). Dietary strontium incorporation varies between body materials (teeth, muscle, bone, etc.), and repository periods depend on the different strontium turnover rates. Accordingly, strontium isotope analysis can provide important data for studying human or animal migration and mobility. An important issue addressed here is the reliability of strontium isotope ratio measurements in the face of post-mortem alteration. A basic part of this interdisciplinary project therefore deals with the systematic evaluation of diagenetic changes in the microstructure of human bone samples, including sample uptake and preparation. Different invasive histological techniques will be applied for further clarification. Newly developed chemical methods make it possible to obtain details on ancient population mobility even in skeletal series of extremely fragmentary character, which usually restricts the macro-morphological approach. Since strontium in teeth is incorporated only during childhood, whereas strontium uptake in bone continues throughout life, an intra-individual comparison of bone and teeth samples will answer the question of whether teeth are indeed 'archives of childhood'.
An analytical system allowing online matrix separation by High Performance Ion Chromatography (HPIC) and subsequent measurement of strontium isotope ratios by means of HR-ICPMS is presented, optimized and established as a method.

  12. Approximations to camera sensor noise

    Science.gov (United States)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution models the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. A question remains, however, about how best to model the combined sensor noise. Although additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to real camera noise than SD-AWGN. We suggest a further modification to the Poisson model that may improve the noise model.
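The difference between the two candidate models can be seen in a small simulation; the numbers (mean count 50) are illustrative, not from the paper. Both models match in mean and variance, so the distinction the authors draw rests on higher moments such as skewness:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 50.0      # hypothetical mean photon count at one pixel
n = 500_000

# Poisson model: photon shot noise; variance equals the mean and the
# distribution is skewed by 1/sqrt(mean).
poisson = rng.poisson(signal, size=n).astype(float)

# SD-AWGN model: Gaussian noise whose variance tracks the signal level;
# same first two moments as the Poisson model, but zero skew.
sd_awgn = rng.normal(signal, np.sqrt(signal), size=n)

def skew(x):
    d = x - x.mean()
    return float((d ** 3).mean() / x.std() ** 3)

for name, x in (("Poisson", poisson), ("SD-AWGN", sd_awgn)):
    print(f"{name:8s} mean={x.mean():6.2f}  var={x.var():6.2f}  skew={skew(x):+.3f}")
```

At low photon counts the skew term grows, which is where the Gaussian approximation degrades fastest.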

  13. Discovering approximate-associated sequence patterns for protein-DNA interactions

    KAUST Repository

    Chan, Tak Ming

    2010-12-30

    Motivation: The bindings between transcription factors (TFs) and transcription factor binding sites (TFBSs) are fundamental protein-DNA interactions in transcriptional regulation. Extensive efforts have been made to better understand the protein-DNA interactions. Recent mining on exact TF-TFBS-associated sequence patterns (rules) has shown great potential and achieved very promising results. However, exact rules cannot handle variations in real data, resulting in limited informative rules. In this article, we generalize the exact rules to approximate ones for both TFs and TFBSs, which are essential for biological variations. Results: A progressive approach is proposed to address the approximation to alleviate the computational requirements. Firstly, similar TFBSs are grouped from the available TF-TFBS data (TRANSFAC database). Secondly, approximate and highly conserved binding cores are discovered from TF sequences corresponding to each TFBS group. A customized algorithm is developed for the specific objective. We discover the approximate TF-TFBS rules by associating the grouped TFBS consensuses and TF cores. The rules discovered are evaluated by matching (verifying with) the actual protein-DNA binding pairs from Protein Data Bank (PDB) 3D structures. The approximate results exhibit many more verified rules and up to 300% better verification ratios than the exact ones. The customized algorithm achieves over 73% better verification ratios than traditional methods. Approximate rules (64-79%) are shown to be statistically significant. Detailed variation analysis and conservation verification on NCBI records demonstrate that the approximate rules reveal both the flexible and specific protein-DNA interactions accurately. The approximate TF-TFBS rules discovered show great generalized capability for exploring more informative binding rules. © The Author 2010. Published by Oxford University Press. All rights reserved.

  14. Faster and Simpler Approximation of Stable Matchings

    Directory of Open Access Journals (Sweden)

    Katarzyna Paluch

    2014-04-01

    We give a 3/2-approximation algorithm for finding stable matchings that runs in O(m) time. The previous best-known algorithm, by McDermid, has the same approximation ratio but runs in O(n^(3/2)m) time, where n denotes the number of people and m is the total length of the preference lists in a given instance. In addition, the algorithm and the analysis are much simpler. We also give an extension of the algorithm for computing stable many-to-many matchings.
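For background, the classic Gale-Shapley deferred-acceptance algorithm computes a stable matching when preferences are strict and complete; the 3/2-approximation above targets the harder variant with ties and incomplete lists, which Gale-Shapley does not handle. A minimal sketch:

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance: returns a stable matching as {man: woman}.
    Assumes strict, complete preference lists on both sides."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}
    engaged = {}                     # woman -> man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])  # w trades up; her old partner is free again
            engaged[w] = m
        else:
            free.append(m)           # w rejects m
    return {m: w for w, m in engaged.items()}

match = gale_shapley(
    {"a": ["x", "y"], "b": ["x", "y"]},
    {"x": ["b", "a"], "y": ["a", "b"]},
)
print(match)  # a stable matching, e.g. {'b': 'x', 'a': 'y'}
```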

  15. Diophantine approximation and badly approximable sets

    DEFF Research Database (Denmark)

    Kristensen, S.; Thorn, R.; Velani, S.

    2006-01-01

    The classical set Bad of 'badly approximable' numbers in the theory of Diophantine approximation falls within our framework, as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...

  16. Approximate approaches to the one-dimensional finite potential well

    International Nuclear Information System (INIS)

    Singh, Shilpi; Pathak, Praveen; Singh, Vijay A

    2011-01-01

    The one-dimensional finite well is a textbook problem. We propose approximate approaches to obtain the energy levels of the well. The finite well is also encountered in semiconductor heterostructures, where the carrier mass inside the well (m_i) is taken to be distinct from the mass outside (m_o). A relevant parameter is the mass discontinuity ratio β = m_i/m_o. To correctly account for the mass discontinuity, we apply the BenDaniel-Duke boundary condition. We obtain approximate solutions for two cases: when the well is shallow and when the well is deep. We compare the approximate results with the exact results and find that higher-order approximations are quite robust. For the shallow case, the approximate solution can be expressed in terms of a dimensionless parameter σ_l = 2m_oV_0L²/ℎ² (or σ = β²σ_l for the deep case). We show that the lowest-order results are related by a duality transform. We also discuss how the energy scales with L (E ∼ 1/L^γ) and obtain the exponent γ. The exponent γ → 2 when the well is sufficiently deep and β → 1. The ratio of the masses dictates the physics. Our presentation is pedagogical and should be useful to students in a first course on elementary quantum mechanics or low-dimensional semiconductors.
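For the equal-mass case (β = 1), the exact bound-state condition can be solved numerically as a reference for such approximations. The sketch below uses a common textbook parametrization z0 = (L/2)·sqrt(2·m_o·V_0)/ℏ (related to the σ_l above by z0² = σ_l/4 if ℎ denotes ℏ), which may differ from the paper's conventions:

```python
import math

def ground_state_fraction(z0):
    """Ground-state energy of the finite square well as a fraction E/V0,
    for the equal-mass case (beta = 1). Even-parity states satisfy
    z*tan(z) = sqrt(z0**2 - z**2), with the ground state in (0, min(pi/2, z0))."""
    f = lambda z: z * math.tan(z) - math.sqrt(max(z0 * z0 - z * z, 0.0))
    lo, hi = 1e-9, min(math.pi / 2, z0) - 1e-9
    for _ in range(200):                  # plain bisection on the bracketed root
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return (0.5 * (lo + hi) / z0) ** 2

for z0 in (1.0, 5.0, 20.0):               # shallow -> deep
    print(f"z0 = {z0:5.1f}   E/V0 = {ground_state_fraction(z0):.4f}")
```

As z0 grows the ground-state fraction E/V0 falls toward the infinite-well limit, consistent with γ → 2 for deep wells.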

  17. Approximate variational solutions of the Grad-Shafranov equation

    International Nuclear Information System (INIS)

    Ludwig, G.O.

    2001-01-01

    Approximate solutions of the Grad-Schlueter-Shafranov equation based on variational methods are developed. The power series solutions of the Euler-Lagrange equations for equilibrium are compared with direct variational results for a low aspect ratio tokamak equilibrium. (author)

  18. The approximation gap for the metric facility location problem is not yet closed

    NARCIS (Netherlands)

    Byrka, J.; Aardal, K.I.

    2007-01-01

    We consider the 1.52-approximation algorithm of Mahdian et al. for the metric uncapacitated facility location problem. We show that their algorithm does not close the gap with the lower bound on approximability, 1.463, by providing a construction of instances for which its approximation ratio is not

  19. A Study on the Effects of Compression Ratio, Engine Speed and Equivalence Ratio on HCCI Combustion of DME

    DEFF Research Database (Denmark)

    Pedersen, Troels Dyhr; Schramm, Jesper

    2007-01-01

    An experimental study has been carried out on the homogeneous charge compression ignition (HCCI) combustion of dimethyl ether (DME). The study was performed as a parameter variation of engine speed and compression ratio at excess air ratios of approximately 2.5, 3 and 4. The compression ratio was adjusted in steps to find suitable regions of operation, and the effect of engine speed was studied at 1000, 2000 and 3000 RPM. It was found that leaner excess air ratios require higher compression ratios to achieve satisfactory combustion. Engine speed also affects operation significantly.
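The excess air ratio is defined relative to the stoichiometric air-fuel ratio of DME; a back-of-envelope sketch using standard combustion stoichiometry (not data from the paper):

```python
def dme_lambda(afr_actual):
    """Excess air ratio (lambda) for DME combustion from a measured mass
    air-fuel ratio. Stoichiometry: CH3OCH3 + 3 O2 -> 2 CO2 + 3 H2O.
    Illustrative back-of-envelope calculation, not from the paper."""
    m_dme = 2 * 12.011 + 6 * 1.008 + 15.999     # g/mol, CH3OCH3
    m_o2 = 2 * 15.999
    afr_stoich = 3 * m_o2 / 0.232 / m_dme       # 0.232 = O2 mass fraction of air
    return afr_actual / afr_stoich

afr_stoich = 27.0 / dme_lambda(27.0)
print(f"stoichiometric AFR for DME ~ {afr_stoich:.2f}")
print(f"lambda at AFR 27 ~ {dme_lambda(27.0):.2f}")
```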

  20. Approximate Bayesian evaluations of measurement uncertainty

    Science.gov (United States)

    Possolo, Antonio; Bodnar, Olha

    2018-04-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement, expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Unlike exact Bayesian estimates, which involve either (analytical or numerical) integration or Markov chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential difference in a Zener voltage standard.
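For a Gaussian prior combined with a Gaussian measurement model, the approximate Bayesian estimate reduces to precision-weighted algebra of exactly the kind the abstract describes; a minimal sketch with made-up numbers:

```python
def bayes_update(prior_mean, prior_u, y, u):
    """Gaussian prior x Gaussian likelihood: posterior mean and standard
    uncertainty by precision weighting (simple algebra, no integration).
    Illustrative of the style of approximation described above; the
    numbers below are made up, not taken from the paper."""
    w0, w1 = 1.0 / prior_u**2, 1.0 / u**2
    mean = (w0 * prior_mean + w1 * y) / (w0 + w1)
    return mean, (w0 + w1) ** -0.5

mean, u_post = bayes_update(prior_mean=100.0, prior_u=2.0, y=103.0, u=1.0)
print(f"posterior: {mean:.2f} +/- {u_post:.2f}")  # 102.40 +/- 0.89
```

The posterior shrinks the measurement toward the prior in proportion to their relative precisions, and the posterior uncertainty is never larger than either input uncertainty.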

  1. Sum of ratios of products for α-μ random variables in wireless multihop relaying and multiple scattering

    KAUST Repository

    Wang, Kezhi; Wang, Tian; Chen, Yunfei; Alouini, Mohamed-Slim

    2014-01-01

    The sum of ratios of products of independent α-μ random variables (RVs) is approximated by using the generalized Gamma ratio approximation (GGRA), with the Gamma ratio approximation (GRA) as a special case. The proposed approximation is used to calculate the outage probability of equal gain combining (EGC) or maximum ratio combining (MRC) receivers for wireless multihop relaying or multiple scattering systems in the presence of interference. Numerical results show that the newly derived approximation agrees very well with simulation, while GRA performs slightly worse than GGRA when the outage probability is below 0.1 but has a simpler form.
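Approximations of this kind can be sanity-checked by Monte Carlo, using the fact that an α-μ variate (unit r-hat) is a Gamma(μ) variate raised to the power 1/α. The two-branch, one-interferer-per-branch setup below is illustrative, not the paper's exact system model:

```python
import numpy as np

rng = np.random.default_rng(1)

def alpha_mu(alpha, mu, size):
    # If R is alpha-mu distributed with unit r-hat, then R**alpha is
    # Gamma(mu, scale=1/mu); sample the Gamma and take the 1/alpha power.
    return rng.gamma(mu, 1.0 / mu, size) ** (1.0 / alpha)

# Monte Carlo outage probability for a sum of two signal-to-interference
# ratios (e.g. a two-branch receiver with one interferer per branch).
n = 200_000
s = sum(alpha_mu(2.0, 1.5, n) / alpha_mu(2.0, 1.0, n) for _ in range(2))
for th in (0.5, 1.0, 2.0):
    print(f"P(sum < {th}) ~= {np.mean(s < th):.4f}")
```

Such an empirical CDF is the baseline against which GGRA/GRA-style closed forms would be compared.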

  3. Stable isotope ratio determination of the origin of vanillin in vanilla extracts and its relationship to vanillin/potassium ratios

    International Nuclear Information System (INIS)

    Martin, G.E.; Alfonso, F.C.; Figert, D.M.; Burggraff, J.M.

    1981-01-01

    A method is described for isolating vanillin from vanilla extract, followed by stable isotope ratio analysis to determine the amount of natural vanillin contained in adulterated vanilla extracts. After the potassium content is determined, the percentage of Madagascar and/or Java vanilla beans incorporated into the extract may then be approximated from the vanillin/potassium ratio.

  4. Approximate Series Solutions for Nonlinear Free Vibration of Suspended Cables

    Directory of Open Access Journals (Sweden)

    Yaobing Zhao

    2014-01-01

    This paper presents approximate series solutions for the nonlinear free vibration of suspended cables via the Lindstedt-Poincaré method and the homotopy analysis method, respectively. Firstly, taking into account the geometric nonlinearity of the suspended cable as well as the quasi-static assumption, a mathematical model is presented. Secondly, two analytical methods are introduced to obtain the approximate series solutions in the case of nonlinear free vibration. Moreover, small and large sag-to-span ratios and initial conditions are chosen to study the nonlinear dynamic responses by these two analytical methods. The numerical results indicate that the frequency-amplitude relationships obtained with the different analytical approaches exhibit some quantitative and qualitative differences in the cases of motions, mode shapes, and particular sag-to-span ratios. Finally, a detailed comparison of the differences in the displacement fields and cable axial total tensions is made.
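The Lindstedt-Poincaré idea can be illustrated on a toy Duffing-type oscillator (a stand-in for intuition, not the cable model of the paper):

```latex
% Toy illustration of the Lindstedt-Poincare method on
% \ddot{x} + \omega_0^2 x + \varepsilon x^3 = 0.
% Expand x = x_0 + \varepsilon x_1 + \dots and
% \omega = \omega_0 + \varepsilon \omega_1 + \dots; eliminating the
% secular term in the x_1 equation for amplitude a gives
\omega \approx \omega_0 + \varepsilon\,\frac{3a^2}{8\omega_0},
% i.e. the frequency grows with amplitude (a hardening response) --
% the kind of frequency-amplitude relationship the paper compares
% across methods and sag-to-span ratios.
```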

  5. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.
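For the two-variable (matrix) case, the rank-structured idea reduces to truncated SVD, which by Eckart-Young gives the best rank-r approximation; a minimal numpy sketch with synthetic data (hierarchical tensor formats generalize this to many variables):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 8)) @ rng.standard_normal((8, 60))  # rank-8 matrix
A += 1e-3 * rng.standard_normal((60, 60))                        # small noise

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = 8
A_r = U[:, :r] * s[:r] @ Vt[:r]          # best rank-r approximation (Eckart-Young)

err = np.linalg.norm(A - A_r) / np.linalg.norm(A)
print(f"relative Frobenius error of rank-{r} truncation: {err:.2e}")
```

Storage drops from 60x60 entries to r(60+60)+r, which is the same economy that makes hierarchical formats tractable in high dimension.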

  7. Mixing ratios of carbon monoxide in the troposphere

    Energy Technology Data Exchange (ETDEWEB)

    Novelli, P.C.; Steele, L.P. (Univ. of Colorado, Boulder (United States)); Tans, P.P. (NOAA, Boulder, CO (United States))

    1992-12-20

    Carbon monoxide (CO) mixing ratios were measured in air samples collected weekly at eight locations. The air was collected as part of the CMDL/NOAA cooperative flask sampling program (Climate Monitoring and Diagnostics Laboratory, formerly Geophysical Monitoring for Climatic Change, Air Resources Laboratory/National Oceanic and Atmospheric Administration) at Point Barrow, Alaska, Niwot Ridge, Colorado, Mauna Loa and Cape Kumakahi, Hawaii, Guam, Marianas Islands, Christmas Island, Ascension Island and American Samoa. Half-liter or 3-L glass flasks fitted with glass piston stopcocks holding teflon O rings were used for sample collection. CO levels were determined within several weeks of collection using gas chromatography followed by mercuric oxide reduction detection, and mixing ratios were referenced against the CMDL/NOAA carbon monoxide standard scale. During the period of study (mid-1988 through December 1990) CO levels were greatest in the high latitudes of the northern hemisphere (mean mixing ratio from January 1989 to December 1990 at Point Barrow was approximately 154 ppb) and decreased towards the south (mean mixing ratio at Samoa over a similar period was 65 ppb). Mixing ratios varied seasonally; the amplitude of the seasonal cycle was greatest in the north and decreased to the south. Carbon monoxide levels were affected by both local and regional scale processes. The difference in CO levels between northern and southern latitudes also varied seasonally. The greatest difference in CO mixing ratios between Barrow and Samoa was observed during the northern winter (about 150 ppb). The smallest difference, 40 ppb, occurred during the austral winter. The annually averaged CO difference between 71°N and 14°S was approximately 90 ppb in both 1989 and 1990; the annually averaged interhemispheric gradient from 71°N to 41°S is estimated as approximately 95 ppb. 66 refs., 5 figs., 5 tabs.

  8. Muonic molecules as three-body Coulomb problem in adiabatic approximation

    International Nuclear Information System (INIS)

    Decker, M.

    1994-04-01

    The three-body Coulomb problem is treated within the framework of the hyperspherical adiabatic approach. The surface functions are expanded into Faddeev-type components in order to ensure the equivalent representation of all possible two-body contributions. It is shown that this decomposition reduces the numerical effort considerably. The remaining radial equations are solved both in the extreme and the uncoupled adiabatic approximation to determine the binding energies of the systems (dtμ) and (d³Heμ). Whereas the ground state is described very well in the uncoupled adiabatic approximation, the excited states should be treated within the coupled adiabatic approximation to obtain good agreement with variational calculations. (orig.)

  9. Recognition of computerized facial approximations by familiar assessors.

    Science.gov (United States)

    Richard, Adam H; Monson, Keith L

    2017-11-01

    performance were examined, and it was ultimately concluded that ReFace facial approximations may have limited effectiveness if used in the traditional way. However, some promising alternative uses are explored that may expand the utility of facial approximations for aiding in the identification of unknown human remains. Published by Elsevier B.V.

  10. Analytic approximations for the elastic moduli of two-phase materials

    DEFF Research Database (Denmark)

    Zhang, Z. J.; Zhu, Y. K.; Zhang, P.

    2017-01-01

    Based on the models of series and parallel connections of the two phases in a composite, analytic approximations are derived for the elastic constants (Young's modulus, shear modulus, and Poisson's ratio) of elastically isotropic two-phase composites containing second phases of various volume...
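The series and parallel connection models correspond to the classical Reuss (isostress) and Voigt (isostrain) estimates; a generic sketch of that idea, not the paper's derived approximations:

```python
def voigt_reuss(E1, E2, f1):
    """Parallel (Voigt) and series (Reuss) estimates of the Young's modulus
    of a two-phase composite; f1 is the volume fraction of phase 1.
    Generic series/parallel bounds, not the paper's formulas."""
    f2 = 1.0 - f1
    parallel = f1 * E1 + f2 * E2          # isostrain upper bound
    series = 1.0 / (f1 / E1 + f2 / E2)    # isostress lower bound
    return parallel, series

up, lo = voigt_reuss(E1=200.0, E2=70.0, f1=0.5)   # e.g. steel/aluminium, GPa
print(f"Voigt {up:.1f} GPa >= E_eff >= Reuss {lo:.1f} GPa")
```

Any physically reasonable effective modulus of the mixture must fall between these two bounds.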

  11. The ATP/DNA Ratio Is a Better Indicator of Islet Cell Viability Than the ADP/ATP Ratio

    Science.gov (United States)

    Suszynski, T.M.; Wildey, G.M.; Falde, E.J.; Cline, G.W.; Maynard, K. Stewart; Ko, N.; Sotiris, J.; Naji, A.; Hering, B.J.; Papas, K.K.

    2009-01-01

    Real-time, accurate assessment of islet viability is critical for avoiding transplantation of nontherapeutic preparations. Measurements of the intracellular ADP/ATP ratio have been recently proposed as useful prospective estimates of islet cell viability and potency. However, dead cells may be rapidly depleted of both ATP and ADP, which would render the ratio incapable of accounting for dead cells. Since the DNA of dead cells is expected to remain stable over prolonged periods of time (days), we hypothesized that use of the ATP/DNA ratio would take into account dead cells and may be a better indicator of islet cell viability than the ADP/ATP ratio. We tested this hypothesis using mixtures of healthy and lethally heat-treated (HT) rat insulinoma cells and human islets. Measurements of ATP/DNA and ADP/ATP from the known mixtures of healthy and HT cells and islets were used to evaluate how well these parameters correlated with viability. The results indicated that ATP and ADP were rapidly (within 1 hour) depleted in HT cells. The fraction of HT cells in a mixture correlated linearly with the ATP/DNA ratio, whereas the ADP/ATP ratio was highly scattered, remaining effectively unchanged. Despite similar limitations in both ADP/ATP and ATP/DNA ratios, in that ATP levels may fluctuate significantly and reversibly with metabolic stress, the results indicated that ATP/DNA was a better measure of islet viability than the ADP/ATP ratio. PMID:18374063
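The argument for ATP/DNA over ADP/ATP can be made concrete with a toy mixture model; the per-cell numbers are illustrative, not from the paper:

```python
# Toy model of the mixing experiment: a fraction f of the cells is lethally
# heat-treated (HT). HT cells keep their DNA but are rapidly depleted of
# both ATP and ADP; per-cell amounts below are made-up illustrative units.
ATP_LIVE, ADP_LIVE, DNA_PER_CELL = 10.0, 1.0, 1.0

def ratios(f_dead):
    atp = (1.0 - f_dead) * ATP_LIVE       # only live cells contribute ATP
    adp = (1.0 - f_dead) * ADP_LIVE       # ... and ADP
    dna = DNA_PER_CELL                    # DNA is stable for live and dead cells
    return atp / dna, adp / atp

for f in (0.0, 0.2, 0.4, 0.6, 0.8):
    atp_dna, adp_atp = ratios(f)
    print(f"dead fraction {f:.1f}:  ATP/DNA = {atp_dna:5.2f}  ADP/ATP = {adp_atp:.2f}")
```

ATP/DNA falls linearly with the dead-cell fraction, while ADP/ATP is blind to it, which is the paper's central point.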

  12. Analytical approximations to the Hotelling trace for digital x-ray detectors

    Science.gov (United States)

    Clarkson, Eric; Pineda, Angel R.; Barrett, Harrison H.

    2001-06-01

    The Hotelling trace is the signal-to-noise ratio for the ideal linear observer in a detection task. We provide an analytical approximation for this figure of merit when the signal is known exactly, the background is generated by a stationary random process, and the imaging system is an ideal digital x-ray detector. This approximation is based on assuming that the detector is infinite in extent. We test this approximation for finite-size detectors by comparing it to exact calculations using matrix inversion of the data covariance matrix. After verifying the validity of the approximation under a variety of circumstances, we use it to generate plots of the Hotelling trace as a function of pairs of parameters of the system, the signal and the background.
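The figure of merit itself is simple to compute once the covariance is known; a small numpy sketch with a made-up Gaussian signal and white-noise covariance (the value of an analytical approximation is precisely avoiding this solve when the covariance matrix is large):

```python
import numpy as np

def hotelling_snr2(signal, cov):
    """Squared Hotelling SNR for a known signal in Gaussian noise:
    SNR^2 = s^T K^{-1} s, with K the data covariance matrix."""
    return float(signal @ np.linalg.solve(cov, signal))

npix = 16
s = np.exp(-0.5 * ((np.arange(npix) - 8.0) / 2.0) ** 2)   # toy Gaussian blob signal
K = 0.5 * np.eye(npix)                                    # white noise, variance 0.5
print(f"Hotelling SNR = {np.sqrt(hotelling_snr2(s, K)):.3f}")
```

For white noise this reduces to the matched-filter result |s|²/σ², which is a convenient check on the matrix computation.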

  13. Lost in interpretation: should the highest VC value be used to calculate the FEV1/VC ratio?

    Directory of Open Access Journals (Sweden)

    Fortis S

    2016-09-01

    Spyridon Fortis, Department of Medicine, Division of Pulmonary, Critical Care and Occupational Medicine, University of Iowa, Iowa City, IA, USA. Airflow obstruction, or obstructive ventilatory defect (OVD), is defined as a low ratio of forced expiratory volume in 1 second (FEV1) to vital capacity (VC). VC can be measured in various ways, and the definition of a "low FEV1/VC" ratio varies. VC can be measured during forced expiration before bronchodilators (forced vital capacity [FVC]) and after bronchodilators (post-FVC), during slow expiration (slow vital capacity [SVC]), and during inspiration (inspiratory vital capacity [IVC]). Theoretically, in a healthy person, VC values should be the same regardless of the maneuver used. Nevertheless, SVC is usually larger than FVC except in patients with no OVD and a body mass index <25 kg/m2.1 In obstructive lung diseases, FVC may be reduced, which may result in an increase of the FEV1/FVC ratio and misdiagnosis.2 For that reason, the American Thoracic Society-European Respiratory Society recommends using SVC or IVC to calculate the FEV1/VC ratio.2 Approximately 10% of smokers have FEV1% predicted <80% and FEV1/FVC >70%, a pattern known as preserved ratio impaired spirometry.3 Of all the subjects with FVC below the lower limit of normal (LLN) and FEV1/FVC > LLN, only 64% have restriction in lung volumes. The remaining 36% have a nonspecific pulmonary function test (PFT) pattern.4 Approximately 15% of patients with this nonspecific PFT pattern develop OVD in follow-up PFTs.4 It is possible that a portion of patients with obstructive lung disease remain underdiagnosed when FVC is used to compute the FEV1/FVC ratio. View the original paper by Torén and colleagues.
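The recommendation to avoid an FVC-inflated ratio amounts to dividing FEV1 by the largest measured vital capacity; a one-function sketch (hypothetical helper, predicted values and lower limits of normal omitted):

```python
def fev1_vc_ratio(fev1, fvc, svc=None, ivc=None):
    """FEV1/VC computed with the largest available vital capacity, in the
    spirit of the ATS-ERS recommendation discussed above. Illustrative
    helper only; real interpretation needs predicted values and LLNs."""
    vc = max(v for v in (fvc, svc, ivc) if v is not None)
    return fev1 / vc

# e.g. FEV1 2.0 L, FVC 3.0 L, SVC 3.4 L: FVC alone would overstate the ratio
print(f"FEV1/FVC    = {2.0 / 3.0:.2f}")
print(f"FEV1/VC(max) = {fev1_vc_ratio(2.0, 3.0, svc=3.4):.2f}")
```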

  14. Method of Poisson's ratio imaging within a material part

    Science.gov (United States)

    Roth, Don J. (Inventor)

    1996-01-01

    The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data are produced using a longitudinal wave transducer and shear wave data are produced using a shear wave transducer. The respective data are then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the image.
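For isotropic materials, Poisson's ratio follows from the longitudinal and shear wave speeds via a standard elasticity relation, presumably the kind of per-point calculation the patent applies to the transducer data; a sketch:

```python
def poissons_ratio(v_long, v_shear):
    """Poisson's ratio of an isotropic solid from longitudinal and shear
    ultrasonic wave speeds: nu = (r^2 - 2) / (2 (r^2 - 1)), r = v_long/v_shear.
    Standard elasticity relation; illustrative of the imaging method above."""
    r2 = (v_long / v_shear) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

# e.g. typical ultrasonic wave speeds in aluminium, m/s (approximate)
print(f"nu = {poissons_ratio(6320.0, 3130.0):.3f}")
```

Applying this pixel by pixel over a scanned grid of wave-speed measurements yields the Poisson's ratio image.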

  15. High thrust-to-power ratio micro-cathode arc thruster

    Directory of Open Access Journals (Sweden)

    Joseph Lukas

    2016-02-01

    The Micro-Cathode Arc Thruster (μCAT) is an electric propulsion device that ablates solid cathode material through an electrical vacuum arc discharge to create plasma and ultimately produce thrust in the μN to mN range. About 90% of the arc discharge current is conducted by electrons, which go toward heating the anode and contribute very little to thrust, with only the remaining 10% going toward thrust in the form of ion current. A preliminary set of experiments was conducted to show that, at the same power level, thrust may be increased by utilizing an ablative anode. It was shown that ablative anode particles were found on a collection plate, compared to no particles from a non-ablative anode, while another experiment showed an increase in ion-to-arc current of approximately 40% at low frequencies compared to the non-ablative anode. Utilizing anode ablation thus leads to an increase in the thrust-to-power ratio of the μCAT.

  16. An effective algorithm for approximating adaptive behavior in seasonal environments

    DEFF Research Database (Denmark)

    Sainmont, Julie; Andersen, Ken Haste; Thygesen, Uffe Høgsbro

    2015-01-01

    Behavior affects most aspects of ecological processes and rates, and yet modeling frameworks which efficiently predict and incorporate behavioral responses into ecosystem models remain elusive. Behavioral algorithms based on life-time optimization, adaptive dynamics or game theory are unsuited for large global models because of their high computational demand. We compare an easily integrated, computationally efficient behavioral algorithm known as Gilliam's rule against the solution from a life-history optimization. The approximation takes into account only the current conditions to optimize behavior: the so-called "myopic approximation", "short-sighted", or "static optimization". We explore the performance of the myopic approximation with diel vertical migration (DVM) as an example of a daily routine, a behavior with seasonal dependence that trades off predation risk with foraging...

  17. Choice with frequently changing food rates and food ratios.

    Science.gov (United States)

    Baum, William M; Davison, Michael

    2014-03-01

    In studies of operant choice, when one schedule of a concurrent pair is varied while the other is held constant, the constancy of the constant schedule may exert discriminative control over performance. In our earlier experiments, schedules varied reciprocally across components within sessions, so that while the food ratio varied, the food rate remained constant. In the present experiment, we held one variable-interval (VI) schedule constant while varying the concurrent VI schedule within sessions. We studied five conditions, each with a different constant left VI schedule. On the right key, seven different VI schedules were presented in seven different unsignaled components. We analyzed performance at several different time scales. At the longest time scale, across conditions, behavior ratios varied with food ratios as would be expected from the generalized matching law. At shorter time scales, effects due to holding the left VI constant became more and more apparent, the shorter the time scale. In choice relations across components, preference for the left key leveled off as the right key became leaner. Interfood choice approximated strict matching for the varied right key, whereas interfood choice hardly varied at all for the constant left key. At the shortest time scale, visit patterns differed for the left and right keys. Much evidence indicated the development of a fix-and-sample pattern. In sum, the procedural difference made a large difference to performance, except for choice at the longest time scale and the fix-and-sample pattern at the shortest time scale. © Society for the Experimental Analysis of Behavior.

  18. Reduction of determinate errors in mass bias-corrected isotope ratios measured using a multi-collector plasma mass spectrometer

    International Nuclear Information System (INIS)

    Doherty, W.

    2015-01-01

    A nebulizer-centric instrument response function model of the plasma mass spectrometer was combined with a signal drift model, and the result was used to identify the causes of the non-spectroscopic determinate errors remaining in mass bias-corrected Pb isotope ratios (Tl as internal standard) measured using a multi-collector plasma mass spectrometer. Model calculations, confirmed by measurement, show that the detectable time-dependent errors are a result of the combined effect of signal drift and differences in the coordinates of the Pb and Tl response function maxima (horizontal offset effect). If there are no horizontal offsets, then the mass bias-corrected isotope ratios are approximately constant in time. In the absence of signal drift, the response surface curvature and horizontal offset effects are responsible for proportional errors in the mass bias-corrected isotope ratios. The proportional errors will be different for different analyte isotope ratios and different at every instrument operating point. Consequently, mass bias coefficients calculated using different isotope ratios are not necessarily equal. The error analysis based on the combined model provides strong justification for recommending a three step correction procedure (mass bias correction, drift correction and a proportional error correction, in that order) for isotope ratio measurements using a multi-collector plasma mass spectrometer.
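The first step of the recommended procedure, exponential-law mass bias correction against the Tl internal standard, can be sketched as follows. The measured values are made up, the 205Tl/203Tl reference value is the commonly used certified figure, and the drift and proportional-error corrections described above are not included:

```python
import math

def mass_bias_correct(r_meas, m_num, m_den, r_tl_meas, r_tl_true=2.3871,
                      m205=204.9744, m203=202.9723):
    """Exponential-law mass bias correction with Tl as internal standard.
    The fractionation exponent f is derived from the measured vs reference
    205Tl/203Tl ratio and applied to the analyte ratio. A textbook sketch,
    not the paper's full three-step procedure."""
    f = math.log(r_tl_true / r_tl_meas) / math.log(m205 / m203)
    return r_meas * (m_num / m_den) ** f

# e.g. correcting a measured 206Pb/204Pb ratio (illustrative numbers)
corrected = mass_bias_correct(18.50, 205.9745, 203.9730, r_tl_meas=2.3650)
print(f"corrected 206Pb/204Pb = {corrected:.4f}")
```

When the measured Tl ratio reads low, the instrument discriminates against the heavy isotope and the Pb ratio is corrected upward accordingly.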

  19. Increased sex ratio in Russia and Cuba after Chernobyl: a radiological hypothesis.

    Science.gov (United States)

    Scherb, Hagen; Kusmierz, Ralf; Voigt, Kristina

    2013-08-15

    The ratio of male to female offspring at birth may be a simple and non-invasive way to monitor the reproductive health of a population. Except in societies where selective abortion skews the sex ratio, approximately 105 boys are born for every 100 girls. Generally, the human sex ratio at birth is remarkably constant in large populations. After the Chernobyl nuclear power plant accident in April 1986, a long-lasting significant elevation in the sex ratio has been found in Russia, i.e. more boys or fewer girls were born compared to expectation. Recently, an escalated sex ratio from 1987 onward has also been documented and discussed in the scientific literature for Cuba. By the end of the eighties of the last century, as much as about 60% of Cuba's food imports were provided by the former Soviet Union. Due to its difficult economic situation, Cuba had neither the necessary insight nor the political strength to circumvent the detrimental genetic effects of imported radioactively contaminated foodstuffs after Chernobyl. We propose that the long-term stable sex ratio increase in Cuba is essentially due to ionizing radiation. A synoptic trend analysis of Russian and Cuban annual sex ratios discloses upward jumps in 1987. The estimated jump height from 1986 to 1987 in Russia measures 0.51% with a 95% confidence interval of (0.28, 0.75); for Cuba the estimated jump height measures 2.99% (2.39, 3.60). The hypothesis could be tested further by radiological analyses of remains in Cuba for Cs-137 and Sr-90. If the evidence for the hypothesis is strengthened, there is potential to learn about genetic radiation risks and to prevent similar effects in present and future exposure situations.

  20. Approximate deconvolution models of turbulence analysis, phenomenology and numerical analysis

    CERN Document Server

    Layton, William J

    2012-01-01

    This volume presents a mathematical development of a recent approach to the modeling and simulation of turbulent flows based on methods for the approximate solution of inverse problems. The resulting Approximate Deconvolution Models or ADMs have some advantages over more commonly used turbulence models – as well as some disadvantages. Our goal in this book is to provide a clear and complete mathematical development of ADMs, while pointing out the difficulties that remain. In order to do so, we present the analytical theory of ADMs, along with its connections, motivations and complements in the phenomenology of and algorithms for ADMs.
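
    The approximate deconvolution operator at the core of ADMs can be illustrated with the van Cittert iteration u ← u + (ū − G u), which builds D_N ū = Σ_{k≤N} (I − G)^k ū. The sketch below uses a simple periodic three-point averaging filter as a stand-in for the filters treated in the book; all numbers are illustrative.

```python
import math

def filt(u):
    """Periodic three-point averaging filter G (a simple stand-in for the
    differential filters used in the ADM literature)."""
    n = len(u)
    return [0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[(i + 1) % n]
            for i in range(n)]

def van_cittert(ubar, N):
    """Approximate deconvolution D_N ubar = sum_{k=0..N} (I - G)^k ubar,
    computed by the van Cittert iteration u <- u + (ubar - G u)."""
    u = list(ubar)
    for _ in range(N):
        u = [ui + (bi - gi) for ui, bi, gi in zip(u, ubar, filt(u))]
    return u

n = 64
u_true = [math.sin(2 * math.pi * i / n) for i in range(n)]  # resolved signal
ubar = filt(u_true)                                         # filtered signal

err = lambda u: max(abs(a - b) for a, b in zip(u, u_true))
u0 = ubar                  # N = 0: no deconvolution
u5 = van_cittert(ubar, 5)  # N = 5: most filtered-out content recovered
```

    For a smooth signal the filter damps each mode only slightly, so a handful of van Cittert steps shrinks the deconvolution error by many orders of magnitude.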

  1. Carbon isotope ratios of organic matter in Bering Sea settling particles. Extremely high remineralization of organic carbon derived from diatoms

    International Nuclear Information System (INIS)

    Yasuda, Saki; Akagi, Tasuku; Naraoka, Hiroshi; Kitajima, Fumio; Takahashi, Kozo

    2016-01-01

    The carbon isotope ratios of organic carbon in settling particles collected in the highly-diatom-productive Bering Sea were determined. Wet decomposition was employed to oxidize relatively fresh organic matter. The amount of unoxidised organic carbon in the residue following wet decomposition was negligible. The δ13C of organic carbon in the settling particles showed a clear relationship with the SiO2/CaCO3 ratio of the settling particles: approximately -26‰ and -19‰ at lower and higher SiO2/CaCO3 ratios, respectively. The δ13C values were largely interpreted in terms of mixing of two major plankton sources. Both δ13C and compositional data can be explained consistently only by assuming that more than 98% of diatomaceous organic matter decays and that organic matter derived from carbonate-shelled plankton may remain much less remineralized. A greater amount of diatom-derived organic matter is found to be trapped as the SiO2/CaCO3 ratio of the settling particles increases. The ratio of organic carbon to inorganic carbon, known as the rain ratio, therefore tends to increase proportionally with the SiO2/CaCO3 ratio under extremely diatom-productive conditions. (author)

  2. Approximation methods for efficient learning of Bayesian networks

    CERN Document Server

    Riggelsen, C

    2008-01-01

    This publication offers and investigates efficient Monte Carlo simulation methods in order to realize a Bayesian approach to approximate learning of Bayesian networks from both complete and incomplete data. For large amounts of incomplete data when Monte Carlo methods are inefficient, approximations are implemented, such that learning remains feasible, albeit non-Bayesian. The topics discussed are: basic concepts about probabilities, graph theory and conditional independence; Bayesian network learning from data; Monte Carlo simulation techniques; and, the concept of incomplete data. In order to provide a coherent treatment of matters, thereby helping the reader to gain a thorough understanding of the whole concept of learning Bayesian networks from (in)complete data, this publication combines in a clarifying way all the issues presented in the papers with previously unpublished work.

  3. Approximate solution of oil film load-carrying capacity of turbulent journal bearing with couple stress flow

    Science.gov (United States)

    Zhang, Yongfang; Wu, Peng; Guo, Bo; Lü, Yanjun; Liu, Fuxi; Yu, Yingtian

    2015-01-01

    The instability of the rotor dynamic system supported by oil journal bearing is encountered frequently, such as the half-speed whirl of the rotor, which is caused by oil film lubricant with nonlinearity. Currently, more attention is paid to the physical characteristics of oil film due to an oil-lubricated journal bearing being the important supporting component of the bearing-rotor systems and its nonlinear nature. In order to analyze the lubrication characteristics of journal bearings efficiently and save computational efforts, an approximate solution of nonlinear oil film forces of a finite length turbulent journal bearing with couple stress flow is proposed based on Sommerfeld and Ocvirk numbers. Reynolds equation in lubrication of a finite length turbulent journal bearing is solved based on multi-parametric principle. Load-carrying capacity of nonlinear oil film is obtained, and the results obtained by different methods are compared. The validation of the proposed method is verified, meanwhile, the relationships of load-carrying capacity versus eccentricity ratio and width-to-diameter ratio under turbulent and couple stress working conditions are analyzed. The numerical results show that both couple stress flow and eccentricity ratio have obvious influence on oil film pressure distribution, and the proposed method approximates the load-carrying capacity of turbulent journal bearings efficiently with various width-to-diameter ratios. This research proposes an approximate solution of oil film load-carrying capacity of turbulent journal bearings with different width-to-diameter ratios, which are suitable for high eccentricity ratios and heavy loads.

  4. Electromagnetic radiation damping of charges in external gravitational fields (weak field, slow motion approximation)

    Energy Technology Data Exchange (ETDEWEB)

    Rudolph, E [Max-Planck-Institut fuer Physik und Astrophysik, Muenchen (F.R. Germany)

    1975-01-01

    As a model for gravitational radiation damping of a planet, the electromagnetic radiation damping of an extended charged body moving in an external gravitational field is calculated in harmonic coordinates using a weak field, slow-motion approximation. Special attention is paid to the case where this gravitational field is a weak Schwarzschild field. Using Green's function methods, it is shown that in a slow-motion approximation there is a strange connection between the tail part and the sharp part: radiation reaction terms of the tail part can cancel corresponding terms of the sharp part. Due to this cancelling mechanism, the lowest order electromagnetic radiation damping force in an external gravitational field in harmonic coordinates remains the flat space Abraham-Lorentz force. It is demonstrated in this simplified model that a naive slow-motion approximation may easily lead to divergent higher order terms. It is shown that this difficulty does not arise up to the considered order.

  5. Using Relative Statistics and Approximate Disease Prevalence to Compare Screening Tests.

    Science.gov (United States)

    Samuelson, Frank; Abbey, Craig

    2016-11-01

    Schatzkin et al. and other authors demonstrated that the ratios of some conditional statistics, such as the true positive fraction, are equal to the ratios of unconditional statistics, such as disease detection rates. Therefore we can calculate these ratios between two screening tests on the same population even if patients with negative tests are not followed up with a reference procedure and the true and false negative rates are unknown. We demonstrate that this same property applies to an expected utility metric. We also demonstrate that while simple estimates of relative specificities and relative areas under ROC curves (AUC) do depend on the unknown negative rates, we can write these ratios in terms of disease prevalence, and the dependence of these ratios on a posited prevalence is often weak, particularly if that prevalence is small or the performance of the two screening tests is similar. Therefore we can estimate relative specificity or AUC with little loss of accuracy if we use an approximate value of disease prevalence.
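
    A toy calculation with hypothetical counts shows why the unknown number of diseased patients cancels from the ratio of true-positive fractions, while relative specificity depends only weakly on a posited prevalence:

```python
import math

# Same screened population for both tests. D, the number of diseased
# subjects, is unknown in practice because test-negative patients receive
# no reference procedure. All counts are hypothetical.
N, D = 100_000, 500
tp_a, tp_b = 400, 320          # cancers detected by test A and test B

# Ratio of true-positive fractions: the unknown D cancels, leaving the
# ratio of unconditional detection rates, which needs only N.
tpf_ratio = (tp_a / D) / (tp_b / D)
rate_ratio = (tp_a / N) / (tp_b / N)

# Relative specificity does depend on the unknown negative rates, but only
# through a posited prevalence, and weakly so when prevalence is small.
def rel_specificity(fp_a, fp_b, n, prevalence):
    d = prevalence * n
    return (n - d - fp_a) / (n - d - fp_b)

weak_dependence = abs(rel_specificity(3000, 4000, N, 0.005)
                      - rel_specificity(3000, 4000, N, 0.010))
```

    Here doubling the posited prevalence from 0.5% to 1% changes the relative specificity by well under 0.1%, illustrating the weak dependence noted above.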

  6. Approximate solutions of common fixed-point problems

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book presents results on the convergence behavior of algorithms which are known as vital tools for solving convex feasibility problems and common fixed point problems. The main goal in dealing with a known computational error is to find what approximate solution can be obtained and how many iterates one needs to find it. According to known results, these algorithms should converge to a solution. In this exposition, these algorithms are studied taking into account computational errors, which are always present in practice. In this case the convergence to a solution does not take place. We show that our algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Beginning with an introduction, this monograph moves on to study: · dynamic string-averaging methods for common fixed point problems in a Hilbert space · dynamic string methods for common fixed point problems in a metric space · dynamic string-averaging version of the proximal...

  7. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, these methods such as Approximate Bayesian Computation (ABC) can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable efforts into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
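
    The simulated-gradient idea can be sketched on a toy problem: estimating the mean of a Gaussian from an observed summary statistic by Kiefer-Wolfowitz-style stochastic approximation. This is a minimal stand-in, not the authors' algorithm or tuning:

```python
import random

random.seed(2)

# Observed data and its summary statistic (the sample mean)
obs = [random.gauss(3.0, 1.0) for _ in range(200)]
s_obs = sum(obs) / len(obs)

def sim_summary(theta, n=200):
    """Summary statistic of a fresh simulated data set at parameter theta."""
    return sum(random.gauss(theta, 1.0) for _ in range(n)) / n

def loss(theta):
    """Noisy, simulation-based objective: squared summary-statistic distance."""
    return (sim_summary(theta) - s_obs) ** 2

# Kiefer-Wolfowitz-style stochastic approximation: move along a simulated
# (finite-difference) gradient with decaying gain sequences.
theta = 0.0
for k in range(1, 301):
    a, c = 0.5 / k, 0.5 / k ** 0.25
    g = (loss(theta + c) - loss(theta - c)) / (2 * c)
    theta -= a * g
```

    Each gradient estimate is built from fresh simulations only, which is the point of the approach: no likelihood evaluation is ever required.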

  8. Self-similar factor approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.; Sornette, D.

    2003-01-01

    The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the obtained earlier self-similar exponential approximants and self-similar root approximants. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Pade approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which include a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Pade approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties

  9. Modulated Pade approximant

    International Nuclear Information System (INIS)

    Ginsburg, C.A.

    1980-01-01

    In many problems, a desired property A of a function f(x) is determined by the behaviour f(x) ≈ g(x, A) as x → x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (modulated Pade approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)

  10. New fossil remains of Homo naledi from the Lesedi Chamber, South Africa

    Science.gov (United States)

    Hawks, John; Elliott, Marina; Schmid, Peter; Churchill, Steven E; de Ruiter, Darryl J; Roberts, Eric M; Hilbert-Wolf, Hannah; Garvin, Heather M; Williams, Scott A; Delezene, Lucas K; Feuerriegel, Elen M; Randolph-Quinney, Patrick; Kivell, Tracy L; Laird, Myra F; Tawane, Gaokgatlhe; DeSilva, Jeremy M; Bailey, Shara E; Brophy, Juliet K; Meyer, Marc R; Skinner, Matthew M; Tocheri, Matthew W; VanSickle, Caroline; Walker, Christopher S; Campbell, Timothy L; Kuhn, Brian; Kruger, Ashley; Tucker, Steven; Gurtov, Alia; Hlophe, Nompumelelo; Hunter, Rick; Morris, Hannah; Peixotto, Becca; Ramalepa, Maropeng; van Rooyen, Dirk; Tsikoane, Mathabela; Boshoff, Pedro; Dirks, Paul HGM; Berger, Lee R

    2017-01-01

    The Rising Star cave system has produced abundant fossil hominin remains within the Dinaledi Chamber, representing a minimum of 15 individuals attributed to Homo naledi. Further exploration led to the discovery of hominin material, now comprising 131 hominin specimens, within a second chamber, the Lesedi Chamber. The Lesedi Chamber is far separated from the Dinaledi Chamber within the Rising Star cave system, and represents a second depositional context for hominin remains. In each of three collection areas within the Lesedi Chamber, diagnostic skeletal material allows a clear attribution to H. naledi. Both adult and immature material is present. The hominin remains represent at least three individuals based upon duplication of elements, but more individuals are likely present based upon the spatial context. The most significant specimen is the near-complete cranium of a large individual, designated LES1, with an endocranial volume of approximately 610 ml and associated postcranial remains. The Lesedi Chamber skeletal sample extends our knowledge of the morphology and variation of H. naledi, and evidence of H. naledi from both recovery localities shows a consistent pattern of differentiation from other hominin species. DOI: http://dx.doi.org/10.7554/eLife.24232.001 PMID:28483039

  11. Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.

    Science.gov (United States)

    Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E

    2018-06-01

    An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
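
    A minimal sketch of the state-dependent convex combination is given below; the weighting schedule and the dummy value functions are illustrative assumptions, not the paper's trained approximators:

```python
import math

def value_approx(x, v_staf, v_rmbrl, radius=1.0):
    """State-dependent convex combination of a local (StaF) and a regional
    (R-MBRL) value-function approximation. The weight schedule here is an
    illustrative choice: the StaF weight goes to zero as the state enters
    a ball of the given radius around the origin."""
    lam = min(1.0, math.hypot(*x) / radius)
    return lam * v_staf(x) + (1.0 - lam) * v_rmbrl(x)

# Dummy approximators standing in for the trained critics
v_staf = lambda x: 2.0
v_rmbrl = lambda x: 5.0

near = value_approx((0.0, 0.0), v_staf, v_rmbrl)  # weight fully on R-MBRL
far = value_approx((3.0, 0.0), v_staf, v_rmbrl)   # weight fully on StaF
```

    The combination is convex for every state, so the blended approximation inherits bounds satisfied by both constituents.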

  12. Vesicle computers: Approximating a Voronoi diagram using Voronoi automata

    International Nuclear Information System (INIS)

    Adamatzky, Andrew; De Lacy Costello, Ben; Holley, Julian; Gorecki, Jerzy; Bull, Larry

    2011-01-01

    Highlights: We model irregular arrangements of vesicles filled with chemical systems. We examine the influence of the precipitation threshold on the system's computational potential. We demonstrate computation of a Voronoi diagram and skeleton. Abstract: Irregular arrangements of vesicles filled with excitable and precipitating chemical systems are imitated by Voronoi automata - finite-state machines defined on a planar Voronoi diagram. Every Voronoi cell takes four states: resting, excited, refractory and precipitate. A resting cell excites if it has at least one neighbour in an excited state. The cell precipitates if the ratio of excited cells in its neighbourhood to the number of neighbours exceeds a certain threshold. To approximate a Voronoi diagram on Voronoi automata we project a planar set onto the automaton lattice, so that cells corresponding to data-points are excited. Excitation waves propagate across the Voronoi automaton, interact with each other and form precipitate at the points of interaction. The configuration of the precipitate represents the edges of an approximated Voronoi diagram. We discover the relationship between the quality of the Voronoi diagram approximation and the precipitation threshold, and demonstrate the feasibility of our model in approximating Voronoi diagrams of arbitrary-shaped objects and in constructing a skeleton of a planar shape.
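
    The update rules described above can be sketched on an arbitrary neighbourhood graph; the path graph, threshold, and state encoding below are illustrative choices:

```python
# States of a Voronoi-automaton cell
REST, EXC, REF, PPT = range(4)

def step(state, nbrs, theta):
    """One synchronous update. A resting cell precipitates if the fraction
    of excited neighbours exceeds theta, otherwise it excites if at least
    one neighbour is excited; excited cells become refractory, refractory
    cells recover, and precipitate is absorbing."""
    new = {}
    for c, s in state.items():
        exc = sum(state[nb] == EXC for nb in nbrs[c])
        if s == REST:
            if exc / len(nbrs[c]) > theta:
                new[c] = PPT
            elif exc >= 1:
                new[c] = EXC
            else:
                new[c] = REST
        elif s == EXC:
            new[c] = REF
        elif s == REF:
            new[c] = REST
        else:
            new[c] = PPT
    return new

# Toy neighbourhood graph: a path of five cells, excitation seeded at one end.
# The wave travels along the path; the far endpoint (one neighbour, so an
# excited-neighbour fraction of 1) precipitates when the wave arrives.
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
state = {c: REST for c in nbrs}
state[0] = EXC
for _ in range(6):
    state = step(state, nbrs, theta=0.9)
```

    Interior cells never reach the threshold here, so the excitation passes through them; only the endpoint, where the excited fraction hits 1, leaves precipitate.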

  13. Hot embossing of photonic crystal polymer structures with a high aspect ratio

    DEFF Research Database (Denmark)

    Schelb, Mauno; Vannahme, Christoph; Kolew, Alexander

    2011-01-01

    A nickel tool for the replication of structures with lateral dimensions of 110 nm and heights of approximately 370 nm is fabricated via electroplating of a nanostructured sample, resulting in an aspect ratio of approximately 3.5. The structures are subsequently hot embossed into PMMA and COC substrates...

  14. Analysis of Dextromethorphan and Dextrorphan in Skeletal Remains Following Decomposition in Different Microclimate Conditions.

    Science.gov (United States)

    Unger, K A; Watterson, J H

    2016-10-01

    The effects of decomposition microclimate on the distribution of dextromethorphan (DXM) and dextrorphan (DXT) in skeletonized remains of rats acutely exposed to DXM were examined. Animals (n = 10) received DXM (75 mg/kg, i.p.), were euthanized 30 min post-dose and immediately allowed to decompose at either Site A (shaded forest microenvironment on a grass-covered soil substrate) or Site B (rocky substrate exposed to direct sunlight, 600 m from Site A). Ambient temperature and relative humidity were automatically recorded 3 cm above rats at each site. Skeletal elements (vertebral columns, ribs, pelvic girdles, femora, tibiae, humeri and scapulae) were harvested, and analyzed using microwave assisted extraction, microplate solid phase extraction, and GC/MS. Drug levels, expressed as mass-normalized response ratios, and the ratios of DXT and DXM levels were compared across bones and between microclimate sites. No significant differences in DXT levels or metabolite/parent ratios were observed between sites or across bones. Only femoral DXM levels differed significantly between microclimate sites. For pooled data, microclimate was not observed to significantly affect analyte levels, nor the ratio of levels of DXT and DXM. These data suggest that microclimate conditions do not influence DXM and metabolite distribution in skeletal remains.

  15. Moment ratios for heavy QQ̄ states and their dependence on the quark-mass definition

    International Nuclear Information System (INIS)

    Bertlmann, R.A.

    1982-01-01

    When analyzing heavy QQ̄ states with the help of exponential moments, we argue that a ratio of moments should be expanded rather than the moments themselves. Within a nonrelativistic approximation we show that the expanded ratio is totally independent of the quark-mass definition, whereas the nonexpanded ratio of moments depends strongly on it. (Author)

  16. Bulk viscosity of strongly interacting matter in the relaxation time approximation

    Science.gov (United States)

    Czajka, Alina; Hauksson, Sigtryggur; Shen, Chun; Jeon, Sangyong; Gale, Charles

    2018-04-01

    We show how thermal mean field effects can be incorporated consistently in the hydrodynamical modeling of heavy-ion collisions. The nonequilibrium correction to the distribution function resulting from a temperature-dependent mass is obtained in a procedure which automatically satisfies the Landau matching condition and is thermodynamically consistent. The physics of the bulk viscosity is studied here for Boltzmann and Bose-Einstein gases within the Chapman-Enskog and 14-moment approaches in the relaxation time approximation. Constant and temperature-dependent masses are considered in turn. It is shown that, in the small mass limit, both methods lead to the same value of the ratio of the bulk viscosity to its relaxation time. The inclusion of a temperature-dependent mass leads to the emergence of the βλ function in that ratio, and it is of the expected parametric form for the Boltzmann gas, while for the Bose-Einstein case it is affected by the infrared cutoff. This suggests that the relaxation time approximation may be too crude to obtain a reliable form of ζ /τR for gases obeying Bose-Einstein statistics.

  17. Electronic states in clusters of H forms of zeolites with variation of the Si/Al ratio

    International Nuclear Information System (INIS)

    Gun'ko, V.M.

    1987-01-01

    Fragments of H forms of zeolites of the faujasite type including up to 12 silicon- and aluminum-oxygen tetrahedrons and having different Si/Al ratios have been calculated in the cluster approximation by the MINDO/3 and CNDO/2 methods. The dependence of the integral and orbital densities of electronic states in the clusters on the aluminum content has been investigated. It has been shown that the profiles of the s- and p-orbital density of states of Al remain practically unchanged as the Si/Al ratio is lowered and that the maxima of the orbital density of states of Si broaden, and new maxima appear at the bottom and top of the valence band. When the acidity of the structural OH groups is lowered, the maxima of the orbital density of states of the H atoms are displaced appreciably only in the deep valence band, while in the upper valence band the positions of the peaks of the s-orbital density of states of the H atoms remain constant. Satisfactory agreement of the calculated orbital densities of states of Si, Al, and O with the corresponding x-ray photoelectron spectra has been obtained. In the deep valence band the data from the MINDO/3 method are better than those from the CNDO/2 method and reproduce the positions of the maxima in the x-ray photoelectron spectra

  18. Convergence and approximate calculation of average degree under different network sizes for decreasing random birth-and-death networks

    Science.gov (United States)

    Long, Yin; Zhang, Xiao-Jun; Wang, Kui

    2018-05-01

    In this paper, convergence and approximate calculation of the average degree under different network sizes for decreasing random birth-and-death networks (RBDNs) are studied. First, we find and demonstrate that the average degree converges in the form of a power law. Meanwhile, we discover that the ratios of successive terms of the convergent remainder are independent of the network link number for large network sizes, and we theoretically prove that the limit of this ratio is a constant. Moreover, since it is difficult to calculate the analytical solution of the average degree for large network sizes, we adopt a numerical method to obtain an approximate expression for the average degree, approximating its analytical solution. Finally, simulations are presented to verify our theoretical results.
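
    The behaviour described above — remainder ratios that settle to a constant, and a numerical estimate standing in for an intractable analytical limit — can be illustrated on a toy sequence with a geometric remainder. Aitken's Δ² method is used here purely for illustration; it is not the paper's numerical method.

```python
def aitken(x):
    """Aitken delta-squared acceleration: estimates the limit of a
    convergent sequence from three consecutive terms."""
    return [x0 - (x1 - x0) ** 2 / (x2 - 2 * x1 + x0)
            for x0, x1, x2 in zip(x, x[1:], x[2:])]

# Toy stand-in for the average-degree sequence: converges to L with a
# geometric remainder c * r**n, so successive remainder ratios are constant.
L, c, r = 4.0, 1.5, 0.6
seq = [L + c * r ** n for n in range(12)]

ratios = [(seq[n + 1] - L) / (seq[n] - L) for n in range(11)]  # all close to r
est = aitken(seq[:5])[-1]  # numerical estimate of the limit from 5 terms
```

    For an exactly geometric remainder the Δ² estimate recovers the limit from just three terms, which is why such accelerations are attractive when the analytical limit is out of reach.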

  19. Intersecting-storage-rings inclusive data and the charge ratio of cosmic-ray muons

    CERN Document Server

    Yen, E

    1973-01-01

    The (μ+/μ-) ratio at sea level has been calculated by Frazer et al (1972) using the hypothesis of limiting fragmentation together with the inclusive data below 30 GeV/c. They obtained a value of μ+/μ- approximately 1.56, to be compared with the experimental value of 1.2 to 1.4. The ratio has been recalculated using the recent ISR (CERN Intersecting Storage Rings) data, yielding a value of μ+/μ- of approximately 1.40, in good agreement with the experimental result. (8 refs).

  20. CMB spectra and bispectra calculations: making the flat-sky approximation rigorous

    International Nuclear Information System (INIS)

    Bernardeau, Francis; Pitrou, Cyril; Uzan, Jean-Philippe

    2011-01-01

    This article constructs flat-sky approximations in a controlled way in the context of the cosmic microwave background observations for the computation of both spectra and bispectra. For angular spectra, it is explicitly shown that there exists a whole family of flat-sky approximations of similar accuracy for which the expression and amplitude of next to leading order terms can be explicitly computed. It is noted that in this context two limiting cases can be encountered for which the expressions can be further simplified. They correspond to cases where either the sources are localized in a narrow region (thin-shell approximation) or are slowly varying over a large distance (which leads to the so-called Limber approximation). Applying this to the calculation of the spectra it is shown that, as long as the late integrated Sachs-Wolfe contribution is neglected, the flat-sky approximation at leading order is accurate at 1% level for any multipole. Generalization of this construction scheme to the bispectra led to the introduction of an alternative description of the bispectra for which the flat-sky approximation is well controlled. This is not the case for the usual description of the bispectrum in terms of reduced bispectrum for which a flat-sky approximation is proposed but the next-to-leading order terms of which remain obscure

  1. Increased sex ratio in Russia and Cuba after Chernobyl: a radiological hypothesis

    Science.gov (United States)

    2013-01-01

    Background The ratio of male to female offspring at birth may be a simple and non-invasive way to monitor the reproductive health of a population. Except in societies where selective abortion skews the sex ratio, approximately 105 boys are born for every 100 girls. Generally, the human sex ratio at birth is remarkably constant in large populations. After the Chernobyl nuclear power plant accident in April 1986, a long lasting significant elevation in the sex ratio has been found in Russia, i.e. more boys or fewer girls compared to expectation were born. Recently, also for Cuba an escalated sex ratio from 1987 onward has been documented and discussed in the scientific literature. Presentation of the hypothesis By the end of the eighties of the last century in Cuba as much as about 60% of the food imports were provided by the former Soviet Union. Due to its difficult economic situation, Cuba had neither the necessary insight nor the political strength to circumvent the detrimental genetic effects of imported radioactively contaminated foodstuffs after Chernobyl. We propose that the long term stable sex ratio increase in Cuba is essentially due to ionizing radiation. Testing of the hypothesis A synoptic trend analysis of Russian and Cuban annual sex ratios discloses upward jumps in 1987. The estimated jump height from 1986 to 1987 in Russia measures 0.51% with a 95% confidence interval (0.28, 0.75), p value < 0.0001. In Cuba the estimated jump height measures 2.99% (2.39, 3.60), p value < 0.0001. The hypothesis may be tested by reconstruction of imports from the world markets to Cuba and by radiological analyses of remains in Cuba for Cs-137 and Sr-90. Implications of the hypothesis If the evidence for the hypothesis is strengthened, there is potential to learn about genetic radiation risks and to prevent similar effects in present and future exposure situations. PMID:23947741
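
    A simplified version of the jump estimate can be sketched on synthetic data: a level shift in annual sex ratios, estimated as the difference of mean log sex ratios after versus before 1987. The study itself used regression on birth counts; all numbers below are synthetic and chosen only to mimic a 3% shift.

```python
import math
import random

random.seed(1)

# Synthetic annual sex ratios (boys per girl) with a 3% level shift in 1987
years = list(range(1974, 2001))
true_sr = [1.06 * (1.03 if y >= 1987 else 1.0) for y in years]
obs = [sr + random.gauss(0.0, 0.003) for sr in true_sr]

# Simplified jump estimate: difference of mean log sex ratio after vs.
# before the changepoint, expressed in percent
before = [math.log(o) for y, o in zip(years, obs) if y < 1987]
after = [math.log(o) for y, o in zip(years, obs) if y >= 1987]
jump_pct = (math.exp(sum(after) / len(after)
                     - sum(before) / len(before)) - 1.0) * 100.0
```

    Working on the log scale makes the estimate a percentage change, matching the way the jump heights are reported above.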

  2. Approximate solution to the Kolmogorov equation for a fission chain-reacting system

    International Nuclear Information System (INIS)

    Ruby, L.; McSwine, T.L.

    1986-01-01

    An approximate solution has been obtained for the Kolmogorov equation describing a fission chain-reacting system. The method considers the population of neutrons, delayed-neutron precursors, and detector counts. The effect of the detector is separated from the statistics of the chain reaction by a weak coupling assumption that predicts that the detector responds to the average rather than to the instantaneous neutron population. An approximate solution to the remaining equation, involving the populations of neutrons and precursors, predicts a negative-binomial behaviour for the neutron probability distribution

  3. Effect of interaction of embedded crack and free surface on remaining fatigue life

    Directory of Open Access Journals (Sweden)

    Genshichiro Katsumata

    2016-12-01

    Embedded cracks located near the free surface of a component interact with the free surface. When the distance between the free surface and the embedded crack is short, stress at the crack tip ligament is higher than at the rest of the cracked section, so fast fatigue crack growth can be expected when an embedded crack lies near the free surface. To avoid catastrophic failures caused by fast fatigue crack growth at the crack tip ligament, fitness-for-service (FFS) codes provide crack-to-surface proximity rules. The proximity rules are used to determine whether cracks should be treated as embedded cracks as-is, or transformed to surface cracks. Although the concepts of the proximity rules are the same, the specific criteria and the rules to transform embedded cracks into surface cracks differ amongst FFS codes. This paper focuses on the interaction between an embedded crack and the free surface of a component, and on its effect on the remaining fatigue lives of embedded cracks under the proximity rules provided by the FFS codes. It is shown that the remaining fatigue lives of embedded cracks strongly depend on the crack aspect ratio and on the crack location relative to the component free surface. In addition, the proximity criteria defined by the API and RSE-M codes give overly conservative remaining lives, whereas the WES and AME codes always give long remaining lives and non-conservative estimates. When the crack aspect ratio is small, the ASME code also gives non-conservative estimates.
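
    The shared concept of the proximity rules can be sketched generically; the ratio limit and the transformed crack depth below are illustrative placeholders, since the actual criteria differ between the FFS codes compared in the paper:

```python
def recharacterize(a, s, ratio_limit=0.4):
    """Generic proximity-rule sketch for an embedded crack of half-depth a
    whose ligament to the free surface is s. If the ligament is small
    relative to the crack, the crack is transformed to a surface crack of
    depth 2a + s. Both ratio_limit and the transformed depth are
    illustrative; the real criteria differ amongst the FFS codes."""
    if s < ratio_limit * a:
        return ("surface", 2 * a + s)
    return ("embedded", 2 * a)

# A crack close to the surface is recharacterized...
kind_near, depth_near = recharacterize(a=5.0, s=1.0)
# ...while a deeply embedded one is analyzed as-is
kind_deep, depth_deep = recharacterize(a=5.0, s=10.0)
```

    The recharacterized surface crack is deeper than the embedded one, which is what makes a loose ratio limit conservative and a tight one potentially non-conservative.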

  4. Eigenvalue ratio detection based on exact moments of smallest and largest eigenvalues

    KAUST Repository

    Shakir, Muhammad; Tang, Wuchen; Rao, Anlei; Imran, Muhammad Ali; Alouini, Mohamed-Slim

    2011-01-01

    Detection based on the eigenvalues of the received signal covariance matrix is currently one of the most effective solutions to the spectrum sensing problem in cognitive radios. However, the results of these schemes have relied on asymptotic assumptions, since the closed-form expression of the exact eigenvalue-ratio distribution is exceptionally complex to compute in practice. In this paper, a non-asymptotic spectrum sensing approach to approximate the extreme eigenvalues is introduced. In this context, a Gaussian approximation approach based on exact analytical moments of the extreme eigenvalues is presented: the extreme eigenvalues are treated as dependent Gaussian random variables whose joint probability density function (PDF) is approximated by a bivariate Gaussian distribution function for any number of cooperating secondary users and received samples. The definition of a Copula is cited to analyze the extent of the dependency between the extreme eigenvalues. The decision threshold based on the ratio of dependent Gaussian extreme eigenvalues is then derived. The performance of the newly proposed approach is compared with the previously published asymptotic Tracy-Widom approximation approach.
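
    The eigenvalue-ratio test itself can be sketched with a two-receiver example; here an empirical threshold from noise-only trials stands in for the paper's analytical Gaussian approximation, and all parameters are illustrative:

```python
import math
import random

random.seed(0)

def cov2(x, y):
    """Sample covariance matrix entries for two receiver branches."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x) / n
    syy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return sxx, sxy, syy

def eig_ratio(sxx, sxy, syy):
    """Ratio of largest to smallest eigenvalue of the 2x2 covariance."""
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    d = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return (tr / 2.0 + d) / (tr / 2.0 - d)

# Empirical distribution of the ratio under H0 (noise only), giving a
# brute-force 5% false-alarm threshold where the paper derives an
# analytical one from Gaussian-approximated extreme eigenvalues.
h0 = []
for _ in range(400):
    x = [random.gauss(0.0, 1.0) for _ in range(50)]
    y = [random.gauss(0.0, 1.0) for _ in range(50)]
    h0.append(eig_ratio(*cov2(x, y)))
thresh = sorted(h0)[int(0.95 * len(h0))]

# H1: a common signal across branches inflates the eigenvalue ratio
s = [random.gauss(0.0, 1.0) for _ in range(50)]
x = [si + 0.3 * random.gauss(0.0, 1.0) for si in s]
y = [si + 0.3 * random.gauss(0.0, 1.0) for si in s]
detected = eig_ratio(*cov2(x, y)) > thresh
```

    A primary-user signal correlates the branches, driving the smallest eigenvalue toward the noise floor and the ratio far above the noise-only threshold; the ratio statistic needs no knowledge of the noise power, which is the appeal of the scheme.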

  5. Serial binary interval ratios improve rhythm reproduction

    Directory of Open Access Journals (Sweden)

    Xiang Wu

    2013-08-01

    Full Text Available Musical rhythm perception is a natural human ability that involves complex cognitive processes. Rhythm refers to the organization of events in time, and musical rhythms have an underlying hierarchical metrical structure. The metrical structure induces the feeling of a beat and the extent to which a rhythm induces the feeling of a beat is referred to as its metrical strength. Binary ratios are the most frequent interval ratio in musical rhythms. Rhythms with hierarchical binary ratios are better discriminated and reproduced than rhythms with hierarchical non-binary ratios. However, it remains unclear whether a superiority of serial binary over non-binary ratios in rhythm perception and reproduction exists. In addition, how different types of serial ratios influence the metrical strength of rhythms remains to be elucidated. The present study investigated serial binary vs. non-binary ratios in a reproduction task. Rhythms formed with exclusively binary (1:2:4:8), non-binary integer (1:3:5:6), and non-integer (1:2.3:5.3:6.4) ratios were examined within a constant meter. The results showed that the 1:2:4:8 rhythm type was more accurately reproduced than the 1:3:5:6 and 1:2.3:5.3:6.4 rhythm types, and the 1:2.3:5.3:6.4 rhythm type was more accurately reproduced than the 1:3:5:6 rhythm type. Further analyses showed that reproduction performance was better predicted by the distribution pattern of event occurrences within an inter-beat interval, than by the coincidence of events with beats, or the magnitude and complexity of interval ratios. Whereas rhythm theories and empirical data emphasize the role of the coincidence of events with beats in determining metrical strength and predicting rhythm performance, the present results suggest that rhythm processing may be better understood when the distribution pattern of event occurrences is taken into account. These results provide new insights into the mechanisms underlying musical rhythm perception.

  6. Serial binary interval ratios improve rhythm reproduction.

    Science.gov (United States)

    Wu, Xiang; Westanmo, Anders; Zhou, Liang; Pan, Junhao

    2013-01-01

    Musical rhythm perception is a natural human ability that involves complex cognitive processes. Rhythm refers to the organization of events in time, and musical rhythms have an underlying hierarchical metrical structure. The metrical structure induces the feeling of a beat and the extent to which a rhythm induces the feeling of a beat is referred to as its metrical strength. Binary ratios are the most frequent interval ratio in musical rhythms. Rhythms with hierarchical binary ratios are better discriminated and reproduced than rhythms with hierarchical non-binary ratios. However, it remains unclear whether a superiority of serial binary over non-binary ratios in rhythm perception and reproduction exists. In addition, how different types of serial ratios influence the metrical strength of rhythms remains to be elucidated. The present study investigated serial binary vs. non-binary ratios in a reproduction task. Rhythms formed with exclusively binary (1:2:4:8), non-binary integer (1:3:5:6), and non-integer (1:2.3:5.3:6.4) ratios were examined within a constant meter. The results showed that the 1:2:4:8 rhythm type was more accurately reproduced than the 1:3:5:6 and 1:2.3:5.3:6.4 rhythm types, and the 1:2.3:5.3:6.4 rhythm type was more accurately reproduced than the 1:3:5:6 rhythm type. Further analyses showed that reproduction performance was better predicted by the distribution pattern of event occurrences within an inter-beat interval, than by the coincidence of events with beats, or the magnitude and complexity of interval ratios. Whereas rhythm theories and empirical data emphasize the role of the coincidence of events with beats in determining metrical strength and predicting rhythm performance, the present results suggest that rhythm processing may be better understood when the distribution pattern of event occurrences is taken into account. These results provide new insights into the mechanisms underlying musical rhythm perception.

  7. Digital Marketing Budgets for Independent Hotels: Continuously Shifting to Remain Competitive in the Online World

    Directory of Open Access Journals (Sweden)

    Leora Halpern Lanz

    2015-08-01

    Full Text Available The hotel marketing budget, typically amounting to approximately 4-5% of an asset’s total revenue, must remain fluid so that the marketing director can continually adapt marketing tools to consumers’ communication methods and demands. This article suggests how an independent hotel can maximize its marketing budget by using multiple channels and strategies.

  8. Direct dating of Early Upper Palaeolithic human remains from Mladec.

    Science.gov (United States)

    Wild, Eva M; Teschler-Nicola, Maria; Kutschera, Walter; Steier, Peter; Trinkaus, Erik; Wanek, Wolfgang

    2005-05-19

    The human fossil assemblage from the Mladec Caves in Moravia (Czech Republic) has been considered to derive from a middle or later phase of the Central European Aurignacian period on the basis of archaeological remains (a few stone artefacts and organic items such as bone points, awls and perforated teeth), despite questions about the association between the human fossils and the archaeological materials and about the chronological implications of the limited archaeological remains. The morphological variability in the human assemblage, the presence of apparently archaic features in some specimens, and the assumed early date of the remains have made this fossil assemblage pivotal in assessments of modern human emergence within Europe. We present here the first successful direct accelerator mass spectrometry radiocarbon dating of five representative human fossils from the site. We selected sample materials from teeth and from one bone for ¹⁴C dating. The four tooth samples yielded uncalibrated ages of approximately 31,000 ¹⁴C years before present, and the bone sample (an ulna) provided an uncertain, more recent age. These data are sufficient to confirm that the Mladec human assemblage is the oldest cranial, dental and postcranial assemblage of early modern humans in Europe and is therefore central to discussions of modern human emergence in the northwestern Old World and the fate of the Neanderthals.

  9. Approximate symmetries of Hamiltonians

    Science.gov (United States)

    Chubb, Christopher T.; Flammia, Steven T.

    2017-08-01

    We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have sufficiently small norms. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.

  10. Estimating 24-h urinary sodium/potassium ratio from casual ('spot') urinary sodium/potassium ratio: the INTERSALT Study.

    Science.gov (United States)

    Iwahori, Toshiyuki; Miura, Katsuyuki; Ueshima, Hirotsugu; Chan, Queenie; Dyer, Alan R; Elliott, Paul; Stamler, Jeremiah

    2017-10-01

    An association between the casual and 24-h urinary sodium-to-potassium (Na/K) ratio is well recognized, although it has not been validated in diverse demographic groups. Our aim was to assess the utility, across and within populations, of casual urine for estimating the 24-h urinary Na/K ratio, using data from the INTERSALT Study. The INTERSALT Study collected cross-sectional standardized data on casual urinary sodium and potassium and also on timed 24-h urinary sodium and potassium for 10 065 individuals from 52 population samples in 32 countries (1985-87). Pearson correlation coefficients and agreement were computed for the Na/K ratio of casual urine against the 24-h urinary Na/K ratio at both the population and individual levels. Pearson correlation coefficients relating means of 24-h urine and casual urine Na/K ratio were r = 0.96 and r = 0.69 in analyses across populations and individuals, respectively. Correlations of casual urine Na/creatinine and K/creatinine ratios with 24-h urinary Na and K excretion, respectively, were lower than the correlation of casual and 24-h urinary Na/K ratio in analyses across populations and individuals. The bias estimate with the Bland-Altman method, defined as the difference between the Na/K ratio of 24-h urine and casual urine, was approximately 0.4 across both populations and individuals. The spread around the mean bias was higher for individuals than for populations. With appropriate bias correction, the casual urine Na/K ratio may be a useful, low-burden alternative to 24-h urine for estimation of the population urinary Na/K ratio. It may also be applicable for assessment of the urinary Na/K ratio of individuals, with use of repeated measurements to reduce measurement error and increase precision. © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association
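The Bland-Altman bias used above is simply the mean difference between paired measurements, with its spread given by the standard deviation of the differences. A minimal sketch on synthetic data (illustrative values only, not INTERSALT data):

```python
import numpy as np

def bland_altman_bias(ref, est):
    """Bias (mean difference) and spread (SD of differences) between a
    reference measurement and an estimate of the same quantity."""
    d = np.asarray(ref) - np.asarray(est)
    return d.mean(), d.std(ddof=1)

# Synthetic Na/K ratios: 24-h collections vs casual spot urine from
# the same 200 individuals, with a built-in offset of 0.4.
rng = np.random.default_rng(1)
nak_24h = rng.uniform(2.0, 6.0, size=200)
nak_casual = nak_24h - 0.4 + rng.normal(0.0, 0.5, size=200)

bias, spread = bland_altman_bias(nak_24h, nak_casual)
r = np.corrcoef(nak_24h, nak_casual)[0, 1]
# Subtracting the estimated bias from the difference recovers the
# 24-h level on average, as the abstract suggests.
print(f"bias={bias:.2f} spread={spread:.2f} r={r:.2f}")
```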

  11. Branching ratios and CP asymmetries in the decay B → VV

    International Nuclear Information System (INIS)

    Kramer, G.; Palmer, W.F.

    1991-06-01

    We carry out a systematic study of branching ratios, angular correlations, and CP asymmetries in the decay of neutral and charged B mesons to final states consisting of two vector mesons. The renormalization group improved effective Hamiltonian is evaluated in the vacuum insertion (factorization) approximation. OZI suppressed and annihilation terms are neglected. Current matrix elements are evaluated using the wave functions of Bauer, Stech and Wirbel. Branching ratios and angular correlations among subsequent decays of the vector mesons are calculated for 34 channels and a comparison is made with the data. As a first approximation, the calculational scheme provides a useful framework with which to organize the data. Interesting direct CP asymmetries are particularly evident in K*ω and K*ρ final states, where branching ratios are moderate. They are excellent probes of penguin term influence on decay amplitudes. Even larger direct asymmetries are present in ωρ and ρρ final states where, however, branching ratios are low and results are very model dependent. We show how B⁰-B̄⁰ mixing phases are influenced by phases in the direct amplitudes. The effect is particularly strong for K*⁰D*⁰ final states. (orig.)

  12. System and method for high precision isotope ratio destructive analysis

    Science.gov (United States)

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).
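Precision quoted as "approximately 1% or better (relative standard deviation)" is a standard figure of merit; the computation is shown below with hypothetical isotope-ratio readings, not data from the patented system:

```python
import numpy as np

def relative_std(measurements):
    """Relative standard deviation (RSD) in percent: 100 * SD / mean."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

# Hypothetical repeated single-shot isotope ratio measurements.
ratios = [0.00723, 0.00719, 0.00728, 0.00721, 0.00725]
print(f"RSD = {relative_std(ratios):.2f}%")  # precision figure of merit
```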

  13. Approximate spacetime symmetries and conservation laws

    Energy Technology Data Exchange (ETDEWEB)

    Harte, Abraham I [Enrico Fermi Institute, University of Chicago, Chicago, IL 60637 (United States)], E-mail: harte@uchicago.edu

    2008-10-21

    A notion of geometric symmetry is introduced that generalizes the classical concepts of Killing fields and other affine collineations. There is a sense in which flows under these new vector fields minimize deformations of the connection near a specified observer. Any exact affine collineations that may exist are special cases. The remaining vector fields can all be interpreted as analogs of Poincaré and other well-known symmetries near timelike worldlines. Approximate conservation laws generated by these objects are discussed for both geodesics and extended matter distributions. One example is a generalized Komar integral that may be taken to define the linear and angular momenta of a spacetime volume as seen by a particular observer. This is evaluated explicitly for a gravitational plane wave spacetime.

  14. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  15. On the validity of localized approximation for an on-axis zeroth-order Bessel beam

    International Nuclear Information System (INIS)

    Gouesbet, Gérard; Lock, J.A.; Ambrosio, L.A.; Wang, J.J.

    2017-01-01

    Localized approximation procedures are efficient ways to evaluate beam shape coefficients of laser beams, and are particularly useful when other methods are ineffective or inefficient. Several papers in the literature have reported the use of such procedures to evaluate the beam shape coefficients of Bessel beams. Examining the specific case of an on-axis zeroth-order Bessel beam, we demonstrate that localized approximation procedures are valid only for small axicon angles. - Highlights: • The localized approximation has been widely used to evaluate the Beam Shape Coefficients (BSCs) of Bessel beams. • The validity of this approximation is examined in the case of an on-axis zeroth-order Bessel beam. • It is demonstrated, in this specific example, that the localized approximation is efficient only for small enough axicon angles. • It is easily argued that this result must remain true for any kind of Bessel beams.

  16. A Monte Carlo Application to Approximate the Integral from a to b of e Raised to the x Squared.

    Science.gov (United States)

    Easterday, Kenneth; Smith, Tommy

    1992-01-01

    Proposes an alternative means of approximating the value of complex integrals, the Monte Carlo procedure. Incorporating a discrete approach and probability, an approximation is obtained from the ratio of computer-generated points falling under the curve to the number of points generated in a predetermined rectangle. (MDH)
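The hit-or-miss procedure described above is straightforward to implement. A minimal sketch for the integral of e**(x**2) on [a, b], using the fact that this integrand attains its maximum on the interval at an endpoint (so the bounding rectangle height can be taken as e**(max(a², b²))):

```python
import math
import random

def mc_integral_exp_x2(a, b, n=200_000, seed=42):
    """Hit-or-miss Monte Carlo estimate of the integral of e**(x**2) on [a, b].

    Points are drawn uniformly in the rectangle [a, b] x [0, M], where
    M = e**(max(a**2, b**2)) bounds the integrand; the estimate is the
    rectangle area times the fraction of points falling under the curve.
    """
    rng = random.Random(seed)
    M = math.exp(max(a * a, b * b))
    hits = sum(1 for _ in range(n)
               if rng.uniform(0.0, M) <= math.exp(rng.uniform(a, b) ** 2))
    return (b - a) * M * hits / n

est = mc_integral_exp_x2(0.0, 1.0)
# The true value of the integral over [0, 1] is about 1.4627.
print(est)
```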

  17. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...

  18. Prognostic modelling options for remaining useful life estimation by industry

    Science.gov (United States)

    Sikorska, J. Z.; Hodkiewicz, M.; Ma, L.

    2011-07-01

    Over recent years a significant amount of research has been undertaken to develop prognostic models that can be used to predict the remaining useful life of engineering assets. Implementations by industry have only had limited success. By design, models are subject to specific assumptions and approximations, some of which are mathematical, while others relate to practical implementation issues such as the amount of data required to validate and verify a proposed model. Therefore, appropriate model selection for successful practical implementation requires not only a mathematical understanding of each model type, but also an appreciation of how a particular business intends to utilise a model and its outputs. This paper discusses business issues that need to be considered when selecting an appropriate modelling approach for trial. It also presents classification tables and process flow diagrams to assist industry and research personnel select appropriate prognostic models for predicting the remaining useful life of engineering assets within their specific business environment. The paper then explores the strengths and weaknesses of the main prognostics model classes to establish what makes them better suited to certain applications than to others and summarises how each has been applied to engineering prognostics. Consequently, this paper should provide a starting point for young researchers first considering options for remaining useful life prediction. The models described in this paper are Knowledge-based (expert and fuzzy), Life expectancy (stochastic and statistical), Artificial Neural Networks, and Physical models.

  19. Maximum mass ratio of AM CVn-type binary systems and maximum white dwarf mass in ultra-compact X-ray binaries (addendum: Serb. Astron. J. No. 183 (2011), 63)

    Directory of Open Access Journals (Sweden)

    Arbutina B.

    2012-01-01

    Full Text Available We recalculated the maximum white dwarf mass in ultra-compact X-ray binaries obtained in an earlier paper (Arbutina 2011), by taking into account the effects of a super-Eddington accretion rate on the stability of mass transfer. It is found that, although the value formally remains the same (under the assumed approximations), for white dwarf masses M2 ≳ 0.1 MCh the mass ratios are extremely low, implying that the result for Mmax is likely to have little if any practical relevance.

  20. Approximating distributions from moments

    Science.gov (United States)

    Pawula, R. F.

    1987-11-01

    A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous pithy examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable, those of (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.

  1. General Rytov approximation.

    Science.gov (United States)

    Potvin, Guy

    2015-10-01

    We examine how the Rytov approximation describing log-amplitude and phase fluctuations of a wave propagating through weak uniform turbulence can be generalized to the case of turbulence with a large-scale nonuniform component. We show how the large-scale refractive index field creates Fermat rays using the path integral formulation for paraxial propagation. We then show how the second-order derivatives of the Fermat ray action affect the Rytov approximation, and we discuss how a numerical algorithm would model the general Rytov approximation.

  2. High Resolution of the ECG Signal by Polynomial Approximation

    Directory of Open Access Journals (Sweden)

    G. Rozinaj

    2006-04-01

    Full Text Available Averaging techniques such as temporal averaging and spatial averaging have been used successfully in many applications to attenuate interference [6], [7], [8], [9], [10]. In this paper we introduce interference removal for the ECG signal by polynomial approximation with smoothing of discrete dependencies, to complement averaging methods. The method is suitable for low-level signals of the electrical activity of the heart, often less than 10 mV. Most low-level signals arise from the PR, ST and TP segments; these can eventually be detected and their physiologic meaning appreciated. Of special importance for diagnosing the electrical activity of the heart is the activity of the bundle of His between the P and R waveforms. We have inserted an artificial sine wave into the ECG signal between the P and R waves. The main focus is to verify the smoothing method by polynomial approximation when the SNR (signal-to-noise ratio) is negative (i.e. the signal is lower than the noise).
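Least-squares polynomial smoothing of a low-level segment at negative SNR can be sketched as follows. This is a generic illustration on synthetic data with an assumed polynomial degree, not the paper's actual ECG processing chain:

```python
import numpy as np

def polynomial_smooth(t, signal, degree=6):
    """Least-squares polynomial approximation of a sampled signal segment."""
    coeffs = np.polyfit(t, signal, degree)
    return np.polyval(coeffs, t)

# Synthetic low-level segment: a sine wave (the paper inserts one between
# the P and R waves) buried in noise stronger than the signal itself,
# i.e. negative SNR (signal power 0.00125 vs noise power 0.01, about -9 dB).
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 400)
clean = 0.05 * np.sin(2 * np.pi * t)
noisy = clean + rng.normal(0.0, 0.1, t.size)

smoothed = polynomial_smooth(t, noisy)
err_before = np.sqrt(np.mean((noisy - clean) ** 2))
err_after = np.sqrt(np.mean((smoothed - clean) ** 2))
# The low-order polynomial cannot follow the broadband noise, so the
# RMS error against the clean signal drops sharply after smoothing.
print(err_before, err_after)
```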

  3. Paternal effects on the human sex ratio at birth: evidence from interracial crosses.

    Science.gov (United States)

    Khoury, M J; Erickson, J D; James, L M

    1984-01-01

    The effects of interracial crossing on the human sex ratio at birth were investigated using United States birth-certificate data for 1972-1979. The sex ratio was 1.059 for approximately 14 million singleton infants born to white couples, 1.033 for 2 million born to black couples, and 1.024 for 64,000 born to American Indian couples. Paternal and maternal race influences on the observed racial differences in sex ratio were analyzed using additional data on approximately 97,000 singleton infants born to white-black couples and 60,000 born to white-Indian couples. After adjustment for mother's race, white fathers had significantly more male offspring than did black fathers (ratio of sex ratios [RSR] = 1.027) and Indian fathers (RSR = 1.022). On the other hand, after adjustment for father's race, white mothers did not have more male offspring than did black mothers (RSR = 0.998) or Indian mothers (RSR = 1.009). The paternal-race effect persisted after adjustment for parental ages, education, birth order, and maternal marital status. The study shows that the observed racial differences in the sex ratio at birth are due to the effects of father's race and not the mother's. The study points to paternal determinants of the human sex ratio at fertilization and/or of the prenatal differential sex survival. PMID:6496474
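The ratio of sex ratios (RSR) used above is plain arithmetic. A crude (unadjusted) version can be computed directly from the reported population sex ratios; note the paper's RSRs of 1.027 and 1.022 additionally adjust for the other parent's race and covariates, so they differ slightly from these crude values:

```python
# Sex ratios at birth reported in the abstract, by parental population.
sr = {"white": 1.059, "black": 1.033, "indian": 1.024}

# Crude ratio of sex ratios (RSR): how much one group's sex ratio
# exceeds another's, before any covariate adjustment.
rsr_white_vs_black = sr["white"] / sr["black"]
rsr_white_vs_indian = sr["white"] / sr["indian"]
print(round(rsr_white_vs_black, 3), round(rsr_white_vs_indian, 3))
```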

  4. Performance analysis of wind turbines at low tip-speed ratio using the Betz-Goldstein model

    International Nuclear Information System (INIS)

    Vaz, Jerson R.P.; Wood, David H.

    2016-01-01

    Highlights: • General formulations for power and thrust at any tip-speed ratio are developed. • The Joukowsky model for the blades is modified with specific vortex distributions. • Betz-Goldstein model is shown to be the most consistent at low tip-speed ratio. • The effects of finite blade number are assessed using tip loss factors. • Tip loss for finite blade number may complicate the vortex breakdown. - Abstract: Analyzing wind turbine performance at low tip-speed ratio is challenging due to the relatively high level of swirl in the wake. This work presents a new approach to wind turbine analysis including swirl for any tip-speed ratio. The methodology uses the induced velocity field from vortex theory in the general momentum theory, in the form of the turbine thrust and torque equations. Using the constant bound circulation model of Joukowsky, the swirl velocity becomes infinite on the wake centreline even at high tip-speed ratio. Rankine, Vatistas and Delery vortices were used to regularize the Joukowsky model near the centreline. The new formulation prevents the power coefficient from exceeding the Betz-Joukowsky limit. An alternative calculation, based on the varying circulation for Betz-Goldstein optimized rotors, is shown to have the best general behavior. Prandtl’s approximation for the tip loss and a recent alternative were employed to account for the effects of a finite number of blades. The Betz-Goldstein model appears to be the only one resistant to vortex breakdown immediately behind the rotor for an infinite number of blades. Furthermore, the dependence of the induced velocity on radius in the Betz-Goldstein model allows the power coefficient to remain below the Betz-Joukowsky limit, which does not occur for the Joukowsky model at low tip-speed ratio.
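Prandtl's tip-loss approximation mentioned in the highlights has a simple closed form, F = (2/π) arccos(e^(-f)) with f = (B/2)(R - r)/(r sin φ). A small sketch with illustrative blade count, radii and inflow angle (not values from the paper):

```python
import math

def prandtl_tip_loss(B, r, R, phi):
    """Prandtl's tip-loss factor for a rotor with B blades.

    r, R : local and tip radius; phi : local inflow angle in radians.
    F -> 1 well inboard (losses negligible) and F -> 0 at the tip.
    """
    f = (B / 2.0) * (R - r) / (r * math.sin(phi))
    return (2.0 / math.pi) * math.acos(math.exp(-f))

# The loss factor falls toward the blade tip:
for r in (0.5, 0.9, 0.99):
    print(r, round(prandtl_tip_loss(3, r, 1.0, math.radians(7.0)), 3))
```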

  5. Likelihood ratio sequential sampling models of recognition memory.

    Science.gov (United States)

    Osth, Adam F; Dennis, Simon; Heathcote, Andrew

    2017-02-01

    The mirror effect - a phenomenon whereby a manipulation produces opposite effects on hit and false alarm rates - is a benchmark regularity of recognition memory. A likelihood ratio decision process, basing recognition on the relative likelihood that a stimulus is a target or a lure, naturally predicts the mirror effect, and so has been widely adopted in quantitative models of recognition memory. Glanzer, Hilford, and Maloney (2009) demonstrated that likelihood ratio models, assuming Gaussian memory strength, are also capable of explaining regularities observed in receiver-operating characteristics (ROCs), such as greater target than lure variance. Despite its central place in theorising about recognition memory, however, this class of models has not been tested using response time (RT) distributions. In this article, we develop a linear approximation to the likelihood ratio transformation, which we show predicts the same regularities as the exact transformation. This development enabled us to develop a tractable model of recognition-memory RT based on the diffusion decision model (DDM), with inputs (drift rates) provided by an approximate likelihood ratio transformation. We compared this "LR-DDM" to a standard DDM where all targets and lures receive their own drift rate parameters. Both were implemented as hierarchical Bayesian models and applied to four datasets. Model selection taking into account parsimony favored the LR-DDM, which requires fewer parameters than the standard DDM but still fits the data well. These results support log-likelihood based models as providing an elegant explanation of the regularities of recognition memory, not only in terms of choices made but also in terms of the times it takes to make them. Copyright © 2016 Elsevier Inc. All rights reserved.
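The likelihood ratio transformation for unequal-variance Gaussian memory strengths, and a linear approximation to it, can be sketched as below. This is a generic first-order Taylor linearization with made-up distribution parameters, not necessarily the specific approximation developed in the article:

```python
import math

def log_likelihood_ratio(x, mu_t=1.0, sd_t=1.25, mu_l=0.0, sd_l=1.0):
    """Exact log likelihood ratio that memory strength x came from the
    target (unequal-variance Gaussian) rather than the lure distribution."""
    def log_norm(x, mu, sd):
        return -math.log(sd * math.sqrt(2 * math.pi)) - 0.5 * ((x - mu) / sd) ** 2
    return log_norm(x, mu_t, sd_t) - log_norm(x, mu_l, sd_l)

def linear_log_lr(x, x0=0.5, h=1e-5, **kw):
    """First-order (linear) approximation of the log LR around x0,
    with the slope obtained by a central finite difference."""
    f0 = log_likelihood_ratio(x0, **kw)
    slope = (log_likelihood_ratio(x0 + h, **kw)
             - log_likelihood_ratio(x0 - h, **kw)) / (2 * h)
    return f0 + slope * (x - x0)

# Near the expansion point the linear form tracks the exact transform:
for x in (0.3, 0.5, 0.7):
    print(x, round(log_likelihood_ratio(x), 4), round(linear_log_lr(x), 4))
```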

  6. Analytical solutions for the surface response to small amplitude perturbations in boundary data in the shallow-ice-stream approximation

    Directory of Open Access Journals (Sweden)

    G. H. Gudmundsson

    2008-07-01

    Full Text Available New analytical solutions describing the effects of small-amplitude perturbations in boundary data on flow in the shallow-ice-stream approximation are presented. These solutions are valid for a non-linear Weertman-type sliding law and for Newtonian ice rheology. Comparison is made with corresponding solutions of the shallow-ice-sheet approximation, and with solutions of the full Stokes equations. The shallow-ice-stream approximation is commonly used to describe large-scale ice stream flow over a weak bed, while the shallow-ice-sheet approximation forms the basis of most current large-scale ice sheet models. It is found that the shallow-ice-stream approximation overestimates the effects of bed topography perturbations on surface profile for wavelengths less than about 5 to 10 ice thicknesses, the exact number depending on values of surface slope and slip ratio. For high slip ratios, the shallow-ice-stream approximation gives a very simple description of the relationship between bed and surface topography, with the corresponding transfer amplitudes being close to unity for any given wavelength. The shallow-ice-stream estimates for the timescales that govern the transient response of ice streams to external perturbations are considerably more accurate than those based on the shallow-ice-sheet approximation. In particular, in contrast to the shallow-ice-sheet approximation, the shallow-ice-stream approximation correctly reproduces the short-wavelength limit of the kinematic phase speed given by solving a linearised version of the full Stokes system. In accordance with the full Stokes solutions, the shallow-ice-stream approximation predicts surface fields to react weakly to spatial variations in basal slipperiness with wavelengths less than about 10 to 20 ice thicknesses.

  7. Assessment of the Remaining Life of Bituminous Layers in Road Pavements

    Directory of Open Access Journals (Sweden)

    Kálmán Adorjányi

    2017-02-01

    Full Text Available In this paper, a mechanistic-empirical approach is presented for the assessment of bearing capacity condition of asphalt pavement layers by Falling Weight Deflectometer measurements and laboratory fatigue tests. The bearing capacity condition ratio was determined using past traffic data and the remaining fatigue life which was determined from multilayer pavement response model. The traffic growth rate was taken into account with finite arithmetic and geometric progressions. Fatigue resistance of layers’ bituminous materials was obtained with indirect tensile fatigue tests. Deduct curve of condition scores was derived with Weibull distribution.
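Accumulating traffic that grows as a geometric progression against a fatigue budget amounts to simple bookkeeping, sketched below. All numbers are illustrative; the paper couples this with a multilayer pavement response model and measured fatigue curves:

```python
def years_to_fatigue(allowed_repetitions, past_repetitions,
                     annual_traffic, growth_rate):
    """Years until the remaining fatigue life is consumed, with annual
    load repetitions growing as a geometric progression."""
    remaining = allowed_repetitions - past_repetitions
    years, yearly = 0, annual_traffic
    while remaining > 0:
        remaining -= yearly          # consume this year's repetitions
        yearly *= 1.0 + growth_rate  # geometric traffic growth
        years += 1
    return years

# 5M allowed repetitions, 2M already accumulated, 300k/year growing 3%/year:
print(years_to_fatigue(5_000_000, 2_000_000, 300_000, 0.03))
```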

  8. Investigation of the anomalous isotope ratios of the Central-Transdanubian bauxites

    International Nuclear Information System (INIS)

    Viczian, M.

    1977-01-01

    In the case of the Central Transdanubian bauxite deposits a significant anomaly of the lead isotope ratios has been found. The ²⁰⁶Pb/²⁰⁴Pb isotope ratio was investigated in approximately 40 samples, and the results have shown an average deviation from the literature value of about 80%. These results have been confirmed by thermal ionisation measurements, too. Some possibilities for the explanation of this isotope anomaly are also dealt with in the paper. (author)

  9. Convective mixing length and the galactic carbon to oxygen ratio

    Energy Technology Data Exchange (ETDEWEB)

    Serrano, A; Peimbert, M [Universidad Nacional Autonoma de Mexico, Mexico City. Inst. de Astronomia

    1981-01-01

    We have studied chemical evolution models, assuming instantaneous recycling, and considering: a) the effects of mass loss both in massive stars and in intermediate mass stars, and b) the initial mass function of the solar neighbourhood (Serrano 1978). From these models we have derived the yields of carbon and oxygen. It is concluded that the condition C/O ≈ 0.58 in the solar neighbourhood can only be satisfied if, during advanced stages of stellar evolution of intermediate mass stars, the ratio of the convective mixing length to the pressure scale height is ≳ 2.

  10. Green's Kernels and meso-scale approximations in perforated domains

    CERN Document Server

    Maz'ya, Vladimir; Nieves, Michael

    2013-01-01

    There are a wide range of applications in physics and structural mechanics involving domains with singular perturbations of the boundary. Examples include perforated domains and bodies with defects of different types. The accurate direct numerical treatment of such problems remains a challenge. Asymptotic approximations offer an alternative, efficient solution. Green’s function is considered here as the main object of study rather than a tool for generating solutions of specific boundary value problems. The uniformity of the asymptotic approximations is the principal point of attention. We also show substantial links between Green’s functions and solutions of boundary value problems for meso-scale structures. Such systems involve a large number of small inclusions, so that a small parameter, the relative size of an inclusion, may compete with a large parameter, represented as an overall number of inclusions. The main focus of the present text is on two topics: (a) asymptotics of Green’s kernels in domai...

  11. Approximation techniques for engineers

    CERN Document Server

    Komzsik, Louis

    2006-01-01

    Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data or continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.

  12. International Conference Approximation Theory XV

    CERN Document Server

    Schumaker, Larry

    2017-01-01

    These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...

  13. The range of validity of the two-body approximation in models of terrestrial planet accumulation. II - Gravitational cross sections and runaway accretion

    Science.gov (United States)

    Wetherill, G. W.; Cox, L. P.

    1985-01-01

    The validity of the two-body approximation in calculating encounters between planetesimals has been evaluated as a function of the ratio of the unperturbed planetesimal velocity (with respect to a circular orbit) to the mutual escape velocity when their surfaces are in contact (V/V_e). Impact rates as a function of this ratio are calculated to within about 20 percent by numerical integration of the equations of motion. It is found that when the ratio is greater than 0.4 the two-body approximation is a good one. Consequences of reducing the ratio to less than 0.02 are examined. Factors leading to an optimal size for growth of planetesimals from a swarm of given eccentricity and placing a limit on the extent of runaway accretion are derived.
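The controlling ratio V/V_e can be computed directly from the mutual escape velocity at surface contact, V_e = sqrt(2 G (m1 + m2) / (r1 + r2)). The masses, radii, and relative velocity below are illustrative assumptions, not the paper's parameters.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity_ratio(v_rel: float, m1: float, m2: float, r1: float, r2: float) -> float:
    """Ratio V/V_e of the unperturbed relative velocity to the mutual escape
    velocity when the two planetesimals' surfaces are in contact."""
    v_esc = math.sqrt(2 * G * (m1 + m2) / (r1 + r2))
    return v_rel / v_esc

# Illustrative numbers: two 1e19 kg planetesimals of 100 km radius, 50 m/s encounter
ratio = escape_velocity_ratio(50.0, 1e19, 1e19, 1e5, 1e5)
two_body_ok = ratio > 0.4  # the regime the paper identifies as safe for the two-body approximation
```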

  14. Financial ratios in diagnostic radiology practices: variability and trends.

    Science.gov (United States)

    Hogan, Christopher; Sunshine, Jonathan H

    2004-03-01

    To evaluate variation in financial ratios for radiology practices nationwide and trends in these ratios and in payments. In 1999, the American College of Radiology surveyed radiology practices by mail. The final response rate was 66%. Weighting was used to make responses representative of all radiology practices in the United States. Self-reported financial ratios (payments, charges, accounts receivable turnover) were analyzed; 449 responses had usable data on these ratios. Comparison with results of a similar 1992 survey and combined analysis with Medicare data on billed charges provided information on trends. All measures of payment collections declined sharply from 1992 to 1999, with the gross collections rate (revenues as percentage of billed charges) decreasing from 71% to 55%. Average payment for a typical radiology service decreased approximately 4% in dollar terms or approximately 19% in inflation-adjusted terms. In 1999, nonmetropolitan practices appeared to fare better than others. Among insurers, Medicaid stood out as a low and slow payer, but neither managed care nor Medicare had a consistent effect on financial ratios. The gross collections rate varied substantially across geographic areas, as did, in an inverse pattern, the level of billed charges. One-quarter of practices had accounts receivable equal to 90 or more days of billings. The opposing geographic pattern of billed charges and gross collection rate suggests that geographic variation in the latter is driven more by variation in billed charges than by variation in payment levels. Radiologists saw a substantial decrease in the real (inflation-adjusted) value of payment per service during the 1990s. The large fraction of practices with accounts receivable of 90 or more days of billings (a level considered potentially imprudent by financial management advisors) suggests that many practices should improve financial management and that state prompt-payment laws have not had a substantial positive
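The two ratios the survey reports reduce to simple arithmetic: the gross collections rate is revenues as a percentage of billed charges, and accounts receivable can be expressed in days of billings. This is a minimal sketch; the helper names are our own, not the survey's.

```python
def gross_collections_rate(payments: float, billed_charges: float) -> float:
    """Revenues received, as a percentage of billed charges."""
    return 100.0 * payments / billed_charges

def days_in_accounts_receivable(receivables: float, annual_billings: float) -> float:
    """Outstanding accounts receivable expressed as days of billings."""
    return receivables / (annual_billings / 365.0)

# The 1999 average reported above: $55 collected per $100 billed
rate_1999 = gross_collections_rate(55.0, 100.0)
```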

  15. Ordered cones and approximation

    CERN Document Server

    Keimel, Klaus

    1992-01-01

    This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.

  16. The Effect of Si and Al Concentration Ratios on the Removal of U(VI) under Hanford Site 200 Area Conditions-12115

    Energy Technology Data Exchange (ETDEWEB)

    Katsenovich, Yelena; Gonzalez, Nathan; Moreno-Pastor, Carol; Lagos, Leonel [Applied Research Center, Florida International University, 10555 W. Flagler Street, Miami, FL 33174 (United States)

    2012-07-01

    Injection of reactive gases, such as NH{sub 3}, is an innovative technique to mitigate uranium contamination in soil for a vadose zone (VZ) contaminated with radionuclides. A series of experiments were conducted to examine the effect of the concentration ratio of silicon to aluminum in the presence of various bicarbonate concentrations on the coprecipitation process of U(VI). The concentration of Al in all tests remained unchanged at 2.8 mM. Experiments showed that the removal efficiency of uranium was not significantly affected by the different bicarbonate and U(VI) concentrations tested. For the lower Si:Al molar ratios of 2:1 and 18:1, the removal efficiency of uranium was relatively low (≤ 8%). For the Si:Al molar ratio of 35:1, the removal efficiency of uranium was increased to an average of ∼82% for all bicarbonate concentrations tested. At higher Si:Al molar ratios (53:1 and above), a relatively high removal efficiency of U(VI), approximately 85% and higher, was observed. These results demonstrate that the U(VI) removal efficiency is more affected by the Si:Al molar ratio than by the bicarbonate concentration in solution. The results of this experiment are promising for the potential implementation of NH{sub 3} gas injection for the remediation of U(VI)-contaminated VZ. (authors)

  17. The Human Remains from HMS Pandora

    Directory of Open Access Journals (Sweden)

    D.P. Steptoe

    2002-04-01

    In 1977 the wreck of HMS Pandora (the ship that was sent to re-capture the Bounty mutineers) was discovered off the north coast of Queensland. Since 1983, the Queensland Museum Maritime Archaeology section has carried out systematic excavation of the wreck. During the years 1986 and 1995-1998, more than 200 human bones and bone fragments were recovered. Osteological investigation revealed that this material represented three males. Their ages were estimated at approximately 17 +/- 2 years, 22 +/- 3 years and 28 +/- 4 years, with statures of 168 +/- 4 cm, 167 +/- 4 cm, and 166 +/- 3 cm respectively. All three individuals were probably Caucasian, although precise determination of ethnicity was not possible. In addition to poor dental hygiene, signs of chronic diseases suggestive of rickets and syphilis were observed. Evidence of spina bifida was seen on one of the skeletons, as were other skeletal anomalies. Various taphonomic processes affecting the remains were also observed and described. Compact bone was observed under the scanning electron microscope and found to be structurally coherent. Profiles of the three skeletons were compared with historical information about the 35 men lost with the ship, but no precise identification could be made. The investigation did not reveal the cause of death. Further research, such as DNA analysis, is being carried out at the time of publication.

  18. Analytical approximation of the erosion rate and electrode wear in micro electrical discharge machining

    International Nuclear Information System (INIS)

    Kurnia, W; Tan, P C; Yeo, S H; Wong, M

    2008-01-01

    Theoretical models have been used to predict process performance measures in electrical discharge machining (EDM), namely the material removal rate (MRR), tool wear ratio (TWR) and surface roughness (SR). However, these contributions are mainly applicable to conventional EDM due to limits on the range of energy and pulse-on-time adopted by the models. This paper proposes an analytical approximation of micro-EDM performance measures, based on the crater prediction using a developed theoretical model. The results show that the analytical approximation of the MRR and TWR is able to provide a close approximation with the experimental data. The approximation results for the MRR and TWR are found to have a variation of up to 30% and 24%, respectively, from their associated experimental values. Since the voltage and current input used in the computation are captured in real time, the method can be applied as a reliable online monitoring system for the micro-EDM process

  19. Random-phase approximation and broken symmetry

    International Nuclear Information System (INIS)

    Davis, E.D.; Heiss, W.D.

    1986-01-01

    The validity of the random-phase approximation (RPA) in broken-symmetry bases is tested in an appropriate many-body system for which exact solutions are available. Initially the regions of stability of the self-consistent quasiparticle bases in this system are established and depicted in a 'phase' diagram. It is found that only stable bases can be used in an RPA calculation. This is particularly true for those RPA modes which are not associated with the onset of instability of the basis; it is seen that these modes do not describe any excited state when the basis is unstable, although from a formal point of view they remain acceptable. The RPA does well in a stable broken-symmetry basis provided one is not too close to a point where a phase transition occurs. This is true for both energies and matrix elements. (author)

  20. Combination of Wavefunction and Density Functional Approximations for Describing Electronic Correlation

    Science.gov (United States)

    Garza, Alejandro J.

    Perhaps the most important approximations to the electronic structure problem in quantum chemistry are those based on coupled cluster and density functional theories. Coupled cluster theory has been called the "gold standard" of quantum chemistry due to the high accuracy that it achieves for weakly correlated systems. Kohn-Sham density functionals based on semilocal approximations are, without a doubt, the most widely used methods in chemistry and material science because of their high accuracy/cost ratio. The root of the success of coupled cluster and density functionals is their ability to efficiently describe the dynamic part of the electron correlation. However, both traditional coupled cluster and density functional approximations may fail catastrophically when substantial static correlation is present. This severely limits the applicability of these methods to a plethora of important chemical and physical problems such as, e.g., the description of bond breaking, transition states, transition metal-, lanthanide- and actinide-containing compounds, and superconductivity. In an attempt to tackle this problem, nonstandard (single-reference) coupled cluster-based techniques that aim to describe static correlation have been recently developed: pair coupled cluster doubles (pCCD) and singlet-paired coupled cluster doubles (CCD0). The ability to describe static correlation in pCCD and CCD0 comes, however, at the expense of important amounts of dynamic correlation so that the high accuracy of standard coupled cluster becomes unattainable. Thus, the reliable and efficient description of static and dynamic correlation in a simultaneous manner remains an open problem for quantum chemistry and many-body theory in general. In this thesis, different ways to combine pCCD and CCD0 with density functionals in order to describe static and dynamic correlation simultaneously (and efficiently) are explored.
The combination of wavefunction and density functional methods has a long

  1. Improved Approximation Algorithms for Item Pricing with Bounded Degree and Valuation

    Science.gov (United States)

    Hamane, Ryoso; Itoh, Toshiya

    When a store sells items to customers, the store wishes to decide the prices of the items to maximize its profit. If the store sells the items with low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store. It would be hard for the store to decide the prices of items. Assume that a store has a set V of n items and there is a set C of m customers who wish to buy those items. The goal of the store is to decide the price of each item to maximize its profit. We refer to this maximization problem as an item pricing problem. We classify the item pricing problems according to how many items the store can sell or how the customers valuate the items. If the store can sell every item i in unlimited (resp. limited) amounts, we refer to this as unlimited supply (resp. limited supply). We say that the item pricing problem is single-minded if each customer j∈C wishes to buy a set ej⊆V of items and assigns valuation w(ej)≥0. For the single-minded item pricing problems (in unlimited supply), Balcan and Blum regarded them as weighted k-hypergraphs and gave several approximation algorithms. In this paper, we focus on the (pseudo) degree of k-hypergraphs and the valuation ratio, i.e., the ratio between the smallest and the largest valuations. Then for the single-minded item pricing problems (in unlimited supply), we show improved approximation algorithms (for k-hypergraphs, general graphs, bipartite graphs, etc.) with respect to the maximum (pseudo) degree and the valuation ratio.
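In the single-minded, unlimited-supply setting just described, a given price assignment is easy to evaluate: customer j buys the whole bundle ej exactly when its summed price does not exceed the valuation w(ej), and the store collects that summed price. This is a minimal sketch of the objective being maximized, not of the approximation algorithms themselves.

```python
def profit(prices: dict, customers: list) -> float:
    """Seller profit for single-minded customers under unlimited supply.

    `prices` maps item -> price; `customers` is a list of (bundle, valuation)
    pairs. A customer buys their entire bundle iff its total price is at most
    their valuation, paying that total price; otherwise they buy nothing.
    """
    total = 0.0
    for bundle, valuation in customers:
        cost = sum(prices[item] for item in bundle)
        if cost <= valuation:
            total += cost
    return total

# Two items, two customers: at prices a=1, b=2 both bundles below are sold
example = profit({"a": 1.0, "b": 2.0}, [({"a", "b"}, 3.0), ({"a"}, 1.5)])
```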

  2. Duplex Alu Screening for Degraded DNA of Skeletal Human Remains

    Directory of Open Access Journals (Sweden)

    Fabian Haß

    2017-10-01

    The human-specific Alu elements, belonging to the class of Short INterspersed Elements (SINEs), have been shown to be a powerful tool for population genetic studies. An earlier study in this department showed that it was possible to analyze Alu presence/absence in 3000-year-old skeletal human remains from the Bronze Age Lichtenstein cave in Lower Saxony, Germany. We developed duplex Alu screening PCRs with flanking primers for two Alu elements, each combined with a single internal Alu primer. By adding an internal primer, the approximately 400–500 bp presence signals of Alu elements can be detected within a range of less than 200 bp. Thus, our PCR approach is suited for highly fragmented ancient DNA samples, whereas NGS analyses frequently are unable to handle repetitive elements. With this analysis system, we examined remains of 12 individuals from the Lichtenstein cave with different degrees of DNA degradation. The duplex PCRs showed fully informative amplification results for all of the chosen Alu loci in eight of the 12 samples. Our analysis system showed that Alu presence/absence analysis is possible in samples with different degrees of DNA degradation and it reduces the amount of valuable skeletal material needed by a factor of four, as compared with a singleplex approach.

  3. Neutron-proton matrix element ratios of 2₁⁺ states in 58,60,62,64Ni

    International Nuclear Information System (INIS)

    Antalik, R.

    1989-01-01

    The neutron-proton matrix element ratios (η) for the 2₁⁺ states of even Ni isotopes are investigated within the framework of the shell-model quasiparticle random-phase approximation. Special attention is devoted to the dependence of the η ratios on the radial neutron and proton ground-state density-distribution differences (Δ_np). This dependence is found to be about 0.5 Δ_np. The theoretical η ratios are 14-23% greater than the hydrodynamical limit. The theoretical Δ_np dependence of the η ratios enables us to understand the empirical η ratio results. 20 refs.; 2 figs.; 2 tabs

  4. Approximation and inference methods for stochastic biochemical kinetics—a tutorial review

    International Nuclear Information System (INIS)

    Schnoerr, David; Grima, Ramon; Sanguinetti, Guido

    2017-01-01

    Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the chemical master equation. Despite its simple structure, no analytic solutions to the chemical master equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state of the art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed in the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics. (topical review)
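As a concrete instance of the exact stochastic simulation that such reviews contrast with approximation methods, a one-species birth-death process can be simulated with Gillespie's algorithm. The rate constants below are illustrative assumptions, not an example from the review.

```python
import random

def gillespie_birth_death(k_birth: float, k_death: float, n0: int, t_end: float, seed: int = 1):
    """Exact stochastic simulation (Gillespie SSA) of a birth-death process:
    0 -> X with propensity k_birth, X -> 0 with propensity k_death * n.
    Returns the piecewise-constant trajectory as (time, count) pairs."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    history = [(t, n)]
    while t < t_end:
        a_birth = k_birth
        a_death = k_death * n
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)          # exponentially distributed waiting time
        if rng.random() * a_total < a_birth:   # pick the next reaction by propensity
            n += 1
        else:
            n -= 1
        history.append((t, n))
    return history

# Stationary mean of this process is k_birth / k_death = 10
traj = gillespie_birth_death(k_birth=10.0, k_death=1.0, n0=0, t_end=50.0)
```

Even this tiny network produces thousands of events per trajectory, which illustrates why the approximation methods surveyed in the review are attractive for larger systems.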

  5. Collaborative spectrum sensing based on the ratio between largest eigenvalue and Geometric mean of eigenvalues

    KAUST Repository

    Shakir, Muhammad

    2011-12-01

    In this paper, we introduce a new detector referred to as Geometric mean detector (GEMD) which is based on the ratio of the largest eigenvalue to the Geometric mean of the eigenvalues for collaborative spectrum sensing. The decision threshold has been derived by employing Gaussian approximation approach. In this approach, the two random variables, i.e., the largest eigenvalue and the Geometric mean of the eigenvalues, are considered as independent Gaussian random variables such that their cumulative distribution functions (CDFs) are approximated by a univariate Gaussian distribution function for any number of cooperating secondary users and received samples. The approximation approach is based on the calculation of exact analytical moments of the largest eigenvalue and the Geometric mean of the eigenvalues of the received covariance matrix. The decision threshold has been calculated by exploiting the CDF of the ratio of two Gaussian distributed random variables. In this context, we exchange the analytical moments of the two random variables with the moments of the Gaussian distribution function. The performance of the detector is compared with the performance of the energy detector and eigenvalue ratio detector. Analytical and simulation results show that our newly proposed detector yields considerable performance advantage in realistic spectrum sensing scenarios. Moreover, our results based on proposed approximation approach are in perfect agreement with the empirical results. © 2011 IEEE.
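The GEMD test statistic itself is straightforward to form from the received sample covariance matrix. The numpy sketch below illustrates only the ratio; it is our own illustration, not the authors' threshold derivation.

```python
import numpy as np

def gemd_statistic(samples: np.ndarray) -> float:
    """GEMD test statistic: largest eigenvalue of the sample covariance
    matrix divided by the geometric mean of all its eigenvalues.

    `samples` has shape (n_users, n_samples): one row of received samples
    per cooperating secondary user.
    """
    cov = samples @ samples.conj().T / samples.shape[1]  # sample covariance
    eigs = np.linalg.eigvalsh(cov)                       # real, ascending order
    geo_mean = np.exp(np.mean(np.log(eigs)))
    return float(eigs[-1] / geo_mean)

# Under noise only the statistic stays near 1; a common primary-user signal inflates it
rng = np.random.default_rng(0)
noise_stat = gemd_statistic(rng.standard_normal((4, 5000)))
```

Comparing the statistic against the derived decision threshold then gives the sensing decision; the statistic is always at least 1, since the largest eigenvalue bounds the geometric mean from above.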

  6. Balancing Exchange Mixing in Density-Functional Approximations for Iron Porphyrin.

    Science.gov (United States)

    Berryman, Victoria E J; Boyd, Russell J; Johnson, Erin R

    2015-07-14

    Predicting the correct ground-state multiplicity for iron(II) porphyrin, a high-spin quintet, remains a significant challenge for electronic-structure methods, including commonly employed density functionals. An even greater challenge for these methods is correctly predicting favorable binding of O2 to iron(II) porphyrin, due to the open-shell singlet character of the adduct. In this work, the performance of a modest set of contemporary density-functional approximations is assessed and the results interpreted using Bader delocalization indices. It is found that inclusion of greater proportions of Hartree-Fock exchange, in hybrid or range-separated hybrid functionals, has opposing effects; it improves the ability of the functional to identify the ground state but is detrimental to predicting favorable dioxygen binding. Because of the uncomplementary nature of these properties, accurate prediction of both the relative spin-state energies and the O2 binding enthalpy eludes conventional density-functional approximations.

  7. Numerical approximation abilities correlate with and predict informal but not formal mathematics abilities.

    Science.gov (United States)

    Libertus, Melissa E; Feigenson, Lisa; Halberda, Justin

    2013-12-01

    Previous research has found a relationship between individual differences in children's precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the current study, we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of 2 years. In addition, at the final time point, we tested children's informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3). We found that children's numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned nonsymbolic system of quantity representation and the system of mathematics reasoning that children come to master through instruction. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Evaluation for moments of a ratio with application to regression estimation

    OpenAIRE

    Doukhan, Paul; Lang, Gabriel

    2008-01-01

    Ratios of random variables often appear in probability and statistical applications. We aim to approximate the moments of such ratios under several dependence assumptions. Extending the ideas in Collomb [C. R. Acad. Sci. Paris 285 (1977) 289–292], we propose sharper bounds for the moments of randomly weighted sums and for the Lp-deviations from the asymptotic normal law when the central limit theorem holds. We indicate suitable applications in finance and censored data analysis and focus on t...

  9. CONTRIBUTIONS TO RATIONAL APPROXIMATION,

    Science.gov (United States)

    Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar’s...linear theorem which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev...Furthermore a Weierstrass type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)

  10. Could changes in reported sex ratios at birth during China's 1958-1961 famine support the adaptive sex ratio adjustment hypothesis?

    Directory of Open Access Journals (Sweden)

    Anna Reimondos

    2013-10-01

    Background: The adaptive sex ratio adjustment hypothesis suggests that when mothers are in poor conditions the sex ratio of their offspring will be biased towards females. Major famines provide opportunities for testing this hypothesis because they lead to the widespread deterioration of living conditions in the affected population. Objective: This study examines changes in sex ratio at birth before, during, and after China's 1958-1961 famine, to see whether they provide any support for the adaptive sex ratio adjustment hypothesis. Methods: We use descriptive statistics to analyse data collected by both China's 1982 and 1988 fertility sample surveys and examine changes in sex ratio at birth in recent history. In addition, we examine the effectiveness of using different methods to model changes in sex ratio at birth and compare their differences. Results: During China's 1958-1961 famine, reported sex ratio at birth remained notably higher than that observed in most countries in the world. The timing of the decline in sex ratio at birth did not coincide with the timing of the famine. After the famine, although living conditions were considerably improved, the sex ratio at birth was not higher but lower than that recorded during the famine. Conclusions: The analysis of the data collected by the two fertility surveys has found no evidence that changes in sex ratio at birth during China's 1958-1961 famine and the post-famine period supported the adaptive sex ratio adjustment hypothesis.

  11. The physical-optics approximation and its application to light backscattering by hexagonal ice crystals

    International Nuclear Information System (INIS)

    Borovoi, A.; Konoshonkin, A.; Kustova, N.

    2014-01-01

    The physical-optics approximation in the problem of light scattering by large particles is so defined that it includes the classical physical optics concerning the problem of light penetration through a large aperture in an opaque screen. In the second part of the paper, the problem of light backscattering by quasi-horizontally oriented atmospheric ice crystals is considered where conformity between the physical-optics and geometric-optics approximations is discussed. The differential scattering cross section as well as the polarization elements of the Mueller matrix for quasi-horizontally oriented hexagonal ice plates has been calculated in the physical-optics approximation for the case of vertically pointing lidars. - Highlights: • The physical-optics Mueller matrix is a smoothed geometric-optics counterpart. • Backscatter by partially oriented hexagonal ice plates has been calculated. • Depolarization ratio for partially oriented hexagonal ice plates is negligible

  12. Approximating Matsubara dynamics using the planetary model: Tests on liquid water and ice

    Science.gov (United States)

    Willatt, Michael J.; Ceriotti, Michele; Althorpe, Stuart C.

    2018-03-01

    Matsubara dynamics is the quantum-Boltzmann-conserving classical dynamics which remains when real-time coherences are taken out of the exact quantum Liouvillian [T. J. H. Hele et al., J. Chem. Phys. 142, 134103 (2015)]; because of a phase-term, it cannot be used as a practical method without further approximation. Recently, Smith et al. [J. Chem. Phys. 142, 244112 (2015)] developed a "planetary" model dynamics which conserves the Feynman-Kleinert (FK) approximation to the quantum-Boltzmann distribution. Here, we show that for moderately anharmonic potentials, the planetary dynamics gives a good approximation to Matsubara trajectories on the FK potential surface by decoupling the centroid trajectory from the locally harmonic Matsubara fluctuations, which reduce to a single phase-less fluctuation particle (the "planet"). We also show that the FK effective frequency can be approximated by a direct integral over these fluctuations, obviating the need to solve iterative equations. This modification, together with use of thermostatted ring-polymer molecular dynamics, allows us to test the planetary model on water (gas-phase, liquid, and ice) using the q-TIP4P/F potential surface. The "planetary" fluctuations give a poor approximation to the rotational/librational bands in the infrared spectrum, but a good approximation to the bend and stretch bands, where the fluctuation lineshape is found to be motionally narrowed by the vibrations of the centroid.

  13. Exact constants in approximation theory

    CERN Document Server

    Korneichuk, N

    1991-01-01

    This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are base

  14. Assessing various Infrared (IR) microscopic imaging techniques for post-mortem interval evaluation of human skeletal remains

    Science.gov (United States)

    Roider, Clemens; Ritsch-Marte, Monika; Pemberger, Nadin; Cemper-Kiesslich, Jan; Hatzer-Grubwieser, Petra; Parson, Walther; Pallua, Johannes Dominikus

    2017-01-01

    Due to the influence of many environmental processes, a precise determination of the post-mortem interval (PMI) of skeletal remains is known to be very complicated. Although methods for the investigation of the PMI exist, there still remains much room for improvement. In this study the applicability of infrared (IR) microscopic imaging techniques such as reflection-, ATR- and Raman- microscopic imaging for the estimation of the PMI of human skeletal remains was tested. PMI specific features were identified and visualized by overlaying IR imaging data with morphological tissue structures obtained using light microscopy to differentiate between forensic and archaeological bone samples. ATR and reflection spectra revealed that a more prominent peak at 1042 cm⁻¹ (an indicator for bone mineralization) was observable in archaeological bone material when compared with forensic samples. Moreover, in the case of the archaeological bone material, a reduction in the levels of phospholipids, proteins, nucleic acid sugars, complex carbohydrates as well as amorphous or fully hydrated sugars was detectable at (reciprocal wavelengths/energies) between 3000 cm⁻¹ and 2800 cm⁻¹. Raman spectra illustrated a similar picture with less ν₂PO₄³⁻ at 450 cm⁻¹ and ν₄PO₄³⁻ from 590 cm⁻¹ to 584 cm⁻¹, amide III at 1272 cm⁻¹ and protein CH₂ deformation at 1446 cm⁻¹ in archaeological bone material/samples/sources. A semi-quantitative determination of various distributions of biomolecules by chemi-maps of reflection- and ATR- methods revealed that there were less carbohydrates and complex carbohydrates as well as amorphous or fully hydrated sugars in archaeological samples compared with forensic bone samples. Raman- microscopic imaging data showed a reduction in B-type carbonate and protein α-helices after a PMI of 3 years. The calculated mineral content ratio and the organic to mineral ratio displayed that the mineral content ratio increases, while the organic to mineral ratio decreases with

  15. Assessing various Infrared (IR) microscopic imaging techniques for post-mortem interval evaluation of human skeletal remains.

    Directory of Open Access Journals (Sweden)

    Claudia Woess

    Full Text Available Due to the influence of many environmental processes, a precise determination of the post-mortem interval (PMI) of skeletal remains is known to be very complicated. Although methods for the investigation of the PMI exist, there still remains much room for improvement. In this study the applicability of infrared (IR) microscopic imaging techniques such as reflection-, ATR- and Raman-microscopic imaging for the estimation of the PMI of human skeletal remains was tested. PMI-specific features were identified and visualized by overlaying IR imaging data with morphological tissue structures obtained using light microscopy to differentiate between forensic and archaeological bone samples. ATR and reflection spectra revealed that a more prominent peak at 1042 cm-1 (an indicator for bone mineralization) was observable in archaeological bone material when compared with forensic samples. Moreover, in the case of the archaeological bone material, a reduction in the levels of phospholipids, proteins, nucleic acid sugars, complex carbohydrates as well as amorphous or fully hydrated sugars was detectable between 3000 cm-1 and 2800 cm-1 (reciprocal wavelengths/energies). Raman spectra illustrated a similar picture, with less ν2PO43- at 450 cm-1 and ν4PO43- from 590 cm-1 to 584 cm-1, amide III at 1272 cm-1 and protein CH2 deformation at 1446 cm-1 in archaeological bone samples. A semi-quantitative determination of the distributions of various biomolecules by chemi-maps of reflection- and ATR- methods revealed that there were fewer carbohydrates and complex carbohydrates as well as amorphous or fully hydrated sugars in archaeological samples compared with forensic bone samples. Raman-microscopic imaging data showed a reduction in B-type carbonate and protein α-helices after a PMI of 3 years. The calculated mineral content ratio and the organic to mineral ratio showed that the mineral content ratio increases, while the organic to mineral ratio

  16. Assessing various Infrared (IR) microscopic imaging techniques for post-mortem interval evaluation of human skeletal remains.

    Science.gov (United States)

    Woess, Claudia; Unterberger, Seraphin Hubert; Roider, Clemens; Ritsch-Marte, Monika; Pemberger, Nadin; Cemper-Kiesslich, Jan; Hatzer-Grubwieser, Petra; Parson, Walther; Pallua, Johannes Dominikus

    2017-01-01

    Due to the influence of many environmental processes, a precise determination of the post-mortem interval (PMI) of skeletal remains is known to be very complicated. Although methods for the investigation of the PMI exist, there still remains much room for improvement. In this study the applicability of infrared (IR) microscopic imaging techniques such as reflection-, ATR- and Raman-microscopic imaging for the estimation of the PMI of human skeletal remains was tested. PMI-specific features were identified and visualized by overlaying IR imaging data with morphological tissue structures obtained using light microscopy to differentiate between forensic and archaeological bone samples. ATR and reflection spectra revealed that a more prominent peak at 1042 cm-1 (an indicator for bone mineralization) was observable in archaeological bone material when compared with forensic samples. Moreover, in the case of the archaeological bone material, a reduction in the levels of phospholipids, proteins, nucleic acid sugars, complex carbohydrates as well as amorphous or fully hydrated sugars was detectable between 3000 cm-1 and 2800 cm-1 (reciprocal wavelengths/energies). Raman spectra illustrated a similar picture, with less ν2PO43- at 450 cm-1 and ν4PO43- from 590 cm-1 to 584 cm-1, amide III at 1272 cm-1 and protein CH2 deformation at 1446 cm-1 in archaeological bone samples. A semi-quantitative determination of the distributions of various biomolecules by chemi-maps of reflection- and ATR- methods revealed that there were fewer carbohydrates and complex carbohydrates as well as amorphous or fully hydrated sugars in archaeological samples compared with forensic bone samples. Raman-microscopic imaging data showed a reduction in B-type carbonate and protein α-helices after a PMI of 3 years. The calculated mineral content ratio and the organic to mineral ratio showed that the mineral content ratio increases, while the organic to mineral ratio decreases with time.
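
    Ratios such as the mineral content ratio and the organic to mineral ratio discussed above are typically obtained by integrating characteristic band areas in the spectra. The following is a minimal sketch of that kind of band-area ratio; the band windows and the synthetic spectrum are illustrative assumptions, not the paper's actual processing pipeline:

```python
import numpy as np

def band_area(wn, inten, lo, hi):
    """Trapezoidal integral of intensity over the window [lo, hi] (cm^-1)."""
    m = (wn >= lo) & (wn <= hi)
    x, y = wn[m], inten[m]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def mineral_to_organic(wn, inten):
    # Assumed windows: phosphate nu1 band near 960 cm^-1 (mineral) and
    # amide I band near 1660 cm^-1 (organic matrix).
    return band_area(wn, inten, 900, 1000) / band_area(wn, inten, 1600, 1700)

# Synthetic spectrum: two Gaussian bands on a zero baseline.
wn = np.linspace(400, 1800, 1401)
spec = 3.0 * np.exp(-((wn - 960) / 20) ** 2) + 1.0 * np.exp(-((wn - 1660) / 25) ** 2)
ratio = mineral_to_organic(wn, spec)
print(round(ratio, 2))
```

    In real spectra a baseline correction would precede the integration; the sketch omits it for brevity.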

  17. Omniclassical Diffusion in Low Aspect Ratio Tokamaks

    International Nuclear Information System (INIS)

    Mynick, H.E.; White, R.B.; Gates, D.A.

    2004-01-01

    Recently reported numerical results for axisymmetric devices with low aspect ratio A found radial transport enhanced over the expected neoclassical value by a factor of 2 to 3. In this paper, we provide an explanation for this enhancement. Transport theory in toroidal devices usually assumes large A, and that the ratio Bp/Bt of the poloidal to the toroidal magnetic field is small. These assumptions result in transport which, in the low collision limit, is dominated by banana orbits, giving the largest collisionless excursion of a particle from an initial flux surface. However, in a small aspect ratio device one may have Bp/Bt ∼ 1, and the gyroradius may be larger than the banana excursion. Here, we develop an approximate analytic transport theory valid for devices with arbitrary A. For low A, we find that the enhanced transport, referred to as omniclassical, is a combination of neoclassical and properly generalized classical effects, which become dominant in the low-A, Bp/Bt ∼ 1 regime. Good agreement of the analytic theory with numerical simulations is obtained.

  18. Interacting-fermion approximation in the two-dimensional ANNNI model

    International Nuclear Information System (INIS)

    Grynberg, M.D.; Ceva, H.

    1990-12-01

    We investigate the effect of including domain-walls interactions in the two-dimensional axial next-nearest-neighbor Ising or ANNNI model. At low temperatures this problem is reduced to a one-dimensional system of interacting fermions which can be treated exactly. It is found that the critical boundaries of the low-temperature phases are in good agreement with those obtained using a free-fermion approximation. In contrast with the monotonic behavior derived from the free-fermion approach, the wall density or wave number displays reentrant phenomena when the ratio of the next-nearest-neighbor and nearest-neighbor interactions is greater than one-half. (author). 17 refs, 2 figs

  19. Determination of the mass-ratio distribution, I: single-lined spectroscopic binary stars

    NARCIS (Netherlands)

    Hogeveen, S.J.

    1992-01-01

    For single-lined spectroscopic binary stars (SB1), the mass ratio q = Msec/Mprim is calculated from the mass function f(m), which is determined from observations. For statistical investigations of the mass-ratio distribution, the term sin^3 i, that remains in the cubic equation from which q is
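
    The cubic relation referred to above can be made concrete: with the standard SB1 mass function f(m) = Msec^3 sin^3 i / (Mprim + Msec)^2 and q = Msec/Mprim, one gets f(m) = Mprim q^3 sin^3 i / (1 + q)^2, which can be solved numerically for q. A hedged sketch (the example values are illustrative, not from the paper):

```python
import numpy as np

def mass_ratio(f_m, m_prim, inclination_deg):
    """Solve f(m) = M_prim * q^3 * sin^3(i) / (1 + q)^2 for q by bisection.

    f_m and m_prim are in solar masses; the left-hand side is monotonically
    increasing in q, so a sign-change bracket suffices.
    """
    s3 = np.sin(np.radians(inclination_deg)) ** 3
    g = lambda q: m_prim * q ** 3 * s3 / (1 + q) ** 2 - f_m
    lo, hi = 1e-6, 100.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example: f(m) = 0.1 M_sun, M_prim = 1 M_sun, edge-on orbit (i = 90 deg).
q = mass_ratio(0.1, 1.0, 90.0)
print(round(q, 3))
```

    For statistical work the unknown sin^3 i is the complication the abstract mentions; the sketch simply assumes a known inclination.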

  20. Optimal random perturbations for stochastic approximation using a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman; Spall, J. C.

    1998-01-01

    simultaneous perturbation approximation to the gradient based on loss function measurements. SPSA is based on picking a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo...
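
    The SPSA estimator described above can be sketched in a few lines. The symmetric Bernoulli ±1 perturbation used here is the standard choice in the SPSA literature (the paper itself analyzes which perturbation distribution is optimal); the gain sequences and test problem are illustrative assumptions:

```python
import numpy as np

def spsa(loss, theta0, a=0.1, c=0.1, iters=500, seed=0):
    """Minimal SPSA: estimates the gradient from only two loss measurements
    per iteration using a random simultaneous perturbation."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                 # standard gain decay exponents
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Bernoulli +/-1
        # One two-sided perturbation yields the full gradient estimate.
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck) / delta
        theta = theta - ak * g_hat
    return theta

# Quadratic test problem with minimum at (1, -2).
f = lambda t: (t[0] - 1) ** 2 + (t[1] + 2) ** 2
theta = spsa(f, [0.0, 0.0])
print(np.round(theta, 1))
```

    Note that the cost per iteration is two loss evaluations regardless of the parameter dimension, which is the practical appeal of SPSA over finite differences.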

  1. Generalized finite polynomial approximation (WINIMAX) to the reduced partition function of isotopic molecules

    International Nuclear Information System (INIS)

    Lee, M.W.; Bigeleisen, J.

    1978-01-01

    The MINIMAX finite polynomial approximation to an arbitrary function has been generalized to include a weighting function (WINIMAX). It is suggested that an exponential is a reasonable weighting function for the logarithm of the reduced partition function of a harmonic oscillator. Comparison of the error function for finite orthogonal polynomial (FOP), MINIMAX, and WINIMAX expansions of the logarithm of the reduced vibrational partition function shows WINIMAX to be the best of the three approximations. A condensed table of WINIMAX coefficients is presented. The FOP, MINIMAX, and WINIMAX approximations are compared with exact calculations of the logarithm of the reduced partition function ratios for isotopic substitution in H2O, CH4, CH2O, C2H4, and C2H6 at 300 K. Both deuterium and heavy atom isotope substitution are studied. Except for a third order expansion involving deuterium substitution, the WINIMAX method is superior to FOP and MINIMAX. At the level of a second order expansion, WINIMAX approximations to ln(s/s')f are good to 2.5% and 6.5% for deuterium and heavy atom substitution, respectively.
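
    The weighted-minimax idea can be illustrated generically: on a discrete grid, minimizing the maximum weighted error of a polynomial fit is a linear program. This is only a sketch of the underlying optimization, not the authors' machinery or coefficient table; the target function and the exponential weight are illustrative, echoing the weighting suggested in the abstract:

```python
import numpy as np
from scipy.optimize import linprog

def winimax_fit(x, f, w, degree):
    """Weighted minimax fit: minimize max_i w_i * |p(x_i) - f_i| over
    polynomials p of the given degree, posed as a linear program."""
    V = np.vander(x, degree + 1)               # Vandermonde basis (descending powers)
    n, m = V.shape
    c = np.zeros(m + 1)                        # variables [coeffs, t]; minimize t
    c[-1] = 1.0
    # Constraints:  w*(V c - f) <= t  and  -w*(V c - f) <= t
    A = np.vstack([np.hstack([w[:, None] * V, -np.ones((n, 1))]),
                   np.hstack([-w[:, None] * V, -np.ones((n, 1))])])
    b = np.concatenate([w * f, -w * f])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * (m + 1))
    return res.x[:-1], res.x[-1]               # coefficients, minimax error

x = np.linspace(0.1, 2.0, 50)
f = np.log(x)                  # stand-in for the log of a partition function
w = np.exp(-x)                 # exponential weighting function
coeffs, err = winimax_fit(x, f, w, degree=3)
print(round(err, 4))
```

    The weight shifts where the fit spends its error budget, which is exactly the effect WINIMAX exploits relative to plain MINIMAX.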

  2. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Approximating perfection a mathematician's journey into the world of mechanics

    CERN Document Server

    Lebedev, Leonid P

    2004-01-01

    This is a book for those who enjoy thinking about how and why Nature can be described using mathematical tools. Approximating Perfection considers the background behind mechanics as well as the mathematical ideas that play key roles in mechanical applications. Concentrating on the models of applied mechanics, the book engages the reader in the types of nuts-and-bolts considerations that are normally avoided in formal engineering courses: how and why models remain imperfect, and the factors that motivated their development. The opening chapter reviews and reconsiders the basics of c

  4. Determination of thoron and radon ratio by liquid scintillation spectrometry

    International Nuclear Information System (INIS)

    Yoshikawa, H.; Nakanishi, T.; Nakahara, H.

    2006-01-01

    A portable liquid scintillation counter was applied to the analysis of alpha-ray energy spectra to determine the ratio of 220 Rn/ 222 Rn in fumarolic gas in the field. A surface-polished vial was developed, by which a Gaussian distribution could be approximated for the alpha-ray energy spectra and the peak areas of the nuclides could be estimated independently, because of the wide FWHM in the liquid scintillation pulse. A fumarolic gas sample was collected at Mt. Kamiyama (Hakoneyama geothermal field in Japan), which had a low 220 Rn/ 222 Rn ratio of 2.20 ± 0.13. (author)
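
    The Gaussian peak-area estimation described above can be sketched as a two-peak fit followed by an area ratio. The peak positions, widths, and count levels below are synthetic illustrations, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(E, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian peaks on a zero baseline."""
    g = lambda a, mu, s: a * np.exp(-0.5 * ((E - mu) / s) ** 2)
    return g(a1, mu1, s1) + g(a2, mu2, s2)

# Synthetic alpha spectrum with two broad, partially resolved peaks
# (wide FWHM, as in liquid scintillation counting; energies indicative only).
E = np.linspace(4.0, 8.0, 400)
rng = np.random.default_rng(1)
counts = two_gaussians(E, 100, 5.5, 0.25, 40, 6.3, 0.25) + rng.normal(0, 1.0, E.size)

p0 = [80, 5.4, 0.3, 30, 6.4, 0.3]        # rough initial guesses
popt, _ = curve_fit(two_gaussians, E, counts, p0=p0)
a1, _, s1, a2, _, s2 = popt
area_ratio = (a2 * abs(s2)) / (a1 * abs(s1))   # Gaussian area is proportional to a*s
print(round(area_ratio, 2))
```

    Because the two Gaussians overlap, fitting them jointly is what allows the peak areas to be estimated independently despite the poor energy resolution.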

  5. Approximate cohomology in Banach algebras | Pourabbas ...

    African Journals Online (AJOL)

    We introduce the notions of approximate cohomology and approximate homotopy in Banach algebras and we study the relation between them. We show that the approximate homotopically equivalent cochain complexes give the same approximate cohomologies. As a special case, approximate Hochschild cohomology is ...

  6. International Conference Approximation Theory XIV

    CERN Document Server

    Schumaker, Larry

    2014-01-01

    This volume developed from papers presented at the international conference Approximation Theory XIV, held April 7–10, 2013 in San Antonio, Texas. The proceedings contain surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.

  7. Digital marketing budgets for independent hotels Continuously Shifting to Remain Competitive in the Online World

    OpenAIRE

    Lanz, Leora Halpern; Carmichael, Megan

    2015-01-01

    The hotel marketing budget, typically amounting to approximately 4-5% of an asset’s total revenue, must remain fluid so that the marketing director can constantly adapt the marketing tools to meet consumer communications methods and demands. Though only a small amount of a hotel’s revenue is traditionally allocated for the marketing budget, the hotel’s success is directly reliant on how effectively that budget is utilized. Thus far in 2015, over 55% of hotel bookings are happening onl...

  8. Forms of Approximate Radiation Transport

    CERN Document Server

    Brunner, G

    2002-01-01

    Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.

  9. Approximate and renormgroup symmetries

    International Nuclear Information System (INIS)

    Ibragimov, Nail H.; Kovalev, Vladimir F.

    2009-01-01

    ''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  10. Guidelines for Use of the Approximate Beta-Poisson Dose-Response Model.

    Science.gov (United States)

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2017-07-01

    For dose-response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose-response model with parameters α>0 and β>0, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting PI(d) as the probability of infection at a given mean dose d, the widely used dose-response model PI(d) = 1 - (1 + d/β)^(-α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α ≪ β and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (where r follows the gamma distribution implied by the approximate model and α̂, β̂ are the maximum likelihood estimates), together with the rule of thumb β̂ > (22α̂)^0.50 for 0.02 < α̂ < 2 to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation to the exact beta-Poisson model dose-response curve. © 2016 Society for Risk Analysis.
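
    The exact and approximate models can be compared directly, since the exact beta-Poisson response can be written with the Kummer function as PI(d) = 1 - 1F1(α; α+β; -d), while the approximate form is PI(d) = 1 - (1 + d/β)^(-α). A sketch using SciPy; the parameter values are illustrative, chosen to satisfy the validity conditions:

```python
import numpy as np
from scipy.special import hyp1f1

def p_exact(d, alpha, beta):
    """Exact beta-Poisson: PI(d) = 1 - 1F1(alpha; alpha + beta; -d)."""
    return 1.0 - hyp1f1(alpha, alpha + beta, -d)

def p_approx(d, alpha, beta):
    """Widely used approximate formula: PI(d) = 1 - (1 + d/beta)^(-alpha)."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

alpha, beta = 0.2, 50.0                 # illustrative; alpha << beta, beta >> 1
doses = np.array([1.0, 10.0, 100.0])
gap = float(np.max(np.abs(p_exact(doses, alpha, beta) - p_approx(doses, alpha, beta))))
print(round(gap, 4))
```

    When the validity conditions fail (e.g., α comparable to β), the gap grows, which is exactly the situation the proposed validity measure is meant to flag.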

  11. Preschoolers' precision of the approximate number system predicts later school mathematics performance.

    Science.gov (United States)

    Mazzocco, Michèle M M; Feigenson, Lisa; Halberda, Justin

    2011-01-01

    The Approximate Number System (ANS) is a primitive mental system of nonverbal representations that supports an intuitive sense of number in human adults, children, infants, and other animal species. The numerical approximations produced by the ANS are characteristically imprecise and, in humans, this precision gradually improves from infancy to adulthood. Throughout development, wide ranging individual differences in ANS precision are evident within age groups. These individual differences have been linked to formal mathematics outcomes, based on concurrent, retrospective, or short-term longitudinal correlations observed during the school age years. However, it remains unknown whether this approximate number sense actually serves as a foundation for these school mathematics abilities. Here we show that ANS precision measured at preschool, prior to formal instruction in mathematics, selectively predicts performance on school mathematics at 6 years of age. In contrast, ANS precision does not predict non-numerical cognitive abilities. To our knowledge, these results provide the first evidence for early ANS precision, measured before the onset of formal education, predicting later mathematical abilities.

  12. Approximations of Fuzzy Systems

    Directory of Open Access Journals (Sweden)

    Vinai K. Singh

    2013-03-01

    Full Text Available A fuzzy system can uniformly approximate any real continuous function on a compact domain to any degree of accuracy. Such results can be viewed as an existence of optimal fuzzy systems. Li-Xin Wang discussed a similar problem using Gaussian membership function and Stone-Weierstrass Theorem. He established that fuzzy systems with product inference, centroid defuzzification and Gaussian functions are capable of approximating any real continuous function on a compact set to arbitrary accuracy. In this paper we study a similar approximation problem by using exponential membership functions
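
    A fuzzy system of the Wang type described above (Gaussian memberships, product inference, centroid defuzzification) collapses, for a single input, to a normalized weighted sum, which makes the universal-approximation claim easy to demonstrate. Rule placement, widths, and the target function below are illustrative assumptions:

```python
import numpy as np

def fuzzy_system(centers, y_rules, sigma):
    """Gaussian memberships + product inference + centroid defuzzification.
    For one input this is a normalized radial-basis expansion."""
    def f(x):
        x = np.atleast_1d(x)[:, None]
        mu = np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))  # membership degrees
        return (mu * y_rules).sum(axis=1) / mu.sum(axis=1)     # centroid output
    return f

# One rule per grid point; rule consequents sampled from the target sin(x).
centers = np.linspace(-np.pi - 0.5, np.pi + 0.5, 29)
f_hat = fuzzy_system(centers, np.sin(centers), sigma=0.26)

x = np.linspace(-np.pi, np.pi, 200)
err = float(np.max(np.abs(f_hat(x) - np.sin(x))))
print(round(err, 3))
```

    Denser rule grids and narrower memberships drive the error down, which is the constructive content of the uniform-approximation result.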

  13. Approximate and renormgroup symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling

    2009-07-01

    ''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  14. Cosmological applications of Padé approximant

    International Nuclear Information System (INIS)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

    As is well known, in mathematics, any function can be approximated by the Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two issues. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they can work well. In these practices, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation
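
    For a concrete sense of the claim that a Padé approximant often beats the truncated Taylor series it is built from, one can compare a [2/2] Padé approximant of exp(x) against the order-4 Taylor polynomial. This is a generic illustration, not the paper's cosmological application:

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of exp(x) through x^4, in ascending order.
coeffs = [1 / math.factorial(k) for k in range(5)]
p, q = pade(coeffs, 2)          # [2/2] Pade approximant: p and q are poly1d

x = 1.0
pade_val = p(x) / q(x)                                    # rational approximation
taylor_val = sum(c * x ** k for k, c in enumerate(coeffs))  # truncated Taylor series
print(abs(pade_val - math.e), abs(taylor_val - math.e))
```

    Both approximations use exactly the same five Taylor coefficients; the rational form simply redistributes that information more efficiently.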

  15. Cosmological applications of Padé approximant

    Science.gov (United States)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

    As is well known, in mathematics, any function can be approximated by the Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two issues. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they can work well. In these practices, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.

  16. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-01-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations that are free of such singularities and yet highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus, introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.

  17. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-09-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations that are free of such singularities and yet highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus, introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.

  18. Expectation Consistent Approximate Inference

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2005-01-01

    We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability distributions...

  19. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    Science.gov (United States)

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. Copyright

  20. Determination of a Two Variable Approximation Function with Application to the Fuel Combustion Charts

    Directory of Open Access Journals (Sweden)

    Irina-Carmen ANDREI

    2017-09-01

    Full Text Available Following the demands of the design and performance analysis in case of liquid fuel propelled rocket engines, as well as the trajectory optimization, the development of efficient codes, which frequently need to call the Fuel Combustion Charts, became an important matter. This paper presents an efficient solution to the issue; the author has developed an original approach to determine the non-linear approximation function of two variables: the chamber pressure and the nozzle exit pressure ratio. The numerical algorithm based on this two variable approximation function is more efficient due to its simplicity, capability to providing numerical accuracy and prospects for an increased convergence rate of the optimization codes.
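
    A generic way to realize such a two-variable approximation function is a bivariate polynomial least-squares fit. The basis, degree, and the synthetic "chart" quantity below are illustrative assumptions, not the author's actual formulation:

```python
import numpy as np

def fit_bivariate_poly(x, y, z, deg):
    """Least-squares fit of z ~ sum c_ij * x^i * y^j with i + j <= deg."""
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack([x ** i * y ** j for i, j in terms])
    c, *_ = np.linalg.lstsq(A, z, rcond=None)
    def f(xq, yq):
        return sum(ck * xq ** i * yq ** j for ck, (i, j) in zip(c, terms))
    return f

# Synthetic "chart": a smooth function of chamber pressure and pressure ratio.
rng = np.random.default_rng(0)
pc = rng.uniform(1.0, 10.0, 300)        # chamber pressure (arbitrary units)
pr = rng.uniform(2.0, 50.0, 300)        # nozzle exit pressure ratio
val = np.log(pc) + 0.1 * np.sqrt(pr)    # assumed smooth chart quantity

f = fit_bivariate_poly(pc, pr, val, deg=3)
resid = float(np.max(np.abs(f(pc, pr) - val)))
print(round(resid, 3))
```

    Once fitted, evaluating the closed-form polynomial replaces a table lookup and interpolation in the chart, which is what makes such approximations attractive inside optimization loops.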

  1. Approximating the edit distance for genomes with duplicate genes under DCJ, insertion and deletion

    Directory of Open Access Journals (Sweden)

    Shao Mingfu

    2012-12-01

    Full Text Available Computing the edit distance between two genomes under certain operations is a basic problem in the study of genome evolution. The double-cut-and-join (DCJ) model has formed the basis for most algorithmic research on rearrangements over the last few years. The edit distance under the DCJ model can be easily computed for genomes without duplicate genes. In this paper, we study the edit distance for genomes with duplicate genes under a model that includes DCJ operations, insertions and deletions. We prove that computing the edit distance is equivalent to finding the optimal cycle decomposition of the corresponding adjacency graph, and give an approximation algorithm with an approximation ratio of 1.5 + ε.

  2. Approximation and Computation

    CERN Document Server

    Gautschi, Walter; Rassias, Themistocles M

    2011-01-01

    Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanovia, represent the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg

  3. Background approximation in automatic qualitative X-ray-fluorescent analysis

    International Nuclear Information System (INIS)

    Jordanov, J.; Tsanov, T.; Stefanov, R.; Jordanov, N.; Paunov, M.

    1982-01-01

    An empirical method of finding the dependence of the background intensity Isub(bg) on the wavelength is proposed, based on the approximation of the experimentally found values for the background in the course of an automatic qualitative X-ray fluorescent analysis with a pre-set curve. It is assumed that the dependence I(lambda) will be well approximated by a curve of the type Isub(bg) = (lambda - lambda sub(0))^(fsub(1)(lambda)) exp[fsub(2)(lambda)], where fsub(1)(lambda) and fsub(2)(lambda) are linear functions with respect to the sought parameters. This assumption was checked on a ''pure'' starch background, in which it is not known beforehand which points belong to the background. It was assumed that the dependence I(lambda) can be found from all minima in the spectrum. Three types of minima have been distinguished: 1. the lowest point between two well-resolved X-ray lines; 2. a minimum obtained as a result of statistical fluctuations of the measured signal; 3. the lowest point between two overlapping lines. The minima strongly deviating from the background are removed from the obtained set. The sum-total of the remaining minima serves as a base for the approximation of the dependence I(lambda). The unknown parameters are determined by means of the LSM. The approximated curve obtained by this method is closer to the real background than the background determined by the method described by Rigaku Denki, as the effect of all recorded minima is taken into account. As an example the PbTe spectrum recorded with crystal LiF 220 is shown graphically. The curve describes the background of the spectrum well, even in the regions in which there are no minima belonging to the background. (authors)
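
    Because f1 and f2 are linear in the wavelength, taking logarithms makes the proposed background curve linear in its parameters, so it can be fitted by ordinary least squares on the selected minima. A sketch with synthetic data (lambda0 and all parameter values are illustrative assumptions):

```python
import numpy as np

def fit_background(lam, intensity, lam0):
    """Fit I_bg = (lam - lam0)^(a + b*lam) * exp(c + d*lam) by linear
    least squares on ln(I): ln I = (a + b*lam)*ln(lam - lam0) + c + d*lam."""
    u = np.log(lam - lam0)
    A = np.column_stack([u, lam * u, np.ones_like(lam), lam])
    coef, *_ = np.linalg.lstsq(A, np.log(intensity), rcond=None)
    a, b, c, d = coef
    return lambda x: (x - lam0) ** (a + b * x) * np.exp(c + d * x)

# Synthetic background "minima" generated from known parameters.
lam = np.linspace(0.5, 3.0, 40)          # wavelength (arbitrary units)
lam0 = 0.2
true = (lam - lam0) ** (1.5 - 0.2 * lam) * np.exp(2.0 - 0.8 * lam)

bg = fit_background(lam, true, lam0)
ok = bool(np.allclose(bg(lam), true))
print(ok)
```

    With noiseless data the log-linear least-squares fit recovers the curve exactly; with real spectra, the outlier-rejection step described in the abstract would precede this fit.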

  4. Constrained Optimization via Stochastic approximation with a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman

    1997-01-01

    This paper deals with a projection algorithm for stochastic approximation using simultaneous perturbation gradient approximation for optimization under inequality constraints, where no direct gradient of the loss function is available and the inequality constraints are given as explicit functions of the optimization parameters. It is shown that, under application of the projection algorithm, the parameter iterate converges almost surely to a Kuhn-Tucker point. The procedure is illustrated by a numerical example. (C) 1997 Elsevier Science Ltd.
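    The projected simultaneous-perturbation scheme can be sketched as follows: a gradient estimate is formed from two noisy loss evaluations, and each iterate is projected back onto the constraint set (a toy illustration with box constraints; the gain sequences and all names are assumptions, not the paper's implementation):

```python
import numpy as np

def project(theta, lo, hi):
    # Projection onto box constraints (an explicit, easy-to-project set)
    return np.clip(theta, lo, hi)

def spsa_projected(loss, theta, lo, hi, iters=2000, a=0.1, c=0.1, seed=0):
    """Projected SPSA: the gradient is approximated from two noisy loss
    evaluations with a simultaneous random perturbation, and each iterate
    is projected back onto the constraint set after the update."""
    rng = np.random.default_rng(seed)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                  # commonly used SPSA gain decay
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.size)
        # For +/-1 perturbations, 1/delta equals delta
        ghat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2.0 * ck) * delta
        theta = project(theta - ak * ghat, lo, hi)
    return theta
```

    On a noisy quadratic whose unconstrained minimum lies outside the box, the iterate settles on the boundary, consistent with convergence to a Kuhn-Tucker point.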

  5. Coronal Loops: Evolving Beyond the Isothermal Approximation

    Science.gov (United States)

    Schmelz, J. T.; Cirtain, J. W.; Allen, J. D.

    2002-05-01

    Are coronal loops isothermal? A controversy over this question has arisen recently because different investigators using different techniques have obtained very different answers. Analysis of SOHO-EIT and TRACE data using narrowband filter ratios to obtain temperature maps has produced several key publications that suggest that coronal loops may be isothermal. We have constructed a multi-thermal distribution for several pixels along a relatively isolated coronal loop on the southwest limb of the solar disk using spectral line data from SOHO-CDS taken on 1998 Apr 20. These distributions are clearly inconsistent with isothermal plasma along either the line of sight or the length of the loop, and suggest rather that the temperature increases from the footpoints to the loop top. We originally speculated that these differences could be attributed to pixel size -- CDS pixels are larger, and more `contaminating' material would be expected along the line of sight. To test this idea, we used CDS iron line ratios from our data set to mimic the isothermal results from the narrowband filter instruments. These ratios indicated that the temperature gradient along the loop was flat, despite the fact that a more complete analysis of the same data showed this result to be false! The CDS pixel size was not the cause of the discrepancy; rather, the problem lies with the isothermal approximation used in EIT and TRACE analysis. These results should serve as a strong warning to anyone using this simplistic method to obtain temperatures. This warning is echoed on the EIT web page: ``Danger! Enter at your own risk!'' In other words, values for temperature may be found, but they may have nothing to do with physical reality. Solar physics research at the University of Memphis is supported by NASA grant NAG5-9783. This research was funded in part by the NASA/TRACE MODA grant for Montana State University.

  6. Coupled kinetic equations for fermions and bosons in the relaxation-time approximation

    Science.gov (United States)

    Florkowski, Wojciech; Maksymiuk, Ewa; Ryblewski, Radoslaw

    2018-02-01

    Kinetic equations for fermions and bosons are solved numerically in the relaxation-time approximation for the case of one-dimensional boost-invariant geometry. Fermions are massive and carry baryon number, while bosons are massless. The conservation laws for the baryon number, energy, and momentum lead to two Landau matching conditions, which specify the coupling between the fermionic and bosonic sectors and determine the proper-time dependence of the effective temperature and baryon chemical potential of the system. The numerical results illustrate how a nonequilibrium mixture of fermions and bosons approaches the hydrodynamic regime described by the Navier-Stokes equations with appropriate forms of the kinetic coefficients. The shear viscosity of a mixture is the sum of the shear viscosities of the fermion and boson components, while the bulk viscosity is given by the formula known for a gas of fermions, but with the thermodynamic variables characterising the mixture. Thus, we find that massless bosons contribute in a nontrivial way to the bulk viscosity of a mixture, provided the fermions are massive. We further observe the hydrodynamization effect, which takes place earlier in the shear sector than in the bulk one. Numerical studies of the ratio of the longitudinal and transverse pressures show, to a good approximation, that it depends only on the ratio of the relaxation and proper times. This behavior is connected with the existence of an attractor solution for conformal systems.
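    The relaxation-time approximation itself replaces the collision term by -(f - f_eq)/tau_rel, which drives any initial distribution exponentially toward equilibrium on the timescale tau_rel. A toy single-mode illustration of this mechanism (not the boost-invariant mixture solved in the paper):

```python
import numpy as np

def relax_exact(f0, f_eq, tau_rel, t):
    """Exact solution of df/dt = -(f - f_eq)/tau_rel for constant f_eq."""
    return f_eq + (f0 - f_eq) * np.exp(-t / tau_rel)

def relax_euler(f0, f_eq, tau_rel, dt, steps):
    # Explicit Euler integration of the same relaxation-time equation
    f = f0
    for _ in range(steps):
        f -= dt * (f - f_eq) / tau_rel
    return f
```

    The numerical integration reproduces the exponential approach to equilibrium; in the paper the equilibrium function itself evolves through the Landau matching conditions.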

  7. Some results in Diophantine approximation

    DEFF Research Database (Denmark)

    Pedersen, Steffen Højris

    This thesis consists of three papers in Diophantine approximation, a subbranch of number theory. Preceding these papers is an introduction to various aspects of Diophantine approximation and formal Laurent series over Fq, and a summary of each of the three papers. The introduction presents the basic concepts on which the papers build; among others, it introduces metric Diophantine approximation, Mahler's approach to algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion on Mahler's problem when considered...

  8. Bounded-Degree Approximations of Stochastic Networks

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar

    2017-06-01

    We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.

  9. Usefulness of left ventricular wall thickness-to-diameter ratio in thallium-201 scintigraphy

    International Nuclear Information System (INIS)

    Manno, B.; Hakki, A.H.; Kane, S.A.; Iskandrian, A.S.

    1983-01-01

    The ratio of left ventricular wall thickness to the cavity dimension, as seen on thallium-201 images, was used in this study to predict left ventricular ejection fraction and volume. We obtained rest thallium-201 images in 50 patients with symptomatic coronary artery disease. The thickness of a normal-appearing segment of the left ventricular wall and the transverse diameter of the cavity were measured in the left anterior oblique projection. The left ventricular ejection fraction and volume in these patients were determined by radionuclide ventriculography. There was a good correlation between thickness-to-diameter ratio and ejection fraction and end-systolic volume. In 18 patients with a thickness-to-diameter ratio less than 0.70, the ejection fraction was lower than in the 16 patients with thickness-to-diameter ratio greater than or equal to 1.0. Similarly, in patients with a thickness-to-diameter ratio less than 0.70, the end-diastolic and end-systolic volume were higher than in the remaining patients with higher thickness-to-diameter ratios. All 18 patients with a thickness-to-diameter ratio less than 0.70 had ejection fractions less than 40%; 14 of 15 patients with a thickness-to-diameter ratio greater than or equal to 1.0 had an ejection fraction greater than 40%. The remaining 16 patients with a thickness-to-diameter ratio of 0.7-0.99 had intermediate ejection fractions and volumes.(ABSTRACT TRUNCATED AT 250 WORDS)

  10. The NLO jet vertex in the small-cone approximation for kt and cone algorithms

    International Nuclear Information System (INIS)

    Colferai, D.; Niccoli, A.

    2015-01-01

    We determine the jet vertex for Mueller-Navelet jets and forward jets in the small-cone approximation for two particular choices of jet algorithms: the kt algorithm and the cone algorithm. These choices are motivated by the extensive use of such algorithms in the phenomenology of jets. The differences with the original calculations of the small-cone jet vertex by Ivanov and Papa, which is found to be equivalent to an algorithm formerly proposed by Furman, are shown at both the analytic and numerical levels, and turn out to be sizeable. A detailed numerical study of the error introduced by the small-cone approximation is also presented for various observables of phenomenological interest. For a jet “radius” of R=0.5, the use of the small-cone approximation amounts to an error of about 5% at the level of the cross section, while it reduces to less than 2% for ratios of distributions such as those involved in the measurement of the azimuthal decorrelation of dijets.

  11. The NLO jet vertex in the small-cone approximation for kt and cone algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Colferai, D.; Niccoli, A. [Dipartimento di Fisica e Astronomia, Università di Firenze and INFN, Sezione di Firenze, 50019 Sesto Fiorentino (Italy)

    2015-04-15

    We determine the jet vertex for Mueller-Navelet jets and forward jets in the small-cone approximation for two particular choices of jet algorithms: the kt algorithm and the cone algorithm. These choices are motivated by the extensive use of such algorithms in the phenomenology of jets. The differences with the original calculations of the small-cone jet vertex by Ivanov and Papa, which is found to be equivalent to an algorithm formerly proposed by Furman, are shown at both the analytic and numerical levels, and turn out to be sizeable. A detailed numerical study of the error introduced by the small-cone approximation is also presented for various observables of phenomenological interest. For a jet “radius” of R=0.5, the use of the small-cone approximation amounts to an error of about 5% at the level of the cross section, while it reduces to less than 2% for ratios of distributions such as those involved in the measurement of the azimuthal decorrelation of dijets.

  12. Improved superposition schemes for approximate multi-caloron configurations

    International Nuclear Information System (INIS)

    Gerhold, P.; Ilgenfritz, E.-M.; Mueller-Preussker, M.

    2007-01-01

    Two improved superposition schemes for the construction of approximate multi-caloron-anti-caloron configurations, using exact single (anti-)caloron gauge fields as underlying building blocks, are introduced in this paper. The first improvement deals with possible monopole-Dirac string interactions between different calorons with non-trivial holonomy. The second one, based on the ADHM formalism, improves the (anti-)selfduality in the case of small caloron separations. It conforms with Shuryak's well-known ratio-ansatz when applied to instantons. Both superposition techniques provide a higher degree of (anti-)selfduality than the widely used sum-ansatz, which simply adds the (anti)caloron vector potentials in an appropriate gauge. Furthermore, the improved configurations (when discretized onto a lattice) are characterized by a higher stability when they are exposed to lattice cooling techniques

  13. Digital color analysis of color-ratio composite LANDSAT scenes. [Nevada

    Science.gov (United States)

    Raines, G. L.

    1977-01-01

    A method is presented that can be used to calculate approximate Munsell coordinates of the colors produced by making a color composite from three registered images. Applied to the LANDSAT MSS data of the Goldfield, Nevada, area, this method permits precise and quantitative definition of the limonitic areas originally observed in a LANDSAT color ratio composite. In addition, areas of transported limonite can be discriminated from the limonite in the hydrothermally altered areas of the Goldfield mining district. From the analysis, the numerical distinction between limonitic and nonlimonitic ground is generally less than 3% using the LANDSAT bands and as much as 8% in ratios of LANDSAT MSS bands.
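    The benefit of ratio images can be seen in a small sketch: dividing two co-registered bands cancels a common per-pixel illumination factor, so one material maps to similar ratio values in shaded and sunlit pixels (synthetic numbers, not the MSS data of the study):

```python
import numpy as np

def band_ratio(num_band, den_band, eps=1e-6):
    """Pixelwise ratio of two co-registered bands; the common illumination
    factor cancels, leaving a value characteristic of the surface material."""
    return num_band / (den_band + eps)

# Two materials with different spectral shape, under varying illumination
illum = np.array([0.4, 1.0, 0.7, 1.0])       # per-pixel illumination factor
refl_a = np.array([0.6, 0.6, 0.3, 0.3])      # band-A reflectance per pixel
refl_b = np.array([0.2, 0.2, 0.3, 0.3])      # band-B reflectance per pixel
ratio = band_ratio(illum * refl_a, illum * refl_b)
```

    The first two pixels (same material, different illumination) get nearly identical ratio values, as do the last two; the raw band values would not.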

  14. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  15. The effect of microstructure on the sheared edge quality and hole expansion ratio of hot-rolled 700 MPa steel

    Science.gov (United States)

    Kaijalainen, A.; Kesti, V.; Vierelä, R.; Ylitolva, M.; Porter, D.; Kömi, J.

    2017-09-01

    The effects of microstructure on the cutting and hole expansion properties of three thermomechanically rolled steels have been investigated. The yield strength of the studied 3 mm thick strip steels was approximately 700 MPa. Detailed microstructural studies using laser scanning confocal microscopy (LSCM), FESEM and FESEM-EBSD revealed that the three investigated materials consist of 1) single-phase polygonal ferrite, 2) polygonal ferrite with precipitates and 3) granular bainite. The quality of the mechanically sheared edges was evaluated using visual inspection and LSCM, while hole expansion properties were characterised according to the methods described in ISO 16630. Roughness values (Ra and Rz) of the sheet edge with different cutting clearances varied from 12 µm to 21 µm and from 133 µm to 225 µm, respectively. Mean hole expansion ratios varied from 28.4% to 40.5%. It was shown that granular bainite produced the finest cutting edge, but its hole expansion ratio remained at the same level as that of the steel comprising single-phase ferrite. This indicates that a single-phase ferritic matrix enhances hole expansion properties even with low-quality edges. A brief discussion of the microstructural features controlling the cutting quality and hole expansion properties is given.

  16. Limitations of shallow nets approximation.

    Science.gov (United States)

    Lin, Shao-Bo

    2017-10-01

    In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets is realized for all functions in balls of the reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with the existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Isotopic ratio measurement using a double focusing magnetic sector mass analyser with an inductively coupled plasma as an ion source

    International Nuclear Information System (INIS)

    Walder, A.J.; Freedman, P.A.

    1992-01-01

    An inductively coupled plasma source was coupled to a magnetic sector mass analyser equipped with seven Faraday detectors. An electrostatic filter located between the plasma source and the magnetic sector was used to create a double focusing system. Isotopic ratio measurements of uranium and lead standards revealed levels of internal and external precision comparable to those obtained using thermal ionization mass spectrometry. An external precision of 0.014% was obtained from the 235U:238U measurement of six samples of National Bureau of Standards (NBS) Standard Reference Material (SRM) U-500, while an RSD of 0.022% was obtained from the 206Pb:204Pb measurement of six samples of NBS SRM Pb-981. Measured isotopic ratios deviated from the NBS values by approximately 0.9% per atomic mass unit. This deviation (the mass bias) is approximately a linear function of mass and can therefore be corrected for by the analysis of standards. The analysis of NBS SRM Sr-987 revealed superior levels of internal and external precision. Normalization of the 87Sr:86Sr ratio to the 86Sr:88Sr ratio reduced the RSD to approximately 0.008%. The measured ratio was within 0.01% of the NBS value, and the day-to-day reproducibility was consistent within one standard deviation. (author)
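    The correction described above, treating the deviation as linear in the mass difference and calibrating it against a standard of known ratio, can be sketched in two small functions (hypothetical numbers, roughly matching the quoted 0.9% per atomic mass unit):

```python
def linear_mass_bias_factor(r_measured, r_true, delta_m):
    """Per-amu bias factor epsilon from a standard of known ratio,
    assuming the linear law r_measured = r_true * (1 + epsilon * delta_m)."""
    return (r_measured / r_true - 1.0) / delta_m

def correct_ratio(r_measured, epsilon, delta_m):
    # Apply the linear law in reverse to a measured sample ratio
    return r_measured / (1.0 + epsilon * delta_m)
```

    For example, a standard of true ratio 1.0 measured as 1.027 over a 3 amu mass difference gives epsilon = 0.009 per amu, which then corrects subsequent measurements of the same ratio.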

  18. RECENT TRENDS IN GENDER RATIO AT BIRTH IN HANGZHOU, CHINA.

    Science.gov (United States)

    Tang, L; Qiu, L Q; Yau, Kkw; Hui, Y V; Binns, C W; Lee, A H

    2015-12-01

    Higher than normal sex ratios at birth in China have been reported since the early 1980s. This study aimed to investigate recent trends in the sex ratio at birth in Hangzhou, capital of Zhejiang Province in southeast China. Information on selected maternal and birth-related characteristics was extracted from the Hangzhou Birth Information Database for all pregnant women who delivered live births during 2005-2014. The sex ratios at birth were calculated after excluding infants with missing data on sex and those born with ambiguous genitalia. A total of 478,192 male births and 430,852 female births were recorded, giving an overall ratio of 111.0. The sex ratio at birth was almost constant at around 110.7 during 2005-2008, rose to a peak of 113.1 in 2010 and then declined to 109.6 in 2014. The sex ratio at birth in Hangzhou remained unbalanced throughout the past decade.
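    The overall figure quoted above follows directly from the reported birth counts:

```python
# Sex ratio at birth: male births per 100 female births
male_births, female_births = 478_192, 430_852
srb = 100 * male_births / female_births
# Rounded to one decimal place, this reproduces the overall ratio of 111.0
```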

  19. Changes in Income at Macro Level Predict Sex Ratio at Birth in OECD Countries.

    Science.gov (United States)

    Kanninen, Ohto; Karhula, Aleksi

    2016-01-01

    The human sex ratio at birth (SRB) is approximately 107 boys for every 100 girls. The SRB rose until World War II and has declined slightly since the 1950s in several industrial countries. Recent studies have shown that the SRB varies according to exposure to disasters and socioeconomic conditions. However, it remains unknown whether changes in the SRB can be explained by observable macro-level socioeconomic variables across multiple years and countries. Here we show that changes in disposable income at the macro level positively predict the SRB in OECD countries. A one-standard-deviation increase in the change of disposable income is associated with an increase of 1.03 male births per 1000 female births. The relationship is possibly nonlinear and driven by extreme changes. The association varies from country to country, being particularly strong in Estonia. This is the first evidence to show that economic and social conditions are connected to the SRB across countries at the macro level. This calls for further research on the effects of societal conditions on general characteristics at birth.

  20. Bond charge approximation for valence electron density in elemental semiconductors

    International Nuclear Information System (INIS)

    Bashenov, V.K.; Gorbachov, V.E.; Marvakov, D.I.

    1985-07-01

    The spatial valence electron distribution in silicon and diamond is calculated in the adiabatic bond charge approximation at zero temperature, where the bond charges have a Gaussian shape and their tensor character is taken into account. Agreement between theory and experiment has been achieved. For this purpose, Xia's ionic pseudopotentials and the Schulze-Unger dielectric function are used. Two additional parameters, A_B and Z'_B, describe the spatial extent of the bond charge and the local-field corrections, respectively. The parameter Z'_B accounts for the ratio between the Coulomb and exchange-correlation interactions of the valence electrons, and its values for silicon and diamond have different signs. (author)

  1. Approximate circuits for increased reliability

    Science.gov (United States)

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
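    The voting arrangement can be sketched in a few lines: as long as, for every input, a majority of the approximate circuits agree with the reference, the voter reproduces the reference output exactly (a toy single-bit example with three replicas; the specific circuits are illustrative assumptions, not the patented design):

```python
def majority(bits):
    """Majority value of an odd number of single-bit outputs."""
    return int(sum(bits) > len(bits) // 2)

def voted_output(circuits, inputs):
    # Each approximate circuit may disagree with the reference on some
    # inputs, but for every input a majority must match the reference.
    return majority([c(inputs) for c in circuits])

# Reference: 2-input AND; each approximation errs on a different input
reference = lambda x: x[0] & x[1]
approx1 = lambda x: 1 if x == (0, 0) else x[0] & x[1]   # wrong on (0, 0)
approx2 = lambda x: 0 if x == (1, 1) else x[0] & x[1]   # wrong on (1, 1)
approx3 = lambda x: x[0] & x[1]                         # exact replica
```

    Because the two faulty replicas err on different input patterns, the majority vote matches the reference on all four inputs.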

  2. Projection after variation in the finite-temperature Hartree-Fock-Bogoliubov approximation

    Science.gov (United States)

    Fanto, P.

    2017-11-01

    The finite-temperature Hartree-Fock-Bogoliubov (HFB) approximation often breaks symmetries of the underlying many-body Hamiltonian. Restricting the calculation of the HFB partition function to a subspace with good quantum numbers through projection after variation restores some of the correlations lost in breaking these symmetries, although effects of the broken symmetries such as sharp kinks at phase transitions remain. However, the most general projection-after-variation formula in the finite-temperature HFB approximation is limited by a sign ambiguity. Here, I extend the Pfaffian formula for the many-body traces of HFB density operators introduced by Robledo [L. M. Robledo, Phys. Rev. C 79, 021302(R) (2009), 10.1103/PhysRevC.79.021302] to eliminate this sign ambiguity and evaluate the more complicated many-body traces required in projection after variation in the most general HFB case. The method is validated through a proof-of-principle calculation of the particle-number-projected HFB thermal energy in a simple model.

  3. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey; Alkhalifah, Tariq Ali

    2013-01-01

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  4. Analytical approximation of neutron physics data

    International Nuclear Information System (INIS)

    Badikov, S.A.; Vinogradov, V.A.; Gaj, E.V.; Rabotnov, N.S.

    1984-01-01

    A method for the analytical approximation of experimental neutron-physics data by rational functions, based on the Padé approximation, is suggested. It is shown that the specific behaviour of the Padé approximation near poles is an extremely favourable analytical property, essentially extending the convergence range and increasing the convergence rate as compared with polynomial approximation. The Padé approximation is a particularly natural instrument for processing resonance curves, since the resonances correspond to the complex poles of the approximant. Even in the general case, analytical representation of the data in this form is convenient and compact. Thus, representation of the data on neutron threshold reaction cross sections (the BOSPOR constant library) in the form of rational functions led to an approximately twentyfold reduction of the stored numerical information as compared with point-by-point tabulation at the same accuracy.
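    The advantage over polynomial approximation is easy to see on a small example: a [2/2] Padé approximant of exp, built from its Taylor coefficients, beats the degree-4 Taylor polynomial while using one fewer coefficient (a generic illustration, not the BOSPOR fitting code):

```python
from math import exp, factorial

def pade_22_exp(x):
    """[2/2] Pade approximant of exp(x):
    (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12).
    A rational fit that matches the Taylor series of exp through order 4."""
    num = 1.0 + x / 2.0 + x * x / 12.0
    den = 1.0 - x / 2.0 + x * x / 12.0
    return num / den

def taylor4_exp(x):
    # Degree-4 Taylor polynomial of exp, for comparison
    return sum(x ** k / factorial(k) for k in range(5))
```

    At x = 1 the Padé error is about 4e-3 versus roughly 1e-2 for the Taylor polynomial, and near a pole of the underlying function (which a polynomial cannot have at all) the gap grows much larger.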

  5. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey

    2013-11-21

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  6. Prognostic value of serum heavy/light chain ratios in patients with POEMS syndrome.

    Science.gov (United States)

    Wang, Chen; Su, Wei; Cai, Qian-Qian; Cai, Hao; Ji, Wei; Di, Qian; Duan, Ming-Hui; Cao, Xin-Xin; Zhou, Dao-Bin; Li, Jian

    2016-07-01

    POEMS syndrome is a rare plasma cell dyscrasia. Serum concentrations of the monoclonal protein in this disorder are typically low and inapplicable for monitoring disease activity in most cases, resulting in limited practical and prognostic value. Novel immunoassays measuring isotype-specific heavy/light chain (HLC) pairs have shown their utility in disease monitoring and outcome prediction in several plasma cell dyscrasias. We report results of HLC measurements in 90 patients with POEMS syndrome. Sixty-six patients (73%; 95% confidence interval, 63-82%) had an abnormal HLC ratio at baseline. An abnormal baseline ratio stratified the risk of disease relapse and was strongly associated with worse progression-free survival in a multivariate analysis (P = 0.021; hazard ratio [HR] 6.89, 95% CI 1.34-35.43). After therapy, HLC ratios improved, with 43 patients (48%) remaining abnormal. An abnormal post-therapeutic HLC ratio also remained an independent prognostic factor associated with worse progression-free survival (P = 0.019; HR 4.30, 95% CI 1.27-14.56). These results suggest the prognostic utility of HLC ratios in the clinical management of POEMS patients. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Nuclear Hartree-Fock approximation testing and other related approximations

    International Nuclear Information System (INIS)

    Cohenca, J.M.

    1970-01-01

    Hartree-Fock and Tamm-Dancoff approximations are tested for the angular momentum of even-even nuclei. Wave functions, energy levels and momenta are comparatively evaluated. Quadrupole interactions are studied following the Elliott model. Results are applied to 20Ne.

  8. KERMA ratios in pediatric CT dosimetry

    International Nuclear Information System (INIS)

    Huda, Walter; Ogden, Kent M.; Lavallee, Robert L.; Roskopf, Marsha L.; Scalzetti, Ernest M.

    2012-01-01

    Patient organ doses may be estimated from CTDI values; more accurate estimates may be obtained by measuring KERMA (kinetic energy released in matter) in anthropomorphic phantoms and referencing these values to the free-in-air X-ray intensity. The aim was to measure KERMA ratios (R_K) in pediatric phantoms at CT. CT scans produce an air KERMA K in a phantom and an air KERMA K_CT at the isocenter; the KERMA ratio is defined as R_K = K/K_CT, measured using TLD chips in phantoms representing newborns to 10-year-olds. R_K in the newborn is approximately constant. For the other phantoms, there is a peak R_K value in the neck. The median R_K values for the GE scanner at 120 kV were 0.92, 0.83, 0.77 and 0.76 for newborns, 1-year-olds, 5-year-olds and 10-year-olds, respectively. Organ R_K values were 0.91 ± 0.04, 0.84 ± 0.07, 0.74 ± 0.09 and 0.72 ± 0.10 in newborns, 1-year-olds, 5-year-olds and 10-year-olds, respectively. At 120 kV, a Siemens Sensation 16 scanner had R_K values 5% higher than those of the GE LightSpeed Ultra. KERMA ratios may be combined with air KERMA measurements at the isocenter to estimate organ doses in pediatric CT patients. (orig.)
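    Using the ratios as the abstract suggests amounts to one multiplication of the free-in-air measurement by the tabulated ratio (a sketch with the quoted median values; illustrative arithmetic only, not a dosimetry tool):

```python
# Median KERMA ratios R_K = K / K_CT quoted above (GE scanner, 120 kV)
MEDIAN_RK = {"newborn": 0.92, "1-year": 0.83, "5-year": 0.77, "10-year": 0.76}

def in_phantom_kerma(k_ct, age_group):
    """Estimate the in-phantom air KERMA K = R_K * K_CT from the free-in-air
    KERMA K_CT measured at the isocenter, for the given phantom age group."""
    return MEDIAN_RK[age_group] * k_ct
```

    For example, a free-in-air KERMA of 10 mGy at the isocenter corresponds to roughly 9.2 mGy in the newborn phantom.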

  9. Arrival-time picking method based on approximate negentropy for microseismic data

    Science.gov (United States)

    Li, Yue; Ni, Zhuo; Tian, Yanan

    2018-05-01

    Accurate and dependable picking of the first arrival time for microseismic data is an important part of microseismic monitoring, which directly affects the results of post-processing analysis. This paper presents a new method based on approximate negentropy (AN) theory for microseismic arrival time picking under conditions of very low signal-to-noise ratio (SNR). According to the differences in information characteristics between microseismic data and random noise, an appropriate approximation of the negentropy function is selected to minimize the effect of SNR. At the same time, a weighted function of the differences between the maximum and minimum values of the AN spectrum curve is designed to obtain a proper threshold function. In this way, the signal and noise regions are distinguished and the first arrival time is picked accurately. To demonstrate the effectiveness of the AN method, we perform experiments on a series of synthetic data with SNR from -1 dB to -12 dB and compare it with the previously published Akaike information criterion (AIC) and short/long time average ratio (STA/LTA) methods. Experimental results indicate that all three methods pick well when the SNR is between -1 dB and -8 dB. However, when the SNR is as low as -8 dB to -12 dB, the proposed AN method yields more accurate and stable picking results than the AIC and STA/LTA methods. Furthermore, application to real three-component microseismic data also shows that the new method is superior to the other two methods in accuracy and stability.
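    The STA/LTA baseline the AN method is compared against can be sketched as a ratio of moving-average energies with a trigger threshold (window lengths and threshold here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def sta_lta_pick(trace, n_sta, n_lta, threshold):
    """Return the first sample index at which the short-term/long-term
    average energy ratio exceeds the threshold (a basic STA/LTA picker)."""
    energy = trace ** 2
    csum = np.concatenate([[0.0], np.cumsum(energy)])
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta   # mean energy over n_sta
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta   # mean energy over n_lta
    # Align both windows so they end at the same sample i >= n_lta
    ratio = sta[n_lta - n_sta:] / (lta + 1e-12)
    above = np.nonzero(ratio > threshold)[0]
    return None if above.size == 0 else int(above[0]) + n_lta

# Synthetic trace: background noise, then a stronger arrival at sample 500
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 1000)
trace[500:] += rng.normal(0.0, 5.0, 500)
pick = sta_lta_pick(trace, n_sta=20, n_lta=200, threshold=5.0)
```

    The picker triggers shortly after the true onset; its lag and its sensitivity to the threshold at low SNR are exactly the weaknesses the AN method targets.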

  10. Isoscalar compression modes in relativistic random phase approximation

    International Nuclear Information System (INIS)

    Ma, Zhong-yu; Van Giai, Nguyen.; Wandelt, A.; Vretenar, D.; Ring, P.

    2001-01-01

    Monopole and dipole compression modes in nuclei are analyzed in the framework of a fully consistent relativistic random phase approximation (RRPA), based on effective mean-field Lagrangians with nonlinear meson self-interaction terms. The large effect of Dirac sea states on isoscalar strength distribution functions is illustrated for the monopole mode. The main contribution of Fermi and Dirac sea pair states arises through the exchange of the scalar meson. The effect of vector meson exchange is much smaller. For the monopole mode, RRPA results are compared with constrained relativistic mean-field calculations. A comparison between experimental and calculated energies of isoscalar giant monopole resonances points to a value of 250-270 MeV for the nuclear matter incompressibility. A large discrepancy remains between theoretical predictions and experimental data for the dipole compression mode

  11. Boundary Control of Linear Uncertain 1-D Parabolic PDE Using Approximate Dynamic Programming.

    Science.gov (United States)

    Talaei, Behzad; Jagannathan, Sarangapani; Singler, John

    2018-04-01

    This paper develops a near-optimal boundary control method for distributed parameter systems governed by uncertain linear 1-D parabolic partial differential equations (PDE) by using approximate dynamic programming. A quadratic surface integral is proposed to express the optimal cost functional for the infinite-dimensional state space. Accordingly, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the infinite-dimensional domain without using any model reduction. Subsequently, a neural network identifier is developed to estimate the unknown spatially varying coefficient in the PDE dynamics. A novel tuning law is proposed to guarantee the boundedness of the identifier approximation error in the PDE domain. A radial basis network (RBN) is subsequently proposed to generate an approximate solution for the optimal surface kernel function online. A tuning law for the near-optimal RBN weights is designed such that the HJB equation error is minimized while the dynamics are identified and the closed-loop system remains stable. Ultimate boundedness (UB) of the closed-loop system is verified by using Lyapunov theory. The performance of the proposed controller is successfully confirmed by simulation on an unstable diffusion-reaction process.

  12. Approximate Implicitization Using Linear Algebra

    Directory of Open Access Journals (Sweden)

    Oliver J. D. Barrowclough

    2012-01-01

    Full Text Available We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.
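As an illustration of the SVD-based idea (not the authors' specific algorithms), one can recover an implicit conic from samples of a parametric curve: evaluate a monomial basis along the curve and take the right singular vector belonging to the smallest singular value. The unit circle is used here as an invented example:

```python
import numpy as np

# sample the parametric unit circle (x(t), y(t)) = (cos t, sin t)
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
x, y = np.cos(t), np.sin(t)

# monomial basis of total degree <= 2 evaluated along the curve
D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])

# the implicit coefficients span the (near-)null space of D
_, _, Vt = np.linalg.svd(D)
coeffs = Vt[-1]          # right singular vector of the smallest singular value
coeffs /= coeffs[0]      # normalise so the x^2 coefficient equals 1
# coeffs ≈ [1, 0, 1, 0, 0, -1], i.e. x^2 + y^2 - 1 = 0
```

For curves that admit no exact implicit representation in the chosen degree, the smallest singular value is nonzero and the same vector gives the least-squares approximate implicitization.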

  13. Ratio of carbon monoxide to molecular hydrogen in interstellar dark clouds

    International Nuclear Information System (INIS)

    Dickman, R.L.; Rensselaer Polytechnic Institute; and The Ivan A. Getting Laboratories, The Aerospace Corporation

    1978-01-01

    Carbon monoxide and molecular hydrogen column densities are compared at various locations within 38 interstellar dark clouds. CO column densities were obtained from radio observations of the J = 1→0 transitions of the 12C16O and 13C16O isotopic species of the molecule. Corresponding H2 column densities were inferred by means of visual extinctions derived from star counts, since it is argued that the standard gas-to-extinction ratio can be expected to remain valid in the clouds studied. For locations in the sources possessing line-of-sight visual extinctions in the approximate range 1.5-… mag, a linear relation N(H2) (cm^-2) = (5.0 ± 2.5) x 10^5 N13 is found between the molecular hydrogen and 13CO LTE column densities. The carbon monoxide molecule can therefore be used as a quantitative "tracer" for the (directly unobservable) H2 content of dark clouds. The above relationship implies that at least approximately 12% of the gas-phase carbon in the clouds studied is in the form of CO, provided that the clouds are assumed to be chemically homogeneous. Langer's ion-molecule chemistry for dark clouds appears to agree well with the present work if the fractionation channel of Watson, Anicich, and Huntress is included.

  14. Linear Time Local Approximation Algorithm for Maximum Stable Marriage

    Directory of Open Access Journals (Sweden)

    Zoltán Király

    2013-08-01

    Full Text Available We consider a two-sided market under incomplete preference lists with ties, where the goal is to find a maximum-size stable matching. The problem is APX-hard, and a 3/2-approximation was given by McDermid [1]. This algorithm has a non-linear running time and, more importantly, needs global knowledge of all preference lists. We present a very natural, economically reasonable, local, linear-time algorithm with the same ratio, using some ideas of Paluch [2]. In this algorithm every person makes decisions using only their own preference list and some information requested from members of that list (as in the case of the famous algorithm of Gale and Shapley). Some consequences for the Hospitals/Residents problem are also discussed.

  15. Extension of geometrical-optics approximation to on-axis Gaussian beam scattering. I. By a spherical particle.

    Science.gov (United States)

    Xu, Feng; Ren, Kuan Fang; Cai, Xiaoshu

    2006-07-10

    The geometrical-optics approximation of light scattering by a transparent or absorbing spherical particle is extended from plane wave to Gaussian beam incidence. The formulas for the calculation of the phase of each ray and the divergence factor are revised, and the interference of all the emerging rays is taken into account. The extended geometrical-optics approximation (EGOA) permits one to calculate the scattering diagram in all directions from 0 degrees to 180 degrees. The intensities of the scattered field calculated by the EGOA are compared with those calculated by the generalized Lorenz-Mie theory, and good agreement is found. The surface wave effect in Gaussian beam scattering is also qualitatively analyzed by introducing a flux ratio factor. The approach proposed is particularly important to the further extension of the geometrical-optics approximation to the scattering of large spheroidal particles.

  16. Nitrogen to phosphorus ratio of plant biomass versus soil solution in a tropical pioneer tree, Ficus insipida.

    Science.gov (United States)

    Garrish, Valerie; Cernusak, Lucas A; Winter, Klaus; Turner, Benjamin L

    2010-08-01

    It is commonly assumed that the nitrogen to phosphorus (N:P) ratio of a terrestrial plant reflects the relative availability of N and P in the soil in which the plant grows. Here, this was assessed for a tropical pioneer tree, Ficus insipida. Seedlings were grown in sand and irrigated with nutrient solutions containing N:P ratios ranging from 100. The experimental design further allowed investigation of physiological responses to N and P availability. Homeostatic control over N:P ratios was stronger in leaves than in stems or roots, suggesting that N:P ratios of stems and roots are more sensitive indicators of the relative availability of N and P at a site than N:P ratios of leaves. The leaf N:P ratio at which the largest plant dry mass and highest photosynthetic rates were achieved was approximately 11, whereas the corresponding whole-plant N:P ratio was approximately 6. Plant P concentration varied as a function of transpiration rate at constant nutrient solution P concentration, possibly due to transpiration-induced variation in the mass flow of P to root surfaces. The transpiration rate varied in response to nutrient solution N concentration, but not to nutrient solution P concentration, demonstrating nutritional control over transpiration by N but not P. Water-use efficiency varied as a function of N availability, but not as a function of P availability.

  17. Analyzing the errors of DFT approximations for compressed water systems

    International Nuclear Information System (INIS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-01-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm^3, where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≈ 15 meV/monomer for the liquid and the

  18. Conversion and matched filter approximations for serial minimum-shift keyed modulation

    Science.gov (United States)

    Ziemer, R. E.; Ryan, C. R.; Stilwell, J. H.

    1982-01-01

    Serial minimum-shift keyed (MSK) modulation, a technique for generating and detecting MSK using serial filtering, is ideally suited for high data rate applications provided the required conversion and matched filters can be closely approximated. Low-pass implementations of these filters as parallel inphase- and quadrature-mixer structures are characterized in this paper in terms of signal-to-noise ratio (SNR) degradation from ideal and envelope deviation. Several hardware implementation techniques utilizing microwave devices or lumped elements are presented. Optimization of parameter values results in realizations whose SNR degradation is less than 0.5 dB at error probabilities of 10^-6.

  19. Analysis of Case-Parent Trios Using a Loglinear Model with Adjustment for Transmission Ratio Distortion

    DEFF Research Database (Denmark)

    Huang, Lam Opal; Infante-Rivard, Claire; Labbe, Aurélie

    2016-01-01

    Transmission of the two parental alleles to offspring that deviates from the Mendelian ratio is termed Transmission Ratio Distortion (TRD); it occurs throughout gametic and embryonic development. TRD has been well studied in animals but remains largely unexplored in humans. The Transmission Disequilibrium…

  20. Safeguarding a Lunar Rover with Wald's Sequential Probability Ratio Test

    Science.gov (United States)

    Furlong, Michael; Dille, Michael; Wong, Uland; Nefian, Ara

    2016-01-01

    The virtual bumper is a safeguarding mechanism for autonomous and remotely operated robots. In this paper we take a new approach to the virtual bumper system by using an old statistical test. Using a modified version of Wald's sequential probability ratio test, we demonstrate that we can reduce the number of false positives reported by the virtual bumper, thereby saving valuable mission time. We use the sequential probability ratio to control vehicle speed in the presence of possible obstacles in order to increase certainty about whether or not obstacles are present. Our new algorithm reduces the chances of collision by approximately 98% relative to traditional virtual bumper safeguarding without speed control.
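Wald's sequential probability ratio test itself is standard. A minimal sketch for a binary obstacle/clear decision follows; the hit probabilities, error rates, and simulated sensor stream are illustrative assumptions, not the paper's values or its modified variant:

```python
import math
import random

def sprt(observations, p0=0.1, p1=0.6, alpha=0.01, beta=0.01):
    """Wald's SPRT: decide H1 (obstacle, hit prob p1) vs H0 (clear, hit prob p0)."""
    upper = math.log((1 - beta) / alpha)   # crossing it accepts H1
    lower = math.log(beta / (1 - alpha))   # crossing it accepts H0
    llr, k = 0.0, 0
    for hit in observations:
        k += 1
        # accumulate the log-likelihood ratio of the Bernoulli observation
        llr += math.log(p1 / p0) if hit else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "obstacle", k
        if llr <= lower:
            return "clear", k
    return "undecided", k

# simulated range-sensor hits with true hit probability 0.6 (obstacle present)
random.seed(1)
decision, n = sprt(random.random() < 0.6 for _ in range(100))
```

Because the test stops as soon as either threshold is crossed, it typically needs far fewer observations than a fixed-sample test at the same error rates, which is what makes it attractive for on-line speed control.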

  1. Nonlinear approximation with dictionaries I. Direct estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    2004-01-01

    We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation w...

  2. The human heart: application of the golden ratio and angle.

    Science.gov (United States)

    Henein, Michael Y; Zhao, Ying; Nicoll, Rachel; Sun, Lin; Khir, Ashraf W; Franklin, Karl; Lindqvist, Per

    2011-08-04

    The golden ratio, or golden mean, of 1.618 is a proportion known since antiquity to be the most aesthetically pleasing and has been used repeatedly in art and architecture. Both the golden ratio and the allied golden angle of 137.5° have been found within the proportions and angles of the human body and plants. In the human heart we found many applications of the golden ratio and angle, in addition to those previously described. In healthy hearts, vertical and transverse dimensions accord with the golden ratio, irrespective of different absolute dimensions due to ethnicity. In mild heart failure, the ratio of 1.618 was maintained but in end-stage heart failure the ratio significantly reduced. Similarly, in healthy ventricles mitral annulus dimensions accorded with the golden ratio, while in dilated cardiomyopathy and mitral regurgitation patients the ratio had significantly reduced. In healthy patients, both the angles between the mid-luminal axes of the pulmonary trunk and the ascending aorta continuation and between the outflow tract axis and continuation of the inflow tract axis of the right ventricle approximate to the golden angle, although in severe pulmonary hypertension, the angle is significantly increased. Hence the overall cardiac and ventricular dimensions in a normal heart are consistent with the golden ratio and angle, representing optimum pump structure and function efficiency, whereas there is significant deviation in the disease state. These findings could have anatomical, functional and prognostic value as markers of early deviation from normality. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
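The two constants in this record follow directly from φ = (1 + √5)/2. A quick numeric check against a pair of cardiac dimensions (the measurements below are invented for illustration, not data from the study):

```python
import math

phi = (1 + math.sqrt(5)) / 2             # golden ratio, ~1.618
golden_angle = 360.0 * (1 - 1 / phi)     # golden angle, ~137.5 degrees

# hypothetical vertical/transverse cardiac dimensions (mm), invented for illustration
vertical, transverse = 97.0, 60.0
ratio = vertical / transverse
deviation = abs(ratio - phi) / phi       # fractional deviation from the golden ratio
```

A deviation near zero corresponds to the "healthy heart" proportion described above, while disease states are reported to shift the ratio away from φ.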

  3. Spline approximation, Part 1: Basic methodology

    Science.gov (United States)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

    In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
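A minimal least-squares spline fit in the truncated power basis mentioned above can be sketched as follows; the knot positions, degree, and noisy test data are arbitrary choices for illustration, not from the paper:

```python
import numpy as np

def truncated_power_basis(x, knots, degree=3):
    """Design matrix of a spline in the truncated power basis: 1, x, ..., x^d, (x-k)_+^d."""
    cols = [x**d for d in range(degree + 1)]
    cols += [np.where(x > k, (x - k) ** degree, 0.0) for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)  # noisy samples

A = truncated_power_basis(x, knots=[0.25, 0.5, 0.75])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares spline coefficients
rmse = np.sqrt(np.mean((A @ coeffs - y) ** 2))  # residual close to the noise level
```

As the paper notes, the truncated power basis is simple but can be numerically ill-conditioned for many knots; the B-spline basis of the forthcoming Part 2 addresses exactly that.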

  4. Solving Ratio-Dependent Predator-Prey System with Constant Effort Harvesting Using Variational Iteration Method

    DEFF Research Database (Denmark)

    Ghotbi, Abdoul R; Barari, Amin

    2009-01-01

    Due to wide range of interest in use of bio-economic models to gain insight in to the scientific management of renewable resources like fisheries and forestry, variational iteration method (VIM) is employed to approximate the solution of the ratio-dependent predator-prey system with constant effort...

  5. Improved approximate inspirals of test bodies into Kerr black holes

    International Nuclear Information System (INIS)

    Gair, Jonathan R; Glampedakis, Kostas

    2006-01-01

    We present an improved version of the approximate scheme for generating inspirals of test bodies into a Kerr black hole recently developed by Glampedakis, Hughes and Kennefick. Their original 'hybrid' scheme was based on combining exact relativistic expressions for the evolution of the orbital elements (the semilatus rectum p and eccentricity e) with an approximate, weak-field, formula for the energy and angular momentum fluxes, amended by the assumption of constant inclination angle ι during the inspiral. Despite the fact that the resulting inspirals were overall well behaved, certain pathologies remained for orbits in the strong-field regime and for orbits which are nearly circular and/or nearly polar. In this paper we eliminate these problems by incorporating an array of improvements in the approximate fluxes. First, we add certain corrections which ensure the correct behavior of the fluxes in the limit of vanishing eccentricity and/or 90 deg. inclination. Second, we use higher order post-Newtonian formulas, adapted for generic orbits. Third, we drop the assumption of constant inclination. Instead, we first evolve the Carter constant by means of an approximate post-Newtonian expression and subsequently extract the evolution of ι. Finally, we improve the evolution of circular orbits by using fits to the angular momentum and inclination evolution determined by Teukolsky-based calculations. As an application of our improved scheme, we provide a sample of generic Kerr inspirals which we expect to be the most accurate to date, and for the specific case of nearly circular orbits we locate the critical radius where orbits begin to decircularize under radiation reaction. These easy-to-generate inspirals should become a useful tool for exploring LISA data analysis issues and may ultimately play a role in the detection of inspiral signals in the LISA data

  6. Association between isolation of Staphylococcus aureus one week after calving and milk yield, somatic cell count, clinical mastitis, and culling through the remaining lactation.

    Science.gov (United States)

    Whist, Anne Cathrine; Osterås, Olav; Sølverød, Liv

    2009-02-01

    The association between isolation of Staphylococcus aureus approximately 1 week after calving and milk yield, somatic cell count (SCC), clinical mastitis (CM), and culling risk through the remaining lactation was assessed in 178 Norwegian dairy herds. Mixed models with repeated measures were used to compare milk yield and SCC, and survival analyses were used to estimate the hazard ratio for CM and culling. On average, cows with an isolate of Staph. aureus had a significantly higher SCC than culture-negative cows. If no post-milking teat disinfection (PMTD) was used, the mean SCC values were 42,000, 61,000, 68,000, and 77,000 cells/ml for cows with no Staph. aureus isolate, with Staph. aureus isolated in 1 quarter, in 2 quarters, and in more than 2 quarters, respectively. If iodine PMTD was used, the SCC means were 36,000, 63,000, 70,000, and 122,000 cells/ml, respectively. Primiparous cows testing positive for Staph. aureus had the same milk yield curve as culture-negative cows, except for those with Staph. aureus isolated in more than 2 quarters, which produced 229 kg less during a 305-d lactation. Multiparous cows with isolation of Staph. aureus in at least 1 quarter produced 94-161 kg less milk in 2nd and >3rd parity, respectively, and those with isolation in more than 2 quarters produced 303-390 kg less than multiparous culture-negative animals during a 305-d lactation. Compared with culture-negative cows, the hazard ratios for CM and culling in cows with isolation of Staph. aureus in at least 1 quarter were 2.0 (1.6-2.4) and 1.7 (1.5-1.9), respectively. There was a decrease in SCC and in CM risk in culture-negative cows where iodine PMTD had been used, indicating that iodine PMTD has a preventive effect on already healthy cows. For cows testing positive for Staph. aureus in more than 2 quarters at calving, iodine PMTD had a negative effect on the CM risk and on the SCC through the remaining lactation.

  7. Gyromagnetic ratio of charged Kerr-anti-de Sitter black holes

    International Nuclear Information System (INIS)

    Aliev, Alikram N

    2007-01-01

    We examine the gyromagnetic ratios of rotating and charged AdS black holes in four and higher spacetime dimensions. We compute the gyromagnetic ratio for Kerr-AdS black holes with an arbitrary electric charge in four dimensions and show that it corresponds to g = 2 irrespective of the AdS nature of the spacetime. We also compute the gyromagnetic ratio for Kerr-AdS black holes with a single angular momentum and with a test electric charge in all higher dimensions. The gyromagnetic ratio crucially depends on the dimensionless ratio of the rotation parameter to the curvature radius of the AdS background. At the critical limit, when the boundary Einstein universe is rotating at the speed of light, it exhibits a striking feature leading to g → 2 regardless of the spacetime dimension. Next, we extend our consideration to include the exact metric for five-dimensional rotating charged black holes in minimal gauged supergravity. We show that the value of the gyromagnetic ratio found in the 'test-charge' approach remains unchanged for these black holes.

  8. Efficient, Low Pressure Ratio Propulsor for Gas Turbine Engines

    Science.gov (United States)

    Gallagher, Edward J. (Inventor); Monzon, Byron R. (Inventor)

    2018-01-01

    A gas turbine engine includes a bypass flow passage that has an inlet and defines a bypass ratio in a range of approximately 8.5 to 13.5. A fan is arranged within the bypass flow passage. A first turbine is a 5-stage turbine and is coupled with a first shaft, which is coupled with the fan. A first compressor is coupled with the first shaft and is a 3-stage compressor. A second turbine is coupled with a second shaft and is a 2-stage turbine. The fan includes a row of fan blades that extend from a hub. The row includes a number (N) of the fan blades, a solidity value (R) at the tips of the fan blades, and a ratio of N/R that is from 14 to 16.

  9. A comparison between decomposition rates of buried and surface remains in a temperate region of South Africa.

    Science.gov (United States)

    Marais-Werner, Anátulie; Myburgh, J; Becker, P J; Steyn, M

    2018-01-01

    Several studies have been conducted on decomposition patterns and rates of surface remains; however, much less is known about this process for buried remains. Understanding the process of decomposition in buried remains is extremely important and aids criminal investigations, especially when attempting to estimate the post mortem interval (PMI). The aim of this study was to compare the rates of decomposition between buried and surface remains. For this purpose, 25 pigs (Sus scrofa; 45-80 kg) were buried and excavated at different post mortem intervals (7, 14, 33, 92, and 183 days). The observed total body scores were then compared to those of surface remains decomposing at the same location. Stages of decomposition were scored according to separate categories for different anatomical regions based on standardised methods. Variation in the degree of decomposition was considerable, especially among the buried 7-day interval pigs, which displayed different degrees of discolouration in the lower abdomen and trunk. At 14 and 33 days, buried pigs displayed features commonly associated with the early stages of decomposition, but with less variation. A state of advanced decomposition was then reached, after which little change was observed over the next ±90-183 days after interment. Although the patterns of decomposition for buried and surface remains were very similar, the rates differed considerably. Based on the observations made in this study, guidelines for the estimation of PMI are proposed. These pertain to buried remains found at a depth of approximately 0.75 m in the Central Highveld of South Africa.

  10. Non-Gaussianity in two-field inflation beyond the slow-roll approximation

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Gabriel; Tent, Bartjan van, E-mail: gabriel.jung@th.u-psud.fr, E-mail: bartjan.van-tent@th.u-psud.fr [Laboratoire de Physique Théorique (UMR 8627), CNRS, Univ. Paris-Sud, Université Paris-Saclay, Bâtiment 210, 91405 Orsay Cedex (France)

    2017-05-01

    We use the long-wavelength formalism to investigate the level of bispectral non-Gaussianity produced in two-field inflation models with standard kinetic terms. Even though the Planck satellite has so far not detected any primordial non-Gaussianity, it has tightened the constraints significantly, and it is important to better understand what regions of inflation model space have been ruled out, as well as prepare for the next generation of experiments that might reach the important milestone of Δf_NL^local = 1. We derive an alternative formulation of the previously derived integral expression for f_NL, which makes it easier to physically interpret the result and see which types of potentials can produce large non-Gaussianity. We apply this to the case of a sum potential and show that it is very difficult to satisfy simultaneously the conditions for a large f_NL and the observational constraints on the spectral index n_s. In the case of the sum of two monomial potentials and a constant we explicitly show in which small region of parameter space this is possible, and we show how to construct such a model. Finally, the new general expression for f_NL also allows us to prove that for the sum potential the explicit expressions derived within the slow-roll approximation remain valid even when the slow-roll approximation is broken during the turn of the field trajectory (as long as only the ε slow-roll parameter remains small).

  11. Queen-worker caste ratio depends on colony size in the pharaoh ant (Monomorium pharaonis)

    DEFF Research Database (Denmark)

    Schmidt, Anna Mosegaard; Linksvayer, Timothy Arnold; Boomsma, Jacobus Jan

    2011-01-01

    The success of an ant colony depends on the simultaneous presence of reproducing queens and nonreproducing workers in a ratio that will maximize colony growth and reproduction. Despite its presumably crucial role, queen-worker caste ratios (the ratio of adult queens to workers) and the factors affecting this variable remain scarcely studied. Maintaining polygynous pharaoh ant (Monomorium pharaonis) colonies in the laboratory has provided us with the opportunity to experimentally manipulate colony size, one of the key factors that can be expected to affect colony-level queen-worker caste ratios. … species with budding colonies may adaptively adjust caste ratios to ensure rapid growth.

  12. An Improved Direction Finding Algorithm Based on Toeplitz Approximation

    Directory of Open Access Journals (Sweden)

    Qing Wang

    2013-01-01

    Full Text Available In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth-order cumulants multiple signal classification (TFOC-MUSIC) algorithm is proposed by combining a fast MUSIC-like algorithm, termed the modified fourth-order cumulants MUSIC (MFOC-MUSIC) algorithm, with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. In addition, the computational complexity is reduced due to the decreased dimension of the fourth-order cumulants matrix, which is equal to the number of virtual array elements; that is, the effective array aperture of the physical array remains unchanged. However, due to finite sampling snapshots, there exists an estimation error in the reduced-rank FOC matrix, and thus the DOA estimation capacity degrades. In order to improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, matching the ideal matrix whose Toeplitz structure yields optimal estimation results. The theoretical formulas of the proposed algorithm are derived, and simulation results are presented. From the simulations, in comparison with the MFOC-MUSIC algorithm, it is concluded that the TFOC-MUSIC algorithm yields an excellent performance in both spatially white and spatially colored noise environments.
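The Toeplitz-recovery step can be illustrated independently of the DOA setting. One common way to project a matrix onto Toeplitz structure (not necessarily the exact procedure of this paper) is to average along each diagonal, which gives the nearest Toeplitz matrix in the Frobenius norm. A sketch, with a small invented test matrix:

```python
import numpy as np

def toeplitz_approx(M):
    """Nearest Toeplitz matrix in the Frobenius norm: average each diagonal of M."""
    n = M.shape[0]
    T = np.zeros_like(M, dtype=float)
    for k in range(-(n - 1), n):
        avg = np.diagonal(M, offset=k).mean()
        i = np.arange(max(0, -k), min(n, n - k))
        T[i, i + k] = avg  # write the averaged value back onto the whole diagonal
    return T

# symmetric Toeplitz test matrix plus noise, invented for illustration
c = np.array([4.0, 2.0, 1.0])
M_true = np.array([[c[abs(i - j)] for j in range(3)] for i in range(3)])
rng = np.random.default_rng(7)
M_noisy = M_true + 0.1 * rng.standard_normal((3, 3))
T = toeplitz_approx(M_noisy)  # Toeplitz, and closer to M_true than M_noisy is
```

Because diagonal averaging is an orthogonal projection onto the subspace of Toeplitz matrices, the projected estimate is never farther from the true Toeplitz matrix than the noisy one, which is the intuition behind the performance gain reported above.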

  13. Peak power ratio generator

    Science.gov (United States)

    Moyer, R.D.

    A peak power ratio generator is described for measuring, in combination with a conventional power meter, the peak power level of extremely narrow pulses in the gigahertz radio frequency bands. The present invention in a preferred embodiment utilizes a tunnel diode and a back diode combination in a detector circuit as the only high speed elements. The high speed tunnel diode provides a bistable signal and serves as a memory device of the input pulses for the remaining, slower components. A hybrid digital and analog loop maintains the peak power level of a reference channel at a known amount. Thus, by measuring the average power levels of the reference signal and the source signal, the peak power level of the source signal can be determined.
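The peak/average relationship such an instrument exploits is simple for a rectangular pulse train: peak power equals average power divided by the duty cycle. A sketch of that arithmetic (the pulse parameters are invented for illustration, not from the patent):

```python
import math

def peak_power_dbm(avg_power_mw, pulse_width_s, period_s):
    """Peak power (dBm) of a rectangular pulse train from its average power."""
    duty_cycle = pulse_width_s / period_s
    peak_mw = avg_power_mw / duty_cycle   # P_peak = P_avg / duty cycle
    return 10.0 * math.log10(peak_mw)

# 1 ns pulses repeating every 1 us (duty cycle 0.001), 0.5 mW average power
peak_dbm = peak_power_dbm(0.5, 1e-9, 1e-6)   # 500 mW peak, ~27 dBm
```

This is why an average-reading power meter plus a known reference channel suffices: once the reference peak level is held constant, comparing average readings yields the source's peak power.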

  14. Technique for Selecting Optimum Fan Compression Ratio based on the Effective Power Plant Parameters

    Directory of Open Access Journals (Sweden)

    I. I. Kondrashov

    2016-01-01

    Full Text Available Nowadays, civilian aircraft occupy the major share of the global aviation industry market. For medium- and long-haul aircraft, turbofans with separate exhaust streams are widely used, and fuel efficiency is the main criterion for such engines. The paper presents the results of research on the mutual influence of the fan pressure ratio and the bypass ratio on the effective specific fuel consumption, and shows that increasing the bypass ratio is a rational step for reducing fuel consumption. It also considers the basic features of engines with a high bypass ratio. Among the working-process parameters, the fan pressure ratio and the bypass ratio are the most relevant for consideration, as they are the main design variables at a given level of technical excellence. The paper presents the dependence of the nacelle drag coefficient on the engine bypass ratio. The computation used the projected parameters of prospective turbofans for the power plant of a 180-seat medium-haul aircraft. The engine cycle was computed in Mathcad using these data, with the fan pressure ratio and the bypass ratio being varied; the combustion-chamber gas temperature, the overall pressure ratio, and the engine thrust remained constant, as did the pressure-loss coefficients, the efficiencies of the engine components, and the amount of air taken for cooling. The optimal parameters corresponding to the minimum effective specific fuel consumption were found as a result of the computation. The paper gives recommendations for adjusting the optimal parameters depending on external factors such as engine weight and the required fuel reserve. The obtained data can be used to estimate parameters of future turbofan engines with a high bypass ratio.

  15. [PALEOPATHOLOGY OF HUMAN REMAINS].

    Science.gov (United States)

    Minozzi, Simona; Fornaciari, Gino

    2015-01-01

    Many diseases induce alterations in the human skeleton, leaving traces of their presence in ancient remains. Paleopathological examination of human remains not only allows the study of the history and evolution of a disease, but also the reconstruction of health conditions in past populations. This paper describes the most interesting diseases observed in skeletal samples from the Roman Imperial Age necropolises found in urban and suburban areas of Rome during archaeological excavations in the last decades. The diseases observed were grouped into the following categories: articular diseases, traumas, infections, metabolic or nutritional diseases, congenital diseases and tumours, and some examples are reported for each group. Although extensive epidemiological investigation of ancient skeletal records is impossible, the palaeopathological study made it possible to highlight the spread of numerous illnesses, many of which can be related to the life and health conditions of the Roman population.

  16. Chemical forms and discharge ratios to stack and sea of tritium from Tokai Reprocessing Plant

    International Nuclear Information System (INIS)

    Mikami, Satoshi; Akiyama, Kiyomitsu; Miyabe, Kenjiro

    2002-03-01

    Chemical forms and discharge ratios to stack and sea of tritium from the Tokai Reprocessing Plant of the Japan Nuclear Cycle Development Institute (JNC) were investigated by analyzing monitoring data. Analysis of the data taken from the reprocessing campaigns in 1994, 1995, 1996, 1997, 2000 and 2001 ascertained that approximately 70-80% of the tritium discharged from the main stack was tritiated water vapor (HTO) and approximately 20-30% was tritiated hydrogen (HT). It also showed that the amount of tritium released from the stack was less than 1% of the tritium inventory in the spent fuel, while the amount of tritium released into the sea was approximately 20-40% of the inventory. (author)

  17. Survey of plutonium and uranium atom ratios and activity levels in Mortandad Canyon

    Energy Technology Data Exchange (ETDEWEB)

    Gallaher, B.M.; Benjamin, T.M.; Rokop, D.J.; Stoker, A.K.

    1997-09-22

    For more than three decades Mortandad Canyon has been the primary release area of treated liquid radioactive waste from the Los Alamos National Laboratory (Laboratory). In this survey, six water samples and seven stream sediment samples collected in Mortandad Canyon were analyzed by thermal ionization mass spectrometry (TIMS) to determine the plutonium and uranium activity levels and atom ratios. By measuring the {sup 240}Pu/{sup 239}Pu atom ratios, the Laboratory plutonium component was evaluated relative to that from global fallout. Measurements of the relative abundance of {sup 235}U and {sup 236}U were also used to identify non-natural components. The survey results indicate that the Laboratory plutonium and uranium concentrations in waters and sediments decrease relatively rapidly with distance downstream from the major industrial sources. Plutonium concentrations in shallow alluvial groundwater decrease by approximately 1,000-fold along a 3,000-ft distance. At the Laboratory downstream boundary, total plutonium and uranium concentrations were generally within regional background ranges previously reported. Laboratory-derived plutonium is readily distinguished from global fallout in on-site waters and sediments. The isotopic ratio data indicate off-site migration of trace levels of Laboratory plutonium in stream sediments to distances approximately two miles downstream of the Laboratory boundary.

  18. Survey of plutonium and uranium atom ratios and activity levels in Mortandad Canyon

    Energy Technology Data Exchange (ETDEWEB)

    Gallaher, B.M.; Efurd, D.W.; Rokop, D.J.; Benjamin, T.M. [Los Alamos National Lab., NM (United States); Stoker, A.K. [Science Applications, Inc., White Rock, NM (United States)

    1997-10-01

    For more than three decades, Mortandad Canyon has been the primary release area of treated liquid radioactive waste from the Los Alamos National Laboratory (Laboratory). In this survey, six water samples and seven stream sediment samples collected in Mortandad Canyon were analyzed by thermal ionization mass spectrometry to determine the plutonium and uranium activity levels and atom ratios. By measuring the {sup 240}Pu/{sup 239}Pu atom ratios, the Laboratory plutonium component was evaluated relative to that from global fallout. Measurements of the relative abundance of {sup 235}U and {sup 236}U were also used to identify non-natural components. The survey results indicate that the Laboratory plutonium and uranium concentrations in waters and sediments decrease relatively rapidly with distance downstream from the major industrial sources. Plutonium concentrations in shallow alluvial groundwater decrease by approximately 1,000-fold along a 3,000-ft distance. At the Laboratory downstream boundary, total plutonium and uranium concentrations were generally within regional background ranges previously reported. Laboratory-derived plutonium is readily distinguished from global fallout in on-site waters and sediments. The isotopic ratio data indicate off-site migration of trace levels of Laboratory plutonium in stream sediments to distances approximately two miles downstream of the Laboratory boundary.

  19. Survey of plutonium and uranium atom ratios and activity levels in Mortandad Canyon

    International Nuclear Information System (INIS)

    Gallaher, B.M.; Efurd, D.W.; Rokop, D.J.; Benjamin, T.M.; Stoker, A.K.

    1997-10-01

    For more than three decades, Mortandad Canyon has been the primary release area of treated liquid radioactive waste from the Los Alamos National Laboratory (Laboratory). In this survey, six water samples and seven stream sediment samples collected in Mortandad Canyon were analyzed by thermal ionization mass spectrometry to determine the plutonium and uranium activity levels and atom ratios. By measuring the ²⁴⁰Pu/²³⁹Pu atom ratios, the Laboratory plutonium component was evaluated relative to that from global fallout. Measurements of the relative abundance of ²³⁵U and ²³⁶U were also used to identify non-natural components. The survey results indicate that the Laboratory plutonium and uranium concentrations in waters and sediments decrease relatively rapidly with distance downstream from the major industrial sources. Plutonium concentrations in shallow alluvial groundwater decrease by approximately 1,000-fold along a 3,000-ft distance. At the Laboratory downstream boundary, total plutonium and uranium concentrations were generally within regional background ranges previously reported. Laboratory-derived plutonium is readily distinguished from global fallout in on-site waters and sediments. The isotopic ratio data indicate off-site migration of trace levels of Laboratory plutonium in stream sediments to distances approximately two miles downstream of the Laboratory boundary

  20. Survey of plutonium and uranium atom ratios and activity levels in Mortandad Canyon

    International Nuclear Information System (INIS)

    Gallaher, B.M.; Benjamin, T.M.; Rokop, D.J.; Stoker, A.K.

    1997-01-01

    For more than three decades Mortandad Canyon has been the primary release area of treated liquid radioactive waste from the Los Alamos National Laboratory (Laboratory). In this survey, six water samples and seven stream sediment samples collected in Mortandad Canyon were analyzed by thermal ionization mass spectrometry (TIMS) to determine the plutonium and uranium activity levels and atom ratios. By measuring the ²⁴⁰Pu/²³⁹Pu atom ratios, the Laboratory plutonium component was evaluated relative to that from global fallout. Measurements of the relative abundance of ²³⁵U and ²³⁶U were also used to identify non-natural components. The survey results indicate that the Laboratory plutonium and uranium concentrations in waters and sediments decrease relatively rapidly with distance downstream from the major industrial sources. Plutonium concentrations in shallow alluvial groundwater decrease by approximately 1,000-fold along a 3,000-ft distance. At the Laboratory downstream boundary, total plutonium and uranium concentrations were generally within regional background ranges previously reported. Laboratory-derived plutonium is readily distinguished from global fallout in on-site waters and sediments. The isotopic ratio data indicate off-site migration of trace levels of Laboratory plutonium in stream sediments to distances approximately two miles downstream of the Laboratory boundary

  1. Standardized binomial models for risk or prevalence ratios and differences.

    Science.gov (United States)

    Richardson, David B; Kinlaw, Alan C; MacLehose, Richard F; Cole, Stephen R

    2015-10-01

    Epidemiologists often analyse binary outcomes in cohort and cross-sectional studies using multivariable logistic regression models, yielding estimates of adjusted odds ratios. It is widely known that the odds ratio closely approximates the risk or prevalence ratio when the outcome is rare, and it does not do so when the outcome is common. Consequently, investigators may decide to directly estimate the risk or prevalence ratio using a log binomial regression model. We describe the use of a marginal structural binomial regression model to estimate standardized risk or prevalence ratios and differences. We illustrate the proposed approach using data from a cohort study of coronary heart disease status in Evans County, Georgia, USA. The approach reduces problems with model convergence typical of log binomial regression by shifting all explanatory variables except the exposures of primary interest from the linear predictor of the outcome regression model to a model for the standardization weights. The approach also facilitates evaluation of departures from additivity in the joint effects of two exposures. Epidemiologists should consider reporting standardized risk or prevalence ratios and differences in cohort and cross-sectional studies. These are readily-obtained using the SAS, Stata and R statistical software packages. The proposed approach estimates the exposure effect in the total population. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
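    The rare-outcome caveat above is easy to see numerically. The following sketch uses two hypothetical 2x2 cohort tables (invented counts for illustration, not the Evans County data) to compare the odds ratio with the risk ratio:

```python
# Illustrative 2x2 cohort tables: exposed group has a events and b non-events,
# unexposed group has c events and d non-events.
def risk_ratio(a, b, c, d):
    # ratio of the risk among exposed to the risk among unexposed
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    # cross-product ratio of the 2x2 table
    return (a * d) / (b * c)

# Rare outcome (1% vs 0.5% risk): the odds ratio tracks the risk ratio.
rr_rare = risk_ratio(10, 990, 5, 995)   # 2.0
or_rare = odds_ratio(10, 990, 5, 995)   # ~2.01

# Common outcome (60% vs 30% risk): the odds ratio overstates the risk ratio.
rr_common = risk_ratio(60, 40, 30, 70)  # 2.0
or_common = odds_ratio(60, 40, 30, 70)  # 3.5
```

With a common outcome the discrepancy is substantial, which is why direct estimation of the risk ratio (e.g. via log-binomial or standardized models, as in the abstract) is preferred.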

  2. The efficiency of Flory approximation

    International Nuclear Information System (INIS)

    Obukhov, S.P.

    1984-01-01

    The Flory approximation for the self-avoiding chain problem is compared with a conventional perturbation theory expansion. While in perturbation theory each term is averaged over the unperturbed set of configurations, the Flory approximation is equivalent to the perturbation theory with the averaging over the stretched set of configurations. This imposes restrictions on the integration domain in higher order terms and they can be treated self-consistently. The accuracy δν/ν of Flory approximation for self-avoiding chain problems is estimated to be 2-5% for 1 < d < 4. (orig.)

  3. Tests and Confidence Intervals for an Extended Variance Component Using the Modified Likelihood Ratio Statistic

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet

    2005-01-01

    The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.

  4. Fundamental ratios and stock market performance : evidence from Turkey

    OpenAIRE

    Parlak, Deniz

    2013-01-01

    The fundamental analysis strives to determine the approximate future market value of a firm and an important step in a fundamental analysis is the computation of basic ratios which provide an indication of firms' financial performance in several key areas. The purpose of this study is to investigate the financial performance of Turkish manufacturing companies and the impact of this performance on common stock returns for the three years from 2009 to 2012. The sample consisted of 20 chemical-s...

  5. Denoising in Wavelet Packet Domain via Approximation Coefficients

    Directory of Open Access Journals (Sweden)

    Zahra Vahabi

    2012-01-01

    Full Text Available In this paper we propose a new approach to image denoising in the wavelet domain. In recent research the wavelet transform has been used as a time-frequency transform for computing wavelet coefficients and eliminating noise. Some coefficients are affected by noise less than others, so they can be used together with the other subbands to reconstruct the image. We use the approximation subimage to estimate a better denoised image: the naturally less noisy subimage yields a reconstruction with lower noise. Besides denoising, we obtain a higher compression rate, and increased image contrast is another advantage of this method. Experimental results demonstrate that our approach compares favorably to more typical methods of denoising and compression in the wavelet domain. Testing 100 images of the LIVE dataset and comparing signal-to-noise ratios (SNR), soft thresholding was 1.12% better than hard thresholding, POAC was 1.94% better than soft thresholding, and POAC with wavelet packets was 1.48% better than POAC.
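    The soft and hard thresholding rules compared in the abstract are standard; a minimal self-contained sketch (toy coefficients and an assumed threshold, not the LIVE images or the POAC method) shows the two rules and the SNR-in-dB comparison:

```python
import math

def soft(x, t):
    # soft thresholding: shrink every coefficient toward zero by t
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in x]

def hard(x, t):
    # hard thresholding: keep a coefficient only if it exceeds t
    return [v if abs(v) > t else 0.0 for v in x]

def snr_db(clean, est):
    # signal-to-noise ratio in decibels of an estimate vs. the clean signal
    signal = sum(c * c for c in clean)
    error = sum((c - e) ** 2 for c, e in zip(clean, est))
    return 10.0 * math.log10(signal / error)

# Sparse "clean" coefficients plus small additive noise (toy values).
clean = [4.0, 0.0, 0.0, 5.0, 0.0, 0.0, -6.0, 0.0]
noisy = [4.3, 0.2, -0.1, 5.4, 0.3, -0.2, -6.3, 0.1]

snr_noisy = snr_db(clean, noisy)
snr_hard = snr_db(clean, hard(noisy, 0.5))
snr_soft = snr_db(clean, soft(noisy, 0.5))
```

Both thresholding rules raise the SNR here because the small noisy coefficients are zeroed while the large signal-bearing ones survive.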

  6. A low-fluorine solution with a 2:1 F/Ba mole ratio for the fabrication of YBCO films

    DEFF Research Database (Denmark)

    Wu, Wei; Feng, Feng; Yue, Zhao

    2014-01-01

    The F/Ba mole ratio must be at least 2 for full conversion of the Ba-precursor to BaF2 to avoid the formation of BaCO3, which is detrimental to the superconducting performance of YBCO films. In this study, a solution with a 2:1 F/Ba mole ratio was developed; the fluorine content of this solution was only approximately 10.3% of that used in the conventional TFA-MOD method. Attenuated total reflectance Fourier transform infrared (ATR-FT-IR) spectra revealed that BaCO3 was remarkably suppressed in the as-pyrolyzed film and eliminated at 700 °C. Thus, YBCO films with a critical current density (Jc) of over 5 MA cm−2 (77 K, 0 T, 200 nm thickness) could be obtained on lanthanum aluminate single-crystal substrates. In situ FT-IR spectra showed that no obvious fluorinated gaseous by-products were detected in the pyrolysis step, which indicated that all F atoms might remain in the film as fluorides. X-ray diffraction...

  7. Lindhard's polarization parameter and atomic sum rules in the local plasma approximation

    DEFF Research Database (Denmark)

    Cabrera-Trujillo, R.; Apell, P.; Oddershede, J.

    2017-01-01

    In this work, we analyze the effects of the Lindhard polarization parameter, χ, on the sum rule, Sp, within the local plasma approximation (LPA), as well as on the logarithmic sum rule Lp = dSp/dp, in both cases for the system in an initial excited state. We show results for a hydrogenic atom in terms of a screened charge Z* for the ground state. Our study shows that by increasing χ, the sum rule decreases for p < 0 and increases for p > 0, while the value p = 0 provides the normalization/closure relation, which remains fixed to the number of electrons for the same initial state. When p is fixed...

  8. Approximation Preserving Reductions among Item Pricing Problems

    Science.gov (United States)

    Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei

    When a store sells items to customers, the store wishes to determine the prices of the items to maximize its profit. Intuitively, if the store sells the items with low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store. So it would be hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items, and also assume that each item i ∈ V has the production cost di and each customer ej ∈ E has the valuation vj on the bundle ej ⊆ V of items. When the store sells an item i ∈ V at the price ri, the profit for the item i is pi = ri - di. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that pi ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of “loss-leader,” and showed that the seller can get more total profit in the case that pi < 0 is allowed than in the case that pi < 0 is not allowed. In this paper, we derive approximation preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratios.

  9. Young’s Modulus and Poisson’s Ratio of Monolayer Graphyne

    Directory of Open Access Journals (Sweden)

    H. Rouhi

    2013-09-01

    Full Text Available Despite its numerous potential applications, two-dimensional monolayer graphyne, a novel form of carbon allotrope with sp and sp2 carbon atoms, has received little attention so far, perhaps as a result of its unknown properties. In particular, determination of the exact values of its elastic properties can pave the way for future studies on this nanostructure. Hence, this article describes a density functional theory (DFT) investigation into the elastic properties of graphyne, including the surface Young’s modulus and Poisson’s ratio. The DFT analyses are performed within the framework of the generalized gradient approximation (GGA), and the Perdew–Burke–Ernzerhof (PBE) exchange-correlation functional is adopted. This study indicates that the elastic modulus of graphyne is approximately half that of graphene due to its lower number of bonds.

  10. Weighted approximation with varying weight

    CERN Document Server

    Totik, Vilmos

    1994-01-01

    A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof of the strong asymptotics in some L^p extremal problems on the real line with exponential weights, which, for the case p = 2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L^p extremal problems with varying weights. Applications are given relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.

  11. Bent approximations to synchrotron radiation optics

    International Nuclear Information System (INIS)

    Heald, S.

    1981-01-01

    Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors

  12. INTOR cost approximation

    International Nuclear Information System (INIS)

    Knobloch, A.F.

    1980-01-01

    A simplified cost approximation for INTOR parameter sets in a narrow parameter range is shown. Plausible constraints permit the evaluation of the consequences of parameter variations on overall cost. (orig.) [de

  13. Adaptive EMG noise reduction in ECG signals using noise level approximation

    Science.gov (United States)

    Marouf, Mohamed; Saranovac, Lazar

    2017-12-01

    In this paper the usage of noise level approximation for adaptive Electromyogram (EMG) noise reduction in the Electrocardiogram (ECG) signals is introduced. To achieve the adequate adaptiveness, a translation-invariant noise level approximation is employed. The approximation is done in the form of a guiding signal extracted as an estimation of the signal quality vs. EMG noise. The noise reduction framework is based on a bank of low pass filters. So, the adaptive noise reduction is achieved by selecting the appropriate filter with respect to the guiding signal aiming to obtain the best trade-off between the signal distortion caused by filtering and the signal readability. For the evaluation purposes; both real EMG and artificial noises are used. The tested ECG signals are from the MIT-BIH Arrhythmia Database Directory, while both real and artificial records of EMG noise are added and used in the evaluation process. Firstly, comparison with state of the art methods is conducted to verify the performance of the proposed approach in terms of noise cancellation while preserving the QRS complex waves. Additionally, the signal to noise ratio improvement after the adaptive noise reduction is computed and presented for the proposed method. Finally, the impact of adaptive noise reduction method on QRS complexes detection was studied. The tested signals are delineated using a state of the art method, and the QRS detection improvement for different SNR is presented.
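    The selection step described above, picking a filter from a bank according to an approximated noise level, can be sketched in a few lines. Everything here is an assumption for illustration (a moving-average bank, a first-difference noise estimate, and made-up thresholds), not the authors' guiding signal or filters:

```python
def moving_average(x, width):
    # simple low-pass filter: average over a window centered on each sample
    half = width // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def noise_level(x):
    # crude high-frequency energy estimate: mean absolute first difference
    return sum(abs(a - b) for a, b in zip(x[1:], x)) / (len(x) - 1)

def denoise_adaptive(x, bank=(1, 3, 5, 7), thresholds=(0.05, 0.15, 0.3)):
    # choose the mildest filter whose noise threshold covers the estimate;
    # fall back to the strongest filter for very noisy input
    level = noise_level(x)
    for width, t in zip(bank, thresholds):
        if level < t:
            return moving_average(x, width)
    return moving_average(x, bank[-1])
```

A clean signal passes through essentially unfiltered (width 1), while a heavily contaminated one receives the widest filter, which mirrors the trade-off between distortion and readability described in the abstract.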

  14. Invisible Base Electrode Coordinates Approximation for Simultaneous SPECT and EEG Data Visualization

    Science.gov (United States)

    Kowalczyk, L.; Goszczynska, H.; Zalewska, E.; Bajera, A.; Krolicki, L.

    2014-04-01

    This work was performed as part of a larger research concerning the feasibility of improving the localization of epileptic foci, as compared to the standard SPECT examination, by applying the technique of EEG mapping. The presented study extends our previous work on the development of a method for superposition of SPECT images and EEG 3D maps when these two examinations are performed simultaneously. Due to the lack of anatomical data in SPECT images it is a much more difficult task than in the case of MRI/EEG study where electrodes are visible in morphological images. Using the appropriate dose of radioisotope we mark five base electrodes to make them visible in the SPECT image and then approximate the coordinates of the remaining electrodes using properties of the 10-20 electrode placement system and the proposed nine-ellipses model. This allows computing a sequence of 3D EEG maps spanning on all electrodes. It happens, however, that not all five base electrodes can be reliably identified in SPECT data. The aim of the current study was to develop a method for determining the coordinates of base electrode(s) missing in the SPECT image. The algorithm for coordinates approximation has been developed and was tested on data collected for three subjects with all visible electrodes. To increase the accuracy of the approximation we used head surface models. Freely available model from Oostenveld research based on data from SPM package and our own model based on data from our EEG/SPECT studies were used. For data collected in four cases with one electrode not visible we compared the invisible base electrode coordinates approximation for Oostenveld and our models. The results vary depending on the missing electrode placement, but application of the realistic head model significantly increases the accuracy of the approximation.

  15. Invisible Base Electrode Coordinates Approximation for Simultaneous SPECT and EEG Data Visualization

    Directory of Open Access Journals (Sweden)

    Kowalczyk L.

    2014-04-01

    Full Text Available This work was performed as part of a larger research concerning the feasibility of improving the localization of epileptic foci, as compared to the standard SPECT examination, by applying the technique of EEG mapping. The presented study extends our previous work on the development of a method for superposition of SPECT images and EEG 3D maps when these two examinations are performed simultaneously. Due to the lack of anatomical data in SPECT images it is a much more difficult task than in the case of MRI/EEG study where electrodes are visible in morphological images. Using the appropriate dose of radioisotope we mark five base electrodes to make them visible in the SPECT image and then approximate the coordinates of the remaining electrodes using properties of the 10-20 electrode placement system and the proposed nine-ellipses model. This allows computing a sequence of 3D EEG maps spanning on all electrodes. It happens, however, that not all five base electrodes can be reliably identified in SPECT data. The aim of the current study was to develop a method for determining the coordinates of base electrode(s) missing in the SPECT image. The algorithm for coordinates approximation has been developed and was tested on data collected for three subjects with all visible electrodes. To increase the accuracy of the approximation we used head surface models. Freely available model from Oostenveld research based on data from SPM package and our own model based on data from our EEG/SPECT studies were used. For data collected in four cases with one electrode not visible we compared the invisible base electrode coordinates approximation for Oostenveld and our models. The results vary depending on the missing electrode placement, but application of the realistic head model significantly increases the accuracy of the approximation.

  16. A unified approach to the Darwin approximation

    International Nuclear Information System (INIS)

    Krause, Todd B.; Apte, A.; Morrison, P. J.

    2007-01-01

    There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting

  17. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  18. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  19. A series approximation model for optical light transport and output intensity field distribution in large aspect ratio cylindrical scintillation crystals

    Energy Technology Data Exchange (ETDEWEB)

    Tobias, Benjamin John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-09

    A series approximation has been derived for the transport of optical photons within a cylindrically symmetric light pipe and applied to the task of evaluating both the origin and angular distribution of light reaching the output plane. This analytic expression finds particular utility in first-pass photonic design applications since it may be evaluated at a very modest computational cost and is readily parameterized for relevant design constraints. It has been applied toward quantitative exploration of various scintillation crystal preparations and their impact on both quantum efficiency and noise, reproducing sensible dependencies and providing physical justification for certain gamma ray camera design choices.

  20. Origin of quantum criticality in Yb-Al-Au approximant crystal and quasicrystal

    International Nuclear Information System (INIS)

    Watanabe, Shinji; Miyake, Kazumasa

    2016-01-01

    To get insight into the mechanism of emergence of the unconventional quantum criticality observed in the quasicrystal Yb15Al34Au51, the approximant crystal Yb14Al35Au51 is analyzed theoretically. By constructing a minimal model for the approximant crystal, the heavy quasiparticle band is shown to emerge near the Fermi level because of the strong correlation of the 4f electrons at Yb. We find that the charge-transfer mode between the 4f electron at Yb on the 3rd shell and the 3p electron at Al on the 4th shell in the Tsai-type cluster is considerably enhanced with almost flat momentum dependence. The mode-coupling theory shows that the magnetic as well as the valence susceptibility exhibits χ ∼ T^(-0.5) in the zero-field limit and is expressed as a single scaling function of the ratio of temperature to magnetic field T/B over four decades even in the approximant crystal when a certain condition is satisfied by varying parameters, e.g., by applying pressure. The key origin is clarified to be the strong locality of the critical Yb-valence fluctuation and the small Brillouin zone reflecting the large unit cell, giving rise to the extremely small characteristic energy scale. This also gives a natural explanation for the quantum criticality in the quasicrystal corresponding to the infinite limit of the unit-cell size. (author)

  1. Self-similar continued root approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.

    2012-01-01

    A novel method of summing asymptotic series is advanced. Such series repeatedly arise when employing perturbation theory in powers of a small parameter for complicated problems of condensed matter physics, statistical physics, and various applied problems. The method is based on the self-similar approximation theory involving self-similar root approximants. The constructed self-similar continued roots extrapolate asymptotic series to finite values of the expansion parameter. The self-similar continued roots contain, as a particular case, continued fractions and Padé approximants. A theorem on the convergence of the self-similar continued roots is proved. The method is illustrated by several examples from condensed-matter physics.
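    As the abstract notes, Padé approximants arise as a particular case of the continued roots. A toy sketch (the standard [1/1] Padé construction applied to an assumed example function, not the authors' self-similar approximants) shows how a rational resummation extrapolates a short series well beyond where the truncated series itself is useful, here for f(x) = 1/sqrt(1+x):

```python
# Second-order Taylor coefficients of f(x) = (1+x)**(-1/2): 1 - x/2 + 3x^2/8
c0, c1, c2 = 1.0, -0.5, 0.375

def taylor2(x):
    # truncated series: useful only for small x
    return c0 + c1 * x + c2 * x * x

# [1/1] Pade approximant (a0 + a1*x) / (1 + b1*x) matched to c0, c1, c2:
# expanding the rational form and equating coefficients gives
b1 = -c2 / c1          # 0.75
a0 = c0                # 1.0
a1 = c1 + c0 * b1      # 0.25

def pade11(x):
    return (a0 + a1 * x) / (1.0 + b1 * x)

exact = (1.0 + 3.0) ** -0.5              # f(3) = 0.5
err_taylor = abs(taylor2(3.0) - exact)   # ~2.38: series has blown up
err_pade = abs(pade11(3.0) - exact)      # ~0.04: rational form extrapolates
```

The same three coefficients feed both approximants; only the functional form differs, which is the point of resummation schemes like the continued roots above.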

  2. Covariances for measured activation and fission ratios data

    International Nuclear Information System (INIS)

    Smith, D.L.; Meadows, J.W.; Watanabe, Y.

    1986-01-01

    Methods which are routinely used in the determination of covariance matrices for both integral and differential activation and fission-ratios data acquired at the Argonne National Laboratory Fast-Neutron Generator Facility (FNG) are discussed. Special consideration is given to problems associated with the estimation of correlations between various identified sources of experimental error. Approximation methods which are commonly used to reduce the labor involved in this analysis to manageable levels are described. Results from some experiments which have been recently carried out in this laboratory are presented to illustrate these procedures. 13 refs., 1 fig., 5 tabs

  3.  Higher Order Improvements for Approximate Estimators

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Salanié, Bernard

    Many modern estimation methods in econometrics approximate an objective function, for instance through simulation or discretization. The resulting "approximate" estimator is often biased, and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer...

  4. MPPT for PM wind generator using gradient approximation

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Ying-Yi; Lu, Shiue-Der; Chiou, Ching-Sheng [Department of Electrical Engineering, Chung Yuan Christian University, 200, Chung-Pei Road, Chung Li 320 (China)

    2009-01-15

    This paper applies new maximum-power-point tracking (MPPT) algorithms to a wind-turbine generator system (WTGS). The WTGS considered is a direct-drive system comprising the wind turbine, a permanent-magnet (PM) synchronous generator, a three-phase full-bridge rectifier, a buck-boost converter and the load. The new MPPT method uses a gradient approximation (GA) algorithm. Three GA-based methods for achieving MPPT are discussed: (1) full-sensor control with anemometer and tachometer, (2) a rule-based method and (3) an adaptive duty cycle method. The third method requires no PID parameters, proportional constant, anemometer, tachometer or prior knowledge of the WTGS characteristics. It enables the permanent-magnet synchronous generator (PMSG) to operate at variable speeds with good performance. Simulation results show that the tip-speed ratio (TSR) and power coefficient obtained by the adaptive duty cycle method with GA can be almost identical to the optimal values. (author)
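The core of a gradient-approximation MPPT loop can be sketched on a toy power-versus-duty-cycle curve (the quadratic `power` model below is purely illustrative, not the paper's WTGS model): perturb the duty cycle, estimate dP/dD by finite differences, and step uphill.

```python
def power(duty):
    """Toy stand-in for the measured converter output power P(D)."""
    return 100.0 - 400.0 * (duty - 0.55) ** 2

def mppt_gradient(duty=0.30, delta=0.01, gain=0.002, steps=200):
    """Climb the P(D) curve using a finite-difference gradient estimate."""
    for _ in range(steps):
        grad = (power(duty + delta) - power(duty - delta)) / (2.0 * delta)
        duty = min(max(duty + gain * grad, 0.0), 1.0)  # keep D in [0, 1]
    return duty

optimal_duty = mppt_gradient()   # converges near the toy optimum D = 0.55
```

In a real converter the `power` function would be replaced by an actual measurement of rectified voltage and current at the present duty cycle, which is what lets the method work without an anemometer or tachometer.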

  5. MPPT for PM wind generator using gradient approximation

    International Nuclear Information System (INIS)

    Hong, Y.-Y.; Lu, S.-D.; Chiou, C.-S.

    2009-01-01

    This paper applies new maximum-power-point tracking (MPPT) algorithms to a wind-turbine generator system (WTGS). The WTGS considered is a direct-drive system comprising the wind turbine, a permanent-magnet (PM) synchronous generator, a three-phase full-bridge rectifier, a buck-boost converter and the load. The new MPPT method uses a gradient approximation (GA) algorithm. Three GA-based methods for achieving MPPT are discussed: (1) full-sensor control with anemometer and tachometer, (2) a rule-based method and (3) an adaptive duty cycle method. The third method requires no PID parameters, proportional constant, anemometer, tachometer or prior knowledge of the WTGS characteristics. It enables the permanent-magnet synchronous generator (PMSG) to operate at variable speeds with good performance. Simulation results show that the tip-speed ratio (TSR) and power coefficient obtained by the adaptive duty cycle method with GA can be almost identical to the optimal values

  6. A Padé approximant approach to two kinds of transcendental equations with applications in physics

    International Nuclear Information System (INIS)

    Luo, Qiang; Wang, Zhidan; Han, Jiurong

    2015-01-01

    In this paper, we obtain the analytical solutions of two kinds of transcendental equations with numerous applications in college physics by means of the Lagrange inversion theorem. Afterwards we rewrite them in the form of a ratio of rational polynomials by a second-order Padé approximant from a practical and instructional perspective. Our method is illustrated in a pedagogical manner for the benefit of students at the undergraduate level. The approximate formulas introduced in the paper can be applied to abundant examples in physics textbooks, such as Fraunhofer single-slit diffraction, Wien’s displacement law, and the Schrödinger equation with single- or double-δ potential. These formulas, consequently, can reach considerable accuracies according to the numerical results; therefore, they promise to act as valuable ingredients in the standard teaching curriculum. (paper)
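One of the transcendental equations cited above, Wien's displacement condition x = 5(1 - e^(-x)), can be cross-checked numerically with a simple fixed-point iteration (a generic numerical check, not the paper's Lagrange-inversion or Padé formula):

```python
import math

# Wien's displacement condition: x = 5 * (1 - exp(-x)),
# where x = h*c / (lambda_max * k * T).
x = 5.0
for _ in range(50):
    x = 5.0 * (1.0 - math.exp(-x))
# x converges to ~4.965114, so lambda_max * T = h*c / (x*k) ~ 2.898e-3 m*K,
# the familiar Wien displacement constant.
```

The iteration converges quickly because the map's derivative at the root, 5e^(-x) ≈ 0.035, is far below 1; the value against which a second-order Padé formula can be benchmarked.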

  7. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    Science.gov (United States)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  8. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    Science.gov (United States)

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  9. A note on imperfect hedging: a method for testing stability of the hedge ratio

    Directory of Open Access Journals (Sweden)

    Michal Černý

    2012-01-01

    Companies producing, processing and consuming commodities in the production process often hedge their commodity exposures using derivative strategies based on different, highly correlated underlying commodities. Once the open position in a commodity is hedged using a derivative position with another underlying commodity, the appropriate hedge ratio must be determined so that the hedge relationship is as effective as possible. However, it is questionable whether the hedge ratio determined at the inception of the risk-management strategy remains stable over the whole period for which the hedging strategy exists. Usually it is assumed that in the short run the relationship (say, the correlation) between the two commodities remains stable, while in the long run it may vary. We propose a method, based on the statistical theory of stability, for on-line detection of whether market movements of the prices of the commodities involved in the hedge relationship indicate that the hedge ratio may have been subject to a recent change. A change in the hedge ratio decreases the effectiveness of the original hedge relationship and creates a new open position. The proposed method should inform the risk manager that it may be reasonable to adjust the derivative strategy to reflect the market conditions after the change in the hedge ratio.
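The kind of instability the abstract describes can be illustrated with the usual minimum-variance hedge ratio β = Cov(ΔS, ΔF)/Var(ΔF), estimated on two halves of a synthetic, noise-free sample in which the true ratio shifts (a toy illustration, not the paper's statistical stability test):

```python
import random

random.seed(1)
f = [random.gauss(0.0, 1.0) for _ in range(400)]             # futures returns
# spot returns: hedge ratio 0.9 in the first regime, 0.6 in the second
s = [0.9 * x for x in f[:200]] + [0.6 * x for x in f[200:]]

def hedge_ratio(spot, fut):
    """Minimum-variance hedge ratio Cov(spot, fut) / Var(fut)."""
    mf = sum(fut) / len(fut)
    ms = sum(spot) / len(spot)
    cov = sum((a - ms) * (b - mf) for a, b in zip(spot, fut))
    var = sum((b - mf) ** 2 for b in fut)
    return cov / var

early = hedge_ratio(s[:200], f[:200])   # -> 0.9 in the first regime
late = hedge_ratio(s[200:], f[200:])    # -> 0.6 after the change
```

A monitoring scheme of the sort proposed would watch a rolling estimate like this and raise a flag when it drifts significantly from the ratio fixed at inception.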

  10. Analysis of root surface properties by fluorescence/Raman intensity ratio.

    Science.gov (United States)

    Nakamura, Shino; Ando, Masahiro; Hamaguchi, Hiro-O; Yamamoto, Matsuo

    2017-11-01

    The aim of this study is to evaluate the existence of residual calculus on root surfaces by determining the fluorescence/Raman intensity ratio. Thirty-two extracted human teeth, partially covered with calculus on the root surface, were evaluated using a portable Raman spectrophotometer, with a 785-nm, 100-mW laser applied for fluorescence/Raman excitation. The collected spectra were normalized to the hydroxyapatite Raman band intensity at 960 cm⁻¹. Raman spectra were recorded from the same point after changing the focal distance of the laser and the target radiating angle. In seven teeth, the condition of calculus, cementum, and dentin was evaluated. In 25 teeth, we determined the fluorescence/Raman intensity ratio following three strokes of debridement. Raman spectra collected from the dentin, cementum, and calculus were different. After normalization, spectral values were constant. The fluorescence/Raman intensity ratio of the calculus region showed significant differences compared to the cementum and dentin (p < 0.05). The fluorescence/Raman intensity ratio decreased with calculus debridement. For this analysis, the delta value was defined as the difference between the values before and after three strokes; the final two delta values were close to zero, indicating a gradual asymptotic curve, with the change in intensity ratio approximating an individual constant. The fluorescence/Raman intensity ratio was effectively used to cancel the angle- and distance-dependent fluctuations of fluorescence collection efficiency during measurement. Changes in the fluorescence/Raman intensity ratio near zero suggested that cementum or dentin was exposed and calculus removed.
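The normalization idea, dividing a fluorescence background level by the height of the 960 cm⁻¹ hydroxyapatite band so that geometry-dependent collection efficiency cancels, can be sketched on a synthetic spectrum (a schematic reading of the method; the real pipeline involves proper baseline fitting):

```python
def fluorescence_raman_ratio(wavenumbers, intensities, band=960.0, window=5.0):
    """Fluorescence background divided by the Raman band height.

    Crude sketch: the fluorescence level is taken as the median intensity
    away from the band, and the band height as peak minus that level.
    """
    off = [i for w, i in zip(wavenumbers, intensities) if abs(w - band) > window]
    baseline = sorted(off)[len(off) // 2]
    peak = max(i for w, i in zip(wavenumbers, intensities)
               if abs(w - band) <= window)
    return baseline / (peak - baseline)

# synthetic spectrum: flat fluorescence of 10 plus a hydroxyapatite-like
# band of height 50 centred at 960 cm^-1
wn = list(range(800, 1101))
spec = [10.0 + (50.0 if w == 960 else 0.0) for w in wn]
ratio = fluorescence_raman_ratio(wn, spec)   # -> 10 / 50 = 0.2
```

Because both numerator and denominator scale with the same collection efficiency, the ratio is insensitive to changes in focal distance and angle, which is the property the study exploits.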

  11. Exact and approximate multiple diffraction calculations

    International Nuclear Information System (INIS)

    Alexander, Y.; Wallace, S.J.; Sparrow, D.A.

    1976-08-01

    A three-body potential scattering problem is solved in the fixed scatterer model exactly and approximately to test the validity of commonly used assumptions of multiple scattering calculations. The model problem involves two-body amplitudes that show diffraction-like differential scattering similar to high energy hadron-nucleon amplitudes. The exact fixed scatterer calculations are compared to Glauber approximation, eikonal-expansion results and a noneikonal approximation

  12. On Covering Approximation Subspaces

    Directory of Open Access Journals (Sweden)

    Xun Ge

    2009-06-01

    Let (U′;C′) be a subspace of a covering approximation space (U;C) and X ⊂ U′. In this paper, we relate the approximation operators of the subspace to those of the whole space; in particular, we show that B′(X) ⊂ B(X) ∩ U′, and that the reverse inclusion holds iff (U;C) has Property Multiplication. Furthermore, some connections between outer (resp. inner) definable subsets in (U;C) and outer (resp. inner) definable subsets in (U′;C′) are established. These results answer a question on covering approximation subspaces posed by J. Li, and are helpful for obtaining further applications of Pawlak rough set theory in pattern recognition and artificial intelligence.

  13. Particular mechanism for continuously varying the compression ratio for an internal combustion engine

    Science.gov (United States)

    Raţiu, S.; Cătălinoiu, R.; Alexa, V.; Miklos, I.; Cioată, V.

    2018-01-01

    Variable compression ratio (VCR) is a technology to adjust the compression ratio of an internal combustion engine while the engine is in operation. The paper proposes the presentation of a particular mechanism allowing the position of the top dead centre to be changed, while the position of the bottom dead centre remains fixed. The kinematics of the mechanism is studied and its trajectories are graphically represented for different positions of operation.
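The geometric effect of moving the top dead centre can be seen from the standard compression-ratio formula CR = (V_d + V_c)/V_c: raising TDC shrinks the clearance volume V_c and raises CR (a generic textbook relation, not the paper's particular mechanism):

```python
def compression_ratio(displaced_volume, clearance_volume):
    """CR = (V_d + V_c) / V_c for a piston engine cylinder.

    Shrinking the clearance volume (raising top dead centre) raises CR,
    while the bottom dead centre, and hence V_d, stays fixed.
    """
    return (displaced_volume + clearance_volume) / clearance_volume

cr_low = compression_ratio(500.0, 62.5)    # -> 9.0
cr_high = compression_ratio(500.0, 40.0)   # -> 13.5
```

A VCR mechanism like the one described continuously sweeps the clearance volume between such endpoints while the engine runs.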

  14. Enumeration of an extremely high particle-to-PFU ratio for Varicella-zoster virus.

    Science.gov (United States)

    Carpenter, John E; Henderson, Ernesto P; Grose, Charles

    2009-07-01

    Varicella-zoster virus (VZV) is renowned for its low titers. Yet investigations to explore the low infectivity are hampered by the fact that the VZV particle-to-PFU ratio has never been determined with precision. Herein, we accomplish that task by applying newer imaging technology. More than 300 images were taken of VZV-infected cells on 4 different samples at high magnification. We enumerated the total number of viral particles within 25 cm² of the infected monolayer at 415 million. Based on these numbers, the VZV particle-to-PFU ratio was approximately 40,000:1 for a cell-free inoculum.
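As a back-of-the-envelope reading of the numbers quoted above (the division below is illustrative arithmetic, not a figure from the paper):

```python
total_particles = 415_000_000   # particles counted in 25 cm^2 of monolayer
particles_per_pfu = 40_000      # reported particle-to-PFU ratio (approximate)

# implied number of plaque-forming units in the same area
implied_pfu = total_particles / particles_per_pfu   # -> 10375.0
```

That is, only on the order of ten thousand of the 415 million imaged particles would be infectious, which is what makes the low titers of VZV so striking.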

  15. Getting past nature as a guide to the human sex ratio.

    Science.gov (United States)

    Murphy, Timothy F

    2013-05-01

    Sex selection of children by pre-conception and post-conception techniques remains morally controversial and even illegal in some jurisdictions. Among other things, some critics fear that sex selection will distort the sex ratio, making opposite-sex relationships more difficult to secure, while other critics worry that sex selection will tilt some nations toward military aggression. The human sex ratio varies depending on how one estimates it; there is certainly no one-to-one correspondence between males and females either at birth or across the human lifespan. Complications about who qualifies as 'male' and 'female' complicate judgments about the ratio even further. Even a judiciously estimated sex ratio does not have, however, the kind of normative status that requires society to refrain from antenatal sex selection. Some societies exhibit lopsided sex ratios as a consequence of social policies and practices, and pragmatic estimates of social needs are a better guide to what the sex ratio should be, as against looking to 'nature'. The natural sex ratio cannot be a sound moral basis for prohibiting parents from selecting the sex of their children, since it ultimately lacks any normative meaning for social choices. © 2011 Blackwell Publishing Ltd.

  16. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-01-01

    Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common-reflection-stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal equation, based on polynomial expansions in terms of the reflection and dip angles, in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.

  17. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  18. Maternal condition but not corticosterone is linked to offspring sex ratio in a passerine bird.

    Directory of Open Access Journals (Sweden)

    Lindsay J Henderson

    There is evidence of offspring sex ratio adjustment in a range of species, but the potential mechanisms remain largely unknown. Elevated maternal corticosterone (CORT) is associated with factors that can favour brood sex ratio adjustment, such as reduced maternal condition, food availability and partner attractiveness. Therefore, this steroid hormone has been suggested to play a key role in sex ratio manipulation. However, despite correlative and causal evidence that CORT is linked to sex ratio manipulation in some avian species, the timing of adjustment varies between studies. Consequently, whether CORT is consistently involved in sex-ratio adjustment, and how the hormone acts as a mechanism for this adjustment, remains unclear. Here we measured maternal baseline CORT and body condition in free-living blue tits (Cyanistes caeruleus) over three years and related these factors to brood sex ratio and nestling quality. In addition, a non-invasive technique was employed to experimentally elevate maternal CORT during egg laying, and its effects upon sex ratio and nestling quality were measured. We found that maternal CORT was not correlated with brood sex ratio, but mothers with elevated CORT fledged lighter offspring. Experimental elevation of maternal CORT did not influence brood sex ratio or nestling quality. In one year, mothers in superior body condition produced male-biased broods, and maternal condition was positively correlated with both nestling mass and growth rate in all years. Unlike in previous studies, maternal condition was not correlated with maternal CORT. This study provides evidence that maternal condition is linked to brood sex ratio manipulation in blue tits. However, maternal baseline CORT may not be the mechanistic link between maternal condition and sex ratio adjustment. Overall, this study serves to highlight the complexity of sex ratio adjustment in birds and the difficulties associated with identifying sex-biasing mechanisms.

  19. Approximation algorithms for a genetic diagnostics problem.

    Science.gov (United States)

    Kosaraju, S R; Schäffer, A A; Biesecker, L G

    1998-01-01

    We define and study a combinatorial problem called WEIGHTED DIAGNOSTIC COVER (WDC) that models the use of a laboratory technique called genotyping in the diagnosis of an important class of chromosomal aberrations. An optimal solution to WDC would enable us to define a genetic assay that maximizes the diagnostic power for a specified cost of laboratory work. We develop approximation algorithms for WDC by making use of the well-known problem SET COVER for which the greedy heuristic has been extensively studied. We prove worst-case performance bounds on the greedy heuristic for WDC and for another heuristic we call directional greedy. We implemented both heuristics. We also implemented a local search heuristic that takes the solutions obtained by greedy and dir-greedy and applies swaps until they are locally optimal. We report their performance on a real data set that is representative of the options that a clinical geneticist faces for the real diagnostic problem. Many open problems related to WDC remain, both of theoretical interest and practical importance.
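The greedy heuristic the authors build on is the classic one for weighted SET COVER: repeatedly pick the set with the best ratio of newly covered elements to weight. A minimal sketch on a toy instance (WDC itself adds diagnostic structure not modeled here):

```python
def greedy_weighted_set_cover(universe, sets, weight):
    """Greedy weighted SET COVER: pick the set maximizing
    (newly covered elements) / weight until everything is covered."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        name = max(sets, key=lambda s: len(sets[s] & uncovered) / weight[s])
        if not sets[name] & uncovered:
            break                      # remaining elements are uncoverable
        chosen.append(name)
        uncovered -= sets[name]
    return chosen

sets = {"A": {1, 2, 3}, "B": {4, 5}, "C": {1, 2, 3, 4, 5}}
weight = {"A": 1.0, "B": 1.0, "C": 3.0}
picked = greedy_weighted_set_cover({1, 2, 3, 4, 5}, sets, weight)  # ["A", "B"]
```

The well-known guarantee is that this greedy solution costs at most H(n) ≈ ln(n) times the optimum, which is the style of worst-case bound the paper proves for its WDC variants.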

  20. Precise and accurate isotope ratio measurements by ICP-MS.

    Science.gov (United States)

    Becker, J S; Dietze, H J

    2000-09-01

    The precise and accurate determination of isotope ratios by inductively coupled plasma mass spectrometry (ICP-MS) and laser ablation ICP-MS (LA-ICP-MS) is important for quite different application fields (e.g., isotope ratio measurements of stable isotopes in nature, especially the investigation of isotope variation in nature and age dating; determination of isotope ratios of radiogenic elements in the nuclear industry; quality assurance of fuel material; reprocessing plants, nuclear material accounting and radioactive waste control; and tracer experiments using stable isotopes or long-lived radionuclides in biological or medical studies). Thermal ionization mass spectrometry (TIMS), which used to be the dominant analytical technique for precise isotope ratio measurements, is increasingly being replaced by ICP-MS due to its excellent sensitivity, precision and good accuracy. Instrumental progress in ICP-MS was achieved by the introduction of the collision cell interface, which dissociates many disturbing argon-based molecular ions, thermalizes the ions and neutralizes the disturbing argon ions of the plasma gas (Ar+). The application of the collision cell in ICP-QMS results in higher ion transmission, improved sensitivity and better precision of isotope ratio measurements compared to quadrupole ICP-MS without the collision cell [e.g., for 235U/238U ≈ 1 (10 µg L⁻¹ uranium): 0.07% relative standard deviation (RSD) vs. 0.2% RSD in short-term measurements (n = 5)]. A significant instrumental improvement for ICP-MS is the multicollector device (MC-ICP-MS), which yields better precision of isotope ratio measurements (up to 0.002% RSD). CE- and HPLC-ICP-MS are used to separate isobaric interferences of long-lived radionuclides and stable isotopes in the determination of spallation nuclide abundances in an irradiated tantalum target.

  1. Similar tests and the standardized log likelihood ratio statistic

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1986-01-01

    When testing an affine hypothesis in an exponential family, the 'ideal' procedure is to calculate the exact similar test, or an approximation to it, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. By contrast, there is a 'primitive' approach in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n^{-3/2}).

  2. Comparison of the Born series and rational approximants in potential scattering. [Padé approximants, Yukawa and exponential potentials]

    Energy Technology Data Exchange (ETDEWEB)

    Garibotti, C R; Grinstein, F F [Rosario Univ. Nacional (Argentina). Facultad de Ciencias Exactas e Ingenieria

    1976-05-08

    The real utility of the Born series for the calculation of atomic collision processes in the Born approximation is discussed. The use of Padé approximants is suggested, and it is shown that this approach provides very fast convergent sequences over the entire energy range studied. Yukawa and exponential potentials are explicitly considered, and the results are compared with high-order Born approximations.

  3. Spherical Approximation on Unit Sphere

    Directory of Open Access Journals (Sweden)

    Eman Samir Bhaya

    2018-01-01

    In this paper we introduce a Jackson-type theorem for functions in L^p spaces on the sphere and study the best approximation of functions in L^p spaces defined on the unit sphere. Our central problem is to describe the approximation behaviour of functions in these spaces by the modulus of smoothness.

  4. Reduced-rank approximations to the far-field transform in the gridded fast multipole method

    Science.gov (United States)

    Hesford, Andrew J.; Waag, Robert C.

    2011-05-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.
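The "ACA with stricter tolerance, then recompress with a reduced, truncated SVD" step described above amounts to recompressing a redundant low-rank factorization A ≈ UVᵀ via QR factorizations of the factors plus an SVD of the small core (a generic low-rank recompression sketch, not the authors' FMM code):

```python
import numpy as np

def recompress(U, V, tol=1e-10):
    """Recompress A ~ U @ V.T to its numerical rank via QR + truncated SVD."""
    Qu, Ru = np.linalg.qr(U)
    Qv, Rv = np.linalg.qr(V)
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)
    k = int(np.sum(s > tol * s[0]))      # numerical rank of the small core
    return (Qu @ W[:, :k]) * s[:k], Qv @ Zt[:k].T

# redundant rank-3 factors of a matrix whose true rank is 2
a, b = np.arange(1.0, 7.0), np.arange(2.0, 10.0)
c, d = np.linspace(0.0, 1.0, 6), np.linspace(1.0, 0.0, 8)
U = np.column_stack([a, c, a + c])
V = np.column_stack([b, d, np.zeros(8)])
U2, V2 = recompress(U, V)   # rank drops from 3 to 2, product preserved
```

Only the small k-by-k core sees the SVD, so this avoids the cost of a full-scale SVD of A while matching its truncated accuracy.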

  5. Analysis of corrections to the eikonal approximation

    Science.gov (United States)

    Hebborn, C.; Capel, P.

    2017-11-01

    Various corrections to the eikonal approximation are studied for two- and three-body nuclear collisions with the goal of extending the range of validity of this approximation down to beam energies of 10 MeV/nucleon. Wallace's correction does not much improve the elastic-scattering cross sections obtained with the usual eikonal approximation. On the contrary, a semiclassical approximation that substitutes for the impact parameter a complex distance of closest approach computed with the projectile-target optical potential efficiently corrects the eikonal approximation. This opens the possibility of analyzing data measured down to 10 MeV/nucleon within eikonal-like reaction models.

  6. RATIO_TOOL - SOFTWARE FOR COMPUTING IMAGE RATIOS

    Science.gov (United States)

    Yates, G. L.

    1994-01-01

    Geological studies analyze spectral data in order to gain information on surface materials. RATIO_TOOL is an interactive program for viewing and analyzing large multispectral image data sets that have been created by an imaging spectrometer. While the standard approach to classification of multispectral data is to match the spectrum for each input pixel against a library of known mineral spectra, RATIO_TOOL uses ratios of spectral bands in order to spot significant areas of interest within a multispectral image. Each image band can be viewed iteratively, or a selected image band of the data set can be requested and displayed. When the image ratios are computed, the result is displayed as a gray scale image. At this point a histogram option helps in viewing the distribution of values. A thresholding option can then be used to segment the ratio image result into two to four classes. The segmented image is then color coded to indicate threshold classes and displayed alongside the gray scale image. RATIO_TOOL is written in C language for Sun series computers running SunOS 4.0 and later. It requires the XView toolkit and the OpenWindows window manager (version 2.0 or 3.0). The XView toolkit is distributed with Open Windows. A color monitor is also required. The standard distribution medium for RATIO_TOOL is a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation is included on the program media. RATIO_TOOL was developed in 1992 and is a copyrighted work with all copyright vested in NASA. Sun, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
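The core RATIO_TOOL workflow, ratio two spectral bands, then threshold the result into a few classes, fits in a few lines of NumPy (an illustrative sketch, not the original C program):

```python
import numpy as np

def band_ratio_classes(band_a, band_b, thresholds):
    """Ratio two spectral bands, then segment the ratio image into classes.

    `thresholds` is an increasing list of cut points; pixels are assigned
    class 0..len(thresholds) according to which interval their ratio falls in.
    """
    ratio = band_a / np.maximum(band_b, 1e-12)   # guard against zero pixels
    return ratio, np.digitize(ratio, thresholds)

band_a = np.array([1.0, 2.0, 4.0])
band_b = np.array([2.0, 2.0, 2.0])
ratio, classes = band_ratio_classes(band_a, band_b, [0.75, 1.5])
# ratio -> [0.5, 1.0, 2.0]; classes -> [0, 1, 2]
```

As in the original tool, inspecting a histogram of `ratio` first is the natural way to choose the two-to-four threshold values.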

  7. The measurement of psychological literacy: a first approximation.

    Science.gov (United States)

    Roberts, Lynne D; Heritage, Brody; Gasson, Natalie

    2015-01-01

    Psychological literacy, the ability to apply psychological knowledge to personal, family, occupational, community and societal challenges, is promoted as the primary outcome of an undergraduate education in psychology. As the concept of psychological literacy becomes increasingly adopted as the core business of undergraduate psychology training courses world-wide, there is an urgent need for the construct to be accurately measured so that student- and institutional-level progress can be assessed and monitored. Key to the measurement of psychological literacy is determining its underlying factor structure. In this paper we provide a first approximation of the measurement of psychological literacy by identifying and evaluating self-report measures for it. Multi-item and single-item self-report measures of each of the proposed nine dimensions of psychological literacy were completed by two samples (N = 218 and N = 381) of undergraduate psychology students at an Australian university. Single- and multi-item measures of each dimension were weakly to moderately correlated. Exploratory and confirmatory factor analyses of the multi-item measures indicated that a higher-order three-factor solution best represented the construct of psychological literacy. The three factors were reflective processes, generic graduate attributes, and psychology as a helping profession. For the measurement of psychological literacy to progress there is a need to further develop self-report measures and to identify, develop and evaluate objective measures of psychological literacy. Further approximations of the measurement of psychological literacy remain an imperative, given the construct's ties to measuring institutional efficacy in teaching psychology to an undergraduate audience.

  8. Ancilla-approximable quantum state transformations

    International Nuclear Information System (INIS)

    Blass, Andreas; Gurevich, Yuri

    2015-01-01

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation.

  9. Ancilla-approximable quantum state transformations

    Energy Technology Data Exchange (ETDEWEB)

    Blass, Andreas [Department of Mathematics, University of Michigan, Ann Arbor, Michigan 48109 (United States); Gurevich, Yuri [Microsoft Research, Redmond, Washington 98052 (United States)

    2015-04-15

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation.

  10. Approximate analysis of an unreliable $M/M/c$ retrial queue with phase-merging algorithm.

    Directory of Open Access Journals (Sweden)

    faiza BELARBI

    2016-06-01

    Full Text Available In this paper, we investigate an approximate analysis of an unreliable $M/M/c$ retrial queue with $c\geq 3$ in which all servers are subject to breakdowns and repairs. Arriving customers that are unable to access a server due to congestion or failure can choose to enter a retrial orbit for an exponentially distributed amount of time and persistently attempt to gain access to a server, or abandon their request and depart the system. Once a customer is admitted to a service station, they remain there for a random duration until service is complete and then depart the system. However, if the server fails during service, i.e., an active breakdown, the customer may choose to abandon the system or proceed directly to the retrial orbit while the server begins repair immediately. In the unreliable model, there are no exact solutions when the number of servers exceeds one. Therefore, we seek to approximate the steady-state joint distribution of the number of customers in orbit and the status of the $c$ servers for the case of Markovian arrival and service times. Our approach to deriving the approximate steady-state probabilities employs a phase-merging algorithm.

  11. C-12/C-13 Ratio in Ethane on Titan and Implications for Methane's Replenishment

    Science.gov (United States)

    Jennings, Donald E.; Romani, Paul N.; Bjoraker, Gordon L.; Sada, Pedro V.; Nixon, Conor A.; Lunsford, Allen W.; Boyle, Robert J.; Hesman, Brigette E.; McCabe, George H.

    2009-01-01

    The C-12/C-13 abundance ratio in ethane in the atmosphere of Titan has been measured at 822 cm(sup -1) from high spectral resolution ground-based observations. The value, 89(8), coincides with the telluric standard and also agrees with the ratio seen in the outer planets. It is almost identical to the result for ethane on Titan found by the composite infrared spectrometer (CIRS) on Cassini. The C-12/C-13 ratio for ethane is higher than the ratio measured in atmospheric methane by Cassini/Huygens GCMS, 82.3(1), representing an enrichment of C-12 in the ethane that might be explained by a kinetic isotope effect of approximately 1.1 in the formation of methyl radicals. If methane is being continuously resupplied to balance photochemical destruction, then we expect the isotopic composition in the ethane product to equilibrate at close to the same C-12/C-13 ratio as that in the supply. The telluric value of the ratio in ethane then implies that the methane reservoir is primordial.

  12. Excess abdominal adiposity remains correlated with altered lipid concentrations in healthy older women.

    Science.gov (United States)

    DiPietro, L; Katz, L D; Nadel, E R

    1999-04-01

    To determine associations between overall adiposity, absolute and relative abdominal adiposity, and lipid concentrations in healthy older women. Cross-sectional analysis of baseline data. Subjects were 21 healthy, untrained older women (71 +/- 1 y) entering a randomized, controlled aerobic training program. Overall adiposity was assessed by anthropometry and the body mass index (BMI = kg/m2). Absolute and relative abdominal adiposity was determined by computed tomography (CT) and circumference measures. Fasting serum lipid concentrations of total-, high density lipoprotein (HDL)-, and low density lipoprotein (LDL)-cholesterol (C) and triglycerides (TGs) were determined by standard enzymatic procedures. Compared to the measures of overall adiposity, we observed much stronger correlations between measures more specific to absolute or relative abdominal adiposity and lipid concentrations. Visceral fat area was the strongest correlate of HDL-C (r = -0.75) and of the total-C:HDL-C ratio (r = 0.86); abdominal adiposity measures were also correlated with TGs (r = 0.54), HDL-C (r = -0.69), and the total-C:HDL-C ratio (r = 0.75). Abdominal adiposity remains an important correlate of lipid metabolism, even in healthy older women of normal weight. Thus, overall obesity is not a necessary condition for the correlation between excess abdominal fat and metabolic risk among postmenopausal women.

  13. Long-time analytic approximation of large stochastic oscillators: Simulation, analysis and inference.

    Directory of Open Access Journals (Sweden)

    Giorgos Minas

    2017-07-01

    Full Text Available In order to analyse large complex stochastic dynamical models such as those studied in systems biology, there is currently a great need both for analytical tools and for algorithms for accurate and fast simulation and estimation. We present a new stochastic approximation of biological oscillators that addresses these needs. Our method, called phase-corrected LNA (pcLNA), overcomes the main limitation of the standard Linear Noise Approximation (LNA) by remaining uniformly accurate for long times, while maintaining the speed and analytical tractability of the LNA. As part of this, we develop analytical expressions for key probability distributions and associated quantities, such as the Fisher Information Matrix and Kullback-Leibler divergence, and we introduce a new approach to system-global sensitivity analysis. We also present algorithms for statistical inference and for long-term simulation of oscillating systems that are shown to be as accurate as, but much faster than, leaping algorithms and algorithms for integration of diffusion equations. Stochastic versions of published models of the circadian clock and NF-κB system are used to illustrate our results.

  14. Quantum mean-field approximation for lattice quantum models: Truncating quantum correlations and retaining classical ones

    Science.gov (United States)

    Malpetti, Daniele; Roscilde, Tommaso

    2017-02-01

    The mean-field approximation is at the heart of our understanding of complex systems, despite its fundamental limitation of completely neglecting correlations between the elementary constituents. In a recent work [Phys. Rev. Lett. 117, 130401 (2016), 10.1103/PhysRevLett.117.130401], we have shown that in quantum many-body systems at finite temperature, two-point correlations can be formally separated into a thermal part and a quantum part and that quantum correlations are generically found to decay exponentially at finite temperature, with a characteristic, temperature-dependent quantum coherence length. The existence of these two different forms of correlation in quantum many-body systems suggests the possibility of formulating an approximation, which affects quantum correlations only, without preventing the correct description of classical fluctuations at all length scales. Focusing on lattice boson and quantum Ising models, we make use of the path-integral formulation of quantum statistical mechanics to introduce such an approximation, which we dub the quantum mean-field (QMF) approach, and which can be readily generalized to a cluster form (cluster QMF or cQMF). The cQMF approximation reduces to cluster mean-field theory at T = 0, while at any finite temperature it produces a family of systematically improved, semi-classical approximations to the quantum statistical mechanics of the lattice theory at hand. Contrary to standard MF approximations, the correct nature of thermal critical phenomena is captured by any cluster size. In the two exemplary cases of the two-dimensional quantum Ising model and of two-dimensional quantum rotors, we study systematically the convergence of the cQMF approximation towards the exact result, and show that the convergence is typically linear or sublinear in the boundary-to-bulk ratio of the clusters as T → 0, while it becomes faster than linear as T grows. These results pave the way towards the development of semiclassical numerical

  15. Approximating The DCM

    DEFF Research Database (Denmark)

    Madsen, Rasmus Elsborg

    2005-01-01

    The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling word burstiness in documents, is investigated here. A number of conceptual explanations that account for these recent results are provided. An exponential family approximation of the DCM...

  16. Ratio of serum free triiodothyronine to free thyroxine in Graves' hyperthyroidism and thyrotoxicosis caused by painless thyroiditis.

    Science.gov (United States)

    Yoshimura Noh, Jaeduk; Momotani, Naoko; Fukada, Shuji; Ito, Koichi; Miyauchi, Akira; Amino, Nobuyuki

    2005-10-01

    The serum T3 to T4 ratio is a useful indicator for differentiating destruction-induced thyrotoxicosis from Graves' thyrotoxicosis. However, the usefulness of the serum free T3 (FT3) to free T4 (FT4) ratio is controversial. We therefore systematically evaluated the usefulness of this ratio, based on measurements made using two widely available commercial kits in two hospitals. Eighty-two untreated patients with thyrotoxicosis (48 patients with Graves' disease and 34 patients with painless thyroiditis) were examined in Kuma Hospital, and 218 patients (126 with Graves' disease and 92 with painless thyroiditis) and 66 normal controls were examined in Ito Hospital. The FT3 and FT4 values, as well as the FT3/FT4 ratios, were significantly higher in the patients with Graves' disease than in those with painless thyroiditis in both hospitals, but considerable overlap between the two disorders was observed. Receiver operating characteristic (ROC) curves for the FT3 and FT4 values and the FT3/FT4 ratios of patients with Graves' disease and those with painless thyroiditis seen in both hospitals were prepared, and the area under the curves (AUC), the cut-off points for discriminating Graves' disease from painless thyroiditis, the sensitivity, and the specificity were calculated. The AUC and sensitivity of the FT3/FT4 ratio were smaller than those of FT3 and FT4 in both hospitals. The patients treated at Ito Hospital were then divided into 4 groups (A-D) according to their FT4 levels (spanning approximately 2.3 to 5.4 ng/dl), and the AUC, cut-off points, sensitivity, and specificity of the FT3/FT4 ratios were calculated. The AUC and sensitivity of each group increased with the FT4 levels (AUC: 57.8%, 72.1%, 91.1%, and 93.4%, respectively; sensitivity: 62.6%, 50.0%, 77.8%, and 97.0%, respectively). The means +/- SE of the FT3/FT4 ratio in the Graves' disease groups were 3.1 +/- 0.22, 3.1 +/- 0.09, 3.2 +/- 0.06, and 3.1 +/- 0.07, respectively, versus 2.9 +/- 0.1, 2.6 +/- 0.07, 2.5 +/- 0

  17. An approximation for kanban controlled assembly systems

    NARCIS (Netherlands)

    Topan, E.; Avsar, Z.M.

    2011-01-01

    An approximation is proposed to evaluate the steady-state performance of kanban controlled two-stage assembly systems. The development of the approximation is as follows. The considered continuous-time Markov chain is aggregated keeping the model exact, and this aggregate model is approximated

  18. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  19. Ethane's 12C/13C Ratio in Titan: Implications for Methane Replenishment

    Science.gov (United States)

    Jennings, Donald E.; Nixon, C. A.; Romani, P. N.; Bjoraker, G. L.; Sada, P. V.; Lunsford, A. W.; Boyle, R. J.; Hesman, B. E.; McCabe, G. H.

    2009-01-01

    As the main destination of carbon in the destruction of methane in the atmosphere of Titan, ethane provides information about the carbon isotopic composition of the reservoir from which methane is replenished. If the amount of methane entering the atmosphere is presently equal to the amount converted to ethane, the 12C/13C ratio in ethane should be close to the ratio in the reservoir. We have measured the 12C/13C ratio in ethane both with Cassini CIRS and from the ground and find that it is very close to the telluric standard and outer planet values (89), consistent with a primordial origin for the methane reservoir. The lower 12C/13C ratio measured for methane by Huygens GCMS (82.3) can be explained if the conversion of CH4 to CH3 (and C2H6) favors 12C over 13C with a carbon kinetic isotope effect of 1.08. The time required for the atmospheric methane to reach equilibrium, i.e., for replenishment to equal destruction, is approximately 5 methane atmospheric lifetimes.
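    A quick back-of-envelope check of the numbers quoted above, treating the "approximately 5 lifetimes" equilibration as a simple exponential relaxation (an assumption made here purely for illustration):

```python
import math

# Back-of-envelope check of the isotope numbers quoted in the abstract.
ethane_ratio = 89.0    # 12C/13C in ethane (telluric / outer-planet value)
methane_ratio = 82.3   # 12C/13C in atmospheric methane (Huygens GCMS)

# A kinetic isotope effect favoring 12C by this factor in CH4 -> CH3
# reproduces the ethane enrichment: 89 / 82.3 is about 1.08.
kie = ethane_ratio / methane_ratio

# If the isotopic disequilibrium decays exponentially with the methane
# lifetime, after ~5 lifetimes less than 1% of the initial offset remains.
remaining = math.exp(-5)
```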

  20. Shearlets and Optimally Sparse Approximations

    DEFF Research Database (Denmark)

    Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q

    2012-01-01

    Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal both for the purpose of compression as well as for an efficient analysis is the provision of optimally sparse approximations...... optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction...... to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames as well as a reference for the state-of-the-art of this research field....

  1. Usefulness of meniscal width to transverse diameter ratio on coronal MRI in the diagnosis of incomplete discoid lateral meniscus

    International Nuclear Information System (INIS)

    Park, H.J.; Lee, S.Y.; Park, N.-H.; Chung, E.C.; Park, J.Y.; Kim, M.S.; Lee, E.J.

    2014-01-01

    Aim: To evaluate the clinical utility of the meniscal width to transverse diameter ratio (L/M ratio) of the lateral meniscus in the diagnosis of incomplete discoid lateral meniscus (IDLM) as compared with the arthroscopic diagnosis, the meniscal width to tibial diameter ratio (L/T ratio), and conventional lateral meniscus width criteria. Materials and methods: This retrospective study sample included 41 patients with IDLM who underwent knee magnetic resonance imaging (MRI) and arthroscopy, as well as 50 controls with normal lateral menisci. MRI examinations were interpreted independently by two radiologists, both of whom were blinded to clinical information and radiological reports. Assessment of meniscal width (L), maximal transverse diameter of the lateral meniscus (M), and transverse diameter of the tibia (T) was carried out on central coronal sections that were observed to pass through the medial collateral ligament. L/M and L/T ratios were calculated. These results were correlated with arthroscopic findings and analysed statistically using categorical regression analysis and non-parametric correlation analysis. Using arthroscopic findings as the standard of reference, sensitivity and specificity were calculated for: (1) 12, 13, 14, and 15 mm meniscal width thresholds; (2) 40%, 50%, 60%, and 70% L/M ratio thresholds; and (3) 15%, 18%, 20%, and 25% L/T ratio thresholds. Results: The mean L/M ratio of the IDLM was approximately 67%, significantly higher than that of the controls (44%). The best diagnostic discrimination was achieved using a threshold of 50%. The mean L/T ratio of the IDLM was approximately 23% and was also significantly higher than that of the controls. The best diagnostic discrimination was achieved using a threshold of 18%. The threshold of 13 mm of meniscal width also showed high sensitivity and high specificity. Conclusion: The use of the L/M ratio or L/T ratio in combination with meniscal width criteria may be a useful method for evaluating IDLM.
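    As an illustration of how the reported thresholds (50% L/M, 18% L/T, 13 mm width) would be applied to a single coronal measurement, here is a hedged sketch. The measurement values are invented, and the rule for combining the criteria (a simple OR below) is an assumption, since the abstract only says the ratios are used "in combination" with the width criterion.

```python
# Hypothetical application of the MRI thresholds reported above.

def idlm_flags(L_mm, M_mm, T_mm):
    """Return (L/M %, L/T %, criteria met) for one coronal section."""
    lm = 100.0 * L_mm / M_mm   # meniscal width / meniscus transverse diameter
    lt = 100.0 * L_mm / T_mm   # meniscal width / tibial transverse diameter
    # Combination rule assumed for illustration (any criterion suffices):
    meets = (lm >= 50.0) or (lt >= 18.0) or (L_mm >= 13.0)
    return lm, lt, meets

# 16/24 ~ 67% and 16/70 ~ 23%, matching the mean IDLM values reported.
lm, lt, meets = idlm_flags(L_mm=16.0, M_mm=24.0, T_mm=70.0)
```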

  2. On the decision threshold of eigenvalue ratio detector based on moments of joint and marginal distributions of extreme eigenvalues

    KAUST Repository

    Shakir, Muhammad Zeeshan

    2013-03-01

    The Eigenvalue Ratio (ER) detector based on the two extreme eigenvalues of the received signal covariance matrix is currently one of the most effective solutions for spectrum sensing. However, the analytical results of such a scheme often depend on asymptotic assumptions, since the distribution of the ratio of two extreme eigenvalues is exceptionally complex to compute. In this paper, a non-asymptotic spectrum sensing approach for the ER detector is introduced to approximate the marginal and joint distributions of the two extreme eigenvalues. The two extreme eigenvalues are considered as dependent Gaussian random variables such that their joint probability density function (PDF) is approximated by a bivariate Gaussian distribution function for any number of cooperating secondary users and received samples. The PDF approximation approach is based on the moment matching method, where we calculate the exact analytical moments of the joint and marginal distributions of the two extreme eigenvalues. The decision threshold is calculated by exploiting the statistical mean and the variance of each of the two extreme eigenvalues and the correlation coefficient between them. The performance of our newly proposed approximation approach is compared with the previously published asymptotic Tracy-Widom approximation approach. It is shown that our results are in perfect agreement with the simulation results for any number of secondary users and received samples. © 2002-2012 IEEE.
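    The threshold-setting idea described above can be illustrated with a one-variable Gaussian moment-matching sketch: approximate the test statistic by a normal distribution with matched mean and standard deviation, then invert its CDF at the target false-alarm probability. The moment values below are placeholders, not the exact analytical moments derived in the paper (which also involve the correlation between the two eigenvalues).

```python
from statistics import NormalDist

# Gaussian moment-matching sketch of threshold selection (illustrative
# only; the paper's threshold uses the exact moments of both extreme
# eigenvalues and their correlation coefficient).

def gaussian_threshold(mean, std, p_fa):
    """Decision threshold so that P(statistic > threshold | H0) = p_fa,
    under a Gaussian approximation of the test statistic."""
    return NormalDist(mean, std).inv_cdf(1.0 - p_fa)

# Placeholder moments for the eigenvalue-ratio statistic under H0:
thr = gaussian_threshold(mean=2.5, std=0.4, p_fa=0.01)
```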

  3. Collective gyromagnetic ratio and moment of inertia from density-dependent Hartree-Fock calculations

    International Nuclear Information System (INIS)

    Sprung, D.W.L.; Lie, S.G.; Vallieres, M.; Quentin, P.

    1979-01-01

    The collective gyromagnetic ratio and moment of inertia of deformed even-even axially symmetric nuclei are calculated in the cranking approximation using wave functions obtained with the Skyrme force S-III. Good agreement is found for g_R, while the moment of inertia is about 20% too small. The cranking formula leads to better agreement than the projection method. (Auth.)

  4. INCLUSION RATIO BASED ESTIMATOR FOR THE MEAN LENGTH OF THE BOOLEAN LINE SEGMENT MODEL WITH AN APPLICATION TO NANOCRYSTALLINE CELLULOSE

    Directory of Open Access Journals (Sweden)

    Mikko Niilo-Rämä

    2014-06-01

    Full Text Available A novel estimator of the mean length of fibres is proposed for censored data observed in square-shaped windows. Instead of observing the fibre lengths, we observe the ratio between the intensity estimates of minus-sampling and plus-sampling. It is well known that both intensity estimators are biased. In the current work, we derive the ratio of these biases as a function of the mean length, assuming a Boolean line segment model with exponentially distributed lengths and uniformly distributed directions. Given the observed ratio of the intensity estimators, the inverse of the derived function is suggested as a new estimator of the mean length. For this estimator, an approximation of its variance is derived. The accuracy of the approximations is evaluated by means of simulation experiments. The novel method is compared to other methods and applied to real-world industrial data from nanocrystalline cellulose.
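    The estimator described above amounts to numerically inverting the derived (monotone) bias-ratio function at the observed intensity ratio. The sketch below shows that inversion step with a hypothetical stand-in for the function; the real bias-ratio function for the Boolean segment model is the one derived in the paper.

```python
# The estimator inverts a monotone function f that maps mean fibre length
# to the expected minus/plus intensity ratio. f below is a stand-in
# (the paper derives the real one); the bisection inversion is the part
# being illustrated.

def f(mean_length, window_side=1.0):
    # Hypothetical monotone decreasing bias ratio.
    return window_side / (window_side + mean_length)

def invert(ratio_obs, lo=1e-9, hi=1e6, tol=1e-9):
    """Solve f(x) = ratio_obs for x by bisection (f is decreasing)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > ratio_obs:
            lo = mid     # f too large -> need a longer mean length
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

est = invert(0.25)   # with this stand-in f, the mean length is 3.0
```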

  5. Laser thermal annealing of Ge, optimized for highly activated dopants and diode ION/IOFF ratios

    DEFF Research Database (Denmark)

    Shayesteh, M.; O'Connell, D.; Gity, F.

    2014-01-01

    The authors compared the influence of laser thermal annealing (LTA) and rapid thermal annealing (RTA) on dopant activation and electrical performance of phosphorus and arsenic doped n+/p junctions. High carrier concentrations above 10^20 cm^-3 as well as an ION/IOFF ratio of approximately 10^5 and ide...

  6. Modeling of finite aspect ratio effects on current drive

    International Nuclear Information System (INIS)

    Wright, J.C.; Phillips, C.K.

    1996-01-01

    Most 2D RF modeling codes use a parameterization of current drive efficiencies to calculate fast wave driven currents. This parameterization assumes a uniform diffusion coefficient and requires a priori knowledge of the wave polarizations. These difficulties may be avoided by a direct calculation of the quasilinear diffusion coefficient from the Kennel-Engelmann form with the field polarizations calculated by a full wave code. This eliminates the need to use the approximation inherent in the parameterization. Current profiles are then calculated using the adjoint formulation. This approach has been implemented in the FISIC code. The accuracy of the parameterization of the current drive efficiency, η, is judged by comparison with a direct calculation in which χ is the adjoint function, ε is the kinetic energy, and Γ is the quasilinear flux. It is shown that for large aspect ratio devices (ε → 0), the parameterization is nearly identical to the direct calculation. As the aspect ratio approaches unity, visible differences between the two calculations appear.

  7. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef

    2017-06-30

    Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.

  8. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected...... by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based...

  9. Skewed Marriage Markets and Sex Ratios of Finnish People in their Twenties

    Directory of Open Access Journals (Sweden)

    Lassi Lainiala

    2014-03-01

    Full Text Available This article studies variation in regional sex ratios in Finland and outlines potential implications of the skewed sex ratios for family formation patterns. Difficulties in finding a suitable partner are typically mentioned as one of the most important reasons for remaining childless, and we explore whether this reason is apparent structurally at the regional macro level. We found significant variation in sex ratios in age groups 18–30 at the regional and sub-regional levels. Of the whole 20–29-year-old population in Finland, almost 50 percent live in sub-region areas with a male surplus. As expected, a higher proportion of men compared to women appears to increase fertility of women in younger age groups. Contrary to expectations, high male-female ratios were not related to a higher proportion of women living with a partner.

  10. Rational approximation of vertical segments

    Science.gov (United States)

    Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte

    2007-08-01

    In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.

  11. On Nash-Equilibria of Approximation-Stable Games

    Science.gov (United States)

    Awasthi, Pranjal; Balcan, Maria-Florina; Blum, Avrim; Sheffet, Or; Vempala, Santosh

    One reason for wanting to compute an (approximate) Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or ɛ-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. Motivated by this consideration, in this paper we define the notion of games that are approximation stable, meaning that all ɛ-approximate equilibria are contained inside a small ball of radius Δ around a true equilibrium, and investigate a number of their properties. Many natural small games such as matching pennies and rock-paper-scissors are indeed approximation stable. We furthermore show that there exist 2-player n-by-n approximation-stable games in which the Nash equilibrium and all approximate equilibria have support Ω(log n). On the other hand, we show all (ɛ,Δ) approximation-stable games must have an ɛ-equilibrium of support O((Δ^{2-o(1)}/ɛ^2) log n), yielding an immediate n^{O((Δ^{2-o(1)}/ɛ^2) log n)}-time algorithm, improving over the bound of [11] for games satisfying this condition. In addition, we give a polynomial-time algorithm for the case that Δ and ɛ are sufficiently close together. We also consider an inverse property, namely that all non-approximate equilibria are far from some true equilibrium, and give an efficient algorithm for games satisfying that condition.

  12. Does Height to Width Ratio Correlate with Mean Volume in Gastropods?

    Science.gov (United States)

    Barriga, R.; Seixas, G.; Payne, J.

    2012-12-01

    Marine organisms' shell shape and size convey important biological information. For example, shape and size can dictate how the organism ranges for food and escapes predation. Due to lack of data and analysis, the evolution of shell size in marine gastropods (snails) remains poorly known. In this study, I attempt to find the relationship between height to width ratio and mean volume. I collected height and width measurements from primary literature sources and calculated volume from these measurements. My results indicate that there was no correlation between height to width ratio and mean volume between 500 and 200 Ma, but there was a correlation from 200 Ma to the present, where there is a steady increase in both height to width ratio and mean volume. This means that shell shape was not an important factor at the beginning of gastropod evolution, but after 200 Ma body size evolution was increasingly driven by the height to width ratio.

  13. Legendre-tau approximations for functional differential equations

    Science.gov (United States)

    Ito, K.; Teglas, R.

    1986-01-01

    The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and comparison between the latter and cubic spline approximation is made.

  14. Local density approximations for relativistic exchange energies

    International Nuclear Information System (INIS)

    MacDonald, A.H.

    1986-01-01

    The use of local density approximations to approximate exchange interactions in relativistic electron systems is reviewed. Particular attention is paid to the physical content of these exchange energies by discussing results for the uniform relativistic electron gas from a new point of view. Work on applying these local density approximations in atoms and solids is reviewed and it is concluded that good accuracy is usually possible provided self-interaction corrections are applied. The local density approximations necessary for spin-polarized relativistic systems are discussed and some new results are presented.

  15. The 5th Canadian Symposium on Hepatitis C Virus: We Are Not Done Yet—Remaining Challenges in Hepatitis C

    Directory of Open Access Journals (Sweden)

    Nicholas van Buuren

    2016-01-01

    Hepatitis C virus (HCV) affects approximately 268,000 Canadians and results in more years of life lost than any other infectious disease in the country. Both the Canadian Institutes of Health Research (CIHR) and the Public Health Agency of Canada (PHAC) have identified HCV-related liver disease as a priority and supported the establishment of a National Hepatitis C Research Network. In 2015, the introduction of new interferon (IFN)-free therapies with high cure rates (>90%) and few side effects revolutionized HCV therapy. However, a considerable proportion of the population remains undiagnosed and treatment uptake remains low in Canada due to financial, geographical, cultural, and social barriers. Comprehensive prevention strategies, including enhanced harm reduction, broader screening, widespread treatment, and vaccine development, are far from being realized. The theme of the 2016 symposium, “We’re not done yet: remaining challenges in Hepatitis C,” was focused on identifying strategies to enhance prevention, diagnosis, and treatment of HCV to reduce disease burden and ultimately eliminate HCV in Canada.

  16. Prognostic value of neutrophil-to-lymphocyte ratio and platelet-to-lymphocyte ratio in acute pulmonary embolism: a systematic review and meta-analysis.

    Science.gov (United States)

    Wang, Qian; Ma, Junfen; Jiang, Zhiyun; Ming, Liang

    2018-02-01

    Neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) have been reported to predict prognosis of acute pulmonary embolism (PE). However, the prognostic value of NLR and PLR remained inconsistent between studies. The aim of this meta-analysis was to assess the prognostic role of NLR and PLR in acute PE. We systematically searched Pubmed, Embase, Web of Science and CNKI for relevant literature up to March 2017. The pooled statistics for all outcomes were expressed as odds ratios (OR) and 95% confidence intervals (95% CI). The statistical analyses were performed using Review Manager 5.3.5 analysis software and Stata software. In total, 7 eligible studies comprising 2323 patients were enrolled in our meta-analysis. Elevated NLR was significantly associated with overall (short-term and long-term) mortality (OR 10.13, 95% CI 6.57-15.64). Our analysis revealed that NLR and PLR are promising biomarkers in predicting prognosis in acute PE patients. We suggest NLR and PLR be used routinely in PE prognostic assessment.
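
    Pooling odds ratios across studies, as done here, typically uses inverse-variance weighting on the log-OR scale. A minimal fixed-effect sketch with made-up ORs and confidence intervals (not the meta-analysis data):

```python
import math

def pooled_or(studies):
    """Fixed-effect inverse-variance pooling of odds ratios. Each entry
    is (OR, lower 95% CI, upper 95% CI); the standard error on the
    log-OR scale is recovered from the CI width, and weights are 1/SE^2.
    Input values are illustrative only."""
    num = den = 0.0
    for or_, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2.0 * 1.96)  # CI width -> SE
        w = 1.0 / se ** 2
        num += w * math.log(or_)
        den += w
    return math.exp(num / den)

print(round(pooled_or([(2.0, 1.0, 4.0), (3.0, 1.5, 6.0)]), 3))  # 2.449
```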

  17. PADÉ APPROXIMANTS FOR THE EQUATION OF STATE FOR RELATIVISTIC HYDRODYNAMICS BY KINETIC THEORY

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Shang-Hsi; Yang, Jaw-Yen, E-mail: shanghsi@gmail.com [Institute of Applied Mechanics, National Taiwan University, Taipei 10764, Taiwan (China)

    2015-07-20

    A two-point Padé approximant (TPPA) algorithm is developed for the equation of state (EOS) for relativistic hydrodynamic systems, which are described by the classical Maxwell–Boltzmann statistics and the semiclassical Fermi–Dirac statistics with complete degeneracy. The underlying rational function is determined by the ratios of the macroscopic state variables with various orders of accuracy taken at the extreme relativistic limits. The nonunique TPPAs are validated by Taub's inequality for the consistency of the kinetic theory and the special theory of relativity. The proposed TPPA is utilized in deriving the EOS of the dilute gas and in calculating the specific heat capacity, the adiabatic index function, and the isentropic sound speed of the ideal gas. Some general guidelines are provided for the application of an arbitrary accuracy requirement. The superiority of the proposed TPPA is manifested in manipulating the constituent polynomials of the approximants, which avoids the arithmetic complexity of struggling with the modified Bessel functions and the hyperbolic trigonometric functions arising from the relativistic kinetic theory.
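
    The two-point construction in the paper is involved, but the core idea of replacing a truncated series by a rational function is simple to show. A toy one-point [1/1] Padé approximant of exp(x), purely for illustration (the paper's approximants are matched at both relativistic limits):

```python
import math

def pade_1_1_exp(x):
    """[1/1] Pade approximant of exp(x) built from its Maclaurin series:
    exp(x) ~ (1 + x/2) / (1 - x/2). A toy one-point example, not the
    paper's two-point construction."""
    return (1.0 + x / 2.0) / (1.0 - x / 2.0)

print(round(pade_1_1_exp(0.1), 6), round(math.exp(0.1), 6))
```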

  18. Some relations between entropy and approximation numbers

    Institute of Scientific and Technical Information of China (English)

    郑志明

    1999-01-01

    A general result is obtained which relates the entropy numbers of compact maps on Hilbert space to their approximation numbers. Compared with previous works in this area, it is particularly convenient for dealing with cases where the approximation numbers decay rapidly. An estimate relating entropy and approximation numbers for noncompact maps is also given.

  19. Saddlepoint approximation methods in financial engineering

    CERN Document Server

    Kwok, Yue Kuen

    2018-01-01

    This book summarizes recent advances in applying saddlepoint approximation methods to financial engineering. It addresses pricing exotic financial derivatives and calculating risk contributions to Value-at-Risk and Expected Shortfall in credit portfolios under various default correlation models. These standard problems involve the computation of tail probabilities and tail expectations of the corresponding underlying state variables.  The text offers in a single source most of the saddlepoint approximation results in financial engineering, with different sets of ready-to-use approximation formulas. Much of this material may otherwise only be found in original research publications. The exposition and style are made rigorous by providing formal proofs of most of the results. Starting with a presentation of the derivation of a variety of saddlepoint approximation formulas in different contexts, this book will help new researchers to learn the fine technicalities of the topic. It will also be valuable to quanti...

  20. Approximating centrality in evolving graphs: toward sublinearity

    Science.gov (United States)

    Priest, Benjamin W.; Cybenko, George

    2017-05-01

    The identification of important nodes is a ubiquitous problem in the analysis of social networks. Centrality indices (such as degree centrality, closeness centrality, betweenness centrality, PageRank, and others) are used across many domains to accomplish this task. However, the computation of such indices is expensive on large graphs. Moreover, evolving graphs are becoming increasingly important in many applications. It is therefore desirable to develop on-line algorithms that can approximate centrality measures using memory sublinear in the size of the graph. We discuss the challenges facing the semi-streaming computation of many centrality indices. In particular, we apply recent advances in the streaming and sketching literature to provide a preliminary streaming approximation algorithm for degree centrality utilizing CountSketch and a multi-pass semi-streaming approximation algorithm for closeness centrality leveraging a spanner obtained through iteratively sketching the vertex-edge adjacency matrix. We also discuss possible ways forward for approximating betweenness centrality, as well as spectral measures of centrality. We provide a preliminary result using sketched low-rank approximations to approximate the output of the HITS algorithm.
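
    A minimal CountSketch, the frequency sketch the authors apply to degree centrality, fits in a few lines. This is an illustrative implementation with hash choices of my own, not the paper's exact construction: node occurrences in an edge stream are counted in memory independent of the number of distinct nodes.

```python
import random

class CountSketch:
    """Minimal CountSketch frequency sketch (illustrative): each of
    `rows` hash rows maps an item to a signed counter; the estimate is
    the median of the signed counters across rows."""
    def __init__(self, rows=5, cols=256, seed=0):
        rnd = random.Random(seed)
        self.cols = cols
        self.table = [[0] * cols for _ in range(rows)]
        self.salts = [(rnd.randrange(1 << 30), rnd.randrange(1 << 30))
                      for _ in range(rows)]

    def _cells(self, item):
        for r, (a, b) in enumerate(self.salts):
            bucket = hash((a, item)) % self.cols
            sign = 1 if hash((b, item)) % 2 == 0 else -1
            yield r, bucket, sign

    def add(self, item):
        for r, bucket, sign in self._cells(item):
            self.table[r][bucket] += sign

    def estimate(self, item):
        vals = sorted(sign * self.table[r][bucket]
                      for r, bucket, sign in self._cells(item))
        return vals[len(vals) // 2]  # median across rows

# Approximate degree of node 1 from a stream of edges
sketch = CountSketch()
for u, v in [(1, 2), (1, 3), (1, 4)]:
    sketch.add(u)
    sketch.add(v)
print(sketch.estimate(1))
```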

  1. Axiomatic Characterizations of IVF Rough Approximation Operators

    Directory of Open Access Journals (Sweden)

    Guangji Yu

    2014-01-01

    Full Text Available This paper is devoted to the study of axiomatic characterizations of IVF rough approximation operators. IVF approximation spaces are investigated. The fact that different IVF operators satisfy some axioms to guarantee the existence of different types of IVF relations which produce the same operators is proved and then IVF rough approximation operators are characterized by axioms.

  2. Approximation properties of fine hyperbolic graphs

    Indian Academy of Sciences (India)

    2016-08-26

    In this paper, we propose a definition of approximation property, called the metric invariant translation approximation property, for a countable discrete metric space.

  3. Efficient automata constructions and approximate automata

    NARCIS (Netherlands)

    Watson, B.W.; Kourie, D.G.; Ngassam, E.K.; Strauss, T.; Cleophas, L.G.W.A.

    2008-01-01

    In this paper, we present data structures and algorithms for efficiently constructing approximate automata. An approximate automaton for a regular language L is one which accepts at least L. Such automata can be used in a variety of practical applications, including network security pattern matching.

  4. Efficient automata constructions and approximate automata

    NARCIS (Netherlands)

    Watson, B.W.; Kourie, D.G.; Ngassam, E.K.; Strauss, T.; Cleophas, L.G.W.A.; Holub, J.; Zdárek, J.

    2006-01-01

    In this paper, we present data structures and algorithms for efficiently constructing approximate automata. An approximate automaton for a regular language L is one which accepts at least L. Such automata can be used in a variety of practical applications, including network security pattern matching.

  5. Approximation of the semi-infinite interval

    Directory of Open Access Journals (Sweden)

    A. McD. Mercer

    1980-01-01

    The approximation of a function f∈C[a,b] by Bernstein polynomials is well-known. It is based on the binomial distribution. O. Szasz has shown that there are analogous approximations on the interval [0,∞) based on the Poisson distribution. Recently R. Mohapatra has generalized Szasz's result to the case in which the approximating function is αe^{−ux} ∑_{k=N}^{∞} ((ux)^{kα+β−1}/Γ(kα+β)) f(kα/u). The present note shows that these results are special cases of a Tauberian theorem for certain infinite series having positive coefficients.
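
    The Poisson-weighted operator underlying Szasz's result can be evaluated directly. A sketch of the classical Szász–Mirakyan operator S_n(f)(x) = e^{-nx} Σ_k (nx)^k/k! · f(k/n) (the simplest case, without Mohapatra's α, β generalization):

```python
import math

def szasz(f, n, x, terms=200):
    """Szasz-Mirakyan operator: S_n(f)(x) = e^{-nx} sum_k (nx)^k/k! f(k/n),
    the Poisson-weights analogue on [0, inf) of the Bernstein polynomials.
    The infinite sum is truncated at `terms` terms."""
    if x == 0:
        return f(0.0)
    total = 0.0
    for k in range(terms):
        log_w = k * math.log(n * x) - n * x - math.lgamma(k + 1)  # log Poisson pmf
        total += math.exp(log_w) * f(k / n)
    return total

# The operator reproduces f(t) = t exactly, since the Poisson mean is nx
print(abs(szasz(lambda t: t, n=50, x=1.0) - 1.0) < 1e-9)  # True
```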

  6. Rational approximations for tomographic reconstructions

    International Nuclear Information System (INIS)

    Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas

    2013-01-01

    We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp–Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image. (paper)

  7. 'LTE-diffusion approximation' for arc calculations

    International Nuclear Information System (INIS)

    Lowke, J J; Tanaka, M

    2006-01-01

    This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on D_e/W, where D_e is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations which include diffusion of charges agree with experimental results of current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode.

  8. A BLUEPRINT OF RATIO ANALYSIS AS INFORMATION BASIS OF CORPORATION FINANCIAL MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Maja Andrijasevic

    2014-10-01

    Ratio analysis, due to its simplicity, has long been one of the most frequently used methods of financial analysis. However, the question is whether its results are a good basis for external users of financial reports to assess the financial condition of a company. If one takes into account its numerous limitations, one can rather say that ratio analysis is a rough approximation of the financial situation. What are the limitations? Can they be overcome, or at least reduced, and in what way? To what extent must the user take a reserved attitude when making business decisions on the basis of ratio analysis? Last but not least, we should accept the fact that by insisting on ratio analysis, other aspects of analysis are frequently marginalized in practice, neglecting the fact that it is precisely those aspects that point most directly to the causes of potential disorders in a company's business activities.

  9. Dispersion in two dimensional channels—the Fick-Jacobs approximation revisited

    Science.gov (United States)

    Mangeat, M.; Guérin, T.; Dean, D. S.

    2017-12-01

    We examine the dispersion of Brownian particles in a symmetric two dimensional channel; this classical problem has been widely studied in the literature using the so-called Fick-Jacobs approximation and its various improvements. Most studies rely on the reduction to an effective one dimensional diffusion equation; here we derive an explicit formula for the diffusion constant which avoids this reduction. Using this formula, the effective diffusion constant can be evaluated numerically without resorting to Brownian simulations. In addition, a perturbation theory can be developed in ε = h_0/L, where h_0 is the characteristic channel height and L the period. This perturbation theory confirms that the results of Kalinay and Percus (2006 Phys. Rev. E 74 041203), based on the reduction to one dimensional diffusion, are exact at least to O(ε^6). Furthermore, we show how the Kalinay and Percus pseudo-linear approximation can be straightforwardly recovered. The approach proposed here can also be exploited to yield exact results in the limit ε → ∞; we show that here the diffusion constant remains finite and show how the result can be obtained with a simple physical argument. Moreover, we show that the correction to the effective diffusion constant is of order 1/ε and remarkably has some universal characteristics. Numerically, we compare the analytic results obtained with exact numerical calculations for a number of interesting channel geometries.
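
    The leading-order Fick-Jacobs result that such studies build on has a compact closed form worth recalling: for a periodic channel of half-height h(x), D_eff = D_0/(⟨h⟩⟨1/h⟩), with the averages taken over one period. A numerical sketch of that textbook formula (not the paper's exact expression):

```python
import math

def d_eff_fick_jacobs(h, L, d0=1.0, n=10000):
    """Zeroth-order Fick-Jacobs estimate of the effective diffusion
    constant in a periodic 2D channel of half-height h(x), period L:
    D_eff = D0 / (<h> * <1/h>), averages over one period (midpoint rule).
    A textbook leading-order formula, not the paper's result."""
    xs = [(i + 0.5) * L / n for i in range(n)]
    mean_h = sum(h(x) for x in xs) / n
    mean_inv_h = sum(1.0 / h(x) for x in xs) / n
    return d0 / (mean_h * mean_inv_h)

# Flat channel: no entropic slowdown, so D_eff = D0
print(d_eff_fick_jacobs(lambda x: 1.0, L=1.0))  # 1.0
# Sinusoidal channel h(x) = 1 + 0.5 sin(2*pi*x): D_eff = sqrt(3)/2
print(round(d_eff_fick_jacobs(lambda x: 1.0 + 0.5 * math.sin(2 * math.pi * x), L=1.0), 4))
```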

  10. Measurement and Study of Lidar Ratio by Using a Raman Lidar in Central China

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-05-01

    We comprehensively evaluated particle lidar ratios (i.e., particle extinction to backscatter ratio) at 532 nm over Wuhan in Central China by using a Raman lidar from July 2013 to May 2015. We utilized the Raman lidar data to obtain homogeneous aerosol lidar ratios near the surface through the Raman method during no-rain nights. The lidar ratios were approximately 57 ± 7 sr, 50 ± 5 sr, and 22 ± 4 sr under the three cases with obviously different pollution levels. The haze layer below 1.8 km has a large particle extinction coefficient (from 5.4e-4 m−1 to 1.6e-4 m−1) and particle backscatter coefficient (between 1.1e-05 m−1sr−1 and 1.7e-06 m−1sr−1) in the heavily polluted case. Furthermore, the particle lidar ratios varied according to season, especially between winter (57 ± 13 sr) and summer (33 ± 10 sr). The seasonal variation in lidar ratios at Wuhan suggests that the East Asian monsoon significantly affects the primary aerosol types and aerosol optical properties in this region. The relationships between particle lidar ratios and wind indicate that large lidar ratio values correspond well with weak winds and strong northerly winds, whereas significantly low lidar ratio values are associated with prevailing southwesterly and southerly wind.

  11. Measurement and Study of Lidar Ratio by Using a Raman Lidar in Central China.

    Science.gov (United States)

    Wang, Wei; Gong, Wei; Mao, Feiyue; Pan, Zengxin; Liu, Boming

    2016-05-18

    We comprehensively evaluated particle lidar ratios (i.e., particle extinction to backscatter ratio) at 532 nm over Wuhan in Central China by using a Raman lidar from July 2013 to May 2015. We utilized the Raman lidar data to obtain homogeneous aerosol lidar ratios near the surface through the Raman method during no-rain nights. The lidar ratios were approximately 57 ± 7 sr, 50 ± 5 sr, and 22 ± 4 sr under the three cases with obviously different pollution levels. The haze layer below 1.8 km has a large particle extinction coefficient (from 5.4e-4 m(-1) to 1.6e-4 m(-1)) and particle backscatter coefficient (between 1.1e-05 m(-1)sr(-1) and 1.7e-06 m(-1)sr(-1)) in the heavily polluted case. Furthermore, the particle lidar ratios varied according to season, especially between winter (57 ± 13 sr) and summer (33 ± 10 sr). The seasonal variation in lidar ratios at Wuhan suggests that the East Asian monsoon significantly affects the primary aerosol types and aerosol optical properties in this region. The relationships between particle lidar ratios and wind indicate that large lidar ratio values correspond well with weak winds and strong northerly winds, whereas significantly low lidar ratio values are associated with prevailing southwesterly and southerly wind.

  12. Nonlinear approximation with general wave packets

    DEFF Research Database (Denmark)

    Borup, Lasse; Nielsen, Morten

    2005-01-01

    We study nonlinear approximation in the Triebel-Lizorkin spaces with dictionaries formed by dilating and translating one single function g. A general Jackson inequality is derived for best m-term approximation with such dictionaries. In some special cases where g has a special structure, a complete...

  13. Approximations for stop-loss reinsurance premiums

    NARCIS (Netherlands)

    Reijnen, Rajko; Albers, Willem/Wim; Kallenberg, W.C.M.

    2005-01-01

    Various approximations of stop-loss reinsurance premiums are described in literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as to a quantitative criterion. For the aggregate claims two models are considered.

  14. Approximation properties of haplotype tagging

    Directory of Open Access Journals (Sweden)

    Dreiseitl Stephan

    2006-01-01

    Background Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n² − n)/2) for n haplotypes, but not approximable within (1 − ε) ln(n/2) for any ε > 0 unless NP ⊂ DTIME(n^{log log n}). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m − p + 1)(n² − n)/2) ≤ O(m(n² − n)/2), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion The haplotype tagging problem is hard, but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single processor machine. Hence, significant improvement in computational effort expended can only be expected if the computational effort is distributed and done in parallel.
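
    The "surprisingly simple algorithm" achieving a 1 + ln((n² − n)/2) bound reads as greedy set cover over haplotype pairs: repeatedly pick the SNP that distinguishes the most not-yet-distinguished pairs. A small sketch under that reading (my own implementation, not the authors' code):

```python
from itertools import combinations

def greedy_tag(haplotypes):
    """Greedy SNP tagging: repeatedly choose the SNP (column) that
    distinguishes the largest number of not-yet-distinguished haplotype
    pairs, until all pairs are separated. This is the classic greedy
    set-cover strategy behind the 1 + ln((n^2 - n)/2) bound."""
    n, m = len(haplotypes), len(haplotypes[0])
    pairs = set(combinations(range(n), 2))
    chosen = []
    while pairs:
        best, covered = None, set()
        for j in range(m):
            c = {(a, b) for (a, b) in pairs
                 if haplotypes[a][j] != haplotypes[b][j]}
            if len(c) > len(covered):
                best, covered = j, c
        if best is None:
            raise ValueError("duplicate haplotypes cannot be distinguished")
        chosen.append(best)
        pairs -= covered
    return chosen

print(greedy_tag(["0011", "0101", "0110", "1001"]))  # [1, 2]
```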

  15. Approximation for the adjoint neutron spectrum

    International Nuclear Information System (INIS)

    Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da

    2002-01-01

    The aim of this work is the determination of an analytical approximation capable of reproducing the adjoint neutron flux in the energy range of the narrow resonances (NR). In a previous work we developed a method for calculating the adjoint spectrum from the adjoint neutron balance equations obtained by the collision probabilities method; that method involved a considerable amount of numerical calculation. In the analytical method some approximations were made, such as multiplying the escape probability in the fuel by the adjoint flux in the moderator; after these approximations, for the case of the narrow resonances, the expressions were substituted into the adjoint neutron balance equation for the fuel, resulting in an analytical approximation for the adjoint flux. The results obtained in this work were compared to those generated with the reference method, demonstrating good and precise results for the adjoint neutron flux in the narrow resonances. (author)

  16. Operator approximant problems arising from quantum theory

    CERN Document Server

    Maher, Philip J

    2017-01-01

    This book offers an account of a number of aspects of operator theory, mainly developed since the 1980s, whose problems have their roots in quantum theory. The research presented is in non-commutative operator approximation theory or, to use Halmos' terminology, in operator approximants. Focusing on the concept of approximants, this self-contained book is suitable for graduate courses.

  17. Analytical assessment of some characteristic ratios for s-wave superconductors

    Science.gov (United States)

    Gonczarek, Ryszard; Krzyzosiak, Mateusz; Gonczarek, Adam; Jacak, Lucjan

    2018-04-01

    We evaluate some thermodynamic quantities and characteristic ratios that describe low- and high-temperature s-wave superconducting systems. Based on a set of fundamental equations derived within the conformal transformation method, a simple model is proposed and studied analytically. After including a one-parameter class of fluctuations in the density of states, the mathematical structure of the s-wave superconducting gap, the free energy difference, and the specific heat difference is found and discussed in an analytic manner. Both the zero-temperature limit T = 0 and the subcritical temperature range T ≲ T_c are discussed using the method of successive approximations. The equation for the ratio R_1, relating the zero-temperature energy gap and the critical temperature, is formulated and solved numerically for various values of the model parameter. Other thermodynamic quantities are analyzed, including a characteristic ratio R_2, quantifying the dynamics of the specific heat jump at the critical temperature. It is shown that the obtained model results coincide with experimental data for low-T_c superconductors. The prospect of application of the presented model in studies of high-T_c superconductors and other superconducting systems of the new generation is also discussed.
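
    The ratio R_1 described here relates the zero-temperature gap to the critical temperature; in BCS weak coupling it is 2Δ(0)/(k_B T_c) ≈ 3.53. A small numeric sketch with illustrative inputs (not the paper's model values):

```python
K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def gap_to_tc_ratio(delta0_ev, tc_kelvin):
    """Characteristic ratio R1 = 2*Delta(0) / (k_B * T_c); the BCS
    weak-coupling value is about 3.53. Inputs are illustrative."""
    return 2.0 * delta0_ev / (K_B_EV * tc_kelvin)

# A gap chosen at the BCS weak-coupling value Delta(0) = 1.764 k_B T_c:
tc = 10.0
print(round(gap_to_tc_ratio(1.764 * K_B_EV * tc, tc), 3))  # 3.528
```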

  18. Optimal Design for Hybrid Ratio of Carbon/Basalt Hybrid Fiber Reinforced Resin Matrix Composites

    Directory of Open Access Journals (Sweden)

    XU Hong

    2017-08-01

    The optimum hybrid ratio range of carbon/basalt hybrid fiber reinforced resin composites was studied. Hybrid fiber composites with nine different hybrid ratios were prepared before tensile testing. According to the structural features of the plain weave, the unit cell's performance parameters were calculated. A finite element model was established by using SHELL181 in ANSYS. The simulated values of the sample stiffness in the model closely matched the experimental ones. The stress nephogram shows that there is a critical hybrid ratio which divides the failure mechanism of HFRP into a single failure state and a multiple failure state. The tensile modulus, strength and limit tensile strain of HFRP with 45% resin are simulated by the finite element method. The result shows that the tensile modulus of HFRP with 60% hybrid ratio increases by 93.4% compared with basalt fiber composites (BFRP), and the limit tensile strain increases by 11.3% compared with carbon fiber composites (CFRP).
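
    A back-of-envelope check on why stiffness rises with carbon content is the rule of mixtures. The sketch below uses textbook fiber moduli and an assumed fiber volume fraction (hypothetical values, not the paper's finite element model or measured properties):

```python
def hybrid_modulus(ratio_carbon, e_carbon=230.0, e_basalt=89.0,
                   e_resin=3.5, v_fiber=0.55):
    """Rule-of-mixtures estimate (GPa) of the tensile modulus of a
    carbon/basalt hybrid laminate: fibers occupy volume fraction
    v_fiber, split between carbon and basalt by ratio_carbon. Moduli
    and v_fiber are illustrative textbook values."""
    e_fiber = ratio_carbon * e_carbon + (1.0 - ratio_carbon) * e_basalt
    return v_fiber * e_fiber + (1.0 - v_fiber) * e_resin

print(round(hybrid_modulus(0.6), 2))  # 60% carbon hybrid ratio
print(round(hybrid_modulus(0.0), 2))  # all-basalt baseline (BFRP-like)
```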

  19. Toxicity ratios: Their use and abuse in predicting the risk from induced cancer

    International Nuclear Information System (INIS)

    Mays, C.W.; Taylor, G.N.; Lloyd, R.D.

    1986-01-01

    The toxicity ratio concept assumes the validity of certain relationships. In some examples for bone sarcoma induction, the approximate toxicity of 239Pu in man can be calculated algebraically from the observed toxicity in the radium-dial painters and the ratio of 239Pu/226Ra toxicities in suitable laboratory mammals. In a species highly susceptible to bone sarcoma induction, the risk coefficients for both 239Pu and 226Ra are elevated, but the toxicity ratio of 239Pu to 226Ra tends to be similar to the ratio in resistant species. Among the tested species the toxicity ratio of 239Pu to 226Ra ranged from 6 to 22 (a fourfold range), whereas their relative sensitivities to 239Pu varied by a factor of 150. The toxicity ratio approach can also be used to estimate the actinide risk to man from liver cancer, by comparing to the Thorotrast patients; from lung cancer, by comparing to the uranium miners and the atomic-bomb survivors; and from neutron-induced cancers, by comparing to cancers induced by gamma rays. The toxicity ratio can be used to predict the risk to man from a specific type of cancer that has been reliably induced by a reference radiation in humans and that can be induced by both the reference and the investigated radiation in suitable laboratory animals. 26 refs., 3 figs., 1 tab
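
    The algebra of the toxicity-ratio extrapolation is a single line: the predicted human risk for the investigated nuclide is the observed human risk for the reference nuclide scaled by the animal-derived toxicity ratio. A sketch with purely illustrative numbers (not data from the abstract):

```python
def predicted_human_risk(human_risk_reference, animal_risk_test,
                         animal_risk_reference):
    """Toxicity-ratio extrapolation: predicted human risk for the
    investigated radionuclide = observed human risk for the reference
    radionuclide * (animal risk for test / animal risk for reference).
    All numbers used below are purely illustrative."""
    return human_risk_reference * (animal_risk_test / animal_risk_reference)

# If 239Pu were 16x as toxic as 226Ra per unit dose in a lab species,
# the human 239Pu prediction would be 16x the human 226Ra risk:
print(predicted_human_risk(1.0, 16.0, 1.0))  # 16.0
```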

  20. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-01-01

    We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and its sampling error. We show that the famous Kalman update formula is a particular case of this update.

  1. Quirks of Stirling's Approximation

    Science.gov (United States)

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…

  2. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-06-23

    We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and its sampling error. We show that the famous Kalman update formula is a particular case of this update.

  3. The Liquidity Coverage Ratio: the need for further complementary ratios?

    OpenAIRE

    Ojo, Marianne

    2013-01-01

    This paper considers components of the Liquidity Coverage Ratio – as well as certain prevailing gaps which may necessitate the introduction of a complementary liquidity ratio. The definitions and objectives accorded to the Liquidity Coverage Ratio (LCR) and Net Stable Funding Ratio (NSFR) highlight the focus which is accorded to time horizons for funding bank operations. A ratio which would focus on the rate of liquidity transformations and which could also serve as a complementary metric gi...

  4. Diophantine approximation and Dirichlet series

    CERN Document Server

    Queffélec, Hervé

    2013-01-01

    This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...

  5. Validity of the "sharp-kink approximation" for water and other fluids.

    Science.gov (United States)

    Garcia, R; Osborne, K; Subashi, E

    2008-07-10

    The contact angle of a liquid droplet on a solid surface is a direct measure of fundamental atomic-scale forces acting between liquid molecules and the solid surface. In this work, we assess the validity of a simple equation which approximately relates the contact angle of a liquid on a surface to its density, its surface tension, and the effective molecule-surface potential. This equation is derived in the sharp-kink approximation, where the density profile of the liquid is assumed to drop precipitously within one molecular diameter of the substrate. It is found that this equation satisfactorily reproduces the temperature-dependence of the contact angle for helium on alkali metal surfaces. The equation also seems to be applicable to liquids such as water on solid surfaces such as gold and graphite, on the basis of a comparison of predicted and measured contact angles near room temperature. Nevertheless, we conclude that, to fully test the equation's applicability to fluids such as water, it remains necessary to measure the contact angle's temperature-dependence. We hypothesize that the effects of electrostatic forces can increase with temperature, potentially driving the wetting temperature much higher and closer to the critical point, or lower, closer to room temperature, than predicted using current theories.
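
    For context, the contact angle enters macroscopic theory through Young's relation between the three interfacial tensions. A minimal helper (standard background physics, not the sharp-kink equation the paper tests):

```python
import math

def young_contact_angle_deg(gamma_sv, gamma_sl, gamma_lv):
    """Young's relation cos(theta) = (gamma_sv - gamma_sl) / gamma_lv,
    linking solid-vapor, solid-liquid, and liquid-vapor tensions to the
    contact angle. Tensions in consistent units, e.g. mN/m."""
    return math.degrees(math.acos((gamma_sv - gamma_sl) / gamma_lv))

# Equal solid-vapor and solid-liquid tensions give a 90 degree angle
print(young_contact_angle_deg(30.0, 30.0, 72.0))  # 90.0
```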

  6. APPROXIMATIONS TO PERFORMANCE MEASURES IN QUEUING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kambo, N. S.

    2012-11-01

    Full Text Available Approximations to various performance measures in queuing systems have received considerable attention because these measures have wide applicability. In this paper we propose two methods to approximate the queuing characteristics of a GI/M/1 system. The first method is non-parametric in nature, using only the first three moments of the arrival distribution. The second method treads the known path of approximating the arrival distribution by a mixture of two exponential distributions by matching the first three moments. Numerical examples and optimal analysis of performance measures of GI/M/1 queues are provided to illustrate the efficacy of the methods, and are compared with benchmark approximations.
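Once the arrival distribution has been approximated as described, the standard GI/M/1 characteristics follow from the classical root σ of σ = A*(μ(1−σ)), where A* is the Laplace-Stieltjes transform of the interarrival distribution and μ the service rate. A minimal sketch of that classical computation (not the paper's method; the Erlang-2 arrival process and all numeric values below are illustrative assumptions):

```python
def gi_m_1_root(lst, mu, tol=1e-12, max_iter=100000):
    """Fixed-point iteration for sigma = A*(mu*(1 - sigma)), the key
    quantity of GI/M/1 theory; converges for utilization rho < 1."""
    sigma = 0.0
    for _ in range(max_iter):
        new = lst(mu * (1.0 - sigma))
        if abs(new - sigma) < tol:
            return new
        sigma = new
    return sigma

# Illustrative arrival process: Erlang-2 interarrivals with arrival rate 0.8,
# exponential service with rate mu = 1.0 (so rho = 0.8).
lam, mu = 0.8, 1.0
stage_rate = 2.0 * lam  # rate of each of the two Erlang stages
lst = lambda s: (stage_rate / (stage_rate + s)) ** 2  # Laplace-Stieltjes transform

sigma = gi_m_1_root(lst, mu)
mean_wait = sigma / (mu * (1.0 - sigma))  # mean waiting time in queue
```

Because Erlang-2 arrivals are less variable than Poisson arrivals, σ comes out below the M/M/1 value ρ = 0.8, and the mean wait is correspondingly shorter.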

  7. Postzygotic incompatibilities between the pupfishes, Cyprinodon elegans and Cyprinodon variegatus: hybrid male sterility and sex ratio bias.

    Science.gov (United States)

    Tech, C

    2006-11-01

    I examined the intrinsic postzygotic incompatibilities between two pupfishes, Cyprinodon elegans and Cyprinodon variegatus. Laboratory hybridization experiments revealed evidence of strong postzygotic isolation. Male hybrids have very low fertility, and the survival of backcrosses into C. elegans was substantially reduced. In addition, several crosses produced female-biased sex ratios. Crosses involving C. elegans females and C. variegatus males produced only females, and in backcrosses involving hybrid females and C. elegans males, males made up approximately 25% of the offspring. All other crosses produced approximately 50% males. These sex ratios could be explained by genetic incompatibilities that occur, at least in part, on sex chromosomes. Thus, these results provide strong albeit indirect evidence that pupfish have XY chromosomal sex determination. The results of this study provide insight on the evolution of reproductive isolating mechanisms, particularly the role of Haldane's rule and the 'faster-male' theory in taxa lacking well-differentiated sex chromosomes.

  8. The Methane to Carbon Dioxide Ratio Produced during Peatland Decomposition and a Simple Approach for Distinguishing This Ratio

    Science.gov (United States)

    Chanton, J.; Hodgkins, S. B.; Cooper, W. T.; Glaser, P. H.; Corbett, J. E.; Crill, P. M.; Saleska, S. R.; Rich, V. I.; Holmes, B.; Hines, M. E.; Tfaily, M.; Kostka, J. E.

    2014-12-01

    Peatland organic matter is cellulose-like with an oxidation state of approximately zero. When this material decomposes by fermentation, stoichiometry dictates that CH4 and CO2 should be produced in a ratio approaching one. While this is generally the case in temperate zones, this production ratio is often departed from in boreal peatlands, where the ratio of belowground CH4/CO2 production varies between 0.1 and 1, indicating CO2 production by a mechanism in addition to fermentation. The in situ CO2/CH4 production ratio may be ascertained by analysis of the 13C isotopic composition of these products, because CO2 production unaccompanied by methane production yields CO2 with an isotopic composition similar to the parent organic matter, while methanogenesis produces 13C-depleted methane and 13C-enriched CO2. The 13C enrichment in the subsurface CO2 pool is directly related to the amount of it formed during methane production and to the isotopic composition of the methane itself. Excess CO2 production is associated with more acidic conditions, Sphagnum vegetation, high and low latitudes, methane production dominated by the hydrogenotrophic pathway, 13C-depleted methane, and, generally, more nutrient-depleted conditions. Three theories have been offered to explain these observations: 1) inhibition of acetate utilization, acetate build-up and diffusion to the surface, and eventual aerobic oxidation; 2) the use of humic acids as electron acceptors; and 3) the utilization of organic oxygen to produce CO2. In support of #3, we find that 13C-NMR, Fourier transform infrared (FTIR) spectroscopy, and Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR-MS) clearly show the evolution of polysaccharides and cellulose towards more decomposed, humified alkyl compounds stripped of the organic oxygen utilized to form CO2. Such decomposition results in more negative carbon oxidation states varying from -1 to -2. Coincident with this reduction in oxidation state is the
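The isotopic partitioning described above amounts to a two-endmember mixing calculation: observed pore-water CO2 is a mixture of 13C-enriched CO2 from methanogenesis and respiration-derived CO2 with a δ13C near that of the parent organic matter. A hedged sketch; the per-mil endmember values below are illustrative assumptions, not values from the study:

```python
def fraction_from_methanogenesis(delta_obs, delta_om, delta_meth):
    """Two-endmember 13C mass balance:
    delta_obs = f * delta_meth + (1 - f) * delta_om, solved for f,
    the fraction of the CO2 pool attributable to methanogenesis."""
    return (delta_obs - delta_om) / (delta_meth - delta_om)

# Illustrative delta-13C values (per mil): parent organic matter,
# methanogenic CO2, and an observed subsurface CO2 pool.
f = fraction_from_methanogenesis(delta_obs=-11.0, delta_om=-27.0, delta_meth=5.0)
# -> f = 0.5: half of the CO2 pool carries the methanogenic 13C enrichment
```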

  9. Golden Ratio

    Indian Academy of Sciences (India)

    Keywords. Fibonacci numbers, golden ratio, Sanskrit prosody, solar panel. Abstract. Our attraction to another body increases if the body is symmetrical and in proportion. If a face or a structure is in proportion, we are more likely to notice it and find it beautiful. The universal ratio of beauty is the 'Golden Ratio', found in many ...
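The link between Fibonacci numbers and the golden ratio mentioned in this record is that ratios of consecutive Fibonacci numbers converge to φ = (1 + √5)/2 ≈ 1.618. A quick numerical check:

```python
def fib_ratio(n):
    """Ratio F(n+1)/F(n) of consecutive Fibonacci numbers."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b / a

phi = (1 + 5 ** 0.5) / 2  # the golden ratio
ratio = fib_ratio(30)     # converges rapidly toward phi
```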

  10. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung

    2013-02-16

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time; however, their performance is not always satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  11. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming; Chen, Yuguo; Yu, Kai

    2013-01-01

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time; however, their performance is not always satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  12. Influence of remaining fission products in low-decontaminated fuel on reactor core characteristics

    International Nuclear Information System (INIS)

    Ohki, Shigeo

    2002-07-01

    Design study of core, fuel and related fuel cycle system with low-decontaminated fuel has been performed in the framework of the feasibility study (F/S) on commercialized fast reactor cycle systems. This report summarizes the influence on core characteristics of remaining fission products (FPs) in low-decontaminated fuel related to the reprocessing systems nominated in F/S phase I. For simple treatment of the remaining FPs in core neutronics calculations, the representative nuclide method, parameterized by the FP equivalent coefficient and the FP volume fraction, was developed, which enabled an efficient evaluation procedure. As a result of the investigation on the sodium-cooled fast reactor with MOX fuel designed in fiscal year 1999, it was found that the pyrochemical reprocessing with molten salt (the RIAR method) brought the largest influence. Nevertheless, it was still within the allowable range. Assuming an infinite number of recycles, the alterations in core characteristics were evaluated as follows: increment of burnup reactivity by 0.5%Δk/kk', decrement of breeding ratio by 0.04, increment of sodium void reactivity by 0.1×10^-2 Δk/kk' and decrement of Doppler constant (in absolute value) by 0.7×10^-3 Tdk/dT. (author)

  13. Estimating the water table under the Radioactive Waste Management Site in Area 5 of the Nevada Test Site using the Dupuit-Forchheimer approximation

    International Nuclear Information System (INIS)

    Lindstrom, T.F.; Barker, L.E.; Cawlfield, D.E.; Daffern, D.D.; Dozier, B.L.; Emer, D.F.; Strong, W.R.

    1992-01-01

    A two-dimensional steady-state water-flow equation for estimating the water table elevation under a thick, very dry vadose zone is developed and discussed. The Dupuit assumption is made. A prescribed downward vertical infiltration/evaporation condition is assumed at the atmosphere-soil interface. An approximation to the square of the elevation head, based upon multivariate cubic interpolation methods, is introduced. The approximation is forced to satisfy the governing elliptic (Poisson) partial differential equation over the domain of definition. The remaining coefficients are determined by interpolating the water table at eight "boundary points." Several realistic scenarios approximating the water table under the Radioactive Waste Management Site (RWMS) in Area 5 of the Nevada Test Site (NTS) are discussed

  14. Fish remains and humankind: part two

    Directory of Open Access Journals (Sweden)

    Andrew K G Jones

    1998-07-01

    Full Text Available The significance of aquatic resources to past human groups is not adequately reflected in the published literature - a deficiency which is gradually being acknowledged by the archaeological community world-wide. The publication of the following three papers goes some way to redress this problem. Originally presented at an International Council of Archaeozoology (ICAZ) Fish Remains Working Group meeting in York, U.K. in 1987, these papers offer clear evidence of the range of interest in ancient fish remains across the world. Further papers from the York meeting were published in Internet Archaeology 3 in 1997.

  15. Golden Ratio

    Indian Academy of Sciences (India)

    Our attraction to another body increases if the body is symmetrical and in proportion. If a face or a structure is in proportion, we are more likely to notice it and find it beautiful. The universal ratio of beauty is the 'Golden Ratio', found in many structures. This ratio comes from Fibonacci numbers. In this article, we explore this ...

  16. Improved radiative corrections for (e,e'p) experiments: Beyond the peaking approximation and implications of the soft-photon approximation

    International Nuclear Information System (INIS)

    Weissbach, F.; Hencken, K.; Rohe, D.; Sick, I.; Trautmann, D.

    2006-01-01

    Analyzing (e,e'p) experimental data involves corrections for radiative effects which change the interaction kinematics and which have to be carefully considered in order to obtain the desired accuracy. Missing momentum and energy due to bremsstrahlung have so far often been incorporated into the simulations and the experimental analyses using the peaking approximation. It assumes that all bremsstrahlung is emitted in the direction of the radiating particle. In this article we introduce a full angular Monte Carlo simulation method which overcomes this approximation. As a test, the angular distribution of the bremsstrahlung photons is reconstructed from H(e,e'p) data. Its width is found to be underestimated by the peaking approximation and described much better by the approach developed in this work. The impact of the soft-photon approximation on the photon angular distribution is found to be minor compared to the impact of the peaking approximation. (orig.)

  17. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  18. Toward a consistent random phase approximation based on the relativistic Hartree approximation

    International Nuclear Information System (INIS)

    Price, C.E.; Rost, E.; Shepard, J.R.; McNeil, J.A.

    1992-01-01

    We examine the random phase approximation (RPA) based on a relativistic Hartree approximation description for nuclear ground states. This model includes contributions from the negative energy sea at the one-loop level. We emphasize consistency between the treatment of the ground state and the RPA. This consistency is important in the description of low-lying collective levels but less important for the longitudinal (e,e') quasielastic response. We also study the effect of imposing a three-momentum cutoff on negative energy sea contributions. A cutoff of twice the nucleon mass improves agreement with observed spin-orbit splittings in nuclei compared to the standard infinite cutoff results, an effect traceable to the fact that imposing the cutoff reduces m*/m. Consistency is much more important than the cutoff in the description of low-lying collective levels. The cutoff model also provides excellent agreement with quasielastic (e,e') data

  19. The association between higher education and approximate number system acuity.

    Science.gov (United States)

    Lindskog, Marcus; Winman, Anders; Juslin, Peter

    2014-01-01

    Humans are equipped with an approximate number system (ANS) supporting non-symbolic numerosity representation. Studies indicate a relationship between ANS-precision (acuity) and math achievement. Whether the ANS is a prerequisite for learning mathematics or if mathematics education enhances the ANS remains an open question. We investigated the association between higher education and ANS acuity with university students majoring in subjects with varying amounts of mathematics (mathematics, business, and humanities), measured either early (First year) or late (Third year) in their studies. The results suggested a non-significant trend where students taking more mathematics had better ANS acuity and a significant improvement in ANS acuity as a function of study length that was mainly confined to the business students. The results provide partial support for the hypothesis that education in mathematics can enhance the ANS acuity.

  20. The Association Between Higher Education and Approximate Number System Acuity

    Directory of Open Access Journals (Sweden)

    Marcus eLindskog

    2014-05-01

    Full Text Available Humans are equipped with an Approximate Number System (ANS) supporting non-symbolic numerosity representation. Studies indicate a relationship between ANS-precision (acuity) and math achievement. Whether the ANS is a prerequisite for learning mathematics or if mathematics education enhances the ANS remains an open question. We investigated the association between higher education and ANS acuity with university students majoring in subjects with varying amounts of mathematics (mathematics, business, and humanities), measured either early (1st year) or late (3rd year) in their studies. The results suggested a non-significant trend where students taking more mathematics had better ANS acuity and a significant improvement in ANS acuity as a function of study length that was mainly confined to the business students. The results provide partial support for the hypothesis that education in mathematics can enhance the ANS acuity.

  1. The association between higher education and approximate number system acuity

    Science.gov (United States)

    Lindskog, Marcus; Winman, Anders; Juslin, Peter

    2014-01-01

    Humans are equipped with an approximate number system (ANS) supporting non-symbolic numerosity representation. Studies indicate a relationship between ANS-precision (acuity) and math achievement. Whether the ANS is a prerequisite for learning mathematics or if mathematics education enhances the ANS remains an open question. We investigated the association between higher education and ANS acuity with university students majoring in subjects with varying amounts of mathematics (mathematics, business, and humanities), measured either early (First year) or late (Third year) in their studies. The results suggested a non-significant trend where students taking more mathematics had better ANS acuity and a significant improvement in ANS acuity as a function of study length that was mainly confined to the business students. The results provide partial support for the hypothesis that education in mathematics can enhance the ANS acuity. PMID:24904478

  2. Seismic wave extrapolation using lowrank symbol approximation

    KAUST Repository

    Fomel, Sergey

    2012-04-30

    We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction involves Fourier transforms in space combined with the help of a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.
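The general idea of approximating a large space-wavenumber propagator matrix by a low-rank factorization can be illustrated generically. The paper's algorithm selects representative rows and columns; the truncated SVD below, applied to an arbitrary smooth test kernel, is only a stand-in for that idea, with all sizes and the rank chosen for illustration:

```python
import numpy as np

# A smooth "propagator-like" test matrix with rapidly decaying singular values
x = np.linspace(0.0, 1.0, 200)
A = np.exp(-5.0 * np.abs(x[:, None] - x[None, :]))

# Rank-k approximation via truncated SVD (Eckart-Young optimal in 2-norm)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 10
A_k = (U[:, :k] * s[:k]) @ Vt[:k]

# Relative spectral-norm error equals s[k] / s[0]
rel_err = np.linalg.norm(A - A_k, 2) / np.linalg.norm(A, 2)
```

Applying the rank-k factors instead of the full matrix reduces the cost of each matrix-vector product from O(n^2) to O(nk), which is the computational payoff low-rank wave extrapolation exploits.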

  3. Reference Information Based Remote Sensing Image Reconstruction with Generalized Nonconvex Low-Rank Approximation

    Directory of Open Access Journals (Sweden)

    Hongyang Lu

    2016-06-01

    Full Text Available Because of the contradiction between the spatial and temporal resolution of remote sensing images (RSI) and the quality loss in the process of acquisition, it is of great significance to reconstruct RSI in remote sensing applications. Recent studies have demonstrated that reference image-based reconstruction methods have great potential for higher reconstruction performance, while lacking accuracy and quality of reconstruction. For this application, a new compressed sensing objective function incorporating a reference image as prior information is developed. We resort to the reference prior information inherent in interior and exterior data simultaneously to build a new generalized nonconvex low-rank approximation framework for RSI reconstruction. Specifically, the innovation of this paper consists of the following three respects: (1) we propose a nonconvex low-rank approximation for reconstructing RSI; (2) we inject reference prior information to overcome over-smoothed edges and texture detail losses; (3) on this basis, we combine conjugate gradient algorithms and a singular-value threshold (SVT) simultaneously to solve the proposed algorithm. The performance of the algorithm is evaluated both qualitatively and quantitatively. Experimental results demonstrate that the proposed algorithm improves several dBs in terms of peak signal-to-noise ratio (PSNR) and preserves image details significantly compared to most of the current approaches without reference images as priors. In addition, the generalized nonconvex low-rank approximation of our approach is naturally robust to noise, and therefore, the proposed algorithm can handle low resolution with noisy inputs in a more unified framework.
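The SVT step this abstract refers to is commonly singular-value soft-thresholding, the proximal operator of the nuclear norm that low-rank solvers apply at each iteration. A minimal sketch on synthetic data (the matrix sizes, the true rank, the noise level, and the threshold τ are all illustrative assumptions, not the paper's settings):

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: shrink each singular value of M
    by tau and truncate at zero (the prox of tau * nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(1)
L = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 60))  # rank-5 signal
noisy = L + 0.01 * rng.standard_normal((60, 60))                 # small noise

# The noise singular values fall below tau and are zeroed out,
# so the result is again (numerically) rank 5.
denoised = svt(noisy, tau=1.0)
```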

  4. Atmospheric ammonia mixing ratios at an open-air cattle feeding facility.

    Science.gov (United States)

    Hiranuma, Naruki; Brooks, Sarah D; Thornton, Daniel C O; Auvermann, Brent W

    2010-02-01

    Mixing ratios of total and gaseous ammonia were measured at an open-air cattle feeding facility in the Texas Panhandle in the summers of 2007 and 2008. Samples were collected at the nominally upwind and downwind edges of the facility. In 2008, a series of far-field samples was also collected 3.5 km north of the facility. Ammonium concentrations were determined by two complementary laboratory methods, a novel application of visible spectrophotometry and standard ion chromatography (IC). Results of the two techniques agreed very well, and spectrophotometry is faster, easier, and cheaper than chromatography. Ammonia mixing ratios measured at the immediate downwind site were drastically higher (approximately 2900 parts per billion by volume [ppbv]) than those measured at the upwind site (open-air animal feeding operations, especially under the hot and dry conditions present during these measurements.

  5. Approximation algorithms for guarding holey polygons ...

    African Journals Online (AJOL)

    Guarding edges of polygons is a version of the art gallery problem. The goal is finding the minimum number of guards to cover the edges of a polygon. This problem is NP-hard, and to our knowledge there are approximation algorithms just for simple polygons. In this paper we present two approximation algorithms for guarding ...

  6. Approximation properties of fine hyperbolic graphs

    Indian Academy of Sciences (India)

    2010 Mathematics Subject Classification. 46L07. 1. Introduction. Given a countable discrete group G, some nice approximation properties for the reduced. C∗-algebras C∗ r (G) can give us the approximation properties of G. For example, Lance. [7] proved that the nuclearity of C∗ r (G) is equivalent to the amenability of G; ...

  7. Golden Ratio

    Indian Academy of Sciences (India)

    Our attraction to another body increases if the body is symmetrical and in proportion. If a face or a structure is in proportion, we are more likely to notice it and find it beautiful. The universal ratio of beauty is the 'Golden Ratio', found in many structures. This ratio comes from Fibonacci numbers. In this article, we explore this ...

  8. Long‐term trends in fall age ratios of black brant

    Science.gov (United States)

    Ward, David H.; Amundson, Courtney L.; Stehn, Robert A.; Dau, Christian P.

    2018-01-01

    Accurate estimates of the age composition of populations can inform past reproductive success and future population trajectories. We examined fall age ratios (juveniles:total birds) of black brant (Branta bernicla nigricans; brant) staging at Izembek National Wildlife Refuge near the tip of the Alaska Peninsula, southwest Alaska, USA, 1963 to 2015. We also investigated variation in fall age ratios associated with sampling location, an index of flock size, survey effort, day of season, observer, survey platform (boat‐ or land‐based) and tide stage. We analyzed data using logistic regression models implemented in a Bayesian framework. Mean predicted fall age ratio controlling for survey effort, day of year, and temporal and spatial variation was 0.24 (95% CL = 0.23, 0.25). Overall trend in age ratios was −0.6% per year (95% CL = −1.3%, 0.2%), resulting in an approximate 26% decline in the proportion of juveniles over the study period. We found evidence for variation across a range of variables implying that juveniles are not randomly distributed in space and time within Izembek Lagoon. Age ratios varied by location within the study area and were highly variable among years. They decreased with the number of birds aged (an index of flock size) and increased throughout September before leveling off in early October and declining in late October. Age ratios were similar among tide stages and observers and were lower during boat‐based (offshore) than land‐based (nearshore) surveys. Our results indicate surveys should be conducted annually during early to mid‐October to ensure the entire population is present and available for sampling, and throughout Izembek Lagoon to account for spatiotemporal variation in age ratios. Sampling should include a wide range of flock sizes representative of their distribution and occur in flocks located near and off shore. Further research evaluating the cause of declining age ratios in the fall population is necessary
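As a sanity check on the figures quoted above, compounding the estimated -0.6% annual trend over the 1963-2015 study period reproduces the reported overall decline of roughly 26%:

```python
annual_trend = -0.006            # -0.6% per year (point estimate from the abstract)
years = 2015 - 1963              # length of the study period
remaining = (1 + annual_trend) ** years
total_decline = 1 - remaining    # fraction of the initial age ratio lost
# total_decline is approximately 0.27, consistent with the quoted ~26% decline
```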

  9. Symmetric Anderson impurity model: Magnetic susceptibility, specific heat and Wilson ratio

    Science.gov (United States)

    Zalom, Peter; Pokorný, Vladislav; Janiš, Václav

    2018-05-01

    We extend the spin-polarized effective-interaction approximation of the parquet renormalization scheme from Refs. [1,2] applied on the symmetric Anderson model by adding the low-temperature asymptotics of the total energy and the specific heat. We calculate numerically the Wilson ratio and determine analytically its asymptotic value in the strong-coupling limit. We demonstrate in this way that the exponentially small Kondo scale from the strong-coupling regime emerges in qualitatively the same way in the spectral function, magnetic susceptibility and the specific heat.

  10. Lung Abscess Remains a Life-Threatening Condition in Pediatrics – A Case Report

    Directory of Open Access Journals (Sweden)

    Chirteș Ioana Raluca

    2017-07-01

    Full Text Available Pulmonary abscess or lung abscess is a lung infection which destroys the lung parenchyma, leading to cavitation and central necrosis in localised areas filled with thick-walled purulent material. It can be primary or secondary. Lung abscesses can occur at any age, but paediatric pulmonary abscess morbidity seems to be lower than in adults. We present the case of a one-year and five-month-old male child admitted to our clinic for fever, loss of appetite and an overall altered general status. Laboratory tests revealed elevated inflammatory biomarkers, leukocytosis with neutrophilia, anaemia, thrombocytosis, low serum iron concentration and an increased lactate dehydrogenase level. Despite broad-spectrum antibiotic therapy, the patient's progress remained poor after seven days of treatment, and a CT scan established the diagnosis of a large lung abscess. Despite a change of antibiotic therapy, surgical intervention was eventually needed. There was a slow but steady improvement and, eventually, the patient was discharged after approximately five weeks.

  11. Approximate number word knowledge before the cardinal principle.

    Science.gov (United States)

    Gunderson, Elizabeth A; Spaepen, Elizabet; Levine, Susan C

    2015-02-01

    Approximate number word knowledge-understanding the relation between the count words and the approximate magnitudes of sets-is a critical piece of knowledge that predicts later math achievement. However, researchers disagree about when children first show evidence of approximate number word knowledge-before, or only after, they have learned the cardinal principle. In two studies, children who had not yet learned the cardinal principle (subset-knowers) produced sets in response to number words (verbal comprehension task) and produced number words in response to set sizes (verbal production task). As evidence of approximate number word knowledge, we examined whether children's numerical responses increased with increasing numerosity of the stimulus. In Study 1, subset-knowers (ages 3.0-4.2 years) showed approximate number word knowledge above their knower-level on both tasks, but this effect did not extend to numbers above 4. In Study 2, we collected data from a broader age range of subset-knowers (ages 3.1-5.6 years). In this sample, children showed approximate number word knowledge on the verbal production task even when only examining set sizes above 4. Across studies, children's age predicted approximate number word knowledge (above 4) on the verbal production task when controlling for their knower-level, study (1 or 2), and parents' education, none of which predicted approximation ability. Thus, children can develop approximate knowledge of number words up to 10 before learning the cardinal principle. Furthermore, approximate number word knowledge increases with age and might not be closely related to the development of exact number word knowledge. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).

  13. Do banks differently set their liquidity ratios based on their network characteristics?

    OpenAIRE

    Distinguin, Isabelle; Mahdavi-Ardekani, Aref; Tarazi, Amine

    2017-01-01

    This paper investigates the impact of interbank network topology on bank liquidity ratios. Whereas more emphasis has been put on liquidity requirements by regulators since the global financial crisis of 2007-2008, how differently shaped interbank networks impact individual bank liquidity behavior remains an open issue. We look at how bank interconnectedness within interbank loan and deposit networks affects their decision to hold more or less liquidity during normal times and distress times a...

  14. Pawlak algebra and approximate structure on fuzzy lattice.

    Science.gov (United States)

    Zhuang, Ying; Liu, Wenqi; Wu, Chin-Chia; Li, Jinhai

    2014-01-01

    The aim of this paper is to investigate the general approximation structure, weak approximation operators, and Pawlak algebra in the framework of fuzzy lattice, lattice topology, and auxiliary ordering. First, we prove that the weak approximation operator space forms a complete distributive lattice. Then we study the properties of transitive closure of approximation operators and apply them to rough set theory. We also investigate molecule Pawlak algebra and obtain some related properties.

  15. Dynamical cluster approximation plus semiclassical approximation study for a Mott insulator and d-wave pairing

    Science.gov (United States)

    Kim, SungKun; Lee, Hunpyo

    2017-06-01

    Via a dynamical cluster approximation with N c = 4 in combination with a semiclassical approximation (DCA+SCA), we study the doped two-dimensional Hubbard model. We obtain a plaquette antiferromagnetic (AF) Mott insulator, a plaquette AF ordered metal, a pseudogap (or d-wave superconductor) and a paramagnetic metal by tuning the doping concentration. These features are similar to the behaviors observed in copper-oxide superconductors and are in qualitative agreement with the results calculated by the cluster dynamical mean field theory with the continuous-time quantum Monte Carlo (CDMFT+CTQMC) approach. The results of our DCA+SCA differ from those of the CDMFT+CTQMC approach in that the d-wave superconducting order parameters persist even in the highly doped region. We think that the strong plaquette AF orderings in the dynamical cluster approximation (DCA) with N c = 4 suppress superconducting states with increasing doping up to the strongly doped region, because frozen dynamical fluctuations in the semiclassical approximation (SCA) approach are unable to destroy those orderings. Our calculation with short-range spatial fluctuations is an initial study; the SCA can treat long-range spatial fluctuations in feasible computational times, beyond the reach of the CDMFT+CTQMC tool. We believe that our future DCA+SCA calculations should supply information on fully momentum-resolved physical properties, which could be compared with the results measured by angle-resolved photoemission spectroscopy experiments.

  16. Methods of Fourier analysis and approximation theory

    CERN Document Server

    Tikhonov, Sergey

    2016-01-01

    Different facets of the interplay between harmonic analysis and approximation theory are covered in this volume. The topics included are Fourier analysis, function spaces, optimization theory, partial differential equations, and their links to modern developments in approximation theory. The articles of this collection originated from two events. The first event took place during the 9th ISAAC Congress in Krakow, Poland, 5th-9th August 2013, at the section “Approximation Theory and Fourier Analysis”. The second event was the conference on Fourier Analysis and Approximation Theory at the Centre de Recerca Matemàtica (CRM), Barcelona, 4th-8th November 2013, organized by the editors of this volume. All articles selected to be part of this collection were carefully reviewed.

  17. Optimization and approximation

    CERN Document Server

    Pedregal, Pablo

    2017-01-01

    This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.

  18. Uniform analytic approximation of Wigner rotation matrices

    Science.gov (United States)

    Hoffmann, Scott E.

    2018-02-01

    We derive the leading asymptotic approximation, for low angle θ, of the Wigner rotation matrix elements, dm1m2 j(θ ) , uniform in j, m1, and m2. The result is in terms of a Bessel function of integer order. We numerically investigate the error for a variety of cases and find that the approximation can be useful over a significant range of angles. This approximation has application in the partial wave analysis of wavepacket scattering.

  19. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using the multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.

  20. Density-scaling and the Prigogine-Defay ratio in liquids.

    Science.gov (United States)

    Casalini, R; Gamache, R F; Roland, C M

    2011-12-14

    The term "strongly correlating liquids" refers to materials exhibiting near proportionality of fluctuations in the potential energy and the virial pressure, as seen in molecular dynamics simulations of liquids whose interactions are comprised primarily of van der Waals forces. Recently it was proposed that the Prigogine-Defay ratio, Π, of strongly correlating liquids should fall close to unity. We verify this prediction herein by showing that the degree to which relaxation times are a function of T/ρ^γ alone, the ratio of temperature to density with the latter raised to a material constant γ (a property inherent to strongly correlating liquids), is reflected in values of Π closer to unity. We also show that the dynamics of strongly correlating liquids are governed more by density than by temperature. Thus, while Π may never strictly equal 1 at the glass transition, it is approximately unity for many materials, and thus can serve as a predictor of other dynamic behavior. For example, Π ≫ 1 is indicative of additional control parameters besides T/ρ^γ. © 2011 American Institute of Physics
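The density-scaling property described above can be checked numerically: if log τ depends on T and ρ only through T/ρ^γ, data generated at different densities collapse onto a single curve in that variable. A toy sketch with synthetic data and an illustrative γ:

```python
import math

GAMMA = 4.0  # material-constant scaling exponent, illustrative value

def tau(T, rho):
    """Synthetic relaxation time obeying density scaling:
    log(tau) depends on T and rho only through x = T / rho**GAMMA."""
    x = T / rho ** GAMMA
    return math.exp(1.0 / x - 2.0)

# Pick state points at three densities that share the same scaling variable x.
x_target = 0.5
states = [(x_target * rho ** GAMMA, rho) for rho in (1.0, 1.1, 1.25)]
taus = [tau(T, rho) for T, rho in states]

# The relaxation times coincide: the isochrones collapse in T / rho**GAMMA.
print(all(abs(t - taus[0]) < 1e-12 * taus[0] for t in taus))  # True
```

Real data would scatter around such a master curve; the record's point is that the quality of this collapse tracks how close Π is to unity.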

  1. Approximation Properties of Certain Summation Integral Type Operators

    Directory of Open Access Journals (Sweden)

    Patel P.

    2015-03-01

    Full Text Available In the present paper, we study approximation properties of a family of linear positive operators and establish direct results, asymptotic formula, rate of convergence, weighted approximation theorem, inverse theorem and better approximation for this family of linear positive operators.

  2. Semiclassical initial value approximation for Green's function.

    Science.gov (United States)

    Kay, Kenneth G

    2010-06-28

    A semiclassical initial value approximation is obtained for the energy-dependent Green's function. For a system with f degrees of freedom the Green's function expression has the form of a (2f-1)-dimensional integral over points on the energy surface and an integral over time along classical trajectories initiated from these points. This approximation is derived by requiring an integral ansatz for Green's function to reduce to Gutzwiller's semiclassical formula when the integrations are performed by the stationary phase method. A simpler approximation is also derived involving only an (f-1)-dimensional integral over momentum variables on a Poincare surface and an integral over time. The relationship between the present expressions and an earlier initial value approximation for energy eigenfunctions is explored. Numerical tests for two-dimensional systems indicate that good accuracy can be obtained from the initial value Green's function for calculations of autocorrelation spectra and time-independent wave functions. The relative advantages of initial value approximations for the energy-dependent Green's function and the time-dependent propagator are discussed.

  3. The adiabatic approximation in multichannel scattering

    International Nuclear Information System (INIS)

    Schulte, A.M.

    1978-01-01

    Using two-dimensional models, an attempt has been made to get an impression of the conditions of validity of the adiabatic approximation. For a nucleon bound to a rotating nucleus the Coriolis coupling is neglected and the relation between this nuclear Coriolis coupling and the classical Coriolis force has been examined. The approximation for particle scattering from an axially symmetric rotating nucleus based on a short duration of the collision, has been combined with an approximation based on the limitation of angular momentum transfer between particle and nucleus. Numerical calculations demonstrate the validity of the new combined method. The concept of time duration for quantum mechanical collisions has also been studied, as has the collective description of permanently deformed nuclei. (C.F.)

  4. Minimal entropy approximation for cellular automata

    International Nuclear Information System (INIS)

    Fukś, Henryk

    2014-01-01

    We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim. (paper)

  5. Nonlinear-drifted Brownian motion with multiple hidden states for remaining useful life prediction of rechargeable batteries

    Science.gov (United States)

    Wang, Dong; Zhao, Yang; Yang, Fangfang; Tsui, Kwok-Leung

    2017-09-01

    Brownian motion with adaptive drift has attracted much attention in prognostics because its first hitting time is highly relevant to remaining useful life prediction and it follows the inverse Gaussian distribution. Besides linear degradation modeling, nonlinear-drifted Brownian motion has been developed to model nonlinear degradation. Moreover, the first hitting time distribution of the nonlinear-drifted Brownian motion has been approximated by time-space transformation. In the previous studies, the drift coefficient is the only hidden state used in state space modeling of the nonlinear-drifted Brownian motion. Besides the drift coefficient, parameters of a nonlinear function used in the nonlinear-drifted Brownian motion should be treated as additional hidden states of state space modeling to make the nonlinear-drifted Brownian motion more flexible. In this paper, a prognostic method based on nonlinear-drifted Brownian motion with multiple hidden states is proposed and then it is applied to predict remaining useful life of rechargeable batteries. 26 sets of rechargeable battery degradation samples are analyzed to validate the effectiveness of the proposed prognostic method. Moreover, some comparisons with a standard particle filter based prognostic method, a spherical cubature particle filter based prognostic method and two classic Bayesian prognostic methods are conducted to highlight the superiority of the proposed prognostic method. Results show that the proposed prognostic method has lower average prediction errors than the particle filter based prognostic methods and the classic Bayesian prognostic methods for battery remaining useful life prediction.
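For the linear-drift case mentioned above, the first hitting time of a fixed threshold by drifted Brownian motion follows the inverse Gaussian distribution. A small sketch of that standard density (the threshold w, drift lam, and diffusion sigma are illustrative parameter names), with a numerical check that it integrates to one:

```python
import math

def inv_gauss_fht_pdf(t, w, lam, sigma):
    """First-hitting-time density of threshold w for X(t) = lam*t + sigma*B(t):
    the inverse Gaussian law (w > 0, lam > 0)."""
    return (w / math.sqrt(2 * math.pi * sigma ** 2 * t ** 3)
            * math.exp(-(w - lam * t) ** 2 / (2 * sigma ** 2 * t)))

# Sanity check: the density integrates to ~1 (trapezoidal rule, long horizon).
w, lam, sigma = 10.0, 1.0, 1.0   # threshold, drift, diffusion (illustrative)
dt = 0.01
vals = [inv_gauss_fht_pdf(k * dt, w, lam, sigma) for k in range(1, 40000)]
mass = sum(dt * (a + b) / 2 for a, b in zip(vals, vals[1:]))
print(round(mass, 2))  # ≈ 1.0
```

The paper's nonlinear-drift and time-space-transformation machinery generalizes exactly this object, with the drift and the nonlinear-function parameters promoted to hidden states.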

  6. Lattice dynamics and thermophysical properties of h.c.p. Os and Ru from the quasi-harmonic approximation.

    Science.gov (United States)

    Palumbo, Mauro; Dal Corso, Andrea

    2017-10-04

    We report first-principles phonon frequencies and anharmonic thermodynamic properties of h.c.p. Os and Ru calculated within the quasi-harmonic approximation, including Grüneisen parameters, temperature-dependent lattice parameters, thermal expansion, and isobaric heat capacity. We discuss the differences between a full treatment of anisotropy and a simplified approach with a constant c/a ratio. The results are systematically compared with the available theoretical and experimental data and an overall satisfactory agreement is obtained.
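The quasi-harmonic workflow the paper applies (phonon free energy minimized over volume at each temperature) can be illustrated with a one-mode toy model; the cold curve, frequency law, and Grüneisen exponent below are invented for illustration:

```python
import math

def qha_free_energy(V, T, cold_curve, omega):
    """Quasi-harmonic free energy for one Einstein mode (hbar = kB = 1):
    F(V, T) = E0(V) + w(V)/2 + T*log(1 - exp(-w(V)/T))."""
    w = omega(V)
    thermal = T * math.log(1 - math.exp(-w / T)) if T > 0 else 0.0
    return cold_curve(V) + 0.5 * w + thermal

# Toy model: harmonic cold curve, Grueneisen-type frequency w(V) ~ V**-gamma.
cold_curve = lambda V: 0.5 * 50.0 * (V - 1.0) ** 2
omega = lambda V: 2.0 * V ** -1.5        # gamma = 1.5, illustrative

def equilibrium_volume(T):
    """V(T) from minimizing F(V, T) on a grid, as in the quasi-harmonic method."""
    grid = [0.9 + 0.0001 * k for k in range(3000)]
    return min(grid, key=lambda V: qha_free_energy(V, T, cold_curve, omega))

# A positive Grueneisen parameter gives thermal expansion: V(T) increases with T.
print(equilibrium_volume(0.1) < equilibrium_volume(2.0))  # True
```

The full calculation replaces the single mode with the Brillouin-zone phonon spectrum and, in the anisotropic treatment, minimizes over both lattice parameters rather than volume alone.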

  7. Sex Ratio Elasticity Influences the Selection of Sex Ratio Strategy

    Science.gov (United States)

    Wang, Yaqiang; Wang, Ruiwu; Li, Yaotang; (Sam) Ma, Zhanshan

    2016-12-01

    There are three sex ratio strategies (SRS) in nature—male-biased sex ratio, female-biased sex ratio and, equal sex ratio. It was R. A. Fisher who first explained why most species in nature display a sex ratio of ½. Consequent SRS theories such as Hamilton’s local mate competition (LMC) and Clark’s local resource competition (LRC) separately explained the observed deviations from the seemingly universal 1:1 ratio. However, to the best of our knowledge, there is not yet a unified theory that accounts for the mechanisms of the three SRS. Here, we introduce the price elasticity theory in economics to define sex ratio elasticity (SRE), and present an analytical model that derives three SRSs based on the following assumption: simultaneously existing competitions for both resources A and resources B influence the level of SRE in both sexes differently. Consequently, it is the difference (between two sexes) in the level of their sex ratio elasticity that leads to three different SRS. Our analytical results demonstrate that the elasticity-based model not only reveals a highly plausible mechanism that explains the evolution of SRS in nature, but also offers a novel framework for unifying two major classical theories (i.e., LMC & LRC) in the field of SRS research.

  8. Function approximation using combined unsupervised and supervised learning.

    Science.gov (United States)

    Andras, Peter

    2014-03-01

    Function approximation is one of the core tasks that are solved using neural networks in the context of many engineering problems. However, good approximation results need good sampling of the data space, which usually requires exponentially increasing volume of data as the dimensionality of the data increases. At the same time, often the high-dimensional data is arranged around a much lower dimensional manifold. Here we propose the breaking of the function approximation task for high-dimensional data into two steps: (1) the mapping of the high-dimensional data onto a lower dimensional space corresponding to the manifold on which the data resides and (2) the approximation of the function using the mapped lower dimensional data. We use over-complete self-organizing maps (SOMs) for the mapping through unsupervised learning, and single hidden layer neural networks for the function approximation through supervised learning. We also extend the two-step procedure by considering support vector machines and Bayesian SOMs for the determination of the best parameters for the nonlinear neurons in the hidden layer of the neural networks used for the function approximation. We compare the approximation performance of the proposed neural networks using a set of functions and show that indeed the neural networks using combined unsupervised and supervised learning outperform in most cases the neural networks that learn the function approximation using the original high-dimensional data.

  9. Hardness and Approximation for Network Flow Interdiction

    OpenAIRE

    Chestnut, Stephen R.; Zenklusen, Rico

    2015-01-01

    In the Network Flow Interdiction problem an adversary attacks a network in order to minimize the maximum s-t-flow. Very little is known about the approximability of this problem despite decades of interest in it. We present the first approximation hardness, showing that Network Flow Interdiction and several of its variants cannot be much easier to approximate than Densest k-Subgraph. In particular, any $n^{o(1)}$-approximation algorithm for Network Flow Interdiction would imply an $n^{o(1)}...

  10. Approximate reasoning in physical systems

    International Nuclear Information System (INIS)

    Mutihac, R.

    1991-01-01

    The theory of fuzzy sets provides excellent ground to deal with fuzzy observations (uncertain or imprecise signals, wavelengths, temperatures, etc.), fuzzy functions (spectra and depth profiles), and fuzzy logic and approximate reasoning. First, the basic ideas of fuzzy set theory are briefly presented. Second, stress is put on the application of simple fuzzy set operations for matching candidate reference spectra of a spectral library to an unknown sample spectrum (e.g. IR spectroscopy). Third, approximate reasoning is applied to infer an unknown property from information available in a database (e.g. crystal systems). Finally, multi-dimensional fuzzy reasoning techniques are suggested. (Author)

  11. Face Recognition using Approximate Arithmetic

    DEFF Research Database (Denmark)

    Marso, Karol

    Face recognition is an image processing technique that aims to identify human faces and has found use in various fields, for example security. Throughout the years this field evolved and there are many approaches and many different algorithms which aim to make face recognition as effective...... processing applications the results do not need to be completely precise and use of approximate arithmetic can lead to reductions in delay, space and power consumption. In this paper we examine the possible use of approximate arithmetic in face recognition using the Eigenfaces algorithm.

  12. Alpha-in-air monitor for continuous monitoring based on alpha to beta ratio

    International Nuclear Information System (INIS)

    Somayaji, K.S.; Venkataramani, R.; Swaminathan, N.; Pushparaja

    1997-01-01

    Measurement of long-lived alpha activity collected on a filter paper in continuous air monitoring of the ambient working environment is difficult due to interference from much larger concentrations of short-lived alpha emitting daughter products of 222 Rn and 220 Rn. However, the ratio between natural alpha and beta activity is approximately constant, and this constancy is used to discriminate against short-lived natural radioactivity in continuous air monitoring. The detection system was specially designed for simultaneous counting of the alpha and beta activity deposited on the filter paper during continuous monitoring. The activity ratios were calculated and plotted against the monitoring duration up to about six hours. Monitoring was carried out in three facilities with different ventilation conditions. The presence of any long-lived alpha contamination on the filter paper results in an increase in the alpha to beta ratio. Long-lived 239 Pu contamination of about 16 DAC.h could be detected within about 45 minutes of the commencement of sampling. The experimental results using prototype units have shown that the approach of using the alpha to beta activity ratio to detect long-lived alpha activity in the presence of short-lived natural activity is satisfactory. (author)
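The discrimination logic described above can be sketched as a running check of the alpha-to-beta count ratio against its natural-background baseline; the counts and thresholds below are invented for illustration:

```python
def ratio_alarm(alpha_counts, beta_counts, baseline_ratio, tolerance=0.2):
    """Flag long-lived alpha contamination: short-lived Rn/Tn daughters keep the
    alpha/beta count ratio near a constant baseline, so a sustained rise above
    it signals added long-lived alpha activity."""
    return [a / b > baseline_ratio * (1 + tolerance)
            for a, b in zip(alpha_counts, beta_counts)]

# Sampled counts (invented): the last two samples include an added alpha source.
baseline = 0.10
alpha = [100, 102, 99, 130, 155]
beta = [1000, 1010, 995, 1005, 998]
print(ratio_alarm(alpha, beta, baseline))  # [False, False, False, True, True]
```

In practice the baseline and tolerance would be calibrated per facility, since ventilation conditions shift the natural alpha/beta ratio.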

  13. On the universality of knot probability ratios

    Energy Technology Data Exchange (ETDEWEB)

    Janse van Rensburg, E J [Department of Mathematics and Statistics, York University, Toronto, Ontario M3J 1P3 (Canada); Rechnitzer, A, E-mail: rensburg@yorku.ca, E-mail: andrewr@math.ubc.ca [Department of Mathematics, University of British Columbia, 1984 Mathematics Road, Vancouver, BC V6T 1Z2 (Canada)

    2011-04-22

    Let p{sub n} denote the number of self-avoiding polygons of length n on a regular three-dimensional lattice, and let p{sub n}(K) be the number which have knot type K. The probability that a random polygon of length n has knot type K is p{sub n}(K)/p{sub n} and is known to decay exponentially with length (Sumners and Whittington 1988 J. Phys. A: Math. Gen. 21 1689-94, Pippenger 1989 Discrete Appl. Math. 25 273-8). Little is known rigorously about the asymptotics of p{sub n}(K), but there is substantial numerical evidence. It is believed that the entropic exponent, {alpha}, is universal, while the exponential growth rate is independent of the knot type but varies with the lattice. The amplitude, C{sub K}, depends on both the lattice and the knot type. The above asymptotic form implies that the relative probability of a random polygon of length n having prime knot type K over prime knot type L is given by the ratio p{sub n}(K)/p{sub n}(L). In the thermodynamic limit this probability ratio becomes an amplitude ratio; it should be universal and depend only on the knot types K and L. In this communication we examine the universality of these probability ratios for polygons in the simple cubic, face-centred cubic and body-centred cubic lattices. Our results support the hypothesis that these are universal quantities. For example, we estimate that a long random polygon is approximately 28 times more likely to be a trefoil than a figure-eight, independent of the underlying lattice, giving an estimate of the intrinsic entropy associated with knot types in closed curves. (fast track communication)
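The cancellation behind the universal ratio is easy to demonstrate: assuming an asymptotic form p_n(K) ≈ C_K n^α μ^n with the same α and μ for all prime knot types (as the record describes), the probability ratio reduces to C_K/C_L at every length. A sketch in log space with illustrative values:

```python
import math

def log_p_n(log_C, alpha, log_mu, n):
    """Log of the assumed asymptotic polygon count C * n**alpha * mu**n."""
    return log_C + alpha * math.log(n) + n * log_mu

# Illustrative values: alpha and mu shared by all prime knots, C knot-dependent.
alpha, log_mu = 0.5, math.log(4.68)
log_C_trefoil, log_C_fig8 = math.log(28.0), math.log(1.0)

for n in (100, 1000, 10000):
    log_ratio = (log_p_n(log_C_trefoil, alpha, log_mu, n)
                 - log_p_n(log_C_fig8, alpha, log_mu, n))
    # The n**alpha and mu**n factors cancel: only the amplitude ratio survives.
    print(n, round(math.exp(log_ratio), 6))
```

Working in logs avoids overflowing μ^n; the printed ratio stays at the amplitude ratio 28 regardless of n, which is the "approximately 28 times more likely" estimate quoted above.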

  14. A study of the consistent and the lumped source approximations in finite element neutron diffusion calculations

    International Nuclear Information System (INIS)

    Ozgener, B.; Azgener, H.A.

    1991-01-01

    In finite element formulations for the solution of the within-group neutron diffusion equation, two different treatments are possible for the group source term: the consistent source approximation (CSA) and the lumped source approximation (LSA). CSA results in intra-group scattering and fission matrices which have the same nondiagonal structure as the global coefficient matrix. This situation might be regarded as a disadvantage, compared to the conventional (i.e. finite difference) methods where the intra-group scattering and fission matrices are diagonal. To overcome this disadvantage, LSA could be used to diagonalize these matrices. LSA is akin to the lumped mass approximation of continuum mechanics. We concentrate on two different aspects of the source approximations. Although it has been reported that LSA does not modify the asymptotic h^2 convergence behaviour for linear elements, the effect of LSA on the convergence of higher degree elements has not been investigated. Thus, we are interested in determining p, the asymptotic order of convergence, in: Δk = |k_eff(analytical) - k_eff(finite element)| = Ch^p (1) for finite element approximations of varying degree (N) with both of the source approximations. Since (1) is valid in the asymptotic limit, we must use ultra-fine meshes and quadruple precision arithmetic. For our order of convergence study, we used infinite cylindrical geometry with azimuthal symmetry. Hence, the effects of singularities remain uninvestigated. The second aspect we dwell on is the performance of LSA in bilinear 3-D finite element calculations, compared to CSA. LSA has been used quite extensively in 1- and 2-D even-parity transport and diffusion calculations. In this work, we will try to assess the relative merits of LSA and CSA in 3-D problems. (author)

  15. Stochastic quantization and mean field approximation

    International Nuclear Information System (INIS)

    Jengo, R.; Parga, N.

    1983-09-01

    In the context of the stochastic quantization we propose factorized approximate solutions for the Fokker-Planck equation for the XY and Zsub(N) spin systems in D dimensions. The resulting differential equation for a factor can be solved and it is found to give in the limit of t→infinity the mean field or, in the more general case, the Bethe-Peierls approximation. (author)

  16. Approximative solutions of stochastic optimization problem

    Czech Academy of Sciences Publication Activity Database

    Lachout, Petr

    2010-01-01

    Roč. 46, č. 3 (2010), s. 513-523 ISSN 0023-5954 R&D Projects: GA ČR GA201/08/0539 Institutional research plan: CEZ:AV0Z10750506 Keywords : Stochastic optimization problem * sensitivity * approximative solution Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/SI/lachout-approximative solutions of stochastic optimization problem.pdf

  17. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic

  18. Reduction of Linear Programming to Linear Approximation

    OpenAIRE

    Vaserstein, Leonid N.

    2006-01-01

    It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
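The forward reduction mentioned above is worth spelling out: minimizing the maximum residual max_i |a_i·x − b_i| is the linear program "minimize t subject to −t ≤ a_i·x − b_i ≤ t". For the one-parameter (constant-fit) case the LP optimum is known in closed form, the midrange, which makes a tiny sketch possible:

```python
def chebyshev_constant_fit(ys):
    """Best constant c minimizing the Chebyshev (max-abs) error max_i |y_i - c|.
    As a linear program: minimize t subject to -t <= y_i - c <= t for all i.
    In this one-variable case the LP optimum is the midrange (min + max) / 2."""
    return (min(ys) + max(ys)) / 2

ys = [1.0, 4.0, 2.5, 3.0]
c = chebyshev_constant_fit(ys)
worst = max(abs(y - c) for y in ys)
print(c, worst)  # 2.5 1.5
```

Vaserstein's contribution is the converse direction: any linear program can itself be recast as a Chebyshev approximation problem of this kind.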

  19. Approximate reduction of linear population models governed by stochastic differential equations: application to multiregional models.

    Science.gov (United States)

    Sanz, Luis; Alonso, Juan Antonio

    2017-12-01

    In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to transform a complex system, involving many coupled variables and processes with different time scales, into a simpler reduced model with a smaller number of 'global' variables, in such a way that the dynamics of the former can be approximated by that of the latter. In our model we contemplate a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with fewer variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced system. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the population in each patch is affected by additive noise.

  20. Thin-wall approximation in vacuum decay: A lemma

    Science.gov (United States)

    Brown, Adam R.

    2018-05-01

    The "thin-wall approximation" gives a simple estimate of the decay rate of an unstable quantum field. Unfortunately, the approximation is uncontrolled. In this paper I show that there are actually two different thin-wall approximations and that they bracket the true decay rate: I prove that one is an upper bound and the other a lower bound. In the thin-wall limit, the two approximations converge. In the presence of gravity, a generalization of this lemma provides a simple sufficient condition for nonperturbative vacuum instability.
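For context, the classic (non-gravitational) thin-wall estimate that this paper brackets: the bounce action is B = 27π²σ⁴/(2ε³) for wall tension σ and vacuum energy splitting ε, and the decay rate per unit volume scales as exp(−B). A sketch in arbitrary units:

```python
import math

def thin_wall_action(sigma, eps):
    """Coleman's thin-wall bounce action without gravity:
    B = 27*pi**2 * sigma**4 / (2 * eps**3); decay rate per volume ~ exp(-B)."""
    return 27 * math.pi ** 2 * sigma ** 4 / (2 * eps ** 3)

# The critical-bubble radius is R = 3*sigma/eps; the approximation is controlled
# when R far exceeds the wall thickness, i.e. as eps -> 0, where B blows up.
for eps in (1.0, 0.5, 0.25):
    print(eps, round(thin_wall_action(sigma=1.0, eps=eps), 1))
```

The lemma in the record concerns precisely this quantity: the two variants of the thin-wall estimate give upper and lower bounds on the exact bounce action, converging in the ε → 0 limit.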

  1. The H2/CH4 ratio during serpentinization cannot reliably identify biological signatures

    OpenAIRE

    Huang, Ruifang; Sun, Weidong; Liu, Jinzhong; Ding, Xing; Peng, Shaobang; Zhan, Wenhuan

    2016-01-01

    Serpentinization potentially contributes to the origin and evolution of life during early history of the Earth. Serpentinization produces molecular hydrogen (H2) that can be utilized by microorganisms to gain metabolic energy. Methane can be formed through reactions between molecular hydrogen and oxidized carbon (e.g., carbon dioxide) or through biotic processes. A simple criterion, the H2/CH4 ratio, has been proposed to differentiate abiotic from biotic methane, with values approximately lar...

  2. Smooth function approximation using neural networks.

    Science.gov (United States)

    Ferrari, Silvia; Stengel, Robert F

    2005-01-01

    An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is related to the network's adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.

  3. SEE rate estimation based on diffusion approximation of charge collection

    Science.gov (United States)

    Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.

    2018-03-01

    The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is uncertainty in parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the necessity of arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.

  4. The ratio Φ→K+K-/K0 anti K0

    International Nuclear Information System (INIS)

    Bramon, A.; Lucio, M.J.L.

    2000-01-01

    The ratio Φ→K+K-/K0 anti K0 is discussed and its present experimental value is compared with theoretical expectations. A difference larger than two standard deviations is observed. A number of mechanisms that could account for this discrepancy are critically examined, but it remains unexplained. Measurements at DAΦNE at the level of per mille accuracy can clarify whether any anomaly exists.

  5. An improved saddlepoint approximation.

    Science.gov (United States)

    Gillespie, Colin S; Renshaw, Eric

    2007-08-01

    Given a set of third- or higher-order moments, not only is the saddlepoint approximation the only realistic 'family-free' technique available for constructing an associated probability distribution, but it is 'optimal' in the sense that it is based on the highly efficient numerical method of steepest descents. However, it suffers from the problem of not always yielding full support, and whilst the neat scaling approach of Wang [S. Wang, General saddlepoint approximations in the bootstrap, Prob. Stat. Lett. 27 (1992) 61] provides a solution to this hurdle, it leads to potentially inaccurate and aberrant results. We therefore propose several new ways of surmounting such difficulties, including: extending the inversion of the cumulant generating function to second order; selecting an appropriate probability structure for higher-order cumulants (the standard moment closure procedure takes them to be zero); and making subtle changes to the target cumulants and then optimising via the simplex algorithm.
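For reference, the first-order saddlepoint density the paper builds on: solve K'(ŝ) = x for the cumulant generating function K, then f(x) ≈ exp(K(ŝ) − ŝx)/√(2πK''(ŝ)). A sketch using the Gamma CGF, where the saddle has a closed form and the result can be checked against the exact density (the parameterization here is an illustrative choice):

```python
import math

def saddlepoint_pdf(x, K, K2, s_hat):
    """First-order saddlepoint density f(x) ~ exp(K(s) - s*x) / sqrt(2*pi*K''(s)),
    where s = s_hat(x) solves the saddlepoint equation K'(s) = x."""
    s = s_hat(x)
    return math.exp(K(s) - s * x) / math.sqrt(2 * math.pi * K2(s))

# Gamma(k, 1) cumulant generating function: K(s) = -k*log(1 - s), s < 1.
k = 5.0
K = lambda s: -k * math.log(1 - s)
K1 = lambda s: k / (1 - s)           # K'(s); used only to verify the saddle
K2 = lambda s: k / (1 - s) ** 2
s_hat = lambda x: 1 - k / x          # closed-form solution of K'(s) = x

x = 4.0
assert abs(K1(s_hat(x)) - x) < 1e-12   # the saddle really solves K'(s) = x

exact = x ** (k - 1) * math.exp(-x) / math.gamma(k)
approx = saddlepoint_pdf(x, K, K2, s_hat)
print(round(approx / exact, 3))  # close to 1; relative error is Stirling-like
```

Note the unnormalized saddlepoint density is everywhere positive here, but for distributions built from a finite set of moments it can fail to yield full support, which is the difficulty the paper addresses.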

  6. Topology, calculus and approximation

    CERN Document Server

    Komornik, Vilmos

    2017-01-01

    Presenting basic results of topology, calculus of several variables, and approximation theory which are rarely treated in a single volume, this textbook includes several beautiful, but almost forgotten, classical theorems of Descartes, Erdős, Fejér, Stieltjes, and Turán. The exposition style of Topology, Calculus and Approximation follows the Hungarian mathematical tradition of Paul Erdős and others. In the first part, the classical results of Alexandroff, Cantor, Hausdorff, Helly, Peano, Radon, Tietze and Urysohn illustrate the theories of metric, topological and normed spaces. Following this, the general framework of normed spaces and Carathéodory's definition of the derivative are shown to simplify the statement and proof of various theorems in calculus and ordinary differential equations. The third and final part is devoted to interpolation, orthogonal polynomials, numerical integration, asymptotic expansions and the numerical solution of algebraic and differential equations. Students of both pure an...

  7. Comparison of four support-vector based function approximators

    NARCIS (Netherlands)

    de Kruif, B.J.; de Vries, Theodorus J.A.

    2004-01-01

    One of the uses of the support vector machine (SVM), as introduced in V.N. Vapnik (2000), is as a function approximator. The SVM and approximators based on it, approximate a relation in data by applying interpolation between so-called support vectors, being a limited number of samples that have been

  8. Coefficients Calculation in Pascal Approximation for Passive Filter Design

    Directory of Open Access Journals (Sweden)

    George B. Kasapoglu

    2018-02-01

    The recently modified Pascal function is further exploited in this paper in the design of passive analog filters. The Pascal approximation has a non-equiripple magnitude response, in contrast to most well-known approximations such as the Chebyshev approximation. A novelty of this work is the introduction of a precise method for calculating the coefficients of the Pascal function. Two passive design examples are presented to illustrate the advantages and disadvantages of the Pascal approximation. Moreover, the values of the passive elements can be taken from tables created to define the normalized element values for the Pascal approximation, as Zverev did for the Chebyshev, elliptic, and other approximations. Although the Pascal approximation can be applied to both passive and active filter designs, a passive filter design is addressed in this paper, and the benefits and shortcomings of the Pascal approximation are presented and discussed.

  9. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    Science.gov (United States)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative
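    The surrogate/offline idea described above can be illustrated in a few lines. This is a toy stand-in, not the thesis's CFD problem: the "expensive" function and the single design variable are hypothetical. The costly solver is sampled at a handful of DOE points, a cheap polynomial meta-model is fit once offline, and the meta-model then replaces the solver.

```python
import numpy as np

# Hypothetical stand-in for an expensive CFD quantity of interest,
# e.g. an aerodynamic force as a function of one design variable.
def expensive_response(x):
    return 1.0 + 0.5 * x + 2.0 * x ** 2   # pretend each call takes hours

# Design of experiments: a few training points spanning the design range
x_train = np.linspace(-1.0, 1.0, 5)
y_train = expensive_response(x_train)

# Offline surrogate: least-squares quadratic meta-model
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=2))

# The surrogate now answers for the expensive solver at new design points
x_new = 0.37
print(surrogate(x_new), expensive_response(x_new))
```

    A closed-form approximation used in parallel, as the abstract suggests, would serve as a sanity check on whether the fitted surrogate behaves physically over the design range.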

  10. Stable Carbon Isotope Ratio (δ13C) Measurement of Graphite Using EA-IRMS System

    Directory of Open Access Journals (Sweden)

    Andrius Garbaras

    2015-06-01

    δ13C values in non-irradiated natural graphite were measured. The measurements were carried out using an elemental analyzer combined with a stable isotope ratio mass spectrometer (EA-IRMS). The samples were prepared with ground and non-ground graphite, part of which was mixed with Mg(ClO4)2. The best combustion of graphite in the oxidation furnace of the elemental analyzer was achieved when the amount of pulverized graphite ranged from 200 to 490 µg and the mass ratio C:Mg(ClO4)2 was approximately 1:10. A method for graphite combustion that avoids isotope fractionation is proposed. DOI: http://dx.doi.org/10.5755/j01.ms.21.2.6873

  11. Alkaline solution/binder ratio as a determining factor in the alkaline activation of aluminosilicates

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz-Santaquiteria, C., E-mail: ruiz.cs@ietcc.csic.es [Eduardo Torroja Institute (CSIC), c/Serrano Galvache, n Degree-Sign 4, 28033 Madrid (Spain); Skibsted, J. [Instrument Centre for Solid-State NMR Spectroscopy, Interdisciplinary Nanoscience Center (iNANO), Department of Chemistry, Aarhus University, DK-8000 Aarhus C (Denmark); Fernandez-Jimenez, A.; Palomo, A. [Eduardo Torroja Institute (CSIC), c/Serrano Galvache, n Degree-Sign 4, 28033 Madrid (Spain)

    2012-09-15

    This study investigates the effect of the alkaline solution/binder (S/B) ratio on the composition and nanostructure of the reaction products generated in the alkaline activation of aluminosilicates. The experiments used two mixtures of fly ash and dehydroxylated white clay and, for each of these, varying proportions of the solution components. The alkali activator was an 8 M NaOH solution (with and without sodium silicate) used at three S/B ratios: 0.50, 0.75 and 1.25. The 29Si and 27Al MAS NMR and XRD characterisation of the reaction products reveals that, for ratios nearest the value delivering suitable paste workability, the reaction-product composition and structure depend primarily on the nature and composition of the starting materials and the alkaline activator used. However, when excess alkaline activator is present in the system, the reaction products tend to exhibit SiO2/Al2O3 ratios of approximately 1, irrespective of the composition of the starting binder or the alkaline activator.

  12. Recursive B-spline approximation using the Kalman filter

    Directory of Open Access Journals (Sweden)

    Jens Jauch

    2017-02-01

    This paper proposes a novel recursive B-spline approximation (RBA) algorithm which approximates an unbounded number of data points with a B-spline function at lower computational effort than previous algorithms. Conventional recursive algorithms based on the Kalman filter (KF) restrict the approximation to a bounded, predefined interval. Conversely, RBA includes a novel shift operation that shifts the estimated B-spline coefficients in the state vector of a KF. This allows the interval in which the B-spline function approximates data points to be adapted at run-time.
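    The KF view of recursive spline fitting can be sketched minimally. The sketch below uses degree-1 (hat-function) B-splines on a fixed knot interval and a plain Kalman measurement update per streamed data point; the paper's novel shift operation and its efficiency gains are omitted, and all names and values here are illustrative.

```python
import numpy as np

def hat_basis(x, knots):
    """Degree-1 B-spline (hat function) basis values at point x."""
    phi = np.zeros(len(knots))
    for i, k in enumerate(knots):
        left = knots[i - 1] if i > 0 else k
        right = knots[i + 1] if i < len(knots) - 1 else k
        if left < k and left <= x <= k:
            phi[i] = (x - left) / (k - left)
        elif k < right and k < x <= right:
            phi[i] = (right - x) / (right - k)
        elif x == k:
            phi[i] = 1.0
    return phi

def kf_update(c, P, phi, y, r=0.01):
    """Kalman measurement update for one data point y = phi @ c + noise."""
    S = phi @ P @ phi + r            # innovation variance
    K = P @ phi / S                  # Kalman gain
    c = c + K * (y - phi @ c)        # updated coefficient estimate
    P = P - np.outer(K, phi @ P)     # updated covariance
    return c, P

knots = np.linspace(0.0, 1.0, 9)
c = np.zeros(len(knots))             # B-spline coefficients (state vector)
P = np.eye(len(knots)) * 100.0       # diffuse prior covariance

rng = np.random.default_rng(0)
for x in rng.uniform(0.0, 1.0, 400): # stream the data points one at a time
    c, P = kf_update(c, P, hat_basis(x, knots), np.sin(2 * np.pi * x))
```

    After the stream, `hat_basis(x, knots) @ c` evaluates the fitted piecewise-linear spline; the RBA shift operation would additionally slide the knot window so the stream need not stay inside [0, 1].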

  13. Approximate Computing Techniques for Iterative Graph Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh; Kalyanaraman, Anantharaman; Chavarria Miranda, Daniel G.; Krishnamoorthy, Sriram

    2017-12-18

    Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present and evaluate several approximate computing heuristics (loop perforation, data caching, and incomplete graph coloring and synchronization) that scale performance with minimal loss of accuracy. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximation techniques to enable scalable graph analytics on data of importance to several applications in science, and their subsequent adoption for scaling similar graph algorithms.
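    Loop perforation, one of the heuristics named above, can be illustrated on PageRank: a fraction of the power-iteration refinement sweeps is simply skipped, trading a little convergence for proportionally less work. This is a generic sketch, not the paper's implementation, and the 4-node graph is hypothetical.

```python
import numpy as np

def pagerank(adj, d=0.85, iters=100, perforation=0.0, seed=0):
    """Power-iteration PageRank; with probability `perforation` an update
    sweep is skipped entirely (loop perforation)."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    out = adj.sum(axis=1)
    M = (adj / out[:, None]).T           # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        if rng.random() < perforation:
            continue                     # perforated: skip this refinement
        r = (1 - d) / n + d * M @ r
    return r

# Hypothetical 4-node graph (every node has at least one out-edge)
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
exact = pagerank(adj)
approx = pagerank(adj, perforation=0.3)  # ~30% fewer sweeps, near-identical ranks
```

    Because power iteration is a contraction, the surviving sweeps still drive the vector to the same fixed point, which is why perforating the convergence loop costs so little accuracy here.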

  14. Systems Biology and Ratio-Based, Real-Time Disease Surveillance.

    Science.gov (United States)

    Fair, J M; Rivas, A L

    2015-08-01

    Most infectious disease surveillance methods are not well suited to early detection. To address this limitation, we evaluated a ratio-based, Systems Biology-oriented method that does not require prior knowledge of the identity of the infective agent. Using a reference group of birds experimentally infected with West Nile virus (WNV) and a problem group of unknown health status (except that they were WNV-negative and displayed inflammation), both groups were followed over 22 days and tested with a system that analyses blood leucocyte ratios. To test the ability of the method to discriminate small data sets, both the reference group (n = 5) and the problem group (n = 4) were small. The questions of interest were as follows: (i) whether individuals presenting inflammation (disease-positive or D+) can be distinguished from non-inflamed (disease-negative or D-) birds, (ii) whether two or more D+ stages can be detected and (iii) whether sample size influences detection. Within the problem group, the ratio-based method distinguished: (i) three (one D- and two D+) data classes; (ii) two (early and late) inflammatory stages; (iii) fast versus regular or slow responders; and (iv) individuals that recovered from those that remained inflamed. Because ratios differed in magnitude far more than percentages (up to 48 times larger), data patterns are more likely to be recognized when disease surveillance methods are designed to measure inflammation and utilize ratios. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.

  15. Conditional Density Approximations with Mixtures of Polynomials

    DEFF Research Database (Denmark)

    Varando, Gherardo; López-Cruz, Pedro L.; Nielsen, Thomas Dyhre

    2015-01-01

    Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce...... two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities...

  16. Mathematical analysis, approximation theory and their applications

    CERN Document Server

    Gupta, Vijay

    2016-01-01

    Designed for graduate students, researchers, and engineers in mathematics, optimization, and economics, this self-contained volume presents theory, methods, and applications in mathematical analysis and approximation theory. Specific topics include: approximation of functions by linear positive operators with applications to computer aided geometric design, numerical analysis, optimization theory, and solutions of differential equations. Recent and significant developments in approximation theory, special functions and q-calculus along with their applications to mathematics, engineering, and social sciences are discussed and analyzed. Each chapter enriches the understanding of current research problems and theories in pure and applied research.

  17. On Love's approximation for fluid-filled elastic tubes

    International Nuclear Information System (INIS)

    Caroli, E.; Mainardi, F.

    1980-01-01

    A simple procedure is set up to introduce Love's approximation for wave propagation in thin-walled fluid-filled elastic tubes. The dispersion relation for linear waves and the radial profile for fluid pressure are determined in this approximation. It is shown that the Love approximation is valid in the low-frequency regime. (author)

  18. WKB approximation in atomic physics

    International Nuclear Information System (INIS)

    Karnakov, Boris Mikhailovich

    2013-01-01

    Provides extensive coverage of the Wentzel-Kramers-Brillouin approximation and its applications. Presented as a sequence of problems with highly detailed solutions. Gives a concise introduction for calculating Rydberg states, potential barriers and quasistationary systems. This book has evolved from lectures devoted to applications of the Wentzel-Kramers-Brillouin (WKB, or quasi-classical) approximation and of the method of 1/N-expansion for solving various problems in atomic and nuclear physics. The intent of this book is to help students and investigators in this field to extend their knowledge of these important calculation methods in quantum mechanics. Much material is contained herein that is not to be found elsewhere. The WKB approximation, while constituting a fundamental area in atomic physics, has not been the focus of many books. A novel method has been adopted for the presentation of the subject matter: the material is presented as a succession of problems, each followed by a detailed solution. The methods introduced are then used to calculate Rydberg states in atomic systems and to evaluate potential barriers and quasistationary states. Finally, adiabatic transition and ionization of quantum systems are covered.

  19. Improving Stiffness-to-weight Ratio of Spot-welded Structures based upon Nonlinear Finite Element Modelling

    Science.gov (United States)

    Zhang, Shengyong

    2017-07-01

    Spot welding has been widely used for vehicle body construction due to its advantages of high speed and adaptability for automation. An effort to increase the stiffness-to-weight ratio of spot-welded structures is investigated based upon nonlinear finite element analysis. Topology optimization is conducted to reduce weight in the overlapping regions by choosing an appropriate topology. Three spot-welded models (lap, double-hat and T-shape) that approximate "typical" vehicle body components are studied for validating and illustrating the proposed method. It is concluded that removing underutilized material from overlapping regions can result in a significant increase in structural stiffness-to-weight ratio.

  20. Distribution ratios on Dowex 50W resins of metal leached in the caron nickel recovery process

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, B.A.; Metsa, J.C.; Mullins, M.E.

    1980-05-01

    Pressurized ion exchange on Dowex 50W-X8 and 50W-X12 resins was investigated using elution techniques to determine distribution ratios for copper, nickel, and cobalt complexes contained in ammonium carbonate solution, a mixture which approximates the waste liquor from the Caron nickel recovery process. Results were determined for different feed concentrations, as well as for different concentrations and pH values of the ammonium carbonate eluant. Distribution ratios were compared with those previously obtained from a continuous annular chromatographic system. Separation of copper and nickel was not conclusively observed at any of the conditions examined.

  1. Distribution ratios on Dowex 50W resins of metal leached in the caron nickel recovery process

    International Nuclear Information System (INIS)

    Reynolds, B.A.; Metsa, J.C.; Mullins, M.E.

    1980-05-01

    Pressurized ion exchange on Dowex 50W-X8 and 50W-X12 resins was investigated using elution techniques to determine distribution ratios for copper, nickel, and cobalt complexes contained in ammonium carbonate solution, a mixture which approximates the waste liquor from the Caron nickel recovery process. Results were determined for different feed concentrations, as well as for different concentrations and pH values of the ammonium carbonate eluant. Distribution ratios were compared with those previously obtained from a continuous annular chromatographic system. Separation of copper and nickel was not conclusively observed at any of the conditions examined

  2. SFU-driven transparent approximation acceleration on GPUs

    NARCIS (Netherlands)

    Li, A.; Song, S.L.; Wijtvliet, M.; Kumar, A.; Corporaal, H.

    2016-01-01

    Approximate computing, the technique that sacrifices certain amount of accuracy in exchange for substantial performance boost or power reduction, is one of the most promising solutions to enable power control and performance scaling towards exascale. Although most existing approximation designs

  3. Approximation in generalized Hardy classes and resolution of inverse problems for tokamaks

    International Nuclear Information System (INIS)

    Fisher, Y.

    2011-11-01

    This thesis concerns the theoretical and constructive resolution of inverse problems for the isotropic diffusion equation in planar domains, simply and doubly connected. From partial Cauchy boundary data (potential, flux), we seek those quantities on the remaining part of the boundary, where no information is available, as well as inside the domain. The proposed approach considers solutions to the diffusion equation as real parts of complex-valued solutions to a conjugated Beltrami equation. These particular generalized analytic functions allow the introduction of Hardy classes, in which the inverse problem is stated as a best constrained approximation issue (bounded extremal problem) and is thereby regularized. Existence and smoothness properties, together with density results for traces on the boundary, ensure well-posedness. An application is studied for a free boundary problem for a magnetically confined plasma in the tokamak Tore Supra (CEA Cadarache, France). The resolution of the approximation problem on a suitable basis of functions (toroidal harmonics) leads to a qualification criterion for the estimated plasma boundary. A descent algorithm decreases this criterion and refines the estimates. The method does not require any integration of the solution over the whole domain. It furnishes very accurate numerical results and could be extended to other devices such as JET or ITER. (author)

  4. Ratio dependence in small number discrimination is affected by the experimental procedure

    Directory of Open Access Journals (Sweden)

    Christian Agrillo

    2015-10-01

    Adults, infants and some non-human animals share an approximate number system (ANS) to estimate numerical quantities, and are supposed to share a second, 'object-tracking' system (OTS) that supports the precise representation of a small number of items (up to 3 or 4). In relative numerosity judgments, accuracy depends on the ratio of the two numerosities (Weber's law) for numerosities > 4 (the typical ANS range), while for numerosities ≤ 4 (the OTS range) there is usually no ratio effect. However, recent studies have found evidence for ratio effects at small numerosities, challenging the idea that the OTS is involved in small number discrimination. Here we tested the hypothesis that the lack of a ratio effect for the numbers 1-4 is largely dependent on the type of stimulus presentation. We investigated relative numerosity judgments in college students using three different procedures: simultaneous presentation of intermingled and of spatially separate groups of dots in separate experiments, and a further experiment with sequential presentation. As predicted, in the large number range, ratio dependence was observed in all tasks. By contrast, in the small number range, ratio insensitivity was found in one task (sequential presentation). In a fourth experiment, we showed that the presence of intermingled distractors elicited a ratio effect, while easily distinguishable distractors did not. As the different ratio sensitivity for small and large numbers has often been interpreted in terms of the activation of the OTS and the ANS, our results suggest that the numbers 1-4 may be represented by both numerical systems and that the experimental context, such as the presence or absence of task-irrelevant items in the visual field, determines which system is activated.
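    A common way to formalize the ratio effect discussed above is the standard ANS model from the numerical-cognition literature (not this study's own analysis): each numerosity n is encoded as a Gaussian with mean n and noise proportional to n, so discrimination accuracy depends only on the ratio of the two numbers. The Weber fraction w below is a nominal illustrative value.

```python
import math

def p_correct(n1, n2, w=0.25):
    """Probability of picking the larger of two numerosities under a standard
    ANS model: numerosity n is encoded as a Gaussian with mean n and
    standard deviation w*n (w is the Weber fraction, here a nominal 0.25)."""
    z = abs(n2 - n1) / (w * math.sqrt(n1 ** 2 + n2 ** 2))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(p_correct(8, 16), p_correct(16, 32))  # same 2:1 ratio -> same accuracy
print(p_correct(8, 12))                     # harder 3:2 ratio -> lower accuracy
```

    Scaling both numerosities by the same factor leaves z unchanged, which is exactly the Weber's-law signature the large-number conditions show; the OTS, by contrast, is usually assumed to yield near-ceiling accuracy below 4 items regardless of ratio.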

  5. Approximate Networking for Universal Internet Access

    Directory of Open Access Journals (Sweden)

    Junaid Qadir

    2017-12-01

    Despite the best efforts of networking researchers and practitioners, an ideal Internet experience is inaccessible to an overwhelming majority of people the world over, mainly due to the lack of cost-efficient ways of provisioning high-performance global Internet. In this paper, we argue that instead of an exclusive focus on the utopian goal of universally accessible “ideal networking” (in which we have high throughput and quality of service as well as low latency and congestion), we should consider providing “approximate networking” through the adoption of context-appropriate trade-offs. In this regard, we propose to leverage the advances in the emerging trend of “approximate computing”, which relaxes the bounds of precise/exact computing to provide new opportunities for improving the area, power, and performance efficiency of systems by orders of magnitude by embracing output errors in resilient applications. Furthermore, we propose to extend the dimensions of approximate computing towards the various knobs available at the network layers. Approximate networking can be used to provision “Global Access to the Internet for All” (GAIA) in a pragmatically tiered fashion, in which different users around the world are provided a different context-appropriate (but still contextually functional) Internet experience.

  6. Variational Gaussian approximation for Poisson data

    Science.gov (United States)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
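    For a scalar instance of the model, y ~ Poisson(e^x) with prior x ~ N(0, σ²), the lower bound over a candidate Gaussian N(m, v) is L(m, v) = ym − e^{m+v/2} − (m² + v)/(2σ²) + ½ ln v + const, and setting its derivatives to zero gives m = σ²(y − λ) and v = (1/σ² + λ)⁻¹ with λ = e^{m+v/2}. The damped fixed-point sketch below is an illustrative stand-in for the paper's alternating direction maximization, with hypothetical step and iteration settings.

```python
import math

def vga_poisson(y, prior_var=1.0, iters=4000, step=0.05):
    """Variational Gaussian approximation N(m, v) to the posterior of x in
    y ~ Poisson(exp(x)), x ~ N(0, prior_var): the stationarity conditions of
    the KL lower bound are solved by damped fixed-point iteration."""
    m, v = 0.0, prior_var
    for _ in range(iters):
        lam = math.exp(m + 0.5 * v)                       # E_q[exp(x)] under N(m, v)
        m += step * (prior_var * (y - lam) - m)           # drive d(bound)/dm -> 0
        v += step * (1.0 / (1.0 / prior_var + lam) - v)   # drive d(bound)/dv -> 0
    return m, v

m, v = vga_poisson(y=5)
print(m, v)   # posterior mean sits between the prior mean 0 and log(5)
```

    Note how λ plays the role of a data-dependent precision: the optimal variance v shrinks below the prior variance exactly as in the Tikhonov-like interpretation of the bound mentioned in the abstract.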

  7. THE EFFECT OF CHANGES IN RETURN ON ASSETS, DEBT TO EQUITY RATIO AND CASH RATIO ON CHANGES IN DIVIDEND PAYOUT RATIO

    Directory of Open Access Journals (Sweden)

    Yuli Soesetio

    2008-02-01

    The dividend payout ratio measures the share of earnings distributed to stockholders as cash dividends, usually expressed as a percentage. This research was conducted to identify factors affecting changes in the dividend payout ratio and to determine the significance and correlation between the dependent and independent variables. The analysis instrument was parametric statistics. Based on the statistical tests, the change in return on assets (X1) and the change in debt to equity ratio (X2) were able to explain the change in dividend payout ratio, while the change in cash ratio could not.

  8. Approximation in two-stage stochastic integer programming

    NARCIS (Netherlands)

    W. Romeijnders; L. Stougie (Leen); M. van der Vlerk

    2014-01-01

    Approximation algorithms are the prevalent solution methods in the field of stochastic programming. Problems in this field are very hard to solve. Indeed, most of the research in this field has concentrated on designing solution methods that approximate the optimal solution value.

  9. Approximation in two-stage stochastic integer programming

    NARCIS (Netherlands)

    Romeijnders, W.; Stougie, L.; van der Vlerk, M.H.

    2014-01-01

    Approximation algorithms are the prevalent solution methods in the field of stochastic programming. Problems in this field are very hard to solve. Indeed, most of the research in this field has concentrated on designing solution methods that approximate the optimal solution value. However,

  10. Radial distributions of surface mass density and mass-to-luminosity ratio in spiral galaxies

    Science.gov (United States)

    Sofue, Yoshiaki

    2018-03-01

    We present radial profiles of the surface mass density (SMD) in spiral galaxies, directly calculated from rotation curves under two approximations: a flat disk (SMD-F) and a spherical mass distribution (SMD-S). The SMDs are combined with surface brightness from photometric data to derive radial variations of the mass-to-luminosity ratio (ML). It is found that the ML generally has a central peak or plateau and decreases to a local minimum at R ~ 0.1-0.2 h, where R is the radius and h is the scale radius of the optical disk. The ML then increases rapidly until ~0.5 h, followed by a gradual rise till ~2 h, remaining at around ~2 M⊙/L⊙ in the w1 band (infrared, λ = 3.4 μm) and ~10 M⊙/L⊙ in the r band (λ = 6200-7500 Å). Beyond this radius, the ML increases steeply toward the observed edges at R ~ 5 h, attaining values as high as ~20 M⊙/L⊙ in the w1 band and ~10² M⊙/L⊙ in the r band, indicative of dominant dark matter. The general properties of the ML distributions will be useful for constraining cosmological formation models of spiral galaxies.

  11. Scattering of electromagnetic waves from a half-space of randomly distributed discrete scatterers and polarized backscattering ratio law

    Science.gov (United States)

    Zhu, P. Y.

    1991-01-01

    The effective-medium approximation is applied to investigate scattering from a half-space of randomly and densely distributed discrete scatterers. Starting from the vector wave equations, an approximation called the effective-medium Born approximation, a particular treatment of the Green's functions, and special coordinates whose origin is set at the field point are used to calculate the bistatic and backscattering. An analytic closed-form solution for backscattering is obtained, and it shows a depolarization effect. The theoretical results are in good agreement with experimental measurements for snow and for multi-year and first-year sea ice. The root product ratio of polarization to depolarization in backscattering is equal to 8; this result constitutes a law about polarized scattering phenomena in nature.

  12. Magnus approximation in the adiabatic picture

    International Nuclear Information System (INIS)

    Klarsfeld, S.; Oteo, J.A.

    1991-01-01

    A simple approximate nonperturbative method is described for treating time-dependent problems that works well in the intermediate regime far from both the sudden and the adiabatic limits. The method consists of applying the Magnus expansion after transforming to the adiabatic basis defined by the eigenstates of the instantaneous Hamiltonian. A few exactly soluble examples are considered in order to assess the domain of validity of the approximation. (author) 32 refs., 4 figs

  13. Adaptive weak approximation of reflected and stopped diffusions

    KAUST Repository

    Bayer, Christian

    2010-01-01

    We study the weak approximation problem of diffusions, which are reflected at a subset of the boundary of a domain and stopped at the remaining boundary. First, we derive an error representation for the projected Euler method of Costantini, Pacchiarotti and Sartoretto [Costantini et al., SIAM J. Appl. Math., 58(1):73-102, 1998], based on which we introduce two new algorithms. The first one uses a correction term from the representation in order to obtain a higher order of convergence, but the computation of the correction term is, in general, not feasible in dimensions d > 1. The second algorithm is adaptive in the sense of Moon, Szepessy, Tempone and Zouraris [Moon et al., Stoch. Anal. Appl., 23:511-558, 2005], using stochastic refinement of the time grid based on a computable error expansion derived from the representation. Regarding the stopped diffusion, it is based in the adaptive algorithm for purely stopped diffusions presented in Dzougoutov, Moon, von Schwerin, Szepessy and Tempone [Dzougoutov et al., Lect. Notes Comput. Sci. Eng., 44, 59-88, 2005]. We give numerical examples underlining the theoretical results. © de Gruyter 2010.

  14. Space-efficient path-reporting approximate distance oracles

    DEFF Research Database (Denmark)

    Elkin, Michael; Neiman, Ofer; Wulff-Nilsen, Christian

    2016-01-01

    We consider approximate path-reporting distance oracles, distance labeling and labeled routing with extremely low space requirements, for general undirected graphs. For distance oracles, we show how to break the n log n space bound of Thorup and Zwick if approximate paths rather than distances need...

  15. What is the optimal value of the g-ratio for myelinated fibers in the rat CNS? A theoretical approach.

    Directory of Open Access Journals (Sweden)

    Taylor Chomiak

    2009-11-01

    The biological process underlying axonal myelination is complex and often prone to injury and disease. The ratio of the inner axonal diameter to the total outer diameter, or g-ratio, is widely utilized as a functional and structural index of optimal axonal myelination. Based on the speed of fiber conduction, Rushton was the first to derive a theoretical estimate of the optimal g-ratio of 0.6 [1]. This theoretical limit nicely explains the experimental data for myelinated axons obtained for some peripheral fibers but appears significantly lower than that found for CNS fibers. This is, however, hardly surprising given that in the CNS, axonal myelination must achieve multiple goals, including reducing conduction delays, promoting conduction fidelity, lowering energy costs, and saving space. In this study we explore the notion that a balanced set-point can be achieved at a functional level as the micro-structure of individual axons becomes optimized, particularly for the central system where axons tend to be smaller and their myelin sheaths thinner. We used an intuitive yet novel theoretical approach based on the fundamental biophysical properties describing axonal structure and function to show that an optimal g-ratio can be defined for the central nervous system (approximately 0.77). Furthermore, by reducing the influence of volume constraints on structural design by about 40%, this approach can also predict the g-ratio observed in some peripheral fibers (approximately 0.6). These results support the notion of optimization theory in nervous system design and construction and may also help explain why the central and peripheral systems have evolved different g-ratios as a result of volume constraints.

  16. What is the optimal value of the g-ratio for myelinated fibers in the rat CNS? A theoretical approach.

    Science.gov (United States)

    Chomiak, Taylor; Hu, Bin

    2009-11-13

    The biological process underlying axonal myelination is complex and often prone to injury and disease. The ratio of the inner axonal diameter to the total outer diameter or g-ratio is widely utilized as a functional and structural index of optimal axonal myelination. Based on the speed of fiber conduction, Rushton was the first to derive a theoretical estimate of the optimal g-ratio of 0.6 [1]. This theoretical limit nicely explains the experimental data for myelinated axons obtained for some peripheral fibers but appears significantly lower than that found for CNS fibers. This is, however, hardly surprising given that in the CNS, axonal myelination must achieve multiple goals including reducing conduction delays, promoting conduction fidelity, lowering energy costs, and saving space. In this study we explore the notion that a balanced set-point can be achieved at a functional level as the micro-structure of individual axons becomes optimized, particularly for the central system where axons tend to be smaller and their myelin sheath thinner. We used an intuitive yet novel theoretical approach based on the fundamental biophysical properties describing axonal structure and function to show that an optimal g-ratio can be defined for the central nervous system (approximately 0.77). Furthermore, by reducing the influence of volume constraints on structural design by about 40%, this approach can also predict the g-ratio observed in some peripheral fibers (approximately 0.6). These results support the notion of optimization theory in nervous system design and construction and may also help explain why the central and peripheral systems have evolved different g-ratios as a result of volume constraints.
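Rushton's classical optimum cited in both records above can be checked numerically. Under the standard Rushton scaling, for a fixed outer diameter D the conduction velocity behaves as v ∝ D·g·√(ln(1/g)) with g = d/D; maximizing this proxy recovers g = e^(−1/2) ≈ 0.61. This is a sketch of the classical peripheral-fiber result only; the CNS-specific derivation giving 0.77 is not reproduced here.

```python
import math

def conduction_speed_proxy(g):
    # Rushton scaling: for fixed outer diameter D, velocity ~ D * g * sqrt(ln(1/g)),
    # where g = d/D is the g-ratio (0 < g < 1).
    return g * math.sqrt(math.log(1.0 / g))

# Grid search for the g-ratio that maximizes the proxy.
gs = [i / 100000 for i in range(1, 100000)]
g_opt = max(gs, key=conduction_speed_proxy)

# Analytical optimum: d/dg [g * sqrt(-ln g)] = 0  =>  -ln g = 1/2  =>  g = e^(-1/2)
g_analytic = math.exp(-0.5)
```

Setting the derivative to zero gives −ln g = 1/2, i.e. g ≈ 0.607, matching the 0.6 estimate quoted for peripheral fibers.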

  17. Experimental characterization of the concrete behaviour under high confinement: influence of the saturation ratio and of the water/cement ratio

    International Nuclear Information System (INIS)

    Vu, X.H.

    2007-08-01

The objective of this thesis is to experimentally characterize the influence of the saturation ratio and of the water/cement ratio of concrete on its behaviour under high confinement. The thesis lies within the more general scope of understanding concrete behaviour under severe loading (near-field detonation or ballistic impact). A near-field detonation or an impact on a concrete structure generates very high stress levels associated with complex loading paths in the concrete material. Experimental results are required to validate models of concrete behaviour. The work presented in this thesis concerns tests conducted with a static triaxial press capable of reaching stress levels of the order of a gigapascal. The porous character of concrete and the high confinement required, on the one hand, the development of a specimen protection device and, on the other hand, the development of strain-gauge instrumentation that is unprecedented at such high confinements. Hydrostatic and triaxial tests, conducted first on model materials and then on concrete, validated the experimental procedures as well as the strain and stress measurement techniques. Studying the influence of the saturation ratio and of the water/cement ratio of concrete required the formulation of a plain baseline concrete and of two modified concretes with different water/cement ratios. The analysis of triaxial tests performed on the baseline concrete shows that the saturation ratio of concrete has a major influence on its static behaviour under high confinement. This influence is particularly marked for the concrete loading capacity and for the shape of the limit-state curves at saturation ratios greater than 50%. The concrete loading capacity increases with confinement pressure in tests on dry concrete, whereas beyond a given confinement pressure it remains limited for wet or saturated concrete.

  18. Effect of anesthesia, positioning, time, and feeding on the proventriculus: keel ratio of clinically healthy parrots.

    Science.gov (United States)

    Dennison, Sophie E; Paul-Murphy, Joanne R; Yandell, Brian S; Adams, William M

    2010-01-01

Healthy, adult Hispaniolan Amazon parrots (Amazona ventralis) were imaged on three occasions to determine the effects of anesthesia, patient rotation, feeding, and short/long-term temporal factors on the proventriculus:keel ratio. Rotation of up to 15 degrees from right lateral made the proventriculus unmeasurable in up to 44% of birds, meaning that the proventriculus:keel ratio could not be calculated from those radiographs. There was a significant difference between the proventriculus:keel ratios of individual parrots quantified 3 weeks apart. Despite this difference, all ratios remained within normal limits. No significant effect was identified due to anesthesia, feeding, fasting, or repeated imaging through an 8-h period. Interobserver agreement for measurability and correlation for the proventriculus:keel ratio values was high. It is recommended that the proventriculus:keel ratio be calculated from anesthetized parrots to attain images in true lateral recumbency. Ratio fluctuations within the normal range between radiographs obtained on different dates may be observed in normal parrots.

  19. Aspects of three field approximations: Darwin, frozen, EMPULSE

    International Nuclear Information System (INIS)

    Boyd, J.K.; Lee, E.P.; Yu, S.S.

    1985-01-01

The traditional approach used to study high energy beam propagation relies on the frozen field approximation. A minor modification of the frozen field approximation yields the set of equations applied to the analysis of the hose instability. These models are contrasted with the Darwin field approximation. A statement is made of the Darwin model equations relevant to the analysis of the hose instability

  20. On Convex Quadratic Approximation

    NARCIS (Netherlands)

    den Hertog, D.; de Klerk, E.; Roos, J.

    2000-01-01

    In this paper we prove the counterintuitive result that the quadratic least squares approximation of a multivariate convex function in a finite set of points is not necessarily convex, even though it is convex for a univariate convex function. This result has many consequences both for the field of

  1. All-Norm Approximation Algorithms

    NARCIS (Netherlands)

    Azar, Yossi; Epstein, Leah; Richter, Yossi; Woeginger, Gerhard J.; Penttonen, Martti; Meineche Schmidt, Erik

    2002-01-01

    A major drawback in optimization problems and in particular in scheduling problems is that for every measure there may be a different optimal solution. In many cases the various measures are different ℓ p norms. We address this problem by introducing the concept of an All-norm ρ-approximation

  2. Approximation by Cylinder Surfaces

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1997-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...

  3. Approximate Noether symmetries and collineations for regular perturbative Lagrangians

    Science.gov (United States)

    Paliathanasis, Andronikos; Jamal, Sameerah

    2018-01-01

    Regular perturbative Lagrangians that admit approximate Noether symmetries and approximate conservation laws are studied. Specifically, we investigate the connection between approximate Noether symmetries and collineations of the underlying manifold. In particular we determine the generic Noether symmetry conditions for the approximate point symmetries and we find that for a class of perturbed Lagrangians, Noether symmetries are related to the elements of the Homothetic algebra of the metric which is defined by the unperturbed Lagrangian. Moreover, we discuss how exact symmetries become approximate symmetries. Finally, some applications are presented.

  4. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    Science.gov (United States)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries, which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that allows the direction of an X-ray beam to be deviated, which can considerably increase the implementation costs. Hence, this paper describes a low-cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. Moreover, the SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of block-unblock approximations in recovering the phase, using the simulated diffraction patterns. Furthermore, the quality of the reconstructions was measured in terms of the Peak Signal to Noise Ratio (PSNR). Results show that the performance of the block-unblock phase coded aperture approximation decreases by at most 12.5% compared with the phase coded apertures. Moreover, the quality of the reconstructions using the boolean approximations is up to 2.5 dB lower in PSNR than that of the phase coded aperture reconstructions.
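The quality metric used in this record, PSNR, has the standard definition 10·log10(MAX²/MSE). A minimal sketch follows; the signal layout (flat lists) and peak value are illustrative assumptions, not details from the paper.

```python
import math

def psnr(reference, reconstruction, max_val=1.0):
    # Peak Signal to Noise Ratio between two equal-length signals:
    # PSNR = 10 * log10(max_val^2 / MSE).
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / len(reference)
    return 10.0 * math.log10(max_val ** 2 / mse)

# Uniform error of 0.1 on a unit-peak signal gives MSE = 0.01, i.e. 20 dB.
example = psnr([0.0, 0.0, 0.0, 0.0], [0.1, 0.1, 0.1, 0.1])
```

On this convention, "up to 2.5 dB lower" corresponds to a modest increase in reconstruction error relative to the ideal phase coded aperture.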

  5. Rainbows: Mie computations and the Airy approximation.

    Science.gov (United States)

    Wang, R T; van de Hulst, H C

    1991-01-01

Efficient and accurate computation of the scattered intensity pattern by the Mie formulas is now feasible for size parameters up to x = 50,000 at least, which in visual light means spherical drops with diameters up to 6 mm. We present a method for evaluating the Mie coefficients from the ratios between Riccati-Bessel and Neumann functions of successive order. We probe the applicability of the Airy approximation, which we generalize to rainbows of arbitrary p (number of internal reflections = p - 1), by comparing the Mie and Airy intensity patterns. Millimeter size water drops show a match in all details, including the position and intensity of the supernumerary maxima and the polarization. A fairly good match is still seen for drops of 0.1 mm. A small spread in sizes helps to smooth out irrelevant detail. The dark band between the rainbows is used to test more subtle features. We conclude that this band contains not only externally reflected light (p = 0) but also a sizable contribution from the p = 6 and p = 7 rainbows, which shift rapidly with wavelength. The higher the refractive index, the closer both theories agree on the first primary rainbow (p = 2) peak for drop diameters as small as 0.02 mm. This may be useful in supporting experimental work.
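The idea of working with ratios of successive-order Riccati-Bessel functions can be illustrated with the standard downward recurrence for those ratios. This is a sketch of the generic technique, not the authors' code; the seed value and recursion depth below are illustrative choices.

```python
import math

def bessel_ratio(n, x, n_start=60):
    # Downward recurrence for r_k = psi_k(x) / psi_{k-1}(x), where
    # psi_k(x) = x * j_k(x) is the Riccati-Bessel function.
    # From the three-term recurrence j_{k-1} + j_{k+1} = ((2k+1)/x) j_k:
    #   1/r_k + r_{k+1} = (2k + 1)/x
    # The crude tail seed r_{n_start+1} ~ x/(2*n_start + 3) self-corrects
    # rapidly as the recurrence proceeds downward.
    r = x / (2 * n_start + 3)
    for k in range(n_start, n - 1, -1):
        r = 1.0 / ((2 * k + 1) / x - r)
    return r

# Check against the closed forms psi_0 = sin x and psi_1 = sin x / x - cos x.
x = 1.0
exact = (math.sin(x) / x - math.cos(x)) / math.sin(x)
approx = bessel_ratio(1, x)
```

Downward recurrence is numerically stable for these ratios, whereas upward recurrence loses accuracy at high order, which is why ratio-based schemes scale to the very large size parameters quoted in the abstract.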

  6. Square well approximation to the optical potential

    International Nuclear Information System (INIS)

    Jain, A.K.; Gupta, M.C.; Marwadi, P.R.

    1976-01-01

Approximations for obtaining T-matrix elements for a sum of several potentials in terms of T-matrices for individual potentials are studied. Based on model calculations for S-wave for a sum of two separable non-local potentials of Yukawa type form factors and a sum of two delta function potentials, it is shown that the T-matrix for a sum of several potentials can be approximated satisfactorily over all energy regions by the sum of T-matrices for individual potentials. Based on this, an approximate method for finding the T-matrix for any local potential by approximating it by a sum of a suitable number of square wells is presented. This provides an interesting way to calculate the T-matrix for any arbitrary potential in terms of Bessel functions to a good degree of accuracy. The method is applied to the Saxon-Wood potentials and good agreement with exact results is found. (author)

  7. An adaptive-order particle filter for remaining useful life prediction of aviation piston pumps

    Directory of Open Access Journals (Sweden)

    Tongyang LI

    2018-05-01

Full Text Available An accurate estimation of the remaining useful life (RUL) not only contributes to an effective application of an aviation piston pump, but also meets the necessity of condition based maintenance (CBM). Among current RUL evaluation methods, a model-based method is inappropriate for the degradation process of an aviation piston pump due to difficulties of modeling, while a data-based method rarely provides high-accuracy prediction over long time horizons. In this work, an adaptive-order particle filter (AOPF) prognostic process is proposed, aiming at improving long-term prediction accuracy of RUL by combining both kinds of methods. A dynamic model is initialized by a data-driven or empirical method. When a new observation comes, the prior state distribution is approximated by the current model. The order of the current model is updated adaptively by fusing the information of the observation. Monte Carlo simulation is employed for estimating the posterior probability density function of future states of the pump's degradation. By updating the order number adaptively, the method achieves higher precision than traditional methods. In a case study, the proposed AOPF method is adopted to forecast the degradation status of an aviation piston pump with experimental return oil flow data, and the analytical results show the effectiveness of the proposed AOPF method. Keywords: Adaptive prognosis, Condition based maintenance (CBM), Particle filter (PF), Piston pump, Remaining useful life (RUL)
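The adaptive-order machinery is not specified in the abstract, but the underlying particle-filter cycle (propagate, reweight by the observation likelihood, resample) can be sketched for a toy degradation model. The dynamics, noise levels, and particle count below are all illustrative assumptions, not the paper's model of a piston pump.

```python
import math
import random

random.seed(0)

N = 500          # number of particles
OBS_SIGMA = 0.5  # assumed observation noise
PROC_SIGMA = 0.2 # assumed process noise

def propagate(x):
    # Toy degradation dynamics: unit drift per cycle plus process noise.
    return x + 1.0 + random.gauss(0.0, PROC_SIGMA)

def likelihood(y, x):
    # Gaussian observation model (unnormalized is fine for reweighting).
    return math.exp(-0.5 * ((y - x) / OBS_SIGMA) ** 2)

particles = [random.gauss(0.0, 1.0) for _ in range(N)]
true_state = 0.0
for _ in range(20):
    true_state += 1.0
    y = true_state + random.gauss(0.0, OBS_SIGMA)   # simulated sensor reading
    particles = [propagate(p) for p in particles]    # predict
    weights = [likelihood(y, p) for p in particles]  # update
    total = sum(weights)
    weights = [w / total for w in weights]
    particles = random.choices(particles, weights=weights, k=N)  # resample

estimate = sum(particles) / N  # posterior mean of the degradation state
```

An RUL forecast would then propagate the resampled particles forward without observations until they cross a failure threshold; the adaptive-order idea in the paper additionally revises the model structure as observations arrive.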

  8. Spatial Variability and Application of Ratios between BTEX in Two Canadian Cities

    Directory of Open Access Journals (Sweden)

    Lindsay Miller

    2011-01-01

Full Text Available Spatial monitoring campaigns of volatile organic compounds were carried out in two similarly sized urban industrial cities, Windsor and Sarnia, ON, Canada. For Windsor, data were obtained for all four seasons at approximately 50 sites in each season (winter, spring, summer, and fall) over a three-year period (2004, 2005, and 2006) for a total of 12 sampling sessions. Sampling in Sarnia took place at 37 monitoring sites in fall 2005. In both cities, passive sampling was done using 3M 3500 organic vapor samplers. This paper characterizes benzene, toluene, ethylbenzene, o-xylene, and (m + p)-xylene (BTEX) concentrations and relationships among BTEX species in the two cities during the fall sampling periods. BTEX concentration levels and rank order among the species were similar between the two cities. In Sarnia, the relationships between the BTEX species varied depending on location. Correlation analysis between land use and concentration ratios showed a strong influence from local industries. Using one of the ratios between the BTEX species to diagnose photochemical age may be biased due to point source emissions, for example, 53 tonnes of benzene and 86 tonnes of toluene in Sarnia. However, considering multiple ratios leads to better conclusions regarding photochemical aging. Ratios obtained in the sampling campaigns showed significant deviation from those obtained at central monitoring stations, with less difference in the (m + p)/E ratio but better overall agreement in Windsor than in Sarnia.

  9. Soft network materials with isotropic negative Poisson's ratios over large strains.

    Science.gov (United States)

    Liu, Jianxing; Zhang, Yihui

    2018-01-31

Auxetic materials with negative Poisson's ratios have important applications across a broad range of engineering areas, such as biomedical devices, aerospace engineering and automotive engineering. A variety of design strategies have been developed to achieve artificial auxetic materials with controllable responses in the Poisson's ratio. The development of designs that can offer isotropic negative Poisson's ratios over large strains could open up new opportunities in emerging biomedical applications but remains a challenge. Here, we introduce deterministic routes to soft architected materials that can be tailored precisely to yield values of the Poisson's ratio in the range from -1 to 1, in an isotropic manner, with a tunable strain range from 0% to ∼90%. The designs rely on a network construction in a periodic lattice topology, which incorporates zigzag microstructures as building blocks to connect lattice nodes. Combined experimental and theoretical studies on broad classes of network topologies illustrate the wide-ranging utility of these concepts. Quantitative mechanics modeling under both infinitesimal and finite deformations allows the development of a rigorous design algorithm that determines the necessary network geometries to yield target Poisson's ratios over desired strain ranges. Demonstrative examples in artificial skin, with both the negative Poisson's ratio and the nonlinear stress-strain curve precisely matching those of cat skin, and in unusual cylindrical structures with engineered Poisson effect and shape memory effect, suggest potential applications of these network materials.
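As a reminder of the quantity being engineered here, Poisson's ratio is the negative ratio of transverse to axial strain; auxetic behaviour means a negative value, i.e. the material widens when stretched. The strain values in this one-line sketch are illustrative.

```python
def poissons_ratio(axial_strain, lateral_strain):
    # nu = -epsilon_lateral / epsilon_axial.
    # Conventional materials contract laterally under axial stretch (nu > 0);
    # auxetic materials expand laterally (nu < 0).
    return -lateral_strain / axial_strain

# A sample stretched 10% axially that also widens 5% laterally is auxetic:
nu = poissons_ratio(0.10, 0.05)
```

The abstract's design algorithm tunes network geometry so that this quantity stays at a target value (anywhere in [-1, 1]) across a chosen strain range, rather than only at infinitesimal strain.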

  10. Uncertainty relations for approximation and estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jaeha, E-mail: jlee@post.kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Tsutsui, Izumi, E-mail: izumi.tsutsui@kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Theory Center, Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan)

    2016-05-27

    We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.

  11. Uncertainty relations for approximation and estimation

    International Nuclear Information System (INIS)

    Lee, Jaeha; Tsutsui, Izumi

    2016-01-01

    We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.

  12. The CN/C15N isotopic ratio towards dark clouds

    Science.gov (United States)

    Hily-Blant, P.; Pineau des Forêts, G.; Faure, A.; Le Gal, R.; Padovani, M.

    2013-09-01

Understanding the origin of the composition of solar system cosmomaterials is a central question in the cosmochemistry and astrochemistry fields, and requires various approaches to be combined. Measurements of isotopic ratios in cometary materials provide strong constraints on the content of the protosolar nebula. Their relation with the composition of the parental dark clouds is, however, still very elusive. In this paper, we bring new constraints based on the isotopic composition of nitrogen in dark clouds, with the aim of understanding the chemical processes that are responsible for the observed isotopic ratios. We have observed and detected the fundamental rotational transition of C15N towards two starless dark clouds, L1544 and L1498. We were able to derive the column density ratio of C15N over 13CN towards the same clouds and obtain the CN/C15N isotopic ratios, which were found to be 500 ± 75 for both L1544 and L1498. These values are therefore marginally consistent with the protosolar value of 441. Moreover, this ratio is larger than the isotopic ratio of nitrogen measured in HCN. In addition, we present model calculations of the chemical fractionation of nitrogen in dark clouds, which make it possible to understand how CN can be deprived of 15N and HCN can simultaneously be enriched in heavy nitrogen. The non-fractionation of N2H+, however, remains an open issue, and we propose some chemical way of alleviating the discrepancy between model predictions and the observed ratios. Appendices are available in electronic form at http://www.aanda.org. The reduced spectra (in FITS format) are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/557/A65
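The conversion from the directly measured 13CN/C15N column-density ratio to a CN/C15N ratio follows the standard double-isotope method: scale by an assumed 12C/13C ratio. The value 68 used below is the commonly adopted local-ISM carbon ratio and is an assumption of this sketch, not a number quoted in the abstract.

```python
# Double-isotope method: N(CN)/N(C15N) = [N(13CN)/N(C15N)] * [12C/13C],
# assuming no carbon fractionation between CN and 13CN.
C12_OVER_C13 = 68.0  # assumed local-ISM carbon isotopic ratio

def cn_over_c15n(n13cn_over_nc15n):
    return n13cn_over_nc15n * C12_OVER_C13

# A measured 13CN/C15N column-density ratio near 7.4 maps to CN/C15N ~ 500,
# the value reported for L1544 and L1498.
ratio = cn_over_c15n(500.0 / 68.0)
```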

  13. Normal frontal lobe gray matter-white matter CT volume ratio in children

    International Nuclear Information System (INIS)

    Thompson, J.R.; Engelhart, J.; Hasso, A.N.; Hinshaw, D.B. Jr.

    1985-01-01

We attempted to establish a computed tomographic value representing the normal volume ratio of gray matter to white matter (G/W) in children in order to have a baseline for studying various developmental disorders such as white matter hypoplasia. The records of 150 children 16 years of age or younger who had normal cranial computed tomography were reviewed. From these a group of 119 were excluded for various reasons. The remaining 31 were presumed to have normal brains. Using the region of interest function for tracing gray and white matter boundaries, superior and ventral to the foramen of Monro, area measurements were determined for consecutive adjacent frontal slices. Volumes were then calculated for both gray and white matter. A volume ratio (G/W) of 2.010 (σ = 0.349) was then derived from the 31 children. The clinical value of this ratio will be determined by future investigation. (orig.)
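The volume computation described above reduces to multiplying each traced ROI area by the slice thickness and summing over consecutive slices. The per-slice areas and slice thickness in this sketch are hypothetical numbers, not data from the study.

```python
# Hypothetical per-slice ROI areas (cm^2) for gray and white matter on
# consecutive adjacent frontal CT slices, with a uniform slice thickness.
slice_thickness_cm = 0.5
gray_areas = [3.8, 4.1, 4.0, 3.6]
white_areas = [1.9, 2.0, 2.1, 1.8]

gray_volume = sum(a * slice_thickness_cm for a in gray_areas)
white_volume = sum(a * slice_thickness_cm for a in white_areas)
gw_ratio = gray_volume / white_volume
```

Note that with uniform slice thickness the thickness cancels in the ratio, so G/W depends only on the summed traced areas.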

  14. The H2/CH4 ratio during serpentinization cannot reliably identify biological signatures.

    Science.gov (United States)

    Huang, Ruifang; Sun, Weidong; Liu, Jinzhong; Ding, Xing; Peng, Shaobang; Zhan, Wenhuan

    2016-09-26

Serpentinization potentially contributes to the origin and evolution of life during the early history of the Earth. Serpentinization produces molecular hydrogen (H2) that can be utilized by microorganisms to gain metabolic energy. Methane can be formed through reactions between molecular hydrogen and oxidized carbon (e.g., carbon dioxide) or through biotic processes. A simple criterion, the H2/CH4 ratio, has been proposed to differentiate abiotic from biotic methane, with values approximately larger than 40 indicating abiotic methane, a threshold based on serpentinization experiments at 200 °C and 0.3 kbar. However, it is not clear whether the criterion is applicable at a wider range of temperatures. In this study, we performed sixteen experiments at 311-500 °C and 3.0 kbar using natural ground peridotite. Our results demonstrate that the H2/CH4 ratios strongly depend on temperature. At 311 °C and 3.0 kbar, the H2/CH4 ratios ranged from 58 to 2,120, much greater than the critical value of 40. By contrast, at 400-500 °C, the H2/CH4 ratios were much lower, ranging from 0.1 to 8.2. The results of this study suggest that the H2/CH4 ratios cannot reliably discriminate abiotic from biotic methane.

  15. The H2/CH4 ratio during serpentinization cannot reliably identify biological signatures

    Science.gov (United States)

    Huang, Ruifang; Sun, Weidong; Liu, Jinzhong; Ding, Xing; Peng, Shaobang; Zhan, Wenhuan

    2016-09-01

Serpentinization potentially contributes to the origin and evolution of life during the early history of the Earth. Serpentinization produces molecular hydrogen (H2) that can be utilized by microorganisms to gain metabolic energy. Methane can be formed through reactions between molecular hydrogen and oxidized carbon (e.g., carbon dioxide) or through biotic processes. A simple criterion, the H2/CH4 ratio, has been proposed to differentiate abiotic from biotic methane, with values approximately larger than 40 indicating abiotic methane, a threshold based on serpentinization experiments at 200 °C and 0.3 kbar. However, it is not clear whether the criterion is applicable at a wider range of temperatures. In this study, we performed sixteen experiments at 311-500 °C and 3.0 kbar using natural ground peridotite. Our results demonstrate that the H2/CH4 ratios strongly depend on temperature. At 311 °C and 3.0 kbar, the H2/CH4 ratios ranged from 58 to 2,120, much greater than the critical value of 40. By contrast, at 400-500 °C, the H2/CH4 ratios were much lower, ranging from 0.1 to 8.2. The results of this study suggest that the H2/CH4 ratios cannot reliably discriminate abiotic from biotic methane.
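The proposed threshold and the reported measurements can be put side by side in a few lines. The numbers are taken from the abstract; the classifier simply encodes the criterion under test, and the comparison shows why the abstract concludes it is unreliable.

```python
ABIOTIC_THRESHOLD = 40.0  # proposed H2/CH4 cutoff for abiotic methane

def looks_abiotic(h2_ch4):
    # The criterion under test: ratios above ~40 would be read as abiotic.
    return h2_ch4 > ABIOTIC_THRESHOLD

low_T = [58.0, 2120.0]   # 311 °C runs: abiotic, ratios above the cutoff
high_T = [0.1, 8.2]      # 400-500 °C runs: also abiotic, yet below the cutoff

low_T_flags = [looks_abiotic(r) for r in low_T]
high_T_flags = [looks_abiotic(r) for r in high_T]
```

Every high-temperature run is abiotic by construction, yet the criterion would classify all of them as biotic, which is the study's central point.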

  16. Diophantine approximation

    CERN Document Server

    Schmidt, Wolfgang M

    1980-01-01

"In 1970, at the U. of Colorado, the author delivered a course of lectures on his famous generalization, then just established, relating to Roth's theorem on rational approximations to algebraic numbers. The present volume is an expanded and up-dated version of the original mimeographed notes on the course. As an introduction to the author's own remarkable achievements relating to the Thue-Siegel-Roth theory, the text can hardly be bettered and the tract can already be regarded as a classic in its field."(Bull.LMS) "Schmidt's work on approximations by algebraic numbers belongs to the deepest and most satisfactory parts of number theory. These notes give the best accessible way to learn the subject. ... this book is highly recommended." (Mededelingen van het Wiskundig Genootschap)

  17. Approximate Inference and Deep Generative Models

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important application of these models to density estimation, missing data imputation, data compression and planning.

  18. Approximation of the inverse G-frame operator

    Indian Academy of Sciences (India)

... projection method for G-frames which works for all conditional G-Riesz frames. We also derive a method for approximation of the inverse G-frame operator which is efficient for all G-frames. We show how the inverse of the G-frame operator can be approximated as closely as we like using finite-dimensional linear algebra.

  19. Optical approximation in the theory of geometric impedance

    International Nuclear Information System (INIS)

    Stupakov, G.; Bane, K.L.F.; Zagorodnov, I.

    2007-02-01

    In this paper we introduce an optical approximation into the theory of impedance calculation, one valid in the limit of high frequencies. This approximation neglects diffraction effects in the radiation process, and is conceptually equivalent to the approximation of geometric optics in electromagnetic theory. Using this approximation, we derive equations for the longitudinal impedance for arbitrary offsets, with respect to a reference orbit, of source and test particles. With the help of the Panofsky-Wenzel theorem we also obtain expressions for the transverse impedance (also for arbitrary offsets). We further simplify these expressions for the case of the small offsets that are typical for practical applications. Our final expressions for the impedance, in the general case, involve two dimensional integrals over various cross-sections of the transition. We further demonstrate, for several known axisymmetric examples, how our method is applied to the calculation of impedances. Finally, we discuss the accuracy of the optical approximation and its relation to the diffraction regime in the theory of impedance. (orig.)

  20. APPROXIMATION OF FREE-FORM CURVE – AIRFOIL SHAPE

    Directory of Open Access Journals (Sweden)

    CHONG PERK LIN

    2013-12-01

Full Text Available Approximation of free-form shape is essential in numerous engineering applications, particularly in the automotive and aircraft industries. Commercial CAD software for the approximation of free-form shape is based almost exclusively on parametric polynomials and rational parametric polynomials. A parametric curve is defined by a vector function of one independent variable, R(u) = (x(u), y(u), z(u)), where 0 ≤ u ≤ 1. Bézier representation is one such parametric form, widely used in approximating free-form shapes. Given a string of points assumed sufficiently dense to characterise the airfoil shape, it is desirable to approximate the shape with a Bézier representation. The expectation is that the representation function is close to the shape within an acceptable working tolerance. In this paper, the aim is to explore the use of manual and automated methods for approximating the section curve of an airfoil with a Bézier representation.
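A Bézier curve R(u) of the kind discussed can be evaluated with de Casteljau's algorithm, repeated linear interpolation of the control polygon. This is a sketch of the standard algorithm; the control points are illustrative, and the fitting step itself (choosing control points to match the airfoil samples) is not shown.

```python
def de_casteljau(control_points, u):
    # Evaluate the Bezier curve defined by control_points at parameter u in [0, 1]
    # by repeated linear interpolation between adjacent points.
    pts = list(control_points)
    while len(pts) > 1:
        pts = [tuple((1 - u) * a + u * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic example: midpoint of the arch through (0,0), (1,2), (2,0).
point = de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5)
```

A fitting procedure would then choose the control points, manually or by least squares, so that curve samples R(u_i) lie within the working tolerance of the measured airfoil points.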

  1. Current drive and profile control in low aspect ratio tokamaks

    International Nuclear Information System (INIS)

    Chan, V.S.; Chiu, S.C.; Lin-Liu, Y.R.; Miller, R.L.; Turnbull, A.D.

    1995-07-01

    The key to the theoretically predicted high performance of a low aspect ratio tokamak (LAT) is its ability to operate at very large plasma current I_p. The plasma current at low aspect ratios follows the approximate formula I_p ∼ (5a²B_t/Rq_ψ)[(1 + κ²)/2][A/(A − 1)], where A ≡ R/a, which was derived from equilibrium studies. For constant q_ψ and B_t, I_p can increase by an order of magnitude over the case of tokamaks with A ≳ 2.5. The large current results in a significantly enhanced β_t (≡ β_N I_p/aB_t), possibly of order unity. It also compensates for the reduction in A to maintain the same confinement performance, assuming the confinement time τ follows the generic form ∼ HI_p P^(−1/2) R^(3/2) κ^(1/2). The initiation and maintenance of such a large current is therefore a key issue for LATs.
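
    The approximate current scaling quoted above is easy to evaluate numerically. The following sketch is illustrative only: the parameter values are assumptions rather than figures from the paper, with a and R in metres, B_t in tesla and I_p in MA under the usual engineering convention.

```python
# Sketch of the approximate low-aspect-ratio current scaling
# I_p ~ (5 a^2 B_t / (R q_psi)) * (1 + kappa^2)/2 * A/(A - 1), with A = R/a.
# Parameter values below are illustrative assumptions, not from the paper.

def plasma_current_MA(a, R, B_t, q_psi, kappa):
    A = R / a  # aspect ratio
    return (5 * a**2 * B_t / (R * q_psi)) * ((1 + kappa**2) / 2) * (A / (A - 1))

# Same minor radius, field and safety factor; only the aspect ratio differs.
I_low = plasma_current_MA(a=1.0, R=1.3, B_t=2.0, q_psi=5.0, kappa=2.0)   # A = 1.3
I_conv = plasma_current_MA(a=1.0, R=3.0, B_t=2.0, q_psi=5.0, kappa=2.0)  # A = 3.0
print(I_low / I_conv)  # the A/(A - 1) factor drives the enhancement at low A
```

    The hypothetical low-aspect-ratio case carries several times the current of the conventional one, consistent with the order-of-magnitude gains the abstract describes for more extreme parameters.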

  2. Kadav Moun PSA (:60) (Human Remains)

    Centers for Disease Control (CDC) Podcasts

    2010-02-18

    This is an important public health announcement about safety precautions for those handling human remains. Language: Haitian Creole.  Created: 2/18/2010 by Centers for Disease Control and Prevention (CDC).   Date Released: 2/18/2010.

  3. Using laser-induced breakdown spectroscopy to assess preservation quality of archaeological bones by measurement of calcium-to-fluorine ratios.

    Science.gov (United States)

    Rusak, David Alexander; Marsico, Ryan Matthew; Taroli, Brett Louis

    2011-10-01

    We determined calcium-to-fluorine (Ca/F) signal ratios at the surface and in the depth dimension in approximately 6000-year-old sheep and cattle bones using Ca I 671.8 and F I 685.6 emission lines. Because the bones had been previously analyzed for collagen preservation quality by measurement of C/N ratios at the Oxford Radiocarbon Accelerator Unit, we were able to examine the correlation between our ratios and quality of preservation. In the bones analyzed in this experiment, the Ca I 671.8/F I 685.6 ratio was generally lower and decreased with successive laser pulses into poorly preserved bones while the ratio was generally higher and increased with successive laser pulses into well-preserved bones. After 210 successive pulses, a discriminator value for this ratio (5.70) could be used to distinguish well-preserved and poorly preserved bones regardless of species. © 2011 Society for Applied Spectroscopy
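
    The two indicators described in the abstract, the trend of the Ca I 671.8/F I 685.6 ratio over successive pulses and the discriminator value of 5.70 after 210 pulses, can be sketched as a simple classifier. The pulse series below are synthetic stand-ins, not measured data.

```python
# Sketch of the two indicators described above for assessing bone
# preservation from LIBS depth profiles: the trend of the Ca I 671.8 /
# F I 685.6 ratio over successive laser pulses, and the reported
# discriminator value of 5.70 after 210 pulses. Pulse series are synthetic.

def slope(ys):
    # least-squares slope of ys against pulse index 0..n-1
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(ys))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def classify(ca_f_ratios, discriminator=5.70):
    well = ca_f_ratios[-1] > discriminator and slope(ca_f_ratios) > 0
    return "well-preserved" if well else "poorly preserved"

well = [6.0 + 0.01 * i for i in range(210)]   # ratio rising with depth
poor = [5.0 - 0.005 * i for i in range(210)]  # ratio falling with depth
print(classify(well), classify(poor))
```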

  4. Conference on Abstract Spaces and Approximation

    CERN Document Server

    Szökefalvi-Nagy, B; Abstrakte Räume und Approximation; Abstract spaces and approximation

    1969-01-01

    The present conference took place at Oberwolfach, July 18-27, 1968, as a direct follow-up on a meeting on Approximation Theory [1] held there from August 4-10, 1963. The emphasis was on theoretical aspects of approximation, rather than the numerical side. Particular importance was placed on the related fields of functional analysis and operator theory. Thirty-nine papers were presented at the conference and one more was subsequently submitted in writing. All of these are included in these proceedings. In addition there is a report on new and unsolved problems based upon a special problem session and later communications from the participants. A special role is played by the survey papers also presented in full. They cover a broad range of topics, including invariant subspaces, scattering theory, Wiener-Hopf equations, interpolation theorems, contraction operators, approximation in Banach spaces, etc. The papers have been classified according to subject matter into five chapters, but it needs littl...

  5. Kullback-Leibler divergence and the Pareto-Exponential approximation.

    Science.gov (United States)

    Weinberg, G V

    2016-01-01

    Recent radar research interests in the Pareto distribution as a model for X-band maritime surveillance radar clutter returns have resulted in analysis of the asymptotic behaviour of this clutter model. In particular, it is of interest to understand when the Pareto distribution is well approximated by an Exponential distribution. The justification for this is that under the latter clutter model assumption, simpler radar detection schemes can be applied. An information theory approach is introduced to investigate the Pareto-Exponential approximation. By analysing the Kullback-Leibler divergence between the two distributions it is possible not only to assess when the approximation is valid, but also to determine, for a given Pareto model, the optimal Exponential approximation.
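
    As a numerical sketch of the idea, one can estimate the Kullback-Leibler divergence between a Pareto Type II (Lomax) clutter model and the Exponential distribution matched to its mean by direct integration. The Lomax parametrisation and the parameter values here are assumptions for illustration, not the paper's closed-form analysis.

```python
import math

# Numerical sketch: Kullback-Leibler divergence between a Pareto Type II
# (Lomax) clutter model and the Exponential distribution matched to its mean.
# The Lomax parametrisation f(x) = alpha * beta^alpha / (x + beta)^(alpha+1)
# and the parameter values below are assumptions for illustration.

def lomax_pdf(x, alpha, beta):
    return alpha * beta**alpha / (x + beta)**(alpha + 1)

def exp_pdf(x, lam):
    return lam * math.exp(-lam * x)

def kl_lomax_vs_exp(alpha, beta, n=200_000, upper=200.0):
    mean = beta / (alpha - 1)      # Lomax mean (requires alpha > 1)
    lam = 1.0 / mean               # Exponential matched to that mean
    dx = upper / n
    total = 0.0
    for i in range(1, n):          # simple Riemann sum over (0, upper)
        x = i * dx
        p = lomax_pdf(x, alpha, beta)
        total += p * math.log(p / exp_pdf(x, lam)) * dx
    return total

# A heavier Pareto tail (small alpha) is a worse Exponential fit:
k_heavy = kl_lomax_vs_exp(3.0, 2.0)
k_light = kl_lomax_vs_exp(20.0, 19.0)
print(k_heavy, k_light)
```

    For this matched-mean parametrisation the divergence works out analytically to log(α/(α − 1)) − 1/α, so the α = 3 case should land near 0.072 and shrink towards zero as α grows, matching the intuition that the Pareto model becomes exponential-like for large shape parameters.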

  6. Sharpening Sharpe Ratios

    OpenAIRE

    William N. Goetzmann; Jonathan E. Ingersoll Jr.; Matthew I. Spiegel; Ivo Welch

    2002-01-01

    It is now well known that the Sharpe ratio and other related reward-to-risk measures may be manipulated with option-like strategies. In this paper we derive the general conditions for achieving the maximum expected Sharpe ratio. We derive static rules for achieving the maximum Sharpe ratio with two or more options, as well as a continuum of derivative contracts. The optimal strategy has a truncated right tail and a fat left tail. We also derive dynamic rules for increasing the Sharpe ratio. O...

  7. The Radial Distribution of Star Formation in Galaxies at Z approximately 1 from the 3D-HST Survey

    Science.gov (United States)

    Nelson, Erica June; vanDokkum, Pieter G.; Momcheva, Ivelina; Brammer, Gabriel; Lundgren, Britt; Skelton, Rosalind E.; Whitaker, Katherine E.; DaCunha, Elisabete; Schreiber, Natascha Foerster; Franx, Marijn

    2013-01-01

    The assembly of galaxies can be described by the distribution of their star formation as a function of cosmic time. Thanks to the WFC3 grism on the Hubble Space Telescope (HST) it is now possible to measure this beyond the local Universe. Here we present the spatial distribution of Hα emission for a sample of 54 strongly star-forming galaxies at z approximately 1 in the 3D-HST Treasury survey. By stacking the Hα emission, we find that star formation occurred in approximately exponential distributions at z approximately 1, with a median Sersic index of n = 1.0 +/- 0.2. The stacks are elongated, with median axis ratios of b/a = 0.58 +/- 0.09 in Hα, consistent with (possibly thick) disks at random orientation angles. Keck spectra obtained for a subset of eight of the galaxies show clear evidence for rotation, with inclination-corrected velocities of 90-330 km s(exp -1). The most straightforward interpretation of our results is that star formation in strongly star-forming galaxies at z approximately 1 generally occurred in disks. The disks appear to be scaled-up versions of nearby spiral galaxies: they have EW(Hα) of approximately 100 A out to the solar orbit and they have star formation surface densities above the threshold for driving galactic-scale winds.

  8. Diagonal Pade approximations for initial value problems

    International Nuclear Information System (INIS)

    Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.

    1987-06-01

    Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab
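
    As a toy illustration of the idea (not the factored-polynomial scheme of the paper), the (1,1) diagonal Pade approximant of the exponential, (1 + z/2)/(1 - z/2), applied once per time step to the scalar problem dy/dt = λy, reproduces the familiar second-order Crank-Nicolson rule:

```python
import math

# Toy example: the (1,1) diagonal Pade approximant of exp(z),
# (1 + z/2) / (1 - z/2), used as a one-step time propagator for the
# linear initial value problem dy/dt = lam * y. This is the classic
# Crank-Nicolson rule, not the factored scheme of the paper.

def pade11_step(y, lam, dt):
    z = lam * dt
    return y * (1 + z / 2) / (1 - z / 2)

lam, dt, steps = -1.0, 0.1, 10
y = 1.0
for _ in range(steps):
    y = pade11_step(y, lam, dt)

exact = math.exp(lam * dt * steps)
print(abs(y - exact))  # second-order accuracy: error well below dt
```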

  9. A test of the adhesion approximation for gravitational clustering

    Science.gov (United States)

    Melott, Adrian L.; Shandarin, Sergei; Weinberg, David H.

    1993-01-01

    We quantitatively compare a particle implementation of the adhesion approximation to fully non-linear, numerical 'N-body' simulations. Our primary tool, cross-correlation of N-body simulations with the adhesion approximation, indicates good agreement, better than that found by the same test performed with the Zel'dovich approximation (hereafter ZA). However, the cross-correlation is not as good as that of the truncated Zel'dovich approximation (TZA), obtained by applying the Zel'dovich approximation after smoothing the initial density field with a Gaussian filter. We confirm that the adhesion approximation produces an excessively filamentary distribution. Relative to the N-body results, we also find that: (a) the power spectrum obtained from the adhesion approximation is more accurate than that from ZA or TZA, (b) the error in the phase angle of Fourier components is worse than that from TZA, and (c) the mass distribution function is more accurate than that from ZA or TZA. It appears that adhesion performs well statistically, but that TZA is more accurate dynamically, in the sense of moving mass to the right place.

  10. Hydrogen: Beyond the Classic Approximation

    International Nuclear Information System (INIS)

    Scivetti, Ivan

    2003-01-01

    The classical nucleus approximation is the most frequently used approach for the resolution of problems in condensed matter physics. However, there are systems in nature where it is necessary to introduce the nuclear degrees of freedom to obtain a correct description of the properties. Examples of this are systems containing hydrogen. In this work, we have studied the resolution of the quantum nuclear problem for the particular case of the water molecule. The Hartree approximation has been used, i.e. we have considered that the nuclei are distinguishable particles. In addition, we have proposed a model to solve the tunneling process, which involves the resolution of the nuclear problem for configurations of the system away from its equilibrium position.

  11. Jet length/velocity ratio: a new index for echocardiographic evaluation of chronic aortic regurgitation.

    Science.gov (United States)

    Güvenç, Tolga Sinan; Karaçimen, Denizhan; Erer, Hatice Betül; İlhan, Erkan; Sayar, Nurten; Karakuş, Gültekin; Çekirdekçi, Elif; Eren, Mehmet

    2015-01-01

    Management of aortic regurgitation depends on the assessment of severity. Echocardiography remains the most widely available tool for evaluation of aortic regurgitation. In this manuscript, we describe a novel parameter, the jet length/velocity ratio, for the diagnosis of severe aortic regurgitation. A total of 30 patients with aortic regurgitation were included in this study. Severity of aortic regurgitation was assessed with an aortic regurgitation index incorporating five echocardiographic parameters. The jet length/velocity ratio is calculated as the ratio of maximum jet penetrance to the mean velocity of regurgitant flow. The jet length/velocity ratio was significantly higher in patients with severe aortic regurgitation (2.03 ± 0.53) compared to patients with less than severe aortic regurgitation (1.24 ± 0.32, P < 0.001). Correlation of the jet length/velocity ratio with the aortic regurgitation index was very good (r² = 0.86) and the correlation coefficient was higher for the jet length/velocity ratio than for vena contracta, jet width/LVOT ratio and pressure half-time. For a cutoff value of 1.61, the jet length/velocity ratio had a sensitivity of 92% and a specificity of 88%, with an AUC value of 0.955. The jet length/velocity ratio is a novel parameter that can be used to assess severity of chronic aortic regurgitation. The main limitation for use of this novel parameter is jet impingement on the left ventricular wall. © 2014, Wiley Periodicals, Inc.
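
    The parameter described above reduces to a simple quotient compared against the reported cutoff of 1.61. The sketch below is purely illustrative; the input values (and their units) are hypothetical, not measurements from the study.

```python
# Sketch of the jet length/velocity ratio described above: maximum jet
# penetrance divided by the mean velocity of regurgitant flow, compared
# against the reported cutoff of 1.61. The input values and their units
# are illustrative assumptions, not measurements from the study.

SEVERITY_CUTOFF = 1.61  # reported cutoff (92% sensitivity, 88% specificity)

def jet_length_velocity_ratio(max_jet_penetrance, mean_regurgitant_velocity):
    return max_jet_penetrance / mean_regurgitant_velocity

def is_severe(ratio, cutoff=SEVERITY_CUTOFF):
    return ratio >= cutoff

r = jet_length_velocity_ratio(7.2, 3.5)  # hypothetical patient values
print(round(r, 2), is_severe(r))
```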

  12. Utilizing Dietary Micronutrient Ratios in Nutritional Research May be More Informative than Focusing on Single Nutrients

    Directory of Open Access Journals (Sweden)

    Owen J. Kelly

    2018-01-01

    Full Text Available The 2015 US dietary guidelines advise the importance of good dietary patterns for health, which includes all nutrients. Micronutrients are rarely, if ever, consumed separately; they are not tissue-specific in their actions, and at the molecular level they are multitaskers. Metabolism functions within a seemingly random cellular milieu; however, ratios are important, for example, the ratio of adenosine triphosphate to adenosine monophosphate, or of oxidized to reduced glutathione. Health status is determined by simple ratios, such as the waist-hip ratio, or the ratio of fat mass to lean mass. Some nutrient ratios exist and remain controversial, such as the omega-6/omega-3 fatty acid ratio and the sodium/potassium ratio. Therefore, examining ratios of micronutrients may convey more information about how diet and health outcomes are related. Summarized micronutrient intake data, from food only, from the National Health and Nutrition Examination Survey were used to generate initial ratios. Overall, in this preliminary analysis, dietary ratios of micronutrients showed some differences between intakes and recommendations. The principles outlined here could be used in nutritional epidemiology and in basic nutritional research, rather than focusing on individual nutrient intakes. This paper presents the concept of micronutrient ratios to encourage change in the way nutrients are regarded.

  13. Air/fuel ratio visualization in a diesel spray

    Science.gov (United States)

    Carabell, Kevin David

    1993-01-01

    To investigate some features of high pressure diesel spray ignition, we have applied a newly developed planar imaging system to a spray in an engine-fed combustion bomb. The bomb is designed to give flow characteristics similar to those in a direct injection diesel engine yet provide nearly unlimited optical access. A high pressure electronic unit injector system with on-line manually adjustable main and pilot injection features was used. The primary scalar of interest was the local air/fuel ratio, particularly near the spray plumes. To make this measurement quantitative, we have developed a calibration LIF technique. The development of this technique is the key contribution of this dissertation. The air/fuel ratio measurement was made using biacetyl as a seed in the air inlet to the engine. When probed by a tripled Nd:YAG laser the biacetyl fluoresces, with a signal proportional to the local biacetyl concentration. This feature of biacetyl enables the fluorescent signal to be used as an indicator of local fuel vapor concentration. The biacetyl partial pressure was carefully controlled, enabling estimates of the local concentration of air and the approximate local stoichiometry in the fuel spray. The results indicate that the image quality generated with this method is sufficient for generating air/fuel ratio contours. The processes during the ignition delay have a marked effect on ignition and the subsequent burn. These processes, vaporization and pre-flame kinetics, very much depend on the mixing of the air and fuel. This study has shown that poor mixing and over-mixing of the air and fuel will directly affect the type of ignition. An optimal mixing arrangement exists and depends on the swirl ratio in the engine, the number of holes in the fuel injector and the distribution of fuel into a pilot and main injection. If a short delay and a diffusion burn is desired, the best mixing parameters among those surveyed would be a high swirl ratio, a 4-hole nozzle and a

  14. Simultaneous approximation in scales of Banach spaces

    International Nuclear Information System (INIS)

    Bramble, J.H.; Scott, R.

    1978-01-01

    The problem of verifying optimal approximation simultaneously in different norms in a Banach scale is reduced to verification of optimal approximation in the highest order norm. The basic tool used is the Banach space interpolation method developed by Lions and Peetre. Applications are given to several problems arising in the theory of finite element methods

  15. Abnormal X : autosome ratio, but normal X chromosome inactivation in human triploid cultures

    Directory of Open Access Journals (Sweden)

    Norwood Thomas H

    2006-07-01

    Full Text Available Abstract Background X chromosome inactivation (XCI is that aspect of mammalian dosage compensation that brings about equivalence of X-linked gene expression between females and males by inactivating one of the two X chromosomes (Xi in normal female cells, leaving them with a single active X (Xa as in male cells. In cells with more than two X's, but a diploid autosomal complement, all X's but one, Xa, are inactivated. This phenomenon is commonly thought to suggest (1 that normal development requires a ratio of one Xa per diploid autosomal set, and (2 that an early event in XCI is the marking of one X to be active, with remaining X's becoming inactivated by default. Results Triploids provide a test of these ideas because the ratio of one Xa per diploid autosomal set cannot be achieved, yet this abnormal ratio should not necessarily affect the one-Xa choice mechanism for XCI. Previous studies of XCI patterns in murine triploids support the single-Xa model, but human triploids mostly have two-Xa cells, whether they are XXX or XXY. The XCI patterns we observe in fibroblast cultures from different XXX human triploids suggest that the two-Xa pattern of XCI is selected for, and may have resulted from rare segregation errors or Xi reactivation. Conclusion The initial X inactivation pattern in human triploids, therefore, is likely to resemble the pattern that predominates in murine triploids, i.e., a single Xa, with the remaining X's inactive. Furthermore, our studies of XIST RNA accumulation and promoter methylation suggest that the basic features of XCI are normal in triploids despite the abnormal X:autosome ratio.

  16. Monitoring buried remains with a transparent 3D half bird's eye view of ground penetrating radar data in the Zeynel Bey tomb in the ancient city of Hasankeyf, Turkey

    International Nuclear Information System (INIS)

    Kadioglu, Selma; Kadioglu, Yusuf Kagan; Akyol, Ali Akin

    2011-01-01

    The aim of this paper is to show a new monitoring approximation for ground penetrating radar (GPR) data. The method was used to define buried archaeological remains inside and outside the Zeynel Bey tomb in Hasankeyf, an ancient city in south-eastern Turkey. The study examined whether the proposed GPR method could yield useful results at this highly restricted site, which has a maximum diameter inside the tomb of 4 m. A transparent three-dimensional (3D) half bird's eye view was constructed from a processed parallel-aligned two-dimensional GPR profile data set by using an opaque approximation instead of linear opacity. Interactive visualizations of transparent 3D sub-data volumes were conducted. The amplitude-colour scale was balanced by the amplitude range of the buried remains in a depth range, and a different opacity value was assigned for this range, in order to distinguish the buried remains from one another. Therefore, the maximum amplitude values of the amplitude-colour scale were rearranged with the same colour range. This process clearly revealed buried remains in depth slices and transparent 3D data volumes. However, the transparent 3D half bird's eye views of the GPR data better revealed the remains than the depth slices of the same data. In addition, the results showed that the half bird's eye perspective was important in order to image the buried remains. Two rectangular walls were defined, one within and the other perpendicularly, in the basement structure of the Zeynel Bey tomb, and a cemetery was identified aligned in the east–west direction at the north side of the tomb. The transparent 3D half bird's eye view of the GPR data set also determined the buried walls outside the tomb. The findings of the excavation works at the Zeynel Bey tomb successfully overlapped with the new visualization results.

  17. On transparent potentials: a Born approximation study

    International Nuclear Information System (INIS)

    Coudray, C.

    1980-01-01

    In the frame of the scattering inverse problem at fixed energy, a class of potentials transparent in Born approximation is obtained. All these potentials are spherically symmetric and are oscillating functions of the reduced radial variable. Amongst them, the Born approximation of the transparent potential of the Newton-Sabatier method is found. In the same class, quasi-transparent potentials are exhibited. Very general features of potentials transparent in Born approximation are then stated. And bounds are given for the exact scattering amplitudes corresponding to most of the potentials previously exhibited. These bounds, obtained at fixed energy, and for large values of the angular momentum, are found to be independent on the energy

  18. Gravitational recoil from binary black hole mergers: The close-limit approximation

    International Nuclear Information System (INIS)

    Sopuerta, Carlos F.; Yunes, Nicolas; Laguna, Pablo

    2006-01-01

    The coalescence of a binary black hole system is one of the main sources of gravitational waves that present and future detectors will study. Apart from the energy and angular momentum that these waves carry, for unequal-mass binaries there is also a net flux of linear momentum that implies a recoil velocity of the resulting final black hole in the opposite direction. Due to the relevance of this phenomenon in astrophysics, in particular, for galaxy merger scenarios, there have been several attempts to estimate the magnitude of this velocity. Since the main contribution to the recoil comes from the last orbit and plunge, an approximation valid at the last stage of coalescence is well motivated for this type of calculation. In this paper, we present a computation of the recoil velocity based on the close-limit approximation scheme, which gives excellent results for head-on and grazing collisions of black holes when compared to full numerical relativistic calculations. We obtain a maximum recoil velocity of ∼57 km/s for a symmetric mass ratio η = M₁M₂/(M₁ + M₂)² ∼ 0.19 and an initial proper separation of 4M, where M is the total Arnowitt-Deser-Misner (ADM) mass of the system. This separation is the maximum at which the close-limit approximation is expected to provide accurate results. Therefore, it cannot account for the contributions due to inspiral and initial merger. If we supplement this estimate with post-Newtonian (PN) calculations up to the innermost stable circular orbit, we obtain a lower bound for the recoil velocity, with a maximum around 80 km/s. This is a lower bound because it neglects the initial merger phase. We can however obtain a rough estimate by using PN methods or the close-limit approximation. Since both methods are known to overestimate the amount of radiation, we obtain in this way an upper bound for the recoil with maxima in the range of 214-240 km/s. We also provide nonlinear fits to these estimated upper and lower bounds. These

  19. Approximate supernova remnant dynamics with cosmic ray production

    Science.gov (United States)

    Voelk, H. J.; Drury, L. O.; Dorfi, E. A.

    1985-01-01

    Supernova explosions are the most violent and energetic events in the galaxy and have long been considered probable sources of Cosmic Rays. Recent shock acceleration models, treating the Cosmic Rays (CR's) as test particles in a prescribed Supernova Remnant (SNR) evolution, indeed indicate an approximate power-law momentum distribution f_source(p) ∝ p^(-a) for the particles ultimately injected into the Interstellar Medium (ISM). This spectrum extends almost to the momentum p = 1 million GeV/c, where the break in the observed spectrum occurs. The calculated power-law index a ≲ 4.2 agrees with that inferred for the galactic CR sources. The absolute CR intensity can however not be well determined in such a test particle approximation.

  20. Approximate supernova remnant dynamics with cosmic ray production

    International Nuclear Information System (INIS)

    Voelk, H.J.; Drury, L.O.; Dorfi, E.A.

    1985-01-01

    Supernova explosions are the most violent and energetic events in the galaxy and have long been considered probable sources of cosmic rays. Recent shock acceleration models, treating the cosmic rays (CR's) as test particles in a prescribed supernova remnant (SNR) evolution, indeed indicate an approximate power-law momentum distribution f_source(p) ∝ p^(-a) for the particles ultimately injected into the interstellar medium (ISM). This spectrum extends almost to the momentum p = 1 million GeV/c, where the break in the observed spectrum occurs. The calculated power-law index a ≲ 4.2 agrees with that inferred for the galactic CR sources. The absolute CR intensity can however not be well determined in such a test particle approximation.

  1. Second-to-fourth digit ratio and impulsivity: a comparison between offenders and nonoffenders.

    Science.gov (United States)

    Hanoch, Yaniv; Gummerum, Michaela; Rolison, Jonathan

    2012-01-01

    Personality characteristics, particularly impulsive tendencies, have long been conceived as the primary culprit in delinquent behavior. One crucial question to emerge from this line of work is whether impulsivity has a biological basis. To test this possibility, 44 male offenders and 46 nonoffenders completed the Eysenck Impulsivity Questionnaire and had their 2D∶4D ratio measured. Offenders exhibited smaller right-hand digit ratio measurements compared to nonoffenders, but higher impulsivity scores. Both impulsivity and 2D∶4D ratio measurements significantly predicted criminality (offenders vs. nonoffenders). Controlling for education level, the 2D∶4D ratio measurements remained a significant predictor of criminality, while impulsivity scores no longer predicted criminality significantly. Our data thus indicate that impulsivity, but not 2D∶4D ratio measurements, relates to educational attainment. As offenders varied in their number of previous convictions and the nature of their individual crimes, we also tested for differences in 2D∶4D ratio and impulsivity among offenders. Number of previous convictions did not correlate significantly with the 2D∶4D ratio measurements or impulsivity scores. Our study established a link between a biological marker and impulsivity among offenders (and the lack thereof among nonoffenders), which emphasises the importance of studying the relationship between biological markers, impulsivity and criminal behavior.

  2. Flow and Pollutant Transport in Urban Street Canyons of Different Aspect Ratios with Ground Heating: Large-Eddy Simulation

    Science.gov (United States)

    Li, Xian-Xiang; Britter, Rex E.; Norford, Leslie K.; Koh, Tieh-Yong; Entekhabi, Dara

    2012-02-01

    A validated large-eddy simulation model was employed to study the effect of the aspect ratio and ground heating on the flow and pollutant dispersion in urban street canyons. Three ground-heating intensities (neutral, weak and strong) were imposed in street canyons of aspect ratio 1, 2, and 0.5. The detailed patterns of flow, turbulence, temperature and pollutant transport were analyzed and compared. Significant changes of flow and scalar patterns were caused by ground heating in the street canyon of aspect ratio 2 and 0.5, while only the street canyon of aspect ratio 0.5 showed a change in flow regime (from wake interference flow to skimming flow). The street canyon of aspect ratio 1 does not show any significant change in the flow field. Ground heating generated strong mixing of heat and pollutant; the normalized temperature inside street canyons was approximately spatially uniform and somewhat insensitive to the aspect ratio and heating intensity. This study helps elucidate the combined effects of urban geometry and thermal stratification on the urban canyon flow and pollutant dispersion.

  3. Variation of strontium stable isotope ratios and origins of strontium in Japanese vegetables and comparison with Chinese vegetables.

    Science.gov (United States)

    Aoyama, Keisuke; Nakano, Takanori; Shin, Ki-Cheol; Izawa, Atsunobu; Morita, Sakie

    2017-12-15

    To evaluate the utility of the ⁸⁷Sr/⁸⁶Sr ratio for determining the geographical provenance of vegetables, we compared ⁸⁷Sr/⁸⁶Sr ratios and Sr concentrations in five vegetable species grown in Japan and China, and we also examined the relationships between ⁸⁷Sr/⁸⁶Sr ratios in vegetables, the soil-exchangeable pool, irrigation water, and fertilizer from 20 Japanese agricultural areas. The vegetable ⁸⁷Sr/⁸⁶Sr ratios in Japan were similar for all species within a given agricultural area, but tended to be low in northeast Japan and high in southwest Japan. The median ⁸⁷Sr/⁸⁶Sr ratio in Japanese vegetables was similar to that in fertilizer, suggesting that in addition to rock-derived Sr, vegetables contain Sr derived from fertilizers. In most cases, the ⁸⁷Sr/⁸⁶Sr ratios for the Japanese and Chinese vegetables could be separated at a value of approximately 0.710. Linear discriminant analysis using both ⁸⁷Sr/⁸⁶Sr and the Sr concentration allowed more accurate discrimination between vegetables from the two countries. Copyright © 2017 Elsevier Ltd. All rights reserved.
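
    The discriminant step can be sketched with a minimal two-feature Fisher linear discriminant. All numbers below are synthetic illustrations, not data from the study.

```python
# Minimal two-feature Fisher linear discriminant, sketching the kind of
# analysis described above. Feature pairs are (Sr isotope ratio, Sr
# concentration); all numbers are synthetic, not data from the study.

def mean_vec(rows):
    n = len(rows)
    return [sum(r[0] for r in rows) / n, sum(r[1] for r in rows) / n]

def scatter(rows, m):
    # 2x2 within-class scatter matrix, sum of (x - m)(x - m)^T
    s = [[0.0, 0.0], [0.0, 0.0]]
    for r in rows:
        d = (r[0] - m[0], r[1] - m[1])
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

def fisher_direction(a, b):
    # w = Sw^{-1} (m_a - m_b), with Sw the pooled within-class scatter
    ma, mb = mean_vec(a), mean_vec(b)
    sa, sb = scatter(a, ma), scatter(b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [(sw[1][1] * dm[0] - sw[0][1] * dm[1]) / det,
            (-sw[1][0] * dm[0] + sw[0][0] * dm[1]) / det]

def proj(w, x):
    return w[0] * x[0] + w[1] * x[1]

# Hypothetical (ratio, concentration) samples for two origins
origin_a = [(0.7068, 1.2), (0.7075, 1.5), (0.7081, 1.1)]
origin_b = [(0.7112, 2.8), (0.7120, 2.5), (0.7105, 3.1)]
w = fisher_direction(origin_a, origin_b)
# Projections onto w should separate the two groups cleanly:
print(min(proj(w, x) for x in origin_a) > max(proj(w, x) for x in origin_b))
```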

  4. Dynamic response of low aspect ratio piezoelectric microcantilevers actuated in different liquid environments

    International Nuclear Information System (INIS)

    Vázquez, J; Rivera, M A; Hernando, J; Sánchez-Rojas, J L

    2009-01-01

    The response of commercial piezoelectric AFM probes for potential applications in the field of chemical or biological sensors operating in liquids is investigated using laser Doppler vibrometry. The present work investigates the roles played in the frequency response by the density and the viscosity of different water–glycerol mixtures, in a frequency range of up to 1 MHz in air. Since the width of the tested probes is relatively large (and hence the aspect ratio remains small), inertial loading effects dominate viscous effects, unlike in cantilevers characterized by larger aspect ratios. Measurements are compared with results provided by a simplified computer model of a probe immersed in an inviscid surrounding fluid

  5. Geometric convergence of some two-point Pade approximations

    International Nuclear Information System (INIS)

    Nemeth, G.

    1983-01-01

    The geometric convergences of some two-point Pade approximations are investigated on the real positive axis and on certain infinite sets of the complex plane. Some theorems concerning the geometric convergence of Pade approximations are proved, and bounds on geometric convergence rates are given. The results may be interesting considering the applications both in numerical computations and in approximation theory. As a specific case, the numerical calculations connected with the plasma dispersion function may be performed. (D.Gy.)

  6. Standard filter approximations for low power Continuous Wavelet Transforms.

    Science.gov (United States)

    Casson, Alexander J; Rodriguez-Villegas, Esther

    2010-01-01

    Analogue domain implementations of the Continuous Wavelet Transform (CWT) have proved popular in recent years as they can be implemented at very low power consumption levels. This is essential for use in wearable, long-term physiological monitoring systems. Present analogue CWT implementations rely on taking a mathematical approximation of the desired mother wavelet function to give a filter transfer function that is suitable for circuit implementation. This paper investigates the use of standard filter approximations (Butterworth, Chebyshev, Bessel) as an alternative wavelet approximation technique. This extends the number of approximation techniques available for generating analogue CWT filters. An example ECG analysis shows that signal information can be successfully extracted using these CWT approximations.
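
    As a minimal numerical illustration (not the paper's circuit design), the magnitude response of an nth-order Butterworth low-pass prototype, |H(jω)| = 1/√(1 + (ω/ω_c)^(2n)), shows the maximally flat passband and monotone roll-off that make such standard approximations candidate building blocks for analogue filter banks; the order and cutoff below are arbitrary choices.

```python
import math

# Magnitude response of an nth-order Butterworth low-pass prototype,
# |H(jw)| = 1 / sqrt(1 + (w/wc)^(2n)): maximally flat in the passband,
# monotone roll-off. Order and cutoff here are arbitrary illustrative choices.

def butterworth_mag(w, wc=1.0, n=4):
    return 1.0 / math.sqrt(1.0 + (w / wc) ** (2 * n))

# Passband, -3 dB point, and stopband samples:
print(butterworth_mag(0.1), butterworth_mag(1.0), butterworth_mag(10.0))
```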

  7. Ordering, symbols and finite-dimensional approximations of path integrals

    International Nuclear Information System (INIS)

    Kashiwa, Taro; Sakoda, Seiji; Zenkin, S.V.

    1994-01-01

    We derive a general form of finite-dimensional approximations of path integrals for both bosonic and fermionic canonical systems in terms of symbols of operators determined by operator ordering. We argue that for a system with a given quantum Hamiltonian such approximations are independent of the type of symbols up to terms of O(ε), where ε is the infinitesimal time interval determining the accuracy of the approximations. A new class of such approximations is found for both c-number and Grassmannian dynamical variables. The actions determined by the approximations are non-local and have no classical continuum limit except in the cases of pq- and qp-ordering. As an explicit example the fermionic oscillator is considered in detail. (author)

  8. Quark-diquark approximation of the three-quark structure of baryons in the quark confinement model

    International Nuclear Information System (INIS)

    Efimov, G.V.; Ivanov, M.A.; Lyubovitskij, V.E.

    1990-01-01

    The baryon octet (1/2⁺) and decuplet (3/2⁺), treated as relativistic three-quark states, are investigated in the quark confinement model (QCM), a relativistic quark model based on certain assumptions about hadronization and quark confinement. The quark-diquark approximation of the three-quark structure of baryons is proposed. In the framework of this approach, the main low-energy characteristics of baryons are described: magnetic moments, electromagnetic radii and form factors, the ratio of axial to vector constants in semileptonic baryon octet decays, and strong form factors and decay widths. The obtained results are in agreement with experimental data. 31 refs.; 4 figs.; 5 tabs

  9. The Annuity Puzzle Remains a Puzzle

    NARCIS (Netherlands)

    Peijnenburg, J.M.J.; Werker, Bas; Nijman, Theo

    We examine incomplete annuity menus and background risk as possible drivers of divergence from full annuitization. Contrary to what is often suggested in the literature, we find that full annuitization remains optimal if saving is possible after retirement. This holds irrespective of whether real or

  10. Hardness of approximation for strip packing

    DEFF Research Database (Denmark)

    Adamaszek, Anna Maria; Kociumaka, Tomasz; Pilipczuk, Marcin

    2017-01-01

    Strip packing is a classical packing problem, where the goal is to pack a set of rectangular objects into a strip of a given width, while minimizing the total height of the packing. The problem has multiple applications, for example, in scheduling and stock-cutting, and has been studied extensively......)-approximation by two independent research groups [FSTTCS 2016, WALCOM 2017]. This raises a question whether strip packing with polynomially bounded input data admits a quasi-polynomial time approximation scheme, as is the case for related two-dimensional packing problems like maximum independent set of rectangles or two...

  11. Adaptive control using neural networks and approximate models.

    Science.gov (United States)

    Narendra, K S; Mukhopadhyay, S

    1997-01-01

    The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural controllers to overcome computational complexity. In this paper, we introduce two classes of models which are approximations to the NARMA model, and which are linear in the control input. The latter fact substantially simplifies both the theoretical analysis as well as the practical implementation of the controller. Extensive simulation studies have shown that the neural controllers designed using the proposed approximate models perform very well, and in many cases even better than an approximate controller designed using the exact NARMA model. In view of their mathematical tractability as well as their success in simulation studies, a case is made in this paper that such approximate input-output models warrant a detailed study in their own right.
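
The two model classes in the paper are neural-network based, but the key algebraic point, that a model linear in the control input admits a closed-form control law, can be sketched with a known toy map standing in for the trained networks (the plant and all names below are invented):

```python
import math

# Hypothetical plant: y(k+1) = 0.5*y(k) + 0.3*sin(y(k)) + u(k)

def f(y):
    # stands in for the learned drift term of the approximate model
    return 0.5 * y + 0.3 * math.sin(y)

def g(y):
    # stands in for the learned control-input gain (here constant)
    return 1.0

def control(y, y_ref):
    # model linear in u: y(k+1) ~ f(y) + g(y)*u  =>  u = (y_ref - f(y)) / g(y)
    return (y_ref - f(y)) / g(y)

y, y_ref = 0.0, 1.2
for _ in range(5):
    u = control(y, y_ref)
    y = 0.5 * y + 0.3 * math.sin(y) + u   # true plant update
```

Because this toy plant happens to be exactly linear in u, the approximate-model controller achieves deadbeat tracking; with a learned f and g the tracking would only be approximate, which is the situation the simulation studies in the paper address.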

  12. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, showing how it can be used: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by quality. As a result, applying the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.

  13. Reducing Monte Carlo error in the Bayesian estimation of risk ratios using log-binomial regression models.

    Science.gov (United States)

    Salmerón, Diego; Cano, Juan A; Chirlaque, María D

    2015-08-30

    In cohort studies, binary outcomes are very often analyzed by logistic regression. However, it is well known that when the goal is to estimate a risk ratio, the logistic regression is inappropriate if the outcome is common. In these cases, a log-binomial regression model is preferable. On the other hand, the estimation of the regression coefficients of the log-binomial model is difficult owing to the constraints that must be imposed on these coefficients. Bayesian methods allow a straightforward approach for log-binomial regression models and produce smaller mean squared errors in the estimation of risk ratios than the frequentist methods, and the posterior inferences can be obtained using the software WinBUGS. However, Markov chain Monte Carlo methods implemented in WinBUGS can lead to large Monte Carlo errors in the approximations to the posterior inferences because they produce correlated simulations, and the accuracy of the approximations is inversely related to this correlation. To reduce correlation and to improve accuracy, we propose a reparameterization based on a Poisson model and a sampling algorithm coded in R. Copyright © 2015 John Wiley & Sons, Ltd.
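
The WinBUGS/R machinery is not reproduced here, but the underlying idea, that a Poisson working model recovers the risk ratio for a common binary outcome, can be sketched with a hand-rolled Newton/IRLS fit; the data, sample size and seed below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.integers(0, 2, n)               # binary exposure
p = 0.2 * np.where(x == 1, 2.0, 1.0)    # true risk ratio = 2, common outcome
y = (rng.random(n) < p).astype(float)   # binary outcome

# Poisson regression: log E[y] = b0 + b1*x, so exp(b1) estimates the risk ratio
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):                     # Newton / IRLS iterations
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)
    hess = X.T @ (X * mu[:, None])
    beta += np.linalg.solve(hess, grad)

rr = np.exp(beta[1])                    # estimated risk ratio
```

With a single binary covariate the Poisson MLE reproduces the ratio of group means exactly, which makes the fit easy to check; the paper's contribution is the Bayesian reparameterization that reduces Monte Carlo error around this kind of estimate.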

  14. Preschool acuity of the approximate number system correlates with school math ability.

    Science.gov (United States)

    Libertus, Melissa E; Feigenson, Lisa; Halberda, Justin

    2011-11-01

    Previous research shows a correlation between individual differences in people's school math abilities and the accuracy with which they rapidly and nonverbally approximate how many items are in a scene. This finding is surprising because the Approximate Number System (ANS) underlying numerical estimation is shared with infants and with non-human animals who never acquire formal mathematics. However, it remains unclear whether the link between individual differences in math ability and the ANS depends on formal mathematics instruction. Earlier studies demonstrating this link tested participants only after they had received many years of mathematics education, or assessed participants' ANS acuity using tasks that required additional symbolic or arithmetic processing similar to that required in standardized math tests. To ask whether the ANS and math ability are linked early in life, we measured the ANS acuity of 200 3- to 5-year-old children using a task that did not also require symbol use or arithmetic calculation. We also measured children's math ability and vocabulary size prior to the onset of formal math instruction. We found that children's ANS acuity correlated with their math ability, even when age and verbal skills were controlled for. These findings provide evidence for a relationship between the primitive sense of number and math ability starting early in life. 2011 Blackwell Publishing Ltd.

  15. The Hartree-Fock seniority approximation

    International Nuclear Information System (INIS)

    Gomez, J.M.G.; Prieto, C.

    1986-01-01

    A new self-consistent method is used to take into account the mean-field and the pairing correlations in nuclei at the same time. We call it the Hartree-Fock seniority approximation, because the long-range and short-range correlations are treated in the frameworks of Hartree-Fock theory and the seniority scheme. The method is developed in detail for a minimum-seniority variational wave function in the coordinate representation for an effective interaction of the Skyrme type. An advantage of the present approach over the Hartree-Fock-Bogoliubov theory is the exact conservation of angular momentum and particle number. Furthermore, the computational effort required in the Hartree-Fock seniority approximation is similar to that of the pure Hartree-Fock picture. Some numerical calculations for Ca isotopes are presented. (orig.)

  16. The clandestine multiple graves in Malaysia: The first mass identification operation of human skeletal remains.

    Science.gov (United States)

    Mohd Noor, Mohd Suhani; Khoo, Lay See; Zamaliana Alias, Wan Zafirah; Hasmi, Ahmad Hafizam; Ibrahim, Mohamad Azaini; Mahmood, Mohd Shah

    2017-09-01

    The first ever mass identification operation of skeletal remains conducted for clandestine graves in Malaysia involved 165 individuals unearthed from 28 human-trafficking transit camps located in Wang Kelian, along the Thai-Malaysia border. A DVI response was triggered in which expert teams comprising pathologists, anthropologists, odontologists, radiologists and DNA experts were gathered at the identified operation centre. The Department of Forensic Medicine, Hospital Sultanah Bahiyah, Alor Star, Kedah, located approximately 75 km from Wang Kelian, was temporarily converted into a victim identification centre (VIC), as it is the nearest available forensic facility to the mass grave site. The mortuary operation was conducted over a period of 3 months, from June to September 2015, and was divided into two phases: phase 1 involved the postmortem examination of the remains of 116 suspected individuals, and phase 2 the remains of 49 suspected individuals. The fact that the graves were of unknown individuals afforded the mass identification operation a sufficient preparatory phase of 2 weeks, enabling procedures and a daily victim identification workflow to be established and a temporary body storage for the designated mortuary to be set up. The temporary body storage proved to be a significant factor in enabling the successful conclusion of the VIC operation to the final phase of temporary controlled burials. Recognition from two international observers, Mr. Andréas Patiño Umaña of the International Committee of the Red Cross (ICRC) and Prof. Noel Woodford of the Victorian Institute of Forensic Medicine (VIFM), confirmed that the mortuary operation was in compliance with international quality and standards. The overall victim identification and mortuary operation identified a number of significant challenges, in particular the management of commingled human remains as well as the compilation of postmortem data in the absence of

  17. Quasi-fractional approximation to the Bessel functions

    International Nuclear Information System (INIS)

    Guerrero, P.M.L.

    1989-01-01

    In this paper the author presents a simple quasi-fractional approximation for the Bessel functions J_ν(x), (-1 ≤ ν < 0.5). It has been obtained by extending a previously published method which uses power series and asymptotic expansions simultaneously. The exact and approximated functions coincide in at least two digits for positive x and ν between -1 and 0.4.

  18. Sodium-to-Potassium Ratio and Blood Pressure, Hypertension, and Related Factors

    Science.gov (United States)

    Perez, Vanessa; Chang, Ellen T.

    2014-01-01

    The potential cost-effectiveness and feasibility of dietary interventions aimed at reducing hypertension risk are of considerable interest and significance in public health. In particular, the effectiveness of restricted sodium or increased potassium intake on mitigating hypertension risk has been demonstrated in clinical and observational research. The role that modified sodium or potassium intake plays in influencing the renin-angiotensin system, arterial stiffness, and endothelial dysfunction remains of interest in current research. Up to the present date, no known systematic review has examined whether the sodium-to-potassium ratio or either sodium or potassium alone is more strongly associated with blood pressure and related factors, including the renin-angiotensin system, arterial stiffness, the augmentation index, and endothelial dysfunction, in humans. This article presents a systematic review and synthesis of the randomized controlled trials and observational research related to this issue. The main findings show that, among the randomized controlled trials reviewed, the sodium-to-potassium ratio appears to be more strongly associated with blood pressure outcomes than either sodium or potassium alone in hypertensive adult populations. Recent data from the observational studies reviewed provide additional support for the sodium-to-potassium ratio as a superior metric to either sodium or potassium alone in the evaluation of blood pressure outcomes and incident hypertension. It remains unclear whether this is true in normotensive populations and in children and for related outcomes including the renin-angiotensin system, arterial stiffness, the augmentation index, and endothelial dysfunction. Future study in these populations is warranted. PMID:25398734

  19. Solving Ratio-Dependent Predator-Prey System with Constant Effort Harvesting Using Homotopy Perturbation Method

    Directory of Open Access Journals (Sweden)

    Abdoul R. Ghotbi

    2008-01-01

    Full Text Available Due to the wide interest in the use of bioeconomic models to gain insight into the scientific management of renewable resources like fisheries and forestry, the homotopy perturbation method is employed to approximate the solution of the ratio-dependent predator-prey system with constant-effort prey harvesting. The results are compared with those obtained by the Adomian decomposition method, and show that the new approach requires fewer computations than the Adomian decomposition method.

  20. Scattering theory and effective medium approximations to heterogeneous materials

    International Nuclear Information System (INIS)

    Gubernatis, J.E.

    1977-01-01

    The formal analogy existing between problems studied in the microscopic theory of disordered alloys and problems concerned with the effective (macroscopic) behavior of heterogeneous materials is discussed. Attention is focused on (1) analogous approximations (effective medium approximations) developed for the microscopic problems by scattering theory concepts and techniques, but for the macroscopic problems principally by intuitive means, (2) the link, provided by scattering theory, of the intuitively developed approximations to a well-defined perturbative analysis, (3) the possible presence of conditionally convergent integrals in effective medium approximations
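
One classic intuitively developed effective medium approximation mentioned in this context is the symmetric Bruggeman formula. A minimal sketch for a two-phase composite with scalar conductivities in 3D follows; the bisection solver and the parameter values are illustrative:

```python
def bruggeman(sigma1, sigma2, f1, tol=1e-12):
    """Symmetric Bruggeman EMA (3D) for a two-phase composite.

    Solves the self-consistency condition
        f1*(s1-se)/(s1+2*se) + (1-f1)*(s2-se)/(s2+2*se) = 0
    for the effective conductivity se by bisection."""
    def F(se):
        return (f1 * (sigma1 - se) / (sigma1 + 2 * se)
                + (1 - f1) * (sigma2 - se) / (sigma2 + 2 * se))
    lo, hi = min(sigma1, sigma2), max(sigma1, sigma2)  # root is bracketed here
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For sigma1 = 10, sigma2 = 1 and equal volume fractions the condition reduces to a quadratic whose physical root is exactly 4, which gives a convenient check on the solver.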

  1. Using of microvertebrate remains in reconstruction of late quaternary (Holocene paleoclimate, Eastern Iran

    Directory of Open Access Journals (Sweden)

    Mansour Aliabadian

    2015-09-01

    when possible, with the aid of a measuring microscope with an accuracy of 0.001 mm. One of the main goals of the detailed analysis of dental remains is to trace changes in tooth size through time and space (Mashkour and Hashemi 2008). KS remains were recovered by water-sieving through a column of three geological sieves with mesh size decreasing from top to bottom: 1 cm, 0.5 cm and 0.2 cm. Furthermore, all information obtained, depending on the type of skeletal remains, was entered into Excel tables for statistical analysis. Morphometric and morphological studies, combined with identification keys, were used to identify the remains. On this basis, the specimens from both archaeological sites belong to the Gerbillinae and the species Tatera indica. Discussion of Results & Conclusions: The effect of climate change on Tatera indica was first documented in 1973 in the western regions of Iran and the Dehloran plain (10,000-3,800 years ago) (Redding 1978). This region receives 200 to 399 mm of rainfall per year; rivers, streams, marshes and channels indicate wet conditions during most of the year. In this area, in addition to Tatera indica, the species Nesokia indica, Mus musculus, Gerbillus nanus and Meriones crassus were identified. Remains of Tatera indica together with Nesokia and Mus were also found at Shahre Shoukhteh in Sistan, dated to approximately 6,000 years ago (Chaline and Helmer 1974). The presence of Tatera indica at the KS site, and at other sites in central, western, southwestern and eastern Iran during the mid to late Holocene, suggests that climatic and environmental conditions in the southern half of the country have not changed from 9,000 years ago to the present (Alley et al. 1997). Finding dental and cranial remains of Tatera indica at TN of Mashhad and at other archaeological sites such as Kohandejh in northeastern Iran (Nishapur) may indicate that the climate change was probably intense in

  2. Using of microvertebrate remains in reconstruction of late quaternary (Holocene paleoclimate, Eastern Iran

    Directory of Open Access Journals (Sweden)

    Narges Hashemi

    2015-10-01

    possible, with the aid of a measuring microscope with an accuracy of 0.001 mm. One of the main goals of the detailed analysis of dental remains is to trace changes in tooth size through time and space (Mashkour and Hashemi 2008). KS remains were recovered by water-sieving through a column of three geological sieves with mesh size decreasing from top to bottom: 1 cm, 0.5 cm and 0.2 cm. Furthermore, all information obtained, depending on the type of skeletal remains, was entered into Excel tables for statistical analysis. Morphometric and morphological studies, combined with identification keys, were used to identify the remains. On this basis, the specimens from both archaeological sites belong to the Gerbillinae and the species Tatera indica. Discussion of Results & Conclusions: The effect of climate change on Tatera indica was first documented in 1973 in the western regions of Iran and the Dehloran plain (10,000-3,800 years ago) (Redding 1978). This region receives 200 to 399 mm of rainfall per year; rivers, streams, marshes and channels indicate wet conditions during most of the year. In this area, in addition to Tatera indica, the species Nesokia indica, Mus musculus, Gerbillus nanus and Meriones crassus were identified. Remains of Tatera indica together with Nesokia and Mus were also found at Shahre Shoukhteh in Sistan, dated to approximately 6,000 years ago (Chaline and Helmer 1974). The presence of Tatera indica at the KS site, and at other sites in central, western, southwestern and eastern Iran during the mid to late Holocene, suggests that climatic and environmental conditions in the southern half of the country have not changed from 9,000 years ago to the present (Alley et al. 1997).
Finding dental and cranial remains of Tatera indica at TN of Mashhad and at other archaeological sites such as Kohandejh in northeastern Iran (Nishapur) may indicate that climate change was probably intense around 2,000 years ago in

  3. Approximate modal analysis using Fourier decomposition

    International Nuclear Information System (INIS)

    Kozar, Ivica; Jericevic, Zeljko; Pecak, Tatjana

    2010-01-01

    The paper presents a novel numerical approach for the approximate solution of the eigenvalue problem and investigates its suitability for modal analysis of structures, with special attention to plate structures. The approach is based on Fourier transformation of the matrix equation into the frequency domain and subsequent removal of potentially less significant frequencies. The procedure results in a much reduced problem that is used in the eigenvalue calculation. After calculation, the eigenvectors are expanded and transformed back into the time domain. The principles are presented in Jericevic [1]. The Fourier transform can be formulated in a way that some parts of the matrix that should not be approximated are not transformed but are fully preserved. In this paper we present a formulation that preserves central or edge parts of the matrix and compare it with the formulation that performs the transform on the whole matrix. Numerical experiments on transformed structural dynamic matrices describe the quality of the approximations obtained in modal analysis of structures. On the basis of the numerical experiments, one of the three matrix-reduction approaches is recommended.

  4. A Gaussian Approximation Potential for Silicon

    Science.gov (United States)

    Bernstein, Noam; Bartók, Albert; Kermode, James; Csányi, Gábor

    We present an interatomic potential for silicon using the Gaussian Approximation Potential (GAP) approach, which uses the Gaussian process regression method to approximate the reference potential energy surface as a sum of atomic energies. Each atomic energy is approximated as a function of the local environment around the atom, which is described with the smooth overlap of atomic environments (SOAP) descriptor. The potential is fit to a database of energies, forces, and stresses calculated using density functional theory (DFT) on a wide range of configurations from zero and finite temperature simulations. These include crystalline phases, liquid, amorphous, and low coordination structures, and diamond-structure point defects, dislocations, surfaces, and cracks. We compare the results of the potential to DFT calculations, as well as to previously published models including Stillinger-Weber, Tersoff, modified embedded atom method (MEAM), and ReaxFF. We show that it is very accurate as compared to the DFT reference results for a wide range of properties, including low energy bulk phases, liquid structure, as well as point, line, and plane defects in the diamond structure.
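
A toy 1D sketch of the regression idea behind GAP, Gaussian process interpolation of reference energies, with a Lennard-Jones-like curve standing in for the DFT data; there is no SOAP descriptor here, and the kernel, hyperparameters and grid are all invented:

```python
import numpy as np

def rbf(a, b, ell=0.1):
    # squared-exponential kernel between two sets of 1D "environments"
    return np.exp(-(a[:, None] - b[None, :])**2 / (2 * ell**2))

# toy "reference" energies on a Lennard-Jones-like curve (minimum at r = 1)
r_train = np.linspace(0.9, 2.0, 12)
e_train = (1.0 / r_train)**12 - 2.0 * (1.0 / r_train)**6

# GP regression: solve (K + jitter*I) alpha = e, predict with k(r, r_train) @ alpha
K = rbf(r_train, r_train) + 1e-10 * np.eye(len(r_train))
alpha = np.linalg.solve(K, e_train)

def predict(r):
    return rbf(np.atleast_1d(np.asarray(r, dtype=float)), r_train) @ alpha
```

The real GAP fits sums of atomic energies to energies, forces and stresses in many-dimensional descriptor space; the sketch only shows the kernel-regression core, which reproduces the reference data and interpolates between configurations.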

  5. Development of the relativistic impulse approximation

    International Nuclear Information System (INIS)

    Wallace, S.J.

    1985-01-01

    This talk contains three parts. Part I reviews the developments which led to the relativistic impulse approximation for proton-nucleus scattering. In Part II, problems with the impulse approximation in its original form - principally the low energy problem - are discussed and traced to pionic contributions. Use of pseudovector covariants in place of pseudoscalar ones in the NN amplitude provides more satisfactory low energy results, however, the difference between pseudovector and pseudoscalar results is ambiguous in the sense that it is not controlled by NN data. Only with further theoretical input can the ambiguity be removed. Part III of the talk presents a new development of the relativistic impulse approximation which is the result of work done in the past year and a half in collaboration with J.A. Tjon. A complete NN amplitude representation is developed and a complete set of Lorentz invariant amplitudes are calculated based on a one-meson exchange model and appropriate integral equations. A meson theoretical basis for the important pair contributions to proton-nucleus scattering is established by the new developments. 28 references

  6. Local approximation of a metapopulation's equilibrium.

    Science.gov (United States)

    Barbour, A D; McVinish, R; Pollett, P K

    2018-04-18

    We consider the approximation of the equilibrium of a metapopulation model, in which a finite number of patches are randomly distributed over a bounded subset [Formula: see text] of Euclidean space. The approximation is good when a large number of patches contribute to the colonization pressure on any given unoccupied patch, and when the quality of the patches varies little over the length scale determined by the colonization radius. If this is the case, the equilibrium probability of a patch at z being occupied is shown to be close to [Formula: see text], the equilibrium occupation probability in Levins's model, at any point [Formula: see text] not too close to the boundary, if the local colonization pressure and extinction rates appropriate to z are assumed. The approximation is justified by giving explicit upper and lower bounds for the occupation probabilities, expressed in terms of the model parameters. Since the patches are distributed randomly, the occupation probabilities are also random, and we complement our bounds with explicit bounds on the probability that they are satisfied at all patches simultaneously.
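
Levins's model referenced above has the closed-form equilibrium p* = 1 - e/c, which the patch-level occupation probabilities approximate. A minimal sketch, assuming the standard Levins ODE dp/dt = c*p*(1-p) - e*p with invented colonization and extinction rates:

```python
def levins_equilibrium(c, e):
    """Equilibrium occupancy of Levins's model dp/dt = c*p*(1-p) - e*p."""
    return max(0.0, 1.0 - e / c)

def simulate(c, e, p0=0.1, dt=0.01, steps=20000):
    # forward-Euler integration of the Levins ODE
    p = p0
    for _ in range(steps):
        p += dt * (c * p * (1.0 - p) - e * p)
    return p
```

For c = 1.0 and e = 0.4 the trajectory settles at p* = 0.6; the paper's bounds say how close the spatially explicit occupation probabilities get to this value away from the boundary.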

  7. Approximate Bayesian recursive estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2014-01-01

    Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf

  8. Pion-nucleus cross sections approximation

    International Nuclear Information System (INIS)

    Barashenkov, V.S.; Polanski, A.; Sosnin, A.N.

    1990-01-01

    An analytical approximation of pion-nucleus elastic and inelastic interaction cross-sections is suggested, which could be applied in the energy range exceeding several dozens of MeV for nuclei heavier than beryllium. 3 refs.; 4 tabs

  9. The measurement of mass spectrometric peak height ratio of helium isotope in trace samples

    International Nuclear Information System (INIS)

    Sun Mingliang

    1989-01-01

    An experimental study on the measurement of the mass spectrometric peak height ratio of helium isotopes in trace gaseous samples is discussed, using a gas purification line designed by the authors, an imported VG-5400 static-vacuum noble gas mass spectrometer, and air helium as a standard. The results show that the amount of He and Ne in the natural gas sample is 99% after purification. When the amount of He in the mass spectrometer is more than 4 × 10⁻⁷ cm³ STP, its sensitivity remains stable at about 10⁻⁴ A/cm³ STP He, and the precision of the ³He/⁴He ratio over the following 17 days is 1.32%. The 'ABA' pattern and experimental conditions for the measurement of the mass spectrometric peak height ratio of He isotopes are presented.

  10. Juveniles' Motivations for Remaining in Prostitution

    Science.gov (United States)

    Hwang, Shu-Ling; Bedford, Olwen

    2004-01-01

    Qualitative data from in-depth interviews were collected in 1990-1991, 1992, and 2000 with 49 prostituted juveniles remanded to two rehabilitation centers in Taiwan. These data are analyzed to explore Taiwanese prostituted juveniles' feelings about themselves and their work, their motivations for remaining in prostitution, and their difficulties…

  11. Heritable Variation for Sex Ratio under Environmental Sex Determination in the Common Snapping Turtle (Chelydra Serpentina)

    Science.gov (United States)

    Janzen, F. J.

    1992-01-01

    The magnitude of quantitative genetic variation for primary sex ratio was measured in families extracted from a natural population of the common snapping turtle (Chelydra serpentina), which possesses temperature-dependent sex determination (TSD). Eggs were incubated at three temperatures that produced mixed sex ratios. This experimental design provided estimates of the heritability of sex ratio in multiple environments and a test of the hypothesis that genotype × environment (G × E) interactions may be maintaining genetic variation for sex ratio in this population of C. serpentina. Substantial quantitative genetic variation for primary sex ratio was detected in all experimental treatments. These results in conjunction with the occurrence of TSD in this species provide support for three critical assumptions of Fisher's theory for the microevolution of sex ratio. There were statistically significant effects of family and incubation temperature on sex ratio, but no significant interaction was observed. Estimates of the genetic correlations of sex ratio across environments were highly positive and essentially indistinguishable from +1. These latter two findings suggest that G × E interaction is not the mechanism maintaining genetic variation for sex ratio in this system. Finally, although substantial heritable variation exists for primary sex ratio of C. serpentina under constant temperatures, estimates of the effective heritability of primary sex ratio in nature are approximately an order of magnitude smaller. Small effective heritability and a long generation time in C. serpentina imply that evolution of sex ratios would be slow even in response to strong selection by, among other potential agents, any rapid and/or substantial shifts in local temperatures, including those produced by changes in the global climate. PMID:1592234
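
As a schematic of the family-based heritability estimate involved (on a continuous liability scale, not the paper's threshold-trait sex-ratio analysis), the sketch below recovers h² from simulated full-sib families via the intraclass correlation t and h² ≈ 2t, ignoring dominance and shared environment; every parameter value is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_fam, n_sib = 300, 10
Vb, Vw = 0.25, 0.75   # between- and within-family variances => true h2 = 2*Vb/(Vb+Vw) = 0.5

fam_eff = rng.normal(0.0, np.sqrt(Vb), n_fam)
liab = fam_eff[:, None] + rng.normal(0.0, np.sqrt(Vw), (n_fam, n_sib))

# one-way ANOVA variance components
grand = liab.mean()
fam_means = liab.mean(axis=1)
msb = n_sib * np.sum((fam_means - grand)**2) / (n_fam - 1)
msw = np.sum((liab - fam_means[:, None])**2) / (n_fam * (n_sib - 1))
vb_hat = (msb - msw) / n_sib

t = vb_hat / (vb_hat + msw)   # intraclass correlation of full sibs
h2 = 2.0 * t                  # full sibs share half the additive variance
```

For a threshold trait such as sex under TSD, the observed-scale heritability additionally depends on incidence, which is one reason the paper distinguishes constant-temperature estimates from the much smaller effective heritability in nature.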

  12. Approximal morphology as predictor of approximal caries in primary molar teeth

    DEFF Research Database (Denmark)

    Cortes, A; Martignon, S; Qvist, V

    2018-01-01

    consent was given, participated. Upper and lower molar teeth of one randomly selected side received a 2-day temporary separation. Bitewing radiographs and silicone impressions of the interproximal area (IPA) were obtained. One-year procedures were repeated in 52 children (84%). The morphology of the distal...... surfaces of the first molar teeth and the mesial surfaces on the second molar teeth (n=208) was scored from the occlusal aspect on images from the baseline resin models, resulting in four IPA variants: concave-concave; concave-convex; convex-concave, and convex-convex. Approximal caries on the surface...

  13. Finite Element Approximation of the FENE-P Model

    OpenAIRE

    Barrett , John ,; Boyaval , Sébastien

    2017-01-01

    We extend our analysis of the Oldroyd-B model in Barrett and Boyaval [1] to consider the finite element approximation of the FENE-P system of equations, which models a dilute polymeric fluid in a bounded domain $D \subset \mathbb{R}^d$, $d = 2$ or $3$, subject to no-flow boundary conditions. Our schemes are based on approximating the pressure and the symmetric conformation tensor by either (a) piecewise constants or (b) continuous piecewise linears. In case (a) the velocity field is approximated by c...

  14. Nuclear data processing, analysis, transformation and storage with Pade-approximants

    International Nuclear Information System (INIS)

    Badikov, S.A.; Gay, E.V.; Guseynov, M.A.; Rabotnov, N.S.

    1992-01-01

    A method is described for generating rational approximants of high order, with applications to neutron data handling. The problems considered are: the approximation of neutron cross-sections in the resonance region, producing the parameters for Adler-Adler-type formulae; calculation of the resulting rational approximants' errors, given in analytical form, allowing the error to be computed at any energy point inside the interval of approximation; calculation of the correlation coefficient of error values at two arbitrary points, provided that experimental errors are independent and normally distributed; a method for the simultaneous generation of several rational approximants with an identical set of poles; functionals other than LSM; and two-dimensional approximation. (orig.)

  15. Free-space optical communications with peak and average constraints: High SNR capacity approximation

    KAUST Repository

    Chaaban, Anas

    2015-09-07

    The capacity of the intensity-modulation direct-detection (IM-DD) free-space optical channel with both average and peak intensity constraints is studied. A new capacity lower bound is derived by using a truncated-Gaussian input distribution. Numerical evaluation shows that this capacity lower bound is nearly tight at high signal-to-noise ratio (SNR), while it is shown analytically that the gap to capacity upper bounds is a small constant at high SNR. In particular, the gap to the high-SNR asymptotic capacity of the channel under either a peak or an average constraint is small. This leads to a simple approximation of the high SNR capacity. Additionally, a new capacity upper bound is derived using sphere-packing arguments. This bound is tight at high SNR for a channel with a dominant peak constraint.
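The flavor of such high-SNR capacity approximations can be shown with a simpler, standard bound. The sketch below uses the entropy-power-inequality lower bound for a uniform input on [0, A] — not the paper's truncated-Gaussian bound — and checks that its gap to the high-SNR approximation vanishes as A/σ grows.

```python
# EPI-based achievable-rate lower bound for Y = X + Z, Z ~ N(0, sigma^2),
# X uniform on [0, A], and its high-SNR approximation. This is a generic
# textbook illustration, not the truncated-Gaussian bound of the paper.
import math

def epi_lower_bound_bits(A, sigma):
    """I(X; Y) >= 0.5 * log2(1 + A^2 / (2*pi*e*sigma^2)) bits/channel use."""
    return 0.5 * math.log2(1.0 + A * A / (2.0 * math.pi * math.e * sigma * sigma))

def high_snr_approx_bits(A, sigma):
    """High-SNR approximation: log2(A/sigma) - 0.5*log2(2*pi*e)."""
    return math.log2(A / sigma) - 0.5 * math.log2(2.0 * math.pi * math.e)

# Gap between the bound and its high-SNR approximation at A/sigma = 10, 100, 1000
gaps = [epi_lower_bound_bits(10.0 ** k, 1.0) - high_snr_approx_bits(10.0 ** k, 1.0)
        for k in (1, 2, 3)]
```

The gap here is exactly 0.5·log2(1 + 2πeσ²/A²), so it is positive and shrinks monotonically with SNR, mirroring the "small constant gap at high SNR" behavior the abstract describes.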

  16. Lattice quantum chromodynamics with approximately chiral fermions

    International Nuclear Information System (INIS)

    Hierl, Dieter

    2008-05-01

    In this work we present Lattice QCD results obtained with approximately chiral fermions. We use CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ+ pentaquark on the lattice. Furthermore, we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory, applying the epsilon expansion. (orig.)

  17. Lattice quantum chromodynamics with approximately chiral fermions

    Energy Technology Data Exchange (ETDEWEB)

    Hierl, Dieter

    2008-05-15

    In this work we present Lattice QCD results obtained with approximately chiral fermions. We use CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ+ pentaquark on the lattice. Furthermore, we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory, applying the epsilon expansion. (orig.)

  18. Rollout sampling approximate policy iteration

    NARCIS (Netherlands)

    Dimitrakakis, C.; Lagoudakis, M.G.

    2008-01-01

    Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a
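The core idea behind rollout-based policy iteration — estimating action values by Monte-Carlo rollouts instead of maintaining a value function, then acting greedily — can be sketched on a toy chain MDP. This is a sketch of the general rollout idea only, not the classifier-based scheme of the paper; the MDP and all names are illustrative.

```python
# Rollout-based policy improvement without an explicit value function:
# Q(s, a) is estimated by rollouts that take action a and then follow a
# uniformly random base policy; the improved policy is greedy w.r.t.
# these estimates. Toy chain MDP; illustrative only.
import random

random.seed(0)
GOAL, GAMMA, HORIZON = 4, 0.9, 50

def step(s, a):
    """Chain MDP on states 0..4: 'right' moves toward the goal, 'left' away.
    Reward 1 on entering the terminal goal state."""
    s2 = min(s + 1, GOAL) if a == "right" else max(s - 1, 0)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def rollout(s, a):
    """One discounted rollout: take a, then follow the random base policy."""
    total, discount = 0.0, 1.0
    for _ in range(HORIZON):
        s, r, done = step(s, a)
        total += discount * r
        if done:
            break
        discount *= GAMMA
        a = random.choice(["left", "right"])
    return total

def q_estimate(s, a, n=300):
    """Monte-Carlo estimate of Q(s, a) from n rollouts."""
    return sum(rollout(s, a) for _ in range(n)) / n

# Greedy (improved) policy from the rollout estimates
improved = {s: max(["left", "right"], key=lambda a: q_estimate(s, a))
            for s in range(GOAL)}
```

With enough rollouts the greedy policy reliably picks "right" everywhere near the goal, even though the base policy used inside the rollouts is purely random — the essence of policy improvement via rollouts.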

  19. Modeling C-band single scattering properties of hydrometeors using discrete-dipole approximation and T-matrix method

    International Nuclear Information System (INIS)

    Tyynelae, Jani; Nousiainen, Timo; Goeke, Sabine; Muinonen, Karri

    2009-01-01

    We study the applicability of the discrete-dipole approximation (DDA) by modeling centimeter-wavelength (C-band) radar echoes from hydrometeors and comparing the results to exact theories. We use ice and water particles of various shapes with varying water content to investigate how the backscattering, extinction, and absorption cross sections change as a function of particle radius. We also compute radar parameters such as the differential reflectivity, the linear depolarization ratio, and the copolarized correlation coefficient. We find that using the DDA to model pure ice and pure water particles at C-band is considerably more accurate than using it for particles containing both ice and water. For coated particles, a large grid size is recommended so that the coating is modeled adequately. We also find that the absorption cross section is significantly less accurate than the scattering and backscattering cross sections. The accuracy of the DDA can be increased by increasing the number of dipoles, but also by using the filtered-coupled-dipole option for the polarizability, which halved the relative errors in the cross sections.
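For particles much smaller than the radar wavelength, DDA results are commonly sanity-checked against the closed-form Rayleigh approximation. The sketch below computes the Rayleigh backscattering cross section of a small water sphere at C-band; the refractive index used is an assumed illustrative value (the true value depends on temperature and exact frequency).

```python
# Rayleigh backscattering cross section of a homogeneous sphere:
#   sigma_b = pi^5 |K|^2 D^6 / lambda^4,  K = (m^2 - 1) / (m^2 + 2),
# valid for diameter D << wavelength. Common sanity check for DDA codes.
import math

def rayleigh_backscatter(D, wavelength, m):
    """Backscattering cross section (m^2) of a small sphere of diameter D (m),
    at the given wavelength (m), with complex refractive index m."""
    K = (m * m - 1.0) / (m * m + 2.0)
    return math.pi ** 5 * abs(K) ** 2 * D ** 6 / wavelength ** 4

m_water = 8.2 + 1.7j   # assumed C-band refractive index of liquid water
# 1 mm drop at a 5.5 cm (C-band) wavelength
sigma = rayleigh_backscatter(1e-3, 0.055, m_water)
```

The strong D^6/λ^4 dependence visible in the formula is why backscattering cross sections in such studies vary over many orders of magnitude across the particle-radius range.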

  20. Using function approximation to determine neural network accuracy

    International Nuclear Information System (INIS)

    Wichman, R.F.; Alexander, J.

    2013-01-01

    Many, if not most, control processes exhibit nonlinear behavior in some portion of their operating range, and the ability of neural networks to model nonlinear dynamics makes them very appealing for control. Control of high-reliability safety systems, and autonomous control in process or robotic applications, however, require accurate and consistent control, and neural networks are only approximators of functions, so their degree of approximation becomes important. In this paper, the factors affecting the ability of a feed-forward back-propagation neural network to accurately approximate a nonlinear function are explored. Compared to pattern recognition, using a neural network for function approximation provides an easy and accurate method for determining the network's accuracy. In contrast to other techniques, we show that errors arising in function approximation or curve fitting are caused by the neural network itself rather than by scatter in the data. A method is proposed that improves the accuracy achieved during training and the resulting ability of the network to generalize after training. Binary input vectors provided a more accurate model than scalar inputs, and retraining with a small number of the outlier x,y pairs improved generalization. (author)
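The function-approximation setup the abstract studies — a feed-forward network trained by back-propagation to fit a nonlinear function — can be sketched from scratch in a few dozen lines. This is a minimal illustrative network (one tanh hidden layer, full-batch gradient descent, target y = x²), not the authors' network, data, or training procedure; all sizes and learning rates are assumed.

```python
# Minimal feed-forward network (1 input -> H tanh units -> linear output)
# trained by full-batch back-propagation to approximate y = x^2 on [-1, 1].
# Illustrative sketch only; hyperparameters are arbitrary assumptions.
import math
import random

random.seed(1)
H, LR, EPOCHS = 8, 0.05, 4000
xs = [i / 10.0 for i in range(-10, 11)]          # 21 training points
ys = [x * x for x in xs]                         # nonlinear target

w = [random.uniform(-0.5, 0.5) for _ in range(H)]   # input -> hidden weights
b = [random.uniform(-0.5, 0.5) for _ in range(H)]   # hidden biases
v = [random.uniform(-0.5, 0.5) for _ in range(H)]   # hidden -> output weights
c = 0.0                                             # output bias

def predict(x):
    return sum(v[k] * math.tanh(w[k] * x + b[k]) for k in range(H)) + c

def mse():
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

mse_before = mse()
for _ in range(EPOCHS):
    gw, gb, gv, gc = [0.0] * H, [0.0] * H, [0.0] * H, 0.0
    for x, y in zip(xs, ys):
        h = [math.tanh(w[k] * x + b[k]) for k in range(H)]
        e = sum(v[k] * h[k] for k in range(H)) + c - y      # prediction error
        gc += e
        for k in range(H):
            gv[k] += e * h[k]
            gd = e * v[k] * (1.0 - h[k] * h[k])             # back-prop through tanh
            gw[k] += gd * x
            gb[k] += gd
    n = len(xs)
    c -= LR * gc / n
    for k in range(H):
        v[k] -= LR * gv[k] / n
        w[k] -= LR * gw[k] / n
        b[k] -= LR * gb[k] / n
mse_after = mse()
```

Comparing `mse_before` and `mse_after` is exactly the kind of direct accuracy measurement the abstract argues function approximation makes easy: unlike pattern-recognition tasks, the fitting error is a single scalar that can be tracked through training.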