An improved approximation ratio for the minimum latency problem
Goemans, M.; Kleinberg, J. [MIT, Cambridge, MA (United States)]
1996-12-31
Given a tour visiting n points in a metric space, the latency of one of these points p is the distance traveled in the tour before reaching p. The minimum latency problem asks for a tour passing through n given points for which the total latency of the n points is minimum; in effect, we are seeking the tour with minimum average "arrival time." This problem has been studied in the operations research literature, where it has also been termed the "delivery-man problem" and the "traveling repairman problem." The approximability of the minimum latency problem was first considered by Sahni and Gonzalez in 1976; however, unlike the classical traveling salesman problem, it is not easy to give any constant-factor approximation algorithm for the minimum latency problem. Recently, Blum, Chalasani, Coppersmith, Pulleyblank, Raghavan, and Sudan gave the first such algorithm, obtaining an approximation ratio of 144. In this work, we present an algorithm which improves this ratio to 21.55. The development of our algorithm involves a number of techniques that seem to be of interest from the perspective of the traveling salesman problem and its variants more generally.
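The latency objective can be made concrete with a toy brute-force sketch (illustrative only: the paper's contribution is a 21.55-approximation for general metric instances, not exhaustive search, and the points and start location below are invented):

```python
from itertools import permutations
from math import dist

def total_latency(start, points, order):
    """Total latency of visiting `points` in `order` from `start`:
    each point's latency is the distance traveled before reaching it."""
    t, pos, total = 0.0, start, 0.0
    for i in order:
        t += dist(pos, points[i])   # arrival time at points[i]
        pos = points[i]
        total += t
    return total

# Brute-force the minimum-latency order for a tiny instance.
start = (0.0, 0.0)
pts = [(1, 0), (2, 0), (10, 0)]
best = min(permutations(range(len(pts))),
           key=lambda o: total_latency(start, pts, o))
```

Note that minimizing total latency (sum of arrival times) generally yields a different tour than minimizing total tour length, which is what makes the problem harder than it first appears.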
Bota, C.; Cǎruntu, B.; Bundǎu, O.
2013-10-01
In this paper we applied the Squared Remainder Minimization Method (SRMM) to find analytic approximate polynomial solutions for Riccati differential equations. Two examples are included to demonstrate the validity and applicability of the method. The results are compared to those obtained by other methods.
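As an illustration of the squared-remainder idea (not the authors' implementation), one can substitute a polynomial ansatz into a Riccati equation such as y' = 1 + y^2, y(0) = 0 (exact solution tan x), and minimize the summed squared remainder over the coefficients. The crude grid search below stands in for a proper nonlinear least-squares solver; the equation, the ansatz y = a x + b x^3, and the grid are all chosen for illustration:

```python
from math import tan

def remainder(a, b, x):
    # R(x) = y'(x) - 1 - y(x)**2 for the ansatz y(x) = a*x + b*x**3
    y = a * x + b * x**3
    dy = a + 3 * b * x**2
    return dy - 1 - y**2

xs = [i * 0.01 for i in range(51)]            # collocation grid on [0, 0.5]

def cost(a, b):
    # Squared remainder accumulated over the grid.
    return sum(remainder(a, b, x) ** 2 for x in xs)

# Coarse grid search over the two coefficients; expect a ~ 1, b ~ 1/3,
# matching the Taylor expansion tan(x) = x + x^3/3 + ...
a, b = min(((a0 / 100, b0 / 100) for a0 in range(80, 121)
            for b0 in range(20, 51)), key=lambda ab: cost(*ab))
y_half = a * 0.5 + b * 0.5 ** 3
err = abs(y_half - tan(0.5))
```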
Stable strontium isotopic ratios from archaeological organic remains from the Thorsberg peat bog
Nosch, Marie-Louise Bech; von Carnap-Bornheim, Claus; Grupe, Gisela;
2007-01-01
Pilot study analysing stable strontium isotopic ratios from Iron Age textile and leather finds from the Thorsberg peat bog.
Matthew Lorig; Ronnie Sircar
2015-01-01
We study the finite horizon Merton portfolio optimization problem in a general local-stochastic volatility setting. Using model coefficient expansion techniques, we derive approximations for both the value function and the optimal investment strategy. We also analyze the `implied Sharpe ratio' and derive a series approximation for this quantity. The zeroth-order approximations of the value function and optimal investment strategy correspond to those obtained by Merton (1969) when the risky...
Approximation Formula for Easy Calculation of Signal-to-Noise Ratio of Sigma-Delta Modulators
2011-01-01
The signal-to-noise ratio (SNR) is one of the most significant measures of performance of sigma-delta modulators. An approximate formula for calculation of the signal-to-noise ratio of an arbitrary sigma-delta modulator (SDM) has been proposed. Our approach to signal-to-noise ratio computation does not require modulator modeling and simulation. The proposed formula is compared with SNR calculations based on the output bitstream obtained by simulations, and the reasons for small discrepancies are...
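The abstract does not reproduce the proposed formula; for orientation, the standard textbook expression for the peak SNR of an *ideal* L-th-order modulator (which any practical approximation is usually compared against) can be sketched as:

```python
from math import log10, pi

def ideal_sdm_snr_db(order, osr, bits=1):
    """Peak SNR (dB) of an ideal order-`order` sigma-delta modulator with a
    `bits`-bit quantizer at oversampling ratio `osr`. This is the classic
    ideal-modulator formula, not the paper's proposed approximation."""
    return (6.02 * bits + 1.76
            - 10 * log10(pi ** (2 * order) / (2 * order + 1))
            + 10 * (2 * order + 1) * log10(osr))

# e.g. a second-order, 1-bit modulator at OSR = 128: roughly 100 dB
snr = ideal_sdm_snr_db(2, 128)
```

Doubling the OSR for an order-L modulator buys 10(2L+1)·log10(2) ≈ 3(2L+1) dB, which the formula makes explicit.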
Araki, Fujio; Kumagai, Kozo; Iseri, Takumi; Kawano, Tsutomu (National Hospital of Kumamoto (Japan))
1991-04-01
Dose calculation for irregularly shaped fields can be made by the Clarkson technique, which however requires considerable time and is thus not practical. We investigated a simple approximation method for determining field factors (F_A) and tissue-peak ratios (TPRs) for irregularly shaped fields. By this method, we approximated the scatter dose by the ratio of the area of an irregularly shaped field to that of the overall field (without blocking). The maximum error of equivalent square fields as determined by this method for irregularly shaped fields was -1.3% for field factors, +2.1% for TPRs and +1.4% for the F_A x TPRs. (author)
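The area-ratio idea can be sketched in a few lines. All numbers below are illustrative placeholders, not clinical data, and the decomposition into a primary and a scatter component is an assumption made for the sketch, not the paper's exact formulation:

```python
def approx_field_factor(primary, scatter_open, area_irregular, area_open):
    """Crude scatter scaling in the spirit of the abstract: scatter for the
    blocked (irregular) field is taken as the open-field scatter times the
    blocked-to-open area ratio. Hypothetical quantities throughout."""
    return primary + scatter_open * (area_irregular / area_open)

# Hypothetical example: half of a 100 cm^2 open field is blocked.
f = approx_field_factor(primary=0.95, scatter_open=0.08,
                        area_irregular=50.0, area_open=100.0)
# 0.95 + 0.08 * 0.5 = 0.99
```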
A New Approach to Online Scheduling: Approximating the Optimal Competitive Ratio
Günther, Elisabeth; Megow, Nicole; Wiese, Andreas
2012-01-01
We propose a new approach to competitive analysis in online scheduling by introducing the novel concept of online approximation schemes. Such a scheme algorithmically constructs an online algorithm with a competitive ratio arbitrarily close to the best possible competitive ratio for any online algorithm. We study the problem of scheduling jobs online to minimize the weighted sum of completion times on parallel, related, and unrelated machines, and we derive both deterministic and randomized algorithms which are almost best possible among all online algorithms of the respective settings. We also generalize our techniques to arbitrary monomial cost functions and apply them to the makespan objective. Our method relies on an abstract characterization of online algorithms combined with various simplifications and transformations. We also contribute algorithmic means to compute the actual value of the best possible competitive ratio up to an arbitrary accuracy. This strongly contrasts all previous manually obta...
Experiments using machine learning to approximate likelihood ratios for mixture models
Cranmer, K.; Pavez, J.; Louppe, G.; Brooks, W. K.
2016-10-01
Likelihood ratio tests are a key tool in many fields of science. In order to evaluate the likelihood ratio, the likelihood function is needed. However, it is common in fields such as High Energy Physics to have complex simulations that describe the distribution while not having a description of the likelihood that can be directly evaluated. In this setting it is impossible or computationally expensive to evaluate the likelihood. It is, however, possible to construct an equivalent version of the likelihood ratio that can be evaluated by using discriminative classifiers. We show how this can be used to approximate the likelihood ratio when the underlying distribution is a weighted sum of probability distributions (e.g. a signal-plus-background model). We demonstrate how the results can be considerably improved by decomposing the ratio and using a set of classifiers in a pairwise manner on the components of the mixture model, and how this can be used to estimate the unknown coefficients of the model, such as the signal contribution.
Bayesian estimation of a source term of radiation release with approximately known nuclide ratios
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek
2016-04-01
We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from the known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered to be unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release where 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with the method with unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach. This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014, Source-Term Determination of Radionuclide Releases.
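The underlying linear inverse model y = Mx can be sketched with a plain Gaussian-prior MAP estimate, i.e. solving (MᵀM + D⁻¹)x = Mᵀy. This is only the regularization skeleton: the paper's actual method uses a truncated Gaussian for positivity and Variational Bayes to learn the prior variances, none of which is reproduced here. The SRS matrix, observations, and prior variances below are invented toy values:

```python
def solve2(A, b):
    # Solve a 2x2 linear system by Cramer's rule.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def map_estimate(M, y, prior_var):
    """MAP estimate for y = Mx with an independent zero-mean Gaussian prior
    of variance prior_var[i] on each source component:
    solve (M^T M + diag(1/prior_var)) x = M^T y."""
    n = 2
    MtM = [[sum(M[k][i] * M[k][j] for k in range(len(M))) for j in range(n)]
           for i in range(n)]
    for i in range(n):
        MtM[i][i] += 1.0 / prior_var[i]       # prior acts as regularization
    Mty = [sum(M[k][i] * y[k] for k in range(len(M))) for i in range(n)]
    return solve2(MtM, Mty)

# Toy SRS matrix (3 observations, 2 sources); consistent with x = [2, 1].
M = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [2.0, 1.0, 3.0]
x_hat = map_estimate(M, y, prior_var=[100.0, 100.0])
```

With a weak prior (large variances) the estimate stays close to the least-squares solution; tightening the prior variances pulls it toward zero, which is how uncertain ratio knowledge would regularize the ill-conditioned problem.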
Tight Approximation Ratio of a General Greedy Splitting Algorithm for the Minimum k-Way Cut Problem
Xiao, Mingyu; Yao, Andrew C
2008-01-01
For an edge-weighted connected undirected graph, the minimum $k$-way cut problem is to find a subset of edges of minimum total weight whose removal separates the graph into $k$ connected components. The problem is NP-hard when $k$ is part of the input and W[1]-hard when $k$ is taken as a parameter. A simple algorithm for approximating a minimum $k$-way cut is to iteratively increase the number of components of the graph by $h-1$, where $2 \le h \le k$, until the graph has $k$ components. The approximation ratio of this algorithm is known for $h \le 3$ but is open for $h \ge 4$. In this paper, we consider a general algorithm that iteratively increases the number of components of the graph by $h_i-1$, where $h_1 \le h_2 \le ... \le h_q$ and $\sum_{i=1}^q (h_i-1) = k-1$. We prove that the approximation ratio of this general algorithm is $2 - (\sum_{i=1}^q {h_i \choose 2})/{k \choose 2}$, which is tight. Our result implies that the approximation ratio of the simple algorithm is $2-h/k + O(h^2/k^2)$ in general and...
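The stated ratio is a closed-form function of the splitting increments, so it is easy to evaluate exactly. The sketch below computes 2 - (Σ C(h_i,2))/C(k,2) and checks the classical special case h_i = 2 at every step, where the formula collapses to 2 - 2/k:

```python
from fractions import Fraction
from math import comb

def greedy_ratio(hs, k):
    """Tight approximation ratio 2 - (sum C(h_i,2)) / C(k,2) of the general
    greedy splitting algorithm, for increments h_1 <= ... <= h_q
    with sum (h_i - 1) = k - 1."""
    assert sum(h - 1 for h in hs) == k - 1
    return 2 - Fraction(sum(comb(h, 2) for h in hs), comb(k, 2))

k = 10
r = greedy_ratio([2] * (k - 1), k)   # simple algorithm: split by one each time
```

A single split into k parts (h_1 = k) gives ratio 1, as expected: the formula interpolates between the exact one-shot case and the 2 - 2/k bound of the one-at-a-time greedy.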
Escher, Jutta E
2010-01-01
Motivated by the renewed interest in the surrogate nuclear reactions approach, an indirect method for determining compound-nuclear reaction cross sections, the prospects for determining (n, gamma) cross sections for deformed rare-earth and actinide nuclei are investigated. A nuclear-reaction model is employed to simulate physical quantities that are typically measured in surrogate experiments and used to assess the validity of the Weisskopf-Ewing and ratio approximations, which are typically employed in the analysis of surrogate reactions. The expected accuracy of (n,gamma) cross sections extracted from typical surrogate measurements is discussed and limitations of the approximate methods are illustrated. Suggestions for moving beyond presently-employed approximations are made.
Bakholdina, Varvara Yu; Movsesian, Alla A; Sineva, Irina M
2016-07-01
Sexual dimorphism in the relative length of the second-to-fourth digits (the digit ratio, or 2D:4D) in humans has been reported in many studies. The aim of our study was to ascertain the possibility of using the 2D:4D ratio as an additional marker for sex determination in the study of human skeletal remains. We studied 2D:4D ratios obtained from measurements of finger phalanges and metacarpal bones in Russian (45 adult males and 26 adult females) and German (58 adult males and 29 adult females) skeletal series. The difference in 2D:4D ratio between the male and female subsamples in both skeletal series was not statistically significant. Analysis of variance revealed that the 2D:4D ratios in our sample varied more by ethnicity than by the sexual identity of the skeletal material. Our results suggest that the 2D:4D ratio cannot be used as an appropriate trait for the sex determination of human skeletal remains. Am. J. Hum. Biol. 28:591-593, 2016. © 2015 Wiley Periodicals, Inc.
Roager, Henrik Munch; Licht, Tine Rask; Poulsen, Sanne;
2014-01-01
...with central obesity and components of metabolic syndrome could be grouped into two discrete groups simply by their relative abundance of Prevotella spp. divided by Bacteroides spp. (P/B ratio) obtained by quantitative PCR analysis. Furthermore, we showed that these groups remained stable during a 6-month, controlled dietary intervention, where the effect of consuming a diet in accord with the new Nordic diet (NND) recommendations as opposed to consuming the average Danish diet (ADD) on the gut microbiota was investigated. In this study, subjects (with and without stratification according to P/B ratio) did...
Licht, Tine R.; Poulsen, Sanne K.; Larsen, Thomas M.; Bahl, Martin I.
2014-01-01
It has been suggested that the human gut microbiota can be divided into enterotypes based on the abundance of specific bacterial groups; however, the biological significance and stability of these enterotypes remain unresolved. Here, we demonstrated that subjects (n = 62) 18 to 65 years old with central obesity and components of metabolic syndrome could be grouped into two discrete groups simply by their relative abundance of Prevotella spp. divided by Bacteroides spp. (P/B ratio) obtained by quantitative PCR analysis. Furthermore, we showed that these groups remained stable during a 6-month, controlled dietary intervention, where the effect of consuming a diet in accord with the new Nordic diet (NND) recommendations as opposed to consuming the average Danish diet (ADD) on the gut microbiota was investigated. In this study, subjects (with and without stratification according to P/B ratio) did not reveal significant changes in 35 selected bacterial taxa quantified by quantitative PCR (ADD compared to NND) resulting from the dietary interventions. However, we found higher total plasma cholesterol within the high-P/B group than in the low-P/B group after the intervention. We propose that stratification of humans based simply on their P/B ratio could allow better assessment of possible effects of interventions on the gut microbiota and physiological biomarkers. PMID:24296500
Sago, Norichika; Nakano, Hiroyuki
2016-01-01
We revisit the accuracy of the post-Newtonian (PN) approximation and its region of validity for quasi-circular orbits of a point particle in the Kerr spacetime, using the analytically known highest-PN-order gravitational energy flux and accurate numerical results in the black hole perturbation approach. We find that the regions of validity become larger for higher PN order results, although there are several local maxima in the regions of validity for relatively low PN order results. This might imply that higher PN order calculations are also encouraged for comparable-mass binaries.
Carrasco-Hernandez, Roberto; Smedley, Andrew R. D.; Webb, Ann R.
2016-05-01
Two radiative transfer models are presented that simplify calculations of street canyon spectral irradiances with minimum data input requirements, allowing better assessment of urban exposures than can be provided by standard unobstructed radiation measurements alone. Fast calculations improve the computational performance of radiation models, when numerous repetitions are required in time and location. The core of the models is the calculation of the spectral diffuse-to-global ratios (DGR) from an unobstructed global spectral measurement. The models are based on, and have been tested against, outcomes of the SMARTS2 algorithm (i.e. Simple Model of the Atmospheric Radiative Transfer of Sunshine). The modelled DGRs can then be used to partition global spectral irradiance values into their direct and diffuse components for different solar zenith angles. Finally, the effects of canyon obstructions can be evaluated independently on the direct and diffuse components, which are then recombined to give the total canyon irradiance. The first model allows ozone and aerosol inputs, while the second provides a further simplification, restricted to average ozone and aerosol contents but specifically designed for faster calculations. To assess the effect of obstructions and validate the calculations, a set of experiments with simulated obstructions (simulated canyons) were performed. The greatest source of uncertainty in the simplified calculations is in the treatment of diffuse radiation. The measurement-model agreement is therefore dependent on the region of the sky obscured and ranges from <5 % at all wavelengths to 20-40 % (wavelength dependent) when diffuse sky only is visible from the canyon.
Drewes, Julia L.; Meulendyke, Kelly A.; Liao, Zhaohao; Witwer, Kenneth W.; Gama, Lucio; Ubaida-Mohien, Ceereena; Li, Ming; Notarangelo, Francesca M.; Tarwater, Patrick M.; Schwarcz, Robert; Graham, David R.; Zink, M. Christine
2015-01-01
Activation of the kynurenine pathway (KP) of tryptophan catabolism likely contributes to HIV-associated neurological disorders. However, KP activation in brain tissue during HIV infection has been understudied, and the effect of combination anti-retroviral therapy (cART) on KP induction in the brain is unknown. To examine these questions, tryptophan, kynurenine, 3-hydroxykynurenine, quinolinic acid, and serotonin levels were measured longitudinally during SIV infection in striatum and CSF from untreated and cART-treated pigtailed macaques. mRNA levels of KP enzymes also were measured in striatum. In untreated macaques, elevations in KP metabolites coincided with transcriptional induction of upstream enzymes in the KP. Striatal KP induction was also temporally associated - but did not directly correlate - with serotonin losses in the brain. CSF quinolinic acid/tryptophan ratios were found to be the earliest predictor of neurological disease in untreated SIV-infected macaques, outperforming other KP metabolites as well as the putative biomarkers Interleukin-6 (IL-6) and Monocyte chemoattractant protein-1 (MCP-1). Finally, cART did not restore KP metabolites to control levels in striatum despite control of virus, though CSF metabolite levels were normalized in most animals. Overall these results demonstrate that cerebral KP activation is only partially resolved with cART, and that CSF QUIN/TRP ratios are an early, predictive biomarker of CNS disease. PMID:25776527
Rusydi, Febdian; Shukri, Ganes; Saputro, Adithya G.; Agusta, Mohammad K.; Dipojono, Hermawan K.; Suprijadi, Suprijadi
2017-04-01
We study the Q/B-band dipole strength of zinc tetrabenzoporphyrin (ZnTBP) using density functional theory (DFT) in various solvents. The solvents are modeled using the polarized continuum model (PCM). The dipole strength calculations are approached by a two-level system, where the Q-band is described by the HOMO → LUMO electronic transition and the B-band by the HOMO-1 → LUMO electronic transition. We compare the results with the experimental data of the Q/B-band intensity ratio. We also perform time-dependent DFT coupled with PCM to calculate the Q/B-band oscillator strength ratio of ZnTBP. The results of both methods show a general trend with respect to the experimental Q/B-band intensity ratio in solvents, except for the calculation in the water solvent. Even so, the approximation is a good starting point for studying the UV-vis spectrum based on DFT study alone.
Nakata, Manabu; Okada, Takashi; Komai, Yoshinori; Nohara, Hiroki [Kyoto Univ. (Japan). Hospital]
1996-08-01
Modern linear accelerators have four independent jaws and multileaf collimators (MLC) of 1 cm width at the isocenter. Asymmetric fields defined by such independent jaws and irregular multileaf collimated fields can be used to match adjacent fields or to spare the spinal cord in external photon beam radiotherapy. We have developed a new approximate algorithm for depth dose calculations at the collimator rotation axis. The program is based on Clarkson's principle, and uses a more accurate modification of Day's method for asymmetric fields. Using this method, tissue-maximum ratios (TMR) and field factors of ten kinds of asymmetric fields and ten different irregular multileaf collimated fields were calculated and compared with the measured data for 6 MV and 15 MV photon beams. The dose accuracy with the general A/Pe method was about 3%; however, with the new modified Day's method, accuracy was within 1.7% for TMR and 1.2% for field factors. The calculated TMR and field factors were found to be in good agreement with measurements for both the 6 MV and 15 MV photon beams. (author)
Zhang, Zhongyang; Berti, Emanuele
2011-01-01
We study the effect of black hole spin on the accuracy of the post-Newtonian approximation. We focus on the gravitational energy flux for the quasicircular, equatorial, extreme mass-ratio inspiral of a compact object into a Kerr black hole of mass M and spin J. For a given dimensionless spin a=J/M^2 (in geometrical units), the energy flux depends only on the orbital velocity v or (equivalently) on the Boyer-Lindquist orbital radius r. We investigate the formal region of validity of the Taylor post-Newtonian expansion of the energy flux (which is known up to order v^8 beyond the quadrupole formula), generalizing previous work by two of us. The "error function" used to determine the region of validity of the post-Newtonian expansion can have two qualitatively different kinds of behavior, and we deal with these two cases separately. We find that, at any fixed post-Newtonian order, the edge of the region of validity (as measured by v/v_{ISCO}, where v_{ISCO} is the orbital velocity at the innermost stable circula...
On badly approximable complex numbers
Esdahl-Schou, Rune; Kristensen, S.
We show that the set of complex numbers which are badly approximable by ratios of elements of , where has maximal Hausdorff dimension. In addition, the intersection of these sets is shown to have maximal dimension. The results remain true when the sets in question are intersected with a suitably...
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
Approximate Representations and Approximate Homomorphisms
Moore, Cristopher
2010-01-01
Approximate algebraic structures play a defining role in arithmetic combinatorics and have found remarkable applications to basic questions in number theory and pseudorandomness. Here we study approximate representations of finite groups: functions f:G -> U_d such that Pr[f(xy) = f(x) f(y)] is large, or more generally Exp_{x,y} ||f(xy) - f(x)f(y)||^2 is small, where x and y are uniformly random elements of the group G and U_d denotes the unitary group of degree d. We bound these quantities in terms of the ratio d / d_min where d_min is the dimension of the smallest nontrivial representation of G. As an application, we bound the extent to which a function f : G -> H can be an approximate homomorphism where H is another finite group. We show that if H's representations are significantly smaller than G's, no such f can be much more homomorphic than a random function. We interpret these results as showing that if G is quasirandom, that is, if d_min is large, then G cannot be embedded in a small number of dimensi...
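The agreement probability Pr[f(xy) = f(x)f(y)] being bounded is easy to compute exactly for small groups. The sketch below uses the abelian group Z_7 with the group operation written additively (an arbitrary small example; Z_7 is not quasirandom, so the paper's bounds say nothing strong here) and merely illustrates the quantity, contrasting a genuine homomorphism with a random function:

```python
from itertools import product
from random import Random

def agreement(f, n):
    """Fraction of pairs (x, y) in Z_n x Z_n with f(x+y) = f(x) + f(y) (mod n)."""
    hits = sum(f[(x + y) % n] == (f[x] + f[y]) % n
               for x, y in product(range(n), repeat=2))
    return hits / n ** 2

n = 7
hom = [(3 * x) % n for x in range(n)]           # a genuine homomorphism of Z_7
rng = Random(0)
rand_f = [rng.randrange(n) for _ in range(n)]   # a random function on Z_7
```

A true homomorphism agrees on every pair; a random function agrees on roughly a 1/n fraction, which is the baseline the paper's "no much more homomorphic than random" statement refers to.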
Andrew K G Jones
1997-08-01
The four papers in this issue represent a trawl of the reports presented to the Fourth meeting of the International Council for Archaeozoology (ICAZ) Fish Remains Working Group, which met at the University of York in 1987. The conference discussed material from many parts of the world - from Australasia to the north-west coast of America - and many eras, ranging in date from the early Pleistocene to the 1980s. It demonstrated both the variety of work being carried out and the growing interest in ancient fish remains. Internet Archaeology plans to publish other batches of papers from this conference. These reports will demonstrate the effort being made to distinguish between assemblages of fish remains which have been deposited by people and those which occur in ancient deposits as a result of the action of other agents. To investigate this area, experiments with modern material and observations of naturally occurring fish bone assemblages are supplemented with detailed analysis of ancient and modern fish remains. The papers published here illustrate the breadth of research into osteology, biogeography, documentary research, and the practicalities of recovering fish remains. Read, digest and enjoy them! Using the Internet for publishing research papers is not only ecologically sound (saving paper, etc.); it disseminates scholarship to anyone anywhere on the planet with access to what is gradually becoming necessary technology in the late 20th century. Hopefully, future groups of papers will include video and audio material recorded at the conference, and so enable those who could not attend to gain further insights into the meeting and the scholarship underpinning this area of research.
[Paleopathology of human remains].
Minozzi, Simona; Fornaciari, Gino
2015-01-01
Many diseases induce alterations in the human skeleton, leaving traces of their presence in ancient remains. Paleopathological examination of human remains not only allows the study of the history and evolution of the disease, but also the reconstruction of health conditions in past populations. This paper describes the most interesting diseases observed in skeletal samples from the Roman Imperial Age necropolises found in urban and suburban areas of Rome during archaeological excavations in the last decades. The diseases observed were grouped into the following categories: articular diseases, traumas, infections, metabolic or nutritional diseases, congenital diseases and tumours, and some examples are reported for each group. Although extensive epidemiological investigation in ancient skeletal records is impossible, palaeopathological study made it possible to highlight the spread of numerous illnesses, many of which can be related to the life and health conditions of the Roman population.
Remaining Life Expectancy With and Without Polypharmacy
Wastesson, Jonas W; Canudas-Romo, Vladimir; Lindahl-Jacobsen, Rune
2016-01-01
OBJECTIVES: To investigate the remaining life expectancy with and without polypharmacy for Swedish women and men aged 65 years and older. DESIGN: Age-specific prevalence of polypharmacy from the nationwide Swedish Prescribed Drug Register (SPDR) combined with life tables from Statistics Sweden was used to calculate the survival function and remaining life expectancy with and without polypharmacy according to the Sullivan method. SETTING: Nationwide register-based study. PARTICIPANTS: A total of 1,347,564 individuals aged 65 years and older who had been prescribed and dispensed a drug from July 1 to September 30, 2008. MEASUREMENTS: Polypharmacy was defined as the concurrent use of 5 or more drugs. RESULTS: At age 65 years, approximately 8 years of the 20 remaining years of life (41%) can be expected to be lived with polypharmacy. More than half of the remaining life expectancy will be spent...
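The Sullivan method referenced here has a simple computational core: weight each age interval's life-table person-years L_x by the prevalence of the condition in that interval. The sketch below uses invented 5-year age bands and prevalences, not the Swedish register data:

```python
def sullivan_le(person_years, prevalence, survivors_at_start):
    """Remaining life expectancy decomposed by the Sullivan method:
    years lived with the condition are sum(pi_x * L_x) / l_start,
    where L_x are person-years and pi_x the prevalence in each band."""
    with_c = sum(L * p for L, p in zip(person_years, prevalence)) / survivors_at_start
    total = sum(person_years) / survivors_at_start
    return total, with_c, total - with_c

# Hypothetical life table from age 65 (5-year bands): person-years lived
# and polypharmacy prevalence in each band.
L_x  = [450_000, 400_000, 300_000, 150_000, 50_000]
pi_x = [0.25, 0.35, 0.45, 0.55, 0.65]
total, with_poly, without_poly = sullivan_le(L_x, pi_x, survivors_at_start=100_000)
```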
Parasite remains in archaeological sites.
Bouchet, Françoise; Guidon, Niéde; Dittmar, Katharina; Harter, Stephanie; Ferreira, Luiz Fernando; Chaves, Sergio Miranda; Reinhard, Karl; Araújo, Adauto
2003-01-01
Organic remains can be found in many different environments. They are the most significant source for paleoparasitological studies as well as for other paleoecological reconstruction. Preserved paleoparasitological remains are found from the driest to the moistest conditions. They help us to understand past and present diseases and therefore contribute to understanding the evolution of present human sociality, biology, and behavior. In this paper, the scope of the surviving evidence will be briefly surveyed, and the great variety of ways it has been preserved in different environments will be discussed. This is done to develop the most appropriate techniques for recovering parasite remains. Different techniques applied to the study of paleoparasitological remains, preserved in different environments, are presented. The most common materials used to analyze prehistoric human groups are reviewed, and their potential for reconstructing ancient environment and disease is emphasized. This paper also urges increased cooperation among archaeologists, paleontologists, and paleoparasitologists.
Organic Chemicals Remain High Prices
Anonymous
2007-01-01
Phenol: In early April 2007, China's phenol price remained bullish. With the restart of the phenol/acetone units at Sinopec Beijing Yanhua Petrochemical ahead of schedule, there was little trading activity in the market, and the price of phenol dropped considerably afterwards.
Alarms Remain,Efforts Continue
Alice
2007-01-01
China must come to terms with the fact that it has quality problems in at least 1% of its products. Though no country in the world can completely avoid problems, given the responsible role China plays on the international stage, China should stop to take a look at itself and find ways to improve. China must examine itself carefully when looking at the production chain; we have to remain aware that some alarms still remain.
Ultrafast Approximation for Phylogenetic Bootstrap
Bui Quang Minh; Nguyen, Thi; von Haeseler, Arndt
2013-01-01
Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and
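The nonparametric bootstrap that the approximation methods accelerate can be sketched generically: resample the data with replacement, recompute the statistic, and read off percentile limits. This is the general resampling idea only; the phylogenetic bootstrap resamples alignment columns and re-infers a tree per replicate, which is the expensive step RBS-style methods approximate. The data values below are arbitrary:

```python
from random import Random
from statistics import mean

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for `stat` of `data`:
    resample with replacement n_boot times and take the alpha/2 and
    1 - alpha/2 quantiles of the replicate statistics."""
    rng = Random(seed)
    n = len(data)
    reps = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

data = [2.1, 2.5, 1.9, 2.8, 2.4, 2.2, 2.6, 2.0]
lo, hi = bootstrap_ci(data, mean)
```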
Diophantine approximation and badly approximable sets
Kristensen, S.; Thorn, R.; Velani, S.
2006-01-01
Let (X,d) be a metric space and (Omega, d) a compact subspace of X which supports a non-atomic finite measure m. We consider `natural' classes of badly approximable subsets of Omega. Loosely speaking, these consist of points in Omega which `stay clear' of some given set of points in X. The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework, as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...
Peter Read
2013-08-01
In most cultures the dead and their living relatives are held in a dialogic relationship. The dead have made it clear, while living, what they expect from their descendants. The living, for their part, wish to honour the tombs of their ancestors; at the least, to keep the graves of the recent dead from disrepair. Despite the strictures, the living can fail their responsibilities, for example, by migration to foreign countries. The peripatetic Chinese are one of the few cultures able to overcome the dilemma of the wanderer or the exile. With the help of a priest, an Australian Chinese migrant may summon the soul of an ancestor from an Asian grave to a Melbourne temple, where the spirit, though removed from its earthly vessel, will rest and remain at peace. Amongst cultures in which such practices are not culturally appropriate, to fail to honour the family dead can be exquisitely painful. Violence is the cause of most failure.
Leike, Reimar H
2016-01-01
In Bayesian statistics probability distributions express beliefs. However, for many problems the beliefs cannot be computed analytically and approximations of beliefs are needed. We seek a ranking function that quantifies how "embarrassing" it is to communicate a given approximation. We show that there is only one ranking under the requirements that (1) the best ranked approximation is the non-approximated belief and (2) that the ranking judges approximations only by their predictions for actual outcomes. We find that this ranking is equivalent to the Kullback-Leibler divergence that is frequently used in the literature. However, there seems to be confusion about the correct order in which its functional arguments, the approximated and non-approximated beliefs, should be used. We hope that our elementary derivation settles the apparent confusion. We show for example that when approximating beliefs with Gaussian distributions the optimal approximation is given by moment matching. This is in contrast to many su...
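The moment-matching claim can be checked numerically. The sketch below is an illustration of the stated result, not code from the paper: it approximates a bimodal "belief" p by a Gaussian q and confirms by grid search that the Gaussian minimizing KL(p||q) (approximated belief in the second argument) matches the mean and variance of p:

```python
import numpy as np

# Discretize a non-Gaussian belief p on a grid.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def gauss(t, mu, s):
    return np.exp(-0.5 * ((t - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

p = 0.5 * gauss(x, -2.0, 1.0) + 0.5 * gauss(x, 3.0, 1.5)  # bimodal belief

def kl(p, q):
    q = q + 1e-300                      # guard against underflow in the tails
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

# Moments of p -- the moment-matching candidate.
mu_p = np.sum(x * p) * dx
var_p = np.sum((x - mu_p) ** 2 * p) * dx

# Brute-force grid search over Gaussian parameters (step 0.1).
best = min(((kl(p, gauss(x, m, s)), m, s)
            for m in np.linspace(-3, 4, 71)
            for s in np.linspace(0.5, 5, 46)),
           key=lambda t: t[0])
print(mu_p, var_p)       # moments of p
print(best[1], best[2])  # KL-optimal Gaussian parameters, near (mu_p, sqrt(var_p))
```

The grid optimum lands on the grid point closest to the moments of p, consistent with the exponential-family result that minimizing KL(p||q) matches expected sufficient statistics.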
Silicon photonics: some remaining challenges
Reed, G. T.; Topley, R.; Khokhar, A. Z.; Thompson, D. J.; Stanković, S.; Reynolds, S.; Chen, X.; Soper, N.; Mitchell, C. J.; Hu, Y.; Shen, L.; Martinez-Jimenez, G.; Healy, N.; Mailis, S.; Peacock, A. C.; Nedeljkovic, M.; Gardes, F. Y.; Soler Penades, J.; Alonso-Ramos, C.; Ortega-Monux, A.; Wanguemert-Perez, G.; Molina-Fernandez, I.; Cheben, P.; Mashanovich, G. Z.
2016-03-01
This paper discusses some of the remaining challenges for silicon photonics, and how we at Southampton University have approached some of them. Despite phenomenal advances in the field of Silicon Photonics, there are a number of areas that still require development. For short to medium reach applications, there is a need to improve the power consumption of photonic circuits such that inter-chip, and perhaps intra-chip applications are viable. This means that yet smaller devices are required as well as thermally stable devices, and multiple wavelength channels. In turn this demands smaller, more efficient modulators, athermal circuits, and improved wavelength division multiplexers. The debate continues as to whether on-chip lasers are necessary for all applications, but an efficient low cost laser would benefit many applications. Multi-layer photonics offers the possibility of increasing the complexity and effectiveness of a given area of chip real estate, but it is a demanding challenge. Low cost packaging (in particular, passive alignment of fibre to waveguide), and effective wafer scale testing strategies, are also essential for mass market applications. Whilst solutions to these challenges would enhance most applications, a derivative technology is emerging, that of Mid Infra-Red (MIR) silicon photonics. This field will build on existing developments, but will require key enhancements to facilitate functionality at longer wavelengths. In common with mainstream silicon photonics, significant developments have been made, but there is still much left to do. Here we summarise some of our recent work towards wafer scale testing, passive alignment, multiplexing, and MIR silicon photonics technology.
Rašin, Andrija
1994-01-01
We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models is explored. Implications for neutrino physics are also discussed.
On Element SDD Approximability
Avron, Haim; Toledo, Sivan
2009-01-01
This short communication shows that in some cases scalar elliptic finite element matrices cannot be approximated well by an SDD matrix. We also give a theoretical analysis of a simple heuristic method for approximating an element by an SDD matrix.
Approximate iterative algorithms
Almudevar, Anthony Louis
2014-01-01
Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a
Approximation of distributed delays
Lu, Hao; Eberard, Damien; Simon, Jean-Pierre
2010-01-01
We address in this paper the approximation problem of distributed delays. Such elements are convolution operators with kernel having bounded support, and appear in the control of time-delay systems. From the rich literature on this topic, we propose a general methodology to achieve such an approximation. For this, we enclose the approximation problem in the graph topology, and work with the norm defined over the convolution Banach algebra. The class of rational approximates is described, and a constructive approximation is proposed. Analysis in time and frequency domains is provided. This methodology is illustrated on the stabilization control problem, for which simulations results show the effectiveness of the proposed methodology.
Sparse approximation with bases
2015-01-01
This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications. The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...
Approximating Graphic TSP by Matchings
Mömke, Tobias
2011-01-01
We present a framework for approximating the metric TSP based on a novel use of matchings. Traditionally, matchings have been used to add edges in order to make a given graph Eulerian, whereas our approach also allows for the removal of certain edges leading to a decreased cost. For the TSP on graphic metrics (graph-TSP), the approach yields a 1.461-approximation algorithm with respect to the Held-Karp lower bound. For graph-TSP restricted to a class of graphs that contains degree three bounded and claw-free graphs, we show that the integrality gap of the Held-Karp relaxation matches the conjectured ratio 4/3. The framework allows for generalizations in a natural way and also leads to a 1.586-approximation algorithm for the traveling salesman path problem on graphic metrics where the start and end vertices are prespecified.
Approximation techniques for engineers
Komzsik, Louis
2006-01-01
Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data or continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.
Achieser, N I
2004-01-01
A pioneer of many modern developments in approximation theory, N. I. Achieser designed this graduate-level text from the standpoint of functional analysis. The first two chapters address approximation problems in linear normalized spaces and the ideas of P. L. Tchebysheff. Chapter III examines the elements of harmonic analysis, and Chapter IV, integral transcendental functions of the exponential type. The final two chapters explore the best harmonic approximation of functions and Wiener's theorem on approximation. Professor Achieser concludes this exemplary text with an extensive section of pr
Expectation Consistent Approximate Inference
Opper, Manfred; Winther, Ole
2005-01-01
We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...
Ordered cones and approximation
Keimel, Klaus
1992-01-01
This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.
Approximate Modified Policy Iteration
Scherrer, Bruno; Ghavamzadeh, Mohammad; Geist, Matthieu
2012-01-01
Modified policy iteration (MPI) is a dynamic programming (DP) algorithm that contains the two celebrated policy and value iteration methods. Despite its generality, MPI has not been thoroughly studied, especially its approximation form which is used when the state and/or action spaces are large or infinite. In this paper, we propose three approximate MPI (AMPI) algorithms that are extensions of the well-known approximate DP algorithms: fitted-value iteration, fitted-Q iteration, and classification-based policy iteration. We provide an error propagation analysis for AMPI that unifies those for approximate policy and value iteration. We also provide a finite-sample analysis for the classification-based implementation of AMPI (CBMPI), which is more general than (and in a sense subsumes) the analyses of the other presented AMPI algorithms. An interesting observation is that the MPI's parameter allows us to control the balance of errors (in value function approximation and in estimating the greedy policy) in the fina...
On approximating multi-criteria TSP
Manthey, Bodo; Albers, S.; Marion, J.-Y.
2009-01-01
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP), whose performances are independent of the number $k$ of criteria and come close to the approximation ratios obtained for TSP with a single objective function. We present randomized app
Approximate calculation of integrals
Krylov, V I
2006-01-01
A systematic introduction to the principal ideas and results of the contemporary theory of approximate integration, this volume approaches its subject from the viewpoint of functional analysis. In addition, it offers a useful reference for practical computations. Its primary focus lies in the problem of approximate integration of functions of a single variable, rather than the more difficult problem of approximate integration of functions of more than one variable.The three-part treatment begins with concepts and theorems encountered in the theory of quadrature. The second part is devoted to t
Approximate and renormgroup symmetries
Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling
2009-07-01
"Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)
Approximating Stationary Statistical Properties
Xiaoming WANG
2009-01-01
It is well-known that physical laws for large chaotic dynamical systems are revealed statistically. Many times these statistical properties of the system must be approximated numerically. The main contribution of this manuscript is to provide simple and natural criteria on numerical methods (temporal and spatial discretization) that are able to capture the stationary statistical properties of the underlying dissipative chaotic dynamical systems asymptotically. The result on temporal approximation is a recent finding of the author, and the result on spatial approximation is a new one. Applications to the infinite Prandtl number model for convection and the barotropic quasi-geostrophic model are also discussed.
Malvina Baica
1985-01-01
The author uses a new modification of the Jacobi-Perron Algorithm which holds for complex fields of any degree (abbr. ACF), and defines it as the Generalized Euclidean Algorithm (abbr. GEA) to approximate irrationals.
Approximations in Inspection Planning
Engelund, S.; Sørensen, John Dalsgaard; Faber, M. H.
2000-01-01
Planning of inspections of civil engineering structures may be performed within the framework of Bayesian decision analysis. The effort involved in a full Bayesian decision analysis is relatively large. Therefore, the actual inspection planning is usually performed using a number of approximations. One of the more important of these approximations is the assumption that all inspections will reveal no defects. Using this approximation the optimal inspection plan may be determined on the basis of conditional probabilities, i.e. the probability of failure given no defects have been found by the inspection. In this paper the quality of this approximation is investigated. The inspection planning is formulated both as a full Bayesian decision problem and on the basis of the assumption that the inspection will reveal no defects.
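The conditional probability underlying the no-find assumption can be illustrated with a toy Bayes computation. All numbers below are hypothetical, chosen only to show the structure of P(failure | no defect found):

```python
# Hypothetical inputs for a single inspection (not from the paper).
p_defect = 0.10          # prior probability a defect is present
p_fail_defect = 0.20     # probability of failure if a defect is present
p_fail_ok = 0.001        # probability of failure if no defect is present
pod = 0.8                # probability of detection (POD) given a defect

# Probability the inspection finds nothing:
# P(no find) = P(no defect) + P(defect) * (1 - POD)
p_nofind = (1 - p_defect) + p_defect * (1 - pod)

# Bayes' rule: updated probability a defect is present after a clean inspection.
p_defect_nofind = p_defect * (1 - pod) / p_nofind

# Conditional failure probability given no defects were found.
p_fail_nofind = (p_defect_nofind * p_fail_defect
                 + (1 - p_defect_nofind) * p_fail_ok)
print(p_fail_nofind)
```

A clean inspection lowers, but does not eliminate, the failure probability; the quality of the no-find approximation hinges on how likely the no-find branch actually is.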
The Karlqvist approximation revisited
Tannous, C
2015-01-01
The Karlqvist approximation signaling the historical beginning of magnetic recording head theory is reviewed and compared to various approaches progressing from Green, Fourier, Conformal mapping that obeys the Sommerfeld edge condition at angular points and leads to exact results.
Gautschi, Walter; Rassias, Themistocles M
2011-01-01
Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg
Approximation Behooves Calibration
da Silva Ribeiro, André Manuel; Poulsen, Rolf
2013-01-01
Calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005–2009.
The Human Remains from HMS Pandora
D.P. Steptoe
2002-04-01
In 1977 the wreck of HMS Pandora (the ship that was sent to re-capture the Bounty mutineers) was discovered off the north coast of Queensland. Since 1983, the Queensland Museum Maritime Archaeology section has carried out systematic excavation of the wreck. During the years 1986 and 1995-1998, more than 200 human bones and bone fragments were recovered. Osteological investigation revealed that this material represented three males. Their ages were estimated at approximately 17 +/-2 years, 22 +/-3 years and 28 +/-4 years, with statures of 168 +/-4cm, 167 +/-4cm, and 166cm +/-3cm respectively. All three individuals were probably Caucasian, although precise determination of ethnicity was not possible. In addition to poor dental hygiene, signs of chronic diseases suggestive of rickets and syphilis were observed. Evidence of spina bifida was seen on one of the skeletons, as were other skeletal anomalies. Various taphonomic processes affecting the remains were also observed and described. Compact bone was observed under the scanning electron microscope and found to be structurally coherent. Profiles of the three skeletons were compared with historical information about the 35 men lost with the ship, but no precise identification could be made. The investigation did not reveal the cause of death. Further research, such as DNA analysis, is being carried out at the time of publication.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale dataset. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
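The subspace-sampling idea can be illustrated with the closely related Nyström approximation of a kernel matrix. This is a standard construction used here as a stand-in for the sampling step, not the authors' AKCL code; the data, kernel width, and landmark count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))                  # toy dataset

def rbf(A, B, gamma=0.5):
    # Gaussian (RBF) kernel between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)                                  # full n x n kernel: the memory bottleneck
idx = rng.choice(400, size=80, replace=False)  # sampled landmark points
C = rbf(X, X[idx])                             # n x m slice against the landmarks
W = C[idx]                                     # m x m landmark block
K_nys = C @ np.linalg.pinv(W) @ C.T            # low-rank Nyström reconstruction

rel_err = np.linalg.norm(K - K_nys) / np.linalg.norm(K)
print(rel_err)
```

Only the n x m slice and the m x m block need to be formed, so the quadratic storage and computation of the full kernel matrix are avoided while the reconstruction error stays small.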
Maksim Duškin
2015-11-01
Approximation and supposition. This article compares exponents of approximation (expressions like Russian около, примерно, приблизительно, более, свыше) and words expressing supposition (for example Russian скорее всего, наверное, возможно). These words are often confused in research; in particular, researchers often mention exponents of supposition when discussing exponents of approximation. Such an approach arouses some objections. The author intends to demonstrate in this article a notional difference between approximation and supposition, and therefore the difference between exponents of these two notions. This difference can be described by specifying the different attitudes of approximation and supposition to the notion of knowledge. Supposition implies the speaker's ignorance of the exact number, while approximation does not imply such ignorance. The article offers examples proving this point of view.
Approximate Privacy: Foundations and Quantification
Feigenbaum, Joan; Schapira, Michael
2009-01-01
Increasing use of computers and networks in business, government, recreation, and almost all aspects of daily life has led to a proliferation of online sensitive data about individuals and organizations. Consequently, concern about the privacy of these data has become a top priority, particularly those data that are created and used in electronic commerce. There have been many formulations of privacy and, unfortunately, many negative results about the feasibility of maintaining privacy of sensitive data in realistic networked environments. We formulate communication-complexity-based definitions, both worst-case and average-case, of a problem's privacy-approximation ratio. We use our definitions to investigate the extent to which approximate privacy is achievable in two standard problems: the second-price Vickrey auction and the millionaires problem of Yao. For both the second-price Vickrey auction and the millionaires problem, we show that not only is perfect privacy impossible or infeasibly costly to achieve...
George Marsaglia
2006-05-01
This article extends and amplifies on results from a paper of over forty years ago. It provides software for evaluating the density and distribution functions of the ratio z/w for any two jointly normal variates z, w, and provides details on methods for transforming a general ratio z/w into a standard form, (a+x)/(b+y), with x and y independent standard normal and a, b non-negative constants. It discusses handling general ratios when, in theory, none of the moments exist, yet practical considerations suggest there should be approximations whose adequacy can be verified by means of the included software. These approximations show that many of the ratios of normal variates encountered in practice can themselves be taken as normally distributed. A practical rule is developed: if a < 2.256 and 4 < b, then the ratio (a+x)/(b+y) is itself approximately normally distributed with mean μ = a/(1.01b - 0.2713) and variance σ² = (a² + 1)/(b² + 0.108b - 3.795) - μ².
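The quoted practical rule is easy to check by simulation. The sketch below uses illustrative parameter choices satisfying the rule's conditions (a < 2.256, b > 4) and compares its mean and variance against Monte Carlo estimates:

```python
import numpy as np

# Monte Carlo check of the rule for the ratio (a+x)/(b+y),
# with x, y independent standard normal; a=1, b=6 are illustrative.
rng = np.random.default_rng(1)
a, b = 1.0, 6.0
x = rng.standard_normal(2_000_000)
y = rng.standard_normal(2_000_000)
r = (a + x) / (b + y)

mu = a / (1.01 * b - 0.2713)                           # rule's mean
var = (a**2 + 1) / (b**2 + 0.108 * b - 3.795) - mu**2  # rule's variance
print(r.mean(), mu)   # sample mean vs. rule
print(r.var(), var)   # sample variance vs. rule
```

Although the exact moments of the ratio do not exist, with b this far from zero the denominator essentially never approaches zero in practice, and the sample moments agree with the rule to a few parts in a thousand.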
Covariant approximation averaging
Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2014-01-01
We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
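Stripped of the lattice-QCD specifics, the unbiased structure of AMA resembles a control-variate estimator: one exact evaluation corrects the bias of a cheap approximation that is averaged over many symmetry-translated source points. The sketch below uses entirely hypothetical observables (not lattice code) to show the variance reduction:

```python
import numpy as np

rng = np.random.default_rng(2)

def O_exact(x):
    # Expensive observable (hypothetical stand-in).
    return np.sin(x) + 0.1 * x

def O_apx(x):
    # Cheap approximation with a systematic, symmetry-averaging-friendly offset.
    return np.sin(x) + 0.1 * x + 0.05 * np.cos(3 * x)

# AMA-style estimator: O(x0) - O_apx(x0) + mean_i O_apx(x_i).
# The correction term removes the bias; the cheap average does the work.
samples = []
for _ in range(200):
    xs = rng.uniform(0, 2 * np.pi, size=64)   # translated source points
    x0 = xs[0]                                # single exact evaluation
    samples.append(O_exact(x0) - O_apx(x0) + O_apx(xs).mean())

# Naive estimator: one exact evaluation per measurement.
naive = [O_exact(rng.uniform(0, 2 * np.pi)) for _ in range(200)]
print(np.std(samples), np.std(naive))  # improved estimator fluctuates much less
```

Because the correction term has zero mean in expectation, the estimator is unbiased regardless of how crude the approximation is; its variance is small whenever the approximation tracks the exact observable closely, which is the role of the relaxed-stopping-condition inverse in AMA.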
Diophantine approximations on fractals
Einsiedler, Manfred; Shapira, Uri
2009-01-01
We exploit dynamical properties of diagonal actions to derive results in Diophantine approximations. In particular, we prove that the continued fraction expansion of almost any point on the middle third Cantor set (with respect to the natural measure) contains all finite patterns (hence is well approximable). Similarly, we show that for a variety of fractals in [0,1]^2, possessing some symmetry, almost any point is not Dirichlet improvable (hence is well approximable) and has property C (after Cassels). We then settle by similar methods a conjecture of M. Boshernitzan saying that there are no irrational numbers x in the unit interval such that the continued fraction expansions of {nx mod1 : n is a natural number} are uniformly eventually bounded.
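For readers unfamiliar with the central object, a continued fraction expansion can be computed with a generic routine (unrelated to the paper's dynamical methods; exact rationals are used to avoid floating-point drift):

```python
from fractions import Fraction

def cf(x, n):
    # First n terms of the continued fraction expansion [a0; a1, a2, ...]
    # of a non-negative rational x.
    terms = []
    for _ in range(n):
        a = int(x)            # integer part a_k
        terms.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac          # recurse on the reciprocal of the fractional part
    return terms

print(cf(Fraction(649, 200), 6))  # [3, 4, 12, 4], i.e. 3.245 = [3; 4, 12, 4]
```

The paper's results concern which digit patterns appear in such expansions for points drawn from fractal sets like the middle third Cantor set.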
Monotone Boolean approximation
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
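The tightest monotone bounds admit a direct brute-force construction: the least monotone increasing function above f takes the OR of f over all points below x, and the greatest one below f takes the AND over all points above x. This is a standard envelope computation sketched here for small n; the report's algorithms for large expressions may differ:

```python
from itertools import product

def monotone_bounds(f, n):
    # Tightest monotone increasing envelope of a Boolean function f on n bits:
    #   upper(x) = OR  of f(y) over all y <= x (bitwise),
    #   lower(x) = AND of f(y) over all y >= x (bitwise),
    # so lower <= f <= upper pointwise, with lower and upper monotone.
    pts = list(product((0, 1), repeat=n))
    leq = lambda a, b: all(u <= v for u, v in zip(a, b))
    upper = {x: any(f(y) for y in pts if leq(y, x)) for x in pts}
    lower = {x: all(f(y) for y in pts if leq(x, y)) for x in pts}
    return lower, upper

# Example: XOR, a noncoherent structure function, and its monotone envelope.
f = lambda x: x[0] ^ x[1]
lo, up = monotone_bounds(f, 2)
print([up[p] for p in product((0, 1), repeat=2)])  # OR-like upper bound
print([lo[p] for p in product((0, 1), repeat=2)])  # constant-0 lower bound
```

For XOR the upper envelope collapses to OR and the lower envelope to the constant 0, illustrating how much a noncoherent function can differ from its best monotone approximations.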
Prestack wavefield approximations
Alkhalifah, Tariq
2013-09-01
The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations free of such singularities that remain highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.
On Convex Quadratic Approximation
den Hertog, D.; de Klerk, E.; Roos, J.
2000-01-01
In this paper we prove the counterintuitive result that the quadratic least squares approximation of a multivariate convex function in a finite set of points is not necessarily convex, even though it is convex for a univariate convex function. This result has many consequences both for the field of
Norton, Andrew H.
1991-01-01
Local spline approximants offer a means for constructing finite difference formulae for numerical solution of PDEs. These formulae seem particularly well suited to situations in which the use of conventional formulae leads to non-linear computational instability of the time integration. This is explained in terms of frequency responses of the FDF.
Approximation by Cylinder Surfaces
Randrup, Thomas
1997-01-01
We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...
Topology, calculus and approximation
Komornik, Vilmos
2017-01-01
Presenting basic results of topology, calculus of several variables, and approximation theory which are rarely treated in a single volume, this textbook includes several beautiful, but almost forgotten, classical theorems of Descartes, Erdős, Fejér, Stieltjes, and Turán. The exposition style of Topology, Calculus and Approximation follows the Hungarian mathematical tradition of Paul Erdős and others. In the first part, the classical results of Alexandroff, Cantor, Hausdorff, Helly, Peano, Radon, Tietze and Urysohn illustrate the theories of metric, topological and normed spaces. Following this, the general framework of normed spaces and Carathéodory's definition of the derivative are shown to simplify the statement and proof of various theorems in calculus and ordinary differential equations. The third and final part is devoted to interpolation, orthogonal polynomials, numerical integration, asymptotic expansions and the numerical solution of algebraic and differential equations. Students of both pure an...
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2011-01-01
Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal based on polynomial expansions in terms of the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.
Optimization and approximation
Pedregal, Pablo
2017-01-01
This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.
Topics in Metric Approximation
Leeb, William Edward
This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity -- such as the Lipschitz norm -- and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better-behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space and not the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.
Scott's Lake Excavation Letters on Human Remains
US Fish and Wildlife Service, Department of the Interior — This is two letters written about the repatriation of Santee Indian human remains and funerary objects to Santee Sioux Tribe. Includes an inventory of human remains...
Chalasani, P.; Saias, I. [Los Alamos National Lab., NM (United States); Jha, S. [Carnegie Mellon Univ., Pittsburgh, PA (United States)
1996-04-08
As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
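The n-period binomial model referenced above can be sketched for the simplest, path-independent case (a European call), where backward induction over the tree suffices; this is a generic Cox-Ross-Rubinstein sketch with illustrative parameters, not the paper's algorithm for path-dependent options:

```python
# Minimal binomial (CRR) pricing of a European call -- a sketch of the
# n-period model; path-dependent payoffs would need the whole price path,
# which is exactly what makes them hard to value.
from math import exp, sqrt

def binomial_call(S0, K, r, sigma, T, n):
    """Price a European call by backward induction on an n-period tree."""
    dt = T / n
    u = exp(sigma * sqrt(dt))          # up factor
    d = 1 / u                          # down factor
    p = (exp(r * dt) - d) / (u - d)    # risk-neutral up probability
    disc = exp(-r * dt)
    # terminal payoffs after j up-moves and n-j down-moves
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    # step back through the tree, discounting expected values
    for _ in range(n):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

price = binomial_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=200)
```

For a path-dependent (e.g. Asian) option, the node value would depend on the running average of the path, so the state space grows with the number of distinct paths rather than the number of nodes.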
Finite elements and approximation
Zienkiewicz, O C
2006-01-01
A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o
Approximate Bayesian computation.
Mikael Sunnåker
Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support the data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
Some Undecidable Problems on Approximability of NP Optimization Problems
黄雄
1996-01-01
In this paper some undecidable problems on the approximability of NP optimization problems are investigated. In particular, the following problems are all undecidable: (1) Given an NP optimization problem, is it approximable in polynomial time? (2) For any polynomial-time computable function r(n), given a polynomial-time approximable NP optimization problem, does it have a polynomial-time approximation algorithm with approximation performance ratio r(n) (r(n)-approximable)? (3) For any polynomial-time computable functions r(n), r'(n), where r'(n)
Roy, Swapnoneel; Thakur, Ashok Kumar
2008-01-01
Genome rearrangements have been modelled by a variety of primitives such as reversals, transpositions, block moves and block interchanges. We consider one such genome rearrangement primitive, strip exchanges. A strip-exchanging move interchanges the positions of two chosen strips so that they merge with other strips. The strip exchange problem is to sort a given permutation using the minimum number of strip exchanges. We present the first non-trivial 2-approximation algorithm for this problem. We also observe that sorting by strip exchanges is fixed-parameter tractable. Lastly, we discuss the application of strip exchanges in a different area, Optical Character Recognition (OCR), with an example.
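A single strip-exchange move can be illustrated as follows; this is a toy sketch of the primitive only (strips taken, by one common convention, as maximal runs of consecutive values), not the paper's 2-approximation algorithm:

```python
# One strip-exchange move on a permutation -- illustrative only.

def strips(perm):
    """Split a permutation into maximal runs of consecutive increasing values."""
    runs, run = [], [perm[0]]
    for x in perm[1:]:
        if x == run[-1] + 1:
            run.append(x)
        else:
            runs.append(run)
            run = [x]
    runs.append(run)
    return runs

def strip_exchange(perm, i, j):
    """Swap the i-th and j-th strips (0-based) and return the new permutation."""
    runs = strips(perm)
    runs[i], runs[j] = runs[j], runs[i]
    return [x for run in runs for x in run]

result = strip_exchange([3, 4, 1, 2], 0, 1)  # one move sorts this permutation
```

Here `[3, 4, 1, 2]` decomposes into the strips `[3, 4]` and `[1, 2]`; exchanging them merges everything into a single sorted strip.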
Approximation by Cylinder Surfaces
Randrup, Thomas
1997-01-01
We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal projection of the surface onto this plane, a reference curve is determined by use of methods for thinning of binary images. Finally, the cylinder surface is constructed as follows: the directrix of the cylinder surface is determined by a least squares method minimizing the distance to the points in the projection within a tolerance given by the reference curve, and the rulings are lines perpendicular to the projection plane. Application of the method in ship design is given.
S-Approximation: A New Approach to Algebraic Approximation
M. R. Hooshmandasl
2014-01-01
We intend to study a new class of algebraic approximations, called S-approximations, and their properties. We have shown that S-approximations can be used for applied problems which cannot be modeled by inclusion-based approximations. Also, in this work, we studied a subclass of S-approximations, called Sℳ-approximations, and showed that this subclass preserves most of the properties of inclusion-based approximations but is not necessarily inclusion-based. The paper concludes by studying some basic operations on S-approximations and counting the number of S-min functions.
Mammalian Remains from Indian Sites on Aruba
Hooijer, D.A.
1960-01-01
Mr. H. R. VAN HEEKEREN and Mr. C. J. DU RY, of the Rijksmuseum voor Volkenkunde at Leiden, entrusted me with the identification of some animal remains collected from Indian sites on Aruba by Professor J. P. B. DE JOSSELIN DE JONG in 1923. These remains relate for the most part to marine turtles (Che
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2012-05-01
Many of the explicit prestack traveltime relations used in practice are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multifocusing, based on the double square-root (DSR) equation, and the common reflection stack (CRS) approaches. Using the DSR equation, I constructed the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I recast the eikonal in terms of the reflection angle, and thus derived expansion-based solutions of this eikonal in terms of the difference between the source and receiver velocities in a generally inhomogeneous background medium. The zero-order term solution, corresponding to ignoring the lateral velocity variation in estimating the prestack part, is free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. The higher-order terms include limitations for horizontally traveling waves; however, we can readily enforce stability constraints to avoid such singularities. In fact, another expansion over reflection angle can help us avoid these singularities by requiring the source and receiver velocities to be different. On the other hand, expansions in terms of reflection angles result in singularity-free equations. For a homogeneous background medium, as a test, the solutions are reasonably accurate to large reflection and dip angles. A Marmousi example demonstrated the usefulness and versatility of the formulation. © 2012 Society of Exploration Geophysicists.
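For context, the DSR eikonal referred to above is commonly written (in the notation standard in the literature, which may differ from the paper's own) as

```latex
\frac{\partial \tau}{\partial z}
= \sqrt{\frac{1}{v_s^{2}} - \left(\frac{\partial \tau}{\partial s}\right)^{2}}
+ \sqrt{\frac{1}{v_r^{2}} - \left(\frac{\partial \tau}{\partial r}\right)^{2}}
```

where $s$ and $r$ are the source and receiver coordinates and $v_s$, $v_r$ the velocities at those positions. Horizontally traveling waves drive either square-root argument to zero, which is the critical singularity the abstract discusses.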
Operators of Approximations and Approximate Power Set Spaces
ZHANG Xian-yong; MO Zhi-wen; SHU Lan
2004-01-01
Boundary inner and outer operators are introduced; and union, intersection, and complement operators of approximations are redefined. The approximation operators have a good property of maintaining union, intersection, and complement operators, so rough set theory is enriched from both the operator-oriented and set-oriented views. Approximate power set spaces are defined, and it is proved that the approximation operators are epimorphisms from the power set space to approximate power set spaces. Some basic properties of approximate power set spaces are obtained via these epimorphisms, in contrast to the power set space.
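The classical Pawlak operators that such redefined operators generalize can be sketched minimally (this is the standard rough-set construction, not the paper's boundary inner/outer operators):

```python
# Pawlak lower and upper approximations on a partitioned universe.

def lower_approx(classes, X):
    """Union of the equivalence classes wholly contained in X."""
    out = set()
    for c in classes:
        if c <= X:
            out |= c
    return out

def upper_approx(classes, X):
    """Union of the equivalence classes that intersect X."""
    out = set()
    for c in classes:
        if c & X:
            out |= c
    return out

classes = [{1, 2}, {3, 4}, {5, 6}]   # equivalence classes of the universe
X = {1, 2, 3}
lo = lower_approx(classes, X)        # {1, 2}
up = upper_approx(classes, X)        # {1, 2, 3, 4}
```

The upper operator preserves union (`upper(X ∪ Y) = upper(X) ∪ upper(Y)`), an instance of the operator-maintenance property the abstract describes.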
On Approximating Four Covering and Packing Problems
Ashley, Mary; Berman, Piotr; Chaovalitwongse, Wanpracha; DasGupta, Bhaskar; Kao, Ming-Yang; 10.1016/j.jcss.2009.01.002
2011-01-01
In this paper, we consider approximability issues of the following four problems: triangle packing, full sibling reconstruction, maximum profit coverage and 2-coverage. All of them are generalized or specialized versions of set-cover and have applications in biology ranging from full-sibling reconstructions in wild populations to biomolecular clusterings; however, as this paper shows, their approximability properties differ considerably. Our inapproximability constant for the triangle packing problem improves upon the previous results; this is done by directly transforming the inapproximability gap of Håstad for the problem of maximizing the number of satisfied equations for a set of equations over GF(2) and is interesting in its own right. Our approximability results on the full sibling reconstruction problems answer questions originally posed by Berger-Wolf et al., and our results on the maximum profit coverage problem provide almost matching upper and lower bounds on the approximation ratio, answering a...
Luminescence of thermally altered human skeletal remains.
Krap, Tristan; Nota, Kevin; Wilk, Leah S; van de Goot, Franklin R W; Ruijter, Jan M; Duijst, Wilma; Oostra, Roelof-Jan
2017-07-01
Literature on luminescent properties of thermally altered human remains is scarce and contradictory. Therefore, the luminescence of heated bone was systemically reinvestigated. A heating experiment was conducted on fresh human bone, in two different media, and cremated human remains were recovered from a modern crematory. Luminescence was excited with light sources within the range of 350 to 560 nm. The excitation light was filtered out by using different long pass filters, and the luminescence was analysed by means of a scoring method. The results show that temperature, duration and surrounding medium determine the observed emission intensity and bandwidth. It is concluded that the luminescent characteristic of bone can be useful for identifying thermally altered human remains in a difficult context as well as yield information on the perimortem and postmortem events.
Asynchronous stochastic approximation with differential inclusions
David S. Leslie
2012-01-01
The asymptotic pseudo-trajectory approach to stochastic approximation of Benaïm, Hofbauer and Sorin is extended for asynchronous stochastic approximations with a set-valued mean field. The asynchronicity of the process is incorporated into the mean field to produce convergence results which remain similar to those of an equivalent synchronous process. In addition, this allows many of the restrictive assumptions previously associated with asynchronous stochastic approximation to be removed. The framework is extended for a coupled asynchronous stochastic approximation process with set-valued mean fields. Two-timescale arguments are used here in a similar manner to the original work in this area by Borkar. The applicability of this approach is demonstrated through learning in a Markov decision process.
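The underlying stochastic approximation recursion can be sketched in its simplest, synchronous, single-valued form (a Robbins-Monro toy example, not the asynchronous set-valued framework of the paper):

```python
# Robbins-Monro stochastic approximation: find a root of E[f(x)] from
# noisy evaluations, with decreasing step sizes a_n = 1/n.
import random

def robbins_monro(noisy_f, x0, steps, seed=0):
    """Iterate x_{n+1} = x_n - a_n * f(x_n) + noise, a_n = 1/n."""
    rng = random.Random(seed)
    x = x0
    for n in range(1, steps + 1):
        x -= (1.0 / n) * noisy_f(x, rng)
    return x

# E[f(x)] = x - 2, so the mean field has its root at x = 2
estimate = robbins_monro(lambda x, rng: (x - 2) + rng.gauss(0, 0.5),
                         x0=0.0, steps=5000)
```

The step sizes satisfy the classical conditions (their sum diverges, the sum of their squares converges), so the noise averages out and the iterate tracks the mean-field dynamics.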
Fish remains and humankind: part two
Andrew K G Jones
1998-07-01
The significance of aquatic resources to past human groups is not adequately reflected in the published literature - a deficiency which is gradually being acknowledged by the archaeological community world-wide. The publication of the following three papers goes some way to redress this problem. Originally presented at an International Council of Archaeozoology (ICAZ) Fish Remains Working Group meeting in York, U.K. in 1987, these papers offer clear evidence of the range of interest in ancient fish remains across the world. Further papers from the York meeting were published in Internet Archaeology 3 in 1997.
Hybrid Stochastic Models for Remaining Lifetime Prognosis
2004-08-01
... literature for techniques and comparisons. Osogami and Harchol-Balter [70], Perros [73], Johnson [36], and Altiok [5] provide excellent summaries of ... and type of PH-distribution approximation for c² > 0.5 is not as obvious. In order to use the minimum distance estimation, Perros [73] indicated that ... moment-matching techniques. Perros [73] indicated that the maximum likelihood and minimum distance techniques require nonlinear optimization. Johnson ...
International Conference Approximation Theory XV
Schumaker, Larry
2017-01-01
These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...
Predicting the remaining service life of concrete
Clifton, J.F.
1991-11-01
Nuclear power plants currently provide about 17 percent of U.S. electricity, and many of these plants are approaching their licensed life of 40 years. The U.S. Nuclear Regulatory Commission and the Department of Energy's Oak Ridge National Laboratory are carrying out a program to develop a methodology for assessing the remaining safe life of the concrete components and structures in nuclear power plants. This program has the overall objective of identifying potential structural safety issues, as well as acceptance criteria, for use in evaluations of nuclear power plants for continued service. The National Institute of Standards and Technology (NIST) is contributing to this program by identifying and analyzing methods for predicting the remaining life of in-service concrete materials. This report examines the basis for predicting the remaining service lives of concrete materials of nuclear power facilities. Methods for predicting the service life of new and in-service concrete materials are analyzed. These methods include (1) estimates based on experience, (2) comparison of performance, (3) accelerated testing, (4) stochastic methods, and (5) mathematical modeling. New approaches for predicting the remaining service lives of concrete materials are proposed and recommendations for their further development given. Degradation processes are discussed based on considerations of their mechanisms, likelihood of occurrence, manifestations, and detection. They include corrosion, sulfate attack, alkali-aggregate reactions, frost attack, leaching, radiation, salt crystallization, and microbiological attack.
Juveniles' Motivations for Remaining in Prostitution
Hwang, Shu-Ling; Bedford, Olwen
2004-01-01
Qualitative data from in-depth interviews were collected in 1990-1991, 1992, and 2000 with 49 prostituted juveniles remanded to two rehabilitation centers in Taiwan. These data are analyzed to explore Taiwanese prostituted juveniles' feelings about themselves and their work, their motivations for remaining in prostitution, and their difficulties…
Identification of ancient remains through genomic sequencing
Blow, Matthew J.; Zhang, Tao; Woyke, Tanja; Speller, Camilla F.; Krivoshapkin, Andrei; Yang, Dongya Y.; Derevianko, Anatoly; Rubin, Edward M.
2008-01-01
Studies of ancient DNA have been hindered by the preciousness of remains, the small quantities of undamaged DNA accessible, and the limitations associated with conventional PCR amplification. In these studies, we developed and applied a genomewide adapter-mediated emulsion PCR amplification protocol for ancient mammalian samples estimated to be between 45,000 and 69,000 yr old. Using 454 Life Sciences (Roche) and Illumina sequencing (formerly Solexa sequencing) technologies, we examined over 100 megabases of DNA from amplified extracts, revealing unbiased sequence coverage with substantial amounts of nonredundant nuclear sequences from the sample sources and negligible levels of human contamination. We consistently recorded over 500-fold increases, such that nanogram quantities of starting material could be amplified to microgram quantities. Application of our protocol to a 50,000-yr-old uncharacterized bone sample that was unsuccessful in mitochondrial PCR provided sufficient nuclear sequences for comparison with extant mammals and subsequent phylogenetic classification of the remains. The combined use of emulsion PCR amplification and high-throughput sequencing allows for the generation of large quantities of DNA sequence data from ancient remains. Using such techniques, even small amounts of ancient remains with low levels of endogenous DNA preservation may yield substantial quantities of nuclear DNA, enabling novel applications of ancient DNA genomics to the investigation of extinct phyla. PMID:18426903
Kadav Moun PSA (:60) (Human Remains)
2010-02-18
This is an important public health announcement about safety precautions for those handling human remains. Language: Haitian Creole. Created: 2/18/2010 by Centers for Disease Control and Prevention (CDC). Date Released: 2/18/2010.
The case for fencing remains intact.
Packer, C; Swanson, A; Canney, S; Loveridge, A; Garnett, S; Pfeifer, M; Burton, A C; Bauer, H; MacNulty, D
2013-11-01
Creel et al. argue against the conservation effectiveness of fencing based on a population measure that ignores the importance of top predators to ecosystem processes. Their statistical analyses consider, first, only a subset of fenced reserves and, second, an incomplete examination of 'costs per lion.' Our original conclusions remain unaltered.
Removing the remaining ridges in fingerprint segmentation
ZHU En; ZHANG Jian-ming; YIN Jian-ping; ZHANG Guo-min; HU Chun-feng
2006-01-01
Fingerprint segmentation is an important step in fingerprint recognition and is usually aimed at identifying non-ridge regions and unrecoverable low quality ridge regions and excluding them as background, so as to reduce the time expenditure of image processing and avoid detecting false features. In both high and low quality ridge regions there are often remaining ridges, which are afterimages of the previously scanned finger and should be excluded from the foreground. However, existing segmentation methods generally do not take this case into consideration; often the remaining ridge regions are falsely classified as foreground, producing spurious features and erroneously including unrecoverable regions in the foreground. This paper proposes a two-step fingerprint segmentation method aimed at removing the remaining ridge regions from the foreground. Non-ridge regions and unrecoverable low quality ridge regions are removed as background in the first step; the foreground produced by the first step is then further analyzed for possible removal of remaining ridge regions. The proposed method proved effective in avoiding the detection of false ridges and in improving minutiae detection.
Why Agricultural Educators Remain in the Classroom
Crutchfield, Nina; Ritz, Rudy; Burris, Scott
2013-01-01
The purpose of this study was to identify and describe factors that are related to agricultural educator career retention and to explore the relationships between work engagement, work-life balance, occupational commitment, and personal and career factors as related to the decision to remain in the teaching profession. The target population for…
Essential Qualities of Math Teaching Remain Unknown
Cavanagh, Sean
2008-01-01
According to a new federal report, the qualities of an effective mathematics teacher remain frustratingly elusive. The report of the National Mathematics Advisory Panel does not show what college math content and coursework are most essential for teachers. While the report offered numerous conclusions about math curriculum, cognition, and…
Entanglement in the Born-Oppenheimer Approximation
Izmaylov, Artur F
2016-01-01
The role of electron-nuclear entanglement on the validity of the Born-Oppenheimer (BO) approximation is investigated. While nonadiabatic couplings generally lead to entanglement and to a failure of the BO approximation, surprisingly the degree of electron-nuclear entanglement is found to be uncorrelated with the degree of validity of the BO approximation. This is because while the degree of entanglement of BO states is determined by their deviation from the corresponding states in the crude BO approximation, the accuracy of the BO approximation is dictated, instead, by the deviation of the BO states from the exact electron-nuclear states. In fact, in the context of a minimal avoided crossing model, extreme cases are identified where an adequate BO state is seen to be maximally entangled, and where the BO approximation fails but the associated BO state remains approximately unentangled. Further, the BO states are found to not preserve the entanglement properties of the exact electron-nuclear eigenstates, and t...
Hierarchical low-rank approximation for high dimensional approximation
Nouy, Anthony
2016-01-07
Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.
Nonlinear Approximation Using Gaussian Kernels
Hangelbroek, Thomas
2009-01-01
It is well-known that non-linear approximation has an advantage over linear schemes in the sense that it provides comparable approximation rates to those of the linear schemes, but to a larger class of approximands. This was established for spline approximations and for wavelet approximations, and more recently for homogeneous radial basis function (surface spline) approximations. However, no such results are known for the Gaussian function. The crux of the difficulty lies in the necessity to vary the tension parameter in the Gaussian function spatially according to local information about the approximand: error analysis of Gaussian approximation schemes with varying tension is, by and large, an elusive target for approximators. We introduce and analyze in this paper a new algorithm for approximating functions using translates of Gaussian functions with varying tension parameters. Our scheme is sophisticated in that it employs Gaussians with varying tensions even locally, and that it resolves local ...
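The basic ingredient, approximation by translates of a Gaussian with a tension (width) parameter, can be sketched with a fixed tension; this toy quasi-interpolation example is illustrative only, whereas the paper's scheme varies the tension locally:

```python
# Weighted average of samples with Gaussian weights of fixed width h
# (Nadaraya-Watson form) -- a fixed-tension sketch.
from math import exp

def gaussian_estimate(nodes, values, x, h):
    """Estimate f(x) from samples (nodes, values) with Gaussian weights."""
    weights = [exp(-((x - xj) / h) ** 2) for xj in nodes]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

nodes = [j / 10 for j in range(11)]          # samples of f(x) = x^2 on [0, 1]
values = [x * x for x in nodes]
approx = gaussian_estimate(nodes, values, x=0.5, h=0.05)  # close to 0.25
```

With a small fixed tension the weights concentrate on the nearest node, so the estimate tracks the sampled function; choosing the tension well everywhere at once is precisely the difficulty the abstract describes.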
Remaining Phosphorus Estimate Through Multiple Regression Analysis
M. E. ALVES; A. LAVORENTI
2006-01-01
The remaining phosphorus (Prem), the P concentration that remains in solution after shaking soil with 0.01 mol L-1 CaCl2 containing 60 μg mL-1 P, is a very useful index for studies related to the chemistry of variable charge soils. Although the Prem determination is a simple procedure, the possibility of estimating accurate values of this index from easily and/or routinely determined soil properties can be very useful for practical purposes. The present research evaluated Prem estimation through multiple regression analysis in which routinely determined soil chemical data, soil clay content and soil pH measured in 1 mol L-1 NaF (pHNaF) figured as Prem predictor variables. Prem can be estimated with acceptable accuracy using the above-mentioned approach, and pHNaF not only substitutes for clay content as a predictor variable but also confers more accuracy to the Prem estimates.
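The kind of multiple regression model described above can be sketched via the normal equations; the two predictors and the data below are synthetic placeholders (an exactly linear response, so the fit recovers the coefficients), not the study's soil data:

```python
# Multiple linear regression via the normal equations (X^T X) beta = X^T y.

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in range(2, -1, -1):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

# synthetic observations: y = 1 + 2*clay + 3*pH (exact linear response)
rows = [(clay, ph) for clay in (0.1, 0.3, 0.5) for ph in (4.0, 5.0, 6.0)]
X = [[1.0, c, p] for c, p in rows]          # design matrix with intercept
y = [1 + 2 * c + 3 * p for c, p in rows]
XtX = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(3)]
       for i in range(3)]
Xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(3)]
beta = solve3(XtX, Xty)                      # ~ [1.0, 2.0, 3.0]
```

In practice the response is noisy and the recovered coefficients carry standard errors; the point here is only the mechanics of fitting several predictors at once.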
Contact allergy to rubber accelerators remains prevalent
Schwensen, J F; Menné, T; Johansen, J D
2016-01-01
INTRODUCTION: Chemicals used for the manufacturing of rubber are known causes of allergic contact dermatitis on the hands. Recent European studies have suggested a decrease in thiuram contact allergy. Moreover, while an association with hand dermatitis is well established, we have recently observed ... 35.2% (19/54) and 35.4% (17/48) of the cases respectively. CONCLUSION: Contact allergy to rubber accelerators remains prevalent. Clinicians should be aware of the hitherto unexplored clinical association with facial dermatitis.
[Professional confidentiality: speak out or remain silent? ].
Daubigney, Jean-claude
2014-01-01
People who work with children, in their daily tasks, must choose whether to disclose information entrusted to them. However, they are subject to the law, which authorises or imposes speaking out or remaining silent. In terms of ethics, they can seek the best possible response while respecting professional secrecy when meeting an individual, in a situation, in a place or at a particular time. They must then take responsibility for that decision.
Terminology for houses and house remains
Rosberg, Karin
2013-01-01
In order to obtain lucidity, it is essential to choose adequate terminology when speaking of prehistoric houses. The understanding of house construction requires a terminology with a focus on construction. Very often, archaeologists instead use a terminology with a focus on the remains, and use an inadequate terminology for constructions, indicating that they do not fully consider how the constructions work. The article presents some suggestions for adequate construction terminology.
Forms of Approximate Radiation Transport
Brunner, G
2002-01-01
Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.
Approximation by Multivariate Singular Integrals
Anastassiou, George A
2011-01-01
Approximation by Multivariate Singular Integrals is the first monograph to illustrate the approximation of multivariate singular integrals to the identity-unit operator. The basic approximation properties of the general multivariate singular integral operators is presented quantitatively, particularly special cases such as the multivariate Picard, Gauss-Weierstrass, Poisson-Cauchy and trigonometric singular integral operators are examined thoroughly. This book studies the rate of convergence of these operators to the unit operator as well as the related simultaneous approximation. The last cha
Why do some cores remain starless ?
Anathpindika, S
2016-01-01
Physical conditions that could render a core starless (in the local Universe) are the subject of investigation in this work. To this end we studied the evolution of five cores: B68, L694-2, L1517B, L1689, and L1521F, a VeLLO. The density profile of a typical core extracted from an earlier simulation developed to study core formation in a molecular cloud was used for the purpose. We demonstrate: (i) cores contracted in a quasistatic manner over a timescale on the order of $\sim 10^{5}$ years; those that remained starless did briefly acquire a centrally concentrated density configuration that mimicked the density profile of an unstable Bonnor-Ebert sphere before rebounding; (ii) three of our test cores, viz. L694-2, L1689-SMM16 and L1521F, remained starless despite becoming thermally super-critical. On the contrary, B68 and L1517B remained sub-critical; L1521F collapsed to become a VeLLO only when gas cooling was enhanced by increasing the size of dust grains. This result is robust, for other cores viz. B68, ...
Approximations of fractional Brownian motion
Li, Yuqiang; 10.3150/10-BEJ319
2012-01-01
Approximations of fractional Brownian motion using Poisson processes whose parameter sets have the same dimensions as the approximated processes have been studied in the literature. In this paper, a special approximation to the one-parameter fractional Brownian motion is constructed using a two-parameter Poisson process. The proof involves the tightness and identification of finite-dimensional distributions.
Approximation by planar elastic curves
Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge
2016-01-01
We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.
Eda, Masaki; Higuchi, Hiroyoshi
2004-07-01
Many albatross remains have been found in the Japanese Islands and the surrounding areas, such as Sakhalin and South Korea. These remains are interesting for two reasons: numerous sites from which albatross remains have been found are located in coastal regions of the Far East where no albatrosses have been distributed recently, and there are some sites in which albatross remains represent a large portion of avian remains, although albatrosses are not easily preyed upon by human beings. We collected data on albatross remains from archaeological sites in the Far East regions during the Holocene and arranged the remains geographically, temporally and in terms of quantity. Based on these results, we showed that coastal areas along the Seas of Okhotsk and Japan have rarely been used by albatrosses in Modern times, though formerly there were many albatrosses. We proposed two explanations for the shrinkage of their distributional range: excessive hunting in the breeding areas, and distributional changes of prey for albatrosses.
Functional hand proportion is approximated by the Fibonacci series.
Choo, K W-Q; Quah, W-K; Chang, G-H; Chan, J Y
2012-08-01
The debatable relationship of functional human hand proportion with the Fibonacci series has remained an obscure scientific enigma short of clinical interest. The main difficulty of proving such a relationship lies in defining what should constitute true functional proportion. In this study, we re-evaluate this unique relationship using hand flexion creases as anatomical surrogates for the functional axes of joint rotation. Standardised desktop photocopies of palmar views of both hands in full digital extension and abduction were obtained from 100 healthy male volunteers of Chinese ethnicity. The functional axes were represented by the distal digital crease (distal interphalangeal joint, DIPJ), proximal digital crease (proximal interphalangeal joint, PIPJ), as well as the midpoint between the palmar digital and transverse palmar creases (metacarpophalangeal joint, MCPJ). The ratio of DIPJ-Fingertip:PIPJ-DIPJ:MCPJ-PIPJ (p3:p2:p1) was measured by two independent observers and represented as standard deviation about the mean, and then compared to the theoretical ratio of 1:1:2. Our results showed that, for the 2nd to 5th digits, the p2:p3 ratios were 0.97 ± 0.09, 1.10 ± 0.10, 1.04 ± 0.12, and 0.80 ± 0.08, respectively; whilst the p1:p2 ratios were 1.91 ± 0.17, 1.98 ± 0.14, 1.89 ± 0.16, and 2.09 ± 0.24, respectively. When the data were analysed for all digits, they showed a combined p3:p2:p1 ratio of 1:0.98:2.01. In conclusion, our results suggest that functional human hand proportion, as defined by flexion creases, is approximated by the Fibonacci series.
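As a toy illustration of the ratio comparison described above (hypothetical segment lengths, not the study's data), the three inter-crease distances can be normalised to the distal segment and checked against the theoretical 1:1:2:

```python
# Toy illustration (hypothetical lengths in mm, not the study's data):
# normalise the three inter-crease segment lengths so the distal segment
# p3 equals 1, then compare against the theoretical Fibonacci-like 1:1:2.

def hand_ratio(p3, p2, p1):
    """Return the (p3, p2, p1) ratio normalised so that p3 = 1."""
    return (1.0, p2 / p3, p1 / p3)

p3, p2, p1 = 24.0, 26.4, 48.2          # hypothetical middle-finger segments
ratio = hand_ratio(p3, p2, p1)
theoretical = (1.0, 1.0, 2.0)
deviation = [abs(r - t) for r, t in zip(ratio, theoretical)]
print(ratio)
print(deviation)
```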
International Conference Approximation Theory XIV
Schumaker, Larry
2014-01-01
This volume developed from papers presented at the international conference Approximation Theory XIV, held April 7–10, 2013 in San Antonio, Texas. The proceedings contains surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.
Exact constants in approximation theory
Korneichuk, N
1991-01-01
This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are base
The remains of a spinning, hyperbolic encounter
De Vittori, Lorenzo; Gupta, Anuradha; Jetzer, Philippe
2014-01-01
We review a recently proposed approach to construct gravitational wave (GW) polarization states of unbound spinning compact binaries. Through this rather simple method, we are able to include corrections due to the dominant order spin-orbit interactions, in the quadrupolar approximation and in a semi-analytic way. We invoke the 1.5 post-Newtonian (PN) accurate quasi-Keplerian parametrization for the radial part of the dynamics and impose its temporal evolution in the PN accurate polarization states equations. Further, we compute 1PN accurate amplitude corrections for the polarization states of non-spinning compact binaries on hyperbolic orbits. As an interesting application, we perform comparisons with previously available results for both the GW signals in the case of non-spinning binaries and the theoretical prediction for the amplitude of the memory effect on the metric after the hyperbolic passage.
So close: remaining challenges to eradicating polio.
Toole, Michael J
2016-03-14
The Global Polio Eradication Initiative, launched in 1988, is close to achieving its goal. In 2015, reported cases of wild poliovirus were limited to just two countries - Afghanistan and Pakistan. Africa has been polio-free for more than 18 months. Remaining barriers to global eradication include insecurity in areas such as Northwest Pakistan and Eastern and Southern Afghanistan, where polio cases continue to be reported. Hostility to vaccination is variously based on extreme ideologies, as in Pakistan; vaccination fatigue among parents whose children have received more than 15 doses; and misunderstandings about the vaccine's safety and effectiveness, as in Ukraine. A further challenge is continued circulation of vaccine-derived poliovirus in populations with low immunity, with 28 cases reported in 2015 in countries as diverse as Madagascar, Ukraine, Laos, and Myanmar. This paper summarizes the current epidemiology of wild and vaccine-derived poliovirus, and describes the remaining challenges to eradication and innovative approaches being taken to overcome them.
Does hypertension remain after kidney transplantation?
Gholamreza Pourmand
2015-05-01
Hypertension is a common complication of kidney transplantation, with a prevalence of 80%. Studies in adults have shown a high prevalence of hypertension (HTN) in the first three months of transplantation, while this rate is reduced to 50-60% at the end of the first year. HTN remains a major risk factor for cardiovascular diseases, lower graft survival rates and poor function of the transplanted kidney in adults and children. In this retrospective study, medical records of 400 kidney transplantation patients of Sina Hospital were evaluated. Patients were followed monthly for the 1st year, every two months in the 2nd year and every three months after that. In this study 244 (61%) patients were male. Mean ± SD age of recipients was 39.3 ± 13.8 years. In most patients (40.8%) the cause of end-stage renal disease (ESRD) was unknown, followed by HTN (26.3%). A total of 166 (41.5%) patients had been hypertensive before transplantation and 234 (58.5%) had normal blood pressure. Among these 234 individuals, 94 (40.2%) developed post-transplantation HTN. On the other hand, among 166 pre-transplant hypertensive patients, 86 patients (56.8%) remained hypertensive after transplantation. In total, 180 (45%) patients had post-transplantation HTN and 220 patients (55%) did not develop HTN. Based on the findings, the incidence of post-transplantation hypertension is high, and kidney transplantation does not lead to remission of hypertension. On the other hand, hypertension is one of the main causes of ESRD. Thus, early screening for hypertension can prevent kidney damage and reduce further problems in renal transplant recipients.
BDD Minimization for Approximate Computing
Soeken, Mathias; Grosse, Daniel; Chandrasekharan, Arun; Drechsler, Rolf
2016-01-01
We present Approximate BDD Minimization (ABM) as a problem that has application in approximate computing. Given a BDD representation of a multi-output Boolean function, ABM asks whether there exists another function that has a smaller BDD representation but meets a threshold w.r.t. an error metric. We present operators to derive approximated functions and present algorithms to exactly compute the error metrics directly on the BDD representation. An experimental evaluation demonstrates the app...
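The paper computes its error metrics directly on the BDD representation; as a rough illustration of the error-rate idea only (using plain truth tables and hypothetical functions, not BDDs), one can compare a reference Boolean function with a cheaper approximation:

```python
from itertools import product

# Illustration of an *error-rate* metric for approximate computing:
# exhaustively compare a hypothetical reference function with a smaller
# approximate function (feasible only for small input counts; the paper
# instead evaluates such metrics symbolically on BDDs).

def error_rate(f, g, n):
    """Fraction of the 2**n input assignments on which f and g disagree."""
    inputs = list(product((0, 1), repeat=n))
    return sum(f(*x) != g(*x) for x in inputs) / len(inputs)

def ref(a, b, c):       # hypothetical reference: (a AND b) OR c
    return (a & b) | c

def approx(a, b, c):    # smaller approximate function: just c
    return c

print(error_rate(ref, approx, 3))   # disagrees only on a=b=1, c=0 -> 0.125
```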
Tree wavelet approximations with applications
XU Yuesheng; ZOU Qingsong
2005-01-01
We construct a tree wavelet approximation by using a constructive greedy scheme(CGS). We define a function class which contains the functions whose piecewise polynomial approximations generated by the CGS have a prescribed global convergence rate and establish embedding properties of this class. We provide sufficient conditions on a tree index set and on bi-orthogonal wavelet bases which ensure optimal order of convergence for the wavelet approximations encoded on the tree index set using the bi-orthogonal wavelet bases. We then show that if we use the tree index set associated with the partition generated by the CGS to encode a wavelet approximation, it gives optimal order of convergence.
Remaining Useful Lifetime (RUL) - Probabilistic Predictive Model
Ephraim Suhir
2011-01-01
Reliability evaluations and assurances cannot be delayed until the device (system) is fabricated and put into operation. Reliability of an electronic product should be conceived at the early stages of its design; implemented during manufacturing; evaluated (considering customer requirements and the existing specifications) by electrical, optical and mechanical measurements and testing; checked (screened) during manufacturing (fabrication); and, if necessary and appropriate, maintained in the field during the product's operation. A simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve have been assessed. The general concepts are illustrated by numerical examples. The model can be employed, along with other PHM forecasting and inference tools and means, to evaluate and to maintain a high level of reliability (probability of non-failure) of a device (system) at the operation stage of its lifetime.
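A minimal sketch of the RUL idea (illustrative numbers and a constant-failure-rate assumption, not the paper's model): with exponential reliability R(t) = exp(-lam·t), the remaining useful lifetime at a required reliability level is the time at which R(t) falls to that level, and an elevated post-anomaly failure rate shortens it proportionally.

```python
import math

# Minimal RUL sketch under a constant failure rate lam (1/h):
# reliability R(t) = exp(-lam * t); RUL at required reliability r_req
# is the time at which R(t) drops to r_req.

def remaining_useful_life(lam, r_req):
    """Hours until reliability falls to r_req, for failure rate lam (1/h)."""
    return -math.log(r_req) / lam

lam_nominal = 1e-5    # hypothetical failure rate before the anomaly (1/h)
lam_degraded = 5e-5   # elevated rate after degradation is detected (1/h)
r_req = 0.99          # required probability of non-failure

print(remaining_useful_life(lam_nominal, r_req))    # ~1005 h
print(remaining_useful_life(lam_degraded, r_req))   # ~201 h, a shorter RUL
```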
Smart Point Cloud: Definition and Remaining Challenges
Poux, F.; Hallot, P.; Neuville, R.; Billen, R.
2016-10-01
Dealing with coloured point clouds acquired from terrestrial laser scanners, this paper identifies remaining challenges for a new data structure: the smart point cloud. This concept arises from the statement that massive and discretized spatial information from active remote sensing technology is often underused due to data mining limitations. The generalisation of point cloud data, associated with the heterogeneity and temporality of such datasets, is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge through machine learning to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. A review of feature detection, machine learning frameworks and database systems indexed both for mining queries and data visualisation is presented. Based on existing approaches, we propose a new 3-block flexible framework around device expertise, analytic expertise and domain-based reflection. This contribution serves as the first step towards the realisation of a comprehensive smart point cloud data structure.
Turbidite plays' immaturity means big potential remains
Pettingill, H.S. [Repsol Exploracion SA, Madrid (Spain)]
1998-10-05
The international exploration and production industry is increasingly focusing on deepwater plays. Turbidites are not the only reservoir type that occurs in deepwater frontiers, but they are the primary reservoir type of those plays. A worldwide data base assembled from published information on 925 fields and discoveries with deepwater clastic reservoirs (turbidites sensu lato) has been employed to investigate the large-scale exploration and production trends. Coverage of the Former Soviet Union, China, and the Indian subcontinent has been minor, but with the large data base of fields and discoveries from the rest of the world, the broad conclusions should remain valid. This article describes the global turbidite play in terms of: (1) basins of the world where turbidite fields have been discovered; (2) the five largest basins in terms of total discovered resources; and (3) a summary of trap type, which is a critical geological factor in turbidite fields. The second article will summarize a population of the world's 43 largest turbidite fields and discoveries.
Diophantine approximation and automorphic spectrum
Ghosh, Anish; Nevo, Amos
2010-01-01
The present paper establishes quantitative estimates on the rate of Diophantine approximation in homogeneous varieties of semisimple algebraic groups. The estimates established generalize and improve previous ones, and are sharp in a number of cases. We show that the rate of Diophantine approximation is controlled by the spectrum of the automorphic representation, and is thus subject to the generalised Ramanujan conjectures.
Some results in Diophantine approximation
the basic concepts on which the papers build. Among other things, it introduces metric Diophantine approximation, Mahler's approach on algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion of Mahler's problem when considered...
Beyond the random phase approximation
Olsen, Thomas; Thygesen, Kristian S.
2013-01-01
We assess the performance of a recently proposed renormalized adiabatic local density approximation (rALDA) for ab initio calculations of electronic correlation energies in solids and molecules. The method is an extension of the random phase approximation (RPA) derived from time-dependent density...
Uniform approximation by (quantum) polynomials
Drucker, A.; de Wolf, R.
2011-01-01
We show that quantum algorithms can be used to re-prove a classical theorem in approximation theory, Jackson's Theorem, which gives a nearly-optimal quantitative version of Weierstrass's Theorem on uniform approximation of continuous functions by polynomials. We provide two proofs, based respectively ...
Compressible Quasi-geostrophic Convection without the Anelastic Approximation
Calkins, M. A.; Marti, P.; Julien, K. A.
2014-12-01
Fluid compressibility is known to be an important, non-negligible component of the dynamics of many planetary atmospheres and stellar convection zones, yet imposes severe computational constraints on numerical simulations of the compressible Navier-Stokes equations (NSE). An often-employed reduced form of the NSE is the anelastic equations, which maintain fluid compressibility in the form of a depth-varying, adiabatic background state onto which the perturbations cannot feed back. We present the linear theory of compressible rotating convection in a local-area, plane layer geometry. An important dimensionless parameter in convection is the ratio of kinematic viscosity to thermal diffusivity, or the Prandtl number, Pr. It is shown that the anelastic approximation cannot capture the linear instability of gases with Prandtl numbers less than approximately 0.5 in the limit of rapid rotation; the time derivative of the density fluctuation appearing in the conservation of mass equation remains important for these cases and cannot be neglected. An alternative compressible, geostrophically balanced equation set has been derived and preliminary results utilizing this new equation set are presented. Notably, this new set of equations satisfies the Proudman-Taylor theorem on small axial scales even for strongly compressible flows, does not require the flow to be nearly adiabatic, and thus allows for feedback onto the background state.
Global approximation of convex functions
Azagra, D
2011-01-01
We show that for every (not necessarily bounded) open convex subset $U$ of $\\R^n$, every (not necessarily Lipschitz or strongly) convex function $f:U\\to\\R$ can be approximated by real analytic convex functions, uniformly on all of $U$. In doing so we provide a technique which transfers results on uniform approximation on bounded sets to results on uniform approximation on unbounded sets, in such a way that not only convexity and $C^k$ smoothness, but also local Lipschitz constants, minimizers, order, and strict or strong convexity, are preserved. This transfer method is quite general and it can also be used to obtain new results on approximation of convex functions defined on Riemannian manifolds or Banach spaces. We also provide a characterization of the class of convex functions which can be uniformly approximated on $\\R^n$ by strongly convex functions.
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-08-18
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
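The voting arrangement described above can be sketched with three hypothetical approximate circuits, each wrong on a different input combination of a 2-input AND reference, so that the per-input majority always reproduces the reference output:

```python
# Sketch of the voter scheme: three hypothetical approximate circuits for a
# 2-input AND reference, each erring on a *different* input combination, so
# the majority of their outputs matches the reference on every input.

def majority(bits):
    """Majority value of an odd number of 0/1 signals."""
    return 1 if sum(bits) > len(bits) // 2 else 0

def ref(a, b):
    return a & b

def approx1(a, b):      # wrong only on (0, 0)
    return 1 if (a, b) == (0, 0) else a & b

def approx2(a, b):      # wrong only on (0, 1)
    return 1 if (a, b) == (0, 1) else a & b

def approx3(a, b):      # wrong only on (1, 0)
    return 1 if (a, b) == (1, 0) else a & b

for a in (0, 1):
    for b in (0, 1):
        voted = majority([approx1(a, b), approx2(a, b), approx3(a, b)])
        assert voted == ref(a, b)
print("majority output matches the reference on every input")
```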
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-12-22
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
Remaining phosphorus estimated by pedotransfer function
Joice Cagliari
2011-02-01
Although the determination of remaining phosphorus (Prem) is simple, accurate values could also be estimated with a pedotransfer function (PTF), aiming at the additional use of soil analysis data and/or Prem replacement by an even simpler determination. The purpose of this paper was to develop a pedotransfer function to estimate Prem values of soils of the State of São Paulo based on properties with easier or routine laboratory determination. A pedotransfer function was developed by artificial neural networks (ANN) from a database of Prem values, pH values measured in 1 mol L-1 NaF solution (pH NaF), and soil chemical and physical properties of samples collected during soil classification activities carried out in the State of São Paulo by the Agronomic Institute of Campinas (IAC). Furthermore, a pedotransfer function was developed by regressing Prem values against the same predictor variables of the ANN-based PTF. Results showed that Prem values can be calculated more accurately with the ANN-based pedotransfer function with the input variables pH NaF along with the sum of exchangeable bases (SB) and the exchangeable aluminum (Al3+) soil content. In addition, the accuracy of the Prem estimates by the ANN-based PTF was more sensitive to increases in the experimental database size. Although the database used in this study was not comprehensive enough for the establishment of a definitive pedotransfer function for Prem estimation, the results indicate that including Prem and pH NaF measurements among routine soil testing evaluations is promising, in order to provide a greater database for the development of an ANN-based pedotransfer function for accurate Prem estimates from pH NaF, SB, and Al3+ values.
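The regression half of this approach can be sketched with synthetic data and a single predictor (the paper uses several predictors and an ANN; both the pH(NaF) readings and Prem values below are hypothetical):

```python
# Sketch of a regression-based pedotransfer function: fit Prem against
# pH(NaF) by ordinary least squares. Synthetic, exactly linear data for
# illustration only, not the study's measurements.

def ols(xs, ys):
    """Ordinary least squares, one predictor: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

ph_naf = [8.0, 8.5, 9.0, 9.5, 10.0]        # hypothetical pH(NaF) readings
prem   = [42.0, 35.0, 28.0, 21.0, 14.0]    # hypothetical Prem values (mg/L)
b0, b1 = ols(ph_naf, prem)
print(b0, b1)   # the synthetic data are exactly linear: 154.0 -14.0
```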
Ghost Remains After Black Hole Eruption
2009-05-01
NASA's Chandra X-ray Observatory has found a cosmic "ghost" lurking around a distant supermassive black hole. This is the first detection of such a high-energy apparition, and scientists think it is evidence of a huge eruption produced by the black hole. This discovery presents astronomers with a valuable opportunity to observe phenomena that occurred when the Universe was very young. The X-ray ghost, so-called because a diffuse X-ray source has remained after other radiation from the outburst has died away, is in the Chandra Deep Field-North, one of the deepest X-ray images ever taken. The source, a.k.a. HDF 130, is over 10 billion light years away and existed at a time 3 billion years after the Big Bang, when galaxies and black holes were forming at a high rate. "We'd seen this fuzzy object a few years ago, but didn't realize until now that we were seeing a ghost", said Andy Fabian of the Cambridge University in the United Kingdom. "It's not out there to haunt us, rather it's telling us something - in this case what was happening in this galaxy billions of years ago." Fabian and colleagues think the X-ray glow from HDF 130 is evidence for a powerful outburst from its central black hole in the form of jets of energetic particles traveling at almost the speed of light. When the eruption was ongoing, it produced prodigious amounts of radio and X-radiation, but after several million years, the radio signal faded from view as the electrons radiated away their energy. (Figure: Chandra X-ray image of HDF 130.) However, less energetic electrons can still produce X-rays by interacting with the pervasive sea of photons remaining from the Big Bang - the cosmic background radiation. Collisions between these electrons and the background photons can impart enough energy to the photons to boost them into the X-ray energy band. This process produces an extended X-ray source that lasts for another 30 million years or so. "This ghost tells us about the black hole's eruption long after
The Log-Linear Return Approximation, Bubbles, and Predictability
Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten
2012-01-01
We study in detail the log-linear return approximation introduced by Campbell and Shiller (1988a). First, we derive an upper bound for the mean approximation error, given stationarity of the log dividend-price ratio. Next, we simulate various rational bubbles which have explosive conditional ... Finally, we show that a bubble model in which expected returns are constant can explain the predictability of stock returns from the dividend-price ratio that many previous studies have documented.
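For context, the Campbell-Shiller log-linearisation referred to above is standard; in its usual textbook form (stated here for reference, not taken from this paper) it reads:

```latex
% Campbell--Shiller log-linear return approximation:
% p_t = log price, d_t = log dividend, r_{t+1} = log return.
r_{t+1} \;\approx\; k \;+\; \rho\, p_{t+1} \;+\; (1-\rho)\, d_{t+1} \;-\; p_t,
\qquad
\rho \;=\; \frac{1}{1 + e^{\overline{d-p}}},
```

where $\overline{d-p}$ is the mean log dividend-price ratio and $k$ is a linearisation constant; the approximation error vanishes when the dividend-price ratio is constant, which is why stationarity of $d_t - p_t$ governs the error bound mentioned above.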
Improved Approximation for Breakpoint Graph Decomposition and Sorting by Reversals
Rizzi, Romeo; Caprara, Alberto
2002-01-01
... instances of SBR on permutations with n elements. Our result uses the best known approximation algorithms for Stable Set on graphs with maximum degree 4, as well as for Set Packing where the maximum size of a set is 6. Any improvement in the ratio achieved by these approximation algorithms will yield...
Rytov approximation in electron scattering
Krehl, Jonas; Lubk, Axel
2017-06-01
In this work we introduce the Rytov approximation in the scope of high-energy electron scattering with the motivation of developing better linear models for electron scattering. Such linear models play an important role in tomography and similar reconstruction techniques. Conventional linear models, such as the phase grating approximation, have reached their limits in current and foreseeable applications, most importantly in achieving three-dimensional atomic resolution using electron holographic tomography. The Rytov approximation incorporates propagation effects which are the most pressing limitation of conventional models. While predominately used in the weak-scattering regime of light microscopy, we show that the Rytov approximation can give reasonable results in the inherently strong-scattering regime of transmission electron microscopy.
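For context, the Rytov expansion in its standard scattering-theory form (stated here for reference, not specific to this paper) writes the wave as an exponential of a complex phase:

```latex
% Helmholtz equation with scattering potential V, wave written as e^psi:
\left(\nabla^2 + k^2\,(1 + V(\mathbf{r}))\right) u(\mathbf{r}) = 0,
\qquad u = e^{\psi}, \quad \psi \approx \psi_0 + \psi_1 .
% First-order Rytov phase in terms of the first Born scattered field u_B:
\psi_1(\mathbf{r}) = \frac{u_B(\mathbf{r})}{u_0(\mathbf{r})}
\quad\Longrightarrow\quad
u \approx u_0 \exp\!\left(u_B / u_0\right).
```

Because the correction sits in the exponent, accumulated propagation (phase) effects are retained even when the scattered phase is not small, which is the advantage over the Born and phase-grating linearisations discussed above.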
Rollout sampling approximate policy iteration
Dimitrakakis, C.; Lagoudakis, M.G.
2008-01-01
Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a
Approximate common divisors via lattices
Cohn, Henry
2011-01-01
We analyze the multivariate generalization of Howgrave-Graham's algorithm for the approximate common divisor problem. In the m-variable case with modulus N and approximate common divisor of size N^beta, this improves the size of the error tolerated from N^(beta^2) to N^(beta^((m+1)/m)), under a commonly used heuristic assumption. This gives a more detailed analysis of the hardness assumption underlying the recent fully homomorphic cryptosystem of van Dijk, Gentry, Halevi, and Vaikuntanathan. While these results do not challenge the suggested parameters, a 2^sqrt(n) approximation algorithm for lattice basis reduction in n dimensions could be used to break these parameters. We have implemented our algorithm, and it performs better in practice than the theoretical analysis suggests. Our results fit into a broader context of analogies between cryptanalysis and coding theory. The multivariate approximate common divisor problem is the number-theoretic analogue of noisy multivariate polynomial interpolation, and we ...
Approximate Implicitization Using Linear Algebra
Oliver J. D. Barrowclough
2012-01-01
Full Text Available We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD) systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least-squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.
Binary nucleation beyond capillarity approximation
Kalikmanov, V.I.
2010-01-01
Large discrepancies between binary classical nucleation theory (BCNT) and experiments result from adsorption effects and the inability of BCNT, based on the phenomenological capillarity approximation, to treat small clusters. We propose a model aimed at eliminating both of these deficiencies. Adsorption
Weighted approximation with varying weight
Totik, Vilmos
1994-01-01
A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof of the strong asymptotics for some L_p extremal problems on the real line with exponential weights, which, for the case p = 2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L_p extremal problems with varying weights. Applications are given relating to fast decreasing polynomials, the asymptotic behavior of orthogonal polynomials, and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.
Nonlinear approximation with redundant dictionaries
Borup, Lasse; Nielsen, M.; Gribonval, R.
2005-01-01
In this paper we study nonlinear approximation and data representation with redundant function dictionaries. In particular, approximation with redundant wavelet bi-frame systems is studied in detail. Several results for orthonormal wavelets are generalized to the redundant case. In general, for a wavelet bi-frame system the approximation properties are limited by the number of vanishing moments of the system. In some cases this can be overcome by oversampling, but at the price of replacing the canonical expansion by another linear expansion. Moreover, for special non-oversampled wavelet bi-frames we can obtain good approximation properties not restricted by the number of vanishing moments, but again without using the canonical expansion.
Mathematical algorithms for approximate reasoning
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
Improved Approximation for Orienting Mixed Graphs
Gamzu, Iftah
2012-01-01
An instance of the maximum mixed graph orientation problem consists of a mixed graph and a collection of source-target vertex pairs. The objective is to orient the undirected edges of the graph so as to maximize the number of pairs that admit a directed source-target path. This problem has recently arisen in the study of biological networks, and it also has applications in communication networks. In this paper, we identify an interesting local-to-global orientation property. This property enables us to modify the best known algorithms for maximum mixed graph orientation and some of its special structured instances, due to Elberfeld et al. (CPM '11), and obtain improved approximation ratios. We further proceed by developing an algorithm that achieves an even better approximation guarantee for the general setting of the problem. Finally, we study several well-motivated variants of this orientation problem.
Twisted inhomogeneous Diophantine approximation and badly approximable sets
Harrap, Stephen
2010-01-01
For any real pair i, j ≥ 0 with i + j = 1, let Bad(i, j) denote the set of (i, j)-badly approximable pairs. That is, Bad(i, j) consists of irrational vectors x := (x_1, x_2) in R^2 for which there exists a positive constant c(x) such that max {||qx_1||^(-i), ||qx_2||^(-j)} > c(x)/q for all q in N. Building on a result of Kurzweil, a new characterization of the set Bad(i, j) in terms of `well-approximable' vectors in the area of `twisted' inhomogeneous Diophantine approximation is established. In addition, it is shown that Bad^x(i, j), the `twisted' inhomogeneous analogue of Bad(i, j), has full Hausdorff dimension 2 when x is chosen from the set Bad(i, j).
SOME CONVERSE RESULTS ON ONESIDED APPROXIMATION: JUSTIFICATIONS
Wang Jianli; Zhou Songping
2003-01-01
The present paper deals with the best one-sided approximation rate Ẽ_n(f)_{L^p} in L^p spaces for f ∈ C_{2π}. Although it is clear that the estimate Ẽ_n(f)_{L^p} ≤ C‖f‖_{L^p} cannot hold for all f ∈ L^p_{2π} in the case p < ∞, the question whether Ẽ_n(f)_{L^p} ≤ Cω(f, n^{-1})_{L^p} or Ẽ_n(f)_{L^p} ≤ C E_n(f)_{L^p} holds for f ∈ C_{2π} has remained untouched. It is therefore a basic problem to justify one-sided approximation, and the present paper provides an answer that settles this foundation.
Approximation algorithms for some vehicle routing problems
Bazgan, Cristina; Hassin, Refael; Monnot, Jérôme
2005-01-01
We study vehicle routing problems with constraints on the distance traveled by each vehicle or on the number of vehicles. The objective is either to minimize the total distance traveled by vehicles or to minimize the number of vehicles used. We design constant differential approximation algorithms for kVRP. Note that, using the differential bound for METRIC 3VRP, we obtain the randomized standard ratio . This is an improvement of the best-known bound of 2 given by Haimovich et al. (Vehicle Ro...
Approximation algorithm for multiprocessor parallel job scheduling
陈松乔; 黄金贵; 陈建二
2002-01-01
The Pk|fix|Cmax problem is a new scheduling problem based on multiprocessor parallel jobs, and it is proved to be NP-hard when k ≥ 3. This paper focuses on the case k = 3. Some new observations and new techniques for the P3|fix|Cmax problem are offered. The concept of semi-normal schedules is introduced, and a very simple linear-time algorithm, the Semi-normal Algorithm, for constructing semi-normal schedules is developed. With the method of classical Graham list scheduling, a thorough analysis of the optimal schedule on a special instance is provided, which shows that the algorithm is an approximation algorithm with ratio 9/8 for any instance of the P3|fix|Cmax problem, improving the previous best ratio of 7/6 by M. X. Goemans.
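Since the abstract leans on classical Graham list scheduling, a minimal sketch of that baseline may help; this is an illustration on identical machines with made-up job times, not the paper's Semi-normal Algorithm for P3|fix|Cmax:

```python
import heapq

def list_schedule(jobs, m):
    """Graham list scheduling: assign each job, in list order, to the
    machine that currently finishes earliest; return the makespan."""
    finish = [0] * m            # current finish time of each machine
    heapq.heapify(finish)
    for p in jobs:              # p = processing time of the next job
        t = heapq.heappop(finish)
        heapq.heappush(finish, t + p)
    return max(finish)

# Example with hypothetical job times: greedy gives makespan 8,
# while the optimal split {6}, {4, 2}, {3, 2} achieves 6.
print(list_schedule([2, 3, 4, 6, 2], 3))  # → 8
```

For identical machines this greedy rule is the classical (2 − 1/m)-approximation; the 9/8 ratio claimed for P3|fix|Cmax relies on additional structure beyond this sketch.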
Radiocarbon analysis of human remains: a review of forensic applications.
Ubelaker, Douglas H
2014-11-01
Radiocarbon analysis of organic materials, with the comparison of values with those of the post-1950 modern bomb curve, has proven useful in forensic science to help evaluate the antiquity of evidence. Applications are particularly helpful in the study of human remains, especially with those displaying advanced decomposition of soft tissues. Radiocarbon analysis can reveal if the remains relate to the modern, post-1950 era and if so, also provide information needed to evaluate the death and birth date. Sample selection and interpretation of results must be guided by knowledge of the formation and remodeling of different human tissues, as well as contextual information and the approximate age at death of the individual represented. Dental enamel does not remodel and thus captures dietary radiocarbon values at the time of juvenile formation. Most other human tissues do remodel but at differing rates and therefore collectively offer key information relative to the estimation of the death date. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Reinforcement Learning via AIXI Approximation
Veness, Joel; Hutter, Marcus; Silver, David
2010-01-01
This paper introduces a principled approach for the design of a scalable general reinforcement learning agent. This approach is based on a direct approximation of AIXI, a Bayesian optimality notion for general reinforcement learning agents. Previously, it has been unclear whether the theory of AIXI could motivate the design of practical algorithms. We answer this hitherto open question in the affirmative, by providing the first computationally feasible approximation to the AIXI agent. To develop our approximation, we introduce a Monte Carlo Tree Search algorithm along with an agent-specific extension of the Context Tree Weighting algorithm. Empirically, we present a set of encouraging results on a number of stochastic, unknown, and partially observable domains.
Approximate Matching of Hierarchial Data
Augsten, Nikolaus
The goal of this thesis is to design, develop, and evaluate new methods for the approximate matching of hierarchical data represented as labeled trees. In approximate matching scenarios, two items should be matched if they are similar. Computing the similarity between labeled trees is hard since, in addition to the data values, the structure must also be considered. A well-known measure for comparing trees is the tree edit distance; it is computationally expensive and leads to a prohibitively high run time. Our solution for the approximate matching of hierarchical data is pq-grams. ... We formally prove that the pq-gram index can be incrementally updated based on the log of edit operations without reconstructing intermediate tree versions. The incremental update is independent of the data size and scales to a large number of changes in the data. We introduce windowed pq-grams ...
Concept Approximation between Fuzzy Ontologies
(author not listed)
2006-01-01
Fuzzy ontologies are efficient tools for handling fuzzy and uncertain knowledge on the semantic web, but heterogeneity problems arise when seeking interoperability among different fuzzy ontologies. This paper uses concept approximation between fuzzy ontologies, based on instances, to solve these heterogeneity problems. It first proposes an instance selection technique based on instance clustering and weighting to unify the fuzzy interpretations of different ontologies and to reduce the number of instances, increasing efficiency. The paper then reduces the problem of computing the approximations of concepts to the problem of computing the least upper approximations of atomic concepts. It optimizes the search strategy by extending atomic concept sets and defining least upper bounds of concepts to reduce the search space of the problem. An efficient algorithm for finding the least upper bounds of concepts is given.
Diophantine approximation and Dirichlet series
Queffélec, Hervé
2013-01-01
This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...
Approximation methods in probability theory
Čekanavičius, Vydas
2016-01-01
This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.
Approximate Sparse Regularized Hyperspectral Unmixing
Chengzhi Deng
2014-01-01
Sparse regression based unmixing has recently been proposed to estimate the abundance of materials present in hyperspectral image pixels. In this paper, a novel sparse unmixing optimization model based on approximate sparsity, namely approximate sparse unmixing (ASU), is first proposed to perform the unmixing task for hyperspectral remote sensing imagery. A variable-splitting and augmented Lagrangian algorithm is then introduced to tackle the optimization problem. In ASU, approximate sparsity is used as a regularizer for sparse unmixing; it is sparser than the l1 regularizer and much easier to solve than the l0 regularizer. Three simulated and one real hyperspectral images were used to evaluate the performance of the proposed algorithm in comparison to the l1 regularizer. Experimental results demonstrate that the proposed algorithm is more effective and accurate for hyperspectral unmixing than the state-of-the-art l1 regularizer.
Evaluating the remaining strength factor for repaired pipeline
Freire, J.L.F.; Vieira, R.D [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil); Diniz, J.L.A. [Fluke Engenharia Ltda., Macae, RJ (Brazil)
2005-07-01
This paper discusses and brings experimental evidence to the application of the remaining strength factor (RSF) to pipeline that has undergone metal thickness loss due to erosion, corrosion or grinding. The RSF is defined as the ratio between pressures that plastically collapse pipeline segments with and without metal loss (damage), and it may also be extended to damaged pipes that have been repaired with composite sleeves. Data from burst tests performed on undamaged, damaged and damage-repaired pipe specimens are utilized to back up the use of the RSF concept, allowing better insight into the safety factor derived from the application of standard fitness-for-purpose methodologies. The paper concludes by showing that the RSF may be used to establish a quantitative measurement of the effectiveness of a specific repair system applied to damaged pipes. (author)
Transfinite Approximation of Hindman's Theorem
Beiglböck, Mathias
2010-01-01
Hindman's Theorem states that in any finite coloring of the integers, there is an infinite set all of whose finite sums belong to the same color. This is much stronger than the corresponding finite form, stating that in any finite coloring of the integers there are arbitrarily large finite sets with the same property. We extend the finite form of Hindman's Theorem to a "transfinite" version for each countable ordinal, and show that Hindman's Theorem is equivalent to the appropriate transfinite approximation holding for every countable ordinal. We then give a proof of Hindman's Theorem by directly proving these transfinite approximations.
Tree wavelet approximations with applications
(author not listed)
2005-01-01
[1] Baraniuk, R. G., DeVore, R. A., Kyriazis, G., Yu, X. M., Near best tree approximation, Adv. Comput. Math., 2002, 16: 357-373.
[2] Cohen, A., Dahmen, W., Daubechies, I., DeVore, R., Tree approximation and optimal encoding, Appl. Comput. Harmonic Anal., 2001, 11: 192-226.
[3] Dahmen, W., Schneider, R., Xu, Y., Nonlinear functionals of wavelet expansions - adaptive reconstruction and fast evaluation, Numer. Math., 2000, 86: 49-101.
[4] DeVore, R. A., Nonlinear approximation, Acta Numer., 1998, 7: 51-150.
[5] Davis, G., Mallat, S., Avellaneda, M., Adaptive greedy approximations, Constr. Approx., 1997, 13: 57-98.
[6] DeVore, R. A., Temlyakov, V. N., Some remarks on greedy algorithms, Adv. Comput. Math., 1996, 5: 173-187.
[7] Kashin, B. S., Temlyakov, V. N., Best m-term approximations and the entropy of sets in the space L1, Mat. Zametki (in Russian), 1994, 56: 57-86.
[8] Temlyakov, V. N., The best m-term approximation and greedy algorithms, Adv. Comput. Math., 1998, 8: 249-265.
[9] Temlyakov, V. N., Greedy algorithm and m-term trigonometric approximation, Constr. Approx., 1998, 14: 569-587.
[10] Hutchinson, J. E., Fractals and self similarity, Indiana Univ. Math. J., 1981, 30: 713-747.
[11] Binev, P., Dahmen, W., DeVore, R. A., Petruchev, P., Approximation classes for adaptive methods, Serdica Math. J., 2002, 28: 1001-1026.
[12] Gilbarg, D., Trudinger, N. S., Elliptic Partial Differential Equations of Second Order, Berlin: Springer-Verlag, 1983.
[13] Ciarlet, P. G., The Finite Element Method for Elliptic Problems, New York: North Holland, 1978.
[14] Birman, M. S., Solomiak, M. Z., Piecewise polynomial approximation of functions of the class W^α_p, Mat. Sb., 1967, 73: 295-317.
[15] DeVore, R. A., Lorentz, G. G., Constructive Approximation, New York: Springer-Verlag, 1993.
[16] DeVore, R. A., Popov, V., Interpolation of Besov spaces, Trans. Amer. Math. Soc., 1988, 305: 397-414.
[17] DeVore, R., Jawerth, B., Popov, V., Compression of wavelet decompositions, Amer. J. Math., 1992, 114: 737-785.
[18] Storozhenko, E.
Sr Isotopes and human skeletal remains, improving a methodological approach in migration studies
Solis Pichardo, G.; Schaaf, P. E.; Hernandez, T.; Horn, P.; Manzanilla, L. R.
2013-12-01
Assessing the mobility of ancient humans is a major issue for anthropologists. Sr isotopes are widely used in the anthropological sciences to trace human migration histories from ancient burials. Sr in bone approximately reflects the isotopic composition of the geological region where the person lived before death, whereas the Sr isotopic system in tooth enamel is thought to remain a closed system and thus to conserve the isotope ratio acquired during childhood. A comparison of the 87Sr/86Sr ratios found in tooth enamel and in bone is performed to determine whether the human skeletal remains belonged to a local individual or to a migrant. Until now, tooth enamel was considered to be less sensitive to secondary Sr contamination due to its higher crystallinity and the larger size of the biogenic apatites in comparison to those in bone and dentine. In the past, enamel as well as bone material was powdered, dissolved, and analyzed by thermal ionization mass spectrometry (TIMS). In this contribution we show, however, that simple dissolution of enamel frequently yields erroneous results. Tooth enamel is often affected by secondary strontium contamination processes such as caries or diagenetic and environmental input, which can change the original isotopic composition. To avoid these problems we introduced a pre-treatment and a three-step leaching procedure for enamel samples. Leaching is carried out with acetic acid of different concentrations, yielding two leachates and one residue for each sample. Frequently the 87Sr/86Sr results of the three leachates display different values, confirming that secondary contamination did occur. Several examples from Teotihuacan, central Mexico, demonstrate that enamel 87Sr/86Sr without leaching can show correct biogenic values, but there is also a considerable probability for these values to represent a mixture of original and secondary Sr without significance for migration reconstructions. Only the residue value is interpreted by us as the representative ratio for
WKB Approximation in Noncommutative Gravity
Maja Buric
2007-12-01
We consider the quasi-commutative approximation to a noncommutative geometry defined as a generalization of the moving frame formalism. The relation which exists between noncommutativity and geometry is used to study the properties of the high-frequency waves on the flat background.
Approximation properties of haplotype tagging
Dreiseitl Stephan
2006-01-01
Background: Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results: It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n^2 - n)/2) for n haplotypes, and not approximable within (1 - ε) ln(n/2) for any ε > 0 unless NP ⊆ DTIME(n^(log log n)). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m - p + 1)(n^2 - n)/2) ≤ O(m(n^2 - n)), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion: The haplotype tagging problem is hard but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single-processor machine. Hence, significant improvement in the computational effort expended can only be expected if the computation is distributed and done in parallel.
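The ln-type guarantee quoted above has the flavor of greedy set cover. As a hedged sketch (illustrative toy data, not the paper's implementation), the greedy rule that achieves a 1 + ln(·) bound looks like:

```python
def greedy_cover(universe, sets):
    """Greedy set cover: repeatedly pick the set covering the most
    still-uncovered elements; achieves a (1 + ln n)-approximation."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # index of the set covering the most uncovered elements
        i = max(range(len(sets)), key=lambda j: len(sets[j] & uncovered))
        if not sets[i] & uncovered:
            raise ValueError("universe cannot be covered")
        chosen.append(i)
        uncovered -= sets[i]
    return chosen

# Toy instance: indices of the picked "tag" sets (made-up data)
print(greedy_cover({1, 2, 3, 4, 5},
                   [{1, 2}, {2, 3, 4}, {4, 5}, {1, 3, 5}]))  # → [1, 3]
```

In the tagging setting, one would take the universe to be the haplotype pairs to distinguish and each set the pairs a candidate SNP separates; that mapping is an assumption of this sketch.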
Truthful approximations to range voting
Filos-Ratsika, Aris; Miltersen, Peter Bro
We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare...
Approximate Reasoning with Fuzzy Booleans
Broek, van den P.M.; Noppen, J.A.R.
2004-01-01
This paper introduces, in analogy to the concept of fuzzy numbers, the concept of fuzzy booleans, and examines approximate reasoning with the compositional rule of inference using fuzzy booleans. It is shown that each set of fuzzy rules is equivalent to a set of fuzzy rules with singleton crisp ante
Rational approximation of vertical segments
Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte
2007-08-01
In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.
Approximation on the complex sphere
Alsaud, Huda; Kushpel, Alexander; Levesley, Jeremy
2012-01-01
We develop new elements of harmonic analysis on the complex sphere, on the basis of which Bernstein's, Jackson's, and Kolmogorov's inequalities are established. We apply these results to obtain order-sharp estimates of m-term approximations. The results obtained are a synthesis of new results on classical orthogonal polynomials, harmonic analysis on manifolds, and geometric properties of Euclidean spaces.
Pythagorean Approximations and Continued Fractions
Peralta, Javier
2008-01-01
In this article, we will show that the Pythagorean approximations of √2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
Approximation of Surfaces by Cylinders
Randrup, Thomas
1998-01-01
We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...
Approximate Reanalysis in Topology Optimization
Amir, Oded; Bendsøe, Martin P.; Sigmund, Ole
2009-01-01
In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, the integration of an approximate reanalysis procedure into the framework of topology optimization of continuum structures...
Low Rank Approximation in $G_0W_0$ Approximation
Shao, Meiyue; Yang, Chao; Liu, Fang; da Jornada, Felipe H; Deslippe, Jack; Louie, Steven G
2016-01-01
The single particle energies obtained in a Kohn-Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self energy term that properly accounts for dynamic screening of electrons is approximated. The $G_0W_0$ approximation is a widely used technique in which the self energy is expressed as the convolution of a non-interacting Green's function ($G_0$) and a screened Coulomb interaction ($W_0$) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating $W_0$ at multiple frequencies. In this paper, we discuss how the cos...
Approximating Low-Dimensional Coverage Problems
Badanidiyuru, Ashwinkumar; Lee, Hooyeon
2011-01-01
We study the complexity of the maximum coverage problem, restricted to set systems of bounded VC-dimension. Our main result is a fixed-parameter tractable approximation scheme: an algorithm that outputs a (1 - ε)-approximation to the maximum-cardinality union of k sets, in running time O(f(ε, k, d)·poly(n)) where n is the problem size, d is the VC-dimension of the set system, and f(ε, k, d) is exponential in (kd/ε)^c for some constant c. We complement this positive result by showing that the function f(ε, k, d) in the running-time bound cannot be replaced by a function depending only on (ε, d) or on (k, d), under standard complexity assumptions. We also present an improved upper bound on the approximation ratio of the greedy algorithm in special cases of the problem, including when the sets have bounded cardinality and when they are two-dimensional halfspaces. Complementing these positive results, we show that when the sets are four-dimensional halfspaces neither the greedy ...
Approximate Graph Edit Distance in Quadratic Time.
Riesen, Kaspar; Ferrer, Miquel; Bunke, Horst
2015-09-14
Graph edit distance is one of the most flexible and general graph matching models available. The major drawback of graph edit distance, however, is its computational complexity that restricts its applicability to graphs of rather small size. Recently the authors of the present paper introduced a general approximation framework for the graph edit distance problem. The basic idea of this specific algorithm is to first compute an optimal assignment of independent local graph structures (including substitutions, deletions, and insertions of nodes and edges). This optimal assignment is complete and consistent with respect to the involved nodes of both graphs and can thus be used to instantly derive an admissible (yet suboptimal) solution for the original graph edit distance problem in O(n^3) time. For large scale graphs or graph sets, however, the cubic time complexity may still be too high. Therefore, we propose to use suboptimal algorithms with quadratic rather than cubic time for solving the basic assignment problem. In particular, the present paper introduces five different greedy assignment algorithms in the context of graph edit distance approximation. In an experimental evaluation we show that these methods have great potential for further speeding up the computation of graph edit distance while the approximated distances remain sufficiently accurate for graph based pattern classification.
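As a hedged sketch of the quadratic-time idea (a generic greedy row-by-row assignment on a made-up cost matrix, not the authors' exact procedure), an O(n^2) greedy replacement for the cubic optimal assignment step looks like:

```python
def greedy_assignment(cost):
    """Greedy suboptimal assignment: scan rows in order and assign each
    row to its cheapest still-free column, in O(n^2) total time."""
    n = len(cost)
    free_cols = set(range(n))
    matching, total = [], 0
    for i in range(n):
        # cheapest column still available for row i
        j = min(free_cols, key=lambda c: cost[i][c])
        free_cols.remove(j)
        matching.append((i, j))
        total += cost[i][j]
    return matching, total

# Hypothetical node-substitution cost matrix (illustrative values)
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(greedy_assignment(cost))  # → ([(0, 1), (1, 0), (2, 2)], 5)
```

Unlike the Hungarian algorithm, this greedy pass can commit to a locally cheap column and miss the optimal matching, which is exactly the accuracy-versus-speed trade-off the abstract describes.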
The Complexity of Approximately Counting Stable Matchings
Chebolu, Prasad; Martin, Russell
2010-01-01
We investigate the complexity of approximately counting stable matchings in the k-attribute model, where the preference lists are determined by dot products of "preference vectors" with "attribute vectors", or by Euclidean distances between "preference points" and "attribute points". Irving and Leather proved that counting the number of stable matchings in the general case is #P-complete. Counting the number of stable matchings is reducible to counting the number of downsets in a (related) partial order and is interreducible, in an approximation-preserving sense, to a class of problems that includes counting the number of independent sets in a bipartite graph (#BIS). It is conjectured that no FPRAS exists for this class of problems. We show this approximation-preserving interreducibility remains even in the restricted k-attribute setting when k ≥ 3 (dot products) or k ≥ 2 (Euclidean distances). Finally, we show it is easy to count the number of stable matchings in the 1-attribute dot-product ...
Approximate Inference for Wireless Communications
Hansen, Morten
This thesis investigates signal processing techniques for wireless communication receivers. The aim is to improve the performance or reduce the computational complexity of these, where the primary focus area is cellular systems such as Global System for Mobile communications (GSM) (and extensions...... complexity can potentially lead to limited power consumption, which translates into longer battery life-time in the handsets. The scope of the thesis is more specifically to investigate approximate (near-optimal) detection methods that can reduce the computational complexity significantly compared...... to the optimal one, which usually requires an unacceptably high complexity. Some of the treated approximate methods are based on QL-factorization of the channel matrix. In the work presented in this thesis it is proven how the QL-factorization of frequency-selective channels asymptotically provides the minimum...
Hydrogen Beyond the Classic Approximation
Scivetti, I
2003-01-01
The classical nucleus approximation is the most frequently used approach for the resolution of problems in condensed matter physics. However, there are systems in nature where it is necessary to introduce the nuclear degrees of freedom to obtain a correct description of the properties. Examples of this are systems containing hydrogen. In this work, we have studied the resolution of the quantum nuclear problem for the particular case of the water molecule. The Hartree approximation has been used, i.e. we have considered that the nuclei are distinguishable particles. In addition, we have proposed a model to solve the tunneling process, which involves the resolution of the nuclear problem for configurations of the system away from its equilibrium position.
Validity of the eikonal approximation
Kabat, D
1992-01-01
We summarize results on the reliability of the eikonal approximation in obtaining the high energy behavior of a two particle forward scattering amplitude. Reliability depends on the spin of the exchanged field. For scalar fields the eikonal fails at eighth order in perturbation theory, when it misses the leading behavior of the exchange-type diagrams. In a vector theory the eikonal gets the exchange diagrams correctly, but fails by ignoring certain non-exchange graphs which dominate the asymptotic behavior of the full amplitude. For spin--2 tensor fields the eikonal captures the leading behavior of each order in perturbation theory, but the sum of eikonal terms is subdominant to graphs neglected by the approximation. We also comment on the eikonal for Yang-Mills vector exchange, where the additional complexities of the non-abelian theory may be absorbed into Regge-type modifications of the gauge boson propagators.
Approximate Counting of Graphical Realizations.
Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos
2015-01-01
In 1999 Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved in the affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics for counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem: besides the graphical degree sequence, a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations.
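A small, standard building block related to the above is deciding whether a degree sequence is graphical at all. The sketch below uses the classical Erdős-Gallai criterion, a textbook result and not the paper's MCMC method.

```python
# Classical Erdős-Gallai graphicality test (textbook criterion, not the
# paper's MCMC sampler): a nonincreasing degree sequence d_1 >= ... >= d_n
# is realizable by a simple graph iff the degree sum is even and, for
# every k, the k-th partial sum is at most k(k-1) + sum_{i>k} min(d_i, k).
def is_graphical(seq):
    d = sorted(seq, reverse=True)
    n = len(d)
    if not d:
        return True
    if sum(d) % 2 or d[0] >= n or d[-1] < 0:
        return False
    for k in range(1, n + 1):
        if sum(d[:k]) > k * (k - 1) + sum(min(x, k) for x in d[k:]):
            return False
    return True

assert is_graphical([3, 3, 2, 2, 2])       # e.g. a 5-cycle plus one chord
assert not is_graphical([3, 3, 3, 1])      # fails Erdős-Gallai at k = 2
```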
Many Faces of Boussinesq Approximations
Vladimirov, Vladimir A
2016-01-01
The \emph{equations of Boussinesq approximation} (EBA) for an incompressible fluid with inhomogeneous density are analyzed from the viewpoint of asymptotic theory. A systematic scaling shows that there is an infinite number of related asymptotic models. We have divided them into three classes: `poor', `reasonable' and `good' Boussinesq approximations. Each model can be characterized by two parameters $q$ and $k$, where $q = 1, 2, 3, \dots$ and $k = 0, \pm 1, \pm 2, \dots$. Parameter $q$ is related to the `quality' of approximation, while $k$ gives us an infinite set of possible scales of velocity, time, viscosity, \emph{etc.} Increasing $q$ improves the quality of a model, but narrows the limits of its applicability. Parameter $k$ allows us to vary the scales of time, velocity and viscosity and gives us the possibility to consider any initial and boundary conditions. In general, we discover and classify a rich variety of possibilities and restrictions, which are hidden behind the routine use of the Boussinesq...
APPROXIMATE MODELS FOR FLOOD ROUTING
For rapid calculation of the downstream effects of the propagation of floods ... kinematic model and a nonlinear convection-diffusion model are extracted ... immensely to the development of this area of study. ... journal of science and technology, volume 23 no.1 2003 .... of change of flow rate RD (ratio of final normal flow.
Rollout Sampling Approximate Policy Iteration
Dimitrakakis, Christos
2008-01-01
Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a supervised learning problem. This paper proposes variants of an improved policy iteration scheme which addresses the core sampling problem in evaluating a policy through simulation as a multi-armed bandit machine. The resulting algorithm offers performance comparable to that of the previous algorithm, but with significantly less computational effort. An order of magnitude improvement is demonstrated experimentally in two standard reinforcement learning domains: inverted pendulum and mountain-car.
Approximate Deconvolution Reduced Order Modeling
Xie, Xuping; Wang, Zhu; Iliescu, Traian
2015-01-01
This paper proposes a large eddy simulation reduced order model (LES-ROM) framework for the numerical simulation of realistic flows. In this LES-ROM framework, the proper orthogonal decomposition (POD) is used to define the ROM basis and a POD differential filter is used to define the large ROM structures. An approximate deconvolution (AD) approach is used to solve the ROM closure problem and develop a new AD-ROM. This AD-ROM is tested in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient (10^{-3}).
Approximation for Bayesian Ability Estimation.
1987-02-18
The posterior pdfs of the ability and item parameters are obtained by marginalization, as shown in Tsutakawa and Lin; under regularity conditions, the marginal posterior pdf of θ is approximated using the inverse of the Hessian of the log posterior evaluated at the mode. Cited references include a paper on two-way contingency tables (Journal of Educational Statistics, 11, 33-56) and Lindley, D.V. (1980), Approximate Bayesian methods, Trabajos de Estadística, 31.
DeHart, Russell; Smith, Eric; Lakin, John
2015-01-01
The spin period to precession period ratio of a non-axisymmetric spin-stabilized spacecraft, the Advanced Composition Explorer (ACE), was used to estimate the remaining mass and distribution of fuel within its propulsion system. This analysis was undertaken once telemetry suggested that two of the four fuel tanks had no propellant remaining, contrary to pre-launch expectations of the propulsion system performance. Numerical integration of possible fuel distributions was used to calculate moments of inertia for the spinning spacecraft. A Fast Fourier Transform (FFT) of output from a dynamics simulation was employed to relate calculated moments of inertia to spin and precession periods. The resulting modeled ratios were compared to the actual spin period to precession period ratio derived from the effect of post-maneuver nutation angle on sun sensor measurements. A Monte Carlo search was performed to tune free parameters using the observed spin period to precession period ratio over the life of the mission. This novel analysis of spin and precession periods indicates that at the time of launch, propellant was distributed unevenly between the two pairs of fuel tanks, with one pair having approximately 20% more propellant than the other pair. Furthermore, it indicates that the pair of tanks with less fuel had expelled all of its propellant by 2014 and that approximately 46 kg of propellant remains in the other two tanks, an amount that closely matches the operational fuel accounting estimate. Keywords: Fuel Distribution, Moments of Inertia, Precession, Spin, Nutation
Plasma Physics Approximations in Ares
Managan, R. A.
2015-01-08
Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals, F_n(μ/θ), the chemical potential, μ, or ζ = ln(1+e^{μ/θ}), and the temperature, θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for A^α(ζ), A^β(ζ), ζ, f(ζ) = (1 + e^{-μ/θ})F_{1/2}(μ/θ), F_{1/2}'/F_{1/2}, F_c^α, and F_c^β. In each case the relative error of the fit is minimized, since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, i.e. as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
Rational approximations to fluid properties
Kincaid, J. M.
1990-05-01
The purpose of this report is to summarize some results that were presented at the Spring AIChE meeting in Orlando, Florida (20 March 1990). We report on recent attempts to develop a systematic method, based on the technique of rational approximation, for creating mathematical models of real-fluid equations of state and related properties. Equation-of-state models for real fluids are usually created by selecting a function $\tilde{p}(T,\rho)$ that contains a set of parameters $\{\gamma_i\}$; the $\{\gamma_i\}$ are chosen such that $\tilde{p}(T,\rho)$ provides a good fit to the experimental data. (Here $p$ is the pressure, $T$ the temperature, and $\rho$ the density.) In most cases, a nonlinear least-squares numerical method is used to determine $\{\gamma_i\}$. There are several drawbacks to this method: one has essentially to guess what $\tilde{p}(T,\rho)$ should be; the critical region is seldom fit very well; and nonlinear numerical methods are time consuming and sometimes not very stable. The rational approximation approach we describe may eliminate all of these drawbacks. In particular, it lets the data choose the function $\tilde{p}(T,\rho)$, and its numerical implementation involves only linear algorithms.
Refining Approximating Betweenness Centrality Based on Samplings
Ji, Shiyu
2016-01-01
Betweenness Centrality (BC) is an important measure used widely in complex network analysis, such as social network, web page search, etc. Computing the exact BC values is highly time consuming. Currently the fastest exact BC determining algorithm is given by Brandes, taking $O(nm)$ time for unweighted graphs and $O(nm+n^2\\log n)$ time for weighted graphs, where $n$ is the number of vertices and $m$ is the number of edges in the graph. Due to the extreme difficulty of reducing the time complexity of exact BC determining problem, many researchers have considered the possibility of any satisfactory BC approximation algorithms, especially those based on samplings. Bader et al. give the currently best BC approximation algorithm, with a high probability to successfully estimate the BC of one vertex within a factor of $1/\\varepsilon$ using $\\varepsilon t$ samples, where $t$ is the ratio between $n^2$ and the BC value of the vertex. However, some of the algorithmic parameters in Bader's work are not yet tightly boun...
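A minimal sampling-based estimator in the spirit described above, uniform random sources with Brandes-style dependency accumulation, can be sketched as follows. The constants and sample-size bounds of Bader et al. are not reproduced here; the graph and sample count are illustrative.

```python
import random
from collections import deque

def approx_betweenness(adj, samples, seed=0):
    """Estimate betweenness by Brandes-style dependency accumulation from
    `samples` uniformly random sources, scaled by n/samples (unweighted)."""
    rng = random.Random(seed)
    n = len(adj)
    bc = [0.0] * n
    for _ in range(samples):
        s = rng.randrange(n)
        dist = [-1] * n
        sigma = [0] * n
        preds = [[] for _ in range(n)]
        dist[s], sigma[s] = 0, 1
        order, queue = [], deque([s])
        while queue:                       # BFS counting shortest paths
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = [0.0] * n                  # back-propagate dependencies
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return [x * n / samples for x in bc]

# Path graph 0-1-2-3-4: the middle vertex has the largest betweenness.
path = [[1], [0, 2], [1, 3], [2, 4], [3]]
est = approx_betweenness(path, samples=200)
```

Each sampled source contributes its exact single-source dependencies, so the estimator is unbiased; the variance argument that turns this into the per-vertex guarantee quoted in the abstract is in the cited work.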
Dodgson's Rule Approximations and Absurdity
McCabe-Dansted, John C
2010-01-01
With the Dodgson rule, cloning the electorate can change the winner, which Young (1977) considers an "absurdity". Removing this absurdity results in a new rule (Fishburn, 1977) for which we can compute the winner in polynomial time (Rothe et al., 2003), unlike the traditional Dodgson rule. We call this rule DC and introduce two new related rules (DR and D&). Dodgson did not explicitly propose the "Dodgson rule" (Tideman, 1987); we argue that DC and DR are better realizations of the principle behind the Dodgson rule than the traditional rule itself. These rules, especially D&, are also effective approximations to the traditional Dodgson rule. We show that, unlike the rules we have considered previously, the DC, DR and D& scores differ from the Dodgson score by no more than a fixed amount given a fixed number of alternatives, and thus these new rules converge to Dodgson under any reasonable assumption on voter behaviour, including the Impartial Anonymous Culture assumption.
Approximation by double Walsh polynomials
Ferenc Móricz
1992-01-01
We study the rate of approximation by rectangular partial sums, Cesàro means, and de la Vallée Poussin means of double Walsh-Fourier series of a function in a homogeneous Banach space X. In particular, X may be $L^p(I^2)$, where $1 \le p < \infty$ and $I^2 = [0,1] \times [0,1]$, or $C_W(I^2)$, the latter being the collection of uniformly W-continuous functions on $I^2$. We extend the results by Watari, Fine, Yano, Jastrebova, Bljumin, Esfahanizadeh and Siddiqi from univariate to multivariate cases. As by-products, we deduce sufficient conditions for convergence in $L^p(I^2)$-norm and uniform convergence on $I^2$, as well as characterizations of Lipschitz classes of functions. At the end, we raise three problems.
Interplay of approximate planning strategies.
Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P
2015-03-10
Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options."
Approximate reduction of dynamical systems
Tabuada, Paulo; Julius, Agung; Pappas, George J
2007-01-01
The reduction of dynamical systems has a rich history, with many important applications related to stability, control and verification. Reduction of nonlinear systems is typically performed in an exact manner, as is the case with mechanical systems with symmetry, which unfortunately limits the type of systems to which it can be applied. The goal of this paper is to consider a more general form of reduction, termed approximate reduction, in order to extend the class of systems that can be reduced. Using notions related to incremental stability, we give conditions on when a dynamical system can be projected to a lower dimensional space while providing hard bounds on the induced errors, i.e., when it is behaviorally similar to a dynamical system on a lower dimensional space. These concepts are illustrated on a series of examples.
Diophantine approximations and Diophantine equations
Schmidt, Wolfgang M
1991-01-01
"This book by a leading researcher and masterly expositor of the subject studies diophantine approximations to algebraic numbers and their applications to diophantine equations. The methods are classical, and the results stressed can be obtained without much background in algebraic geometry. In particular, Thue equations, norm form equations and S-unit equations, with emphasis on recent explicit bounds on the number of solutions, are included. The book will be useful for graduate students and researchers." (L'Enseignement Mathematique) "The rich Bibliography includes more than hundred references. The book is easy to read, it may be a useful piece of reading not only for experts but for students as well." Acta Scientiarum Mathematicarum
Truthful approximations to range voting
Filos-Ratsika, Aris; Miltersen, Peter Bro
We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare...... maximization in this setting. With m being the number of alternatives, we exhibit a randomized truthful-in-expectation ordinal mechanism implementing an outcome whose expected social welfare is at least an Omega(m^{-3/4}) fraction of the social welfare of the socially optimal alternative. On the other hand, we...... show that for sufficiently many agents and any truthful-in-expectation ordinal mechanism, there is a valuation profile where the mechanism achieves at most an O(m^{-2/3}) fraction of the optimal social welfare in expectation. We get tighter bounds for the natural special case of m = 3...
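For intuition about truthful ordinal mechanisms versus the cardinal optimum, here is a sketch using random dictatorship, a classical truthful-in-expectation ordinal mechanism; it is not the Omega(m^{-3/4}) mechanism of the paper, and the valuation profile is invented for illustration.

```python
from fractions import Fraction

def optimal_welfare(profile):
    """Welfare of the best alternative under full cardinal information."""
    m = len(profile[0])
    return max(sum(v[a] for v in profile) for a in range(m))

def random_dictatorship_welfare(profile):
    """Expected welfare when a uniformly random agent's top alternative wins
    (a classical truthful-in-expectation ordinal mechanism)."""
    n = len(profile)
    total = Fraction(0)
    for v in profile:
        top = max(range(len(v)), key=lambda a: v[a])
        total += sum(u[top] for u in profile)
    return total / n

# Hypothetical normalized valuation profile: rows = agents, cols = alternatives.
profile = [
    [Fraction(1), Fraction(1, 2), Fraction(0)],
    [Fraction(0), Fraction(1), Fraction(1, 2)],
    [Fraction(1), Fraction(0), Fraction(1, 2)],
]
opt = optimal_welfare(profile)             # alternative 0, welfare 2
rd = random_dictatorship_welfare(profile)  # 11/6, strictly below the optimum
```

The mechanism only reads each agent's top choice (ordinal information), which is why no agent can gain by misreporting; the welfare gap against `opt` illustrates the kind of loss the paper's bounds quantify.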
Approximation of Surfaces by Cylinders
Randrup, Thomas
1998-01-01
We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...... projection of the surface onto this plane, a reference curve is determined by use of methods for thinning of binary images. Finally, the cylinder surface is constructed as follows: the directrix of the cylinder surface is determined by a least squares method minimizing the distance to the points...... in the projection within a tolerance given by the reference curve, and the rulings are lines perpendicular to the projection plane. Application of the method in ship design is given....
Analytical approximations for spiral waves
Löber, Jakob, E-mail: jakob@physik.tu-berlin.de; Engel, Harald [Institut für Theoretische Physik, Technische Universität Berlin, Hardenbergstrasse 36, EW 7-1, 10623 Berlin (Germany)
2013-12-15
We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency $\Omega$ and core radius $R_0$. For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent $\Omega(R_+)$ dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius $R_+$ with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.
On quantum and approximate privacy
Klauck, H
2001-01-01
This paper studies privacy in communication complexity. The focus is on quantum versions of the model and on protocols with only approximate privacy against honest players. We show that the privacy loss (the minimum divulged information) in computing a function can be decreased exponentially by using quantum protocols, while the class of privately computable functions (i.e., those with privacy loss 0) is not increased by quantum protocols. Quantum communication combined with small information leakage on the other hand makes certain functions computable (almost) privately which are not computable using quantum communication without leakage or using classical communication with leakage. We also give an example of an exponential reduction of the communication complexity of a function by allowing a privacy loss of o(1) instead of privacy loss 0.
Magnus approximation in neutrino oscillations
Acero, Mario A.; Aguilar-Arevalo, Alexis A.; D'Olivo, J. C.
2011-04-01
Oscillations between active and sterile neutrinos remain as an open possibility to explain some anomalous experimental observations. In a four-neutrino (three active plus one sterile) mixing scheme, we use the Magnus expansion of the evolution operator to study the evolution of neutrino flavor amplitudes within the Earth. We apply this formalism to calculate the transition probabilities from active to sterile neutrinos with energies of the order of a few GeV, taking into account the matter effect for a varying terrestrial density.
Oscillatory convection and limitations of the Boussinesq approximation
Wood, Toby S
2016-01-01
We determine the asymptotic conditions under which the Boussinesq approximation is valid for oscillatory convection in a rapidly rotating fluid. In the astrophysically relevant parameter regime of small Prandtl number, we show that the Boussinesq prediction for the onset of convection is valid only under much more restrictive conditions than those that are usually assumed. In the case of an ideal gas, we recover the Boussinesq results only if the ratio of the domain height to a typical scale height is much smaller than the Prandtl number. This requires an extremely shallow domain in the astrophysical parameter regime. Other commonly-used "sound-proof" approximations generally perform no better than the Boussinesq approximation. The exception is a particular implementation of the pseudo-incompressible approximation, which predicts the correct instability threshold beyond the range of validity of the Boussinesq approximation.
IONIS: Approximate atomic photoionization intensities
Heinäsmäki, Sami
2012-02-01
A program to compute relative atomic photoionization cross sections is presented. The code applies the output of the multiconfiguration Dirac-Fock method for atoms in the single active electron scheme, by computing the overlap of the bound electron states in the initial and final states. The contribution from the single-particle ionization matrix elements is assumed to be the same for each final state. This method gives rather accurate relative ionization probabilities provided the single-electron ionization matrix elements do not depend strongly on energy in the region considered. The method is especially suited for open shell atoms where electronic correlation in the ionic states is large.
Program summary
Program title: IONIS
Catalogue identifier: AEKK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1149
No. of bytes in distributed program, including test data, etc.: 12 877
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Workstations
Operating system: GNU/Linux, Unix
Classification: 2.2, 2.5
Nature of problem: Photoionization intensities for atoms.
Solution method: The code applies the output of the multiconfiguration Dirac-Fock codes Grasp92 [1] or Grasp2K [2] to compute approximate photoionization intensities. The intensity is computed within the one-electron transition approximation and by assuming that the sum of the single-particle ionization probabilities is the same for all final ionic states.
Restrictions: The program gives nonzero intensities for those transitions where only one electron is removed from the initial configuration(s). Shake-type many-electron transitions are not computed. The ionized shell must be closed in the initial state.
Running time: Few seconds for a
Approximate analytic solutions to the NPDD: Short exposure approximations
Close, Ciara E.; Sheridan, John T.
2014-04-01
There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low-intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.
Non-perturbative QCD amplitudes in quenched and eikonal approximations
Fried, H. M.; Grandou, T.; Sheu, Y.-M.
2014-05-01
Even though approximated, strong coupling non-perturbative QCD amplitudes remain very difficult to obtain. In this article, in eikonal and quenched approximations at least, physical insights are presented that rely on the newly-discovered property of effective locality. The present article also provides a more rigorous mathematical basis for the crude approximations used in the previous derivation of the binding potential of quarks and nucleons. Furthermore, the techniques of Random Matrix calculus along with Meijer G-functions are applied to analyze the generic structure of fermionic amplitudes in QCD.
Spectral ratio method for measuring emissivity
Watson, K.
1992-01-01
The spectral ratio method is based on the concept that although the spectral radiances are very sensitive to small changes in temperature, the ratios are not. Only an approximate estimate of temperature is required; thus, for example, we can determine the emissivity ratio to an accuracy of 1% with a temperature estimate that is only accurate to 12.5 K. Selecting the maximum value of the channel brightness temperatures gives an unbiased estimate. Laboratory and field spectral data are easily converted into spectral ratio plots. The ratio method is limited by system signal-to-noise and spectral bandwidth. The images can appear quite noisy because ratios enhance high frequencies and may require spatial filtering. Atmospheric effects tend to rescale the ratios and require using an atmospheric model or a calibration site. © 1992.
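The temperature-insensitivity claim can be illustrated numerically with the Planck function. The band wavelengths, emissivities and temperatures below are invented for illustration and are not the paper's data; the sketch only shows that a band ratio tolerates a 12.5 K temperature error far better than a single band does.

```python
import math

# Illustrative demonstration (not the paper's data): two nearby thermal-IR
# bands with hypothetical wavelengths, emissivities and temperature.
H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light-speed, Boltzmann (SI)

def planck(lam, T):
    """Blackbody spectral radiance B(lambda, T)."""
    return (2.0 * H * C**2 / lam**5) / (math.exp(H * C / (lam * K * T)) - 1.0)

lam1, lam2 = 10.8e-6, 11.0e-6              # band wavelengths, m
e1, e2, T_true = 0.95, 0.90, 300.0         # emissivities and true temperature
L1 = e1 * planck(lam1, T_true)             # "measured" radiances
L2 = e2 * planck(lam2, T_true)

T_est = T_true + 12.5                      # temperature estimate off by 12.5 K
single_band = L1 / planck(lam1, T_est)     # emissivity from one band alone
ratio_est = (L1 / planck(lam1, T_est)) / (L2 / planck(lam2, T_est))

single_err = abs(single_band - e1) / e1            # large: very T-sensitive
ratio_err = abs(ratio_est - e1 / e2) / (e1 / e2)   # small: the ratio cancels
                                                   # most of the T-dependence
```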
Leora Halpern Lanz
2015-08-01
The hotel marketing budget, typically amounting to approximately 4-5% of an asset's total revenue, must remain fluid, so that the marketing director can constantly adapt the marketing tools to meet consumer communications methods and demands. This article suggests how an independent hotel can maximize its marketing budget by using multiple channels and strategies.
Randomized approximate nearest neighbors algorithm.
Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir
2011-09-20
We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in R^d, the algorithm attempts to find k nearest neighbors for each of x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·(log d) + k·(d + log k)·(log N)) + N·k²·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point x ∈ R^d. The cost of each such query is proportional to T·(d·(log d) + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples.
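One ingredient of the scheme, the use of random rotations, relies on rotations being isometries and therefore preserving nearest-neighbor relations exactly. The short sketch below checks this in two dimensions; it is an illustration of that property only, not the full divide-and-conquer algorithm, and the point set is randomly generated.

```python
import math
import random

def rotate(p, theta):
    """Rotate a 2-D point about the origin (an exact isometry)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def nearest(points, i):
    """Brute-force index of the nearest neighbor of points[i]."""
    return min((j for j in range(len(points)) if j != i),
               key=lambda j: (points[i][0] - points[j][0]) ** 2
                           + (points[i][1] - points[j][1]) ** 2)

rng = random.Random(1)
pts = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(50)]
theta = rng.uniform(0, 2 * math.pi)
rotated = [rotate(p, theta) for p in pts]

# Rotations preserve distances, so every nearest-neighbor relation survives;
# the full algorithm exploits this to re-randomize coordinates per iteration.
preserved = all(nearest(pts, i) == nearest(rotated, i) for i in range(len(pts)))
```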
Donnet, Jean-Baptiste; Ridaoui, Hassan; Balard, Henri; Barthel, Herbert; Gottschalk-Gaudig, Torsten
2008-09-01
The interactions of water, hexamethyldisiloxane, and dodecane with pyrogenic silica samples, modified by a controlled partial silylation with dimethyldichlorosilane, were studied by microcalorimetry and wettability measurements. The samples, having a coverage ratio lower than the dimethylsilyl (DMS) monolayer capacity (approximately 2.6 DMS/nm²), show a regular and linear decrease of their heat of immersion into water with the coverage ratio, correlating with the decrease of residual silanol groups. Two critical coverage ratios were evidenced, at about 25 and 50% of the DMS monolayer capacity, the grafted silica remaining hydrophilic below 25% and being strongly hydrophobic beyond. The heat of immersion into hexamethyldisiloxane decreases until 50% of the DMS monolayer, whereas that of dodecane remains independent of the grafting ratio. This study demonstrates that the water/residual free silica surface plays the main role in the stabilization of the W/O Pickering emulsions.
Mineralized Remains of Morphotypes of Filamentous Cyanobacteria in Carbonaceous Meteorites
Hoover, Richard B.
2005-01-01
…investigations of freshly fractured interior surfaces of carbonaceous meteorites, terrestrial rocks, and recent microbial extremophiles and filamentous cyanobacteria. These studies have resulted in the detection, in several carbonaceous meteorites, of the mineralized remains of a wide variety of complex filamentous trichomic microorganisms. These embedded forms are consistent in size and microstructure with well-preserved morphotypes of mat-forming filamentous trichomic cyanobacteria and the degraded remains of microfibrils of cyanobacterial sheaths. We present the results of comparative imaging studies and EDAX elemental analyses of recent cyanobacteria (e.g. Calothrix, Oscillatoria, and Lyngbya) that are similar in size, morphology and microstructure to morphotypes found embedded in meteorites. EDAX elemental studies reveal that forms found in carbonaceous meteorites often have highly carbonized sheaths in close association with permineralized filaments, trichomes and microbial cells. Ratios of critical bioelements (C:O, C:N, C:P, and C:S) reveal dramatic differences between microfossils in Earth rocks and meteorites and in filaments, trichomes, hormogonia, and cells of recent cyanobacteria.
Gluon transport equations with condensate in the small angle approximation
Blaizot, Jean-Paul [Institut de Physique Théorique (IPhT), CNRS/URA2306, CEA Saclay, F-91191 Gif-sur-Yvette (France); Liao, Jinfeng [Physics Department and Center for Exploration of Energy and Matter, Indiana University, 2401 N Milo B. Sampson Lane, Bloomington, IN 47408 (United States); RIKEN BNL Research Center, Bldg. 510A, Brookhaven National Laboratory, Upton, NY 11973 (United States)
2016-05-15
We derive the set of kinetic equations that control the evolution of gluons in the presence of a condensate. We show that the dominant singularities remain logarithmic when the scattering involves particles in the condensate. This allows us to define a consistent small angle approximation.
Obtaining exact value by approximate computations
Jing-zhong ZHANG; Yong FENG
2007-01-01
Numerical approximate computations can solve large and complex problems fast; they have the advantage of high efficiency. However, they only give approximate results, whereas exact results are needed in some fields. There is a gap between approximate computations and exact results. In this paper, we build a bridge by which exact results can be obtained from numerical approximate computations.
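One concrete instance of such a bridge, assuming the exact value is rational with a modestly bounded denominator, is rational-number reconstruction from a floating-point approximation via continued fractions, available in Python as `Fraction.limit_denominator`. The paper's method is more general; this sketch only illustrates the principle, and the bound `max_den` is our illustrative parameter.

```python
from fractions import Fraction

def exact_from_approx(x, max_den=10**6):
    """Recover an exact rational value from a floating-point approximation,
    assuming the true value is rational with denominator <= max_den.
    Uses the continued-fraction expansion of x under the hood."""
    return Fraction(x).limit_denominator(max_den)

# e.g. a numeric solver returns 0.3333333333 for a root known to be rational
print(exact_from_approx(0.3333333333))  # → 1/3
```

The reconstruction succeeds whenever the approximation error is small compared with 1/max_den², which is the classical gap condition behind such bridges.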
Fuzzy Set Approximations in Fuzzy Formal Contexts
Mingwen Shao; Shiqing Fan
2006-01-01
In this paper, a kind of multi-level formal concept is introduced. Based on the proposed multi-level formal concept, we present a pair of rough fuzzy set approximations within fuzzy formal contexts. With the proposed rough fuzzy set approximations, we can approximate a fuzzy set at different precision levels. We discuss the properties of the proposed approximation operators in detail.
Approximation Preserving Reductions among Item Pricing Problems
Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei
When a store sells items to customers, the store wishes to set the prices of the items to maximize its profit. Intuitively, if the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store, so it is hard for the store to decide on the prices of the items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items; assume also that each item i ∈ V has production cost di and each customer ej ∈ E has valuation vj on the bundle ej ⊆ V of items. When the store sells an item i ∈ V at price ri, the profit for item i is pi = ri - di. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most previous work, the item pricing problem was considered under the assumption that pi ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of a “loss-leader” and showed that the seller can obtain more total profit when pi < 0 is allowed than when it is not. In this paper, we derive approximation preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratios.
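The profit model described above can be made concrete with a small evaluator. The sketch below assumes single-minded customers who buy their whole bundle exactly when its total price is within their valuation; the function name and the example numbers are illustrative, not taken from the paper.

```python
def total_profit(prices, costs, customers):
    """Store profit for given item prices.
    customers: list of (bundle, valuation); a customer buys their whole
    bundle iff its total price does not exceed their valuation."""
    profit = 0
    for bundle, valuation in customers:
        if sum(prices[i] for i in bundle) <= valuation:
            # per-item profit is r_i - d_i, possibly negative (loss-leader)
            profit += sum(prices[i] - costs[i] for i in bundle)
    return profit

costs = {0: 5, 1: 5}
customers = [({0, 1}, 12), ({0}, 2)]
print(total_profit({0: 4, 1: 8}, costs, customers))  # → 2
```

Note that the evaluator permits negative per-item profits (loss-leader prices ri < di); whether such prices help depends on the customer set, which is the point of the loss-leader comparison above.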
Robust Generalized Low Rank Approximations of Matrices.
Shi, Jiarong; Yang, Wei; Zheng, Xiuyun
2015-01-01
In recent years, the intrinsic low rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise and complete the missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims its superiority on computation time and compression ratio over the SVD. However, GLRAM is very sensitive to sparse large noise or outliers, and its robust version has not been explored or solved yet. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an l1-norm optimization problem which minimizes the l1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this l1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to a general tensor case. Finally, the extensive experiments on synthetic data show that it is possible for RGLRAM to exactly recover both the low rank and the sparse components while it may be difficult for previous state-of-the-art algorithms. We also discuss three issues on RGLRAM: the sensitivity to initialization, the generalization ability and the relationship between the running time and the size/number of matrices. Moreover, the experimental results on images of faces with large corruptions illustrate that RGLRAM obtains better denoising and compression performance than the other methods.
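The building block at the heart of ALM/ADMM schemes for l1-norm minimization of this kind is the elementwise soft-thresholding operator, the proximal operator of the l1-norm. The sketch below shows that standard operator in isolation; it is not the paper's full RGLRAM iteration.

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of tau * ||.||_1: shrink each entry toward zero
    by tau and clip at zero. This is the l1 subproblem solver that an
    ALM scheme applies to the sparse error term at every iteration."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

print(soft_threshold(np.array([-2.0, 0.3, 1.5]), 0.5))
```

Small entries (plausible noise) are zeroed out while large entries (plausible sparse corruptions) are kept, shrunk by tau; this is why l1-based schemes tolerate sparse large noise that least-squares methods like plain GLRAM do not.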
Nonlinear approximation with dictionaries I. Direct estimates
Gribonval, Rémi; Nielsen, Morten
2004-01-01
We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation with algorithmic constraints: thresholding and Chebychev approximation classes are studied, respectively. We consider embeddings of the Jackson type (direct estimates) of sparsity spaces into the mentioned approximation classes. General direct estimates are based on the geometry of the Banach space, and we prove…
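The thresholding class mentioned above is built from a simple algorithmic constraint: keep the m largest-magnitude coefficients of an expansion and discard the rest. A minimal sketch (for an orthonormal basis, where thresholding coincides with best m-term approximation in the l2 sense; for redundant dictionaries the two classes can differ, which is the point of the comparison):

```python
import numpy as np

def m_term_threshold(coeffs, m):
    """Thresholding m-term approximation: keep the m largest-magnitude
    coefficients, zero the rest."""
    c = np.asarray(coeffs, dtype=float)
    keep = np.argsort(np.abs(c))[-m:]   # indices of the m largest |c_i|
    out = np.zeros_like(c)
    out[keep] = c[keep]
    return out

c = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
print(m_term_threshold(c, 2))  # keeps -3.0 and 2.0, zeroes the rest
```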
A coefficient average approximation towards Gutzwiller wavefunction formalism
Liu, Jun; Yao, Yongxin; Wang, Cai-Zhuang; Ho, Kai-Ming
2015-06-01
The Gutzwiller wavefunction is a physically well-motivated trial wavefunction for describing correlated electron systems. In this work, a new approximation is introduced to facilitate the evaluation of the expectation value of any operator within the Gutzwiller wavefunction formalism. The basic idea is to make use of a specially designed average over Gutzwiller wavefunction coefficients expanded in the many-body Fock space to approximate the ratio of expectation values between a Gutzwiller wavefunction and its underlying noninteracting wavefunction. To check against the standard Gutzwiller approximation (GA), we test the performance of the new approximation on single-band systems and find quite interesting properties: on finite systems it gives superior performance over GA, while on infinite systems it asymptotically approaches GA. Analytic analysis together with numerical tests is provided to support this claimed asymptotic behavior. Finally, possible improvements of the approximation and its generalization to multiband systems are illustrated and discussed.
Tănase Alin-Eliodor
2014-08-01
This article focuses on computing techniques for financial key ratios starting from trial balance data. Activity, liquidity, solvency, and profitability key ratios are presented, together with a three-step computing methodology based on a trial balance.
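As a minimal illustration of computing such ratios from trial-balance totals (the account names and figures below are invented for the example, not taken from the article):

```python
def key_ratios(tb):
    """Compute a few standard financial ratios from trial-balance totals.
    Keys are illustrative; map them to your own chart of accounts."""
    return {
        "current_ratio": tb["current_assets"] / tb["current_liabilities"],
        "debt_to_equity": tb["total_liabilities"] / tb["equity"],
        "net_margin": tb["net_income"] / tb["revenue"],
    }

tb = {"current_assets": 300, "current_liabilities": 150,
      "total_liabilities": 400, "equity": 200,
      "net_income": 50, "revenue": 500}
print(key_ratios(tb))  # → {'current_ratio': 2.0, 'debt_to_equity': 2.0, 'net_margin': 0.1}
```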
Collins, Mimi
1997-01-01
Explores how human resource professionals with above-average offer/acceptance ratios streamline their recruitment efforts. Profiles company strategies with internships, internal promotion, and cooperative education programs, and how to get candidates to accept offers. Also discusses how to use the offer/acceptance ratio as a measure of program…
Akkerman, J. W.
1982-01-01
New mechanism alters the compression ratio of an internal-combustion engine according to load so that the engine operates at top fuel efficiency. Ordinary gasoline, diesel and gas engines, with their fixed compression ratios, are inefficient at partial load and at low-speed full load. The mechanism ensures engines operate as efficiently under these conditions as they do at high load and high speed.
Wyer, J C; Salzinger, F H
1983-01-01
Many common management techniques have little use in managing a medical group practice. Ratio analysis, however, can easily be adapted to the group practice setting. Acting as broad-gauge indicators, financial ratios provide an early warning of potential problems and can be very useful in planning for future operations. The author has gathered a collection of financial ratios which were developed by participants at an education seminar presented for the Virginia Medical Group Management Association. Classified according to the human element, system component, and financial factor, the ratios provide a good sampling of measurements relevant to medical group practices and can serve as an example for custom-tailoring a ratio analysis system for your medical group.
Contour polygonal approximation using shortest path in networks
Backes, André Ricardo; Bruno, Odemir Martinez
2013-01-01
Contour polygonal approximation is a simplified representation of a contour by line segments, such that the main characteristics of the contour are preserved in a small number of line segments. This paper presents a novel method for polygonal approximation based on Complex Networks theory. We convert each point of the contour into a vertex, so as to model a regular network. We then transform this network into a Small-World Complex Network by applying transformations over its edges. By analyzing network properties, especially geodesic paths, we compute the polygonal approximation. The paper presents the main characteristics of the method, as well as its functionality. We evaluate the proposed method using benchmark contours and compare its results with other polygonal approximation methods.
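The geodesic-path idea has a classical counterpart: polygonal approximation can be cast as a shortest-path problem on a graph whose allowed edges are segments that stay within a tolerance of the contour. The sketch below implements that generic shortest-path formulation for an open contour; it is not the paper's small-world construction, and the names and tolerance model are ours.

```python
import math
from heapq import heappush, heappop

def point_seg_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0 if L2 == 0 else max(0, min(1, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def polygonal_approx(pts, tol):
    """Fewest-segment approximation of an open contour: edge i -> j is
    allowed when every intermediate point lies within tol of segment
    (i, j); a shortest path with unit edge weights gives the polygon."""
    n = len(pts)
    dist, prev, heap = {0: 0}, {}, [(0, 0)]
    while heap:
        d, i = heappop(heap)
        if i == n - 1:
            break
        if d > dist.get(i, float("inf")):
            continue
        for j in range(i + 1, n):
            if all(point_seg_dist(pts[k], pts[i], pts[j]) <= tol
                   for k in range(i + 1, j)):
                if d + 1 < dist.get(j, float("inf")):
                    dist[j] = d + 1
                    prev[j] = i
                    heappush(heap, (d + 1, j))
    path = [n - 1]
    while path[-1] != 0:
        path.append(prev[path[-1]])
    return [pts[i] for i in reversed(path)]
```

On an L-shaped contour this keeps only the endpoints and the corner, since the two straight runs each collapse into a single allowed edge.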
Generalised quasi-linear approximation of the HMRI
Child, Adam; Marston, Brad; Tobias, Steven
2016-01-01
Motivated by recent advances in Direct Statistical Simulation (DSS) of astrophysical phenomena such as out of equilibrium jets, we perform a Direct Numerical Simulation (DNS) of the helical magnetorotational instability (HMRI) under the generalised quasilinear approximation (GQL). This approximation generalises the quasilinear approximation (QL) to include the self-consistent interaction of large-scale modes, interpolating between fully nonlinear DNS and QL DNS whilst still remaining formally linear in the small scales. In this paper we address whether GQL can more accurately describe low-order statistics of axisymmetric HMRI when compared with QL by performing DNS under various degrees of GQL approximation. We utilise various diagnostics, such as energy spectra in addition to first and second cumulants, for calculations performed for a range of Reynolds and Hartmann numbers (describing rotation and imposed magnetic field strength respectively). We find that GQL performs significantly better than QL in descri...
Mineralized remains of morphotypes of filamentous cyanobacteria in carbonaceous meteorites
Hoover, Richard B.
2005-09-01
…rocks, and living, cryopreserved and fossilized extremophiles and cyanobacteria. These studies have resulted in the detection of mineralized remains of morphotypes of filamentous cyanobacteria, mats and consortia in many carbonaceous meteorites. These well-preserved and embedded microfossils are consistent with the size, morphology and ultra-microstructure of filamentous trichomic prokaryotes and degraded remains of microfibrils of cyanobacterial sheaths. EDAX elemental studies reveal that the forms in the meteorites often have highly carbonized sheaths in close association with permineralized filaments, trichomes, and microbial cells. The extensive protocols and methodologies developed to protect the samples from contamination and to distinguish recent bio-contaminants from indigenous microfossils are described. Ratios of critical bioelements (C:O, C:N, C:P, and C:S) reveal dramatic differences between microfossils in Earth rocks and meteorites and the cells, filaments, trichomes, and hormogonia of recently living cyanobacteria. The results of comparative optical, ESEM and FESEM studies and EDAX elemental analyses of recent cyanobacteria (e.g. Calothrix, Oscillatoria, and Lyngbya) of similar size, morphology and microstructure to microfossils found embedded in the Murchison CM2 and the Orgueil CI1 carbonaceous meteorites are presented.
APPROXIMATE SAMPLING THEOREM FOR BIVARIATE CONTINUOUS FUNCTION
杨守志; 程正兴; 唐远炎
2003-01-01
An approximate solution of the refinement equation is given by its mask, and the approximate sampling theorem for bivariate continuous functions is proved by applying the approximate solution. The approximate sampling function, defined uniquely by the mask of the refinement equation, is the approximate solution of the equation, a piecewise linear function, and possesses an explicit computation formula. The mask of the refinement equation can therefore be selected according to one's requirements, so as to control the decay speed of the approximate sampling function.
Bernstein-type approximations of smooth functions
Andrea Pallini
2007-10-01
The Bernstein-type approximation for smooth functions is proposed and studied. We propose Bernstein-type approximations with definitions that directly apply the binomial distribution and the multivariate binomial distribution. The Bernstein-type approximations generalize the corresponding Bernstein polynomials by considering definitions that depend on a convenient approximation coefficient in linear kernels. For the Bernstein-type approximations, we study the uniform convergence and the degree of approximation. Bernstein-type estimators of smooth functions of population means are also proposed and studied.
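The classical Bernstein polynomial that these approximations generalize is easy to state and compute. A minimal sketch on [0, 1]:

```python
from math import comb

def bernstein(f, n, x):
    """Degree-n Bernstein polynomial of f on [0, 1], evaluated at x:
    B_n(f)(x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k).
    Converges uniformly to f for continuous f as n grows."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda t: t * t
print(bernstein(f, 200, 0.5))  # ≈ 0.25125; converges to f(0.5) = 0.25 as n grows
```

For f(t) = t² the value is known in closed form, B_n(f)(x) = x² + x(1 − x)/n, which makes the slow O(1/n) degree of approximation of the plain Bernstein polynomial visible; the approximation-coefficient construction above is aimed at this kind of behavior.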
Use of Information Measures and Their Approximations to Detect Predictive Gene-Gene Interaction
Jan Mielniczuk
2017-01-01
We reconsider the properties and relationships of the interaction information and its modified versions in the context of detecting the interaction of two SNPs for the prediction of a binary outcome when the interaction information is positive. This property is called predictive interaction, and we state some new sufficient conditions for it to hold true. We also study chi-square approximations to these measures. It is argued that interaction information is a different, and sometimes more natural, measure of interaction than the logistic interaction parameter, especially when SNPs are dependent. We introduce a novel measure of predictive interaction based on interaction information and its modified version. In numerical experiments, which use copulas to model dependence, we study examples where the logistic interaction parameter is zero or close to zero, for which predictive interaction is detected by the new measure while remaining undetected by the likelihood ratio test.
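Interaction information for three discrete variables can be computed directly from a joint probability table as II(X;Y;Z) = I(X;Y|Z) − I(X;Y). A minimal sketch for binary variables (the XOR example in the test is the textbook case of purely synergistic interaction: each SNP alone is uninformative, yet the pair predicts the outcome perfectly; function names are ours):

```python
import numpy as np

def mi(p):
    """Mutual information (bits) of a 2-D joint probability table."""
    px = p.sum(1, keepdims=True)
    py = p.sum(0, keepdims=True)
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / (px @ py)[mask])).sum())

def interaction_information(p):
    """II(X;Y;Z) = I(X;Y|Z) - I(X;Y) for joint table p[x, y, z].
    Positive II is the 'predictive interaction' case discussed above."""
    pz = p.sum((0, 1))
    i_cond = sum(pz[z] * mi(p[:, :, z] / pz[z])
                 for z in range(p.shape[2]) if pz[z] > 0)
    return i_cond - mi(p.sum(2))
```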
A consistent collinear triad approximation for operational wave models
Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.
2016-08-01
In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves the evaluation of a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation that assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation) to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent. This causes energy transfers to become unbounded in the limit of unidirectional waves (narrow aperture), and results in the underestimation of energy transfers in short-crested wave conditions. We propose a modification to the collinear approximation to remove this inconsistency and to make it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear model is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust. Our results show that the modifications proposed here result in a consistent collinear approximation, which remains bounded and can provide an efficient approximation to model nonlinear triad effects in operational wave models.
Svendsen, Anders Jørgen; Holmskov, U; Bro, Peter
1995-01-01
…hitherto unnoted differences between controls and patients with either rheumatoid arthritis or systemic lupus erythematosus. For this we use simple but unconventional graphic representations of the data, based on difference plots and ratio plots. Differences between patients with Burkitt's lymphoma… and systemic lupus erythematosus from another previously published study (Macanovic, M. and Lachmann, P.J. (1979) Clin. Exp. Immunol. 38, 274) are also represented using ratio plots. Our observations indicate that analysis by regression may often be misleading.
Applications of Discrepancy Theory in Multiobjective Approximation
Glaßer, Christian; Witek, Maximilian
2011-01-01
We apply a multi-color extension of the Beck-Fiala theorem to show that the multiobjective maximum traveling salesman problem is randomized 1/2-approximable on directed graphs and randomized 2/3-approximable on undirected graphs. Using the same technique, we show that the multiobjective maximum satisfiability problem is 1/2-approximable.
Fractal Trigonometric Polynomials for Restricted Range Approximation
Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.
2016-05-01
One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.
Axiomatic Characterizations of IVF Rough Approximation Operators
Guangji Yu
2014-01-01
This paper is devoted to the study of axiomatic characterizations of IVF (interval-valued fuzzy) rough approximation operators. IVF approximation spaces are investigated. It is proved that different IVF operators satisfying certain axioms guarantee the existence of different types of IVF relations producing the same operators, and IVF rough approximation operators are then characterized by axioms.
Some relations between entropy and approximation numbers
郑志明
1999-01-01
A general result is obtained which relates the entropy numbers of compact maps on Hilbert space to their approximation numbers. Compared with previous works in this area, it is particularly convenient for dealing with cases where the approximation numbers decay rapidly. A nice estimate between entropy and approximation numbers for noncompact maps is also given.
Operator approximant problems arising from quantum theory
Maher, Philip J
2017-01-01
This book offers an account of a number of aspects of operator theory, mainly developed since the 1980s, whose problems have their roots in quantum theory. The research presented is in non-commutative operator approximation theory or, to use Halmos' terminology, in operator approximants. Focusing on the concept of approximants, this self-contained book is suitable for graduate courses.
Advanced Concepts and Methods of Approximate Reasoning
1989-12-01
E. Trillas and L. Valverde. On mode and implication in approximate reasoning. In M.M. Gupta, A. Kandel, W. Bandler, J.B. Kiszka, editors, Approximate Reasoning and …, 1981.
NONLINEAR APPROXIMATION WITH GENERAL WAVE PACKETS
L. Borup; M. Nielsen
2005-01-01
We study nonlinear approximation in the Triebel-Lizorkin spaces with dictionaries formed by dilating and translating one single function g. A general Jackson inequality is derived for best m-term approximation with such dictionaries. In some special cases where g has a special structure, a complete characterization of the approximation spaces is derived.
Approximate Nearest Neighbor Queries among Parallel Segments
Emiris, Ioannis Z.; Malamatos, Theocharis; Tsigaridas, Elias
2010-01-01
We develop a data structure for answering efficiently approximate nearest neighbor queries over a set of parallel segments in three dimensions. We connect this problem to approximate nearest neighbor searching under weight constraints and approximate nearest neighbor searching on historical data...
Nonlinear approximation with bi-framelets
Borup, Lasse; Nielsen, Morten; Gribonval, Rémi
2005-01-01
We study the approximation in Lebesgue spaces of wavelet bi-frame systems given by translations and dilations of a finite set of generators. A complete characterization of the approximation spaces associated with best m-term approximation of wavelet bi-framelet systems is given...
Approximation properties of fine hyperbolic graphs
Benyin Fu
2016-05-01
In this paper, we propose a definition of an approximation property, called the metric invariant translation approximation property, for countable discrete metric spaces. Moreover, we use Ozawa's techniques to prove that a fine hyperbolic graph has the metric invariant translation approximation property.
Differentiation between decomposed remains of human origin and bigger mammals.
Rosier, E; Loix, S; Develter, W; Van de Voorde, W; Cuypers, E; Tytgat, J
2017-08-01
This study is a follow-up study in the search for a human-specific marker of decomposition, in which the VOC profiles of decomposing human, pig, lamb and roe remains were analyzed using a thermal desorber combined with a gas chromatograph coupled to a mass spectrometer in a laboratory environment over 6 months. The combination of 8 previously identified human- and pig-specific compounds (ethyl propionate, propyl propionate, propyl butyrate, ethyl pentanoate, 3-methylthio-1-propanol, methyl(methylthio)ethyl disulfide, diethyl disulfide and pyridine) was also seen in the other mammals analyzed. However, combined with 5 additional compounds (hexane, heptane, octane, N-(3-methylbutyl)- and N-(2-methylpropyl)acetamide), human remains could be separated from pig, lamb and roe remains. Based on a higher number of remains analyzed, as compared with the pilot study, it was no longer possible to rely on the 5 previously proposed esters to separate pig from human remains. From this follow-up study, pyridine emerged as an interesting compound specific to human remains. Such a human-specific marker can help in the training of cadaver dogs or in the development of devices to search for human remains. However, further investigations have to verify these results.
Resonant-state expansion Born Approximation
Doost, M B
2015-01-01
The Born approximation is a fundamental formula in physics: it allows the calculation of weak scattering via the Fourier transform of the scattering potential. I extend the Born approximation by including in the formula the Fourier transform of a truncated basis of the infinite number of appropriately normalised resonant states. This extension is named the Resonant-State Expansion Born Approximation, or RSE Born Approximation. The resonant states of the system can be calculated using the recently discovered RSE perturbation theory for electrodynamics and normalised correctly to appear in spectral Green's functions via flux-volume normalisation.
A better approximation algorithm for finding planar subgraphs
Calinescu, G.; Karloff, H.; Fernandes, C.G. [Georgia Inst. of Technology, Atlanta, GA (United States); Finkler, U. [Max-Planck-Inst. fuer Informatik, Saarbruecken (Germany)
1996-12-31
The MAXIMUM PLANAR SUBGRAPH problem (given a graph G, find a largest planar subgraph of G) has applications in circuit layout, facility layout, and graph drawing. No previous polynomial-time approximation algorithm for this NP-complete problem was known to achieve a performance ratio larger than 1/3, which is achieved simply by producing a spanning tree of G. We present the first approximation algorithm for MAXIMUM PLANAR SUBGRAPH with a higher performance ratio (2/5 instead of 1/3). We also apply our algorithm to find large outerplanar subgraphs. Last, we show that both MAXIMUM PLANAR SUBGRAPH and its complement, the problem of removing as few edges as possible to leave a planar subgraph, are Max SNP-hard.
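The 1/3 baseline mentioned above comes from two facts: any spanning tree is planar and keeps n − 1 edges, while a planar graph on n ≥ 3 vertices has at most 3n − 6 edges, so the tree retains at least (n − 1)/(3n − 6) ≥ 1/3 of any optimal planar subgraph's edges. A sketch of that baseline via union-find (our illustration, not the paper's 2/5 algorithm):

```python
def spanning_tree_edges(n, edges):
    """Baseline planar subgraph: a spanning forest kept greedily with
    union-find. Planar (it is a forest) and keeps n - 1 of at most
    3n - 6 possible planar edges, hence the classical 1/3 ratio."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    kept = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:          # edge joins two components: keep it
            parent[ru] = rv
            kept.append((u, v))
    return kept

# K4 is itself planar with 6 edges; the tree baseline keeps only 3
print(spanning_tree_edges(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))
```

The K4 example shows why the tree bound is loose in practice (ratio 1/2 here, since the whole graph is planar); the paper's contribution is an algorithm whose worst-case guarantee beats 1/3.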
Electronic Flux Density beyond the Born-Oppenheimer Approximation.
Schild, Axel; Agostini, Federica; Gross, E K U
2016-05-19
In the Born-Oppenheimer approximation, the electronic wave function is typically real-valued and hence the electronic flux density (current density) seems to vanish. This is unfortunate for chemistry, because it precludes the possibility to monitor the electronic motion associated with the nuclear motion during chemical rearrangements from a Born-Oppenheimer simulation of the process. We study an electronic flux density obtained from a correction to the electronic wave function. This correction is derived via nuclear velocity perturbation theory applied in the framework of the exact factorization of electrons and nuclei. To compute the correction, only the ground state potential energy surface and the electronic wave function are needed. For a model system, we demonstrate that this electronic flux density approximates the true one very well, for coherent tunneling dynamics as well as for over-the-barrier scattering, and already for mass ratios between electrons and nuclei that are much larger than the true mass ratios.
Approximate Series Solutions for Nonlinear Free Vibration of Suspended Cables
Yaobing Zhao
2014-01-01
Full Text Available This paper presents approximate series solutions for nonlinear free vibration of suspended cables via the Lindstedt-Poincare method and homotopy analysis method, respectively. Firstly, taking into account the geometric nonlinearity of the suspended cable as well as the quasi-static assumption, a mathematical model is presented. Secondly, two analytical methods are introduced to obtain the approximate series solutions in the case of nonlinear free vibration. Moreover, small and large sag-to-span ratios and initial conditions are chosen to study the nonlinear dynamic responses by these two analytical methods. The numerical results indicate that frequency amplitude relationships obtained with different analytical approaches exhibit some quantitative and qualitative differences in the cases of motions, mode shapes, and particular sag-to-span ratios. Finally, a detailed comparison of the differences in the displacement fields and cable axial total tensions is made.
Canonical Sets of Best L1-Approximation
Dimiter Dryanov
2012-01-01
Full Text Available In mathematics, the term approximation usually means either interpolation on a point set or approximation with respect to a given distance. There is a concept, which joins the two approaches together, and this is the concept of characterization of the best approximants via interpolation. It turns out that for some large classes of functions the best approximants with respect to a certain distance can be constructed by interpolation on a point set that does not depend on the choice of the function to be approximated. Such point sets are called canonical sets of best approximation. The present paper summarizes results on canonical sets of best L1-approximation with emphasis on multivariate interpolation and best L1-approximation by blending functions. The best L1-approximants are characterized as transfinite interpolants on canonical sets. The notion of a Haar-Chebyshev system in the multivariate case is discussed also. In this context, it is shown that some multivariate interpolation spaces share properties of univariate Haar-Chebyshev systems. We study also the problem of best one-sided multivariate L1-approximation by sums of univariate functions. Explicit constructions of best one-sided L1-approximants give rise to well-known and new inequalities.
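The simplest instance of a canonical set is best L1-approximation by constants: the optimum is the median of the function values, and for a strictly monotone function on [-1, 1] the median is attained at x = 0, a point that does not depend on the function. A hedged numerical illustration (uniform grid discretization; this is far simpler than the paper's transfinite multivariate setting):

```python
import math

# Best L1-approximation by a constant is the median of f; for a strictly
# monotone f on the symmetric interval [-1, 1] the median sits at x = 0,
# a canonical interpolation point independent of the choice of f.
xs = [-1 + 2 * k / 10000 for k in range(10001)]

def l1_error(f, c):
    """Discretized L1 distance between f and the constant c on [-1, 1]."""
    return sum(abs(f(x) - c) for x in xs) / len(xs)

f = math.exp
c_canonical = f(0.0)                 # interpolate at the canonical point x = 0
for c_other in (0.8, 0.9, 1.1, 1.2):
    assert l1_error(f, c_canonical) < l1_error(f, c_other)
```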
Mapping moveout approximations in TI media
Stovas, Alexey
2013-11-21
Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
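The computational virtue the abstract refers to is that circulant matrices are diagonalized by the DFT, so a matrix-vector product costs O(n log n). A one-level sketch of that mechanism (the paper uses multilevel circulant matrices; this only shows the basic FFT identity):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix C (first column c) by x via the FFT.

    C @ x is a circular convolution, so it equals ifft(fft(c) * fft(x)),
    costing O(n log n) instead of O(n^2) -- the computational advantage
    exploited (in multilevel form) for approximate kernel selection.
    """
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

rng = np.random.default_rng(0)
n = 64
c = rng.standard_normal(n)
x = rng.standard_normal(n)

# Dense circulant for reference: C[i, j] = c[(i - j) mod n]
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
assert np.allclose(circulant_matvec(c, x), C @ x)
```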
On Galerkin approximations for the quasigeostrophic equations
Rocha, Cesar B; Grooms, Ian
2015-01-01
We study the representation of approximate solutions of the three-dimensional quasigeostrophic (QG) equations using Galerkin series with standard vertical modes. In particular, we show that standard modes are compatible with nonzero buoyancy at the surfaces and can be used to solve the Eady problem. We extend two existing Galerkin approaches (A and B) and develop a new Galerkin approximation (C). Approximation A, due to Flierl (1978), represents the streamfunction as a truncated Galerkin series and defines the potential vorticity (PV) that satisfies the inversion problem exactly. Approximation B, due to Tulloch and Smith (2009b), represents the PV as a truncated Galerkin series and calculates the streamfunction that satisfies the inversion problem exactly. Approximation C, the true Galerkin approximation for the QG equations, represents both streamfunction and PV as truncated Galerkin series, but does not satisfy the inversion equation exactly. The three approximations are fundamentally different unless the b...
Denoising MR Spectroscopic Imaging Data With Low-Rank Approximations
Nguyen, Hien M.; Peng, Xi; Do, Minh N.; Liang, Zhi-Pei
2012-01-01
This paper addresses the denoising problem associated with magnetic resonance spectroscopic imaging (MRSI), where signal-to-noise ratio (SNR) has been a critical problem. A new scheme is proposed, which exploits two low-rank structures that exist in MRSI data, one due to partial separability and the other due to linear predictability. Denoising is performed by arranging the measured data in appropriate matrix forms (i.e., Casorati and Hankel) and applying low-rank approximations by singula...
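The basic low-rank step can be illustrated by truncating the SVD of a Casorati-style data matrix; by the Eckart-Young theorem the truncation is the best approximation of the given rank. A sketch on synthetic data (matrix sizes and noise level are invented for illustration, not taken from the paper):

```python
import numpy as np

def lowrank_denoise(X, r):
    """Project a (Casorati-style) data matrix onto its best rank-r approximation.

    Truncated SVD is the elementary low-rank denoising step; the paper
    additionally uses a Hankel arrangement to exploit linear predictability.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(1)
# Synthetic rank-2 "signal" plus small noise
A = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
noisy = A + 0.01 * rng.standard_normal((40, 30))
denoised = lowrank_denoise(noisy, 2)

# The rank-2 projection discards most of the noise energy.
assert np.linalg.norm(denoised - A) < np.linalg.norm(noisy - A)
```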
Kjærgaard, Søren; Canudas-Romo, Vladimir
2017-01-01
…the prospective potential support ratio usually focuses on the current mortality schedule, or period life expectancy. Instead, in this paper we look at the actual mortality experienced by cohorts in a population, using cohort life tables. We analyse differences between the two perspectives using mortality models, historical data, and forecasted data. Cohort life expectancy takes future mortality improvements into account, unlike period life expectancy, leading to a higher prospective potential support ratio. Our results indicate that using cohort instead of period life expectancy returns around 0.5 extra younger…
The taphonomy of human remains in a glacial environment.
Pilloud, Marin A; Megyesi, Mary S; Truffer, Martin; Congram, Derek
2016-04-01
A glacial environment is a unique setting that can alter human remains in characteristic ways. This study describes glacial dynamics and how glaciers can be understood as taphonomic agents. Using a case study of human remains recovered from Colony Glacier, Alaska, a glacial taphonomic signature is outlined that includes: (1) movement of remains, (2) dispersal of remains, (3) altered bone margins, (4) splitting of skeletal elements, and (5) extensive soft tissue preservation and adipocere formation. As global glacier area is declining in the current climate, there is the potential for more materials of archaeological and medicolegal significance to be exposed. It is therefore important for the forensic anthropologist to have an idea of the taphonomy in this setting and to be able to differentiate glacial effects from other taphonomic agents. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Attempted Suicide Rates in U.S. Remain Unchanged
https://medlineplus.gov/news/fullstory_162339.html (HealthDay News) -- The number of Americans who attempted suicide and wound up in the emergency room has ...
A Bayesian Framework for Remaining Useful Life Estimation
National Aeronautics and Space Administration — The estimation of remaining useful life (RUL) of a faulty component is at the center of system prognostics and health management. It gives operators a potent tool in...
[The craniofacial identification of the remains from the Yekaterinburg burial].
Abramov, S S
1998-01-01
Based on expert evaluation of remains of 7 members of Imperial Romanov family and 4 persons in their attendance, the author demonstrates methodological approaches to identification craniocephalic studies in cases with group burials.
Two Timescale Approximation Applied to Gravitational Waves from Eccentric EMRIs
Moxon, Jordan; Flanagan, Eanna; Hinderer, Tanja; Pound, Adam
2016-03-01
Gravitational-wave driven inspirals of compact objects into massive black holes (Extreme Mass Ratio Inspirals - EMRIs) form an interesting, long-lived signal for future space-based gravitational wave detectors. Accurate signal predictions will be necessary to take full advantage of matched filtering techniques, motivating the development of a calculational technique for deriving the gravitational wave signal to good approximation throughout the inspiral. We report on recent work on developing the two-timescale technique with the goal of predicting waveforms from eccentric equatorial systems to subleading (post-adiabatic) order in the phase, building on recent work by Pound in the scalar case. The computation requires us to understand the dissipative component of the second-order self force. It also demands careful consideration of how the two timescale (near-zone) approximation should match with the post-Minkowski approximation of the gravitational waves at great distances.
Repatriation of human remains following death in international travellers.
Connolly, Ruairi; Prendiville, Richard; Cusack, Denis; Flaherty, Gerard
2017-03-01
Death during international travel and the repatriation of human remains to one's home country is a distressing and expensive process. Much organization is required involving close liaison between various agencies. A review of the literature was conducted using the PubMed database. Search terms included: 'repatriation of remains', 'death', 'abroad', 'tourism', 'travel', 'travellers', 'travelling' and 'repatriation'. Additional articles were obtained from grey literature sources and reference lists. The local national embassy, travel insurance broker and tour operator are important sources of information to facilitate the repatriation of the deceased traveller. Formal identification of the deceased's remains is required and a funeral director must be appointed. Following this, the coroner in the country or jurisdiction receiving the repatriated remains will require a number of documents prior to providing clearance for burial. Costs involved in repatriating remains must be borne by the family of the deceased although travel insurance may help defray some of the costs. If the death is secondary to an infectious disease, cremation at the site of death is preferred. No standardized procedure is in place to deal with the remains of a migrant's body at present and these remains are often not repatriated to their country of origin. Repatriation of human remains is a difficult task which is emotionally challenging for the bereaving family and friends. As a travel medicine practitioner, it is prudent to discuss all eventualities, including the risk of death, during the pre-travel consultation. Awareness of the procedures involved in this process may ease the burden on the grieving family at a difficult time.
Robotics to Enable Older Adults to Remain Living at Home
Pearce, Alan J.; Brooke Adair; Kimberly Miller; Elizabeth Ozanne; Catherine Said; Nick Santamaria; Morris, Meg E.
2012-01-01
Given the rapidly ageing population, interest is growing in robots to enable older people to remain living at home. We conducted a systematic review and critical evaluation of the scientific literature, from 1990 to the present, on the use of robots in aged care. The key research questions were as follows: (1) what is the range of robotic devices available to enable older people to remain mobile, independent, and safe? and, (2) what is the evidence demonstrating that robotic devices are effec...
Collisionless magnetic reconnection under anisotropic MHD approximation
Hirabayashi, Kota; Hoshino, Masahiro
We study the formation of slow-mode shocks in collisionless magnetic reconnection by using one- and two-dimensional collisionless magnetohydrodynamic (MHD) simulations based on the double adiabatic approximation, an important step toward bridging the gap between the Petschek-type MHD reconnection model, accompanied by a pair of slow shocks, and the observational evidence that in-situ slow shocks are only rarely observed. According to our results, a pair of slow shocks does form in the reconnection layer. The resultant shock waves, however, are quite weak compared with those in an isotropic MHD from the point of view of the plasma compression and the amount of magnetic energy released across the shock. Once the slow shock forms, the downstream plasma is heated in a highly anisotropic manner and a firehose-sense (P_{||}>P_{⊥}) pressure anisotropy arises. The maximum anisotropy is limited by the marginal firehose criterion, 1-(P_{||}-P_{⊥})/B^2 = 0. In spite of the weakness of the shocks, the resultant reconnection rate is kept at the same level as in the corresponding ordinary MHD simulations. It is also revealed that the sequential order of propagation of the slow shock and the rotational discontinuity, which appears when a guide field component exists, changes depending on the magnitude of the guide field. In particular, when no guide field exists, the rotational discontinuity degenerates with the contact discontinuity remaining at the position of the initial current sheet, whereas in the isotropic MHD it degenerates with the slow shock. Our result implies that the slow shock does not necessarily play an important role in the energy conversion in the reconnection system and is consistent with satellite observations in the Earth's magnetosphere.
A non-destructive method for dating human remains
Lail, Warren K.; Sammeth, David; Mahan, Shannon; Nevins, Jason
2013-01-01
The skeletal remains of several Native Americans were recovered in an eroded state from a creek bank in northeastern New Mexico. Subsequently stored in a nearby museum, the remains became lost for almost 36 years. In a recent effort to repatriate the remains, it was necessary to fit them into a cultural chronology in order to determine the appropriate tribe(s) for consultation pursuant to the Native American Grave Protection and Repatriation Act (NAGPRA). Because the remains were found in an eroded context with no artifacts or funerary objects, their age was unknown. Having been asked to avoid destructive dating methods such as radiocarbon dating, the authors used Optically Stimulated Luminescence (OSL) to date the sediments embedded in the cranium. The OSL analyses yielded reliable dates between A.D. 1415 and A.D. 1495. Accordingly, we conclude that the remains were interred somewhat earlier than A.D. 1415, but no later than A.D. 1495. We believe the remains are from individuals ancestral to the Ute Mouache Band, which is now being contacted for repatriation efforts. Not only do our methods contribute to the immediate repatriation efforts, they provide archaeologists with a versatile, non-destructive, numerical dating method that can be used in many burial contexts.
An analysis of the alleged skeletal remains of Carin Göring.
Anna Kjellström
Full Text Available In 1991, treasure hunters found skeletal remains in an area close to the destroyed country residence of former Nazi leader Hermann Göring in northeastern Berlin. The remains, which were believed to belong to Carin Göring, who was buried at the site, were examined to determine whether it was possible to make a positive identification. The anthropological analysis showed that the remains come from an adult woman. The DNA analysis of several bone elements showed female sex, and a reference sample from Carin's son revealed mtDNA sequences identical to the remains. The profile has one nucleotide difference from the Cambridge reference sequence (rCRS), the common variant 263G. A database search resulted in a frequency of this mtDNA sequence of about 10% out of more than 7,000 European haplotypes. The mtDNA sequence found in the ulna, the cranium and the reference sample is, thus, very common among Europeans. Therefore, nuclear DNA analysis was attempted. The remains as well as a sample from Carin's son were successfully analysed for the three nuclear markers TH01, D7S820 and D8S1179. The nuclear DNA analysis of the two samples revealed one shared allele for each of the three markers, supporting a mother and son relationship. This genetic information together with anthropological and historical files provides an additional piece of circumstantial evidence in our efforts to identify the remains of Carin Göring.
A novel approach of remaining discharge energy prediction for large format lithium-ion battery pack
Zhang, Xu; Wang, Yujie; Liu, Chang; Chen, Zonghai
2017-03-01
Accurate estimation of battery pack remaining discharge energy is a crucial challenge to the battery energy storage systems. In this paper, a new method of battery pack remaining discharge energy estimation is proposed using the recursive least square-unscented Kalman filter. To predict the remaining discharge energy precisely, the inconsistency of the battery pack caused by different working temperatures is taken into consideration and the degree of battery inconsistency is quantified based on mathematical methods of statistics. In addition, the recursive least square is applied to identify the parameters of the battery pack model on-line and the unscented Kalman filter is employed in battery pack remaining discharge energy and energy utilization ratio estimation. The experimental results in terms of battery states estimation under the new European driving cycle and real driven profiles, with the root mean square error less than 0.01, further verify that the proposed method can estimate the battery pack remaining discharge energy with high accuracy. What's more, the relationship between the pack energy utilization ratio and the degree of battery inconsistency is summarized in the paper.
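The on-line parameter identification step is ordinary recursive least squares. A generic sketch of the recursion on a toy linear model (the battery pack model itself is not reproduced here; the regressors and parameter values are invented):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares step updating the parameter estimate theta.

    phi is the regressor vector, y the scalar measurement, lam a forgetting
    factor. This is the textbook RLS recursion, of the kind the paper applies
    to identify battery pack model parameters on-line.
    """
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)            # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()  # innovation correction
    P = (P - K @ phi.T @ P) / lam                     # covariance update
    return theta, P

rng = np.random.default_rng(2)
true_theta = np.array([2.0, -0.5])
theta = np.zeros(2)
P = 1e6 * np.eye(2)
for _ in range(200):
    phi = rng.standard_normal(2)
    y = phi @ true_theta          # noiseless toy measurements
    theta, P = rls_update(theta, P, phi, y)
assert np.allclose(theta, true_theta, atol=1e-4)
```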
Better Balance by Being Biased: A 0.8776-Approximation for Max Bisection
Austrin, Per; Georgiou, Konstantinos
2012-01-01
Recently Raghavendra and Tan (SODA 2012) gave a 0.85-approximation algorithm for the Max Bisection problem. We improve their algorithm to a 0.8776-approximation. As Max Bisection is hard to approximate within $\alpha_{GW} + \epsilon \approx 0.8786$ under the Unique Games Conjecture (UGC), our algorithm is nearly optimal. We conjecture that Max Bisection is approximable within $\alpha_{GW}-\epsilon$, i.e., the bisection constraint (essentially) does not make Max Cut harder. We also obtain an optimal algorithm (assuming the UGC) for the analogous variant of Max 2-Sat. Our approximation ratio for this problem exactly matches the optimal approximation ratio for Max 2-Sat, i.e., $\alpha_{LLZ} + \epsilon \approx 0.9401$, showing that the bisection constraint does not make Max 2-Sat harder. This improves on a 0.93-approximation for this problem due to Raghavendra and Tan.
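For orientation, the Max Bisection objective is: split the n vertices into two equal halves so as to maximize the number of crossing edges. The brute-force check below only states the objective; it is exponential in n, unlike the SDP-based 0.8776-approximation of the paper:

```python
from itertools import combinations

def max_bisection(n, edges):
    """Exhaustively find the best bisection value of an n-vertex graph (n even).

    A bisection places exactly n/2 vertices on each side; its value is the
    number of edges crossing the split. Exponential -- for illustration only.
    """
    best = 0
    for side in combinations(range(n), n // 2):
        s = set(side)
        cut = sum((u in s) != (v in s) for u, v in edges)
        best = max(best, cut)
    return best

# 4-cycle: the bisection {0, 2} vs {1, 3} cuts all 4 edges.
assert max_bisection(4, [(0, 1), (1, 2), (2, 3), (3, 0)]) == 4
```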
Non-perturbative QCD amplitudes in quenched and eikonal approximations
Fried, H.M. [Physics Department, Brown University, Providence, RI 02912 (United States); Grandou, T., E-mail: Thierry.Grandou@inln.cnrs.fr [Université de Nice-Sophia Antipolis, Institut Non Linéaire de Nice, UMR 6618 CNRS 7335, 1361 routes des Lucioles, 06560 Valbonne (France); Sheu, Y.-M., E-mail: ymsheu@alumni.brown.edu [Université de Nice-Sophia Antipolis, Institut Non Linéaire de Nice, UMR 6618 CNRS 7335, 1361 routes des Lucioles, 06560 Valbonne (France)
2014-05-15
Even though approximated, strong coupling non-perturbative QCD amplitudes remain very difficult to obtain. In this article, in eikonal and quenched approximations at least, physical insights are presented that rely on the newly-discovered property of effective locality. The present article also provides a more rigorous mathematical basis for the crude approximations used in the previous derivation of the binding potential of quarks and nucleons. Furthermore, the techniques of Random Matrix calculus along with Meijer G-functions are applied to analyze the generic structure of fermionic amplitudes in QCD. - Highlights: • We discuss the physical insight of effective locality to QCD fermionic amplitudes. • We show that an unavoidable delta function goes along with the effective locality property. • The generic structure of QCD fermion amplitudes is obtained through Random Matrix calculus.
Miles, T. R.; Haslum, M. N.; Wheeler, T. J.
1998-01-01
A study involving 11,804 British children (age 10) found that when specified criteria for dyslexia were used, 269 children qualified as dyslexic. These included 223 boys and 46 girls, for a ratio of 4.51 to 1. Difficulties in interpreting these data are discussed and a defense of the criteria is provided. (Author/CR)
PO de Wet
2005-06-01
Full Text Available The rectilinear Steiner ratio was shown to be 3/2 by Hwang [Hwang FK, 1976, On Steiner minimal trees with rectilinear distance, SIAM Journal on Applied Mathematics, 30, pp. 104–114]. We use continuity and introduce restricted point sets to obtain an alternative, short and self-contained proof of this result.
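The ratio 3/2 is attained by four points arranged in a "plus": the rectilinear Steiner minimal tree is a star of length 4 through a Steiner point at the origin, while the minimum spanning tree has length 6. A small check (Prim's algorithm under the L1 metric; the example is the standard tight instance, not taken from the paper):

```python
def l1(p, q):
    """Rectilinear (L1) distance between two points in the plane."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def mst_length(pts):
    """Length of the minimum spanning tree under L1 distance (Prim's algorithm)."""
    in_tree = {0}
    total = 0.0
    while len(in_tree) < len(pts):
        d, j = min((l1(pts[i], pts[k]), k)
                   for i in in_tree for k in range(len(pts)) if k not in in_tree)
        total += d
        in_tree.add(j)
    return total

# Four points forming a "plus": every pairwise L1 distance is 2, so the MST
# has length 6, while the Steiner tree through (0, 0) has length 4.
pts = [(0, 1), (0, -1), (1, 0), (-1, 0)]
smt = 4  # star through the Steiner point at the origin
assert mst_length(pts) / smt == 1.5
```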
Confidence Interval Methodology for Ratio Means (CIM4RM)
2010-08-01
…is the combined effort of bootstrapping and creativity in constructing CLs around ratio means. CIM4RM was tested in many ratio mean applications… [glossary residue: man-hours per vehicle or repair period; MR: estimated maintenance ratio for the i-th BSS; adj MRt: weighted estimate] …assumptions on the distributions) and creativity to compute approximate confidence intervals for a ratio mean metric. The bootstrap-t approach is very…
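As a rough illustration of confidence limits around a ratio mean, a percentile bootstrap that resamples (numerator, denominator) pairs jointly (a simplified stand-in for the report's bootstrap-t construction; the data are invented):

```python
import random

def bootstrap_ratio_ci(num, den, reps=2000, alpha=0.10, seed=42):
    """Percentile-bootstrap confidence interval for sum(num) / sum(den).

    Resamples paired observations with replacement, so no distributional
    assumptions are needed -- the spirit, if not the letter, of CIM4RM.
    """
    rng = random.Random(seed)
    pairs = list(zip(num, den))
    ratios = []
    for _ in range(reps):
        sample = [rng.choice(pairs) for _ in pairs]
        ratios.append(sum(n for n, _ in sample) / sum(d for _, d in sample))
    ratios.sort()
    lo = ratios[int(reps * alpha / 2)]
    hi = ratios[int(reps * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical maintenance data: point estimate 60 / 30 = 2.0 man-hours/vehicle
man_hours = [5, 7, 4, 8, 6, 5, 9, 4, 6, 6]
vehicles = [3, 4, 2, 4, 3, 2, 5, 2, 3, 2]
lo, hi = bootstrap_ratio_ci(man_hours, vehicles)
assert lo <= sum(man_hours) / sum(vehicles) <= hi
```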
Estimation of VO2max from the ratio between HRmax and HRrest--the Heart Rate Ratio Method.
Uth, Niels; Sørensen, Henrik; Overgaard, Kristian; Pedersen, Preben K
2004-01-01
The effects of training and/or ageing upon maximal oxygen uptake (VO2max) and heart rate values at rest (HRrest) and maximal exercise (HRmax), respectively, suggest a relationship between VO2max and the HRmax-to-HRrest ratio which may be of use for indirect testing of VO2max. Fick principle calculations supplemented by literature data on maximum-to-rest ratios for stroke volume and the arterio-venous O2 difference suggest that the conversion factor between mass-specific VO2max (ml·min^-1·kg^-1) and HRmax/HRrest is approximately 15. In the study we experimentally examined this relationship and evaluated its potential for prediction of VO2max. VO2max was measured in 46 well-trained men (age 21-51 years) during a treadmill protocol. A subgroup (n=10) demonstrated that the proportionality factor between HRmax/HRrest and mass-specific VO2max was 15.3 (0.7) ml·min^-1·kg^-1. Using this value, VO2max in the remaining 36 individuals could be estimated with an SEE of 0.21 l·min^-1 or 2.7 ml·min^-1·kg^-1 (approximately 4.5%). This compares favourably with other common indirect tests. When replacing measured HRmax with an age-predicted one, SEE was 0.37 l·min^-1 and 4.7 ml·min^-1·kg^-1 (approximately 7.8%), which is still comparable with other indirect tests. We conclude that the HRmax-to-HRrest ratio may provide a tool for estimation of VO2max in well-trained men. The applicability of the test principle in relation to other groups will have to await direct validation. VO2max can be estimated indirectly from the measured HRmax-to-HRrest ratio with an accuracy that compares favourably with that of other common indirect tests. The results also suggest that the test may be of use for VO2max estimation based on resting measurements alone.
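The resulting estimator is a one-line formula. A sketch using the proportionality factor 15.3 reported in the abstract (the subject's heart rates are invented):

```python
def vo2max_estimate(hr_max, hr_rest, factor=15.3):
    """Heart Rate Ratio Method: mass-specific VO2max in ml/min/kg.

    factor = 15.3 is the proportionality constant the abstract reports for
    well-trained men; the method's validity for other groups is untested.
    """
    return factor * hr_max / hr_rest

# A hypothetical subject with HRmax 190 and HRrest 50:
est = vo2max_estimate(190, 50)   # 15.3 * 190 / 50 = 58.14 ml/min/kg
```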
Gillespie, Dirk
2011-01-28
The mean spherical approximation (MSA) for the primitive model of electrolytes provides reasonable estimates of thermodynamic quantities such as the excess chemical potential and screening length. It is especially widely used because of its explicit formulas, so that numerically solving equations is minimized. As originally formulated, the MSA screening parameter Γ (akin to the reciprocal of the Debye screening length) does not have an explicit analytic formula; an equation for Γ must be solved numerically. Here, an analytic approximation for Γ is presented whose relative error is generally ≲10^-5. If more accuracy is desired, one step of an iterative procedure (which also produces an explicit formula for Γ) is shown to give relative errors within machine precision in many cases. Even when ion diameter ratios are ∼10 and ion valences are ∼10, the relative error for the analytic approximation is still ≲10^-3 and for the single iterative substitution it is ≲10^-9.
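For context, in the equal-diameter (restricted primitive model) case Γ does have a well-known closed form, Γ = (sqrt(1 + 2κσ) - 1)/(2σ) with κ the Debye parameter; the paper's contribution concerns the unequal-diameter case, where no such formula exists. A sketch of the equal-diameter formula and its point-ion (Debye-Hückel) limit (the numerical κ value is illustrative):

```python
import math

def msa_gamma_equal_diameters(kappa, sigma):
    """MSA screening parameter for ions sharing a single diameter sigma.

    Closed form for the restricted primitive model:
        Gamma = (sqrt(1 + 2*kappa*sigma) - 1) / (2*sigma),
    kappa being the Debye screening parameter. For unequal diameters no
    closed form exists, which is what the paper's analytic approximation
    addresses.
    """
    return (math.sqrt(1.0 + 2.0 * kappa * sigma) - 1.0) / (2.0 * sigma)

# In the point-ion limit the MSA recovers Debye-Hueckel: Gamma -> kappa / 2.
kappa = 1.04  # e.g. in inverse nm, roughly a 0.1 M 1:1 electrolyte at 298 K
assert abs(msa_gamma_equal_diameters(kappa, 1e-9) - kappa / 2) < 1e-6
```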
Improving biconnectivity approximation via local optimization
Ka Wong Chong; Tak Wah Lam [Univ. of Hong Kong (Hong Kong)]
1996-12-31
The problem of finding the minimum biconnected spanning subgraph of an undirected graph is NP-hard. A lot of effort has been made to find biconnected spanning subgraphs that approximate the minimum one as closely as possible. Recently, new polynomial-time (sequential) approximation algorithms have been devised to improve the approximation factor from 2 to 5/3, then 3/2, while NC algorithms have also been known to achieve 7/4 + ε. This paper presents a new technique which can be used to further improve parallel approximation factors to 5/3 + ε. In the sequential context, the technique yields an algorithm with a factor of α + 1/5, where α is the approximation factor of any 2-edge connectivity approximation algorithm.
Frankenstein's Glue: Transition functions for approximate solutions
Yunes, N
2006-01-01
Approximations are commonly employed to find approximate solutions to the Einstein equations. These solutions, however, are usually only valid in some specific spacetime region. A global solution can be constructed by gluing approximate solutions together, but this procedure is difficult because discontinuities can arise, leading to large violations of the Einstein equations. In this paper, we provide an attempt to formalize this gluing scheme by studying transition functions that join approximate solutions together. In particular, we propose certain sufficient conditions on these functions and prove that these conditions guarantee that the joined solution still satisfies the Einstein equations to the same order as the approximate ones. An example is also provided for a binary system of non-spinning black holes, where the approximate solutions are taken to be given by a post-Newtonian expansion and a perturbed Schwarzschild solution. For this specific case, we show that if the transition functions satisfy the...
Floating-Point $L^2$-Approximations
Brisebarre, Nicolas; Hanrot, Guillaume
2007-01-01
International audience; Computing good polynomial approximations to usual functions is an important topic for the computer evaluation of those functions. These approximations can be good under several criteria, the most desirable being probably that the relative error is as small as possible in the $L^{\infty}$ sense, i.e. everywhere on the interval under study. In the present paper, we investigate a simpler criterion, the $L^2$ case. Though finding a best polynomial $L^2$-approximation with ...
Metric Diophantine approximation on homogeneous varieties
Ghosh, Anish; Nevo, Amos
2012-01-01
We develop the metric theory of Diophantine approximation on homogeneous varieties of semisimple algebraic groups and prove results analogous to the classical Khinchin and Jarnik theorems. In full generality our results establish simultaneous Diophantine approximation with respect to several completions, and Diophantine approximation over general number fields using S-algebraic integers. In several important examples, the metric results we obtain are optimal. The proof uses quantitative equidistribution properties of suitable averaging operators, which are derived from spectral bounds in automorphic representations.
Approximately linear phase IIR digital filter banks
J. D. Ćertić; M. D. Lutovac; L. D. Milić
2013-01-01
In this paper, uniform and nonuniform digital filter banks based on approximately linear phase IIR filters and frequency response masking technique (FRM) are presented. Both filter banks are realized as a connection of an interpolated half-band approximately linear phase IIR filter as a first stage of the FRM design and an appropriate number of masking filters. The masking filters are half-band IIR filters with an approximately linear phase. The resulting IIR filter banks are compared with li...
Forensic considerations when dealing with incinerated human dental remains.
Reesu, Gowri Vijay; Augustine, Jeyaseelan; Urs, Aadithya B
2015-01-01
Establishing the human dental identification process relies upon sufficient post-mortem data being recovered to allow for a meaningful comparison with ante-mortem records of the deceased person. Teeth are the most indestructible components of the human body and are structurally unique in their composition. They possess the highest resistance to most environmental effects like fire, desiccation, decomposition and prolonged immersion. In most natural as well as man-made disasters, teeth may provide the only means of positive identification of an otherwise unrecognizable body. It is imperative that dental evidence should not be destroyed through erroneous handling until appropriate radiographs, photographs, or impressions can be fabricated. Proper methods of physical stabilization of incinerated human dental remains should be followed. The maintenance of integrity of extremely fragile structures is crucial to the successful confirmation of identity. In such situations, the forensic dentist must stabilise these teeth before the fragile remains are transported to the mortuary to ensure preservation of possibly vital identification evidence. Thus, while dealing with any incinerated dental remains, a systematic approach must be followed through each stage of evaluation of incinerated dental remains to prevent the loss of potential dental evidence. This paper presents a composite review of various studies on incinerated human dental remains and discusses their impact on the process of human identification and suggests a step by step approach. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Cablk, Mary E; Szelagowski, Erin E; Sagebiel, John C
2012-07-10
Human Remains Detection (HRD) dogs can be a useful tool to locate buried human remains because they rely on olfactory rather than visual cues. Trained specifically to locate deceased humans, it is widely believed that HRD dogs can differentiate animal remains from human remains. This study analyzed the volatile organic compounds (VOCs) present in the headspace above partially decomposed animal tissue samples and directly compared them with results published from human tissues using established solid-phase microextraction (SPME) and gas chromatography/mass spectrometry (GC/MS) methods. Volatile organic compounds present in the headspace of four different animal tissue samples (bone, muscle, fat and skin) from each of cow, pig and chicken were identified and compared to published results from human samples. Although there were compounds common to both animal and human remains, the VOC signatures of each of the animal remains differed from those of humans. Of particular interest was the difference between pigs and humans, because in some countries HRD dogs are trained on pig remains rather than human remains. Pig VOC signatures were not found to be a subset of human; in addition to sharing only seven of thirty human-specific compounds, an additional nine unique VOCs were recorded from pig samples which were not present in human samples. The VOC signatures from chicken and human samples were most similar sharing the most compounds of the animals studied. Identifying VOCs that are unique to humans may be useful to develop human-specific training aids for HRD canines, and may eventually lead to an instrument that can detect clandestine human burial sites. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
A Note on Generalized Approximation Property
Antara Bhar
2013-01-01
We introduce a notion of generalized approximation property, which we refer to as --AP, possessed by a Banach space, corresponding to an arbitrary Banach sequence space and a convex subset of the class of bounded linear operators on the space. This property includes the approximation property studied by Grothendieck, the -approximation property considered by Sinha and Karn and Delgado et al., and also the approximation property studied by Lissitsin et al. We characterize a Banach space having --AP with the help of -compact operators, -nuclear operators, and quasi--nuclear operators. A particular case has also been characterized.
Upper Bounds on Numerical Approximation Errors
Raahauge, Peter
2004-01-01
This paper suggests a method for determining rigorous upper bounds on approximation errors of numerical solutions to infinite horizon dynamic programming models. Bounds are provided for approximations of the value function and the policy function as well as the derivatives of the value function. The bounds apply to more general problems than existing bounding methods do. For instance, since strict concavity is not required, linear models and piecewise linear approximations can be dealt with. Despite the generality, the bounds perform well in comparison with existing methods even when applied to approximations of a standard (strictly concave) growth model. KEYWORDS: Numerical approximation errors, Bellman contractions, Error bounds
TMB: Automatic differentiation and laplace approximation
Kristensen, Kasper; Nielsen, Anders; Berg, Casper Willestofte
2016-01-01
computations. The user defines the joint likelihood for the data and the random effects as a C++ template function, while all the other operations are done in R; e.g., reading in the data. The package evaluates and maximizes the Laplace approximation of the marginal likelihood, where the random effects are automatically integrated out. This approximation, and its derivatives, are obtained using automatic differentiation (up to order three) of the joint likelihood. The computations are designed to be fast for problems with many random effects (approximately 10^6) and parameters (approximately 10…
Ramirez Rozzi, Fernando V; d'Errico, Francesco; Vanhaeren, Marian; Grootes, Pieter M; Kerautret, Bertrand; Dujardin, Véronique
2009-01-01
The view that Aurignacian technologies and their associated symbolic manifestations represent the archaeological proxy for the spread of Anatomically Modern Humans into Europe is supported by few diagnostic human remains, including those from the Aurignacian site of Les Rois in south-western France. Here we reassess the taxonomic attribution of the human remains and their cultural affiliation, and provide five new radiocarbon dates for the site. Patterns of tooth growth along with the morphological and morphometric analysis of the human remains indicate that a juvenile mandible showing cutmarks presents some Neandertal features, whereas another mandible is attributed to Anatomically Modern Humans. Reappraisal of the archaeological sequence demonstrates that the human remains derive from two layers dated to 28-30 kyr BP attributed to the Aurignacian, the only cultural tradition detected at the site. Three possible explanations may account for this unexpected evidence. The first is that the Aurignacian was exclusively produced by AMH and that the child mandible from unit A2 represents evidence for consumption or, more likely, symbolic use of a Neandertal child by Aurignacian AMH. The second possible explanation is that Aurignacian technologies were produced at Les Rois by human groups bearing both AMH and Neandertal features. The human remains from Les Rois would in this case be the first evidence of biological contact between the two human groups. The third possibility is that all human remains from Les Rois represent an AMH population with conserved plesiomorphic characters, suggesting a larger variation in modern humans from the Upper Palaeolithic.
Classification of pelvic ring fractures in skeletonized human remains.
Báez-Molgado, Socorro; Bartelink, Eric J; Jellema, Lyman M; Spurlock, Linda; Sholts, Sabrina B
2015-01-01
Pelvic ring fractures are associated with high rates of mortality and thus can provide key information about circumstances surrounding death. These injuries can be particularly informative in skeletonized remains, yet difficult to diagnose and interpret. This study adapted a clinical system of classifying pelvic ring fractures according to their resultant degree of pelvic stability for application to gross human skeletal remains. The modified Tile criteria were applied to the skeletal remains of 22 individuals from the Cleveland Museum of Natural History and Universidad Nacional Autónoma de México that displayed evidence of pelvic injury. Because these categories are tied directly to clinical assessments concerning the severity and treatment of injuries, this approach can aid in the identification of manner and cause of death, as well as interpretations of possible mechanisms of injury, such as those typical in car-to-pedestrian and motor vehicle accidents. © 2014 American Academy of Forensic Sciences.
Microscopic residues of bone from dissolving human remains in acids.
Vermeij, Erwin; Zoon, Peter; van Wijk, Mayonne; Gerretsen, Reza
2015-05-01
Dissolving bodies is a current method of disposing of human remains and has been practiced throughout the years. During the last decade in the Netherlands, two cases have emerged in which human remains were treated with acid. In the first case, the remains of a cremated body were treated with hydrofluoric acid. In the second case, two complete bodies were dissolved in a mixture of hydrochloric and sulfuric acid. In both cases, a great variety of evidence was collected at the scene of crime, part of which was embedded in resin, polished, and investigated using SEM/EDX. Apart from macroscopic findings like residual bone and artificial teeth, in both cases, distinct microscopic residues of bone were found as follows: (partly) digested bone, thin-walled structures, and recrystallized calcium phosphate. Although some may believe it is possible to dissolve a body in acid completely, at least some of these microscopic residues will always be found. © 2015 American Academy of Forensic Sciences.
Approximation error in PDE-based modelling of vehicular platoons
Hao, He; Barooah, Prabir
2012-08-01
We study the problem of how much error is introduced in approximating the dynamics of a large vehicular platoon by using a partial differential equation, as was done in Barooah, Mehta, and Hespanha [Barooah, P., Mehta, P.G., and Hespanha, J.P. (2009), 'Mistuning-based Decentralised Control of Vehicular Platoons for Improved Closed Loop Stability', IEEE Transactions on Automatic Control, 54, 2100-2113], Hao, Barooah, and Mehta [Hao, H., Barooah, P., and Mehta, P.G. (2011), 'Stability Margin Scaling Laws of Distributed Formation Control as a Function of Network Structure', IEEE Transactions on Automatic Control, 56, 923-929]. In particular, we examine the difference between the stability margins of the coupled-ordinary differential equations (ODE) model and its partial differential equation (PDE) approximation, which we call the approximation error. The stability margin is defined as the absolute value of the real part of the least stable pole. The PDE model has proved useful in the design of distributed control schemes (Barooah et al. 2009; Hao et al. 2011); it provides insight into the effect of gains of local controllers on the closed-loop stability margin that is lacking in the coupled-ODE model. Here we show that the ratio of the approximation error to the stability margin is O(1/N), where N is the number of vehicles. Thus, the PDE model is an accurate approximation of the coupled-ODE model when N is large. Numerical computations are provided to corroborate the analysis.
Ebola vaccine 2014: remained problems to be answered
Somsri Wiwanitkit; Viroj Wiwanitkit
2015-01-01
Ebola virus outbreak in Africa in 2014 is a big global issue. The vaccine is the hope for management of the present outbreak of Ebola virus infection. There are several ongoing research efforts on new Ebola vaccines. In this short manuscript, we discuss and put forward specific remaining problems to be answered on this issue. Lack of complete knowledge of the new emerging virus, concerns from pharmaceutical companies, and proper trials of new vaccine candidates are the remaining problems to be further discussed in vaccinology.
On random age and remaining lifetime for populations of items
Finkelstein, M.; Vaupel, J.
2015-01-01
We consider items that are incepted into operation already having a random (initial) age, and define the corresponding remaining lifetime. We show that these lifetimes are identically distributed when the age distribution is equal to the equilibrium distribution of renewal theory. We then develop the population studies approach to the problem and generalize the setting in terms of stationary and stable populations of items. We obtain new stochastic comparisons for the corresponding population ages and remaining lifetimes that can be useful in applications. Copyright (c) 2014 John Wiley…
Anonymous
2010-01-01
The experimental analysis of 21 crude oil samples shows a good correlation between high molecular-weight hydrocarbon components (C40+) and viscosity. Forty-four remaining-oil samples extracted from oil sands of oilfield development coring wells were analyzed by high-temperature gas chromatography (HTGC) for the relative abundance of C21−, C21–C40 and C40+ hydrocarbons. The relationship between crude oil viscosity and C40+ (%) hydrocarbon abundance is used to predict the viscosity of remaining oil. The mobility characteristics of remaining oil, the properties of remaining oil, and the next displacement methods in reservoirs either water-flooded or polymer-flooded are studied with rock permeability, oil saturation of coring wells, etc. The experimental results show that the hydrocarbon composition, viscosity, and mobility of remaining oil from both polymer-flooding and water-flooding reservoirs are heterogeneous, especially the former. The relative abundance of C21− and C21–C40 hydrocarbons in polymer-flooding reservoirs is lower than that in water-flooding reservoirs, but with a higher abundance of C40+ hydrocarbons. It is therefore suggested that polymer flooding must have driven more C40− hydrocarbons out of the reservoir, which resulted in relatively enriched C40+, more viscous oils, and poorer mobility. Remaining oil in water-flooding reservoirs is dominated by moderate-viscosity oil with some low-viscosity oil, while polymer-flooding reservoirs mainly contained moderate-viscosity oil with some high-viscosity oil. In each oilfield and reservoir, the displacement methods for remaining oil and the viscosity and concentration of the polymer solution can be adjusted to keep the current viscosity of remaining oil and the mobility ratio in a favorable range. A new basis and new methods are suggested for the further development and enhanced oil recovery of remaining oil.
Inversion and approximation of Laplace transforms
Lear, W. M.
1980-01-01
A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.
Computing Functions by Approximating the Input
Goldberg, Mayer
2012-01-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…
Non-Linear Approximation of Bayesian Update
Litvinenko, Alexander
2016-06-23
We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and sampling error. We show that the famous Kalman update formula is a particular case of this update.
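The abstract notes that the Kalman update formula is a particular case of the Bayesian update. As a hypothetical illustration only (the scalar Gaussian case, not the paper's polynomial chaos construction), the Bayesian update of a Gaussian prior with a Gaussian observation reduces to the familiar Kalman formula:

```python
def kalman_update(prior_mean, prior_var, obs, obs_var):
    """Scalar Bayesian update with Gaussian prior and Gaussian likelihood;
    the posterior is again Gaussian, and the update is the Kalman formula."""
    gain = prior_var / (prior_var + obs_var)           # Kalman gain
    post_mean = prior_mean + gain * (obs - prior_mean) # shift toward the observation
    post_var = (1.0 - gain) * prior_var                # variance always shrinks
    return post_mean, post_var

m, v = kalman_update(0.0, 1.0, 2.0, 1.0)
print(m, v)  # 1.0 0.5: equal prior and observation variances split the difference
```

With equal variances the gain is 1/2, so the posterior mean lands halfway between prior mean and observation.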
Random Attractors of Stochastic Modified Boussinesq Approximation
郭春晓
2011-01-01
The Boussinesq approximation is a reasonable model for describing processes in the interiors of bodies in planetary physics. We refer to [1] and [2] for a derivation of the Boussinesq approximation, and to [3] for some related results on the existence and uniqueness of solutions.
Approximating a harmonizable isotropic random field
Randall J. Swift
2001-01-01
The class of harmonizable fields is a natural extension of the class of stationary fields. This paper considers a stochastic series approximation of a harmonizable isotropic random field. This approximation is useful for numerical simulation of such a field.
Regression with Sparse Approximations of Data
Noorzad, Pardis; Sturm, Bob L.
2012-01-01
We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by...
A case where BO Approximation breaks down
Anonymous
2007-01-01
The Born-Oppenheimer (BO) approximation is ubiquitous in molecular physics, quantum physics and quantum chemistry. However, CAS researchers recently observed a breakdown of the approximation in the reaction of fluorine with deuterium atoms. The result has been published in the August 24 issue of Science.
Two Point Pade Approximants and Duality
Banks, Tom
2013-01-01
We propose the use of two point Pade approximants to find expressions valid uniformly in coupling constant for theories with both weak and strong coupling expansions. In particular, one can use these approximants in models with a strong/weak duality, when the symmetries do not determine exact expressions for some quantity.
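As a hedged illustration of the general idea (with made-up expansion data, not any quantity from the paper), a [1/1] two-point Padé approximant can be matched simultaneously to a weak-coupling expansion at g = 0 and a strong-coupling limit at g → ∞:

```python
def two_point_pade(weak_slope, strong_limit):
    """[1/1] two-point Pade approximant R(g) = (1 + a1*g) / (1 + b1*g) with
    R(0) = 1, R'(0) = weak_slope, and R(g) -> strong_limit as g -> infinity.
    Matching conditions: a1 - b1 = weak_slope and a1/b1 = strong_limit
    (requires strong_limit != 1)."""
    b1 = weak_slope / (strong_limit - 1.0)
    a1 = strong_limit * b1
    return lambda g: (1.0 + a1 * g) / (1.0 + b1 * g)

# Hypothetical model: f(g) = 1 + g + O(g^2) weakly, f(g) -> 3 strongly
R = two_point_pade(1.0, 3.0)
print(R(0.0))   # 1.0: reproduces the weak-coupling value
print(R(1e9))   # approaches 3.0, the assumed strong-coupling limit
```

The resulting rational function interpolates uniformly in the coupling between the two regimes, which is the selling point of the two-point construction.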
Function Approximation Using Probabilistic Fuzzy Systems
J.H. van den Berg (Jan); U. Kaymak (Uzay); R.J. Almeida e Santos Nogueira (Rui Jorge)
2011-01-01
We consider function approximation by fuzzy systems. Fuzzy systems are typically used for approximating deterministic functions, in which the stochastic uncertainty is ignored. We propose probabilistic fuzzy systems in which the probabilistic nature of uncertainty is taken into account.
Approximation of the Inverse -Frame Operator
M R Abdollahpour; A Najati
2011-05-01
In this paper, we introduce the concept of (strong) projection method for -frames which works for all conditional -Riesz frames. We also derive a method for approximation of the inverse -frame operator which is efficient for all -frames. We show how the inverse of -frame operator can be approximated as close as we like using finite-dimensional linear algebra.
Nonlinear approximation with dictionaries I. Direct estimates
Gribonval, Rémi; Nielsen, Morten
2004-01-01
with algorithmic constraints: thresholding and Chebychev approximation classes are studied, respectively. We consider embeddings of the Jackson type (direct estimates) of sparsity spaces into the mentioned approximation classes. General direct estimates are based on the geometry of the Banach space, and we prove…
Approximations for stop-loss reinsurance premiums
Reijnen, Rajko; Albers, Willem/Wim; Kallenberg, W.C.M.
2005-01-01
Various approximations of stop-loss reinsurance premiums are described in the literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as to a quantitative criterion. For the aggregate claims two models are used…
Quirks of Stirling's Approximation
Macrae, Roderick M.; Allgeier, Benjamin M.
2013-01-01
Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
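A minimal sketch of the issue the article raises, assuming only the standard forms of Stirling's formula: the naive n ln n − n drops the ½ ln(2πn) term, and that omission is exactly what leads students astray.

```python
import math

def ln_factorial(n):
    """Exact ln(n!) via the log-gamma function: ln(n!) = lgamma(n + 1)."""
    return math.lgamma(n + 1)

def stirling_naive(n):
    """Truncated form usually quoted in entropy derivations: n ln n - n."""
    return n * math.log(n) - n

def stirling_full(n):
    """Full Stirling approximation: n ln n - n + 0.5 ln(2 pi n)."""
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

n = 100
print(ln_factorial(n))   # ~363.739
print(stirling_naive(n)) # ~360.517: off by ~3.2, the dropped 0.5*ln(2*pi*n)
print(stirling_full(n))  # ~363.739: accurate to about 1e-3 at n = 100
```

Even at n = 100 the naive form is off by about 3.2 in ln n!, which matters whenever the quantity of interest is not divided by n.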
INVARIANT RANDOM APPROXIMATION IN NONCONVEX DOMAIN
R. Shrivastava
2012-05-01
Random fixed point results in the setup of compact and weakly compact domains of Banach spaces which are not necessarily starshaped have been obtained in the present work. Invariant random approximation results have also been determined as its application. In this way, random versions of the invariant approximation results due to Mukherjee and Som [13] and Singh [17] have been given.
Approximability and Parameterized Complexity of Minmax Values
Hansen, Kristoffer Arnsfelt; Hansen, Thomas Dueholm; Miltersen, Peter Bro;
2008-01-01
We consider approximating the minmax value of a multi player game in strategic form. Tightening recent bounds by Borgs et al., we observe that approximating the value with a precision of ε log n digits (for any constant ε > 0) is NP-hard, where n is the size of the game. On the other hand...
Hardness of approximation for strip packing
Adamaszek, Anna Maria; Kociumaka, Tomasz; Pilipczuk, Marcin
2017-01-01
[SODA 2016] have recently proposed a (1.4 + ϵ)-approximation algorithm for this variant, thus showing that strip packing with polynomially bounded data can be approximated better than when exponentially large values are allowed in the input. Their result has subsequently been improved to a (4/3 + ϵ...
Approximations for stop-loss reinsurance premiums
Reijnen, Rajko; Albers, Willem; Kallenberg, Wilbert C.M.
2005-01-01
Various approximations of stop-loss reinsurance premiums are described in the literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as to a quantitative criterion. For the aggregate claims two models are used…
Approximations for stop-loss reinsurance premiums
Reijnen, R.; Albers, W.; Kallenberg, W.C.M.
2003-01-01
Various approximations of stop-loss reinsurance premiums are described in the literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as to a quantitative criterion. For the aggregate claims two models are used…
Lifetime of the Nonlinear Geometric Optics Approximation
Binzer, Knud Andreas
The subject of the thesis is to study a certain approximation method for highly oscillatory solutions to nonlinear partial differential equations.
Simple Lie groups without the approximation property
Haagerup, Uffe; de Laat, Tim
2013-01-01
For a locally compact group G, let A(G) denote its Fourier algebra, and let M0A(G) denote the space of completely bounded Fourier multipliers on G. The group G is said to have the Approximation Property (AP) if the constant function 1 can be approximated by a net in A(G) in the weak-∗ topology...
Neanderthal infant and adult infracranial remains from Marillac (Charente, France).
Dolores Garralda, María; Maureille, Bruno; Vandermeersch, Bernard
2014-09-01
At the site of Marillac, near the Ligonne River in Marillac-le-Franc (Charente, France), a remarkable stratigraphic sequence has yielded a wealth of archaeological information, palaeoenvironmental data, as well as faunal and human remains. Marillac must have been a sinkhole used by Neanderthal groups as a hunting camp during MIS 4 (TL date 57,600 ± 4,600 BP), where Quina Mousterian lithics and fragmented bones of reindeer predominate. This article describes three infracranial skeleton fragments. Two of them are from adults and consist of the incomplete shafts of a right radius (Marillac 24) and a left fibula (Marillac 26). The third fragment is the diaphysis of the right femur of an immature individual (Marillac 25), the size and shape of which resemble those from Teshik-Tash and could be assigned to a child of a similar age. The three fossils have been compared with the remains of other Neanderthals or anatomically Modern Humans (AMH). Furthermore, the comparison of the infantile femora, Marillac 25 and Teshik-Tash, with the remains of several European children from the early Middle Ages clearly demonstrates the robustness and rounded shape of both Neanderthal diaphyses. Evidence of peri-mortem manipulation has been identified on all three bones, with spiral fractures, percussion pits and, in the case of the radius and femur, unquestionable cutmarks made with flint implements, probably during defleshing. Traces of periostosis appear on the fibula fragment and on the immature femoral diaphysis, although their aetiology remains unknown.
Fossil remains of fungi, algae and other organisms from Jamaica
Germeraad, J.H.
1979-01-01
Fungal remains and other fossils from Cainophytic strata of Jamaica have been compared with species described in mycological and algological publications. In only a few cases have morphologically related taxa been encountered. The stratigraphic significance of these Jamaican fossils is unknown as
Holocene insect remains from south-western Greenland
Bøcher, Jens Jensenius; Bennike, Ole; Wagner, Bernd
2012-01-01
Remains of plants and invertebrates from Holocene deposits in south-western Greenland include a number of insect fragments from Heteroptera and Coleoptera. Some of the finds extend the known temporal range of the species considerably back in time, and one of the taxa has not previously been found… of terrestrial insects complement the scarce fossil Greenland record of the species concerned.
Remaining childless : Causes and consequences from a life course perspective
Keizer, R.
2010-01-01
Little is known about childless individuals in the Netherlands, although currently one out of every five Dutch individuals remains childless. Who are they? How did they end up being childless? How and to what extent are their life outcomes influenced by their childlessness? By focusing on individual
Methodology for Extraction of Remaining Sodium of Used Sodium Containers
Jung, Minhwan; Kim, Jongman; Cho, Youngil; Jeong, Jiyoung [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-05-15
Sodium used as a coolant in the SFR (Sodium-cooled Fast Reactor) reacts easily with most elements due to its high reactivity. If sodium at high temperature leaks outside of a system boundary and makes contact with oxygen, it starts to burn and toxic aerosols are produced. In addition, it generates flammable hydrogen gas through a reaction with water. Hydrogen gas can be explosive at concentrations as low as 4.75 vol%. Therefore, sodium should be handled carefully in accordance with standard procedures even when only a small amount of sodium remains inside the containers and drums used for experiments. After an experiment, all sodium experimental apparatuses should be dismantled carefully through a series of draining, residual sodium extraction, and cleaning steps if they are no longer to be reused. In this work, a system for the extraction of the remaining sodium from used sodium drums has been developed and an operation procedure for the system has been established, as part of sodium facility maintenance work. The sodium extraction system for the remaining sodium of the used drums was designed and tested successfully. This work will contribute to the establishment of sodium handling technology for the PGSFR (Prototype Gen-IV Sodium-cooled Fast Reactor).
Ancient DNA in human bone remains from Pompeii archaeological site.
Cipollaro, M; Di Bernardo, G; Galano, G; Galderisi, U; Guarino, F; Angelini, F; Cascino, A
1998-06-29
aDNA extraction and amplification procedures have been optimized for Pompeian human bone remains whose diagenesis has been determined by histological analysis. Single copy genes amplification (X and Y amelogenin loci and Y specific alphoid repeat sequences) have been performed and compared with anthropometric data on sexing.
Robotics to enable older adults to remain living at home.
Pearce, Alan J; Adair, Brooke; Miller, Kimberly; Ozanne, Elizabeth; Said, Catherine; Santamaria, Nick; Morris, Meg E
2012-01-01
Given the rapidly ageing population, interest is growing in robots to enable older people to remain living at home. We conducted a systematic review and critical evaluation of the scientific literature, from 1990 to the present, on the use of robots in aged care. The key research questions were as follows: (1) what is the range of robotic devices available to enable older people to remain mobile, independent, and safe? and, (2) what is the evidence demonstrating that robotic devices are effective in enabling independent living in community dwelling older people? Following database searches for relevant literature an initial yield of 161 articles was obtained. Titles and abstracts of articles were then reviewed by 2 independent people to determine suitability for inclusion. Forty-two articles met the criteria for question 1. Of these, 4 articles met the criteria for question 2. Results showed that robotics is currently available to assist older healthy people and people with disabilities to remain independent and to monitor their safety and social connectedness. Most studies were conducted in laboratories and hospital clinics. Currently limited evidence demonstrates that robots can be used to enable people to remain living at home, although this is an emerging smart technology that is rapidly evolving.
Iontophoresis generates an antimicrobial effect that remains after iontophoresis ceases.
Davis, C P; Wagle, N; Anderson, M. D. (University of Glasgow); Warren, M M
1992-01-01
Iontophoresis required chlorine-containing compounds in the medium for effective microbial population reduction and killing. After iontophoresis ceased, the antimicrobial effect generated by iontophoresis remained but slowly decreased. Antimicrobial effects of iontophoresis may be related to the generation of short-lived chlorine-containing compounds.
The Remains of the Day Under the Perspective of Adaptation
SUN Ling-ling
2015-01-01
The Remains of the Day is a Booker Prize-winning novel by Kazuo Ishiguro. Stevens is both the protagonist and the narrator of the novel; he restrains his feelings and has to live a life of regret and loss. This article provides a glimpse of its character and theme from the perspective of linguistic adaptation.
Identification of the remains of King Richard III
T.E. King (Turi E.); G.G. Fortes (Gloria Gonzalez); P. Balaresque (Patricia); M.G. Thomas (Mark); D.J. Balding (David); P.M. Delser (Pierpaolo Maisano); R. Neumann (Rita); W. Parson (Walther); M. Knapp (Michael); S. Walsh (Susan); L. Tonasso (Laure); J. Holt (John); M.H. Kayser (Manfred); J. Appleby (Jo); P. Forster (Peter); D. Ekserdjian (David); M. Hofreiter (Michael); K. Schürer (Kevin)
2014-01-01
In 2012, a skeleton was excavated at the presumed site of the Grey Friars friary in Leicester, the last-known resting place of King Richard III. Archaeological, osteological and radiocarbon dating data were consistent with these being his remains. Here we report DNA analyses of both the
Robotics to Enable Older Adults to Remain Living at Home
Alan J. Pearce
2012-01-01
Given the rapidly ageing population, interest is growing in robots to enable older people to remain living at home. We conducted a systematic review and critical evaluation of the scientific literature, from 1990 to the present, on the use of robots in aged care. The key research questions were as follows: (1) what is the range of robotic devices available to enable older people to remain mobile, independent, and safe? and, (2) what is the evidence demonstrating that robotic devices are effective in enabling independent living in community dwelling older people? Following database searches for relevant literature an initial yield of 161 articles was obtained. Titles and abstracts of articles were then reviewed by 2 independent people to determine suitability for inclusion. Forty-two articles met the criteria for question 1. Of these, 4 articles met the criteria for question 2. Results showed that robotics is currently available to assist older healthy people and people with disabilities to remain independent and to monitor their safety and social connectedness. Most studies were conducted in laboratories and hospital clinics. Currently limited evidence demonstrates that robots can be used to enable people to remain living at home, although this is an emerging smart technology that is rapidly evolving.
An improved proximity force approximation for electrostatics
Fosco, C D; Mazzitelli, F D
2012-01-01
A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called "proximity force approximation" the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs, the contributions of which are approximated as those of pairs of parallel planes. This approximation has been widely and successfully applied to different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful to discuss the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction…
Approximate Furthest Neighbor in High Dimensions
Pagh, Rasmus; Silvestri, Francesco; Sivertsen, Johan von Tangen;
2015-01-01
Much recent work has been devoted to approximate nearest neighbor queries. Motivated by applications in recommender systems, we consider approximate furthest neighbor (AFN) queries. We present a simple, fast, and highly practical data structure for answering AFN queries in high-dimensional Euclidean space. We build on the technique of Indyk (SODA 2003), storing random projections to provide sublinear query time for AFN. However, we introduce a different query algorithm, improving on Indyk's approximation factor and reducing the running time by a logarithmic factor. We also present a variation…
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...
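As an illustrative sketch only (a toy Robbins-Monro recursion with Polyak-style averaging of the trajectory, not the SAMCMC algorithm of the paper), the idea of the trajectory averaging estimator is that the running mean of the noisy iterates is a better estimate than the last iterate:

```python
import random

def robbins_monro_averaged(target=0.7, steps=20000, seed=1):
    """Toy Robbins-Monro scheme: find theta solving E[h(theta)] = 0 where
    h(theta) = theta - target + noise. The trajectory average of the iterates
    is maintained alongside the raw iterate."""
    random.seed(seed)
    theta, avg = 0.0, 0.0
    for n in range(1, steps + 1):
        noisy_obs = (theta - target) + random.gauss(0, 1)
        theta -= (1.0 / n ** 0.7) * noisy_obs  # slowly decaying gain sequence
        avg += (theta - avg) / n               # running trajectory average
    return theta, avg

theta, avg = robbins_monro_averaged()
print(theta, avg)  # both settle near the root 0.7
```

The gain exponent 0.7 is an arbitrary choice in (1/2, 1); with such gains, averaging the trajectory typically yields a smoother estimate than the final iterate, which is the asymptotic-efficiency phenomenon the paper studies in the MCMC setting.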
Approximating maximum clique with a Hopfield network.
Jagota, A
1995-01-01
In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics, both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic.
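The kind of greedy heuristic the abstract alludes to can be illustrated with a generic highest-degree greedy clique construction. This is a sketch of the general technique only; the paper's actual contribution is the Hopfield-network dynamics, which is not reproduced here.

```python
def greedy_clique(adj):
    """Grow a clique by repeatedly adding the eligible vertex with the most
    neighbors among the still-eligible vertices; 'adj' maps each vertex to
    its neighbor set. Returns a maximal (not necessarily maximum) clique."""
    clique = set()
    eligible = set(adj)
    while eligible:
        # Pick the eligible vertex best connected within the eligible set.
        v = max(eligible, key=lambda u: len(adj[u] & eligible))
        clique.add(v)
        eligible &= adj[v]  # keep only common neighbors of the clique
    return clique

# Triangle {0, 1, 2} plus a pendant vertex 3 attached to vertex 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(sorted(greedy_clique(adj)))
```

The result is always a maximal clique (no vertex outside it can be added), which is exactly the structure of the Hopfield network's stable states described above.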
A two parameter ratio-product-ratio estimator using auxiliary information
Chami, Peter S; Thomas, Doneal
2012-01-01
We propose a two parameter ratio-product-ratio estimator for a finite population mean in a simple random sample without replacement following the methodology in Ray and Sahai (1980), Sahai and Ray (1980), Sahai and Sahai (1985) and Singh and Ruiz Espejo (2003). The bias and mean square error of our proposed estimator are obtained to the first degree of approximation. We derive conditions for the parameters under which the proposed estimator has smaller mean square error than the sample mean, ratio and product estimators. We carry out an application showing that the proposed estimator outperforms the traditional estimators using groundwater data taken from a geological site in the state of Florida.
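For context, the baseline estimators the abstract compares against can be sketched as follows. These are the classical ratio and product estimators using an auxiliary variable x with known population mean; the paper's two-parameter ratio-product-ratio estimator blends forms like these, and its exact expression is not reproduced here.

```python
def mean(v):
    return sum(v) / len(v)

def ratio_estimator(y, x, X_bar):
    """Ratio estimator y_bar * (X_bar / x_bar); tends to beat the plain
    sample mean when y and x are positively correlated."""
    return mean(y) * X_bar / mean(x)

def product_estimator(y, x, X_bar):
    """Product estimator y_bar * (x_bar / X_bar); suited to negative
    correlation between y and x."""
    return mean(y) * mean(x) / X_bar

# Toy sample (values are illustrative only).
y, x = [2.0, 4.0], [1.0, 2.0]
X_bar = 2.0  # known population mean of the auxiliary variable
print(ratio_estimator(y, x, X_bar), product_estimator(y, x, X_bar))  # 4.0 2.25
```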
A systematic sequence of relativistic approximations.
Dyall, Kenneth G
2002-06-01
An approach to the development of a systematic sequence of relativistic approximations is reviewed. The approach depends on the atomically localized nature of relativistic effects, and is based on the normalized elimination of the small component in the matrix modified Dirac equation. Errors in the approximations are assessed relative to four-component Dirac-Hartree-Fock calculations or other reference points. Projection onto the positive energy states of the isolated atoms provides an approximation in which the energy-dependent parts of the matrices can be evaluated in separate atomic calculations and implemented in terms of two sets of contraction coefficients. The errors in this approximation are extremely small, of the order of 0.001 pm in bond lengths and tens of microhartrees in absolute energies. From this approximation it is possible to partition the atoms into relativistic and nonrelativistic groups and to treat the latter with the standard operators of nonrelativistic quantum mechanics. This partitioning is shared with the relativistic effective core potential approximation. For atoms in the second period, errors in the approximation are of the order of a few hundredths of a picometer in bond lengths and less than 1 kJ mol(-1) in dissociation energies; for atoms in the third period, errors are a few tenths of a picometer and a few kilojoule/mole, respectively. A third approximation for scalar relativistic effects replaces the relativistic two-electron integrals with the nonrelativistic integrals evaluated with the atomic Foldy-Wouthuysen coefficients as contraction coefficients. It is similar to the Douglas-Kroll-Hess approximation, and is accurate to about 0.1 pm and a few tenths of a kilojoule/mole. The integrals in all the approximations are no more complicated than the integrals in the full relativistic methods, and their derivatives are correspondingly easy to formulate and evaluate.
Krasilnikov, M. B., E-mail: mihail.krasilnikov@gmail.com; Kudryavtsev, A. A. [St. Petersburg State University, St. Petersburg 198504 (Russian Federation); Kapustin, K. D. [St. Petersburg University ITMO, St. Petersburg 197101 (Russian Federation)
2014-12-15
It is shown that the validity of the local approximation for computing the electron distribution function depends both on the ratio between the energy relaxation length and a characteristic plasma length and on the ratio between the heating and ambipolar electric fields. In particular, the local approximation is not valid at the discharge periphery even at high pressure, because the ambipolar electric field is practically always larger than the heating electric field there.
k-Edge-Connectivity: Approximation and LP Relaxation
Pritchard, David
2010-01-01
This paper's focus is the following family of problems, denoted k-ECSS, where k denotes a positive integer: given a graph (V, E) and costs for each edge, find a minimum-cost subset F of E such that (V, F) is k-edge-connected. For k=1 it is the spanning tree problem which is in P; for every other k it is APX-hard and has a 2-approximation. Moreover, assuming P != NP, it is known that for unit costs, the best possible approximation ratio is 1 + Theta(1/k) for k>1. Our first main result is to determine the analogous asymptotic ratio for general costs: we show there is a constant eps>0 so that for all k>1, finding a (1+eps)-approximation for k-ECSS is NP-hard. Thus we establish a gap between the unit-cost and general-cost versions, for large enough k. Next, we consider the multi-subgraph cousin of k-ECSS, in which we are allowed to buy arbitrarily many copies of any edges (i.e., F is now a multi-subset of E, with parallel copies having the same cost as the original edge). Not so much is known about this natural v...
Frankenstein's glue: transition functions for approximate solutions
Yunes, Nicolás
2007-09-01
Approximations are commonly employed to find approximate solutions to the Einstein equations. These solutions, however, are usually only valid in some specific spacetime region. A global solution can be constructed by gluing approximate solutions together, but this procedure is difficult because discontinuities can arise, leading to large violations of the Einstein equations. In this paper, we provide an attempt to formalize this gluing scheme by studying transition functions that join approximate analytic solutions together. In particular, we propose certain sufficient conditions on these functions and prove that these conditions guarantee that the joined solution still satisfies the Einstein equations analytically to the same order as the approximate ones. An example is also provided for a binary system of non-spinning black holes, where the approximate solutions are taken to be given by a post-Newtonian expansion and a perturbed Schwarzschild solution. For this specific case, we show that if the transition functions satisfy the proposed conditions, then the joined solution does not contain any violations to the Einstein equations larger than those already inherent in the approximations. We further show that if these functions violate the proposed conditions, then the matter content of the spacetime is modified by the introduction of a matter shell, whose stress energy tensor depends on derivatives of these functions.
The tendon approximator device in traumatic injuries.
Forootan, Kamal S; Karimi, Hamid; Forootan, Nazilla-Sadat S
2015-01-01
Precise and tension-free approximation of two tendon endings is the key predictor of outcomes following tendon lacerations and repairs. We evaluate the efficacy of a new tendon approximator device in tendon laceration repairs. In a comparative study, we used our new tendon approximator device in 99 consecutive patients with lacerations of 266 tendons who attended a university hospital, and evaluated the operative time to repair the tendons, surgeons' satisfaction, and patients' outcomes in a long-term follow-up. Data were compared with the data of control patients undergoing tendon repair by the conventional method. In total, 266 tendons were repaired with the approximator device and 199 tendons by the conventional technique. 78.7% of patients in the first group were male and 21.2% were female. In the approximator group, 38% of patients had secondary repair of cut tendons and 62% had primary repair. Patients were followed for a mean period of 3 years (14-60 months). Time required for repair of each tendon was significantly reduced with the approximator device (2 min vs. 5.5 min). Outcomes of tendon repair were identical in the two groups and were not significantly different. 1% of tendons in group A and 1.2% in group B had rupture, a difference that was not significant. The new tendon approximator device is cheap, feasible to use, and reduces the time of tendon repair, with outcomes comparable to the conventional methods.
Dental DNA fingerprinting in identification of human remains
K L Girish
2010-01-01
The recent advances in molecular biology have revolutionized all aspects of dentistry. DNA, the language of life, yields information beyond our imagination, both in health and disease. DNA fingerprinting is a tool used to unravel the mysteries associated with the oral cavity and its manifestations during diseased conditions. It is being increasingly used in analyzing various scenarios related to forensic science. The technical advances in molecular biology have propelled the analysis of DNA into routine usage in crime laboratories for rapid and early diagnosis. DNA is an excellent means for identification of unidentified human remains. As dental pulp is surrounded by dentin and enamel, which form a dental armor, it offers the best source of DNA for reliable genetic typing in forensic science. This paper summarizes the recent literature on use of this technique in identification of unidentified human remains.
Mineral remains of early life on Earth? On Mars?
Iberall, Robbins E.; Iberall, A.S.
1991-01-01
The oldest sedimentary rocks on Earth, the 3.8-Ga Isua Iron-Formation in southwestern Greenland, are metamorphosed past the point where organic-walled fossils would remain. Acid residues and thin sections of these rocks reveal ferric microstructures that have filamentous, hollow rod, and spherical shapes not characteristic of crystalline minerals. Instead, they resemble ferric-coated remains of bacteria. Because there are no earlier sedimentary rocks to study on Earth, it may be necessary to expand the search elsewhere in the solar system for clues to any biotic precursors or other types of early life. A study of morphologies of iron oxide minerals collected in the southern highlands during a Mars sample return mission may therefore help to fill in important gaps in the history of Earth's earliest biosphere. -from Authors
The impact of downsizing on remaining workers' sickness absence.
Østhus, Ståle; Mastekaasa, Arne
2010-10-01
It is generally assumed that organizational downsizing has considerable negative consequences, not only for workers that are laid off, but also for those who remain employed. The empirical evidence with regard to effects on sickness absence is, however, inconsistent. This study employs register data covering a major part of the total workforce in Norway over the period 2000-2003. The number of sickness absence episodes and the number of sickness absence days are analysed by means of Poisson regression. To control for both observed and unobserved stable individual characteristics, we use conditional (fixed effects) estimation. The analyses provide some weak indications that downsizing may lead to slightly less sickness absence, but the overall impression is that downsizing has few if any effects on the sickness absence of the remaining employees.
Holocene insect remains from south-western Greenland
Bøcher, Jens Jensenius; Bennike, Ole; Wagner, Bernd
2012-01-01
Remains of plants and invertebrates from Holocene deposits in south-western Greenland include a number of insect fragments from Heteroptera and Coleoptera. Some of the finds extend the known temporal range of the species considerably back in time, and one of the taxa has not previously been found in Greenland, either fossil or extant. The fossil fauna includes the weevil Rutidosoma globulus, which is at present extremely rare in Greenland. Its rarity might indicate that it is a recent immigrant, but the fossil finds provide a minimum date for its arrival at around 5840 cal. years B.P. Other remains of terrestrial insects complement the scarce fossil Greenland record of the species concerned.
Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Kneringer, E; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A 
W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G 
A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Roussarie, A; Schuller, J P; Schwindling, J; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, Z; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G
1996-01-01
From 64492 selected $\tau$-pair events produced at the $Z^0$ resonance, the measurement of tau decays into hadrons from a global analysis using 1991, 1992 and 1993 ALEPH data is presented. Special emphasis is given to the reconstruction of photons and $\pi^0$'s, and the removal of fake photons. A detailed study of the systematics entering the $\pi^0$ reconstruction is also given. A complete and consistent set of tau hadronic branching ratios is presented for 18 exclusive modes. Most measurements are more precise than the present world average. The new level of precision reached allows a stringent test of $\tau$-$\mu$ universality in hadronic decays, $g_\tau/g_\mu = 1.0013 \pm 0.0095$, and the first measurement of the vector and axial-vector contributions to the non-strange hadronic $\tau$ decay width: $R_{\tau,V} = 1.788 \pm 0.025$ and $R_{\tau,A} = 1.694 \pm 0.027$. The ratio $(R_{\tau,V} - R_{\tau,A})/(R_{\tau,V} + R_{\tau,A})$, equal to $(2.7 \pm 1.3)\%$, is a measure of the importance of Q...
Comprehensive analysis of microorganisms accompanying human archaeological remains.
Philips, Anna; Stolarek, Ireneusz; Kuczkowska, Bogna; Juras, Anna; Handschuh, Luiza; Piontek, Janusz; Kozlowski, Piotr; Figlerowicz, Marek
2017-07-01
Metagenome analysis has become a common source of information about microbial communities that occupy a wide range of niches, including archaeological specimens. It has been shown that the vast majority of DNA extracted from ancient samples comes from bacteria (presumably modern contaminants). However, characterization of microbial DNA accompanying human remains has never been done systematically for a wide range of different samples. We used metagenomic approaches to perform comparative analyses of microorganism communities present in 161 archaeological human remains. DNA samples were isolated from the teeth of human skeletons dated from 100 AD to 1200 AD. The skeletons were collected from 7 archaeological sites in Central Europe and stored under different conditions. The majority of identified microbes were ubiquitous environmental bacteria that most likely contaminated the host remains not long ago. We observed that the composition of microbial communities was sample-specific and not correlated with its temporal or geographical origin. Additionally, traces of bacteria and archaea typical for human oral/gut flora, as well as potential pathogens, were identified in two-thirds of the samples. The genetic material of human-related species, in contrast to the environmental species that accounted for the majority of identified bacteria, displayed DNA damage patterns comparable with endogenous human ancient DNA, which suggested that these microbes might have accompanied the individual before death. Our study showed that the microbiome observed in an individual sample is not reliant on the method or duration of sample storage. Moreover, shallow sequencing of DNA extracted from ancient specimens and subsequent bioinformatics analysis allowed both the identification of ancient microbial species, including potential pathogens, and their differentiation from contemporary species that colonized human remains more recently. © The Authors 2017. Published by Oxford University
Middle Paleolithic and Uluzzian human remains from Fumane Cave, Italy.
Benazzi, Stefano; Bailey, Shara E; Peresani, Marco; Mannino, Marcello A; Romandini, Matteo; Richards, Michael P; Hublin, Jean-Jacques
2014-05-01
The site of Fumane Cave (western Lessini Mountains, Italy) contains a stratigraphic sequence spanning the Middle to early Upper Paleolithic. During excavations from 1989 to 2011, four human teeth were unearthed from the Mousterian (Fumane 1, 4, 5) and Uluzzian (Fumane 6) levels of the cave. In this contribution, we provide the first morphological description and morphometric analysis of the dental remains. All of the human remains, except for Fumane 6, are deciduous teeth. Based on metric data (crown and cervical outline analysis, and lateral enamel thickness) and non-metric dental traits (e.g., mid-trigonid crest), Fumane 1 (lower left second deciduous molar) clearly belongs to a Neandertal. For Fumane 4 (upper right central deciduous incisor), the taxonomic attribution is difficult due to heavy incisal wear. Some morphological features observed in Fumane 5 (lower right lateral deciduous incisor), coupled with the large size of the tooth, support Neandertal affinity. Fumane 6, a fragment of a permanent molar, does not show any morphological features useful for taxonomic discrimination. The human teeth from Fumane Cave increase the sample of Italian fossil remains, and emphasize the need to develop new methods to extract meaningful taxonomic information from deciduous and worn teeth. Copyright © 2014 Elsevier Ltd. All rights reserved.
Mineral content of the dentine remaining after chemomechanical caries removal.
Yip, H K; Beeley, J A; Stevenson, A G
1995-01-01
Although the dentine remaining after chemomechanical caries removal appears sound by normal clinical criteria, no definitive evidence has yet been obtained to confirm that the dentine surface is in fact mineralised. The aim of this study was to use backscattered electron (BSE) imaging and electron probe micro-analysis (EPMA) to ascertain the level of mineralisation of the dentine remaining in cavities prepared by this technique. Carious dentine was removed from carious lesions by means of N-monochloro-DL-2-aminobutyric acid (NMAB) or NMAB containing 2 mol/l urea. Sections of teeth in which caries removal was complete by normal clinical criteria were examined by EPMA and BSE. Dentine adjacent to the pulp was found to be less mineralised than the surrounding dentine. Although the superficial layer of dentine remaining on the cavity floors frequently appeared to have a slightly reduced mineral content, the results clearly indicated that there was no significant difference between this dentine and the underlying sound dentine.
After 3 years of starvation: duodenum swallowed remaining stomach.
Hillenbrand, Andreas; Waidner, Uta; Henne-Bruns, Doris; Maria Wolf, Anna; Buttenschoen, Klaus
2009-05-01
A 42-year-old morbidly obese patient (BMI 44.1 kg/m²) was admitted to our emergency room with upper abdominal pain, nausea, and cholestasis. Nine years ago, a vertical banded gastroplasty had been performed (former BMI 53.5 kg/m²) with a subsequent weight loss to BMI 33.0 kg/m². After regaining weight up to a BMI of 47.6 kg/m², 5 years ago a conversion to a gastric bypass was realized. A computed tomography of the abdomen showed an invagination of the remaining stomach into the duodenum causing obstruction of the orifice of the common bile duct. The patient underwent an open desinvagination of the intussusception and resection of the remaining stomach. Gastroduodenal intussusception is rare and mostly secondary to gastric lipoma. To prevent this rare but serious complication, the remaining stomach could be fixed at the crura of the diaphragm, tagged to the anterior abdominal wall by a temporary gastrostomy tube, or resected.
Remaining useful tool life predictions in turning using Bayesian inference
Jaydeep M. Karandikar
2013-01-01
Tool wear is an important factor in determining machining productivity. In this paper, tool wear is characterized by remaining useful tool life in a turning operation and is predicted using spindle power and a random sample path method of Bayesian inference. Turning tests are performed at different speeds and feed rates using a carbide tool and MS309 steel work material. The spindle power and the tool flank wear are monitored during cutting; the root mean square of the time domain power is found to be sensitive to tool wear. Sample root mean square power growth curves are generated and the probability of each curve being the true growth curve is updated using Bayes’ rule. The updated probabilities are used to determine the remaining useful tool life. Results show good agreement between the predicted tool life and the empirically-determined true remaining life. The proposed method takes into account the uncertainty in tool life and the growth of the root mean square power at the end of tool life and is, therefore, robust and reliable.
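The sample-path Bayes update described above can be sketched in a few lines: maintain candidate power-growth curves, reweight them by Bayes' rule as readings arrive, and read off remaining life from the posterior. The curve shapes, noise level, and failure threshold below are illustrative assumptions, not the paper's experimental values.

```python
import math

def bayes_update(priors, predictions, observed, sigma=0.1):
    """One Bayes-rule step: weight each candidate curve by the Gaussian
    likelihood of the observed power given that curve's prediction."""
    likes = [math.exp(-0.5 * ((observed - p) / sigma) ** 2) for p in predictions]
    post = [pr * li for pr, li in zip(priors, likes)]
    z = sum(post)
    return [p / z for p in post]

# Three candidate linear power-growth sample paths: power(t) = rate * t.
rates = [0.5, 1.0, 2.0]
probs = [1 / 3] * 3                    # uniform prior over candidate curves
for t, obs in [(1, 1.05), (2, 1.9)]:   # noisy readings consistent with rate 1.0
    probs = bayes_update(probs, [r * t for r in rates], obs)

threshold = 10.0  # assumed RMS power level at end of tool life
# Posterior-mean remaining useful life at t = 2: E[threshold/rate - t].
rul = sum(p * (threshold / r - 2) for p, r in zip(probs, rates))
print([round(p, 3) for p in probs], round(rul, 2))
```

After two observations the posterior concentrates on the rate-1.0 curve, so the predicted remaining life approaches 10/1.0 - 2 = 8 time units; the posterior spread quantifies the tool-life uncertainty the abstract emphasizes.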
DIFFERENCE SCHEMES BASED ON COEFFICIENT APPROXIMATION
MOU Zong-ze; LONG Yong-xing; QU Wen-xiao
2005-01-01
For variable-coefficient differential equations, schemes that approximate the coefficient function are more accurate than schemes that freeze the coefficient as a constant on each discrete subinterval. Difference schemes constructed from a Taylor-expansion approximation of the solution usually do not suit solutions with sharp features. By introducing local bases combined with coefficient-function approximation, the resulting difference schemes can depict more complex physical phenomena with sharp behavior, such as boundary layers and high-frequency oscillation. Numerical tests show the method is more effective than the traditional one.
Approximate equivalence in von Neumann algebras
DING, Huiru; Hadwin, Don
2005-01-01
One formulation of D. Voiculescu's theorem on approximate unitary equivalence is that two unital representations π and ρ of a separable C*-algebra are approximately unitarily equivalent if and only if rank∘π = rank∘ρ. We study the analog when the ranges of π and ρ are contained in a von Neumann algebra R, the unitaries inducing the approximate equivalence must come from R, and "rank" is replaced with "R-rank" (defined as the Murray-von Neumann equivalence of the range projection).
Approximation of free-discontinuity problems
Braides, Andrea
1998-01-01
Functionals involving both volume and surface energies have a number of applications ranging from Computer Vision to Fracture Mechanics. In order to tackle numerical and dynamical problems linked to such functionals many approximations by functionals defined on smooth functions have been proposed (using high-order singular perturbations, finite-difference or non-local energies, etc.) The purpose of this book is to present a global approach to these approximations using the theory of gamma-convergence and of special functions of bounded variation. The book is directed to PhD students and researchers in calculus of variations, interested in approximation problems with possible applications.
Mathematical analysis, approximation theory and their applications
Gupta, Vijay
2016-01-01
Designed for graduate students, researchers, and engineers in mathematics, optimization, and economics, this self-contained volume presents theory, methods, and applications in mathematical analysis and approximation theory. Specific topics include: approximation of functions by linear positive operators with applications to computer aided geometric design, numerical analysis, optimization theory, and solutions of differential equations. Recent and significant developments in approximation theory, special functions and q-calculus along with their applications to mathematics, engineering, and social sciences are discussed and analyzed. Each chapter enriches the understanding of current research problems and theories in pure and applied research.
Regression with Sparse Approximations of Data
Noorzad, Pardis; Sturm, Bob L.
2012-01-01
We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of $k$-nearest neighbors regression ($k$-NNR), and more generally, local polynomial kernel regression. Unlike $k$-NNR, however, SPARROW can adapt the number of regressors to use based...
Orthorhombic rational approximants for decagonal quasicrystals
S Ranganathan; Anandh Subramaniam
2003-10-01
An important exercise in the study of rational approximants is to derive their metric, especially in relation to the corresponding quasicrystal or the underlying clusters. Kuo’s model has been the widely accepted model to calculate the metric of the decagonal approximants. Using an alternate model, the metric of the approximants and other complex structures with the icosahedral cluster are explained elsewhere. In this work a comparison is made between the two models bringing out their equivalence. Further, using the concept of average lattices, a modified model is proposed.
Approximation of the semi-infinite interval
A. McD. Mercer
1980-01-01
The approximation of a function f∈C[a,b] by Bernstein polynomials is well known. It is based on the binomial distribution. O. Szasz has shown that there are analogous approximations on the interval [0,∞) based on the Poisson distribution. Recently R. Mohapatra has generalized Szasz's result to the case in which the approximating function is $\alpha e^{-ux}\sum_{k=N}^{\infty}\frac{(ux)^{k\alpha+\beta-1}}{\Gamma(k\alpha+\beta)}\,f\!\left(\frac{k\alpha}{u}\right)$. The present note shows that these results are special cases of a Tauberian theorem for certain infinite series having positive coefficients.
Malleability of the approximate number system: effects of feedback and training
Nicholas Kurshan DeWind
2012-04-01
Prior research demonstrates that animals and humans share an approximate number system (ANS), characterized by ratio dependence, and that the precision of this system increases substantially over human development. The goal of the present research was to investigate the malleability of the ANS (as measured by Weber fraction) in adult subjects in response to feedback, and to explore the relationship between ANS acuity and acuity on another magnitude comparison task. We tested each of 20 subjects over six 1-hour sessions. The main findings were that (a) Weber fractions rapidly decreased when trial-by-trial feedback was introduced in the second session and remained stable over continued training, (b) Weber fractions remained steady when trial-by-trial feedback was removed in session six, (c) Weber fractions from the number comparison task were positively correlated with Weber fractions from a line-length comparison task, (d) improvement in Weber fractions in response to feedback for the number task did not transfer to the line-length task, and (e) the precision of the ANS was positively correlated with math, but not verbal, SAT or GRE scores. Potential neural correlates of the perceptual information and decision processes are considered, and predictions regarding the neural correlates of ANS malleability are discussed.
Approximability of the discrete Fréchet distance
Karl Bringmann
2015-12-01
The Fréchet distance is a popular and widespread distance measure for point sequences and for curves. About two years ago, Agarwal et al. [SIAM J. Comput. 2014] presented a new (mildly) subquadratic algorithm for the discrete version of the problem. This spawned a flurry of activity that has led to several new algorithms and lower bounds. In this paper, we study the approximability of the discrete Fréchet distance. Building on a recent result by Bringmann [FOCS 2014], we present a new conditional lower bound showing that strongly subquadratic algorithms for the discrete Fréchet distance are unlikely to exist, even in the one-dimensional case and even if the solution may be approximated up to a factor of 1.399. This raises the question of how well we can approximate the Fréchet distance (of two given $d$-dimensional point sequences of length $n$) in strongly subquadratic time. Previously, no general results were known. We present the first such algorithm by analysing the approximation ratio of a simple, linear-time greedy algorithm to be $2^{\Theta(n)}$. Moreover, we design an $\alpha$-approximation algorithm that runs in time $O(n\log n + n^2/\alpha)$, for any $\alpha\in [1, n]$. Hence, an $n^\varepsilon$-approximation of the Fréchet distance can be computed in strongly subquadratic time, for any $\varepsilon > 0$.
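A linear-time greedy alignment of the kind analysed in this abstract can be sketched as follows; this is a generic sketch (function and variable names are mine, not from the paper): walk both sequences simultaneously, always taking the cheapest advance, and return the largest pairwise distance paid, which is an upper bound on the discrete Fréchet distance.

```python
import math

def greedy_frechet(P, Q):
    """Greedy upper bound on the discrete Frechet distance of two
    point sequences P and Q (lists of coordinate tuples).  At each
    step, advance whichever pointer (or both) minimizes the next
    pairwise distance; the answer is the largest distance ever paid."""
    d = math.dist  # Euclidean distance (Python 3.8+)
    i = j = 0
    best = d(P[0], Q[0])
    while i < len(P) - 1 or j < len(Q) - 1:
        # Candidate moves: advance in P, in Q, or in both.
        moves = []
        if i < len(P) - 1:
            moves.append((d(P[i + 1], Q[j]), i + 1, j))
        if j < len(Q) - 1:
            moves.append((d(P[i], Q[j + 1]), i, j + 1))
        if i < len(P) - 1 and j < len(Q) - 1:
            moves.append((d(P[i + 1], Q[j + 1]), i + 1, j + 1))
        step, i, j = min(moves)
        best = max(best, step)
    return best
```

Per the abstract, the approximation ratio of such a greedy strategy is $2^{\Theta(n)}$, i.e. the bound can be exponentially loose in the worst case even though each step looks locally optimal.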
An overview on Approximate Bayesian computation*
Baragatti Meïli
2014-01-01
Approximate Bayesian computation techniques, also called likelihood-free methods, are among the most satisfactory approaches to intractable likelihood problems. This overview presents recent results since their introduction about ten years ago in population genetics.
Trigonometric Approximations for Some Bessel Functions
Muhammad Taher Abuelma'atti
1999-01-01
Formulas are obtained for approximating the tabulated Bessel functions Jn(x), n = 0–9 in terms of trigonometric functions. These formulas can be easily integrated and differentiated and are convenient for personal computers and pocket calculators.
Low Rank Approximation Algorithms, Implementation, Applications
Markovsky, Ivan
2012-01-01
Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...
An Approximate Expression for Viscosity of Nanosuspensions
Domostroeva, N G
2009-01-01
We consider liquid suspensions with dispersed nanoparticles. Using two-point Padé approximants and combining results of both hydrodynamic and molecular dynamics methods, we obtain the effective viscosity for any diameter of nanoparticles.
Staying thermal with Hartree ensemble approximations
Salle, Mischa E-mail: msalle@science.uva.nl; Smit, Jan E-mail: jsmit@science.uva.nl; Vink, Jeroen C. E-mail: jcvink@science.uva.nl
2002-03-25
We study thermal behavior of a recently introduced Hartree ensemble approximation, which allows for non-perturbative inhomogeneous field configurations as well as for approximate thermalization, in the φ{sup 4} model in 1+1 dimensions. Using ensembles with a free field thermal distribution as out-of-equilibrium initial conditions we determine thermalization time scales. The time scale for which the system stays in approximate quantum thermal equilibrium is an indication of the time scales for which the approximation method stays reasonable. This time scale turns out to be two orders of magnitude larger than the time scale for thermalization, in the range of couplings and temperatures studied. We also discuss simplifications of our method which are numerically more efficient and make a comparison with classical dynamics.
Approximations of solutions to retarded integrodifferential equations
Dhirendra Bahuguna
2004-11-01
In this paper we consider a retarded integrodifferential equation and prove existence, uniqueness and convergence of approximate solutions. We also give some examples to illustrate the applications of the abstract results.
Approximation methods in gravitational-radiation theory
Will, C. M.
1986-02-01
The observation of gravitational-radiation damping in the binary pulsar PSR 1913+16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. The author summarizes recent developments in two areas in which approximations are important: (1) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (2) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
Approximate developments for surfaces of revolution
Mădălina Roxana Buneci
2016-12-01
The purpose of this paper is to provide a set of Maple procedures to construct approximate developments of a general surface of revolution, generalizing the well-known gore method for the sphere.
Methods of Fourier analysis and approximation theory
Tikhonov, Sergey
2016-01-01
Different facets of interplay between harmonic analysis and approximation theory are covered in this volume. The topics included are Fourier analysis, function spaces, optimization theory, partial differential equations, and their links to modern developments in the approximation theory. The articles of this collection were originated from two events. The first event took place during the 9th ISAAC Congress in Krakow, Poland, 5th-9th August 2013, at the section “Approximation Theory and Fourier Analysis”. The second event was the conference on Fourier Analysis and Approximation Theory in the Centre de Recerca Matemàtica (CRM), Barcelona, during 4th-8th November 2013, organized by the editors of this volume. All articles selected to be part of this collection were carefully reviewed.
Seismic wave extrapolation using lowrank symbol approximation
Fomel, Sergey
2012-04-30
We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction involves Fourier transforms in space combined with a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.
Approximate Flavor Symmetry in Supersymmetric Model
Tao, Zhijian
1998-01-01
We investigate the maximal approximate flavor symmetry in the framework of the generic minimal supersymmetric standard model. We consider the low-energy effective theory of flavor physics with all possible operators included. Spontaneous flavor symmetry breaking leads to the approximate flavor symmetry in the Yukawa sector and the supersymmetry breaking sector. Fermion mass and mixing hierarchies are the results of the hierarchy of the flavor symmetry breaking. It is found that in this theory i...
Pointwise approximation by elementary complete contractions
Magajna, Bojan
2009-01-01
A complete contraction on a C*-algebra A, which preserves all closed two sided ideals J, can be approximated pointwise by elementary complete contractions if and only if the induced map on the tensor product of B with A/J is contractive for every C*-algebra B, ideal J in A and C*-tensor norm on the tensor product. A lifting obstruction for such an approximation is also obtained.
Polynomial approximation of functions in Sobolev spaces
Dupont, T.; Scott, R.
1980-01-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.
Parallel local approximation MCMC for expensive models
Conrad, Patrick; Davis, Andrew; Marzouk, Youssef; Pillai, Natesh; Smith, Aaron
2016-01-01
Performing Bayesian inference via Markov chain Monte Carlo (MCMC) can be exceedingly expensive when posterior evaluations invoke the evaluation of a computationally expensive model, such as a system of partial differential equations. In recent work [Conrad et al. JASA 2015, arXiv:1402.1694] we described a framework for constructing and refining local approximations of such models during an MCMC simulation. These posterior-adapted approximations harness regularity of the model to reduce the c...
The Actinide Transition Revisited by Gutzwiller Approximation
Xu, Wenhu; Lanata, Nicola; Yao, Yongxin; Kotliar, Gabriel
2015-03-01
We revisit the problem of the actinide transition using the Gutzwiller approximation (GA) in combination with the local density approximation (LDA). In particular, we compute the equilibrium volumes of the actinide series and reproduce the abrupt change of density found experimentally near plutonium as a function of the atomic number. We discuss how this behavior relates to the electron correlations in the 5f states, the lattice structure, and the spin-orbit interaction. Our results are in good agreement with the experiments.
Intuitionistic Fuzzy Automaton for Approximate String Matching
K.M. Ravi
2014-03-01
This paper introduces an intuitionistic fuzzy automaton model for computing the similarity between pairs of strings. The model details the possible edit operations needed to transform any input (observed) string into a target (pattern) string by providing a membership and a non-membership value between them. Finally, an algorithm is given for approximate string matching, and the proposed model computes the similarity and dissimilarity between the pair of strings, leading to better approximation.
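The edit operations underlying such models are the classical ones (insertion, deletion, substitution). A minimal crisp sketch of their cost, the Levenshtein distance, is given below; this is the standard dynamic program, not the paper's intuitionistic fuzzy automaton, which additionally assigns membership and non-membership values to each transformation:

```python
def edit_distance(observed, pattern):
    """Classical Levenshtein distance: the minimum number of
    insertions, deletions and substitutions turning `observed`
    into `pattern`, computed with a rolling one-row DP table."""
    m, n = len(observed), len(pattern)
    # dist[j] holds the distance from observed[:i] to pattern[:j].
    dist = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dist[0] = dist[0], i
        for j in range(1, n + 1):
            cost = 0 if observed[i - 1] == pattern[j - 1] else 1
            prev, dist[j] = dist[j], min(dist[j] + 1,      # deletion
                                         dist[j - 1] + 1,  # insertion
                                         prev + cost)      # substitution
    return dist[n]
```

A fuzzy variant would replace the integer costs with membership/non-membership degrees per operation, which is the direction the abstract describes.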
Approximations for the Erlang Loss Function
Mejlbro, Leif
1998-01-01
Theoretically, at least three formulae are needed for arbitrarily good approximations of the Erlang Loss Function. In the paper, for convenience, five formulae are presented guaranteeing a relative error < 1E-2, and methods are indicated for improving this bound.
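The abstract does not reproduce the five formulae. For context, the exact Erlang loss (Erlang B) function that such approximations target can be evaluated with the standard numerically stable recursion B(0, a) = 1, B(m, a) = a·B(m−1, a) / (m + a·B(m−1, a)); a sketch (this is the textbook recursion, not one of the paper's approximations):

```python
def erlang_b(servers, offered_load):
    """Exact Erlang loss (blocking) probability for `servers`
    circuits and offered traffic `offered_load` in erlangs,
    via the numerically stable recursion
    B(0) = 1,  B(m) = a*B(m-1) / (m + a*B(m-1))."""
    b = 1.0
    for m in range(1, servers + 1):
        b = offered_load * b / (m + offered_load * b)
    return b
```

For example, one server offered one erlang blocks with probability a/(1+a) = 0.5. Closed-form approximations matter because the direct factorial form of Erlang B overflows for large server counts, whereas this recursion stays stable.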
Staying Thermal with Hartree Ensemble Approximations
Salle, M; Vink, Jeroen C
2000-01-01
Using Hartree ensemble approximations to compute the real time dynamics of scalar fields in 1+1 dimensions, we find that with suitable initial conditions, approximate thermalization is achieved much faster than found in our previous work. At large times, depending on the interaction strength and temperature, the particle distribution slowly changes: the Bose-Einstein distribution of the particle densities develops classical features. We also discuss variations of our method which are numerically more efficient.
Lattice quantum chromodynamics with approximately chiral fermions
Hierl, Dieter
2008-05-15
In this work we present Lattice QCD results obtained with approximately chiral fermions. We use CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ{sup +} pentaquark on the lattice. Furthermore, we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory applying the epsilon expansion. (orig.)
Nonlinear approximation in alpha-modulation spaces
Borup, Lasse; Nielsen, Morten
2006-01-01
The α-modulation spaces are a family of spaces that contain the Besov and modulation spaces as special cases. In this paper we prove that brushlet bases can be constructed to form unconditional and even greedy bases for the α-modulation spaces. We study m-term nonlinear approximation with brushlet bases, and give complete characterizations of the associated approximation spaces in terms of α-modulation spaces.
On surface approximation using developable surfaces
Chen, H. Y.; Lee, I. K.; Leopoldseder, s.
1999-01-01
We introduce a method for approximating a given surface by a developable surface. It will be either a G(1) surface consisting of pieces of cones or cylinders of revolution or a G(r) NURBS developable surface. Our algorithm will also deal properly with the problems of reverse engineering and produce robust approximation of given scattered data. The presented technique can be applied in computer aided manufacturing, e.g. in shipbuilding. (C) 1999 Academic Press.
On surface approximation using developable surfaces
Chen, H. Y.; Lee, I. K.; Leopoldseder, S.
1998-01-01
We introduce a method for approximating a given surface by a developable surface. It will be either a G_1 surface consisting of pieces of cones or cylinders of revolution or a G_r NURBS developable surface. Our algorithm will also deal properly with the problems of reverse engineering and produce robust approximation of given scattered data. The presented technique can be applied in computer aided manufacturing, e.g. in shipbuilding.
Improved approximations for robust mincut and shortest path
Polishchuk, Valentin
2010-01-01
In two-stage robust optimization the solution to a problem is built in two stages: in the first stage a partial, not necessarily feasible, solution is exhibited. Then the adversary chooses the "worst" scenario from a predefined set of scenarios. In the second stage, the first-stage solution is extended to become feasible for the chosen scenario. The costs at the second stage are larger than at the first one, and the objective is to minimize the total cost paid in the two stages. We give a 2-approximation algorithm for the robust mincut problem and a (γ+2)-approximation for the robust shortest path problem, where γ is the approximation ratio for the Steiner tree. This improves the factors (1+√2) and 2(γ+2) from [Golovin, Goyal and Ravi. Pay today for a rainy day: Improved approximation algorithms for demand-robust min-cut and shortest path problems. STACS 2006]. In addition, our solution for robust shortest path is simpler and more efficient than the earlier ones; this is achieved by a...
Xiao Xiaowei
1998-01-01
Objective: To study the relationship between internal anal sphincter function and the length of the remaining rectum after resection of rectal carcinoma. Methods: Preoperatively, 21 patients were evaluated via clinical data, including anal resting pressure (resting pressure) assay. Six months postoperatively, repeated manometric studies and clinical evaluations were performed to assess the level of continence. The formula used for calculating postoperative resting pressure is as follows: postoperative resting pressure = 0.42 × preoperative resting pressure + 1.56 × length of remaining rectum + 12.37 (R² = 0.58; P < 0.01). Degree of continence was graded based on severity of the dysfunction and grade of the continence score. Results: Patients with low postoperative resting pressures (< 4.0 kPa) had incontinence, and those with high postoperative resting pressures (> 4.7 kPa) were continent. There were significant correlations between the length of the remaining rectum and the ratio of the decrease in maximum resting pressure (postoperative/preoperative maximum resting pressure; r = 0.62; P < 0.01). Conclusion: Continence is influenced by the maximum resting pressure of the internal anal sphincter: the shorter the remaining rectum, the greater the damage to the internal anal sphincter. The postoperative resting pressure formula can be used to predict stool incontinence and to determine the required length of the remaining rectum.
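The regression formula quoted in the abstract is straightforward to evaluate; a sketch is given below. The function and parameter names are mine, and the units of the 12.37 intercept and of the rectum length are not stated explicitly in the abstract, so this is purely illustrative of the arithmetic, not a clinical tool:

```python
def predicted_resting_pressure(preop_resting_pressure, remaining_rectum_length):
    """Regression formula from the abstract (R^2 = 0.58, P < 0.01):
    postoperative resting pressure =
        0.42 * preoperative resting pressure
      + 1.56 * length of remaining rectum
      + 12.37
    Units follow the abstract, which does not state them explicitly."""
    return (0.42 * preop_resting_pressure
            + 1.56 * remaining_rectum_length
            + 12.37)
```

Per the abstract, a predicted value below 4.0 kPa was associated with incontinence and above 4.7 kPa with continence, which is how the formula is used prognostically.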
Differential geometry of proteins. Helical approximations.
Louie, A H; Somorjai, R L
1983-07-25
We regard a protein molecule as a geometric object, and in a first approximation represent it as a regular parametrized space curve passing through its alpha-carbon atoms (the backbone). In an earlier paper we argued that the regular patterns of secondary structures of proteins (morphons) correspond to geodesics on minimal surfaces. In this paper we discuss methods of recognizing these morphons on space curves that represent the protein backbone conformation. The mathematical tool we employ is the differential geometry of curves and surfaces. We introduce a natural approximation of backbone space curves in terms of helical approximating elements and present a computer algorithm to implement the approximation. Simple recognition criteria are given for the various morphons of proteins. These are incorporated into our helical approximation algorithm, together with more non-local criteria for the recognition of beta-sheet topologies. The method and the algorithm are illustrated with several examples of representative proteins. Generalizations of the helical approximation method are considered and their possible implications for protein energetics are sketched.
Quasi-greedy triangulations approximating the minimum weight triangulation
Levcopoulos, C.; Krznaric, D. [Lund Univ. (Sweden)
1996-12-31
This paper settles the following two open problems: (1) What is the worst-case approximation ratio between the greedy and the minimum weight triangulation? (2) Is there a polynomial time algorithm that always produces a triangulation whose length is within a constant factor from the minimum? The answer to the first question is that the known Ω(√n) lower bound is tight. The second question is answered in the affirmative by using a slight modification of an O(n log n) algorithm for the greedy triangulation. We also derive some other interesting results. For example, we show that a constant-factor approximation of the minimum weight convex partition can be obtained within the same time bounds.
The restricted isometry property meets nonlinear approximation with redundant frames
Gribonval, Rémi; Nielsen, Morten
with a redundant frame. The main ingredients of our approach are: a) Jackson and Bernstein inequalities, associated to the characterization of certain approximation spaces with interpolation spaces; b) a new proof that for overcomplete frames which satisfy a Bernstein inequality, these interpolation spaces are nothing but the collection of vectors admitting a representation in the dictionary with compressible coefficients; c) the proof that the RIP implies Bernstein inequalities. As a result, we obtain that in most overcomplete random Gaussian dictionaries with fixed aspect ratio, just as in any orthonormal basis, the error of best m-term approximation of a vector decays at a certain rate if, and only if, the vector admits a compressible expansion in the dictionary. Yet, for mildly overcomplete dictionaries with a one-dimensional kernel, we give examples where the Bernstein inequality holds, but the same...
Pseudoscalar decays into lepton pairs from rational approximants
Sanchez-Puertas, Pablo
2015-01-01
The pseudoscalar decays into lepton pairs $P\rightarrow\overline{\ell}\ell$ are analyzed with the machinery of Canterbury approximants, an extension of Padé approximants to bivariate functions. This framework provides an ideal model-independent approach to implement all our knowledge of the pseudoscalar transition form factors driving these decays; it can be used for data analysis, allows experimental data and theoretical constraints to be included in an easy way, and yields a systematic error. We find that previous theoretical estimates for these branching ratios have underestimated their theoretical uncertainties. From our updated results, the existing experimental discrepancies for the $\pi^0\rightarrow e^+e^-$ and $\eta\rightarrow \mu^+\mu^-$ channels cannot be explained unless the behavior of the doubly-virtual transition form factors (not yet measured) is outside theoretical expectations, which is an interesting result both for anomalous magnetic moments of leptons and for physics beyond the standard model.
Sadegh, Payman
1997-01-01
This paper deals with a projection algorithm for stochastic approximation using simultaneous perturbation gradient approximation for optimization under inequality constraints where no direct gradient of the loss function is available and the inequality constraints are given as explicit functions...
Yellow Fever Remains a Potential Threat to Public Health.
Vasconcelos, Pedro F C; Monath, Thomas P
2016-08-01
Yellow fever (YF) remains a serious public health threat in endemic countries. The recent re-emergence in Africa, initiating in Angola and spreading to Democratic Republic of Congo and Uganda, with imported cases in China and Kenya is of concern. There is such a shortage of YF vaccine in the world that the World Health Organization has proposed the use of reduced doses (1/5) during emergencies. In this short communication, we discuss these and other problems including the risk of spread of YF to areas free of YF for decades or never before affected by this arbovirus disease.
Studies on protozoa in ancient remains - A Review
Liesbeth Frías
2013-02-01
Paleoparasitological research has made important contributions to the understanding of parasite evolution and ecology. Although parasitic protozoa exhibit a worldwide distribution, recovering these organisms from an archaeological context is still exceptional and relies on the availability and distribution of evidence, the ecology of infectious diseases and adequate detection techniques. Here, we present a review of the findings related to protozoa in ancient remains, with an emphasis on their geographical distribution in the past and the methodologies used for their retrieval. The development of more sensitive detection methods has increased the number of identified parasitic species, promising interesting insights from research in the future.
"Recent" macrofossil remains from the Lomonosov Ridge, central Arctic Ocean
Le Duc, Cynthia; de Vernal, Anne; Archambault, Philippe; Brice, Camille; Roberge, Philippe
2016-04-01
The examination of surface sediment samples collected from 17 sites along the Lomonosov Ridge at water depths ranging from 737 to 3339 meters during Polarstern Expedition PS87 in 2014 (Stein, 2015) indicates a rich biogenic content almost exclusively dominated by calcareous remains. Amongst biogenic remains, microfossils (planktic and benthic foraminifers, pteropods, ostracods, etc.) dominate, but millimetric to centimetric macrofossils occurred frequently at the surface of the sediment. The macrofossil remains consist of a large variety of taxa, including gastropods, bivalves, polychaete tubes, scaphopods, echinoderm plates and spines, and fish otoliths. Among the Bivalvia, the most abundant taxa are Portlandia arctica, Hyalopecten frigidus, Cuspidaria glacilis, Policordia densicostata, Bathyarca spp., and Yoldiella spp. Whereas a few specimens are well preserved and apparently pristine, most mollusk shells displayed extensive alteration features. Moreover, most shells were covered by millimeter-scale tubes of the serpulid polychaete Spirorbis sp., suggesting transport from the low intertidal or subtidal zone. Both the ecological affinity and known geographic distribution of the bivalves named above support the hypothesis of transportation rather than local development. In addition to mollusk shells, more than a hundred fish otoliths were recovered in surface sediments. The otoliths mostly belong to the Gadidae family. Most of them are well preserved and without serpulid tubes attached to their surface, suggesting a local/regional origin, unlike the shell remains. Although recovered at the surface, the macrofaunal assemblages of the Lomonosov Ridge do not necessarily represent the "modern" environments, as they may result from reworking and because their occurrence at the surface of the sediment may also be due to winnowing of finer particles. Although the shells were not dated, we suspect that their actual ages may range from modern to several thousands of
How Long Do Numerical Chaotic Solutions Remain Valid?
Sauer, T. [Department of Mathematical Sciences , George Mason University , Fairfax, Virginia 22030 (United States); Sauer, T.; Yorke, J.A. [Institute for Physical Science and Technology, University of Maryland, College Park, Maryland 20742 (United States); Grebogi, C. [Institut fuer Theoretische Physik und Astrophysik , Universitaet Potsdam , PF 601553, D-14415 Potsdam (Germany)
1997-07-01
Dynamical conditions for the loss of validity of numerical chaotic solutions of physical systems are already understood. However, the fundamental questions of "how good" and "for how long" the solutions are valid remained unanswered. This work answers these questions by establishing scaling laws for the shadowing distance and for the shadowing time in terms of physically meaningful quantities that are easily computable in practice. The scaling theory is verified against a physical model. © 1997 The American Physical Society.
The Unreliable Narrator in the Remains of the Day
魏艳
2012-01-01
Kazuo Ishiguro is a contemporary British immigrant writer. He received "the typical British middle-class education" in England, so he can skillfully use standard, elegant English in his novels. His The Remains of the Day won the Booker Prize, the highest prize for the novel in England. The present thesis is aimed at the analysis of the unreliability of the narrator, Stevens, and tries to prove that a large part of his narration goes against reality or covers his own emotions.
Remains to be transmitted: Primo Levi's traumatic dream.
Blévis, Jean-Jacques
2004-07-01
Drawing on the writings of Primo Levi and the psychoanalysis of Jacques Lacan, the author attempts to conceive psychic trauma as a coalescence of traumas, since this is perhaps the only way to prevent a subject from being forced back into identification with the catastrophic event, whatever that may have been. A recurrent dream of Primo Levi's suggests to the author the way that traumas may have coalesced within Levi. The hope would be to restore the entire significance of what remains from that traumatic event to the speech (parole) of the Other, to the speech of every human, even the most helpless, bruised, or destroyed among us.
Leprosy: ancient disease remains a public health problem nowadays*
Noriega, Leandro Fonseca; Chiacchio, Nilton Di; Noriega, Angélica Fonseca; Pereira, Gilmayara Alves Abreu Maciel; Vieira, Marina Lino
2016-01-01
Despite being an ancient disease, leprosy remains a public health problem in several countries - particularly in India, Brazil and Indonesia. The current operational guidelines emphasize the evaluation of disability from the time of diagnosis and stipulate as fundamental principles for disease control: early detection and proper treatment. Continued efforts are needed to establish and improve quality leprosy services. A qualified primary care network that is integrated into specialized service and the development of educational activities are part of the arsenal in the fight against the disease, considered neglected and stigmatizing. PMID:27579761
Tuberculosis remains a challenge despite economic growth in Panama.
Tarajia, M; Goodridge, A
2014-03-01
Tuberculosis (TB) is a disease associated with inequality, and wise investment of economic resources is considered critical to its control. Panama has recently secured its status as an upper-middle-income country with robust economic growth. However, the prioritisation of resources for TB control remains a major challenge. In this article, we highlight areas that urgently require action to effectively reduce TB burden to minimal levels. Our conclusions suggest the need for fund allocation and a multidisciplinary approach to ensure prompt laboratory diagnosis, treatment assurance and workforce reinforcement, complemented by applied and operational research, development and innovation.
Plutonium isotope ratio variations in North America
Steiner, Robert E [Los Alamos National Laboratory; La Mont, Stephen P [Los Alamos National Laboratory; Eisele, William F [Los Alamos National Laboratory; Fresquez, Philip R [Los Alamos National Laboratory; Mc Naughton, Michael [Los Alamos National Laboratory; Whicker, Jeffrey J [Los Alamos National Laboratory
2010-12-14
Historically, approximately 12,000 TBq of plutonium was distributed throughout the global biosphere by thermonuclear weapons testing. The resultant global plutonium fallout is a complex mixture whose {sup 240}Pu/{sup 239}Pu atom ratio is a function of the design and yield of the devices tested. The average {sup 240}Pu/{sup 239}Pu atom ratio in global fallout is 0.176 ± 0.014. However, the {sup 240}Pu/{sup 239}Pu atom ratio at any location may differ significantly from 0.176. Plutonium has also been released by discharges and accidents associated with the commercial and weapons-related nuclear industries. At many locations, contributions from this plutonium significantly alter the {sup 240}Pu/{sup 239}Pu atom ratios from those observed in global fallout. We have measured the {sup 240}Pu/{sup 239}Pu atom ratios in environmental samples collected from many locations in North America. This presentation will summarize the analytical results from these measurements. Special emphasis will be placed on interpretation of the significance of the {sup 240}Pu/{sup 239}Pu atom ratios measured in environmental samples collected in the Arctic and in the western portions of the United States.
How Good Are Statistical Models at Approximating Complex Fitness Landscapes?
du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian
2016-01-01
Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564
Alexander, Michael B; Hodges, Theresa K; Wescott, Daniel J; Aitkenhead-Peterson, Jacqueline A
2016-05-01
Despite technological advances, human remains detection (HRD) dogs still remain one of the best tools for locating clandestine graves. However, soil texture may affect the escape of decomposition gases and therefore the effectiveness of HRD dogs. Six nationally credentialed HRD dogs (three HRD only and three cross-trained) were evaluated on novel buried human remains in contrasting soils, a clayey and a sandy soil. Search time and accuracy were compared for the clayey soil and sandy soil to assess odor location difficulty. Sandy soil (p < 0.001) yielded significantly faster trained response times, but no significant differences were found in performance accuracy between soil textures or training method. Results indicate soil texture may be a significant factor in odor detection difficulty. Prior knowledge of soil texture and moisture may be useful for search management and planning. Appropriate adjustments to search segment sizes, sweep widths and search time allotment depending on soil texture may optimize successful detection. © 2016 American Academy of Forensic Sciences.
Ito, K.
1984-01-01
The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation the uniform exponential stability of the solution semigroup is preserved under approximation. It is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.
Approximating the Minimum Tour Cover of a Digraph
Viet Hung Nguyen
2011-04-01
Given a directed graph G with non-negative costs on the arcs, a directed tour cover T of G is a cycle (not necessarily simple) in G such that the head or the tail (or both) of every arc in G is touched by T. The minimum directed tour cover problem (DToCP), which is to find a directed tour cover of minimum cost, is NP-hard. It is thus interesting to design approximation algorithms with performance guarantees for this problem. Although its undirected counterpart (ToCP) has been studied in recent years, to our knowledge the DToCP remains widely open. In this paper, we give a 2 log₂(n)-approximation algorithm for the DToCP.
Approximation methods for efficient learning of Bayesian networks
Riggelsen, C
2008-01-01
This publication offers and investigates efficient Monte Carlo simulation methods in order to realize a Bayesian approach to approximate learning of Bayesian networks from both complete and incomplete data. For large amounts of incomplete data when Monte Carlo methods are inefficient, approximations are implemented, such that learning remains feasible, albeit non-Bayesian. The topics discussed are: basic concepts about probabilities, graph theory and conditional independence; Bayesian network learning from data; Monte Carlo simulation techniques; and, the concept of incomplete data. In order to provide a coherent treatment of matters, thereby helping the reader to gain a thorough understanding of the whole concept of learning Bayesian networks from (in)complete data, this publication combines in a clarifying way all the issues presented in the papers with previously unpublished work.
The Fine Structure of Dyadically Badly Approximable Numbers
Nilsson, Johan
2010-01-01
We consider badly approximable numbers in the case of dyadic diophantine approximation. For the unit circle $\mathbb{S}$ and the smallest distance to an integer $\|\cdot\|$, we give elementary proofs that the set $F(c) = \{x \in \mathbb{S}: \|2^n x\| \geq c,\ n \geq 0\}$ is a fractal set whose Hausdorff dimension depends continuously on $c$, is constant on intervals which form a set of Lebesgue measure 1, and is self-similar. Hence it has a fractal graph. Moreover, the dimension of $F(c)$ is zero if and only if $c \geq 1 - 2\tau$, where $\tau$ is the Thue-Morse constant. We completely characterise the intervals where the dimension remains unchanged. As a consequence we can completely describe the graph of $c \mapsto \dim_H \{x \in [0,1]: \|x - \frac{m}{2^n}\| < \frac{c}{2^n} \text{ finitely often}\}$.
Green's Kernels and meso-scale approximations in perforated domains
Maz'ya, Vladimir; Nieves, Michael
2013-01-01
There are a wide range of applications in physics and structural mechanics involving domains with singular perturbations of the boundary. Examples include perforated domains and bodies with defects of different types. The accurate direct numerical treatment of such problems remains a challenge. Asymptotic approximations offer an alternative, efficient solution. Green’s function is considered here as the main object of study rather than a tool for generating solutions of specific boundary value problems. The uniformity of the asymptotic approximations is the principal point of attention. We also show substantial links between Green’s functions and solutions of boundary value problems for meso-scale structures. Such systems involve a large number of small inclusions, so that a small parameter, the relative size of an inclusion, may compete with a large parameter, represented as an overall number of inclusions. The main focus of the present text is on two topics: (a) asymptotics of Green’s kernels in domai...
Zuo, Xinxin; Lu, Houyuan; Jiang, Leping; Zhang, Jianping; Yang, Xiaoyan; Huan, Xiujia; He, Keyang; Wang, Can; Wu, Naiqin
2017-06-20
Phytolith remains of rice (Oryza sativa L.) recovered from the Shangshan site in the Lower Yangtze of China have previously been recognized as the earliest examples of rice cultivation. However, because of the poor preservation of macroplant fossils, many radiocarbon dates were derived from undifferentiated organic materials in pottery sherds. These materials remain a source of debate because of potential contamination by old carbon. Direct dating of the rice remains might serve to clarify their age. Here, we first validate the reliability of phytolith dating in the study region through a comparison with dates obtained from other material from the same layer or context. Our phytolith data indicate that rice remains retrieved from early stages of the Shangshan and Hehuashan sites have ages of approximately 9,400 and 9,000 calibrated years before the present, respectively. The morphology of rice bulliform phytoliths indicates they are closer to modern domesticated species than to wild species, suggesting that rice domestication may have begun at Shangshan during the beginning of the Holocene.
Double-shell tank remaining useful life estimates
Anantatmula, R.P., Westinghouse Hanford
1996-12-02
The existing 28 double-shell tanks (DSTs) at Hanford are currently planned to continue operation through the year 2028, when disposal schedules show removal of waste. This schedule will place the DSTs in a service life window of 40 to 60 years depending on tank construction date and actual retirement date. This paper examines corrosion-related life-limiting conditions of DSTs and reports the results of remaining useful life models developed for estimating remaining tank life. Three models based on controllable parameters such as temperature, chemistry, and relative humidity are presented for estimating the year in which a particular DST may receive a breach in the primary tank due to pitting in the liquid or vapor region. Pitting is believed to be the life-limiting condition for DSTs; however, the region of the most aggressive pitting (vapor space or liquid) requires further investigation. The results of the models presented suggest none of the existing DSTs should fail by through-wall pitting until well beyond scheduled retrieval in 2028. The estimates of tank breach years (the year in which a tank may be expected to breach the primary tank wall) range from 2056 for pitting corrosion in the liquid region of tank 104-AW to beyond the next millennium for several tanks in the vapor region.
Taphonomy of the Tianyuandong human skeleton and faunal remains.
Fernández-Jalvo, Yolanda; Andrews, Peter; Tong, HaoWen
2015-06-01
Tianyuan Cave is an Upper Palaeolithic site, 6 km from the core area of the Zhoukoudian Site Complex. Tianyuandong (or Tianyuan Cave) yielded one ancient (though not the earliest) fossil skeleton of Homo sapiens in China (42-39 ka cal BP). Together with the human skeleton, abundant animal remains were found, but no stone tools were recovered. The animal fossil remains are extremely fragmentary, in contrast to human skeletal elements that are, for the most part, complete. We undertook a taphonomic study to investigate the circumstances of preservation of the human skeleton in Tianyuan Cave, and in course of this we considered four hypotheses: funerary ritual, cannibalism, carnivore activity or natural death. Taphonomic results characterize the role of human action in the site and how these agents acted in the past. Because of disturbance of the human skeleton during its initial excavation, it is not known if it was in a grave cut or if there was any funerary ritual. No evidence was found for cannibalism or carnivore activity in relation to the human skeleton, suggesting natural death as the most reasonable possibility. Copyright © 2015 Elsevier Ltd. All rights reserved.
Detection of Buried Human Remains Using Bioreporter Fluorescence
Vass, A. Dr.; Singleton, G. B.
2001-10-01
The search for buried human remains is a difficult, laborious and time-consuming task for law enforcement agencies. This study was conducted as a proof of principle demonstration to test the concept of using bioreporter microorganisms as a means to cover large areas in such a search. These bioreporter microorganisms are affected by a particular component of decaying organic matter that is distinct from decaying vegetation. The diamino compounds cadaverine and putrescine were selected as target compounds for the proof-of-principle investigation, and a search for microorganisms and genes that are responsive to either of these compounds was conducted. One recombinant clone was singled out for characterization based on its response to putrescine. The study results show that small concentrations of putrescine increased expression from this bioreporter construct. Although the level of increase was small (making it difficult to distinguish the signal from background), the results demonstrate the principle that bioreporters can be used to detect compounds resulting from decaying human remains and suggest that a wider search for target compounds should be conducted.
Prions and lymphoid organs: solved and remaining mysteries.
O'Connor, Tracy; Aguzzi, Adriano
2013-01-01
Prion colonization of secondary lymphoid organs (SLOs) is a critical step preceding neuroinvasion in prion pathogenesis. Follicular dendritic cells (FDCs), which depend on both tumor necrosis factor receptor 1 (TNFR1) and lymphotoxin β receptor (LTβR) signaling for maintenance, are thought to be the primary sites of prion accumulation in SLOs. However, prion titers in RML-infected TNFR1 (-/-) lymph nodes and rates of neuroinvasion in TNFR1 (-/-) mice remain high despite the absence of mature FDCs. Recently, we discovered that TNFR1-independent prion accumulation in lymph nodes relies on LTβR signaling. Loss of LTβR signaling in TNFR1 (-/-) lymph nodes coincided with the de-differentiation of high endothelial venules (HEVs)-the primary sites of lymphocyte entry into lymph nodes. These findings suggest that HEVs are the sites through which prions initially invade lymph nodes from the bloodstream. Identification of HEVs as entry portals for prions clarifies a number of previous observations concerning peripheral prion pathogenesis. However, a number of questions still remain: What is the mechanism by which prions are taken up by HEVs? Which cells are responsible for delivering prions to lymph nodes? Are HEVs the main entry site for prions into lymph nodes or do alternative routes also exist? These questions and others are considered in this article.
REMAINED DENTAL PARTICLES IN THE JAWS OF EDENTULOUS PATIENTS (ISFAHAN, 1999)
R MOSHARRAF
2002-09-01
Retained teeth and other lesions such as cysts, abscesses and tumors are among the important problems in edentulous patients. In a cross-sectional study, 330 edentulous patients were evaluated radiographically. The radiographic evaluation of the patients revealed the presence of 86 residual roots in 58 radiographs. 17.58% of patients had residual roots and 5.8% of patients had impacted teeth. 58.1% of residual roots and 45% of impacted teeth were in the maxilla; the others were in the mandible. The maximum percentage of residual roots (58.1%) and impacted teeth (70%) was found in the molar region. This study revealed that 23.3% of examined patients had remaining dental fragments. Of these patients, 5.76% had impacted teeth and 17.58% had residual roots, and the maximum percentage of root fragments (58.1%) was found in the molar region. In a similar study by Spyropoulos, the maximum percentage of root fragments (45.6%) was reported in the molar region, and the maximum percentage of impacted teeth was found in the molar and canine regions (41.2% in the molar and 41.2% in the canine region). In this study, 58.1% of root fragments and 45% of impacted teeth were found in the maxilla, but in Spyropoulos' report, 71.9% of root fragments and 94.1% of impacted teeth were found in the maxilla.
Determination of Remaining Useful Life of Gas Turbine Blade
Meor Said Mior Azman
2016-01-01
The aim of this research is to determine the remaining useful life of gas turbine blades, using service-exposed turbine blades. This task is performed using the Stress Rupture Test (SRT) under accelerated test conditions, where the stress applied to the specimen is between 400 MPa and 600 MPa and the test temperature is 850°C. The study focuses on the creep behaviour of the 52000-hour service-exposed blades, complemented with creep-rupture modelling using JMatPro software and microstructure examination using an optical microscope. The test specimens, made of the Ni-based superalloy of the first-stage turbine blades, are machined based on International Standard (ISO 24). The results from the SRT are analyzed using two main equations: the Larson-Miller Parameter and the Life Fraction Rule. Based on the results of the remaining useful life analysis, the 52000 h service-exposed blade can operate for another 4751 h to 18362 h. The microstructure examinations show traces of carbide precipitation that deteriorate the grain boundaries during the creep process. Creep-rupture life modelling using JMatPro software has shown good agreement with the accelerated creep rupture test, with minimal error.
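The two equations named above can be combined in a short worked sketch. The Larson-Miller constant C = 20, the 500 h accelerated rupture life, and the 700 °C service temperature below are illustrative assumptions, not values from the study.

```python
import math

C = 20.0  # assumed Larson-Miller constant, a typical value for Ni-base superalloys

def lmp(temp_k, rupture_h):
    # Larson-Miller Parameter: LMP = T * (C + log10(t_r))
    return temp_k * (C + math.log10(rupture_h))

def rupture_life(temp_k, lmp_value):
    # Invert the LMP relation to get rupture life (hours) at another temperature.
    return 10.0 ** (lmp_value / temp_k - C)

# Hypothetical accelerated SRT result: rupture after 500 h at 850 deg C.
lmp_at_stress = lmp(850.0 + 273.15, 500.0)

# Extrapolate to an assumed 700 deg C service temperature at the same stress.
t_r_service = rupture_life(700.0 + 273.15, lmp_at_stress)

# Life Fraction Rule: fraction of creep life consumed after 52000 h in service,
# and the remaining life under those conditions.
hours_in_service = 52000.0
fraction_used = hours_in_service / t_r_service
remaining_h = t_r_service - hours_in_service
```

The steep temperature dependence is the point of the method: the same stress that ruptures a specimen in hundreds of hours at test temperature corresponds to a rupture life orders of magnitude longer at service temperature.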
CO2 studies remain key to understanding a future world.
Becklin, Katie M; Walker, S Michael; Way, Danielle A; Ward, Joy K
2017-04-01
SUMMARY: Characterizing plant responses to past, present and future changes in atmospheric carbon dioxide concentration ([CO2]) is critical for understanding and predicting the consequences of global change over evolutionary and ecological timescales. Previous CO2 studies have provided great insights into the effects of rising [CO2] on leaf-level gas exchange, carbohydrate dynamics and plant growth. However, scaling CO2 effects across biological levels, especially in field settings, has proved challenging. Moreover, many questions remain about the fundamental molecular mechanisms driving plant responses to [CO2] and other global change factors. Here we discuss three examples of topics in which significant questions in CO2 research remain unresolved: (1) mechanisms of CO2 effects on plant developmental transitions; (2) implications of rising [CO2] for integrated plant-water dynamics and drought tolerance; and (3) CO2 effects on symbiotic interactions and eco-evolutionary feedbacks. Addressing these and other key questions in CO2 research will require collaborations across scientific disciplines and new approaches that link molecular mechanisms to complex physiological and ecological interactions across spatiotemporal scales.
Slowly rotating scalar field wormholes: the second order approximation
Kashargin, P E
2008-01-01
We discuss rotating wormholes in general relativity with a scalar field with negative kinetic energy. To solve the problem, we use the assumption of slow rotation. The role of a small dimensionless parameter is played by the ratio of the linear rotational velocity of the wormhole's throat to the velocity of light. We construct the rotating wormhole solution in the second order approximation with respect to the small parameter. The analysis shows that the asymptotic mass of the rotating wormhole is greater than that of the non-rotating one, and the NEC violation in the rotating wormhole spacetime is weaker than that in the non-rotating one.
Surgical treatment of dislocated acromioclavicular syndesmolysis remains controversial
Slaviša Mihaljevič
2007-12-01
Background: Operative treatment of acromioclavicular (AC) joint dislocations of Allman-Tossy type III is controversial; more than 30 types of operative treatment have been described. At the Department of Traumatology of Celje General and Teaching Hospital (CGTH) we treat AC joint dislocation by open reduction of the AC joint and fixation using two Kirschner wires and an additional figure-of-eight wire loop over the AC joint. The purpose of the analysis is to evaluate the results of operative treatment of complete AC joint dislocation of Allman-Tossy type III. Patients and methods: In the 2-year period from July 1st, 1997 to June 30th, 1999, we operatively treated 59 injured persons with AC joint dislocation at the Department of Traumatology of CGTH. There were 55 men (93 %) and 4 women (7 %). The average age was 40 years (range 20 to 72 years). 56 (95 %) injured persons had an AC joint injury of Allman-Tossy type III. We operated on 45 injured persons (76.3 %) within the first three weeks (early reconstruction). The applied material was removed after 8 weeks. 47 (79.7 %) injured persons were re-examined at least one year after the injury (27 months on average; range 14-39 months). The results were evaluated according to the University of California at Los Angeles (UCLA) scale for shoulder function evaluation. The impact of factors on a good treatment result was assessed by the odds ratio and uni-variant analysis. Results: Of the 47 injured persons re-examined according to the UCLA scale at least one year after the injury, 17 (36.2 %) were rated with an excellent result (UCLA 34-35), 22 good (46.8 %) (UCLA 28-33), 5 satisfactory (10.6 %) (UCLA 21-27) and 3 bad (6.4 %) (UCLA 0-20). In total we achieved 83 % excellent and good results. The injured persons' age did not significantly affect the treatment result. Complications occurred in 14 (29.8 %) injured patients. If no complications occurred, the odds ratio for good
Choice with frequently changing food rates and food ratios.
Baum, William M; Davison, Michael
2014-03-01
In studies of operant choice, when one schedule of a concurrent pair is varied while the other is held constant, the constancy of the constant schedule may exert discriminative control over performance. In our earlier experiments, schedules varied reciprocally across components within sessions, so that while food ratio varied food rate remained constant. In the present experiment, we held one variable-interval (VI) schedule constant while varying the concurrent VI schedule within sessions. We studied five conditions, each with a different constant left VI schedule. On the right key, seven different VI schedules were presented in seven different unsignaled components. We analyzed performances at several different time scales. At the longest time scale, across conditions, behavior ratios varied with food ratios as would be expected from the generalized matching law. At shorter time scales, effects due to holding the left VI constant became more and more apparent, the shorter the time scale. In choice relations across components, preference for the left key leveled off as the right key became leaner. Interfood choice approximated strict matching for the varied right key, whereas interfood choice hardly varied at all for the constant left key. At the shortest time scale, visit patterns differed for the left and right keys. Much evidence indicated the development of a fix-and-sample pattern. In sum, the procedural difference made a large difference to performance, except for choice at the longest time scale and the fix-and-sample pattern at the shortest time scale. © Society for the Experimental Analysis of Behavior.
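The generalized matching law invoked above has a simple log-linear form, log(B1/B2) = s·log(R1/R2) + log c, where s is sensitivity and c is bias. The sensitivity and bias values below are hypothetical, chosen only to illustrate how s is recovered from choice data by a least-squares fit.

```python
import math

# Hypothetical obtained food ratios (left/right) across seven components.
food_ratios = [8.0, 4.0, 2.0, 1.0, 0.5, 0.25, 0.125]
s_true, log_c = 0.9, 0.05   # assumed sensitivity and bias

# Behavior ratios generated exactly by the generalized matching law.
behavior = [math.exp(s_true * math.log(r) + log_c) for r in food_ratios]

# Recover the sensitivity s as the least-squares slope in log-log coordinates.
xs = [math.log(r) for r in food_ratios]
ys = [math.log(b) for b in behavior]
xm, ym = sum(xs) / len(xs), sum(ys) / len(ys)
s_hat = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
         / sum((x - xm) ** 2 for x in xs))
```

With noise-free data the fitted slope equals s exactly; in real choice data s typically falls below 1 (undermatching), and s near 1 corresponds to the "strict matching" the abstract reports for the varied key.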
Lefever, A. E.
1982-01-01
Proposed arrangement of two connected planetary differentials results in gear ratio many times that obtainable in conventional series gear assembly of comparable size. Ratios of several thousand would present no special problems. Selection of many different ratios is available with substantially similar gear diameters. Very high gear ratios would be obtained from small mechanism.
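The very high reductions described can be illustrated with the standard relation for two coupled planetary stages, where the overall ratio blows up as the tooth-count products approach unity. The formula and tooth counts below are a hypothetical sketch, not details of the proposed arrangement.

```python
from fractions import Fraction

def coupled_planetary_ratio(n1, n2, n3, n4):
    # Overall reduction of two coupled planetary stages (hypothetical form):
    # the net ratio is 1 / (1 - (n1*n3)/(n2*n4)), so near-equal tooth-count
    # products give an enormous reduction from modest, similar gear diameters.
    return 1 / (1 - Fraction(n1 * n3, n2 * n4))

ratio = coupled_planetary_ratio(49, 50, 51, 50)   # (49*51)/(50*50) = 2499/2500
```

Here four gears of nearly identical size yield a 2500:1 reduction, and small changes to any one tooth count select a very different ratio, matching the flexibility the abstract describes.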
Highly efficient DNA extraction method from skeletal remains
Irena Zupanič Pajnič
2011-03-01
Background: This paper precisely describes the method of DNA extraction developed to acquire high-quality DNA from Second World War skeletal remains. The same method is also used for molecular genetic identification of unknown decomposed bodies in routine forensic casework where only bones and teeth are suitable for DNA typing. We analysed 109 bones and two teeth from WWII mass graves in Slovenia. Methods: We cleaned the bones and teeth, removed surface contaminants and ground the bones into powder using liquid nitrogen. Prior to isolating the DNA in parallel using the BioRobot EZ1 (Qiagen), the powder was decalcified for three days. The nuclear DNA of the samples was quantified by a real-time PCR method. We acquired autosomal genetic profiles and Y-chromosome haplotypes of the bones and teeth with PCR amplification of microsatellites, and mtDNA haplotypes. For the purpose of traceability in the event of contamination, we prepared elimination databases including genetic profiles of the nuclear and mtDNA of all persons who had been in touch with the skeletal remains in any way. Results: We extracted up to 55 ng DNA/g from the teeth, up to 100 ng DNA/g from the femurs, up to 30 ng DNA/g from the tibias and up to 0.5 ng DNA/g from the humeri. The typing of autosomal and Y-STR loci was successful in all of the teeth, in 98 % of the femurs, and in 75 % to 81 % of the tibias and humeri. The typing of mtDNA was successful in all of the teeth, and in 96 % to 98 % of the bones. Conclusions: We managed to obtain nuclear DNA for successful STR typing from skeletal remains that were over 60 years old. The method of DNA extraction described here has proved to be highly efficient. We obtained 0.8 to 100 ng DNA/g of teeth or bones and complete genetic profiles of autosomal DNA, Y-STR haplotypes, and mtDNA haplotypes from only 0.5 g bone and teeth samples.
Three-fold loop approximation of AMD
Ono, Akira [Tohoku Univ., Sendai (Japan). Faculty of Science
1997-05-01
AMD (antisymmetrized molecular dynamics) is a framework for describing the wave function of a nucleon many-body system by a Slater determinant of Gaussian wave packets, and a theory for describing in a unified way a wide range of nuclear reactions, such as intermediate-energy heavy-ion reactions, nucleon-induced reactions and so forth. The aim of this study is to derive an approximation for the expectation value {nu} of the correlation that can be computed in time proportional to A (exp 3) (or lower), and thus to make AMD applicable to heavier systems such as Au+Au. Since the characteristic features of AMD must not be broken, it is only the {nu} value that is approximated. However, for this approximation to be meaningful, its error must be sufficiently small in comparison with the binding energy of the atomic nucleus, smaller than 1 MeV/nucleon. As the absolute expectation value of the correlation may be larger than 50 MeV/nucleon, the approximation is required to have a high accuracy, within 2 percent. (G.K.)
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
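A minimal sketch of the underlying idea: a Robbins-Monro iteration with trajectory (Polyak-Ruppert) averaging, applied here to a toy noisy root-finding target rather than to any MCMC algorithm from the paper.

```python
import random

random.seed(1)

def noisy_h(theta):
    # Noisy observation of h(theta) = theta - 3, whose root is theta* = 3.
    return theta - 3.0 + random.gauss(0.0, 1.0)

theta, total = 0.0, 0.0
n = 20000
for k in range(1, n + 1):
    gamma = k ** -0.7            # step sizes decaying slower than 1/k
    theta -= gamma * noisy_h(theta)
    total += theta

theta_bar = total / n            # trajectory (Polyak-Ruppert) average
```

The raw iterate keeps fluctuating around the root because the step sizes decay slowly, but the running average of the trajectory converges much more tightly; this variance reduction is the efficiency property the paper establishes for SAMCMC.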
Approximation of Bivariate Functions via Smooth Extensions
Zhang, Zhihua
2014-01-01
For a smooth bivariate function defined on a general domain with arbitrary shape, it is difficult to do Fourier approximation or wavelet approximation. In order to solve these problems, in this paper, we give an extension of the bivariate function on a general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. These smooth extensions have simple and clear representations which are determined by this bivariate function and some polynomials. After that, we expand the smooth, periodic function into a Fourier series or a periodic wavelet series or we expand the smooth, compactly supported function into a wavelet series. Since our extensions are smooth, the obtained Fourier coefficients or wavelet coefficients decay very fast. Since our extension tools are polynomials, the moment theorem shows that a lot of wavelet coefficients vanish. From this, with the help of well-known approximation theorems, using our extension methods, the Fourier approximation and the wavelet approximation of the bivariate function on the general domain with small error are obtained. PMID:24683316
Approximating perfection a mathematician's journey into the world of mechanics
Lebedev, Leonid P
2004-01-01
This is a book for those who enjoy thinking about how and why Nature can be described using mathematical tools. Approximating Perfection considers the background behind mechanics as well as the mathematical ideas that play key roles in mechanical applications. Concentrating on the models of applied mechanics, the book engages the reader in the types of nuts-and-bolts considerations that are normally avoided in formal engineering courses: how and why models remain imperfect, and the factors that motivated their development. The opening chapter reviews and reconsiders the basics of c
Pedersen, Troels Dyhr; Schramm, Jesper
2007-01-01
An experimental study has been carried out on the homogeneous charge compression ignition (HCCI) combustion of Dimethyl Ether (DME). The study was performed as a parameter variation of engine speed and compression ratio at excess air ratios of approximately 2.5, 3 and 4. The compression ratio was adjusted in steps to find suitable regions of operation, and the effect of engine speed was studied at 1000, 2000 and 3000 RPM. It was found that leaner excess air ratios require higher compression ratios to achieve satisfactory combustion. Engine speed also affects operation significantly.
The low Sr/Ba ratio on some extremely metal-poor stars
Spite, M; Bonifacio, P; Caffau, E; François, P; Sbordone, L
2014-01-01
It has been noted that, in classical extremely metal-poor (EMP) stars, the abundance ratio of Sr and Ba is always higher than [Sr/Ba] = -0.5, the value of the solar r-only process; however, a handful of EMP stars have recently been found with a very low Sr/Ba ratio. We try to understand the origin of this anomaly by comparing the abundance pattern of the elements in these stars and in the classical EMP stars. Four stars with very low Sr/Ba ratios were observed and analyzed within the LTE approximation using 1D (hydrostatic) model atmospheres, providing homogeneous abundances of nine neutron-capture elements. In CS 22950-173, the only turnoff star of the sample, the Sr/Ba ratio is, in fact, found to be higher than the r-only solar ratio, so the star is discarded. The remaining stars (CS 29493-090, CS 30322-023, HE 305-4520) are cool evolved giants. They do not present a clear carbon enrichment. The abundance patterns of the neutron-capture elements in the three stars are strikingly similar to a theoretical s-pro...
OX40: Structure and function - What questions remain?
Willoughby, Jane; Griffiths, Jordana; Tews, Ivo; Cragg, Mark S
2017-03-01
OX40 is a type 1 transmembrane glycoprotein, reported nearly 30 years ago as a cell surface antigen expressed on activated T cells. Since its discovery, it has been validated as a bona fide costimulatory molecule for T cells and a member of the TNF receptor family. However, many questions still remain about its function on different T cell subsets, and with recent interest in its utility as a target for antibody-mediated immunotherapy, there is a growing need to gain a better understanding of its biology. Here, we review the expression pattern of OX40 and its ligand, discuss the structure of the receptor:ligand interaction, the downstream signalling it can elicit, its function on different T cell subsets and how antibodies might engage with it to provide effective immunotherapy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Remaining teeth, cardiovascular morbidity and death among adult Danes
Heitmann, B L; Gamborg, M
2008-01-01
...... trends in and determinants of CArdiovascular disease) in 1987-88 and 1993-94. Subjects were followed in Danish registers for fatal and non-fatal cardiovascular disease, coronary heart disease or stroke. RESULTS: Tooth loss was strongly associated with incidence of stroke and, to a lesser extent, incidence of cardiovascular disease and coronary heart disease, during on average 7.5 years of follow-up. Compared to those with most teeth remaining, the edentulous suffered a >3-fold increased hazard (HR) of developing stroke (HR=3.25; 95% CI: 1.48-7.14), whereas the risk of developing any cardiovascular disease was increased by 50% (HR=1.50; 95% CI: 1.02-2.19). Risk for coronary heart disease was increased by 31%, but was not significant after adjustment for education, age, smoking, diabetes, alcohol intake, systolic blood pressure and body mass index (HR=1.31; 95% CI: 0.74-2.31). Associations were...
Genetic Structure Analysis of Human Remains from Khitan Noble Necropolis
Anonymous
2006-01-01
Ancient DNA was extracted from 13 skeletal remains from the burial groups of Khitan nobles, which were excavated in northeast China. The hypervariable segment I sequences (HVS I) of the mitochondrial DNA control region in the 13 individuals were used as genetic markers to determine the genetic relationships between the individuals and the genetic affinity to other interrelated populations by using the known database of mtDNA. Based on the phylogenetic analysis of these ancient DNA sequences, the genetic structures of two Khitan noble kindreds were obtained, including the Yelü Yuzhi kindred and the Xiao He kindred. Furthermore, the relationships between the Khitan nobles and some modern interrelated populations were analyzed. On the basis of this analysis, the gene flows of the ancient Khitans and their demographic expansion in history were deduced.
Tactile display on the remaining hand for unilateral hand amputees
Li Tao
2016-09-01
Humans rely profoundly on tactile feedback from the fingertips to interact with the environment, whereas most hand prostheses used in clinics provide no tactile feedback. In this study we demonstrate the feasibility of using a tactile display glove, worn by a unilateral hand amputee on the remaining healthy hand, to display tactile feedback from a hand prosthesis. The main benefit is that users can easily distinguish the feedback for each finger, even without training. The claimed advantage is supported by preliminary tests with healthy subjects. This approach may lead to the development of effective and affordable tactile display devices that provide tactile feedback for the individual fingertips of hand prostheses.
Clinical management of acute HIV infection: best practice remains unknown.
Bell, Sigall K; Little, Susan J; Rosenberg, Eric S
2010-10-15
Best practice for the clinical management of acute human immunodeficiency virus (HIV) infection remains unknown. Although some data suggest possible immunologic, virologic, or clinical benefit of early treatment, other studies show no difference in these outcomes over time after early treatment is discontinued. The literature on acute HIV infection consists predominantly of small nonrandomized studies, which further limits interpretation. As a result, the physician is left to grapple with these uncertainties while making clinical decisions for patients with acute HIV infection. Here we review the literature, focusing on the potential advantages and disadvantages of treating acute HIV infection outlined in treatment guidelines, and summarize the presentations on clinical management of acute HIV infection from the 2009 Acute HIV Infection Meeting in Boston, Massachusetts.
Mining Cancer Transcriptomes: Bioinformatic Tools and the Remaining Challenges.
Milan, Thomas; Wilhelm, Brian T
2017-02-22
The development of next-generation sequencing technologies has had a profound impact on the field of cancer genomics. With the enormous quantities of data being generated from tumor samples, researchers have had to rapidly adapt tools or develop new ones to analyse the raw data to maximize its value. While much of this effort has been focused on improving specific algorithms to get faster and more precise results, the accessibility of the final data for the research community remains a significant problem. Large amounts of data exist but are not easily available to researchers who lack the resources and experience to download and reanalyze them. In this article, we focus on RNA-seq analysis in the context of cancer genomics and discuss the bioinformatic tools available to explore these data. We also highlight the importance of developing new and more intuitive tools to provide easier access to public data and discuss the related issues of data sharing and patient privacy.
DNA Profiling Success Rates from Degraded Skeletal Remains in Guatemala.
Johnston, Emma; Stephenson, Mishel
2016-07-01
No data are available regarding the success of DNA short tandem repeat (STR) profiling from degraded skeletal remains in Guatemala. Therefore, DNA profiling success rates relating to 2595 skeletons from eleven cases at the Forensic Anthropology Foundation of Guatemala (FAFG) are presented. The typical postmortem interval was 30 years. DNA was extracted from bone powder and amplified using Identifiler and MiniFiler. DNA profiling success rates differed between cases, ranging from 7.0% to 50.8%; the overall success rate for samples was 36.3%. The best DNA profiling success rates were obtained from femur (36.2%) and tooth (33.7%) samples. DNA profiles were significantly better from lower body bones than upper body bones (p = ...). These findings can inform forensic DNA sampling strategies in future victim recovery investigations.
Analysis of an Approximation Algorithm for Scheduling Independent Parallel Tasks
Keqin Li
1999-12-01
In this paper, we consider the problem of scheduling independent parallel tasks in parallel systems with identical processors. The problem is NP-hard, since it includes the bin packing problem as a special case when all tasks have unit execution time. We propose and analyze a simple approximation algorithm called H_m, where m is a positive integer. Algorithm H_m has a moderate asymptotic worst-case performance ratio in the range [4/3, 31/18] for all m ≥ 6; but the algorithm has a small asymptotic worst-case performance ratio in the range (1 + 1/(r+1), 1 + 1/r] when task sizes do not exceed 1/r of the total available processors, where r > 1 is an integer. Furthermore, we show that if the task sizes are independent, identically distributed (i.i.d.) uniform random variables, and task execution times are i.i.d. random variables with finite mean and variance, then the average-case performance ratio of algorithm H_m is no larger than 1.2898680..., and for an exponential distribution of task sizes, it does not exceed 1.2898305.... As demonstrated by our analytical as well as numerical results, the average-case performance ratio improves significantly when tasks request smaller numbers of processors.
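The abstract names the algorithm H_m but does not spell out its rules. As a hedged illustration of this class of heuristics only, the sketch below schedules parallel tasks (each occupying some number of processors for some time) with a first-fit-decreasing "shelf" heuristic, a standard simple strategy for this problem; it is not the paper's H_m, and the function name and instance are illustrative.

```python
# Hypothetical illustration (NOT the paper's H_m algorithm): a first-fit-
# decreasing "shelf" heuristic for tasks that each occupy `size` processors
# for `time` units on a machine with m identical processors.

def shelf_schedule(tasks, m):
    """tasks: list of (size, time) pairs; returns (makespan, shelves)."""
    order = sorted(tasks, key=lambda st: -st[0])  # widest tasks first
    shelves = []  # each shelf: [used_processors, height, tasks_on_shelf]
    for size, time in order:
        if size > m:
            raise ValueError("task requires more than m processors")
        for shelf in shelves:
            if shelf[0] + size <= m:          # fits beside earlier tasks
                shelf[0] += size
                shelf[1] = max(shelf[1], time)
                shelf[2].append((size, time))
                break
        else:                                  # no shelf fits: open a new one
            shelves.append([size, time, [(size, time)]])
    makespan = sum(shelf[1] for shelf in shelves)  # shelves run sequentially
    return makespan, shelves

if __name__ == "__main__":
    tasks = [(3, 2.0), (2, 1.0), (2, 4.0), (1, 1.0)]
    print(shelf_schedule(tasks, m=4)[0])  # → 6.0
```

Shelf-style heuristics of this kind are exactly the setting in which asymptotic worst-case ratios like those quoted above are analyzed.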
High CJD infectivity remains after prion protein is destroyed.
Miyazawa, Kohtaro; Emmerling, Kaitlin; Manuelidis, Laura
2011-12-01
The hypothesis that host prion protein (PrP) converts into an infectious prion form rests on the observation that infectivity progressively decreases in direct proportion to the decrease of PrP with proteinase K (PK) treatment. PrP that resists limited PK digestion (PrP-res, PrP(sc)) has been assumed to be the infectious form, with speculative types of misfolding encoding the many unique transmissible spongiform encephalopathy (TSE) agent strains. Recently, a PK sensitive form of PrP has been proposed as the prion. Thus we re-evaluated total PrP (sensitive and resistant) and used a cell-based assay for titration of infectious particles. A keratinase (NAP) known to effectively digest PrP was compared to PK. Total PrP in FU-CJD infected brain was reduced to ≤0.3% in a 2 h PK digest, yet there was no reduction in titer. Remaining non-PrP proteins were easily visualized with colloidal gold in this highly infectious homogenate. In contrast to PK, NAP digestion left 0.8% residual PrP after 2 h, yet decreased titer by >2.5 log; few residual protein bands remained. FU-CJD infected cells with 10× the infectivity of brain by both animal and cell culture assays were also evaluated. NAP again significantly reduced cell infectivity (>3.5 log). Extreme PK digestions were needed to reduce cell PrP to report on maximal PrP destruction and titer. It is likely that one or more residual non-PrP proteins may protect agent nucleic acids in infectious particles.
Approximation Limits of Linear Programs (Beyond Hierarchies)
Braun, Gábor; Pokutta, Sebastian; Steurer, David
2012-01-01
We develop a framework for approximation limits of polynomial-size linear programs from lower bounds on the nonnegative ranks of suitably defined matrices. This framework yields unconditional impossibility results that are applicable to any linear program as opposed to only programs generated by hierarchies. Using our framework, we prove that O(n^{1/2-eps})-approximations for CLIQUE require linear programs of size 2^{n^\\Omega(eps)}. (This lower bound applies to linear programs using a certain encoding of CLIQUE as a linear optimization problem.) Moreover, we establish a similar result for approximations of semidefinite programs by linear programs. Our main ingredient is a quantitative improvement of Razborov's rectangle corruption lemma for the high error regime, which gives strong lower bounds on the nonnegative rank of certain perturbations of the unique disjointness matrix.
Discontinuous Galerkin Methods with Trefftz Approximation
Kretzschmar, Fritz; Tsukerman, Igor; Weiland, Thomas
2013-01-01
We present a novel Discontinuous Galerkin Finite Element Method for wave propagation problems. The method employs space-time Trefftz-type basis functions that satisfy the underlying partial differential equations and the respective interface boundary conditions exactly in an element-wise fashion. The basis functions can be of arbitrary high order, and we demonstrate spectral convergence in the $L_2$-norm. In this context, spectral convergence is obtained with respect to the approximation error in the entire space-time domain of interest, i.e. in space and time simultaneously. Formulating the approximation in terms of a space-time Trefftz basis makes high order time integration an inherent property of the method and clearly sets it apart from methods that employ a high order approximation in space only.
Approximating light rays in the Schwarzschild field
Semerak, Oldrich
2014-01-01
A short formula is suggested which approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors", namely with those following from exact formulas for small $M$, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behaviour is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable--and very accurate--for practical solving of the ray-deflection exercise.
On the approximate zero of Newton method
黄正达
2003-01-01
A judgment criterion to guarantee a point to be a Chen's approximate zero of Newton method for solving nonlinear equations is sought by dominating sequence techniques. The criterion is based on the fact that the dominating function may have only one simple positive zero, assuming that the operator is weakly Lipschitz continuous, which is much more relaxed and can be checked much more easily than Lipschitz continuity in practice. It is demonstrated that a Chen's approximate zero may not be a Smale's approximate zero. The error estimate obtained indicates the order of convergence when we use |f(x)| < ε to stop computation in software. The result can also be applied for solving partial derivative and integration equations.
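A minimal sketch of the stopping rule mentioned at the end of the abstract, |f(x)| < ε, attached to a plain Newton iteration; the Chen and Smale approximate-zero criteria themselves are not reproduced here.

```python
# Plain Newton iteration with the |f(x)| < eps stopping rule from the
# abstract; the Chen/Smale approximate-zero criteria are not implemented.

def newton(f, fprime, x0, eps=1e-12, max_iter=100):
    """Iterate x <- x - f(x)/f'(x) until |f(x)| < eps."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < eps:
            return x
        x -= fx / fprime(x)
    raise RuntimeError("no convergence within max_iter iterations")

if __name__ == "__main__":
    # Solve x^2 - 2 = 0 from x0 = 1; quadratic convergence near the root.
    print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))
```

Whether a given x0 is an *approximate zero* in the technical sense is precisely the kind of guarantee the criterion above addresses; the |f(x)| < ε test alone only bounds the residual, not the distance to the root.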
Optical pulse propagation with minimal approximations
Kinsler, Paul
2010-01-01
Propagation equations for optical pulses are needed to assist in describing applications in ever more extreme situations—including those in metamaterials with linear and nonlinear magnetic responses. Here I show how to derive a single first-order propagation equation using a minimum of approximations and a straightforward “factorization” mathematical scheme. The approach generates exact coupled bidirectional equations, after which it is clear that the description can be reduced to a single unidirectional first-order wave equation by means of a simple “slow evolution” approximation, where the optical pulse changes little over the distance of one wavelength. It also allows a direct term-to-term comparison of an exact bidirectional theory with the approximate unidirectional theory.
Rough interfaces beyond the Gaussian approximation
Caselle, M; Fiore, R; Gliozzi, F; Hasenbusch, M; Pinn, K; Vinti, S
1994-01-01
We compare predictions of the Capillary Wave Model beyond its Gaussian approximation with Monte Carlo results for the energy gap and the surface energy of the 3D Ising model in the scaling region. Our study reveals that the finite size effects of these quantities are well described by the Capillary Wave Model, expanded to two-loop order (one order beyond the Gaussian approximation).
Implementing regularization implicitly via approximate eigenvector computation
Mahoney, Michael W
2010-01-01
Regularization is a powerful technique for extracting useful information from noisy data. Typically, it is implemented by adding some sort of norm constraint to an objective function and then exactly optimizing the modified objective function. This procedure typically leads to optimization problems that are computationally more expensive than the original problem, a fact that is clearly problematic if one is interested in large-scale applications. On the other hand, a large body of empirical work has demonstrated that heuristics, and in some cases approximation algorithms, developed to speed up computations sometimes have the side-effect of performing regularization implicitly. Thus, we consider the question: What is the regularized optimization objective that an approximation algorithm is exactly optimizing? We address this question in the context of computing approximations to the smallest nontrivial eigenvector of a graph Laplacian; and we consider three random-walk-based procedures: one based on the heat ...
On approximation of Markov binomial distributions
Xia, Aihua; 10.3150/09-BEJ194
2010-01-01
For a Markov chain $\\mathbf{X}=\\{X_i,i=1,2,...,n\\}$ with the state space $\\{0,1\\}$, the random variable $S:=\\sum_{i=1}^nX_i$ is said to follow a Markov binomial distribution. The exact distribution of $S$, denoted $\\mathcal{L}S$, is very computationally intensive for large $n$ (see Gabriel [Biometrika 46 (1959) 454--460] and Bhat and Lal [Adv. in Appl. Probab. 20 (1988) 677--680]) and this paper concerns suitable approximate distributions for $\\mathcal{L}S$ when $\\mathbf{X}$ is stationary. We conclude that the negative binomial and binomial distributions are appropriate approximations for $\\mathcal{L}S$ when $\\operatorname {Var}S$ is greater than and less than $\\mathbb{E}S$, respectively. Also, due to the unique structure of the distribution, we are able to derive explicit error estimates for these approximations.
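The distribution L(S) can be computed exactly for small n by dynamic programming over (current state, partial sum), which also makes it easy to check the abstract's rule of thumb: negative binomial when Var S > E S, binomial when Var S < E S. A sketch under the stationarity assumption; the function name and parametrization are mine.

```python
# Hedged sketch: exact distribution of S = X_1 + ... + X_n for a stationary
# {0,1}-valued Markov chain, via dynamic programming over (state, sum).
# Comparing Var(S) with E(S) then suggests, per the abstract, a negative
# binomial (Var > E) or binomial (Var < E) approximation.

def markov_binomial_dist(p01, p10, n):
    """Transition probs P(0->1)=p01, P(1->0)=p10; returns [P(S=0),...,P(S=n)]."""
    pi1 = p01 / (p01 + p10)          # stationary probability of state 1
    # dp[state] maps partial sum k -> P(current state, sum == k)
    dp = {0: {0: 1.0 - pi1}, 1: {1: pi1}}
    trans = {0: {0: 1.0 - p01, 1: p01}, 1: {0: p10, 1: 1.0 - p10}}
    for _ in range(n - 1):
        nxt = {0: {}, 1: {}}
        for s, sums in dp.items():
            for k, pr in sums.items():
                for t, pt in trans[s].items():
                    nxt[t][k + t] = nxt[t].get(k + t, 0.0) + pr * pt
        dp = nxt
    out = [0.0] * (n + 1)
    for sums in dp.values():
        for k, pr in sums.items():
            out[k] += pr
    return out

if __name__ == "__main__":
    dist = markov_binomial_dist(0.2, 0.4, n=10)
    mean = sum(k * p for k, p in enumerate(dist))
    var = sum((k - mean) ** 2 * p for k, p in enumerate(dist))
    print(round(sum(dist), 6), round(mean, 4), var > mean)  # → 1.0 3.3333 True
```

Here the chain is positively autocorrelated (1 - p01 - p10 = 0.4 > 0), so Var S exceeds E S and the negative binomial would be the appropriate approximation.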
Fast wavelet based sparse approximate inverse preconditioner
Wan, W.L. [Univ. of California, Los Angeles, CA (United States)
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies of Huckle and Grote and of Chow and Saad showed that a sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrices, the entries of the inverse typically vary in a piecewise smooth way. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrices.
Numerical approximation of partial differential equations
Bartels, Sören
2016-01-01
Finite element methods for approximating partial differential equations have reached a high degree of maturity, and are an indispensable tool in science and technology. This textbook aims at providing a thorough introduction to the construction, analysis, and implementation of finite element methods for model problems arising in continuum mechanics. The first part of the book discusses elementary properties of linear partial differential equations along with their basic numerical approximation, the functional-analytical framework for rigorously establishing existence of solutions, and the construction and analysis of basic finite element methods. The second part is devoted to the optimal adaptive approximation of singularities and the fast iterative solution of linear systems of equations arising from finite element discretizations. In the third part, the mathematical framework for analyzing and discretizing saddle-point problems is formulated, corresponding finite element methods are analyzed, and particular ...
On Born approximation in black hole scattering
Batic, D. [University of West Indies, Department of Mathematics, Kingston (Jamaica); Kelkar, N.G.; Nowakowski, M. [Universidad de los Andes, Departamento de Fisica, Bogota (Colombia)
2011-12-15
A massless field propagating on spherically symmetric black hole metrics such as the Schwarzschild, Reissner-Nordstroem and Reissner-Nordstroem-de Sitter backgrounds is considered. In particular, explicit formulae in terms of transcendental functions for the scattering of massless scalar particles off black holes are derived within a Born approximation. It is shown that the conditions on the existence of the Born integral forbid a straightforward extraction of the quasinormal modes using the Born approximation for the scattering amplitude. Such a method has been used in the literature. We suggest a novel, well-defined method to extract the large imaginary part of quasinormal modes via the Coulomb-like phase shift. Furthermore, we compare the numerically evaluated exact scattering amplitude with the Born one to find that the approximation is not very useful for the scattering of massless scalar, electromagnetic as well as gravitational waves from black holes.
Time Stamps for Fixed-Point Approximation
Damian, Daniela
2001-01-01
Time stamps were introduced in Shivers's PhD thesis for approximating the result of a control-flow analysis. We show them to be suitable for computing program analyses where the space of results (e.g., control-flow graphs) is large. We formalize time-stamping as a top-down, fixed-point approximation algorithm which maintains a single copy of intermediate results. We then prove the correctness of this algorithm.
Exponential Approximations Using Fourier Series Partial Sums
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the k-th derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
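The Gibbs phenomenon that the first step relies on is easy to reproduce: the truncated Fourier series of a square wave overshoots near the jump by roughly 9% of the jump height, essentially independently of N. A small self-contained demonstration (this is plain partial summation, not the paper's reconstruction method):

```python
import math

# Truncated Fourier series of the square wave sgn(sin x): the partial sums
# overshoot near the jump at x = 0 (Gibbs phenomenon), which is what the
# paper's reconstruction method detects and removes.

def square_wave_partial_sum(x, n_terms):
    """Sum of the first n_terms odd harmonics 4/(pi*k) * sin(k*x)."""
    return sum(4.0 / (math.pi * k) * math.sin(k * x)
               for k in range(1, 2 * n_terms, 2))

if __name__ == "__main__":
    n = 50
    xs = [i * math.pi / 2000.0 for i in range(1, 2000)]
    peak = max(square_wave_partial_sum(x, n) for x in xs)
    print(round(peak, 2))  # ≈ 1.18: about a 9% overshoot of the unit level
```

Away from the jump the partial sum is close to the true value 1, while the peak near the discontinuity stays at about 1.179 no matter how many terms are added, which is why the singular basis functions above are needed to restore exponential accuracy.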
Approximation Algorithm for Bottleneck Steiner Tree Problem in the Euclidean Plane
Zi-Mao Li; Da-Ming Zhu; Shao-Han Ma
2004-01-01
A special case of the bottleneck Steiner tree problem in the Euclidean plane was considered in this paper. The problem has applications in the design of wireless communication networks, multifacility location, VLSI routing and network routing. For the special case, which requires that there be no edge connecting any two Steiner points in the optimal solution, a 3-restricted Steiner tree can be found, indicating the existence of a performance ratio √2. In this paper, the special case of the problem is proved to be NP-hard and not approximable within ratio √2. First, a simple polynomial time approximation algorithm with performance ratio √3 is presented. Then, based on this algorithm and the existence of the 3-restricted Steiner tree, a polynomial time approximation algorithm with performance ratio √2 + ε is proposed, for any ε > 0.
Extending the Eikonal Approximation to Low Energy
Capel, Pierre; Ogata, Kazuyuki
2014-01-01
E-CDCC and DEA, two eikonal-based reaction models, are compared to CDCC at low energy (e.g. 20 AMeV) to study their behaviour in the regime where the eikonal approximation is supposed to fail. We confirm that these models lack the Coulomb deflection of the projectile by the target. We show that a hybrid model, built on the CDCC framework at low angular momenta and on the eikonal approximation at larger angular momenta, gives perfect agreement with CDCC. An empirical shift in impact parameter can also be used reliably to simulate this missing Coulomb deflection.
Approximately-Balanced Drilling in Daqing Oilfield
Xia Bairu; Zheng Xiuhua; Li Guoqing; Tian Tuo
2004-01-01
The Daqing oilfield is a multilayered heterogeneous oil field where pressures differ within the same vertical profile, causing many difficulties for adjustment-well drilling. The approximately-balanced drilling technique has been developed and proved to be efficient and successful in the Daqing oilfield. This paper discusses the application of the approximately-balanced drilling technique under the condition of multilayered pressures in the Daqing oilfield, including the prediction of formation pressure, the pressure discharge technique for the drilling well, and the control of the density of the drilling fluid.
Faddeev Random Phase Approximation for Molecules
Degroote, Matthias; Barbieri, Carlo
2010-01-01
The Faddeev Random Phase Approximation is a Green's function technique that makes use of Faddeev equations to couple the motion of a single electron to two-particle-one-hole and two-hole-one-particle excitations. This method goes beyond the frequently used third-order Algebraic Diagrammatic Construction method: all diagrams involving the exchange of phonons in the particle-hole and particle-particle channels are retained, but the phonons are described at the level of the Random Phase Approximation. This paper presents the first results for diatomic molecules at equilibrium geometry. The behavior of the method in the dissociation limit is also investigated.
An Approximate Bayesian Fundamental Frequency Estimator
Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2012-01-01
Joint fundamental frequency and model order estimation is an important problem in several applications such as speech and music processing. In this paper, we develop an approximate estimation algorithm of these quantities using Bayesian inference. The inference about the fundamental frequency and the model order is based on a probability model which corresponds to a minimum of prior information. From this probability model, we give the exact posterior distributions on the fundamental frequency and the model order, and we also present analytical approximations of these distributions which lower...
Approximate Controllability of Fractional Integrodifferential Evolution Equations
R. Ganesh
2013-01-01
This paper addresses the issue of approximate controllability for a class of control systems represented by nonlinear fractional integrodifferential equations with nonlocal conditions. By using semigroup theory, p-mean continuity and fractional calculations, a set of sufficient conditions is formulated and proved for the nonlinear fractional control systems. More precisely, the results are established under the assumption that the corresponding linear system is approximately controllable and that the functions satisfy non-Lipschitz conditions. The results generalize and improve some known results.
Excluded-Volume Approximation for Supernova Matter
Yudin, A V
2014-01-01
A general scheme of the excluded-volume approximation as applied to multicomponent systems with an arbitrary degree of degeneracy has been developed. This scheme also admits an allowance for additional interactions between the components of a system. A specific form of the excluded-volume approximation for investigating supernova matter at subnuclear densities has been found from comparison with the hard-sphere model. The possibility of describing the phase transition to uniform nuclear matter in terms of the formalism under consideration is discussed.
Generalized companion matrix for approximate GCD
Boito, Paola
2011-01-01
We study a variant of the univariate approximate GCD problem, where the coefficients of one polynomial f(x) are known exactly, whereas the coefficients of the second polynomial g(x) may be perturbed. Our approach relies on the properties of the matrix which describes the operator of multiplication by g in the quotient ring C[x]/(f). In particular, the structure of the null space of the multiplication matrix contains all the essential information about GCD(f, g). Moreover, the multiplication matrix exhibits a displacement structure that allows us to design a fast algorithm for approximate GCD computation with quadratic complexity w.r.t. the polynomial degrees.
An approximate analytical approach to resampling averages
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate inference.
Static correlation beyond the random phase approximation
Olsen, Thomas; Thygesen, Kristian Sommer
2014-01-01
We investigate various approximations to the correlation energy of a H2 molecule in the dissociation limit, where the ground state is poorly described by a single Slater determinant. The correlation energies are derived from the density response function, and it is shown that response functions derived from Hedin's equations (Random Phase Approximation (RPA), Time-Dependent Hartree-Fock (TDHF), Bethe-Salpeter equation (BSE), and Time-Dependent GW) all reproduce the correct dissociation limit. We also show that the BSE improves the correlation energies obtained within RPA and TDHF significantly...
Approximate formulas for moderately small eikonal amplitudes
Kisselev, A. V.
2016-08-01
We consider the eikonal approximation for moderately small scattering amplitudes. To find numerical estimates of these approximations, we derive formulas that contain no Bessel functions and consequently no rapidly oscillating integrands. To obtain these formulas, we study improper integrals of the first kind containing products of the Bessel functions J0(z). We generalize the expression with four functions J0(z) and also find expressions for the integrals with the product of five and six Bessel functions. We generalize a known formula for the improper integral with two functions Jν(az) to the case with noninteger ν and complex a.
The exact renormalization group and approximate solutions
Morris, T R
1994-01-01
We investigate the structure of Polchinski's formulation of the flow equations for the continuum Wilson effective action. Reinterpretations in terms of I.R. cutoff Green's functions are given. A promising non-perturbative approximation scheme is derived by carefully taking the sharp cutoff limit and expanding in `irrelevancy' of operators. We illustrate with two simple models of four-dimensional $\lambda\varphi^4$ theory: the cactus approximation, and a model incorporating the first irrelevant correction to the renormalized coupling. The qualitative and quantitative behaviour give confidence in a fuller use of this method for obtaining accurate results.
Approximating W projection as a separable kernel
Merry, Bruce
2016-02-01
W projection is a commonly used approach to allow interferometric imaging to be accelerated by fast Fourier transforms, but it can require a huge amount of storage for convolution kernels. The kernels are not separable, but we show that they can be closely approximated by separable kernels. The error scales with the fourth power of the field of view, and so is small enough to be ignored at mid- to high frequencies. We also show that hybrid imaging algorithms combining W projection with either faceting, snapshotting, or W stacking allow the error to be made arbitrarily small, making the approximation suitable even for high-resolution wide-field instruments.
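The "closely approximated by separable kernels" claim can be illustrated with a rank-1 SVD truncation, which by the Eckart-Young theorem is the Frobenius-optimal separable approximation of a sampled 2-D kernel. The kernel below is a smooth non-separable stand-in, not the paper's actual W-projection kernel:

```python
import numpy as np

# Stand-in 2-D kernel on a 64x64 grid (NOT the actual W kernel; just a
# smooth, non-separable function for illustration).
n = 64
u = np.linspace(-1.0, 1.0, n)
U, V = np.meshgrid(u, u)
K = np.exp(-3.0 * (U**2 + V**2)) * np.cos(2.0 * (U**2 + V**2))

# Best separable approximation K ~ s0 * outer(a, b) in the Frobenius norm:
# the leading term of the singular value decomposition (Eckart-Young).
Uo, s, Vt = np.linalg.svd(K)
K_sep = s[0] * np.outer(Uo[:, 0], Vt[0, :])

rel_err = np.linalg.norm(K - K_sep) / np.linalg.norm(K)
print(f"relative Frobenius error of separable approximation: {rel_err:.3f}")
```

For a kernel this smooth the single outer product already captures most of the energy; the paper's point is that the residual error for real W kernels scales with the fourth power of the field of view.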
BEST APPROXIMATION BY DOWNWARD SETS WITH APPLICATIONS
H. Mohebi; A. M. Rubinov
2006-01-01
We develop a theory of downward sets for a class of normed ordered spaces. We study best approximation in a normed ordered space X by elements of downward sets, and give necessary and sufficient conditions for any element of best approximation by a closed downward subset of X. We also characterize strictly downward subsets of X, and prove that a downward subset of X is strictly downward if and only if each of its boundary points is Chebyshev. The results obtained are used for the examination of some Chebyshev pairs (W,x), where x ∈ X and W is a closed downward subset of X.
Local density approximations from finite systems
Entwistle, Mike; Wetherell, Jack; Longstaff, Bradley; Ramsden, James; Godby, Rex
2016-01-01
The local density approximation (LDA) constructed through quantum Monte Carlo calculations of the homogeneous electron gas (HEG) is the most common approximation to the exchange-correlation functional in density functional theory. We introduce an alternative set of LDAs constructed from slab-like systems of one, two and three electrons that resemble the HEG within a finite region, and illustrate the concept in one dimension. Comparing with the exact densities and Kohn-Sham potentials for various test systems, we find that the LDAs give a good account of the self-interaction correction, but are less reliable when correlation is stronger or currents flow.
Assessment of the Remaining Life of Bituminous Layers in Road Pavements
Kálmán Adorjányi
2017-02-01
In this paper, a mechanistic-empirical approach is presented for the assessment of bearing capacity condition of asphalt pavement layers by Falling Weight Deflectometer measurements and laboratory fatigue tests. The bearing capacity condition ratio was determined using past traffic data and the remaining fatigue life, which was determined from a multilayer pavement response model. The traffic growth rate was taken into account with finite arithmetic and geometric progressions. Fatigue resistance of layers’ bituminous materials was obtained with indirect tensile fatigue tests. Deduct curve of condition scores was derived with Weibull distribution.
On Quadratic Programming with a Ratio Objective
Bhaskara, Aditya; Manokaran, Rajsekar; Vijayaraghavan, Aravindan
2011-01-01
Quadratic Programming (QP) is the well-studied problem of maximizing over {-1,1} values the quadratic form \sum_{ij} a_{ij} x_i x_j. QP captures many known combinatorial optimization problems and SDP techniques have given optimal approximation algorithms for many of these problems. We extend this body of work by initiating the study of Quadratic Programming problems where the variables take values in the domain {-1,0,1}. The specific problem we study is: QP-Ratio: max_{x \in {-1,0,1}^n} (x^T A x) / (x^T x). This objective function is a natural relative of several well studied problems. Yet, it is a good testbed for both algorithms and complexity because the techniques used for quadratic problems for the {-1,1} and {0,1} domains do not seem to carry over to the {-1,0,1} domain. We give approximation algorithms and evidence for the hardness of approximating the QP-Ratio problem. We consider an SDP relaxation obtained by adding constraints to the natural SDP relaxation for this problem and obtain an O(n^{2/7}) algorithm for...
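The QP-Ratio objective is easy to evaluate exactly on tiny instances by exhaustive enumeration, which makes a useful (exponential-time) baseline for checking approximation algorithms. The matrix below is illustrative only:

```python
import itertools
import numpy as np

def qp_ratio_bruteforce(A):
    """Maximize (x^T A x) / (x^T x) over x in {-1,0,1}^n, x != 0,
    by exhaustive enumeration (3^n candidates; illustration only)."""
    n = A.shape[0]
    best_val, best_x = -np.inf, None
    for x in itertools.product((-1, 0, 1), repeat=n):
        x = np.array(x)
        denom = x @ x
        if denom == 0:          # the all-zero vector is excluded
            continue
        val = (x @ A @ x) / denom
        if val > best_val:
            best_val, best_x = val, x
    return best_val, best_x

# Small symmetric instance with zero diagonal (illustrative).
A = np.array([[0.0, 1.0, -2.0],
              [1.0, 0.0, 1.0],
              [-2.0, 1.0, 0.0]])
val, x = qp_ratio_bruteforce(A)
print(val, x)   # optimum 2.0, attained by a sparse x such as (-1, 0, 1)
```

Note how the optimum here sets one coordinate to 0: the {-1,0,1} domain rewards dropping coordinates from the support, which is exactly what makes the problem behave differently from the {-1,1} case.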
Effect of interaction of embedded crack and free surface on remaining fatigue life
Genshichiro Katsumata
2016-12-01
An embedded crack located near the free surface of a component interacts with the free surface. When the distance between the free surface and the embedded crack is short, stress at the crack tip ligament is higher than that at the other area of the cracked section. It can be easily expected that fatigue crack growth is fast when the embedded crack is located near the free surface. To avoid catastrophic failures caused by fast fatigue crack growth at the crack tip ligament, fitness-for-service (FFS) codes provide crack-to-surface proximity rules. The proximity rules are used to determine whether the cracks should be treated as embedded cracks as-is, or transformed to surface cracks. Although the concepts of the proximity rules are the same, the specific criteria and the rules to transform embedded cracks into surface cracks differ amongst FFS codes. This paper focuses on the interaction between an embedded crack and a free surface of a component as well as on its effects on the remaining fatigue lives of embedded cracks using the proximity rules provided by the FFS codes. It is shown that the remaining fatigue lives for the embedded cracks strongly depend on the crack aspect ratio and location from the component free surface. In addition, it can be said that the proximity criteria defined by the API and RSE-M codes give overly conservative remaining lives. On the contrary, the WES and AME codes always give long remaining lives and non-conservative estimations. When the crack aspect ratio is small, the ASME code gives non-conservative estimation.
Calculation note for an underground leak which remains underground
Goldberg, H.J.
1997-05-20
This calculation note supports the subsurface leak accident scenario which remains subsurface. It is assumed that a single walled pipe carrying waste from tank 106-C ruptures, releasing the liquid waste into the soil. In this scenario, the waste does not form a surface pool, but remains subsurface. However, above the pipe is a berm, 0.762 m (2.5 ft) high and 2.44 m (8 ft) wide, and the liquid released from the leak rises into the berm. The slurry line, which transports a source term of higher activity than the sluice line, leaks into the soil at a rate of 5% of the maximum flow rate of 28.4 L/s (450 gpm) for twelve hours. The dose recipient was placed a perpendicular distance of 100 m from the pipe. Two source terms were considered, mitigated and unmitigated release as described in section 3.4.1 of UANF-SD-WM-BIO-001, Addendum 1. The unmitigated release consisted of two parts of AWF liquid and one part AWF solid. The mitigated release consisted of two parts SST liquid, eighteen parts AWF liquid, nine parts SST solid, and one part AWF solid. The isotopic breakdown of the release in these cases is presented. Two geometries were considered in preliminary investigations, disk source, and rectangular source. Since the rectangular source results from the assumption that the contamination is wicked up into the berm, only six inches of shielding from uncontaminated earth is present, while the disk source, which remains six inches below the level of the surface of the land, is often shielded by a thick shield due to the slant path to the dose point. For this reason, only the rectangular source was considered in the final analysis. The source model was a rectangle 2.134 m (7 ft) thick, 0.6096 m (2 ft) high, and 130.899 m (131 ft) long. The top and sides of this rectangular source were covered with earth of density 1.6 g/cm³ to a thickness of 15.24 cm (6 in). This soil is modeled as 40% void space. The source consisted of earth of the same density with the void spaces filled with
Future Remains: Industrial Heritage at the Hanford Plutonium Works
Freer, Brian
This dissertation argues that U.S. environmental and historic preservation regulations, industrial heritage projects, history, and art only provide partial frameworks for successfully transmitting an informed story into the long range future about nuclear technology and its related environmental legacy. This argument is important because plutonium from nuclear weapons production is toxic to humans in very small amounts, threatens environmental health, has a half-life of 24,110 years and because the industrial heritage project at Hanford is the first time an entire U.S. Department of Energy weapons production site has been designated a U.S. Historic District. This research is situated within anthropological interest in industrial heritage studies, environmental anthropology, applied visual anthropology, as well as wider discourses on nuclear studies. However, none of these disciplines is really designed or intended to be a completely satisfactory frame of reference for addressing this perplexing challenge of documenting and conveying an informed story about nuclear technology and its related environmental legacy into the long range future. Others have thought about this question and have made important contributions toward a potential solution. Examples here include: future generations movements concerning intergenerational equity as evidenced in scholarship, law, and amongst Native American groups; Nez Perce and Confederated Tribes of the Umatilla Indian Reservation responses to the Hanford End State Vision and Hanford's Canyon Disposition Initiative; as well as the findings of organizational scholars on the advantages realized by organizations that have a long term future perspective. While these ideas inform the main line inquiry of this dissertation, the principal approach put forth by the researcher of how to convey an informed story about nuclear technology and waste into the long range future is implementation of the proposed Future Remains clause, as
Photoferrotrophy: Remains of an Ancient Photosynthesis in Modern Environments.
Camacho, Antonio; Walter, Xavier A; Picazo, Antonio; Zopfi, Jakob
2017-01-01
Photoferrotrophy, the process by which inorganic carbon is fixed into organic matter using light as an energy source and reduced iron [Fe(II)] as an electron donor, has been proposed as one of the oldest photoautotrophic metabolisms on Earth. Under the iron-rich (ferruginous) but sulfide poor conditions dominating the Archean ocean, this type of metabolism could have accounted for most of the primary production in the photic zone. Here we review the current knowledge of biogeochemical, microbial and phylogenetic aspects of photoferrotrophy, and evaluate the ecological significance of this process in ancient and modern environments. From the ferruginous conditions that prevailed during most of the Archean, the ancient ocean evolved toward euxinic (anoxic and sulfide rich) conditions and, finally, much after the advent of oxygenic photosynthesis, to a predominantly oxic environment. Under these new conditions photoferrotrophs lost importance as primary producers, and now photoferrotrophy remains as a vestige of a formerly relevant photosynthetic process. Apart from the geological record and other biogeochemical markers, modern environments resembling the redox conditions of these ancient oceans can offer insights into the past significance of photoferrotrophy and help to explain how this metabolism operated as an important source of organic carbon for the early biosphere. Iron-rich meromictic (permanently stratified) lakes can be considered as modern analogs of the ancient Archean ocean, as they present anoxic ferruginous water columns where light can still be available at the chemocline, thus offering suitable niches for photoferrotrophs. A few bacterial strains of purple bacteria as well as of green sulfur bacteria have been shown to possess photoferrotrophic capacities, and hence, could thrive in these modern Archean ocean analogs. Studies addressing the occurrence and the biogeochemical significance of photoferrotrophy in ferruginous environments have been
Diagenetic signals from ancient human remains - bioarchaeological applications
Szostek, Krzysztof; Stepańczak, Beata; Szczepanek, Anita; Kępa, Małgorzata; Głąb, Henryk; Jarosz, Paweł; Włodarczak, Piotr; Tunia, Krzysztof; Pawlyta, Jacek; Paluszkiewicz, Czesława; Tylko, Grzegorz
2011-01-01
This preliminary study examines the potential effects of diagenetic processes on the oxygen-isotope ratios of bone and tooth phosphate (δ18O) from skeletal material of individuals representing the Corded Ware Culture (2500-2400 BC) discovered in Malżyce (Southern Poland). Intra-individual variability of Ca/P, CI, C/P, collagen content (%) and oxygen isotopes was observed through analysis of enamel, dentin and postcranial bones. Using a variety of analytical techniques, it was found that, despite the lack of differences in soil acidity, not all the parts of a skeleton on a given site had been equally exposed to diagenetic post mortem changes. In a few cases, qualitative changes in the FTIR spectrum of analysed bones were observed. The data suggest that apart from quantitative analyses, i.e., the calculation of Ca/P, CI, C/P and collagen content, qualitative analyses such as examination of the absorbance line are recommended. The degree to which a sample is contaminated, judged on the basis of any additional non-biogenic peaks, should also be specified.
Rational approximations and quantum algorithms with postselection
Mahadev, U.; de Wolf, R.
2015-01-01
We study the close connection between rational functions that approximate a given Boolean function, and quantum algorithms that compute the same function using post-selection. We show that the minimal degree of the former equals (up to a factor of 2) the minimal query complexity of the latter. We gi
Kravchuk functions for the finite oscillator approximation
Atakishiyev, Natig M.; Wolf, Kurt Bernardo
1995-01-01
Kravchuk orthogonal functions - Kravchuk polynomials multiplied by the square root of the weight function - simplify the inversion algorithm for the analysis of discrete, finite signals in harmonic oscillator components. They can be regarded as the best approximation set. As the number of sampling points increases, the Kravchuk expansion becomes the standard oscillator expansion.
Optical bistability without the rotating wave approximation
Sharaby, Yasser A., E-mail: Yasser_Sharaby@hotmail.co [Physics Department, Faculty of Applied Sciences, Suez Canal University, Suez (Egypt); Joshi, Amitabh, E-mail: ajoshi@eiu.ed [Department of Physics, Eastern Illinois University, Charleston, IL 61920 (United States); Hassan, Shoukry S., E-mail: Shoukryhassan@hotmail.co [Mathematics Department, College of Science, University of Bahrain, P.O. Box 32038 (Bahrain)
2010-04-26
Optical bistability for two-level atomic system in a ring cavity is investigated outside the rotating wave approximation (RWA) using non-autonomous Maxwell-Bloch equations with Fourier decomposition up to first harmonic. The first harmonic output field component exhibits reversed or closed loop bistability simultaneously with the usual (anti-clockwise) bistability in the fundamental field component.
Improved Approximations for Multiprocessor Scheduling Under Uncertainty
Crutchfield, Christopher; Fineman, Jeremy T; Karger, David R; Scott, Jacob
2008-01-01
This paper presents improved approximation algorithms for the problem of multiprocessor scheduling under uncertainty, or SUU, in which the execution of each job may fail probabilistically. This problem is motivated by the increasing use of distributed computing to handle large, computationally intensive tasks. In the SUU problem we are given n unit-length jobs and m machines, a directed acyclic graph G of precedence constraints among jobs, and unrelated failure probabilities q_{ij} for each job j when executed on machine i for a single timestep. Our goal is to find a schedule that minimizes the expected makespan, which is the expected time at which all jobs complete. Lin and Rajaraman gave the first approximations for this NP-hard problem for the special cases of independent jobs, precedence constraints forming disjoint chains, and precedence constraints forming trees. In this paper, we present asymptotically better approximation algorithms. In particular, we give an O(loglog min(m,n))-approximation for indep...
Markov operators, positive semigroups and approximation processes
Altomare, Francesco; Leonessa, Vita; Rasa, Ioan
2015-01-01
In recent years several investigations have been devoted to the study of large classes of (mainly degenerate) initial-boundary value evolution problems in connection with the possibility to obtain a constructive approximation of the associated positive C_0-semigroups. In this research monograph we present the main lines of a theory which finds its root in the above-mentioned research field.
Image Compression Via a Fast DCT Approximation
Bayer, F. M.; Cintra, R. J.
2010-01-01
Discrete transforms play an important role in digital signal processing. In particular, due to its transform domain energy compaction properties, the discrete cosine transform (DCT) is pivotal in many image processing problems. This paper introduces a numerical approximation method for the DCT based
Approximation algorithms for planning and control
Boddy, Mark; Dean, Thomas
1989-01-01
A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
Large hierarchies from approximate R symmetries
Kappl, Rolf; Ratz, Michael [Technische Univ. Muenchen, Garching (Germany). Physik Dept. T30; Nilles, Hans Peter [Bonn Univ. (Germany). Bethe Zentrum fuer Theoretische Physik und Physikalisches Inst.; Ramos-Sanchez, Saul; Schmidt-Hoberg, Kai [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Vaudrevange, Patrick K.S. [Ludwig-Maximilians-Univ. Muenchen (Germany). Arnold Sommerfeld Zentrum fuer Theoretische Physik
2008-12-15
We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales. (orig.)
Strong washout approximation to resonant leptogenesis
Garbrecht, Bjoern; Gautier, Florian; Klaric, Juraj [Physik Department T70, James-Franck-Strasse, Technische Universitaet Muenchen, 85748 Garching (Germany)
2016-07-01
We study resonant Leptogenesis with two sterile neutrinos with masses M_1 and M_2, Yukawa couplings Y_1 and Y_2, and a single active flavor. Specifically, we focus on the strong washout regime, where the decay width dominates the mass splitting of the two sterile neutrinos. We show that one can approximate the effective decay asymmetry by its late-time limit ε = X sin(2φ)/(X^2 + sin^2 φ), where X = 8πΔ/(|Y_1|^2 + |Y_2|^2), Δ = 4(M_1 - M_2)/(M_1 + M_2), and φ = arg(Y_2/Y_1), and establish criteria for the validity of this approximation. We compare the approximate results with numerical ones, obtained by solving the mixing and oscillations of the sterile neutrinos. We generalize the formula to the case of several active flavors, and demonstrate how it can be used to calculate the lepton asymmetry in phenomenological scenarios which are in agreement with the neutrino oscillation data. We find that using the late-time limit is an applicable approximation throughout the phenomenologically viable parameter space.
Lower Bound Approximation for Elastic Buckling Loads
Vrouwenvelder, A.; Witteveen, J.
1975-01-01
An approximate method for the elastic buckling analysis of two-dimensional frames is introduced. The method can conveniently be explained with reference to a physical interpretation: In the frame every member is replaced by two new members: - a flexural member without extensional rigidity to transmi
Approximate Equilibrium Problems and Fixed Points
H. Mazaheri
2013-01-01
We find a common element of the set of fixed points of a map and the set of solutions of an approximate equilibrium problem in a Hilbert space. Then, we show that one of the sequences weakly converges. Also we obtain some theorems about equilibrium problems and fixed points.
Approximations in diagnosis: motivations and techniques
Harmelen, van F.A.H.; Teije, A. ten
1995-01-01
We argue that diagnosis should not be seen as solving a problem with a unique definition, but rather that there exists a whole space of reasonable notions of diagnosis. These notions can be seen as mutual approximations. We present a number of reasons for choosing among different notions of diagnos
Eignets for function approximation on manifolds
Mhaskar, H N
2009-01-01
Let $\XX$ be a compact, smooth, connected, Riemannian manifold without boundary, $G:\XX\times\XX\to \RR$ be a kernel. Analogous to a radial basis function network, an eignet is an expression of the form $\sum_{j=1}^M a_jG(\circ,y_j)$, where $a_j\in\RR$, $y_j\in\XX$, $1\le j\le M$. We describe a deterministic, universal algorithm for constructing an eignet for approximating functions in $L^p(\mu;\XX)$ for a general class of measures $\mu$ and kernels $G$. Our algorithm yields linear operators. Using the minimal separation amongst the centers $y_j$ as the cost of approximation, we give modulus of smoothness estimates for the degree of approximation by our eignets, and show by means of a converse theorem that these are the best possible for every \emph{individual function}. We also give estimates on the coefficients $a_j$ in terms of the norm of the eignet. Finally, we demonstrate that if any sequence of eignets satisfies the optimal estimates for the degree of approximation of a smooth function, measured in ter...
Empirical progress and nomic truth approximation revisited
Kuipers, Theodorus
2014-01-01
In my From Instrumentalism to Constructive Realism (2000) I have shown how an instrumentalist account of empirical progress can be related to nomic truth approximation. However, it was assumed that a strong notion of nomic theories was needed for that analysis. In this paper it is shown, in terms of
Faddeev Random Phase Approximation applied to molecules
Degroote, Matthias
2012-01-01
This Ph.D. thesis derives the equations of the Faddeev Random Phase Approximation (FRPA) and applies the method to a set of small atoms and molecules. The occurrence of RPA instabilities in the dissociation limit is addressed in molecules and by the study of the Hubbard molecule as a test system with reduced dimensionality.
Auction analysis by normal form game approximation
Kaisers, Michael; Tuyls, Karl; Thuijsman, Frank; Parsons, Simon
2008-01-01
Auctions are pervasive in today's society and provide a variety of real markets. This article facilitates a strategic choice between a set of available trading strategies by introducing a methodology to approximate heuristic payoff tables by normal form games. An example from the auction domain
Fostering Formal Commutativity Knowledge with Approximate Arithmetic.
Sonja Maria Hansen
How can we enhance the understanding of abstract mathematical principles in elementary school? Different studies found that nonsymbolic estimation could foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not been instructed about commutativity in school yet. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school.
Approximate fixed point of Reich operator
M. Saha
2013-01-01
In the present paper, we study the existence of approximate fixed points for the Reich operator, together with the property that the ε-fixed points are concentrated in a set whose diameter tends to zero as ε → 0.
Approximation of Aggregate Losses Using Simulation
Mohamed A. Mohamed
2010-01-01
Problem statement: The modeling of aggregate losses is one of the main objectives in actuarial theory and practice, especially in the process of making important business decisions regarding various aspects of insurance contracts. The aggregate losses over a fixed time period is often modeled by mixing the distributions of loss frequency and severity, whereby the distribution resulting from this approach is called a compound distribution. However, in many cases, realistic probability distributions for loss frequency and severity cannot be combined mathematically to derive the compound distribution of aggregate losses. Approach: This study aimed to approximate the aggregate loss distribution using simulation approach. In particular, the approximation of aggregate losses was based on a compound Poisson-Pareto distribution. The effects of deductible and policy limit on the individual loss as well as the aggregate losses were also investigated. Results: Based on the results, the approximation of compound Poisson-Pareto distribution via simulation approach agreed with the theoretical mean and variance of each of the loss frequency, loss severity and aggregate losses. Conclusion: This study approximated the compound distribution of aggregate losses using simulation approach. The investigation on retained losses and insurance claims allowed an insured or a company to select an insurance contract that fulfills its requirement. In particular, if a company wants to have an additional risk reduction, it can compare alternative policies by considering the worthiness of the additional expected total cost which can be estimated via simulation approach.
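A minimal Monte Carlo sketch of the compound Poisson-Pareto approach described above, checking the simulated mean against the theoretical one; the parameters λ, α, θ and the simulation size are illustrative, not taken from the study:

```python
import math
import random

def poisson(rng, lam):
    """Knuth's method: N = number of uniforms whose running product
    stays above exp(-lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_aggregate_losses(lam, alpha, theta, n_sims, seed=42):
    """Aggregate loss S = X_1 + ... + X_N with N ~ Poisson(lam) and
    X_i ~ Lomax(alpha, theta), i.e. theta * (Pareto(alpha) - 1)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        n = poisson(rng, lam)
        totals.append(sum(theta * (rng.paretovariate(alpha) - 1.0)
                          for _ in range(n)))
    return totals

lam, alpha, theta = 5.0, 3.0, 100.0
totals = simulate_aggregate_losses(lam, alpha, theta, n_sims=20000)
sim_mean = sum(totals) / len(totals)
# Theory: E[S] = lam * E[X] = lam * theta / (alpha - 1) = 250 here.
print(round(sim_mean, 1))
```

A per-loss deductible d or policy limit m, as investigated in the study, can be applied by replacing each severity draw x with min(max(x - d, 0.0), m) before summing.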
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-11-30
We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
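The log-linear storage claim rests on off-diagonal covariance blocks between well-separated point sets being (numerically) low rank. A hedged one-block sketch, not the paper's H-matrix construction; the kernel and sizes are illustrative:

```python
import numpy as np

# Covariance block between two well-separated 1-D clusters with an
# exponential (Matern nu = 1/2) kernel. Because every y exceeds every x,
# exp(-|x - y|) = exp(x) * exp(-y), so this block is exactly rank 1 in
# exact arithmetic. H-matrices exploit this low-rank structure of
# off-diagonal blocks to reach O(k n log n) storage.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)   # cluster 1
y = rng.uniform(3.0, 4.0, 200)   # cluster 2, well separated
C = np.exp(-np.abs(x[:, None] - y[None, :]))

s = np.linalg.svd(C, compute_uv=False)
k = int(np.sum(s > 1e-10 * s[0]))   # numerical rank at relative tol 1e-10
print(f"numerical rank of the 200x200 block: {k}")
```

Smoother Matérn kernels do not give exactly rank-1 blocks, but their off-diagonal blocks still have small numerical rank k ≪ n, which is what makes the H-matrix compression effective.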
Approximations in the PE-method
Arranz, Marta Galindo
1996-01-01
Two different sources of errors may occur in the implementation of the PE methods; a phase error introduced in the approximation of a pseudo-differential operator and an amplitude error generated from the starting field. First, the inherent phase errors introduced in the solution are analyzed...
Approximating the DGP of China's Quarterly GDP
Ph.H.B.F. Franses (Philip Hans); H. Mees (Heleen)
2010-01-01
We demonstrate that the data generating process (DGP) of China’s cumulated quarterly Gross Domestic Product (GDP, current prices), as it is reported by the National Bureau of Statistics of China, can be (very closely) approximated by a simple rule. This rule is that annual growth in any
OPTICAL QUANTIFICATION OF APPROXIMAL CARIES IN VITRO
VANDERIJKE, JW; HERKSTROTER, FM; TENBOSCH, JJ
1991-01-01
A fluorescent dye was applied to extracted premolars with either early artificial lesions or natural white-spot lesions. The teeth were placed in an approximal geometry, and with a specially designed fibre-optic probe the fluorescence of the dye was measured in the lesions. The same fibre-optic
Approximation Algorithms for Model-Based Diagnosis
Feldman, A.B.
2010-01-01
Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation a
An Approximation of Ultra-Parabolic Equations
Allaberen Ashyralyev
2012-01-01
The first and second order of accuracy difference schemes for the approximate solution of the initial boundary value problem for ultra-parabolic equations are presented. Stability of these difference schemes is established. Theoretical results are supported by the results of numerical examples.
$\Phi$-derivable approximations in gauge theories
Arrizabalaga, A
2003-01-01
We discuss the method of $\Phi$-derivable approximations in gauge theories. There, two complications arise, namely the violation of Bose symmetry in correlation functions and the gauge dependence. For the latter we argue that the error introduced by the gauge-dependent terms is controlled and therefore does not invalidate the method.
Approximations of Two-Attribute Utility Functions
1976-09-01
Introduction to Approximation Theory, McGraw-Hill, New York, 1966. Faber, G., Über die interpolatorische Darstellung stetiger Funktionen, Deutsche... Management Review 14 (1972b) 37-50. Keeney, R. L., A decision analysis with multiple objectives: the Mexico City airport, Bell Journal of Economics
Approximate Furthest Neighbor in High Dimensions
Pagh, Rasmus; Silvestri, Francesco; Sivertsen, Johan von Tangen
2015-01-01
-dimensional Euclidean space. We build on the technique of Indyk (SODA 2003), storing random projections to provide sublinear query time for AFN. However, we introduce a different query algorithm, improving on Indyk’s approximation factor and reducing the running time by a logarithmic factor. We also present a variation...
Virial expansion coefficients in the harmonic approximation
R. Armstrong, J.; Zinner, Nikolaj Thomas; V. Fedorov, D.
2012-01-01
The virial expansion method is applied within a harmonic approximation to an interacting N-body system of identical fermions. We compute the canonical partition functions for two and three particles to get the two lowest orders in the expansion. The energy spectrum is carefully interpolated...
Nonlinear approximation with dictionaries. II: Inverse estimates
Gribonval, Rémi; Nielsen, Morten
In this paper we study inverse estimates of the Bernstein type for nonlinear approximation with structured redundant dictionaries in a Banach space. The main results are for separated decomposable dictionaries in Hilbert spaces, which generalize the notion of joint block-diagonal mutually...
Intrinsic Diophantine approximation on general polynomial surfaces
Tiljeset, Morten Hein
2017-01-01
We study the Hausdorff measure and dimension of the set of intrinsically simultaneously -approximable points on a curve, surface, etc., given as a graph of integer polynomials. We obtain complete answers to these questions for algebraically “nice” manifolds. This generalizes earlier work done...
Turbo Equalization Using Partial Gaussian Approximation
Zhang, Chuanzong; Wang, Zhongyong; Manchón, Carles Navarro
2016-01-01
returned by the equalizer by using a partial Gaussian approximation (PGA). We exploit the specific structure of the ISI channel model to compute the latter messages from the beliefs obtained using a Kalman smoother/equalizer. Doing so leads to a significant complexity reduction compared to the initial PGA...
Subset Selection by Local Convex Approximation
Øjelund, Henrik; Sadegh, Payman; Madsen, Henrik
1999-01-01
least squares criterion. We propose an optimization technique for the posed problem based on a modified version of the Newton-Raphson iterations, combined with a backward elimination type algorithm. The Newton-Raphson modification concerns iterative approximations to the non-convex cost function...