Levels of acute phase proteins remain stable after ischemic stroke
Directory of Open Access Journals (Sweden)
Paik Myunghee C
2006-10-01
Background: Inflammation and inflammatory biomarkers play an important role in atherosclerosis and cardiovascular disease. Little information is available, however, on the time course of serum markers of inflammation after stroke. Methods: First ischemic stroke patients ≥40 years old had levels of high-sensitivity C-reactive protein (hsCRP), serum amyloid A (SAA), and fibrinogen measured in plasma samples drawn at 1, 2, 3, 7, 14, 21 and 28 days after stroke. Levels were log-transformed as needed, and parametric and non-parametric statistical tests were used to test for evidence of a trend in levels over time. Levels of hsCRP and SAA were also compared with levels in a comparable population of stroke-free participants. Results: Mean age of participants with repeated measures (n = 21) was 65.6 ± 11.6 years; 13 (61.9%) were men, and 15 (71.4%) were Hispanic. Approximately 75% of patients (n = 15) had mild strokes (NIH Stroke Scale score 0–5). There was no evidence of a time trend in levels of hsCRP, SAA, or fibrinogen during the 28 days of follow-up. Mean log(hsCRP) was 1.67 ± 1.07 mg/L (median hsCRP 6.48 mg/L) among stroke participants and 1.00 ± 1.18 mg/L (median 2.82 mg/L) in a group of 1176 randomly selected stroke-free participants from the same community (p = 0.0252). Conclusion: Levels of hsCRP are higher in stroke patients than in stroke-free subjects. Levels of inflammatory biomarkers associated with atherosclerosis, including hsCRP, appear to be stable for at least 28 days after first ischemic stroke.
Impurity levels: corrections to the effective mass approximation
International Nuclear Information System (INIS)
Bentosela, F.
1977-07-01
Some rigorous results concerning the effective mass approximation used for the calculation of the impurity levels in semiconductors are presented. Each energy level is expressed as an asymptotic series in the inverse of the dielectric constant K, in the case where the impurity potential is 1/μ
Breakdown of the few-level approximation in collective systems
International Nuclear Information System (INIS)
Kiffner, M.; Evers, J.; Keitel, C. H.
2007-01-01
The validity of the few-level approximation in dipole-dipole interacting collective systems is discussed. As an example system, we study the archetype case of two dipole-dipole interacting atoms, each modeled by two complete sets of angular momentum multiplets. We establish the breakdown of the few-level approximation by first proving the intuitive result that the dipole-dipole induced energy shifts between collective two-atom states depend on the length of the vector connecting the atoms, but not on its orientation, if complete and degenerate multiplets are considered. A careful analysis of our findings reveals that the simplification of the atomic level scheme by artificially omitting Zeeman sublevels in a few-level approximation generally leads to incorrect predictions. We find that this breakdown can be traced back to the dipole-dipole coupling of transitions with orthogonal dipole moments. Our interpretation enables us to identify special geometries in which partial few-level approximations to two- or three-level systems are valid
Multi-level methods and approximating distribution functions
International Nuclear Information System (INIS)
Wilson, D.; Baker, R. E.
2016-01-01
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
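The exact-versus-approximate trade-off described here can be illustrated on the simplest possible reaction network. The sketch below (an illustrative assumption of this edit, not code from the paper) simulates a birth-death process with both Gillespie's direct method and fixed-step tau-leaping, then compares their estimates of the stationary mean:

```python
import numpy as np

def gillespie_birth_death(k_prod, k_deg, x0, t_end, rng):
    """Exact SSA (Gillespie direct method) for 0 -> X at rate k_prod
    and X -> 0 at rate k_deg * X.  Returns the state at t_end."""
    t, x = 0.0, x0
    while True:
        a1, a2 = k_prod, k_deg * x          # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)      # time to the next reaction
        if t > t_end:
            return x
        x += 1 if rng.random() * a0 < a1 else -1

def tau_leap_birth_death(k_prod, k_deg, x0, t_end, tau, rng):
    """Approximate tau-leaping: fire Poisson numbers of each reaction
    over fixed steps of length tau (propensities frozen per step)."""
    x = x0
    for _ in range(int(t_end / tau)):
        births = rng.poisson(k_prod * tau)
        deaths = rng.poisson(k_deg * x * tau)
        x = max(x + births - deaths, 0)     # clamp to keep the count physical
    return x

rng = np.random.default_rng(0)
exact = np.mean([gillespie_birth_death(10.0, 1.0, 0, 10.0, rng) for _ in range(500)])
leap = np.mean([tau_leap_birth_death(10.0, 1.0, 0, 10.0, 0.05, rng) for _ in range(500)])
# Both estimates should sit near the stationary mean k_prod / k_deg = 10.
```

With a small tau the two estimators agree, but the tau-leap path uses a fixed number of Poisson draws per path, which is the cost saving the multi-level method exploits and corrects for.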
Multi-level methods and approximating distribution functions
Energy Technology Data Exchange (ETDEWEB)
Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E. [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)
2016-07-15
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
The Remaining Service Time Upon Reaching a High Level in M/G/1 Queues
de Boer, Pieter-Tjerk; Nicola, V.F.; van Ommeren, Jan C.W.
The distribution of the remaining service time upon reaching some target level in an M/G/1 queue is of theoretical as well as practical interest. In general, this distribution depends on the initial level as well as on the target level, say, B. Two initial levels are of particular interest, namely,
Farmer's lung in Ireland (1983-1996) remains at a constant level.
LENUS (Irish Health Repository)
McGrath, D S
2012-02-03
A prospective study was undertaken by the Departments of Respiratory Medicine and Medical Microbiology at Cork University Hospital (a) to investigate the epidemiology of Farmer's Lung (F.L.) in the Republic of Ireland (pop. 3.5 million), with special reference to the South Western Region of the country (pop. 536,000), and (b) to assess any relationship between the prevalence/incidence of F.L. and climatic factors in South West Ireland between 1983 and 1996. F.L. incidence remained constant throughout the 13 yrs studied, both on a national and a regional basis. A significant relationship was also found between total rainfall each summer and F.L. incidence and prevalence over the following yr (p < 0.005) in South-West Ireland. The persistence of F.L. in Ireland at a constant level suggests that farmers' working environment and farm practices need to be improved.
Muntaner, Carles; Rai, Nanky; Ng, Edwin; Chung, Haejoo
2012-01-01
Richard Wilkinson and Kate Pickett's latest book, The Spirit Level: Why Equality is Best for Everyone, has caught the attention of academics and policymakers and stimulated debate across the left-right political spectrum. Interest in income inequality has remained unabated since the publication of Wilkinson's previous volume, Unhealthy Societies: The Afflictions of Inequality. While both books detail the negative health effects of income inequality, The Spirit Level expands the scope of its argument to also include social issues. The book, however, deals extensively with the explanation of how income inequality affects individual health. Little attention is given to political and economic explanations on how income inequality is generated in the first place. The volume ends with political solutions that carefully avoid state interventions such as limiting the private sector's role in the production of goods and services (e.g., non-profit sector, employee-ownership schemes). Although well-intentioned, these alternatives are insufficient to significantly reduce the health inequalities generated by contemporary capitalism in wealthy countries, let alone around the world.
International Nuclear Information System (INIS)
Blauvelt, Richard; Small, Ken; Gelles, Christine; McKenney, Dale; Franz, Bill; Loveland, Kaylin; Lauer, Mike
2006-01-01
Faced with closure schedules as a driving force, significant progress has been made during the last 2 years on the disposition of DOE mixed waste streams previously thought to be problematic. Generators, the Department of Energy and commercial vendors have combined to develop unique disposition paths for former orphan streams. Recent successes and remaining issues will be discussed. The session will also provide an opportunity for Federal agencies to share lessons learned on low-level and mixed low-level waste challenges and identify opportunities for future collaboration. This panel discussion was organized by PAC member Dick Blauvelt, Navarro Research and Engineering Inc., who served as co-chair along with Dave Eaton from INL. In addition, George Antonucci, Duratek Barnwell, and Rich Conley, AFSC, were invited members of the audience, prepared to contribute the Barnwell and DOD perspective to the issues as needed. Mr. Small provided information regarding the five-year 20K m³ window of opportunity at the Nevada Test Site for DOE contractors to dispose of mixed waste that cannot be received at the Energy Solutions (Envirocare) site in Utah because of activity levels. He provided a summary of the waste acceptance criteria and the process sites must follow to be certified to ship. When the volume limit or time limit is met, the site will undergo a RCRA closure. Ms. Gelles summarized the status of the orphan issues, commercial options and the impact of the EM reorganization on her program. She also announced that there would be a follow-on meeting in 2006 to the very successful St. Louis meeting of last year. It will probably take place in Chicago in July. Details to be announced. Mr. McKenney discussed progress made at the Hanford Reservation regarding disposal of their mixed waste inventory. The news is good for the Hanford site but not good for the rest of the DOE complex, since out-of-state shipment of both low-level and low-level mixed waste will continue to be
Approximate Bisimulation for High-Level Datapaths in Intelligent Transportation Systems
Directory of Open Access Journals (Sweden)
Hui Deng
2013-01-01
A relation called approximate bisimulation is proposed to achieve behavior and structure optimization for a type of high-level datapath whose data exchange processes are expressed by nonlinear polynomial systems. The high-level datapaths are divided into small blocks with a partitioning method and then represented by polynomial transition systems. A standardized form based on Ritt-Wu's method is developed to represent the equivalence relation for the high-level datapaths. Furthermore, we establish an approximate bisimulation relation within a controllable error range and express the approximation with an error control function, which is processed with SOSTOOLS. Meanwhile, the error is controlled by tuning the equivalence restrictions. An example of high-level datapaths demonstrates the efficiency of our method.
Directory of Open Access Journals (Sweden)
Shen-yan Chen
2015-01-01
This paper presents an Improved Genetic Algorithm with Two-Level Approximation (IGATA) to minimize truss weight by simultaneously optimizing size, shape, and topology variables. On the basis of a previously presented truss sizing/topology optimization method based on two-level approximation and a genetic algorithm (GA), a new method for adding shape variables is presented, in which the nodal positions correspond to a set of coordinate lists. A uniform optimization model including size/shape/topology variables is established. First, a first-level approximate problem is constructed to transform the original implicit problem into an explicit problem. To solve this explicit problem, which involves size/shape/topology variables, GA is used to optimize individuals that include discrete topology variables and shape variables. When calculating the fitness value of each member of the current generation, a second-level approximation method is used to optimize the continuous size variables. With the introduction of shape variables, the original optimization algorithm was improved in its individual coding strategy as well as its GA execution techniques. Meanwhile, the update strategy for the first-level approximation problem was also improved. The results of numerical examples show that the proposed method is effective in dealing with the three kinds of design variables simultaneously, and the required computational cost for structural analysis is quite small.
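The nested structure described above — an outer GA over discrete topology variables with an inner solve for the continuous sizes — can be sketched on a toy problem. Everything below (the "members", their lengths and efficiencies, and the closed-form inner sizing) is an invented illustration, not the paper's truss formulation:

```python
import random

def inner_sizing(mask, L, e, R, a_min=0.01):
    """Second-level step: for a fixed topology `mask`, size the active
    members to meet capacity R at minimum weight (closed form for this
    toy model: dump extra area into the most weight-efficient member)."""
    active = [i for i, m in enumerate(mask) if m]
    if not active:
        return None, float('inf')            # no members: infeasible
    best = min(active, key=lambda i: L[i] / e[i])
    a = {i: a_min for i in active}
    need = R - sum(e[i] * a_min for i in active)
    a[best] += max(need, 0.0) / e[best]
    weight = sum(L[i] * a[i] for i in active)
    return a, weight

def ga_truss(L, e, R, pop_size=20, gens=30, seed=1):
    """First-level step: GA over binary topology masks; each fitness
    evaluation invokes the inner continuous sizing above."""
    rng = random.Random(seed)
    n = len(L)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    fitness = lambda mask: inner_sizing(mask, L, e, R)[1]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]      # elitism: keep the lighter half
        children = []
        while len(survivors) + len(children) < pop_size:
            p, q = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)        # one-point crossover
            child = p[:cut] + q[cut:]
            if rng.random() < 0.2:           # mutation: flip one topology bit
                child[rng.randrange(n)] ^= 1
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return best, inner_sizing(best, L, e, R)

L_list = [1.0, 2.0, 3.0, 1.5, 2.5, 1.2]     # hypothetical member lengths
e_list = [1.0, 1.5, 0.8, 1.2, 2.0, 0.9]     # capacity per unit area
best_mask, (areas, weight) = ga_truss(L_list, e_list, 5.0)
```

The design choice mirrors the paper's split: the GA never touches the continuous variables directly, so its search space stays small, while every candidate topology is evaluated at its own optimal sizing.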
Evaluation of high-level waste pretreatment processes with an approximate reasoning model
International Nuclear Information System (INIS)
Bott, T.F.; Eisenhawer, S.W.; Agnew, S.F.
1999-01-01
The development of an approximate-reasoning (AR)-based model to analyze pretreatment options for high-level waste is presented. AR methods are used to emulate the processes used by experts in arriving at a judgment. In this paper, the authors first consider two specific issues in applying AR to the analysis of pretreatment options. They examine how to combine quantitative and qualitative evidence to infer the acceptability of a process result using the example of cesium content in low-level waste. They then demonstrate the use of simple physical models to structure expert elicitation and to produce inferences consistent with a problem involving waste particle size effects
Adaptive EMG noise reduction in ECG signals using noise level approximation
Marouf, Mohamed; Saranovac, Lazar
2017-12-01
In this paper, the use of noise level approximation for adaptive electromyogram (EMG) noise reduction in electrocardiogram (ECG) signals is introduced. To achieve adequate adaptiveness, a translation-invariant noise level approximation is employed. The approximation takes the form of a guiding signal extracted as an estimate of signal quality versus EMG noise. The noise reduction framework is based on a bank of low-pass filters, so adaptive noise reduction is achieved by selecting the appropriate filter with respect to the guiding signal, aiming for the best trade-off between the signal distortion caused by filtering and signal readability. For evaluation purposes, both real EMG and artificial noises are used. The tested ECG signals are from the MIT-BIH Arrhythmia Database, while both real and artificial records of EMG noise are added in the evaluation process. First, a comparison with state-of-the-art methods is conducted to verify the performance of the proposed approach in terms of noise cancellation while preserving the QRS complex waves. Additionally, the signal-to-noise ratio improvement after the adaptive noise reduction is computed and presented. Finally, the impact of the adaptive noise reduction method on QRS complex detection was studied: the tested signals are delineated using a state-of-the-art method, and the QRS detection improvement for different SNRs is presented.
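A toy version of such a noise-guided filter bank might look like the following. The window lengths, thresholds, and the difference-based noise guide are illustrative assumptions; the paper's translation-invariant estimator and clinical filter design are not reproduced here:

```python
import numpy as np

def noise_level(x, win=50):
    """Crude noise guide: RMS of the first difference over a sliding
    window, a stand-in for the paper's quality-vs-EMG-noise signal."""
    d = np.abs(np.diff(x, prepend=x[0]))
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(d ** 2, kernel, mode='same'))

def adaptive_lowpass(x, thresholds=(0.05, 0.2), widths=(1, 5, 15)):
    """Filter bank of moving averages; per sample, the estimated noise
    level selects which filter's output is used (wider = more smoothing,
    width 1 = pass-through for clean segments)."""
    level = noise_level(x)
    bank = [np.convolve(x, np.ones(w) / w, mode='same') for w in widths]
    idx = np.searchsorted(thresholds, level)   # 0, 1, or 2 -> filter choice
    return np.choose(idx, bank)

# Demo: a slow sine with broadband noise added only in the second half.
rng = np.random.default_rng(3)
t = np.linspace(0, 4, 2000)
clean = np.sin(2 * np.pi * 0.5 * t)
x = clean.copy()
x[1000:] += rng.normal(0, 0.5, 1000)
y = adaptive_lowpass(x)
```

The point of the selection step is the trade-off the abstract names: clean segments pass through nearly untouched, while noisy segments get the heaviest smoothing the guide justifies.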
Collision strengths from ground levels of Ti XIII using relativistic-Breit-Pauli approximation
International Nuclear Information System (INIS)
Mohan, M.; Hibbert, H.; Burke, P.G.; Keenan, F.
1998-09-01
The R-matrix method is used to calculate collision strengths from the ground state to the first twenty-six fine-structure levels of neon-like titanium, including the relativistic term coupling coefficients in the semi-Breit-Pauli approximation. Configuration-interaction wave functions are used to represent the first fifteen lowest LS-coupled target states in the R-matrix expansion. The results obtained are compared with other calculations. This is the first detailed calculation on this ion in which relativistic, exchange, channel-coupling and short-range correlation effects are taken into account. (author)
The off-resonant aspects of decoherence and a critique of the two-level approximation
International Nuclear Information System (INIS)
Savran, Kerim; Hakioglu, T; Mese, E; Sevincli, Haldun
2006-01-01
Conditions in favour of a realistic multilevelled description of a decohering quantum system are examined. In this regard the first crucial observation is that the thermal effects, contrary to the conventional belief, play a minor role at low temperatures in the decoherence properties. The system-environment coupling and the environmental energy spectrum dominantly affect the decoherence. In particular, zero temperature quantum fluctuations or non-equilibrium sources can be present and influential on the decoherence rates in a wide energy range allowed by the spectrum of the environment. A crucial observation against the validity of the two-level approximation is that the decoherence rates are found to be dominated not by the long time resonant but the short time off-resonant processes. This observation is demonstrated in two stages. Firstly, our zero temperature numerical results reveal that the calculated short time decoherence rates are Gaussian-like (the time dependence of the density matrix is led by the second time derivative at t = 0). Exact analytical results are also permitted in the short time limit, which, consistent with our numerical results, reveal that this specific Gaussian-like behaviour is a property of the non-Markovian correlations in the environment. These Gaussian-like rates have no dependence on any spectral parameter (position and the width of the spectrum) except, in totality, the spectral area itself. The dependence on the spectral area is a power law. Furthermore, the Gaussian-like character at short times is independent of the number of levels (N), but the numerical value of the decoherence rates is a monotonic function of N. In this context, we demonstrate that leakage, as a characteristic multilevel effect, is dominated by the non-resonant processes. The long time behaviour of decoherence is also examined. Since our spectral model allows Markovian environmental correlations at long times, the decoherence rates in this regime become
An approximate reasoning-based method for screening high-level-waste tanks for flammable gas
International Nuclear Information System (INIS)
Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.
2000-01-01
The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at the Hanford site have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. Approximate-reasoning models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. In a pilot study to investigate the utility of AR for flammable gas screening, the effort to implement such a model was found to be acceptable, and computational requirements were found to be reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts
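The linguistic-variable machinery described here can be sketched with triangular fuzzy sets and min/max rule combination. The variable names, breakpoints, and rules below are invented for illustration; they are not the study's actual screening criteria:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def screen_tank(waste_depth_m, gen_rate):
    """Hypothetical screening judgment from two linguistic variables:
    'deep waste' AND 'high gas generation' raises the hazard score."""
    deep = tri(waste_depth_m, 2.0, 6.0, 10.0)
    high_rate = tri(gen_rate, 0.3, 1.0, 1.7)
    shallow = tri(waste_depth_m, -1.0, 0.0, 4.0)
    low_rate = tri(gen_rate, -0.5, 0.0, 0.6)
    # Forward-chaining rules: min for AND, max to combine rule firings.
    hazard_high = min(deep, high_rate)
    hazard_low = max(shallow, low_rate)
    # Defuzzify to a single screening score in [0, 1].
    return hazard_high / (hazard_high + hazard_low + 1e-12)
```

Because membership values are just numbers in [0, 1], a quantitative measurement and a qualitative expert rating ("somewhat deep") can feed the same rule base, which is the consistency property the abstract emphasizes.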
An Approximate Reasoning-Based Method for Screening High-Level-Waste Tanks for Flammable Gas
International Nuclear Information System (INIS)
Eisenhawer, Stephen W.; Bott, Terry F.; Smith, Ronald E.
2000-01-01
The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at the Hanford site have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. Approximate-reasoning models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. In a pilot study to investigate the utility of AR for flammable gas screening, the effort to implement such a model was found to be acceptable, and computational requirements were found to be reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts
An approximate reasoning-based method for screening high-level-waste tanks for flammable gas
Energy Technology Data Exchange (ETDEWEB)
Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.
2000-06-01
The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at the Hanford site have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. Approximate-reasoning models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. In a pilot study to investigate the utility of AR for flammable gas screening, the effort to implement such a model was found to be acceptable, and computational requirements were found to be reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts.
Pairing in the BCS and LN approximations using continuum single particle level density
International Nuclear Information System (INIS)
Id Betan, R.M.; Repetto, C.E.
2017-01-01
Understanding the properties of drip-line nuclei requires taking into account the correlations with the continuum energy spectrum of the system. The purpose of this paper is to show that the continuum single particle level density is a convenient way to incorporate the pairing correlation in the continuum. An isospin mean field and isospin pairing strength are used to find the Bardeen–Cooper–Schrieffer (BCS) and Lipkin–Nogami (LN) approximate solutions of the pairing Hamiltonian. Several physical properties of the whole chain of Tin isotopes, such as the gap parameter, Fermi level, binding energy, and one- and two-neutron separation energies, were calculated and compared with other methods and with experimental data where they exist. It is shown that the use of the continuum single particle level density is an economical way to include explicitly the correlations with the continuum energy spectrum in large-scale mass calculations. It is also shown that the computed properties are in good agreement with experimental data and with more sophisticated treatments of the pairing interaction.
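For a discrete spectrum, the constant-strength BCS gap equation underlying such calculations, Δ = (G/2) Σ_k Δ / √((ε_k − μ)² + Δ²), can be iterated to self-consistency in a few lines. The equally spaced levels below are an invented toy spectrum, not the Tin single-particle levels of the paper, and no continuum level density is included:

```python
import numpy as np

def bcs_gap(levels, G, mu, tol=1e-10):
    """Solve the BCS gap equation by fixed-point iteration over a
    discrete single-particle spectrum with constant pairing strength G."""
    delta = 1.0                      # starting guess for the gap
    for _ in range(10000):
        E = np.sqrt((levels - mu) ** 2 + delta ** 2)   # quasiparticle energies
        new = 0.5 * G * np.sum(delta / E)
        if abs(new - delta) < tol:
            return new
        delta = new
    return delta

# Toy spectrum: ten levels at ±0.5, ±1.5, ..., ±4.5 around the Fermi level.
levels = np.arange(10) - 4.5
gap = bcs_gap(levels, G=1.0, mu=0.0)
```

At the fixed point the gap equation 1 = (G/2) Σ_k 1/E_k is satisfied, which is easy to verify numerically and is the basic self-consistency condition that BCS and LN treatments build on.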
Validity of the lowest-Landau-level approximation for rotating Bose gases
International Nuclear Information System (INIS)
Morris, Alexis G.; Feder, David L.
2006-01-01
The energy spectrum for an ultracold rotating Bose gas in a harmonic trap is calculated exactly for small systems, allowing the atoms to occupy several Landau levels. Two vortexlike states and two strongly correlated states (the Pfaffian and Laughlin) are considered in detail. In particular, their critical rotation frequencies and energy gaps are determined as a function of particle number, interaction strength, and the number of Landau levels occupied (up to three). For the vortexlike states, the lowest-Landau-level (LLL) approximation is justified only if the interaction strength decreases with the number of particles; nevertheless, the constant of proportionality increases rapidly with the angular momentum per particle. For the strongly correlated states, however, the interaction strength can increase with particle number without violating the LLL condition. The results suggest that, in large systems, the Pfaffian and Laughlin states might be stabilized at rotation frequencies below the centrifugal limit for sufficiently large interaction strengths, with energy gaps a significant fraction of the trap energy
Gryb, Sean; Thebault, Karim
2014-01-01
On one popular view, the general covariance of gravity implies that change is relational in a strong sense, such that all it is for a physical degree of freedom to change is for it to vary with regard to a second physical degree of freedom. At a quantum level, this view of change as relative variation leads to a fundamentally timeless formalism for quantum gravity. Here, we will show how one may avoid this acute 'problem of time'. Under our view, duration is still regarded as relative, but te...
An approximate-reasoning-based method for screening high-level waste tanks for flammable gas
International Nuclear Information System (INIS)
Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.
1998-01-01
The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at Hanford have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. AR models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. The authors performed a pilot study to investigate the utility of AR for flammable gas screening. They found that the effort to implement such a model was acceptable and that computational requirements were reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts
A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion
Energy Technology Data Exchange (ETDEWEB)
Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn [Institute of Natural Sciences, Department of Mathematics, and MOE Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240 (China); Lin, Guang, E-mail: lin491@purdue.edu [Department of Mathematics, School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States); Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Yang, Xu, E-mail: xuyang@math.ucsb.edu [Department of Mathematics, University of California, Santa Barbara, CA 93106 (United States)
2015-09-01
In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: first, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima; this is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus renders an extremely high computational demand for the simulation of each sample in MLPSO. We overcome this difficulty in three steps: first, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from entering regions of velocity where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO employing FGA solvers of different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example on the smoothed Marmousi model.
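A bare-bones particle swarm optimizer of the kind MLPSO builds on can be written in a few lines. This sketch omits the FGA forward solver and the multi-level fidelity switching entirely; the inertia and acceleration coefficients are conventional textbook choices, not the paper's:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal PSO: each particle is pulled toward its personal best and
    the swarm's global best, which lets the swarm hop out of shallow
    local minima of a non-convex objective."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval                    # update personal bests
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]             # update global best
    return g, pval.min()

# Non-convex test objective with many local minima (2-D Rastrigin).
rastrigin = lambda z: 10 * len(z) + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z))
g, best = pso(rastrigin, dim=2)
```

In the paper's setting, each `f` evaluation would be a full forward wave simulation, which is exactly why cheaper low-fidelity FGA solvers are used for most particles.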
Energy Levels and B(E2) transition rates in the Hartree-Fock approximation with the Skyrme force
International Nuclear Information System (INIS)
Oliveira, D.R. de; Mizrahi, S.S.
1976-11-01
The Hartree-Fock approximation with the Skyrme force is applied to the A = 4n type of nuclei in the s-d shell. Energy levels and electric quadrupole transition probabilities within the ground states band are calculated from the projected states of good angular momentum. Strong approximations are made but the results concerning the spectra are better than those obtained with more sophisticated density independent two-body interactions. The transition rates are less sensitive to the interaction, as previously verified
Pauls-Worm, K.G.J.; Hendrix, E.M.T.; Haijema, R.; Vorst, van der J.G.A.J.
2014-01-01
We study the practical production planning problem of a food producer facing a non-stationary erratic demand for a perishable product with a fixed life time. In meeting the uncertain demand, the food producer uses a FIFO issuing policy. The food producer aims at meeting a certain service level at
Non-Hermitian wave packet approximation for coupled two-level systems in weak and intense fields
Energy Technology Data Exchange (ETDEWEB)
Puthumpally-Joseph, Raiju; Charron, Eric [Institut des Sciences Moléculaires d’Orsay (ISMO), CNRS, Univ. Paris-Sud, Université Paris-Saclay, F-91405 Orsay (France); Sukharev, Maxim [Science and Mathematics Faculty, College of Letters and Sciences, Arizona State University, Mesa, Arizona 85212 (United States)
2016-04-21
We introduce a non-Hermitian Schrödinger-type approximation of optical Bloch equations for two-level systems. This approximation provides a complete and accurate description of the coherence and decoherence dynamics in both weak and strong laser fields at the cost of losing accuracy in the description of populations. In this approach, it is sufficient to propagate the wave function of the quantum system instead of the density matrix, provided that relaxation and dephasing are taken into account via automatically adjusted time-dependent gain and decay rates. The developed formalism is applied to the problem of scattering and absorption of electromagnetic radiation by a thin layer comprised of interacting two-level emitters.
Energy Technology Data Exchange (ETDEWEB)
Harrington, B J; Shepard, H K [New Hampshire Univ., Durham (USA). Dept. of Physics
1976-03-22
By fully exploiting the mathematical and physical analogy to the Ginzburg-Landau theory of superconductivity, a complete discussion of the ground state behavior of the four-dimensional Abelian Higgs model in the static tree level approximation is presented. It is shown that a sufficiently strong external magnetic field can alter the ground state of the theory by restoring a spontaneously broken symmetry, or by creating a qualitatively different 'vortex' state. The energetically favored ground state is explicitly determined as a function of the external field and the ratio between coupling constants of the theory.
Analytical approximations for the long-term decay behavior of spent fuel and high-level waste
International Nuclear Information System (INIS)
Malbrain, C.M.; Deutch, J.M.; Lester, R.K.
1982-01-01
Simple analytical approximations are presented that describe the radioactivity and radiogenic decay heat behavior of high-level wastes (HLWs) from various nuclear fuel cycles during the first 100,000 years of waste life. The correlations are based on detailed computations of HLW properties carried out with the isotope generation and depletion code ORIGEN 2. The ambiguities encountered in using simple comparisons of the hazards posed by HLWs and naturally occurring mineral deposits to establish the longevity requirements for geologic waste disposal schemes are discussed
Climate and sea level in isotope stage 5: an East Antarctic ice surge at approximately 95,000 BP
International Nuclear Information System (INIS)
Hollin, J.T.
1980-01-01
Six high-resolution records correlated with marine isotope stage 5 suggest that substage 5c was essentially interglacial and was terminated by a catastrophic cooling. Over sixty 230Th dates indicate that the sea level in substage 5c rose to at least -2 m. Amino acid ratios, archaeology, pollen and lithostratigraphy suggest that the sea later jumped to about +16 m. The combination of the cooling and the large jump points to an East Antarctic ice surge at approximately 95 kyr BP. (author)
Kim, Jong In; Kim, Gukbin
2016-10-01
The remaining years of healthy life expectancy (RYH) at age 65 can be calculated as RYH(65) = healthy life expectancy − 65 years. This study confirms the associations between socioeconomic indicators and RYH(65) in 148 countries. The RYH data were obtained from the World Health Organization. Significant positive correlations were found between RYH(65) in men and women and the socioeconomic indicators national income, education level, and improved drinking water. Finally, the predictors of RYH(65) in men and women were used to build a model of RYH from higher socioeconomic indicators (R² = 0.744). Educational attainment, national income level, and improved water quality influenced the RYH at age 65. Therefore, policymaking to improve these country-level socioeconomic factors is expected to have latent effects on RYH in older age. © The Author(s) 2016.
Directory of Open Access Journals (Sweden)
Bin Zhang
2017-06-01
Full Text Available By simulating the sound field of a round piston transducer with the Kirchhoff integral theorem and analyzing the shape and propagation characteristics of ultrasound beams in a metal container wall, this study presents a model for calculating the echo sound pressure using the Kirchhoff paraxial approximation theory. Based on this model, and exploiting the different ultrasonic impedance of gas and liquid media, a method for detecting the liquid level from outside sealed containers is proposed. The proposed method is then evaluated in two groups of experiments. In the first group, three liquid media with different ultrasonic impedance are used as detected objects, and the echo sound pressure is calculated with the proposed model for four different wall thicknesses. The changing characteristics of the echo sound pressure over the entire detection process are analyzed, and the effects of the liquids' different ultrasonic impedance on the echo sound pressure are compared. In the second group, taking water as an example, two transducers with different radii are used to measure the liquid level at four wall thicknesses. Combining these results with the sound field characteristics, the influence of transducer size on the pressure calculation and detection resolution is discussed. Finally, the experimental results indicate that measurement uncertainty is better than ±5 mm, which meets industrial inspection requirements.
Zhang, Bin; Song, Wen-Ai; Wei, Yue-Juan; Zhang, Dong-Song; Liu, Wen-Yi
2017-06-15
Basura, G J; Walker, P D
1999-08-05
Sixty days after bilateral dopamine (DA) depletion (>98%) with 6-hydroxydopamine (6-OHDA) in neonatal rats, serotonin (5-HT) content doubled and 5-HT(2A) receptor mRNA expression rose 54% within the rostral striatum. To determine if striatal 5-HT(2A) receptor mRNA upregulation is dependent on increased 5-HT levels following DA depletion, neonatal rats received dual injections of 6-OHDA and 5,7-dihydroxytryptamine (5,7-DHT) which suppressed 5-HT content by approximately 90%. In these 6-OHDA/5,7-DHT-treated rats, striatal 5-HT(2A) receptor mRNA expression was still elevated (87% above vehicle controls). Comparative analysis of 5-HT(2C) receptor mRNA expression yielded no significant changes in any experimental group. These results demonstrate that upregulated 5-HT(2A) receptor biosynthesis in the DA-depleted rat is not dependent on subsequent 5-HT hyperinnervation. Copyright 1999 Elsevier Science B.V.
International Nuclear Information System (INIS)
Kawashima, S.; Matsumara, A.; Nishida, T.
1979-01-01
The compressible and heat-conductive Navier-Stokes equation obtained as the second approximation of the formal Chapman-Enskog expansion is investigated in its relations to the original nonlinear Boltzmann equation and to the incompressible Navier-Stokes equation. The solutions of the Boltzmann equation and the incompressible Navier-Stokes equation for small initial data are proved to be asymptotically equivalent (mod decay rate t^(-5/4)) as t → +∞ to that of the compressible Navier-Stokes equation for the corresponding initial data. (orig.)
Obara, Vitor Yuzo; Zacas, Carolina Petrus; Carrilho, Claudia Maria Dantas de Maio; Delfino, Vinicius Daher Alvares
2016-01-01
This study aimed to assess whether currently used dosages of vancomycin for treatment of serious gram-positive bacterial infections in intensive care unit patients provided initial therapeutic vancomycin trough levels and to examine possible factors associated with the presence of adequate initial vancomycin trough levels in these patients. A prospective descriptive study with convenience sampling was performed. Nursing note and medical record data were collected from September 2013 to July 2014 for patients who met inclusion criteria. Eighty-three patients were included. Initial vancomycin trough levels were obtained immediately before vancomycin fourth dose. Acute kidney injury was defined as an increase of at least 0.3mg/dL in serum creatinine within 48 hours. Considering vancomycin trough levels recommended for serious gram-positive infection treatment (15 - 20µg/mL), patients were categorized as presenting with low, adequate, and high vancomycin trough levels (35 [42.2%], 18 [21.7%], and 30 [36.1%] patients, respectively). Acute kidney injury patients had significantly greater vancomycin trough levels (p = 0.0055, with significance for a trend, p = 0.0023). Surprisingly, more than 40% of the patients did not reach an effective initial vancomycin trough level. Studies on pharmacokinetic and dosage regimens of vancomycin in intensive care unit patients are necessary to circumvent this high proportion of failures to obtain adequate initial vancomycin trough levels. Vancomycin use without trough serum level monitoring in critically ill patients should be discouraged.
International Nuclear Information System (INIS)
Yu-Min, Liu; Zhong-Yuan, Yu; Xiao-Min, Ren
2009-01-01
Calculations of the electronic structures of a semiconductor quantum dot and a semiconductor quantum ring are presented in this paper. To reduce the computational cost, simplified axially symmetric shapes of the quantum dot and the quantum ring are used in our analysis. The energy-dependent effective mass is taken into account in solving the Schrödinger equations in the single-band effective mass approximation. The calculated results show that the energy-dependent effective mass needs to be considered only for relatively small quantum dots or quantum rings; for large quantum structures, the energy-dependent effective mass and the parabolic effective mass give the same results. The energy states and effective masses of the quantum dot and the quantum ring as functions of the geometric parameters are also discussed in detail. (general)
On the Accuracy of Fluid Approximations to a Class of Inventory-Level-Dependent EOQ and EPQ Models
Directory of Open Access Journals (Sweden)
Alexey Piunovskiy
2011-01-01
Full Text Available Deterministic Economic Order Quantity (EOQ) models have been studied intensively in the literature, where the demand process is described by an ordinary differential equation, and the objective is to obtain an EOQ which minimizes the total cost per unit time. The total cost per unit time consists of a “discrete” part, the setup cost, which is incurred at the time of ordering, and a “continuous” part, the holding cost, which is continuously accumulated over time. Quite formally, such deterministic EOQ models can be viewed as fluid approximations to the corresponding stochastic EOQ models, where the demand process is taken as a stochastic jump process. Suppose now an EOQ is obtained from a deterministic model. The question is how well this quantity works in the corresponding stochastic model. In the present paper we justify a translation of EOQs obtained from deterministic models, under which the resulting order quantities are asymptotically optimal for the stochastic models, by showing that the difference between the performance measures and the optimal values converges to zero with respect to a scaling parameter. Moreover, we provide an estimate for the rate of convergence. The same issue regarding specific Economic Production Quantity (EPQ) models is studied, too.
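For reference, the deterministic EOQ that such a fluid model produces minimizes the cost rate K·D/Q + h·Q/2, giving the classical square-root formula Q* = √(2KD/h). A minimal sketch, with illustrative parameter values not drawn from the paper:

```python
import math

def eoq(demand_rate, setup_cost, holding_cost):
    """Classical EOQ: minimizes cost rate K*D/Q + h*Q/2 at Q* = sqrt(2*K*D/h)."""
    return math.sqrt(2.0 * setup_cost * demand_rate / holding_cost)

def cost_rate(q, demand_rate, setup_cost, holding_cost):
    """Setup cost per unit time plus average holding cost per unit time."""
    return setup_cost * demand_rate / q + holding_cost * q / 2.0

# Illustrative numbers: demand 1200 units/year, setup cost 50, holding cost 3.
q_star = eoq(1200, 50, 3)   # -> 200.0
```

The paper's contribution is the next step: quantifying how far this deterministic Q* is from optimal when demand is actually a stochastic jump process.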
International Nuclear Information System (INIS)
Potter, W.E.
2005-01-01
The exact probability density function for paired counting can be expressed in terms of modified Bessel functions of integral order when the expected blank count is known. Exact decision levels and detection limits can be computed in a straightforward manner. For many applications perturbing half-integer corrections to Gaussian distributions yields satisfactory results for decision levels. When there is concern about the uncertainty for the expected value of the blank count, a way to bound the errors of both types using confidence intervals for the expected blank count is discussed. (author)
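The distribution in question is the Skellam distribution: the difference of two independent Poisson counts has pmf P(k) = e^(−(μ₁+μ₂)) (μ₁/μ₂)^(k/2) I_|k|(2√(μ₁μ₂)), with I_n the modified Bessel function of integral order. A sketch of computing an exact one-sided decision level from this pmf follows; the series truncation, tail cutoff, and α value are illustrative choices, not the author's code:

```python
import math

def bessel_i(n, x, terms=50):
    """Modified Bessel function I_n(x), integer n >= 0, via its power series."""
    return sum((x / 2.0) ** (2 * m + n) / (math.factorial(m) * math.factorial(m + n))
               for m in range(terms))

def skellam_pmf(k, mu1, mu2):
    """P(N1 - N2 = k) for independent Poisson counts with means mu1, mu2."""
    return (math.exp(-(mu1 + mu2)) * (mu1 / mu2) ** (k / 2.0)
            * bessel_i(abs(k), 2.0 * math.sqrt(mu1 * mu2)))

def decision_level(mu_blank, alpha=0.05):
    """Smallest net count L_c with P(sample - blank >= L_c) <= alpha when the
    sample contains no activity (both counts share the blank mean)."""
    k = -int(8 * math.sqrt(2 * mu_blank)) - 1   # negligible mass below this
    cdf = 0.0
    while True:
        cdf += skellam_pmf(k, mu_blank, mu_blank)
        if 1.0 - cdf <= alpha:   # upper tail P(net > k) now below alpha
            return k + 1
        k += 1

L_c = decision_level(10.0)   # expected blank count of 10
```

For an expected blank count of 10, this exact level sits near the Gaussian value 1.645·√(2·10) ≈ 7.4, which is the half-integer-corrected approximation the abstract refers to.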
Directory of Open Access Journals (Sweden)
Matthew S Freiberg
Full Text Available The mechanism underlying the excess risk of non-AIDS diseases among HIV-infected people is unclear. HIV-associated inflammation/hypercoagulability likely plays a role. While antiretroviral therapy (ART) may return this process to pre-HIV levels, this has not been directly demonstrated. We analyzed data/specimens on 249 HIV+ participants from the US Military HIV Natural History Study, a prospective, multicenter observational cohort of >5600 active duty military personnel and beneficiaries living with HIV. We used stored blood specimens to measure D-dimer and Interleukin-6 (IL-6) at three time points: pre-HIV seroconversion, ≥6 months post-HIV seroconversion but prior to ART initiation, and ≥6 months post-ART with documented HIV viral suppression on two successive evaluations. We evaluated the changes in biomarker levels between time points and the association between these biomarker changes and future non-AIDS events. During a median follow-up of 3.7 years, there were 28 incident non-AIDS diseases. At ART initiation, the median CD4 count was 361 cells/mm3, the median duration of documented HIV infection was 392 days, and the median time on ART was 354 days. The adjusted mean percent increase in D-dimer levels from pre-seroconversion to post-ART was 75.1% (95% confidence interval 24.6-148.0, p = 0.002). This increase in D-dimer was associated with a significant 22% increased risk of future non-AIDS events (p = 0.03). Changes in IL-6 levels across time points were small and not associated with future non-AIDS events. In conclusion, ART initiation and HIV viral suppression do not eliminate the HIV-associated elevation in D-dimer levels. This residual pathology is associated with an increased risk of future non-AIDS diseases.
Bunker, T W; Koetje, D S; Stephenson, L C; Creelman, R A; Mullet, J E; Grimes, H D
1995-08-01
The response of individual members of the lipoxygenase multigene family in soybeans to sink deprivation was analyzed. RNase protection assays indicated that a novel vegetative lipoxygenase gene, vlxC, and three other vegetative lipoxygenase mRNAs accumulated in mature leaves in response to a variety of sink limitations. These data suggest that several members of the lipoxygenase multigene family are involved in assimilate partitioning. The possible involvement of jasmonic acid as a signaling molecule regulating assimilate partitioning into the vegetative storage proteins and lipoxygenases was directly assessed by determining the endogenous level of jasmonic acid in leaves from plants with their pods removed. There was no rise in the level of endogenous jasmonic acid coincident with the strong increase in both vlxC and vegetative storage protein VspB transcripts in response to sink limitation. Thus, expression of the vegetative lipoxygenases and vegetative storage proteins is not regulated by jasmonic acid in sink-limited leaves.
Sakai, Kiyoshi; Kamijima, Michihiro; Shibata, Eiji; Ohno, Hiroyuki; Nakajima, Tamie
2010-09-01
This study aimed to clarify indoor air pollution levels of volatile organic compounds (VOCs), especially 2-ethyl-1-hexanol (2E1H), in large buildings after the revision of the Act on Maintenance of Sanitation in Buildings in 2002. We measured indoor air VOC concentrations in 57 (97%) of 61 large buildings completed within one year in half of the area of Nagoya, Japan, from 2003 through 2007. Airborne concentrations of 13 carbonyl compounds were determined with diffusion samplers and high-performance liquid chromatography, and those of the other 32 VOCs with diffusion samplers and gas chromatography with a mass spectrometer. Formaldehyde was detected in all indoor air samples, but the concentrations were lower than the indoor air quality standard value set in Japan (100 microg/m3). Geometric mean concentrations of the other major VOCs, namely toluene, xylene, ethylbenzene, styrene, p-dichlorobenzene and acetaldehyde, were also low. 2E1H was found to be one of the predominant VOCs in the indoor air of large buildings. A few rooms in a small number of the buildings surveyed showed high concentrations of 2E1H, while low concentrations were observed in most rooms of those buildings as well as in other buildings. It was estimated that about 310 buildings in Japan had high indoor air pollution levels of 2E1H, with an increase over the 5 years from 2003. Indoor air pollution levels of VOCs in new large buildings are generally good, although a few rooms in a small number of buildings showed high concentrations of 2E1H, a possible causative chemical in sick building symptoms. Therefore, 2E1H needs particular attention as an important indoor air pollutant.
[PALEOPATHOLOGY OF HUMAN REMAINS].
Minozzi, Simona; Fornaciari, Gino
2015-01-01
Many diseases induce alterations in the human skeleton, leaving traces of their presence in ancient remains. Paleopathological examination of human remains not only allows the study of the history and evolution of disease, but also the reconstruction of health conditions in past populations. This paper describes the most interesting diseases observed in skeletal samples from Roman Imperial Age necropolises found in urban and suburban areas of Rome during archaeological excavations of the last decades. The diseases observed were grouped into the following categories: articular diseases, traumas, infections, metabolic or nutritional diseases, congenital diseases and tumours, and some examples are reported for each group. Although extensive epidemiological investigation in ancient skeletal records is impossible, the palaeopathological study made it possible to highlight the spread of numerous illnesses, many of which can be related to the life and health conditions of the Roman population.
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings, as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
Schmidt, Wolfgang M
1980-01-01
"In 1970, at the U. of Colorado, the author delivered a course of lectures on his famous generalization, then just established, relating to Roth's theorem on rational approximations to algebraic numbers. The present volume is an expanded and updated version of the original mimeographed notes on the course. As an introduction to the author's own remarkable achievements relating to the Thue-Siegel-Roth theory, the text can hardly be bettered and the tract can already be regarded as a classic in its field."(Bull.LMS) "Schmidt's work on approximations by algebraic numbers belongs to the deepest and most satisfactory parts of number theory. These notes give the best accessible way to learn the subject. ... this book is highly recommended." (Mededelingen van het Wiskundig Genootschap)
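Roth's theorem concerns how well an algebraic number x can be approximated by rationals: |x − p/q| < 1/q^(2+ε) has only finitely many solutions, while continued-fraction convergents always achieve the 1/q² benchmark. A small illustration with √2 = [1; 2, 2, 2, …] (the code is an illustrative sketch, not from the book):

```python
from fractions import Fraction
import math

def convergents(cf_terms):
    """Convergents p/q of the continued fraction [a0; a1, a2, ...],
    built with the standard recurrence p_n = a_n*p_(n-1) + p_(n-2)."""
    p_prev, q_prev, p, q = 1, 0, cf_terms[0], 1
    out = [Fraction(p, q)]
    for a in cf_terms[1:]:
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        out.append(Fraction(p, q))
    return out

# sqrt(2) = [1; 2, 2, 2, ...]; each convergent satisfies |sqrt(2) - p/q| < 1/q^2
convs = convergents([1] + [2] * 10)   # 1, 3/2, 7/5, 17/12, 41/29, ...
```

Because √2 has bounded partial quotients, it is "badly approximable": the 1/q² rate is essentially the best possible, which is the borderline case Roth's theorem describes.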
Sparse approximation with bases
2015-01-01
This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications. The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...
Hill, Paul T.; Guin, Kacey; Celio, Mary Beth
2003-01-01
Argues that "A Nation at Risk" failed to address adequately problems of urban education, and thus the achievement gap between minority and white students still exists. Describes several problems that still plague low-performing urban schools, such as bureaucratic aversion to change, high levels of poverty, and low teacher quality and…
Rachmawati, Vimala; Khusnul Arif, Didik; Adzkiya, Dieky
2018-03-01
Systems in the real world often have large order, so their mathematical models have many state variables, which increases computation time. In addition, generally not all variables can be measured directly, so estimation is needed to determine the quantities that cannot be observed. In this paper, we discuss model reduction and state estimation in a river system for measuring the water level. Model reduction approximates a system by one of lower order that, without significant error, has dynamic behaviour similar to the original system. The Singular Perturbation Approximation method is a model reduction method in which all state variables of the equilibrium system are partitioned into fast and slow modes. The Kalman filter algorithm is then used to estimate the state variables of stochastic dynamic systems, where estimates are computed by predicting the state variables from the system dynamics and correcting them with measurement data. Kalman filters are applied to both the original system and the reduced system. Finally, we compare the state estimates and computation times of the original and reduced systems.
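The predict-correct cycle of the Kalman filter can be seen in its simplest scalar form: a random-walk water level observed with noise, where each step first propagates the state and its uncertainty through the model, then blends in the measurement via the Kalman gain. All noise parameters below are illustrative assumptions, not values from the paper:

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed with noise.
    q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: random-walk model keeps the state, uncertainty grows by q.
        p += q
        # Update: Kalman gain weighs prediction against measurement.
        k = p / (p + r)
        x += k * (z - x)
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

# Simulated level of 2.0 m observed with 0.5 m measurement noise.
random.seed(1)
zs = [2.0 + random.gauss(0.0, 0.5) for _ in range(200)]
est = kalman_1d(zs)
```

The same recursion, in matrix form, applies unchanged to the reduced-order river model: only the dimensions of the state, covariance, and gain shrink, which is where the computational saving comes from.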
Diophantine approximation and badly approximable sets
DEFF Research Database (Denmark)
Kristensen, S.; Thorn, R.; Velani, S.
2006-01-01
The classical set Bad of 'badly approximable' numbers in the theory of Diophantine approximation falls within our framework, as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...
Barre, D E; Mizier-Barre, K A; Griscti, O; Hafez, K
2016-10-01
Elevated total serum free fatty acid (FFA) concentrations have been suggested, controversially, to enhance insulin resistance and decrease percent remaining β-cell function. However, concentrations of individual serum FFAs have never been published in terms of their relationship (correlation) to homeostatic model assessment-insulin resistance (HOMA-IR) and percent remaining β-cell function (HOMA-%β) in type 2 diabetics (T2Ds). Alpha-linolenic acid consumption is negatively correlated with insulin resistance, which in turn is negatively correlated with remaining β-cell function. The primary objective was to test the hypothesis that there would be different relationships (correlations) between serum individual free FFA mol % levels and HOMA-IR and/or HOMA-%β in T2Ds. The secondary objective was to test the hypothesis that flaxseed oil, previously shown to be ineffective in the glycemic control of T2Ds, may alter these correlations in a statistically significant manner, as well as HOMA-IR and/or HOMA-%β. Patients were recruited via a newspaper advertisement, and two physicians were employed. All patients came for a first visit and returned three months later for a second visit. At the second visit, the subjects were randomly assigned (double blind) to flaxseed or safflower oil treatment for three months, until the third visit. Statistically significant correlations, or trends toward significance, among some serum individual free FFA mol % levels and HOMA-IR and HOMA-%β were found both pre- and post-supplementation with flaxseed and safflower oil. However, flaxseed oil had no impact on HOMA-IR or HOMA-%β, despite statistically significant alterations in correlations compared to baseline HOMA-IR. The data indicate that high doses of flaxseed oil have no statistically significant effect on HOMA-IR or HOMA-%β in T2Ds, probably due to the additive effects of negative and positive correlations.
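For context, HOMA-IR and HOMA-%β are computed from fasting glucose and insulin with the standard Matthews formulas in conventional units; a minimal sketch (the example fasting values are illustrative):

```python
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """HOMA insulin resistance: fasting glucose (mg/dL) x insulin (uU/mL) / 405."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

def homa_percent_beta(glucose_mg_dl, insulin_uU_ml):
    """HOMA percent beta-cell function: 360 x insulin / (glucose - 63).
    Only defined for fasting glucose above 63 mg/dL."""
    return 360.0 * insulin_uU_ml / (glucose_mg_dl - 63.0)

# Illustrative fasting values: glucose 126 mg/dL, insulin 10 uU/mL.
ir = homa_ir(126.0, 10.0)              # ~3.11, suggesting insulin resistance
beta = homa_percent_beta(126.0, 10.0)  # ~57.1 percent beta-cell function
```

These index values, not direct clamp measurements, are what the reported correlations with individual FFA mol % levels are computed against.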
Optimization and approximation
Pedregal, Pablo
2017-01-01
This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.
On badly approximable complex numbers
DEFF Research Database (Denmark)
Esdahl-Schou, Rune; Kristensen, S.
We show that the set of complex numbers which are badly approximable by ratios of elements of , where has maximal Hausdorff dimension. In addition, the intersection of these sets is shown to have maximal dimension. The results remain true when the sets in question are intersected with a suitably...
Ultrafast Approximation for Phylogenetic Bootstrap
Bui Quang Minh; Nguyen, Thi; von Haeseler, Arndt
Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and
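The nonparametric bootstrap being approximated here works by resampling the data with replacement and recomputing the statistic on each replicate; in phylogenetics the "data" are alignment columns and the "statistic" is a whole tree search, which is why it is so expensive. A minimal percentile-interval sketch of the generic procedure (data values and replicate count are illustrative):

```python
import random
import statistics

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for stat(data)."""
    rng = random.Random(seed)
    # Each replicate: resample with replacement, recompute the statistic.
    reps = sorted(stat(rng.choices(data, k=len(data))) for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

data = [4.1, 5.0, 3.8, 6.2, 5.5, 4.9, 5.1, 4.4, 5.8, 5.2]
lo, hi = bootstrap_ci(data, statistics.mean)
```

Methods like RBS speed this up not by changing the resampling idea but by reusing work across the expensive per-replicate tree searches.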
International Nuclear Information System (INIS)
Ginsburg, C.A.
1980-01-01
In many problems, a desired property A of a function f(x) is determined by the behaviour f(x) ≈ g(x,A) as x → x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (modulated Padé approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)
Directory of Open Access Journals (Sweden)
Peter Read
2013-08-01
Full Text Available In most cultures the dead and their living relatives are held in a dialogic relationship. The dead have made it clear, while living, what they expect from their descendants. The living, for their part, wish to honour the tombs of their ancestors; at the least, to keep the graves of the recent dead from disrepair. Despite the strictures, the living can fail their responsibilities, for example, by migration to foreign countries. The peripatetic Chinese are one of the few cultures able to overcome the dilemma of the wanderer or the exile. With the help of a priest, an Australian Chinese migrant may summon the soul of an ancestor from an Asian grave to a Melbourne temple, where the spirit, though removed from its earthly vessel, will rest and remain at peace. Amongst cultures in which such practices are not culturally appropriate, to fail to honour the family dead can be exquisitely painful. Violence is the cause of most failure.
Red Assembly: the work remains
Directory of Open Access Journals (Sweden)
Leslie Witz
installed. What to do at this limit, at the transgressive encounter between saying yes and no to history, remains the challenge. It is the very challenge of what insistently remains.
Approximate symmetries of Hamiltonians
Chubb, Christopher T.; Flammia, Steven T.
2017-08-01
We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.
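The basic object here, an operator whose commutator with the Hamiltonian has small norm, can be made concrete with a toy 2×2 example: take H = σ_z and perturb its exact symmetry slightly toward σ_x, so the commutator norm is small but nonzero. The matrices and ε below are purely illustrative, not from the paper:

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def frobenius_norm(A):
    """Frobenius norm, used here as a simple stand-in for the operator norm."""
    return sum(abs(x) ** 2 for row in A for x in row) ** 0.5

eps = 1e-3
H = [[1.0, 0.0], [0.0, -1.0]]    # gapped Hamiltonian: sigma_z
U = [[1.0, eps], [eps, -1.0]]    # sigma_z + eps*sigma_x: an approximate symmetry
HU, UH = matmul(H, U), matmul(U, H)
comm = [[HU[i][j] - UH[i][j] for j in range(2)] for i in range(2)]
norm = frobenius_norm(comm)      # 2*sqrt(2)*eps: small, vanishing as eps -> 0
```

The paper's results control what such small commutator norms imply about restrictions of U to the ground space, uniformly in the ambient dimension.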
Green business will remain green
International Nuclear Information System (INIS)
Marcan, P.
2008-01-01
It all started with two words: climate change. The carbon dioxide trading scheme, the politicians' idea for solving the number one global problem, followed. Four years ago, when the project began, there were no data for project initiation. Quotas for polluters, mainly from energy production and other energy-demanding industries, were distributed based on spreadsheets, maximum output, and the expected future development of economies. Slovak companies have had a chance to profit from these arrangements since 2005. Many of them took advantage of the situation and turned the excess quotas into extraordinary profits, often reaching hundreds of millions of Sk. The fact that the price of free quotas offered for sale dropped essentially to zero in 2006 only proved that the initial distribution was too generous. And the market reacted to the first official measurements of emissions. Slovak companies also contributed to this development. However, when planning the maximum emission volumes for the 2008-2012 period, their expectations were not realistic, despite the fact that actual data were available. A glance at the figures in the proposal of the Ministry of Environment is sufficient to realize that there will be no major change in the future. And so, for many Slovak companies, business with a green future will remain green for the next five years. The state decided to give selected companies even more free space as far as emissions are concerned. The most privileged companies can expect quotas increased by tens of percent. (author)
Approximating distributions from moments
Pawula, R. F.
1987-11-01
A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous pithy examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable, those of (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.
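The moment-matching idea in the abstract can be sketched for the simplest case it mentions: fitting a symmetric beta-type (Pearson II) density on [-1, 1] from the second moment alone. The function name and interface below are illustrative, not taken from the paper.

```python
import math

def fit_symmetric_beta(m2):
    """Fit a symmetric beta density on [-1, 1] to a given second moment m2
    (a minimal moment-matching sketch in the spirit of Pearson-type
    approximations; names are illustrative).

    If X ~ Beta(a, a) on [0, 1], then Y = 2X - 1 has variance 1/(2a + 1),
    so a is fixed by m2 alone; odd moments vanish by symmetry.
    """
    a = 0.5 * (1.0 / m2 - 1.0)
    # Normalizing constant of f(y) = c * (1 - y^2)^(a - 1) on [-1, 1],
    # using B(a, a) = Gamma(a)^2 / Gamma(2a).
    c = math.gamma(2 * a) / (math.gamma(a) ** 2 * 2 ** (2 * a - 1))
    return a, lambda y: c * (1.0 - y * y) ** (a - 1.0)
```

Feeding in m2 = 1/3 recovers the uniform density (a = 1, f = 1/2), consistent with the abstract's remark that the approximation is exact for the beta probability density function.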
Silicon photonics: some remaining challenges
Reed, G. T.; Topley, R.; Khokhar, A. Z.; Thompson, D. J.; Stanković, S.; Reynolds, S.; Chen, X.; Soper, N.; Mitchell, C. J.; Hu, Y.; Shen, L.; Martinez-Jimenez, G.; Healy, N.; Mailis, S.; Peacock, A. C.; Nedeljkovic, M.; Gardes, F. Y.; Soler Penades, J.; Alonso-Ramos, C.; Ortega-Monux, A.; Wanguemert-Perez, G.; Molina-Fernandez, I.; Cheben, P.; Mashanovich, G. Z.
2016-03-01
This paper discusses some of the remaining challenges for silicon photonics, and how we at Southampton University have approached some of them. Despite phenomenal advances in the field of Silicon Photonics, there are a number of areas that still require development. For short to medium reach applications, there is a need to improve the power consumption of photonic circuits such that inter-chip, and perhaps intra-chip applications are viable. This means that yet smaller devices are required as well as thermally stable devices, and multiple wavelength channels. In turn this demands smaller, more efficient modulators, athermal circuits, and improved wavelength division multiplexers. The debate continues as to whether on-chip lasers are necessary for all applications, but an efficient low cost laser would benefit many applications. Multi-layer photonics offers the possibility of increasing the complexity and effectiveness of a given area of chip real estate, but it is a demanding challenge. Low cost packaging (in particular, passive alignment of fibre to waveguide), and effective wafer scale testing strategies, are also essential for mass market applications. Whilst solutions to these challenges would enhance most applications, a derivative technology is emerging, that of Mid Infra-Red (MIR) silicon photonics. This field will build on existing developments, but will require key enhancements to facilitate functionality at longer wavelengths. In common with mainstream silicon photonics, significant developments have been made, but there is still much left to do. Here we summarise some of our recent work towards wafer scale testing, passive alignment, multiplexing, and MIR silicon photonics technology.
Approximations to camera sensor noise
Jin, Xiaodan; Hirakawa, Keigo
2013-02-01
Noise is present in all image sensor data. The Poisson distribution is commonly used to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. Questions remain, however, about how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to the real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
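The gap between the two noise models in the abstract can be made concrete by comparing the exact Poisson photon-count law with a Gaussian of matched, signal-dependent variance. This is a generic illustration of why the approximation is worst at low photon counts, not the paper's own experimental methodology.

```python
import math

def poisson_pmf(k, lam):
    """Exact Poisson probability of observing k photon events at mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def gaussian_pdf(x, mean, var):
    """Gaussian density; with var = mean this is the SD-AWGN surrogate."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def max_model_gap(lam):
    """Largest pointwise gap between the Poisson model and the SD-AWGN
    surrogate over the plausible count range."""
    kmax = int(lam + 10 * math.sqrt(lam)) + 1
    return max(abs(poisson_pmf(k, lam) - gaussian_pdf(k, lam, lam))
               for k in range(kmax + 1))
```

The gap shrinks as the signal level grows, so the SD-AWGN model misbehaves mainly in the shadows, where the skew of shot noise matters.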
CONTRIBUTIONS TO RATIONAL APPROXIMATION,
Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar's ... linear theorem, which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev ... Furthermore, a Weierstrass-type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)
Approximation techniques for engineers
Komzsik, Louis
2006-01-01
Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data or continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.
Expectation Consistent Approximate Inference
DEFF Research Database (Denmark)
Opper, Manfred; Winther, Ole
2005-01-01
We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...
Ordered cones and approximation
Keimel, Klaus
1992-01-01
This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those who want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.
Nuclear Hartree-Fock approximation testing and other related approximations
International Nuclear Information System (INIS)
Cohenca, J.M.
1970-01-01
Hartree-Fock and Tamm-Dancoff approximations are tested for angular momentum of even-even nuclei. Wave functions, energy levels and momenta are comparatively evaluated. Quadrupole interactions are studied following the Elliott model. Results are applied to Ne-20.
Approximate and renormgroup symmetries
Energy Technology Data Exchange (ETDEWEB)
Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling
2009-07-01
''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)
Approximations of Fuzzy Systems
Directory of Open Access Journals (Sweden)
Vinai K. Singh
2013-03-01
Full Text Available A fuzzy system can uniformly approximate any real continuous function on a compact domain to any degree of accuracy. Such results can be viewed as an existence result for optimal fuzzy systems. Li-Xin Wang discussed a similar problem using Gaussian membership functions and the Stone-Weierstrass Theorem. He established that fuzzy systems with product inference, centroid defuzzification and Gaussian membership functions are capable of approximating any real continuous function on a compact set to arbitrary accuracy. In this paper we study a similar approximation problem using exponential membership functions.
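The Wang-style construction referenced above can be sketched directly: one rule per center, product inference, Gaussian membership functions and centroid defuzzification. All names below are ours, and this is a minimal illustration of the universal-approximation claim, not the paper's exponential-membership variant.

```python
import math

def fuzzy_approximator(target, centers, width):
    """Wang-style fuzzy system (a sketch): each center c carries one rule
    'IF x is about c THEN y = target(c)', with Gaussian membership and
    centroid defuzzification."""
    consequents = [target(c) for c in centers]

    def f(x):
        weights = [math.exp(-((x - c) / width) ** 2) for c in centers]
        return sum(w * y for w, y in zip(weights, consequents)) / sum(weights)

    return f
```

Denser rule centers with correspondingly narrow memberships drive the sup-norm error down, which is the content of the approximation theorem the abstract cites.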
Potvin, Guy
2015-10-01
We examine how the Rytov approximation describing log-amplitude and phase fluctuations of a wave propagating through weak uniform turbulence can be generalized to the case of turbulence with a large-scale nonuniform component. We show how the large-scale refractive index field creates Fermat rays using the path integral formulation for paraxial propagation. We then show how the second-order derivatives of the Fermat ray action affect the Rytov approximation, and we discuss how a numerical algorithm would model the general Rytov approximation.
Geometric approximation algorithms
Har-Peled, Sariel
2011-01-01
Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.
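One of the flavors of geometric approximation the book surveys, approximate nearest-neighbor search, can be sketched with the simplest tool in that toolbox: a uniform grid. The code below is a hedged illustration of the idea (hash points into cells, probe only nearby cells), not an algorithm from the book itself.

```python
import math

def build_grid(points, cell):
    """Hash 2-D points into square cells of side `cell`, a standard building
    block of geometric approximation algorithms."""
    grid = {}
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell))
        grid.setdefault(key, []).append(p)
    return grid

def near_neighbor(grid, cell, q):
    """Return the closest stored point found in the 3x3 block of cells
    around q; this is exact whenever the true nearest neighbor lies within
    one cell width, and returns None if that neighborhood is empty."""
    cx, cy = int(q[0] // cell), int(q[1] // cell)
    candidates = [p for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  for p in grid.get((cx + dx, cy + dy), [])]
    return min(candidates, key=lambda p: math.dist(p, q), default=None)
```

Trading the guarantee "always the exact nearest point" for "a near point, found by probing O(1) cells" is exactly the kind of simplicity-for-exactness trade the abstract describes.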
International Nuclear Information System (INIS)
Knobloch, A.F.
1980-01-01
A simplified cost approximation for INTOR parameter sets in a narrow parameter range is shown. Plausible constraints permit the evaluation of the consequences of parameter variations on overall cost. (orig.) [de
Gautschi, Walter; Rassias, Themistocles M
2011-01-01
Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanovia, represent the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallel approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance, in terms of clustering precision, than related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
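The subspace-via-sampling idea can be illustrated with a deliberately simplified stand-in: run winner-take-all competitive learning not on the full kernel matrix but in a small empirical kernel map built from a handful of sampled landmarks. This is a sketch in the spirit of AKCL, not the authors' exact algorithm; all names and parameters are ours.

```python
import math

def kernel_competitive_learning(data, landmarks, gamma=4.0, lr=0.2, epochs=20):
    """Two-prototype winner-take-all competitive learning in the empirical
    kernel map phi(x) = (kappa(x, l_1), ..., kappa(x, l_m)) over sampled
    landmarks (simplified stand-in for sampling-based approximate KCL)."""
    def phi(x):
        return [math.exp(-gamma * (x - l) ** 2) for l in landmarks]

    # Deterministic initialization from the two "ends" of the data.
    protos = [phi(data[0]), phi(data[-1])]

    def nearest(feat):
        return min(range(2), key=lambda j: sum((a - b) ** 2
                                               for a, b in zip(feat, protos[j])))

    for _ in range(epochs):
        for x in data:
            feat = phi(x)
            j = nearest(feat)  # winner takes all...
            protos[j] = [p + lr * (a - p) for p, a in zip(protos[j], feat)]

    return lambda x: nearest(phi(x))
```

Only len(landmarks) kernel evaluations are needed per point, instead of one per training sample, which is the cost reduction the abstract is after.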
On Covering Approximation Subspaces
Directory of Open Access Journals (Sweden)
Xun Ge
2009-06-01
Full Text Available Let (U';C') be a subspace of a covering approximation space (U;C) and X⊂U'. In this paper, we show that ... and B'(X)⊂B(X)∩U'. Also, ... iff (U;C) has Property Multiplication. Furthermore, some connections between outer (resp. inner) definable subsets in (U;C) and outer (resp. inner) definable subsets in (U';C') are established. These results answer a question on covering approximation subspaces posed by J. Li, and are helpful for obtaining further applications of Pawlak rough set theory in pattern recognition and artificial intelligence.
On Convex Quadratic Approximation
den Hertog, D.; de Klerk, E.; Roos, J.
2000-01-01
In this paper we prove the counterintuitive result that the quadratic least squares approximation of a multivariate convex function in a finite set of points is not necessarily convex, even though it is convex for a univariate convex function. This result has many consequences both for the field of
Prestack wavefield approximations
Alkhalifah, Tariq
2013-01-01
The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations that are free of such singularities and remain highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus, introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.
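The series balancing the abstract describes, a Padé numerator one order above the denominator, can be shown on a generic example. The [2/1] Padé approximant of exp matches the same Taylor data as the cubic polynomial but, being rational, tracks the function better; this is purely an illustration of the device, not the DSR operator itself.

```python
import math

def taylor_exp_3(x):
    """Third-order Taylor polynomial of exp around 0."""
    return 1 + x + x ** 2 / 2 + x ** 3 / 6

def pade_exp_2_1(x):
    """[2/1] Pade approximant of exp: numerator one order above the
    denominator, mirroring the series balancing described in the abstract
    (generic illustration only, not the DSR formula)."""
    return (1 + 2 * x / 3 + x ** 2 / 6) / (1 - x / 3)
```

Both expressions agree with exp(x) through the x^3 term, but the rational form keeps a pole that mimics the function's growth, so its error away from the expansion point is smaller.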
DEFF Research Database (Denmark)
Madsen, Rasmus Elsborg
2005-01-01
The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling for word burstiness in documents, is here investigated. A number of conceptual explanations that account for these recent results, are provided. An exponential family approximation of the DCM...
Approximation by Cylinder Surfaces
DEFF Research Database (Denmark)
Randrup, Thomas
1997-01-01
We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...
The Human Remains from HMS Pandora
Directory of Open Access Journals (Sweden)
D.P. Steptoe
2002-04-01
Full Text Available In 1977 the wreck of HMS Pandora (the ship that was sent to re-capture the Bounty mutineers was discovered off the north coast of Queensland. Since 1983, the Queensland Museum Maritime Archaeology section has carried out systematic excavation of the wreck. During the years 1986 and 1995-1998, more than 200 human bone and bone fragments were recovered. Osteological investigation revealed that this material represented three males. Their ages were estimated at approximately 17 +/-2 years, 22 +/-3 years and 28 +/-4 years, with statures of 168 +/-4cm, 167 +/-4cm, and 166cm +/-3cm respectively. All three individuals were probably Caucasian, although precise determination of ethnicity was not possible. In addition to poor dental hygiene, signs of chronic diseases suggestive of rickets and syphilis were observed. Evidence of spina bifida was seen on one of the skeletons, as were other skeletal anomalies. Various taphonomic processes affecting the remains were also observed and described. Compact bone was observed under the scanning electron microscope and found to be structurally coherent. Profiles of the three skeletons were compared with historical information about the 35 men lost with the ship, but no precise identification could be made. The investigation did not reveal the cause of death. Further research, such as DNA analysis, is being carried out at the time of publication.
An improved saddlepoint approximation.
Gillespie, Colin S; Renshaw, Eric
2007-08-01
Given a set of third- or higher-order moments, not only is the saddlepoint approximation the only realistic 'family-free' technique available for constructing an associated probability distribution, but it is 'optimal' in the sense that it is based on the highly efficient numerical method of steepest descents. However, it suffers from the problem of not always yielding full support, and whilst the neat scaling approach of [S. Wang, General saddlepoint approximations in the bootstrap, Prob. Stat. Lett. 27 (1992) 61] provides a solution to this hurdle, it leads to potentially inaccurate and aberrant results. We therefore propose several new ways of surmounting such difficulties, including: extending the inversion of the cumulant generating function to second order; selecting an appropriate probability structure for higher-order cumulants (the standard moment closure procedure takes them to be zero); and making subtle changes to the target cumulants and then optimising via the simplex algorithm.
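The baseline the paper improves on is the classical first-order saddlepoint density, which can be sketched in a few lines: solve the saddlepoint equation K'(s) = x and plug into the textbook formula. Argument names are ours, and the Newton solve is a plain illustration rather than the authors' procedure.

```python
import math

def saddlepoint_pdf(K, dK, d2K, x, s0=0.0, iters=50):
    """First-order saddlepoint density from a cumulant generating function K:
    solve K'(s_hat) = x by Newton's method, then
    f(x) ~= exp(K(s_hat) - s_hat * x) / sqrt(2 * pi * K''(s_hat))."""
    s = s0
    for _ in range(iters):
        s -= (dK(s) - x) / d2K(s)  # Newton step toward K'(s) = x
    return math.exp(K(s) - s * x) / math.sqrt(2 * math.pi * d2K(s))
```

For a Gamma(5, 1) variable, with K(s) = -5 log(1 - s), the approximation reproduces the exact density up to the usual Stirling-type relative error of under 2 percent, illustrating the 'optimal' efficiency the abstract refers to.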
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2011-01-01
Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal based on polynomial expansions in terms of the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.
Topology, calculus and approximation
Komornik, Vilmos
2017-01-01
Presenting basic results of topology, calculus of several variables, and approximation theory which are rarely treated in a single volume, this textbook includes several beautiful, but almost forgotten, classical theorems of Descartes, Erdős, Fejér, Stieltjes, and Turán. The exposition style of Topology, Calculus and Approximation follows the Hungarian mathematical tradition of Paul Erdős and others. In the first part, the classical results of Alexandroff, Cantor, Hausdorff, Helly, Peano, Radon, Tietze and Urysohn illustrate the theories of metric, topological and normed spaces. Following this, the general framework of normed spaces and Carathéodory's definition of the derivative are shown to simplify the statement and proof of various theorems in calculus and ordinary differential equations. The third and final part is devoted to interpolation, orthogonal polynomials, numerical integration, asymptotic expansions and the numerical solution of algebraic and differential equations. Students of both pure an...
Approximate Bayesian recursive estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav
2014-01-01
Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf
Approximating Preemptive Stochastic Scheduling
Megow Nicole; Vredeveld Tjark
2009-01-01
We present policies with constant approximation guarantees for preemptive stochastic scheduling. We derive policies with a guaranteed performance ratio of 2 for scheduling jobs with release dates on identical parallel machines subject to minimizing the sum of weighted completion times. Our policies as well as their analysis apply also to the recently introduced more general model of stochastic online scheduling. The performance guarantee we give matches the best result known for the corresponding determinist...
39 (APPROXIMATE ANALYTICAL SOLUTION)
African Journals Online (AJOL)
Rotating machines like motors, turbines, compressors etc. are generally subjected to periodic forces, and the system parameters remain more or less constant. ... parameters change and, consequently, the natural frequencies too, due to changing gyroscopic moments, centrifugal forces, bearing characteristics, ...
Cyclic approximation to stasis
Directory of Open Access Journals (Sweden)
Stewart D. Johnson
2009-06-01
Full Text Available Neighborhoods of points in $\mathbb{R}^n$ where a positive linear combination of $C^1$ vector fields sums to zero contain, generically, cyclic trajectories that switch between the vector fields. Such points are called stasis points, and the approximating switching cycle can be chosen so that the timing of the switches exactly matches the positive linear weighting. In the case of two vector fields, the stasis points form one-dimensional $C^1$ manifolds containing nearby families of two-cycles. The generic case of two flows in $\mathbb{R}^3$ can be diffeomorphed to a standard form with cubic curves as trajectories.
International Nuclear Information System (INIS)
El Sawi, M.
1983-07-01
A simple approach employing properties of solutions of differential equations is adopted to derive an appropriate extension of the WKBJ method. Some of the earlier techniques that are commonly in use are unified, whereby the general approximate solution to a second-order homogeneous linear differential equation is presented in a standard form that is valid for all orders. In comparison to other methods, the present one is shown to be leading in the order of iteration, and thus possibly has the ability of accelerating the convergence of the solution. The method is also extended for the solution of inhomogeneous equations. (author)
The relaxation time approximation
International Nuclear Information System (INIS)
Gairola, R.P.; Indu, B.D.
1991-01-01
A plausible approximation has been made to estimate the relaxation time from a knowledge of the transition probability of phonons from one state (r, q) to another state (r', q') as a result of collision. The relaxation time thus obtained shows a strong dependence on temperature and a weak dependence on the wave vector. In view of this dependence, the relaxation time has been expressed as a Taylor series in temperature in the first Brillouin zone. Consequently, a simple model for estimating the thermal conductivity is suggested. The calculations become much easier than in the Callaway model. (author). 14 refs
Polynomial approximation on polytopes
Totik, Vilmos
2014-01-01
Polynomial approximation on convex polytopes in $\mathbf{R}^d$ is considered in uniform and $L^p$-norms. For an appropriate modulus of smoothness, matching direct and converse estimates are proven. In the $L^p$-case, so-called strong direct and converse results are also verified. The equivalence of the moduli of smoothness with an appropriate K-functional follows as a consequence. The results solve a problem that was left open since the mid 1980s, when some of the present findings were established for special, so-called simple polytopes.
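The direct ("Jackson-type") estimates the abstract refers to are classically made constructive by Bernstein polynomials, whose error is governed by a modulus of smoothness of the target. The univariate sketch below illustrates that mechanism only; the monograph itself treats polytopes in d dimensions.

```python
import math

def bernstein(f, n):
    """Degree-n Bernstein polynomial of f on [0, 1], the classical
    constructive device behind direct approximation estimates."""
    def B(x):
        return sum(f(k / n) * math.comb(n, k) * x ** k * (1 - x) ** (n - k)
                   for k in range(n + 1))
    return B
```

For a function with a kink, such as |x - 1/2|, the uniform error decays like n^(-1/2), so raising the degree visibly shrinks the error even though convergence is slow near the kink.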
Finite elements and approximation
Zienkiewicz, O C
2006-01-01
A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o
Approximate Bayesian computation.
Directory of Open Access Journals (Sweden)
Mikael Sunnåker
Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity in recent years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
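The likelihood-free mechanism described above is easiest to see in the simplest member of the family, rejection ABC: draw a parameter from the prior, simulate data, and keep the draw when the simulated summary lands close to the observed one. The toy model and all names below are illustrative.

```python
import random
import statistics

def abc_rejection(observed, prior_sample, simulate, distance, eps, n_draws, seed=0):
    """Plain rejection ABC: accept a prior draw theta whenever the simulated
    summary statistic falls within eps of the observed one. The likelihood
    is never evaluated, only sampled from."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) <= eps:
            accepted.append(theta)
    return accepted

# Toy run: infer the mean of a unit-variance Gaussian from a sample-mean
# summary, with observed summary 2.0 and a flat prior on [-5, 5].
posterior = abc_rejection(
    observed=2.0,
    prior_sample=lambda rng: rng.uniform(-5.0, 5.0),
    simulate=lambda theta, rng: statistics.fmean(rng.gauss(theta, 1.0)
                                                 for _ in range(50)),
    distance=lambda a, b: abs(a - b),
    eps=0.2,
    n_draws=4000,
)
```

The accepted draws concentrate around the true mean; shrinking eps sharpens the approximate posterior at the cost of a lower acceptance rate, which is the basic trade-off of all ABC methods.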
The random phase approximation
International Nuclear Information System (INIS)
Schuck, P.
1985-01-01
RPA is the adequate theory to describe vibrations of the nucleus of very small amplitude. These vibrations can either be forced by an external electromagnetic field or can be eigenmodes of the nucleus. In a one-dimensional analogue, the potential corresponding to such eigenmodes of very small amplitude should be rather stiff; otherwise the motion risks becoming a large-amplitude one and entering a region where the approximation is not valid. This means that nuclei which are supposedly well described by RPA must have a very stable groundstate configuration (must e.g. be very stiff against deformation). This is usually the case for doubly magic nuclei, for nuclei close to magic ones, and for nuclei in the middle of proton and neutron shells which develop a very stable groundstate deformation; we take the deformation as an example but there are many other possible degrees of freedom, for example compression modes, isovector degrees of freedom, spin degrees of freedom, and many more
The quasilocalized charge approximation
International Nuclear Information System (INIS)
Kalman, G J; Golden, K I; Donko, Z; Hartmann, P
2005-01-01
The quasilocalized charge approximation (QLCA) has been used for some time as a formalism for the calculation of the dielectric response and for determining the collective mode dispersion in strongly coupled Coulomb and Yukawa liquids. The approach is based on a microscopic model in which the charges are quasilocalized on a short-time scale in local potential fluctuations. We review the conceptual basis and theoretical structure of the QLC approach, together with recent results from molecular dynamics simulations that corroborate and quantify the theoretical concepts. We also summarize the major applications of the QLCA to various physical systems, combined with the corresponding results of the molecular dynamics simulations, and point out the general agreement and instances of disagreement between the two
Multilevel Monte Carlo in Approximate Bayesian Computation
Jasra, Ajay
2017-02-13
In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using the multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
Approximate quantum Markov chains
Sutter, David
2018-01-01
This book is an introduction to quantum Markov chains and explains how this concept is connected to the question of how well a lost quantum mechanical system can be recovered from a correlated subsystem. To achieve this goal, we strengthen the data-processing inequality such that it reveals a statement about the reconstruction of lost information. The main difficulty in order to understand the behavior of quantum Markov chains arises from the fact that quantum mechanical operators do not commute in general. As a result we start by explaining two techniques of how to deal with non-commuting matrices: the spectral pinching method and complex interpolation theory. Once the reader is familiar with these techniques a novel inequality is presented that extends the celebrated Golden-Thompson inequality to arbitrarily many matrices. This inequality is the key ingredient in understanding approximate quantum Markov chains and it answers a question from matrix analysis that was open since 1973, i.e., if Lieb's triple ma...
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2012-05-01
Many of the explicit prestack traveltime relations used in practice are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multifocusing approach, based on the double square-root (DSR) equation, and the common reflection stack (CRS) approach. Using the DSR equation, I constructed the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I recast the eikonal in terms of the reflection angle and derived expansion-based solutions of this eikonal in terms of the difference between the source and receiver velocities in a generally inhomogeneous background medium. The zero-order term solution, corresponding to ignoring the lateral velocity variation in estimating the prestack part, is free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. The higher-order terms have limitations for horizontally traveling waves; however, stability constraints can readily be enforced to avoid such singularities. In fact, another expansion over reflection angle can help avoid these singularities by requiring the source and receiver velocities to be different, since expansions in terms of reflection angles result in singularity-free equations. For a homogeneous background medium, as a test, the solutions are reasonably accurate to large reflection and dip angles. A Marmousi example demonstrated the usefulness and versatility of the formulation. © 2012 Society of Exploration Geophysicists.
Approximations for stop-loss reinsurance premiums
Reijnen, Rajko; Albers, Willem/Wim; Kallenberg, W.C.M.
2005-01-01
Various approximations of stop-loss reinsurance premiums are described in literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as to a quantitative criterion. For the aggregate claims two models are
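For a concrete instance of the quantity being approximated, the stop-loss premium E[(S − d)+] can be estimated by simulation and checked against the closed form μ·e^(−d/μ) that holds when the aggregate claim S is exponential. The exponential model, mean, and retention below are illustrative assumptions of this sketch, not one of the paper's claim-size distributions.

```python
import math
import random

def stop_loss_mc(sample, retention):
    """Monte Carlo estimate of the stop-loss premium E[(S - d)+]."""
    return sum(max(s - retention, 0.0) for s in sample) / len(sample)

rng = random.Random(42)
mu, d = 10.0, 15.0                         # mean aggregate claim, retention
claims = [rng.expovariate(1.0 / mu) for _ in range(200_000)]

mc = stop_loss_mc(claims, d)
exact = mu * math.exp(-d / mu)             # closed form for exponential S
print(round(mc, 2), round(exact, 2))
```

Closed forms like this one are exactly what the approximations surveyed in the paper are benchmarked against when they exist; simulation fills in the remaining cases.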
Self-similar factor approximants
International Nuclear Information System (INIS)
Gluzman, S.; Yukalov, V.I.; Sornette, D.
2003-01-01
The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the previously derived self-similar exponential approximants and self-similar root approximants. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Padé approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which includes a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Padé approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties.
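In the simplest one-factor case, f(x) ≈ (1 + a·x)^b, the accuracy-through-order conditions c1 = a·b and c2 = a²·b·(b−1)/2 determine the control parameters in closed form: a = (c1² − 2c2)/c1 and b = c1²/(c1² − 2c2). The sketch below is my own minimal special case, not the paper's general multi-factor algorithm; it shows the "exact reproduction" property on f(x) = (1+x)^(1/2).

```python
def one_factor_approximant(c1, c2):
    """Match f(x) ~ (1 + a*x)**b to the expansion 1 + c1*x + c2*x**2 + ...
    using the accuracy-through-order conditions c1 = a*b, c2 = a**2*b*(b-1)/2."""
    a = (c1**2 - 2 * c2) / c1
    b = c1**2 / (c1**2 - 2 * c2)
    return a, b

# Taylor coefficients of (1 + x)**0.5 are c1 = 1/2, c2 = -1/8.
a, b = one_factor_approximant(0.5, -0.125)
print(a, b)                       # a = 1.0, b = 0.5: the factor form is exact here

approx = (1 + a * 2.0) ** b       # evaluate the approximant at x = 2
exact = (1 + 2.0) ** 0.5
print(round(approx, 6), round(exact, 6))
```

Because (1+x)^(1/2) is itself of factor form, two series coefficients suffice to recover it exactly; for functions outside this class, more factors are multiplied in and matched order by order.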
Multilevel weighted least squares polynomial approximation
Haji-Ali, Abdul-Lateef; Nobile, Fabio; Tempone, Raul; Wolfers, Sören
2017-01-01
, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose
Strong-coupling approximations
International Nuclear Information System (INIS)
Abbott, R.B.
1984-03-01
Standard path-integral techniques such as instanton calculations give good answers for weak-coupling problems, but become unreliable for strong coupling. Here we consider a method of replacing the original potential by a suitably chosen harmonic oscillator potential. Physically this is motivated by the fact that potential barriers below the level of the ground-state energy of a quantum-mechanical system have little effect. Numerically, results are good, both for quantum-mechanical problems and for massive phi^4 field theory in 1 + 1 dimensions. 9 references, 6 figures
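The core idea of replacing the true potential by a chosen harmonic oscillator can be illustrated with a textbook Gaussian variational bound for H = p²/2 + g·x⁴ (my toy quantum-mechanical example, not the paper's field-theory calculation). In the oscillator ground state of frequency ω (with ħ = m = 1), ⟨p²/2⟩ = ω/4 and ⟨x⁴⟩ = 3/(4ω²), so ⟨H⟩ = ω/4 + 3g/(4ω²), minimized at ω = (6g)^(1/3).

```python
# Variational "replace the potential by a harmonic oscillator" sketch
# for H = p^2/2 + g*x^4 (hbar = m = 1).
def energy(w, g):
    """Expectation of H in the oscillator ground state of frequency w."""
    return w / 4.0 + 3.0 * g / (4.0 * w * w)

def best_frequency(g):
    """Stationarity condition dE/dw = 1/4 - 3g/(2 w^3) = 0."""
    return (6.0 * g) ** (1.0 / 3.0)

g = 1.0
w = best_frequency(g)
e_var = energy(w, g)
print(round(w, 4), round(e_var, 4))
# The numerically exact ground-state energy for g = 1 is about 0.668,
# so the one-parameter harmonic replacement lands within a few percent.
```

At the optimal frequency the bound simplifies to E = (3/8)·(6g)^(1/3), which makes the strong-coupling scaling E ∝ g^(1/3) explicit.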
International Conference Approximation Theory XV
Schumaker, Larry
2017-01-01
These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...
Finite approximations in fluid mechanics
International Nuclear Information System (INIS)
Hirschel, E.H.
1986-01-01
This book contains twenty papers on work which was conducted between 1983 and 1985 in the Priority Research Program ''Finite Approximations in Fluid Mechanics'' of the German Research Society (Deutsche Forschungsgemeinschaft). Scientists from numerical mathematics, fluid mechanics, and aerodynamics present their research on boundary-element methods, factorization methods, higher-order panel methods, multigrid methods for elliptic and parabolic problems, two-step schemes for the Euler equations, etc. Applications are made to channel flows, gas dynamical problems, large eddy simulation of turbulence, non-Newtonian flow, turbomachine flow, zonal solutions for viscous flow problems, etc. The contents include: multigrid methods for problems from fluid dynamics; development of a 2D transonic potential flow solver; a boundary element spectral method for nonstationary viscous flows in 3 dimensions; Navier-Stokes computations of two-dimensional laminar flows in a channel with a backward facing step; calculations and experimental investigations of the laminar unsteady flow in a pipe expansion; calculation of the flow-field caused by shock wave and deflagration interaction; a multi-level discretization and solution method for potential flow problems in three dimensions; solutions of the conservation equations with the approximate factorization method; inviscid and viscous flow through rotating meridional contours; and zonal solutions for viscous flow problems.
Hierarchical low-rank approximation for high dimensional approximation
Nouy, Anthony
2016-01-01
Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.
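The simplest rank-structured approximation is the two-variable case: sampling a function on a grid and truncating the SVD of the resulting matrix gives the best rank-r approximation in the Frobenius norm. The numpy sketch below is my illustration of this base case, not code from the talk; hierarchical tensor formats apply the same idea recursively to groups of variables.

```python
import numpy as np

# Sample f(x, y) = 1 / (1 + x + y) on a grid and compress the resulting
# matrix with a truncated SVD -- the two-variable special case of the
# low-rank formats discussed above.
n = 100
x = np.linspace(0.0, 1.0, n)
F = 1.0 / (1.0 + x[:, None] + x[None, :])

U, s, Vt = np.linalg.svd(F)
rank = 5
F_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-5 approximation

rel_err = np.linalg.norm(F - F_r) / np.linalg.norm(F)
print(rank, rel_err)   # smooth kernels admit very accurate low-rank approximations
```

The rapid singular-value decay seen here is what makes high-dimensional approximation tractable: storage drops from n² to 2·n·r values with a tiny loss of accuracy.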
Forms of Approximate Radiation Transport
Brunner, G
2002-01-01
Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.
Approximation by planar elastic curves
DEFF Research Database (Denmark)
Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge
2016-01-01
We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.
Exact constants in approximation theory
Korneichuk, N
1991-01-01
This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are base
International Conference Approximation Theory XIV
Schumaker, Larry
2014-01-01
This volume developed from papers presented at the international conference Approximation Theory XIV, held April 7–10, 2013 in San Antonio, Texas. The proceedings contains surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.
Fish remains and humankind: part two
Directory of Open Access Journals (Sweden)
Andrew K G Jones
1998-07-01
Full Text Available The significance of aquatic resources to past human groups is not adequately reflected in the published literature - a deficiency which is gradually being acknowledged by the archaeological community world-wide. The publication of the following three papers goes some way to redress this problem. Originally presented at an International Council of Archaeozoology (ICAZ) Fish Remains Working Group meeting in York, U.K. in 1987, these papers offer clear evidence of the range of interest in ancient fish remains across the world. Further papers from the York meeting were published in Internet Archaeology 3 in 1997.
Why Agricultural Educators Remain in the Classroom
Crutchfield, Nina; Ritz, Rudy; Burris, Scott
2013-01-01
The purpose of this study was to identify and describe factors that are related to agricultural educator career retention and to explore the relationships between work engagement, work-life balance, occupational commitment, and personal and career factors as related to the decision to remain in the teaching profession. The target population for…
Juveniles' Motivations for Remaining in Prostitution
Hwang, Shu-Ling; Bedford, Olwen
2004-01-01
Qualitative data from in-depth interviews were collected in 1990-1991, 1992, and 2000 with 49 prostituted juveniles remanded to two rehabilitation centers in Taiwan. These data are analyzed to explore Taiwanese prostituted juveniles' feelings about themselves and their work, their motivations for remaining in prostitution, and their difficulties…
Kadav Moun PSA (:60) (Human Remains)
Centers for Disease Control (CDC) Podcasts
2010-02-18
This is an important public health announcement about safety precautions for those handling human remains. Language: Haitian Creole. Created: 2/18/2010 by Centers for Disease Control and Prevention (CDC). Date Released: 2/18/2010.
The Annuity Puzzle Remains a Puzzle
Peijnenburg, J.M.J.; Werker, Bas; Nijman, Theo
We examine incomplete annuity menus and background risk as possible drivers of divergence from full annuitization. Contrary to what is often suggested in the literature, we find that full annuitization remains optimal if saving is possible after retirement. This holds irrespective of whether real or
Some results in Diophantine approximation
DEFF Research Database (Denmark)
Pedersen, Steffen Højris
This thesis consists of three papers in Diophantine approximation, a subbranch of number theory. Preceding these papers is an introduction to various aspects of Diophantine approximation and formal Laurent series over Fq, together with a summary of each of the three papers. The introduction presents the basic concepts on which the papers build. Among others, it introduces metric Diophantine approximation, Mahler's approach to algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion of Mahler's problem when considered...
Limitations of shallow nets approximation.
Lin, Shao-Bo
2017-10-01
In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets can be realized for all functions in balls of a reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with the existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.
Explosives remain preferred methods for platform abandonment
International Nuclear Information System (INIS)
Pulsipher, A.; Daniel, W. IV; Kiesler, J.E.; Mackey, V. III
1996-01-01
Economics and safety concerns indicate that methods involving explosives remain the most practical and cost-effective means for abandoning oil and gas structures in the Gulf of Mexico. A decade has passed since 51 dead sea turtles, many endangered Kemp's Ridleys, washed ashore on the Texas coast shortly after explosives helped remove several offshore platforms. Although no relationship between the explosions and the dead turtles was ever established, in response to widespread public concern, the US Minerals Management Service (MMS) and National Marine Fisheries Service (NMFS) implemented regulations limiting the size and timing of explosive charges. Also, more importantly, they required that operators pay for observers to survey waters surrounding platforms scheduled for removal for 48 hr before any detonations. If observers spot sea turtles or marine mammals within the danger zone, the platform abandonment is delayed until the turtles leave or are removed. However, concern about the effects of explosives on marine life remains
Spherical Approximation on Unit Sphere
Directory of Open Access Journals (Sweden)
Eman Samir Bhaya
2018-01-01
Full Text Available In this paper we introduce a Jackson-type theorem for functions in L^p spaces on the sphere and study the best approximation of functions in L^p spaces defined on the unit sphere. Our central problem is to describe the approximation behavior of functions in these spaces by the modulus of smoothness of functions.
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-08-18
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
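In software terms, the voter computes a bitwise 2-of-3 majority over the circuit outputs, which masks any single circuit's deviation. The toy "circuits" below are hypothetical illustrations of the mechanism, not the patented designs.

```python
def majority(a, b, c):
    """Bitwise 2-of-3 majority vote, as the voter circuit would compute it."""
    return (a & b) | (a & c) | (b & c)

# A reference circuit and two deliberately "approximate" variants that each
# deviate from it on a different single input (hypothetical toy circuits).
def ref(x):     return x ^ 0b1010
def approx1(x): return (x ^ 0b1010) | (0b10 if x == 3 else 0)
def approx2(x): return (x ^ 0b1010) ^ (0b100 if x == 7 else 0)

# Because the two deviations never coincide on the same input, the voter's
# majority value matches the reference circuit's output everywhere:
for x in range(16):
    assert majority(ref(x), approx1(x), approx2(x)) == ref(x)
print("voter output matches the reference on all 16 inputs")
```

This is the reliability condition stated in the abstract: individual replicas may disagree with the reference on some inputs, as long as the majority value never does.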
Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.
Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E
2018-06-01
An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
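The state-dependent convex combination described above can be sketched schematically. The quadratic value functions and the Gaussian weight below are hypothetical placeholders for the learned StaF and R-MBRL approximators, chosen only to show how the blend transitions between the two.

```python
import math

# Schematic stand-ins for the two value-function approximators
# (the real ones are learned online; these are hypothetical placeholders).
def v_staf(x):  return 1.2 * x * x      # local (StaF) estimate, used far from origin
def v_rmbrl(x): return 1.0 * x * x      # regional (R-MBRL) estimate, used near origin

def weight(x, radius=1.0):
    """Smooth state-dependent weight: ~0 near the origin, ~1 far away."""
    return 1.0 - math.exp(-(x / radius) ** 2)

def v(x):
    """State-dependent convex combination of the two approximations."""
    lam = weight(x)
    return lam * v_staf(x) + (1.0 - lam) * v_rmbrl(x)

# Near the origin the blend follows the regional model; far away, the local one.
print(round(v(0.1), 4), round(v_rmbrl(0.1), 4))
print(round(v(3.0), 3), round(v_staf(3.0), 3))
```

Since weight(x) stays in [0, 1], the combination is convex for every state, which is what lets the Lyapunov analysis in the paper treat the blended approximation uniformly.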
Decomposition Technique for Remaining Useful Life Prediction
Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)
2014-01-01
The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both current damage state as well as future damage accumulation. Remaining life is computed by subtracting the instance when the extrapolated damage reaches the failure threshold from the instance when the prediction is made.
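The two-map decomposition can be mimicked with a pair of toy linear regressions, one per map, followed by a linear extrapolation of damage to a failure threshold. The synthetic data, threshold, and linear model below are hypothetical illustrations of the scheme, not the patented algorithms.

```python
def fit_linear(xs, ys):
    """Least-squares fit y = a*x + b (a tiny stand-in for the regression step)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Off-line mode: learn the two maps from ground-truth data (synthetic here).
a_f, b_f = fit_linear([0.1, 0.2, 0.3, 0.4], [0.10, 0.21, 0.29, 0.41])  # feature -> damage
a_r, b_r = fit_linear([1.0, 2.0, 3.0], [0.010, 0.019, 0.031])          # load -> damage rate

# On-line mode: estimate current damage from a feature reading, then
# extrapolate at the current operating load until the failure threshold.
damage_now = a_f * 0.25 + b_f            # current feature reading = 0.25
rate_now = a_r * 2.0 + b_r               # current operating load = 2.0
rul = (0.8 - damage_now) / rate_now      # cycles until damage reaches 0.8
print(round(damage_now, 3), round(rul, 1))
```

Keeping the two maps separate is what allows the uncertainty of the damage estimate and of the damage-rate estimate to be tracked and managed independently, as the abstract emphasizes.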
Industry remains stuck in a transitional mode
International Nuclear Information System (INIS)
Garb, F.A.
1991-01-01
The near future for industry remains foggy for several obvious reasons. The shake-up of the Soviet Union and how the pieces will reform remains unclear. How successful efforts are to privatize government oil company operations around the world has yet to be determined. A long sought peace in the Middle East seems to be inching closer, but will this continue? If it does continue, what impact will it have on world energy policy? Will American companies, which are now transferring their attention to foreign E and P, also maintain an interest in domestic activities? Is the U.S. economy really on the upswing? We are told that the worst of the recession is over, but try telling this to thousands of workers in the oil patch who are being released monthly by the big players in domestic operations. This paper reports that 1992 should be a better year than 1991, if measured in opportunity. There are more exploration and acquisition options available, both domestically and internationally, than there have been in years. Probably more opportunities exist than there are players-certainly more than can be funded with current financial resources
The efficiency of Flory approximation
International Nuclear Information System (INIS)
Obukhov, S.P.
1984-01-01
The Flory approximation for the self-avoiding chain problem is compared with a conventional perturbation theory expansion. While in perturbation theory each term is averaged over the unperturbed set of configurations, the Flory approximation is equivalent to perturbation theory with the averaging over the stretched set of configurations. This imposes restrictions on the integration domain in higher order terms, and they can be treated self-consistently. The accuracy δν/ν of the Flory approximation for self-avoiding chain problems is estimated to be 2-5% for 1 < d < 4. (orig.)
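The stated accuracy is easy to check numerically: the Flory estimate for the self-avoiding walk exponent is ν = 3/(d+2), which is exact in d = 1 and d = 2 and within roughly 2% of the accepted d = 3 value. The reference numbers below are standard literature values supplied by me for illustration, not figures from the paper.

```python
# Flory estimate nu = 3/(d + 2) for the self-avoiding chain exponent,
# compared against reference values (d = 3 value from numerical studies).
reference = {1: 1.0, 2: 0.75, 3: 0.5876}

for d in (1, 2, 3):
    flory = 3.0 / (d + 2)
    err = abs(flory - reference[d]) / reference[d]
    print(d, round(flory, 4), f"{100 * err:.1f}%")
```

The d = 3 discrepancy of about 2% sits at the low end of the 2-5% range quoted in the abstract.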
Shotgun microbial profiling of fossil remains
DEFF Research Database (Denmark)
Der Sarkissian, Clio; Ermini, Luca; Jónsson, Hákon
2014-01-01
the specimen of interest, but instead reflect environmental organisms that colonized the specimen after death. Here, we characterize the microbial diversity recovered from seven c. 200- to 13 000-year-old horse bones collected from northern Siberia. We use a robust, taxonomy-based assignment approach to identify the microorganisms present in ancient DNA extracts and quantify their relative abundance. Our results suggest that molecular preservation niches exist within ancient samples that can potentially be used to characterize the environments from which the remains are recovered. In addition, microbial community profiling of the seven specimens revealed site-specific environmental signatures. These microbial communities appear to comprise mainly organisms that colonized the fossils recently. Our approach significantly extends the amount of useful data that can be recovered from ancient specimens using...
Some remaining problems in HCDA analysis
International Nuclear Information System (INIS)
Chang, Y.W.
1981-01-01
The safety assessment and licensing of liquid-metal fast breeder reactors (LMFBRs) requires an analysis of the capability of the reactor primary system to sustain the consequences of a hypothetical core-disruptive accident (HCDA). Although computational methods and computer programs developed for HCDA analyses can predict reasonably well the response of the primary containment system, and follow the phenomena of an HCDA from the start of the excursion to the time of dynamic equilibrium in the system, there remain areas in HCDA analysis that merit further analytical and experimental study. These are the analysis of fluid impact on the reactor cover, three-dimensional analysis, the treatment of perforated plates, material properties under high strain rates and high temperatures, the treatment of multifield flows, and the treatment of prestressed concrete reactor vessels. The purpose of this paper is to discuss the structural mechanics of HCDA analysis in these areas where improvements are needed
Political, energy events will remain interwoven
International Nuclear Information System (INIS)
Jones, D.P.
1991-01-01
This paper reports that it is possible to discuss the significance of political and energy events separately, but, in truth, they are intricately interwoven. Furthermore, there are those who will argue that since the two are inseparable, the future is not predictable; so why bother in the endeavor? It is possible that the central point of the exercise may have been missed: yes, the future is unpredictable! However, the objective of prediction is secondary. The objective of understanding the dynamic forces of change is primary! With this view of recent history, it is perhaps appropriate to pause and think about the future of the petroleum industry. The future as shaped by political, energy, economic, environmental and technological forces will direct our lives and markets during this decade. Most importantly, what will be the direction that successful businesses take to remain competitive in a global environment? These are interesting issues worthy of provocative thoughts and innovative ideas
Nuclear remains an economic and ecologic asset
International Nuclear Information System (INIS)
Le Ngoc, Boris
2015-01-01
The author outlines the several benefits of nuclear energy and the nuclear industry for France. He first notes that 97 per cent of French electricity is decarbonized thanks to nuclear energy (77 per cent) and renewable energies (20 per cent, mainly hydropower), and that renewable energies must be developed in the building and transport sectors to move away from environmentally and financially costly fossil energies. He points out that reactor maintenance and the nuclear fuel cycle industry are fields of technological leadership for the French nuclear industry, which is, after the automotive and aircraft industries, the third-largest industrial sector in France. He indicates that nuclear electricity is to remain the most competitive, and that nuclear energy and renewable energies must not be opposed to each other but considered complementary in the struggle against climate change, i.e. to reduce greenhouse gas emissions and to end the prevalence of fossil energies
Population cycles: generalities, exceptions and remaining mysteries
2018-01-01
Population cycles are one of nature's great mysteries. For almost a hundred years, innumerable studies have probed the causes of cyclic dynamics in snowshoe hares, voles and lemmings, forest Lepidoptera and grouse. Even though cyclic species have very different life histories, similarities in mechanisms related to their dynamics are apparent. In addition to high reproductive rates and density-related mortality from predators, pathogens or parasitoids, other characteristics include transgenerational reduced reproduction and dispersal with increasing-peak densities, and genetic similarity among populations. Experiments to stop cyclic dynamics and comparisons of cyclic and noncyclic populations provide some understanding but both reproduction and mortality must be considered. What determines variation in amplitude and periodicity of population outbreaks remains a mystery. PMID:29563267
Does hypertension remain after kidney transplantation?
Directory of Open Access Journals (Sweden)
Gholamreza Pourmand
2015-05-01
Full Text Available Hypertension is a common complication of kidney transplantation, with a prevalence of 80%. Studies in adults have shown a high prevalence of hypertension (HTN) in the first three months after transplantation, while this rate is reduced to 50-60% at the end of the first year. HTN remains a major risk factor for cardiovascular diseases, lower graft survival rates and poor function of the transplanted kidney in adults and children. In this retrospective study, medical records of 400 kidney transplantation patients of Sina Hospital were evaluated. Patients were followed monthly for the 1st year, every two months in the 2nd year and every three months after that. In this study 244 (61%) patients were male. Mean ± SD age of recipients was 39.3 ± 13.8 years. In most patients (40.8%) the cause of end-stage renal disease (ESRD) was unknown, followed by HTN (26.3%). A total of 166 (41.5%) patients had been hypertensive before transplantation and 234 (58.5%) had normal blood pressure. Among these 234 individuals, 94 (40.2%) developed post-transplantation HTN. On the other hand, among the 166 pre-transplant hypertensive patients, 86 (56.8%) remained hypertensive after transplantation. In total, 180 (45%) patients had post-transplantation HTN and 220 (55%) did not develop HTN. Based on the findings, the incidence of post-transplantation hypertension is high, and kidney transplantation does not lead to remission of hypertension. On the other hand, hypertension is one of the main causes of ESRD. Thus, early screening of hypertension can prevent kidney damage and reduce further problems in renal transplant recipients.
Multilevel Monte Carlo in Approximate Bayesian Computation
Jasra, Ajay; Jo, Seongil; Nott, David; Shoemaker, Christine; Tempone, Raul
2017-01-01
is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
Optical bistability without the rotating wave approximation
Energy Technology Data Exchange (ETDEWEB)
Sharaby, Yasser A., E-mail: Yasser_Sharaby@hotmail.co [Physics Department, Faculty of Applied Sciences, Suez Canal University, Suez (Egypt); Joshi, Amitabh, E-mail: ajoshi@eiu.ed [Department of Physics, Eastern Illinois University, Charleston, IL 61920 (United States); Hassan, Shoukry S., E-mail: Shoukryhassan@hotmail.co [Mathematics Department, College of Science, University of Bahrain, P.O. Box 32038 (Bahrain)
2010-04-26
Optical bistability for two-level atomic system in a ring cavity is investigated outside the rotating wave approximation (RWA) using non-autonomous Maxwell-Bloch equations with Fourier decomposition up to first harmonic. The first harmonic output field component exhibits reversed or closed loop bistability simultaneously with the usual (anti-clockwise) bistability in the fundamental field component.
Optical bistability without the rotating wave approximation
International Nuclear Information System (INIS)
Sharaby, Yasser A.; Joshi, Amitabh; Hassan, Shoukry S.
2010-01-01
Optical bistability for a two-level atomic system in a ring cavity is investigated outside the rotating wave approximation (RWA), using non-autonomous Maxwell-Bloch equations with Fourier decomposition up to the first harmonic. The first harmonic output field component exhibits reversed or closed-loop bistability simultaneously with the usual (anti-clockwise) bistability in the fundamental field component.
Approximate Implicitization Using Linear Algebra
Directory of Open Access Journals (Sweden)
Oliver J. D. Barrowclough
2012-01-01
Full Text Available We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD) systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We present several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.
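The SVD-based idea described above can be illustrated on the simplest possible case: recovering the implicit equation of a parametrically sampled unit circle. This is an illustrative stand-in (monomial basis, toy curve), not the authors' CAGD implementation:

```python
import numpy as np

# Sample the parametric curve; here the unit circle (cos t, sin t).
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
x, y = np.cos(t), np.sin(t)

# Collocation matrix of degree-2 monomials evaluated at the samples.
D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])

# The right singular vector of the smallest singular value gives
# coefficients c with D @ c ~ 0, i.e. an approximate implicit equation.
_, s, Vt = np.linalg.svd(D)
c = Vt[-1]
c = c / c[0]            # normalise so the x^2 coefficient is 1

# c is now close to [1, 0, 1, 0, 0, -1], i.e. x^2 + y^2 - 1 = 0
```

For exact data the smallest singular value is near machine zero; for noisy or genuinely approximate data its magnitude measures the implicitization residual.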
Rollout sampling approximate policy iteration
Dimitrakakis, C.; Lagoudakis, M.G.
2008-01-01
Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a
Weighted approximation with varying weight
Totik, Vilmos
1994-01-01
A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some L^p extremal problems on the real line with exponential weights, which, for the case p=2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L^p extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.
Framework for sequential approximate optimization
Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.
2004-01-01
An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python
Multilevel weighted least squares polynomial approximation
Haji-Ali, Abdul-Lateef
2017-06-30
Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
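The single-level building block of this method, weighted least squares in a polynomial space with samples drawn from a near-optimal distribution, can be sketched as follows. The target function, Chebyshev (arcsine) sampling density, degree and sample count are all illustrative choices, not the paper's multilevel code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function and polynomial space (Legendre basis up to degree 4).
f = lambda x: np.exp(x)
deg, n = 4, 200     # dimension 5; samples scale near-linearly in it

# Draw samples from the Chebyshev (arcsine) density on [-1, 1],
# a known near-optimal sampling distribution for polynomial least squares.
u = rng.uniform(0.0, np.pi, n)
x = np.cos(u)

# Weights proportional to 1/density keep the discrete problem an
# unbiased estimate of the continuous (uniform-measure) projection.
w = np.pi * np.sqrt(1.0 - x**2)

V = np.polynomial.legendre.legvander(x, deg)       # design matrix
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)

# Evaluate the fitted polynomial on a grid and check the uniform error.
xs = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(np.polynomial.legendre.legval(xs, coef) - f(xs)))
```

In the multilevel method of the abstract, many such fits at different discretization accuracies are combined; the sketch above is only the innermost solve.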
SMART POINT CLOUD: DEFINITION AND REMAINING CHALLENGES
Directory of Open Access Journals (Sweden)
F. Poux
2016-10-01
Full Text Available Dealing with coloured point clouds acquired from terrestrial laser scanners, this paper identifies remaining challenges for a new data structure: the smart point cloud. This concept arises from the statement that massive and discretized spatial information from active remote sensing technology is often underused due to data mining limitations. The generalisation of point cloud data, associated with the heterogeneity and temporality of such datasets, is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge through machine learning to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. A review of feature detection, machine learning frameworks and database systems indexed both for mining queries and data visualisation is studied. Based on existing approaches, we propose a new 3-block flexible framework around device expertise, analytic expertise and domain base reflexion. This contribution serves as the first step for the realisation of a comprehensive smart point cloud data structure.
What remains of the Arrow oil?
International Nuclear Information System (INIS)
Sergy, G.; Owens, E.
1993-01-01
In February 1970, the tanker Arrow became grounded 6.5 km off the north shore of Chedabucto Bay, Nova Scotia, and nearly 72,000 bbl of Bunker C fuel oil were released from the vessel during its subsequent breakup and sinking. The oil was washed ashore in various degrees over an estimated 305 km of the bay's 604-km shoreline, of which only 48 km were cleaned. In addition, the tanker Kurdistan broke in two in pack ice in March 1979 in the Cabot Strait area, spilling ca 54,000 bbl of Bunker C, some of which was later found at 16 locations along the northeast and east shorelines of Chedabucto Bay. In summer 1992, a systematic ground survey of the bay's shorelines was conducted using Environment Canada Shoreline Cleanup Assessment Team (SCAT) procedures. Standard observations were made of oil distribution and width, thickness, and character of the oil residues in 419 coastal segments. Results from the survey are summarized. Oil was found to be present on 13.3 km of the shoreline, with heavy oiling restricted to 1.3 km primarily in the areas of Black Duck Cove and Lennox Passage. Some of this residual oil was identified as coming from the Arrow. Natural weathering processes account for removal of most of the spilled oil from the bay. Oil remaining on the shore was found in areas outside of the zone of physical wave action, in areas of nearshore mixing where fine sediments are not present to weather the oil through biophysical processes, or in crusts formed by oil weathered on the surface. The systematic description of oiled shorelines using the SCAT methodology proved very successful, even for such an old spill. 6 refs
Ghost Remains After Black Hole Eruption
2009-05-01
NASA's Chandra X-ray Observatory has found a cosmic "ghost" lurking around a distant supermassive black hole. This is the first detection of such a high-energy apparition, and scientists think it is evidence of a huge eruption produced by the black hole. This discovery presents astronomers with a valuable opportunity to observe phenomena that occurred when the Universe was very young. The X-ray ghost, so-called because a diffuse X-ray source has remained after other radiation from the outburst has died away, is in the Chandra Deep Field-North, one of the deepest X-ray images ever taken. The source, a.k.a. HDF 130, is over 10 billion light years away and existed at a time 3 billion years after the Big Bang, when galaxies and black holes were forming at a high rate. "We'd seen this fuzzy object a few years ago, but didn't realize until now that we were seeing a ghost", said Andy Fabian of Cambridge University in the United Kingdom. "It's not out there to haunt us, rather it's telling us something - in this case what was happening in this galaxy billions of years ago." Fabian and colleagues think the X-ray glow from HDF 130 is evidence for a powerful outburst from its central black hole in the form of jets of energetic particles traveling at almost the speed of light. When the eruption was ongoing, it produced prodigious amounts of radio and X-radiation, but after several million years, the radio signal faded from view as the electrons radiated away their energy. However, less energetic electrons can still produce X-rays by interacting with the pervasive sea of photons remaining from the Big Bang - the cosmic background radiation. Collisions between these electrons and the background photons can impart enough energy to the photons to boost them into the X-ray energy band. This process produces an extended X-ray source that lasts for another 30 million years or so. "This ghost tells us about the black hole's eruption long after
Prognostic modelling options for remaining useful life estimation by industry
Sikorska, J. Z.; Hodkiewicz, M.; Ma, L.
2011-07-01
Over recent years a significant amount of research has been undertaken to develop prognostic models that can be used to predict the remaining useful life of engineering assets. Implementations by industry have only had limited success. By design, models are subject to specific assumptions and approximations, some of which are mathematical, while others relate to practical implementation issues such as the amount of data required to validate and verify a proposed model. Therefore, appropriate model selection for successful practical implementation requires not only a mathematical understanding of each model type, but also an appreciation of how a particular business intends to utilise a model and its outputs. This paper discusses business issues that need to be considered when selecting an appropriate modelling approach for trial. It also presents classification tables and process flow diagrams to assist industry and research personnel select appropriate prognostic models for predicting the remaining useful life of engineering assets within their specific business environment. The paper then explores the strengths and weaknesses of the main prognostics model classes to establish what makes them better suited to certain applications than to others and summarises how each have been applied to engineering prognostics. Consequently, this paper should provide a starting point for young researchers first considering options for remaining useful life prediction. The models described in this paper are Knowledge-based (expert and fuzzy), Life expectancy (stochastic and statistical), Artificial Neural Networks, and Physical models.
Direct dating of Early Upper Palaeolithic human remains from Mladec.
Wild, Eva M; Teschler-Nicola, Maria; Kutschera, Walter; Steier, Peter; Trinkaus, Erik; Wanek, Wolfgang
2005-05-19
The human fossil assemblage from the Mladec Caves in Moravia (Czech Republic) has been considered to derive from a middle or later phase of the Central European Aurignacian period on the basis of archaeological remains (a few stone artefacts and organic items such as bone points, awls, perforated teeth), despite questions of association between the human fossils and the archaeological materials and concerning the chronological implications of the limited archaeological remains. The morphological variability in the human assemblage, the presence of apparently archaic features in some specimens, and the assumed early date of the remains have made this fossil assemblage pivotal in assessments of modern human emergence within Europe. We present here the first successful direct accelerator mass spectrometry radiocarbon dating of five representative human fossils from the site. We selected sample materials from teeth and from one bone for 14C dating. The four tooth samples yielded uncalibrated ages of approximately 31,000 14C years before present, and the bone sample (an ulna) provided an uncertain more-recent age. These data are sufficient to confirm that the Mladec human assemblage is the oldest cranial, dental and postcranial assemblage of early modern humans in Europe and is therefore central to discussions of modern human emergence in the northwestern Old World and the fate of the Neanderthals.
Shearlets and Optimally Sparse Approximations
DEFF Research Database (Denmark)
Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q
2012-01-01
Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal both for the purpose of compression as well as for an efficient analysis is the provision of optimally sparse approximations...... optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction...... to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames as well as a reference for the state-of-the-art of this research field....
Diophantine approximation and Dirichlet series
Queffélec, Hervé
2013-01-01
This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...
Rational approximations for tomographic reconstructions
International Nuclear Information System (INIS)
Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas
2013-01-01
We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp–Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image. (paper)
Approximation methods in probability theory
Čekanavičius, Vydas
2016-01-01
This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.
Approximate reasoning in physical systems
International Nuclear Information System (INIS)
Mutihac, R.
1991-01-01
The theory of fuzzy sets provides excellent ground to deal with fuzzy observations (uncertain or imprecise signals, wavelengths, temperatures, etc.), fuzzy functions (spectra and depth profiles), and fuzzy logic and approximate reasoning. First, the basic ideas of fuzzy set theory are briefly presented. Secondly, stress is put on the application of simple fuzzy set operations for matching candidate reference spectra of a spectral library to an unknown sample spectrum (e.g. IR spectroscopy). Thirdly, approximate reasoning is applied to infer an unknown property from information available in a database (e.g. crystal systems). Finally, multi-dimensional fuzzy reasoning techniques are suggested. (Author)
Face Recognition using Approximate Arithmetic
DEFF Research Database (Denmark)
Marso, Karol
Face recognition is an image processing technique which aims to identify human faces and has found use in various different fields, for example in security. Throughout the years this field has evolved and there are many approaches and many different algorithms which aim to make face recognition as effective...... processing applications the results do not need to be completely precise and use of approximate arithmetic can lead to a reduction in delay, space and power consumption. In this paper we examine the possible use of approximate arithmetic in face recognition using the Eigenfaces algorithm.
Fate of nuclear waste site remains unclear
International Nuclear Information System (INIS)
Anderson, E.V.
1980-01-01
The only commercial nuclear fuel reprocessing plant in the U.S., located in West Valley, N.Y., has been shut down since 1972, and no efforts have yet been made to clean up the site. The site contains a spent-fuel pool, high-level liquid waste storage tanks, and two radioactive waste burial grounds. Nuclear Fuel Services, Inc., has been leasing the site from the New York State Energy Research and Development Authority. Federal litigation may ensue, prompted by NRC and DOE, if the company refuses to decontaminate the area when its lease expires at the end of 1980. DOE has developed a plan to solidify the liquid wastes at the facility but needs additional legislation and funding to implement the scheme.
Oil prices remain firm, despite economic slump
International Nuclear Information System (INIS)
Brady, Aaron; Giesecke Linda
2002-01-01
Despite all the evidence of sluggish economic growth throughout the world this year, WTI crude oil prices have averaged about $24/bbl year-to-date. Although prices have been lower than year-ago levels, they're a far cry from the lows that occurred in 1998 and at the beginning of 1999. Mounting tensions in the Middle East have given crude prices support. While the market has taken these tensions into account since the beginning of the year, more recent concerns about a possible U.S. military conflict with Iraq have added a larger war premium to crude prices. Note that the halt of Iraqi exports itself may not be as detrimental as perceived, since these exports could easily be replaced by OPEC's excess capacity. In part, we have already seen a reduction in Iraqi exports this year due to a pricing dispute.
Approximate Reanalysis in Topology Optimization
DEFF Research Database (Denmark)
Amir, Oded; Bendsøe, Martin P.; Sigmund, Ole
2009-01-01
In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, the integration of an approximate reanalysis procedure into the framework of topology optimization of continuum structures...
Approximate Matching of Hierarchial Data
DEFF Research Database (Denmark)
Augsten, Nikolaus
The pq-grams of a tree are all its subtrees of a particular shape. Intuitively, two trees are similar if they have many pq-grams in common. The pq-gram distance is an efficient and effective approximation of the tree edit distance. We analyze the properties of the pq-gram distance and compare it with the tree edit distance.
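The pq-gram idea in the record above can be sketched in a few lines. This is a simplified reading of the definition (p ancestors as stem, q consecutive children as base, dummy `*` padding), using toy tuple-based trees rather than the author's index structures:

```python
from collections import Counter
from itertools import combinations  # not needed here; trees are explicit

def pq_grams(tree, p=2, q=3):
    """Return the pq-gram profile (multiset of label tuples) of a tree.
    A tree is (label, [children]); '*' is the dummy padding label."""
    profile = Counter()

    def visit(node, stem):
        label, children = node
        stem = (stem + (label,))[-p:]                  # last p ancestors
        padded = ('*',) * (p - len(stem)) + stem
        if children:
            base = ['*'] * (q - 1) + [c[0] for c in children] + ['*'] * (q - 1)
        else:
            base = ['*'] * q                           # leaf: all dummies
        for i in range(len(base) - q + 1):             # sliding q-window
            profile[padded + tuple(base[i:i + q])] += 1
        for c in children:
            visit(c, stem)

    visit(tree, ())
    return profile

def pq_gram_distance(t1, t2):
    """Normalised pq-gram distance in [0, 1]: 1 - 2|P1 n P2| / |P1 u P2|
    on bags (multisets) of pq-grams."""
    a, b = pq_grams(t1), pq_grams(t2)
    inter = sum((a & b).values())
    union = sum((a + b).values())
    return 1.0 - 2.0 * inter / union

t1 = ('a', [('b', []), ('c', [])])
t2 = ('a', [('b', []), ('d', [])])
# renaming one leaf changes most, but not all, shared pq-grams
```

Identical trees have distance 0, and a single leaf relabel already removes most shared pq-grams, which is what makes the profile a sensitive yet cheap edit-distance proxy.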
Approximation of Surfaces by Cylinders
DEFF Research Database (Denmark)
Randrup, Thomas
1998-01-01
We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...
Approximation properties of haplotype tagging
Directory of Open Access Journals (Sweden)
Dreiseitl Stephan
2006-01-01
Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n^2 - n)/2) for n haplotypes, but not approximable within (1 - ε) ln(n/2) for any ε > 0 unless NP ⊂ DTIME(n^(log log n)). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m - p + 1)(n^2 - n)/2) ≤ O(m(n^2 - n)/2), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion The haplotype tagging problem is hard, but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single processor machine. Hence, significant improvement in computational effort expended can only be expected if the computational effort is distributed and done in parallel.
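The "simple, very easily implementable algorithm" with a 1 + ln(...) guarantee is, in essence, greedy set cover over haplotype pairs: each SNP column "covers" the pairs it distinguishes. A toy sketch with hypothetical binary haplotypes (not the authors' implementation):

```python
from itertools import combinations

def greedy_tag(haplotypes):
    """Greedily pick SNP columns until every pair of haplotypes is
    distinguished by at least one chosen column (set-cover view)."""
    n, m = len(haplotypes), len(haplotypes[0])
    # For each column s, the set of haplotype pairs it distinguishes.
    covers = [{(i, j) for i, j in combinations(range(n), 2)
               if haplotypes[i][s] != haplotypes[j][s]} for s in range(m)]
    need = set().union(*covers)      # pairs distinguishable at all
    chosen = []
    while need:
        # classic greedy step: take the column covering most remaining pairs
        best = max(range(m), key=lambda s: len(covers[s] & need))
        chosen.append(best)
        need -= covers[best]
    return chosen

haps = ["0011", "0101", "1001", "1110"]
tags = greedy_tag(haps)   # column indices that jointly separate all haplotypes
```

The ln-factor guarantee quoted in the abstract is exactly the classical greedy set-cover bound applied to the (n^2 - n)/2 haplotype pairs.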
All-Norm Approximation Algorithms
Azar, Yossi; Epstein, Leah; Richter, Yossi; Woeginger, Gerhard J.; Penttonen, Martti; Meineche Schmidt, Erik
2002-01-01
A major drawback in optimization problems, and in particular in scheduling problems, is that for every measure there may be a different optimal solution. In many cases the various measures are different ℓ_p norms. We address this problem by introducing the concept of an All-norm ρ-approximation
Truthful approximations to range voting
DEFF Research Database (Denmark)
Filos-Ratsika, Aris; Miltersen, Peter Bro
We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare...
Approximate reasoning in decision analysis
Energy Technology Data Exchange (ETDEWEB)
Gupta, M M; Sanchez, E
1982-01-01
The volume aims to incorporate the recent advances in both theory and applications. It contains 44 articles by 74 contributors from 17 different countries. The topics considered include: membership functions; composite fuzzy relations; fuzzy logic and inference; classifications and similarity measures; expert systems and medical diagnosis; psychological measurements and human behaviour; approximate reasoning and decision analysis; and fuzzy clustering algorithms.
Rational approximation of vertical segments
Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte
2007-08-01
In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.
Pythagorean Approximations and Continued Fractions
Peralta, Javier
2008-01-01
In this article, we will show that the Pythagorean approximations of √2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
Remaining Calm in the Midst of Chaos
International Nuclear Information System (INIS)
Sullivan, Robin S.; Ryan, Grant W.; Young, Jonathan
2004-01-01
level of order in this seemingly chaotic situation. While the specifics of the safety basis strategy for the K Basins sludge removal project will be described in the paper, the general concepts of the strategy are applicable to similar projects throughout the DOE complex
CMB spectra and bispectra calculations: making the flat-sky approximation rigorous
International Nuclear Information System (INIS)
Bernardeau, Francis; Pitrou, Cyril; Uzan, Jean-Philippe
2011-01-01
This article constructs flat-sky approximations in a controlled way in the context of cosmic microwave background observations, for the computation of both spectra and bispectra. For angular spectra, it is explicitly shown that there exists a whole family of flat-sky approximations of similar accuracy, for which the expression and amplitude of the next-to-leading-order terms can be explicitly computed. It is noted that in this context two limiting cases can be encountered, for which the expressions can be further simplified. They correspond to cases where either the sources are localized in a narrow region (thin-shell approximation) or are slowly varying over a large distance (which leads to the so-called Limber approximation). Applying this to the calculation of the spectra, it is shown that, as long as the late integrated Sachs-Wolfe contribution is neglected, the flat-sky approximation at leading order is accurate at the 1% level for any multipole. Generalization of this construction scheme to the bispectra leads to the introduction of an alternative description of the bispectra, for which the flat-sky approximation is well controlled. This is not the case for the usual description of the bispectrum in terms of the reduced bispectrum, for which a flat-sky approximation is proposed but whose next-to-leading-order terms remain obscure.
Beyond the random phase approximation
DEFF Research Database (Denmark)
Olsen, Thomas; Thygesen, Kristian S.
2013-01-01
We assess the performance of a recently proposed renormalized adiabatic local density approximation (rALDA) for ab initio calculations of electronic correlation energies in solids and molecules. The method is an extension of the random phase approximation (RPA) derived from time-dependent density...... functional theory and the adiabatic connection fluctuation-dissipation theorem and contains no fitted parameters. The new kernel is shown to preserve the accurate description of dispersive interactions from RPA while significantly improving the description of short-range correlation in molecules, insulators......, and metals. For molecular atomization energies, the rALDA is a factor of 7 better than RPA and a factor of 4 better than the Perdew-Burke-Ernzerhof (PBE) functional when compared to experiments, and a factor of 3 (1.5) better than RPA (PBE) for cohesive energies of solids. For transition metals...
Hydrogen: Beyond the Classic Approximation
International Nuclear Information System (INIS)
Scivetti, Ivan
2003-01-01
The classical nucleus approximation is the most frequently used approach for the resolution of problems in condensed matter physics. However, there are systems in nature where it is necessary to introduce the nuclear degrees of freedom to obtain a correct description of the properties. Examples of this are systems containing hydrogen. In this work, we have studied the resolution of the quantum nuclear problem for the particular case of the water molecule. The Hartree approximation has been used, i.e. we have considered that the nuclei are distinguishable particles. In addition, we have proposed a model to solve the tunneling process, which involves the resolution of the nuclear problem for configurations of the system away from its equilibrium position.
Approximation errors during variance propagation
International Nuclear Information System (INIS)
Dinsmore, Stephen
1986-01-01
Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurrence are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
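A typical variance-propagation approximation of the kind examined above is first-order (delta-method) Taylor propagation. For an AND gate with independent inputs, the exact variance is available in closed form, so the approximation error can be seen directly (the numbers below are illustrative, not the paper's cases):

```python
import random
import math

# Top-event probability for a two-input AND gate: P = p1 * p2.
# First-order propagation: Var(P) ~ (dP/dp1)^2 Var(p1) + (dP/dp2)^2 Var(p2),
# with the partial derivatives evaluated at the input means.
m1, v1 = 0.10, 0.0004
m2, v2 = 0.20, 0.0009

var_first_order = m2**2 * v1 + m1**2 * v2

# Exact variance for a product of independent inputs keeps the cross term:
# Var(p1 p2) = m2^2 v1 + m1^2 v2 + v1 v2, so first order always undershoots.
var_exact = var_first_order + v1 * v2

# Monte Carlo check (normal inputs; purely illustrative distributions).
rng = random.Random(1)
samples = [(m1 + math.sqrt(v1) * rng.gauss(0.0, 1.0)) *
           (m2 + math.sqrt(v2) * rng.gauss(0.0, 1.0)) for _ in range(200000)]
mean = sum(samples) / len(samples)
var_mc = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
```

Here the neglected v1*v2 term is small relative to the first-order terms, which is the usual justification for the approximation; the paper's point is that this error can grow over wide ranges of input means and variances.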
Duplex Alu Screening for Degraded DNA of Skeletal Human Remains
Directory of Open Access Journals (Sweden)
Fabian Haß
2017-10-01
Full Text Available The human-specific Alu elements, belonging to the class of Short INterspersed Elements (SINEs), have been shown to be a powerful tool for population genetic studies. An earlier study in this department showed that it was possible to analyze Alu presence/absence in 3000-year-old skeletal human remains from the Bronze Age Lichtenstein cave in Lower Saxony, Germany. We developed duplex Alu screening PCRs with flanking primers for two Alu elements, each combined with a single internal Alu primer. By adding an internal primer, the approximately 400–500 bp presence signals of Alu elements can be detected within a range of less than 200 bp. Thus, our PCR approach is suited for highly fragmented ancient DNA samples, whereas NGS analyses frequently are unable to handle repetitive elements. With this analysis system, we examined remains of 12 individuals from the Lichtenstein cave with different degrees of DNA degradation. The duplex PCRs showed fully informative amplification results for all of the chosen Alu loci in eight of the 12 samples. Our analysis system showed that Alu presence/absence analysis is possible in samples with different degrees of DNA degradation and it reduces the amount of valuable skeletal material needed by a factor of four, as compared with a singleplex approach.
WKB approximation in atomic physics
International Nuclear Information System (INIS)
Karnakov, Boris Mikhailovich
2013-01-01
Provides extensive coverage of the Wentzel-Kramers-Brillouin approximation and its applications. Presented as a sequence of problems with highly detailed solutions. Gives a concise introduction to calculating Rydberg states, potential barriers and quasistationary systems. This book has evolved from lectures devoted to applications of the Wentzel-Kramers-Brillouin (WKB, or quasi-classical) approximation and of the method of 1/N-expansion for solving various problems in atomic and nuclear physics. The intent of this book is to help students and investigators in this field to extend their knowledge of these important calculation methods in quantum mechanics. Much material is contained herein that is not to be found elsewhere. The WKB approximation, while constituting a fundamental area in atomic physics, has not been the focus of many books. A novel method has been adopted for the presentation of the subject matter: the material is presented as a succession of problems, each followed by a detailed solution. The methods introduced are then used to calculate Rydberg states in atomic systems and to evaluate potential barriers and quasistationary states. Finally, adiabatic transitions and ionization of quantum systems are covered.
Standard filter approximations for low power Continuous Wavelet Transforms.
Casson, Alexander J; Rodriguez-Villegas, Esther
2010-01-01
Analogue domain implementations of the Continuous Wavelet Transform (CWT) have proved popular in recent years as they can be implemented at very low power consumption levels. This is essential for use in wearable, long term physiological monitoring systems. Present analogue CWT implementations rely on taking a mathematical approximation of the wanted mother wavelet function to give a filter transfer function that is suitable for circuit implementation. This paper investigates the use of standard filter approximations (Butterworth, Chebyshev, Bessel) as an alternative wavelet approximation technique. This extends the number of approximation techniques available for generating analogue CWT filters. An example ECG analysis shows that signal information can be successfully extracted using these CWT approximations.
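As a sketch of the idea (not the paper's actual filter designs; the sample rate and band edges below are invented), a standard Butterworth bandpass can stand in for the bandpass response of a CWT filter at one scale:

```python
import numpy as np
from scipy import signal

# Hypothetical ECG sample rate and passband for one wavelet "scale".
fs = 250.0
f_lo, f_hi = 8.0, 12.0

# 4th-order Butterworth bandpass in second-order-section form,
# a stand-in for an approximated mother-wavelet transfer function.
sos = signal.butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")

# The magnitude response should peak inside the chosen band.
w, h = signal.sosfreqz(sos, worN=2048, fs=fs)
gain = np.abs(h)
f_peak = w[np.argmax(gain)]
print(f_peak)
```

The same template swaps in `signal.cheby1` or `signal.bessel` to compare the trade-offs (passband ripple vs. phase linearity) that the paper evaluates.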
Approximate solutions to Mathieu's equation
Wilkinson, Samuel A.; Vogt, Nicolas; Golubev, Dmitry S.; Cole, Jared H.
2018-06-01
Mathieu's equation has many applications throughout theoretical physics. It is especially important to the theory of Josephson junctions, where it is equivalent to Schrödinger's equation. Mathieu's equation can be easily solved numerically; however, no closed-form analytic solution exists. Here we collect various approximations which appear throughout the physics and mathematics literature and examine their accuracy and regimes of applicability. Particular attention is paid to quantities relevant to the physics of Josephson junctions, but the arguments and notation are kept general so as to be of use to the broader physics community.
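One such approximation can be checked numerically. The sketch below (assuming SciPy's Mathieu routines) compares the lowest even characteristic value a₀(q) with its textbook small-q expansion:

```python
from scipy.special import mathieu_a

# Small-q expansion of the lowest even Mathieu characteristic value:
# a_0(q) ~ -q^2/2 + 7 q^4/128  (Abramowitz & Stegun 20.2.25)
q = 0.1
series = -q**2 / 2 + 7 * q**4 / 128

# SciPy's numerically computed characteristic value for m = 0.
exact = mathieu_a(0, q)
print(exact, series)
```

For q = 0.1 the two agree to about the size of the next term in the series (order q⁶), which is the kind of regime-of-applicability check the paper performs systematically.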
Approximate Inference for Wireless Communications
DEFF Research Database (Denmark)
Hansen, Morten
This thesis investigates signal processing techniques for wireless communication receivers. The aim is to improve the performance or reduce the computational complexity of these, where the primary focus area is cellular systems such as Global System for Mobile communications (GSM) (and extensions...... to the optimal one, which usually requires an unacceptably high complexity. Some of the treated approximate methods are based on QL-factorization of the channel matrix. In the work presented in this thesis it is proven how the QL-factorization of frequency-selective channels asymptotically provides the minimum...
Quantum tunneling beyond semiclassical approximation
International Nuclear Information System (INIS)
Banerjee, Rabin; Majhi, Bibhas Ranjan
2008-01-01
Hawking radiation as tunneling by Hamilton-Jacobi method beyond semiclassical approximation is analysed. We compute all quantum corrections in the single particle action revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.
Generalized Gradient Approximation Made Simple
International Nuclear Information System (INIS)
Perdew, J.P.; Burke, K.; Ernzerhof, M.
1996-01-01
Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. copyright 1996 The American Physical Society
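For reference, the exchange part of the functional this record describes (PBE) reduces to a single enhancement factor over local exchange; quoting the published form from memory (worth double-checking against the original paper):

```latex
E_x^{\mathrm{PBE}} = \int d^3r\; n\,\epsilon_x^{\mathrm{unif}}(n)\,F_x(s),
\qquad
F_x(s) = 1 + \kappa - \frac{\kappa}{1 + \mu s^2/\kappa},
```

with κ = 0.804 fixed by the Lieb-Oxford bound, μ ≈ 0.2195 fixed by the linear response of the uniform electron gas, and s the reduced density gradient — all "fundamental constants" in the sense of the abstract.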
Safety provision for nuclear power plants during remaining running time
International Nuclear Information System (INIS)
Rossnagel, Alexander; Hentschel, Anja
2012-01-01
With the phase-out of the industrial use of nuclear energy for power generation, the risk posed by nuclear power plants has not been eliminated in principle, but only limited in time. The nine remaining nuclear power plants must therefore be operated during the remaining roughly ten years in accordance with the state of science and technology. Regulatory authorities must substantiate the safety requirements for each nuclear power plant and enforce these requirements by means of various regulatory measures. The consequences of Fukushima must be included in the assessment of the safety level of nuclear power plants in Germany. In this respect, the regulatory authorities have the important tasks of investigating and assessing the safety risks as well as developing instructions and orders.
Impulse approximation in solid helium
International Nuclear Information System (INIS)
Glyde, H.R.
1985-01-01
The incoherent dynamic form factor S_i(Q, ω) is evaluated in solid helium for comparison with the impulse approximation (IA). The purpose is to determine the Q values for which the IA is valid for systems such as helium, where the atoms interact via a potential having a steeply repulsive but not infinite hard core. For ³He, S_i(Q, ω) is evaluated from first principles, beginning with the pair potential. The density of states g(ω) is evaluated using the self-consistent phonon theory, and S_i(Q, ω) is expressed in terms of g(ω). For solid ⁴He, reasonable models of g(ω) using observed input parameters are used to evaluate S_i(Q, ω). In both cases S_i(Q, ω) is found to approach the impulse approximation S_IA(Q, ω) closely for wave-vector transfers Q ≳ 20 Å⁻¹. The difference between S_i and S_IA, which is due to final-state interactions of the scattering atom with the remainder of the atoms in the solid, is also predominantly antisymmetric in (ω − ω_R), where ω_R is the recoil frequency. This suggests that the symmetrization procedure proposed by Sears to eliminate final-state contributions should work well in solid helium.
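For context, the impulse approximation the abstract compares against has the standard textbook form (standard notation, not transcribed from this paper):

```latex
S_{\mathrm{IA}}(Q,\omega)
= \int d\mathbf{p}\; n(\mathbf{p})\,
  \delta\!\left(\omega - \omega_R - \frac{\mathbf{Q}\cdot\mathbf{p}}{m}\right),
\qquad
\omega_R = \frac{\hbar Q^2}{2m},
```

where n(**p**) is the single-atom momentum distribution. Final-state interactions are precisely what S_i(Q, ω) contains beyond this free-recoil limit.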
Plasma Physics Approximations in Ares
International Nuclear Information System (INIS)
Managan, R. A.
2015-01-01
Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals F_n(μ/θ), the chemical potential μ (or ζ = ln(1 + e^{μ/θ})), and the temperature θ = kT. Since these formulae are expensive to compute, rational-function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for A_α(ζ), A_β(ζ), ζ, f(ζ) = (1 + e^{−μ/θ})F_{1/2}(μ/θ), F_{1/2}'/F_{1/2}, F_c^α, and F_c^β. In each case the relative error of the fit is minimized, since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, i.e. as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
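A sketch of the underlying quantity being fit (the standard Fermi-Dirac integral definition evaluated by SciPy quadrature — not the Ares rational-function fits themselves), checked against its non-degenerate limit:

```python
import numpy as np
from scipy.integrate import quad

def fd_half(eta):
    """Fermi-Dirac integral F_{1/2}(eta) = int_0^inf sqrt(x)/(1+e^(x-eta)) dx.

    The integrand is written with e^(eta-x) to avoid overflow for large x.
    """
    val, _ = quad(lambda x: np.sqrt(x) * np.exp(eta - x)
                  / (1.0 + np.exp(eta - x)), 0, np.inf)
    return val

# Non-degenerate (classical) limit: F_{1/2} -> (sqrt(pi)/2) e^eta
eta = -5.0
limit = 0.5 * np.sqrt(np.pi) * np.exp(eta)
print(fd_half(eta) / limit)
```

Pinning fits to such limits at both ends of the ζ range is exactly the property the abstract says the new approximations preserve.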
Analysing organic transistors based on interface approximation
International Nuclear Information System (INIS)
Akiyama, Yuto; Mori, Takehiko
2014-01-01
Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. A small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region.
Approximating the minimum cycle mean
Directory of Open Access Journals (Sweden)
Krishnendu Chatterjee
2013-07-01
Full Text Available We consider directed graphs where each edge is labeled with an integer weight and study the fundamental algorithmic question of computing the value of a cycle with minimum mean weight. Our contributions are twofold: (1) First, we show that the algorithmic question is reducible in O(n²) time to the problem of a logarithmic number of min-plus matrix multiplications of n-by-n matrices, where n is the number of vertices of the graph. (2) Second, when the weights are nonnegative, we present the first (1 + ε)-approximation algorithm for the problem; the running time of our algorithm is Õ(n^ω log³(nW/ε)/ε), where O(n^ω) is the time required for the classic n-by-n matrix multiplication and W is the maximum value of the weights.
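For comparison with such approximation algorithms, a compact exact baseline is Karp's classic O(nm) dynamic program for the minimum cycle mean (a textbook algorithm, not the one from this paper):

```python
def min_cycle_mean(n, edges):
    """Karp's algorithm. edges: list of (u, v, w) with 0 <= u, v < n.

    Returns the minimum mean weight over all directed cycles
    (assumes at least one cycle exists).
    """
    INF = float("inf")
    # D[k][v] = minimum weight of a walk with exactly k edges ending at v,
    # starting anywhere (the standard "super-source" variant).
    D = [[INF] * n for _ in range(n + 1)]
    for v in range(n):
        D[0][v] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if D[k - 1][u] + w < D[k][v]:
                D[k][v] = D[k - 1][u] + w
    best = INF
    for v in range(n):
        if D[n][v] == INF:
            continue
        # Karp's formula: min over v of max over k of (D[n][v]-D[k][v])/(n-k)
        worst = max((D[n][v] - D[k][v]) / (n - k)
                    for k in range(n) if D[k][v] < INF)
        best = min(best, worst)
    return best

# Two cycles with mean weights 1 and 3; the minimum cycle mean is 1.
print(min_cycle_mean(3, [(0, 1, 1), (1, 0, 1), (1, 2, 4), (2, 1, 2)]))  # → 1.0
```

The paper's contribution is to beat this O(nm) bound (approximately, for nonnegative weights) via fast min-plus matrix products.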
Random-phase approximation and broken symmetry
International Nuclear Information System (INIS)
Davis, E.D.; Heiss, W.D.
1986-01-01
The validity of the random-phase approximation (RPA) in broken-symmetry bases is tested in an appropriate many-body system for which exact solutions are available. Initially the regions of stability of the self-consistent quasiparticle bases in this system are established and depicted in a 'phase' diagram. It is found that only stable bases can be used in an RPA calculation. This is particularly true for those RPA modes which are not associated with the onset of instability of the basis; it is seen that these modes do not describe any excited state when the basis is unstable, although from a formal point of view they remain acceptable. The RPA does well in a stable broken-symmetry basis provided one is not too close to a point where a phase transition occurs. This is true for both energies and matrix elements. (author)
Approximate spacetime symmetries and conservation laws
Energy Technology Data Exchange (ETDEWEB)
Harte, Abraham I [Enrico Fermi Institute, University of Chicago, Chicago, IL 60637 (United States)], E-mail: harte@uchicago.edu
2008-10-21
A notion of geometric symmetry is introduced that generalizes the classical concepts of Killing fields and other affine collineations. There is a sense in which flows under these new vector fields minimize deformations of the connection near a specified observer. Any exact affine collineations that may exist are special cases. The remaining vector fields can all be interpreted as analogs of Poincare and other well-known symmetries near timelike worldlines. Approximate conservation laws generated by these objects are discussed for both geodesics and extended matter distributions. One example is a generalized Komar integral that may be taken to define the linear and angular momenta of a spacetime volume as seen by a particular observer. This is evaluated explicitly for a gravitational plane wave spacetime.
Tuberculosis remains a challenge despite economic growth in Panama.
Tarajia, M; Goodridge, A
2014-03-01
Tuberculosis (TB) is a disease associated with inequality, and wise investment of economic resources is considered critical to its control. Panama has recently secured its status as an upper-middle-income country with robust economic growth. However, the prioritisation of resources for TB control remains a major challenge. In this article, we highlight areas that urgently require action to effectively reduce TB burden to minimal levels. Our conclusions suggest the need for fund allocation and a multidisciplinary approach to ensure prompt laboratory diagnosis, treatment assurance and workforce reinforcement, complemented by applied and operational research, development and innovation.
Magnus approximation in neutrino oscillations
International Nuclear Information System (INIS)
Acero, Mario A; Aguilar-Arevalo, Alexis A; D'Olivo, J C
2011-01-01
Oscillations between active and sterile neutrinos remain as an open possibility to explain some anomalous experimental observations. In a four-neutrino (three active plus one sterile) mixing scheme, we use the Magnus expansion of the evolution operator to study the evolution of neutrino flavor amplitudes within the Earth. We apply this formalism to calculate the transition probabilities from active to sterile neutrinos with energies of the order of a few GeV, taking into account the matter effect for a varying terrestrial density.
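A minimal illustration of why the Magnus expansion is natural here (a two-flavour toy with a constant Hamiltonian and invented parameters, not the paper's four-neutrino scheme): for constant H the first-order Magnus term already gives the exact evolution operator, which reproduces the textbook vacuum oscillation probability.

```python
import numpy as np
from scipy.linalg import expm

# Two-flavour vacuum toy model (natural units; arbitrary parameters).
theta = 0.6          # mixing angle
delta = 1.3          # accumulated phase Δm² L / (4E)
s2, c2 = np.sin(2 * theta), np.cos(2 * theta)

# Flavour-basis Hamiltonian times baseline; constant, so the first-order
# Magnus term -i * integral(H) = -i * H is the whole expansion.
H = delta * np.array([[-c2, s2], [s2, c2]])
U = expm(-1j * H)

P_numeric = abs(U[1, 0]) ** 2                 # transition amplitude squared
P_analytic = s2**2 * np.sin(delta) ** 2       # sin^2(2θ) sin^2(Δm² L / 4E)
print(P_numeric, P_analytic)
```

With a varying matter density, H(t) no longer commutes with itself at different times, and the higher Magnus terms the paper uses become the correction series.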
Nonlinear approximation with dictionaries I. Direct estimates
DEFF Research Database (Denmark)
Gribonval, Rémi; Nielsen, Morten
2004-01-01
We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation w...
Approximate cohomology in Banach algebras | Pourabbas ...
African Journals Online (AJOL)
We introduce the notions of approximate cohomology and approximate homotopy in Banach algebras and we study the relation between them. We show that the approximate homotopically equivalent cochain complexes give the same approximate cohomologies. As a special case, approximate Hochschild cohomology is ...
Toward a consistent random phase approximation based on the relativistic Hartree approximation
International Nuclear Information System (INIS)
Price, C.E.; Rost, E.; Shepard, J.R.; McNeil, J.A.
1992-01-01
We examine the random phase approximation (RPA) based on a relativistic Hartree approximation description for nuclear ground states. This model includes contributions from the negative energy sea at the one-loop level. We emphasize consistency between the treatment of the ground state and the RPA. This consistency is important in the description of low-lying collective levels but less important for the longitudinal (e,e') quasielastic response. We also study the effect of imposing a three-momentum cutoff on negative energy sea contributions. A cutoff of twice the nucleon mass improves agreement with observed spin-orbit splittings in nuclei compared to the standard infinite cutoff results, an effect traceable to the fact that imposing the cutoff reduces m*/m. Consistency is much more important than the cutoff in the description of low-lying collective levels. The cutoff model also provides excellent agreement with quasielastic (e,e') data.
Solving Math Problems Approximately: A Developmental Perspective.
Directory of Open Access Journals (Sweden)
Dana Ganor-Stern
Full Text Available Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders, 6th graders and adults' ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense of magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy which involves rounding and multiplication procedures, and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow using children estimation skills in an effective manner.
Recognition of computerized facial approximations by familiar assessors.
Richard, Adam H; Monson, Keith L
2017-11-01
performance were examined, and it was ultimately concluded that ReFace facial approximations may have limited effectiveness if used in the traditional way. However, some promising alternative uses are explored that may expand the utility of facial approximations for aiding in the identification of unknown human remains. Published by Elsevier B.V.
Reduced-rank approximations to the far-field transform in the gridded fast multipole method
Hesford, Andrew J.; Waag, Robert C.
2011-05-01
The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.
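The recompression step described above can be sketched generically (illustrative NumPy code, not the authors' implementation): given possibly rank-deficient factors from an ACA-like method, a QR of each factor followed by a small SVD of the core yields the truncated representation at far below full-SVD cost.

```python
import numpy as np

def recompress(U, V, tol=1e-8):
    """Recompress a low-rank factorization A ~ U @ V (U: m x k, V: k x n).

    QR each factor, SVD the small k x k core, truncate singular values
    below tol relative to the largest, and reassemble the thin factors.
    """
    Qu, Ru = np.linalg.qr(U)
    Qv, Rv = np.linalg.qr(V.T)
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)
    r = int(np.sum(s > tol * s[0]))
    return (Qu @ W[:, :r]) * s[:r], (Qv @ Zt.T[:, :r]).T

rng = np.random.default_rng(0)
U = rng.standard_normal((200, 30))
V = rng.standard_normal((30, 100))
# Pad the factorization with redundant (linearly dependent) columns/rows,
# mimicking an over-conservative ACA rank.
U2 = np.hstack([U, U[:, :10]])
V2 = np.vstack([V, V[:10]])

Unew, Vnew = recompress(U2, V2)
err = np.linalg.norm(Unew @ Vnew - U2 @ V2) / np.linalg.norm(U2 @ V2)
print(Unew.shape[1], err)   # rank drops from 40 back to 30
```

The SVD here acts only on a k × k core, so the cost scales with the ACA rank rather than with the full transformation operator, which is the efficiency argument the abstract makes.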
Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation
Gordon, Sheldon P.; Yang, Yajun
2017-01-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
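A minimal version of the comparison the article suggests (degree, nodes and interval chosen arbitrarily here): a degree-4 interpolating polynomial for eˣ on [−1, 1] versus the degree-4 Taylor polynomial at 0, judged by maximum error over the interval.

```python
import math
import numpy as np

# Degree-4 interpolating polynomial through 5 equally spaced nodes.
nodes = np.linspace(-1.0, 1.0, 5)
coeffs = np.polynomial.polynomial.polyfit(nodes, np.exp(nodes), 4)

x = np.linspace(-1.0, 1.0, 1001)
p_interp = np.polynomial.polynomial.polyval(x, coeffs)

# Degree-4 Taylor polynomial of e^x about 0.
p_taylor = sum(x**k / math.factorial(k) for k in range(5))

err_interp = np.max(np.abs(p_interp - np.exp(x)))
err_taylor = np.max(np.abs(p_taylor - np.exp(x)))
print(err_interp, err_taylor)
```

The Taylor polynomial is only accurate near its expansion point, so its maximum error is concentrated at the endpoints; the interpolant spreads its error across the interval and achieves a smaller maximum, which is the pedagogical point of the article.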
Convergence estimates in approximation theory
Gupta, Vijay
2014-01-01
The study of linear positive operators is an area of mathematical studies with significant relevance to studies of computer-aided geometric design, numerical analysis, and differential equations. This book focuses on the convergence of linear positive operators in real and complex domains. The theoretical aspects of these operators have been an active area of research over the past few decades. In this volume, authors Gupta and Agarwal explore new and more efficient methods of applying this research to studies in Optimization and Analysis. The text will be of interest to upper-level students seeking an introduction to the field and to researchers developing innovative approaches.
Directory of Open Access Journals (Sweden)
Leora Halpern Lanz
2015-08-01
Full Text Available The hotel marketing budget, typically amounting to approximately 4-5% of an asset’s total revenue, must remain fluid, so that the marketing director can constantly adapt the marketing tools to meet consumer communications methods and demands. This article suggests how an independent hotel can maximize their marketing budget by using multiple channels and strategies.
Photoelectron spectroscopy and the dipole approximation
Energy Technology Data Exchange (ETDEWEB)
Hemmers, O.; Hansen, D.L.; Wang, H. [Univ. of Nevada, Las Vegas, NV (United States)] [and others
1997-04-01
Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.
APPROXIMATING INNOVATION POTENTIAL WITH NEUROFUZZY ROBUST MODEL
Directory of Open Access Journals (Sweden)
Kasa, Richard
2015-01-01
Full Text Available In a remarkably short time, economic globalisation has changed the world’s economic order, bringing new challenges and opportunities to SMEs. These processes pushed the need to measure innovation capability, which has become a crucial issue for today’s economic and political decision makers. Companies cannot compete in this new environment unless they become more innovative and respond more effectively to consumers’ needs and preferences – as mentioned in the EU’s innovation strategy. Decision makers cannot make accurate and efficient decisions without knowing the capability for innovation of companies in a sector or a region. This need is forcing economists to develop an integrated, unified and complete method of measuring, approximating and even forecasting the innovation performance not only on a macro but also a micro level. In this recent article a critical analysis of the literature on innovation potential approximation and prediction is given, showing their weaknesses and a possible alternative that eliminates the limitations and disadvantages of classical measuring and predictive methods.
The measurement of psychological literacy: a first approximation.
Roberts, Lynne D; Heritage, Brody; Gasson, Natalie
2015-01-01
Psychological literacy, the ability to apply psychological knowledge to personal, family, occupational, community and societal challenges, is promoted as the primary outcome of an undergraduate education in psychology. As the concept of psychological literacy becomes increasingly adopted as the core business of undergraduate psychology training courses world-wide, there is urgent need for the construct to be accurately measured so that student and institutional level progress can be assessed and monitored. Key to the measurement of psychological literacy is determining the underlying factor-structure of psychological literacy. In this paper we provide a first approximation of the measurement of psychological literacy by identifying and evaluating self-report measures for psychological literacy. Multi-item and single-item self-report measures of each of the proposed nine dimensions of psychological literacy were completed by two samples (N = 218 and N = 381) of undergraduate psychology students at an Australian university. Single and multi-item measures of each dimension were weakly to moderately correlated. Exploratory and confirmatory factor analyses of multi-item measures indicated a higher order three factor solution best represented the construct of psychological literacy. The three factors were reflective processes, generic graduate attributes, and psychology as a helping profession. For the measurement of psychological literacy to progress there is a need to further develop self-report measures and to identify/develop and evaluate objective measures of psychological literacy. Further approximations of the measurement of psychological literacy remain an imperative, given the construct's ties to measuring institutional efficacy in teaching psychology to an undergraduate audience.
Approximation algorithms for a genetic diagnostics problem.
Kosaraju, S R; Schäffer, A A; Biesecker, L G
1998-01-01
We define and study a combinatorial problem called WEIGHTED DIAGNOSTIC COVER (WDC) that models the use of a laboratory technique called genotyping in the diagnosis of an important class of chromosomal aberrations. An optimal solution to WDC would enable us to define a genetic assay that maximizes the diagnostic power for a specified cost of laboratory work. We develop approximation algorithms for WDC by making use of the well-known problem SET COVER for which the greedy heuristic has been extensively studied. We prove worst-case performance bounds on the greedy heuristic for WDC and for another heuristic we call directional greedy. We implemented both heuristics. We also implemented a local search heuristic that takes the solutions obtained by greedy and dir-greedy and applies swaps until they are locally optimal. We report their performance on a real data set that is representative of the options that a clinical geneticist faces for the real diagnostic problem. Many open problems related to WDC remain, both of theoretical interest and practical importance.
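The greedy heuristic the paper builds on can be sketched for generic weighted set cover (the data below are made up; WDC adds diagnostic-specific structure on top of this): repeatedly pick the set minimizing cost per newly covered element.

```python
def greedy_weighted_cover(universe, sets):
    """Greedy heuristic for weighted set cover.

    sets: dict name -> (cost, elements). Assumes the union of all
    element sets covers the universe. Returns chosen set names in order.
    """
    uncovered, chosen = set(universe), []
    while uncovered:
        # Pick the set with the best cost per newly covered element.
        name, (cost, elems) = min(
            ((n, s) for n, s in sets.items() if s[1] & uncovered),
            key=lambda item: item[1][0] / len(item[1][1] & uncovered))
        chosen.append(name)
        uncovered -= elems
    return chosen

sets = {"A": (1.0, {1, 2, 3}), "B": (1.0, {3, 4}), "C": (2.5, {1, 2, 3, 4})}
print(greedy_weighted_cover({1, 2, 3, 4}, sets))  # → ['A', 'B']
```

The paper's "directional greedy" and local-search swap heuristics refine exactly this selection loop, and its worst-case bounds are stated relative to this baseline.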
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that, conversely, every linear program can be reduced to a Chebyshev linear approximation problem.
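The familiar direction of this equivalence is easy to demonstrate (illustrative data; assuming SciPy's `linprog`): minimizing the Chebyshev error ‖Ax − b‖∞ becomes an LP by introducing a bound variable t with ±(Ax − b) ≤ t·1.

```python
import numpy as np
from scipy.optimize import linprog

# Minimax (Chebyshev) line fit to four points, posed as an LP.
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
b = np.array([1.0, 2.0, 2.0, 3.0])
m, n = A.shape

c = np.zeros(n + 1)
c[-1] = 1.0                                  # minimize t
ones = np.ones((m, 1))
A_ub = np.block([[A, -ones], [-A, -ones]])   # encodes +-(Ax - b) <= t
b_ub = np.concatenate([b, -b])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + 1))
x, t = res.x[:n], res.x[-1]
print(t, np.max(np.abs(A @ x - b)))          # the two values coincide
```

For this data the optimal residuals equioscillate (−t, +t, −t, +t), the discrete analogue of the Chebyshev alternation condition, confirming optimality.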
Premortal data in the process of skeletal remains identification
Directory of Open Access Journals (Sweden)
Marinković Nadica
2012-01-01
Full Text Available Background/Aim. The basic task of a forensic examiner during the exhumation of mass graves or in mass accidents is to establish the identity of a person. The results obtained through these procedures depend on the degree of postmortal change, and they are compared with premortal data obtained from family members of those missing or killed. Experience with exhumations has shown significant differences between the results obtained through exhumation and the premortal data. The aim of the study was to demonstrate the existence of differences between premortal data and the results obtained by exhumation regarding some parameters, as well as to direct premortal data collection to the specific skeletal forms. Methods. We performed a comparative analysis of the results of exhumation of skeletal remains in a mass grave and the premortal data concerning the identified persons. The least number of individuals in this mass grave was calculated according to the upper parts of the right femur, giving a minimum of 48 individuals. A total of 27 persons were identified. Sex was determined by the metrics and morphology of the pelvis. Age at the moment of death was determined by the morphological features of the pubic symphysis, the morphology of the sternal edge of the ribs, and observations of other parts of the skeleton. Height was calculated as the average of results obtained from the lengths of the long bones and Rollet's coefficients. Results. There was a complete match in terms of sex, and age matched within the interval that could be established based on the skeletal remains. All the other parameters differed, however, which made identification significantly more difficult. Conclusion. Premortal data are an important element of the identification process; they should be obtained by the forensic doctor and directed towards more detailed examination of the skeletal system.
Some relations between entropy and approximation numbers
Institute of Scientific and Technical Information of China (English)
郑志明
1999-01-01
A general result is obtained which relates the entropy numbers of compact maps on Hilbert space to its approximation numbers. Compared with previous works in this area, it is particularly convenient for dealing with the cases where the approximation numbers decay rapidly. A nice estimation between entropy and approximation numbers for noncompact maps is given.
Axiomatic Characterizations of IVF Rough Approximation Operators
Directory of Open Access Journals (Sweden)
Guangji Yu
2014-01-01
Full Text Available This paper is devoted to the study of axiomatic characterizations of IVF (interval-valued fuzzy) rough approximation operators. IVF approximation spaces are investigated. It is proved that IVF operators satisfying certain axioms are guaranteed to be generated by corresponding types of IVF relations, with different relations producing the same operators; on this basis, IVF rough approximation operators are characterized by axioms.
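As a much-simplified concrete analogue of the operators axiomatized in this paper, the classical (crisp, not interval-valued fuzzy) rough lower and upper approximation operators induced by a binary relation can be sketched in a few lines; the universe, relation, and target set below are illustrative assumptions, not the paper's IVF setting.

```python
# Classical rough-set lower/upper approximation operators induced by a
# binary relation R on a universe U. A simplified (non-IVF) illustration;
# all names and the example relation are assumptions.

def lower_approx(universe, relation, X):
    """Elements whose entire neighborhood under `relation` lies inside X."""
    return {x for x in universe
            if all(y in X for y in universe if (x, y) in relation)}

def upper_approx(universe, relation, X):
    """Elements whose neighborhood under `relation` intersects X."""
    return {x for x in universe
            if any(y in X for y in universe if (x, y) in relation)}

U = {1, 2, 3, 4}
# An equivalence relation with classes {1, 2} and {3, 4}.
R = {(a, b) for a in U for b in U if (a <= 2) == (b <= 2)}
X = {1, 2, 3}

print(lower_approx(U, R, X))  # {1, 2}: the class {3, 4} is not contained in X
print(upper_approx(U, R, X))  # {1, 2, 3, 4}: both classes meet X
```

The duality axiom (lower approximation of X equals the complement of the upper approximation of the complement of X) is one of the properties that axiomatic characterizations of this kind build on.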
An approximation for kanban controlled assembly systems
Topan, E.; Avsar, Z.M.
2011-01-01
An approximation is proposed to evaluate the steady-state performance of kanban controlled two-stage assembly systems. The development of the approximation is as follows. The considered continuous-time Markov chain is aggregated keeping the model exact, and this aggregate model is approximated
Operator approximant problems arising from quantum theory
Maher, Philip J
2017-01-01
This book offers an account of a number of aspects of operator theory, mainly developed since the 1980s, whose problems have their roots in quantum theory. The research presented is in non-commutative operator approximation theory or, to use Halmos' terminology, in operator approximants. Focusing on the concept of approximants, this self-contained book is suitable for graduate courses.
Lung Abscess Remains a Life-Threatening Condition in Pediatrics – A Case Report
Directory of Open Access Journals (Sweden)
Chirteș Ioana Raluca
2017-07-01
Full Text Available Pulmonary abscess, or lung abscess, is a lung infection that destroys the lung parenchyma, leading to cavitation and central necrosis in localised areas bounded by thick walls and filled with purulent material. It can be primary or secondary. Lung abscesses can occur at any age, but paediatric pulmonary abscess morbidity appears to be lower than in adults. We present the case of a one-year and five-month-old male child admitted to our clinic for fever, loss of appetite and an overall altered general status. Laboratory tests revealed elevated inflammatory biomarkers, leukocytosis with neutrophilia, anaemia, thrombocytosis, low serum iron concentration and an increased lactate dehydrogenase level. Despite wide-spectrum antibiotic therapy, the patient's progress remained poor after seven days of treatment, and a CT scan established the diagnosis of a large lung abscess. Despite a change of antibiotic therapy, surgical intervention was eventually needed. There was a slow but steady improvement, and the patient was discharged after approximately five weeks.
Approximate number word knowledge before the cardinal principle.
Gunderson, Elizabeth A; Spaepen, Elizabet; Levine, Susan C
2015-02-01
Approximate number word knowledge-understanding the relation between the count words and the approximate magnitudes of sets-is a critical piece of knowledge that predicts later math achievement. However, researchers disagree about when children first show evidence of approximate number word knowledge-before, or only after, they have learned the cardinal principle. In two studies, children who had not yet learned the cardinal principle (subset-knowers) produced sets in response to number words (verbal comprehension task) and produced number words in response to set sizes (verbal production task). As evidence of approximate number word knowledge, we examined whether children's numerical responses increased with increasing numerosity of the stimulus. In Study 1, subset-knowers (ages 3.0-4.2 years) showed approximate number word knowledge above their knower-level on both tasks, but this effect did not extend to numbers above 4. In Study 2, we collected data from a broader age range of subset-knowers (ages 3.1-5.6 years). In this sample, children showed approximate number word knowledge on the verbal production task even when only examining set sizes above 4. Across studies, children's age predicted approximate number word knowledge (above 4) on the verbal production task when controlling for their knower-level, study (1 or 2), and parents' education, none of which predicted approximation ability. Thus, children can develop approximate knowledge of number words up to 10 before learning the cardinal principle. Furthermore, approximate number word knowledge increases with age and might not be closely related to the development of exact number word knowledge. Copyright © 2014 Elsevier Inc. All rights reserved.
Approximate deconvolution models of turbulence analysis, phenomenology and numerical analysis
Layton, William J
2012-01-01
This volume presents a mathematical development of a recent approach to the modeling and simulation of turbulent flows based on methods for the approximate solution of inverse problems. The resulting Approximate Deconvolution Models or ADMs have some advantages over more commonly used turbulence models – as well as some disadvantages. Our goal in this book is to provide a clear and complete mathematical development of ADMs, while pointing out the difficulties that remain. In order to do so, we present the analytical theory of ADMs, along with its connections, motivations and complements in the phenomenology of and algorithms for ADMs.
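The core ADM idea, approximating the inverse of a filter G by the truncated series D_N = sum over k from 0 to N of (I - G)^k, can be sketched numerically; the discrete smoothing filter and test signal below are illustrative assumptions, not the book's setup.

```python
import numpy as np

# Sketch of an approximate deconvolution operator D_N = sum_{k=0}^{N} (I - G)^k
# (van Cittert form), applied with a simple periodic 3-point smoothing filter G.
# The filter choice and signal are illustrative assumptions.

def G(u):
    """Discrete filter: periodic weighted average (1/4, 1/2, 1/4)."""
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

def deconvolve(u_bar, N=2):
    """Apply D_N, an approximate inverse of G, to the filtered field u_bar."""
    result = np.zeros_like(u_bar)
    term = u_bar.copy()
    for _ in range(N + 1):
        result += term
        term = term - G(term)   # next power of (I - G) applied to u_bar
    return result

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x)                   # "true" field
u_bar = G(u)                    # filtered field
u_adm = deconvolve(u_bar, N=2)  # approximate deconvolution

print(np.max(np.abs(u_bar - u)))  # filtering error
print(np.max(np.abs(u_adm - u)))  # much smaller after deconvolution
```

For a smooth field the truncated geometric series recovers the unfiltered signal to high accuracy, which is the mechanism the ADM closure exploits.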
Analysis of corrections to the eikonal approximation
Hebborn, C.; Capel, P.
2017-11-01
Various corrections to the eikonal approximation are studied for two- and three-body nuclear collisions, with the goal of extending the range of validity of this approximation down to beam energies of 10 MeV/nucleon. Wallace's correction does not much improve the elastic-scattering cross sections obtained with the usual eikonal approximation. On the contrary, a semiclassical approximation that substitutes for the impact parameter a complex distance of closest approach computed with the projectile-target optical potential efficiently corrects the eikonal approximation. This opens the possibility of analyzing data measured down to 10 MeV/nucleon within eikonal-like reaction models.
Mapping moveout approximations in TI media
Stovas, Alexey; Alkhalifah, Tariq Ali
2013-01-01
Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.
Analytical approximation of neutron physics data
International Nuclear Information System (INIS)
Badikov, S.A.; Vinogradov, V.A.; Gaj, E.V.; Rabotnov, N.S.
1984-01-01
A method for the analytical approximation of experimental neutron-physics data by rational functions, based on the Padé approximation, is suggested. It is shown that the specific behaviour of the Padé approximant near its poles is an extremely favourable analytical property, essentially extending the convergence range and increasing the convergence rate compared with polynomial approximation. The Padé approximant is a particularly natural instrument for processing resonance curves, as the resonances correspond to complex poles of the approximant. But even in the general case, analytical representation of the data in this form is convenient and compact. Thus, representation of the data on neutron threshold reaction cross sections (the BOSPOR constant library) in the form of rational functions led to an approximately twentyfold reduction in stored numerical information compared with point-by-point tabulation at the same accuracy.
A unified approach to the Darwin approximation
International Nuclear Information System (INIS)
Krause, Todd B.; Apte, A.; Morrison, P. J.
2007-01-01
There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound that measures the effect of approximating kernel matrices by multilevel circulant matrices on the hypothesis, and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
Bounded-Degree Approximations of Stochastic Networks
Energy Technology Data Exchange (ETDEWEB)
Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar
2017-06-01
We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
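The objective these algorithms minimize, Kullback-Leibler divergence between the true joint distribution and a sparser approximation, can be illustrated on a toy case: scoring the fully disconnected (independence) approximation of a correlated binary pair. The joint distribution below is an illustrative assumption, not from the paper.

```python
from math import log
from itertools import product

# Scoring a simplified (fully disconnected, i.e. independent) approximation
# of a joint distribution by Kullback-Leibler divergence, the same objective
# that graph-approximation algorithms of this kind minimize.

def kl(p, q):
    """KL divergence D(p || q) for distributions given as dicts."""
    return sum(p[x] * log(p[x] / q[x]) for x in p if p[x] > 0)

# Joint distribution of two correlated binary processes (an assumption).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginals, and the edge-free (independence) approximation.
px = {a: sum(joint[(a, b)] for b in (0, 1)) for a in (0, 1)}
py = {b: sum(joint[(a, b)] for a in (0, 1)) for b in (0, 1)}
indep = {(a, b): px[a] * py[b] for a, b in product((0, 1), repeat=2)}

print(kl(joint, indep))  # > 0: the independence approximation loses information
```

A denser approximation can only lower this divergence, which is exactly the sparsity-versus-quality trade-off described in the abstract.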
AIDS, individual behaviour and the unexplained remaining variation.
Katz, Alison
2002-01-01
From the start of the AIDS pandemic, individual behaviour has been put forward, implicitly or explicitly, as the main explanatory concept for understanding the epidemiology of HIV infection and in particular for the rapid spread and high prevalence in sub-Saharan Africa. This has had enormous implications for the international response to AIDS and has heavily influenced public health policy and strategy and the design of prevention and care interventions at national, community and individual level. It is argued that individual behaviour alone cannot possibly account for the enormous variation in HIV prevalence between population groups, countries and regions and that the unexplained remaining variation has been neglected by the international AIDS community. Biological vulnerability to HIV due to seriously deficient immune systems has been ignored as a determinant of the high levels of infection in certain populations. This is in sharp contrast to well proven public health approaches to other infectious diseases. In particular, it is argued that poor nutrition and co-infection with the myriad of other diseases of poverty including tuberculosis, malaria, leishmaniasis and parasitic infections, have been neglected as root causes of susceptibility, infectiousness and high rates of transmission of HIV at the level of populations. Vulnerability in terms of non-biological factors such as labour migration, prostitution, exchange of sex for survival, population movements due to war and violence, has received some attention but the solutions proposed to these problems are also inappropriately focused on individual behaviour and suffer from the same neglect of economic and political root causes. As the foundation for the international community's response to the AIDS pandemic, explanations of HIV/AIDS epidemiology in terms of individual behaviour are not only grossly inadequate, they are highly stigmatising and may in some cases, be racist. They have diverted attention from
Common approximations for density operators may lead to imaginary entropy
International Nuclear Information System (INIS)
Lendi, K.; Amaral Junior, M.R. do
1983-01-01
The meaning and validity of the usual second-order approximations for density operators are illustrated with the help of a simple, exactly soluble two-level model in which all relevant quantities can easily be controlled. This leads to exact upper-bound error estimates which help to select more precisely the permissible correlation times frequently introduced when stochastic potentials are present. A final consideration of information entropy clearly reveals the limitations of this kind of approximation procedure. (Author) [pt
Spot market activity remains weak as prices continue to fall
International Nuclear Information System (INIS)
Anon.
1996-01-01
A summary of financial data for the uranium spot market in November 1996 is provided. Price ranges for the restricted and unrestricted markets, conversion, and separative work are listed, and total market volume and new contracts are noted. Transactions made are briefly described. Deals made and pending in the spot concentrates, medium and long-term, conversion, and markets are listed for U.S. and non-U.S. buyers. Spot market activity increased in November with just over 1.0 million lbs of U3O8 equivalent being transacted compared to October's total of 530,000 lbs of U3O8 equivalent. The restricted uranium spot market price range slipped from $15.50-$15.70/lb U3O8 last month to $14.85/lb - $15.25/lb U3O8 this month. The unrestricted uranium spot market price range also slipped to $14.85/lb - $15.00/lb this month from $15.00/lb - $15.45/lb in October. Spot prices for conversion and separative work units remained at their October levels
Cosmological applications of Padé approximant
International Nuclear Information System (INIS)
Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan
2014-01-01
As is well known, in mathematics, any function can be approximated by a Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two applications. First, we obtain an analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they work well. In these exercises, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.
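The claim that a Padé approximant often beats the truncated Taylor series built from the same coefficients can be checked on a standard example (exp(x), a choice made here for illustration, not taken from the paper):

```python
from math import exp

# The [2/2] Pade approximant of exp(x) uses the same five Taylor coefficients
# as the 4th-order Taylor polynomial, yet approximates exp(x) more accurately.

def pade_2_2_exp(x):
    """[2/2] Pade approximant of exp(x)."""
    return (1 + x / 2 + x**2 / 12) / (1 - x / 2 + x**2 / 12)

def taylor_4_exp(x):
    """4th-order Taylor polynomial of exp(x) about 0."""
    return 1 + x + x**2 / 2 + x**3 / 6 + x**4 / 24

x = 1.0
print(abs(pade_2_2_exp(x) - exp(x)))   # ~4.0e-3
print(abs(taylor_4_exp(x) - exp(x)))   # ~9.9e-3, larger than the Pade error
```

At x = 1 the [2/2] approximant equals 19/7, already closer to e than the degree-4 Taylor polynomial; the advantage grows as x moves away from the expansion point.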
Cophylogeny reconstruction via an approximate Bayesian computation.
Baudet, C; Donati, B; Sinaimeri, B; Crescenzi, P; Gautier, C; Matias, C; Sagot, M-F
2015-05-01
Despite an increasingly vast literature on cophylogenetic reconstructions for studying host-parasite associations, understanding the common evolutionary history of such systems remains a problem that is far from being solved. Most algorithms for host-parasite reconciliation use an event-based model, where the events include in general (a subset of) cospeciation, duplication, loss, and host switch. All known parsimonious event-based methods then assign a cost to each type of event in order to find a reconstruction of minimum cost. The main problem with this approach is that the cost of the events strongly influences the reconciliation obtained. Some earlier approaches attempt to avoid this problem by finding a Pareto set of solutions and hence by considering event costs under some minimization constraints. To deal with this problem, we developed an algorithm, called Coala, for estimating the frequency of the events based on an approximate Bayesian computation approach. The benefits of this method are 2-fold: (i) it provides more confidence in the set of costs to be used in a reconciliation, and (ii) it allows estimation of the frequency of the events in cases where the data set consists of trees with a large number of taxa. We evaluate our method on simulated and on biological data sets. We show that in both cases, for the same pair of host and parasite trees, different sets of frequencies for the events lead to equally probable solutions. Moreover, often these solutions differ greatly in terms of the number of inferred events. It appears crucial to take this into account before attempting any further biological interpretation of such reconciliations. More generally, we also show that the set of frequencies can vary widely depending on the input host and parasite trees. Indiscriminately applying a standard vector of costs may thus not be a good strategy. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
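The general mechanism of approximate Bayesian computation (rejection sampling on a summary-statistic distance) can be sketched on a toy inference problem; this is a generic ABC illustration, not the Coala algorithm, and the prior, tolerance, and sample sizes are assumptions.

```python
import random

# Minimal rejection-sampling ABC sketch: estimate a coin's bias from an
# observed number of heads by keeping only parameter draws whose simulated
# data fall close enough to the observation.

random.seed(0)
n_flips, observed_heads = 100, 70
tolerance = 2

accepted = []
for _ in range(20000):
    theta = random.random()                       # uniform prior on [0, 1]
    heads = sum(random.random() < theta for _ in range(n_flips))
    if abs(heads - observed_heads) <= tolerance:  # summary-statistic distance
        accepted.append(theta)

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))  # close to 0.7
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is what makes ABC attractive when, as here, the data-generating model (cophylogeny events) is easy to simulate but hard to write down analytically.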
Errors due to the cylindrical cell approximation in lattice calculations
Energy Technology Data Exchange (ETDEWEB)
Newmarch, D A [Reactor Development Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)
1960-06-15
It is shown that serious errors in fine structure calculations may arise through the use of the cylindrical cell approximation together with transport theory methods. The effect of this approximation is to overestimate the ratio of the flux in the moderator to the flux in the fuel. It is demonstrated that the use of the cylindrical cell approximation gives a flux in the moderator which is considerably higher than in the fuel, even when the cell dimensions in units of mean free path tend to zero; whereas, for the case of real cells (e.g. square or hexagonal), the flux ratio must tend to unity. It is also shown that, for cylindrical cells of any size, the ratio of the flux in the moderator to flux in the fuel tends to infinity as the total neutron cross section in the moderator tends to zero; whereas the ratio remains finite for real cells. (author)
An effective algorithm for approximating adaptive behavior in seasonal environments
DEFF Research Database (Denmark)
Sainmont, Julie; Andersen, Ken Haste; Thygesen, Uffe Høgsbro
2015-01-01
Behavior affects most aspects of ecological processes and rates, and yet modeling frameworks which efficiently predict and incorporate behavioral responses into ecosystem models remain elusive. Behavioral algorithms based on lifetime optimization, adaptive dynamics or game theory are unsuited for large global models because of their high computational demand. We compare an easily integrated, computationally efficient behavioral algorithm known as Gilliam's rule against the solution from a life-history optimization. The approximation takes into account only the current conditions to optimize behavior; the so-called "myopic approximation", "short-sighted", or "static optimization". We explore the performance of the myopic approximation with diel vertical migration (DVM) as an example of a daily routine, a behavior with seasonal dependence that trades off predation risk with foraging...
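Gilliam's rule reduces to a one-line decision: under current conditions only, choose the habitat minimizing the ratio of mortality to growth. A minimal sketch, with habitat numbers that are illustrative assumptions rather than values from the study:

```python
# Myopic "Gilliam's rule": pick the habitat minimizing mortality/growth
# under current conditions, with no look-ahead.

def gilliams_rule(habitats):
    """Return the habitat with the smallest mortality-to-growth ratio."""
    return min(habitats, key=lambda h: h["mortality"] / h["growth"])

# E.g. depth layers for a vertically migrating zooplankter during daytime:
habitats = [
    {"name": "surface", "growth": 1.0, "mortality": 0.10},  # food-rich, risky
    {"name": "mid",     "growth": 0.5, "mortality": 0.04},  # intermediate
    {"name": "deep",    "growth": 0.2, "mortality": 0.01},  # dark, safe
]

print(gilliams_rule(habitats)["name"])  # "deep": 0.01/0.2 is the smallest ratio
```

Because the rule needs no state beyond current conditions, it costs a single comparison per time step, which is what makes it attractive for large ecosystem models.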
Uniform analytic approximation of Wigner rotation matrices
Hoffmann, Scott E.
2018-02-01
We derive the leading asymptotic approximation, for low angle θ, of the Wigner rotation matrix elements d^j_{m1 m2}(θ), uniform in j, m1, and m2. The result is in terms of a Bessel function of integer order. We numerically investigate the error for a variety of cases and find that the approximation can be useful over a significant range of angles. This approximation has application in the partial wave analysis of wavepacket scattering.
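In the special case m1 = m2 = 0 the Wigner element reduces to a Legendre polynomial, d^j_{00}(θ) = P_j(cos θ), and the Bessel-type approximation takes the familiar form P_j(cos θ) ≈ J_0((j + 1/2)θ). A pure-Python numerical check of this instance (the parameters are illustrative choices, not taken from the paper):

```python
from math import cos, factorial

# Check d^j_{00}(theta) = P_j(cos theta) against J_0((j + 1/2) theta),
# the m1 = m2 = 0 instance of the uniform Bessel approximation.

def legendre(n, x):
    """P_n(x) via the three-term recurrence."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def bessel_j0(x, terms=40):
    """J_0(x) from its power series (adequate for moderate x)."""
    return sum((-1) ** k * (x / 2) ** (2 * k) / factorial(k) ** 2
               for k in range(terms))

j, theta = 50, 0.1
exact = legendre(j, cos(theta))        # d^50_{00}(0.1)
approx = bessel_j0((j + 0.5) * theta)  # J_0(5.05)
print(abs(exact - approx))             # small even though j*theta is not small
```

The error stays small even when the product jθ is of order several, which is the "uniform in j" behaviour the paper emphasizes.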
Exact and approximate multiple diffraction calculations
International Nuclear Information System (INIS)
Alexander, Y.; Wallace, S.J.; Sparrow, D.A.
1976-08-01
A three-body potential scattering problem is solved in the fixed scatterer model exactly and approximately to test the validity of commonly used assumptions of multiple scattering calculations. The model problem involves two-body amplitudes that show diffraction-like differential scattering similar to high energy hadron-nucleon amplitudes. The exact fixed scatterer calculations are compared to Glauber approximation, eikonal-expansion results and a noneikonal approximation
Bent approximations to synchrotron radiation optics
International Nuclear Information System (INIS)
Heald, S.
1981-01-01
Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors
Local density approximations for relativistic exchange energies
International Nuclear Information System (INIS)
MacDonald, A.H.
1986-01-01
The use of local density approximations to approximate exchange interactions in relativistic electron systems is reviewed. Particular attention is paid to the physical content of these exchange energies by discussing results for the uniform relativistic electron gas from a new point of view. Work on applying these local density approximations in atoms and solids is reviewed and it is concluded that good accuracy is usually possible provided self-interaction corrections are applied. The local density approximations necessary for spin-polarized relativistic systems are discussed and some new results are presented
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
APPROXIMATIONS TO PERFORMANCE MEASURES IN QUEUING SYSTEMS
Directory of Open Access Journals (Sweden)
Kambo, N. S.
2012-11-01
Full Text Available Approximations to various performance measures in queuing systems have received considerable attention because these measures have wide applicability. In this paper we propose two methods to approximate the queuing characteristics of a GI/M/1 system. The first method is non-parametric in nature, using only the first three moments of the arrival distribution. The second method treads the known path of approximating the arrival distribution by a mixture of two exponential distributions by matching the first three moments. Numerical examples and optimal analysis of performance measures of GI/M/1 queues are provided to illustrate the efficacy of the methods, and are compared with benchmark approximations.
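Once an arrival distribution (fitted or exact) is in hand, the GI/M/1 characteristics follow from the root σ in (0, 1) of σ = A*(μ(1 − σ)), where A* is the Laplace-Stieltjes transform of the interarrival distribution. A minimal sketch using Erlang-2 arrivals as an illustrative choice (not the paper's two-exponential moment-matched fit):

```python
# GI/M/1 sketch: sigma solves sigma = A*(mu * (1 - sigma)), and the mean
# waiting time in queue is sigma / (mu * (1 - sigma)).

def gi_m_1_sigma(lst, mu, iterations=200):
    """Fixed-point iteration for sigma = A*(mu * (1 - sigma))."""
    sigma = 0.5
    for _ in range(iterations):
        sigma = lst(mu * (1.0 - sigma))
    return sigma

# Erlang-2 interarrival times with mean 1 (two exponential stages of rate 2):
lam = 2.0
lst_erlang2 = lambda s: (lam / (lam + s)) ** 2

mu = 1.25                                 # service rate, so rho = 0.8
sigma = gi_m_1_sigma(lst_erlang2, mu)
mean_wait = sigma / (mu * (1.0 - sigma))  # mean waiting time in queue

print(round(sigma, 4), round(mean_wait, 4))
```

For ρ < 1 the fixed-point iteration converges to the unique root in (0, 1); swapping in the LST of a fitted two-exponential mixture, as the second method of the paper does, changes only the `lst` argument.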
Diagonal Pade approximations for initial value problems
International Nuclear Information System (INIS)
Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.
1987-06-01
Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab
Approximation properties of fine hyperbolic graphs
Indian Academy of Sciences (India)
2016-08-26
In this paper, we propose a definition of approximation property which is called the metric invariant translation approximation property for a countable discrete metric space. Moreover, we use ...
Approximation properties of fine hyperbolic graphs
Indian Academy of Sciences (India)
2010 Mathematics Subject Classification. 46L07. 1. Introduction. Given a countable discrete group G, some nice approximation properties for the reduced C*-algebras C*_r(G) can give us the approximation properties of G. For example, Lance [7] proved that the nuclearity of C*_r(G) is equivalent to the amenability of G; ...
Non-Linear Approximation of Bayesian Update
Litvinenko, Alexander
2016-01-01
We develop a non-linear approximation of expensive Bayesian formula. This non-linear approximation is applied directly to Polynomial Chaos Coefficients. In this way, we avoid Monte Carlo sampling and sampling error. We can show that the famous Kalman Update formula is a particular case of this update.
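The remark that the Kalman update formula is a particular case of the Bayesian update can be illustrated in the scalar linear-Gaussian setting, where the conjugate Bayesian posterior and the Kalman update coincide. The numbers below are illustrative assumptions.

```python
# Scalar Gaussian Bayesian update, i.e. the Kalman update formula as the
# linear-Gaussian special case of Bayes' rule.

def kalman_update(prior_mean, prior_var, obs, obs_var):
    """Posterior mean/variance after observing `obs` with noise variance `obs_var`."""
    gain = prior_var / (prior_var + obs_var)           # Kalman gain
    post_mean = prior_mean + gain * (obs - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

m, v = kalman_update(prior_mean=0.0, prior_var=4.0, obs=2.0, obs_var=1.0)
print(m, v)  # mean pulled toward the observation, variance reduced
```

The posterior precision is the sum of the prior and observation precisions, exactly the conjugate-Gaussian Bayes result; the non-linear approximation of the paper generalizes this update beyond the linear-Gaussian case.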
Simultaneous approximation in scales of Banach spaces
International Nuclear Information System (INIS)
Bramble, J.H.; Scott, R.
1978-01-01
The problem of verifying optimal approximation simultaneously in different norms in a Banach scale is reduced to verification of optimal approximation in the highest order norm. The basic tool used is the Banach space interpolation method developed by Lions and Peetre. Applications are given to several problems arising in the theory of finite element methods
Approximation algorithms for guarding holey polygons ...
African Journals Online (AJOL)
Guarding edges of polygons is a version of the art gallery problem. The goal is to find the minimum number of guards needed to cover the edges of a polygon. This problem is NP-hard, and to our knowledge there are approximation algorithms only for simple polygons. In this paper we present two approximation algorithms for guarding ...
Efficient automata constructions and approximate automata
Watson, B.W.; Kourie, D.G.; Ngassam, E.K.; Strauss, T.; Cleophas, L.G.W.A.
2008-01-01
In this paper, we present data structures and algorithms for efficiently constructing approximate automata. An approximate automaton for a regular language L is one which accepts at least L. Such automata can be used in a variety of practical applications, including network security pattern
Spline approximation, Part 1: Basic methodology
Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar
2018-04-01
In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
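A minimal instance of the spline approximation described here, a degree-1 truncated-power basis {1, x, (x − k)_+} with a single knot fitted by least squares, can be sketched directly; the data, knot location, and coefficients below are illustrative assumptions.

```python
import numpy as np

# Least-squares approximation with a degree-1 truncated-power spline basis
# {1, x, (x - knot)_+} and one knot.

def truncated_power_basis(x, knot):
    """Design matrix for a linear spline with a single knot."""
    return np.column_stack([np.ones_like(x), x, np.maximum(x - knot, 0.0)])

knot = 0.5
x = np.linspace(0.0, 1.0, 21)
# Data from a piecewise-linear function with a kink at the knot (no noise):
y = 1.0 + 2.0 * x - 3.0 * np.maximum(x - knot, 0.0)

A = truncated_power_basis(x, knot)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coeffs, 6))  # recovers the generating coefficients 1, 2, -3
```

Adding knots enlarges the basis in the same way, and replacing the truncated powers with B-splines (the subject of Part 2) changes the basis but not the least-squares machinery.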
Nonlinear approximation with general wave packets
DEFF Research Database (Denmark)
Borup, Lasse; Nielsen, Morten
2005-01-01
We study nonlinear approximation in the Triebel-Lizorkin spaces with dictionaries formed by dilating and translating one single function g. A general Jackson inequality is derived for best m-term approximation with such dictionaries. In some special cases where g has a special structure, a complete...
Quirks of Stirling's Approximation
Macrae, Roderick M.; Allgeier, Benjamin M.
2013-01-01
Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
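The pitfall the article points at can be made concrete numerically. A small sketch (not taken from the article itself) compares the exact ln n! with the crude two-term form n ln n − n and with the version that keeps the ½ ln(2πn) correction:

```python
import math

def ln_factorial_exact(n):
    # Exact ln(n!) via the log-gamma function: ln(n!) = lgamma(n + 1).
    return math.lgamma(n + 1)

def stirling_simple(n):
    # Crudest form often used in entropy derivations: ln n! ~ n ln n - n.
    return n * math.log(n) - n

def stirling_full(n):
    # Leading term of Stirling's series: ln n! ~ n ln n - n + 0.5 ln(2 pi n).
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

for n in (10, 100, 10000):
    exact = ln_factorial_exact(n)
    print(n, exact - stirling_simple(n), exact - stirling_full(n))
```

For n = 10 the two-term form is off by more than 2 in ln n! (a factor of about 8 in n! itself), which is exactly the kind of error that misleads naive small-n applications, while the corrected form is already accurate to better than 0.01.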
Non-Linear Approximation of Bayesian Update
Litvinenko, Alexander
2016-06-23
We develop a non-linear approximation of expensive Bayesian formula. This non-linear approximation is applied directly to Polynomial Chaos Coefficients. In this way, we avoid Monte Carlo sampling and sampling error. We can show that the famous Kalman Update formula is a particular case of this update.
Improved Dutch Roll Approximation for Hypersonic Vehicle
Directory of Open Access Journals (Sweden)
Liang-Liang Yin
2014-06-01
Full Text Available An improved Dutch roll approximation for hypersonic vehicles is presented. From the new approximation, the Dutch roll frequency is shown to be a function of the stability-axis yaw stability, and the Dutch roll damping is mainly affected by the roll damping ratio. In addition, an important parameter called the roll-to-yaw ratio is obtained to describe the Dutch roll mode. The solution shows that a large roll-to-yaw ratio is a general characteristic of hypersonic vehicles, which causes large errors in the conventional approximation. Predictions from the literal approximations derived in this paper are compared with actual numerical values for an example hypersonic vehicle; the results show that the approximations work well and the error is below 10%.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
A Bayesian Framework for Remaining Useful Life Estimation
National Aeronautics and Space Administration — The estimation of remaining useful life (RUL) of a faulty component is at the center of system prognostics and health management. It gives operators a potent tool in...
Robotics to Enable Older Adults to Remain Living at Home
Pearce, Alan J.; Adair, Brooke; Miller, Kimberly; Ozanne, Elizabeth; Said, Catherine; Santamaria, Nick; Morris, Meg E.
2012-01-01
Given the rapidly ageing population, interest is growing in robots to enable older people to remain living at home. We conducted a systematic review and critical evaluation of the scientific literature, from 1990 to the present, on the use of robots in aged care. The key research questions were as follows: (1) what is the range of robotic devices available to enable older people to remain mobile, independent, and safe? and, (2) what is the evidence demonstrating that robotic devices are effec...
Approximate models for the analysis of laser velocimetry correlation functions
International Nuclear Information System (INIS)
Robinson, D.P.
1981-01-01
Velocity distributions in the subchannels of an eleven pin test section representing a slice through a Fast Reactor sub-assembly were measured with a dual beam laser velocimeter system using a Malvern K 7023 digital photon correlator for signal processing. Two techniques were used for data reduction of the correlation function to obtain velocity and turbulence values. Whilst both techniques were in excellent agreement on the velocity, marked discrepancies were apparent in the turbulence levels. As a consequence of this the turbulence data were not reported. Subsequent investigation has shown that the approximate technique used as the basis of Malvern's Data Processor 7023V is restricted in its range of application. In this note alternative approximate models are described and evaluated. The objective of this investigation was to develop an approximate model which could be used for on-line determination of the turbulence level. (author)
Regression with Sparse Approximations of Data
DEFF Research Database (Denmark)
Noorzad, Pardis; Sturm, Bob L.
2012-01-01
We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of \(k\)-nearest neighbors regression (\(k\)-NNR), and more generally, local polynomial kernel regression. Unlike \(k\)-NNR, however, SPARROW can adapt the number of regressors to use based
Conditional Density Approximations with Mixtures of Polynomials
DEFF Research Database (Denmark)
Varando, Gherardo; López-Cruz, Pedro L.; Nielsen, Thomas Dyhre
2015-01-01
Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities
Hardness and Approximation for Network Flow Interdiction
Chestnut, Stephen R.; Zenklusen, Rico
2015-01-01
In the Network Flow Interdiction problem an adversary attacks a network in order to minimize the maximum s-t-flow. Very little is known about the approximability of this problem despite decades of interest in it. We present the first approximation hardness, showing that Network Flow Interdiction and several of its variants cannot be much easier to approximate than Densest k-Subgraph. In particular, any $n^{o(1)}$-approximation algorithm for Network Flow Interdiction would imply an $n^{o(1)}...
Approximation of the semi-infinite interval
Directory of Open Access Journals (Sweden)
A. McD. Mercer
1980-01-01
Full Text Available The approximation of a function f∈C[a,b] by Bernstein polynomials is well-known. It is based on the binomial distribution. O. Szasz has shown that there are analogous approximations on the interval [0,∞) based on the Poisson distribution. Recently R. Mohapatra has generalized Szasz's result to the case in which the approximating function is $\alpha e^{-ux}\sum_{k=N}^{\infty}\frac{(ux)^{k\alpha+\beta-1}}{\Gamma(k\alpha+\beta)}\,f\!\left(\frac{k\alpha}{u}\right)$. The present note shows that these results are special cases of a Tauberian theorem for certain infinite series having positive coefficients.
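For orientation, the classical Szász operator underlying these results, $S_n(f)(x)=e^{-nx}\sum_{k\ge 0}\frac{(nx)^k}{k!}f(k/n)$, can be evaluated directly. The sketch below is illustrative; the fixed truncation of the Poisson sum is an assumption, and it requires x > 0.

```python
import math

def szasz(f, n, x, terms=200):
    # Szász operator S_n(f)(x) = e^{-nx} * sum_k (nx)^k / k! * f(k/n):
    # the Poisson-weight analogue of the Bernstein operator, on [0, infinity).
    # Weights are accumulated in log space for numerical stability; assumes x > 0.
    total = 0.0
    log_weight = -n * x  # log of e^{-nx} (nx)^k / k!, starting at k = 0
    for k in range(terms):
        total += math.exp(log_weight) * f(k / n)
        log_weight += math.log(n * x) - math.log(k + 1)
    return total

# Linear functions are reproduced exactly; for f(t) = t^2 one gets x^2 + x/n,
# since K ~ Poisson(nx) has mean nx and variance nx.
print(szasz(lambda t: t, n=50, x=1.0))      # close to 1.0
print(szasz(lambda t: t * t, n=50, x=1.0))  # close to 1.02
```

The x/n term in the second moment is the same Poisson-variance effect that drives the operator's O(1/n) approximation rate.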
Mathematical analysis, approximation theory and their applications
Gupta, Vijay
2016-01-01
Designed for graduate students, researchers, and engineers in mathematics, optimization, and economics, this self-contained volume presents theory, methods, and applications in mathematical analysis and approximation theory. Specific topics include: approximation of functions by linear positive operators with applications to computer aided geometric design, numerical analysis, optimization theory, and solutions of differential equations. Recent and significant developments in approximation theory, special functions and q-calculus along with their applications to mathematics, engineering, and social sciences are discussed and analyzed. Each chapter enriches the understanding of current research problems and theories in pure and applied research.
Forensic considerations when dealing with incinerated human dental remains.
Reesu, Gowri Vijay; Augustine, Jeyaseelan; Urs, Aadithya B
2015-01-01
Establishing the human dental identification process relies upon sufficient post-mortem data being recovered to allow for a meaningful comparison with ante-mortem records of the deceased person. Teeth are the most indestructible components of the human body and are structurally unique in their composition. They possess the highest resistance to most environmental effects like fire, desiccation, decomposition and prolonged immersion. In most natural as well as man-made disasters, teeth may provide the only means of positive identification of an otherwise unrecognizable body. It is imperative that dental evidence should not be destroyed through erroneous handling until appropriate radiographs, photographs, or impressions can be fabricated. Proper methods of physical stabilization of incinerated human dental remains should be followed. The maintenance of integrity of extremely fragile structures is crucial to the successful confirmation of identity. In such situations, the forensic dentist must stabilise these teeth before the fragile remains are transported to the mortuary to ensure preservation of possibly vital identification evidence. Thus, while dealing with any incinerated dental remains, a systematic approach must be followed through each stage of evaluation of incinerated dental remains to prevent the loss of potential dental evidence. This paper presents a composite review of various studies on incinerated human dental remains and discusses their impact on the process of human identification and suggests a step by step approach. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Approximate Approaches to the One-Dimensional Finite Potential Well
Singh, Shilpi; Pathak, Praveen; Singh, Vijay A.
2011-01-01
The one-dimensional finite well is a textbook problem. We propose approximate approaches to obtain the energy levels of the well. The finite well is also encountered in semiconductor heterostructures where the carrier mass inside the well (m[subscript i]) is taken to be distinct from mass outside (m[subscript o]). A relevant parameter is the mass…
Low Rank Approximation Algorithms, Implementation, Applications
Markovsky, Ivan
2012-01-01
Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...
Nonlinear Ritz approximation for Fredholm functionals
Directory of Open Access Journals (Sweden)
Mudhir A. Abdul Hussain
2015-11-01
Full Text Available In this article we use the modified Lyapunov-Schmidt reduction to find a nonlinear Ritz approximation for a Fredholm functional. This functional corresponds to a nonlinear Fredholm operator defined by a nonlinear fourth-order differential equation.
Euclidean shortest paths exact or approximate algorithms
Li, Fajie
2014-01-01
This book reviews algorithms for the exact or approximate solution of shortest-path problems, with a specific focus on a class of algorithms called rubberband algorithms. The coverage includes mathematical proofs for many of the given statements.
Square well approximation to the optical potential
International Nuclear Information System (INIS)
Jain, A.K.; Gupta, M.C.; Marwadi, P.R.
1976-01-01
Approximations for obtaining T-matrix elements for a sum of several potentials in terms of T-matrices for the individual potentials are studied. Based on model S-wave calculations for a sum of two separable non-local potentials with Yukawa-type form factors and for a sum of two delta function potentials, it is shown that the T-matrix for a sum of several potentials can be approximated satisfactorily over all energy regions by the sum of the T-matrices for the individual potentials. Based on this, an approximate method is presented for finding the T-matrix of any local potential by approximating it with a sum of a suitable number of square wells. This provides an interesting way to calculate the T-matrix for any arbitrary potential in terms of Bessel functions to a good degree of accuracy. The method is applied to the Woods-Saxon potential and good agreement with exact results is found. (author)
Approximation for the adjoint neutron spectrum
International Nuclear Information System (INIS)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da
2002-01-01
The purpose of this work is the determination of an analytical approximation capable of reproducing the adjoint neutron flux in the energy range of the narrow resonances (NR). In a previous work we developed a method for calculating the adjoint spectrum from the adjoint neutron balance equations, which were obtained by the collision probabilities method; that method involved a considerable amount of numerical calculation. In the analytical method some approximations were made, such as multiplying the escape probability in the fuel by the adjoint flux in the moderator. These approximations, for the case of the narrow resonances, were substituted into the adjoint neutron balance equation for the fuel, resulting in an analytical approximation for the adjoint flux. The results obtained in this work were compared with those generated by the reference method and proved good and precise for the adjoint neutron flux in the narrow resonances. (author)
Saddlepoint approximation methods in financial engineering
Kwok, Yue Kuen
2018-01-01
This book summarizes recent advances in applying saddlepoint approximation methods to financial engineering. It addresses pricing exotic financial derivatives and calculating risk contributions to Value-at-Risk and Expected Shortfall in credit portfolios under various default correlation models. These standard problems involve the computation of tail probabilities and tail expectations of the corresponding underlying state variables. The text offers in a single source most of the saddlepoint approximation results in financial engineering, with different sets of ready-to-use approximation formulas. Much of this material may otherwise only be found in original research publications. The exposition and style are made rigorous by providing formal proofs of most of the results. Starting with a presentation of the derivation of a variety of saddlepoint approximation formulas in different contexts, this book will help new researchers to learn the fine technicalities of the topic. It will also be valuable to quanti...
Methods of Fourier analysis and approximation theory
Tikhonov, Sergey
2016-01-01
Different facets of interplay between harmonic analysis and approximation theory are covered in this volume. The topics included are Fourier analysis, function spaces, optimization theory, partial differential equations, and their links to modern developments in the approximation theory. The articles of this collection were originated from two events. The first event took place during the 9th ISAAC Congress in Krakow, Poland, 5th-9th August 2013, at the section “Approximation Theory and Fourier Analysis”. The second event was the conference on Fourier Analysis and Approximation Theory in the Centre de Recerca Matemàtica (CRM), Barcelona, during 4th-8th November 2013, organized by the editors of this volume. All articles selected to be part of this collection were carefully reviewed.
Pion-nucleus cross sections approximation
International Nuclear Information System (INIS)
Barashenkov, V.S.; Polanski, A.; Sosnin, A.N.
1990-01-01
An analytical approximation of pion-nucleus elastic and inelastic interaction cross-sections is suggested, which could be applied in the energy range exceeding several dozens of MeV for nuclei heavier than beryllium. 3 refs.; 4 tabs
APPROXIMATE DEVELOPMENTS FOR SURFACES OF REVOLUTION
Directory of Open Access Journals (Sweden)
Mădălina Roxana Buneci
2016-12-01
Full Text Available The purpose of this paper is to provide a set of Maple procedures to construct approximate developments of a general surface of revolution, generalizing the well-known gore method for the sphere
Steepest descent approximations for accretive operator equations
International Nuclear Information System (INIS)
Chidume, C.E.
1993-03-01
A necessary and sufficient condition is established for the strong convergence of the steepest descent approximation to a solution of equations involving quasi-accretive operators defined on a uniformly smooth Banach space. (author). 49 refs
Seismic wave extrapolation using lowrank symbol approximation
Fomel, Sergey
2012-04-30
We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction combines Fourier transforms in space with a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm, and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.
An overview on Approximate Bayesian computation*
Directory of Open Access Journals (Sweden)
Baragatti Meïli
2014-01-01
Full Text Available Approximate Bayesian computation techniques, also called likelihood-free methods, are among the most satisfactory approaches to intractable likelihood problems. This overview presents recent results since their introduction about ten years ago in population genetics.
Approximate Computing Techniques for Iterative Graph Algorithms
Energy Technology Data Exchange (ETDEWEB)
Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh; Kalyanaraman, Anantharaman; Chavarria Miranda, Daniel G.; Krishnamoorthy, Sriram
2017-12-18
Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science, and their subsequent adoption to scale similar graph algorithms.
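As a toy illustration of loop perforation (not the authors' implementation), the sketch below perforates the outer power-iteration loop of PageRank, running a tenth of the sweeps, and checks that the node ranking survives on a small assumed graph:

```python
import numpy as np

def pagerank(adj, iters, d=0.85):
    # Plain power iteration; 'iters' is the outer loop being perforated.
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        spread = np.zeros(n)
        for u in range(n):
            if out_deg[u]:
                spread += d * r[u] * adj[u] / out_deg[u]
            else:
                spread += d * r[u] / n  # dangling node: spread mass uniformly
        r = (1 - d) / n + spread
    return r

# Small directed toy graph: 0 -> {1, 2}, 1 -> 2, 2 -> 0, 3 -> 2.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 0],
                [0, 0, 1, 0]], dtype=float)

exact = pagerank(adj, iters=100)
perforated = pagerank(adj, iters=10)  # perforated: 10x fewer sweeps
print(np.argsort(-exact), np.argsort(-perforated))
```

The ranking (node 2 first, then 0, 1, 3) is already correct after the perforated run, which is the quality-for-performance trade the abstract describes; real perforation heuristics decide adaptively how much of the loop can be skipped.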
Development of a remaining lifetime management system for NPPS
International Nuclear Information System (INIS)
Galvan, J.C.; Regano, M.; Hevia Ruperez, F.
1994-01-01
The interest evinced by Spanish nuclear power plants in a tool to support remaining lifetime management led to UNESA's application to OCIDE in 1992, and the latter's approval, for financing a project to develop a Remaining Lifetime Evaluation System for LWR nuclear power plants. This project is currently being developed under UNESA leadership, with the collaboration of three Spanish engineering companies and a research centre. The paper describes its objectives, activities, current status and prospects. The project is defined in two phases: the first consists of the identification and analysis of the main ageing phenomena and their significant parameters, and the specification of the Remaining Lifetime Evaluation System (RLES); the second is the implementation of a pilot application of the RLES to verify its effectiveness. (Author)
Remaining life assessment of a high pressure turbine rotor
International Nuclear Information System (INIS)
Nguyen, Ninh; Little, Alfie
2012-01-01
This paper describes finite element and fracture mechanics based modelling work that provides a useful tool for evaluation of the remaining life of a high pressure (HP) steam turbine rotor that had experienced thermal fatigue cracking. An axis-symmetrical model of a HP rotor was constructed. Steam temperature, pressure and rotor speed data from start ups and shut downs were used for the thermal and stress analysis. Operating history and inspection records were used to benchmark the damage experienced by the rotor. Fracture mechanics crack growth analysis was carried out to evaluate the remaining life of the rotor under themal cyclic loading conditions. The work confirmed that the fracture mechanics approach in conjunction with finite element modelling provides a useful tool for assessing the remaining life of high temperature components in power plants.
Approximative solutions of stochastic optimization problem
Czech Academy of Sciences Publication Activity Database
Lachout, Petr
2010-01-01
Roč. 46, č. 3 (2010), s. 513-523 ISSN 0023-5954 R&D Projects: GA ČR GA201/08/0539 Institutional research plan: CEZ:AV0Z10750506 Keywords : Stochastic optimization problem * sensitivity * approximative solution Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/SI/lachout-approximative solutions of stochastic optimization problem.pdf
Lattice quantum chromodynamics with approximately chiral fermions
Energy Technology Data Exchange (ETDEWEB)
Hierl, Dieter
2008-05-15
In this work we present Lattice QCD results obtained by approximately chiral fermions. We use the CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ+ pentaquark on the lattice. Furthermore we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory applying the epsilon expansion. (orig.)
An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
Stochastic quantization and mean field approximation
International Nuclear Information System (INIS)
Jengo, R.; Parga, N.
1983-09-01
In the context of stochastic quantization we propose factorized approximate solutions of the Fokker-Planck equation for the XY and Z_N spin systems in D dimensions. The resulting differential equation for a factor can be solved, and it is found to give, in the limit t → ∞, the mean field or, in the more general case, the Bethe-Peierls approximation. (author)
Polynomial approximation of functions in Sobolev spaces
International Nuclear Information System (INIS)
Dupont, T.; Scott, R.
1980-01-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces
Magnus approximation in the adiabatic picture
International Nuclear Information System (INIS)
Klarsfeld, S.; Oteo, J.A.
1991-01-01
A simple approximate nonperturbative method is described for treating time-dependent problems that works well in the intermediate regime far from both the sudden and the adiabatic limits. The method consists of applying the Magnus expansion after transforming to the adiabatic basis defined by the eigenstates of the instantaneous Hamiltonian. A few exactly soluble examples are considered in order to assess the domain of validity of the approximation. (author) 32 refs., 4 figs
Lattice quantum chromodynamics with approximately chiral fermions
International Nuclear Information System (INIS)
Hierl, Dieter
2008-05-01
In this work we present Lattice QCD results obtained by approximately chiral fermions. We use the CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ+ pentaquark on the lattice. Furthermore we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory applying the epsilon expansion. (orig.)
Approximating centrality in evolving graphs: toward sublinearity
Priest, Benjamin W.; Cybenko, George
2017-05-01
The identification of important nodes is a ubiquitous problem in the analysis of social networks. Centrality indices (such as degree centrality, closeness centrality, betweenness centrality, PageRank, and others) are used across many domains to accomplish this task. However, the computation of such indices is expensive on large graphs. Moreover, evolving graphs are becoming increasingly important in many applications. It is therefore desirable to develop on-line algorithms that can approximate centrality measures using memory sublinear in the size of the graph. We discuss the challenges facing the semi-streaming computation of many centrality indices. In particular, we apply recent advances in the streaming and sketching literature to provide a preliminary streaming approximation algorithm for degree centrality utilizing CountSketch and a multi-pass semi-streaming approximation algorithm for closeness centrality leveraging a spanner obtained through iteratively sketching the vertex-edge adjacency matrix. We also discuss possible ways forward for approximating betweenness centrality, as well as spectral measures of centrality. We provide a preliminary result using sketched low-rank approximations to approximate the output of the HITS algorithm.
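A minimal sketch of the CountSketch idea mentioned above, applied to degree centrality over an edge stream. The hash construction, sketch dimensions, and toy graph are assumptions for illustration, not the paper's construction:

```python
import hashlib
import statistics

class CountSketch:
    # Streaming frequency sketch: depth rows of width signed counters,
    # using memory independent of the number of distinct items.
    def __init__(self, depth=5, width=64):
        self.depth, self.width = depth, width
        self.table = [[0] * width for _ in range(depth)]

    def _hashes(self, item, row):
        # Derive a bucket and a +/-1 sign per row from one keyed hash (assumed scheme).
        h = hashlib.blake2b(f"{row}:{item}".encode(), digest_size=8).digest()
        v = int.from_bytes(h, "big")
        return v % self.width, (1 if (v >> 32) & 1 else -1)

    def add(self, item, count=1):
        for r in range(self.depth):
            b, s = self._hashes(item, r)
            self.table[r][b] += s * count

    def estimate(self, item):
        # Median of per-row signed estimates is robust to hash collisions.
        vals = []
        for r in range(self.depth):
            b, s = self._hashes(item, r)
            vals.append(s * self.table[r][b])
        return statistics.median(vals)

# Degree centrality from an edge stream: each edge (u, v) increments both endpoints.
edges = [(0, i) for i in range(1, 40)] + [(1, 2), (2, 3), (3, 4)]
cs = CountSketch()
for u, v in edges:
    cs.add(u)
    cs.add(v)

print("estimated degree of hub node 0:", cs.estimate(0))  # true degree is 39
```

The hub's estimate is recovered to within the collision noise of the 5 x 64 table, the heavy-hitter behavior that makes sketches attractive for the semi-streaming setting the abstract discusses.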
On random age and remaining lifetime for populations of items
DEFF Research Database (Denmark)
Finkelstein, M.; Vaupel, J.
2015-01-01
We consider items that are incepted into operation already having a random (initial) age and define the corresponding remaining lifetime. We show that these lifetimes are identically distributed when the age distribution is equal to the equilibrium distribution of renewal theory. Then we develop the population studies approach to the problem and generalize the setting in terms of stationary and stable populations of items. We obtain new stochastic comparisons for the corresponding population ages and remaining lifetimes that can be useful in applications. Copyright (c) 2014 John Wiley...
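For reference, the equilibrium distribution invoked in the abstract is, for a lifetime cdf F with mean μ, the following standard renewal-theory object (a textbook fact, not a result specific to this paper):

```latex
% Equilibrium (stationary renewal) distribution of a lifetime with cdf F, mean \mu:
F_e(x) = \frac{1}{\mu} \int_0^x \bigl(1 - F(t)\bigr)\,\mathrm{d}t .
% If the initial age is distributed according to F_e, the remaining lifetime has
% the same distribution F_e. Sanity check: for F(t) = 1 - e^{-\lambda t} with mean
% \mu = 1/\lambda,
F_e(x) = \lambda \int_0^x e^{-\lambda t}\,\mathrm{d}t = 1 - e^{-\lambda x} = F(x),
% recovering the memoryless property of the exponential distribution.
```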
Cheap contouring of costly functions: the Pilot Approximation Trajectory algorithm
International Nuclear Information System (INIS)
Huttunen, Janne M J; Stark, Philip B
2012-01-01
The Pilot Approximation Trajectory (PAT) contour algorithm can find the contour of a function accurately when it is not practical to evaluate the function on a grid dense enough to use a standard contour algorithm, for instance, when evaluating the function involves conducting a physical experiment or a computationally intensive simulation. PAT relies on an inexpensive pilot approximation to the function, such as interpolating from a sparse grid of inexact values, or solving a partial differential equation (PDE) numerically using a coarse discretization. For each level of interest, the location and ‘trajectory’ of an approximate contour of this pilot function are used to decide where to evaluate the original function to find points on its contour. Those points are joined by line segments to form the PAT approximation of the contour of the original function. Approximating a contour numerically amounts to estimating a lower level set of the function, the set of points on which the function does not exceed the contour level. The area of the symmetric difference between the true lower level set and the estimated lower level set measures the accuracy of the contour. PAT measures its own accuracy by finding an upper confidence bound for this area. In examples, PAT can estimate a contour more accurately than standard algorithms, using far fewer function evaluations than standard algorithms require. We illustrate PAT by constructing a confidence set for viscosity and thermal conductivity of a flowing gas from simulated noisy temperature measurements, a problem in which each evaluation of the function to be contoured requires solving a different set of coupled nonlinear PDEs. (paper)
Methodology for Extraction of Remaining Sodium of Used Sodium Containers
International Nuclear Information System (INIS)
Jung, Minhwan; Kim, Jongman; Cho, Youngil; Jeong, Jiyoung
2014-01-01
Sodium used as a coolant in the SFR (Sodium-cooled Fast Reactor) reacts easily with most elements due to its high reactivity. If sodium at high temperature leaks outside the system boundary and makes contact with oxygen, it starts to burn and toxic aerosols are produced. In addition, it generates flammable hydrogen gas through reaction with water; hydrogen gas is explosive within the range of 4-75 vol%. Therefore, sodium should be handled carefully in accordance with standard procedures, even when only a small amount of sodium remains inside the containers and drums used for experiments. After an experiment, all sodium experimental apparatuses should be carefully dismantled through a series of draining, residual sodium extraction, and cleaning steps if they are no longer to be reused. In this work, a system for extracting the sodium remaining in used sodium drums has been developed and an operating procedure for the system has been established. A methodology for extracting remaining sodium from used sodium containers has thus been developed as part of sodium facility maintenance work. The sodium extraction system for the remaining sodium of the used drums was designed and tested successfully. This work will contribute to the establishment of sodium handling technology for the PGSFR (Prototype Gen-IV Sodium-cooled Fast Reactor).
Predicting the Remaining Useful Life of Rolling Element Bearings
DEFF Research Database (Denmark)
Hooghoudt, Jan Otto; Jantunen, E; Yi, Yang
2018-01-01
Condition monitoring of rolling element bearings is of vital importance in order to keep the industrial wheels running. In wind industry this is especially important due to the challenges in practical maintenance. The paper presents an attempt to improve the capability of prediction of remaining...
The experiences of remaining nurse tutors during the transformation ...
African Journals Online (AJOL)
The transformation of public services and education in South Africa is part of the political and socioeconomic transition to democracy. Changes are occurring in every field, including that of the health services. A qualitative study was undertaken to investigate the experiences of the remaining nurse tutors at a school of ...
Remaining childless : Causes and consequences from a life course perspective
Keizer, R.
2010-01-01
Little is known about childless individuals in the Netherlands, although currently one out of every five Dutch individuals remains childless. Who are they? How did they end up being childless? How and to what extent are their life outcomes influenced by their childlessness? By focusing on individual
Molecular genetic identification of skeletal remains of apartheid ...
African Journals Online (AJOL)
The Truth and Reconciliation Commission made significant progress in examining abuses committed during the apartheid era in South Africa. Despite information revealed by the commission, a large number of individuals remained missing when the commission closed its proceedings. This provided the impetus for the ...
Palmar, Patellar, and Pedal Human Remains from Pavlov
Czech Academy of Sciences Publication Activity Database
Trinkaus, E.; Wojtal, P.; Wilczyński, J.; Sázelová, Sandra; Svoboda, Jiří
2017-01-01
Vol. 2017, June (2017), pp. 73-101 ISSN 1545-0031 Institutional support: RVO:68081758 Keywords: Gravettian * human remains * isolated bones * anatomically modern humans * Upper Paleolithic Subject RIV: AC - Archeology, Anthropology, Ethnology OBOR OECD: Archaeology http://paleoanthro.org/media/journal/content/PA20170073.pdf
Robotics to Enable Older Adults to Remain Living at Home
Directory of Open Access Journals (Sweden)
Alan J. Pearce
2012-01-01
Full Text Available Given the rapidly ageing population, interest is growing in robots to enable older people to remain living at home. We conducted a systematic review and critical evaluation of the scientific literature, from 1990 to the present, on the use of robots in aged care. The key research questions were as follows: (1) what is the range of robotic devices available to enable older people to remain mobile, independent, and safe? and, (2) what is the evidence demonstrating that robotic devices are effective in enabling independent living in community dwelling older people? Following database searches for relevant literature an initial yield of 161 articles was obtained. Titles and abstracts of articles were then reviewed by 2 independent people to determine suitability for inclusion. Forty-two articles met the criteria for question 1. Of these, 4 articles met the criteria for question 2. Results showed that robotics is currently available to assist older healthy people and people with disabilities to remain independent and to monitor their safety and social connectedness. Most studies were conducted in laboratories and hospital clinics. Currently limited evidence demonstrates that robots can be used to enable people to remain living at home, although this is an emerging smart technology that is rapidly evolving.
Authentic leadership: becoming and remaining an authentic nurse leader.
Murphy, Lin G
2012-11-01
This article explores how chief nurse executives became and remained authentic leaders. Using narrative inquiry, this qualitative study focused on the life stories of participants. Results demonstrate the importance of reframing, reflection in alignment with values, and the courage needed as nurse leaders progress to authenticity.
Robotics to enable older adults to remain living at home.
Pearce, Alan J; Adair, Brooke; Miller, Kimberly; Ozanne, Elizabeth; Said, Catherine; Santamaria, Nick; Morris, Meg E
2012-01-01
Given the rapidly ageing population, interest is growing in robots to enable older people to remain living at home. We conducted a systematic review and critical evaluation of the scientific literature, from 1990 to the present, on the use of robots in aged care. The key research questions were as follows: (1) what is the range of robotic devices available to enable older people to remain mobile, independent, and safe? and, (2) what is the evidence demonstrating that robotic devices are effective in enabling independent living in community dwelling older people? Following database searches for relevant literature an initial yield of 161 articles was obtained. Titles and abstracts of articles were then reviewed by 2 independent people to determine suitability for inclusion. Forty-two articles met the criteria for question 1. Of these, 4 articles met the criteria for question 2. Results showed that robotics is currently available to assist older healthy people and people with disabilities to remain independent and to monitor their safety and social connectedness. Most studies were conducted in laboratories and hospital clinics. Currently limited evidence demonstrates that robots can be used to enable people to remain living at home, although this is an emerging smart technology that is rapidly evolving.
Dinosaur remains from the type Maastrichtian: An update
Weishampel, David B.; Mulder, Eric W A; Dortangs, Rudi W.; Jagt, John W M; Jianu, Coralia Maria; Kuypers, Marcel M M; Peeters, Hans H G; Schulp, Anne S.
1999-01-01
Isolated cranial and post-cranial remains of hadrosaurid dinosaurs have been collected from various outcrops in the type area of the Maastrichtian stage during the last few years. In the present contribution, dentary and maxillary teeth are recorded from the area for the first time. Post-cranial
A Poisson process approximation for generalized K-S confidence regions
Arsham, H.; Miller, D. R.
1982-01-01
One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
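A flavor of such one-sided bands can be given with the empirical CDF. The sketch below uses the one-sided Dvoretzky-Kiefer-Wolfowitz bound as a conservative, constant-width stand-in for the paper's Poisson-process approximation (which, by contrast, narrows in the tail); the data and confidence level are illustrative assumptions:

```python
import math
import random

def one_sided_band(sample, alpha):
    """Lower confidence band for a CDF from the one-sided DKW inequality."""
    n = len(sample)
    eps = math.sqrt(math.log(1.0 / alpha) / (2.0 * n))  # DKW band half-width
    xs = sorted(sample)

    def lower(x):
        # empirical CDF at x, shifted down by eps (floored at 0)
        ecdf = sum(1 for v in xs if v <= x) / n
        return max(0.0, ecdf - eps)

    return lower

random.seed(0)
sample = [random.random() for _ in range(500)]  # Uniform(0, 1) data
lower = one_sided_band(sample, alpha=0.05)
```

For Uniform(0, 1) data, the true CDF F(x) = x lies above `lower(x)` everywhere with probability at least 1 - alpha.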
Multivariate statistics high-dimensional and large-sample approximations
Fujikoshi, Yasunori; Shimizu, Ryoichi
2010-01-01
A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic
Green's Kernels and meso-scale approximations in perforated domains
Maz'ya, Vladimir; Nieves, Michael
2013-01-01
There are a wide range of applications in physics and structural mechanics involving domains with singular perturbations of the boundary. Examples include perforated domains and bodies with defects of different types. The accurate direct numerical treatment of such problems remains a challenge. Asymptotic approximations offer an alternative, efficient solution. Green’s function is considered here as the main object of study rather than a tool for generating solutions of specific boundary value problems. The uniformity of the asymptotic approximations is the principal point of attention. We also show substantial links between Green’s functions and solutions of boundary value problems for meso-scale structures. Such systems involve a large number of small inclusions, so that a small parameter, the relative size of an inclusion, may compete with a large parameter, represented as an overall number of inclusions. The main focus of the present text is on two topics: (a) asymptotics of Green’s kernels in domai...
Approximation methods for efficient learning of Bayesian networks
Riggelsen, C
2008-01-01
This publication offers and investigates efficient Monte Carlo simulation methods in order to realize a Bayesian approach to approximate learning of Bayesian networks from both complete and incomplete data. For large amounts of incomplete data when Monte Carlo methods are inefficient, approximations are implemented, such that learning remains feasible, albeit non-Bayesian. The topics discussed are: basic concepts about probabilities, graph theory and conditional independence; Bayesian network learning from data; Monte Carlo simulation techniques; and, the concept of incomplete data. In order to provide a coherent treatment of matters, thereby helping the reader to gain a thorough understanding of the whole concept of learning Bayesian networks from (in)complete data, this publication combines in a clarifying way all the issues presented in the papers with previously unpublished work.
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
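The idea of trajectory averaging can be illustrated on the simplest stochastic approximation scheme. The sketch below runs a Robbins-Monro iteration for the root of h(theta) = theta - mu observed under noise, while maintaining the running average of the trajectory (Polyak-Ruppert averaging); all numbers are illustrative assumptions, not the SAMCMC setting of the paper:

```python
import random

random.seed(1)
mu = 3.0          # unknown root we are estimating
theta = 0.0       # raw Robbins-Monro iterate
avg = 0.0         # running average of the trajectory
n_steps = 20000

for n in range(1, n_steps + 1):
    x = mu + random.gauss(0.0, 1.0)   # noisy observation of the target
    gain = 1.0 / n ** 0.7             # slowly decaying step size
    theta -= gain * (theta - x)       # Robbins-Monro update
    avg += (theta - avg) / n          # trajectory (Polyak-Ruppert) average
```

With a slowly decaying gain the raw iterate `theta` keeps fluctuating around the root, while the averaged iterate `avg` smooths those fluctuations out; this is the estimator whose asymptotic efficiency the paper establishes in the SAMCMC context.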
'LTE-diffusion approximation' for arc calculations
International Nuclear Information System (INIS)
Lowke, J J; Tanaka, M
2006-01-01
This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on D_e/W, where D_e is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations, which include diffusion of charges, agree with experimental results of current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode
Semiclassical initial value approximation for Green's function.
Kay, Kenneth G
2010-06-28
A semiclassical initial value approximation is obtained for the energy-dependent Green's function. For a system with f degrees of freedom the Green's function expression has the form of a (2f-1)-dimensional integral over points on the energy surface and an integral over time along classical trajectories initiated from these points. This approximation is derived by requiring an integral ansatz for Green's function to reduce to Gutzwiller's semiclassical formula when the integrations are performed by the stationary phase method. A simpler approximation is also derived involving only an (f-1)-dimensional integral over momentum variables on a Poincare surface and an integral over time. The relationship between the present expressions and an earlier initial value approximation for energy eigenfunctions is explored. Numerical tests for two-dimensional systems indicate that good accuracy can be obtained from the initial value Green's function for calculations of autocorrelation spectra and time-independent wave functions. The relative advantages of initial value approximations for the energy-dependent Green's function and the time-dependent propagator are discussed.
Approximate Bayesian evaluations of measurement uncertainty
Possolo, Antonio; Bodnar, Olha
2018-04-01
The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
Smooth function approximation using neural networks.
Ferrari, Silvia; Stengel, Robert F
2005-01-01
An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated with the network's adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
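The observation that fixing part of the network turns exact batch matching into a linear problem can be sketched as follows. This is not the authors' algorithm, only a minimal illustration of the linear-system view: with the hidden-layer weights held fixed and one hidden unit per sample, the output weights solve a square linear system; the target function, network size, and random weights are all assumptions.

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a square system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

random.seed(2)
xs = [i / 9.0 for i in range(10)]                  # 10 training inputs
ys = [math.sin(2.0 * x) for x in xs]               # target outputs
w = [random.uniform(-3, 3) for _ in range(10)]     # fixed input weights
b = [random.uniform(-3, 3) for _ in range(10)]     # fixed biases

# hidden-layer response matrix: H[i][j] = tanh(w_j * x_i + b_j)
H = [[math.tanh(w[j] * x + b[j]) for j in range(10)] for x in xs]
v = solve(H, ys)                                   # output weights: linear solve

def net(x):
    return sum(v[j] * math.tanh(w[j] * x + b[j]) for j in range(10))
```

Because the output weights enter linearly, the training data are matched (to numerical precision) by a single linear solve rather than iterative optimization, which is the spirit of the algebraic-training argument.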
Modified semiclassical approximation for trapped Bose gases
International Nuclear Information System (INIS)
Yukalov, V.I.
2005-01-01
A generalization of the semiclassical approximation is suggested allowing for an essential extension of its region of applicability. In particular, it becomes possible to describe Bose-Einstein condensation of a trapped gas in low-dimensional traps and in traps of low confining dimensions, for which the standard semiclassical approximation is not applicable. The result of the modified approach is shown to coincide with purely quantum-mechanical calculations for harmonic traps, including the one-dimensional harmonic trap. The advantage of the semiclassical approximation is in its simplicity and generality. Power-law potentials of arbitrary powers are considered. The effective thermodynamic limit is defined for any confining dimension. The behavior of the specific heat, isothermal compressibility, and density fluctuations is analyzed, with an emphasis on low confining dimensions, where the usual semiclassical method fails. The peculiarities of the thermodynamic characteristics in the effective thermodynamic limit are discussed
The binary collision approximation: Background and introduction
International Nuclear Information System (INIS)
Robinson, M.T.
1992-08-01
The binary collision approximation (BCA) has long been used in computer simulations of the interactions of energetic atoms with solid targets, as well as being the basis of most analytical theory in this area. While mainly a high-energy approximation, the BCA retains qualitative significance at low energies and, with proper formulation, gives useful quantitative information as well. Moreover, computer simulations based on the BCA can achieve good statistics in many situations where those based on full classical dynamical models require the most advanced computer hardware or are even impracticable. The foundations of the BCA in classical scattering are reviewed, including methods of evaluating the scattering integrals, interaction potentials, and electron excitation effects. The explicit evaluation of time at significant points on particle trajectories is discussed, as are scheduling algorithms for ordering the collisions in a developing cascade. An approximate treatment of nearly simultaneous collisions is outlined and the searching algorithms used in MARLOWE are presented
Self-similar continued root approximants
International Nuclear Information System (INIS)
Gluzman, S.; Yukalov, V.I.
2012-01-01
A novel method of summing asymptotic series is advanced. Such series repeatedly arise when employing perturbation theory in powers of a small parameter for complicated problems of condensed matter physics, statistical physics, and various applied problems. The method is based on the self-similar approximation theory involving self-similar root approximants. The constructed self-similar continued roots extrapolate asymptotic series to finite values of the expansion parameter. The self-similar continued roots contain, as a particular case, continued fractions and Padé approximants. A theorem on the convergence of the self-similar continued roots is proved. The method is illustrated by several examples from condensed-matter physics.
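Since Padé approximants arise as a particular case of the continued roots above, a minimal numeric illustration of why such resummation helps: comparing the truncated Taylor series of exp(x) with its diagonal [2/2] Padé approximant built from the same five coefficients. The example function is an assumption for illustration, not from the paper.

```python
import math

def taylor_exp(x, terms=5):
    """Truncated Taylor series of exp(x) with the given number of terms."""
    return sum(x ** k / math.factorial(k) for k in range(terms))

def pade_22_exp(x):
    """[2/2] Pade approximant of exp(x): (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12)."""
    num = 1.0 + x / 2.0 + x * x / 12.0
    den = 1.0 - x / 2.0 + x * x / 12.0
    return num / den

x = 1.0
err_series = abs(taylor_exp(x) - math.exp(x))  # truncation error of the series
err_pade = abs(pade_22_exp(x) - math.exp(x))   # error of the resummed form
```

Both forms use only the first five Taylor coefficients, yet at x = 1 the Padé form reproduces exp(1) noticeably better than the raw truncated series, illustrating how rational resummation extrapolates a series beyond its naive accuracy.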
Ancilla-approximable quantum state transformations
Energy Technology Data Exchange (ETDEWEB)
Blass, Andreas [Department of Mathematics, University of Michigan, Ann Arbor, Michigan 48109 (United States); Gurevich, Yuri [Microsoft Research, Redmond, Washington 98052 (United States)
2015-04-15
We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but also address the question of arbitrarily close approximation.
On Born approximation in black hole scattering
Batic, D.; Kelkar, N. G.; Nowakowski, M.
2011-12-01
A massless field propagating on spherically symmetric black hole metrics such as the Schwarzschild, Reissner-Nordström and Reissner-Nordström-de Sitter backgrounds is considered. In particular, explicit formulae in terms of transcendental functions for the scattering of massless scalar particles off black holes are derived within a Born approximation. It is shown that the conditions on the existence of the Born integral forbid a straightforward extraction of the quasinormal modes using the Born approximation for the scattering amplitude. Such a method has been used in the literature. We suggest a novel, well-defined method to extract the large imaginary part of quasinormal modes via the Coulomb-like phase shift. Furthermore, we compare the numerically evaluated exact scattering amplitude with the Born one to find that the approximation is not very useful for the scattering of massless scalar, electromagnetic as well as gravitational waves from black holes.
Ancilla-approximable quantum state transformations
International Nuclear Information System (INIS)
Blass, Andreas; Gurevich, Yuri
2015-01-01
We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but also address the question of arbitrarily close approximation
On transparent potentials: a Born approximation study
International Nuclear Information System (INIS)
Coudray, C.
1980-01-01
In the frame of the inverse scattering problem at fixed energy, a class of potentials transparent in the Born approximation is obtained. All these potentials are spherically symmetric and are oscillating functions of the reduced radial variable. Amongst them, the Born approximation of the transparent potential of the Newton-Sabatier method is found. In the same class, quasi-transparent potentials are exhibited. Very general features of potentials transparent in the Born approximation are then stated, and bounds are given for the exact scattering amplitudes corresponding to most of the potentials previously exhibited. These bounds, obtained at fixed energy and for large values of the angular momentum, are found to be independent of the energy
The adiabatic approximation in multichannel scattering
International Nuclear Information System (INIS)
Schulte, A.M.
1978-01-01
Using two-dimensional models, an attempt has been made to get an impression of the conditions of validity of the adiabatic approximation. For a nucleon bound to a rotating nucleus the Coriolis coupling is neglected and the relation between this nuclear Coriolis coupling and the classical Coriolis force has been examined. The approximation for particle scattering from an axially symmetric rotating nucleus based on a short duration of the collision, has been combined with an approximation based on the limitation of angular momentum transfer between particle and nucleus. Numerical calculations demonstrate the validity of the new combined method. The concept of time duration for quantum mechanical collisions has also been studied, as has the collective description of permanently deformed nuclei. (C.F.)
Minimal entropy approximation for cellular automata
International Nuclear Information System (INIS)
Fukś, Henryk
2014-01-01
We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim. (paper)
Resummation of perturbative QCD by Padé approximants
International Nuclear Information System (INIS)
Gardi, E.
1997-01-01
In this lecture I present some of the new developments concerning the use of Padé approximants (PAs) for resumming perturbative series in QCD. It is shown that PAs tend to reduce the renormalization scale and scheme dependence as compared to truncated series. In particular it is proven that in the limit where the β function is dominated by the 1-loop contribution, there is an exact symmetry that guarantees invariance of diagonal PAs under a change of the renormalization scale. In addition it is shown that in the large-β_0 approximation diagonal PAs can be interpreted as a systematic method for approximating the flow of momentum in Feynman diagrams. This corresponds to a new multiple-scale generalization of the Brodsky-Lepage-Mackenzie (BLM) method to higher orders. I illustrate the method with the Bjorken sum rule and the vacuum polarization function. (author)
Fast wavelet based sparse approximate inverse preconditioner
Energy Technology Data Exchange (ETDEWEB)
Wan, W.L. [Univ. of California, Los Angeles, CA (United States)
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that arise from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically have piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.
Structural remains at the early mediaeval fort at Raibania, Orissa
Directory of Open Access Journals (Sweden)
Bratati Sen
2013-11-01
Full Text Available The fortifications of mediaeval India occupy an eminent position in the history of military architecture. The present paper deals with a preliminary study of the structural remains at the early mediaeval fort at Raibania in the district of Balasore in Orissa. The fort was built of stone kept together very loosely. The three-walled fortification interspersed with two consecutive moats, a feature evidenced at Raibania, is unparalleled in the history of ancient and mediaeval forts and fortifications in India. Several other structures, such as the Jay-Chandi Temple Complex, a huge well, numerous tanks and the remains of an ancient bridge, add to the uniqueness of the fort in the entire eastern region.
Mineral remains of early life on Earth? On Mars?
Iberall, Robbins E.; Iberall, A.S.
1991-01-01
The oldest sedimentary rocks on Earth, the 3.8-Ga Isua Iron-Formation in southwestern Greenland, are metamorphosed past the point where organic-walled fossils would remain. Acid residues and thin sections of these rocks reveal ferric microstructures that have filamentous, hollow rod, and spherical shapes not characteristic of crystalline minerals. Instead, they resemble ferric-coated remains of bacteria. Because there are no earlier sedimentary rocks to study on Earth, it may be necessary to expand the search elsewhere in the solar system for clues to any biotic precursors or other types of early life. A study of morphologies of iron oxide minerals collected in the southern highlands during a Mars sample return mission may therefore help to fill in important gaps in the history of Earth's earliest biosphere. -from Authors
USING CONDITION MONITORING TO PREDICT REMAINING LIFE OF ELECTRIC CABLES
International Nuclear Information System (INIS)
LOFARO, R.; SOO, P.; VILLARAN, M.; GROVE, E.
2001-01-01
Electric cables are passive components used extensively throughout nuclear power stations to perform numerous safety and non-safety functions. It is known that the polymers commonly used to insulate the conductors on these cables can degrade with time; the rate of degradation being dependent on the severity of the conditions in which the cables operate. Cables do not receive routine maintenance and, since it can be very costly, they are not replaced on a regular basis. Therefore, to ensure their continued functional performance, it would be beneficial if condition monitoring techniques could be used to estimate the remaining useful life of these components. A great deal of research has been performed on various condition monitoring techniques for use on electric cables. In a research program sponsored by the U.S. Nuclear Regulatory Commission, several promising techniques were evaluated and found to provide trendable information on the condition of low-voltage electric cables. These techniques may be useful for predicting remaining life if well defined limiting values for the aging properties being measured can be determined. However, each technique has advantages and limitations that must be addressed in order to use it effectively, and the necessary limiting values are not always easy to obtain. This paper discusses how condition monitoring measurements can be used to predict the remaining useful life of electric cables. The attributes of an appropriate condition monitoring technique are presented, and the process to be used in estimating the remaining useful life of a cable is discussed along with the difficulties that must be addressed
Study on remain actinides recovery in pyro reprocessing
International Nuclear Information System (INIS)
Suharto, Bambang
1996-01-01
Spent fuel reprocessing by a dry process called pyro reprocessing has been studied. Most of the U, Pu and MA (minor actinides) from the spent fuel will be recovered and fed back to the reactor as new fuel. The accumulated remaining actinides will be separated by an extraction process with a liquid cadmium solvent. The research was conducted by computer simulation to calculate the number of stages required. The calculation results showed that with a 20-stage extractor more than 99% of the actinides can be separated. (author)
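The stage-count calculation can be sketched with a simple cross-current extraction model. The per-stage extraction fraction below is an assumed illustrative number, not the paper's cadmium-solvent data; it is chosen so that roughly 20 stages give over 99% recovery, matching the scale of the reported result.

```python
import math

def stages_needed(per_stage_fraction, target_recovery):
    """Smallest N with (1 - E)**N below the allowed remaining fraction."""
    remaining_target = 1.0 - target_recovery
    return math.ceil(math.log(remaining_target) / math.log(1.0 - per_stage_fraction))

def recovery(per_stage_fraction, n_stages):
    """Overall recovery after n cross-current stages, each extracting fraction E."""
    return 1.0 - (1.0 - per_stage_fraction) ** n_stages

# assumed 21% of the actinides extracted per stage
n = stages_needed(0.21, 0.99)    # stages needed for >99% overall recovery
r20 = recovery(0.21, 20)         # recovery achieved by a 20-stage extractor
```

With these assumed numbers the model reproduces the paper's order of magnitude: about 20 stages suffice because the fraction remaining shrinks geometrically with the stage count.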
US GAAP vs. IFRS – A COMPARISON OF REMAINING DIFFERENCES
Mihelčić, Eva
2008-01-01
In spite of the on-going harmonization process, there are still some differences between US GAAP and IFRS. Currently, companies listed on the New York Stock Exchange which report according to IFRS must still prepare a reconciliation to US GAAP, to show financial statements compliant with US GAAP as well. This article presents an overview of the remaining major differences between US GAAP and IFRS, in descriptive as well as tabular form. First, the standards compared are shortly intr...
Structural remains at the early mediaeval fort at Raibania, Orissa
Sen, Bratati
2013-01-01
The fortifications of mediaeval India occupy an eminent position in the history of military architecture. The present paper deals with the preliminary study of the structural remains at the early mediaeval fort at Raibania in the district of Balasore in Orissa. The fort was built of stone very loosely kept together. The three-walled fortification interspersed by two consecutive moats, a feature evidenced at Raibania, w...
Neanderthal infant and adult infracranial remains from Marillac (Charente, France).
Dolores Garralda, María; Maureille, Bruno; Vandermeersch, Bernard
2014-09-01
At the site of Marillac, near the Ligonne River in Marillac-le-Franc (Charente, France), a remarkable stratigraphic sequence has yielded a wealth of archaeological information, palaeoenvironmental data, as well as faunal and human remains. Marillac must have been a sinkhole used by Neanderthal groups as a hunting camp during MIS 4 (TL date 57,600 ± 4,600 BP), where Quina Mousterian lithics and fragmented bones of reindeer predominate. This article describes three infracranial skeleton fragments. Two of them are from adults and consist of the incomplete shafts of a right radius (Marillac 24) and a left fibula (Marillac 26). The third fragment is the diaphysis of the right femur of an immature individual (Marillac 25), the size and shape of which resemble those from Teshik-Tash and could be assigned to a child of a similar age. The three fossils have been compared with the remains of other Neanderthals or anatomically Modern Humans (AMH). Furthermore, the comparison of the infantile femora, Marillac 25 and Teshik-Tash, with the remains of several European children from the early Middle Ages clearly demonstrates the robustness and rounded shape of both Neanderthal diaphyses. Evidence of peri-mortem manipulations has been identified on all three bones, with spiral fractures, percussion pits and, in the case of the radius and femur, unquestionable cutmarks made with flint implements, probably during defleshing. Traces of periostosis appear on the fibula fragment and on the immature femoral diaphysis, although their aetiology remains unknown. Copyright © 2014 Wiley Periodicals, Inc.
Calibration of C-14 dates: some remaining uncertainties and limitations
International Nuclear Information System (INIS)
Burleigh, R.
1975-01-01
A brief review is presented of the interpretation of radiocarbon dates in terms of calendar years. An outline is given of the factors that make such correlations necessary and of the work that has so far been done to make them possible. The calibration of the C-14 timescale very largely depends at present on the bristlecone pine chronology, but it is clear that many detailed uncertainties still remain. These are discussed. (U.K.)
Remaining useful life estimation based on discriminating shapelet extraction
International Nuclear Information System (INIS)
Malinowski, Simon; Chebel-Morello, Brigitte; Zerhouni, Noureddine
2015-01-01
In the Prognostics and Health Management domain, estimating the remaining useful life (RUL) of critical machinery is a challenging task. Various research topics including data acquisition, fusion, diagnostics and prognostics are involved in this domain. This paper presents an approach, based on shapelet extraction, to estimate the RUL of equipment. This approach extracts, in an offline step, discriminative rul-shapelets from a history of run-to-failure data. These rul-shapelets are patterns that are selected for their correlation with the remaining useful life of the equipment. In other words, every selected rul-shapelet conveys its own information about the RUL of the equipment. In an online step, these rul-shapelets are compared to testing units and the ones that match these units are used to estimate their RULs. Therefore, RUL estimation is based on patterns that have been selected for their high correlation with the RUL. This approach is different from classical similarity-based approaches that attempt to match complete testing units (or only late instants of testing units) with training ones to estimate the RUL. The performance of our approach is evaluated on a case study on the remaining useful life estimation of turbofan engines and performance is compared with other similarity-based approaches. - Highlights: • A data-driven RUL estimation technique based on pattern extraction is proposed. • Patterns are extracted for their correlation with the RUL. • The proposed method shows good performance compared to other techniques.
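The core primitive behind such shapelet methods is the distance between a short pattern and the best-matching subsequence of a longer series. The paper's rul-shapelet extraction and selection pipeline is not reproduced here; this sketch only illustrates that matching primitive (the function name and the toy signal are invented for illustration).

```python
import math

def shapelet_distance(series, shapelet):
    """Minimal Euclidean distance between `shapelet` and any
    equally long subsequence of `series` (a common shapelet primitive)."""
    m = len(shapelet)
    best = math.inf
    for start in range(len(series) - m + 1):
        d = math.sqrt(sum((series[start + i] - shapelet[i]) ** 2 for i in range(m)))
        best = min(best, d)
    return best

# A degradation-like signal and a short pattern that occurs inside it.
signal = [0.0, 0.1, 0.3, 0.6, 1.0, 1.5, 2.1]
pattern = [0.3, 0.6, 1.0]
print(shapelet_distance(signal, pattern))  # 0.0: the pattern matches exactly
```

In an RUL setting, each stored shapelet would carry the remaining life observed when it appeared in training data, and a small distance to a test unit would transfer that RUL estimate.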
Remaining life diagnosis method and device for nuclear reactor
International Nuclear Information System (INIS)
Yamamoto, Michiyoshi.
1996-01-01
A neutron flux measuring means is inserted from the outside of a reactor pressure vessel during reactor operation to forecast neutron-induced degradation of the materials of incore structural components in the vicinity of the portions to be measured, based on the measured values, and the remaining life of the reactor is diagnosed from the forecast degraded state. In this case, the neutron fluxes to be measured are desirably fast and/or medium neutron fluxes. As the position where the measuring means is to be inserted, for example, the vicinity of the structural components at the periphery of the fuel assembly is selected. Aging degradation characteristics of the structural components are determined by using the aging degradation data for the structural materials. The remaining life is analyzed based on the obtained aging degradation characteristics and stress evaluation data of the incore structural components at the portions to be measured. The neutron irradiation amount of structural components at predetermined positions can be recognized accurately, and appropriate countermeasures can be taken depending on the forecast remaining life, thereby improving the reliability of the reactor. (N.H.)
Postmortem Scavenging of Human Remains by Domestic Cats
Directory of Open Access Journals (Sweden)
Ananya Suntirukpong, M.D.
2017-11-01
Full Text Available Objective: Crime scene investigators, forensic medicine doctors and pathologists, and forensic anthropologists frequently encounter postmortem scavenging of human remains by household pets. Case presentation: The authors present a case report of a partially skeletonized adult male found dead after more than three months in his apartment in Thailand. The body was in an advanced stage of decomposition with nearly complete skeletonization of the head, neck, hands, and feet. The presence of maggots and necrophagous (flesh-eating) beetles on the body confirmed that insects had consumed much of the soft tissues. Examination of the hand and foot bones revealed canine tooth puncture marks. Evidence of chewing indicated that one or more of the decedent’s three house cats had fed on the body after death. Recognizing and identifying carnivore and rodent activity on the soft flesh and bones of human remains is important in interpreting and reconstructing postmortem damage. Thorough analysis may help explain why skeletal elements are missing, damaged, or out of anatomical position. Conclusion: This report presents a multi-disciplinary approach combining forensic anthropology and forensic medicine in examining and interpreting human remains.
Approximating perfection a mathematician's journey into the world of mechanics
Lebedev, Leonid P
2004-01-01
This is a book for those who enjoy thinking about how and why Nature can be described using mathematical tools. Approximating Perfection considers the background behind mechanics as well as the mathematical ideas that play key roles in mechanical applications. Concentrating on the models of applied mechanics, the book engages the reader in the types of nuts-and-bolts considerations that are normally avoided in formal engineering courses: how and why models remain imperfect, and the factors that motivated their development. The opening chapter reviews and reconsiders the basics of c
Perturbation expansions generated by an approximate propagator
International Nuclear Information System (INIS)
Znojil, M.
1987-01-01
Starting from a knowledge of an approximate propagator R at some trial energy guess E_0, a new perturbative prescription for a p-plet of bound states and of their energies is proposed. It generalizes the Rayleigh-Schroedinger (RS) degenerate perturbation theory to nondiagonal operators R (eliminating the RS need for their diagonalisation) and defines an approximate Hamiltonian T by mere inversion. The deviation V of T from the exact Hamiltonian H is assumed small only after subtraction of a further auxiliary Hartree-Fock-like separable ''selfconsistent'' potential U of rank p. The convergence is illustrated numerically on the anharmonic oscillator example.
Approximate Inference and Deep Generative Models
CERN. Geneva
2018-01-01
Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important application of these models to density estimation, missing data imputation, data compression and planning.
Unambiguous results from variational matrix Pade approximants
International Nuclear Information System (INIS)
Pindor, Maciej.
1979-10-01
Variational Matrix Pade Approximants (VMPA) are studied as a nonlinear variational problem. It is shown that although a stationary value of the Schwinger functional is a stationary value of the VMPA, the latter also has another stationary value. It is therefore proposed that instead of looking for a stationary point of the VMPA, one minimizes some non-negative functional and then calculates the VMPA at the point where the former has its absolute minimum. This approach, which we call the Method of the Variational Gradient (MVG), gives unambiguous results and is also shown to minimize the distance between the approximate and the exact stationary values of the Schwinger functional.
Faster and Simpler Approximation of Stable Matchings
Directory of Open Access Journals (Sweden)
Katarzyna Paluch
2014-04-01
Full Text Available We give a 3/2-approximation algorithm for finding stable matchings that runs in O(m) time. The previously best-known algorithm, by McDermid, has the same approximation ratio but runs in O(n^(3/2) m) time, where n denotes the number of people and m is the total length of the preference lists in a given instance. In addition, the algorithm and the analysis are much simpler. We also give the extension of the algorithm for computing stable many-to-many matchings.
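For context, the exact Gale-Shapley algorithm solves the classical case with complete, strict preference lists; the 3/2-approximation above is needed only when ties and incomplete lists make the problem hard. A minimal sketch of the classical algorithm, not McDermid's or Paluch's method (the tiny example instance is invented):

```python
def gale_shapley(men_prefs, women_prefs):
    """Classic Gale-Shapley for complete, strict preference lists.
    Returns a stable matching as {man: woman}; at most n^2 proposals."""
    # rank[w][m] = position of man m in woman w's list (lower = preferred)
    rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    free = list(men_prefs)                  # men without a partner yet
    next_prop = {m: 0 for m in men_prefs}   # index of the next woman to propose to
    engaged = {}                            # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_prop[m]]
        next_prop[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # w prefers m to her current partner
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                  # w rejects m; he proposes again later
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["a", "b"], "y": ["b", "a"]}
print(gale_shapley(men, women))  # a matched with x, b matched with y
```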
APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING MODELS
Directory of Open Access Journals (Sweden)
T. I. Aliev
2013-03-01
Full Text Available For probability distributions with a coefficient of variation not equal to unity, mathematical dependences for approximating distributions on the basis of the first two moments are derived by making use of multi-exponential distributions. It is proposed to approximate distributions with a coefficient of variation less than unity by using the hypoexponential distribution, which makes it possible to generate random variables with a coefficient of variation taking any value in the range (0, 1), as opposed to the Erlang distribution, which has only discrete values of the coefficient of variation.
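A sketch of the kind of two-moment matching described above, for a two-phase hypoexponential distribution; this covers cv² in [1/2, 1), and the function name and interface are invented for illustration. The phase means x, y follow from solving x + y = mean and x² + y² = cv²·mean²:

```python
import math

def hypoexp_rates(mean, cv):
    """Rates (l1, l2) of a two-phase hypoexponential distribution with the
    given mean and coefficient of variation; requires 0.5 <= cv**2 < 1."""
    c2 = cv * cv
    if not (0.5 <= c2 < 1.0):
        raise ValueError("two phases only cover cv^2 in [0.5, 1)")
    # Phase means x, y solve x + y = mean and x^2 + y^2 = cv^2 * mean^2,
    # i.e. they are the roots of t^2 - mean*t + mean^2*(1 - cv^2)/2 = 0.
    disc = math.sqrt(mean * mean * (2.0 * c2 - 1.0))
    x = (mean + disc) / 2.0
    y = (mean - disc) / 2.0
    return 1.0 / x, 1.0 / y

l1, l2 = hypoexp_rates(mean=2.0, cv=0.8)
m = 1.0 / l1 + 1.0 / l2          # moments round-trip: mean -> 2.0
var = 1.0 / l1**2 + 1.0 / l2**2  # variance -> (0.8 * 2.0)**2 = 2.56
```

Sampling is then just the sum of two independent exponentials with rates `l1` and `l2`; cv² below 1/2 would need more than two phases.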
On the dipole approximation with error estimates
Boßmann, Lea; Grummt, Robert; Kolb, Martin
2018-01-01
The dipole approximation is employed to describe interactions between atoms and radiation. It essentially consists of neglecting the spatial variation of the external field over the atom. Heuristically, this is justified by arguing that the wavelength is considerably larger than the atomic length scale, which holds under usual experimental conditions. We prove the dipole approximation in the limit of infinite wavelengths compared to the atomic length scale and estimate the rate of convergence. Our results include N-body Coulomb potentials and experimentally relevant electromagnetic fields such as plane waves and laser pulses.
Congruence Approximations for Entropy Endowed Hyperbolic Systems
Barth, Timothy J.; Saini, Subhash (Technical Monitor)
1998-01-01
Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
Hardness of approximation for strip packing
DEFF Research Database (Denmark)
Adamaszek, Anna Maria; Kociumaka, Tomasz; Pilipczuk, Marcin
2017-01-01
Strip packing is a classical packing problem, where the goal is to pack a set of rectangular objects into a strip of a given width, while minimizing the total height of the packing. The problem has multiple applications, for example, in scheduling and stock-cutting, and has been studied extensively......)-approximation by two independent research groups [FSTTCS 2016, WALCOM 2017]. This raises a question whether strip packing with polynomially bounded input data admits a quasi-polynomial time approximation scheme, as is the case for related two-dimensional packing problems like maximum independent set of rectangles or two...
Approximate approaches to the one-dimensional finite potential well
International Nuclear Information System (INIS)
Singh, Shilpi; Pathak, Praveen; Singh, Vijay A
2011-01-01
The one-dimensional finite well is a textbook problem. We propose approximate approaches to obtain the energy levels of the well. The finite well is also encountered in semiconductor heterostructures, where the carrier mass inside the well (m_i) is taken to be distinct from the mass outside (m_o). A relevant parameter is the mass discontinuity ratio β = m_i/m_o. To correctly account for the mass discontinuity, we apply the BenDaniel-Duke boundary condition. We obtain approximate solutions for two cases: when the well is shallow and when the well is deep. We compare the approximate results with the exact results and find that higher-order approximations are quite robust. For the shallow case, the approximate solution can be expressed in terms of a dimensionless parameter σ_l = 2m_oV_0L²/ħ² (or σ = β²σ_l for the deep case). We show that the lowest-order results are related by a duality transform. We also discuss how the energy upscales with L (E ∼ 1/L^γ) and obtain the exponent γ. The exponent γ → 2 when the well is sufficiently deep and β → 1. The ratio of the masses dictates the physics. Our presentation is pedagogical and should be useful to students on a first course on elementary quantum mechanics or low-dimensional semiconductors.
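For the equal-mass textbook case (β = 1), the even bound states of the symmetric finite well solve the standard transcendental equation z·tan(z) = √(z0² − z²), with z = (L/2)√(2mE)/ħ and z0 = (L/2)√(2mV0)/ħ. The sketch below solves the ground state numerically by bisection; it is not the authors' approximation scheme, and the BenDaniel-Duke condition for β ≠ 1 is not applied:

```python
import math

def ground_state_z(z0, tol=1e-12):
    """Bisection solve of z*tan(z) = sqrt(z0^2 - z^2) on (0, pi/2):
    the even ground state of the symmetric finite well (equal masses)."""
    f = lambda z: z * math.tan(z) - math.sqrt(z0 * z0 - z * z)
    lo, hi = 1e-9, min(math.pi / 2 - 1e-9, z0 - 1e-9)
    # f < 0 near 0 and f > 0 near the right endpoint, so a root is bracketed.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

z = ground_state_z(z0=4.0)
print(z, (z / 4.0) ** 2)  # z and the ground-state energy ratio E/V0 = (z/z0)^2
```

The exact value obtained this way is what the shallow- and deep-well expansions in the paper are approximating.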
Analytical models approximating individual processes: a validation method.
Favier, C; Degallier, N; Menkès, C E
2010-12-01
Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
Lanz, Leora Halpern; Carmichael, Megan
2015-01-01
The hotel marketing budget, typically amounting to approximately 4-5% of an asset’s total revenue, must remain fluid so that the marketing director can constantly adapt the marketing tools to meet consumer communications methods and demands. Though only a small amount of a hotel’s revenue is traditionally allocated for the marketing budget, the hotel’s success is directly reliant on how effectively that budget is utilized. Thus far in 2015, over 55% of hotel bookings are happening onl...
Analyzing the errors of DFT approximations for compressed water systems
International Nuclear Information System (INIS)
Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.
2014-01-01
We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm³, where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≈ 15 meV/monomer for the liquid and the
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander; Genton, Marc G.; Sun, Ying
2015-01-01
We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
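The key fact exploited by H-matrix formats is that off-diagonal covariance blocks between well-separated point sets are numerically low rank. A sketch using a truncated SVD on an exponential covariance (the Matérn case ν = 1/2); the hierarchical block partitioning itself is not shown, and all sizes and tolerances are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential covariance (Matern, nu = 1/2) on 200 sorted 1-D sites.
x = np.sort(rng.uniform(0.0, 1.0, 200))
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)

# Off-diagonal block between the left and right halves of the sites.
# For this kernel and separated clusters, exp(-(xj - xi)/l) factorizes
# into a function of xi times a function of xj, so the block is rank 1.
A = C[:100, 100:]
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))     # numerical rank at 1e-8 relative tolerance
Ak = (U[:, :k] * s[:k]) @ Vt[:k]     # rank-k approximation of the block

rel_err = np.linalg.norm(A - Ak) / np.linalg.norm(A)
print(k, rel_err)  # tiny rank k, tiny relative error
```

Storing such blocks as rank-k factors instead of dense n×n entries is what yields the O(kn log n) cost quoted in the abstract.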
Large hierarchies from approximate R symmetries
International Nuclear Information System (INIS)
Kappl, Rolf; Ratz, Michael; Vaudrevange, Patrick K.S.
2008-12-01
We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales. (orig.)
Approximate Networking for Universal Internet Access
Directory of Open Access Journals (Sweden)
Junaid Qadir
2017-12-01
Full Text Available Despite the best efforts of networking researchers and practitioners, an ideal Internet experience is inaccessible to an overwhelming majority of people the world over, mainly due to the lack of cost-efficient ways of provisioning high-performance, global Internet. In this paper, we argue that instead of an exclusive focus on a utopian goal of universally accessible “ideal networking” (in which we have high throughput and quality of service as well as low latency and congestion), we should consider providing “approximate networking” through the adoption of context-appropriate trade-offs. In this regard, we propose to leverage the advances in the emerging trend of “approximate computing”, which relies on relaxing the bounds of precise/exact computing to provide new opportunities for improving the area, power, and performance efficiency of systems by orders of magnitude by embracing output errors in resilient applications. Furthermore, we propose to extend the dimensions of approximate computing towards various knobs available at network layers. Approximate networking can be used to provision “Global Access to the Internet for All” (GAIA) in a pragmatically tiered fashion, in which different users around the world are provided a different context-appropriate (but still contextually functional) Internet experience.
Uncertainty relations for approximation and estimation
Energy Technology Data Exchange (ETDEWEB)
Lee, Jaeha, E-mail: jlee@post.kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Tsutsui, Izumi, E-mail: izumi.tsutsui@kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Theory Center, Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan)
2016-05-27
We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.
Intrinsic Diophantine approximation on general polynomial surfaces
DEFF Research Database (Denmark)
Tiljeset, Morten Hein
2017-01-01
We study the Hausdorff measure and dimension of the set of intrinsically simultaneously -approximable points on a curve, surface, etc, given as a graph of integer polynomials. We obtain complete answers to these questions for algebraically “nice” manifolds. This generalizes earlier work done...
Perturbation of operators and approximation of spectrum
Indian Academy of Sciences (India)
outside the bounds of essential spectrum of A(x) can be approximated ... some perturbed discrete Schrödinger operators treating them as block ...... particular, one may think of estimating the spectrum and spectral gaps of Schrödinger.
Quasilinear theory without the random phase approximation
International Nuclear Information System (INIS)
Weibel, E.S.; Vaclavik, J.
1980-08-01
The system of quasilinear equations is derived without making use of the random phase approximation. The fluctuating quantities are described by the autocorrelation function of the electric field using the techniques of Fourier analysis. The resulting equations possess the necessary conservation properties, but comprise new terms which hitherto have been lost in the conventional derivations.
Rational approximations and quantum algorithms with postselection
Mahadev, U.; de Wolf, R.
2015-01-01
We study the close connection between rational functions that approximate a given Boolean function, and quantum algorithms that compute the same function using post-selection. We show that the minimal degree of the former equals (up to a factor of 2) the minimal query complexity of the latter. We
Padé approximations and diophantine geometry.
Chudnovsky, D V; Chudnovsky, G V
1985-04-01
Using methods of Padé approximations we prove a converse to Eisenstein's theorem on the boundedness of denominators of coefficients in the expansion of an algebraic function, for classes of functions, parametrized by meromorphic functions. This result is applied to the Tate conjecture on the effective description of isogenies for elliptic curves.
Approximate systems with confluent bonding mappings
Lončar, Ivan
2001-01-01
If X = {Xn, pnm, N} is a usual inverse system with confluent (monotone) bonding mappings, then the projections are confluent (monotone). This is not true for approximate inverse systems. The main purpose of this paper is to show that the property of Kelley (smoothness) of the spaces Xn is a sufficient condition for the confluence (monotonicity) of the projections.
Function approximation with polynomial regression splines
International Nuclear Information System (INIS)
Urbanski, P.
1996-01-01
Principles of the polynomial regression splines as well as algorithms and programs for their computation are presented. The programs prepared using software package MATLAB are generally intended for approximation of the X-ray spectra and can be applied in the multivariate calibration of radiometric gauges. (author)
Approximation Algorithms for Model-Based Diagnosis
Feldman, A.B.
2010-01-01
Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation
On the parametric approximation in quantum optics
Energy Technology Data Exchange (ETDEWEB)
D' Ariano, G.M.; Paris, M.G.A.; Sacchi, M.F. [Istituto Nazionale di Fisica Nucleare, Pavia (Italy); Pavia Univ. (Italy). Dipt. di Fisica ' Alessandro Volta'
1999-03-01
The authors perform the exact numerical diagonalization of Hamiltonians that describe both degenerate and nondegenerate parametric amplifiers, by exploiting the conservation laws pertaining to each device. The conditions under which the parametric approximation holds are clarified, showing that the most relevant requirement is the coherence of the pump after the interaction, rather than its undepletion.
Uniform semiclassical approximation for absorptive scattering systems
International Nuclear Information System (INIS)
Hussein, M.S.; Pato, M.P.
1987-07-01
The uniform semiclassical approximation of the elastic scattering amplitude is generalized to absorptive systems. An integral equation is derived which connects the absorption modified amplitude to the absorption free one. Division of the amplitude into a diffractive and refractive components is then made possible. (Author) [pt
Tension and Approximation in Poetic Translation
Al-Shabab, Omar A. S.; Baka, Farida H.
2015-01-01
Simple observation reveals that each language and each culture enjoys specific linguistic features and rhetorical traditions. In poetry translation difference and the resultant linguistic tension create a gap between Source Language and Target language, a gap that needs to be bridged by creating an approximation processed through the translator's…
Variational Gaussian approximation for Poisson data
Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen
2018-02-01
The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
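A one-dimensional sketch of the approach: for a single count y ~ Poisson(e^x) with prior x ~ N(0, 1) and variational family q = N(μ, s²), the lower bound is available in closed form because E_q[e^x] = exp(μ + s²/2), and plain gradient ascent stands in for the alternating-direction algorithm of the paper (function names and step sizes are invented for illustration):

```python
import math

def elbo(mu, s, y):
    """Evidence lower bound for y ~ Poisson(exp(x)), prior x ~ N(0, 1),
    variational family q = N(mu, s^2); the constant -log(y!) is dropped."""
    kl = 0.5 * (s * s + mu * mu - 1.0) - math.log(s)  # KL(q || N(0,1))
    return y * mu - math.exp(mu + 0.5 * s * s) - kl

def fit(y, lr=0.01, steps=20000):
    """Maximize the ELBO over (mu, s) by gradient ascent."""
    mu, s = 0.0, 1.0
    for _ in range(steps):
        e = math.exp(mu + 0.5 * s * s)     # E_q[exp(x)]
        mu += lr * (y - e - mu)            # d ELBO / d mu
        s += lr * (1.0 / s - s * e - s)    # d ELBO / d s
    return mu, s

mu, s = fit(y=5)
print(mu, s)  # the optimal Gaussian approximation q = N(mu, s^2)
```

At the optimum the stationarity conditions y = e + μ and s² = 1/(1 + e) hold, with e = exp(μ + s²/2); the existence and uniqueness of this optimum is what the paper proves in the general setting.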
Quasiclassical approximation for ultralocal scalar fields
International Nuclear Information System (INIS)
Francisco, G.
1984-01-01
It is shown how to obtain the quasiclassical evolution of a class of field theories called ultralocal fields. Coherent states that follow the 'classical' orbit, as defined by Klauder's weak correspondence principle and restricted action principle, are explicitly shown to approximate the quantum evolution as (h/2π) → 0. (Author) [pt
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-11-30
We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
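The log-linear cost rests on the fact that covariance blocks between well-separated point clusters are numerically low rank. A minimal sketch of this admissible-block compression, using a truncated SVD on an exponential (Matérn ν=1/2) kernel in 1D; in this particular geometry the separated block is in fact exactly rank 1, since exp(-(y-x)) = exp(x)exp(-y) when y > x for all pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated 1D point clusters (an "admissible" block pair)
xs = np.sort(rng.uniform(0.0, 1.0, 200))
ys = np.sort(rng.uniform(2.0, 3.0, 200))

# Exponential (Matern nu=1/2) covariance block between the clusters
C = np.exp(-np.abs(xs[:, None] - ys[None, :]))

U, s, Vt = np.linalg.svd(C)
k = 3
C_k = (U[:, :k] * s[:k]) @ Vt[:k]   # rank-k factors: O(kn) storage, not O(n^2)
rel_err = np.linalg.norm(C - C_k) / np.linalg.norm(C)
print(f"rank-{k} relative error: {rel_err:.2e}")
```

For general Matérn kernels in higher dimensions the rank of admissible blocks grows only slowly with the accuracy; an H-matrix code exploits this recursively over a cluster tree, computing the factors with ACA or rank-revealing methods rather than a full SVD.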
Multidimensional stochastic approximation using locally contractive functions
Lawton, W. M.
1975-01-01
A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
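A minimal one-dimensional sketch of a Robbins-Monro fixed-point iteration (with an illustrative contractive map, not the paper's multivariate mixture setting): only noisy evaluations of the regression function are available, and the decaying step sizes satisfy the usual summability conditions.

```python
import numpy as np

rng = np.random.default_rng(42)

def g(x):
    """A contractive map on the reals (|g'| < 1 near the fixed point)."""
    return np.cos(x)  # fixed point of cos is ~ 0.739085

x = 0.0
for n in range(1, 20001):
    a_n = 1.0 / n                        # sum a_n = inf, sum a_n^2 < inf
    noisy = g(x) + rng.normal(0.0, 0.1)  # only noisy evaluations available
    x = x + a_n * (noisy - x)            # Robbins-Monro fixed-point update
print(f"estimate {x:.4f} (fixed point 0.7391)")
```

The step-size conditions are what let the iterate average away the noise while still moving far enough to reach the fixed point, which is the mean-square and almost-sure convergence the abstract refers to.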
Pade approximant calculations for neutron escape probability
International Nuclear Information System (INIS)
El Wakil, S.A.; Saad, E.A.; Hendi, A.A.
1984-07-01
The neutron escape probability from a non-multiplying slab containing internal source is defined in terms of a functional relation for the scattering function for the diffuse reflection problem. The Pade approximant technique is used to get numerical results which compare with exact results. (author)
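The Padé approximant technique itself is easy to demonstrate. The sketch below is a generic illustration with the exponential series, not the paper's scattering function; it uses `scipy.interpolate.pade` to build a [2/2] approximant from Taylor coefficients:

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x) about 0, through degree 4
an = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]
p, q = pade(an, 2)          # [2/2] Pade approximant; p, q are numpy poly1d

x = 0.5
approx = p(x) / q(x)
print(f"Pade [2/2]: {approx:.6f}, exp(0.5): {np.exp(0.5):.6f}")
```

The rational form typically converges faster than the truncated series it is built from, which is why the technique is attractive for resumming functional expansions like the one in the abstract.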
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
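The closed-form route rests on two facts: a product of lognormals is exactly lognormal (so each minimal cut set contribution is lognormal), and a sum of lognormals can be approximated by a moment-matched lognormal. A hypothetical two-cut-set sketch (illustrative probabilities, not the article's models) comparing a Fenton-Wilkinson-style moment match with Monte Carlo sampling:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two minimal cut sets {A,B} and {C,D}; basic-event probabilities are
# lognormal. Rare-event approximation: P(top) ~ pA*pB + pC*pD.
mu, sigma = np.log(1e-3), 0.5        # each event: median 1e-3, log-std 0.5
n = 200_000
p = rng.lognormal(mu, sigma, size=(n, 4))
top_mc = p[:, 0] * p[:, 1] + p[:, 2] * p[:, 3]

# Closed-form route: each cut-set product is exactly lognormal;
# approximate their sum by a moment-matched lognormal.
mu_cs, s2_cs = 2 * mu, 2 * sigma**2
mean_cs = np.exp(mu_cs + s2_cs / 2)
var_cs = (np.exp(s2_cs) - 1) * np.exp(2 * mu_cs + s2_cs)
mean_top, var_top = 2 * mean_cs, 2 * var_cs   # independent cut sets
s2_fw = np.log(1 + var_top / mean_top**2)
mu_fw = np.log(mean_top) - s2_fw / 2

med_mc = np.median(top_mc)
med_fw = np.exp(mu_fw)
print(f"median: MC {med_mc:.3e}, lognormal approx {med_fw:.3e}")
```

The closed-form percentiles come for free from the matched lognormal's quantile function, whereas the Monte Carlo estimate requires the full sampling loop each time the inputs change.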
RATIONAL APPROXIMATIONS TO GENERALIZED HYPERGEOMETRIC FUNCTIONS.
Under weak restrictions on the various free parameters, general theorems for rational representations of the generalized hypergeometric functions and certain Meijer G-functions are developed. Upon specialization, these theorems yield a sequence of rational approximations which converge to the
A rational approximation of the effectiveness factor
DEFF Research Database (Denmark)
Wedel, Stig; Luss, Dan
1980-01-01
A fast, approximate method of calculating the effectiveness factor for arbitrary rate expressions is presented. The method does not require any iterative or interpolative calculations. It utilizes the well known asymptotic behavior for small and large Thiele moduli to derive a rational function...
Decision-theoretic troubleshooting: Hardness of approximation
Czech Academy of Sciences Publication Activity Database
Lín, Václav
2014-01-01
Vol. 55, No. 4 (2014), pp. 977-988, ISSN 0888-613X R&D Projects: GA ČR GA13-20012S Institutional support: RVO:67985556 Keywords: Decision-theoretic troubleshooting * Hardness of approximation * NP-completeness Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.451, year: 2014
Approximate solution methods in engineering mechanics
International Nuclear Information System (INIS)
Boresi, A.P.; Cong, K.P.
1991-01-01
This is a short book of 147 pages, including references and sometimes bibliographies at the end of each chapter, and subject and author indices at the end of the book. The text includes an introduction of 3 pages, 29 pages explaining approximate analysis, 41 pages on finite differences, 36 pages on finite elements, and 17 pages on specialized methods.
Yellow Fever Remains a Potential Threat to Public Health.
Vasconcelos, Pedro F C; Monath, Thomas P
2016-08-01
Yellow fever (YF) remains a serious public health threat in endemic countries. The recent re-emergence in Africa, beginning in Angola and spreading to the Democratic Republic of Congo and Uganda, with imported cases in China and Kenya, is of concern. The worldwide shortage of YF vaccine is such that the World Health Organization has proposed the use of reduced (1/5) doses during emergencies. In this short communication, we discuss these and other problems, including the risk of spread of YF to areas free of YF for decades or never before affected by this arbovirus disease.
The Artificial Leaf: Recent Progress and Remaining Challenges
Directory of Open Access Journals (Sweden)
Mark D Symes
2016-12-01
Full Text Available The prospect of a device that uses solar energy to split water into H2 and O2 is highly attractive in terms of producing hydrogen as a carbon-neutral fuel. In this mini review, key research milestones that have been reached in this field over the last two decades will be discussed, with special focus on devices that use earth-abundant materials. Finally, the remaining challenges in the development of such “artificial leaves” will be highlighted.
Leprosy: ancient disease remains a public health problem nowadays.
Noriega, Leandro Fonseca; Chiacchio, Nilton Di; Noriega, Angélica Fonseca; Pereira, Gilmayara Alves Abreu Maciel; Vieira, Marina Lino
2016-01-01
Despite being an ancient disease, leprosy remains a public health problem in several countries -particularly in India, Brazil and Indonesia. The current operational guidelines emphasize the evaluation of disability from the time of diagnosis and stipulate as fundamental principles for disease control: early detection and proper treatment. Continued efforts are needed to establish and improve quality leprosy services. A qualified primary care network that is integrated into specialized service and the development of educational activities are part of the arsenal in the fight against the disease, considered neglected and stigmatizing.
Studies on protozoa in ancient remains - A Review
Directory of Open Access Journals (Sweden)
Liesbeth Frías
2013-02-01
Full Text Available Paleoparasitological research has made important contributions to the understanding of parasite evolution and ecology. Although parasitic protozoa exhibit a worldwide distribution, recovering these organisms from an archaeological context is still exceptional and relies on the availability and distribution of evidence, the ecology of infectious diseases and adequate detection techniques. Here, we present a review of the findings related to protozoa in ancient remains, with an emphasis on their geographical distribution in the past and the methodologies used for their retrieval. The development of more sensitive detection methods has increased the number of identified parasitic species, promising interesting insights from research in the future.
Encephalitozoon cuniculi in Raw Cow's Milk Remains Infectious After Pasteurization.
Kváč, Martin; Tomanová, Vendula; Samková, Eva; Koubová, Jana; Kotková, Michaela; Hlásková, Lenka; McEvoy, John; Sak, Bohumil
2016-02-01
This study describes the prevalence of Encephalitozoon cuniculi in raw cow's milk and evaluates the effect of different milk pasteurization treatments on E. cuniculi infectivity for severe combined immunodeficient (SCID) mice. Using a nested polymerase chain reaction approach, 1 of 50 milking cows was found to repeatedly shed E. cuniculi in its feces and milk. Under experimental conditions, E. cuniculi spores in milk remained infective for SCID mice following pasteurization treatments at 72 °C for 15 s or 85 °C for 5 s. Based on these findings, pasteurized cow's milk should be considered a potential source of E. cuniculi infection in humans.
"Recent" macrofossil remains from the Lomonosov Ridge, central Arctic Ocean
Le Duc, Cynthia; de Vernal, Anne; Archambault, Philippe; Brice, Camille; Roberge, Philippe
2016-04-01
The examination of surface sediment samples collected from 17 sites along the Lomonosov Ridge at water depths ranging from 737 to 3339 meters during Polarstern Expedition PS87 in 2014 (Stein, 2015) indicates a rich biogenic content almost exclusively dominated by calcareous remains. Amongst biogenic remains, microfossils (planktic and benthic foraminifers, pteropods, ostracods, etc.) dominate, but millimetric to centimetric macrofossils occur frequently at the surface of the sediment. The macrofossil remains consist of a large variety of taxa, including gastropods, bivalves, polychaete tubes, scaphopods, echinoderm plates and spines, and fish otoliths. Among the Bivalvia, the most abundant taxa are Portlandia arctica, Hyalopecten frigidus, Cuspidaria glacilis, Policordia densicostata, Bathyarca spp., and Yoldiella spp. Whereas a few specimens are well preserved and apparently pristine, most mollusk shells display extensive alteration features. Moreover, most shells were covered by millimeter-scale tubes of the serpulid polychaete Spirorbis sp., suggesting transport from the low intertidal or subtidal zone. Both the ecological affinity and the known geographic distribution of the bivalves identified above support the hypothesis of transportation rather than local development. In addition to mollusk shells, more than a hundred fish otoliths were recovered in surface sediments. The otoliths mostly belong to the Gadidae family. Most of them are well preserved and without serpulid tubes attached to their surface, suggesting a local/regional origin, unlike the shell remains. Although recovered at the surface, the macrofaunal assemblages of the Lomonosov Ridge do not necessarily represent the "modern" environments, as they may result from reworking and because their occurrence at the surface of the sediment may also be due to winnowing of finer particles. Although the shells were not dated, we suspect that their actual ages may range from modern to several thousands of
Big Data Meets Quantum Chemistry Approximations: The Δ-Machine Learning Approach.
Ramakrishnan, Raghunathan; Dral, Pavlo O; Rupp, Matthias; von Lilienfeld, O Anatole
2015-05-12
Chemically accurate and comprehensive studies of the virtual space of all possible molecules are severely limited by the computational cost of quantum chemistry. We introduce a composite strategy that adds machine learning corrections to computationally inexpensive approximate legacy quantum methods. After training, highly accurate predictions of enthalpies, free energies, entropies, and electron correlation energies are possible, for significantly larger molecular sets than used for training. For thermochemical properties of up to 16k isomers of C7H10O2 we present numerical evidence that chemical accuracy can be reached. We also predict electron correlation energy in post Hartree-Fock methods, at the computational cost of Hartree-Fock, and we establish a qualitative relationship between molecular entropy and electron correlation. The transferability of our approach is demonstrated, using semiempirical quantum chemistry and machine learning models trained on 1 and 10% of 134k organic molecules, to reproduce enthalpies of all remaining molecules at density functional theory level of accuracy.
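The Δ-learning idea, a machine-learned correction added on top of a cheap baseline, can be sketched with synthetic stand-ins for the two levels of theory (the functions below are illustrative surrogates, not quantum-chemical methods); a small kernel ridge regression is trained on the baseline-to-reference differences:

```python
import numpy as np

rng = np.random.default_rng(0)

def cheap(x):      # inexpensive baseline method (illustrative stand-in)
    return np.sin(x)

def accurate(x):   # expensive reference method (illustrative stand-in)
    return np.sin(x) + 0.3 * x + 0.1 * np.cos(3 * x)

# Train kernel ridge regression on the *difference* accurate - cheap
X_train = rng.uniform(0, 5, 40)
delta = accurate(X_train) - cheap(X_train)

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

K = rbf(X_train, X_train)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X_train)), delta)

X_test = np.linspace(0.2, 4.8, 100)
pred = cheap(X_test) + rbf(X_test, X_train) @ alpha  # baseline + ML correction

err_delta = np.mean(np.abs(pred - accurate(X_test)))
err_base = np.mean(np.abs(cheap(X_test) - accurate(X_test)))
print(f"baseline MAE {err_base:.3f} -> delta-learned MAE {err_delta:.4f}")
```

The correction is typically far smoother than the target property itself, which is why the composite model needs much less training data than learning the accurate values from scratch.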
A 3 Year-Old Male Child Ingested Approximately 750 Grams of Elemental Mercury
Directory of Open Access Journals (Sweden)
Metin Uysalol
2016-08-01
Full Text Available Background: The oral ingestion of elemental mercury is unlikely to cause systemic toxicity, as it is poorly absorbed through the gastrointestinal system. However, abnormal gastrointestinal function or anatomy may allow elemental mercury into the bloodstream and the peritoneal space. Systemic effects of massive oral intake of mercury have rarely been reported. Case Report: In this paper, we present the highest single oral intake of elemental mercury by a child aged 3 years. A Libyan boy aged 3 years ingested approximately 750 grams of elemental mercury and was still asymptomatic. Conclusion: The patient had no existing disease or abnormal gastrointestinal function or anatomy. The physical examination was normal. His serum mercury level was 91 μg/L (normal: <5 μg/L), and he showed no clinical manifestations. Exposure to mercury in children through different circumstances remains a likely occurrence.
Fossil human remains from Bolomor Cave (Valencia, Spain).
Arsuaga, Juan Luis; Fernández Peris, Josep; Gracia-Téllez, Ana; Quam, Rolf; Carretero, José Miguel; Barciela González, Virginia; Blasco, Ruth; Cuartero, Felipe; Sañudo, Pablo
2012-05-01
Systematic excavations carried out since 1989 at Bolomor Cave have led to the recovery of four Pleistocene human fossil remains, consisting of a fibular fragment, two isolated teeth, and a nearly complete adult parietal bone. All of these specimens date to the late Middle and early Late Pleistocene (MIS 7-5e). The fibular fragment shows thick cortical bone, an archaic feature found in non-modern (i.e. non-Homo sapiens) members of the genus Homo. Among the dental remains, the lack of a midtrigonid crest in the M(1) represents a departure from the morphology reported for the majority of Neandertal specimens, while the large dimensions and pronounced shoveling of the marginal ridges in the C(1) are similar to other European Middle and late Pleistocene fossils. The parietal bone is very thick, with dimensions that generally fall above Neandertal fossils and resemble more closely the Middle Pleistocene Atapuerca (SH) adult specimens. Based on the presence of archaic features, all the fossils from Bolomor are attributed to the Neandertal evolutionary lineage. Copyright © 2012 Elsevier Ltd. All rights reserved.
Determination of Remaining Useful Life of Gas Turbine Blade
Directory of Open Access Journals (Sweden)
Meor Said Mior Azman
2016-01-01
Full Text Available The aim of this research is to determine the remaining useful life of gas turbine blades, using service-exposed turbine blades. This task is performed using the Stress Rupture Test (SRT) under accelerated test conditions, where the applied stress is between 400 MPa and 600 MPa and the test temperature is 850°C. The study focuses on the creep behaviour of the 52000-hour service-exposed blades, complemented with creep-rupture modelling using JMatPro software and microstructure examination using an optical microscope. The test specimens, made of the Ni-based superalloy of the first-stage turbine blades, are machined based on International Standard (ISO 24. The results from the SRT are analyzed using two main equations: the Larson-Miller Parameter and the Life Fraction Rule. Based on the results of the remaining useful life analysis, the 52000 h service-exposed blade is in condition to operate for another 4751 hr to 18362 hr. The microstructure examinations show traces of carbide precipitation that deteriorates the grain boundaries during the creep process. Creep-rupture life modelling using JMatPro software has shown good agreement with the accelerated creep rupture test, with minimal error.
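The Larson-Miller extrapolation works by equating the parameter T(C + log10 t_r) between accelerated-test and service conditions at the same stress. A sketch with illustrative numbers (C = 20 is a common default for such alloys; the times and temperatures below are hypothetical, not the paper's data):

```python
import math

C = 20.0  # Larson-Miller constant, a common default for Ni-based superalloys

def lmp(T_kelvin, t_rupture_h):
    """Larson-Miller parameter: LMP = T * (C + log10(t_r))."""
    return T_kelvin * (C + math.log10(t_rupture_h))

def rupture_time(T_kelvin, lmp_value):
    """Invert the LMP relation for rupture time at a new temperature."""
    return 10.0 ** (lmp_value / T_kelvin - C)

# Accelerated SRT result (illustrative): rupture after 500 h at 850 C (1123 K)
P = lmp(1123.0, 500.0)
# Extrapolate to a lower service temperature at the same stress:
t_service = rupture_time(1023.0, P)
print(f"LMP = {P:.0f}, predicted rupture life at 1023 K: {t_service:.0f} h")
```

Because the LMP is held fixed along lines of constant stress, a short accelerated test at high temperature stands in for a much longer service exposure at the lower operating temperature.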
A method for defleshing human remains using household bleach.
Mann, Robert W; Berryman, Hugh E
2012-03-01
Medical examiners and forensic anthropologists are often faced with the difficult task of removing soft tissue from the human skeleton without damaging the bones, teeth and, in some cases, cartilage. While there are a number of acceptable methods that can be used to remove soft tissue, including macerating in water, simmering or boiling, soaking in ammonia, removing with scissors, knife, scalpel or stiff brush, and dermestid beetles, each has its drawback in time, safety, or potential to damage bone. This technical report, based on the chest plate of a stabbing victim, presents a safe and effective alternative method for removing soft tissue from human remains following autopsy without damaging or separating the ribs, sternum, and costal cartilage. This method can be used to reveal subtle blunt force trauma to bone, slicing and stabbing injuries, and other forms of trauma obscured by overlying soft tissue. Despite published cautionary notes, when done properly, household bleach (3-6% sodium hypochlorite) is a quick, safe, and effective means of examining cartilage and exposing skeletal trauma by removing soft tissue from human skeletal remains. 2011 American Academy of Forensic Sciences. Published 2011. This article is a U.S. Government work and is in the public domain in the U.S.A.
New fossil remains of Homo naledi from the Lesedi Chamber, South Africa
Hawks, John; Elliott, Marina; Schmid, Peter; Churchill, Steven E; de Ruiter, Darryl J; Roberts, Eric M; Hilbert-Wolf, Hannah; Garvin, Heather M; Williams, Scott A; Delezene, Lucas K; Feuerriegel, Elen M; Randolph-Quinney, Patrick; Kivell, Tracy L; Laird, Myra F; Tawane, Gaokgatlhe; DeSilva, Jeremy M; Bailey, Shara E; Brophy, Juliet K; Meyer, Marc R; Skinner, Matthew M; Tocheri, Matthew W; VanSickle, Caroline; Walker, Christopher S; Campbell, Timothy L; Kuhn, Brian; Kruger, Ashley; Tucker, Steven; Gurtov, Alia; Hlophe, Nompumelelo; Hunter, Rick; Morris, Hannah; Peixotto, Becca; Ramalepa, Maropeng; van Rooyen, Dirk; Tsikoane, Mathabela; Boshoff, Pedro; Dirks, Paul HGM; Berger, Lee R
2017-01-01
The Rising Star cave system has produced abundant fossil hominin remains within the Dinaledi Chamber, representing a minimum of 15 individuals attributed to Homo naledi. Further exploration led to the discovery of hominin material, now comprising 131 hominin specimens, within a second chamber, the Lesedi Chamber. The Lesedi Chamber is far separated from the Dinaledi Chamber within the Rising Star cave system, and represents a second depositional context for hominin remains. In each of three collection areas within the Lesedi Chamber, diagnostic skeletal material allows a clear attribution to H. naledi. Both adult and immature material is present. The hominin remains represent at least three individuals based upon duplication of elements, but more individuals are likely present based upon the spatial context. The most significant specimen is the near-complete cranium of a large individual, designated LES1, with an endocranial volume of approximately 610 ml and associated postcranial remains. The Lesedi Chamber skeletal sample extends our knowledge of the morphology and variation of H. naledi, and evidence of H. naledi from both recovery localities shows a consistent pattern of differentiation from other hominin species. DOI: http://dx.doi.org/10.7554/eLife.24232.001 PMID:28483039
Wang, Zhao-Qiang; Hu, Chang-Hua; Si, Xiao-Sheng; Zio, Enrico
2018-02-01
Current degradation modeling and remaining useful life prediction studies share a common assumption that degrading systems are either not maintained or are maintained perfectly (i.e., restored to an as-good-as-new state). This paper concerns the issues of how to model the degradation process and predict the remaining useful life of degrading systems subjected to imperfect maintenance activities, which can restore the health condition of a degrading system to any degradation level between as-good-as new and as-bad-as old. Toward this end, a nonlinear model driven by a Wiener process is first proposed to characterize the degradation trajectory of a degrading system subjected to imperfect maintenance, where negative jumps are incorporated to quantify the influence of imperfect maintenance activities on the system's degradation. Then, the probability density function of the remaining useful life is derived analytically by a space-scale transformation, i.e., transforming the constructed degradation model with negative jumps crossing a constant threshold level into a Wiener process model crossing a random threshold level. To implement the proposed method, unknown parameters in the degradation model are estimated by the maximum likelihood estimation method. Finally, the proposed degradation modeling and remaining useful life prediction method is applied to a practical case of draught fans, a kind of mechanical system used in steel mills. The results reveal that, for a degrading system subjected to imperfect maintenance, the proposed method obtains more accurate remaining useful life predictions than the benchmark model in the literature.
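For the underlying Wiener degradation model without jumps, the first-passage-time (remaining useful life) density across a constant threshold is the inverse Gaussian, which the space-scale transformation above builds on. A sketch with illustrative parameters (not the draught-fan case data):

```python
import numpy as np

# Wiener degradation X(t) = x0 + lam*t + sig*B(t); failure when X hits w.
# The remaining-useful-life (first passage time) is inverse Gaussian.
x0, w = 0.0, 5.0      # current degradation level and failure threshold
lam, sig = 1.0, 0.5   # drift and diffusion (illustrative values)

def rul_pdf(t):
    d = w - x0
    return d / (sig * np.sqrt(2 * np.pi * t**3)) * np.exp(
        -(d - lam * t) ** 2 / (2 * sig**2 * t))

# Check the density numerically: unit mass and mean (w - x0) / lam
t = np.linspace(1e-6, 60.0, 400_000)
f = rul_pdf(t)
dt = t[1] - t[0]
mass = np.sum(f) * dt
mean_rul = np.sum(t * f) * dt
print(f"pdf mass {mass:.4f}, mean RUL {mean_rul:.3f} (theory {w / lam:.1f})")
```

In the paper's setting the negative maintenance jumps are absorbed into a random threshold, so the same first-passage machinery applies after the transformation.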
On use of radial evanescence remain term in kinematic hardening
International Nuclear Information System (INIS)
Geyer, P.
1995-10-01
Fine modelling of material behaviour can be necessary to study the mechanical strength of nuclear power plant components under cyclic loads. Ratchetting is one of the last phenomena for which numerical models have to be improved. In this paper we discuss the use of a radial evanescence remain term in kinematic hardening to improve the description of ratchetting in biaxial loading tests. It is well known that the Chaboche elastoplastic model with two non-linear kinematic hardening variables, initially proposed by Armstrong and Frederick, usually over-predicts the accumulation of ratchetting strain. Burlet and Cailletaud proposed in 1987 a non-linear kinematic rule with a radial evanescence remain term. The two models lead to identical formulations for proportional loadings. In the case of a biaxial loading test (primary + secondary loading), the Burlet and Cailletaud model leads to accommodation, whereas the Chaboche model leads to ratchetting with a constant strain increment. So we can have an under-estimate with the first model and an over-estimate with the second. An easy way to improve the description of ratchetting is to combine the two kinematic rules; such an idea is already used by Delobelle in his model. With analytical results in the case of tension-torsion tests, we show in the first part of the paper the interest of the radial evanescence remain term in the non-linear kinematic rule for describing ratchetting: we give the conditions to obtain adaptation, accommodation or ratchetting, and the value of the strain increment in the last case. In the second part of the paper, we propose to modify the elastoplastic Chaboche model by coupling the two types of hardening by means of two scalar parameters which can be identified independently on biaxial loading tests. Identifying these two parameters amounts to making assumptions about the directions of strain in order to adjust the ratchetting to experimental observations. We use the experimental results on the austenitic steel 316L at room
Highly efficient DNA extraction method from skeletal remains
Directory of Open Access Journals (Sweden)
Irena Zupanič Pajnič
2011-03-01
Full Text Available Background: This paper describes in detail the method of DNA extraction developed to acquire high-quality DNA from Second World War skeletal remains. The same method is also used for molecular genetic identification of unknown decomposed bodies in routine forensic casework where only bones and teeth are suitable for DNA typing. We analysed 109 bones and two teeth from WWII mass graves in Slovenia. Methods: We cleaned the bones and teeth, removed surface contaminants and ground the bones into powder using liquid nitrogen. Prior to isolating the DNA in parallel using the BioRobot EZ1 (Qiagen), the powder was decalcified for three days. The nuclear DNA of the samples was quantified by a real-time PCR method. We acquired autosomal genetic profiles and Y-chromosome haplotypes of the bones and teeth by PCR amplification of microsatellites, and mtDNA haplotypes. For the purpose of traceability in the event of contamination, we prepared elimination databases including genetic profiles of the nuclear and mtDNA of all persons who had been in contact with the skeletal remains in any way. Results: We extracted up to 55 ng DNA/g from the teeth, up to 100 ng DNA/g from the femurs, up to 30 ng DNA/g from the tibias and up to 0.5 ng DNA/g from the humerus. The typing of autosomal and Y-STR loci was successful in all of the teeth, in 98% of the femurs, and in 75% to 81% of the tibias and humerus. The typing of mtDNA was successful in all of the teeth, and in 96% to 98% of the bones. Conclusions: We managed to obtain nuclear DNA for successful STR typing from skeletal remains that were over 60 years old. The method of DNA extraction described here has proved to be highly efficient. We obtained 0.8 to 100 ng DNA/g of teeth or bones and complete genetic profiles of autosomal DNA, Y-STR haplotypes, and mtDNA haplotypes from only 0.5 g bone and teeth samples.
Approximated solutions to Born-Infeld dynamics
Energy Technology Data Exchange (ETDEWEB)
Ferraro, Rafael [Instituto de Astronomía y Física del Espacio (IAFE, CONICET-UBA),Casilla de Correo 67, Sucursal 28, 1428 Buenos Aires (Argentina); Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina); Nigro, Mauro [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina)
2016-02-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
The Hartree-Fock seniority approximation
International Nuclear Information System (INIS)
Gomez, J.M.G.; Prieto, C.
1986-01-01
A new self-consistent method is used to take into account the mean-field and the pairing correlations in nuclei at the same time. We call it the Hartree-Fock seniority approximation, because the long-range and short-range correlations are treated in the frameworks of Hartree-Fock theory and the seniority scheme. The method is developed in detail for a minimum-seniority variational wave function in the coordinate representation for an effective interaction of the Skyrme type. An advantage of the present approach over the Hartree-Fock-Bogoliubov theory is the exact conservation of angular momentum and particle number. Furthermore, the computational effort required in the Hartree-Fock seniority approximation is similar to that of the pure Hartree-Fock picture. Some numerical calculations for Ca isotopes are presented. (orig.)
Analytical Ballistic Trajectories with Approximately Linear Drag
Directory of Open Access Journals (Sweden)
Giliam J. P. de Carpentier
2014-01-01
Full Text Available This paper introduces a practical analytical approximation of projectile trajectories in 2D and 3D roughly based on a linear drag model and explores a variety of different planning algorithms for these trajectories. Although the trajectories are only approximate, they still capture many of the characteristics of a real projectile in free fall under the influence of an invariant wind, gravitational pull, and terminal velocity, while the required math for these trajectories and planners is still simple enough to efficiently run on almost all modern hardware devices. Together, these properties make the proposed approach particularly useful for real-time applications where accuracy and performance need to be carefully balanced, such as in computer games.
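The closed-form trajectory for a linear drag model toward a wind-relative terminal velocity can be written down directly; the sketch below (illustrative constants, not the paper's planners) checks the analytic position against brute-force numerical integration of the same ODE:

```python
import numpy as np

# Linear drag toward a wind-relative terminal velocity:
#   v'(t) = g_vec + k*(wind - v)
#   v(t)  = v_inf + (v0 - v_inf) * exp(-k t),  v_inf = wind + g_vec / k
#   p(t)  = p0 + v_inf t + (v0 - v_inf) * (1 - exp(-k t)) / k
k = 0.8                        # drag coefficient (1/s)
g_vec = np.array([0.0, -9.81])
wind = np.array([2.0, 0.0])
p0 = np.array([0.0, 0.0])
v0 = np.array([20.0, 15.0])
v_inf = wind + g_vec / k       # terminal velocity

def pos(t):
    return p0 + v_inf * t + (v0 - v_inf) * (1.0 - np.exp(-k * t)) / k

# Cross-check against small-step Euler integration of the same ODE
dt, p, v = 1e-4, p0.copy(), v0.copy()
for _ in range(int(2.0 / dt)):
    v = v + dt * (g_vec + k * (wind - v))
    p = p + dt * v
print(f"analytic {pos(2.0)}, numeric {p}")
```

Because `pos(t)` is a cheap closed form, planners can evaluate and invert it per frame, which is the performance/accuracy balance the paper targets for games.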
Simple Lie groups without the approximation property
DEFF Research Database (Denmark)
Haagerup, Uffe; de Laat, Tim
2013-01-01
For a locally compact group G, let A(G) denote its Fourier algebra, and let M0A(G) denote the space of completely bounded Fourier multipliers on G. The group G is said to have the Approximation Property (AP) if the constant function 1 can be approximated by a net in A(G) in the weak-∗ topology...... on the space M0A(G). Recently, Lafforgue and de la Salle proved that SL(3,R) does not have the AP, implying the first example of an exact discrete group without it, namely, SL(3,Z). In this paper we prove that Sp(2,R) does not have the AP. It follows that all connected simple Lie groups with finite center...
The optimal XFEM approximation for fracture analysis
International Nuclear Information System (INIS)
Jiang Shouyan; Du Chengbin; Ying Zongquan
2010-01-01
The extended finite element method (XFEM) provides an effective tool for analyzing fracture mechanics problems. An XFEM approximation consists of standard finite elements, which are used in the major part of the domain, and enriched elements in the enriched sub-domain for capturing special solution properties such as discontinuities and singularities. However, two issues in the standard XFEM deserve special attention: efficient numerical integration methods and an appropriate construction of the blending elements. In this paper, an optimal XFEM approximation is proposed to overcome the disadvantages of the standard XFEM mentioned above. Modified enrichment functions are presented that can be reproduced exactly everywhere in the domain. The corresponding FORTRAN program is developed for fracture analysis. A classic problem of fracture mechanics is used to benchmark the program. The results indicate that the optimal XFEM can alleviate the errors and improve numerical precision.
Approximated solutions to Born-Infeld dynamics
International Nuclear Information System (INIS)
Ferraro, Rafael; Nigro, Mauro
2016-01-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
Traveltime approximations for inhomogeneous HTI media
Alkhalifah, Tariq Ali
2011-01-01
Traveltime information is convenient for parameter estimation, especially if the medium is described by an anisotropic set of parameters. This is particularly true if we can relate traveltimes analytically to these medium parameters, which is generally hard to do in inhomogeneous media. As a result, I develop traveltime approximations for horizontally transversely isotropic (HTI) media as simplified, and even linear, functions of the anisotropic parameters. This is accomplished by perturbing the solution of the HTI eikonal equation with respect to η and the azimuthal symmetry direction (usually used to describe the fracture direction) from a generally inhomogeneous, elliptically anisotropic background medium. The resulting approximations can provide an accurate analytical description of the traveltime in a homogeneous background compared with previously published moveout equations. These equations allow us to readily extend the inhomogeneous elliptically anisotropic background model to an HTI model with variable, but smoothly varying, η and horizontal symmetry direction values. © 2011 Society of Exploration Geophysicists.
Approximate radiative solutions of the Einstein equations
International Nuclear Information System (INIS)
Kuusk, P.; Unt, V.
1976-01-01
In this paper the external field of a bounded source emitting gravitational radiation is considered. A successive approximation method is used to integrate the Einstein equations in Bondi's coordinates (Bondi et al, Proc. R. Soc.; A269:21 (1962)). A method of separation of angular variables is worked out and the approximate Einstein equations are reduced to key equations. The losses of mass, momentum, and angular momentum due to gravitational multipole radiation are found. It is demonstrated that in the case of proper treatment a real mass occurs instead of a mass aspect in a solution of the Einstein equations. In an appendix Bondi's new function is given in terms of sources. (author)
Nonlinear analysis approximation theory, optimization and applications
2014-01-01
Many of our daily-life problems can be written in the form of an optimization problem. Therefore, solution methods are needed to solve such problems. Due to the complexity of the problems, it is not always easy to find the exact solution. However, approximate solutions can be found. The theory of the best approximation is applicable in a variety of problems arising in nonlinear functional analysis and optimization. This book highlights interesting aspects of nonlinear analysis and optimization together with many applications in the areas of physical and social sciences including engineering. It is immensely helpful for young graduates and researchers who are pursuing research in this field, as it provides abundant research resources for researchers and post-doctoral fellows. This will be a valuable addition to the library of anyone who works in the field of applied mathematics, economics and engineering.
Fast approximate convex decomposition using relative concavity
Ghosh, Mukulika; Amato, Nancy M.; Lu, Yanyan; Lien, Jyh-Ming
2013-01-01
Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.
Fast Approximate Joint Diagonalization Incorporating Weight Matrices
Czech Academy of Sciences Publication Activity Database
Tichavský, Petr; Yeredor, A.
2009-01-01
Roč. 57, č. 3 (2009), s. 878-891 ISSN 1053-587X R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : autoregressive processes * blind source separation * nonstationary random processes Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.212, year: 2009 http://library.utia.cas.cz/separaty/2009/SI/tichavsky-fast approximate joint diagonalization incorporating weight matrices.pdf
Mean-field approximation minimizes relative entropy
International Nuclear Information System (INIS)
Bilbro, G.L.; Snyder, W.E.; Mann, R.C.
1991-01-01
The authors derive the mean-field approximation from the information-theoretic principle of minimum relative entropy instead of by minimizing Peierls's inequality for the Weiss free energy of statistical physics theory. They show that information theory leads to the statistical mechanics procedure. As an example, they consider a problem in binary image restoration. They find that mean-field annealing compares favorably with the stochastic approach
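The mean-field annealing idea for binary image restoration can be sketched as follows: each ±1 pixel is replaced by its mean value, which is updated self-consistently while the inverse temperature is raised. This is a toy sketch under assumed parameters (coupling strength, data weight, annealing schedule), not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mean-field annealing for binary image restoration. Pixels s_i in {-1,+1};
# the energy couples each pixel to its 4 neighbours (smoothness) and to the
# noisy observation (data term). The mean field m_i = <s_i> satisfies
# m_i = tanh(beta * local_field_i), iterated to self-consistency per beta.

def mean_field_restore(noisy, coupling=1.0, data_weight=1.5,
                       betas=(0.5, 1.0, 2.0, 4.0), sweeps=20):
    m = noisy.astype(float).copy()
    for beta in betas:                       # annealing: raise beta (lower T)
        for _ in range(sweeps):
            nb = (np.roll(m, 1, 0) + np.roll(m, -1, 0) +
                  np.roll(m, 1, 1) + np.roll(m, -1, 1))
            m = np.tanh(beta * (coupling * nb + data_weight * noisy))
    return np.sign(m)

truth = -np.ones((32, 32)); truth[8:24, 8:24] = 1.0   # a square on background
flip = rng.random(truth.shape) < 0.15                 # 15% random pixel flips
noisy = np.where(flip, -truth, truth)

restored = mean_field_restore(noisy)
print((noisy != truth).mean(), (restored != truth).mean())
```

The deterministic mean-field updates play the role that stochastic sampling plays in simulated annealing, which is the comparison the abstract refers to.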
On approximation of functions by product operators
Directory of Open Access Journals (Sweden)
Hare Krishna Nigam
2013-12-01
Full Text Available In the present paper, two new results on the degree of approximation of a function f belonging to the class Lip(α, r), 1 ≤ r < ∞, and the weighted class W(L_r, ξ(t)), 1 ≤ r < ∞, by (C,2)(E,1) product operators have been obtained. The results obtained in the present paper generalize various known results on single operators.
Markdown Optimization via Approximate Dynamic Programming
Directory of Open Access Journals (Sweden)
Coşgun
2013-02-01
Full Text Available We consider the markdown optimization problem faced by a leading apparel retail chain. Because of substitution among products, the markdown policy of one product affects the sales of other products. Therefore, markdown policies for product groups having significant cross-price elasticity with each other should be jointly determined. Since the state space of the problem is very large, we use Approximate Dynamic Programming. Finally, we provide insights into how each product's price affects the markdown policy.
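For a single product, the markdown problem is a small dynamic program; the sketch below (with assumed prices, demand, and horizon) makes the structure explicit. It is the joint multi-product version, whose state space grows combinatorially, that forces the approximate-dynamic-programming treatment the paper uses.

```python
from functools import lru_cache

# Toy single-product markdown DP (illustrative; not the paper's model).
# State: (period, remaining inventory, current rung on the price ladder);
# prices may only decrease over time.

PRICES = [10.0, 8.0, 6.0]           # markdown ladder
DEMAND = {10.0: 1, 8.0: 2, 6.0: 3}  # assumed units sold per period at each price
T = 4                               # number of selling periods

@lru_cache(maxsize=None)
def V(t, inv, p_idx):
    """Maximum revenue from period t with inv units left at price index p_idx."""
    if t == T or inv == 0:
        return 0.0
    best = -1.0
    for j in range(p_idx, len(PRICES)):   # keep the price or mark down
        price = PRICES[j]
        sold = min(DEMAND[price], inv)
        best = max(best, sold * price + V(t + 1, inv - sold, j))
    return best

print(V(0, 6, 0))  # 52.0: sell at 10 for two periods, then mark down to 8
```

With several substitutable products the state would be a vector of inventories and price rungs, which is exactly where value-function approximation replaces the exact table above.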
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-01-07
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-01-05
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.
Factorized Approximate Inverses With Adaptive Dropping
Czech Academy of Sciences Publication Activity Database
Kopal, Jiří; Rozložník, Miroslav; Tůma, Miroslav
2016-01-01
Roč. 38, č. 3 (2016), A1807-A1820 ISSN 1064-8275 R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : approximate inverses * incomplete factorization * Gram–Schmidt orthogonalization * preconditioned iterative methods Subject RIV: BA - General Mathematics Impact factor: 2.195, year: 2016
Semiclassical approximation in Batalin-Vilkovisky formalism
International Nuclear Information System (INIS)
Schwarz, A.
1993-01-01
The geometry of supermanifolds provided with a Q-structure (i.e. with an odd vector field Q satisfying {Q, Q}=0), a P-structure (odd symplectic structure) and an S-structure (volume element) or with various combinations of these structures is studied. The results are applied to the analysis of the Batalin-Vilkovisky approach to the quantization of gauge theories. In particular the semiclassical approximation in this approach is expressed in terms of Reidemeister torsion. (orig.)
Approximation for limit cycles and their isochrons.
Demongeot, Jacques; Françoise, Jean-Pierre
2006-12-01
Local analysis of trajectories of dynamical systems near an attractive periodic orbit displays the notion of asymptotic phase and isochrons. These notions are quite useful in applications to biosciences. In this note, we give an expression for the first approximation of equations of isochrons in the setting of perturbations of polynomial Hamiltonian systems. This method can be generalized to perturbations of systems that have a polynomial integral factor (like the Lotka-Volterra equation).
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Tempone, Raul
2015-01-01
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design
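The mechanism behind H-matrix compression can be illustrated on a single off-diagonal block of an exponential covariance matrix (the Matern family with ν = 1/2): for point sets separated by a coordinate split, the block is numerically low rank, so a truncated SVD stores it far below O(n²). A real H-matrix applies this hierarchically; the point set and tolerance below are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

# One off-diagonal block of an exponential (Matern nu = 1/2) covariance
# matrix, compressed by truncated SVD.

n = 400
pts = rng.random((2 * n, 2))
pts = pts[np.argsort(pts[:, 0])]            # split points into left/right halves

left, right = pts[:n], pts[n:]
d = np.linalg.norm(left[:, None, :] - right[None, :, :], axis=-1)
B = np.exp(-d)                              # off-diagonal covariance block

U, s, Vt = np.linalg.svd(B, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 1.0 - 1e-8)) + 1   # rank for ~1e-4 rel. error
B_k = (U[:, :k] * s[:k]) @ Vt[:k]

rel_err = np.linalg.norm(B - B_k) / np.linalg.norm(B)
print(k, rel_err)   # rank k is far below n at tiny relative error
```

Storing the factors costs 2nk numbers instead of n² for this block, which is where the overall O(n log n) budget comes from once the blocks are organized in a hierarchy.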
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Tempone, Raul
2015-01-01
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.
Approximate Inverse Preconditioners with Adaptive Dropping
Czech Academy of Sciences Publication Activity Database
Kopal, J.; Rozložník, Miroslav; Tůma, Miroslav
2015-01-01
Roč. 84, June (2015), s. 13-20 ISSN 0965-9978 R&D Projects: GA ČR(CZ) GAP108/11/0853; GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords : approximate inverse * Gram-Schmidt orthogonalization * incomplete decomposition * preconditioned conjugate gradient method * algebraic preconditioning * pivoting Subject RIV: BA - General Mathematics Impact factor: 1.673, year: 2015
Approximations and Implementations of Nonlinear Filtering Schemes.
1988-02-01
An analytical approximation for resonance integral
International Nuclear Information System (INIS)
Magalhaes, C.G. de; Martinez, A.S.
1985-01-01
A method is developed that allows an analytical solution to be obtained for the resonance integral. The problem formulation is entirely theoretical and based on physical concepts of a general character. The analytical expression for the integral does not involve any empirical correlation or parameter. Results of the approximation are compared with reference values for each individual resonance and for the sum of all resonances. (M.C.K.) [pt
Fast approximate convex decomposition using relative concavity
Ghosh, Mukulika
2013-02-01
Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.
TMI in perspective: reactor containment stands up, difficult decisions remain
International Nuclear Information System (INIS)
Corey, G.R.
1979-01-01
Commonwealth Edison Co. is increasing its commitment to nuclear energy after reviewing the performance of the Three Mile Island reactor containment systems. Both the reactor vessel and the secondary containment remained intact and no radiation was reported in the soil or water. The public discussion of energy options which followed the accident will benefit both the public and technical community even if there is a temporary slowdown in nuclear power development. The realities of energy supplies have become evident; i.e., that nuclear and coal are the only available options for the short-term. The discussion should also lead to better personnel training, regulatory reforms, risk-sharing insurance, and international standards. The public hysteria triggered by the accident stemmed partly from the combination of unfortunate incidents and the media coverage, which led to hasty conclusions
Oldest Directly Dated Remains of Sheep in China
Dodson, John; Dodson, Eoin; Banati, Richard; Li, Xiaoqiang; Atahan, Pia; Hu, Songmei; Middleton, Ryan J.; Zhou, Xinying; Nan, Sun
2014-11-01
The origins of domesticated sheep (Ovis sp.) in China remain unknown. Previous workers have speculated that sheep may have been present in China up to 7000 years ago, however many claims are based on associations with archaeological material rather than independent dates on sheep material. Here we present 7 radiocarbon dates on sheep bone from Inner Mongolia, Ningxia and Shaanxi provinces. DNA analysis on one of the bones confirms it is Ovis sp. The oldest ages are about 4700 to 4400 BCE and are thus the oldest objectively dated Ovis material in eastern Asia. The graphitised bone collagen had δ13C values indicating some millet was represented in the diet. This probably indicates sheep were in a domestic setting where millet was grown. The younger samples had δ13C values indicating that even more millet was in the diet, and this was likely related to changes in foddering practices.
On use of radial evanescence remain term in kinematic hardening
International Nuclear Information System (INIS)
Geyer, P.
1995-01-01
This paper demonstrates the interest of a non-linear kinematic hardening rule with a radial evanescence remain term, as proposed for modelling multiaxial ratchetting. Using analytical calculations for the tension/torsion test, this ratchetting is compared with that given by the Armstrong-Frederick rule. A modification is then proposed for Chaboche's elastoplastic model with two non-linear kinematic variables, coupling the two types of hardening by means of two scalar parameters. Identifying these two parameters amounts to adjusting the predicted strain directions so that the ratchetting matches experimental observations. Biaxial ratchetting tests on 316L stainless steel specimens at ambient temperature show that satisfactory modelling of multiaxial ratchetting is obtained. (author). 4 refs., 5 figs
Psychotherapy for Borderline Personality Disorder: Progress and Remaining Challenges.
Links, Paul S; Shah, Ravi; Eynan, Rahel
2017-03-01
The main purpose of this review was to critically evaluate the literature on psychotherapies for borderline personality disorder (BPD) published over the past 5 years to identify the progress with remaining challenges and to determine priority areas for future research. A systematic review of the literature over the last 5 years was undertaken. The review yielded 184 relevant abstracts, and after applying inclusion criteria, 16 articles were fully reviewed based on the articles' implications for future research and/or clinical practice. Our review indicated that patients with various severities benefited from psychotherapy; more intensive therapies were not significantly superior to less intensive therapies; enhancing emotion regulation processes and fostering more coherent self-identity were important mechanisms of change; therapies had been extended to patients with BPD and posttraumatic stress disorder; and more research was needed to be directed at functional outcomes.
[Alcohol and work: remaining sober and return to work].
Vittadini, G; Bandirali, M
2007-01-01
One of the most complex alcohol-driven problems is job loss and the subsequent attempts to return to a professional activity. In order to better understand the issue, an epidemiological investigation was carried out on a group of 162 alcoholics while they were hospitalised in a specialised clinic. The outcome shows the importance of remaining sober in keeping one's job or being returned to it. Unfortunately, the local resources at hand, first of all joining a mutual self-help group, are still too little known and thus clearly underused. Therefore, an informative action within companies is highly desirable. Those alcoholics suffering from serious illnesses, especially mental ones, represent a different issue; for these people a greater involvement of public authorities in creating protected job openings is desirable.
Differential Decomposition Among Pig, Rabbit, and Human Remains.
Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe
2018-03-30
While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.
Reidentification of avian embryonic remains from the cretaceous of mongolia.
Varricchio, David J; Balanoff, Amy M; Norell, Mark A
2015-01-01
Embryonic remains within a small (4.75 by 2.23 cm) egg from the Late Cretaceous, Mongolia are here re-described. High-resolution X-ray computed tomography (HRCT) was used to digitally prepare and describe the enclosed embryonic bones. The egg, IGM (Mongolian Institute for Geology, Ulaanbaatar) 100/2010, with a three-part shell microstructure, was originally assigned to Neoceratopsia implying extensive homoplasy among eggshell characters across Dinosauria. Re-examination finds the forelimb significantly longer than the hindlimbs, proportions suggesting an avian identification. Additional, postcranial apomorphies (strut-like coracoid, cranially located humeral condyles, olecranon fossa, slender radius relative to the ulna, trochanteric crest on the femur, and ulna longer than the humerus) identify the embryo as avian. Presence of a dorsal coracoid fossa and a craniocaudally compressed distal humerus with a strongly angled distal margin support a diagnosis of IGM 100/2010 as an enantiornithine. Re-identification eliminates the implied homoplasy of this tri-laminate eggshell structure, and instead associates enantiornithine birds with eggshell microstructure composed of a mammillary, squamatic, and external zones. Posture of the embryo follows that of other theropods with fore- and hindlimbs folded parallel to the vertebral column and the elbow pointing caudally just dorsal to the knees. The size of the egg and embryo of IGM 100/2010 is similar to the two other Mongolian enantiornithine eggs. Well-ossified skeletons, as in this specimen, characterize all known enantiornithine embryos suggesting precocial hatchlings, comparing closely to late stage embryos of modern precocial birds that are both flight- and run-capable upon hatching. Extensive ossification in enantiornithine embryos may contribute to their relatively abundant representation in the fossil record. Neoceratopsian eggs remain unrecognized in the fossil record.
Reidentification of avian embryonic remains from the cretaceous of mongolia.
Directory of Open Access Journals (Sweden)
David J Varricchio
Full Text Available Embryonic remains within a small (4.75 by 2.23 cm) egg from the Late Cretaceous, Mongolia are here re-described. High-resolution X-ray computed tomography (HRCT) was used to digitally prepare and describe the enclosed embryonic bones. The egg, IGM (Mongolian Institute for Geology, Ulaanbaatar) 100/2010, with a three-part shell microstructure, was originally assigned to Neoceratopsia implying extensive homoplasy among eggshell characters across Dinosauria. Re-examination finds the forelimb significantly longer than the hindlimbs, proportions suggesting an avian identification. Additional, postcranial apomorphies (strut-like coracoid, cranially located humeral condyles, olecranon fossa, slender radius relative to the ulna, trochanteric crest on the femur, and ulna longer than the humerus) identify the embryo as avian. Presence of a dorsal coracoid fossa and a craniocaudally compressed distal humerus with a strongly angled distal margin support a diagnosis of IGM 100/2010 as an enantiornithine. Re-identification eliminates the implied homoplasy of this tri-laminate eggshell structure, and instead associates enantiornithine birds with eggshell microstructure composed of mammillary, squamatic, and external zones. Posture of the embryo follows that of other theropods with fore- and hindlimbs folded parallel to the vertebral column and the elbow pointing caudally just dorsal to the knees. The size of the egg and embryo of IGM 100/2010 is similar to the two other Mongolian enantiornithine eggs. Well-ossified skeletons, as in this specimen, characterize all known enantiornithine embryos suggesting precocial hatchlings, comparing closely to late stage embryos of modern precocial birds that are both flight- and run-capable upon hatching. Extensive ossification in enantiornithine embryos may contribute to their relatively abundant representation in the fossil record. Neoceratopsian eggs remain unrecognized in the fossil record.
Conference on Abstract Spaces and Approximation
Szökefalvi-Nagy, B; Abstrakte Räume und Approximation; Abstract spaces and approximation
1969-01-01
The present conference took place at Oberwolfach, July 18-27, 1968, as a direct follow-up on a meeting on Approximation Theory [1] held there from August 4-10, 1963. The emphasis was on theoretical aspects of approximation, rather than the numerical side. Particular importance was placed on the related fields of functional analysis and operator theory. Thirty-nine papers were presented at the conference and one more was subsequently submitted in writing. All of these are included in these proceedings. In addition there is a report on new and unsolved problems based upon a special problem session and later communications from the participants. A special role is played by the survey papers also presented in full. They cover a broad range of topics, including invariant subspaces, scattering theory, Wiener-Hopf equations, interpolation theorems, contraction operators, approximation in Banach spaces, etc. The papers have been classified according to subject matter into five chapters, but it needs littl...
Development of the relativistic impulse approximation
International Nuclear Information System (INIS)
Wallace, S.J.
1985-01-01
This talk contains three parts. Part I reviews the developments which led to the relativistic impulse approximation for proton-nucleus scattering. In Part II, problems with the impulse approximation in its original form - principally the low energy problem - are discussed and traced to pionic contributions. Use of pseudovector covariants in place of pseudoscalar ones in the NN amplitude provides more satisfactory low energy results, however, the difference between pseudovector and pseudoscalar results is ambiguous in the sense that it is not controlled by NN data. Only with further theoretical input can the ambiguity be removed. Part III of the talk presents a new development of the relativistic impulse approximation which is the result of work done in the past year and a half in collaboration with J.A. Tjon. A complete NN amplitude representation is developed and a complete set of Lorentz invariant amplitudes are calculated based on a one-meson exchange model and appropriate integral equations. A meson theoretical basis for the important pair contributions to proton-nucleus scattering is established by the new developments. 28 references
Ranking Support Vector Machine with Kernel Approximation
Directory of Open Access Journals (Sweden)
Kai Chen
2017-01-01
Full Text Available Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, learning methods for nonlinear RankSVM remain time-consuming because of the cost of computing the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves much faster training than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
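Random Fourier features, one of the two approximation methods explored, replace the Gaussian kernel with an explicit finite-dimensional feature map so the kernel matrix never has to be formed and the ranking model becomes linear. A minimal numpy sketch (assumed data and kernel width; not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Fourier features: approximate k(x, y) = exp(-gamma * ||x - y||^2)
# by z(x).z(y), where z projects onto D random cosine features whose
# frequencies are drawn from the kernel's spectral density N(0, 2*gamma*I).

def rff(X, gamma, D, rng):
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = rng.normal(size=(200, 5))      # assumed toy data
gamma = 0.5
K_exact = np.exp(-gamma * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
Z = rff(X, gamma, D=4000, rng=rng)
K_approx = Z @ Z.T

err = np.abs(K_exact - K_approx).max()
print(err)   # shrinks as O(1/sqrt(D))
```

Once the data are mapped through z, any linear learner (here, the primal truncated Newton solver on pairwise differences) trains in time linear in D rather than quadratic in the number of samples.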
A Gaussian Approximation Potential for Silicon
Bernstein, Noam; Bartók, Albert; Kermode, James; Csányi, Gábor
We present an interatomic potential for silicon using the Gaussian Approximation Potential (GAP) approach, which uses the Gaussian process regression method to approximate the reference potential energy surface as a sum of atomic energies. Each atomic energy is approximated as a function of the local environment around the atom, which is described with the smooth overlap of atomic environments (SOAP) descriptor. The potential is fit to a database of energies, forces, and stresses calculated using density functional theory (DFT) on a wide range of configurations from zero and finite temperature simulations. These include crystalline phases, liquid, amorphous, and low coordination structures, and diamond-structure point defects, dislocations, surfaces, and cracks. We compare the results of the potential to DFT calculations, as well as to previously published models including Stillinger-Weber, Tersoff, modified embedded atom method (MEAM), and ReaxFF. We show that it is very accurate as compared to the DFT reference results for a wide range of properties, including low energy bulk phases, liquid structure, as well as point, line, and plane defects in the diamond structure.
Approximate modal analysis using Fourier decomposition
International Nuclear Information System (INIS)
Kozar, Ivica; Jericevic, Zeljko; Pecak, Tatjana
2010-01-01
The paper presents a novel numerical approach for the approximate solution of the eigenvalue problem and investigates its suitability for modal analysis of structures, with special attention to plate structures. The approach is based on Fourier transformation of the matrix equation into the frequency domain and subsequent removal of potentially less significant frequencies. The procedure results in a much reduced problem that is used in the eigenvalue calculation. After the calculation, eigenvectors are expanded and transformed back into the time domain. The principles are presented in Jericevic [1]. The Fourier transform can be formulated so that parts of the matrix that should not be approximated are not transformed but fully preserved. In this paper we present a formulation that preserves the central or edge parts of the matrix and compare it with the formulation that transforms the whole matrix. Numerical experiments on transformed structural dynamic matrices describe the quality of the approximations obtained in modal analysis of structures. On the basis of the numerical experiments, one of the three approaches to matrix reduction is recommended.
Green-Ampt approximations: A comprehensive analysis
Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.
2016-04-01
Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (e.g., percent relative error, maximum absolute percent relative error, average absolute percent relative errors, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computer computation time are used for assessing the model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in the selection of accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
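The reason explicit approximations are needed is that the GA cumulative-infiltration equation, F = Kt + ψΔθ ln(1 + F/(ψΔθ)), is implicit in F. The sketch below solves it by fixed-point iteration and compares against a crude explicit formula combining the short-time square-root behaviour with the long-time linear term; that explicit form is for illustration only and is not one of the nine published approximations, and the soil parameters are assumed example values.

```python
import math

# Green-Ampt cumulative infiltration F (implicit in F):
#   F = K*t + S * ln(1 + F/S),  where S = psi * dtheta

K = 0.65               # hydraulic conductivity (cm/h), assumed
S = 16.7 * 0.3         # wetting-front suction head * moisture deficit (cm)
t = 2.0                # elapsed time (h)

# Implicit solution via fixed-point iteration (the map is a contraction,
# since d/dF [K*t + S*ln(1 + F/S)] = S/(S + F) < 1 for F > 0).
F = K * t
for _ in range(200):
    F = K * t + S * math.log(1.0 + F / S)

# Crude explicit approximation: leading short-time term sqrt(2*tau) plus the
# long-time linear term tau, in dimensionless time tau = K*t/S (illustrative).
tau = K * t / S
F_explicit = S * (tau + math.sqrt(2.0 * tau))

print(F, F_explicit)
```

The statistical indicators in the study quantify exactly this kind of gap between an explicit formula and the converged implicit solution, across soils and times.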
An Origami Approximation to the Cosmic Web
Neyrinck, Mark C.
2016-10-01
The powerful Lagrangian view of structure formation was essentially introduced to cosmology by Zel'dovich. In the current cosmological paradigm, a dark-matter-sheet 3D manifold, inhabiting 6D position-velocity phase space, was flat (with vanishing velocity) at the big bang. Afterward, gravity stretched and bunched the sheet together in different places, forming a cosmic web when projected to the position coordinates. Here, I explain some properties of an origami approximation, in which the sheet does not stretch or contract (an assumption that is false in general), but is allowed to fold. Even without stretching, the sheet can form an idealized cosmic web, with convex polyhedral voids separated by straight walls and filaments, joined by convex polyhedral nodes. The nodes form in `polygonal' or `polyhedral' collapse, somewhat like spherical/ellipsoidal collapse, except incorporating simultaneous filament and wall formation. The origami approximation allows phase-space geometries of nodes, filaments, and walls to be more easily understood, and may aid in understanding spin correlations between nearby galaxies. This contribution explores kinematic origami-approximation models giving velocity fields for the first time.
Function approximation of tasks by neural networks
International Nuclear Information System (INIS)
Gougam, L.A.; Chikhi, A.; Mekideche-Chafa, F.
2008-01-01
For several years now, neural network models have enjoyed wide popularity, being applied to problems of regression, classification and time series analysis. Neural networks have recently been seen as attractive tools for developing efficient solutions for many real-world problems in function approximation. The latter is a very important task in environments where computation has to be based on extracting information from data samples in real-world processes. In a previous contribution, we used a well-known simplified architecture to show that it provides a reasonably efficient, practical and robust multi-frequency analysis. We investigated the universal approximation theory of neural networks whose transfer functions are: sigmoid (because of biological relevance), Gaussian, and two specified families of wavelets. The latter were found to be more appropriate to use. The aim of the present contribution is therefore to use a Mexican hat wavelet as transfer function to approximate different tasks relevant and inherent to various applications in physics. The results complement and provide new insights into previously published results on this problem.
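A minimal sketch of the idea (the architecture is assumed for illustration: a fixed translation/dilation grid of Mexican hat units with output weights fit by least squares; the paper's training procedure may differ):

```python
import numpy as np

def mexican_hat(u):
    # psi(u) = (1 - u^2) * exp(-u^2 / 2), the Mexican hat wavelet
    return (1.0 - u**2) * np.exp(-0.5 * u**2)

x = np.linspace(-3.0, 3.0, 200)
y = np.sin(2.0 * x)                      # task to approximate (assumed example)

centers = np.linspace(-3.0, 3.0, 30)     # fixed translation grid
scale = 0.4                              # fixed dilation
Phi = mexican_hat((x[:, None] - centers[None, :]) / scale)   # (200, 30)

# Output-layer weights by linear least squares.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
err = np.max(np.abs(Phi @ w - y))
print(err)
```

With the hidden layer fixed, only a linear problem remains, which is one reason wavelet transfer functions are attractive for function approximation.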
Simultaneous perturbation stochastic approximation for tidal models
Altaf, M.U.
2011-05-12
The Dutch continental shelf model (DCSM) is a shallow sea model of the entire continental shelf which is used operationally in the Netherlands to forecast storm surges in the North Sea. The forecasts are necessary to support the decision on the timely closure of the movable storm surge barriers to protect the land. In this study, an automated model calibration method, simultaneous perturbation stochastic approximation (SPSA), is implemented for tidal calibration of the DCSM. The method uses objective function evaluations to obtain the gradient approximations. The gradient approximation for the central difference method uses only two objective function evaluations, independent of the number of parameters being optimized. The calibration parameter in this study is the model bathymetry. A number of calibration experiments are performed. The effectiveness of the algorithm is evaluated in terms of the accuracy of the final results as well as the computational cost required to produce them. In doing so, comparison is made with a traditional steepest descent method and also with a newly developed proper orthogonal decomposition-based calibration method. The main findings are: (1) the SPSA method gives results comparable to the steepest descent method at little computational cost; (2) the SPSA method can be used to estimate a large number of parameters at little computational cost.
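The core of SPSA is that a single random (Rademacher) perturbation yields an estimate of the full gradient from just two loss evaluations; a minimal sketch on a toy quadratic (the gain sequences and target are assumptions, not the DCSM settings):

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(theta):
    # Stand-in for an expensive model run (e.g., one tidal simulation).
    return np.sum((theta - np.array([1.0, -2.0, 0.5]))**2)

theta = np.zeros(3)
for k in range(1, 501):
    a_k = 0.1 / k**0.602                 # standard SPSA gain decay exponents
    c_k = 0.1 / k**0.101
    delta = rng.choice([-1.0, 1.0], size=theta.size)   # simultaneous perturbation
    # Two loss evaluations, regardless of the parameter dimension:
    g = (loss(theta + c_k * delta) - loss(theta - c_k * delta)) / (2.0 * c_k * delta)
    theta -= a_k * g

print(theta, loss(theta))
```

Because the cost per iteration does not grow with the number of parameters, the same loop applies unchanged when the parameter is a high-dimensional bathymetry field.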
Blind sensor calibration using approximate message passing
International Nuclear Information System (INIS)
Schülke, Christophe; Caltagirone, Francesco; Zdeborová, Lenka
2015-01-01
The ubiquity of approximately sparse data has led a variety of communities to take great interest in compressed sensing algorithms. Although these are very successful and well understood for linear measurements with additive noise, applying them to real data can be problematic if imperfect sensing devices introduce deviations from this ideal signal acquisition process, caused by sensor decalibration or failure. We propose a message passing algorithm called calibration approximate message passing (Cal-AMP) that can treat a variety of such sensor-induced imperfections. In addition to deriving the general form of the algorithm, we numerically investigate two particular settings. In the first, a fraction of the sensors is faulty, giving readings unrelated to the signal. In the second, sensors are decalibrated and each one introduces a different multiplicative gain to the measurements. Cal-AMP shares the scalability of approximate message passing, allowing us to treat large sized instances of these problems, and experimentally exhibits a phase transition between domains of success and failure. (paper)
Ranking Support Vector Machine with Kernel Approximation.
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
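As an illustration of one of the two approximations explored, random Fourier features replace RBF kernel evaluations with inner products of explicit features, so a linear solver can stand in for the kernel machine; the dimensions and kernel width below are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 5, 4096, 1.0           # input dim, feature count, kernel width

X = rng.normal(size=(20, d))

# Exact RBF kernel: k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
sq = np.sum(X**2, axis=1)
K_exact = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma**2))

# Random Fourier features: z(x) = sqrt(2/D) * cos(W x + b),
# with W rows drawn N(0, sigma^-2 I) and b uniform on [0, 2*pi).
W = rng.normal(scale=1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

K_approx = Z @ Z.T                   # approximates K_exact entrywise
print(np.max(np.abs(K_approx - K_exact)))
```

Once the data are mapped through Z, the pairwise squared-hinge objective can be optimized in the primal without ever forming the full kernel matrix.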
Simultaneous perturbation stochastic approximation for tidal models
Altaf, M.U.; Heemink, A.W.; Verlaan, M.; Hoteit, Ibrahim
2011-01-01
The Dutch continental shelf model (DCSM) is a shallow sea model of the entire continental shelf which is used operationally in the Netherlands to forecast storm surges in the North Sea. The forecasts are necessary to support the decision on the timely closure of the movable storm surge barriers to protect the land. In this study, an automated model calibration method, simultaneous perturbation stochastic approximation (SPSA), is implemented for tidal calibration of the DCSM. The method uses objective function evaluations to obtain the gradient approximations. The gradient approximation for the central difference method uses only two objective function evaluations, independent of the number of parameters being optimized. The calibration parameter in this study is the model bathymetry. A number of calibration experiments are performed. The effectiveness of the algorithm is evaluated in terms of the accuracy of the final results as well as the computational cost required to produce them. In doing so, comparison is made with a traditional steepest descent method and also with a newly developed proper orthogonal decomposition-based calibration method. The main findings are: (1) the SPSA method gives results comparable to the steepest descent method at little computational cost; (2) the SPSA method can be used to estimate a large number of parameters at little computational cost.
Local approximation of a metapopulation's equilibrium.
Barbour, A D; McVinish, R; Pollett, P K
2018-04-18
We consider the approximation of the equilibrium of a metapopulation model, in which a finite number of patches are randomly distributed over a bounded subset [Formula: see text] of Euclidean space. The approximation is good when a large number of patches contribute to the colonization pressure on any given unoccupied patch, and when the quality of the patches varies little over the length scale determined by the colonization radius. If this is the case, the equilibrium probability of a patch at z being occupied is shown to be close to [Formula: see text], the equilibrium occupation probability in Levins's model, at any point [Formula: see text] not too close to the boundary, if the local colonization pressure and extinction rates appropriate to z are assumed. The approximation is justified by giving explicit upper and lower bounds for the occupation probabilities, expressed in terms of the model parameters. Since the patches are distributed randomly, the occupation probabilities are also random, and we complement our bounds with explicit bounds on the probability that they are satisfied at all patches simultaneously.
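For context, Levins's model referred to above has a well-known closed-form equilibrium (stated here as a standard result, independent of the elided formulas in the record): with colonization rate c and extinction rate e, dp/dt = c p(1 − p) − e p, whose nonzero equilibrium is p* = 1 − e/c. A minimal sketch with assumed rates:

```python
# Levins metapopulation model: dp/dt = c*p*(1 - p) - e*p.
# Forward-Euler integration toward the equilibrium p* = 1 - e/c.
c, e = 1.0, 0.3          # colonization and extinction rates (assumed values)

p, dt = 0.05, 0.01       # small initial occupancy, small time step
for _ in range(20000):   # integrate to t = 200, far past the transient
    p += dt * (c * p * (1.0 - p) - e * p)

print(p, 1.0 - e / c)
```

The spatially explicit model in the paper reduces to this occupancy probability locally when colonization pressure is high and patch quality varies slowly.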
Approximate particle number projection in hot nuclei
International Nuclear Information System (INIS)
Kosov, D.S.; Vdovin, A.I.
1995-01-01
Heated finite systems like, e.g., hot atomic nuclei have to be described by the canonical partition function. But this is a quite difficult technical problem and, as a rule, the grand canonical partition function is used in such studies. As a result, some shortcomings of the theoretical description appear because of the thermal fluctuations of the number of particles. Moreover, in nuclei with pairing correlations quantum number fluctuations are introduced by some approximate methods (e.g., by the standard BCS method). The exact particle number projection is very cumbersome, and an approximate number projection method for T ≠ 0 based on the formalism of thermo field dynamics is proposed. The idea of the Lipkin-Nogami method, to expand any operator as a series in powers of the number operator, is used. The system of equations for the coefficients of this expansion is written and the solution of the system in the next approximation after the BCS one is obtained. The method, which is of the 'projection after variation' type, is applied to a degenerate single-j-shell model. 14 refs., 1 tab.
Nonresonant approximations to the optical potential
International Nuclear Information System (INIS)
Kowalski, K.L.
1982-01-01
A new class of approximations to the optical potential, which includes those of the multiple-scattering variety, is investigated. These approximations are constructed so that the optical potential maintains the correct unitarity properties along with a proper treatment of nucleon identity. The special case of nucleon-nucleus scattering with complete inclusion of Pauli effects is studied in detail. The treatment is such that the optical potential receives contributions only from subsystems embedded in their own physically correct antisymmetrized subspaces. It is found that a systematic development of even the lowest-order approximations requires the use of the off-shell extension due to Alt, Grassberger, and Sandhas along with a consistent set of dynamical equations for the optical potential. In nucleon-nucleus scattering a lowest-order optical potential is obtained as part of a systematic, exact, inclusive connectivity expansion which is expected to be useful at moderately high energies. This lowest-order potential consists of an energy-shifted (tρ)-type term with three-body kinematics plus a heavy-particle exchange or pickup term. The natural appearance of the exchange-term additivity in the optical potential clarifies the role of the elastic distortion in connection with the treatment of these processes. The relationship of the relevant aspects of the present analysis of the optical potential to conventional multiple-scattering methods is discussed.
DEFF Research Database (Denmark)
Sadegh, Payman; Spall, J. C.
1998-01-01
simultaneous perturbation approximation to the gradient based on loss function measurements. SPSA is based on picking a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo...
Data-aware remaining time prediction of business process instances
Polato, M.; Sperduti, A.; Burattin, A.; Leoni, de M.
2014-01-01
Accurate prediction of the completion time of a business process instance would constitute a valuable tool when managing processes under service level agreement constraints. Such prediction, however, is a very challenging task. A wide variety of factors could influence the trend of a process
Salt Marsh Sustainability in New England: Progress and Remaining Challenges
Natural resource managers, conservationists, and scientists described marsh loss and degradation in many New England coastal systems at the 2014 “Effects of Sea Level Rise on Rhode Island’s Salt Marshes” workshop, organized by the Narragansett Bay National Estuarine Research Rese...
Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin
2016-01-01
What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, and comparative psychology, and in computational neuroscience, has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8-year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics.
Evaluation of Gaussian approximations for data assimilation in reservoir models
Iglesias, Marco A.
2013-07-14
The Bayesian framework is the standard approach for data assimilation in reservoir modeling. This framework involves characterizing the posterior distribution of geological parameters in terms of a given prior distribution and data from the reservoir dynamics, together with a forward model connecting the space of geological parameters to the data space. Since the posterior distribution quantifies the uncertainty in the geologic parameters of the reservoir, the characterization of the posterior is fundamental for the optimal management of reservoirs. Unfortunately, due to the large-scale highly nonlinear properties of standard reservoir models, characterizing the posterior is computationally prohibitive. Instead, more affordable ad hoc techniques, based on Gaussian approximations, are often used for characterizing the posterior distribution. Evaluating the performance of those Gaussian approximations is typically conducted by assessing their ability to reproduce the truth within the confidence interval provided by the ad hoc technique under consideration. This has the disadvantage of mixing up the approximation properties of the history matching algorithm employed with the information content of the particular observations used, making it hard to evaluate the effect of the ad hoc approximations alone. In this paper, we avoid this disadvantage by comparing the ad hoc techniques with a fully resolved state-of-the-art probing of the Bayesian posterior distribution. The ad hoc techniques whose performance we assess are based on (1) linearization around the maximum a posteriori estimate, (2) randomized maximum likelihood, and (3) ensemble Kalman filter-type methods. In order to fully resolve the posterior distribution, we implement a state-of-the-art Markov chain Monte Carlo (MCMC) method that scales well with respect to the dimension of the parameter space, enabling us to study realistic forward models, in two space dimensions, at a high level of grid refinement.
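Of the three ad hoc techniques assessed, the ensemble Kalman approach can be sketched in a few lines for a linear-Gaussian toy problem (dimensions, covariances, and the perturbed-observation variant below are assumptions for illustration, not the reservoir setting of the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stochastic ensemble Kalman analysis step for y = H x + noise, Gaussian prior.
n, m, N = 4, 2, 500                         # state dim, obs dim, ensemble size
H = rng.normal(size=(m, n))
R = 0.1 * np.eye(m)                         # observation-error covariance
x_true = rng.normal(size=n)
y = H @ x_true + rng.multivariate_normal(np.zeros(m), R)

ens = rng.normal(size=(N, n))               # prior ensemble ~ N(0, I)
prior_misfit = np.linalg.norm(H @ ens.mean(axis=0) - y)

P = np.cov(ens.T)                           # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain

# Perturbed-observation update of every ensemble member.
pert = rng.multivariate_normal(np.zeros(m), R, size=N)
ens = ens + (y + pert - ens @ H.T) @ K.T

post_misfit = np.linalg.norm(H @ ens.mean(axis=0) - y)
print(prior_misfit, post_misfit)
```

In the linear-Gaussian case this update samples the exact posterior as the ensemble grows; for nonlinear reservoir forward models it is precisely the Gaussian approximation whose accuracy the paper benchmarks against fully resolved MCMC.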
National Research Council Canada - National Science Library
Franke, Richard
2001-01-01
.... It was found that for all levels the approximation of the covariance data for pressure height innovations by Legendre functions led to positive coefficients for up to 25 terms except at some low and high levels...
Gaussian and 1/N approximations in semiclassical cosmology
International Nuclear Information System (INIS)
Mazzitelli, F.D.; Paz, J.P.
1989-01-01
We study the λφ⁴ theory and the interacting O(N) model in a curved background using the Gaussian approximation for the former and the large-N approximation for the latter. We obtain the renormalized version of the semiclassical Einstein equations, having in mind a future application of these models to investigate the physics of the very early Universe. We show that, while the Gaussian approximation has two different phases, in the large-N limit only one is present. The different features of the two phases are analyzed at the level of the effective field equations. We discuss the initial-value problem and find the initial conditions that make the theory renormalizable. As an example, we study the de Sitter self-consistent solutions of the semiclassical Einstein equations. Finally, for an identically zero mean value of the field we find the evolution equations for the classical field Ω(x) = ⟨φ²⟩^(1/2) and the spacetime metric. They are very similar to the ones obtained by replacing the classical potential by the one-loop effective potential in the classical equations, but do not have the drawbacks of the one-loop approximation.
Improved approximate inspirals of test bodies into Kerr black holes
International Nuclear Information System (INIS)
Gair, Jonathan R; Glampedakis, Kostas
2006-01-01
We present an improved version of the approximate scheme for generating inspirals of test bodies into a Kerr black hole recently developed by Glampedakis, Hughes and Kennefick. Their original 'hybrid' scheme was based on combining exact relativistic expressions for the evolution of the orbital elements (the semilatus rectum p and eccentricity e) with an approximate, weak-field, formula for the energy and angular momentum fluxes, amended by the assumption of constant inclination angle ι during the inspiral. Despite the fact that the resulting inspirals were overall well behaved, certain pathologies remained for orbits in the strong-field regime and for orbits which are nearly circular and/or nearly polar. In this paper we eliminate these problems by incorporating an array of improvements in the approximate fluxes. First, we add certain corrections which ensure the correct behavior of the fluxes in the limit of vanishing eccentricity and/or 90 deg. inclination. Second, we use higher order post-Newtonian formulas, adapted for generic orbits. Third, we drop the assumption of constant inclination. Instead, we first evolve the Carter constant by means of an approximate post-Newtonian expression and subsequently extract the evolution of ι. Finally, we improve the evolution of circular orbits by using fits to the angular momentum and inclination evolution determined by Teukolsky-based calculations. As an application of our improved scheme, we provide a sample of generic Kerr inspirals which we expect to be the most accurate to date, and for the specific case of nearly circular orbits we locate the critical radius where orbits begin to decircularize under radiation reaction. These easy-to-generate inspirals should become a useful tool for exploring LISA data analysis issues and may ultimately play a role in the detection of inspiral signals in the LISA data
Background approximation in automatic qualitative X-ray-fluorescent analysis
International Nuclear Information System (INIS)
Jordanov, J.; Tsanov, T.; Stefanov, R.; Jordanov, N.; Paunov, M.
1982-01-01
An empirical method is proposed for finding the dependence of the background intensity I_bg on the wavelength, based on approximating the experimentally found background values in the course of an automatic qualitative X-ray fluorescence analysis with a pre-set curve. It is assumed that the dependence I_bg(λ) is well approximated by a curve of the type I_bg = (λ − λ₀)^(f₁(λ)) exp[f₂(λ)], where f₁(λ) and f₂(λ) are linear functions with respect to the sought parameters. This assumption was checked on a 'pure' starch background, for which it is not known beforehand which points belong to the background. It was assumed that the dependence I_bg(λ) can be found from all minima in the spectrum. Three types of minima have been distinguished: (1) the lowest point between two well-resolved X-ray lines; (2) a minimum obtained as a result of statistical fluctuations of the measured signal; (3) the lowest point between two overlapping lines. The minima deviating strongly from the background are removed from the obtained set. The sum-total of the remaining minima serves as a base for the approximation of the dependence I_bg(λ). The unknown parameters are determined by means of the least-squares method (LSM). The approximated curve obtained by this method is closer to the real background than the background determined by the method described by Kigaki Denki, as the effect of all recorded minima is taken into account. As an example, the PbTe spectrum recorded with a LiF 220 crystal is shown graphically. The curve describes the background of the spectrum well, even in regions in which there are no minima belonging to the background. (authors)
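Because f₁ and f₂ are linear in the sought parameters, taking logarithms turns the fit into an ordinary linear least-squares problem; a minimal sketch on synthetic, noiseless data (the coefficients and wavelength grid are assumed for illustration):

```python
import numpy as np

lam0 = 0.5
lam = np.linspace(0.8, 3.0, 60)          # wavelengths of detected minima (assumed)

# Synthetic "true" background with linear f1 and f2 (illustrative coefficients).
a1, b1, a2, b2 = 0.8, -0.1, 2.0, -0.4
I_bg = (lam - lam0)**(a1 + b1 * lam) * np.exp(a2 + b2 * lam)

# Log-linearize: ln I = (a1 + b1*l)*ln(l - l0) + a2 + b2*l.
L = np.log(lam - lam0)
A = np.column_stack([L, lam * L, np.ones_like(lam), lam])
coef, *_ = np.linalg.lstsq(A, np.log(I_bg), rcond=None)
print(coef)
```

On noiseless data the four coefficients are recovered exactly; in practice the fit is applied to the retained spectral minima after outliers are removed, as described in the abstract.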
Future Remains: Industrial Heritage at the Hanford Plutonium Works
Freer, Brian
This dissertation argues that U.S. environmental and historic preservation regulations, industrial heritage projects, history, and art only provide partial frameworks for successfully transmitting an informed story into the long range future about nuclear technology and its related environmental legacy. This argument is important because plutonium from nuclear weapons production is toxic to humans in very small amounts, threatens environmental health, has a half-life of 24,110 years and because the industrial heritage project at Hanford is the first time an entire U.S. Department of Energy weapons production site has been designated a U.S. Historic District. This research is situated within anthropological interest in industrial heritage studies, environmental anthropology, applied visual anthropology, as well as wider discourses on nuclear studies. However, none of these disciplines is really designed or intended to be a completely satisfactory frame of reference for addressing this perplexing challenge of documenting and conveying an informed story about nuclear technology and its related environmental legacy into the long range future. Others have thought about this question and have made important contributions toward a potential solution. Examples here include: future generations movements concerning intergenerational equity as evidenced in scholarship, law, and amongst Native American groups; Nez Perce and Confederated Tribes of the Umatilla Indian Reservation responses to the Hanford End State Vision and Hanford's Canyon Disposition Initiative; as well as the findings of organizational scholars on the advantages realized by organizations that have a long term future perspective. While these ideas inform the main line inquiry of this dissertation, the principal approach put forth by the researcher of how to convey an informed story about nuclear technology and waste into the long range future is implementation of the proposed Future Remains clause, as
New Evidence Links Stellar Remains to Oldest Recorded Supernova
2006-09-01
Recent observations have uncovered evidence that helps to confirm the identification of the remains of one of the earliest stellar explosions recorded by humans. The new study shows that the supernova remnant RCW 86 is much younger than previously thought. As such, the formation of the remnant appears to coincide with a supernova observed by Chinese astronomers in 185 A.D. The study used data from NASA's Chandra X-ray Observatory and the European Space Agency's XMM-Newton Observatory. "There have been previous suggestions that RCW 86 is the remains of the supernova from 185 A.D.," said Jacco Vink of University of Utrecht, the Netherlands, and lead author of the study. "These new X-ray data greatly strengthen the case." When a massive star runs out of fuel, it collapses on itself, creating a supernova that can outshine an entire galaxy. The intense explosion hurls the outer layers of the star into space and produces powerful shock waves. The remains of the star and the material it encounters are heated to millions of degrees and can emit intense X-ray radiation for thousands of years. [Animation of a Massive Star Explosion] In their stellar forensic work, Vink and colleagues studied the debris in RCW 86 to estimate when its progenitor star originally exploded. They calculated how quickly the shocked, or energized, shell is moving in RCW 86, by studying one part of the remnant. They combined this expansion velocity with the size of the remnant and a basic understanding of how supernovas expand to estimate the age of RCW 86. "Our new calculations tell us the remnant is about 2,000 years old," said Aya Bamba, a coauthor from the Institute of Physical and Chemical Research (RIKEN), Japan. "Previously astronomers had estimated an age of 10,000 years." The younger age for RCW 86 may explain an astronomical event observed almost 2000 years ago. In 185 AD, Chinese astronomers (and possibly the Romans) recorded the appearance of a new
Briquettes of plant remains from the greenhouses of Almeria (Spain)
Energy Technology Data Exchange (ETDEWEB)
Callejon-Ferre, A. J.; Lopez-Martinez, J. A.
2009-07-01
Since ancient times, plant biomass has been used as a primary fuel, and today, with the impending depletion of fossil fuels, these vegetal sources constitute a cleaner alternative and furthermore have a multitude of uses. The aim of the present study is to design a method of recycling and reuse of plant wastes from intensive agriculture under plastic, by manufacturing briquettes in an environmentally friendly manner. In Almeria (SE Spain), agriculture generates 769,500 t year⁻¹ of plant remains from greenhouse-grown horticultural crops, a resource currently used for composting and for producing electricity. With the machinery and procedures of the present study, another potential use has been developed by detoxifying and eliminating the plastic wastes of the original biomass for the fabrication of briquettes for fireplaces. The results were slightly inferior to commercial briquettes from other non-horticultural plant materials (no forestry material), specifically by 2512 kJ kg⁻¹ in the least favourable case. By contrast, the heating value with respect to the two charcoals was significantly lower, with a difference of 12,142 kJ kg⁻¹. In conclusion, a procedure, applicable in ecological cultivation without agrochemicals or plastic cords, has been developed and tested to reuse and transform plant materials from intensive cultivation into a stable non-toxic product similar to composite logs, applicable in commercial settings or in residential fireplaces. (Author) 48 refs.
Are the alleged remains of Johann Sebastian Bach authentic?
Zegers, Richard H C; Maas, Mario; Koopman, A Ton G; Maat, George J R
2009-02-16
A skeleton alleged to be that of Johann Sebastian Bach (1685-1750) was exhumed from a graveyard in Leipzig, Germany, in 1894, but its authenticity is not established. In 1895, anatomist Wilhelm His concluded from his examination of the skeleton and reconstruction of the face that it most likely belonged to Bach. In 1949, surgeon Wolfgang Rosenthal noticed exostoses on the skeleton and on x-rays of 11 living organists and proposed a condition, Organistenkrankheit, which he interpreted as evidence that the skeleton was Bach's. However, our critical assessment of the remains analysis raises doubts: the localisation of the grave was dubious, and the methods used by His to reconstruct the face are controversial. Also, our study of the pelvic x-rays of 12 living professional organists failed to find evidence for the existence of Organistenkrankheit. We believe it is unlikely that the skeleton is that of Bach; techniques such as DNA analysis might help resolve the question but, to date, church authorities have not approved their use on the skeleton.
Factors influencing home care nurse intention to remain employed.
Tourangeau, Ann; Patterson, Erin; Rowe, Alissa; Saari, Margaret; Thomson, Heather; MacDonald, Geraldine; Cranley, Lisa; Squires, Mae
2014-11-01
To identify factors affecting Canadian home care nurse intention to remain employed (ITR). In developed nations, healthcare continues to shift into community settings. Although considerable research exists on examining nurse ITR in hospitals, similar research related to nurses employed in home care is limited. In the face of a global nursing shortage, it is important to understand the factors influencing nurse ITR across healthcare sectors. A qualitative exploratory descriptive design was used. Focus groups were conducted with home care nurses. Data were analysed using qualitative content analysis. Six categories of influencing factors were identified by home care nurses as affecting ITR: job characteristics; work structures; relationships/communication; work environment; nurse responses to work; and employment conditions. Findings suggest the following factors influence home care nurse ITR: having autonomy; flexible scheduling; reasonable and varied workloads; supportive work relationships; and receiving adequate pay and benefits. Home care nurses did not identify job satisfaction as a single concept influencing ITR. Home care nursing management should support nurse autonomy, allow flexible scheduling, promote reasonable workloads and create opportunities for team building that strengthen supportive relationships among home care nurses and other health team members. © 2013 John Wiley & Sons Ltd.
Carnivoran remains from the Malapa hominin site, South Africa.
Directory of Open Access Journals (Sweden)
Brian F Kuhn
Full Text Available Recent discoveries at the new hominin-bearing deposits of Malapa, South Africa, have yielded a rich faunal assemblage associated with the newly described hominin taxon Australopithecus sediba. Dating of this deposit using U-Pb and palaeomagnetic methods has provided an age of 1.977 Ma, making it one of the most accurately dated, time-constrained deposits in the Plio-Pleistocene of southern Africa. To date, 81 carnivoran specimens have been identified at this site, including members of the families Canidae, Viverridae, Herpestidae, Hyaenidae and Felidae. Of note is the presence of the extinct taxon Dinofelis cf. D. barlowi, which may represent the last appearance date for this species. Extant large carnivores are represented by specimens of leopard (Panthera pardus) and brown hyaena (Parahyaena brunnea). Smaller carnivores are also represented, and include the genera Atilax and Genetta, as well as Vulpes cf. V. chama. Malapa may also represent the first appearance date for Felis nigripes (black-footed cat). The geochronological age of Malapa and the associated hominin taxa and carnivoran remains provide a window of research into mammalian evolution during a relatively unknown period in South Africa and elsewhere. In particular, the fauna represented at Malapa has the potential to elucidate aspects of the evolution of Dinofelis and may help resolve competing hypotheses about faunal exchange between East and Southern Africa during the late Pliocene or early Pleistocene.
DNA Profiling Success Rates from Degraded Skeletal Remains in Guatemala.
Johnston, Emma; Stephenson, Mishel
2016-07-01
No data are available regarding the success of DNA Short Tandem Repeat (STR) profiling from degraded skeletal remains in Guatemala. Therefore, DNA profiling success rates relating to 2595 skeletons from eleven cases at the Forensic Anthropology Foundation of Guatemala (FAFG) are presented. The typical postmortem interval was 30 years. DNA was extracted from bone powder and amplified using Identifiler and MiniFiler. DNA profiling success rates differed between cases, ranging from 7.0% to 50.8%; the overall success rate for samples was 36.3%. The best DNA profiling success rates were obtained from femur (36.2%) and tooth (33.7%) samples. DNA profiles were significantly better from lower-body bones than upper-body bones (p < 0.0001). Bone samples from males gave significantly better profiles than samples from females (p < 0.0001). These results are believed to be related to bone density. The findings are important for designing forensic DNA sampling strategies in future victim recovery investigations. © 2016 American Academy of Forensic Sciences.
Using contractors to decommission while remaining as licensee
International Nuclear Information System (INIS)
Rankine, A.
1997-01-01
Over the last few years the role of the United Kingdom Atomic Energy Authority (UKAEA) has changed from one involved in research and development in the field of nuclear power and associated technology, to one of managing the liabilities left over from its previous mission. This period has also seen two significant portions of the organization move to the private sector with the sale of the Facilities Services Division to PROCORD and the privatization of AEA Technology. The new UKAEA is therefore a focused liabilities management organization, making the best use of expertise in the private sector in carrying out its mission, but retaining adequate internal resources and expertise to fulfil its role and responsibilities as the licensee. UKAEA continues to be committed to giving the highest priority to meeting the high standards of safety and environmental protection required of the holder of the Nuclear Site Licence under the Nuclear Installations Act. This paper describes the safety management system within the UKAEA which ensures that UKAEA remains the proper and effective licensee and gives some examples of how this has worked in practice. (author)
Efforts to standardize wildlife toxicity values remain unrealized.
Mayfield, David B; Fairbrother, Anne
2013-01-01
Wildlife toxicity reference values (TRVs) are routinely used during screening level and baseline ecological risk assessments (ERAs). Risk assessment professionals often adopt TRVs from published sources to expedite risk analyses. The US Environmental Protection Agency (USEPA) developed ecological soil screening levels (Eco-SSLs) to provide a source of TRVs that would improve consistency among risk assessments. We conducted a survey and evaluated more than 50 publicly available, large-scale ERAs published in the last decade to assess whether USEPA's goal of uniformity in the use of wildlife TRVs has been met. In addition, these ERAs were reviewed to understand current practices for wildlife TRV use and development within the risk assessment community. The use of no observed and lowest observed adverse effect levels culled from published compendia was common practice among the majority of ERAs reviewed. We found increasing use over time of TRVs established in the Eco-SSL documents; however, Eco-SSL TRV values were not used in the majority of recent ERAs and there continues to be wide variation in TRVs for commonly studied contaminants (e.g., metals, pesticides, PAHs, and PCBs). Variability in the toxicity values was driven by differences in the key studies selected, dose estimation methods, and use of uncertainty factors. These differences result in TRVs that span multiple orders of magnitude for many of the chemicals examined. This lack of consistency in TRV development leads to highly variable results in ecological risk assessments conducted throughout the United States. Copyright © 2012 SETAC.
The complex variable boundary element method: Applications in determining approximative boundaries
Hromadka, T.V.
1984-01-01
The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occur in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.
Pentaquarks in the Jaffe-Wilczek approximation
International Nuclear Information System (INIS)
Narodetskii, I.M.; Simonov, Yu.A.; Trusov, M.A.; Semay, C.; Silvestre-Brac, B.
2005-01-01
The masses of uudds-bar, uuddd-bar, and uussd-bar pentaquarks are evaluated in a framework of both the effective Hamiltonian approach to QCD and the spinless Salpeter equation, using the Jaffe-Wilczek diquark approximation and the string interaction for the diquark-diquark-antiquark system. The pentaquark masses are found to be in the region above 2 GeV. This indicates that Goldstone-boson-exchange effects may play an important role in the light pentaquarks. The same calculations yield a mass of ∼3250 MeV for the [ud]²c-bar pentaquark and ∼6509 MeV for the [ud]²b-bar pentaquark.
Localization and stationary phase approximation on supermanifolds
Zakharevich, Valentin
2017-08-01
Given an odd vector field Q on a supermanifold M and a Q-invariant density μ on M, under certain compactness conditions on Q, the value of the integral ∫Mμ is determined by the value of μ on any neighborhood of the vanishing locus N of Q. We present a formula for the integral in the case where N is a subsupermanifold which is appropriately non-degenerate with respect to Q. In the process, we discuss the linear algebra necessary to express our result in a coordinate independent way. We also extend the stationary phase approximation and the Morse-Bott lemma to supermanifolds.
SAM revisited: uniform semiclassical approximation with absorption
International Nuclear Information System (INIS)
Hussein, M.S.; Pato, M.P.
1986-01-01
The uniform semiclassical approximation is modified to take into account strong absorption. The resulting theory, very similar to the one developed by Frahn and Gross, is used to discuss heavy-ion elastic scattering at intermediate energies. The theory permits a reasonably unambiguous separation of refractive and diffractive effects. The systems ¹²C + ¹²C and ¹²C + ¹⁶O, which seem to exhibit a remnant of a nuclear rainbow at E = 20 MeV/N, are analysed with the theory, which is built directly on a model for the S-matrix. Simple relations between the fitted S-matrix and the underlying complex potential are derived. (Author)
TMB: Automatic differentiation and laplace approximation
DEFF Research Database (Denmark)
Kristensen, Kasper; Nielsen, Anders; Berg, Casper Willestofte
2016-01-01
TMB is an open source R package that enables quick implementation of complex nonlinear random effects (latent variable) models in a manner similar to the established AD Model Builder package (ADMB, http://admb-project.org/; Fournier et al. 2011). In addition, it offers easy access to parallel computations. The user defines the joint likelihood for the data and the random effects as a C++ template function, while all the other operations are done in R; e.g., reading in the data. The package evaluates and maximizes the Laplace approximation of the marginal likelihood where the random effects are automatically integrated out.
Shape theory categorical methods of approximation
Cordier, J M
2008-01-01
This in-depth treatment uses shape theory as a "case study" to illustrate situations common to many areas of mathematics, including the use of archetypal models as a basis for systems of approximations. It offers students a unified and consolidated presentation of extensive research from category theory, shape theory, and the study of topological algebras. A short introduction to geometric shape explains specifics of the construction of the shape category and relates it to an abstract definition of shape theory. Upon returning to the geometric base, the text considers simplicial complexes and…
On one approximation in quantum chromodynamics
International Nuclear Information System (INIS)
Alekseev, A.I.; Bajkov, V.A.; Boos, Eh.Eh.
1982-01-01
The form of the complete fermion propagator near the mass shell is investigated. A model of quantum chromodynamics (MQC) is considered in which the Bloch-Nordsieck approximation has been made in the fermion sector, i.e. c-numbers are substituted for the γ matrices. The model was investigated by means of the Schwinger-Dyson equation for the quark propagator in the infrared region. The Schwinger-Dyson equation could be reduced to a differential equation that is easily solved; the Green function is then conveniently represented as an integral transformation.
Static correlation beyond the random phase approximation
DEFF Research Database (Denmark)
Olsen, Thomas; Thygesen, Kristian Sommer
2014-01-01
derived from Hedin's equations (Random Phase Approximation (RPA), Time-dependent Hartree-Fock (TDHF), Bethe-Salpeter equation (BSE), and Time-Dependent GW) all reproduce the correct dissociation limit. We also show that the BSE improves the correlation energies obtained within RPA and TDHF significantly … and confirms that BSE greatly improves the RPA and TDHF results despite the fact that the BSE excitation spectrum breaks down in the dissociation limit. In contrast, second order screened exchange gives a poor description of the dissociation limit, which can be attributed to the fact that it cannot be derived…
Multi-compartment linear noise approximation
International Nuclear Information System (INIS)
Challenger, Joseph D; McKane, Alan J; Pahle, Jürgen
2012-01-01
The ability to quantify the stochastic fluctuations present in biochemical and other systems is becoming increasingly important. Analytical descriptions of these fluctuations are attractive, as stochastic simulations are computationally expensive. Building on previous work, a linear noise approximation is developed for biochemical models with many compartments, for example cells. The procedure is then implemented in the software package COPASI. This technique is illustrated with two simple examples and is then applied to a more realistic biochemical model. Expressions for the noise, given in the form of covariance matrices, are presented. (paper)
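The fluctuation-dissipation step at the heart of a linear noise approximation can be sketched for a single-compartment birth-death process; this is a minimal hypothetical illustration of the technique, not the multi-compartment COPASI implementation described above:

```python
import numpy as np

# Birth-death process: 0 -> X at rate k, X -> 0 at rate g*x.
k, g = 10.0, 0.5

# Macroscopic rate equation dx/dt = k - g*x has fixed point x* = k/g.
x_star = k / g

# LNA: fluctuations around x* obey a linear SDE with
# Jacobian A = -g and noise intensity B = k + g*x* (sum of reaction fluxes).
A = -g
B = k + g * x_star

# The stationary variance solves the Lyapunov equation 2*A*S + B = 0 (1-D case).
S = -B / (2 * A)

print(x_star, S)  # -> 20.0 20.0  (mean equals variance: Poisson, exact for this model)
```

For this linear model the LNA is exact; for nonlinear multi-compartment networks the same Lyapunov-equation construction yields the covariance matrices mentioned in the abstract.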
Approximation of Moessbauer spectra of metallic glasses
International Nuclear Information System (INIS)
Miglierini, M.; Sitek, J.
1988-01-01
Moessbauer spectra of iron-rich metallic glasses are approximated by means of six broadened lines which have line position relations similar to those of α-Fe. It is shown via the results of the DISPA (dispersion mode vs. absorption mode) line shape analysis that each spectral peak is broadened owing to a sum of Lorentzian lines weighted by a Gaussian distribution in the peak position. Moessbauer parameters of amorphous metallic Fe₈₃B₁₇ and Fe₄₀Ni₄₀B₂₀ alloys are presented, derived from the fitted spectra. (author). 2 figs., 2 tabs., 21 refs.
High energy approximations in quantum field theory
International Nuclear Information System (INIS)
Orzalesi, C.A.
1975-01-01
New theoretical methods in hadron physics based on a high-energy perturbation theory are discussed. The approximate solutions to quantum field theory obtained by this method appear to be sufficiently simple and rich in structure to encourage hadron dynamics studies. An operator eikonal form for field-theoretic Green's functions is derived, and the renormalization of the eikonal perturbation theory is discussed. The method is extended to massive quantum electrodynamics of scalar charged bosons. Possible developments and applications of this theory are given.
Weak field approximation of new general relativity
International Nuclear Information System (INIS)
Fukui, Masayasu; Masukawa, Junnichi
1985-01-01
In the weak field approximation, gravitational field equations of new general relativity with arbitrary parameters are examined. Assuming a conservation law ∂^μ T_μν = 0 of the energy-momentum tensor T_μν for matter fields in addition to the usual one ∂^ν T_μν = 0, we show that the linearized gravitational field equations are decomposed into equations for a Lorentz scalar field and symmetric and antisymmetric Lorentz tensor fields. (author)
Turbo Equalization Using Partial Gaussian Approximation
DEFF Research Database (Denmark)
Zhang, Chuanzong; Wang, Zhongyong; Manchón, Carles Navarro
2016-01-01
This letter deals with turbo equalization for coded data transmission over intersymbol interference (ISI) channels. We propose a message-passing algorithm that uses the expectation propagation rule to convert messages passed from the demodulator and decoder to the equalizer and computes messages returned by the equalizer by using a partial Gaussian approximation (PGA). We exploit the specific structure of the ISI channel model to compute the latter messages from the beliefs obtained using a Kalman smoother/equalizer. Doing so leads to a significant complexity reduction compared to the initial PGA…
Topics in multivariate approximation and interpolation
Jetter, Kurt
2005-01-01
This book is a collection of eleven articles, written by leading experts and dealing with special topics in Multivariate Approximation and Interpolation. The material discussed here has far-reaching applications in many areas of Applied Mathematics, such as in Computer Aided Geometric Design, in Mathematical Modelling, in Signal and Image Processing and in Machine Learning, to mention a few. The book aims at giving comprehensive information leading the reader from the fundamental notions and results of each field to the forefront of research. It is an ideal and up-to-date introduction for gr…
Mineralized remains of morphotypes of filamentous cyanobacteria in carbonaceous meteorites
Hoover, Richard B.
2005-09-01
rocks, living, cryopreserved and fossilized extremophiles and cyanobacteria. These studies have resulted in the detection of mineralized remains of morphotypes of filamentous cyanobacteria, mats and consortia in many carbonaceous meteorites. These well-preserved and embedded microfossils are consistent with the size, morphology and ultra-microstructure of filamentous trichomic prokaryotes and degraded remains of microfibrils of cyanobacterial sheaths. EDAX elemental studies reveal that the forms in the meteorites often have highly carbonized sheaths in close association with permineralized filaments, trichomes, and microbial cells. The extensive protocols and methodologies that have been developed to protect the samples from contamination and to distinguish recent bio-contaminants from indigenous microfossils are described. Ratios of critical bioelements (C:O, C:N, C:P, and C:S) reveal dramatic differences between microfossils in Earth rocks and meteorites and in the cells, filaments, trichomes, and hormogonia of recently living cyanobacteria. The results of comparative optical, ESEM and FESEM studies and EDAX elemental analyses of recent cyanobacteria (e.g. Calothrix, Oscillatoria, and Lyngbya) of similar size, morphology and microstructure to microfossils found embedded in the Murchison CM2 and the Orgueil CI1 carbonaceous meteorites are presented.
Remaining lifetime modeling using State-of-Health estimation
Beganovic, Nejra; Söffker, Dirk
2017-08-01
Technical systems and system components undergo gradual degradation over time. Continuous degradation is reflected in decreased reliability and unavoidably leads to system failure. Continuous evaluation of State-of-Health (SoH) is therefore essential to ensure at least the predefined lifetime of the system specified by the manufacturer or, better, to extend it. A precondition for lifetime extension, however, is accurate estimation of SoH as well as estimation and prediction of the Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. In this contribution, modeling and the selection of suitable lifetime models from a database based on current SoH conditions are discussed. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, related system degradation, and RUL. Two approaches, with accompanying advantages and disadvantages, are introduced and compared. Both approaches can model stochastic aging processes of a system by simultaneous adaptation of RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and accurate estimation of SoH. Estimation of SoH here is conditioned on tracking the actual accumulated damage in the system, so that particular model parameters are defined according to a priori known assumptions about the system's aging. Prediction accuracy in this case depends strongly on accurate estimation of SoH but includes a high number of degrees of freedom. The second approach does not require a priori knowledge about the system's aging, as particular model parameters are defined by a multi-objective optimization procedure. The prediction accuracy of this model does not depend strongly on the estimated SoH. This model…
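At its simplest, the relation between SoH and RUL can be sketched by fitting a degradation trend and extrapolating it to a failure threshold; the data, threshold, and linear model below are hypothetical placeholders, not either of the two approaches proposed in the paper:

```python
import numpy as np

# Hypothetical noise-free SoH history (fraction of nominal health), sampled hourly.
t = np.arange(0, 10)          # operating hours observed so far
soh = 1.0 - 0.02 * t          # observed State-of-Health values
failure_threshold = 0.7       # SoH level defining end of life (assumed)

# Fit a linear degradation model soh(t) = a*t + b to the history.
a, b = np.polyfit(t, soh, 1)

# RUL = time until the fitted trend crosses the threshold, measured from "now" (t = 9).
t_fail = (failure_threshold - b) / a
rul = t_fail - t[-1]
print(round(rul, 2))  # -> 6.0
```

Refitting `a` and `b` as new SoH estimates arrive is a crude form of the "simultaneous adaptation of RUL models to the current SoH" that the abstract describes.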
Clarifying some remaining questions in the anomaly puzzle
International Nuclear Information System (INIS)
Huang, Xing; Parker, Leonard
2011-01-01
We discuss several points that may help to clarify some questions that remain about the anomaly puzzle in supersymmetric theories. In particular, we consider a general N=1 supersymmetric Yang-Mills theory. The anomaly puzzle concerns the question of whether there is a consistent way in the quantized theory to put the R-current and the stress tensor in a single supermultiplet called the supercurrent, even though in the classical theory they are in the same supermultiplet. It was proposed that the classically conserved supercurrent bifurcates into two supercurrents having different anomalies in the quantum regime. The most interesting result we obtain is an explicit expression for the lowest component of one of the two supercurrents in 4-dimensional spacetime, namely the supercurrent that has the energy-momentum tensor as one of its components. This expression for the lowest component is an energy-dependent linear combination of two chiral currents, which itself does not correspond to a classically conserved chiral current. The lowest component of the other supercurrent, namely, the R-current, satisfies the Adler-Bardeen theorem. The lowest component of the first supercurrent has an anomaly, which we show is consistent with the anomaly of the trace of the energy-momentum tensor. Therefore, we conclude that there is no consistent way to construct a single supercurrent multiplet that contains the R-current and the stress tensor in the straightforward way originally proposed. We also discuss and try to clarify some technical points in the derivations of the two supercurrents in the literature. These latter points concern the significance of infrared contributions to the NSVZ β-function and the role of the equations of motion in deriving the two supercurrents. (orig.)
Will southern California remain a premium market for natural gas?
International Nuclear Information System (INIS)
John, F.E.
1991-01-01
Average yearly demand for natural gas in southern California totalled just over 3 billion ft³/d in 1991 and is projected to increase to just over 3.2 billion ft³/d in 2000 and 3.4 billion ft³/d in 2010. In the core residential market, demand is being driven by population growth and offset by conservation measures. In the core commercial and industrial market, demand is driven by employment growth and offset by conservation. In the noncore market, natural gas use is expected to fall from 262 million ft³/d in 1991 to 223 million ft³/d in 2010. Demand for natural gas for cogeneration is expected to either remain stagnant or decrease. The largest potential for market growth in southern California is for utility electric generation. Demand in this sector is expected to increase from 468 million ft³/d in 1991 to 1 billion ft³/d in 2010. Air quality concerns furnish a market opportunity for natural gas vehicles, and a substantial increase in natural gas demand might be obtained from even a modest market share of the region's 10 million vehicles. Existing pipeline capacity is sufficient to supply current average year requirements, and the need for new capacity hinges on the issues of satisfying high-year demand, meeting market growth, and accessing more desirable supply regions. Planned capacity additions of 2,150 million ft³/d, if completed, will bring substantial excess capacity to southern California in the late 1990s. The competitive advantages of various producing regions will then be greatly influenced by the rate designs used on the pipelines connecting them to the market. 4 tabs
Neutron activation analysis of the prehistoric and ancient bone remains
International Nuclear Information System (INIS)
Vasidov, A.; Osinskaya, N.S.; Khatamov, Sh.; Rakhmanova, T.; Akhmadshaev, A.Sh.
2006-01-01
Full text: This work presents results of instrumental neutron activation analysis (INAA) of prehistoric bone remains of dinosaurs and ancient bones of a bear and an archanthrope found on the territory of Uzbekistan. A dinosaur bone from Mongolia, a standard human bone, and soils taken from the surface and from the femoral joint of a dinosaur were also subjected to INAA. The INAA method determines the contents of about 30 elements in bones and soils in the interval 0.043-3600 mg/kg. Among the elements found, Ca (46%), Sc, Cr, Fe (up to 2.2 g/kg), Ni, Zn, Sr (up to 3.6 g/kg), Sb, Ba and some others occur mainly in bones. The contents of some elements in dinosaur bones reach very high values, 280-3200 mg/kg; these are mainly the lanthanides La, Ce, Nd, Sm, Eu, Tb, Yb and Lu. In our opinion, the lanthanides and some other elements in the bones, like As, Br, and Mo, were formed as a result of fission of uranium and transuranium elements, because the content of uranium in dinosaur bones is very high, up to 180 mg/kg, and that of thorium is 20 mg/kg. In soils, however, U and Th are 4.8 mg/kg and 3.7 mg/kg, respectively. The content of uranium in the bones of the archanthrope is 1.53 mg/kg, while U in the standard human bone is less than 0.016 mg/kg. (author)
The broad spectrum revisited: evidence from plant remains.
Weiss, Ehud; Wetterstrom, Wilma; Nadel, Dani; Bar-Yosef, Ofer
2004-06-29
The beginning of agriculture is one of the most important developments in human history, with enormous consequences that paved the way for settled life and complex society. Much of the research on the origins of agriculture over the last 40 years has been guided by Flannery's [Flannery, K. V. (1969) in The Domestication and Exploitation of Plants and Animals, eds. Ucko, P. J. & Dimbleby, G. W. (Duckworth, London), pp. 73-100] "broad spectrum revolution" (BSR) hypothesis, which posits that the transition to farming in southwest Asia entailed a period during which foragers broadened their resource base to encompass a wide array of foods that were previously ignored in an attempt to overcome food shortages. Although these resources undoubtedly included plants, nearly all BSR hypothesis-inspired research has focused on animals because of a dearth of Upper Paleolithic archaeobotanical assemblages. Now, however, a collection of >90,000 plant remains, recently recovered from the Stone Age site Ohalo II (23,000 B.P.), Israel, offers insights into the plant foods of the late Upper Paleolithic. The staple foods of this assemblage were wild grasses, pushing back the dietary shift to grains some 10,000 years earlier than previously recognized. Besides the cereals (wild wheat and barley), small-grained grasses made up a large component of the assemblage, indicating that the BSR in the Levant was even broader than originally conceived, encompassing what would have been low-ranked plant foods. Over the next 15,000 years small-grained grasses were gradually replaced by the cereals and ultimately disappeared from the Levantine diet.
Analytic approximate radiation effects due to Bremsstrahlung
Energy Technology Data Exchange (ETDEWEB)
Ben-Zvi I.
2012-02-01
The purpose of this note is to provide analytic approximate expressions that can provide quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low-energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the good range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.
TMB: Automatic Differentiation and Laplace Approximation
Directory of Open Access Journals (Sweden)
Kasper Kristensen
2016-04-01
Full Text Available TMB is an open source R package that enables quick implementation of complex nonlinear random effects (latent variable) models in a manner similar to the established AD Model Builder package (ADMB, http://admb-project.org/; Fournier et al. 2011). In addition, it offers easy access to parallel computations. The user defines the joint likelihood for the data and the random effects as a C++ template function, while all the other operations are done in R; e.g., reading in the data. The package evaluates and maximizes the Laplace approximation of the marginal likelihood where the random effects are automatically integrated out. This approximation, and its derivatives, are obtained using automatic differentiation (up to order three) of the joint likelihood. The computations are designed to be fast for problems with many random effects (≈ 10^6) and parameters (≈ 10^3). Computation times using ADMB and TMB are compared on a suite of examples ranging from simple models to large spatial models where the random effects are a Gaussian random field. Speedups ranging from 1.5 to about 100 are obtained with increasing gains for large problems. The package and examples are available at http://tmb-project.org/.
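The Laplace approximation that TMB maximizes can be illustrated on a toy one-dimensional random-effects integral: find the mode of the joint log-likelihood in the random effect, then replace the integrand by a Gaussian at the mode. The Poisson-normal model below is a made-up example (not TMB's API), checked against brute-force quadrature:

```python
import numpy as np

# Joint log-likelihood in one random effect u: a Poisson observation y with
# log-rate u, plus a N(0, 1) prior on u (an assumed toy model).
y = 3
def logf(u):
    return y * u - np.exp(u) - 0.5 * u**2

# Inner optimization: find the mode u_hat by Newton iterations.
u = 0.0
for _ in range(50):
    g = y - np.exp(u) - u      # first derivative of logf
    h = -np.exp(u) - 1.0       # second derivative (always negative here)
    u -= g / h
u_hat = u

# Laplace approximation of the marginal likelihood
#   L = int exp(logf(u)) du  ~  exp(logf(u_hat)) * sqrt(2*pi / -logf''(u_hat))
h = -np.exp(u_hat) - 1.0
laplace = np.exp(logf(u_hat)) * np.sqrt(2 * np.pi / -h)

# Brute-force quadrature for comparison (simple Riemann sum on a wide grid).
grid = np.linspace(-10.0, 10.0, 200001)
exact = np.exp(logf(grid)).sum() * (grid[1] - grid[0])
print(laplace, exact)  # the two agree to within a few percent
```

TMB performs the same two steps for models with up to millions of random effects, with the derivatives supplied by automatic differentiation rather than written by hand.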
On some applications of diophantine approximations.
Chudnovsky, G V
1984-03-01
Siegel's results [Siegel, C. L. (1929) Abh. Preuss. Akad. Wiss. Phys.-Math. Kl. 1] on the transcendence and algebraic independence of values of E-functions are refined to obtain the best possible bound for the measures of irrationality and linear independence of values of arbitrary E-functions at rational points. Our results show that values of E-functions at rational points have measures of diophantine approximations typical of "almost all" numbers. In particular, any such number Θ has the "2 + ε" exponent of irrationality: |Θ − p/q| > q^(−2−ε) for relatively prime rational integers p, q, with q ≥ q₀(Θ, ε). These results answer some problems posed by Lang. The methods used here are based on the introduction of graded Padé approximations to systems of functions satisfying linear differential equations with rational function coefficients. The constructions and proofs of this paper were used in the functional (nonarithmetic) case in a previous paper [Chudnovsky, D. V. & Chudnovsky, G. V. (1983) Proc. Natl. Acad. Sci. USA 80, 5158-5162].
Detecting Change-Point via Saddlepoint Approximations
Institute of Scientific and Technical Information of China (English)
Zhaoyuan LI; Maozai TIAN
2017-01-01
It is well known that the change-point problem is an important part of statistical model analysis. Most existing methods are not robust to the criteria used to evaluate change-point problems. In this article, we consider the "mean-shift" problem in change-point studies. A quantile test at a single quantile is proposed based on the saddlepoint approximation method. In order to utilize information at different quantiles of the sequence, we further construct a "composite quantile test" to calculate the probability of each location in the sequence being a change-point. The location of the change-point can thus be pinpointed rather than estimated within an interval. The proposed tests make no assumptions about the functional form of the sequence distribution and perform well on both large and small samples, in the case of change-points in the tails, and in multiple change-point situations. The good performance of the tests is confirmed by simulations and real data analysis. The saddlepoint-approximation-based distribution of the test statistic developed in the paper may be of independent interest to readers in this research area.
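For intuition about the "mean-shift" setting only, a basic scan statistic (a standardized two-sample mean difference maximized over candidate locations, which is far simpler than the saddlepoint-based composite quantile test proposed here) can be sketched on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic sequence with a mean shift at index 60 (hypothetical data).
x = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(2.0, 1.0, 40)])

n = len(x)
best_k, best_stat = None, -np.inf
for k in range(5, n - 5):                 # candidate change-point locations
    left, right = x[:k], x[k:]
    # Standardized difference of means for a split at k.
    stat = abs(left.mean() - right.mean()) * np.sqrt(k * (n - k) / n)
    if stat > best_stat:
        best_k, best_stat = k, stat
print(best_k)  # close to the true change-point at 60
```

Unlike this mean-based scan, the composite quantile test in the abstract pools information across quantiles, which is what makes it sensitive to shifts in the tails of the distribution.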
Traveling cluster approximation for uncorrelated amorphous systems
International Nuclear Information System (INIS)
Kaplan, T.; Sen, A.K.; Gray, L.J.; Mills, R.
1985-01-01
In this paper, the authors apply the TCA concepts to spatially disordered, uncorrelated systems (e.g., fluids or amorphous metals without short-range order). This is the first approximation scheme for amorphous systems that takes cluster effects into account while preserving the Herglotz property for any amount of disorder. They have performed some computer calculations for the pair TCA, for the model case of delta-function potentials on a one-dimensional random chain. These results are compared with exact calculations (which, in principle, take into account all cluster effects) and with the CPA, which is the single-site TCA. The density of states for the pair TCA clearly shows some improvement over the CPA, and yet, apparently, the pair approximation distorts some of the features of the exact results. They conclude that the effects of large clusters are much more important in an uncorrelated liquid metal than in a substitutional alloy. As a result, the pair TCA, which does quite a nice job for alloys, is not adequate for the liquid. Larger clusters must be treated exactly, and therefore an n-TCA with n > 2 must be used
Approximating Markov Chains: What and why
International Nuclear Information System (INIS)
Pincus, S.
1996-01-01
Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve," or at least understand, a discrete-time analogue of differential equations. As such, numerical, analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques, the approximation of dynamical systems by suitable finite state Markov Chains. Steady state distributions for these Markov Chains, a straightforward calculation, will converge to the true dynamical system steady state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that under very general conditions, steady state measures are weakly continuous with control parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly with system evolution. copyright 1996 American Institute of Physics
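The finite-state approximation described above can be illustrated with a minimal Ulam-style Python sketch: partition the state space into bins, estimate transition probabilities by sampling, and take the stationary distribution of the resulting stochastic matrix. The map, bin count, and sampling scheme are illustrative choices, not taken from the paper:

```python
import numpy as np

def markov_approx(f, n_bins=200, samples_per_bin=50, iters=2000, seed=0):
    """Approximate the invariant measure of a map f on [0, 1] by a
    finite-state Markov chain: partition [0, 1] into bins, estimate
    transition probabilities by sampling where f sends points of each
    bin, then compute the stationary distribution by power iteration."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    P = np.zeros((n_bins, n_bins))
    for i in range(n_bins):
        # sample points inside bin i and tally the bins f sends them to
        x = rng.uniform(edges[i], edges[i + 1], samples_per_bin)
        j = np.clip(np.digitize(f(x), edges) - 1, 0, n_bins - 1)
        np.add.at(P[i], j, 1.0)
    P /= P.sum(axis=1, keepdims=True)      # rows become probabilities
    pi = np.full(n_bins, 1.0 / n_bins)
    for _ in range(iters):                 # power iteration for pi = pi P
        pi = pi @ P
    return pi, 0.5 * (edges[:-1] + edges[1:])

# chaotic logistic map x -> 4x(1-x); its invariant density is symmetric
# about 1/2, so the mean under the stationary distribution should be ~0.5
pi, centers = markov_approx(lambda x: 4.0 * x * (1.0 - x))
```

The steady-state statistics (mean, spread, tail probabilities) then come from `pi` by straightforward linear algebra, which is the point of the linearization the abstract advertises.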
Approximation to estimation of critical state
International Nuclear Information System (INIS)
Orso, Jose A.; Rosario, Universidad Nacional
2011-01-01
The position of the control rod for the critical state of a nuclear reactor depends on several factors, including, but not limited to, the temperature and the configuration of the fuel elements inside the core; the position therefore cannot be known in advance. In this paper theoretical estimates are developed to obtain an equation for calculating the position of the control rod at the critical state (approach to critical) of the nuclear reactor RA-4; the equation will be used to create software that performs the estimation from the count rate of the reactor pulse channel and the extracted length of the control rod (in cm). For the final estimate of the approach to critical, an experimentally obtained function giving control-rod reactivity as a function of position is used; this is manipulated mathematically to obtain a linear function yielding the length of control rod that must be withdrawn to bring the reactor to the critical position. (author) [es
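The procedure above rests on the standard inverse-multiplication (1/M) extrapolation: as the reactor approaches criticality the count rate diverges, so the normalized inverse count rate tends to zero. A minimal Python sketch follows, with hypothetical rod positions and count rates; the actual RA-4 reactivity calibration function is not reproduced here:

```python
import numpy as np

def estimate_critical_position(rod_pos_cm, count_rates):
    """Classic 1/M approach to critical: normalize the inverse count
    rate to the first measurement, fit a line to the last few 1/M
    points, and extrapolate to 1/M = 0 to predict the rod position
    at which the reactor goes critical."""
    inv_m = count_rates[0] / np.asarray(count_rates, dtype=float)
    slope, intercept = np.polyfit(rod_pos_cm[-3:], inv_m[-3:], 1)
    return -intercept / slope   # rod position where the line crosses 1/M = 0

# hypothetical pulse-channel counts taken at four rod withdrawal steps
pos = [0.0, 10.0, 20.0, 30.0]            # cm withdrawn
rates = [100.0, 150.0, 300.0, 900.0]     # counts per second
critical_pos = estimate_critical_position(pos, rates)
```

In practice each extrapolation is repeated after every rod step, and the predicted critical position is approached conservatively from below.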
Analytic approximate radiation effects due to Bremsstrahlung
International Nuclear Information System (INIS)
Ben-Zvi, I.
2012-01-01
The purpose of this note is to provide analytic approximate expressions for quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low-energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system, and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited: in these calculations the valid range is from about 0.5 MeV to 10 MeV. To help in the application of this note, the calculations are presented as a worked-out example for the beam dump of the R and D Energy Recovery Linac.
Approximate analytic theory of the multijunction grill
International Nuclear Information System (INIS)
Hurtak, O.; Preinhaelter, J.
1991-03-01
An approximate analytic theory of the general multijunction grill is developed. Omitting the evanescent modes in the subsidiary waveguides, both at the junction and at the grill mouth, and neglecting multiple wave reflection, simple formulae are derived for the reflection coefficient, the amplitudes of the incident and reflected waves, and the spectral power density. These quantities are expressed through the basic grill parameters (the electric length of the structure and the phase shift between adjacent waveguides) and two sets of reflection coefficients describing wave reflections in the subsidiary waveguides at the junction and at the plasma. Approximate expressions for these coefficients are also given. The results are compared with a numerical solution of two specific examples; they are shown to be useful for the optimization and design of multijunction grills. For the JET structure it is shown that, in the case of a dense plasma, many results can be obtained from the simple formulae for a two-waveguide multijunction grill. (author) 12 figs., 12 refs
Material aging and degradation detection and remaining life assessment for plant life management
International Nuclear Information System (INIS)
Ramuhalli, P.; Henager, C.H. Jr.; Griffin, J.W.; Meyer, R.M.; Coble, J.B.; Pitman, S.G.; Bond, L.J.
2012-01-01
One of the major factors that may impact long-term operations is structural material degradation. Detecting materials degradation, estimating the remaining useful life (RUL) of the component, and determining approaches to mitigating the degradation are important from the perspective of long-term operations. In this study, multiple nondestructive measurement and monitoring methods were evaluated for their ability to assess the material degradation state. Metrics quantifying the level of damage from these measurements were defined and evaluated for their ability to provide estimates of remaining life of the component. An example of estimating the RUL from nondestructive measurements of material degradation condition is provided. (author)
Approximate Dynamic Programming for Military Resource Allocation
2014-12-26
Calculation of a hydrogen molecule in the adiabatic approximation
International Nuclear Information System (INIS)
Vukajlovich, F.R.; Mogilevskij, O.A.; Ponomarev, L.I.
1979-01-01
The adiabatic approximation is used for calculating the energy levels of a hydrogen molecule, i.e. of the simplest four-body system with a Coulomb interaction. The aim of this paper is the investigation of the possible use of the adiabatic method in molecular problems. The most effective regions of its application are discussed. An infinite system of integro-differential equations is constructed, which describes the hydrogen molecule in the adiabatic approximation with effective potentials taking into account the corrections due to nuclear motion. The energy of the first three vibrational states of the hydrogen molecule is calculated and compared with the experimental data. The convergence of the method is discussed
Low-temperature excitations within the Bethe approximation
International Nuclear Information System (INIS)
Biazzo, I; Ramezanpour, A
2013-01-01
We propose the variational quantum cavity method to construct a minimal energy subspace of wavevectors that are used to obtain some upper bounds for the energy cost of the low-temperature excitations. Given a trial wavefunction we use the cavity method of statistical physics to estimate the Hamiltonian expectation and to find the optimal variational parameters in the subspace of wavevectors orthogonal to the lower-energy wavefunctions. To this end, we write the overlap between two wavefunctions within the Bethe approximation, which allows us to replace the global orthogonality constraint with some local constraints on the variational parameters. The method is applied to the transverse Ising model and different levels of approximations are compared with the exact numerical solutions for small systems. (paper)
Autonomous vehicle motion control, approximate maps, and fuzzy logic
Ruspini, Enrique H.
1993-01-01
Progress on research on the control of actions of autonomous mobile agents using fuzzy logic is presented. The innovations described encompass theoretical and applied developments. At the theoretical level, results are presented of research leading to the combined utilization of conventional planning techniques with fuzzy logic approaches for the control of local motion and perception actions. Formulations of dynamic programming approaches to optimal control, in the context of the analysis of approximate models of the real world, are also examined, together with a new approach to goal conflict resolution that does not require the specification of numerical values representing relative goal importance. Applied developments include the introduction of the notion of an approximate map: a fuzzy relational database structure for the representation of vague and imprecise information about the robot's environment. The central notions of control point and control structure are also discussed.
Swiss-cheese models and the Dyer-Roeder approximation
Energy Technology Data Exchange (ETDEWEB)
Fleury, Pierre, E-mail: fleury@iap.fr [Institut d' Astrophysique de Paris, UMR-7095 du CNRS, Université Pierre et Marie Curie, 98 bis, boulevard Arago, 75014 Paris (France)
2014-06-01
In view of interpreting the cosmological observations precisely, especially when they involve narrow light beams, it is crucial to understand how light propagates in our statistically homogeneous, clumpy Universe. Among the various approaches to tackle this issue, Swiss-cheese models propose an inhomogeneous spacetime geometry which is an exact solution of Einstein's equation, while the Dyer-Roeder approximation deals with inhomogeneity in an effective way. In this article, we demonstrate that the distance-redshift relation of a certain class of Swiss-cheese models is the same as the one predicted by the Dyer-Roeder approach, at a well-controlled level of approximation. Both methods are therefore equivalent when applied to the interpretation of, e.g., supernova observations. The proof relies on completely analytical arguments, and is illustrated by numerical results.
Weighted Polynomial Approximation for Automated Detection of Inspiratory Flow Limitation
Directory of Open Access Journals (Sweden)
Sheng-Cheng Huang
2017-01-01
Full Text Available Inspiratory flow limitation (IFL) is a critical symptom of sleep breathing disorders. A characteristic flattened flow-time curve indicates the presence of highest-resistance flow limitation. This study investigated a real-time algorithm for detecting IFL during sleep. Three categories of inspiratory flow shape were collected from previous studies for use as a development set. Of these, 16 cases were labeled as non-IFL and 78 as IFL, the latter further categorized into minor (20 cases) and severe (58 cases) levels of obstruction. In this study, algorithms using polynomial functions were proposed for extracting the features of IFL: first- to third-order polynomial approximations were applied to calculate the fitting curve and obtain the mean absolute error. The proposed algorithm is described by a weighted third-order (w.3rd-order) polynomial function. For validation, a total of 1,093 inspiratory breaths were acquired as a test set. The accuracy of the classifications produced by the presented feature detection methods was analyzed, and performance was compared using a misclassification cobweb. According to the results, the algorithm using the w.3rd-order polynomial approximation achieved an accuracy of 94.14% for IFL classification. We conclude that this algorithm achieves effective automatic IFL detection during sleep.
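The core feature described in the abstract, the mean absolute error of a polynomial fit to the inspiratory flow curve, can be sketched in Python as follows. The synthetic breath shapes are illustrative assumptions, and the authors' weighting step is not reproduced:

```python
import numpy as np

def poly_fit_mae(t, flow, order):
    """Mean absolute error between a flow-time curve and its
    least-squares polynomial fit of the given order; a flattened
    (flow-limited) breath has a plateau that a low-order polynomial
    cannot follow, so its fit error is larger than for a rounded one."""
    coeffs = np.polyfit(t, flow, order)
    return float(np.mean(np.abs(flow - np.polyval(coeffs, t))))

t = np.linspace(0.0, 1.0, 100)            # normalized inspiration time
rounded = np.sin(np.pi * t)               # normal, rounded breath shape
flattened = np.clip(rounded, 0.0, 0.6)    # flow-limited plateau at 0.6
mae_rounded = poly_fit_mae(t, rounded, 3)
mae_flat = poly_fit_mae(t, flattened, 3)
```

A threshold on such a fit-error feature (here, third order, as in the abstract) is the kind of quantity one would tune on a labeled development set before classifying the test breaths.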
DEFF Research Database (Denmark)
Sadegh, Payman
1997-01-01
This paper deals with a projection algorithm for stochastic approximation using simultaneous perturbation gradient approximation for optimization under inequality constraints, where no direct gradient of the loss function is available and the inequality constraints are given as explicit functions ... of the optimization parameters. It is shown that, under application of the projection algorithm, the parameter iterate converges almost surely to a Kuhn-Tucker point. The procedure is illustrated by a numerical example. (C) 1997 Elsevier Science Ltd.
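A minimal Python sketch of simultaneous perturbation stochastic approximation (SPSA) with a projection step, applied to a toy constrained quadratic, illustrates the scheme; the gain sequences and the test problem are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def spsa_projected(loss, theta0, project, a=0.1, c=0.1, n_iter=500, seed=0):
    """SPSA with projection: each iteration estimates the gradient from
    two loss evaluations along a random +/-1 direction, takes a gradient
    step, and projects the iterate back onto the feasible set."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602          # commonly used SPSA gain decays
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # two-sided difference; since delta_i = +/-1, 1/delta_i = delta_i
        ghat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2.0 * ck) * delta
        theta = project(theta - ak * ghat)
    return theta

# minimize ||theta - (2, 2)||^2 subject to theta <= 1 componentwise;
# the Kuhn-Tucker point is (1, 1), on the constraint boundary
project = lambda th: np.minimum(th, 1.0)
theta = spsa_projected(lambda th: float(np.sum((th - 2.0) ** 2)), [0.0, 0.0], project)
```

Only two loss evaluations per iteration are needed regardless of dimension, which is the appeal of the simultaneous perturbation estimate when no direct gradient is available.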
The Right to Remain Silent in Criminal Trial
Directory of Open Access Journals (Sweden)
Gianina Anemona Radu
2013-05-01
Full Text Available A person's right not to incriminate oneself or to remain silent and not contribute to one's own incrimination is a basic requirement of due process, although the right not to testify against oneself is not expressly guaranteed. This legal right is intended to protect the accused/defendant against the authorities' abusive coercion. The scope of the right not to incriminate oneself is related to criminal matters under the Convention, and is thus applicable to criminal proceedings concerning all types of crimes as a guarantee of a fair trial. The European Court of Justice ruled that despite the fact that art. 6 paragraph 2 of the Convention does not expressly mention the right not to incriminate oneself and the right not to contribute to one's own incrimination (nemo tenetur se ipsum accusare), these are generally recognized international rules that are consistent with the notion of "fair trial" stipulated in art. 6. By virtue of the right to silence, the person charged with a crime is free to answer the questions or not, as he/she believes it is in his/her interest. Therefore, the right to silence involves not only the right not to testify against oneself, but also the right of the accused/defendant not to incriminate oneself. Thus, the accused/defendant cannot be compelled to assist in the production of evidence and cannot be sanctioned for failing to provide certain documents or other evidence. An obligation to testify against one's will, under the constraint of a fine or any other form of coercion, constitutes an interference with the negative aspect of the right to freedom of expression, which must be necessary in a democratic society. It is essential to clarify certain issues as far as this right is concerned. First of all, the statutory provision in question is specific to adversarial systems, which are found mainly in Anglo-Saxon countries and are totally different from that underlying the current Romanian Criminal
Non-Gaussianity in two-field inflation beyond the slow-roll approximation
Energy Technology Data Exchange (ETDEWEB)
Jung, Gabriel; Tent, Bartjan van, E-mail: gabriel.jung@th.u-psud.fr, E-mail: bartjan.van-tent@th.u-psud.fr [Laboratoire de Physique Théorique (UMR 8627), CNRS, Univ. Paris-Sud, Université Paris-Saclay, Bâtiment 210, 91405 Orsay Cedex (France)
2017-05-01
We use the long-wavelength formalism to investigate the level of bispectral non-Gaussianity produced in two-field inflation models with standard kinetic terms. Even though the Planck satellite has so far not detected any primordial non-Gaussianity, it has tightened the constraints significantly, and it is important to better understand what regions of inflation model space have been ruled out, as well as prepare for the next generation of experiments that might reach the important milestone of Δf_NL^local = 1. We derive an alternative formulation of the previously derived integral expression for f_NL, which makes it easier to physically interpret the result and see which types of potentials can produce large non-Gaussianity. We apply this to the case of a sum potential and show that it is very difficult to satisfy simultaneously the conditions for a large f_NL and the observational constraints on the spectral index n_s. In the case of the sum of two monomial potentials and a constant we explicitly show in which small region of parameter space this is possible, and we show how to construct such a model. Finally, the new general expression for f_NL also allows us to prove that for the sum potential the explicit expressions derived within the slow-roll approximation remain valid even when the slow-roll approximation is broken during the turn of the field trajectory (as long as only the ε slow-roll parameter remains small).
TEMPORAL MODELING OF DNA DEGRADATION IN BONE REMAINS
Directory of Open Access Journals (Sweden)
Andrei Stefan
2012-06-01
Full Text Available The aim of this study is to follow the changes that occur, over time, at the DNA level and to establish an efficient and reliable protocol for ancient DNA extraction from bones found in archaeological sites. To test whether the protocol is efficient and capable of yielding good-quality DNA, extraction was first performed on fresh bones. The material consists of fresh pig (Sus scrofa) and cow (Bos taurus) bones that were ground using a drill operating at low speed. The bone powder was then incubated in lysis buffer in the presence of proteinase K. DNA isolation and purification were done using the phenol:chloroform protocol, and DNA was precipitated with absolute ethanol stored at -20°C. The extractions were carried out once every month, for a total of four extractions
Instrument to determine prestress remaining in a damaged bridge girder
Civjan, Scott A.; Jirsa, James O.; Carrasquillo, Ramon L.; Fowler, David W.
1998-03-01
An instrument has been developed to estimate stress levels in prestress strands in existing members. The prototype instrument applies a lateral load to an exposed prestressing strand and measures the resulting displacements. The instrument was calibrated for 0.5-inch (12.7 mm) diameter seven-wire strand with exposed lengths of 1.5 feet (0.46 m) to 3.75 feet (1.14 m). It was tested to determine its accuracy, precision, and usefulness in the field. Strand forces were consistently estimated to within ten percent of the actual load. The device was also utilized in the placement of strand splices and was found to be more reliable in checking induced strand tensions than the standard torque wrench method.
New Tests of the Fixed Hotspot Approximation
Gordon, R. G.; Andrews, D. L.; Horner-Johnson, B. C.; Kumar, R. R.
2005-05-01
We present new methods for estimating uncertainties in plate reconstructions relative to the hotspots and new tests of the fixed hotspot approximation. We find no significant motion between Pacific hotspots, on the one hand, and Indo-Atlantic hotspots, on the other, for the past ~ 50 Myr, but large and significant apparent motion before 50 Ma. Whether this motion is truly due to motion between hotspots or alternatively due to flaws in the global plate motion circuit can be tested with paleomagnetic data. These tests give results consistent with the fixed hotspot approximation and indicate significant misfits when a relative plate motion circuit through Antarctica is employed for times before 50 Ma. If all of the misfit to the global plate motion circuit is due to motion between East and West Antarctica, then that motion is 800 ± 500 km near the Ross Sea Embayment and progressively less along the Trans-Antarctic Mountains toward the Weddell Sea. Further paleomagnetic tests of the fixed hotspot approximation can be made. Cenozoic and Cretaceous paleomagnetic data from the Pacific plate, along with reconstructions of the Pacific plate relative to the hotspots, can be used to estimate an apparent polar wander (APW) path of Pacific hotspots. An APW path of Indo-Atlantic hotspots can be similarly estimated (e.g. Besse & Courtillot 2002). If both paths diverge in similar ways from the north pole of the hotspot reference frame, it would indicate that the hotspots have moved in unison relative to the spin axis, which may be attributed to true polar wander. If the two paths diverge from one another, motion between Pacific hotspots and Indo-Atlantic hotspots would be indicated. The general agreement of the two paths shows that the former is more important than the latter. The data require little or no motion between groups of hotspots, but up to ~10 mm/yr of motion is allowed within uncertainties. The results disagree, in particular, with the recent extreme interpretation of
Random phase approximation in relativistic approach
International Nuclear Information System (INIS)
Ma Zhongyu; Yang Ding; Tian Yuan; Cao Ligang
2009-01-01
Some special issues of the random phase approximation (RPA) in the relativistic approach are reviewed. Full consistency and a proper treatment of the coupling to the continuum are responsible for the successful application of the RPA in the description of dynamical properties of finite nuclei. A fully consistent relativistic RPA (RRPA) requires that the relativistic mean field (RMF) wave function of the nucleus and the RRPA correlations be calculated with the same effective Lagrangian, and that the Dirac sea of negative-energy states be treated consistently. The proper treatment of the single-particle continuum with scattering asymptotic conditions in the RMF and RRPA is discussed. The full continuum spectrum can be described by the single-particle Green's function, and the relativistic continuum RPA is thus established. A separable form of the pairing force is introduced in the relativistic quasi-particle RPA. (authors)
Local facet approximation for image stitching
Li, Jing; Lai, Shiming; Liu, Yu; Wang, Zhengming; Zhang, Maojun
2018-01-01
Image stitching aims at eliminating multiview parallax and generating a seamless panorama given a set of input images. This paper proposes a local adaptive stitching method, which could achieve both accurate and robust image alignments across the whole panorama. A transformation estimation model is introduced by approximating the scene as a combination of neighboring facets. Then, the local adaptive stitching field is constructed using a series of linear systems of the facet parameters, which enables the parallax handling in three-dimensional space. We also provide a concise but effective global projectivity preserving technique that smoothly varies the transformations from local adaptive to global planar. The proposed model is capable of stitching both normal images and fisheye images. The efficiency of our method is quantitatively demonstrated in the comparative experiments on several challenging cases.
Approximated solutions to the Schroedinger equation
International Nuclear Information System (INIS)
Rico, J.F.; Fernandez-Alonso, J.I.
1977-01-01
The authors are currently working on a couple of the well-known deficiencies of the variation method and present here some of the results that have been obtained so far. The variation method does not give information a priori on the trial functions best suited for a particular problem nor does it give information a posteriori on the degree of precision attained. In order to clarify the origin of both difficulties, a geometric interpretation of the variation method is presented. This geometric interpretation is the starting point for the exact formal solution to the fundamental state and for the step-by-step approximations to the exact solution which are also given. Some comments on these results are included. (Auth.)
Vortex sheet approximation of boundary layers
International Nuclear Information System (INIS)
Chorin, A.J.
1978-01-01
A grid-free method for approximating incompressible boundary layers is introduced. The computational elements are segments of vortex sheets. The method is related to the earlier vortex method; simplicity is achieved at the cost of replacing the Navier-Stokes equations by the Prandtl boundary layer equations. A new method for generating vorticity at boundaries is also presented; it can be used with the earlier vortex method. The applications presented include (i) flat plate problems, and (ii) a flow problem in a model cylinder-piston assembly, where the new method is used near walls and an improved version of the random choice method is used in the interior. One of the attractive features of the new method is the ease with which it can be incorporated into hybrid algorithms
Approximate Stokes Drift Profiles in Deep Water
Breivik, Øyvind; Janssen, Peter A. E. M.; Bidlot, Jean-Raymond
2014-09-01
A deep-water approximation to the Stokes drift velocity profile is explored as an alternative to the monochromatic profile. The alternative profile investigated relies on the same two quantities required for the monochromatic profile, viz the Stokes transport and the surface Stokes drift velocity. Comparisons with parametric spectra and profiles under wave spectra from the ERA-Interim reanalysis and buoy observations reveal much better agreement than the monochromatic profile even for complex sea states. That the profile gives a closer match and a more correct shear has implications for ocean circulation models since the Coriolis-Stokes force depends on the magnitude and direction of the Stokes drift profile and Langmuir turbulence parameterizations depend sensitively on the shear of the profile. The alternative profile comes at no added numerical cost compared to the monochromatic profile.
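The monochromatic profile referred to above is fully determined by the same two quantities the abstract names: the surface Stokes drift velocity and the Stokes transport. A short Python sketch with illustrative SI values shows the construction and checks the transport constraint; the alternative profile of the paper is not reproduced here:

```python
import numpy as np

def monochromatic_stokes_profile(u0, transport):
    """Deep-water monochromatic Stokes drift profile u(z) = u0 exp(2 k z),
    z <= 0, with the effective wavenumber k chosen so the vertically
    integrated transport matches the given value:
    integral of u0 e^{2kz} over z < 0 is u0 / (2k), hence k = u0 / (2 T)."""
    k = u0 / (2.0 * transport)
    return lambda z: u0 * np.exp(2.0 * k * np.asarray(z))

# illustrative values: 0.1 m/s surface drift, 0.5 m^2/s transport
u = monochromatic_stokes_profile(u0=0.1, transport=0.5)

# midpoint-rule check that the profile integrates back to the transport
z = np.linspace(-200.0, 0.0, 400001)
zm = 0.5 * (z[:-1] + z[1:])
transport_check = float(np.sum(u(zm)) * (z[1] - z[0]))
```

Because the Coriolis-Stokes force and Langmuir turbulence parameterizations depend on the magnitude and shear of this profile, the exponential decay scale fixed by k is exactly where the monochromatic approximation can misrepresent complex sea states, which motivates the alternative profile of the paper.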
Analytical approximations for wide and narrow resonances
International Nuclear Information System (INIS)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da
2005-01-01
This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U-238 were treated and the analytical procedure gave satisfactory results as compared with the reference solution, for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)
Analytical approximations for wide and narrow resonances
Energy Technology Data Exchange (ETDEWEB)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br
2005-07-01
This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U-238 were treated and the analytical procedure gave satisfactory results as compared with the reference solution, for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)
The Bloch Approximation in Periodically Perforated Media
International Nuclear Information System (INIS)
Conca, C.; Gomez, D.; Lobo, M.; Perez, E.
2005-01-01
We consider a periodically heterogeneous and perforated medium filling an open domain Ω of R^N. Assuming that the size of the periodicity of the structure and of the holes is O(ε), we study the asymptotic behavior, as ε → 0, of the solution of an elliptic boundary value problem with strongly oscillating coefficients posed in Ω_ε (Ω_ε being Ω minus the holes) with a Neumann condition on the boundary of the holes. We use Bloch wave decomposition to introduce an approximation of the solution in the energy norm which can be computed from the homogenized solution and the first Bloch eigenfunction. We first consider the case where Ω is R^N and then localize the problem for a bounded domain Ω, considering a homogeneous Dirichlet condition on the boundary of Ω
Approximate analytical modeling of leptospirosis infection
Ismail, Nur Atikah; Azmi, Amirah; Yusof, Fauzi Mohamed; Ismail, Ahmad Izani
2017-11-01
Leptospirosis is an infectious disease carried by rodents which can cause death in humans. The disease spreads directly through contact with the feces or urine of infected rodents or through their bites, and indirectly via water contaminated with their urine and droppings. A significant increase in the number of leptospirosis cases in Malaysia, caused by the recent severe floods, was recorded during the heavy rainfall season. Therefore, to understand the dynamics of leptospirosis infection, a mathematical model based on fractional differential equations has been developed and analyzed. In this paper an approximate analytical method, the multi-step Laplace Adomian decomposition method, has been used to conduct numerical simulations so as to gain insight into the spread of leptospirosis infection.
Coated sphere scattering by geometric optics approximation.
Mengran, Zhai; Qieni, Lü; Hongxia, Zhang; Yinxin, Zhang
2014-10-01
A new geometric optics model has been developed for the calculation of light scattering by a coated sphere, and the analytic expression for scattering is presented according to whether or not rays hit the core. Each geometric optics approximation (GOA) term is parameterized by the number of reflections at the coating/core interface, the number of reflections at the coating/medium interface, and the number of chords in the core, with degenerate-path and repeated-path terms considered for rays striking the core, which simplifies the calculation. For rays missing the core, the GOA terms are treated as for a homogeneous sphere. The scattering intensity of coated particles is calculated and then compared with results from the Debye series and Aden-Kerker theory. The consistency of the results proves the validity of the method proposed in this work.
Approximation by max-product type operators
Bede, Barnabás; Gal, Sorin G
2016-01-01
This monograph presents a broad treatment of developments in an area of constructive approximation involving the so-called "max-product" type operators. The exposition highlights the max-product operators as those which allow one to obtain, in many cases, more valuable estimates than those obtained by classical approaches. The text considers a wide variety of operators which are studied for a number of interesting problems such as quantitative estimates, convergence, saturation results, and localization, to name several. Additionally, the book discusses the perfect analogies between the probabilistic approaches of the classical Bernstein type operators and of the classical convolution operators (non-periodic and periodic cases), and the possibilistic approaches of the max-product variants of these operators. These approaches allow for two natural interpretations of the max-product Bernstein type operators and convolution type operators: firstly, as possibilistic expectations of some fuzzy variables, and secondly,...
Polarized constituent quarks in NLO approximation
International Nuclear Information System (INIS)
Khorramian, Ali N.; Tehrani, S. Atashbar; Mirjalili, A.
2006-01-01
The valon representation provides a bridge between hadrons and quarks, in terms of which the bound-state and scattering properties of hadrons can be unified and described. We studied polarized valon distributions, which play an important role in describing the spin dependence of parton distributions in leading and next-to-leading order approximation. The convolution integral in the framework of the valon model was used as a useful tool in the polarized case. To obtain the polarized parton distributions in a proton we need the polarized valon distributions in the proton and the polarized parton distributions inside the valon. We employed Bernstein polynomial averages to obtain the unknown parameters of the polarized valon distributions by fitting to available experimental data.
Approximate Sensory Data Collection: A Survey.
Cheng, Siyao; Cai, Zhipeng; Li, Jianzhong
2017-03-10
With the rapid development of the Internet of Things (IoTs), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoTs and WSNs, the size of sensory data has already exceeded several petabytes annually, which brings too many troubles and challenges for the data collection, which is a primary operation in IoTs and WSNs. Since the exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted.
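A hedged sketch of the model-based category the survey describes, as a generic threshold scheme rather than any specific algorithm from the survey: a sensor transmits a reading only when it drifts more than a tolerance eps from the last transmitted value, so the sink can reconstruct the series within a guaranteed error while sending far fewer messages.

```python
def collect_approximate(readings, eps):
    """Model-based approximate collection sketch: retransmit only when
    a reading deviates by more than eps from the last transmitted
    value; the sink replays the last transmitted value in between.
    Returns the number of transmissions and the reconstructed series."""
    transmitted = 0
    last = None
    reconstructed = []
    for r in readings:
        if last is None or abs(r - last) > eps:
            last = r
            transmitted += 1
        reconstructed.append(last)
    return transmitted, reconstructed

readings = [20.0, 20.1, 20.1, 23.5, 23.6, 23.4, 20.2]
sent, approx = collect_approximate(readings, eps=1.0)
# Only 3 of the 7 readings are transmitted, yet every reconstructed
# value stays within eps of the true reading.
assert sent == 3
assert all(abs(a - r) <= 1.0 for a, r in zip(approx, readings))
```

The bandwidth/accuracy trade-off the survey discusses is visible directly in the choice of eps: a larger tolerance suppresses more transmissions at the cost of a looser reconstruction bound.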
Approximate Sensory Data Collection: A Survey
Directory of Open Access Journals (Sweden)
Siyao Cheng
2017-03-01
Full Text Available With the rapid development of the Internet of Things (IoTs), wireless sensor networks (WSNs) and related techniques, the amount of sensory data manifests an explosive growth. In some applications of IoTs and WSNs, the size of sensory data has already exceeded several petabytes annually, which brings too many troubles and challenges for the data collection, which is a primary operation in IoTs and WSNs. Since the exact data collection is not affordable for many WSN and IoT systems due to the limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: the model-based ones, the compressive sensing based ones, and the query-driven ones. For each category of algorithms, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecasted.
Approximate truncation robust computed tomography—ATRACT
International Nuclear Information System (INIS)
Dennerlein, Frank; Maier, Andreas
2013-01-01
We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm aims at reconstructing volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and involves, as one component, a novel formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed, and reconstruction results from both simulated projections and first clinical data sets are presented. (paper)
Hydromagnetic turbulence in the direct interaction approximation
International Nuclear Information System (INIS)
Nagarajan, S.
1975-01-01
The dissertation is concerned with the nature of turbulence in a medium with large electrical conductivity. Three distinct though inter-related questions are asked. Firstly, the evolution of a weak, random initial magnetic field in a highly conducting, isotropically turbulent fluid is discussed; this was first treated in the paper 'Growth of Turbulent Magnetic Fields' by Kraichnan and Nagarajan (The Physics of Fluids, volume 10, number 4, 1967). Secondly, the direct interaction approximation for hydromagnetic turbulence maintained by stationary, isotropic, random stirring forces is formulated in the wave-number-frequency domain. Thirdly, the dynamical evolution of a weak, random magnetic excitation in a turbulent, electrically conducting fluid is examined under varying kinematic conditions. (G.T.H.)
Approximation Preserving Reductions among Item Pricing Problems
Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei
When a store sells items to customers, the store wishes to set the prices of the items to maximize its profit. Intuitively, if the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store, so it is hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items; assume also that each item i ∈ V has production cost d_i and each customer e_j ∈ E has valuation v_j on the bundle e_j ⊆ V of items. When the store sells an item i ∈ V at price r_i, the profit for the item i is p_i = r_i - d_i. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that p_i ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of "loss-leader" and showed that the seller can obtain more total profit when p_i < 0 is allowed than when it is not. In this paper, we derive approximation preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratios.
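The profit objective above can be made concrete with a toy sketch. The instance, price grid, and function names are hypothetical, not from the paper: single-minded customers buy their bundle exactly when its total price is at most their valuation, and a brute-force grid search, which is free to try prices below cost (loss-leader candidates), picks the most profitable price vector.

```python
from itertools import product

# Hypothetical toy instance: item production costs d_i, and
# single-minded customers given as (bundle e_j, valuation v_j).
costs = {"a": 2.0, "b": 2.0}
customers = [({"a"}, 3.0), ({"a", "b"}, 7.0)]

def total_profit(prices):
    """A customer buys their bundle iff its total price is at most
    their valuation; the store then earns p_i = r_i - d_i per item."""
    profit = 0.0
    for bundle, valuation in customers:
        if sum(prices[i] for i in bundle) <= valuation:
            profit += sum(prices[i] - costs[i] for i in bundle)
    return profit

# Exhaustive search over a small price grid (feasible only for
# tiny instances; the paper is about approximation algorithms).
grid = [1.0, 2.0, 3.0, 4.0, 5.0]
best = max(
    (dict(zip(costs, combo)) for combo in product(grid, repeat=len(costs))),
    key=total_profit,
)
# best is {"a": 3.0, "b": 4.0}: customer 1 pays 3 (profit 1) and
# customer 2 pays 7 (profit 3), for a total profit of 4.
```

In this instance no loss-leader is needed, but the same search space also contains prices with p_i < 0, which is exactly the relaxation Balcan et al. show can strictly increase profit on other instances.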
Approximate direct georeferencing in national coordinates
Legat, Klaus
Direct georeferencing has gained an increasing importance in photogrammetry and remote sensing. Thereby, the parameters of exterior orientation (EO) of an image sensor are determined by GPS/INS, yielding results in a global geocentric reference frame. Photogrammetric products like digital terrain models or orthoimages, however, are often required in national geodetic datums and mapped by national map projections, i.e., in "national coordinates". As the fundamental mathematics of photogrammetry is based on Cartesian coordinates, the scene restitution is often performed in a Cartesian frame located at some central position of the image block. The subsequent transformation to national coordinates is a standard problem in geodesy and can be done in a rigorous manner, at least if the formulas of the map projection are rigorous. Drawbacks of this procedure include practical deficiencies related to the photogrammetric processing as well as the computational cost of transforming the whole scene. To avoid these problems, the paper pursues an alternative processing strategy where the EO parameters are transformed prior to the restitution. If only this transition were done, however, the scene would be systematically distorted, because the national coordinates are not Cartesian due to the earth curvature and the unavoidable length distortion of map projections. To compensate for these distortions, several corrections need to be applied. These are treated in detail for both passive and active imaging. Since all these corrections are approximations only, the resulting technique is termed "approximate direct georeferencing". Still, the residual distortions are usually very low, as is demonstrated by simulations, rendering the technique an attractive approach to direct georeferencing.
The association between higher education and approximate number system acuity
Lindskog, Marcus; Winman, Anders; Juslin, Peter
2014-01-01
Humans are equipped with an approximate number system (ANS) supporting non-symbolic numerosity representation. Studies indicate a relationship between ANS-precision (acuity) and math achievement. Whether the ANS is a prerequisite for learning mathematics or if mathematics education enhances the ANS remains an open question. We investigated the association between higher education and ANS acuity with university students majoring in subjects with varying amounts of mathematics (mathematics, business, and humanities), measured either early (First year) or late (Third year) in their studies. The results suggested a non-significant trend where students taking more mathematics had better ANS acuity and a significant improvement in ANS acuity as a function of study length that was mainly confined to the business students. The results provide partial support for the hypothesis that education in mathematics can enhance the ANS acuity. PMID:24904478
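ANS acuity as discussed in this abstract is commonly summarized by a Weber fraction w. Below is a minimal sketch of the standard psychophysical model often used for that purpose; this is an assumption for illustration, not a method taken from the study itself.

```python
import math

def p_correct(n1, n2, w):
    """Standard ANS psychophysics model (an assumption here, not from
    this study): numerosities are encoded as Gaussians with standard
    deviation proportional to magnitude, so the probability of
    correctly judging which of n1 and n2 is larger is
    Phi(|n1 - n2| / (w * sqrt(n1^2 + n2^2))), where w is the Weber
    fraction (smaller w = sharper acuity)."""
    z = abs(n1 - n2) / (w * math.sqrt(n1 ** 2 + n2 ** 2))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A sharper ANS (smaller w) discriminates 9 vs 10 dots more reliably.
assert p_correct(9, 10, w=0.15) > p_correct(9, 10, w=0.30) > 0.5
```

Fitting w to a participant's accuracy across many dot-comparison trials is how "ANS acuity" is typically turned into the single number compared across student groups in studies like this one.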
The association between higher education and approximate number system acuity.
Lindskog, Marcus; Winman, Anders; Juslin, Peter
2014-01-01
Humans are equipped with an approximate number system (ANS) supporting non-symbolic numerosity representation. Studies indicate a relationship between ANS-precision (acuity) and math achievement. Whether the ANS is a prerequisite for learning mathematics or if mathematics education enhances the ANS remains an open question. We investigated the association between higher education and ANS acuity with university students majoring in subjects with varying amounts of mathematics (mathematics, business, and humanities), measured either early (First year) or late (Third year) in their studies. The results suggested a non-significant trend where students taking more mathematics had better ANS acuity and a significant improvement in ANS acuity as a function of study length that was mainly confined to the business students. The results provide partial support for the hypothesis that education in mathematics can enhance the ANS acuity.
The Association Between Higher Education and Approximate Number System Acuity
Directory of Open Access Journals (Sweden)
Marcus eLindskog
2014-05-01
Full Text Available Humans are equipped with an Approximate Number System (ANS) supporting non-symbolic numerosity representation. Studies indicate a relationship between ANS-precision (acuity) and math achievement. Whether the ANS is a prerequisite for learning mathematics or if mathematics education enhances the ANS remains an open question. We investigated the association between higher education and ANS acuity with university students majoring in subjects with varying amounts of mathematics (mathematics, business, and humanities), measured either early (1st year) or late (3rd year) in their studies. The results suggested a non-significant trend where students taking more mathematics had better ANS acuity and a significant improvement in ANS acuity as a function of study length that was mainly confined to the business students. The results provide partial support for the hypothesis that education in mathematics can enhance the ANS acuity.
SEE rate estimation based on diffusion approximation of charge collection
Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.
2018-03-01
The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is the uncertainty of parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on a diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the need for arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.