WorldWideScience

Sample records for exponential error reduction

  1. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  2. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
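
    The contrast described above can be made concrete with a toy simulation. The sketch below compares a Rescorla-Wagner-style rule (a standard TER model, where all cues share one trial-level error term) with a local-error rule in which each cue tracks only its own discrepancy; the learning rate, outcome magnitude, and trial structure are illustrative assumptions, not the models fitted in the paper.

```python
# Toy comparison of total-error (TER) vs local-error (LER) learning rules on
# compound AB+ trials. Parameters are illustrative, not the paper's fits.

def train(rule, n_trials=200, alpha=0.1, lam=1.0):
    # Two cues, A and B, always presented together and followed by the outcome.
    V = {"A": 0.0, "B": 0.0}
    for _ in range(n_trials):
        total_pred = V["A"] + V["B"]   # summed prediction across the compound
        for cue in V:
            if rule == "TER":          # Rescorla-Wagner: one shared error term
                error = lam - total_pred
            else:                      # LER: each cue tracks its own discrepancy
                error = lam - V[cue]
            V[cue] += alpha * error
    return V

ter = train("TER")
ler = train("LER")
# Under TER the cues compete and share the outcome (each asymptotes near
# lam / 2); under LER each cue independently approaches lam.
```

    The divergence in asymptotes is exactly the kind of behavioural signature (cue competition vs. its absence) that lets data discriminate between the two model families.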

  3. Error analysis in Fourier methods for option pricing for exponential Lévy processes

    KAUST Repository

    Crocce, Fabian; Häppölä, Juho; Kiessling, Jonas; Tempone, Raul

    2015-01-01

    We derive an error bound for utilising the discrete Fourier transform method for solving partial integro-differential equations (PIDEs) that describe European option prices for exponential Lévy-driven asset prices. We give sufficient conditions

  4. The Negative Sign and Exponential Expressions: Unveiling Students' Persistent Errors and Misconceptions

    Science.gov (United States)

    Cangelosi, Richard; Madrid, Silvia; Cooper, Sandra; Olson, Jo; Hartter, Beverly

    2013-01-01

    The purpose of this study was to determine whether or not certain errors made when simplifying exponential expressions persist as students progress through their mathematical studies. College students enrolled in college algebra, pre-calculus, and first- and second-semester calculus mathematics courses were asked to simplify exponential…

  5. The District Nursing Clinical Error Reduction Programme.

    Science.gov (United States)

    McGraw, Caroline; Topping, Claire

    2011-01-01

    The District Nursing Clinical Error Reduction (DANCER) Programme was initiated in NHS Islington following an increase in the number of reported medication errors. The objectives were to reduce the actual degree of harm and the potential risk of harm associated with medication errors and to maintain the existing positive reporting culture, while robustly addressing performance issues. One hundred medication errors reported in 2007/08 were analysed using a framework that specifies the factors that predispose to adverse medication events in domiciliary care. Various contributory factors were identified, and interventions were subsequently developed to address poor drug calculation and medication problem-solving skills and incorrectly transcribed medication administration record charts. Follow-up data were obtained at 12 months and two years. The evaluation has shown that although medication errors do still occur, the programme has resulted in a marked reduction in both the associated actual degree of harm and the potential risk of harm.

  6. Error analysis in Fourier methods for option pricing for exponential Lévy processes

    KAUST Repository

    Crocce, Fabian

    2015-01-07

    We derive an error bound for utilising the discrete Fourier transform method for solving partial integro-differential equations (PIDEs) that describe European option prices for exponential Lévy-driven asset prices. We give sufficient conditions for the existence of an L∞ bound that separates the dynamical contribution from that arising from the type of the option in question. The bound achieved does not rely on information about the asymptotic behaviour of option prices at extreme asset values. In addition, we demonstrate improved numerical performance for select examples of practical relevance when compared to established bounding methods.
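
    As a minimal sketch of the transform-based pricing these abstracts analyse, the code below prices a European call by damped Fourier inversion of the characteristic function (the generic Carr-Madan construction, not necessarily the bound derivation of this paper) under Black-Scholes dynamics, the simplest exponential Lévy model, and checks it against the closed-form price. The damping factor and all market parameters are illustrative assumptions.

```python
# Damped-Fourier (Carr-Madan style) pricing of a European call under
# Black-Scholes dynamics, checked against the closed-form price.
import cmath, math

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
alpha = 1.5                         # damping factor (assumed choice)
k = math.log(K)                     # log-strike

def phi(u):
    """Characteristic function of log S_T under the risk-neutral measure."""
    m = math.log(S0) + (r - 0.5 * sigma ** 2) * T
    return cmath.exp(1j * u * m - 0.5 * sigma ** 2 * T * u * u)

def psi(v):
    """Fourier transform of the damped call price in log-strike."""
    num = cmath.exp(-r * T) * phi(v - 1j * (alpha + 1))
    den = alpha ** 2 + alpha - v * v + 1j * (2 * alpha + 1) * v
    return num / den

# Trapezoid rule on [0, vmax]; the integrand decays like a Gaussian here.
vmax, n = 200.0, 4000
dv = vmax / n
integral = 0.5 * psi(0.0).real * dv
for i in range(1, n + 1):
    v = i * dv
    w = 0.5 if i == n else 1.0
    integral += w * (cmath.exp(-1j * v * k) * psi(v)).real * dv
call_ft = math.exp(-alpha * k) / math.pi * integral

# Closed-form Black-Scholes reference.
d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
call_bs = S0 * N(d1) - K * math.exp(-r * T) * N(d2)
```

    The error analysis in these papers concerns exactly the two approximations visible here: truncating the integral at a finite vmax and discretising it with a finite step dv.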

  7. Tight Error Bounds for Fourier Methods for Option Pricing for Exponential Lévy Processes

    KAUST Repository

    Crocce, Fabian

    2016-01-06

    Prices of European options whose underlying asset is driven by a Lévy process are solutions to partial integro-differential equations (PIDEs) that generalise the Black-Scholes equation by incorporating a non-local integral term to account for the discontinuities in the asset price. The Lévy-Khintchine formula provides an explicit representation of the characteristic function of a Lévy process (cf. [6]): one can derive an exact expression for the Fourier transform of the solution of the relevant PIDE. The rapid convergence of the trapezoid quadrature and the associated speedup provide efficient methods for evaluating option prices, possibly for a range of parameter configurations simultaneously. Several works have been devoted to the error analysis and parameter selection for these transform-based methods. In [5] several payoff functions are considered for a rather general set of models whose characteristic function is assumed to be known. [4] presents the framework and theoretical approach for the error analysis, and establishes polynomial convergence rates for approximations of the option prices. [1] presents FT-related methods with a curved integration contour. The classical flat FT methods have, on the other hand, been extended to option pricing problems beyond the European framework [3]. We present a methodology for studying and bounding the error committed when using FT methods to compute option prices. We also provide a systematic way of choosing the parameters of the numerical method, minimising the error bound and guaranteeing adherence to a prescribed error tolerance. We focus on exponential Lévy processes that may be either diffusive or pure-jump in type. Our contribution is to derive a tight error bound for a Fourier transform method when pricing options under risk-neutral Lévy dynamics. We present a simplified bound that separates the contributions of the payoff and of the process in an easily processed and extensible product form that

  8. Removal of round off errors in the matrix exponential method for solving the heavy nuclide chain

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Noh, Jae Man; Joo, Hyung Kook

    2005-01-01

    Many nodal codes for core simulation adopt the micro-depletion procedure for depletion analysis. Unlike the macro-depletion procedure, the micro-depletion procedure uses micro cross sections and number densities of important nuclides to generate the macro cross section of a spatial calculational node. Therefore, it needs to solve the chain equations of the nuclides of interest to obtain their number densities. There are several methods, such as the matrix exponential method (MEM) and the chain linearization method (CLM), for solving the nuclide chain equations. The former solves the chain equations exactly, even when cycles arising from alpha decay exist in the chain, while the latter solves such chains only approximately. The former has another advantage over the latter: many nodal codes for depletion analysis, such as MASTER, solve only hard-coded nuclide chains with the CLM, so extending the chain by adding more nuclides requires modifying the source code. In contrast, with the MEM the chain can be extended just by modifying the input, because it is easy to implement an MEM solver for an arbitrary nuclide chain. In spite of these advantages of the MEM, many nodal codes adopt chain linearization because the MEM suffers from large round-off errors when the flux level is very high or when short-lived or strongly absorbing nuclides exist in the chain. In this paper, we propose a new technique to remove the round-off errors in the MEM and compare the performance of the two methods.
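
    The matrix exponential method referred to above amounts to writing the chain equations as dN/dt = A N and propagating by exp(At). The sketch below, a minimal stand-in for a production expm routine, solves a hypothetical two-nuclide chain N1 -> N2 by scaling-and-squaring with a short Taylor series and checks the result against the closed-form Bateman solution; the decay constants are invented for illustration. (The round-off problem the paper addresses appears when the chain is stiff, i.e. when the λ values differ by many orders of magnitude.)

```python
# Matrix exponential solution of a two-nuclide decay chain N1 -> N2,
# checked against the closed-form Bateman solution. Illustrative numbers.
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=20, squarings=10):
    """exp(A) by scaling-and-squaring with a truncated Taylor series."""
    n = len(A)
    s = 2.0 ** squarings
    B = [[A[i][j] / s for j in range(n)] for i in range(n)]   # scaled matrix
    E = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in E]
    for k in range(1, terms + 1):         # accumulate B^k / k!
        term = mat_mul(term, B)
        term = [[term[i][j] / k for j in range(n)] for i in range(n)]
        E = [[E[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    for _ in range(squarings):            # square back up: exp(B)^(2^s)
        E = mat_mul(E, E)
    return E

lam1, lam2, t = 0.5, 0.1, 3.0             # decay constants (1/s), time (s)
A = [[-lam1, 0.0], [lam1, -lam2]]
P = expm([[a * t for a in row] for row in A])   # propagator exp(A t)
N0 = [10.0, 0.0]
N = [sum(P[i][j] * N0[j] for j in range(2)) for i in range(2)]

# Bateman closed form for the same chain.
N1 = N0[0] * math.exp(-lam1 * t)
N2 = N0[0] * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
```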

  9. SHERPA: A systematic human error reduction and prediction approach

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1986-01-01

    This paper describes a Systematic Human Error Reduction and Prediction Approach (SHERPA), which is intended to provide guidelines for human error reduction and quantification in a wide range of human-machine systems. The approach utilizes as its basis current cognitive models of human performance. The first module in SHERPA performs task and human error analyses, which identify likely error modes together with guidelines for the reduction of these errors through training, procedures and equipment redesign. The second module uses the SARAH approach to quantify the probability of occurrence of the errors identified earlier, and provides cost-benefit analyses to assist in choosing the appropriate error reduction approaches in the third module.

  10. FEL small signal gain reduction due to phase error of undulator

    International Nuclear Information System (INIS)

    Jia Qika

    2002-01-01

    The effects of undulator phase errors on the free electron laser small-signal gain are analyzed and discussed. The gain reduction factor due to phase error is given analytically for the low-gain regime. It shows that the degradation of the gain is similar to that of the spontaneous radiation, having a simple exponential relation with the square of the rms phase error, and that the linearly varying part of the phase error shifts the position of maximum gain. The result also shows that Madey's theorem still holds in the presence of phase error. The gain reduction factor due to phase error in the high-gain regime can also be obtained in a simple way.
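
    The "simple exponential relation with the square of the rms phase error" quoted above can be checked numerically for the spontaneous-radiation case: for random phase errors φ_j, the coherent sum over periods is reduced in intensity by roughly exp(-σ_φ²). The sketch below verifies this rule of thumb with Gaussian phase errors; the number of periods and rms value are toy assumptions.

```python
# Monte Carlo check of the exp(-sigma^2) intensity-reduction rule of thumb
# for random undulator phase errors. Toy numbers, not a real undulator model.
import cmath, math, random

random.seed(1)
n_periods = 10000
sigma_phi = 0.3                     # rms phase error in radians (illustrative)

phases = [random.gauss(0.0, sigma_phi) for _ in range(n_periods)]
coherent = sum(cmath.exp(1j * p) for p in phases) / n_periods
measured = abs(coherent) ** 2       # intensity reduction factor
predicted = math.exp(-sigma_phi ** 2)
```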

  11. Computable error estimates of a finite difference scheme for option pricing in exponential Lévy models

    KAUST Repository

    Kiessling, Jonas

    2014-05-06

    Option prices in exponential Lévy models solve certain partial integro-differential equations. This work focuses on developing novel, computable error approximations for a finite difference scheme that is suitable for solving such PIDEs. The scheme was introduced in (Cont and Voltchkova, SIAM J. Numer. Anal. 43(4):1596-1626, 2005). The main results of this work are new estimates of the dominating error terms, namely the time and space discretisation errors. In addition, the leading-order terms of the error estimates are determined in a form that is more amenable to computations. The payoff is only assumed to satisfy an exponential growth condition; it is not assumed to be Lipschitz continuous, as in previous works. If the underlying Lévy process has infinite jump activity, then the jumps smaller than some (Formula presented.) are approximated by diffusion. The resulting diffusion approximation error is also estimated, with its leading-order term in computable form, as well as the dependence of the time and space discretisation errors on this approximation. Consequently, it is possible to determine how to jointly choose the space and time grid sizes and the cut-off parameter (Formula presented.). © 2014 Springer Science+Business Media Dordrecht.

  12. Research trend on human error reduction

    International Nuclear Information System (INIS)

    Miyaoka, Sadaoki

    1990-01-01

    Human error has been a problem in all industries. In 1988, the Bureau of Mines, Department of the Interior, USA, carried out a worldwide survey on human error in all industries in relation to fatal accidents in mines. The results differed according to the methods of collecting data, but the proportion of total accidents attributable to human error was distributed over the wide range of 20∼85%, with an average of 35%. The rate of occurrence of accidents and troubles in Japanese nuclear power stations is shown, and the rate of occurrence of human error is 0∼0.5 cases/reactor-year, which has not varied much. Therefore, the proportion of the total attributable to human error has tended to increase, and reducing human error has become important for lowering the rate of occurrence of accidents and troubles hereafter. After the TMI accident in 1979 in the USA, research on the man-machine interface became active, and after the Chernobyl accident in 1986 in the USSR, the problem of organization and management has been studied. In Japan, 'Safety 21' was drawn up by the Advisory Committee for Energy, and the annual reports on nuclear safety also pointed out the importance of human factors. The state of research on human factors in Japan and abroad and three targets for reducing human error are reported. (K.I.)

  13. Exponential noise reduction in Lattice QCD: new tools for new physics

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    The numerical computations of many quantities of theoretical and phenomenological interest are plagued by statistical errors which increase exponentially with the distance of the sources in the relevant correlators. Notable examples are baryon masses and matrix elements, the hadronic vacuum polarization and the light-by-light scattering contributions to the muon g-2, and the form factors of semileptonic B decays. Reliable and precise determinations of these quantities are very difficult if not impractical with state-of-the-art standard Monte Carlo integration schemes. I will discuss a recent proposal for factorizing the fermion determinant in lattice QCD that leads to a local action in the gauge field and in the auxiliary boson fields. Once combined with the corresponding factorization of the quark propagator, it paves the way for multi-level Monte Carlo integration in the presence of fermions opening new perspectives in lattice QCD and in its capability to unveil new physics. Exploratory results on the impac...

  14. A Time-Independent Born-Oppenheimer Approximation with Exponentially Accurate Error Estimates

    CERN Document Server

    Hagedorn, G A

    2004-01-01

    We consider a simple molecular-type quantum system in which the nuclei have one degree of freedom and the electrons have two levels. The Hamiltonian has the form $H(\epsilon) = -\frac{\epsilon^4}{2}\,\frac{\partial^2}{\partial y^2} + h(y)$, where $h(y)$ is a $2\times 2$ real symmetric matrix. Near a local minimum of an electron level $\mathcal{E}(y)$ that is not at a level crossing, we construct quasimodes that are exponentially accurate in the square of the Born-Oppenheimer parameter $\epsilon$ by optimal truncation of the Rayleigh-Schrödinger series. That is, we construct $E_\epsilon$ and $\Psi_\epsilon$ such that $\|\Psi_\epsilon\| = O(1)$ and $\|(H(\epsilon) - E_\epsilon)\,\Psi_\epsilon\| \le \Lambda\, e^{-\Gamma/\epsilon^2}$ for some $\Gamma > 0$.

  15. Reduction of measurement errors in OCT scanning

    Science.gov (United States)

    Morel, E. N.; Tabla, P. M.; Sallese, M.; Torga, J. R.

    2018-03-01

    Optical coherence tomography (OCT) is a non-destructive optical technique that uses a light source with a wide bandwidth focused on a point in the sample to determine the distance (strictly, the optical path difference, OPD) between this point and a reference surface. The point can be on the surface or at an interior interface of the sample (transparent or semi-transparent), allowing topographies and/or tomographies of different materials. The Michelson interferometer is the traditional experimental scheme for this technique, in which a beam of light is divided into two arms, one for the reference and the other for the sample. The overlap of the light reflected from the sample and from the reference generates an interference signal that carries information about the OPD between the arms. In this work, we use an experimental configuration in which the reference signal and the signal reflected from the sample travel along the same arm, improving the quality of the interference signal. Among the most important aspects of this improvement, the noise and errors produced by relative reference-sample movement and by the dispersion of the refractive index are considerably reduced. It is thus possible to obtain 3D images of surfaces with a spatial resolution on the order of microns. Results obtained on the topography of metallic surfaces, glass and inks printed on paper are presented.

  16. Errors and mistakes in the traditional optimum design of experiments on exponential absorption

    International Nuclear Information System (INIS)

    Burge, E.J.

    1977-01-01

    The treatment of statistical errors in absorption experiments using particle counters, given by Rose and Shapiro (1948), is shown to be incorrect for non-zero background counts. For the simplest case of only one absorber thickness, revised conditions are computed for the optimum geometry and the best apportionment of counting times for the incident and transmitted beams over a wide range of relative backgrounds (0, 10⁻⁵ to 10²). The two geometries of Rose and Shapiro are treated: (I) beam area fixed, absorber thickness varied, and (II) beam area and absorber thickness both varied, but with the effective volume of absorber constant. For case (I) the newly calculated errors in the absorption coefficients are shown to be about 0.7 of the Rose and Shapiro values for the largest background, and for case (II) about 0.4. The corresponding fractional times for background counts are (I) 0.7 and (II) 0.07 of those given by Rose and Shapiro. For small backgrounds the differences are negligible. Revised values are also computed for the sensitivity of the accuracy to deviations from optimum transmission. (Auth.)
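
    The kind of optimisation this abstract discusses can be illustrated in the zero-background limit (the paper's contribution is precisely the non-zero-background case, which this sketch does not reproduce): choose the optical depth μx and the split of a fixed total counting time between incident and transmitted beams to minimise the variance of the measured absorption coefficient. All rates and grids below are illustrative assumptions.

```python
# Optimum absorber thickness and counting-time split for a transmission
# measurement, zero-background limit. Illustrative sketch, toy numbers.
import math

def rel_variance(u, f, rate=1.0, total_time=1.0):
    """Relative variance of mu for optical depth u = mu*x and fraction f of
    the total time spent counting the incident beam (zero background)."""
    t0, t1 = f * total_time, (1.0 - f) * total_time
    n0 = rate * t0                    # incident-beam counts
    n1 = rate * math.exp(-u) * t1     # transmitted-beam counts
    # mu*x = ln(rate0/rate1); Poisson errors propagate as 1/N terms.
    return (1.0 / n0 + 1.0 / n1) / u ** 2

# Joint grid search over thickness and time split.
best = min(((rel_variance(u / 100.0, f / 100.0), u / 100.0, f / 100.0)
            for u in range(50, 601) for f in range(1, 100)),
           key=lambda z: z[0])
_, u_opt, f_opt = best
# Analytically the optimum satisfies exp(u/2) * (u/2 - 1) = 1, i.e. u ~ 2.56,
# with only about a fifth of the time spent on the incident beam.
```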

  17. TRANSMUTED EXPONENTIATED EXPONENTIAL DISTRIBUTION

    OpenAIRE

    MEROVCI, FATON

    2013-01-01

    In this article, we generalize the exponentiated exponential distribution using the quadratic rank transmutation map studied by Shaw et al. [6] to develop a transmuted exponentiated exponential distribution. The properties of this distribution are derived and the estimation of the model parameters is discussed. An application to a real data set is finally presented for illustration.
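
    The construction named in the abstract is short enough to sketch directly: the quadratic rank transmutation map sends a base CDF F to G(x) = (1 + λ)F(x) - λF(x)², λ ∈ [-1, 1], and here F is the exponentiated exponential CDF F(x) = (1 - e^(-βx))^α. The parameter values below are illustrative, not fitted.

```python
# Transmuted exponentiated exponential CDF via the quadratic rank
# transmutation map. Parameter values are illustrative.
import math

def exp_exp_cdf(x, alpha, beta):
    """Exponentiated exponential base CDF."""
    return (1.0 - math.exp(-beta * x)) ** alpha if x > 0 else 0.0

def transmuted_cdf(x, alpha, beta, lam):
    """Quadratic rank transmutation: G = (1 + lam) F - lam F^2."""
    f = exp_exp_cdf(x, alpha, beta)
    return (1.0 + lam) * f - lam * f * f

alpha, beta, lam = 2.0, 1.5, 0.5
xs = [i / 10.0 for i in range(0, 101)]
vals = [transmuted_cdf(x, alpha, beta, lam) for x in xs]
# G is a valid CDF for lam in [-1, 1]: G(0) = 0, G -> 1, and G is
# non-decreasing since G'(x) = F'(x) * (1 + lam - 2*lam*F(x)) >= 0.
```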

  18. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    Science.gov (United States)

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice. © RSNA, 2015.

  19. Advancing the research agenda for diagnostic error reduction.

    Science.gov (United States)

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  20. Advanced MMIS Toward Substantial Reduction in Human Errors in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Seong, Poong Hyun; Kang, Hyun Gook [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Na, Man Gyun [Chosun Univ., Gwangju (Korea, Republic of); Kim, Jong Hyun [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of); Heo, Gyunyoung [Kyung Hee Univ., Yongin (Korea, Republic of); Jung, Yoensub [Korea Hydro and Nuclear Power Co., Ltd., Daejeon (Korea, Republic of)

    2013-04-15

    This paper aims to give an overview of methods to inherently prevent human errors and to effectively mitigate the consequences of such errors by securing defense-in-depth during plant management through the advanced man-machine interface system (MMIS). It is needless to stress the significance of human error reduction during an accident in nuclear power plants (NPPs). Unexpected shutdowns caused by human errors not only threaten nuclear safety but also severely lower public acceptance of nuclear power. We have to recognize the possibility of human errors occurring, since humans are not perfect, particularly under stressful conditions. However, we have the opportunity to improve this situation through advanced information and communication technologies, on the basis of lessons learned from our experiences. As important lessons, the authors explain key issues associated with automation, the man-machine interface, operator support systems, and procedures. Upon this investigation, we outline the concept and technical factors needed to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility for nuclear safety obviously belongs to humans, not to machines. Therefore, safety culture, including education and training, which is a kind of organizational factor, should be emphasized as well. In regard to safety culture for human error reduction, several issues that we are facing these days are described. We expect the ideas of the advanced MMIS proposed in this paper to guide the future direction of related research and ultimately enhance the safety of NPPs.

  1. Advanced MMIS Toward Substantial Reduction in Human Errors in NPPs

    International Nuclear Information System (INIS)

    Seong, Poong Hyun; Kang, Hyun Gook; Na, Man Gyun; Kim, Jong Hyun; Heo, Gyunyoung; Jung, Yoensub

    2013-01-01

    This paper aims to give an overview of methods to inherently prevent human errors and to effectively mitigate the consequences of such errors by securing defense-in-depth during plant management through the advanced man-machine interface system (MMIS). It is needless to stress the significance of human error reduction during an accident in nuclear power plants (NPPs). Unexpected shutdowns caused by human errors not only threaten nuclear safety but also severely lower public acceptance of nuclear power. We have to recognize the possibility of human errors occurring, since humans are not perfect, particularly under stressful conditions. However, we have the opportunity to improve this situation through advanced information and communication technologies, on the basis of lessons learned from our experiences. As important lessons, the authors explain key issues associated with automation, the man-machine interface, operator support systems, and procedures. Upon this investigation, we outline the concept and technical factors needed to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility for nuclear safety obviously belongs to humans, not to machines. Therefore, safety culture, including education and training, which is a kind of organizational factor, should be emphasized as well. In regard to safety culture for human error reduction, several issues that we are facing these days are described. We expect the ideas of the advanced MMIS proposed in this paper to guide the future direction of related research and ultimately enhance the safety of NPPs.

  2. ADVANCED MMIS TOWARD SUBSTANTIAL REDUCTION IN HUMAN ERRORS IN NPPS

    Directory of Open Access Journals (Sweden)

    POONG HYUN SEONG

    2013-04-01

    This paper aims to give an overview of methods to inherently prevent human errors and to effectively mitigate the consequences of such errors by securing defense-in-depth during plant management through the advanced man-machine interface system (MMIS). It is needless to stress the significance of human error reduction during an accident in nuclear power plants (NPPs). Unexpected shutdowns caused by human errors not only threaten nuclear safety but also severely lower public acceptance of nuclear power. We have to recognize the possibility of human errors occurring, since humans are not perfect, particularly under stressful conditions. However, we have the opportunity to improve this situation through advanced information and communication technologies, on the basis of lessons learned from our experiences. As important lessons, the authors explain key issues associated with automation, the man-machine interface, operator support systems, and procedures. Upon this investigation, we outline the concept and technical factors needed to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility for nuclear safety obviously belongs to humans, not to machines. Therefore, safety culture, including education and training, which is a kind of organizational factor, should be emphasized as well. In regard to safety culture for human error reduction, several issues that we are facing these days are described. We expect the ideas of the advanced MMIS proposed in this paper to guide the future direction of related research and ultimately enhance the safety of NPPs.

  3. Error reduction techniques for Monte Carlo neutron transport calculations

    International Nuclear Information System (INIS)

    Ju, J.H.W.

    1981-01-01

    Monte Carlo methods have been widely applied to problems in nuclear physics, mathematical reliability, communication theory, and other areas. The work in this thesis is developed mainly with neutron transport applications in mind. For nuclear reactor and many other applications, random walk processes have been used to estimate multi-dimensional integrals and obtain information about the solution of integral equations. When the analysis is statistically based, such calculations are often costly, and the development of efficient estimation techniques plays a critical role in these applications. All of the error reduction techniques developed in this work are applied to model problems. It is found that the nearly optimal parameters selected by the analytic method for use with the GWAN estimator are nearly identical to the parameters selected by the multistage method. Modified path-length estimation (based on the path-length importance measure) leads to excellent error reduction in all model problems examined. Finally, it should be pointed out that techniques used for neutron transport problems may be transferred easily to other application areas based on random walk processes. The transport problems studied in this dissertation provide exceptionally severe tests of the error reduction potential of any sampling procedure. It is therefore expected that the methods of this dissertation will prove useful in many other application areas.
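
    A minimal, generic illustration of the variance reduction this abstract is about (not the GWAN or path-length estimators of the thesis): importance sampling for the rare-event probability P(X > 3) with X ~ N(0, 1), whose exact value is 1 - Φ(3). Sample sizes and the shift are illustrative choices.

```python
# Plain Monte Carlo vs importance sampling for a rare-event probability.
import math, random

TRUE_P = 0.5 * math.erfc(3.0 / math.sqrt(2.0))   # 1 - Phi(3) ~ 1.35e-3
N = 20000

random.seed(42)
plain = sum(1.0 for _ in range(N) if random.gauss(0.0, 1.0) > 3.0) / N

# Importance sampling: draw from N(3, 1), centred on the rare region, and
# reweight by the likelihood ratio phi(x) / phi(x - 3) = exp(-3x + 4.5).
random.seed(42)
acc = 0.0
for _ in range(N):
    x = random.gauss(3.0, 1.0)
    if x > 3.0:
        acc += math.exp(-3.0 * x + 4.5)
is_est = acc / N
# The importance-sampling estimate has a far smaller variance than the plain
# estimate, since roughly half the proposal samples land in the rare region.
```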

  4. Reduction in pediatric identification band errors: a quality collaborative.

    Science.gov (United States)

    Phillips, Shannon Connor; Saysana, Michele; Worley, Sarah; Hain, Paul D

    2012-06-01

    Accurate and consistent placement of a patient identification (ID) band is used in health care to reduce errors associated with patient misidentification. Multiple safety organizations have devoted time and energy to improving patient ID, but no multicenter improvement collaboratives have shown scalability of previously successful interventions. We hoped to reduce by half the pediatric patient ID band error rate, defined as an absent, illegible, or inaccurate ID band, across a quality improvement learning collaborative of hospitals in 1 year. On the basis of a previously successful single-site intervention, we conducted a self-selected 6-site collaborative to reduce ID band errors in heterogeneous pediatric hospital settings. The collaborative had 3 phases: preparatory work and an employee survey of current practice and barriers, data collection (ID band failure rate), and intervention driven by data and collaborative learning to accelerate change. The collaborative audited 11,377 patients for ID band errors between September 2009 and September 2010. The ID band failure rate decreased from 17% to 4.1% (77% relative reduction). Interventions including education of frontline staff regarding correct ID bands as a safety strategy; a change to softer ID bands, including "luggage tag" type ID bands for some patients; and partnering with families and patients through education were applied at all institutions. Over 13 months, a collaborative of pediatric institutions significantly reduced the ID band failure rate. This quality improvement learning collaborative demonstrates that safety improvements tested in a single institution can be disseminated to improve quality of care across large populations of children.

  5. Error reduction techniques for measuring long synchrotron mirrors

    International Nuclear Information System (INIS)

    Irick, S.

    1998-07-01

    Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are reduced sampling interval, uncertainty of tangential position, and sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured on a Fizeau interferometer without a specially made fringe nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for the very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long X-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP.

  6. Computable error estimates of a finite difference scheme for option pricing in exponential Lévy models

    KAUST Repository

    Kiessling, Jonas; Tempone, Raul

    2014-01-01

    jump activity, then the jumps smaller than some (Formula presented.) are approximated by diffusion. The resulting diffusion approximation error is also estimated, with leading order term in computable form, as well as the dependence of the time

  7. Two statistics for evaluating parameter identifiability and error reduction

    Science.gov (United States)

    Doherty, John; Hunt, Randall J.

    2009-01-01

    Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero; and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
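
    The identifiability statistic described above can be computed directly from a weighted sensitivity (Jacobian) matrix. The sketch below is a minimal illustration using a made-up 4-observation, 3-parameter Jacobian; the truncation at numerical rank, used to separate the solution and null spaces, is an assumption of the example.

```python
import numpy as np

# Hypothetical weighted sensitivity matrix J
# (rows: observations, columns: parameters). The third parameter has no
# effect on any observation, so it should be completely non-identifiable.
J = np.array([[1.0, 0.5, 0.0],
              [0.8, 0.4, 0.0],
              [0.0, 0.1, 0.0],
              [0.2, 0.0, 0.0]])

# SVD: the first k right singular vectors span the calibration
# "solution space"; the rest span the null space.
U, s, Vt = np.linalg.svd(J)
k = int(np.sum(s > 1e-8 * s[0]))      # assumed truncation at numerical rank
V_sol = Vt[:k].T                       # orthonormal basis of the solution space

# Identifiability of parameter i = direction cosine between the unit
# vector e_i and its projection onto the solution space, which equals
# the norm of row i of V_sol.
identifiability = np.sqrt((V_sol ** 2).sum(axis=1))
print(identifiability)
```

    For this Jacobian the first two parameters come out fully identifiable and the third comes out at zero, matching the statistic's intended interpretation.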

  8. Advancing the research agenda for diagnostic error reduction

    NARCIS (Netherlands)

    Zwaan, L.; Schiff, G.D.; Singh, H.

    2013-01-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research.

  9. Bayesian Exponential Smoothing.

    OpenAIRE

    Forbes, C.S.; Snyder, R.D.; Shami, R.S.

    2000-01-01

    In this paper, a Bayesian version of the exponential smoothing method of forecasting is proposed. The approach is based on a state space model containing only a single source of error for each time interval. This model allows us to improve current practices surrounding exponential smoothing by providing both point predictions and measures of the uncertainty surrounding them.
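
    For point forecasts, the single-source-of-error state space model underlying exponential smoothing reduces to the familiar local-level recursion below. The Bayesian treatment in the paper adds priors over the smoothing parameter and the error variance; this plain sketch, with an assumed fixed alpha, does not attempt that.

```python
def simple_exponential_smoothing(series, alpha=0.3):
    """One-step-ahead forecasts from the SSOE local-level recursion:
    level_t = level_{t-1} + alpha * (y_t - level_{t-1})."""
    level = series[0]                  # initialise the level at the first value
    forecasts = []
    for y in series[1:]:
        forecasts.append(level)        # forecast of y made before observing it
        level += alpha * (y - level)   # update the level with the one-step error
    forecasts.append(level)            # forecast of the next, unseen observation
    return forecasts

fc = simple_exponential_smoothing([10.0, 12.0, 11.0, 13.0], alpha=0.5)
print(fc)
```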

  10. On the performance of dual-hop mixed RF/FSO wireless communication system in urban area over aggregated exponentiated Weibull fading channels with pointing errors

    Science.gov (United States)

    Wang, Yue; Wang, Ping; Liu, Xiaoxia; Cao, Tian

    2018-03-01

    The performance of decode-and-forward dual-hop mixed radio frequency / free-space optical system in urban area is studied. The RF link is modeled by the Nakagami-m distribution and the FSO link is described by the composite exponentiated Weibull (EW) fading channels with nonzero boresight pointing errors (NBPE). For comparison, the ABER results without pointing errors (PE) and those with zero boresight pointing errors (ZBPE) are also provided. The closed-form expression for the average bit error rate (ABER) in RF link is derived with the help of hypergeometric function, and that in FSO link is obtained by Meijer's G and generalized Gauss-Laguerre quadrature functions. Then, the end-to-end ABERs with binary phase shift keying modulation are achieved on the basis of the computed ABER results of RF and FSO links. The end-to-end ABER performance is further analyzed with different Nakagami-m parameters, turbulence strengths, receiver aperture sizes and boresight displacements. The result shows that with ZBPE and NBPE considered, FSO link suffers a severe ABER degradation and becomes the dominant limitation of the mixed RF/FSO system in urban area. However, aperture averaging can bring significant ABER improvement of this system. Monte Carlo simulation is provided to confirm the validity of the analytical ABER expressions.

  11. Reduction of weighing errors caused by tritium decay heating

    International Nuclear Information System (INIS)

    Shaw, J.F.

    1978-01-01

    The deuterium-tritium source gas mixture for laser targets is formulated by weight. Experiments show that the maximum weighing error caused by tritium decay heating is 0.2% for a 104-cm³ mix vessel. Air cooling the vessel reduces the weighing error by 90%.

  12. Stochastic Frontier Models with Dependent Errors based on Normal and Exponential Margins

    Directory of Open Access Journals (Sweden)

    Gómez-Déniz, Emilio

    2017-06-01

    Following the recent work of Gómez-Déniz and Pérez-Rodríguez (2014), this paper extends the results obtained there to the normal-exponential distribution with dependence. Accordingly, the main aim of the present paper is to enhance stochastic production frontier and stochastic cost frontier modelling by proposing a bivariate distribution for dependent errors which allows us to nest the classical models. Closed-form expressions for the error term and technical efficiency are provided. An illustration using real data from the econometric literature is provided to show the applicability of the proposed model.
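
    For reference, the classical independent normal-exponential composed error that the paper's dependent model nests can be simulated directly. The parameter values below are arbitrary, and the sketch covers only the baseline independent case, not the bivariate dependence structure proposed in the paper.

```python
import random

random.seed(0)
sigma_v, lam = 0.3, 2.0     # noise s.d. and exponential rate (assumed values)
N = 100_000

# Composed error of a production frontier: eps = v - u, where
# v ~ Normal(0, sigma_v) is symmetric noise and
# u ~ Exponential(lam) is the one-sided inefficiency term.
eps = [random.gauss(0.0, sigma_v) - random.expovariate(lam) for _ in range(N)]

mean_eps = sum(eps) / N
print(mean_eps)             # should be close to E[eps] = -1/lam = -0.5
```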

  13. Field error reduction experiment on the REPUTE-1 RFP device

    International Nuclear Information System (INIS)

    Toyama, H.; Shinohara, S.; Yamagishi, K.

    1989-01-01

    The vacuum chamber of the RFP device REPUTE-1 is a welded structure using 18 sets of 1 mm thick Inconel bellows (inner minor radius 22 cm) and 2.4 mm thick port segments arranged in toroidal geometry as shown in Fig. 1. The vacuum chamber is surrounded by 5 mm thick stainless steel shells. The time constant of the shell is 1 ms for vertical field penetration. The pulse length in REPUTE-1 is so far 3.2 ms, about three times the shell skin time. The port bypass plates have been attached as shown in Fig. 2 to reduce field errors, so that the pulse length becomes longer and the loop voltage becomes lower. (author) 5 refs., 4 figs

  14. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    Science.gov (United States)

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are defining, measuring, analysing, improving and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, the administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the pre-analytic, analytic and post-analytical phases was analysed. Improvement strategies were discussed in the monthly intradepartmental meetings, and the units with high error rates were monitored. Fifty-six (52.4%) of 107 recorded errors in total were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory achieved a reduction of the error rates mainly in the pre-analytic and analytic phases.
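
    The "3.4 errors per million events" figure corresponds to a 6-sigma process under the conventional 1.5-sigma long-term shift. A sigma level can be computed from a defects-per-million rate as follows; the 1.5-sigma shift is the standard industry convention, not something defined in this study.

```python
from statistics import NormalDist

def sigma_level(dpmo):
    """Short-term sigma level from defects per million opportunities,
    using the conventional 1.5-sigma long-term shift."""
    yield_fraction = 1.0 - dpmo / 1_000_000.0
    return NormalDist().inv_cdf(yield_fraction) + 1.5

print(sigma_level(3.4))    # the classic Six Sigma benchmark
print(sigma_level(500))    # e.g. a 0.05% error rate
```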

  15. Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2012-01-01

    A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon time-weighted balanced stochastic model reduction method and singular perturbation model reduction technique. Compared...... by using the concept and properties of the reciprocal systems. The results are further illustrated by two practical numerical examples: a model of CD player and a model of the atmospheric storm track....

  16. Reduction of low frequency error for SED36 and APS based HYDRA star trackers

    Science.gov (United States)

    Ouaknine, Julien; Blarre, Ludovic; Oddos-Marcel, Lionel; Montel, Johan; Julio, Jean-Marc

    2017-11-01

    In the frame of the CNES Pleiades satellite, a reduction of the star tracker low frequency error, which is the most penalizing error for the satellite attitude control, was performed. For that purpose, the SED36 star tracker was developed, with a design based on the flight qualified SED16/26. In this paper, the SED36 main features will be first presented. Then, the reduction process of the low frequency error will be developed, particularly the optimization of the optical distortion calibration. The result is an attitude low frequency error of 1.1" at 3 sigma along transverse axes. The implementation of these improvements to HYDRA, the new multi-head APS star tracker developed by SODERN, will finally be presented.

  17. Reduction in Chemotherapy Mixing Errors Using Six Sigma: Illinois CancerCare Experience.

    Science.gov (United States)

    Heard, Bridgette; Miller, Laura; Kumar, Pankaj

    2012-03-01

    Chemotherapy mixing errors (CTMRs), although rare, have serious consequences. Illinois CancerCare is a large practice with multiple satellite offices. The goal of this study was to reduce the number of CTMRs using Six Sigma methods. A Six Sigma team consisting of five participants (registered nurses and pharmacy technicians [PTs]) was formed. The team had 10 hours of Six Sigma training in the DMAIC (ie, Define, Measure, Analyze, Improve, Control) process. Measurement of errors started from the time the CT order was verified by the PT to the time of CT administration by the nurse. Data collection included retrospective error tracking software, system audits, and staff surveys. Root causes of CTMRs included inadequate knowledge of CT mixing protocol, inconsistencies in checking methods, and frequent changes in staffing of clinics. Initial CTMRs (n = 33,259) constituted 0.050%, with 77% of these errors affecting patients. The action plan included checklists, education, and competency testing. The postimplementation error rate (n = 33,376, annualized) over a 3-month period was reduced to 0.019%, with only 15% of errors affecting patients. Initial Sigma was calculated at 4.2; this process resulted in the improvement of Sigma to 5.2, representing a 100-fold reduction. Financial analysis demonstrated a reduction in annualized loss of revenue (administration charges and drug wastage) from $11,537.95 (Medicare Average Sales Price) before the start of the project to $1,262.40. The Six Sigma process is a powerful technique in the reduction of CTMRs.

  18. Image pre-filtering for measurement error reduction in digital image correlation

    Science.gov (United States)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward the high-frequency components of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error, which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error, and the Butterworth filter produces the lowest random error among them. By using the Wiener filter with over-estimated noise power, the random error can be reduced, but the resultant systematic error is higher than that of the low-pass filters. In general, the Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. The binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. When used together with pre-filtering, the B-spline interpolator produces lower systematic error than the bicubic interpolator and a similar level of random error.
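
    A binomial pre-filter of the kind compared above is simply a separable [1, 2, 1]/4 convolution. The sketch below applies it to a 1-D intensity profile, a hypothetical stand-in for one row of a speckle image, to show how the high-frequency content is suppressed before correlation.

```python
import numpy as np

def binomial_filter_1d(signal, passes=1):
    """Apply the [1, 2, 1]/4 binomial low-pass kernel; repeated passes
    approximate a Gaussian of growing width."""
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0
    out = np.asarray(signal, dtype=float)
    for _ in range(passes):
        out = np.convolve(out, kernel, mode="same")
    return out

# A toy profile: a smooth ramp carrying a high-frequency alternating
# component (the part that drives interpolation error).
x = np.arange(64)
alternating = np.where(x % 2 == 0, 1.0, -1.0)
profile = x / 64.0 + alternating
smoothed = binomial_filter_1d(profile, passes=2)
print(smoothed[:4])
```

    After filtering, the sample-to-sample variation collapses toward the underlying ramp, because the [1, 2, 1] kernel has a zero at the Nyquist frequency where the alternating component lives.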

  19. Error Reduction in an Operating Environment - Comanche Peak Steam Electric Station

    International Nuclear Information System (INIS)

    Blevins, Mike; Gallman, Jim

    1998-01-01

    The authors first note that a program to manage human performance and to reduce human performance errors has reached an 88% error reduction rate and a 99% significant-error reduction rate, and then present this program. It takes three cornerstones of human performance management into account: training, leadership and procedures. Other aspects are introduced: communication, corrective action programs, root cause analysis, seven steps of self-checking, trending, and a human performance enhancement program. These other aspects and their relationships are discussed. Program strengths and downsides are outlined, as well as actions needed for success. Another approach is then proposed which comprises proactive interventions and indicators for human performance. These indicators are identified and introduced by analyzing the anatomy of an event. The limitations of this model are discussed.

  20. Reduction of sources of error and simplification of the Carbon-14 urea breath test

    International Nuclear Information System (INIS)

    Bellon, M.S.

    1997-01-01

    Carbon-14 urea breath testing is established in the diagnosis of H. pylori infection. The aim of this study was to investigate possible further simplification and identification of error sources in the 14C urea kit extensively used at the Royal Adelaide Hospital. Thirty-six patients with validated H. pylori status were tested with breath samples taken at 10, 15, and 20 min. Using the single sample value at 15 min, there was no change in the diagnostic category. Reduction of errors in analysis depends on attention to the following details: stability of the absorption solution (now > 2 months); compatibility of the scintillation cocktail and absorption solution (with particular regard to photoluminescence and chemiluminescence); reduction in chemical quenching (moisture reduction); understanding of the counting hardware and its relevance; and appropriate response to deviations in quality assurance. With this experience, we are confident of the performance and reliability of the RAPID-14 urea breath test kit now available commercially.

  1. Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.

    Science.gov (United States)

    Zaitsev, M; Steinhoff, S; Shah, N J

    2003-06-01

    A methodology is presented for the reduction of both systematic and random errors in T(1) determination using TAPIR, a Look-Locker-based fast T(1) mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T(1) determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.

  2. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring the errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, both the Fourier transform magnitudes and the phases can be estimated to reconstruct the missing areas.
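
    The ER iteration at the heart of such methods alternates between enforcing a known Fourier magnitude and re-imposing the known intensities. The toy sketch below does this for a 1-D signal with a short run of missing samples, assuming the target's true Fourier magnitude is available; in the paper, that magnitude is itself estimated from similar known patches.

```python
import numpy as np

rng = np.random.default_rng(0)
true = rng.standard_normal(32)            # "texture" signal to recover
known = np.ones(32, dtype=bool)
known[10:14] = False                      # four missing samples
magnitude = np.abs(np.fft.fft(true))      # assumed-known Fourier magnitude

def residual(y):
    """Distance of y's Fourier magnitude from the target magnitude."""
    return np.linalg.norm(np.abs(np.fft.fft(y)) - magnitude)

x = np.where(known, true, 0.0)            # initial guess: zeros in the gap
r_start = residual(x)
for _ in range(500):
    X = np.fft.fft(x)
    phase = np.exp(1j * np.angle(X))          # keep the current phase...
    x = np.fft.ifft(magnitude * phase).real   # ...enforce the known magnitude
    x[known] = true[known]                    # re-impose the known intensities
r_end = residual(x)
print(r_start, r_end)
```

    The magnitude residual is non-increasing over ER iterations, which is the "error reduction" property the algorithm is named for; the missing samples in the gap are filled in as a by-product.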

  3. An error reduction algorithm to improve lidar turbulence estimates for wind energy

    Directory of Open Access Journals (Sweden)

    J. F. Newman

    2017-02-01

    Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability.
The accuracy of machine
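
    One of the physics-based corrections mentioned above, removal of uncorrelated instrument noise from the measured velocity variance, amounts to a simple subtraction in variance space before turbulence intensity is formed. The sketch below is a generic illustration; the variance and noise values are invented, not WINDCUBE figures.

```python
import math

def noise_corrected_ti(u_var_measured, noise_var, mean_wind):
    """Turbulence intensity after removing uncorrelated instrument noise:
    var_true ~= var_measured - var_noise (floored at zero)."""
    u_var = max(u_var_measured - noise_var, 0.0)
    return math.sqrt(u_var) / mean_wind

ti_raw = math.sqrt(0.64) / 8.0                   # uncorrected TI from raw variance
ti_corr = noise_corrected_ti(0.64, 0.04, 8.0)    # TI after the noise correction
print(ti_raw, ti_corr)
```

    Because lidar noise adds variance, the uncorrected estimate is biased high; subtracting an estimate of the noise variance pulls the turbulence intensity back down toward the tower-measured value.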

  4. The 3 faces of clinical reasoning: Epistemological explorations of disparate error reduction strategies.

    Science.gov (United States)

    Monteiro, Sandra; Norman, Geoff; Sherbino, Jonathan

    2018-03-13

    There is general consensus that clinical reasoning involves 2 stages: a rapid stage where 1 or more diagnostic hypotheses are advanced and a slower stage where these hypotheses are tested or confirmed. The rapid hypothesis generation stage is considered inaccessible for analysis or observation. Consequently, recent research on clinical reasoning has focused specifically on improving the accuracy of the slower, hypothesis confirmation stage. Three perspectives have developed in this line of research, and each proposes different error reduction strategies for clinical reasoning. This paper considers these 3 perspectives and examines the underlying assumptions. Additionally, this paper reviews the evidence, or lack thereof, behind each class of error reduction strategies. The first perspective takes an epidemiological stance, appealing to the benefits of incorporating population data and evidence-based medicine in everyday clinical reasoning. The second builds on the heuristic and bias research programme, appealing to a special class of dual process reasoning models that theorizes a rapid, error-prone cognitive process for problem solving alongside a slower, more logical cognitive process capable of correcting those errors. Finally, the third perspective borrows from an exemplar model of categorization that explicitly relates clinical knowledge and experience to diagnostic accuracy. © 2018 John Wiley & Sons, Ltd.

  5. Schur Complement Reduction in the Mixed-Hybrid Approximation of Darcy's Law: Rounding Error Analysis

    Czech Academy of Sciences Publication Activity Database

    Maryška, Jiří; Rozložník, Miroslav; Tůma, Miroslav

    2000-01-01

    Roč. 117, - (2000), s. 159-173 ISSN 0377-0427 R&D Projects: GA AV ČR IAA2030706; GA ČR GA201/98/P108 Institutional research plan: AV0Z1030915 Keywords : potential fluid flow problem * symmetric indefinite linear systems * Schur complement reduction * iterative methods * rounding error analysis Subject RIV: BA - General Mathematics Impact factor: 0.455, year: 2000

  6. An Analysis of Medication Errors at the Military Medical Center: Implications for a Systems Approach for Error Reduction

    National Research Council Canada - National Science Library

    Scheirman, Katherine

    2001-01-01

    An analysis was accomplished of all inpatient medication errors at a military academic medical center during the year 2000, based on the causes of medication errors as described by current research in the field...

  7. Reduction of very large reaction mechanisms using methods based on simulation error minimization

    Energy Technology Data Exchange (ETDEWEB)

    Nagy, Tibor; Turanyi, Tamas [Institute of Chemistry, Eoetvoes University (ELTE), P.O. Box 32, H-1518 Budapest (Hungary)

    2009-02-15

    A new species reduction method called the Simulation Error Minimization Connectivity Method (SEM-CM) was developed. According to the SEM-CM algorithm, a mechanism building procedure is started from the important species. Strongly connected sets of species, identified on the basis of the normalized Jacobian, are added and several consistent mechanisms are produced. The combustion model is simulated with each of these mechanisms and the mechanism causing the smallest error (i.e. deviation from the model that uses the full mechanism), considering the important species only, is selected. Then, in several steps, other strongly connected sets of species are added, the size of the mechanism is gradually increased and the procedure is terminated when the error becomes smaller than the required threshold. A new method for the elimination of redundant reactions is also presented, which is called the Principal Component Analysis of Matrix F with Simulation Error Minimization (SEM-PCAF). According to this method, several reduced mechanisms are produced by using various PCAF thresholds. Among the reduced mechanisms whose error is close to the smallest, the one with the lowest CPU time requirement is selected. Application of SEM-CM and SEM-PCAF together provides a very efficient way to eliminate redundant species and reactions from large mechanisms. The suggested approach was tested on a mechanism containing 6874 irreversible reactions of 345 species that describes methane partial oxidation to high conversion. The aim is to accurately reproduce the concentration-time profiles of 12 major species with less than 5% error at the conditions of an industrial application. The reduced mechanism consists of 246 reactions of 47 species and its simulation is 116 times faster than using the full mechanism. SEM-CM was found to be more effective than the classic Connectivity Method, and also than the DRG, two-stage DRG, DRGASA, basic DRGEP and extended DRGEP methods. (author)
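
    The SEM-CM selection loop can be sketched generically: grow candidate species sets along connectivity links, simulate each candidate mechanism, and keep the one with the smallest simulation error until the error threshold is met. The connectivity structure, error function, and threshold below are all toy stand-ins, not chemistry.

```python
def sem_cm_sketch(important, connectivity, simulate_error, threshold):
    """Greedy mechanism growth in the spirit of SEM-CM.

    important      : iterable of species that must be reproduced accurately
    connectivity   : dict species -> set of strongly connected species
    simulate_error : callable(frozenset of species) -> error vs full mechanism
    threshold      : stop once the error drops below this value
    """
    mech = set(important)
    err = simulate_error(frozenset(mech))
    while err >= threshold:
        # Candidate extensions: connected species sets not yet included.
        candidates = [connectivity[s] - mech
                      for s in mech if connectivity[s] - mech]
        if not candidates:
            break                      # nothing left to add
        # Simulate each extension and keep the one with the smallest error.
        best = min(candidates,
                   key=lambda c: simulate_error(frozenset(mech | c)))
        mech |= best
        err = simulate_error(frozenset(mech))
    return mech, err

# Toy problem: a chain of connected species; the "simulation error"
# simply shrinks as more species are included.
conn = {0: {1}, 1: {2}, 2: {3}, 3: {4}, 4: set()}
error_fn = lambda mech: 1.0 / len(mech)
mech, err = sem_cm_sketch([0], conn, error_fn, threshold=0.3)
print(sorted(mech), err)
```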

  8. Quantitative shearography: error reduction by using more than three measurement channels

    International Nuclear Information System (INIS)

    Charrett, Tom O. H.; Francis, Daniel; Tatam, Ralph P.

    2011-01-01

    Shearography is a noncontact optical technique used to measure surface displacement derivatives. Full surface strain characterization can be achieved using shearography configurations employing at least three measurement channels. Each measurement channel is sensitive to a single displacement gradient component defined by its sensitivity vector. A matrix transformation is then required to convert the measured components to the orthogonal displacement gradients required for quantitative strain measurement. This transformation, conventionally performed using three measurement channels, amplifies any errors present in the measurement. This paper investigates the use of additional measurement channels using the results of a computer model and an experimental shearography system. Results are presented showing that the addition of a fourth channel can reduce the errors in the computed orthogonal components by up to 33% and that, by using 10 channels, reductions of around 45% should be possible.
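
    The benefit of a fourth channel can be seen in the least-squares transformation itself: with the channel sensitivity vectors stacked as the rows of a matrix A, the noise covariance of the recovered orthogonal gradients is proportional to the inverse of A-transpose-A, and adding rows can only shrink it. The sensitivity vectors below are arbitrary examples, not the paper's configurations.

```python
import numpy as np

# Rows = measurement channels; columns = orthogonal displacement-gradient
# components. A three-channel system plus a hypothetical fourth channel.
A3 = np.array([[1.0, 0.2, 0.1],
               [0.1, 1.0, 0.3],
               [0.2, 0.1, 1.0]])
A4 = np.vstack([A3, [0.5, 0.5, 0.5]])    # extra measurement channel

def noise_gain(A):
    """Total variance amplification of the least-squares inversion:
    trace of inv(A^T A), assuming unit-variance channel noise."""
    return np.trace(np.linalg.inv(A.T @ A))

g3, g4 = noise_gain(A3), noise_gain(A4)
print(g3, g4)
```

    Any nonzero extra row strictly reduces this trace (by the Sherman-Morrison identity), which is the linear-algebra reason redundant channels reduce the errors in the computed orthogonal components.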

  9. Characterization of electromagnetic fields in the aSPECT spectrometer and reduction of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Ayala Guardia, Fidel

    2011-10-15

    The aSPECT spectrometer has been designed to measure, with high precision, the recoil proton spectrum of the free neutron decay. From this spectrum, the electron antineutrino angular correlation coefficient a can be extracted with high accuracy. The goal of the experiment is to determine the coefficient a with a total relative error smaller than 0.3%, well below the current literature value of 5%. First measurements with the aSPECT spectrometer were performed at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent background instabilities prevented us from reporting a new value of a. The contents of this thesis are based on the latest measurements performed with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, background instabilities were considerably reduced. Furthermore, diverse modifications intended to minimize systematic errors and to achieve a more reliable setup were successfully implemented. Unfortunately, saturation effects of the detector electronics turned out to be too high to determine a meaningful result. However, this and other systematics were identified and decreased, or even eliminated, for future aSPECT beamtimes. The central part of this work is focused on the analysis and improvement of systematic errors related to the aSPECT electromagnetic fields. This work yielded many improvements, particularly in the reduction of the systematic effects due to electric fields. The systematics related to the aSPECT magnetic field were also minimized and determined down to a level which permits improvement of the present literature value of a. Furthermore, a custom NMR magnetometer was developed and improved during this thesis, which will allow magnetic field-related uncertainties to be reduced to a negligible level and a to be determined with a total relative error of at most 0.3%.

  10. Characterization of electromagnetic fields in the αSPECT spectrometer and reduction of systematic errors

    International Nuclear Information System (INIS)

    Ayala Guardia, Fidel

    2011-10-01

    The aSPECT spectrometer has been designed to measure, with high precision, the recoil proton spectrum of the free neutron decay. From this spectrum, the electron antineutrino angular correlation coefficient a can be extracted with high accuracy. The goal of the experiment is to determine the coefficient a with a total relative error smaller than 0.3%, well below the current literature value of 5%. First measurements with the aSPECT spectrometer were performed at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent background instabilities prevented us from reporting a new value of a. The contents of this thesis are based on the latest measurements performed with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, background instabilities were considerably reduced. Furthermore, diverse modifications intended to minimize systematic errors and to achieve a more reliable setup were successfully implemented. Unfortunately, saturation effects of the detector electronics turned out to be too high to determine a meaningful result. However, this and other systematics were identified and decreased, or even eliminated, for future aSPECT beamtimes. The central part of this work is focused on the analysis and improvement of systematic errors related to the aSPECT electromagnetic fields. This work yielded many improvements, particularly in the reduction of the systematic effects due to electric fields. The systematics related to the aSPECT magnetic field were also minimized and determined down to a level which permits improvement of the present literature value of a. Furthermore, a custom NMR magnetometer was developed and improved during this thesis, which will allow magnetic field-related uncertainties to be reduced to a negligible level and a to be determined with a total relative error of at most 0.3%.

  11. An experimental investigation on the effects of exponential window and impact force level on harmonic reduction in impact-synchronous modal analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chao, Ong Zhi; Cheet, Lim Hong; Yee, Khoo Shin [Mechanical Engineering Department, Faculty of Engineering, University of Malaya, Kuala Lumpur (Malaysia)]; Rahman, Abdul Ghaffar Abdul [Faculty of Mechanical Engineering, University Malaysia Pahang, Pekan (Malaysia)]; Ismail, Zubaidah [Civil Engineering Department, Faculty of Engineering, University of Malaya, Kuala Lumpur (Malaysia)]

    2016-08-15

    A novel method called Impact-synchronous modal analysis (ISMA) was proposed previously which allows modal testing to be performed during operation. This technique focuses on signal processing of the upstream data to provide cleaner Frequency response function (FRF) estimation prior to modal extraction. Two important parameters, i.e., the windowing function and the impact force level, were identified and their effect on the effectiveness of this technique was experimentally investigated. When performing modal testing under running conditions, the cyclic load signals are dominant in the measured response for the entire time history. The exponential window is effective in minimizing leakage and in attenuating signals at the non-synchronous running speed, its harmonics and noise to zero at the end of each time record window block. Besides, with the calculated cyclic force known, a suitable impact force level to apply to the system can be decided prior to performing ISMA. The maximum allowable impact force can be determined from a nonlinearity test using the coherence function. By applying impact forces higher than the cyclic loads, along with an ideal decay rate in ISMA, significant harmonic reduction is achieved in the FRF estimation. Subsequently, the dynamic characteristics of the system are successfully extracted from a cleaner FRF, and the results obtained are comparable with Experimental modal analysis (EMA).
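The exponential-window step described above can be sketched as follows. All numbers (sampling rate, decay constant, mode frequencies) are illustrative assumptions, not the authors' setup:

```python
import numpy as np

# Sketch of the exponential-window step described in the abstract.
# All numbers (fs, tau, mode frequencies) are illustrative assumptions.
fs = 1024                      # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)  # one time-record block of 1 s

tau = 0.15                     # window decay constant, s
window = np.exp(-t / tau)      # forces each block to decay toward zero

# Example block: a decaying impact response (40 Hz mode) plus a
# persistent 50 Hz cyclic component that ISMA aims to suppress.
impact_response = np.exp(-5 * t) * np.sin(2 * np.pi * 40 * t)
cyclic = 0.5 * np.sin(2 * np.pi * 50 * t)
block = impact_response + cyclic

windowed = block * window

# At the end of the record the persistent component is attenuated by
# exp(-1/tau) ~ 1e-3, which is what reduces leakage in the FRF estimate.
print(abs(windowed[-1]) < 1e-2 * np.abs(block).max())
```

A shorter `tau` attenuates the cyclic components more strongly but also distorts the impact response, which is the trade-off the paper's "ideal decay rate" refers to.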

  12. An experimental investigation on the effects of exponential window and impact force level on harmonic reduction in impact-synchronous modal analysis

    International Nuclear Information System (INIS)

    Chao, Ong Zhi; Cheet, Lim Hong; Yee, Khoo Shin; Rahman, Abdul Ghaffar Abdul; Ismail, Zubaidah

    2016-01-01

    A novel method called Impact-synchronous modal analysis (ISMA) was proposed previously which allows modal testing to be performed during operation. This technique focuses on signal processing of the upstream data to provide cleaner Frequency response function (FRF) estimation prior to modal extraction. Two important parameters, i.e., the windowing function and the impact force level, were identified and their effect on the effectiveness of this technique was experimentally investigated. When performing modal testing under running conditions, the cyclic load signals are dominant in the measured response for the entire time history. The exponential window is effective in minimizing leakage and in attenuating signals at the non-synchronous running speed, its harmonics and noise to zero at the end of each time record window block. Besides, with the calculated cyclic force known, a suitable impact force level to apply to the system can be decided prior to performing ISMA. The maximum allowable impact force can be determined from a nonlinearity test using the coherence function. By applying impact forces higher than the cyclic loads, along with an ideal decay rate in ISMA, significant harmonic reduction is achieved in the FRF estimation. Subsequently, the dynamic characteristics of the system are successfully extracted from a cleaner FRF, and the results obtained are comparable with Experimental modal analysis (EMA).

  13. Medical error reduction and tort reform through private, contractually-based quality medicine societies.

    Science.gov (United States)

    MacCourt, Duncan; Bernstein, Joseph

    2009-01-01

    The current medical malpractice system is broken. Many patients injured by malpractice are not compensated, whereas some patients who recover in tort have not suffered medical negligence; furthermore, the system's failures demoralize patients and physicians. But most importantly, the system perpetuates medical error because the adversarial nature of litigation induces a so-called "Culture of Silence" in physicians eager to shield themselves from liability. This silence leads to the pointless repetition of error, as the open discussion and analysis of the root causes of medical mistakes does not take place as fully as it should. In 1993, President Clinton's Task Force on National Health Care Reform considered a solution characterized by Enterprise Medical Liability (EML), Alternative Dispute Resolution (ADR), some limits on recovery for non-pecuniary damages (Caps), and offsets for collateral source recovery. Yet this list of ingredients did not include a strategy to surmount the difficulties associated with each element. Specifically, EML might be efficient, but none of the enterprises contemplated to assume responsibility, i.e., hospitals and payers, controls physician behavior enough that it would be fair to foist liability on them. Likewise, although ADR might be efficient, it will be resisted by individual litigants who perceive themselves as harmed by it. Finally, while limitations on collateral source recovery and damages might effectively reduce costs, patients and trial lawyers likely would not accept them without recompense. The task force also did not place error reduction at the center of malpractice tort reform - a logical and strategic error, in our view. In response, we propose a new system that employs the ingredients suggested by the task force but also addresses the problems with each. We also explicitly consider steps to rebuff the Culture of Silence and promote error reduction. We assert that patients would be better off with a system where

  14. Simultaneous optical image compression and encryption using error-reduction phase retrieval algorithm

    International Nuclear Information System (INIS)

    Liu, Wei; Liu, Shutian; Liu, Zhengjun

    2015-01-01

    We report a simultaneous image compression and encryption scheme based on solving a typical optical inverse problem. The secret images to be processed are multiplexed as the input intensities of a cascaded diffractive optical system. At the output plane, compressed complex-valued data with far fewer measurements can be obtained by utilizing an error-reduction phase retrieval algorithm. The magnitude of the output image can serve as the final ciphertext while its phase serves as the decryption key. Therefore the compression and encryption are completed simultaneously without additional encoding and filtering operations. The proposed strategy can be straightforwardly applied to existing optical security systems that involve diffraction and interference. Numerical simulations are performed to demonstrate the validity and security of the proposal. (paper)
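For reference, the core of an error-reduction phase retrieval loop looks like the following. This is a generic single-Fourier-transform (Gerchberg-Saxton-type) sketch with made-up data, not the paper's cascaded multi-image system:

```python
import numpy as np

# Generic error-reduction loop: alternate between the known input-plane
# magnitude and the "measured" output-plane magnitude. Single-FFT sketch
# only; the paper's cascaded diffractive system is more elaborate.
rng = np.random.default_rng(0)
input_mag = rng.random((32, 32))               # known input magnitude
measured = np.abs(np.fft.fft2(input_mag))      # measured output magnitude

field = input_mag * np.exp(2j * np.pi * rng.random((32, 32)))  # random phase start
errors = []
for _ in range(300):
    F = np.fft.fft2(field)
    errors.append(np.linalg.norm(np.abs(F) - measured))
    F = measured * np.exp(1j * np.angle(F))            # enforce output magnitude
    g = np.fft.ifft2(F)
    field = input_mag * np.exp(1j * np.angle(g))       # enforce input magnitude

# Error reduction never increases the residual between iterations.
print(errors[-1] < errors[0])
```

The retrieved `np.angle(field)` plays the role of the decryption key in the scheme above: whoever holds the output magnitude alone cannot invert the system without it.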

  15. Clinical errors and therapist discomfort with client disclosure of troublesome pornography use: Implications for clinical practice and error reduction.

    Science.gov (United States)

    Walters, Nathan T; Spengler, Paul M

    2016-09-01

    Mental health professionals are increasingly aware of the need for competence in the treatment of clients with pornography-related concerns. However, while researchers have recently sought to explore efficacious treatments for pornography-related concerns, few explorations of potential clinical judgment issues have occurred. Due to the sensitive, and at times uncomfortable, nature of client disclosures of sexual concerns within therapy, therapists are required to manage their own discomfort while retaining fidelity to treatment. The present paper explores clinician examples of judgment errors that may result from feelings of discomfort, and specifically from client use of pornography. Issues of potential bias, bias management techniques, and therapeutic implications are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Non-linear quantization error reduction for the temperature measurement subsystem on-board LISA Pathfinder

    Science.gov (United States)

    Sanjuan, J.; Nofrarias, M.

    2018-04-01

    Laser Interferometer Space Antenna (LISA) Pathfinder is a mission to test the technology enabling gravitational wave detection in space and to demonstrate that sub-femto-g free fall levels are possible. To do so, the distance between two free falling test masses is measured to unprecedented sensitivity by means of laser interferometry. Temperature fluctuations are one of the noise sources limiting the free fall accuracy and the interferometer performance and need to be known at the ~10 μK Hz^-1/2 level in the sub-millihertz frequency range in order to validate the noise models for the future space-based gravitational wave detector LISA. The temperature measurement subsystem on LISA Pathfinder is in charge of monitoring the thermal environment at key locations with noise levels of 7.5 μK Hz^-1/2 at the sub-millihertz. However, its performance worsens by one to two orders of magnitude when slowly changing temperatures are measured due to errors introduced by analog-to-digital converter non-linearities. In this paper, we present a method to reduce this effect by data post-processing. The method is applied to experimental data available from on-ground validation tests to demonstrate its performance and the potential benefit for in-flight data. The analog-to-digital converter effects are reduced by a factor between three and six in the frequencies where the errors play an important role. An average 2.7-fold noise reduction is demonstrated in the 0.3 mHz-2 mHz band.

  17. An exponential distribution

    International Nuclear Information System (INIS)

    Anon

    2009-01-01

    In this presentation the author deals with the probabilistic evaluation of product life using the example of the exponential distribution. The exponential distribution is a special one-parameter case of the Weibull distribution.
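The reduction the abstract refers to can be written out explicitly, in standard textbook form (rate λ, Weibull shape k and scale θ):

```latex
% Exponential density with rate \lambda:
f(x;\lambda) = \lambda e^{-\lambda x}, \qquad x \ge 0 .

% Weibull density with shape k and scale \theta:
f(x;k,\theta) = \frac{k}{\theta}\Bigl(\frac{x}{\theta}\Bigr)^{k-1}
                e^{-(x/\theta)^k}, \qquad x \ge 0 ,

% which for k = 1 reduces to the exponential with \lambda = 1/\theta:
f(x;1,\theta) = \frac{1}{\theta}\, e^{-x/\theta} .
```

The shape parameter k is what the Weibull adds: k < 1 gives a decreasing failure rate, k > 1 an increasing one, and k = 1 recovers the constant failure rate of the exponential.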

  18. The systems approach to error reduction: factors influencing inoculation injury reporting in the operating theatre.

    Science.gov (United States)

    Cutter, Jayne; Jordan, Sue

    2013-11-01

    To examine the frequency of, and factors influencing, reporting of mucocutaneous and percutaneous injuries in operating theatres. Surgeons and peri-operative nurses risk acquiring blood-borne viral infections during surgical procedures. Appropriate first-aid and prophylactic treatment after an injury can significantly reduce the risk of infection. However, studies indicate that injuries often go unreported. The 'systems approach' to error reduction relies on reporting incidents and near misses. Failure to report will compromise safety. A postal survey of all surgeons and peri-operative nurses engaged in exposure prone procedures in nine Welsh hospitals, face-to-face interviews with selected participants and telephone interviews with Infection Control Nurses. The response rate was 51.47% (315/612). Most respondents reported one or more percutaneous (183/315, 58.1%) and/or mucocutaneous injuries (68/315, 21.6%) in the 5 years preceding the study. Only 54.9% (112/204) reported every injury. Surgeons were poorer at reporting: 70/133 (52.6%) reported all or >50% of their injuries compared with 65/71 nurses (91.5%). Injuries are frequently under-reported, possibly compromising safety in operating theatres. A significant number of inoculation injuries are not reported. Factors influencing under-reporting were identified. This knowledge can assist managers in improving reporting and encouraging a robust safety culture within operating departments. © 2012 John Wiley & Sons Ltd.

  19. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    Science.gov (United States)

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear predictive coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
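A simulated-annealing search over two parameters can be sketched as follows. The objective here is a toy stand-in with a known minimum near (0.7, 0.3) plus a ripple that creates local optima; the paper's actual objective is a regression model fitted to listening-test scores:

```python
import math
import random

# Toy stand-in objective with many local optima; minimum near (0.7, 0.3).
def objective(a, b):
    return (a - 0.7) ** 2 + (b - 0.3) ** 2 + 0.05 * math.sin(20 * a) * math.sin(20 * b)

random.seed(1)
state = (random.random(), random.random())
best, best_cost = state, objective(*state)
T = 1.0
for _ in range(5000):
    T *= 0.999                                  # geometric cooling schedule
    a = min(1.0, max(0.0, state[0] + random.gauss(0, 0.05)))
    b = min(1.0, max(0.0, state[1] + random.gauss(0, 0.05)))
    cur, cand = objective(*state), objective(a, b)
    # Accept downhill moves always, uphill moves with Boltzmann probability.
    if cand < cur or random.random() < math.exp(-(cand - cur) / T):
        state = (a, b)
        if cand < best_cost:
            best, best_cost = (a, b), cand
print(round(best[0], 2), round(best[1], 2))
```

The uphill-acceptance probability `exp(-dE/T)` is what lets the search escape the ripple's local optima early on, while the cooling schedule gradually turns it into a greedy descent.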

  20. Reduction of truncation errors in planar near-field aperture antenna measurements using the method of alternating orthogonal projections

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    2006-01-01

    A simple and effective procedure for the reduction of truncation error in planar near-field to far-field transformations is presented. The starting point is the consideration that the actual scan plane truncation implies a reliability of the reconstructed plane wave spectrum of the field radiated...

  1. Reduction of Truncation Errors in Planar Near-Field Aperture Antenna Measurements Using the Gerchberg-Papoulis Algorithm

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    2008-01-01

    A simple and effective procedure for the reduction of truncation errors in planar near-field measurements of aperture antennas is presented. The procedure relies on the consideration that, due to the scan plane truncation, the calculated plane wave spectrum of the field radiated by the antenna is...
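In one dimension, the Gerchberg-Papoulis iteration named in the title can be sketched as follows: alternately re-impose the measured (truncated) samples in the spatial domain and the known band limit in the spectral domain, so the field outside the scan region is gradually extrapolated. Sizes, band limit and truncation below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Build a band-limited "true" field, truncate its edges, then recover
# the missing samples by alternating projections (Gerchberg-Papoulis).
rng = np.random.default_rng(3)
N = 256
band = np.zeros(N)
band[:4] = 1
band[-3:] = 1                                # low-pass spectral support

spec = np.zeros(N, dtype=complex)
spec[0] = rng.standard_normal()
spec[1:4] = rng.standard_normal(3) + 1j * rng.standard_normal(3)
spec[-3:] = np.conj(spec[1:4][::-1])         # Hermitian symmetry -> real signal
true = np.fft.ifft(spec).real

known = np.zeros(N, dtype=bool)
known[8:248] = True                          # scan region; edges are truncated

estimate = np.zeros(N)
err0 = np.linalg.norm(estimate - true) / np.linalg.norm(true)   # starts at 1
for _ in range(500):
    estimate[known] = true[known]            # enforce measured samples
    spectrum = np.fft.fft(estimate) * band   # enforce band limit
    estimate = np.fft.ifft(spectrum).real

err = np.linalg.norm(estimate - true) / np.linalg.norm(true)
print(err < err0)
```

Convergence is fast here because the truncated region is small relative to the band limit; as the missing region grows, the extrapolation becomes increasingly ill-conditioned, which is why truncation-error reduction (rather than full recovery) is the realistic goal.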

  2. SU-F-T-241: Reduction in Planning Errors Via a Process Control Developed Using the Eclipse Scripting API

    Energy Technology Data Exchange (ETDEWEB)

    Barbee, D; McCarthy, A; Galavis, P; Xu, A [NYU Langone Medical Center, New York, NY (United States)

    2016-06-15

    Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check for the errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g. minimum EDW MUs), in addition to ensuring that policies and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The number of potential failure modes the PlanCheck script is currently capable of checking for in each category is: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse scripting API enabled plan checks to occur within the planning system, resulting in reduced error rates and improved efficiency. Future work includes: initiating full FMEA for planning workflow, extending categories to include additional checks outside of ESAPI via Aria

  3. The introduction of an acute physiological support service for surgical patients is an effective error reduction strategy.

    Science.gov (United States)

    Clarke, D L; Kong, V Y; Naidoo, L C; Furlong, H; Aldous, C

    2013-01-01

    Acute surgical patients are particularly vulnerable to human error. The Acute Physiological Support Team (APST) was created with the twin objectives of identifying high-risk acute surgical patients in the general wards and reducing both the incidence and the impact of error in these patients. A number of error taxonomies were used to understand the causes of human error, and a simple risk stratification system was adopted to identify patients who are particularly at risk of error. During the period November 2012 - January 2013 a total of 101 surgical patients were cared for by the APST at Edendale Hospital. The average age was forty years. There were 36 females and 65 males. There were 66 general surgical patients and 35 trauma patients. Fifty-six patients were referred on the day of their admission. The average length of stay in the APST was four days. Eleven patients were haemodynamically unstable on presentation and twelve were clinically septic. The reasons for referral were sepsis (4), respiratory distress (3), acute kidney injury (AKI) (38), post-operative monitoring (39), pancreatitis (3), ICU down-referral (7), hypoxia (5), low GCS (1), and coagulopathy (1). The mortality rate was 13%. A total of thirty-six patients experienced 56 errors. A total of 143 interventions were initiated by the APST. These included institution or adjustment of intravenous fluids (101), blood transfusion (12), antibiotics (9), management of neutropenic sepsis (1), central line insertion (3), optimization of oxygen therapy (7), correction of electrolyte abnormality (8), and correction of coagulopathy (2). CONCLUSION: Our intervention combined current taxonomies of error with a simple risk stratification system and is a variant of the defence-in-depth strategy of error reduction. We effectively identified and corrected a significant number of human errors in high-risk acute surgical patients. This audit has helped understand the common sources of error in the general surgical wards and will inform

  4. Extended Poisson Exponential Distribution

    Directory of Open Access Journals (Sweden)

    Anum Fatima

    2015-09-01

    A new mixture of the Modified Exponential (ME) and Poisson distributions is introduced in this paper. Taking the maximum of Modified Exponential random variables when the sample size follows a zero-truncated Poisson distribution, we derive the new distribution, named the Extended Poisson Exponential distribution. This distribution possesses increasing and decreasing failure rates. The Poisson-Exponential, Modified Exponential and Exponential distributions are special cases of this distribution. We also investigate some mathematical properties of the distribution, along with information entropies and order statistics. The estimation of parameters is obtained using the maximum likelihood procedure. Finally, we illustrate a real data application of our distribution.
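The "maximum over a zero-truncated Poisson number of draws" construction can be illustrated with a small sampler. For simplicity a plain exponential stands in for the Modified Exponential component, which is an assumption; the paper's ME distribution has its own form:

```python
import math
import random

# Draw N from a zero-truncated Poisson, then take the max of N component
# draws. A plain exponential stands in for the ME component (assumption).
def zero_truncated_poisson(lam, rng):
    while True:                      # rejection: redraw until N >= 1
        L, k, p = math.exp(-lam), 0, 1.0
        while p > L:                 # Knuth's Poisson sampler
            k += 1
            p *= rng.random()
        if k - 1 >= 1:
            return k - 1

def epe_sample(lam, rate, rng):
    n = zero_truncated_poisson(lam, rng)
    return max(rng.expovariate(rate) for _ in range(n))

rng = random.Random(42)
samples = [epe_sample(lam=2.0, rate=1.5, rng=rng) for _ in range(1000)]
print(min(samples) > 0)
```

Because N is at least one, every sample is the maximum of a nonempty set of positive draws, so the support stays on the positive half-line as required.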

  5. Dynamics of exponential maps

    OpenAIRE

    Rempe, Lasse

    2003-01-01

    This thesis contains several new results about the dynamics of exponential maps $z\\mapsto \\exp(z)+\\kappa$. In particular, we prove that periodic external rays of exponential maps with nonescaping singular value always land. This is an analog of a theorem of Douady and Hubbard for polynomials. We also answer a question of Herman, Baker and Rippon by showing that the boundary of an unbounded exponential Siegel disk always contains the singular value. In addition to the presentation of new resul...

  6. Reduction of digital errors of digital charge division type position-sensitive detectors

    International Nuclear Information System (INIS)

    Uritani, A.; Yoshimura, K.; Takenaka, Y.; Mori, C.

    1994-01-01

    It is well known that ''digital errors'', i.e. differential non-linearity, appear in a position profile of radiation interactions when the profile is obtained with a digital charge-division-type position-sensitive detector. Two methods are presented to reduce the digital errors. They are the methods using logarithmic amplifiers and a weighting function. The validities of these two methods have been evaluated mainly by computer simulation. These methods can considerably reduce the digital errors. The best results are obtained when both methods are applied. ((orig.))

  7. Reduction in specimen labeling errors after implementation of a positive patient identification system in phlebotomy.

    Science.gov (United States)

    Morrison, Aileen P; Tanasijevic, Milenko J; Goonan, Ellen M; Lobo, Margaret M; Bates, Michael M; Lipsitz, Stuart R; Bates, David W; Melanson, Stacy E F

    2010-06-01

    Ensuring accurate patient identification is central to preventing medical errors, but it can be challenging. We implemented a bar code-based positive patient identification system for use in inpatient phlebotomy. A before-after design was used to evaluate the impact of the identification system on the frequency of mislabeled and unlabeled samples reported in our laboratory. Labeling errors fell from 5.45 in 10,000 before implementation to 3.2 in 10,000 afterward (P = .0013). An estimated 108 mislabeling events were prevented by the identification system in 1 year. Furthermore, a workflow step requiring manual preprinting of labels, which was accompanied by potential labeling errors in about one quarter of blood "draws," was removed as a result of the new system. After implementation, a higher percentage of patients reported having their wristband checked before phlebotomy. Bar code technology significantly reduced the rate of specimen identification errors.

  8. Error reduction in health care: a systems approach to improving patient safety

    National Research Council Canada - National Science Library

    Spath, Patrice

    2011-01-01

    .... The book pinpoints how to reduce and eliminate medical mistakes that threaten the health and safety of patients and teaches how to identify the root cause of medical errors, implement strategies...

  9. Semi-Blind Error Resilient SLM for PAPR Reduction in OFDM Using Spread Spectrum Codes

    Science.gov (United States)

    Elhelw, Amr M.; Badran, Ehab F.

    2015-01-01

    High peak to average power ratio (PAPR) is one of the major problems of OFDM systems. Selected mapping (SLM) is a promising choice that can elegantly tackle this problem. Nevertheless, a side information (SI) index is required to be transmitted, which reduces the overall throughput. This paper proposes a semi-blind error resilient SLM system that utilizes spread spectrum codes for embedding the SI index in the transmitted symbols. The codes are embedded in an innovative manner which does not increase the average energy per symbol. The use of such codes allows the correction of probable errors in the SI index detection. A new receiver, which does not require perfect channel state information (CSI) for the detection of the SI index and has relatively low computational complexity, is proposed. Simulation results show that the proposed system performs well both in terms of SI index detection error and bit error rate. PMID:26018504
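Bare-bones selected mapping works as follows: multiply the data block by several candidate phase sequences and transmit the candidate whose OFDM time-domain signal has the lowest PAPR. This is generic SLM only; the paper's spread-spectrum embedding of the SI index is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 64                                         # subcarriers (assumption)
# Random QPSK data block:
data = (2 * rng.integers(0, 2, N) - 1) + 1j * (2 * rng.integers(0, 2, N) - 1)

def papr_db(freq_symbols):
    x = np.fft.ifft(freq_symbols)              # OFDM time-domain signal
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

U = 8                                          # number of candidate mappings
phases = np.exp(2j * np.pi * rng.random((U, N)))
paprs = np.array([papr_db(data * p) for p in phases])
best = int(np.argmin(paprs))

# `best` is the side information (SI) index the receiver must recover;
# the scheme above embeds it with spread spectrum codes instead of
# sending it separately.
print(best, round(float(paprs[best]), 2))
```

Since unit-magnitude phase rotations leave per-subcarrier energy unchanged, the selection reduces PAPR without raising average transmit power, which is the property the paper's embedding also preserves.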

  10. Re-Normalization Method of Doppler Lidar Signal for Error Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nakgyu; Baik, Sunghoon; Park, Seungkyu; Kim, Donglyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Dukhyeon [Hanbat National Univ., Daejeon (Korea, Republic of)

    2014-05-15

    In this paper, we present a re-normalization method for reducing the fluctuations of Doppler signals caused by various noise sources, mainly the frequency locking error, in a Doppler lidar system. For the Doppler lidar system, we used an injection-seeded pulsed Nd:YAG laser as the transmitter and an iodine filter as the Doppler frequency discriminator. For the Doppler frequency shift measurement, the transmission ratio using the injection-seeded laser is locked to stabilize the frequency. If the frequency locking system is not perfect, the Doppler signal has some error due to the frequency locking error. The re-normalization of the Doppler signals was performed to reduce this error using an additional laser beam to an iodine cell. We confirmed that the re-normalized Doppler signal is much more stable than the averaged Doppler signal obtained with our calibration method; the standard deviation was reduced to 4.838 × 10^-3.

  11. Semi-Blind Error Resilient SLM for PAPR Reduction in OFDM Using Spread Spectrum Codes.

    Directory of Open Access Journals (Sweden)

    Amr M Elhelw

    High peak to average power ratio (PAPR) is one of the major problems of OFDM systems. Selected mapping (SLM) is a promising choice that can elegantly tackle this problem. Nevertheless, a side information (SI) index is required to be transmitted, which reduces the overall throughput. This paper proposes a semi-blind error resilient SLM system that utilizes spread spectrum codes for embedding the SI index in the transmitted symbols. The codes are embedded in an innovative manner which does not increase the average energy per symbol. The use of such codes allows the correction of probable errors in the SI index detection. A new receiver, which does not require perfect channel state information (CSI) for the detection of the SI index and has relatively low computational complexity, is proposed. Simulation results show that the proposed system performs well both in terms of SI index detection error and bit error rate.

  12. Filtering Methods for Error Reduction in Spacecraft Attitude Estimation Using Quaternion Star Trackers

    Science.gov (United States)

    Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil

    2011-01-01

    Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors, 1) extended Kalman filter (EKF) augmented with Markov states, 2) Unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.

  13. SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER

    International Nuclear Information System (INIS)

    QIAN, S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

    2007-01-01

    Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) an independent slide pitch test by use of a non-tilted reference beam, (3) a non-tilted reference test combined with a tilted sample, (4) a penta-prism scanning mode without reference beam correction, (5) a non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately

  14. Electronic portal image assisted reduction of systematic set-up errors in head and neck irradiation

    International Nuclear Information System (INIS)

    Boer, Hans C.J. de; Soernsen de Koste, John R. van; Creutzberg, Carien L.; Visser, Andries G.; Levendag, Peter C.; Heijmen, Ben J.M.

    2001-01-01

    Purpose: To quantify systematic and random patient set-up errors in head and neck irradiation and to investigate the impact of an off-line correction protocol on the systematic errors. Material and methods: Electronic portal images were obtained for 31 patients treated for primary supra-glottic larynx carcinoma who were immobilised using a polyvinyl chloride cast. The observed patient set-up errors were input to the shrinking action level (SAL) off-line decision protocol and appropriate set-up corrections were applied. To assess the impact of the protocol, the positioning accuracy without application of set-up corrections was reconstructed. Results: The set-up errors obtained without set-up corrections (1 standard deviation (SD)=1.5-2 mm for random and systematic errors) were comparable to those reported in other studies on similar fixation devices. On average, six fractions per patient were imaged and the set-up of half the patients was changed due to the decision protocol. Most changes were detected during weekly check measurements, not during the first days of treatment. The application of the SAL protocol reduced the width of the distribution of systematic errors to 1 mm (1 SD), as expected from simulations. A retrospective analysis showed that this accuracy should be attainable with only two measurements per patient using a different off-line correction protocol, which does not apply action levels. Conclusions: Off-line verification protocols can be particularly effective in head and neck patients due to the smallness of the random set-up errors. The excellent set-up reproducibility that can be achieved with such protocols enables accurate dose delivery in conformal treatments
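The logic of a shrinking-action-level style off-line protocol can be sketched as follows: after each imaged fraction, the set-up is corrected when the running mean displacement exceeds an action level that shrinks as alpha / sqrt(n). The alpha value and the data are illustrative, not the study's, and the real SAL protocol has additional rules (e.g. a maximum number of measurements):

```python
import math

# SAL-style off-line correction sketch (1-D, illustrative parameters).
def sal_corrections(displacements_mm, alpha=6.0):
    corrections = []
    offset = 0.0                  # correction currently applied
    measured = []                 # displacements seen since the last correction
    for d in displacements_mm:
        measured.append(d - offset)
        n = len(measured)
        mean = sum(measured) / n
        if abs(mean) > alpha / math.sqrt(n):
            offset += mean        # correct out the estimated systematic error
            corrections.append(offset)
            measured = []         # restart averaging after the correction
    return corrections

# A patient with a ~3 mm systematic set-up error plus random daily error:
daily = [3.6, 2.9, 3.2, 2.7, 3.2, 3.0]
corrections = sal_corrections(daily)
print(corrections)
```

The shrinking threshold is the key idea: a large persistent displacement triggers a correction only once enough fractions have been averaged to distinguish it from random day-to-day error, which is why the protocol works best when random errors are small, as the abstract notes.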

  15. Reduction of errors during practice facilitates fundamental movement skill learning in children with intellectual disabilities.

    Science.gov (United States)

    Capio, C M; Poolton, J M; Sit, C H P; Eguia, K F; Masters, R S W

    2013-04-01

    Children with intellectual disabilities (ID) have been found to have inferior motor proficiencies in fundamental movement skills (FMS). This study examined the effects of training the FMS of overhand throwing by manipulating the amount of practice errors. Participants included 39 children with ID aged 4-11 years who were allocated into either an error-reduced (ER) training programme or a more typical programme in which errors were frequent (error-strewn, ES). Throwing movement form, throwing accuracy, and throwing frequency during free play were evaluated. The ER programme improved movement form, and increased throwing activity during free play to a greater extent than the ES programme. Furthermore, ER learners were found to be capable of engaging in a secondary cognitive task while manifesting robust throwing accuracy performance. The findings support the use of movement skills training programmes that constrain practice errors in children with ID, suggesting that such approach results in improved performance and heightened movement engagement in free play. © 2012 The Authors. Journal of Intellectual Disability Research © 2012 Blackwell Publishing Ltd.

  16. Thin film thickness measurement error reduction by wavelength selection in spectrophotometry

    International Nuclear Information System (INIS)

    Tsepulin, Vladimir G; Perchik, Alexey V; Tolstoguzov, Victor L; Karasik, Valeriy E

    2015-01-01

    Fast and accurate volumetric profilometry of thin film structures is an important problem in the electronic visual display industry. We propose to use spectrophotometry with a limited number of working wavelengths to achieve high-speed control, together with an approach to selecting the optimal working wavelengths to reduce the thickness measurement error. A simple expression for error estimation is presented and tested using a Monte Carlo simulation. The experimental setup is designed to confirm the stability of film thickness determination using a limited number of wavelengths.

  17. Is Radioactive Decay Really Exponential?

    OpenAIRE

    Aston, Philip J.

    2012-01-01

    Radioactive decay of an unstable isotope is widely believed to be exponential. This view is supported by experiments on rapidly decaying isotopes but is more difficult to verify for slowly decaying isotopes. The decay of 14C can be calibrated over a period of 12,550 years by comparing radiocarbon dates with dates obtained from dendrochronology. It is well known that this approach shows that radiocarbon dates of over 3,000 years are in error, which is generally attributed to past variation in ...

  18. Sleep-Dependent Reductions in Reality-Monitoring Errors Arise from More Conservative Decision Criteria

    Science.gov (United States)

    Westerberg, Carmen E.; Hawkins, Christopher A.; Rendon, Lauren

    2018-01-01

    Reality-monitoring errors occur when internally generated thoughts are remembered as external occurrences. We hypothesized that sleep-dependent memory consolidation could reduce them by strengthening connections between items and their contexts during an afternoon nap. Participants viewed words and imagined their referents. Pictures of the…

  19. Reduction of Errors during Practice Facilitates Fundamental Movement Skill Learning in Children with Intellectual Disabilities

    Science.gov (United States)

    Capio, C. M.; Poolton, J. M.; Sit, C. H. P.; Eguia, K. F.; Masters, R. S. W.

    2013-01-01

    Background: Children with intellectual disabilities (ID) have been found to have inferior motor proficiencies in fundamental movement skills (FMS). This study examined the effects of training the FMS of overhand throwing by manipulating the amount of practice errors. Methods: Participants included 39 children with ID aged 4-11 years who were…

  20. Exponential Cardassian universe

    International Nuclear Information System (INIS)

    Liu Daojun; Sun Changbo; Li Xinzhou

    2006-01-01

    The expectation of explaining cosmological observations without requiring new energy sources is indeed worthy of investigation. In this Letter, a new class of Cardassian models, called exponential Cardassian models, for the late-time universe is investigated in the context of the spatially flat FRW universe scenario. We fit the exponential Cardassian models to current type Ia supernovae data and find that they are consistent with the observations. Furthermore, we point out that the equation-of-state parameter for the effective dark fluid component in exponential Cardassian models can naturally cross the cosmological-constant divide w=-1, mildly favored by observations, without introducing exotic material that violates the weak energy condition.

  1. Model and Reduction of Inactive Times in a Maintenance Workshop Following a Diagnostic Error

    Directory of Open Access Journals (Sweden)

    T. Beda

    2011-04-01

    Full Text Available The majority of maintenance workshops in manufacturing factories are hierarchical. This arrangement permits a quick response in the event of a breakdown. The maintenance workshop reacts by evaluating the characteristics of the breakdown. A diagnostic error at any level of the decision-making process therefore delays the restoration of the normal operating state. The consequences are not just financial losses, but a loss of customer satisfaction as well. The goal of this paper is to model the inactive time of a maintenance workshop when an unpredicted catalectic breakdown has occurred and a diagnostic error has been made at some level of decision-making during the treatment of the breakdown. We show that the resulting expression for the inactive times depends only on the characteristics of the workshop. We then propose a method to reduce the inactive times.

  2. An Unusual Exponential Graph

    Science.gov (United States)

    Syed, M. Qasim; Lovatt, Ian

    2014-01-01

    This paper is an addition to the series of papers on the exponential function begun by Albert Bartlett. In particular, we ask how the graph of the exponential function y = e^(-t/τ) would appear if y were plotted versus ln t rather than the normal practice of plotting ln y versus t. In answering this question, we find a new way to…

  3. Exponential and Logarithmic Functions

    OpenAIRE

    Todorova, Tamara

    2010-01-01

    Exponential functions find applications in economics in relation to growth and economic dynamics. In these fields, quite often the choice variable is time and economists are trying to determine the best timing for certain economic activities to take place. An exponential function is one in which the independent variable appears in the exponent. Very often that exponent is time. In highly mathematical courses, it is a truism that students learn by doing, not by reading. Tamara Todorova’s Pr...

  4. Mean-value identities as an opportunity for Monte Carlo error reduction.

    Science.gov (United States)

    Fernandez, L A; Martin-Mayor, V

    2009-05-01

    In the Monte Carlo simulation of both lattice field theories and of models of statistical mechanics, identities verified by exact mean values, such as Schwinger-Dyson equations, Guerra relations, Callen identities, etc., provide well-known and sensitive tests of thermalization bias as well as checks of pseudo-random-number generators. We point out that they can be further exploited as control variates to reduce statistical errors. The strategy is general, very simple, and almost costless in CPU time. The method is demonstrated in the two-dimensional Ising model at criticality, where the CPU gain factor lies between 2 and 4.
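
The control-variate idea described above, subtracting a quantity whose exact mean is known in order to cancel fluctuations, can be sketched outside the Ising context with a toy estimate of E[e^X] for X ~ U(0,1), where X itself serves as the control with known mean 1/2 (all quantities here are illustrative, not the paper's lattice observables):

```python
import math
import random
import statistics

random.seed(42)
n = 20000
xs = [random.random() for _ in range(n)]

f = [math.exp(x) for x in xs]   # target: E[e^X] = e - 1 for X ~ U(0,1)
g = xs                          # control variate with exactly known mean 1/2

fbar, gbar = statistics.fmean(f), statistics.fmean(g)
# optimal coefficient c = Cov(f, g) / Var(g), estimated from the same sample
c = sum((a - fbar) * (b - gbar) for a, b in zip(f, g)) / ((n - 1) * statistics.variance(g))
controlled = [a - c * (b - 0.5) for a, b in zip(f, g)]

plain_est = fbar
cv_est = statistics.fmean(controlled)
gain = statistics.variance(f) / statistics.variance(controlled)  # variance reduction factor
```

As in the paper's claim of an "almost costless" gain, the extra work is one pass over already-generated samples; here the variance reduction factor is large because e^X and X are highly correlated.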

  5. ERROR REDUCTION IN DUCT LEAKAGE TESTING THROUGH DATA CROSS-CHECKS

    Energy Technology Data Exchange (ETDEWEB)

    ANDREWS, J.W.

    1998-12-31

    One way to reduce uncertainty in scientific measurement is to devise a protocol in which more quantities are measured than are absolutely required, so that the result is overconstrained. This report develops a method for combining data from two different tests for air leakage in residential duct systems in this way. An algorithm, which depends on the uncertainty estimates for the measured quantities, optimizes the use of the excess data. In many cases it can significantly reduce the error bar on at least one of the two measured duct leakage rates (supply or return), and it provides a rational method of reconciling any conflicting results from the two leakage tests.
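
A minimal version of this overconstrained reconciliation can be written as a weighted least-squares solve: suppose supply and return leakage are measured directly, and a second test measures their sum. All numbers and uncertainties below are hypothetical, and this is a generic inverse-variance sketch, not the report's specific algorithm:

```python
# hypothetical measurements (cfm) and 1-sigma uncertainties
ms, sig_s = 120.0, 15.0     # supply leakage from test A
mr, sig_r = 80.0, 20.0      # return leakage from test A
mt, sig_t = 230.0, 10.0     # total leakage from test B (over-constrains the system)

ws, wr, wt = 1 / sig_s**2, 1 / sig_r**2, 1 / sig_t**2
# normal equations (A^T W A) x = A^T W y for the model y = [s, r, s + r]
a11, a12, a22 = ws + wt, wt, wr + wt
b1, b2 = ws * ms + wt * mt, wr * mr + wt * mt
det = a11 * a22 - a12 * a12

s_hat = (a22 * b1 - a12 * b2) / det   # reconciled supply leakage
r_hat = (a11 * b2 - a12 * b1) / det   # reconciled return leakage
var_s = a22 / det                      # posterior variance: reduced error bar on s
var_r = a11 / det
```

The diagonal of (AᵀWA)⁻¹ is always smaller than the raw measurement variances, which is exactly the "reduced error bar" the abstract describes, and the reconciled values satisfy all three measurements in a least-squares sense.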

  6. Matrix-exponential description of radiative transfer

    International Nuclear Information System (INIS)

    Waterman, P.C.

    1981-01-01

    By applying the matrix-exponential operator technique to the radiative-transfer equation in discrete form, new analytical solutions are obtained for the transmission and reflection matrices in the limiting cases x << 1 and x >> 1, where x is the optical depth of the layer. Orthogonality of the eigenvectors of the matrix exponential apparently yields new conditions for determining Chandrasekhar's characteristic roots. The exact law of reflection for the discrete eigenfunctions is also obtained. Finally, when used in conjunction with the doubling method, the matrix exponential should result in a reduction in both computation time and loss of precision.
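
The operator at the heart of this record can be computed generically by scaling and squaring with a truncated Taylor series; the dependency-free sketch below is a standard construction, not the paper's discrete-ordinates formulation (term count and scaling depth are illustrative):

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=20, squarings=10):
    """Matrix exponential by scaling-and-squaring with a truncated Taylor series."""
    n = len(A)
    s = 2.0 ** squarings
    B = [[a / s for a in row] for row in A]          # scale so the series converges fast
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms + 1):
        term = [[t / k for t in row] for row in mat_mul(term, B)]   # B^k / k!
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    for _ in range(squarings):                       # exp(A) = exp(A / 2^s)^(2^s)
        result = mat_mul(result, result)
    return result

# exp of the rotation generator [[0, 1], [-1, 0]] reproduces cos/sin of 1 radian
E = expm([[0.0, 1.0], [-1.0, 0.0]])
```

The repeated squaring at the end is the same doubling idea the abstract pairs the method with: the exponential of a thin layer is squared up to the full optical depth.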

  7. Instanton-based techniques for analysis and reduction of error floors of LDPC codes

    International Nuclear Information System (INIS)

    Chertkov, Michael; Chilappagari, Shashi K.; Stepanov, Mikhail G.; Vasic, Bane

    2008-01-01

    We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instanton of the same code but different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.

  8. Reduction of the elevator illusion from continued hypergravity exposure and visual error-corrective feedback

    Science.gov (United States)

    Welch, R. B.; Cohen, M. M.; DeRoshia, C. W.

    1996-01-01

    Ten subjects served as their own controls in two conditions of continuous, centrifugally produced hypergravity (+2 Gz) and a 1-G control condition. Before and after exposure, open-loop measures were obtained of (1) motor control, (2) visual localization, and (3) hand-eye coordination. During exposure in the visual feedback/hypergravity condition, subjects received terminal visual error-corrective feedback from their target pointing, and in the no-visual feedback/hypergravity condition they pointed open loop. As expected, the motor control measures for both experimental conditions revealed very short lived underreaching (the muscle-loading effect) at the outset of hypergravity and an equally transient negative aftereffect on returning to 1 G. The substantial (approximately 17 degrees) initial elevator illusion experienced in both hypergravity conditions declined over the course of the exposure period, whether or not visual feedback was provided. This effect was tentatively attributed to habituation of the otoliths. Visual feedback produced a smaller additional decrement and a postexposure negative after-effect, possible evidence for visual recalibration. Surprisingly, the target-pointing error made during hypergravity in the no-visual-feedback condition was substantially less than that predicted by subjects' elevator illusion. This finding calls into question the neural outflow model as a complete explanation of this illusion.

  9. Instanton-based techniques for analysis and reduction of error floor of LDPC codes

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory; Chilappagari, Shashi K [Los Alamos National Laboratory; Stepanov, Mikhail G [Los Alamos National Laboratory; Vasic, Bane [SENIOR MEMBER, IEEE

    2008-01-01

    We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instanton of the same code but different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.

  10. Fast quantum modular exponentiation

    International Nuclear Information System (INIS)

    Meter, Rodney van; Itoh, Kohei M.

    2005-01-01

    We present a detailed analysis of the impact on quantum modular exponentiation of architectural features and possible concurrent gate execution. Various arithmetic algorithms are evaluated for execution time, potential concurrency, and space trade-offs. We find that to exponentiate an n-bit number, for storage space 100n (20 times the minimum 5n), we can execute modular exponentiation 200-700 times faster than optimized versions of the basic algorithms, depending on architecture, for n=128. Addition on a neighbor-only architecture is limited to O(n) time, whereas non-neighbor architectures can reach O(log n), demonstrating that physical characteristics of a computing device have an important impact on both real-world running time and asymptotic behavior. Our results will help guide experimental implementations of quantum algorithms and devices
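
The quantum circuits analysed above implement reversible versions of the classical square-and-multiply ladder; for reference, a minimal classical sketch of that ladder (the quantum versions replace each multiplication with reversible modular-arithmetic networks):

```python
def mod_exp(base, exponent, modulus):
    """Right-to-left square-and-multiply: O(log exponent) modular multiplications."""
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                     # fold in the current exponent bit
            result = (result * base) % modulus
        base = (base * base) % modulus       # square for the next bit
        exponent >>= 1
    return result
```

For example, `mod_exp(7, 2005, 13)` agrees with Python's built-in three-argument `pow(7, 2005, 13)`.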

  11. Continuous multivariate exponential extension

    International Nuclear Information System (INIS)

    Block, H.W.

    1975-01-01

    The Freund-Weinman multivariate exponential extension is generalized to the case of nonidentically distributed marginal distributions. A fatal shock model is given for the resulting distribution. Results in the bivariate case and the concept of constant multivariate hazard rate lead to a continuous distribution related to the multivariate exponential distribution (MVE) of Marshall and Olkin. This distribution is shown to be a special case of the extended Freund-Weinman distribution. A generalization of the bivariate model of Proschan and Sullo leads to a distribution which contains both the extended Freund-Weinman distribution and the MVE
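
The fatal shock construction mentioned above can be sampled directly: independent exponential shocks kill component 1, component 2, or both at once, giving the Marshall-Olkin bivariate exponential. The rates below are hypothetical:

```python
import random

random.seed(7)

def mo_bve(lam1, lam2, lam12):
    """One draw from the Marshall-Olkin BVE via the fatal shock construction."""
    z1 = random.expovariate(lam1)     # shock killing component 1 only
    z2 = random.expovariate(lam2)     # shock killing component 2 only
    z12 = random.expovariate(lam12)   # common fatal shock killing both
    return min(z1, z12), min(z2, z12)

n = 200000
draws = [mo_bve(1.0, 2.0, 0.5) for _ in range(n)]

mean_x = sum(x for x, _ in draws) / n          # X ~ Exp(lam1 + lam12), mean 2/3
tie_frac = sum(x == y for x, y in draws) / n   # P(X = Y) = lam12 / (lam1 + lam2 + lam12)
```

The positive probability of exact ties (both lifetimes ending at the common shock) is the singular component that distinguishes the MVE from absolutely continuous bivariate exponentials.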

  12. A Bayesian approach for the stochastic modeling error reduction of magnetic material identification of an electromagnetic device

    International Nuclear Information System (INIS)

    Abdallh, A; Crevecoeur, G; Dupré, L

    2012-01-01

    Magnetic material properties of an electromagnetic device can be recovered by solving an inverse problem where measurements are adequately interpreted by a mathematical forward model. The accuracy of these forward models dramatically affects the accuracy of the material properties recovered by the inverse problem. The more accurate the forward model is, the more accurate recovered data are. However, the more accurate ‘fine’ models demand a high computational time and memory storage. Alternatively, less accurate ‘coarse’ models can be used with a demerit of the high expected recovery errors. This paper uses the Bayesian approximation error approach for improving the inverse problem results when coarse models are utilized. The proposed approach adapts the objective function to be minimized with the a priori misfit between fine and coarse forward model responses. In this paper, two different electromagnetic devices, namely a switched reluctance motor and an EI core inductor, are used as case studies. The proposed methodology is validated on both purely numerical and real experimental results. The results show a significant reduction in the recovery error within an acceptable computational time. (paper)
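
The Bayesian approximation error idea, precomputing the fine-coarse misfit over the prior and folding it into the objective, can be sketched on a one-parameter toy inversion. Both forward models and all numbers below are hypothetical stand-ins, not the paper's electromagnetic models:

```python
import random
import statistics

random.seed(3)

xs = [1.0, 2.0, 3.0, 4.0]            # hypothetical excitation levels

def fine(a, x):                       # accurate but "expensive" forward model
    return a * x / (1.0 + 0.1 * x)

def coarse(a, x):                     # cheap approximate forward model
    return a * x

# offline: sample the prior to estimate the mean fine-coarse discrepancy
prior = [random.uniform(0.5, 1.5) for _ in range(2000)]
eps_mean = [statistics.fmean(fine(a, x) - coarse(a, x) for a in prior) for x in xs]

a_true = 1.2
data = [fine(a_true, x) for x in xs]  # noise-free "measurements" from the fine model

def invert(correction):
    # 1-D least squares over a grid keeps the sketch dependency-free
    grid = [0.5 + 0.001 * i for i in range(1001)]
    return min(grid, key=lambda a: sum(
        (d - coarse(a, x) - c) ** 2 for d, x, c in zip(data, xs, correction)))

a_plain = invert([0.0] * len(xs))     # coarse model, no correction: biased
a_corrected = invert(eps_mean)        # coarse model + approximation-error mean
```

Even though only the cheap model is called during inversion, the precomputed discrepancy statistics pull the recovered parameter much closer to the truth, which is the recovery-error reduction the abstract reports.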

  13. Intelligent Engine Systems Work Element 1.2: Malfunction and Operator Error Reduction

    Science.gov (United States)

    Wiseman, Matthew

    2005-01-01

    Jet engines, although highly reliable and safe, do experience malfunctions that cause flight delays, passenger stress, and in some cases, in conjunction with inappropriate crew response, contribute to airplane accidents. On rare occasions, the anomalous engine behavior is not recognized until it is too late for the pilots to do anything to prevent or mitigate the resulting engine malfunction causing in-flight shutdowns (IFSDs), aborted takeoffs (ATOs), or loss of thrust control (LOTC). In some cases, the crew response to a myriad of external stimuli and existing training procedures is the source of the problem mentioned above. The problem is the reduction of jet engine malfunctions (IFSDs, ATOs, and LOTC) and inappropriate crew response (PSM+ICR) through the use of evolving and advanced technologies. The solution is to develop the overall system health maintenance architecture, detection and accommodation technologies, processes, and enhanced crew interfaces that would enable a significant reduction in IFSDs, ATOs, and LOTC. This program defines requirements and proposes a preliminary design concept of an architecture that enables the realization of the solution.

  14. SCIAMACHY WFM-DOAS XCO2: reduction of scattering related errors

    Directory of Open Access Journals (Sweden)

    R. Sussmann

    2012-10-01

    Full Text Available Global observations of column-averaged dry air mole fractions of carbon dioxide (CO2, denoted by XCO2 , retrieved from SCIAMACHY on-board ENVISAT can provide important and missing global information on the distribution and magnitude of regional CO2 surface fluxes. This application has challenging precision and accuracy requirements. In a previous publication (Heymann et al., 2012, it has been shown by analysing seven years of SCIAMACHY WFM-DOAS XCO2 (WFMDv2.1 that unaccounted thin cirrus clouds can result in significant errors. In order to enhance the quality of the SCIAMACHY XCO2 data product, we have developed a new version of the retrieval algorithm (WFMDv2.2, which is described in this manuscript. It is based on an improved cloud filtering and correction method using the 1.4 μm strong water vapour absorption and 0.76 μm O2-A bands. The new algorithm has been used to generate a SCIAMACHY XCO2 data set covering the years 2003–2009. The new XCO2 data set has been validated using ground-based observations from the Total Carbon Column Observing Network (TCCON. The validation shows a significant improvement of the new product (v2.2 in comparison to the previous product (v2.1. For example, the standard deviation of the difference to TCCON at Darwin, Australia, has been reduced from 4 ppm to 2 ppm. The monthly regional-scale scatter of the data (defined as the mean intra-monthly standard deviation of all quality filtered XCO2 retrievals within a radius of 350 km around various locations has also been reduced, typically by a factor of about 1.5. Overall, the validation of the new WFMDv2.2 XCO2 data product can be summarised by a single measurement precision of 3.8 ppm, an estimated regional-scale (radius of 500 km precision of monthly averages of 1.6 ppm and an estimated regional-scale relative accuracy of 0.8 ppm. In addition to the comparison with the limited number of TCCON sites, we also present a comparison with NOAA's global CO2 modelling

  15. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    Science.gov (United States)

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  16. Effect of double-shell structure on reduction of field errors in the STP-3(M) reversed-field pinch

    International Nuclear Information System (INIS)

    Yamada, S.; Masamune, S.; Nagata, A.; Arimoto, H.; Oshiyama, H.; Sato, K.I.

    1988-08-01

    Reversed-field pinch (RFP) operation on STP-3(M) proved that the addition of a quasi-stationary vertical field B⊥, together with a large reduction of the irregular magnetic field at the shell gap, could remarkably improve plasma confinement. Here, the gaps of the thick shell are wholly covered by a single shell-shaped primary coil. The measured field error at the gap is as small as 7.5% of the poloidal field. The application of B⊥ brings the plasma to a more nearly perfect equilibrium. In this operation, the plasma resistivity decreased by a factor of 2 and the electron temperature rose to 0.8 keV. (author)

  17. ESTIMATION OF PARAMETERS AND RELIABILITY FUNCTION OF EXPONENTIATED EXPONENTIAL DISTRIBUTION: BAYESIAN APPROACH UNDER GENERAL ENTROPY LOSS FUNCTION

    Directory of Open Access Journals (Sweden)

    Sanjay Kumar Singh

    2011-06-01

    Full Text Available In this paper we propose Bayes estimators of the parameters of the exponentiated exponential distribution and its reliability function under the general entropy loss function for Type II censored samples. The proposed estimators have been compared with the corresponding Bayes estimators obtained under the squared error loss function and with maximum likelihood estimators in terms of their simulated risks (average loss over the sample space).
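
Given posterior draws of a parameter θ, the standard Bayes estimator under general entropy loss is (E[θ^(-c)])^(-1/c), versus the posterior mean under squared error loss. A small Monte Carlo sketch (the Gamma posterior and the choice c = 1 are hypothetical illustrations, not the paper's setup):

```python
import random

random.seed(1)

# hypothetical posterior draws for a positive parameter (Gamma posterior)
theta = [random.gammavariate(5.0, 0.2) for _ in range(50000)]
n = len(theta)

c = 1.0                                          # GELF shape parameter (assumed)
gelf_est = (sum(t ** -c for t in theta) / n) ** (-1.0 / c)
self_est = sum(theta) / n                        # posterior mean = SELF estimator
```

With c > 0 the GELF estimator penalises overestimation more heavily, so it sits below the posterior mean (for c = 1 it is the posterior harmonic mean).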

  18. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    muhammad zahid rashid

    2011-04-01

    Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values for the parameters and different sample sizes.
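
Of the estimators compared above, the maximum likelihood estimators have a simple closed form: the location MLE is the smallest observation and the scale MLE is the sample mean minus that minimum. A quick simulation check (true parameter values are hypothetical):

```python
import random
import statistics

random.seed(11)

mu_true, theta_true = 2.0, 3.0     # location and scale (hypothetical values)
n = 5000
sample = [mu_true + random.expovariate(1.0 / theta_true) for _ in range(n)]

mu_hat = min(sample)                            # location MLE: smallest observation
theta_hat = statistics.fmean(sample) - mu_hat   # scale MLE: mean minus minimum
```

The location MLE is biased upward by θ/n on average, which is what the "modified" variants (MME, MMLE) in the paper are designed to correct.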

  19. Chapter 3: The analysis of exponential experiments

    International Nuclear Information System (INIS)

    Brown, G.; Moore, P.F.G.; Richmond, R.

    1963-01-01

    A description is given of the methods used by the BICEP group for the analysis of exponential experiments on graphite-moderated natural uranium lattices. These differ in some respects from the methods formerly employed at A.E.R.E. and have resulted in a reduction by a factor of four in the time taken to carry out and analyse an experiment. (author)
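
The core numerical step in analysing an exponential experiment is extracting the axial relaxation constant from the exponentially decaying flux along the column, which reduces to a log-linear least-squares fit. A sketch with hypothetical detector positions, count rates, and noise level:

```python
import math
import random

random.seed(5)

gamma_true = 0.04                              # cm^-1, hypothetical relaxation constant
zs = [20.0 * i for i in range(1, 11)]          # detector heights above the source (cm)
counts = [1000.0 * math.exp(-gamma_true * z) * random.uniform(0.98, 1.02) for z in zs]

# least-squares slope of ln(counts) versus z gives -gamma
logs = [math.log(c) for c in counts]
zbar = sum(zs) / len(zs)
lbar = sum(logs) / len(logs)
slope = (sum((z - zbar) * (l - lbar) for z, l in zip(zs, logs))
         / sum((z - zbar) ** 2 for z in zs))
gamma_hat = -slope
```

Fitting all detector positions at once, rather than taking ratios of adjacent readings, is one way such an analysis can be made both faster and less noise-sensitive.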

  20. Direction-dependent exponential biassing

    International Nuclear Information System (INIS)

    Bending, R.C.

    1974-01-01

    When Monte Carlo methods are applied to penetration problems, the use of variance reduction techniques is essential if realistic computing times are to be achieved. A technique known as direction-dependent exponential biassing is described which is simple to apply and therefore suitable for problems with difficult geometry. The material cross section in any region is multiplied by a factor which depends on the particle direction, so that particles travelling in a preferred direction ''see'' a smaller cross section than those travelling in the opposite direction. A theoretical study shows that substantial gains may be obtained, and that the choice of biassing parameter is not critical. The method has been implemented alongside other importance sampling techniques in the general Monte Carlo code SPARTAN, and results obtained for simple problems using this code are included. 4 references. (U.S.)
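
The essence of the technique, sampling free-flight distances from an artificially reduced cross section in the preferred direction and carrying a compensating weight, can be shown on a one-speed slab transmission toy problem (cross sections, thickness, and the biasing factor are all hypothetical; the full method makes the factor direction-dependent):

```python
import math
import random
import statistics

random.seed(9)

SIGMA, D = 1.0, 5.0     # true cross section (1/cm) and slab thickness (cm)
SIGMA_B = 0.3           # reduced cross section "seen" in the preferred direction
N = 200000

def transmission_sample(sig):
    """Score slab transmission with the likelihood-ratio weight."""
    x = random.expovariate(sig)                         # sampled free-flight distance
    w = (SIGMA / sig) * math.exp(-(SIGMA - sig) * x)    # true pdf / biased pdf
    return w if x > D else 0.0

analog = [transmission_sample(SIGMA) for _ in range(N)]      # unbiased weights = 1
biased = [transmission_sample(SIGMA_B) for _ in range(N)]

exact = math.exp(-SIGMA * D)    # analytic deep-penetration answer
gain = statistics.pvariance(analog) / statistics.pvariance(biased)
```

Both estimators are unbiased, but the biased one sends far more particles through the slab and compensates with weights below one, so its variance per history is much smaller for deep penetration.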

  1. Continuous exponential martingales and BMO

    CERN Document Server

    Kazamaki, Norihiko

    1994-01-01

    In three chapters on Exponential Martingales, BMO-martingales, and Exponential of BMO, this book explains in detail the beautiful properties of continuous exponential martingales that play an essential role in various questions concerning the absolute continuity of probability laws of stochastic processes. The second and principal aim is to provide a full report on the exciting results on BMO in the theory of exponential martingales. The reader is assumed to be familiar with the general theory of continuous martingales.

  2. Exponential smoothing weighted correlations

    Science.gov (United States)

    Pozzi, F.; Di Matteo, T.; Aste, T.

    2012-06-01

    In many practical applications, correlation matrices might be affected by the "curse of dimensionality" and by an excessive sensitiveness to outliers and remote observations. These shortcomings can cause problems of statistical robustness especially accentuated when a system of dynamic correlations over a running window is concerned. These drawbacks can be partially mitigated by assigning a structure of weights to observational events. In this paper, we discuss Pearson's ρ and Kendall's τ correlation matrices, weighted with an exponential smoothing, computed on moving windows using a data-set of daily returns for 300 NYSE highly capitalized companies in the period between 2001 and 2003. Criteria for jointly determining optimal weights together with the optimal length of the running window are proposed. We find that the exponential smoothing can provide more robust and reliable dynamic measures and we discuss that a careful choice of the parameters can reduce the autocorrelation of dynamic correlations whilst keeping significance and robustness of the measure. Weighted correlations are found to be smoother and recovering faster from market turbulence than their unweighted counterparts, helping also to discriminate more effectively genuine from spurious correlations.
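
The exponentially smoothed Pearson ρ described above amounts to an ordinary weighted correlation with geometrically decaying weights over the window. A minimal sketch (the smoothing factor is an illustrative choice, not the paper's jointly optimised value):

```python
import math

def ew_corr(x, y, alpha=0.97):
    """Pearson correlation with exponentially decaying weights (newest point last)."""
    n = len(x)
    w = [alpha ** (n - 1 - i) for i in range(n)]   # most recent observation weighs most
    s = sum(w)
    w = [wi / s for wi in w]
    mx = sum(wi * xi for wi, xi in zip(w, x))
    my = sum(wi * yi for wi, yi in zip(w, y))
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y))
    return cov / math.sqrt(vx * vy)
```

Sliding the window forward and recomputing `ew_corr` gives a dynamic correlation that discounts remote observations smoothly instead of dropping them abruptly at the window edge.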

  3. Improved variable reduction in partial least squares modelling by Global-Minimum Error Uninformative-Variable Elimination.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2017-08-22

    The calibration performance of Partial Least Squares regression (PLS) can be improved by eliminating uninformative variables. For PLS, many variable elimination methods have been developed. One is the Uninformative-Variable Elimination for PLS (UVE-PLS). However, the number of variables retained by UVE-PLS is usually still large. In UVE-PLS, variable elimination is repeated as long as the root mean squared error of cross validation (RMSECV) is decreasing. The set of variables in this first local minimum is retained. In this paper, a modification of UVE-PLS is proposed and investigated, in which UVE is repeated until no further reduction in variables is possible, followed by a search for the global RMSECV minimum. The method is called Global-Minimum Error Uninformative-Variable Elimination for PLS, denoted as GME-UVE-PLS or simply GME-UVE. After each iteration, the predictive ability of the PLS model, built with the remaining variable set, is assessed by RMSECV. The variable set with the global RMSECV minimum is then finally selected. The goal is to obtain smaller sets of variables with similar or improved predictability compared with those from the classical UVE-PLS method. The performance of the GME-UVE-PLS method is investigated using four data sets, i.e. a simulated set, NIR and NMR spectra, and a theoretical molecular descriptors set, resulting in twelve profile-response (X-y) calibrations. The selective and predictive performances of the models resulting from GME-UVE-PLS are statistically compared to those from UVE-PLS and 1-step UVE using one-sided paired t-tests. The results demonstrate that variable reduction with the proposed GME-UVE-PLS method usually eliminates significantly more variables than the classical UVE-PLS, while the predictive abilities of the resulting models are better. With GME-UVE-PLS, a lower number of uninformative variables, without a chemical meaning for the response, may be retained than with UVE-PLS. The selectivity of the classical UVE method

  4. Estimating exponential scheduling preferences

    DEFF Research Database (Denmark)

    Hjorth, Katrine; Börjesson, Maria; Engelson, Leonid

    2015-01-01

    Different assumptions about travelers' scheduling preferences yield different measures of the cost of travel time variability. Only few forms of scheduling preferences provide non-trivial measures which are additive over links in transport networks where link travel times are arbitrarily… of car drivers' route and mode choice under uncertain travel times. Our analysis exposes some important methodological issues related to complex non-linear scheduling models: One issue is identifying the point in time where the marginal utility of being at the destination becomes larger than the marginal… utility of being at the origin. Another issue is that models with the exponential marginal utility formulation suffer from empirical identification problems. Though our results are not decisive, they partly support the constant-affine specification, in which the value of travel time variability…

  5. Quantitative EEG analysis using error reduction ratio-causality test; validation on simulated and real EEG data.

    Science.gov (United States)

    Sarrigiannis, Ptolemaios G; Zhao, Yifan; Wei, Hua-Liang; Billings, Stephen A; Fotheringham, Jayne; Hadjivassiliou, Marios

    2014-01-01

    To introduce a new method of quantitative EEG analysis in the time domain, the error reduction ratio (ERR)-causality test. To compare performance against cross-correlation and coherence with phase measures. A simulation example was used as a gold standard to assess the performance of ERR-causality, against cross-correlation and coherence. The methods were then applied to real EEG data. Analysis of both simulated and real EEG data demonstrates that ERR-causality successfully detects dynamically evolving changes between two signals, with very high time resolution, dependent on the sampling rate of the data. Our method can properly detect both linear and non-linear effects, encountered during analysis of focal and generalised seizures. We introduce a new quantitative EEG method of analysis. It detects real time levels of synchronisation in the linear and non-linear domains. It computes directionality of information flow with corresponding time lags. This novel dynamic real time EEG signal analysis unveils hidden neural network interactions with a very high time resolution. These interactions cannot be adequately resolved by the traditional methods of coherence and cross-correlation, which provide limited results in the presence of non-linear effects and lack fidelity for changes appearing over small periods of time. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
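
In its simplest form, the error reduction ratio of a candidate regressor is the fraction of the output's energy that the regressor explains; the full ERR-causality test applies this over sliding windows with orthogonalised model terms, but the basic quantity can be sketched as follows (signals below are synthetic illustrations, not EEG):

```python
import math
import random

random.seed(4)

def err_single(w, y):
    """Error reduction ratio of one candidate regressor w for output y."""
    num = sum(a * b for a, b in zip(w, y)) ** 2
    den = sum(a * a for a in w) * sum(b * b for b in y)
    return num / den

t = range(400)
x1 = [math.sin(0.10 * k) for k in t]                  # signal actually driving y
x2 = [math.sin(1.70 * k) for k in t]                  # unrelated candidate
y = [0.8 * a + 0.05 * random.gauss(0.0, 1.0) for a in x1]

err_driver = err_single(x1, y)    # close to 1: x1 explains most of the output energy
err_noise = err_single(x2, y)     # close to 0: x2 contributes almost nothing
```

Ranking candidates by ERR in this way is what lets the method attribute directed influence between channels; repeating it on short successive windows yields the high time resolution the abstract emphasises.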

  6. Integration of large chemical kinetic mechanisms via exponential methods with Krylov approximations to Jacobian matrix functions

    KAUST Repository

    Bisetti, Fabrizio

    2012-06-01

    Recent trends in hydrocarbon fuel research indicate that the number of species and reactions in chemical kinetic mechanisms is rapidly increasing in an effort to provide predictive capabilities for fuels of practical interest. In order to cope with the computational cost associated with the time integration of stiff, large chemical systems, a novel approach is proposed. The approach combines an exponential integrator and Krylov subspace approximations to the exponential function of the Jacobian matrix. The components of the approach are described in detail and applied to the ignition of stoichiometric methane-air and iso-octane-air mixtures, here described by two widely adopted chemical kinetic mechanisms. The approach is found to be robust even at relatively large time steps and the global error displays a nominal third-order convergence. The performance of the approach is improved by utilising an adaptive algorithm for the selection of the Krylov subspace size, which guarantees an approximation to the matrix exponential within user-defined error tolerance. The Krylov projection of the Jacobian matrix onto a low-dimensional space is interpreted as a local model reduction with a well-defined error control strategy. Finally, the performance of the approach is discussed with regard to the optimal selection of the parameters governing the accuracy of its individual components. © 2012 Copyright Taylor and Francis Group, LLC.
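
    The key building block described above, approximating the action of the matrix exponential on a vector in a low-dimensional Krylov subspace, can be sketched with a plain Arnoldi iteration. This is a minimal illustration with a fixed subspace size m, not the paper's adaptive, error-controlled algorithm; it assumes NumPy and SciPy are available:

```python
import numpy as np
from scipy.linalg import expm

def expm_krylov(A, v, dt, m=20):
    """Approximate exp(dt*A) @ v in an m-dimensional Krylov subspace
    built by the Arnoldi process (sketch; assumes m <= len(v))."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = np.dot(V[:, i], w)
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown: invariant subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # projection: exp(dt*A) v  ≈  beta * V_m exp(dt*H_m) e_1
    eHm = expm(dt * H[:m, :m])
    return beta * V[:, :m] @ eHm[:, 0]
```

For a stiff chemical system, A would be the Jacobian of the reaction source term at the current state; the projection replaces the n-by-n exponential by a cheap m-by-m one, which is the "local model reduction" interpretation mentioned in the abstract.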

  7. Test Exponential Pile

    Science.gov (United States)

    Fermi, Enrico

    The Patent contains an extremely detailed description of an atomic pile employing natural uranium as fissile material and graphite as moderator. It starts with the discussion of the theory of the intervening phenomena, in particular the evaluation of the reproduction or multiplication factor, K, that is the ratio of the number of fast neutrons produced in one generation by the fissions to the original number of fast neutrons, in a system of infinite size. The possibility of having a self-maintaining chain reaction in a system of finite size depends both on the facts that K is greater than unity and the overall size of the system is sufficiently large to minimize the percentage of neutrons escaping from the system. After the description of a possible realization of such a pile (with many detailed drawings), the various kinds of neutron losses in a pile are depicted. Particularly relevant is the reported "invention" of the exponential experiment: since theoretical calculations can determine whether or not a chain reaction will occur in a give system, but can be invalidated by uncertainties in the parameters of the problem, an experimental test of the pile is proposed, aimed at ascertaining if the pile under construction would be divergent (i.e. with a neutron multiplication factor K greater than 1) by making measurements on a smaller pile. The idea is to measure, by a detector containing an indium foil, the exponential decrease of the neutron density along the length of a column of uranium-graphite lattice, where a neutron source is placed near its base. Such an exponential decrease is greater or less than that expected due to leakage, according to whether the K factor is less or greater than 1, so that this experiment is able to test the criticality of the pile, its accuracy increasing with the size of the column. 
In order to perform this measure a mathematical description of the effect of neutron production, diffusion, and absorption on the neutron density in the

  8. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    Science.gov (United States)

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
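
    The kind of error measurement discussed above is easy to experiment with numerically. The sketch below (an illustration of the idea, not the article's own analysis) interpolates e^x at equally spaced nodes and reports the maximum deviation over the interval:

```python
import numpy as np

def interp_error(f, a, b, n, samples=1001):
    """Max |p_n(x) - f(x)| on [a, b], where p_n is the degree-n
    polynomial interpolating f at n + 1 equally spaced nodes."""
    nodes = np.linspace(a, b, n + 1)
    coeffs = np.polyfit(nodes, f(nodes), n)   # n+1 points, degree n: exact fit
    x = np.linspace(a, b, samples)
    return np.max(np.abs(np.polyval(coeffs, x) - f(x)))

# the error shrinks rapidly as the degree grows
errors = {n: interp_error(np.exp, 0.0, 1.0, n) for n in (2, 3, 4, 5)}
```

For e^x on [0, 1] the maximum error falls quickly with each added node, consistent with the usual interpolation remainder bound involving the (n+1)st derivative.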

  9. Cosmology with exponential potentials

    International Nuclear Information System (INIS)

    Kehagias, Alex; Kofinas, Georgios

    2004-01-01

    We examine in the context of general relativity the dynamics of a spatially flat Robertson-Walker universe filled with a classical minimally coupled scalar field φ of exponential potential V(φ) ∼ exp(-μφ) plus pressureless baryonic matter. This system is reduced to a first-order ordinary differential equation for Ω_φ(w_φ) or q(w_φ), providing direct evidence on the acceleration/deceleration properties of the system. As a consequence, for positive potentials, passage into acceleration not at late times is generically a feature of the system for any value of μ, even when the late-times attractors are decelerating. Furthermore, the structure formation bound, together with the constraints Ω_{m0} ∼ 0.25 - 0.3, -1 ≤ w_{φ0} ≤ -0.6, provides, independently of initial conditions and other parameters, the necessary condition 0 N, while the less conservative constraint -1 ≤ w_φ ≤ -0.93 gives 0 N. Special solutions are found to possess intervals of acceleration. For the almost cosmological constant case w_φ ∼ -1, the general relation Ω_φ(w_φ) is obtained. The generic (non-linearized) late-times solution of the system in the plane (w_φ, Ω_φ) or (w_φ, q) is also derived

  10. OPINION: Safe exponential manufacturing

    Science.gov (United States)

    Phoenix, Chris; Drexler, Eric

    2004-08-01

    In 1959, Richard Feynman pointed out that nanometre-scale machines could be built and operated, and that the precision inherent in molecular construction would make it easy to build multiple identical copies. This raised the possibility of exponential manufacturing, in which production systems could rapidly and cheaply increase their productive capacity, which in turn suggested the possibility of destructive runaway self-replication. Early proposals for artificial nanomachinery focused on small self-replicating machines, discussing their potential productivity and their potential destructiveness if abused. In the light of controversy regarding scenarios based on runaway replication (so-called 'grey goo'), a review of current thinking regarding nanotechnology-based manufacturing is in order. Nanotechnology-based fabrication can be thoroughly non-biological and inherently safe: such systems need have no ability to move about, use natural resources, or undergo incremental mutation. Moreover, self-replication is unnecessary: the development and use of highly productive systems of nanomachinery (nanofactories) need not involve the construction of autonomous self-replicating nanomachines. Accordingly, the construction of anything resembling a dangerous self-replicating nanomachine can and should be prohibited. Although advanced nanotechnologies could (with great difficulty and little incentive) be used to build such devices, other concerns present greater problems. Since weapon systems will be both easier to build and more likely to draw investment, the potential for dangerous systems is best considered in the context of military competition and arms control.

  11. Multivariate Matrix-Exponential Distributions

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis

    2010-01-01

    …be written as linear combinations of the elements in the exponential of a matrix. For this reason we shall refer to multivariate distributions with rational Laplace transform as multivariate matrix-exponential distributions (MVME). The marginal distributions of an MVME are univariate matrix-exponential distributions. We prove a characterization that states that a distribution is an MVME distribution if and only if all non-negative, non-null linear combinations of the coordinates have a univariate matrix-exponential distribution. This theorem is analogous to a well-known characterization theorem…

  12. Hyperbolic Cosine–Exponentiated Exponential Lifetime Distribution and its Application in Reliability

    Directory of Open Access Journals (Sweden)

    Omid Kharazmi

    2017-02-01

    Recently, Kharazmi and Saadatinik (2016) introduced a new family of lifetime distributions called the hyperbolic cosine-F (HCF) distribution. The present paper focuses on a special case of the HCF family with the exponentiated exponential distribution as baseline (HCEE). Various properties of the proposed distribution, including explicit expressions for the moments, quantiles, mode, moment generating function, failure rate function, mean residual lifetime, order statistics and entropy, are derived. The parameters of the HCEE distribution are estimated by eight methods: maximum likelihood, Bayesian, maximum product of spacings, parametric bootstrap, non-parametric bootstrap, percentile, least-squares and weighted least-squares. A simulation study is conducted to examine the bias and mean square error of the maximum likelihood estimators. Finally, one real data set is analyzed for illustrative purposes, and it is observed that the proposed model fits better than the Weibull, gamma and generalized exponential distributions.

  13. An exponential observer for the generalized Rossler chaotic system

    International Nuclear Information System (INIS)

    Sun, Y.-J.

    2009-01-01

    In this paper, the generalized Rossler chaotic system is considered and the state observation problem of such a system is investigated. Based on the time-domain approach, a state observer for the generalized Rossler chaotic system is developed to guarantee the global exponential stability of the resulting error system. Moreover, the guaranteed exponential convergence rate can be arbitrarily pre-specified. Finally, a numerical example is provided to illustrate the feasibility and effectiveness of the obtained result.

  14. Transverse exponential stability and applications

    NARCIS (Netherlands)

    Andrieu, Vincent; Jayawardhana, Bayu; Praly, Laurent

    2016-01-01

    We investigate how the following properties are related to each other: i) A manifold is “transversally” exponentially stable; ii) The “transverse” linearization along any solution in the manifold is exponentially stable; iii) There exists a field of positive definite quadratic forms whose

  15. Reduction of determinate errors in mass bias-corrected isotope ratios measured using a multi-collector plasma mass spectrometer

    International Nuclear Information System (INIS)

    Doherty, W.

    2015-01-01

    A nebulizer-centric instrument response function model of the plasma mass spectrometer was combined with a signal drift model, and the result was used to identify the causes of the non-spectroscopic determinate errors remaining in mass bias-corrected Pb isotope ratios (Tl as internal standard) measured using a multi-collector plasma mass spectrometer. Model calculations, confirmed by measurement, show that the detectable time-dependent errors are a result of the combined effect of signal drift and differences in the coordinates of the Pb and Tl response function maxima (horizontal offset effect). If there are no horizontal offsets, then the mass bias-corrected isotope ratios are approximately constant in time. In the absence of signal drift, the response surface curvature and horizontal offset effects are responsible for proportional errors in the mass bias-corrected isotope ratios. The proportional errors will be different for different analyte isotope ratios and different at every instrument operating point. Consequently, mass bias coefficients calculated using different isotope ratios are not necessarily equal. The error analysis based on the combined model provides strong justification for recommending a three step correction procedure (mass bias correction, drift correction and a proportional error correction, in that order) for isotope ratio measurements using a multi-collector plasma mass spectrometer
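
    The first step of the recommended three-step procedure, the mass bias correction itself, is commonly performed with the exponential law. A minimal sketch follows; the certified 205Tl/203Tl value is the widely used NIST SRM 997 figure, the measured ratios are hypothetical, and all constants should be treated as illustrative:

```python
import math

# Certified 205Tl/203Tl ratio (NIST SRM 997) and nuclide masses;
# illustrative inputs, not metrological reference values.
R_CERT_TL = 2.3871
M_TL205, M_TL203 = 204.9744, 202.9723
M_PB208, M_PB206 = 207.9767, 205.9744

def mass_bias_beta(r_meas_tl):
    """Exponential-law mass bias factor derived from the Tl internal standard."""
    return math.log(R_CERT_TL / r_meas_tl) / math.log(M_TL205 / M_TL203)

def correct_ratio(r_meas, m_num, m_den, beta):
    """Apply the exponential law R_true = R_meas * (m_num / m_den)**beta."""
    return r_meas * (m_num / m_den) ** beta

beta = mass_bias_beta(2.42)                                # hypothetical measured Tl ratio
pb_208_206 = correct_ratio(2.168, M_PB208, M_PB206, beta)  # hypothetical measured Pb ratio
```

The abstract's point is precisely that a single β derived from Tl need not remove all bias from Pb ratios (horizontal offset and curvature effects), which motivates the subsequent drift and proportional-error corrections.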

  16. Daily Orthogonal Kilovoltage Imaging Using a Gantry-Mounted On-Board Imaging System Results in a Reduction in Radiation Therapy Delivery Errors

    Energy Technology Data Exchange (ETDEWEB)

    Russo, Gregory A., E-mail: gregory.russo@bmc.org [Department of Radiation Oncology, Boston Medical Center and Boston University School of Medicine, Boston, Massachusetts (United States); Qureshi, Muhammad M.; Truong, Minh-Tam; Hirsch, Ariel E.; Orlina, Lawrence; Bohrs, Harry; Clancy, Pauline; Willins, John; Kachnic, Lisa A. [Department of Radiation Oncology, Boston Medical Center and Boston University School of Medicine, Boston, Massachusetts (United States)

    2012-11-01

    Purpose: To determine whether the use of routine image guided radiation therapy (IGRT) using pretreatment on-board imaging (OBI) with orthogonal kilovoltage X-rays reduces treatment delivery errors. Methods and Materials: A retrospective review of documented treatment delivery errors from 2003 to 2009 was performed. Following implementation of IGRT in 2007, patients received daily OBI with orthogonal kV X-rays prior to treatment. The frequency of errors in the pre- and post-IGRT time frames was compared. Treatment errors (TEs) were classified as IGRT-preventable or non-IGRT-preventable. Results: A total of 71,260 treatment fractions were delivered to 2764 patients. A total of 135 (0.19%) TEs occurred in 39 (1.4%) patients (3.2% in 2003, 1.1% in 2004, 2.5% in 2005, 2% in 2006, 0.86% in 2007, 0.24% in 2008, and 0.22% in 2009). In 2007, the TE rate decreased by >50% and has remained low (P = .00007, compared to before 2007). Errors were classified as being potentially preventable with IGRT (e.g., incorrect site, patient, or isocenter) vs. not. No patients had any IGRT-preventable TEs from 2007 to 2009, whereas there were 9 from 2003 to 2006 (1 in 2003, 2 in 2004, 2 in 2005, and 4 in 2006; P = .0058) before the implementation of IGRT. Conclusions: IGRT implementation has a patient safety benefit with a significant reduction in treatment delivery errors. As such, we recommend the use of IGRT in routine practice to complement existing quality assurance measures.

  17. Generalized approach to non-exponential relaxation

    Indian Academy of Sciences (India)

    Non-exponential relaxation is a universal feature of systems as diverse as glasses, spin … which changes from a simple exponential to a stretched exponential and a power law by increasing the constraints in the system.

  18. Universality in stochastic exponential growth.

    Science.gov (United States)

    Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R

    2014-07-11

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
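
    The reaction-cycle structure described above can be simulated directly with the Gillespie algorithm. Below is a sketch of a two-species cycle in which each species catalyses production of the next; the rate constants and initial copy numbers are made up for illustration and are not the paper's parameterisation:

```python
import random

def shc_gillespie(k1=1.0, k2=1.0, x1=10, x2=10, t_end=5.0, seed=1):
    """Two-species stochastic Hinshelwood cycle:
    X1 -> X1 + X2 at rate k1*x1, and X2 -> X2 + X1 at rate k2*x2."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        a1, a2 = k1 * x1, k2 * x2        # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)         # exponential waiting time to next event
        if t >= t_end:
            return x1, x2
        if rng.random() < a1 / a0:
            x2 += 1                      # X1 catalyses production of an X2 copy
        else:
            x1 += 1                      # X2 catalyses production of an X1 copy
```

Copy numbers grow, on average, roughly exponentially at a rate set by the geometric mean of the rate constants; rescaling an ensemble of trajectories by its mean is the kind of collapse the paper analyses.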

  19. Increased Patient Satisfaction and a Reduction in Pre-Analytical Errors Following Implementation of an Electronic Specimen Collection Module in Outpatient Phlebotomy.

    Science.gov (United States)

    Kantartjis, Michalis; Melanson, Stacy E F; Petrides, Athena K; Landman, Adam B; Bates, David W; Rosner, Bernard A; Goonan, Ellen; Bixho, Ida; Tanasijevic, Milenko J

    2017-08-01

    Patient satisfaction in outpatient phlebotomy settings typically depends on wait time and venipuncture experience, and many patients equate their experiences with their overall satisfaction with the hospital. We compared patient service times and preanalytical errors pre- and postimplementation of an integrated electronic health record (EHR)-laboratory information system (LIS) and electronic specimen collection module. We also measured patient wait time and assessed patient satisfaction using a 5-question survey. The percentage of patients waiting less than 10 minutes increased from 86% preimplementation to 93% postimplementation of the EHR-LIS (P ≤.001). The median total service time decreased significantly, from 6 minutes (IQR, 4-8 minutes), to 5 minutes (IQR, 3-6 minutes) (P = .005). The preanalytical errors decreased significantly, from 3.20 to 1.93 errors per 1000 specimens (P ≤.001). Overall patient satisfaction improved, with an increase in excellent responses for all 5 questions (P ≤.001). We found several benefits of implementing an electronic specimen collection module, including decreased wait and service times, improved patient satisfaction, and a reduction in preanalytical errors. © American Society for Clinical Pathology, 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  20. Exponential Expansion in Evolutionary Economics

    DEFF Research Database (Denmark)

    Frederiksen, Peter; Jagtfelt, Tue

    2013-01-01

    This article attempts to solve current problems of conceptual fragmentation within the field of evolutionary economics. One of the problems, as noted by a number of observers, is that the field suffers from an assemblage of fragmented and scattered concepts (Boschma and Martin 2010). A solution to this problem is proposed in the form of a model of exponential expansion. The model outlines the overall structure and function of the economy as exponential expansion. The pictographic model describes four axiomatic concepts and their exponential nature. The interactive, directional, emerging and expanding concepts are described in detail. Taken together it provides the rudimentary aspects of an economic system within an analytical perspective. It is argued that the main dynamic processes of the evolutionary perspective can be reduced to these four concepts. The model and concepts are evaluated in the light…

  1. Exponential x-ray transform

    International Nuclear Information System (INIS)

    Hazou, I.A.

    1986-01-01

    In emission computed tomography one wants to determine the location and intensity of radiation emitted by sources in the presence of an attenuating medium. If the attenuation is known everywhere and equals a constant α in a convex neighborhood of the support of f, then the problem reduces to that of inverting the exponential x-ray transform P_α. The exponential x-ray transform P_μ, with the attenuation μ variable, is of interest mathematically. For the exponential x-ray transform in two dimensions, it is shown that for a large class of approximate δ functions E, convolution kernels K exist for use in the convolution backprojection algorithm. For the case where the attenuation is constant, exact formulas are derived for calculating the convolution kernels from radial point spread functions. From these an exact inversion formula for the constantly attenuated transform is obtained

  2. Rhodium SPND's Error Reduction using Extended Kalman Filter combined with Time Dependent Neutron Diffusion Equation

    International Nuclear Information System (INIS)

    Lee, Jeong Hun; Park, Tong Kyu; Jeon, Seong Su

    2014-01-01

    The Rhodium SPND is accurate in steady-state conditions but responds slowly to changes in neutron flux. The slow response time of the Rhodium SPND precludes its direct use for control and protection purposes, especially when a nuclear power plant is used for load following. To shorten the response time of the Rhodium SPND, several acceleration methods have been proposed, but they could not reflect the neutron flux distribution in the reactor core. On the other hand, some methods for core power distribution monitoring could not account for the slow response time of the Rhodium SPND and the effect of noise. In this paper, the time-dependent neutron diffusion equation is used directly to estimate the reactor power distribution, and an extended Kalman filter is used to correct the neutron flux measured by the Rhodium SPNDs and to shorten their response time. The extended Kalman filter is an effective tool for reducing the measurement error of Rhodium SPNDs, and even a simple FDM solution of the time-dependent neutron diffusion equation can serve as an effective measure. This method reduces the random errors of the detectors and can follow the reactor power level without cross-section changes, meaning the monitoring system need not recalculate cross-sections at every time step, so computing time is shortened. To minimize the delay of the Rhodium SPNDs, the conversion function h should be evaluated in a follow-up study. The neutron and Rh-103 reaction has several decay chains with half-lives over 40 seconds, causing detection delay; the time-dependent neutron diffusion equation will therefore be combined with the decay chains. Power level and distribution changes corresponding to control rod movement will be tested against a more complete reference code, as will the xenon effect. With these efforts, the final result is expected to serve as a powerful monitoring tool for the nuclear reactor core
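
    The flavour of the correction can be shown with a much-reduced stand-in: a linear Kalman filter that recovers a fast flux signal from a slow first-order detector. This is a sketch only; the first-order detector model and the values of tau, q and r are assumptions, and the paper's method couples the filter to a time-dependent neutron diffusion solver rather than this scalar lag model:

```python
import numpy as np

def kalman_flux_estimate(y, dt=0.1, tau=25.0, q=0.1, r=1e-4):
    """Linear Kalman filter estimating the flux phi driving a slow
    detector D with dD/dt = (phi - D)/tau, from measurements of D."""
    a = dt / tau
    F = np.array([[1.0, 0.0],
                  [a, 1.0 - a]])         # state = [flux, detector reading]
    H = np.array([[0.0, 1.0]])           # only the detector is measured
    Q = np.diag([q, 1e-8])               # flux modelled as a random walk
    R = np.array([[r]])
    x = np.array([y[0], y[0]])           # initialise from the first reading
    P = np.eye(2)
    est = []
    for yk in y:
        x = F @ x                        # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R              # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (yk - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)
```

Driving the filter with a step change in the true flux shows the flux estimate settling on the new level well before the raw detector signal does, which is the response-time shortening the abstract describes.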

  4. Exponential Potential versus Dark Matter

    Science.gov (United States)

    1993-10-15

    A two parameter exponential potential explains the anomalous kinematics of galaxies and galaxy clusters without need for the myriad ad hoc dark matter models currently in vogue. It also explains much about the scales and structures of galaxies and galaxy clusters while being quite negligible on the scale of the solar system. Keywords: Galaxy, Dark matter, Galaxy cluster, Gravitation, Quantum gravity.

  5. Phenomenology of stochastic exponential growth

    Science.gov (United States)

    Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya

    2017-06-01

    Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM, instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.

  6. The exponentiated generalized Pareto distribution | Adeyemi | Ife ...

    African Journals Online (AJOL)

    Recently Gupta et al. (1998) introduced the exponentiated exponential distribution as a generalization of the standard exponential distribution. In this paper, we introduce a three-parameter generalized Pareto distribution, the exponentiated generalized Pareto distribution (EGP). We present a comprehensive treatment of the ...

  7. Reduction of errors in radiotherapy: the E.F.O.M.P. approach (European federation of organisations for medical physics)

    International Nuclear Information System (INIS)

    Van Kleffens, H.; Van der Putten, W.

    2009-01-01

    This article is devoted to the current situation of training and education in medical physics in Europe, seen through the new perspectives and recommendations of the European Federation of Organisations for Medical Physics (E.F.O.M.P.). E.F.O.M.P. recommends that its members institute a five-year degree course (master's degree in medical physics) followed by two years of specialization in medical physics leading to the title of qualified medical physicist. The question of the total time needed to obtain this qualification is not settled (10 or 13 years) and could act as a brake on quality improvement because of the resulting lack of qualified medical physicists. E.F.O.M.P. also recommends that its members integrate a module on safety and risk analysis into the training of medical physics students, in order to reduce errors in health care in general and in radiotherapy in particular. (N.C.)

  8. Exponentially convergent state estimation for delayed switched recurrent neural networks.

    Science.gov (United States)

    Ahn, Choon Ki

    2011-11-01

    This paper deals with the delay-dependent exponentially convergent state estimation problem for delayed switched neural networks. A set of delay-dependent criteria is derived under which the resulting estimation error system is exponentially stable. It is shown that the gain matrix of the proposed state estimator is characterised in terms of the solution to a set of linear matrix inequalities (LMIs), which can be checked readily by using some standard numerical packages. An illustrative example is given to demonstrate the effectiveness of the proposed state estimator.

  9. Real-Time Exponential Curve Fits Using Discrete Calculus

    Science.gov (United States)

    Rowe, Geoffrey

    2010-01-01

    An improved solution for curve fitting data to an exponential equation (y = A·e^(Bt) + C) has been developed. This improvement is in four areas: speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = A·x^B + C and the general geometric growth equation y = A·k^(Bt) + C.
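
    One way to realize these three ideas can be sketched as follows: since y = A·e^(Bt) + C satisfies dy/dt = B·(y − C), a discrete derivative turns the problem into two ordinary linear fits. This is an illustration of the approach under the stated assumptions (reasonably dense, uniform sampling and y − C of constant sign), not the patented implementation itself:

```python
import numpy as np

def fit_exponential(t, y):
    """Non-iterative fit of y = A*exp(B*t) + C via two linear fits."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    dydt = np.gradient(y, t)                  # discrete-calculus derivative
    # dy/dt = B*(y - C): fit dy/dt against y to get B and -B*C
    slope, intercept = np.polyfit(y, dydt, 1)
    C = -intercept / slope
    # with C known, log|y - C| = log|A| + B*t is linear in t
    B, logA = np.polyfit(t, np.log(np.abs(y - C)), 1)
    A = np.sign(y[0] - C) * np.exp(logA)
    return A, B, C
```

Because both stages are closed-form least-squares fits, the method needs no starting guess and cannot diverge, which is the stability advantage the abstract claims for avoiding iteration.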

  10. Accelerating cosmologies from exponential potentials

    International Nuclear Information System (INIS)

    Neupane, Ishwaree P.

    2003-11-01

    It is learnt that exponential potentials of the form V ∼ exp(-2cφ/M p ) arising from the hyperbolic or flux compactification of higher-dimensional theories are of interest for getting short periods of accelerated cosmological expansions. Using a similar potential but derived for the combined case of hyperbolic-flux compactification, we study four-dimensional flat (or open) FRW cosmologies and give analytic (and numerical) solutions with exponential behavior of scale factors. We show that, for the M-theory motivated potentials, the cosmic acceleration of the universe can be eternal if the spatial curvature of the 4d spacetime is negative, while the acceleration is only transient for a spatially flat universe. We also briefly discuss the mass of massive Kaluza-Klein modes and the dynamical stabilization of the compact hyperbolic extra dimensions. (author)

  11. Science in an Exponential World

    Science.gov (United States)

    Szalay, Alexander

    The amount of scientific information is doubling every year. This exponential growth is fundamentally changing every aspect of the scientific process - the collection, analysis and dissemination of scientific information. Our traditional paradigm for scientific publishing assumes a linear world, where the number of journals and articles remains approximately constant. The talk presents the challenges of this new paradigm and shows examples of how some disciplines are trying to cope with the data avalanche. In astronomy, the Virtual Observatory is emerging as a way to do astronomy in the 21st century. Other disciplines are also in the process of creating their own Virtual Observatories, on every imaginable scale of the physical world. We will discuss how long this exponential growth can continue.

  12. Exponential asymptotics of homoclinic snaking

    International Nuclear Information System (INIS)

    Dean, A D; Matthews, P C; Cox, S M; King, J R

    2011-01-01

    We study homoclinic snaking in the cubic-quintic Swift–Hohenberg equation (SHE) close to the onset of a subcritical pattern-forming instability. Application of the usual multiple-scales method produces a leading-order stationary front solution, connecting the trivial solution to the patterned state. A localized pattern may therefore be constructed by matching between two distant fronts placed back-to-back. However, the asymptotic expansion of the front is divergent, and hence should be truncated. By truncating optimally, such that the resultant remainder is exponentially small, an exponentially small parameter range is derived within which stationary fronts exist. This is shown to be a direct result of the 'locking' between the phase of the underlying pattern and its slowly varying envelope. The locking mechanism remains unobservable at any algebraic order, and can only be derived by explicitly considering beyond-all-orders effects in the tail of the asymptotic expansion, following the method of Kozyreff and Chapman as applied to the quadratic-cubic SHE (Chapman and Kozyreff 2009 Physica D 238 319–54, Kozyreff and Chapman 2006 Phys. Rev. Lett. 97 44502). Exponentially small, but exponentially growing, contributions appear in the tail of the expansion, which must be included when constructing localized patterns in order to reproduce the full snaking diagram. Implicit within the bifurcation equations is an analytical formula for the width of the snaking region. Due to the linear nature of the beyond-all-orders calculation, the bifurcation equations contain an analytically indeterminable constant, estimated in the previous work by Chapman and Kozyreff using a best fit approximation. A more accurate estimate of the equivalent constant in the cubic-quintic case is calculated from the iteration of a recurrence relation, and the subsequent analytical bifurcation diagram compared with numerical simulations, with good agreement

  13. Limit laws for exponential families

    OpenAIRE

    Balkema, August A.; Klüppelberg, Claudia; Resnick, Sidney I.

    1999-01-01

    For a real random variable [math] with distribution function [math], define [math]. The distribution [math] generates a natural exponential family of distribution functions [math], where [math]. We study the asymptotic behaviour of the distribution functions [math] as [math] increases to [math]. If [math] then [math] pointwise on [math]. It may still be possible to obtain a non-degenerate weak limit law [math] by choosing suitable scaling and centring constants [math] an...

  14. A Formal Approach to the Selection by Minimum Error and Pattern Method for Sensor Data Loss Reduction in Unstable Wireless Sensor Network Communications.

    Science.gov (United States)

    Kim, Changhwa; Shin, DongHyun

    2017-05-12

    There are wireless networks in which communications are typically unsafe. Most terrestrial wireless sensor networks belong to this category of networks. Another example of an unsafe communication network is an underwater acoustic sensor network (UWASN). In UWASNs in particular, communication failures occur frequently and the failure durations can range from seconds up to a few hours, days, or even weeks. These communication failures can cause data losses significant enough to seriously damage human life or property, depending on their application areas. In this paper, we propose a framework to reduce sensor data loss during communication failures and we present a formal approach to the Selection by Minimum Error and Pattern (SMEP) method, which plays the most important role in reducing sensor data loss under the proposed framework. The SMEP method is compared with other methods to validate its effectiveness through experiments using real-field sensor data sets. Moreover, based on our experimental results and performance comparisons, the SMEP method has been validated to be better than the others in terms of the average sensor data value error rate caused by sensor data loss.

  15. Reduction of multi-dimensional laboratory data to a two-dimensional plot: a novel technique for the identification of laboratory error.

    Science.gov (United States)

    Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A

    2007-01-01

    The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.

  16. Exponential Communication Complexity Advantage from Quantum Superposition of the Direction of Communication

    Science.gov (United States)

    Guérin, Philippe Allard; Feix, Adrien; Araújo, Mateus; Brukner, Časlav

    2016-09-01

    In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability.

  17. Fully exponentially correlated wavefunctions for small atoms

    Energy Technology Data Exchange (ETDEWEB)

    Harris, Frank E. [Department of Physics, University of Utah, Salt Lake City, UT 84112 and Quantum Theory Project, University of Florida, P.O. Box 118435, Gainesville, FL 32611 (United States)

    2015-01-22

    Fully exponentially correlated atomic wavefunctions are constructed from exponentials in all the interparticle coordinates, in contrast to correlated wavefunctions of the Hylleraas form, in which only the electron-nuclear distances occur exponentially, with electron-electron distances entering only as integer powers. The full exponential correlation causes many-configuration wavefunctions to converge with expansion length more rapidly than either orbital formulations or correlated wavefunctions of the Hylleraas type. The present contribution surveys the effectiveness of fully exponentially correlated functions for the three-body system (the He isoelectronic series) and reports their application to a four-body system (the Li atom)

  18. Deformed exponentials and portfolio selection

    Science.gov (United States)

    Rodrigues, Ana Flávia P.; Guerreiro, Igor M.; Cavalcante, Charles Casimiro

    In this paper, we present a method for portfolio selection based on the consideration on deformed exponentials in order to generalize the methods based on the gaussianity of the returns in portfolio, such as the Markowitz model. The proposed method generalizes the idea of optimizing mean-variance and mean-divergence models and allows a more accurate behavior for situations where heavy-tails distributions are necessary to describe the returns in a given time instant, such as those observed in economic crises. Numerical results show the proposed method outperforms the Markowitz portfolio for the cumulated returns with a good convergence rate of the weights for the assets which are searched by means of a natural gradient algorithm.
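
    The Markowitz baseline that this work generalizes can be made concrete with a minimal sketch: the closed-form minimum-variance weights for a two-asset portfolio, w = S⁻¹1 / (1ᵀS⁻¹1). The covariance numbers below are illustrative, not taken from the paper.

```python
# Minimum-variance portfolio for two assets: w = S^{-1} 1 / (1' S^{-1} 1),
# the Markowitz baseline that the deformed-exponential approach generalizes.
# Covariance values are illustrative, not from the paper.

def min_variance_weights(s11, s22, s12):
    """Closed-form minimum-variance weights for a 2-asset covariance matrix."""
    det = s11 * s22 - s12 * s12
    # Components of S^{-1} applied to the ones vector.
    a = (s22 - s12) / det
    b = (s11 - s12) / det
    total = a + b
    return a / total, b / total

w1, w2 = min_variance_weights(0.04, 0.09, 0.0)   # uncorrelated assets
print(w1, w2)  # weights are inversely proportional to the variances
```

    With zero correlation the low-variance asset receives the larger weight, which is the behaviour heavy-tail-aware generalizations must reproduce in the Gaussian limit.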

  19. Coarse Grained Exponential Variational Autoencoders

    KAUST Repository

    Sun, Ke

    2017-02-25

    Variational autoencoders (VAE) often use a Gaussian or categorical distribution to model the inference process. This puts a limit on variational learning because this simplified assumption does not match the true posterior distribution, which is usually much more sophisticated. To break this limitation and apply arbitrary parametric distributions during inference, this paper derives a semi-continuous latent representation, which approximates a continuous density up to a prescribed precision, and is much easier to analyze than its continuous counterpart because it is fundamentally discrete. We showcase the proposition by applying polynomial exponential family distributions as the posterior, which are universal probability density function generators. Our experimental results show consistent improvements over commonly used VAE models.

  20. An Analysis of Decision Factors on the Price of South Korea’s Certified Emission Reductions in Use of Vector Error Correction Model

    Directory of Open Access Journals (Sweden)

    Sumin Park

    2017-09-01

    Full Text Available This study analyzes factors affecting the price of South Korea’s Certified Emission Reduction (CER using statistical methods. CER refers to the transaction price for the amount of carbon emitted. Analysis of results found a co-integration relationship among the price of South Korea’s CER, oil price (WTI, and South Korea’s maximum electric power demand, which means that there is a long-term relationship among the three variables. Based on this result, VECM (vector error correction model analysis, impulse response function, and variance decomposition were performed. As the oil price (WTI increases, the demand for gas in power generation in Korea declines while the demand for coal increases. This leads to increased greenhouse gas (GHG; e.g., CO2 emissions and increased price of South Korea’s CERs. In addition, rising oil prices (WTI cause a decline in demand for oil products such as kerosene, which results in an increase in South Korea’s maximum power demand.

  1. The decline and fall of Type II error rates

    Science.gov (United States)

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
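
    A minimal illustration of this decline, assuming a one-sided z-test with illustrative values (alpha = 0.05, standardized effect size d = 0.5; neither is from the paper): the Type II error probability is beta(n) = Phi(z_alpha - d*sqrt(n)), which falls off roughly exponentially in n.

```python
# Type II error of a one-sided z-test as a function of sample size n:
# beta(n) = Phi(z_alpha - d*sqrt(n)) for standardized effect size d.
# alpha = 0.05 and d = 0.5 are illustrative choices, not from the paper.
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

z_alpha = 1.6449  # upper 5% point of the standard normal
d = 0.5

betas = {n: phi(z_alpha - d * math.sqrt(n)) for n in (25, 50, 100)}
for n, b in sorted(betas.items()):
    print(n, b)   # beta shrinks roughly exponentially as n grows
```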

  2. Exponential Stabilization of Underactuated Vehicles

    Energy Technology Data Exchange (ETDEWEB)

    Pettersen, K.Y.

    1996-12-31

    Underactuated vehicles are vehicles with fewer independent control actuators than degrees of freedom to be controlled. Such vehicles may be used in inspection of sub-sea cables, inspection and maintenance of offshore oil drilling platforms, and similar. This doctoral thesis discusses feedback stabilization of underactuated vehicles. The main objective has been to further develop methods from stabilization of nonholonomic systems to arrive at methods that are applicable to underactuated vehicles. A nonlinear model including both dynamics and kinematics is used to describe the vehicles, which may be surface vessels, spacecraft or autonomous underwater vehicles (AUVs). It is shown that for a certain class of underactuated vehicles the stabilization problem is not solvable by linear control theory. A new stability result for a class of homogeneous time-varying systems is derived and shown to be an important tool for developing continuous periodic time-varying feedback laws that stabilize underactuated vehicles without involving cancellation of dynamics. For position and orientation control of a surface vessel without side thruster a new continuous periodic feedback law is proposed that does not cancel any dynamics, and that exponentially stabilizes the origin of the underactuated surface vessel. A further issue considered is the stabilization of the attitude of an AUV. Finally, the thesis discusses stabilization of both position and attitude of an underactuated AUV. 55 refs., 28 figs.

  3. Exponential Hilbert series of equivariant embeddings

    OpenAIRE

    Johnson, Wayne A.

    2018-01-01

    In this article, we study properties of the exponential Hilbert series of a $G$-equivariant projective variety, where $G$ is a semisimple, simply-connected complex linear algebraic group. We prove a relationship between the exponential Hilbert series and the degree and dimension of the variety. We then prove a combinatorial identity for the coefficients of the polynomial representing the exponential Hilbert series. This formula is used in examples to prove further combinatorial identities inv...

  4. Exponential time-dependent perturbation theory in rotationally inelastic scattering

    International Nuclear Information System (INIS)

    Cross, R.J.

    1983-01-01

    An exponential form of time-dependent perturbation theory (the Magnus approximation) is developed for rotationally inelastic scattering. A phase-shift matrix is calculated as an integral in time over the anisotropic part of the potential. The trajectory used for this integral is specified by the diagonal part of the potential matrix, the arithmetic average of the initial and final velocities, and the average orbital angular momentum. The exponential of the phase-shift matrix gives the scattering matrix and the various cross sections. A special representation is used where the orbital angular momentum is either treated classically or may be frozen out to yield the orbital sudden approximation. Calculations on Ar+N2 and Ar+TlF show that the theory generally gives very good agreement with accurate calculations, even where the orbital sudden approximation (coupled-states) results are seriously in error.
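
    The structure described above can be summarized schematically; the notation below is illustrative rather than quoted from the paper:

```latex
% Schematic form of the exponential (Magnus) approximation described above;
% the symbols are illustrative, not taken verbatim from the paper.
\begin{align}
  \eta &= -\frac{1}{\hbar}\int_{-\infty}^{\infty}
          V_{\mathrm{aniso}}\bigl(\mathbf{R}(t)\bigr)\,\mathrm{d}t
  && \text{(phase-shift matrix, integrated along the trajectory)} \\
  S &= e^{\,i\eta}
  && \text{(scattering matrix from the matrix exponential)}
\end{align}
```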

  5. Global impulsive exponential synchronization of stochastic perturbed chaotic delayed neural networks

    International Nuclear Information System (INIS)

    Hua-Guang, Zhang; Tie-Dong, Ma; Jie, Fu; Shao-Cheng, Tong

    2009-01-01

    In this paper, the global impulsive exponential synchronization problem of a class of chaotic delayed neural networks (DNNs) with stochastic perturbation is studied. Based on the Lyapunov stability theory, stochastic analysis approach and an efficient impulsive delay differential inequality, some new exponential synchronization criteria expressed in the form of the linear matrix inequality (LMI) are derived. The designed impulsive controller not only can globally exponentially stabilize the error dynamics in mean square, but also can control the exponential synchronization rate. Furthermore, to estimate the stable region of the synchronization error dynamics, a novel optimization control algorithm is proposed, which can deal with the minimum problem with two nonlinear terms coexisting in LMIs effectively. Simulation results finally demonstrate the effectiveness of the proposed method

  6. Survival analysis approach to account for non-exponential decay rate effects in lifetime experiments

    Energy Technology Data Exchange (ETDEWEB)

    Coakley, K.J., E-mail: kevincoakley@nist.gov [National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305 (United States); Dewey, M.S.; Huber, M.G. [National Institute of Standards and Technology, 100 Bureau Drive, Stop 8461, Gaithersburg, MD 20899 (United States); Huffer, C.R.; Huffman, P.R. [North Carolina State University, 2401 Stinson Drive, Box 8202, Raleigh, NC 27695 (United States); Triangle Universities Nuclear Laboratory, 116 Science Drive, Box 90308, Durham, NC 27708 (United States); Marley, D.E. [National Institute of Standards and Technology, 100 Bureau Drive, Stop 8461, Gaithersburg, MD 20899 (United States); North Carolina State University, 2401 Stinson Drive, Box 8202, Raleigh, NC 27695 (United States); Mumm, H.P. [National Institute of Standards and Technology, 100 Bureau Drive, Stop 8461, Gaithersburg, MD 20899 (United States); O'Shaughnessy, C.M. [University of North Carolina at Chapel Hill, 120 E. Cameron Ave., CB #3255, Chapel Hill, NC 27599 (United States); Triangle Universities Nuclear Laboratory, 116 Science Drive, Box 90308, Durham, NC 27708 (United States); Schelhammer, K.W. [North Carolina State University, 2401 Stinson Drive, Box 8202, Raleigh, NC 27695 (United States); Triangle Universities Nuclear Laboratory, 116 Science Drive, Box 90308, Durham, NC 27708 (United States); Thompson, A.K.; Yue, A.T. [National Institute of Standards and Technology, 100 Bureau Drive, Stop 8461, Gaithersburg, MD 20899 (United States)

    2016-03-21

    In experiments that measure the lifetime of trapped particles, in addition to loss mechanisms with exponential survival probability functions, particles can be lost by mechanisms with non-exponential survival probability functions. Failure to account for such loss mechanisms produces systematic measurement error and associated systematic uncertainties in these measurements. In this work, we develop a general competing risks survival analysis method to account for the joint effect of loss mechanisms with either exponential or non-exponential survival probability functions, and a method to quantify the size of systematic effects and associated uncertainties for lifetime estimates. As a case study, we apply our survival analysis formalism and method to the Ultra Cold Neutron lifetime experiment at NIST. In this experiment, neutrons can escape a magnetic trap before they decay due to a wall loss mechanism with an associated non-exponential survival probability function.
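
    The kind of systematic error described here can be illustrated with a minimal sketch: if a non-exponential loss channel (here a hypothetical Gaussian-in-time wall loss) is present but a single exponential is fitted anyway, the fitted decay rate is biased high. All rates, times, and the wall-loss form below are illustrative assumptions, not the NIST experiment's values.

```python
# True survival S(t) = exp(-g*t) * exp(-(t/tau)^2): exponential decay times a
# non-exponential (illustrative) wall-loss channel. A naive single-exponential
# fit to log S(t) then overestimates the decay rate g.
import math

g_true, tau = 1.0 / 880.0, 2000.0           # decay rate (1/s), wall-loss scale (s)
ts = [100.0 * k for k in range(1, 11)]       # observation times (s)

# log S(t) is linear in t only for a pure exponential; fit a line anyway.
ys = [-g_true * t - (t / tau) ** 2 for t in ts]
n = len(ts)
tbar, ybar = sum(ts) / n, sum(ys) / n
slope = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) \
        / sum((t - tbar) ** 2 for t in ts)
g_fit = -slope

print(g_fit, g_true)  # g_fit > g_true: systematic error from the ignored channel
```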

  7. Survival analysis approach to account for non-exponential decay rate effects in lifetime experiments

    International Nuclear Information System (INIS)

    Coakley, K.J.; Dewey, M.S.; Huber, M.G.; Huffer, C.R.; Huffman, P.R.; Marley, D.E.; Mumm, H.P.; O'Shaughnessy, C.M.; Schelhammer, K.W.; Thompson, A.K.; Yue, A.T.

    2016-01-01

    In experiments that measure the lifetime of trapped particles, in addition to loss mechanisms with exponential survival probability functions, particles can be lost by mechanisms with non-exponential survival probability functions. Failure to account for such loss mechanisms produces systematic measurement error and associated systematic uncertainties in these measurements. In this work, we develop a general competing risks survival analysis method to account for the joint effect of loss mechanisms with either exponential or non-exponential survival probability functions, and a method to quantify the size of systematic effects and associated uncertainties for lifetime estimates. As a case study, we apply our survival analysis formalism and method to the Ultra Cold Neutron lifetime experiment at NIST. In this experiment, neutrons can escape a magnetic trap before they decay due to a wall loss mechanism with an associated non-exponential survival probability function.

  8. Method for nonlinear exponential regression analysis

    Science.gov (United States)

    Junkin, B. G.

    1972-01-01

    Two computer programs developed according to two general types of exponential models for conducting nonlinear exponential regression analysis are described. Least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. Program is written in FORTRAN 5 for the Univac 1108 computer.
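
    The procedure described (least squares with the nonlinear problem linearized by a Taylor expansion) corresponds to a Gauss-Newton iteration. Below is a minimal sketch for the one-term model y = a·exp(b·x), using synthetic noise-free data and a log-linear initial guess; both choices are illustrative assumptions, not details of the original programs.

```python
# Fitting y = a * exp(b * x) by Taylor-series linearization (Gauss-Newton):
# start from a log-linear least-squares guess, then iterate on the normal
# equations of the linearized residual. Data are synthetic and illustrative.
import math

xs = [0.5 * k for k in range(10)]
ys = [2.0 * math.exp(-0.5 * x) for x in xs]          # noise-free test data

# Initial guess from ln(y) = ln(a) + b*x (ordinary least squares).
n = len(xs)
ls = [math.log(y) for y in ys]
xbar, lbar = sum(xs) / n, sum(ls) / n
b = sum((x - xbar) * (l - lbar) for x, l in zip(xs, ls)) \
    / sum((x - xbar) ** 2 for x in xs)
a = math.exp(lbar - b * xbar)

# Gauss-Newton refinement: solve the 2x2 system (J^T J) delta = J^T r.
for _ in range(20):
    r = [y - a * math.exp(b * x) for x, y in zip(xs, ys)]
    J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]   # d/da, d/db
    jtj = [[sum(j[i] * j[k] for j in J) for k in range(2)] for i in range(2)]
    jtr = [sum(j[i] * ri for j, ri in zip(J, r)) for i in range(2)]
    det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
    da = (jtj[1][1] * jtr[0] - jtj[0][1] * jtr[1]) / det
    db = (jtj[0][0] * jtr[1] - jtj[1][0] * jtr[0]) / det
    a, b = a + da, b + db

print(a, b)  # recovers a = 2.0, b = -0.5
```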

  9. Multivariate Marshall and Olkin Exponential Minification Process ...

    African Journals Online (AJOL)

    A stationary bivariate minification process with bivariate Marshall-Olkin exponential distribution that was earlier studied by Miroslav et al [15] is in this paper extended to a multivariate minification process with multivariate Marshall and Olkin exponential distribution as its stationary marginal distribution. The innovation and the ...

  10. On Uniform Exponential Trichotomy in Banach Spaces

    Directory of Open Access Journals (Sweden)

    Kovacs Monteola Ilona

    2014-06-01

    Full Text Available In this paper we consider three concepts of uniform exponential trichotomy on the half-line in the general framework of evolution operators in Banach spaces. We obtain a systematic classification of uniform exponential trichotomy concepts and the connections between them.

  11. Dual exponential polynomials and linear differential equations

    Science.gov (United States)

    Wen, Zhi-Tao; Gundersen, Gary G.; Heittokangas, Janne

    2018-01-01

    We study linear differential equations with exponential polynomial coefficients, where exactly one coefficient is of order greater than all the others. The main result shows that a nontrivial exponential polynomial solution of such an equation has a certain dual relationship with the maximum order coefficient. Several examples illustrate our results and exhibit possibilities that can occur.

  12. Does proton decay follow the exponential law

    International Nuclear Information System (INIS)

    Sanchez-Gomez, J.L.; Alvarez-Estrada, R.F.; Fernandez, L.A.

    1984-01-01

    In this paper, we discuss the exponential law for proton decay. Using a simple model based upon SU(5) GUT and current theories of hadron structure, we explicitly show that the corrections to the Wigner-Weisskopf approximation are quite negligible for present-day protons, so that their eventual decay should follow the exponential law. Previous works are critically analyzed. (orig.)

  13. A quantification of the hazards of fitting sums of exponentials to noisy data

    International Nuclear Information System (INIS)

    Bromage, G.E.

    1983-06-01

    The ill-conditioned nature of sums-of-exponentials analyses is confirmed and quantified, using synthetic noisy data. In particular, the magnification of errors is plotted for various two-exponential models, to illustrate its dependence on the ratio of decay constants, and on the ratios of amplitudes of the contributing terms. On moving from two- to three-exponential models, the condition deteriorates badly. It is also shown that the use of 'direct' Prony-type analyses (rather than general iterative nonlinear optimisation) merely aggravates the condition. (author)
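
    The ill-conditioning can be quantified with a small sketch: the condition number of the Gram (normal-equations) matrix for two exponential basis columns blows up as the two decay constants approach each other. The time grid and decay constants below are illustrative.

```python
# Error magnification in two-exponential fits: the 2x2 Gram matrix of the
# design columns e^{-l1*t}, e^{-l2*t} becomes ill-conditioned as the decay
# constants approach each other. Grid and constants are illustrative.
import math

def gram_condition(l1, l2, ts):
    """Condition number of the Gram matrix for columns e^{-l1 t}, e^{-l2 t}."""
    a = sum(math.exp(-2 * l1 * t) for t in ts)
    b = sum(math.exp(-(l1 + l2) * t) for t in ts)
    c = sum(math.exp(-2 * l2 * t) for t in ts)
    disc = math.sqrt((a - c) ** 2 + 4 * b * b)
    lmax, lmin = (a + c + disc) / 2, (a + c - disc) / 2
    return lmax / lmin

ts = [0.5 * k for k in range(11)]
print(gram_condition(1.0, 1.1, ts))  # close decay constants: large condition number
print(gram_condition(1.0, 3.0, ts))  # well separated: modest condition number
```

    Small relative errors in the data are magnified by roughly the condition number, which is why the two-exponential problem is already delicate and the three-exponential problem far worse.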

  14. Estimation of exponential convergence rate and exponential stability for neural networks with time-varying delay

    International Nuclear Information System (INIS)

    Tu Fenghua; Liao Xiaofeng

    2005-01-01

    We study the problem of estimating the exponential convergence rate and exponential stability for neural networks with time-varying delay. Some criteria for exponential stability are derived by using the linear matrix inequality (LMI) approach. They are less conservative than the existing ones. Some analytical methods are employed to investigate the bounds on the interconnection matrix and activation functions so that the systems are exponentially stable

  15. Arima model and exponential smoothing method: A comparison

    Science.gov (United States)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study shows the comparison between the Autoregressive Moving Average (ARIMA) model and the Exponential Smoothing Method in making a prediction. The comparison is focused on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, the data from The Price of Crude Palm Oil (RM/tonne), Exchange Rates of Ringgit Malaysia (RM) in comparison to Great Britain Pound (GBP) and also The Price of SMR 20 Rubber Type (cents/kg) with three different time series are used in the comparison process. Then, the forecasting accuracy of each model is measured by examining the prediction errors produced, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the time series for Exchange Rates. On the contrary, the Exponential Smoothing Method can produce better forecasts for Exchange Rates, whose time series has a narrow range from one point to another, while it cannot produce a better prediction for a longer forecasting period.
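
    A minimal sketch of the exponential smoothing side of such a comparison, scored with the three error measures named above; the series and smoothing constant are illustrative, not the study's data.

```python
# Simple exponential smoothing with one-step-ahead forecasts, scored by the
# MSE / MAPE / MAD criteria. Series and smoothing constant are illustrative.

def ses_forecasts(ys, alpha):
    """One-step-ahead forecasts: fs[t-1] predicts ys[t], level starts at ys[0]."""
    level, fs = ys[0], []
    for y in ys[1:]:
        fs.append(level)                       # forecast for this observation
        level = alpha * y + (1 - alpha) * level
    return fs

ys, alpha = [10.0, 12.0, 11.0], 0.5
fs = ses_forecasts(ys, alpha)                  # forecasts for ys[1:]
errs = [y - f for y, f in zip(ys[1:], fs)]

mse = sum(e * e for e in errs) / len(errs)
mad = sum(abs(e) for e in errs) / len(errs)
mape = 100.0 * sum(abs(e) / y for e, y in zip(errs, ys[1:])) / len(errs)
print(mse, mad, mape)  # 2.0, 1.0, 8.33...%
```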

  16. Stability Analysis and H∞ Model Reduction for Switched Discrete-Time Time-Delay Systems

    Directory of Open Access Journals (Sweden)

    Zheng-Fan Liu

    2014-01-01

    Full Text Available This paper is concerned with the problem of exponential stability and H∞ model reduction of a class of switched discrete-time systems with state time-varying delay. Some subsystems can be unstable. Based on the average dwell time technique and Lyapunov-Krasovskii functional (LKF approach, sufficient conditions for exponential stability with H∞ performance of such systems are derived in terms of linear matrix inequalities (LMIs. For the high-order systems, sufficient conditions for the existence of reduced-order model are derived in terms of LMIs. Moreover, the error system is guaranteed to be exponentially stable and an H∞ error performance is guaranteed. Numerical examples are also given to demonstrate the effectiveness and reduced conservatism of the obtained results.

  17. Exponential Stability of Switched Positive Homogeneous Systems

    Directory of Open Access Journals (Sweden)

    Dadong Tian

    2017-01-01

    Full Text Available This paper studies the exponential stability of switched positive nonlinear systems defined by cooperative and homogeneous vector fields. In order to capture the decay rate of such systems, we first consider the subsystems. A sufficient condition for exponential stability of subsystems with time-varying delays is derived. In particular, for the corresponding delay-free systems, we prove that this sufficient condition is also necessary. Then, we present a sufficient condition of exponential stability under minimum dwell time switching for the switched positive nonlinear systems. Some results in the previous literature are extended. Finally, a numerical example is given to demonstrate the effectiveness of the obtained results.

  18. Exponential Frequency Spectrum in Magnetized Plasmas

    International Nuclear Information System (INIS)

    Pace, D. C.; Shi, M.; Maggs, J. E.; Morales, G. J.; Carter, T. A.

    2008-01-01

    Measurements of a magnetized plasma with a controlled electron temperature gradient show the development of a broadband spectrum of density and temperature fluctuations having an exponential frequency dependence at frequencies below the ion cyclotron frequency. The origin of the exponential frequency behavior is traced to temporal pulses of Lorentzian shape. Similar exponential frequency spectra are also found in limiter-edge plasma turbulence associated with blob transport. This finding suggests a universal feature of magnetized plasma turbulence leading to nondiffusive, cross-field transport, namely, the presence of Lorentzian shaped pulses

  19. Approximation of the exponential integral (well function) using sampling methods

    Science.gov (United States)

    Baalousha, Husam Musa

    2015-04-01

    Exponential integral (also known as well function) is often used in hydrogeology to solve Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods; Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH) have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error RMSE of OA was in the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
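
    A sketch of the sampling idea, assuming a one-dimensional stratified (midpoint) design rather than the paper's LHS/OA implementations: the substitution t = x/u maps E1(x) = ∫ₓ^∞ e⁻ᵗ/t dt onto the unit interval as ∫₀¹ e^(−x/u)/u du, where each stratum contributes one sample.

```python
# Stratified (midpoint) sampling estimate of the exponential integral
# E1(x) = int_x^inf e^{-t}/t dt, via the substitution t = x/u, which maps it
# to int_0^1 e^{-x/u}/u du on the unit interval. Illustrative sketch only.
import math

def e1_sampled(x, n=20000):
    """Estimate E1(x) by averaging the transformed integrand at n midpoints."""
    total = 0.0
    for i in range(n):
        u = (i + 0.5) / n                 # one stratified sample per stratum
        total += math.exp(-x / u) / u
    return total / n

print(e1_sampled(1.0))  # E1(1) = 0.2193839...
```

    Any argument x > 0 works, since the transformed integrand is bounded and vanishes as u → 0.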

  20. On the formation of exponential discs

    International Nuclear Information System (INIS)

    Yoshii, Yuzuru; Sommer-Larsen, Jesper

    1989-01-01

    Spiral galaxy discs are characterized by approximately exponential surface luminosity profiles. In this paper the evolutionary equations for a star-forming, viscous disc are solved analytically or semi-analytically. It is shown that approximately exponential stellar surface density profiles result if the viscous time-scale t_ν is comparable to the star-formation time-scale t_* everywhere in the disc. The analytical solutions are used to further illuminate why the above mechanism leads to exponential stellar profiles under certain conditions. The sensitivity of the solution to variations of various parameters is investigated, showing that the initial gas surface density distribution has to be fairly regular in order that final exponential stellar surface density profiles result. (author)

  1. Exponential attractors for a nonclassical diffusion equation

    Directory of Open Access Journals (Sweden)

    Qiaozhen Ma

    2009-01-01

    Full Text Available In this article, we prove the existence of exponential attractors for a nonclassical diffusion equation in $H^{2}(\Omega)\cap H^{1}_{0}(\Omega)$ when the space dimension is less than 4.

  2. Central limit theorem and deformed exponentials

    International Nuclear Information System (INIS)

    Vignat, C; Plastino, A

    2007-01-01

    The central limit theorem (CLT) can be ranked among the most important ones in probability theory and statistics and plays an essential role in several basic and applied disciplines, notably in statistical thermodynamics. We show that there exists a natural extension of the CLT from exponentials to so-called deformed exponentials (also denoted as q-Gaussians). Our proposal applies exactly in the usual conditions in which the classical CLT is used. (fast track communication)

  3. Sampling from the normal and exponential distributions

    International Nuclear Information System (INIS)

    Chaplin, K.R.; Wills, C.A.

    1982-01-01

    Methods for generating random numbers from the normal and exponential distributions are described. These involve dividing each function into subregions, and for each of these developing a method of sampling usually based on an acceptance rejection technique. When sampling from the normal or exponential distribution, each subregion provides the required random value with probability equal to the ratio of its area to the total area. Procedures written in FORTRAN for the CYBER 175/CDC 6600 system are provided to implement the two algorithms
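
    A minimal sketch in this spirit: inverse-transform sampling for the exponential distribution, and a classic acceptance-rejection scheme for the normal using an Exp(1) envelope (accept x with probability exp(-(x-1)²/2), then attach a random sign). This is a standard textbook scheme, not the subregion procedures of the CYBER/CDC FORTRAN code.

```python
# Inverse-transform sampling for Exp(rate), and acceptance-rejection for the
# standard normal with an Exp(1) envelope. Illustrative textbook scheme.
import math
import random

def sample_exponential(rng, rate=1.0):
    """Inverse CDF: F^{-1}(u) = -ln(1-u)/rate."""
    return -math.log(1.0 - rng.random()) / rate

def sample_normal(rng):
    """Half-normal by rejection from Exp(1), then a random sign."""
    while True:
        x = sample_exponential(rng)                  # envelope draw
        if rng.random() <= math.exp(-0.5 * (x - 1.0) ** 2):
            return x if rng.random() < 0.5 else -x   # random sign

rng = random.Random(42)
zs = [sample_normal(rng) for _ in range(100000)]
es = [sample_exponential(rng) for _ in range(100000)]
print(sum(zs) / len(zs), sum(es) / len(es))  # near 0 and 1
```

    The acceptance probability e^{-(x-1)²/2} comes from bounding the half-normal density by √(2e/π)·e^{-x}, so roughly 76% of envelope draws are accepted.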

  4. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  5. BET 2: Poor evidence on whether teaching cognitive debiasing, or cognitive forcing strategies, lead to a reduction in errors attributable to cognition in emergency medicine students or doctors.

    Science.gov (United States)

    Oliver, Govind; Oliver, Gopal; Body, Rick

    2017-08-01

    A short review was carried out to see if teaching cognitive forcing strategies reduces cognitive error in the practice of emergency medicine. Two relevant papers were found using the described search strategy. The author, date and country of publication, patient group studied, study type, relevant outcomes, results and study weaknesses of these papers are tabulated. There is currently little evidence that teaching cognitive forcing strategies reduces cognitive error in the practice of emergency medicine. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  6. Implicit and fully implicit exponential finite difference methods

    Indian Academy of Sciences (India)

    Burgers' equation; exponential finite difference method; implicit exponential finite difference method; ... This paper describes two new techniques which give improved exponential finite difference solutions of Burgers' equation. ...

  7. The technological singularity and exponential medicine

    Directory of Open Access Journals (Sweden)

    Iraj Nabipour

    2016-01-01

    Full Text Available The "technological singularity" is forecast to occur in 2045. It is the point when non-biological intelligence becomes more intelligent than humans and each generation of intelligent machines re-designs itself smarter. Beyond this point, there is a symbiosis between machines and humans. This co-existence will have incredible impacts on medicine, the first sparks of which can be seen in the healthcare industry and the medicine of the future from 2025 onward. Ray Kurzweil, the great futurist, suggested that three revolutions in science and technology, consisting of genetics and molecular science, nanotechnology, and robotics (artificial intelligence), provide an exponential growth rate for medicine. This "exponential medicine" is going to create more disruptive technologies in the healthcare industry. Exponential medicine shifts the paradigm of medical philosophy and produces significant impacts on the healthcare system and the patient-physician relationship.

  8. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    In addition, a combination of component failure and human error is often found at spectacular events. The Rasmussen Report and the German Risk Assessment Study in particular show, for pressurised water reactors, that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  9. Exponential and Critical Experiments Vol. II. Proceedings of the Symposium on Exponential and Critical Experiments

    International Nuclear Information System (INIS)

    1964-01-01

    In September 1963 the International Atomic Energy Agency organized the Symposium on Exponential and Critical Experiments in Amsterdam, Netherlands, at the invitation of the Government of the Netherlands. The Symposium enabled scientists from Member States to discuss the results of such experiments, which provide the physics data necessary for the design of power reactors. Great advances made in recent years in this field have provided scientists with highly sophisticated and reliable experimental and theoretical methods. This trend is reflected in the presentation, at the Symposium, of many new experimental techniques resulting in more detailed and accurate information and a reduction of costs. Both the number of experimental parameters and their range of variation have been extended, and a closer degree of simulation of the actual power reactor has been achieved, for example, by means of high temperature critical assemblies. Basic types of lattices have continued to be the objective of many investigations, and extensive theoretical analyses have been carried out to provide a more thorough understanding of the neutron physics involved. Twenty-nine countries and 3 international organizations were represented by 198 participants, and seventy-one papers were presented. These numbers alone show the wide interest which the topic commands in the field of reactor design. We hope that this publication, which includes the papers presented at the Symposium and a record of the discussions, will prove useful as a work of reference to scientists working in this field.

  10. Liver fibrosis: stretched exponential model outperforms mono-exponential and bi-exponential models of diffusion-weighted MRI.

    Science.gov (United States)

    Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin

    2018-07-01

    To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3 T magnetic resonance. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model; the true diffusion coefficient (Dt), pseudo-diffusion coefficient (Dp) and perfusion fraction (f) from a bi-exponential model; and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristic (ROC) analysis. The measurement variability of the DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measure, 0.770 ± 0.03), which was significantly higher than that of ADC (0.597 ± 0.05). Key points: the stretched exponential model outperformed the mono-exponential and bi-exponential DWI models; acquisition of six b values is sufficient to obtain accurate DDC and α.
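
    As an illustration of the stretched exponential signal model referenced above, the following sketch fits DDC and α to synthetic data; the b-values and tissue parameters are hypothetical (not taken from the study), and scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(b, s0, ddc, alpha):
    # Stretched exponential DWI signal model: S(b) = S0 * exp(-(b * DDC)**alpha)
    return s0 * np.exp(-((b * ddc) ** alpha))

# Hypothetical b-values (s/mm^2) and synthetic tissue parameters
b = np.array([0, 25, 50, 100, 200, 400, 600, 800, 1000], dtype=float)
signal = stretched_exp(b, 1.0, 1.5e-3, 0.8)

# Fit S0, DDC, and alpha; bounds keep the fractional power well defined
popt, _ = curve_fit(stretched_exp, b, signal, p0=[1.0, 1.0e-3, 0.9],
                    bounds=([0.0, 1e-6, 0.1], [10.0, 1e-1, 1.0]))
s0_fit, ddc_fit, alpha_fit = popt
```

    On noiseless synthetic data the fit recovers the generating parameters; with clinical noise levels the choice of b-value set matters, which is the point the abstract makes about the six-b-value subset.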

  11. A method for nonlinear exponential regression analysis

    Science.gov (United States)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix is derived and applied to the nominal estimates to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
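
    The procedure described (log-linear initial estimates followed by iterated Taylor-series corrections, i.e. Gauss-Newton) can be sketched as follows; the decay data are synthetic and the convergence threshold is an arbitrary choice, not one from the paper.

```python
import numpy as np

# Decay-type data y = a * exp(b * t), hypothetical true parameters
t = np.linspace(0.0, 5.0, 20)
y = 3.0 * np.exp(-0.7 * t)

# Step 1: initial nominal estimates from a linear fit to log(y)
b0, log_a0 = np.polyfit(t, np.log(y), 1)
a, b = np.exp(log_a0), b0

# Step 2: iterate linearized (Gauss-Newton) corrections
for _ in range(50):
    model = a * np.exp(b * t)
    r = y - model  # residuals
    # Jacobian of the model with respect to (a, b): the Taylor linearization
    J = np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])
    delta, *_ = np.linalg.lstsq(J, r, rcond=None)  # correction vector
    a, b = a + delta[0], b + delta[1]
    if np.max(np.abs(delta)) < 1e-12:  # predetermined stopping criterion
        break
```

    The log-linear first step is what the abstract calls the "initial nominal estimates"; on noisy data it is biased, which is why the correction iterations are needed.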

  12. Quantum Zeno effect for exponentially decaying systems

    International Nuclear Information System (INIS)

    Koshino, Kazuki; Shimizu, Akira

    2004-01-01

    The quantum Zeno effect - suppression of decay by frequent measurements - was believed to occur only when the response of the detector is so quick that the initial tiny deviation from the exponential decay law is detectable. However, we show that it can occur even for exactly exponentially decaying systems, for which this condition is never satisfied, by considering a realistic case where the detector has a finite energy band of detection. The conventional theories correspond to the limit of an infinite bandwidth. This implies that the Zeno effect occurs more widely than expected thus far

  13. Calorimeter prediction based on multiple exponentials

    International Nuclear Information System (INIS)

    Smith, M.K.; Bracken, D.S.

    2002-01-01

    Calorimetry allows very precise measurements of nuclear material to be carried out, but it also requires relatively long measurement times to do so. The ability to accurately predict the equilibrium response of a calorimeter would significantly reduce the amount of time required for calorimetric assays. An algorithm has been developed that is effective at predicting the equilibrium response. This multi-exponential prediction algorithm is based on an iterative technique using commercial fitting routines that fit a constant plus a variable number of exponential terms to calorimeter data. Details of the implementation and the results of trials on a large number of calorimeter data sets will be presented

  14. Exponential Growth of Nonlinear Ballooning Instability

    International Nuclear Information System (INIS)

    Zhu, P.; Hegna, C. C.; Sovinec, C. R.

    2009-01-01

    Recent ideal magnetohydrodynamic (MHD) theory predicts that a perturbation evolving from a linear ballooning instability will continue to grow exponentially in the intermediate nonlinear phase at the same linear growth rate. This prediction is confirmed in ideal MHD simulations. When the Lagrangian compression, a measure of the ballooning nonlinearity, becomes of the order of unity, the intermediate nonlinear phase is entered, during which the maximum plasma displacement amplitude as well as the total kinetic energy continues to grow exponentially at the rate of the corresponding linear phase.

  15. The Matrix exponential, Dynamic Systems and Control

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad

    The matrix exponential can be found in various connections in the analysis and control of dynamic systems. In this short note we list a few examples. The matrix exponential usually pops up in connection with the sampling process, whether in a deterministic or a stochastic setting, or as a tool for determining a Gramian matrix. This note is intended to be used in connection with the teaching of the course Stochastic Adaptive Control (02421) given at Informatics and Mathematical Modelling (IMM), The Technical University of Denmark. This work is the result of a study of the literature.
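
    The sampling connection mentioned in the note can be sketched concretely: zero-order-hold discretization of a continuous-time linear system obtains both discrete matrices from a single matrix exponential of an augmented block matrix. The system below is an arbitrary illustrative example, not one from the note.

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time system dx/dt = A x + B u, sampled with period Ts (hypothetical)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Ts = 0.1

# Zero-order-hold discretization via one exponential of the block matrix
# [[A, B], [0, 0]]: the top row of exp(M*Ts) contains both discrete matrices.
n, m = A.shape[0], B.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n] = A
M[:n, n:] = B
Phi = expm(M * Ts)
Ad = Phi[:n, :n]   # discrete state transition matrix exp(A*Ts)
Bd = Phi[:n, n:]   # discrete input matrix, the integral of exp(A*s) B ds
```

    The block-matrix trick avoids computing the input integral separately, which is why it is the standard route from the matrix exponential to the sampled-data model.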

  16. Exponential Operators, Dobinski Relations and Summability

    International Nuclear Information System (INIS)

    Blasiak, P; Gawron, A; Horzela, A; Penson, K A; Solomon, A I

    2006-01-01

    We investigate properties of exponential operators preserving the particle number using combinatorial methods developed in order to solve the boson normal ordering problem. In particular, we apply generalized Dobinski relations and methods of multivariate Bell polynomials which enable us to understand the meaning of perturbation-like expansions of exponential operators. Such expansions, obtained as formal power series, are everywhere divergent, but the Padé summation method is shown to give results which agree very well with exact solutions obtained for simplified quantum models of one-mode bosonic systems.

  17. Exponentially tapered Josephson flux-flow oscillator

    DEFF Research Database (Denmark)

    Benabdallah, A.; Caputo, J. G.; Scott, Alwyn C.

    1996-01-01

    We introduce an exponentially tapered Josephson flux-flow oscillator that is tuned by applying a bias current to the larger end of the junction. Numerical and analytical studies show that above a threshold level of bias current the static solution becomes unstable and gives rise to a train of fluxons moving toward the unbiased smaller end, as in the standard flux-flow oscillator. An exponentially shaped junction provides several advantages over a rectangular junction including: (i) smaller linewidth, (ii) increased output power, (iii) no trapped flux because of the type of current injection...

  18. Exponential Data Fitting and its Applications

    CERN Document Server

    Pereyra, Victor

    2010-01-01

    Real and complex exponential data fitting is an important activity in many different areas of science and engineering, ranging from Nuclear Magnetic Resonance Spectroscopy and Lattice Quantum Chromodynamics to Electrical and Chemical Engineering, Vision and Robotics. The most commonly used norm in the approximation by linear combinations of exponentials is the l2 norm (sum of squares of residuals), in which case one obtains a nonlinear separable least squares problem. A number of different methods have been proposed through the years to solve these types of problems and new applications appear
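
    The separable structure mentioned in the abstract can be exploited directly: for fixed exponential rates, the amplitudes solve a linear least squares problem, so only the rates need a nonlinear search (the variable projection idea). The sketch below uses synthetic two-exponential data and is not an implementation from the book.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: a linear combination of two decaying exponentials
t = np.linspace(0.0, 4.0, 60)
y = 2.0 * np.exp(-1.0 * t) + 0.5 * np.exp(-3.0 * t)

def projected_residual(rates):
    # For fixed nonlinear rates, the optimal amplitudes come from a *linear*
    # least squares solve: this is the separable structure of the problem.
    Phi = np.exp(-np.outer(t, rates))
    amps, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.sum((y - Phi @ amps) ** 2)

# Nonlinear search over the two rates only
res = minimize(projected_residual, x0=[0.5, 2.0], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-14, "maxiter": 2000})
rates = np.sort(res.x)
Phi = np.exp(-np.outer(t, rates))
amps, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # recovered amplitudes
```

    Reducing the search space from four parameters to two rates is what makes sum-of-exponentials fitting tractable; dedicated variable projection codes also propagate the exact gradient of the projected functional.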

  19. An Exponential Growth Learning Trajectory: Students' Emerging Understanding of Exponential Growth through Covariation

    Science.gov (United States)

    Ellis, Amy B.; Ozgur, Zekiye; Kulow, Torrey; Dogan, Muhammed F.; Amidon, Joel

    2016-01-01

    This article presents an Exponential Growth Learning Trajectory (EGLT), a trajectory identifying and characterizing middle grade students' initial and developing understanding of exponential growth as a result of an instructional emphasis on covariation. The EGLT explicates students' thinking and learning over time in relation to a set of tasks…

  20. When economic growth is less than exponential

    DEFF Research Database (Denmark)

    Groth, Christian; Koch, Karl-Josef; Steger, Thomas

    2010-01-01

    This paper argues that growth theory needs a more general notion of "regularity" than that of exponential growth. We suggest that paths along which the rate of decline of the growth rate is proportional to the growth rate itself deserve attention. This opens up for considering a richer set...

  1. When Economic Growth is Less than Exponential

    DEFF Research Database (Denmark)

    Groth, Christian; Koch, Karl-Josef; Steger, Thomas M.

    This paper argues that growth theory needs a more general notion of "regularity" than that of exponential growth. We suggest that paths along which the rate of decline of the growth rate is proportional to the growth rate itself deserve attention. This opens up for considering a richer set...

  2. Academic Sacred Cows and Exponential Growth.

    Science.gov (United States)

    Heterick, Robert C., Jr.

    1991-01-01

    The speech notes the linear growth of resources versus the exponential growth of costs in higher education. It identifies opportunities arising from information technology to transform teaching and learning through creation of a new scholarly information delivery system. An integrated triad of communications, computing, and library organizations…

  3. Students' Understanding of Exponential and Logarithmic Functions.

    Science.gov (United States)

    Weber, Keith

    Exponential and logarithmic functions are pivotal mathematical concepts that play central roles in advanced mathematics. Unfortunately, these are also concepts that give students serious difficulty. This report describes a theory of how students acquire an understanding of these functions by prescribing a set of mental constructions that a student…

  4. Intersection of the Exponential and Logarithmic Curves

    Science.gov (United States)

    Boukas, Andreas; Valahas, Theodoros

    2009-01-01

    The study of the number of intersection points of y = a[superscript x] and y = log[subscript a]x can be an interesting topic to present in a single-variable calculus class. In this article, the authors present a classroom presentation outline involving the basic algebra and the elementary calculus of the exponential and logarithmic functions. The…

  5. The evolution of stellar exponential discs

    NARCIS (Netherlands)

    Ferguson, AMN; Clarke, CJ

    2001-01-01

    Models of disc galaxies which invoke viscosity-driven radial flows have long been known to provide a natural explanation for the origin of stellar exponential discs, under the assumption that the star formation and viscous time-scales are comparable. We present models which invoke simultaneous star

  6. Exponential Lower Bounds For Policy Iteration

    OpenAIRE

    Fearnley, John

    2010-01-01

    We study policy iteration for infinite-horizon Markov decision processes. It has recently been shown that policy iteration style algorithms have exponential lower bounds in a two-player game setting. We extend these lower bounds to Markov decision processes with the total reward and average-reward optimality criteria.

  7. Exponential rate of convergence in current reservoirs

    OpenAIRE

    De Masi, Anna; Presutti, Errico; Tsagkarogiannis, Dimitrios; Vares, Maria Eulalia

    2015-01-01

    In this paper, we consider a family of interacting particle systems on $[-N,N]$ that arises as a natural model for current reservoirs and Fick's law. We study the exponential rate of convergence to the stationary measure, which we prove to be of the order $N^{-2}$.

  8. Exponential characteristics spatial quadrature for discrete ordinates radiation transport in slab geometry

    International Nuclear Information System (INIS)

    Mathews, K.; Sjoden, G.; Minor, B.

    1994-01-01

    The exponential characteristic spatial quadrature for discrete ordinates neutral particle transport in slab geometry is derived and compared with current methods. It is similar to the linear characteristic (or, in slab geometry, the linear nodal) quadrature but differs by assuming an exponential distribution of the scattering source within each cell, S(x) = a exp(bx), whose parameters are root-solved to match the known (from the previous iteration) average and first moment of the source over the cell. Like the linear adaptive method, the exponential characteristic method is positive and nonlinear but more accurate and more readily extended to other cell shapes. The nonlinearity has not interfered with convergence. The authors introduce the ''exponential moment functions,'' a generalization of the functions used by Walters in the linear nodal method, and use them to avoid numerical ill-conditioning. The method exhibits O(Δx⁴) truncation error on fine enough meshes; the error is insensitive to mesh size for coarse meshes. In a shielding problem, it is accurate to 10% using 16-mfp-thick cells; conventional methods err by 8 to 15 orders of magnitude. The exponential characteristic method is computationally more costly per cell than current methods but can be accurate with very thick cells, leading to increased computational efficiency on appropriate problems.
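
    The root-solving step described, matching an exponential source a·exp(bx) within a cell to a known average and centroid, can be sketched as follows. The cell width and source parameters are invented for illustration, and the bracketing interval assumes b > 0; a production code would handle both signs and the b → 0 limit.

```python
import numpy as np
from scipy.optimize import brentq

h = 1.0                    # cell width (hypothetical), cell is [0, h]
a_true, b_true = 2.0, 1.3  # source S(x) = a * exp(b x) to be recovered

def mean_position(b):
    # x-centroid of exp(b x) over [0, h]; depends only on b, not on a
    num = np.exp(b * h) * (b * h - 1.0) + 1.0
    den = b * (np.exp(b * h) - 1.0)
    return num / den

# "Known" moments, as supplied by the previous transport iteration
avg = a_true * (np.exp(b_true * h) - 1.0) / (b_true * h)  # cell average
xbar = mean_position(b_true)                              # source centroid

# Root-solve the slope b from the centroid, then recover the amplitude a.
# The bracket avoids the removable singularity at b = 0 (centroid -> h/2).
b = brentq(lambda bb: mean_position(bb) - xbar, 1e-6, 10.0)
a = avg * b * h / (np.exp(b * h) - 1.0)
```

    Because the centroid is a monotone function of b, the root is unique, which is one reason the nonlinearity does not disturb convergence of the outer iteration.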

  9. On Using Exponential Parameter Estimators with an Adaptive Controller

    Science.gov (United States)

    Patre, Parag; Joshi, Suresh M.

    2011-01-01

    Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.

  10. Zero inflated negative binomial-generalized exponential distribution and its applications

    Directory of Open Access Journals (Sweden)

    Sirinapa Aryuyuen

    2014-08-01

    Full Text Available In this paper, we propose a new zero inflated distribution, namely, the zero inflated negative binomial-generalized exponential (ZINB-GE) distribution. The new distribution is used for count data with extra zeros and is an alternative for the analysis of over-dispersed count data. Some characteristics of the distribution are given, such as the mean, variance, skewness, and kurtosis. Parameter estimation of the ZINB-GE distribution uses the maximum likelihood estimation (MLE) method. Simulated and observed data are employed to examine this distribution. The results show that the MLE method seems to have high efficiency for large sample sizes. Moreover, the mean square error of the parameter estimates increases as the zero proportion gets higher. For the real data sets, this new zero inflated distribution provides a better fit than the zero inflated Poisson and zero inflated negative binomial distributions.

  11. Practical pulse engineering: Gradient ascent without matrix exponentiation

    Science.gov (United States)

    Bhole, Gaurav; Jones, Jonathan A.

    2018-06-01

    Since 2005, there has been a huge growth in the use of engineered control pulses to perform desired quantum operations in systems such as nuclear magnetic resonance quantum information processors. These approaches, which build on the original gradient ascent pulse engineering algorithm, remain computationally intensive because of the need to calculate matrix exponentials for each time step in the control pulse. In this study, we discuss how the propagators for each time step can be approximated using the Trotter-Suzuki formula, and a further speedup achieved by avoiding unnecessary operations. The resulting procedure can provide substantial speed gain with negligible costs in the propagator error, providing a more practical approach to pulse engineering.
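
    The Trotter-Suzuki approximation discussed above replaces the exponential of a sum of non-commuting terms by a product of cheaper exponentials of the individual terms. A minimal numerical check on an arbitrary two-level example (not the NMR systems of the paper):

```python
import numpy as np
from scipy.linalg import expm

# Two non-commuting Hamiltonian terms (hypothetical 2x2 example, hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H1, H2 = 0.7 * sx, 1.1 * sz
dt = 0.01

# Exact propagator for one time step, via the full matrix exponential
exact = expm(-1j * (H1 + H2) * dt)

# Symmetric (second-order) Trotter-Suzuki splitting: per-step error O(dt^3).
# Each factor is an exponential of a single term, which in practice is much
# cheaper (e.g. diagonal in a known basis) than exponentiating the sum.
trotter = (expm(-1j * H1 * dt / 2) @ expm(-1j * H2 * dt)
           @ expm(-1j * H1 * dt / 2))

err = np.linalg.norm(exact - trotter)
```

    The splitting stays exactly unitary, so the small per-step error accumulates benignly over the many time steps of a control pulse.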

  12. Error Analysis for Fourier Methods for Option Pricing

    KAUST Repository

    Hä ppö lä , Juho

    2016-01-01

    We provide a bound for the error committed when using a Fourier method to price European options when the underlying follows an exponential Levy dynamic. The price of the option is described by a partial integro-differential equation (PIDE).

  13. Exponentially-convergent Monte Carlo for the 1-D transport equation

    International Nuclear Information System (INIS)

    Peterson, J. R.; Morel, J. E.; Ragusa, J. C.

    2013-01-01

    We define a new exponentially-convergent Monte Carlo method for solving the one-speed 1-D slab-geometry transport equation. This method is based upon the use of a linear discontinuous finite-element trial space in space and direction to represent the transport solution. A space-direction h-adaptive algorithm is employed to restore exponential convergence after stagnation occurs due to inadequate trial-space resolution. This method uses jumps in the solution at cell interfaces as an error indicator. Computational results are presented demonstrating the efficacy of the new approach. (authors)

  14. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  15. Statistical estimation for truncated exponential families

    CERN Document Server

    Akahira, Masafumi

    2017-01-01

    This book presents new findings on nonregular statistical estimation. Unlike other books on this topic, its major emphasis is on helping readers understand the meaning and implications of both regularity and irregularity through a certain family of distributions. In particular, it focuses on a truncated exponential family of distributions with a natural parameter and truncation parameter as a typical nonregular family. This focus includes the (truncated) Pareto distribution, which is widely used in various fields such as finance, physics, hydrology, geology, astronomy, and other disciplines. The family is essential in that it links both regular and nonregular distributions, as it becomes a regular exponential family if the truncation parameter is known. The emphasis is on presenting new results on the maximum likelihood estimation of a natural parameter or truncation parameter if one of them is a nuisance parameter. In order to obtain more information on the truncation, the Bayesian approach is also considered...

  16. Exponentiated Lomax Geometric Distribution: Properties and Applications

    Directory of Open Access Journals (Sweden)

    Amal Soliman Hassan

    2017-09-01

    Full Text Available In this paper, a new four-parameter lifetime distribution, called the exponentiated Lomax geometric (ELG) distribution, is introduced. The new lifetime distribution contains the Lomax geometric and exponentiated Pareto geometric distributions as new sub-models. Explicit algebraic formulas for the probability density function and the survival and hazard functions are derived. Various structural properties of the new model are derived, including the quantile function, Rényi entropy, moments, probability weighted moments, order statistics, and Lorenz and Bonferroni curves. The estimation of the model parameters is performed by the maximum likelihood method, and inference for a large sample is discussed. The flexibility and potential of the new model in comparison with some other distributions are shown via an application to a real data set. We hope that the new model will be an adequate model for applications in various studies.

  17. Matrix-exponential distributions in applied probability

    CERN Document Server

    Bladt, Mogens

    2017-01-01

    This book contains an in-depth treatment of matrix-exponential (ME) distributions and their sub-class of phase-type (PH) distributions. Loosely speaking, an ME distribution is obtained through replacing the intensity parameter in an exponential distribution by a matrix. The ME distributions can also be identified as the class of non-negative distributions with rational Laplace transforms. If the matrix has the structure of a sub-intensity matrix for a Markov jump process we obtain a PH distribution which allows for nice probabilistic interpretations facilitating the derivation of exact solutions and closed form formulas. The full potential of ME and PH unfolds in their use in stochastic modelling. Several chapters on generic applications, like renewal theory, random walks and regenerative processes, are included together with some specific examples from queueing theory and insurance risk. We emphasize our intention towards applications by including an extensive treatment on statistical methods for PH distribu...

  18. Harmonic analysis on exponential solvable Lie groups

    CERN Document Server

    Fujiwara, Hidenori

    2015-01-01

    This book is the first one that brings together recent results on the harmonic analysis of exponential solvable Lie groups. There still are many interesting open problems, and the book contributes to the future progress of this research field. As well, various related topics are presented to motivate young researchers. The orbit method invented by Kirillov is applied to study basic problems in the analysis on exponential solvable Lie groups. This method tells us that the unitary dual of these groups is realized as the space of their coadjoint orbits. This fact is established using the Mackey theory for induced representations, and that mechanism is explained first. One of the fundamental problems in the representation theory is the irreducible decomposition of induced or restricted representations. Therefore, these decompositions are studied in detail before proceeding to various related problems: the multiplicity formula, Plancherel formulas, intertwining operators, Frobenius reciprocity, and associated alge...

  19. Exponential Stabilization of an Underactuated Surface Vessel

    Directory of Open Access Journals (Sweden)

    Kristin Y. Pettersen

    1997-07-01

    Full Text Available The paper shows that a large class of underactuated vehicles cannot be asymptotically stabilized by either continuous or discontinuous state feedback. Furthermore, stabilization of an underactuated surface vessel is considered. Controllability properties of the surface vessel are presented, and a continuous periodic time-varying feedback law is proposed. It is shown that this feedback law exponentially stabilizes the surface vessel to the origin, and this is illustrated by simulations.

  20. Financing exponential growth at H3

    OpenAIRE

    Silva, João Ricardo Ferreira Hipolito da

    2012-01-01

    H3 is a fast-food chain that introduced the concept of gourmet hamburgers in the Portuguese market. This case study illustrates the financing strategy that supported its exponential growth, represented by the opening of 33 restaurants within approximately 3 years of its inception. H3 is now faced with the challenge of structuring its foreign ventures and changing its financial approach. The main covered topics are the options an entrepreneur has for financing a new venture and how it evolves along th...

  1. Exponentially Light Dark Matter from Coannihilation

    OpenAIRE

    D'Agnolo, Raffaele Tito; Mondino, Cristina; Ruderman, Joshua T.; Wang, Po-Jen

    2018-01-01

    Dark matter may be a thermal relic whose abundance is set by mutual annihilations among multiple species. Traditionally, this coannihilation scenario has been applied to weak scale dark matter that is highly degenerate with other states. We show that coannihilation among states with split masses points to dark matter that is exponentially lighter than the weak scale, down to the keV scale. We highlight the regime where dark matter does not participate in the annihilations that dilute its numb...

  2. Heterogeneous dipolar theory of the exponential pile

    International Nuclear Information System (INIS)

    Mastrangelo, P.V.

    1981-01-01

    We present a heterogeneous theory of the exponential pile, closely related to NORDHEIM-SCALETTAR's. It is well adapted to lattices whose pitch is relatively large (D2O, graphite) and the dimensions of whose channels are not negligible. The anisotropy of neutron diffusion is taken into account by the introduction of dipolar parameters. We express the contribution of each channel to the total flux in the moderator by means of multipolar coefficients. In order to be able to apply conditions of continuity between the fluxes and their derivatives on the side of the moderator, we develop in a Fourier series the fluxes found at the periphery of each channel. Using the Wronskian relations for Bessel functions, we express the multipolar coefficients of the surfaces of each channel, on the side of the moderator, by means of the harmonics of each flux and their derivatives. We retain only monopolar (A 0 sub(g)) and dipolar (A 1 sub(g)) coefficients; those of a higher order are ignored. We deduce from these coefficients the systems of homogeneous equations of the exponential pile with monopoles on their own and with monopoles plus dipoles. It should be noted that the systems of homogeneous equations of the critical pile are contained in those of the exponential pile. In another article, we develop the calculation of the monopolar and dipolar heterogeneous parameters. (orig.)

  3. Unwrapped phase inversion with an exponential damping

    KAUST Repository

    Choi, Yun Seok

    2015-07-28

    Full-waveform inversion (FWI) suffers from the phase wrapping (cycle skipping) problem when the frequency of the data is not low enough. Unless we obtain a good initial velocity model, the phase wrapping problem in FWI causes a result corresponding to a local minimum, usually far away from the true solution, especially at depth. Thus, we have developed an inversion algorithm based on a space-domain unwrapped phase, and we also used exponential damping to mitigate the nonlinearity associated with the reflections. We construct the 2D phase residual map, which usually contains wrapping discontinuities, especially if the model is complex and the frequency is high. We then unwrap the phase map and remove these cycle-based jumps. However, if the phase map has several residues, the unwrapping process becomes very complicated. We apply a strong exponential damping to the wavefield to eliminate many of the residues in the phase map, thus making the unwrapping process simple. We finally invert the unwrapped phases using the back-propagation algorithm to calculate the gradient. We progressively reduce the damping factor to obtain a high-resolution image. Numerical examples showed that the unwrapped phase inversion with strong exponential damping generated convergent long-wavelength updates without low-frequency information. This model can be used as a good starting model for a subsequent inversion with reduced damping, eventually leading to conventional waveform inversion.
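
    The exponential damping step can be illustrated on a synthetic trace: multiplying by exp(-s·t) suppresses later arrivals (such as reflections) before the phase is extracted and unwrapped. All signal parameters below are invented for illustration and do not come from the paper.

```python
import numpy as np

# A synthetic trace with two arrivals; the later one is to be suppressed
dt = 0.004
t = np.arange(0.0, 2.0, dt)
trace = (np.sin(2 * np.pi * 8 * t) * np.exp(-((t - 0.3) / 0.05) ** 2)
         + 0.8 * np.sin(2 * np.pi * 8 * t) * np.exp(-((t - 1.2) / 0.05) ** 2))

s = 3.0                            # damping factor (hypothetical)
damped = trace * np.exp(-s * t)    # exponential time damping: late energy shrinks

# Unwrapped spectral phase of the damped versus the raw trace
phase_raw = np.unwrap(np.angle(np.fft.rfft(trace)))
phase_damp = np.unwrap(np.angle(np.fft.rfft(damped)))
```

    With the late arrival attenuated, the phase varies more smoothly and unwrapping encounters fewer residues; progressively lowering s then restores the reflection content, mirroring the continuation strategy in the abstract.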

  4. Exponential dependence of potential barrier height on biased voltages of inorganic/organic static induction transistor

    International Nuclear Information System (INIS)

    Zhang Yong; Yang Jianhong; Cai Xueyuan; Wang Zaixing

    2010-01-01

    The exponential dependence of the potential barrier height φc on the biased voltages of the inorganic/organic static induction transistor (SIT/OSIT), obtained through a normalized approach in the low-current regime, is presented. It gives a more accurate description than the linear expression of the potential barrier height. Through verification against numerically calculated and experimental results, the exponential dependence of φc on the applied biases can be used to derive the I-V characteristics. For both the SIT and the OSIT, the results calculated using the presented relationship agree with the experimental results. Compared to the previous linear relationship, the exponential description of φc effectively reduces the error between the theoretical and experimental I-V characteristics. (semiconductor devices)

  5. Optimal Exponential Synchronization of Chaotic Systems with Multiple Time Delays via Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Feng-Hsiag Hsiao

    2013-01-01

    Full Text Available This study presents an effective approach to realize the optimal exponential synchronization of multiple time-delay chaotic (MTDC systems. First, a neural network (NN model is employed to approximate the MTDC system. Then, a linear differential inclusion (LDI state-space representation is established for the dynamics of the NN model. Based on this LDI state-space representation, this study proposes a delay-dependent exponential stability criterion of the error system derived in terms of Lyapunov’s direct method to ensure that the trajectories of the slave system can approach those of the master system. Subsequently, the stability condition of this criterion is reformulated into a linear matrix inequality (LMI. Based on the LMI, a fuzzy controller is synthesized not only to realize the exponential synchronization but also to achieve the optimal performance by minimizing the disturbance attenuation level. Finally, a numerical example with simulations is provided to illustrate the concepts discussed throughout this work.

  6. Application of heterogeneous method for the interpretation of exponential experiments

    International Nuclear Information System (INIS)

    Birkhoff, G.; Bondar, L.

    1977-01-01

    The present paper gives a brief review of work executed mainly during 1967 and 1968 on the application of heterogeneous methods to the interpretation of exponential experiments with ORGEL type lattices (lattices of natural uranium cluster elements with organic coolants, moderated by heavy water). Within this work a heterogeneous computer program in (r,γ) geometry was written, based on the NORDHEIM method using a uniform moderator, three energy groups, and monopole and dipole sources. This code is especially adapted to regular square lattices in a cylindrical tank. Full use of lattice symmetry was made to reduce the numerical effort of the theory. A further reduction was obtained by introducing a group-averaged extrapolation distance at the external boundary. Channel parameters were evaluated by the PINOCCHIO code. Comparisons of calculated and measured thermal neutron flux showed good agreement. Equivalence of heterogeneous and homogeneous theory was found for lattices comprising a minimum of 32, 24 and 16 fuel elements for under-, well-, and over-moderated lattices, respectively. Heterogeneous calculations of high-leakage lattices suffered from the lack of good methods for computing axial and radial streaming parameters. The interpretation of buckling measurements in the subcritical facility EXPO already requires a more accurate evaluation of the streaming effects than we performed. The potential of heterogeneous theory in the field of exponential experiments is thought to be limited by the precision with which the streaming parameters can be calculated.

  7. Discretization vs. Rounding Error in Euler's Method

    Science.gov (United States)

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
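The trade-off described above can be demonstrated with a small sketch (illustrative, not from the source): Forward Euler applied to y' = -y, where shrinking the stepsize reduces discretization error until, in single precision, the update itself is rounded away.

```python
import numpy as np

def euler_decay(n):
    """Forward Euler for y' = -y, y(0) = 1, integrated to t = 1 in n steps."""
    h = 1.0 / n
    y = 1.0
    for _ in range(n):
        y -= h * y
    return y

exact = float(np.exp(-1.0))

# Halving the stepsize roughly halves the discretization error:
err_coarse = abs(euler_decay(10) - exact)    # h = 0.1
err_fine = abs(euler_decay(100) - exact)     # h = 0.01

# But reductions in stepsize also invite rounding error. In single
# precision the effect is extreme: 1e-8 is below half the float32 spacing
# near 1.0, so the Euler update 1 - h*1 rounds back to exactly 1.0 and
# the iteration stops moving altogether.
stuck = np.float32(1.0) - np.float32(1e-8) == np.float32(1.0)
print(err_coarse, err_fine, bool(stuck))
```

In double precision the same stagnation occurs, just at far smaller stepsizes, so the total error always has a minimum at some intermediate h.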

  8. Characterizing quantum correlations. Entanglement, uncertainty relations and exponential families

    Energy Technology Data Exchange (ETDEWEB)

    Niekamp, Soenke

    2012-04-20

    This thesis is concerned with different characterizations of multi-particle quantum correlations and with entropic uncertainty relations. The effect of statistical errors on the detection of entanglement is investigated. First, general results on the statistical significance of entanglement witnesses are obtained. Then, using an error model for experiments with polarization-entangled photons, it is demonstrated that Bell inequalities with lower violation can have higher significance. The question of which observables best discriminate between a state and the equivalence class of another state is addressed. Two measures for the discrimination strength of an observable are defined, and optimal families of observables are constructed for several examples. A property of stabilizer bases is shown that is a natural generalization of mutual unbiasedness. For sets of several dichotomic, pairwise anticommuting observables, uncertainty relations using different entropies are constructed in a systematic way. Exponential families provide a classification of states according to their correlations. In this classification scheme, a state is considered k-correlated if it can be written as the thermal state of a k-body Hamiltonian. Witness operators for the detection of higher-order interactions are constructed, and an algorithm for the computation of the nearest k-correlated state is developed.

  9. Characterizing quantum correlations. Entanglement, uncertainty relations and exponential families

    International Nuclear Information System (INIS)

    Niekamp, Soenke

    2012-01-01

    This thesis is concerned with different characterizations of multi-particle quantum correlations and with entropic uncertainty relations. The effect of statistical errors on the detection of entanglement is investigated. First, general results on the statistical significance of entanglement witnesses are obtained. Then, using an error model for experiments with polarization-entangled photons, it is demonstrated that Bell inequalities with lower violation can have higher significance. The question of which observables best discriminate between a state and the equivalence class of another state is addressed. Two measures for the discrimination strength of an observable are defined, and optimal families of observables are constructed for several examples. A property of stabilizer bases is shown that is a natural generalization of mutual unbiasedness. For sets of several dichotomic, pairwise anticommuting observables, uncertainty relations using different entropies are constructed in a systematic way. Exponential families provide a classification of states according to their correlations. In this classification scheme, a state is considered k-correlated if it can be written as the thermal state of a k-body Hamiltonian. Witness operators for the detection of higher-order interactions are constructed, and an algorithm for the computation of the nearest k-correlated state is developed.

  10. The Parity of Set Systems under Random Restrictions with Applications to Exponential Time Problems

    DEFF Research Database (Denmark)

    Björklund, Andreas; Dell, Holger; Husfeldt, Thore

    2015-01-01

    problems. We find three applications of our reductions: 1. An exponential-time algorithm: We show how to decide Hamiltonicity in directed n-vertex graphs with running time 1.9999^n provided that the graph has at most 1.0385^n Hamiltonian cycles. We do so by reducing to the algorithm of Björklund...

  11. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  12. Finite difference computing with exponential decay models

    CERN Document Server

    Langtangen, Hans Petter

    2016-01-01

    This text provides a very simple, initial introduction to the complete scientific computing pipeline: models, discretization, algorithms, programming, verification, and visualization. The pedagogical strategy is to use one case study – an ordinary differential equation describing exponential decay processes – to illustrate fundamental concepts in mathematics and computer science. The book is easy to read and only requires a command of one-variable calculus and some very basic knowledge about computer programming. Contrary to similar texts on numerical methods and programming, this text has a much stronger focus on implementation and teaches testing and software engineering in particular.
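The case study named above, an ODE describing exponential decay u' = -au, can be sketched with the classic theta-rule family of schemes (Forward Euler, Backward Euler, Crank-Nicolson); this is a minimal illustration in that spirit, not code from the book itself.

```python
import numpy as np

def solve_decay(a, u0, T, n, theta):
    """theta-rule for u' = -a*u: theta=0 is Forward Euler, theta=1 is
    Backward Euler, theta=0.5 is Crank-Nicolson."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    u = np.empty(n + 1)
    u[0] = u0
    factor = (1.0 - (1.0 - theta) * a * dt) / (1.0 + theta * a * dt)
    for i in range(n):
        u[i + 1] = factor * u[i]
    return t, u

# Verification against the exact solution u(t) = u0*exp(-a*t):
# Crank-Nicolson is second order, so halving dt should cut the maximum
# error by roughly a factor of four.
a, u0, T = 2.0, 1.0, 4.0
errs = {}
for n in (20, 40):
    t, u = solve_decay(a, u0, T, n, theta=0.5)
    errs[n] = float(np.max(np.abs(u - u0 * np.exp(-a * t))))
print(errs)
```

Comparing the observed error reduction against the scheme's theoretical order is exactly the kind of verification-by-convergence test the text advocates.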

  13. Exponential expansion: galactic destiny or technological hubris?

    Science.gov (United States)

    Finney, B. R.

    Is it our destiny to expand exponentially to populate the galaxy, or is such a vision but an extreme example of technological hubris? The overall record of human evolution and dispersion over the Earth can be cited to support the view that we are a uniquely expansionary and technological animal bound for the stars, yet an examination of the fate of individual migrations and exploratory initiatives raises doubts. Although it may be in keeping with our hubristic nature to predict ultimate galactic expansion, there is no way to specify how far expansionary urges may drive our spacefaring descendants.

  14. Progressive Exponential Clustering-Based Steganography

    Directory of Open Access Journals (Sweden)

    Li Yue

    2010-01-01

    Full Text Available Cluster indexing-based steganography is an important branch of data-hiding techniques. Such schemes normally achieve good balance between high embedding capacity and low embedding distortion. However, most cluster indexing-based steganographic schemes utilise less efficient clustering algorithms for embedding data, which causes redundancy and leaves room for increasing the embedding capacity further. In this paper, a new clustering algorithm, called progressive exponential clustering (PEC, is applied to increase the embedding capacity by avoiding redundancy. Meanwhile, a cluster expansion algorithm is also developed in order to further increase the capacity without sacrificing imperceptibility.

  15. Exponentially Convergent Algorithms for Abstract Differential Equations

    CERN Document Server

    Gavrilyuk, Ivan; Vasylyk, Vitalii

    2011-01-01

    This book presents new accurate and efficient exponentially convergent methods for abstract differential equations with unbounded operator coefficients in Banach space. These methods are highly relevant for the practical scientific computing since the equations under consideration can be seen as the meta-models of systems of ordinary differential equations (ODE) as well as the partial differential equations (PDEs) describing various applied problems. The framework of functional analysis allows one to obtain very general but at the same time transparent algorithms and mathematical results which

  16. Blowing-up Semilinear Wave Equation with Exponential ...

    Indian Academy of Sciences (India)

    Blowing-up Semilinear Wave Equation with Exponential Nonlinearity in Two Space ... We investigate the initial value problem for some semi-linear wave equation in two space dimensions with exponential nonlinearity growth.

  17. Forecasting Financial Extremes: A Network Degree Measure of Super-Exponential Growth.

    Directory of Open Access Journals (Sweden)

    Wanfeng Yan

    Full Text Available Investors in the stock market are usually greedy during bull markets and scared during bear markets. The greed or fear spreads across investors quickly. This is known as the herding effect, and it often leads to fast movements of stock prices. During such market regimes, stock prices change at a super-exponential rate and are normally followed by a trend reversal that corrects the previous overreaction. In this paper, we construct an indicator that measures the magnitude of the super-exponential growth of stock prices, by measuring the degree of the price network generated from the price time series. Twelve major international stock indices have been investigated. Error diagram tests show that this new indicator has strong predictive power for financial extremes, both peaks and troughs. By varying the parameters used to construct the error diagram, we show that the predictive power is very robust. The new indicator performs better than the LPPL pattern recognition indicator.

  18. Forecasting Financial Extremes: A Network Degree Measure of Super-Exponential Growth.

    Science.gov (United States)

    Yan, Wanfeng; van Tuyll van Serooskerken, Edgar

    2015-01-01

    Investors in the stock market are usually greedy during bull markets and scared during bear markets. The greed or fear spreads across investors quickly. This is known as the herding effect, and it often leads to fast movements of stock prices. During such market regimes, stock prices change at a super-exponential rate and are normally followed by a trend reversal that corrects the previous overreaction. In this paper, we construct an indicator that measures the magnitude of the super-exponential growth of stock prices, by measuring the degree of the price network generated from the price time series. Twelve major international stock indices have been investigated. Error diagram tests show that this new indicator has strong predictive power for financial extremes, both peaks and troughs. By varying the parameters used to construct the error diagram, we show that the predictive power is very robust. The new indicator performs better than the LPPL pattern recognition indicator.

  19. Exponentially asymptotical synchronization in uncertain complex dynamical networks with time delay

    Energy Technology Data Exchange (ETDEWEB)

    Luo Qun; Yang Han; Li Lixiang; Yang Yixian [Information Security Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876 (China); Han Jiangxue, E-mail: luoqun@bupt.edu.c [National Engineering Laboratory for Disaster Backup and Recovery, Beijing University of Posts and Telecommunications, Beijing 100876 (China)

    2010-12-10

    Over the past decade, complex dynamical network synchronization has attracted more and more attention and important developments have been made. In this paper, we explore the scheme of globally exponentially asymptotical synchronization in complex dynamical networks with time delay. Based on Lyapunov stability theory and through defining the error function between adjacent nodes, four novel adaptive controllers are designed under four situations where the Lipschitz constants of the state function in nodes are known or unknown and the network structure is certain or uncertain, respectively. These controllers could not only globally asymptotically synchronize all nodes in networks, but also ensure that the error functions do not exceed the pre-scheduled exponential function. Finally, simulations of the synchronization among the chaotic system in the small-world and scale-free network structures are presented, which prove the effectiveness and feasibility of our controllers.

  20. Exponential potentials, scaling solutions and inflation

    International Nuclear Information System (INIS)

    Wands, D.; Copeland, E.J.; Liddle, A.R.

    1993-01-01

    The goal of driving a period of rapid inflation in the early universe in a model motivated by grand unified theories has been given new life in recent years in the context of extended gravity theories. Extended inflation is one model based on a Brans-Dicke type gravity which can allow a very general first-order phase transition to complete by changing the expansion of the false-vacuum-dominated universe from an exponential to a power-law expansion. This inflation is conformally equivalent to general relativity where the vacuum energy density is exponentially dependent upon a dilaton field. With this in mind, the authors consider in this paper the evolution of a scalar field σ with a potential V(σ) = V_0 exp(-λκ^(1/2)σ) in a spatially flat (k = 0) Friedmann-Robertson-Walker metric in the presence of a barotropic (P = (γ - 1)ρ) fluid. Here κ = 8πG, and λ is a dimensionless constant describing the steepness of the potential. It is well known that if the potential is sufficiently flat (λ small), the energy density of the scalar field dominates and the universe undergoes power-law inflation. The behavior of fields with a steep potential seems to be less well known, although the results the authors present here are not new. 11 refs., 2 figs

  1. Exponential inflation with F (R ) gravity

    Science.gov (United States)

    Oikonomou, V. K.

    2018-03-01

    In this paper, we shall consider an exponential inflationary model in the context of vacuum F(R) gravity. By using well-known reconstruction techniques, we shall investigate which F(R) gravity can realize the exponential inflation scenario at leading order in terms of the scalar curvature, and we shall calculate the slow-roll indices and the corresponding observational indices in the context of slow-roll inflation. We also provide some general formulas for the slow-roll and the corresponding observational indices in terms of the e-foldings number. In addition, for the calculation of the slow-roll and observational indices, we shall consider quite general formulas, which do not require the assumption that all the slow-roll indices are much smaller than unity. Finally, we investigate the phenomenological viability of the model by comparing it with the latest Planck and BICEP2/Keck-Array observational data. As we demonstrate, the model is compatible with the current observational data for a wide range of the free parameters of the model.

  2. Critical mutation rate has an exponential dependence on population size in haploid and diploid populations.

    Directory of Open Access Journals (Sweden)

    Elizabeth Aston

    Full Text Available Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies, which identified that the critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and the error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of the critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, which could affect extinction, recovery and population management strategy. The effect of population size is particularly strong in small populations with 100 individuals or less; the
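The reported exponential relationship between population size and critical mutation rate can be illustrated with a hedged sketch: synthetic (N, u_c) data following an assumed saturating-exponential form, recovered by a log-linear least-squares fit. Every parameter value here is invented for illustration and is not taken from the study.

```python
import numpy as np

# Illustrative only: synthetic (population size, critical mutation rate)
# data following the saturating-exponential shape described above,
# u_c(N) = u_inf - b*exp(-N/tau); all parameter values are assumptions.
u_inf, b, tau = 0.20, 0.15, 60.0
N = np.array([10, 25, 50, 100, 200, 400, 800], dtype=float)
u_c = u_inf - b * np.exp(-N / tau)

# With u_inf fixed (e.g. read off the large-N plateau), the model is
# log-linear: log(u_inf - u_c) = log(b) - N/tau, so ordinary least
# squares recovers b and tau.
slope, intercept = np.polyfit(N, np.log(u_inf - u_c), 1)
b_hat, tau_hat = float(np.exp(intercept)), float(-1.0 / slope)
print(b_hat, tau_hat)
```

On noisy real data the plateau u_inf would itself be uncertain, so a grid search or nonlinear fit over u_inf would wrap around this log-linear step.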

  3. Fast Fourier Transform Pricing Method for Exponential Lévy Processes

    KAUST Repository

    Crocce, Fabian

    2014-05-04

    We describe a set of partial integro-differential equations (PIDE) whose solutions represent the prices of European options when the underlying asset is driven by an exponential Lévy process. Exploiting the Lévy–Khintchine formula, we give a Fourier-based method for solving this class of PIDEs. We present a novel L1 error bound for solving a range of PIDEs in asset pricing and use this bound to set parameters for numerical methods.
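As a hedged illustration of Fourier-based pricing for exponential Lévy models (not the authors' method or their error bound), the sketch below implements the standard Carr-Madan FFT approach for the simplest exponential Lévy process, geometric Brownian motion, so the result can be checked against the Black-Scholes formula.

```python
import numpy as np
from math import log, exp, sqrt, pi, erf

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes call price, used here only to check the FFT result."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return S0 * Phi(d1) - K * exp(-r * T) * Phi(d2)

def fft_call(S0, K, r, sigma, T, alpha=1.5, N=2**12, eta=0.25):
    """Carr-Madan FFT pricing, with geometric Brownian motion as the
    (simplest possible) exponential Levy process."""
    lam = 2.0 * pi / (N * eta)          # log-strike grid spacing
    b = 0.5 * N * lam                   # log-strikes cover [-b, b)
    v = np.arange(N) * eta              # integration grid in frequency
    mu = log(S0) + (r - 0.5 * sigma**2) * T
    phi = lambda u: np.exp(1j * u * mu - 0.5 * sigma**2 * T * u**2)
    psi = exp(-r * T) * phi(v - (alpha + 1.0) * 1j) \
        / (alpha**2 + alpha - v**2 + 1j * (2.0 * alpha + 1.0) * v)
    w = np.ones(N)
    w[0] = 0.5                          # trapezoid weight at v = 0
    k = -b + lam * np.arange(N)         # log-strike grid
    calls = np.exp(-alpha * k) / pi * np.real(
        np.fft.fft(np.exp(1j * v * b) * psi * eta * w))
    return float(np.interp(log(K), k, calls))

price = fft_call(100.0, 100.0, 0.05, 0.2, 1.0)
print(price, bs_call(100.0, 100.0, 0.05, 0.2, 1.0))
```

Swapping in the characteristic function of another Lévy process (e.g. a jump-diffusion) via the Lévy–Khintchine formula prices the same contract with no other changes.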

  4. Fast Fourier Transform Pricing Method for Exponential Lévy Processes

    KAUST Repository

    Crocce, Fabian; Happola, Juho; Kiessling, Jonas; Tempone, Raul

    2014-01-01

    We describe a set of partial integro-differential equations (PIDE) whose solutions represent the prices of European options when the underlying asset is driven by an exponential Lévy process. Exploiting the Lévy–Khintchine formula, we give a Fourier-based method for solving this class of PIDEs. We present a novel L1 error bound for solving a range of PIDEs in asset pricing and use this bound to set parameters for numerical methods.

  5. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    OpenAIRE

    Rahnamaei, Z.; Nematollahi, N.; Farnoosh, R.

    2012-01-01

    We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameters by Maximum Likelihood and Bayesian methods. In a simulation study, we compute these estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  6. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    Z. Rahnamaei

    2012-01-01

    Full Text Available We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameters by Maximum Likelihood and Bayesian methods. In a simulation study, we compute these estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  7. Effect of insolation forecasting error on reduction of electricity charges for solar hot water system; Taiyonetsu kyuto system no denki ryokin sakugen koka ni oyobosu nissharyo yosoku gosa no eikyo

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, S [Maizuru National College of Technology, Kyoto (Japan); Kenmoku, Y; Sakakibara, T [Toyohashi University of Technology, Aichi (Japan); Kawamoto, T [Shizuoka University, Shizuoka (Japan)

    1996-10-27

    A solar hot water system can be operated economically if inexpensive midnight power is purchased to cover the shortage of solar energy predicted for the following day. Since errors in the insolation forecast affect both the system operation and the resulting reduction in electricity charges, their influence was investigated. The target temperature of the heat storage tank at each predetermined time is calculated on the previous evening from the predicted insolation, so that the water reaches the prescribed temperature at the feeding time on the following day. Midnight power is used for uniform heating to reach the target temperature by 7 o'clock the following morning. Uniform heating then continues from 8 o'clock until the feeding time, now using solar energy and daytime power to reach the target temperature. The split between midnight and daytime power is therefore determined by the target temperature for 7 o'clock, which is set so that the charge is minimized by optimizing the allocation between the two. When the insolation prediction error rate exceeds 30%, the electricity charge grows as the error rate rises; at error rates of 30% or below, the charge is little affected. 5 refs., 10 figs., 1 tab.
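The scheduling logic described above can be caricatured in a few lines; every number here (tariffs, demand) is invented for illustration and the function names are hypothetical, not from the paper.

```python
# Illustrative sketch of the scheduling idea described above; all numbers
# (tariffs, demand) are assumptions, not values from the paper.
NIGHT_RATE = 8.0    # currency units per kWh, cheap midnight power (assumed)
DAY_RATE = 25.0     # currency units per kWh, daytime power (assumed)

def plan_night_heating(demand_kwh, predicted_solar_kwh):
    """Cover the predicted solar shortfall with cheap midnight power."""
    return max(0.0, demand_kwh - predicted_solar_kwh)

def realized_cost(demand_kwh, predicted_solar_kwh, actual_solar_kwh):
    """Any remaining shortfall caused by forecast error is made up
    at the expensive daytime rate."""
    night_kwh = plan_night_heating(demand_kwh, predicted_solar_kwh)
    day_kwh = max(0.0, demand_kwh - actual_solar_kwh - night_kwh)
    return NIGHT_RATE * night_kwh + DAY_RATE * day_kwh

# Over-predicting insolation leaves a shortfall that must be bought at
# the day rate, so forecast error raises the bill:
print(realized_cost(10.0, 6.0, 6.0))   # -> 32.0 (perfect forecast)
print(realized_cost(10.0, 8.0, 6.0))   # -> 66.0 (2 kWh over-prediction)
```

Under-prediction has the opposite, milder cost: extra cheap night energy is bought that the sun would have supplied for free.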

  8. Exponential decay and exponential recovery of modal gains in high count rate channel electron multipliers

    International Nuclear Information System (INIS)

    Hahn, S.F.; Burch, J.L.

    1980-01-01

    A series of measurements on high count rate channel electron multipliers revealed an initial drop and subsequent recovery of the gain, both exponential in form. The FWHM of the pulse height distribution at the initial stage of testing can be used as a good criterion for selecting the operating bias voltage of the channel electron multiplier.

  9. ERROR VS REJECTION CURVE FOR THE PERCEPTRON

    OpenAIRE

    PARRONDO, JMR; VAN DEN BROECK, Christian

    1993-01-01

    We calculate the generalization error epsilon for a perceptron J, trained by a teacher perceptron T, on input patterns S that form a fixed angle arccos (J.S) with the student. We show that the error is reduced from a power law to an exponentially fast decay by rejecting input patterns that lie within a given neighbourhood of the decision boundary J.S = 0. On the other hand, the error vs. rejection curve epsilon(rho), where rho is the fraction of rejected patterns, is shown to be independent ...
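The effect reported above can be reproduced with a small Monte Carlo sketch (illustrative parameters, not from the paper): a student perceptron at a fixed angle to the teacher, with patterns near the student's decision boundary rejected.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, theta = 50, 200_000, 0.4

# Teacher T and student J are unit vectors at angle theta, so without
# rejection the generalization error for random patterns is theta/pi.
T = np.zeros(d)
T[0] = 1.0
J = np.zeros(d)
J[0], J[1] = np.cos(theta), np.sin(theta)

S = rng.standard_normal((n, d))
disagree = np.sign(S @ J) != np.sign(S @ T)
err_all = float(disagree.mean())

# Reject the half of the patterns lying closest to the student's decision
# boundary J.S = 0; the surviving patterns are classified far more reliably.
margin = np.abs(S @ J)
keep = margin > np.quantile(margin, 0.5)
err_kept = float(disagree[keep].mean())
print(err_all, err_kept)
```

The simulation only shows that rejection reduces the error; the power-law-to-exponential change of the decay quoted in the abstract is the analytical result.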

  10. Filtering of Discrete-Time Switched Neural Networks Ensuring Exponential Dissipative and $l_{2}$ - $l_{\\infty }$ Performances.

    Science.gov (United States)

    Choi, Hyun Duck; Ahn, Choon Ki; Karimi, Hamid Reza; Lim, Myo Taeg

    2017-10-01

    This paper studies delay-dependent exponential dissipative and l₂–l∞ filtering problems for discrete-time switched neural networks (DSNNs) including time-delayed states. By introducing a novel discrete-time inequality, which is a discrete-time version of the continuous-time Wirtinger-type inequality, we establish new sets of linear matrix inequality (LMI) criteria such that the discrete-time filtering error systems are exponentially stable with guaranteed performances in the exponential dissipative and l₂–l∞ senses. The design of the desired exponential dissipative and l₂–l∞ filters for DSNNs can be achieved by solving the proposed sets of LMI conditions. Via numerical simulation results, we show the validity of the proposed discrete-time filter design approach.

  11. An exponentiation method for XML element retrieval.

    Science.gov (United States)

    Wichaiwong, Tanakorn

    2014-01-01

    XML documents are now widely used for modelling and storing structured documents. The structure is very rich and carries important information about contents and their relationships, for example in e-Commerce. XML data-centric collections require query terms that allow users to specify constraints on the document structure; mapping structural queries and assigning weights are significant for determining the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. The structural information has been shown to improve the effectiveness of the search system by up to 52.60% over the BM25 baseline in terms of MAP.

  12. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although the restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
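The recursive estimator analyzed above can be sketched as follows (a minimal illustration with assumed parameters, not the authors' code): exponential averaging of subsequent periodograms of white noise, which strongly reduces the frequency-to-frequency fluctuations of the PSD estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n_seg, seg_len, alpha = 200, 256, 0.05   # alpha ~ 1/time-constant (assumed)

# Exponential (recursive) averaging of subsequent periodograms:
#   P_k = (1 - alpha) * P_{k-1} + alpha * I_k
psd = None
for _ in range(n_seg):
    x = rng.standard_normal(seg_len)
    I = np.abs(np.fft.rfft(x))**2 / seg_len     # raw periodogram
    psd = I if psd is None else (1 - alpha) * psd + alpha * I

# A single periodogram of white noise fluctuates wildly (chi-squared with
# 2 degrees of freedom per bin); the exponentially averaged estimate is
# far smoother across frequency.
single = np.abs(np.fft.rfft(rng.standard_normal(seg_len)))**2 / seg_len
print(float(np.var(single)), float(np.var(psd)))
```

The smaller alpha is (the larger the time constant), the lower the variance of the estimate, consistent with the Gaussian limit described in the abstract.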

  13. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although the restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  14. Poissonian renormalizations, exponentials, and power laws

    Science.gov (United States)

    Eliazar, Iddo

    2013-05-01

    This paper presents a comprehensive “renormalization study” of Poisson processes governed by exponential and power-law intensities. These Poisson processes are of fundamental importance, as they constitute the very bedrock of the universal extreme-value laws of Gumbel, Fréchet, and Weibull. Applying the method of Poissonian renormalization we analyze the emergence of these Poisson processes, unveil their intrinsic dynamical structures, determine their domains of attraction, and characterize their structural phase transitions. These structural phase transitions are shown to be governed by uniform and harmonic intensities, to have universal domains of attraction, to uniquely display intrinsic invariance, and to be intimately connected to “white noise” and to “1/f noise.” Thus, we establish a Poissonian explanation to the omnipresence of white and 1/f noises.

  16. An exponential multireference wave-function Ansatz

    International Nuclear Information System (INIS)

    Hanrath, Michael

    2005-01-01

    An exponential multireference wave-function Ansatz is formulated. In accordance with the state-universal coupled-cluster Ansatz of Jeziorski and Monkhorst [Phys. Rev. A 24, 1668 (1981)], the approach uses a reference-specific cluster operator. In order to achieve state selectiveness, the excitation- and reference-related amplitude indexing of the state-universal Ansatz is replaced by an indexing based on excited determinants. There is no reference determinant playing a particular role. The approach is size consistent, coincides with traditional single-reference coupled cluster if applied to a single reference, and converges to full configuration interaction with an increasing cluster operator excitation level. Initial applications to BeH₂, CH₂, Li₂, and nH₂ are reported.

  17. Transient accelerating scalar models with exponential potentials

    International Nuclear Information System (INIS)

    Cui Wen-Ping; Zhang Yang; Fu Zheng-Wen

    2013-01-01

    We study a known class of scalar dark energy models in which the potential has an exponential term and the current accelerating era is transient. We find that, although a decelerating era will return in the future, when extrapolating the model back to earlier stages (z ≳ 4), scalar dark energy becomes dominant over matter. So these models do not have the desired tracking behavior, and the predicted transient period of acceleration cannot be adopted into the standard scenario of the Big Bang cosmology. When couplings between the scalar field and matter are introduced, the models still have the same problem; only the time when deceleration returns will be varied. To achieve re-deceleration, one has to turn to alternative models that are consistent with the standard Big Bang scenario.

  18. An Exponentiation Method for XML Element Retrieval

    Science.gov (United States)

    2014-01-01

    XML documents are now widely used for modelling and storing structured documents. The structure is rich and carries important information about contents and their relationships, for example in e-Commerce. XML data-centric collections require query terms that allow users to specify constraints on the document structure; mapping structural queries and assigning weights are significant for determining the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. It has been shown that the structural information improves the effectiveness of the search system by up to 52.60% over the BM25 baseline at MAP. PMID:24696643

  19. Cascade DNA nanomachine and exponential amplification biosensing.

    Science.gov (United States)

    Xu, Jianguo; Wu, Zai-Sheng; Shen, Weiyu; Xu, Huo; Li, Hongling; Jia, Lee

    2015-11-15

    DNA is a versatile scaffold for the assembly of multifunctional nanostructures, and potential applications of various DNA nanodevices have recently been demonstrated for disease diagnosis and treatment. In the current study, a powerful cascade DNA nanomachine was developed that can execute exponential amplification of the p53 tumor suppressor gene. During the operation of the newly proposed DNA nanomachine, dual-cyclical nucleic acid strand-displacement polymerization (dual-CNDP) was ingeniously introduced, where the target trigger is repeatedly used as the fuel molecule and the nicked fragments are dramatically accumulated. Moreover, each displaced nicked fragment is able to activate another type of cyclical strand-displacement amplification, exponentially increasing the fluorescence intensity. Essentially, one target binding event can induce a considerable number of subsequent reactions, which is why the nanodevice is called a cascade DNA nanomachine. It implements several functions, acting as recognition element, signaling probe, polymerization primer, and template. Using the autonomous operation of the developed DNA nanomachine, the p53 gene can be quantified over the wide concentration range from 0.05 to 150 nM with a detection limit of 50 pM. Taking into account the final volume of the mixture, the detection limit is calculated to be as low as 6.2 pM, a desirable assay capability. More strikingly, the mutant gene can easily be distinguished from the wild-type one. The proof-of-concept demonstrations reported herein are expected to promote the development and application of DNA nanomachines, showing great potential value in basic biology and medical diagnosis. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Thermoluminescence dating of chinese porcelain using a regression method of saturating exponential in pre-dose technique

    International Nuclear Information System (INIS)

    Wang Weida; Xia Junding; Zhou Zhixin; Leung, P.L.

    2001-01-01

    Thermoluminescence (TL) dating using a regression method of saturating exponential in the pre-dose technique is described. 23 porcelain samples from past dynasties of China were dated by this method. The results show that the TL ages are in reasonable agreement with archaeological dates within a standard deviation of 27%. Such an error is acceptable in porcelain dating.
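
    The record names the regression but not its functional form; a common choice for pre-dose sensitization curves is a saturating exponential S(D) = S_max(1 - exp(-D/D0)). The sketch below fits that form on synthetic data (the function names and parameter values are illustrative, not from the record):

```python
import math

def fit_saturating_exponential(doses, signals):
    """Fit S(D) = S_max * (1 - exp(-D / D0)) by scanning trial values
    of D0 and solving for S_max in closed form (linear least squares)."""
    best = None
    for i in range(1, 2001):
        d0 = 0.01 * i * max(doses)              # trial dose-scale parameter
        basis = [1.0 - math.exp(-d / d0) for d in doses]
        smax = (sum(b * s for b, s in zip(basis, signals))
                / sum(b * b for b in basis))
        sse = sum((smax * b - s) ** 2 for b, s in zip(basis, signals))
        if best is None or sse < best[0]:
            best = (sse, smax, d0)
    _, smax_fit, d0_fit = best
    return smax_fit, d0_fit

# Synthetic sensitivity-vs-dose data: S_max = 100, D0 = 5 (arbitrary units)
doses = [1.0, 2.0, 4.0, 8.0, 16.0]
signals = [100.0 * (1.0 - math.exp(-d / 5.0)) for d in doses]
smax_fit, d0_fit = fit_saturating_exponential(doses, signals)
```

    On noise-free synthetic data the scan recovers the generating parameters to within the grid spacing.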

  1. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains the definition of medication errors, the scope of the medication error problem, the types of medication errors and their common causes, the monitoring and consequences of medication errors, and their prevention and management, with tables that make the material easy to follow.

  2. Double-exponential decay of orientational correlations in semiflexible polyelectrolytes.

    Science.gov (United States)

    Bačová, P; Košovan, P; Uhlík, F; Kuldová, J; Limpouchová, Z; Procházka, K

    2012-06-01

    In this paper we revisited the problem of persistence length of polyelectrolytes. We performed a series of molecular dynamics simulations using the Debye-Hückel approximation for electrostatics to test several equations which go beyond the classical description of Odijk, Skolnick and Fixman (OSF). The data confirm earlier observations that in the limit of large contour separations the decay of orientational correlations can be described by a single-exponential function and the decay length can be described by the OSF relation. However, at short contour separations the behaviour is more complex. Recent equations which introduce more complicated expressions and an additional length scale could describe the results very well on both the short and the long length scale. The equation of Manghi and Netz when used without adjustable parameters could capture the qualitative trend but deviated in a quantitative comparison. Better quantitative agreement within the estimated error could be obtained using three equations with one adjustable parameter: 1) the equation of Manghi and Netz; 2) the equation proposed by us in this paper; 3) the equation proposed by Cannavacciuolo and Pedersen. Two characteristic length scales can be identified in the data: the intrinsic or bare persistence length and the electrostatic persistence length. All three equations use a single parameter to describe a smooth crossover from the short-range behaviour dominated by the intrinsic stiffness of the chain to the long-range OSF-like behaviour.

  3. An exponential decay model for mediation.

    Science.gov (United States)

    Fritz, Matthew S

    2014-10-01

    Mediation analysis is often used to investigate mechanisms of change in prevention research. Results finding mediation are strengthened when longitudinal data are used because of the need for temporal precedence. Current longitudinal mediation models have focused mainly on linear change, but many variables in prevention change nonlinearly across time. The most common solution to nonlinearity is to add a quadratic term to the linear model, but this can lead to the use of the quadratic function to explain all nonlinearity, regardless of theory and the characteristics of the variables in the model. The current study describes the problems that arise when quadratic functions are used to describe all nonlinearity and how the use of nonlinear functions, such as exponential decay, address many of these problems. In addition, nonlinear models provide several advantages over polynomial models including usefulness of parameters, parsimony, and generalizability. The effects of using nonlinear functions for mediation analysis are then discussed and a nonlinear growth curve model for mediation is presented. An empirical example using data from a randomized intervention study is then provided to illustrate the estimation and interpretation of the model. Implications, limitations, and future directions are also discussed.

  4. Stretched Exponential relaxation in pure Se glass

    Science.gov (United States)

    Dash, S.; Ravindren, S.; Boolchand, P.

    A universal feature of glasses is stretched exponential relaxation, f(t) = exp[-(t/τ)^β]. The model of diffusion of excitations to randomly distributed traps in a glass by Phillips [1] yields the stretched exponent β = d/(d+2), where d is the effective dimensionality. We have measured the enthalpy of relaxation ΔH_nr(t_w) at T_g of Se glass in modulated DSC experiments as glasses age at 300 K and find β = 0.43(2) for t_w in the 0
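
    The two relations quoted in this record can be sketched directly; inverting β = d/(d+2) gives the effective dimensionality implied by a measured stretching exponent (the value below simply reuses the record's β = 0.43):

```python
import math

def stretched_exponential(t, tau, beta):
    """Kohlrausch relaxation function f(t) = exp[-(t/tau)**beta]."""
    return math.exp(-((t / tau) ** beta))

def effective_dimensionality(beta):
    """Invert beta = d/(d+2) from the trap-diffusion model:
    d = 2*beta/(1 - beta)."""
    return 2.0 * beta / (1.0 - beta)

beta_measured = 0.43            # value reported for aged Se glass
d_eff = effective_dimensionality(beta_measured)
```

    A measured β = 0.43 corresponds to an effective dimensionality of roughly 1.5.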

  5. Exponential growth and atmospheric carbon dioxide

    International Nuclear Information System (INIS)

    Laurmann, J.A.; Rotty, R.M.

    1983-01-01

    The adequacy of assumptions required to project atmospheric CO₂ concentrations in time frames of practical importance is reviewed. Relevant issues concern the form assumed for future fossil fuel release, carbon cycle approximations, and the implications of revisions in fossil fuel patterns required to maintain atmospheric CO₂ levels below a chosen threshold. In general, we find that with a judiciously selected exponential fossil fuel release rate, and with a constant airborne fraction, we can estimate atmospheric CO₂ growth over the next 50 years based on essentially surprise-free scenarios. Resource depletion effects must be included for projections beyond about 50 years, and on this time frame the constant airborne fraction approximation has to be questioned as well (especially in later years when fossil fuel use begins to taper off). For projections over 100 years, both energy demand scenarios and currently available carbon cycle models have sufficient uncertainties that atmospheric CO₂ levels derived from them are not much better than guesses.
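
    A minimal sketch of this kind of projection, assuming exponential emissions growth and a constant airborne fraction. All parameter values below are illustrative assumptions, not the paper's; the only external fact used is the standard conversion of roughly 2.123 GtC per ppm of atmospheric CO₂:

```python
def project_co2(c0_ppm=340.0, e0_gtc=5.0, growth=0.02,
                airborne_fraction=0.55, years=50):
    """Accumulate atmospheric CO2 given emissions E(t) = E0*(1+growth)**t
    and a fixed airborne fraction; ~2.123 GtC raises CO2 by about 1 ppm."""
    ppm_per_gtc = 1.0 / 2.123
    c, e = c0_ppm, e0_gtc
    for _ in range(years):
        c += airborne_fraction * e * ppm_per_gtc   # retained airborne share
        e *= 1.0 + growth                          # exponential release rate
    return c

c_proj = project_co2()   # concentration after a 50-year projection
```

    With these illustrative inputs the 50-year projection lands near 450 ppm, showing how strongly the growth rate and airborne fraction drive the result.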

  6. Tight Error Bounds for Fourier Methods for Option Pricing for Exponential Lévy Processes

    KAUST Repository

    Crocce, Fabian; Häppölä, Juho; Keissling, Jonas; Tempone, Raul

    2016-01-01

    for the discontinuities in the asset price. The Lévy–Khintchine formula provides an explicit representation of the characteristic function of a Lévy process (cf. [6]): one can derive an exact expression for the Fourier transform of the solution of the relevant PIDE

  7. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B₀, where B is the transmitted backlighter (BL) signal and B₀ is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB₀/B₀, and consequently Δk/k = (1/ln(T))(ΔB/B + ΔB₀/B₀) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
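
    As a numerical sketch of this budget, assuming the fractional-error relation Δk/k = (ΔB/B + ΔB₀/B₀)/|ln T| + Δ(ρL)/(ρL), which follows from k = -ln(T)/(ρL); the input percentages below are illustrative, not the experiment's values:

```python
import math

def opacity_fractional_error(T, dB_over_B, dB0_over_B0, dpL_over_pL):
    """Propagate measurement errors into the opacity k = -ln(T)/(rho*L).
    Since d(ln T) = dT/T = dB/B + dB0/B0, the fractional error magnitude is
    |dk/k| = (dB/B + dB0/B0)/|ln T| + d(rho*L)/(rho*L)."""
    return (dB_over_B + dB0_over_B0) / abs(math.log(T)) + dpL_over_pL

# Example: 30% transmission, 2% on each backlighter signal, 3% on rho*L
err = opacity_fractional_error(T=0.3, dB_over_B=0.02,
                               dB0_over_B0=0.02, dpL_over_pL=0.03)
```

    Note that the 1/|ln T| factor amplifies the signal terms as T approaches 1 and suppresses them at low transmission, which is one reason to keep transmission well below unity.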

  8. Exponential stability of delayed fuzzy cellular neural networks with diffusion

    International Nuclear Information System (INIS)

    Huang Tingwen

    2007-01-01

    The exponential stability of delayed fuzzy cellular neural networks (FCNN) with diffusion is investigated. Exponential stability, significant for applications of neural networks, is obtained under conditions that are easily verified by a new approach. Earlier results on the exponential stability of FCNN with time-dependent delay, a special case of the model studied in this paper, are improved without using the time-varying term condition: dτ(t)/dt < μ

  9. Chemical model reduction under uncertainty

    KAUST Repository

    Malpica Galassi, Riccardo

    2017-03-06

    A general strategy for analysis and reduction of uncertain chemical kinetic models is presented, and its utility is illustrated in the context of ignition of hydrocarbon fuel–air mixtures. The strategy is based on a deterministic analysis and reduction method which employs computational singular perturbation analysis to generate simplified kinetic mechanisms, starting from a detailed reference mechanism. We model uncertain quantities in the reference mechanism, namely the Arrhenius rate parameters, as random variables with prescribed uncertainty factors. We propagate this uncertainty to obtain the probability of inclusion of each reaction in the simplified mechanism. We propose probabilistic error measures to compare predictions from the uncertain reference and simplified models, based on the comparison of the uncertain dynamics of the state variables, where the mixture entropy is chosen as progress variable. We employ the construction for the simplification of an uncertain mechanism in an n-butane–air mixture homogeneous ignition case, where a 176-species, 1111-reactions detailed kinetic model for the oxidation of n-butane is used with uncertainty factors assigned to each Arrhenius rate pre-exponential coefficient. This illustration is employed to highlight the utility of the construction, and the performance of a family of simplified models produced depending on chosen thresholds on importance and marginal probabilities of the reactions.

  10. On the conditions of exponential stability in active disturbance rejection control based on singular perturbation analysis

    Science.gov (United States)

    Shao, S.; Gao, Z.

    2017-10-01

    Stability of active disturbance rejection control (ADRC) is analysed in the presence of unknown, nonlinear, and time-varying dynamics. In the framework of singular perturbations, the closed-loop error dynamics are semi-decoupled into a relatively slow subsystem (the feedback loop) and a relatively fast subsystem (the extended state observer), respectively. It is shown, analytically and geometrically, that there exists a unique exponentially stable solution if the size of the initial observer error is sufficiently small, i.e. of the same order as the inverse of the observer bandwidth. The process of developing the uniformly asymptotic solution of the system reveals the conditions for the stability of ADRC and the relationship between the rate of change in the total disturbance and the size of the estimation error. The differentiability of the total disturbance is the only assumption made.

  11. Exponential Synchronization of Networked Chaotic Delayed Neural Network by a Hybrid Event Trigger Scheme.

    Science.gov (United States)

    Fei, Zhongyang; Guan, Chaoxu; Gao, Huijun

    2018-06-01

    This paper is concerned with the exponential synchronization of a master-slave chaotic delayed neural network under an event-trigger control scheme. The model is established in a network control framework, where both external disturbance and network-induced delay are taken into consideration. The desired aim is to synchronize the master and slave systems with limited communication capacity and network bandwidth. In order to save network resources, we adopt a hybrid event-trigger approach, which not only reduces the number of data packets sent out, but also rules out the Zeno phenomenon. By using an appropriate Lyapunov functional, a sufficient stability criterion is proposed for the error system with an extended dissipativity performance index. Moreover, the hybrid event-trigger scheme and controller are codesigned for the network-based delayed neural network to guarantee exponential synchronization between the master and slave systems. The effectiveness and potential of the proposed results are demonstrated through a numerical example.

  12. Lake Area Analysis Using Exponential Smoothing Model and Long Time-Series Landsat Images in Wuhan, China

    Directory of Open Access Journals (Sweden)

    Gonghao Duan

    2018-01-01

    Full Text Available The loss of lake area significantly influences climate change in a region, and this loss represents a serious and unavoidable challenge to maintaining ecological sustainability while lakes are being filled in. Therefore, mapping and forecasting changes in lakes is critical for protecting the environment and mitigating ecological problems in the urban district. We created an accessible map displaying area changes for 82 lakes in Wuhan city using remote sensing data in conjunction with visual interpretation, combining field data with Landsat 2/5/7/8 Thematic Mapper (TM) time-series images for the period 1987–2013. In addition, we applied a quadratic exponential smoothing model to forecast lake area changes in Wuhan city. The map provides, for the first time, estimates of lake development in Wuhan using data required for local-scale studies. The model predicted a lake area reduction of 18.494 km2 in 2015. The average error reached 0.23 with a correlation coefficient of 0.98, indicating that the model is reliable. The paper provides a numerical analysis and forecasting method for a better understanding of lake area changes. The modeling and mapping results can help assess aquatic habitat suitability and property planning for Wuhan lakes.
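
    The record does not spell out the smoothing equations; one standard reading of "quadratic exponential smoothing" is Brown's one-parameter triple-smoothing scheme, sketched here on made-up lake-area numbers (α and the series are illustrative assumptions, not the paper's data):

```python
def brown_quadratic_smoothing(series, alpha):
    """Brown's one-parameter quadratic exponential smoothing.
    Returns a forecast function F(m) = a + b*m + 0.5*c*m**2 for
    m steps beyond the last observation."""
    s1 = s2 = s3 = series[0]                 # common initialization choice
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1    # first smoothing pass
        s2 = alpha * s1 + (1 - alpha) * s2   # second pass
        s3 = alpha * s2 + (1 - alpha) * s3   # third pass
    k = alpha / (2 * (1 - alpha) ** 2)
    a = 3 * s1 - 3 * s2 + s3                 # level estimate
    b = k * ((6 - 5 * alpha) * s1 - 2 * (5 - 4 * alpha) * s2
             + (4 - 3 * alpha) * s3)         # trend estimate
    c = (alpha ** 2 / (1 - alpha) ** 2) * (s1 - 2 * s2 + s3)  # curvature
    return lambda m: a + b * m + 0.5 * c * m * m

# Illustrative, made-up lake-area series (km^2), not the paper's data
areas = [220.0, 214.5, 209.3, 204.4, 199.8, 195.5, 191.5, 187.8]
forecast = brown_quadratic_smoothing(areas, alpha=0.5)
next_year = forecast(1)
```

    The forecast extrapolates level, trend, and curvature; for the declining series above, the one-step forecast falls below the last observation.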

  13. Finite Difference Solution of Elastic-Plastic Thin Rotating Annular Disk with Exponentially Variable Thickness and Exponentially Variable Density

    Directory of Open Access Journals (Sweden)

    Sanjeev Sharma

    2013-01-01

    Full Text Available Elastic-plastic stresses, strains, and displacements have been obtained for a thin rotating annular disk with exponentially variable thickness and exponentially variable density and nonlinear strain-hardening material, by the finite difference method using the von Mises yield criterion. Results have been computed numerically and depicted graphically. From the numerical results, it can be concluded that a disk whose thickness decreases radially and whose density increases radially is on the safer side of design, compared both to a disk with exponentially varying thickness and exponentially varying density and to a flat disk.

  14. Exponential characteristic spatial quadrature for discrete ordinates radiation transport with rectangular cells

    International Nuclear Information System (INIS)

    Minor, B.; Mathews, K.

    1995-01-01

    The exponential characteristic (EC) spatial quadrature for discrete ordinates neutral particle transport previously introduced in slab geometry is extended here to x-y geometry with rectangular cells. The method is derived and compared with current methods. It is similar to the linear characteristic (LC) quadrature (a linear-linear moments method) but differs by assuming an exponential distribution of the scattering source within each cell, S(x, y) = a exp(bx + cy), whose parameters are root-solved to match the known (from the previous iteration) spatial average and first moments of the source over the cell. Similarly, EC assumes exponential distributions of flux along cell edges through which particles enter the cell, with parameters chosen to match the average and first moments of flux, as passed from the adjacent, upstream cells (or as determined by boundary conditions). Like the linear adaptive (LA) method, EC is positive and nonlinear. It is more accurate than LA and does not require subdivision of cells. The nonlinearity has not interfered with convergence. The exponential moment functions, which were introduced with the slab geometry method, are extended to arbitrary dimensions (numbers of arguments) and used to avoid numerical ill conditioning. As in slab geometry, the method approaches O(Δx⁴) global truncation error on fine-enough meshes, while the error is insensitive to mesh size for coarse meshes. Performance of the method is compared with that of the step characteristic, LC, linear nodal, step adaptive, and LA schemes. The EC method is a strong performer with scattering ratios ranging from 0 to 0.9 (the range tested), particularly so for lower scattering ratios. As in slab geometry, EC is computationally more costly per cell than current methods but can be accurate with very thick cells, leading to increased computational efficiency on appropriate problems

  15. Exponential local discriminant embedding and its application to face recognition.

    Science.gov (United States)

    Dornaika, Fadi; Bosaghzadeh, Alireza

    2013-06-01

    Local discriminant embedding (LDE) has been recently proposed to overcome some limitations of the global linear discriminant analysis method. In the case of a small training data set, however, LDE cannot directly be applied to high-dimensional data. This is the so-called small-sample-size (SSS) problem. The classical solution to this problem was to apply dimensionality reduction to the raw data (e.g., using principal component analysis). In this paper, we introduce a novel discriminant technique called "exponential LDE" (ELDE). The proposed ELDE can be seen as an extension of the LDE framework in two directions. First, the proposed framework overcomes the SSS problem without discarding the discriminant information that was contained in the null space of the locality preserving scatter matrices associated with LDE. Second, the proposed ELDE is equivalent to transforming the original data into a new space by distance diffusion mapping (similar to kernel-based nonlinear mapping), after which LDE is applied in that new space. As a result of diffusion mapping, the margin between samples belonging to different classes is enlarged, which is helpful in improving classification accuracy. The experiments are conducted on five public face databases: Yale, Extended Yale, PF01, Pose, Illumination, and Expression (PIE), and Facial Recognition Technology (FERET). The results show that the performance of the proposed ELDE is better than that of LDE and many state-of-the-art discriminant analysis techniques.

  16. Exponential Correlation of IQ and the Wealth of Nations

    Science.gov (United States)

    Dickerson, Richard E.

    2006-01-01

    Plots of mean IQ and per capita real Gross Domestic Product for groups of 81 and 185 nations, as collected by Lynn and Vanhanen, are best fitted by an exponential function of the form GDP = a × 10^(b·IQ), where a and b are empirical constants. Exponential fitting yields markedly higher correlation coefficients than either linear or…
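
    A fit of the form GDP = a × 10^(b·IQ) can be reproduced generically by linear least squares on log10(GDP). The sketch below uses synthetic data; a = 0.01 and b = 0.05 are made-up constants, not the study's estimates:

```python
import math

def fit_exponential(iq, gdp):
    """Fit GDP = a * 10**(b*IQ) by ordinary least squares on log10(GDP)."""
    n = len(iq)
    y = [math.log10(g) for g in gdp]
    mean_x = sum(iq) / n
    mean_y = sum(y) / n
    b = (sum((x - mean_x) * (v - mean_y) for x, v in zip(iq, y))
         / sum((x - mean_x) ** 2 for x in iq))    # slope of the log-linear fit
    a = 10.0 ** (mean_y - b * mean_x)             # back-transformed intercept
    return a, b

# Synthetic data generated from a = 0.01, b = 0.05 (illustrative only)
iqs = [80.0, 85.0, 90.0, 95.0, 100.0, 105.0]
gdps = [0.01 * 10.0 ** (0.05 * q) for q in iqs]
a_fit, b_fit = fit_exponential(iqs, gdps)
```

    On exactly exponential data the log-linear regression recovers the generating constants to machine precision.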

  17. Is the basic law of radioactive decay exponential?

    International Nuclear Information System (INIS)

    Gopych, P.M.; Zalyubovskii, I.I.

    1988-01-01

    Basic theoretical approaches to the explanation of the observed exponential nature of the decay law are discussed together with the hypothesis that it is not exponential. The significance of this question and its connection with fundamental problems of modern physics are considered. The results of experiments relating to investigation of the form of the decay law are given

  18. Exponential stability in a scalar functional differential equation

    Directory of Open Access Journals (Sweden)

    Pituk Mihály

    2006-01-01

    Full Text Available We establish a criterion for the global exponential stability of the zero solution of the scalar retarded functional differential equation whose linear part generates a monotone semiflow on the phase space with respect to the exponential ordering, and the nonlinearity has at most linear growth.

  19. Blowing-up semilinear wave equation with exponential nonlinearity ...

    Indian Academy of Sciences (India)

    H¹-norm. Hence, it is legitimate to consider an exponential nonlinearity. Moreover, the choice of an exponential nonlinearity emerges from a possible control of solutions via a Moser–Trudinger type inequality [1, 16, 19]. In fact, Nakamura and Ozawa [17] proved global well-posedness and scattering for small Cauchy data in ...

  20. Review of "Going Exponential: Growing the Charter School Sector's Best"

    Science.gov (United States)

    Garcia, David

    2011-01-01

    This Progressive Policy Institute report argues that charter schools should be expanded rapidly and exponentially. Citing exponential growth organizations, such as Starbucks and Apple, as well as the rapid growth of molds, viruses and cancers, the report advocates for similar growth models for charter schools. However, there is no explanation of…

  1. Exponential convergence on a continuous Monte Carlo transport problem

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-01-01

    For more than a decade, it has been known that exponential convergence on discrete transport problems was possible using adaptive Monte Carlo techniques. An adaptive Monte Carlo method that empirically produces exponential convergence on a simple continuous transport problem is described

  2. Global exponential stability for nonautonomous cellular neural networks with delays

    International Nuclear Information System (INIS)

    Zhang Qiang; Wei Xiaopeng; Xu Jin

    2006-01-01

    In this Letter, by utilizing the Lyapunov functional method and Halanay inequalities, we analyze the global exponential stability of nonautonomous cellular neural networks with delay. Several new sufficient conditions ensuring global exponential stability of the network are obtained. The results given here extend and improve earlier published results. An example is given to demonstrate the effectiveness of the obtained results

  3. Stochastic B-series and order conditions for exponential integrators

    DEFF Research Database (Denmark)

    Arara, Alemayehu Adugna; Debrabant, Kristian; Kværnø, Anne

    2018-01-01

    We discuss stochastic differential equations with a stiff linear part and their approximation by stochastic exponential integrators. Representing the exact and approximate solutions using B-series and rooted trees, we derive the order conditions for stochastic exponential integrators. The resulting...

  4. Residual, restarting and Richardson iteration for the matrix exponential

    NARCIS (Netherlands)

    Bochev, Mikhail A.; Grimm, Volker; Hochbruck, Marlis

    2013-01-01

    A well-known problem in computing some matrix functions iteratively is the lack of a clear, commonly accepted residual notion. An important matrix function for which this is the case is the matrix exponential. Suppose the matrix exponential of a given matrix times a given vector has to be computed.

  5. Residual, restarting and Richardson iteration for the matrix exponential

    NARCIS (Netherlands)

    Bochev, Mikhail A.

    2010-01-01

    A well-known problem in computing some matrix functions iteratively is a lack of a clear, commonly accepted residual notion. An important matrix function for which this is the case is the matrix exponential. Assume the matrix exponential of a given matrix times a given vector has to be computed. We

  6. Ranking Exponential Trapezoidal Fuzzy Numbers by Median Value

    Directory of Open Access Journals (Sweden)

    S. Rezvani

    2013-12-01

    Full Text Available In this paper, we present a method for ranking two exponential trapezoidal fuzzy numbers. A median value is proposed for the ranking of exponential trapezoidal fuzzy numbers. For validation, the results of the proposed approach are compared with different existing approaches.

  7. New Results of Global Exponential Stabilization for BLDCMs System

    OpenAIRE

    Fengxia Tian; Fangchao Zhen; Guopeng Zhou; Xiaoxin Liao

    2015-01-01

    The global exponential stabilization for brushless direct current motor (BLDCM) system is studied. Four linear and simple feedback controllers are proposed to realize the global stabilization of BLDCM with exponential convergence rate; the control law used in each theorem is less conservative and more concise. Finally, an example is given to demonstrate the correctness of the proposed results.

  8. Possible stretched exponential parametrization for humidity absorption in polymers.

    Science.gov (United States)

    Hacinliyan, A; Skarlatos, Y; Sahin, G; Atak, K; Aybar, O O

    2009-04-01

    Polymer thin films have irregular transient current characteristics under constant voltage. In hydrophilic and hydrophobic polymers, the irregularity is also known to depend on the humidity absorbed by the polymer sample. Different stretched exponential models are studied and it is shown that the absorption of humidity as a function of time can be adequately modelled by a class of these stretched exponential absorption models.

  9. Exponential B-splines and the partition of unity property

    DEFF Research Database (Denmark)

    Christensen, Ole; Massopust, Peter

    2012-01-01

    We provide an explicit formula for a large class of exponential B-splines. Also, we characterize the cases where the integer-translates of an exponential B-spline form a partition of unity up to a multiplicative constant. As an application of this result we construct explicitly given pairs of dual...

  10. Modeling of Single Event Transients With Dual Double-Exponential Current Sources: Implications for Logic Cell Characterization

    Science.gov (United States)

    Black, Dolores A.; Robinson, William H.; Wilcox, Ian Z.; Limbrick, Daniel B.; Black, Jeffrey D.

    2015-08-01

    Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. An accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. A small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. The parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
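
    A double-exponential source is conventionally written I(t) = (Q/(τ_f − τ_r))·(e^(−t/τ_f) − e^(−t/τ_r)), normalized so its time integral equals the collected charge Q; the dual model described above simply sums two such sources in parallel. The parameter values below are illustrative, not extracted from the paper:

```python
import math

def double_exp(t, q, tau_fall, tau_rise):
    """Single double-exponential current pulse carrying total charge q.
    The prefactor q/(tau_fall - tau_rise) normalizes the integral of
    I(t) over [0, inf) to exactly q."""
    if t < 0.0:
        return 0.0
    return (q / (tau_fall - tau_rise)) * (
        math.exp(-t / tau_fall) - math.exp(-t / tau_rise))

def dual_double_exp(t, q1, tf1, tr1, q2, tf2, tr2):
    """Two double-exponential sources in parallel: a fast prompt
    component plus a slower delayed component."""
    return double_exp(t, q1, tf1, tr1) + double_exp(t, q2, tf2, tr2)

# Illustrative parameters in SI units (charge in C, time in s)
q1, tf1, tr1 = 50e-15, 200e-12, 10e-12    # prompt collection component
q2, tf2, tr2 = 30e-15, 2e-9, 100e-12      # delayed collection component
peak = max(dual_double_exp(i * 1e-12, q1, tf1, tr1, q2, tf2, tr2)
           for i in range(5000))
```

    Because each component integrates to its own charge, the summed waveform delivers q1 + q2 in total, which is the constraint a circuit-level SET model must preserve.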

  11. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  12. Sub-exponential mixing of random billiards driven by thermostats

    International Nuclear Information System (INIS)

    Yarmola, Tatiana

    2013-01-01

    We study the class of open continuous-time mechanical particle systems introduced in the paper by Khanin and Yarmola (2013 Commun. Math. Phys. 320 121–47). Using the discrete-time results from Khanin and Yarmola (2013 Commun. Math. Phys. 320 121–47) we demonstrate rigorously that, in continuous time, a unique steady state exists and is sub-exponentially mixing. Moreover, all initial distributions converge to the steady state and, for a large class of initial distributions, convergence to the steady state is sub-exponential. The main obstacle to exponential convergence is the existence of slow particles in the system. (paper)

  13. Analytic results for asymmetric random walk with exponential transition probabilities

    International Nuclear Information System (INIS)

    Gutkowicz-Krusin, D.; Procaccia, I.; Ross, J.

    1978-01-01

    We present here exact analytic results for a random walk on a one-dimensional lattice with asymmetric, exponentially distributed jump probabilities. We derive the generating functions of such a walk for a perfect lattice and for a lattice with absorbing boundaries. We obtain solutions for some interesting moment properties, such as mean first passage time, drift velocity, dispersion, and branching ratio for absorption. The symmetric exponential walk is solved as a special case. The scaling of the mean first passage time with the size of the system for the exponentially distributed walk is determined by the symmetry and is independent of the range
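    The quantities treated analytically above can also be estimated by simulation. The sketch below is a continuous-position Monte Carlo analogue (not the paper's exact lattice formalism): jump lengths are exponentially distributed, asymmetry enters through the right-jump probability, and the mean first passage time to a distant boundary is averaged over runs. All numerical parameters are arbitrary choices for illustration.

```python
import random

rng = random.Random(1)

def first_passage_steps(n_sites, p_right=0.6, rate=1.0):
    """Steps until a walker starting at 0 first reaches position >= n_sites.
    Jump lengths are exponentially distributed with the given rate; the jump
    goes right with probability p_right (the asymmetry of the walk)."""
    pos, steps = 0.0, 0
    while pos < n_sites and steps < 1_000_000:   # safety cap
        length = rng.expovariate(rate)
        pos += length if rng.random() < p_right else -length
        steps += 1
    return steps

# Mean first passage time, averaged over independent walks
mfpt = sum(first_passage_steps(50) for _ in range(200)) / 200
```

    With drift (p_right - p_left) times the mean jump length, the expected first passage time scales linearly with the system size, consistent with the drift-dominated regime.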

  14. Exponential Shear Flow of Linear, Entangled Polymeric Liquids

    DEFF Research Database (Denmark)

    Neergaard, Jesper; Park, Kyungho; Venerus, David C.

    2000-01-01

    A previously proposed reptation model is used to interpret exponential shear flow data taken on an entangled polystyrene solution. Both shear and normal stress measurements are made during exponential shear using mechanical means. The model is capable of explaining all trends seen in the data, and suggests a novel analysis of the data. This analysis demonstrates that exponential shearing flow is no more capable of stretching polymer chains than is inception of steady shear at comparable instantaneous shear rates. In fact, all exponential shear flow stresses measured are bounded quantitatively...

  15. (Anti)symmetric multivariate exponential functions and corresponding Fourier transforms

    International Nuclear Information System (INIS)

    Klimyk, A U; Patera, J

    2007-01-01

    We define and study symmetrized and antisymmetrized multivariate exponential functions. They are defined as determinants and antideterminants of matrices whose entries are exponential functions of one variable. These functions are eigenfunctions of the Laplace operator on the corresponding fundamental domains satisfying certain boundary conditions. To symmetric and antisymmetric multivariate exponential functions there correspond Fourier transforms. There are three types of such Fourier transforms: expansions into the corresponding Fourier series, integral Fourier transforms and multivariate finite Fourier transforms. Eigenfunctions of the integral Fourier transforms are found

  16. Exponential integrators in time-dependent density-functional calculations

    Science.gov (United States)

    Kidd, Daniel; Covington, Cody; Varga, Kálmán

    2017-12-01

    The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy of the exponential integrator methods is less enhanced, but they still match or outperform the best of the conventional methods tested.

  17. Laminar phase flow for an exponentially tapered Josephson oscillator

    DEFF Research Database (Denmark)

    Benabdallah, A.; Caputo, J. G.; Scott, Alwyn C.

    2000-01-01

    Exponential tapering and inhomogeneous current feed were recently proposed as means to improve the performance of a Josephson flux flow oscillator. Extensive numerical results backed up by analysis are presented here that support this claim and demonstrate that exponential tapering reduces the small current instability region and leads to a laminar flow regime where the voltage wave form is periodic, giving the oscillator minimal spectral width. Tapering also leads to an increased output power. Since exponential tapering is not expected to increase the difficulty of fabricating a flux flow...

  18. Periodic oscillation and exponential stability of delayed CNNs

    Science.gov (United States)

    Cao, Jinde

    2000-05-01

    Both the global exponential stability and the periodic oscillation of a class of delayed cellular neural networks (DCNNs) are further studied in this Letter. By applying some new analysis techniques and constructing suitable Lyapunov functionals, some simple and new sufficient conditions are given ensuring global exponential stability and the existence of periodic oscillatory solutions of DCNNs. These conditions can be applied to design globally exponentially stable DCNNs and periodic oscillatory DCNNs, and are easily checked in practice by simple algebraic methods. These play an important role in the design and applications of DCNNs.

  19. Effect of benzalkonium chloride on viability and energy metabolism in exponential- and stationary-growth-phase cells of Listeria monocytogenes

    NARCIS (Netherlands)

    Luppens, S.B.I.; Abee, T.; Oosterom, J.

    2001-01-01

    The difference in killing exponential- and stationary-phase cells of Listeria monocytogenes by benzalkonium chloride (BAC) was investigated by plate counting and linked to relevant bioenergetic parameters. At a low concentration of BAC (8 mg liter-1), a similar reduction in viable cell numbers was

  20. Error field considerations for BPX

    International Nuclear Information System (INIS)

    LaHaye, R.J.

    1992-01-01

    Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing mode stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed along with possible correcting coils for reducing such field errors.

  1. Dual processing and diagnostic errors.

    Science.gov (United States)

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to a consistent reduction in error rates.

  2. Estimation of the reliability function for two-parameter exponentiated Rayleigh or Burr type X distribution

    Directory of Open Access Journals (Sweden)

    Anupam Pathak

    2014-11-01

    Abstract: Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model with a wide variety of applications in many areas, and its main advantage is its ability in the context of lifetime events among other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are the ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased estimators (UMVUEs) and maximum likelihood estimators (MLEs) of the reliability function R(t)=P(X>t) and P=P(X>Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique of obtaining these parametric functions is introduced, in which the major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through a simulation study, a comparison is made of the performance of these estimators with respect to bias, Mean Square Error (MSE), 95% confidence length and corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and P.
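    A simulation of the kind described can be sketched compactly. The following is an illustrative Monte Carlo check of the plug-in MLE of R(t) only (the UMVUE branch is omitted), under the simplifying assumption that the second parameter lam is known; all numeric values are arbitrary. It exploits the fact that Y = -ln(1 - exp(-lam*X^2)) is exponential with rate alpha under this distribution.

```python
import math
import random

def sample_exp_rayleigh(alpha, lam, n, rng):
    """Inverse-CDF draws from F(x) = (1 - exp(-lam*x**2))**alpha."""
    return [math.sqrt(-math.log(1.0 - rng.random() ** (1.0 / alpha)) / lam)
            for _ in range(n)]

def mle_alpha(xs, lam):
    """With lam known, Y = -ln(1 - exp(-lam*x**2)) is Exp(alpha): alpha_hat = n / sum(Y)."""
    ys = [-math.log(1.0 - math.exp(-lam * x * x)) for x in xs]
    return len(xs) / sum(ys)

def reliability(t, alpha, lam):
    """R(t) = P(X > t) = 1 - F(t)."""
    return 1.0 - (1.0 - math.exp(-lam * t * t)) ** alpha

rng = random.Random(7)
alpha, lam, t = 2.0, 1.0, 0.8
estimates = [reliability(t, mle_alpha(sample_exp_rayleigh(alpha, lam, 100, rng), lam), lam)
             for _ in range(500)]
bias = sum(estimates) / len(estimates) - reliability(t, alpha, lam)
```

    Repeating this with the UMVUE formula in place of `mle_alpha` would reproduce the paper's bias/MSE comparison.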

  3. Delay-Dependent Exponential Optimal Synchronization for Nonidentical Chaotic Systems via Neural-Network-Based Approach

    Directory of Open Access Journals (Sweden)

    Feng-Hsiag Hsiao

    2013-01-01

    A novel approach is presented to realize the optimal exponential synchronization of nonidentical multiple time-delay chaotic (MTDC) systems via a fuzzy control scheme. A neural-network (NN) model is first constructed for the MTDC system. Then, a linear differential inclusion (LDI) state-space representation is established for the dynamics of the NN model. Based on this LDI state-space representation, a delay-dependent exponential stability criterion for the error system, derived in terms of Lyapunov's direct method, is proposed to guarantee that the trajectories of the slave system can approach those of the master system. Subsequently, the stability condition of this criterion is reformulated as a linear matrix inequality (LMI). According to the LMI, a fuzzy controller is synthesized not only to realize the exponential synchronization but also to achieve the optimal performance by minimizing the disturbance attenuation level at the same time. Finally, a numerical example with simulations is given to demonstrate the effectiveness of our approach.

  4. Robust Image Regression Based on the Extended Matrix Variate Power Exponential Distribution of Dependent Noise.

    Science.gov (United States)

    Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu

    2017-09-01

    Dealing with partial occlusion or illumination is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector and each element is assumed to be independently corrupted. This ignores the dependence between the elements of the error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This distribution has heavy-tailed regions and can be used to describe a matrix pattern of l × m-dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it actually alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with L_q regularization. The alternating direction method of multipliers is applied to solve this model. To obtain a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm works much more robustly than some existing regression-based methods.

  5. Unsteady MHD flow in porous media past over exponentially ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology ... rotation and magnetic field on the flow past an exponentially accelerated vertical plate with ... Let (u, v, w) be the components of the velocity vector V. Then using the equation.

  6. Effects of Exponential Trends on Correlations of Stock Markets

    Directory of Open Access Journals (Sweden)

    Ai-Jing Lin

    2014-01-01

    Detrended fluctuation analysis (DFA) is a scaling analysis method used to estimate long-range power-law correlation exponents in time series. In this paper, DFA is employed to discuss the long-range correlations of stock markets. The effects of exponential trends on correlations of the Hang Seng Index (HSI) are investigated with emphasis. We find that the long-range correlations and the positions of the crossovers of lower-order DFA appear to have no immunity to additive exponential trends. Further, our analysis suggests that an increase in the DFA order increases the efficiency of eliminating exponential trends. In addition, the empirical study shows that the correlations and crossovers are associated with DFA order and the magnitude of the exponential trends.
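    The DFA procedure itself is short enough to sketch. The following is a minimal first-order DFA (integrate the series, fit and subtract a local polynomial trend in each window, average the residual fluctuations), applied here to white noise rather than the HSI data; scales and series length are arbitrary illustration choices.

```python
import numpy as np

def dfa(x, scales, order=1):
    """Detrended fluctuation analysis: returns the fluctuation F(s) per scale s."""
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    fs = []
    for s in scales:
        resid = []
        for i in range(len(y) // s):               # non-overlapping windows
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, order)       # local polynomial trend
            resid.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fs.append(np.sqrt(np.mean(resid)))
    return np.array(fs)

rng = np.random.default_rng(0)
scales = [16, 64, 256]
f = dfa(rng.standard_normal(4096), scales)
# the log-log slope estimates the scaling exponent; ~0.5 for white noise
slope = np.polyfit(np.log(scales), np.log(f), 1)[0]
```

    Adding an exponential trend to `x` before calling `dfa` reproduces the kind of distortion of slope and crossover positions the paper studies.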

  7. Global robust exponential stability analysis for interval recurrent neural networks

    International Nuclear Information System (INIS)

    Xu Shengyuan; Lam, James; Ho, Daniel W.C.; Zou Yun

    2004-01-01

    This Letter investigates the problem of robust global exponential stability analysis for interval recurrent neural networks (RNNs) via the linear matrix inequality (LMI) approach. The values of the time-invariant uncertain parameters are assumed to be bounded within given compact sets. An improved condition for the existence of a unique equilibrium point and its global exponential stability of RNNs with known parameters is proposed. Based on this, a sufficient condition for the global robust exponential stability for interval RNNs is obtained. Both of the conditions are expressed in terms of LMIs, which can be checked easily by various recently developed convex optimization algorithms. Examples are provided to demonstrate the reduced conservatism of the proposed exponential stability condition.

  8. Re-analysis of exponential rigid-rotor astron equilibria

    International Nuclear Information System (INIS)

    Lovelace, R.V.; Larrabee, D.A.; Fleischmann, H.H.

    1978-01-01

    Previous studies of exponential rigid-rotor astron equilibria include particles which are not trapped in the self-field of the configuration. The modification of these studies required to exclude untrapped particles is derived.

  9. Studying the method of linearization of exponential calibration curves

    International Nuclear Information System (INIS)

    Bunzh, Z.A.

    1989-01-01

    The results of a study of a method for linearizing exponential calibration curves are given. The calibration technique is described, and the proposed method is compared with piecewise-linear approximation and power-series expansion.
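    The standard linearization trick for an exponential calibration curve can be sketched directly (the abstract does not specify the method, so this shows the common log-transform approach as an assumed baseline): for y = a*exp(b*x), fitting ln(y) against x by ordinary least squares recovers b as the slope and ln(a) as the intercept.

```python
import math

def fit_exponential(xs, ys):
    """Linearize y = a*exp(b*x) via ln(y) = ln(a) + b*x, then least squares."""
    n = len(xs)
    ln_ys = [math.log(y) for y in ys]
    mx = sum(xs) / n
    my = sum(ln_ys) / n
    b = (sum((x - mx) * (ly - my) for x, ly in zip(xs, ln_ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Noise-free synthetic calibration points: y = 2*exp(0.5*x)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]
a, b = fit_exponential(xs, ys)
```

    Note that the log transform reweights the measurement errors, which is one reason piecewise-linear approximation is a relevant alternative for noisy data.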

  10. Sustaining the Exponential Growth of Embedded Digital Signal Processing Capability

    National Research Council Canada - National Science Library

    Shaw, Gary A; Richards, Mark A

    2004-01-01

    .... We conjecture that as IC shrinkage and attendant performance improvements begin to slow, the exponential rate of improvement we have become accustomed to for embedded applications will be sustainable...

  11. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2016-01-01

    Aim. Accelerating magnetic resonance imaging (MRI) scanning can help improve hospital throughput. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both the computation time and the reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches.

  12. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Science.gov (United States)

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068

  13. Demonstration of the exponential decay law using beer froth

    International Nuclear Information System (INIS)

    Leike, A.

    2002-01-01

    The volume of beer froth decays exponentially with time. This property is used to demonstrate the exponential decay law in the classroom. The decay constant depends on the type of beer and can be used to differentiate between different beers. The analysis shows in a transparent way the techniques of data analysis commonly used in science - consistency checks of theoretical models with the data, parameter estimation and determination of confidence intervals. (author)
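    The classroom analysis above reduces to estimating a decay constant from volume readings. A minimal sketch, with hypothetical readings (the froth halving in 60 seconds), assuming pure exponential decay V(t) = V0*exp(-t/tau):

```python
import math

def decay_constant(t1, v1, t2, v2):
    """tau from two froth-volume readings, assuming V(t) = V0 * exp(-t / tau)."""
    return (t2 - t1) / math.log(v1 / v2)

# Hypothetical readings: 100 volume units at t = 0 s, 50 units at t = 60 s
tau = decay_constant(0.0, 100.0, 60.0, 50.0)
half_life = tau * math.log(2.0)   # the froth halves every 60 s by construction
```

    With a full series of readings, a log-linear least-squares fit over all points (rather than two) gives the confidence intervals the paper uses to differentiate beers.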

  14. Meet and Join Matrices in the Poset of Exponential Divisors

    Indian Academy of Sciences (India)

    ... the greatest common exponential divisor (GCED) and the least common exponential multiple (LCEM) do not always exist. In this paper we embed this poset in a lattice. As an application we study the GCED and LCEM matrices, analogues of GCD and LCM matrices, which are both special cases of meet and join matrices on lattices.

  15. Global robust exponential stability for interval neural networks with delay

    International Nuclear Information System (INIS)

    Cui Shihua; Zhao Tao; Guo Jie

    2009-01-01

    In this paper, new sufficient conditions for the globally robust exponential stability of neural networks with either constant delays or time-varying delays are given. We show the sufficient conditions for the existence, uniqueness and global robust exponential stability of the equilibrium point by employing Lyapunov stability theory and the linear matrix inequality (LMI) technique. Numerical examples are given to demonstrate the effectiveness of our results.

  16. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used, with dependence measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
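    The Monte Carlo ARL machinery behind such a study can be sketched for the simplest case. This is the independent-observations baseline only (no copula dependence), with arbitrary illustrative values for the smoothing weight, control limit, and run count; the paper's contribution is precisely to replace the independent draws with copula-generated dependent ones.

```python
import random

def arl_ewma(lam=0.2, h=1.8, mean=1.0, runs=200, seed=3):
    """Monte Carlo Average Run Length of a one-sided EWMA chart monitoring
    independent Exp(mean) observations; h is the upper control limit."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        z, n = mean, 0                    # start the statistic at the in-control mean
        while z <= h and n < 10_000:      # run until signal (or safety cap)
            z = (1.0 - lam) * z + lam * rng.expovariate(1.0 / mean)
            n += 1
        total += n
    return total / runs

arl = arl_ewma()
```

    Shifting `mean` away from its in-control value and re-running gives the out-of-control ARL used to compare chart designs.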

  17. Testable Implications of Quasi-Hyperbolic and Exponential Time Discounting

    OpenAIRE

    Echenique, Federico; Imai, Taisuke; Saito, Kota

    2014-01-01

    We present the first revealed-preference characterizations of the models of exponential time discounting, quasi-hyperbolic time discounting, and other time-separable models of consumers’ intertemporal decisions. The characterizations provide non-parametric revealed-preference tests, which we take to data using the results of a recent experiment conducted by Andreoni and Sprenger (2012). For such data, we find that less than half the subjects are consistent with exponential discounting, and on...

  18. Fast Modular Exponentiation and Elliptic Curve Group Operation in Maple

    Science.gov (United States)

    Yan, S. Y.; James, G.

    2006-01-01

    The modular exponentiation, y ≡ x^k (mod n) with x, y, k, n integers and n > 1, is the most fundamental operation in RSA and ElGamal public-key cryptographic systems. Thus the efficiency of RSA and ElGamal depends entirely on the efficiency of the modular exponentiation. The same situation arises also in elliptic…
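    The fast algorithm at issue is binary (square-and-multiply) exponentiation, which reduces the work from k-1 multiplications to O(log k). A minimal sketch (Python's built-in three-argument `pow` does the same thing natively):

```python
def mod_exp(x, k, n):
    """Right-to-left binary (square-and-multiply) computation of x**k mod n."""
    result, base = 1, x % n
    while k > 0:
        if k & 1:                         # this bit of k contributes a multiply
            result = (result * base) % n
        base = (base * base) % n          # square for the next bit
        k >>= 1
    return result
```

    Reducing modulo n after every multiplication keeps the intermediate numbers small, which is what makes RSA-sized exponents feasible.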

  19. Confronting quasi-exponential inflation with WMAP seven

    International Nuclear Information System (INIS)

    Pal, Barun Kumar; Pal, Supratik; Basu, B.

    2012-01-01

    We confront quasi-exponential models of inflation with the WMAP seven-year dataset using the Hamilton-Jacobi formalism. With a phenomenological Hubble parameter representing quasi-exponential inflation, we develop the formalism and subject the analysis to confrontation with WMAP seven using the publicly available code CAMB. The observable parameters are found to fare extremely well with WMAP seven. We also obtain a ratio of tensor to scalar amplitudes which may be detectable by PLANCK.

  20. THE ATKINSON INDEX, THE MORAN STATISTIC, AND TESTING EXPONENTIALITY

    OpenAIRE

    Nao, Mimoto; Ricardas, Zitikis; Department of Statistics and Probability, Michigan State University; Department of Statistical and Actuarial Sciences, University of Western Ontario

    2008-01-01

    Constructing tests for exponentiality has been an active and fruitful research area, with numerous applications in engineering, biology and other sciences concerned with life-time data. In the present paper, we construct and investigate powerful tests for exponentiality based on two well known quantities: the Atkinson index and the Moran statistic. We provide an extensive study of the performance of the tests and compare them with those already available in the literature.

  1. KIOPS: A fast adaptive Krylov subspace solver for exponential integrators

    OpenAIRE

    Gaudreault, Stéphane; Rainwater, Greg; Tokman, Mayya

    2018-01-01

    This paper presents a new algorithm KIOPS for computing linear combinations of $\\varphi$-functions that appear in exponential integrators. This algorithm is suitable for large-scale problems in computational physics where little or no information about the spectrum or norm of the Jacobian matrix is known \\textit{a priori}. We first show that such problems can be solved efficiently by computing a single exponential of a modified matrix. Then our approach is to compute an appropriate basis for ...
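    The "single exponential of a modified matrix" idea mentioned in the abstract can be illustrated in its simplest form: phi_1(A)b, where phi_1(z) = (e^z - 1)/z, appears in the top-right block of the exponential of an augmented matrix. The sketch below uses a naive Taylor-series exponential for small dense matrices (not KIOPS's Krylov machinery, which is the paper's actual contribution for large-scale problems).

```python
import numpy as np

def expm_taylor(a, terms=30):
    """Small dense matrix exponential via scaling-and-squaring with a Taylor
    series -- an illustrative sketch, not a production-grade expm."""
    n = int(np.ceil(np.log2(max(1.0, np.linalg.norm(a, 1))))) + 1
    a_s = a / 2.0 ** n                    # scale so the series converges fast
    e = np.eye(a.shape[0])
    term = np.eye(a.shape[0])
    for k in range(1, terms):
        term = term @ a_s / k
        e = e + term
    for _ in range(n):                    # undo the scaling by squaring
        e = e @ e
    return e

def phi1_action(a, b):
    """phi_1(A) b from one exponential of an augmented matrix:
    exp([[A, b], [0, 0]]) carries phi_1(A) b in its top-right block."""
    m = a.shape[0]
    aug = np.zeros((m + 1, m + 1))
    aug[:m, :m] = a
    aug[:m, m] = b
    return expm_taylor(aug)[:m, m]
```

    For large sparse A, KIOPS replaces the dense exponential with an adaptive Krylov approximation of the same augmented-matrix exponential applied to a vector.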

  2. Simultaneous determination of exponential background and Gaussian peak functions in gamma ray scintillation spectrometers by maximum likelihood technique

    International Nuclear Information System (INIS)

    Eisler, P.; Youl, S.; Lwin, T.; Nelson, G.

    1983-01-01

    Simultaneous fitting of peaks and background functions from gamma-ray spectrometry using multichannel pulse height analysis is considered. The specific case of Gaussian peak and exponential background is treated in detail with respect to simultaneous estimation of both functions by using a technique which incorporates maximum likelihood method as well as a graphical method. Theoretical expressions for the standard errors of the estimates are also obtained. The technique is demonstrated for two experimental data sets. (orig.)
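    A simplified version of the simultaneous-fit idea can be sketched as follows. This is not the paper's maximum likelihood estimator: it exploits the fact that, for fixed peak centre, width, and background slope, the two amplitudes enter linearly, so a grid search over the centre with a linear least-squares solve at each candidate suffices for noise-free synthetic data. All shape parameters below are arbitrary illustration values.

```python
import numpy as np

def fit_peak_plus_background(x, y, mu_grid, sigma, tau):
    """Grid search over the Gaussian peak centre mu; at each candidate the
    amplitudes of the peak and the exponential background are solved by
    linear least squares."""
    best = None
    for mu in mu_grid:
        design = np.column_stack([
            np.exp(-0.5 * ((x - mu) / sigma) ** 2),   # Gaussian peak shape
            np.exp(-x / tau),                          # exponential background shape
        ])
        coef = np.linalg.lstsq(design, y, rcond=None)[0]
        sse = float(np.sum((design @ coef - y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, mu, coef)
    return best[1], best[2]

# Synthetic spectrum: peak amplitude 5 at channel 4, background amplitude 3
x = np.linspace(0.0, 10.0, 200)
y = 5.0 * np.exp(-0.5 * ((x - 4.0) / 0.5) ** 2) + 3.0 * np.exp(-x / 2.0)
mu_hat, (a_hat, b_hat) = fit_peak_plus_background(x, y, np.linspace(3.0, 5.0, 41), 0.5, 2.0)
```

    For real counting data, the Poisson maximum likelihood treatment of the paper also yields the standard errors of the estimates, which least squares alone does not.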

  3. The exponential edge-gradient effect in x-ray computed tomography

    International Nuclear Information System (INIS)

    Joseph, P.M.

    1981-01-01

    The exponential edge-gradient effect must arise in any X-ray transmission CT scanner whenever long sharp edges of high contrast are encountered. The effect is non-linear and is due to the interaction of the exponential law of X-ray attenuation and the finite width of the scanning beam in the x-y plane. The error induced in the projection values is proved to be always negative. While the most common effect is lucent streaks emerging from single straight edges, it is demonstrated that dense streaks from pairs of edges are possible. It is shown that an exact correction of the error is possible only under very special (and rather unrealistic) circumstances in which an infinite number of samples per beam width are available and all thin rays making up the beam can be considered parallel. As a practical matter, nevertheless, increased sample density is highly desirable in making good approximate corrections; this is demonstrated with simulated scans. Two classes of approximate correction algorithms are described and their effectiveness evaluated on simulated CT phantom scans. One such algorithm is also shown to work well with a real scan of a physical phantom on a machine that provides approximately four samples per beam width. (author)
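    The sign of the error follows from Jensen's inequality: the detector averages intensities (exponentials of line integrals) across the beam width, and -ln of an average of exponentials is never larger than the average of the line integrals. A tiny numeric demonstration, with arbitrary illustrative values for a beam half on and half off a dense edge:

```python
import math

# A beam of finite width straddling a sharp high-contrast edge: half of its
# thin rays cross attenuating material (line integral 2.0), half miss it (0.0)
ray_integrals = [2.0] * 5 + [0.0] * 5

# Ideal projection value: the average of the individual line integrals
ideal = sum(ray_integrals) / len(ray_integrals)

# What is actually measured: the intensities average first, then the log
measured = -math.log(sum(math.exp(-p) for p in ray_integrals) / len(ray_integrals))

error = measured - ideal   # negative, as the abstract proves in general
```

    This undershoot in the projection values is what back-projects into the lucent streaks emerging from straight edges.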

  4. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  5. Error estimates in horocycle averages asymptotics: challenges from string theory

    NARCIS (Netherlands)

    Cardella, M.A.

    2010-01-01

    For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotics to the Riemann hypothesis. We study similar asymptotics for modular functions with less mild growth conditions, such as polynomial growth and exponential growth.

  6. Numerical solution of matrix exponential in burn-up equation using mini-max polynomial approximation

    International Nuclear Information System (INIS)

    Kawamoto, Yosuke; Chiba, Go; Tsuji, Masashi; Narabayashi, Tadashi

    2015-01-01

    Highlights: • We propose a new numerical solution of the matrix exponential in burn-up depletion calculations. • Depletion calculations with extremely short half-lived nuclides can be performed in a numerically stable way with this method. • The computational time is shorter than with other conventional methods. - Abstract: Nuclear fuel burn-up depletion calculations are essential to compute the transition of the nuclear fuel composition. In burn-up calculations, the matrix exponential method has been widely used. In the present paper, we propose a new numerical solution of the matrix exponential, a Mini-Max Polynomial Approximation (MMPA) method. This method is numerically stable for burn-up matrices with extremely short half-lived nuclides, as is the Chebyshev Rational Approximation Method (CRAM), and it has several advantages over CRAM. We also propose a multi-step calculation, a computational-time reduction scheme for the MMPA method, which can perform burn-up calculations with several time periods simultaneously. The applicability of these methods has been theoretically and numerically proved for general burn-up matrices. Numerical verification has been performed, and it has been shown that these methods have high precision, equivalent to CRAM.
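    The underlying matrix-exponential formulation of depletion can be sketched on a toy two-nuclide chain. This is neither MMPA nor CRAM: for illustration it evaluates the exponential by eigendecomposition (adequate for a tiny diagonalizable matrix, and exactly the regime where short-lived nuclides make naive methods struggle at scale), and checks the result against the analytic Bateman solution. The decay constants are arbitrary.

```python
import numpy as np

def expm_eig(a):
    """Matrix exponential via eigendecomposition (fine for a small,
    diagonalizable matrix; production depletion codes use CRAM or similar)."""
    w, v = np.linalg.eig(a)
    return (v * np.exp(w)) @ np.linalg.inv(v)

lam1, lam2, t = 0.1, 0.05, 10.0          # hypothetical decay constants and time
a = np.array([[-lam1, 0.0],              # parent loses atoms by decay
              [lam1, -lam2]])            # daughter gains from parent, decays itself
n0 = np.array([1.0, 0.0])                # start with pure parent
n_t = expm_eig(a * t) @ n0               # composition after time t

# Bateman analytic solution for the daughter, as a cross-check
daughter = lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
```

    Real burn-up matrices couple hundreds of nuclides with rate constants spanning many orders of magnitude, which is why rational or polynomial approximations of the exponential are preferred over eigendecomposition there.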

  7. Novel Exponentially Fitted Two-Derivative Runge-Kutta Methods with Equation-Dependent Coefficients for First-Order Differential Equations

    Directory of Open Access Journals (Sweden)

    Yanping Yang

    2016-01-01

    The construction of exponentially fitted two-derivative Runge-Kutta (EFTDRK) methods for the numerical solution of first-order differential equations is investigated. The revised EFTDRK methods proposed, with equation-dependent coefficients, take into account the errors produced in the internal stages to the update. The local truncation errors and stability of the new methods are analyzed. Numerical results are reported to show the accuracy of the new methods.

  8. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  9. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
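    The core formula behind these topics is first-order propagation of independent measurement errors: for f(x_1, …, x_k), sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2. A minimal sketch with a hypothetical product measurement:

```python
import math

def propagate(partials, sigmas):
    """First-order Gaussian error propagation for independent measured values:
    sigma_f = sqrt(sum_i (df/dx_i * sigma_i)**2)."""
    return math.sqrt(sum((p * s) ** 2 for p, s in zip(partials, sigmas)))

# Hypothetical example: f = x * y with x = 10 +/- 0.1 and y = 5 +/- 0.2
x, sx = 10.0, 0.1
y, sy = 5.0, 0.2
sigma_f = propagate([y, x], [sx, sy])    # df/dx = y, df/dy = x
```

    A materials balance is a sum and difference of such measured values, so its variance is the corresponding sum of the individual variance contributions.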

  10. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  11. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  12. Redundant measurements for controlling errors

    International Nuclear Information System (INIS)

    Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program

  13. Exponential Sensitivity and its Cost in Quantum Physics.

    Science.gov (United States)

    Gilyén, András; Kiss, Tamás; Jex, Igor

    2016-02-10

    State selective protocols, like entanglement purification, lead to an essentially non-linear quantum evolution, unusual in naturally occurring quantum processes. Sensitivity to initial states in quantum systems, stemming from such non-linear dynamics, is a promising perspective for applications. Here we demonstrate that chaotic behaviour is a rather generic feature in state selective protocols: exponential sensitivity can exist for all initial states in an experimentally realisable optical scheme. Moreover, any complex rational polynomial map, including the example of the Mandelbrot set, can be directly realised. In state selective protocols, one needs an ensemble of initial states, the size of which decreases with each iteration. We prove that exponential sensitivity to initial states in any quantum system has to be related to downsizing the initial ensemble also exponentially. Our results show that magnifying initial differences of quantum states (a Schrödinger microscope) is possible; however, there is a strict bound on the number of copies needed.

  14. Exponential gain of randomness certified by quantum contextuality

    Science.gov (United States)

    Um, Mark; Zhang, Junhua; Wang, Ye; Wang, Pengfei; Kim, Kihwan

    2017-04-01

    We demonstrate the protocol of exponential gain of randomness certified by quantum contextuality in a trapped ion system. The genuine randomness can be produced by quantum principles and certified by quantum inequalities. Recently, randomness expansion protocols based on Bell-test inequalities and the Kochen-Specker (KS) theorem have been demonstrated. These schemes have been theoretically innovated to exponentially expand the randomness and amplify the randomness from a weak initial random seed. Here, we report experimental evidence of such exponential expansion of randomness. In the experiment, we use three states of a 138Ba+ ion: a ground state and two quadrupole states. In the 138Ba+ ion system, there is no detection loophole, and we apply a method to rule out certain hidden variable models that obey a kind of extended noncontextuality.

  15. The Exponentiated Gumbel Type-2 Distribution: Properties and Application

    Directory of Open Access Journals (Sweden)

    I. E. Okorie

    2016-01-01

    Full Text Available We introduce a generalized version of the standard Gumbel type-2 distribution. The new lifetime distribution is called the Exponentiated Gumbel (EG) type-2 distribution. The EG type-2 distribution has three nested submodels, namely, the Gumbel type-2 distribution, the Exponentiated Fréchet (EF) distribution, and the Fréchet distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EG type-2 distribution are illustrated with a real lifetime data set. Results based on the log-likelihood and information statistics values showed that the EG type-2 distribution provides a better fit to the data than the other competing distributions. The consistency of the parameters of the new distribution is also demonstrated through a simulation study. The EG type-2 distribution is therefore recommended for effective modelling of lifetime data.
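
    As a rough illustration of the nested-submodel structure, the sketch below uses an assumed parameterisation of the EG type-2 CDF, F(x) = 1 − (1 − e^(−θ·x^(−φ)))^α, which collapses to the Gumbel type-2 CDF at α = 1. Both the functional form and the parameter values are assumptions to be checked against the paper.

```python
import math

def gumbel2_cdf(x, theta, phi):
    # Gumbel type-2 (Frechet-type) CDF: F(x) = exp(-theta * x**(-phi)), x > 0
    return math.exp(-theta * x ** (-phi))

def eg2_cdf(x, alpha, theta, phi):
    # Exponentiated Gumbel type-2 CDF under an ASSUMED parameterisation:
    # F(x) = 1 - (1 - exp(-theta * x**(-phi)))**alpha.
    # alpha = 1 recovers the plain Gumbel type-2 submodel.
    return 1.0 - (1.0 - math.exp(-theta * x ** (-phi))) ** alpha

# Sanity checks: a valid CDF is monotone from 0 to 1, and the alpha = 1
# submodel collapses to the Gumbel type-2 CDF.
xs = [0.1 * k for k in range(1, 100)]
vals = [eg2_cdf(x, alpha=2.5, theta=1.2, phi=1.8) for x in xs]
assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
assert abs(eg2_cdf(3.0, 1.0, 1.2, 1.8) - gumbel2_cdf(3.0, 1.2, 1.8)) < 1e-12
print(vals[0], vals[-1])
```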

  16. Kullback-Leibler divergence and the Pareto-Exponential approximation.

    Science.gov (United States)

    Weinberg, G V

    2016-01-01

    Recent radar research interests in the Pareto distribution as a model for X-band maritime surveillance radar clutter returns have resulted in analysis of the asymptotic behaviour of this clutter model. In particular, it is of interest to understand when the Pareto distribution is well approximated by an Exponential distribution. The justification for this is that under the latter clutter model assumption, simpler radar detection schemes can be applied. An information theory approach is introduced to investigate the Pareto-Exponential approximation. By analysing the Kullback-Leibler divergence between the two distributions it is possible to not only assess when the approximation is valid, but to determine, for a given Pareto model, the optimal Exponential approximation.
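
    The KL-based optimisation described above can be sketched numerically. Assuming the Lomax (Pareto type II) form of the clutter model, minimising KL(Pareto ‖ Exponential) over the exponential rate λ gives the closed form λ* = 1/E[X] = (a−1)/b, which the numerical divergence confirms; the parameter values are illustrative.

```python
import math

def pareto_pdf(x, a, b):
    # Pareto type II (Lomax) density, an assumed clutter-model form for
    # this sketch: f(x) = a * b**a / (x + b)**(a + 1), x >= 0
    return a * b ** a / (x + b) ** (a + 1)

def kl_pareto_exp(a, b, lam, upper=200.0, n=20000):
    # KL(Pareto || Exponential) by trapezoidal integration on [0, upper];
    # the Lomax tail beyond `upper` is negligible for these parameters.
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        x = i * h
        p = pareto_pdf(x, a, b)
        q = lam * math.exp(-lam * x)
        w = 0.5 if i in (0, n) else 1.0
        total += w * p * math.log(p / q)
    return total * h

# Minimising over lam analytically gives lam* = (a - 1) / b = 1/E[X]:
a, b = 4.0, 3.0
lam_star = (a - 1) / b
print(kl_pareto_exp(a, b, lam_star) < kl_pareto_exp(a, b, 2 * lam_star))
```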

  17. Exponential rise of dynamical complexity in quantum computing through projections.

    Science.gov (United States)

    Burgarth, Daniel Klaus; Facchi, Paolo; Giovannetti, Vittorio; Nakazato, Hiromichi; Pascazio, Saverio; Yuasa, Kazuya

    2014-10-10

    The ability of quantum systems to host exponentially complex dynamics has the potential to revolutionize science and technology. Therefore, much effort has been devoted to developing protocols for computation, communication and metrology which exploit this scaling, despite formidable technical difficulties. Here we show that the mere frequent observation of a small part of a quantum system can turn its dynamics from a very simple one into an exponentially complex one, capable of universal quantum computation. After discussing examples, we go on to show that this effect is generally to be expected: almost any quantum dynamics becomes universal once 'observed' as outlined above. Conversely, we show that any complex quantum dynamics can be 'purified' into a simpler one in larger dimensions. We conclude by demonstrating that even local noise can lead to an exponentially complex dynamics.

  18. Design of a 9-loop quasi-exponential waveform generator.

    Science.gov (United States)

    Banerjee, Partha; Shukla, Rohit; Shyam, Anurag

    2015-12-01

    In an under-damped L-C-R series circuit, the current follows a damped sinusoidal waveform. If a number of sinusoidal waveforms of decreasing time period, generated in L-C-R circuits, are combined within the first quarter cycle of the time period, a quasi-exponential output current waveform can be achieved. In an L-C-R series circuit, a quasi-exponential current waveform has a rising current derivative and thereby finds many applications in pulsed power. Here, we describe the design and experimental details of a 9-loop quasi-exponential waveform generator, including the design details of the magnetic switches. In the experiment, an output current of 26 kA has been achieved, and it is shown how well the experimentally obtained output current profile matches the numerically computed output.
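
    The damped sinusoidal building block of such a generator can be sketched numerically; the component values below are illustrative and not those of the actual 9-loop device.

```python
import math

# Underdamped series R-L-C discharge: the loop current is a damped
# sinusoid I(t) = (V0 / (wd * L)) * exp(-a * t) * sin(wd * t), with
# damping a = R / (2 * L) and ring frequency wd = sqrt(1/(L*C) - a**2).
# Component values are illustrative only.
R, L, C, V0 = 0.05, 1e-6, 1e-4, 10.0
a = R / (2 * L)
wd = math.sqrt(1.0 / (L * C) - a * a)

def current(t):
    return (V0 / (wd * L)) * math.exp(-a * t) * math.sin(wd * t)

# The first quarter cycle (0 .. pi / (2 * wd)) is the rising portion that
# the generator stitches together, loop by loop, to build a
# quasi-exponential current front.
t_quarter = math.pi / (2 * wd)
print(current(0.0), current(t_quarter))
```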

  19. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  20. Contribution of mono-exponential, bi-exponential and stretched exponential model-based diffusion-weighted MR imaging in the diagnosis and differentiation of uterine cervical carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Meng; Yu, Xiaoduo; Chen, Yan; Ouyang, Han; Zhou, Chunwu [Chinese Academy of Medical Sciences, Department of Diagnostic Radiology, Cancer Institute and Hospital, Peking Union Medical College, Beijing (China); Wu, Bing; Zheng, Dandan [GE MR Research China, Beijing (China)

    2017-06-15

    To investigate the potential of various metrics derived from the mono-exponential model (MEM), bi-exponential model (BEM) and stretched exponential model (SEM)-based diffusion-weighted imaging (DWI) in diagnosing and differentiating the pathological subtypes and grades of uterine cervical carcinoma. 71 newly diagnosed patients with cervical carcinoma (50 cases of squamous cell carcinoma [SCC] and 21 cases of adenocarcinoma [AC]) and 32 healthy volunteers underwent DWI with multiple b values. The apparent diffusion coefficient (ADC), pure molecular diffusion (D), pseudo-diffusion coefficient (D*), perfusion fraction (f), water molecular diffusion heterogeneity index (alpha), and distributed diffusion coefficient (DDC) were calculated and compared between tumour and normal cervix, and among different pathological subtypes and grades. All of the parameters were significantly lower in cervical carcinoma than in normal cervical stroma except alpha. SCC showed lower ADC, D, f and DDC values and a higher D* value than AC; D and DDC values of SCC, and ADC and D values of AC, were lower in the poorly differentiated group than in the well-moderately differentiated group. Compared with MEM, diffusion parameters from BEM and SEM may offer additional information in diagnosing cervical carcinoma and predicting pathological tumour subtypes and grades, with f and D showing particular promise. (orig.)
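
    The three signal models compared in the study have standard closed forms, sketched below; the parameter values are illustrative, not patient data. The example also shows how a two-point ADC computed from a truly bi-exponential signal is inflated by the perfusion fraction.

```python
import math

# Standard forms of the three diffusion signal models (illustrative
# parameters only):
def mono(b, S0, ADC):
    # Mono-exponential: S(b) = S0 * exp(-b * ADC)
    return S0 * math.exp(-b * ADC)

def biexp(b, S0, f, Dstar, D):
    # IVIM bi-exponential: perfusion fraction f with pseudo-diffusion D*
    return S0 * (f * math.exp(-b * Dstar) + (1 - f) * math.exp(-b * D))

def stretched(b, S0, DDC, alpha):
    # Stretched exponential: DDC with heterogeneity index alpha in (0, 1]
    return S0 * math.exp(-((b * DDC) ** alpha))

# ADC from two b-values has the closed form ADC = ln(S1/S2) / (b2 - b1).
b1, b2 = 0.0, 800.0                        # s/mm^2
S1 = biexp(b1, 100.0, 0.1, 0.02, 0.001)    # truly bi-exponential signal
S2 = biexp(b2, 100.0, 0.1, 0.02, 0.001)
adc = math.log(S1 / S2) / (b2 - b1)
print(adc)  # exceeds the true D = 0.001 because perfusion inflates it
```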

  1. Lagrange α-exponential stability and α-exponential convergence for fractional-order complex-valued neural networks.

    Science.gov (United States)

    Jian, Jigui; Wan, Peng

    2017-07-01

    This paper deals with the problem of Lagrange α-exponential stability and α-exponential convergence for a class of fractional-order complex-valued neural networks. To this end, some new fractional-order differential inequalities are established, which improve and generalize previously known criteria. By using the new inequalities and coupling with the Lyapunov method, some effective criteria are derived to guarantee Lagrange α-exponential stability and α-exponential convergence of the addressed network. Moreover, the framework of the α-exponential convergence ball is also given, where the convergence rate is related to the parameters and the differential order of the system. These results, for which the existence and uniqueness of the equilibrium points need not be considered, generalize and improve earlier publications and can be applied to monostable and multistable fractional-order complex-valued neural networks. Finally, one example with numerical simulations is given to show the effectiveness of the obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Exponential Synchronization of Uncertain Complex Dynamical Networks with Delay Coupling

    International Nuclear Information System (INIS)

    Wang Lifu; Kong Zhi; Jing Yuanwei

    2010-01-01

    This paper studies the global exponential synchronization of uncertain complex delayed dynamical networks. The network model considered is a general dynamical delay network with unknown network structure and unknown but bounded coupling functions. Novel delay-dependent linear controllers are designed via Lyapunov stability theory. In particular, it is shown that the controlled networks are globally exponentially synchronized with a given convergence rate. An example of a typical dynamical network of this class, with the Lorenz system at each node, has been used to demonstrate and verify the novel design proposed, and the numerical simulation results show the effectiveness of the proposed synchronization approaches. (general)

  3. EXCHANGE-RATES FORECASTING: EXPONENTIAL SMOOTHING TECHNIQUES AND ARIMA MODELS

    Directory of Open Access Journals (Sweden)

    Dezsi Eva

    2011-07-01

    Full Text Available Exchange-rate forecasting is, and has long been, a challenging task in finance. Statistical and econometric models are widely used in the analysis and forecasting of foreign exchange rates. This paper investigates the behavior of daily exchange rates of the Romanian Leu against the Euro, United States Dollar, British Pound, Japanese Yen, Chinese Renminbi and the Russian Ruble. Smoothing techniques are generated and compared with each other. These models include the Simple Exponential Smoothing technique, the Double Exponential Smoothing technique, the Simple Holt-Winters and the Additive Holt-Winters techniques, as well as the Autoregressive Integrated Moving Average model.
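
    The two simplest of the smoothing techniques compared above can be sketched in a few lines; the series values are made up for illustration, not the paper's exchange-rate data.

```python
def simple_exp_smoothing(series, alpha):
    """Simple Exponential Smoothing: the level is a weighted average with
    exponentially decaying weights; alpha in (0, 1] is the smoothing
    constant. Returns the one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def double_exp_smoothing(series, alpha, beta):
    """Holt's double (trend-corrected) exponential smoothing.
    Returns the one-step-ahead forecast level + trend."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        last_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return level + trend

# Illustrative daily exchange-rate-like series (made-up numbers):
rates = [4.20, 4.22, 4.25, 4.24, 4.28, 4.31, 4.33]
print(simple_exp_smoothing(rates, alpha=0.5))
print(double_exp_smoothing(rates, alpha=0.5, beta=0.3))
```

    On an upward-trending series, the double method extrapolates the trend past the last observation, while the simple method lags behind it.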

  4. Stability of the Exponential Functional Equation in Riesz Algebras

    Directory of Open Access Journals (Sweden)

    Bogdan Batko

    2014-01-01

    Full Text Available We deal with the stability of the exponential Cauchy functional equation F(x+y) = F(x)F(y) in the class of functions F:G→L mapping a group (G, +) into a Riesz algebra L. The main aim of this paper is to prove that the exponential Cauchy functional equation is stable in the sense of Hyers-Ulam and is not superstable in the sense of Baker. To prove the stability we use the Yosida Spectral Representation Theorem.

  5. Linear, Step by Step Managerial Performance, versus Exponential Performance

    Directory of Open Access Journals (Sweden)

    George MOLDOVEANU

    2011-04-01

    Full Text Available The paper proposes the transition from the potential management concept, whose dimension the authors determined previously (Roşca, Moldoveanu, 2009b), to the concept of linear, step by step performance as an objective result of the management process. In this way, we "answer" the theorists and practitioners who support exponential managerial performance. The authors, as detractors of exponential performance, are influenced by the current crisis (Roşca, Moldoveanu, 2009a), by the lack of organizational excellence in many companies, particularly Romanian ones, and by "the finality" reached in evolved companies that developed at an uncontrollable speed.

  6. Minimizing the effect of exponential trends in detrended fluctuation analysis

    International Nuclear Information System (INIS)

    Xu Na; Shang Pengjian; Kamae, Santi

    2009-01-01

    The detrended fluctuation analysis (DFA) and its extensions (MF-DFA) have been used extensively to determine possible long-range correlations in time series. However, recent studies have reported the susceptibility of DFA to trends, which give rise to spurious crossovers and prevent reliable estimation of the scaling exponents. In this report, a smoothing algorithm based on the discrete Fourier transform (DFT) is proposed to minimize the effect of exponential trends and the distortion in the log-log plots obtained by MF-DFA techniques. The effectiveness of the technique is demonstrated on monofractal and multifractal data corrupted with exponential trends.
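
    The DFA procedure itself (integrate, window, detrend, measure the RMS fluctuation) can be sketched as follows; this is a plain first-order DFA on synthetic white noise, not the paper's DFT-based smoothing algorithm.

```python
import math, random

def dfa_fluctuation(series, n):
    """RMS fluctuation F(n) of first-order DFA: integrate the
    mean-subtracted series, then remove a linear trend from each
    non-overlapping window of length n."""
    mean = sum(series) / len(series)
    profile, s = [], 0.0
    for x in series:
        s += x - mean
        profile.append(s)
    sq_sum, count = 0.0, 0
    for start in range(0, len(profile) - n + 1, n):
        seg = profile[start:start + n]
        # closed-form least-squares line fit over t = 0 .. n-1
        t_mean = (n - 1) / 2.0
        y_mean = sum(seg) / n
        num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(seg))
        den = sum((t - t_mean) ** 2 for t in range(n))
        slope = num / den
        for t, y in enumerate(seg):
            resid = y - (y_mean + slope * (t - t_mean))
            sq_sum += resid * resid
            count += 1
    return math.sqrt(sq_sum / count)

# For uncorrelated noise, F(n) ~ n^h with a scaling exponent h near 0.5.
random.seed(1)
noise = [random.gauss(0, 1) for _ in range(4096)]
ns = [8, 16, 32, 64, 128]
fs = [dfa_fluctuation(noise, n) for n in ns]
h = (math.log(fs[-1]) - math.log(fs[0])) / (math.log(ns[-1]) - math.log(ns[0]))
print(round(h, 2))
```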

  7. Late-time acceleration with steep exponential potentials

    Energy Technology Data Exchange (ETDEWEB)

    Shahalam, M. [Zhejiang University of Technology, Institute for Advanced Physics and Mathematics, Hangzhou (China); Yang, Weiqiang [Liaoning Normal University, Department of Physics, Dalian (China); Myrzakulov, R. [Eurasian National University, Department of General and Theoretical Physics, Eurasian International Center for Theoretical Physics, Astana (Kazakhstan); Wang, Anzhong [Zhejiang University of Technology, Institute for Advanced Physics and Mathematics, Hangzhou (China); Baylor University, GCAP-CASPER, Department of Physics, Waco, TX (United States)

    2017-12-15

    In this letter, we study the cosmological dynamics of a potential steeper than exponential. Our analysis shows that a simple extension of an exponential potential allows one to capture late-time cosmic acceleration and retain the tracker behavior. We also perform statefinder and Om diagnostics to distinguish dark energy models among themselves and from ΛCDM. In addition, to put observational constraints on the model parameters, we modify the publicly available CosmoMC code and use an integrated database of baryon acoustic oscillation data, the latest Type Ia supernovae from the Joint Light Curves sample, and the local Hubble constant value measured by the Hubble Space Telescope. (orig.)

  8. Late-time acceleration with steep exponential potentials

    International Nuclear Information System (INIS)

    Shahalam, M.; Yang, Weiqiang; Myrzakulov, R.; Wang, Anzhong

    2017-01-01

    In this letter, we study the cosmological dynamics of a potential steeper than exponential. Our analysis shows that a simple extension of an exponential potential allows one to capture late-time cosmic acceleration and retain the tracker behavior. We also perform statefinder and Om diagnostics to distinguish dark energy models among themselves and from ΛCDM. In addition, to put observational constraints on the model parameters, we modify the publicly available CosmoMC code and use an integrated database of baryon acoustic oscillation data, the latest Type Ia supernovae from the Joint Light Curves sample, and the local Hubble constant value measured by the Hubble Space Telescope. (orig.)

  9. Generator of an exponential function with respect to time

    International Nuclear Information System (INIS)

    Janin, Paul; Puyal, Claude.

    1981-01-01

    This invention deals with an exponential function generator, and an application of this generator to simulating the criticality of a nuclear reactor for reactimeter calibration purposes. This generator, which is particularly suitable for simulating the criticality of a nuclear reactor to calibrate a reactimeter, can also be used in any field of application necessitating the generation of an exponential function in real time. In certain fields of thermodynamics, it is necessary to represent temperature gradients as a function of time. The generator might find applications here. Another application is nuclear physics where it is necessary to represent the attenuation of a neutron flux density with respect to time [fr
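
    A digital analogue of such a generator is a one-multiply-per-step recurrence, since exp((t+Δt)/T) = exp(t/T)·exp(Δt/T). The patented device is analog, so this is only an illustrative sketch; the period and step values are assumptions.

```python
import math

def exponential_generator(x0, period, dt):
    """Generate x(t) = x0 * exp(t / period) in real time with a single
    multiply per step -- a digital analogue of the reactimeter
    calibration signal (illustrative; the patented device is analog)."""
    gain = math.exp(dt / period)   # constant per-step factor
    x = x0
    while True:
        yield x
        x *= gain

# A positive period simulates a supercritical reactor's rising flux;
# a negative one would simulate flux attenuation with time.
gen = exponential_generator(x0=1.0, period=10.0, dt=0.1)
samples = [next(gen) for _ in range(101)]
print(samples[100])  # after t = 10 s, one e-folding: ~2.71828
```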

  10. Exponential stability of neural networks with asymmetric connection weights

    International Nuclear Information System (INIS)

    Yang Jinxiang; Zhong Shouming

    2007-01-01

    This paper investigates the exponential stability of a class of neural networks with asymmetric connection weights. By dividing the network state variables into various parts according to the characteristics of the neural networks, some new sufficient conditions for exponential stability are derived by constructing a Lyapunov function and using the method of variation of constants. The new conditions are associated with the initial values and are described by some blocks of the interconnection matrix, and do not depend on the other blocks. Examples are given to further illustrate the theory

  11. Collisional avalanche exponentiation of runaway electrons in electrified plasmas

    International Nuclear Information System (INIS)

    Jayakumar, R.; Fleischmann, H.H.; Zweben, S.J.

    1993-01-01

    In contrast to earlier expectations, it is estimated that generation of runaway electrons from close collisions of existing runaways with cold plasma electrons can be significant even for small electric fields, whenever runaways can gain energies of about 20 MeV or more. In that case, the runaway population will grow exponentially with the energy spectrum showing an exponential decrease towards higher energies. Energy gains of the required magnitude may occur in large tokamak devices as well as in cosmic-ray generation. (orig.)

  12. Non-exponential dynamic relaxation in strongly nonequilibrium nonideal plasmas

    International Nuclear Information System (INIS)

    Morozov, I V; Norman, G E

    2003-01-01

    Relaxation of kinetic energy to the equilibrium state is simulated by the molecular dynamics method for nonideal two-component non-degenerate plasmas. Three limiting examples of initial states of strongly nonequilibrium plasma are considered: zero electron velocities, zero ion velocities, and zero velocities of both electrons and ions. The initial non-exponential stage, its duration τ_nB, and the subsequent exponential stages of the relaxation process are studied for a wide range of the nonideality parameter and the ion mass

  13. Fuel elements assembling for the DON project exponential experience

    International Nuclear Information System (INIS)

    Anca Abati, R. de

    1966-01-01

    The fuel unit used in the DON exponential experiment is described, together with the manufacturing installations and tools and the stages of fabrication. Each of these 74 elements contains 19 cartridges loaded with sintered urania, uranium carbide, and indium, gold, and manganese probes. They were arranged in calandria-like tubes and the process tube, the latter containing a cooling liquid simulating the reactor organic coolant. Besides being used in the DON reactor exponential experiment, they were used in critical experiments by the substitution method in the French reactor AQUILON II. (Author) 6 refs

  14. A cluster expansion approach to exponential random graph models

    International Nuclear Information System (INIS)

    Yin, Mei

    2012-01-01

    The exponential family of random graphs are among the most widely studied network models. We show that any exponential random graph model may alternatively be viewed as a lattice gas model with a finite Banach space norm. The system may then be treated using cluster expansion methods from statistical mechanics. In particular, we derive a convergent power series expansion for the limiting free energy in the case of small parameters. Since the free energy is the generating function for the expectations of other random variables, this characterizes the structure and behavior of the limiting network in this parameter region

  15. The exponential critical state of high-Tc ceramics

    International Nuclear Information System (INIS)

    Castro, H.; Rinderer, L.

    1994-01-01

    The critical current in high-Tc materials is strongly reduced by a magnetic field. We studied this dependence for tubular YBCO samples and find an exponential drop as the field is increased from zero up to some tens of oersteds. This behavior has been observed by others; however, little work has been done in this direction. We define what we call the ''exponential critical state'' of HTSC and compare its prediction for the magnetization with experimental data. Furthermore, the ''Kim critical state'' is obtained as the small-field limit. (orig.)

  16. On exponential stability and periodic solutions of CNNs with delays

    Science.gov (United States)

    Cao, Jinde

    2000-03-01

    In this Letter, the author further analyses the problems of global exponential stability and the existence of periodic solutions of cellular neural networks with delays (DCNNs). Some simple and new sufficient conditions are given ensuring global exponential stability and the existence of periodic solutions of DCNNs by applying some new analysis techniques and constructing suitable Lyapunov functionals. These conditions have important leading significance in the design and applications of globally stable DCNNs and periodic oscillatory DCNNs and are weaker than those in the earlier works [Phys. Rev. E 60 (1999) 3244], [J. Comput. Syst. Sci. 59 (1999)].

  17. Analysis and reduction of 3D systematic and random setup errors during the simulation and treatment of lung cancer patients with CT-based external beam radiotherapy dose planning.

    NARCIS (Netherlands)

    Boer, H.D. de; Sornsen de Koste, J.R. van; Senan, S.; Visser, A.G.; Heijmen, B.J.M.

    2001-01-01

    PURPOSE: To determine the magnitude of the errors made in (a) the setup of patients with lung cancer on the simulator relative to their intended setup with respect to the planned treatment beams and (b) in the setup of these patients on the treatment unit. To investigate how the systematic component

  18. At least some errors are randomly generated (Freud was wrong)

    Science.gov (United States)

    Sellen, A. J.; Senders, J. W.

    1986-01-01

    An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three digital numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness of fit tests for a Poisson distribution for the number of errors per 50 trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task driven factors producing exogenous errors. Some errors, at least, are the result of constant probability generating mechanisms with error rate idiosyncratically determined for each subject.
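
    The two statistical signatures described, Poisson-distributed error counts per block and exponentially (geometrically, in discrete trials) distributed inter-error intervals, both follow from a constant-probability error generator, which can be checked by simulation. The error probability and block size below are illustrative, not the experiment's values.

```python
import random

# A constant-probability ("endogenous random") error generator: on each
# trial an error occurs independently with fixed probability p.
random.seed(42)
p, trials = 0.05, 100000
errors = [1 if random.random() < p else 0 for _ in range(trials)]

# Dispersion index (variance / mean) of counts per 50-trial block:
# near 1 for a Poisson-like process.
block = 50
counts = [sum(errors[i:i + block]) for i in range(0, trials, block)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(round(var / mean, 2))

# Inter-error intervals are geometric with mean 1/p trials, the discrete
# analogue of an exponential distribution of waiting times.
gaps, last = [], None
for i, e in enumerate(errors):
    if e:
        if last is not None:
            gaps.append(i - last)
        last = i
print(round(sum(gaps) / len(gaps), 1))  # near 1/p
```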

  19. Improvement of the exponential experiment system for the automatic and accurate measurement of the exponential decay constant

    International Nuclear Information System (INIS)

    Shin, Hee Sung; Jang, Ji Woon; Lee, Yoon Hee; Hwang, Yong Hwa; Kim, Ho Dong

    2004-01-01

    The previous exponential experiment system has been improved for automatic and accurate axial movement of the neutron source and detector by attaching an automatic control system consisting of a Programmable Logic Controller (PLC) and a stepping motor set. An automatic control program that consistently controls the MCA and PLC has also been developed on the basis of the GENIE 2000 library. Exponential experiments have been carried out for Kori unit 1 spent fuel assemblies C14, J14 and G23, and Kori unit 2 spent fuel assembly J44, using the improved measurement system. As a result, the average exponential decay constants for the 4 assemblies are determined to be 0.1302, 0.1267, 0.1247, and 0.1210, respectively, by applying Poisson regression
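
    The exponential decay constant in such an experiment is the slope of log counts versus axial position. A minimal sketch follows, using an ordinary least-squares line fit on synthetic noise-free counts as a simple stand-in for the Poisson regression mentioned in the abstract; the positions and count levels are illustrative, not the Kori assembly data.

```python
import math

def fit_decay_constant(positions, counts):
    """Estimate k in counts ~ A * exp(-k * z) by a least-squares line fit
    to ln(counts) versus axial position z (a stand-in for Poisson
    regression, adequate here because the synthetic data are noise-free)."""
    logs = [math.log(c) for c in counts]
    n = len(positions)
    zm = sum(positions) / n
    lm = sum(logs) / n
    num = sum((z - zm) * (l - lm) for z, l in zip(positions, logs))
    den = sum((z - zm) ** 2 for z in positions)
    return -num / den  # slope of ln(counts) is -k

# Synthetic axial scan (illustrative numbers):
z = [0, 5, 10, 15, 20, 25, 30]                 # axial positions
true_k = 0.13
counts = [5000.0 * math.exp(-true_k * zi) for zi in z]
print(round(fit_decay_constant(z, counts), 4))  # recovers 0.13
```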

  20. Effects of variable transformations on errors in FORM results

    International Nuclear Information System (INIS)

    Qin Quan; Lin Daojin; Mei Gang; Chen Hao

    2006-01-01

    On the basis of studies of the second partial derivatives of the variable transformation functions for nine different non-normal variables, the paper comprehensively discusses the effects of the transformation on FORM results and shows that the signs and magnitudes of the errors in FORM results depend on the distributions of the basic variables, on whether the basic variables represent resistances or actions, and on the design point locations in the standard normal space. The transformations of exponential or Gamma resistance variables can generate +24% errors in the FORM failure probability, and the transformation of Frechet action variables can generate -31% errors

  1. Bandwagon effects and error bars in particle physics

    Science.gov (United States)

    Jeng, Monwhea

    2007-02-01

    We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit "bandwagon effects": reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.

  2. Bandwagon effects and error bars in particle physics

    International Nuclear Information System (INIS)

    Jeng, Monwhea

    2007-01-01

    We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit 'bandwagon effects': reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations

  3. A method for reducing memory errors in the isotopic analyses of uranium hexafluoride by mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Bir, R [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires

    1961-07-01

    One of the most serious causes of systematic error in isotopic analyses of uranium from UF₆ is the tendency of this material to become fixed in various ways in the mass spectrometer. As a result, the value indicated by the instrument is influenced by the isotopic composition of the substances previously analysed. The resulting error is called a memory error. Making use of an elementary mathematical theory, the various methods used to reduce memory errors are analysed and compared. A new method is then suggested, which reduces the memory errors to an extent where they become negligible over a wide range of ²³⁵U concentration. The method is given in full, together with examples of its application. (author)

  4. Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.

    Science.gov (United States)

    Monica, Stefania; Ferrari, Gianluigi

    2018-05-17

    Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is exponentially growing. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms with a reduction of the localization error up to 66%.
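
A simple linear statistical model of the range error can be sketched as a least-squares calibration: fit d_meas ≈ a·d_true + b on calibration pairs, then invert the fit to correct new range estimates. The slope, offset, and noise level below are hypothetical stand-ins, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: true distances (m) and UWB range
# estimates with an assumed linear bias plus Gaussian noise.
d_true = np.linspace(1.0, 10.0, 50)
d_meas = 1.05 * d_true + 0.30 + rng.normal(0.0, 0.05, d_true.size)

# Least-squares fit of the linear error model d_meas ~ a*d_true + b
A = np.column_stack([d_true, np.ones_like(d_true)])
(a, b), *_ = np.linalg.lstsq(A, d_meas, rcond=None)

# Correct range estimates by inverting the fitted model
d_corrected = (d_meas - b) / a
rmse_raw = np.sqrt(np.mean((d_meas - d_true) ** 2))
rmse_corr = np.sqrt(np.mean((d_corrected - d_true) ** 2))
```

The corrected RMSE drops to roughly the noise floor, which is the mechanism behind the localization-error reduction the paper reports.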

  5. Non-exponential extinction of radiation by fractional calculus modelling

    International Nuclear Information System (INIS)

    Casasanta, G.; Ciani, D.; Garra, R.

    2012-01-01

    Possible deviations from exponential attenuation of radiation in a random medium have been recently studied in several works. These deviations from the classical Beer-Lambert law were justified from a stochastic point of view by Kostinski (2001) . In his model he introduced the spatial correlation among the random variables, i.e. a space memory. In this note we introduce a different approach, including a memory formalism in the classical Beer-Lambert law through fractional calculus modelling. We find a generalized Beer-Lambert law in which the exponential memoryless extinction is only a special case of non-exponential extinction solutions described by Mittag-Leffler functions. We also justify this result from a stochastic point of view, using the space fractional Poisson process. Moreover, we discuss some concrete advantages of this approach from an experimental point of view, giving an estimate of the deviation from exponential extinction law, varying the optical depth. This is also an interesting model to understand the meaning of fractional derivative as an instrument to transmit randomness of microscopic dynamics to the macroscopic scale.
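
The generalized extinction law can be evaluated from the Mittag-Leffler power series E_α(z) = Σ_k z^k / Γ(αk + 1); at α = 1 this reduces to the classical exponential Beer-Lambert law. A rough numerical sketch (truncated series, adequate only for moderate optical depth):

```python
import math

def mittag_leffler(z, alpha, n_terms=80):
    # Truncated power series E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1);
    # floating-point cancellation limits this to moderate |z|
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(n_terms))

def transmittance(tau, alpha):
    # Generalized Beer-Lambert law I/I0 = E_alpha(-(tau**alpha));
    # alpha = 1 recovers the classical extinction exp(-tau)
    return mittag_leffler(-(tau ** alpha), alpha)

classical = transmittance(1.0, 1.0)   # equals exp(-1)
anomalous = transmittance(1.0, 0.8)   # non-exponential extinction
```

The deviation between the two curves as a function of optical depth is exactly the kind of estimate the authors use to quantify departures from exponential extinction.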

  6. Calculation of the exponential function of linear idempotent operators

    International Nuclear Information System (INIS)

    Chavoya-Aceves, O.; Luna, H.M.

    1989-01-01

    We give a method to calculate the exponential exp(Ar), where A is a linear operator satisfying the relation A^n = I, with n an integer and I the identity operator. The method is generalized to operators such that A^(n+1) = A, and is applied to obtain some Lorentz transformations which generalize the notion of 'boost'. (Author)
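
The idea can be made concrete: when A^n = I, the power series of exp(rA) regroups into n scalar coefficients, one per residue class of the exponent modulo n, so only the powers I, A, …, A^(n−1) appear. A numerical sketch (my own illustration, not the authors' derivation) using the rotation generator J, which satisfies J^4 = I:

```python
import numpy as np
from math import factorial

def exp_cyclic(A, r, n, terms=60):
    """exp(r*A) for an operator with A**n = I, computed by grouping
    the power series into residue classes of the exponent mod n."""
    dim = A.shape[0]
    result = np.zeros_like(A, dtype=float)
    Ak = np.eye(dim)
    for k in range(n):
        # coefficient of A**k: sum over m = k, k+n, k+2n, ...
        c_k = sum(r ** m / factorial(m) for m in range(k, terms, n))
        result += c_k * Ak
        Ak = Ak @ A
    return result

# Example: J generates 2D rotations and J**4 = I,
# so exp(r*J) must be the rotation matrix by angle r.
J = np.array([[0.0, -1.0], [1.0, 0.0]])
R = exp_cyclic(J, 0.5, 4)
expected = np.array([[np.cos(0.5), -np.sin(0.5)],
                     [np.sin(0.5),  np.cos(0.5)]])
```

Here the grouped coefficients reproduce cos r and sin r, the simplest instance of the "boost"-like closed forms the paper derives.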

  7. Conditionally exponential convex functions on locally compact groups

    International Nuclear Information System (INIS)

    Okb El-Bab, A.S.

    1992-09-01

    The main results of the thesis are: 1) The construction of a compact base for the convex cone of all conditionally exponential convex functions. 2) The determination of the extreme parts of this cone. Some supplementary lemmas are proved for this purpose. (author). 8 refs

  8. Geometry of q-Exponential Family of Probability Distributions

    Directory of Open Access Journals (Sweden)

    Shun-ichi Amari

    2011-06-01

    The Gibbs distribution of statistical physics is an exponential family of probability distributions, which has a mathematical basis of duality in the form of the Legendre transformation. Recent studies of complex systems have found many distributions obeying a power law rather than the standard Gibbs-type distributions. The Tsallis q-entropy is a typical example capturing such phenomena. We treat the q-Gibbs distribution, or the q-exponential family, by generalizing the exponential function to the q-family of power functions, which is useful for studying various complex or non-standard physical phenomena. We give a new mathematical structure to the q-exponential family different from those previously given. It has a dually flat geometrical structure derived from the Legendre transformation, and conformal geometry is useful for understanding it. The q-version of the maximum entropy theorem is naturally induced from the q-Pythagorean theorem. We also show that the maximizer of the q-escort distribution is a Bayesian MAP (Maximum A Posteriori Probability) estimator.
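
The q-exponential underlying this family is the power-function generalization e_q(x) = [1 + (1−q)x]^(1/(1−q)), which recovers exp(x) as q → 1 and has a power-law (heavy) tail for q > 1. A minimal numerical sketch:

```python
import math

def q_exp(x, q):
    # Tsallis q-exponential: [1 + (1-q)x]_+ ** (1/(1-q)),
    # reducing to the ordinary exp(x) in the limit q -> 1
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0  # the cutoff convention [.]_+
    return base ** (1.0 / (1.0 - q))

# q near 1 recovers the ordinary exponential
approx = q_exp(1.0, 1.0001)
exact = math.exp(1.0)

# q > 1 gives a power-law tail instead of exponential decay:
# for q = 1.5, q_exp(-x, 1.5) decays like x**(-2)
tail_q = q_exp(-100.0, 1.5)
tail_exp = math.exp(-100.0)
```

The heavy tail at q > 1 is exactly the power-law behaviour the abstract contrasts with standard Gibbs-type distributions.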

  9. On ambiguities in the exponentiation of large QCD perturbative corrections

    International Nuclear Information System (INIS)

    Chyla, Jiri

    1986-01-01

    Ambiguities and some practical questions connected with the exponentiation of higher-order QCD perturbative corrections are discussed for the case of deep inelastic lepton-hadron scattering in the non-singlet channel. The importance of still higher-order calculations for resolving these ambiguities is stressed. (author)

  10. The many faces of the quantum Liouville exponentials

    Science.gov (United States)

    Gervais, Jean-Loup; Schnittger, Jens

    1994-01-01

    First, it is proven that the three main operator approaches to the quantum Liouville exponentials (that is, those of Gervais-Neveu, more recently developed further by Gervais; of Braaten-Curtright-Ghandour-Thorn; and of Otto-Weigt) are equivalent, since they are related by simple basis transformations in the Fock space of the free field depending upon the zero mode only. Second, the GN-G expressions for quantum Liouville exponentials, where the U_q(sl(2)) quantum-group structure is manifest, are shown to be given by q-binomial sums over powers of the chiral fields in the J = 1/2 representation. Third, the Liouville exponentials are expressed as operator tau functions, whose chiral expansion exhibits a q-Gauss decomposition, which is the direct quantum analogue of the classical solution of Leznov and Saveliev. It involves q-exponentials of quantum-group generators with group "parameters" equal to chiral components of the quantum metric. Fourth, we point out that the OPE of the J = 1/2 Liouville exponential provides the quantum version of the Hirota bilinear equation.

  11. The generalized exponential function and fractional trigonometric identities

    KAUST Repository

    Radwan, Ahmed G.

    2011-08-01

    In this work, we recall the generalized exponential function in the fractional-order domain which enables defining generalized cosine and sine functions. We then re-visit some important trigonometric identities and generalize them from the narrow integer-order subset to the more general fractional-order domain. Generalized hyperbolic function relations are also given. © 2011 IEEE.

  12. The generalized exponential function and fractional trigonometric identities

    KAUST Repository

    Radwan, Ahmed G.; Elwakil, Ahmed S.

    2011-01-01

    In this work, we recall the generalized exponential function in the fractional-order domain which enables defining generalized cosine and sine functions. We then re-visit some important trigonometric identities and generalize them from the narrow integer-order subset to the more general fractional-order domain. Generalized hyperbolic function relations are also given. © 2011 IEEE.

  13. The Dickey-Fuller test for exponential random walks

    NARCIS (Netherlands)

    Davies, P.L.; Krämer, W.

    2003-01-01

    A common test in econometrics is the Dickey–Fuller test. We investigate the behavior of the test statistic if the data yt are given by an exponential random walk exp(Zt), where Zt = Zt-1 + σεt and the εt are independent and identically distributed.

  14. Exploring parameter constraints on quintessential dark energy: The exponential model

    International Nuclear Information System (INIS)

    Bozek, Brandon; Abrahamse, Augusta; Albrecht, Andreas; Barnard, Michael

    2008-01-01

    We present an analysis of a scalar field model of dark energy with an exponential potential using the Dark Energy Task Force (DETF) simulated data models. Using Markov Chain Monte Carlo sampling techniques we examine the ability of each simulated data set to constrain the parameter space of the exponential potential for data sets based on a cosmological constant and a specific exponential scalar field model. We compare our results with the constraining power calculated by the DETF using their 'w0-wa' parametrization of the dark energy. We find that respective increases in constraining power from one stage to the next produced by our analysis give results consistent with DETF results. To further investigate the potential impact of future experiments, we also generate simulated data for an exponential model background cosmology which cannot be distinguished from a cosmological constant at DETF 'Stage 2', and show that for this cosmology good DETF Stage 4 data would exclude a cosmological constant by better than 3σ.

  15. Exponential models applied to automated processing of radioimmunoassay standard curves

    International Nuclear Information System (INIS)

    Morin, J.F.; Savina, A.; Caroff, J.; Miossec, J.; Legendre, J.M.; Jacolot, G.; Morin, P.P.

    1979-01-01

    An improved computer processing is described for the fitting of radioimmunological standard curves by means of an exponential model on a desk-top calculator. This method has been applied to a variety of radioassays and the results are in accordance with those obtained by more sophisticated models.
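
The abstract does not give the model's exact parametrization; one common exponential standard-curve form, counts = a + b·exp(−c·dose), can be fitted as follows (the functional form and all numbers are assumptions for illustration, not the paper's method):

```python
import numpy as np
from scipy.optimize import curve_fit

def expo_model(dose, a, b, c):
    # A plausible exponential standard-curve form: a baseline count `a`
    # plus a bound-fraction term decaying exponentially with dose
    return a + b * np.exp(-c * dose)

# Synthetic, noiseless standard-curve points (illustrative values)
dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
counts = expo_model(dose, 500.0, 4500.0, 0.6)

# Nonlinear least-squares fit recovers the curve parameters
params, _ = curve_fit(expo_model, dose, counts, p0=(200.0, 3000.0, 0.3))
a, b, c = params
```

Once the parameters are fitted, unknown-sample concentrations are read off by inverting the curve, which is the routine task such standard-curve software automates.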

  16. Mean square exponential stability of stochastic delayed Hopfield neural networks

    International Nuclear Information System (INIS)

    Wan Li; Sun Jianhua

    2005-01-01

    Stochastic effects on the stability of Hopfield neural networks (HNN) with discrete and continuously distributed delays are considered. By using the method of variation of parameters, inequality techniques and stochastic analysis, sufficient conditions guaranteeing the mean square exponential stability of an equilibrium solution are given. Two examples are also given to demonstrate our results.

  17. The exponential age distribution and the Pareto firm size distribution

    OpenAIRE

    Coad, Alex

    2008-01-01

    Recent work drawing on data for large and small firms has shown a Pareto distribution of firm size. We mix a Gibrat-type growth process among incumbents with an exponential distribution of firm age, to obtain the empirical Pareto distribution.
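
The mechanism can be illustrated with a toy simulation: if firm age T is exponential with rate λ and size grows geometrically, S = exp(gT), then P(S > s) = P(T > ln(s)/g) = s^(−λ/g), a Pareto tail with exponent λ/g. A sketch under these assumptions (not the authors' calibration):

```python
import numpy as np

rng = np.random.default_rng(42)

lam = 1.0    # rate of the exponential age distribution (assumed)
g = 0.5      # Gibrat-type proportional growth rate (assumed)

# Firm ages ~ Exp(lam); size grows geometrically with age
ages = rng.exponential(1.0 / lam, size=200_000)
sizes = np.exp(g * ages)

# Estimate the tail exponent from the empirical survival function:
# log P(S > s) should be linear in log s with slope -lam/g = -2
s_grid = np.array([2.0, 4.0, 8.0, 16.0])
survival = np.array([(sizes > s).mean() for s in s_grid])
slope, _ = np.polyfit(np.log(s_grid), np.log(survival), 1)
```

The fitted slope recovers −λ/g, the Pareto exponent that the mixing argument predicts.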

  18. Exponential decay for solutions to semilinear damped wave equation

    KAUST Repository

    Gerbi, Stéphane

    2011-10-01

    This paper is concerned with decay estimates of solutions to the semilinear wave equation with strong damping in a bounded domain. Introducing an appropriate Lyapunov function, we prove that when the damping is linear, we can find initial data for which the solution decays exponentially. This result improves an earlier one in [4].

  19. Smith-Purcell oscillator in an exponential gain regime

    International Nuclear Information System (INIS)

    Schachter, L.; Ron, A.

    1988-01-01

    A Smith-Purcell oscillator with a thick electron beam is analyzed in its exponential gain regime. A threshold current of less than 1 A is found for a 1 mm wavelength; this threshold is much lower than that of a similar oscillator operating in a linear gain regime.

  20. Electron traps in semiconducting polymers : Exponential versus Gaussian trap distribution

    NARCIS (Netherlands)

    Nicolai, H. T.; Mandoc, M. M.; Blom, P. W. M.

    2011-01-01

    The low electron currents in poly(dialkoxy-p-phenylene vinylene) (PPV) derivatives and their steep voltage dependence are generally explained by trap-limited conduction in the presence of an exponential trap distribution. Here we demonstrate that the electron transport of several PPV derivatives can

  1. Electron traps in semiconducting polymers: exponential versus Gaussian trap distribution

    NARCIS (Netherlands)

    Nicolai, H.T.; Mandoc, M.M.; Blom, P.W.M.

    2011-01-01

    The low electron currents in poly(dialkoxy-p-phenylene vinylene) (PPV) derivatives and their steep voltage dependence are generally explained by trap-limited conduction in the presence of an exponential trap distribution. Here we demonstrate that the electron transport of several PPV derivatives can

  2. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficult...

  3. Construction of extended exponential general linear methods 524 ...

    African Journals Online (AJOL)

    This paper introduces a new approach for constructing higher-order EEGLMs, which have become very popular due to their enviable stability properties. This paper also shows that method 524 is stable, with its characteristic roots lying in the unit circle. Numerical experiments indicate that Extended Exponential ...

  4. Criteria for exponential asymptotic stability in the large of ...

    African Journals Online (AJOL)

    The purpose of this study is to provide necessary and sufficient conditions for exponential asymptotic stability in the large and uniform asymptotic stability of perturbations of linear systems with unbounded delays. A strong relationship is established between the two types of asymptotic stability. It is found that if the ...

  5. RMS slope of exponentially correlated surface roughness for radar applications

    DEFF Research Database (Denmark)

    Dierking, Wolfgang

    2000-01-01

    In radar signature analysis, the root mean square (RMS) surface slope is utilized to assess the relative contribution of multiple scattering effects. For an exponentially correlated surface, an effective RMS slope can be determined by truncating the high frequency tail of the roughness spectrum...

  6. The Exponential Distribution and the Application to Markov Models ...

    African Journals Online (AJOL)

    ... are close to zero, and very long times are increasingly unlikely. That is, the most likely values are considered to be clustered about the mean, and large deviations from the mean are viewed as increasingly unlikely. If this characteristic of the negative exponential distribution seems incompatible with the application one has ...

  7. Exponential Family Techniques for the Lognormal Left Tail

    DEFF Research Database (Denmark)

    Asmussen, Søren; Jensen, Jens Ledet; Rojas-Nandayapa, Leonardo

    E[Xe−θX]/L(θ)=x. The asymptotic formulas involve the Lambert W function. The established relations are used to provide two different numerical methods for evaluating the left tail probability of a lognormal sum Sn=X1+⋯+Xn: a saddlepoint approximation and an exponential twisting importance sampling estimator. For the latter we...

  8. On root mean square approximation by exponential functions

    OpenAIRE

    Sharipov, Ruslan

    2014-01-01

    The problem of root mean square approximation of a square integrable function by finite linear combinations of exponential functions is considered. It is subdivided into linear and nonlinear parts. The linear approximation problem is solved. Then the nonlinear problem is studied in a particular example.

  9. Double Exponential Relativity Theory Coupled Theoretically with Quantum Theory?

    International Nuclear Information System (INIS)

    Montero Garcia, Jose de la Luz; Novoa Blanco, Jesus Francisco

    2007-01-01

    Here the problem of special relativity is analyzed in the context of a new theoretical formulation: the Double Exponential Theory of Special Relativity, with respect to which the current Special or Restricted Theory of Relativity (STR) turns out to be a particular case only.

  10. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code, which now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of performance versus error level for cases with multiple seeds illustrates the variations attributable to the stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and these may be ameliorated by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  11. Exponential current pulse generation for efficient very high-impedance multisite stimulation.

    Science.gov (United States)

    Ethier, S; Sawan, M

    2011-02-01

    We describe in this paper an intracortical current-pulse generator for high-impedance microstimulation. This dual-chip system features a stimuli generator and a high-voltage electrode driver. The stimuli generator produces flexible rising exponential pulses in addition to standard rectangular stimuli. This novel stimulation waveform is expected to provide superior energy efficiency for action potential triggering while releasing less toxic reduced ions in the cortical tissues. The proposed fully integrated electrode driver is used as the output stage, where high-voltage supplies are generated on-chip to significantly increase the voltage compliance for stimulation through high-impedance electrode-tissue interfaces. The stimuli generator has been implemented in 0.18-μm CMOS technology while a 0.8-μm CMOS/DMOS process has been used to integrate the high-voltage output stage. Experimental results show that the rectangular pulses cover a range of 1.6 to 167.2 μA with a DNL and an INL of 0.098 and 0.163 least-significant bit, respectively. The maximal dynamic range of the generated exponential reaches 34.36 dB at full scale within an error of ±0.5 dB, while all of its parameters (amplitude, duration, and time constant) are independently programmable over wide ranges. This chip consumes a maximum of 88.3 μW in the exponential mode. High-voltage supplies of 8.95 and -8.46 V are generated by the output stage, boosting the voltage swing up to 13.6 V for a load as high as 100 kΩ.

  12. Stretched versus compressed exponential kinetics in α-helix folding

    International Nuclear Information System (INIS)

    Hamm, Peter; Helbing, Jan; Bredenbeck, Jens

    2006-01-01

    In a recent paper (J. Bredenbeck, J. Helbing, J.R. Kumita, G.A. Woolley, P. Hamm, α-helix formation in a photoswitchable peptide tracked from picoseconds to microseconds by time resolved IR spectroscopy, Proc. Natl. Acad. Sci. USA 102 (2005) 2379), we investigated the folding of a photo-switchable α-helix with a kinetics that could be fit by a stretched exponential function exp(−(t/τ)^β). The stretching factor β became smaller as the temperature was lowered, a result which has been interpreted in terms of activated diffusion on a rugged energy surface. In the present paper, we discuss under which conditions diffusion problems occur with stretched exponential kinetics (β < 1) or compressed exponential kinetics (β > 1). We show that diffusion problems do have a strong tendency to yield stretched exponential kinetics, yet that there are conditions (strong perturbation from equilibrium, performing the experiment in the folding direction) under which compressed exponential kinetics would be expected instead. We discuss the kinetics on free energy surfaces predicted by simple initiation-propagation models (zipper models) of α-helix folding, as well as by folding funnel models. We show that our recent experiment has been performed under conditions for which models with a strong downhill driving force, such as the zipper model, would predict compressed, rather than stretched, exponential kinetics, in disagreement with the experimental observation. We therefore propose that the free energy surface along the reaction coordinate that governs the folding kinetics must be relatively flat and has a shape similar to a 1D golf course. We discuss how this conclusion can be unified with the thermodynamically well established zipper model by introducing an additional kinetic reaction coordinate.
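
Fitting a stretched exponential exp(−(t/τ)^β) to relaxation data can be sketched with a standard nonlinear least-squares routine (the synthetic, noiseless data and all parameter values below are illustrative, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, tau, beta):
    # Kohlrausch function: beta < 1 is stretched, beta > 1 compressed
    return np.exp(-(t / tau) ** beta)

# Synthetic relaxation trace with assumed tau = 2.0, beta = 0.6
t = np.linspace(0.01, 10.0, 200)
y = stretched_exp(t, 2.0, 0.6)

# Nonlinear least-squares fit recovers (tau, beta)
(tau_hat, beta_hat), _ = curve_fit(stretched_exp, t, y, p0=(1.0, 0.8))
```

In practice, tracking how the fitted β drifts with temperature is precisely the diagnostic the authors use to distinguish rugged-landscape diffusion from downhill zipper-like kinetics.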

  13. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    The role of clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. ... participation in ward rounds and adverse drug ...

  14. Optimal complex exponentials BEM and channel estimation in doubly selective channel

    International Nuclear Information System (INIS)

    Song, Lijun; Lei, Xia; Yu, Feng; Jin, Maozhu

    2016-01-01

    Over doubly selective channels, an optimal complex exponential BEM (CE-BEM) is required to characterize the transmission in the transform domain, in order to reduce the huge number of parameters that would have to be estimated if the impulse response were estimated directly in the time domain. This paper proposes an improved CE-BEM that alleviates the high-frequency sampling error caused by the conventional CE-BEM. On the one hand, with the improved CE-BEM the sampling points lie within the Doppler spread spectrum and the maximum sampling frequency equals the maximum Doppler shift. On the other hand, we optimize the basis functions and the basis dimension of the CE-BEM, and obtain a closed-form solution for the EM-based channel estimation differential operator by exploiting the above optimal BEM. Finally, the numerical results and theoretical analysis show that the basis dimension depends mainly on the maximum Doppler shift and the signal-to-noise ratio (SNR). For a fixed number of pilot symbols, a higher basis dimension yields a smaller modeling error but reduced parameter-estimation accuracy, so a tradeoff between the modeling error and the accuracy of parameter estimation is needed; once the basis dimension is fixed, the choice of basis functions determines how accurately the Doppler spread spectrum is described.

  15. Yield shear stress model of magnetorheological fluids based on exponential distribution

    International Nuclear Information System (INIS)

    Guo, Chu-wen; Chen, Fei; Meng, Qing-rui; Dong, Zi-xin

    2014-01-01

    The magnetic chain model that considers the interaction between particles and the external magnetic field in a magnetorheological fluid has been widely accepted. Based on the chain model, a yield shear stress model of magnetorheological fluids was proposed by introducing the exponential distribution to describe the distribution of angles between the direction of the magnetic field and the chains formed by magnetic particles. The main influencing factors are considered in the model, such as magnetic flux density, intensity of the magnetic field, particle size, volume fraction of particles, the angle of the magnetic chain, and so on. The effect of magnetic flux density on the yield shear stress is discussed. The yield stresses of aqueous Fe₃O₄ magnetorheological fluids with volume fractions of 7.6% and 16.2% were measured by a device of our own design. The results indicate that the proposed model can be used for calculation of the yield shear stress with acceptable errors. - Highlights: • A yield shear stress model of magnetorheological fluids was proposed. • The exponential distribution is used to describe the distribution of magnetic chain angles. • Experimental and predicted results were in good agreement for 2 types of MR fluids

  16. A 60-dB linear VGA with novel exponential gain approximation

    International Nuclear Information System (INIS)

    Zhou Jiaye; Tan Xi; Wang Junyu; Tang Zhangwen; Min Hao

    2009-01-01

    A CMOS variable gain amplifier (VGA) that adopts a novel exponential gain approximation is presented. No additional exponential gain control circuit is required in the proposed VGA, which is used in a direct conversion receiver. A wide gain control voltage range from 0.4 to 1.8 V and a high linearity performance are achieved. The three-stage VGA with automatic gain control (AGC) and DC offset cancellation (DCOC) is fabricated in a 0.18-μm CMOS technology and shows a linear gain range of more than 58 dB with a linearity error of less than ±1 dB. The 3-dB bandwidth is over 8 MHz at all gain settings. The measured input-referred third-order intercept point (IIP3) of the proposed VGA varies from -18.1 to 13.5 dBm, and the measured noise figure varies from 27 to 65 dB at a frequency of 1 MHz. The dynamic range of the closed-loop AGC exceeds 56 dB, where the output signal-to-noise-and-distortion ratio (SNDR) reaches 20 dB. The whole circuit, occupying 0.3 mm² of chip area, dissipates less than 3.7 mA from a 1.8-V supply.
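
The abstract does not spell out the approximation itself; a classic pseudo-exponential approximation used in dB-linear CMOS VGAs, (1+x)/(1−x) ≈ e^(2x), illustrates how a linearity error under ±1 dB over a wide control range can arise (this particular formula is an assumption for illustration, not necessarily the paper's circuit):

```python
import math

def pseudo_exp(x):
    # Classic pseudo-exponential used for dB-linear gain control:
    # (1 + x) / (1 - x) approximates exp(2x) for |x| up to ~0.5
    return (1.0 + x) / (1.0 - x)

def db(v):
    return 20.0 * math.log10(v)

# Linearity error in dB between the approximation and a true
# exponential over the control range x in [-0.5, 0.5]
errors = [abs(db(pseudo_exp(x)) - db(math.exp(2.0 * x)))
          for x in [i / 100.0 for i in range(-50, 51)]]
max_err_db = max(errors)
```

The worst-case deviation stays below 1 dB over the range, the same order as the ±1 dB linearity error reported for the fabricated VGA.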

  17. BAYESIAN ESTIMATION OF THE SHAPE PARAMETER OF THE GENERALISED EXPONENTIAL DISTRIBUTION UNDER DIFFERENT LOSS FUNCTIONS

    Directory of Open Access Journals (Sweden)

    SANKU DEY

    2010-11-01

    The generalized exponential (GE) distribution proposed by Gupta and Kundu (1999) is an important lifetime distribution in survival analysis. In this article, we propose to obtain Bayes estimators and their associated risks based on a class of non-informative priors under the assumption of three loss functions, namely, the quadratic loss function (QLF), the squared log-error loss function (SLELF), and the general entropy loss function (GELF). The motivation is to explore the most appropriate loss function among these three. The performances of the estimators are, therefore, compared on the basis of their risks obtained under QLF, SLELF and GELF separately. The relative efficiency of the estimators is also obtained. Finally, Monte Carlo simulations are performed to compare the performances of the Bayes estimates under different situations.
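
Given a posterior sample, the Bayes estimators under these three losses have standard closed forms: the posterior mean under QLF, exp(E[log θ]) under SLELF, and (E[θ^−c])^(−1/c) under GELF with parameter c. A sketch using a stand-in Gamma posterior (the posterior family and all numbers are assumptions for illustration, not the article's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in posterior draws for the GE shape parameter
# (a Gamma(5, scale=0.4) posterior is an illustrative assumption)
draws = rng.gamma(shape=5.0, scale=0.4, size=100_000)

# Bayes estimators under the three loss functions:
est_qlf = draws.mean()                    # quadratic loss -> posterior mean
est_slelf = np.exp(np.log(draws).mean())  # squared log-error loss
c = 1.0
est_gelf = np.mean(draws ** (-c)) ** (-1.0 / c)   # general entropy loss
```

By Jensen's inequality the three estimates are ordered (GELF with c = 1 gives the posterior harmonic mean, SLELF the geometric mean, QLF the arithmetic mean), which is one reason their risks differ across loss functions.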

  18. Adiabatic approximation with exponential accuracy for many-body systems and quantum computation

    International Nuclear Information System (INIS)

    Lidar, Daniel A.; Rezakhani, Ali T.; Hamma, Alioscia

    2009-01-01

    We derive a version of the adiabatic theorem that is especially suited for applications in adiabatic quantum computation, where it is reasonable to assume that the adiabatic interpolation between the initial and final Hamiltonians is controllable. Assuming that the Hamiltonian is analytic in a finite strip around the real-time axis, that some number of its time derivatives vanish at the initial and final times, and that the target adiabatic eigenstate is nondegenerate and separated by a gap from the rest of the spectrum, we show that one can obtain an error between the final adiabatic eigenstate and the actual time-evolved state which is exponentially small in the evolution time, where this time itself scales as the square of the norm of the time derivative of the Hamiltonian divided by the cube of the minimal gap.

  19. Exponential networked synchronization of master-slave chaotic systems with time-varying communication topologies

    International Nuclear Information System (INIS)

    Yang Dong-Sheng; Liu Zhen-Wei; Liu Zhao-Bing; Zhao Yan

    2012-01-01

    The networked synchronization problem of a class of master-slave chaotic systems with time-varying communication topologies is investigated in this paper. Based on algebraic graph theory and matrix theory, a simple linear state feedback controller is designed to synchronize the master chaotic system and the slave chaotic systems with a time-varying communication topology connection. The exponential stability of the closed-loop networked synchronization error system is guaranteed by applying Lyapunov stability theory. The derived novel criteria are in the form of linear matrix inequalities (LMIs), which are easy to examine and tremendously reduce the computation burden from the feedback matrices. This paper provides an alternative networked secure communication scheme which can be extended conveniently. An illustrative example is given to demonstrate the effectiveness of the proposed networked synchronization method.

  20. Master-slave exponential synchronization of delayed complex-valued memristor-based neural networks via impulsive control.

    Science.gov (United States)

    Li, Xiaofan; Fang, Jian-An; Li, Huiyuan

    2017-09-01

    This paper investigates master-slave exponential synchronization for a class of complex-valued memristor-based neural networks with time-varying delays via discontinuous impulsive control. Firstly, the master and slave complex-valued memristor-based neural networks with time-varying delays are translated to two real-valued memristor-based neural networks. Secondly, an impulsive control law is constructed and utilized to guarantee master-slave exponential synchronization of the neural networks. Thirdly, the master-slave synchronization problems are transformed into the stability problems of the master-slave error system. By employing linear matrix inequality (LMI) technique and constructing an appropriate Lyapunov-Krasovskii functional, some sufficient synchronization criteria are derived. Finally, a numerical simulation is provided to illustrate the effectiveness of the obtained theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Exponential L2-L∞ Filtering for a Class of Stochastic System with Mixed Delays and Nonlinear Perturbations

    Directory of Open Access Journals (Sweden)

    Zhaohui Chen

    2013-01-01

The delay-dependent exponential L2-L∞ performance analysis and filter design are investigated for stochastic systems with mixed delays and nonlinear perturbations. Based on the delay-partitioning and integral-partitioning techniques, an improved delay-dependent sufficient condition for the existence of the L2-L∞ filter is established by choosing an appropriate Lyapunov-Krasovskii functional and constructing a new integral inequality. The full-order filter design approaches are obtained in terms of linear matrix inequalities (LMIs). By solving the LMIs and using matrix decomposition, the desired filter gains can be obtained, which ensure that the filter error system is exponentially stable with a prescribed L2-L∞ performance γ. Numerical examples are provided to illustrate the effectiveness and significant improvement of the proposed method.

  2. Time-resolved infrared stimulated luminescence signals in feldspars: Analysis based on exponential and stretched exponential functions

    International Nuclear Information System (INIS)

    Pagonis, V.; Morthekai, P.; Singhvi, A.K.; Thomas, J.; Balaram, V.; Kitis, G.; Chen, R.

    2012-01-01

    Time-resolved infrared-stimulated luminescence (TR-IRSL) signals from feldspar samples have been the subject of several recent experimental studies. These signals are of importance in the field of luminescence dating, since they exhibit smaller fading effects than the commonly employed continuous-wave infrared signals (CW-IRSL). This paper presents a semi-empirical analysis of TR-IRSL data from feldspar samples, by using a linear combination of exponential and stretched exponential (SE) functions. The best possible estimates of the five parameters in this semi-empirical approach are obtained using five popular commercially available software packages, and by employing a variety of global optimization techniques. The results from all types of software and from the different fitting algorithms were found to be in close agreement with each other, indicating that a global optimum solution has likely been reached during the fitting process. Four complete sets of TR-IRSL data on well-characterized natural feldspars were fitted by using such a linear combination of exponential and SE functions. The dependence of the extracted fitting parameters on the stimulation temperature is discussed within the context of a recently proposed model of luminescence processes in feldspar. Three of the four feldspar samples studied in this paper are K-rich, and these exhibited different behavior at higher stimulation temperatures, than the fourth sample which was a Na-rich feldspar. The new method of analysis proposed in this paper can help isolate mathematically the more thermally stable components, and hence could lead to better dating applications in these materials. - Highlights: ► TR-IRSL from four feldspars were analyzed using exponential and stretched exponential functions. ► A variety of global optimization techniques give good agreement. ► Na-rich sample behavior is different from the three K-rich samples. ► Experimental data are fitted for stimulation temperatures
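The semi-empirical fitting function used above, a linear combination of an exponential and a stretched-exponential (SE) term, can be sketched numerically. A minimal sketch, assuming illustrative parameter values and a coarse grid search in place of the commercial global optimizers mentioned in the abstract:

```python
import numpy as np

def tr_irsl_model(t, A, tau1, B, tau2, beta):
    """Semi-empirical TR-IRSL decay: exponential plus stretched-exponential term."""
    return A * np.exp(-t / tau1) + B * np.exp(-(t / tau2) ** beta)

t = np.linspace(0.01, 50.0, 500)
y = tr_irsl_model(t, A=1.0, tau1=2.0, B=0.5, tau2=10.0, beta=0.7)  # synthetic "data"

# Coarse grid search over (beta, tau2) with the other three parameters fixed,
# standing in for the global optimization techniques compared in the paper.
betas = np.linspace(0.3, 1.0, 71)
taus = np.linspace(5.0, 15.0, 101)
sse, beta_hat, tau2_hat = min(
    (np.sum((tr_irsl_model(t, 1.0, 2.0, 0.5, tau, b) - y) ** 2), b, tau)
    for b in betas for tau in taus
)
print(beta_hat, tau2_hat)  # recovers beta = 0.7, tau2 = 10 on noiseless data
```

In practice all five parameters would be fitted simultaneously, as the paper does with several optimization packages; the grid search here only illustrates the model structure.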

  3. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  4. Effect of benzalkonium chloride on viability and energy metabolism in exponential- and stationary-growth-phase cells of Listeria monocytogenes.

    Science.gov (United States)

    Luppens, S B; Abee, T; Oosterom, J

    2001-04-01

    The difference in killing exponential- and stationary-phase cells of Listeria monocytogenes by benzalkonium chloride (BAC) was investigated by plate counting and linked to relevant bioenergetic parameters. At a low concentration of BAC (8 mg liter(-1)), a similar reduction in viable cell numbers was observed for stationary-phase cells and exponential-phase cells (an approximately 0.22-log unit reduction), although their membrane potential and pH gradient were dissipated. However, at higher concentrations of BAC, exponential-phase cells were more susceptible than stationary-phase cells. At 25 mg liter(-1), the difference in survival on plates was more than 3 log units. For both types of cells, killing, i.e., more than 1-log unit reduction in survival on plates, coincided with complete inhibition of acidification and respiration and total depletion of ATP pools. Killing efficiency was not influenced by the presence of glucose, brain heart infusion medium, or oxygen. Our results suggest that growth phase is one of the major factors that determine the susceptibility of L. monocytogenes to BAC.
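The log-unit reductions quoted above are simple ratios of viable plate counts. A minimal sketch with illustrative CFU numbers (not the paper's data):

```python
import math

# Log-unit reduction from viable plate counts, the survival measure used above.
def log_reduction(n_before, n_after):
    """log10 drop in viable cell count after treatment."""
    return math.log10(n_before / n_after)

print(log_reduction(1e8, 6e7))   # about 0.22 log units: the low-BAC case
print(log_reduction(1e8, 5e4))   # more than 3 log units: "killing" at 25 mg/l
```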

  5. Thermoluminescence under an exponential heating function: I. Theory

    International Nuclear Information System (INIS)

    Kitis, G; Chen, R; Pagonis, V; Carinou, E; Kamenopoulou, V

    2006-01-01

Constant temperature hot gas readers are widely employed in thermoluminescence dosimetry. In such readers the sample is heated according to an exponential heating function. The single glow-peak shape derived under this heating condition is not described by the TL kinetics equation corresponding to a linear heating rate. In the present work TL kinetics expressions, for first and general order kinetics, describing single glow-peak shapes under an exponential heating function are derived. All expressions were modified from their original form of I(n_0, E, s, b, T) into I(I_m, E, T_m, b, T) in order to become more efficient for glow-curve deconvolution analysis. The efficiency of all algorithms was extensively tested using synthetic glow-peaks.
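First-order (Randall-Wilkins) TL kinetics can be integrated numerically under an exponential heating function. A minimal sketch, assuming the asymptotic form T(t) = T_max - (T_max - T_0)exp(-at) for the hot-gas heating profile and illustrative trap parameters (not the expressions derived in the paper):

```python
import numpy as np

# First-order (Randall-Wilkins) TL kinetics integrated under an exponential
# heating function T(t) = T_max - (T_max - T0)*exp(-a*t); all parameter
# values below are illustrative assumptions.
k = 8.617e-5          # Boltzmann constant, eV/K
E, s = 1.0, 1e12      # trap depth (eV) and frequency factor (1/s)
T0, Tmax, a = 300.0, 600.0, 0.05

t = np.linspace(0.0, 200.0, 20001)
dt = t[1] - t[0]
T = Tmax - (Tmax - T0) * np.exp(-a * t)

n = 1.0               # trapped-charge population (normalized)
I = np.empty_like(t)
for i, Ti in enumerate(T):
    p = s * np.exp(-E / (k * Ti))   # escape rate at the current temperature
    I[i] = n * p                    # TL intensity
    n *= max(0.0, 1.0 - p * dt)     # first-order depletion (explicit Euler)

peak = np.argmax(I)
print(t[peak], T[peak])   # glow peak position in time and temperature
```

The resulting glow peak is skewed relative to the linear-heating case, which is why the paper derives dedicated I(I_m, E, T_m, b, T) expressions for deconvolution.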

  6. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction processes. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP), and the repair (correction) process, a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Nevertheless, applications of the model to the inspection and maintenance of physical systems are also foreseen. The paper includes a complete numerical example of the model's application to a software reliability analysis.
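The exponential reliability side of such NHPP-based models can be illustrated with the standard Goel-Okumoto exponential mean value function; this is a generic stand-in sketch, not the MERF/EARF model itself, and the parameters a and b are illustrative:

```python
import math

# Standard exponential NHPP (Goel-Okumoto) sketch; a and b are illustrative.
a, b = 100.0, 0.05   # expected total faults, fault-detection rate per unit time

def mean_value(t):
    """Expected cumulative number of failures observed by time t."""
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t):
    """P(no failure in (t, t+x]) for an NHPP: exp(-(m(t+x) - m(t)))."""
    return math.exp(-(mean_value(t + x) - mean_value(t)))

print(mean_value(10.0))         # about 39.35 expected failures by t = 10
print(reliability(1.0, 10.0))   # chance of surviving one more time unit
```

Reliability grows with accumulated test time because the failure intensity decays exponentially, the qualitative behavior the EARF approximation captures.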

  7. An Exact Analytical Solution to Exponentially Tapered Piezoelectric Energy Harvester

    Directory of Open Access Journals (Sweden)

    H. Salmani

    2015-01-01

It has been proven that tapering a piezoelectric beam along its length optimizes the power extracted from vibration-based energy harvesting. This phenomenon has been investigated by some researchers using semianalytical, finite element and experimental methods. In this paper, an exact analytical solution is presented to calculate the power generated from vibration of exponentially tapered unimorph and bimorph harvesters with series and parallel connections. The mass-normalized mode shapes of the exponentially tapered piezoelectric beam with tip mass are implemented to transfer the proposed electromechanical coupled equations into modal coordinates. The steady-state harmonic solution is verified both numerically and experimentally. Results show that there exist values of the tapering parameter and electric resistance such that the output power per unit mass of the energy harvester is maximized. Moreover, it is concluded that the electric resistance must be higher than a specified value for tapering the beam to yield more power.

  8. Handbook of exponential and related distributions for engineers and scientists

    CERN Document Server

    Pal, Nabendu; Lim, Wooi K

    2005-01-01

    The normal distribution is widely known and used by scientists and engineers. However, there are many cases when the normal distribution is not appropriate, due to the data being skewed. Rather than leaving you to search through journal articles, advanced theoretical monographs, or introductory texts for alternative distributions, the Handbook of Exponential and Related Distributions for Engineers and Scientists provides a concise, carefully selected presentation of the properties and principles of selected distributions that are most useful for application in the sciences and engineering.The book begins with all the basic mathematical and statistical background necessary to select the correct distribution to model real-world data sets. This includes inference, decision theory, and computational aspects including the popular Bootstrap method. The authors then examine four skewed distributions in detail: exponential, gamma, Weibull, and extreme value. For each one, they discuss general properties and applicabi...

  9. Optimization design of power efficiency of exponential impedance transformer

    International Nuclear Information System (INIS)

    Wang Meng; Zou Wenkang; Chen Lin; Guan Yongchao; Fu Jiabin; Xie Weiping

    2011-01-01

The paper investigates the optimization design of the power efficiency of an exponential impedance transformer with analytic and numerical methods. In the numerical calculation, a sine-wave voltage with the hypothesis of rising-edge equivalence is regarded as the forward-going voltage at the input of the transformer, and its dominant angular frequency is determined by the typical rise time of actual voltage waveforms. At the same time, dissipative loss in the water dielectric is neglected. The numerical results of three typical modes of impedance transformation, viz. linear mode, saturation mode and steep mode, are compared. Pivotal factors which affect the power efficiency of the exponential impedance transformer are discussed, and approximate quantitative ranges of intermediate variables and accordance coefficients are obtained. Finally, the paper discusses some important issues in actual design, such as the insulation safety factor in structure design, effects of coupling capacitance on impedance calculation, and dissipative loss in the water dielectric. (authors)
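An exponential impedance taper interpolates geometrically between its end impedances. A minimal sketch of the generic profile Z(x) = Z_in(Z_out/Z_in)^(x/L), with illustrative values rather than the paper's specific design:

```python
import numpy as np

# Generic exponential impedance taper; end impedances and length are illustrative.
Z_in, Z_out, L = 2.0, 18.0, 1.5   # ohms, ohms, metres

def Z(x):
    """Impedance profile Z(x) = Z_in * (Z_out/Z_in)**(x/L) along the line."""
    return Z_in * (Z_out / Z_in) ** (x / L)

x = np.linspace(0.0, L, 7)
print(np.round(Z(x), 3))

# The midpoint impedance equals the geometric mean of the two ends,
# a characteristic property of the exponential taper.
print(Z(L / 2), np.sqrt(Z_in * Z_out))
```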

  10. The Use of Modeling Approach for Teaching Exponential Functions

    Science.gov (United States)

    Nunes, L. F.; Prates, D. B.; da Silva, J. M.

    2017-12-01

This work presents a discussion related to the teaching and learning of mathematical content related to the study of exponential functions in a group of freshman students enrolled in the first semester of the Science and Technology Bachelor's program (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). As a contextualization tool strongly mentioned in the literature, the modelling approach was used as an educational teaching tool to produce contextualization in the teaching-learning process of exponential functions for these students. In this sense, some simple models elaborated with the GeoGebra software were used and, to obtain a qualitative evaluation of the investigation and its results, Didactic Engineering was used as the research methodology. As a consequence of this detailed research, some interesting details about the teaching and learning process were observed, discussed and described.

  11. CMB constraints on β-exponential inflationary models

    Science.gov (United States)

    Santos, M. A.; Benetti, M.; Alcaniz, J. S.; Brito, F. A.; Silva, R.

    2018-03-01

We analyze a class of generalized inflationary models proposed in ref. [1], known as β-exponential inflation. We show that this kind of potential can arise in the context of brane cosmology, where the field describing the size of the extra dimension is interpreted as the inflaton. We discuss the observational viability of this class of models in light of the latest Cosmic Microwave Background (CMB) data from the Planck Collaboration through a Bayesian analysis, and impose tight constraints on the model parameters. We find that the CMB data alone weakly prefer the minimal standard model (ΛCDM) over β-exponential inflation. However, when current local measurements of the Hubble parameter, H0, are considered, the β-inflation model is moderately preferred over the ΛCDM cosmology, making the study of this class of inflationary models interesting in the context of the current H0 tension.

  12. Audit of medication errors by anesthetists in North Western Nigeria ...

    African Journals Online (AJOL)

    ... errors do occur in the everyday practice of anesthetists in Nigeria as in other countries and can lead to morbidity and mortality in our patients. Routine audit and reporting of critical incidents including errors in drug administration should be encouraged. Reduction of medication errors is an important aspect of patient safety, ...

  13. On limiting towards the boundaries of exponential families

    Czech Academy of Sciences Publication Activity Database

    Matúš, František

    2015-01-01

    Roč. 51, č. 5 (2015), s. 725-738 ISSN 0023-5954 R&D Projects: GA ČR GA13-20012S Institutional support: RVO:67985556 Keywords : exponential family * variance function * Kullback--Leibler divergence * relative entropy * information divergence * mean parametrization * convex support Subject RIV: BD - Theory of Information Impact factor: 0.628, year: 2015 http://library.utia.cas.cz/separaty/2016/MTR/matus-0455604.pdf

  14. On the Dividend Strategies with Non-Exponential Discounting

    OpenAIRE

    Zhao, Qian; Wei, Jiaqin; Wang, Rongming

    2013-01-01

    In this paper, we study the dividend strategies for a shareholder with non-constant discount rate in a diffusion risk model. We assume that the dividends can only be paid at a bounded rate and restrict ourselves to the Markov strategies. This is a time inconsistent control problem. The extended HJB equation is given and the verification theorem is proved for a general discount function. Considering the pseudo-exponential discount functions (Type I and Type II), we get the equilibrium dividend...

  15. Linearization of Nonautonomous Impulsive System with Nonuniform Exponential Dichotomy

    Directory of Open Access Journals (Sweden)

    Yongfei Gao

    2014-01-01

This paper gives a version of the Hartman-Grobman theorem for impulsive differential equations. We assume that the linear impulsive system has a nonuniform exponential dichotomy. Under some suitable conditions, we prove that the nonlinear impulsive system is topologically conjugate to its linear system. Indeed, we explicitly construct the topologically equivalent function (the transformation). Moreover, the method used to prove the topological conjugacy is quite different from those in previous works (e.g., see Barreira and Valls, 2006).

  16. Exponential Inequalities for Positively Associated Random Variables and Applications

    Directory of Open Access Journals (Sweden)

    Yang Shanchao

    2008-01-01

We establish some exponential inequalities for positively associated random variables without the boundedness assumption. These inequalities improve the corresponding results obtained by Oliveira (2005). By one of the inequalities, we obtain the convergence rate for the case of geometrically decreasing covariances, which is close to the optimal achievable convergence rate for independent random variables under the Hartman-Wintner law of the iterated logarithm and improves the convergence rate derived by Oliveira (2005) for that case.

  17. The need for interdisciplinary research on exponential technologies and sustainability

    OpenAIRE

    Alier Forment, Marc; Casany Guerrero, María José

    2017-01-01

Technology has a clear influence on the way we live, our culture, how society functions, and, last but not least, our environment. At a moment when the transformational effect of technology is accelerating at an exponential pace, it is really important to reflect on the direction we want this acceleration to take. In this paper we present some of the factors relevant to this matter: 1) the influence of technology on society and the environment. 2) The acceleration of some technologies ...

  18. Applications of exponential approximation by integer shifts of Gaussian functions

    Directory of Open Access Journals (Sweden)

    S. M. Sitnik

    2013-01-01

In this paper we consider approximations of functions using integer shifts of Gaussians – quadratic exponentials. A method is proposed to find the coefficients of the node functions by solving linear systems of equations. An explicit formula for the determinant of the system is found; based on it, the solvability of the linear system under consideration and the uniqueness of its solution are proved. We compare the results with known ones and briefly indicate applications to signal theory.
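The linear-system approach described above can be sketched directly: sample the target at integer nodes, build the matrix of integer-shifted Gaussians evaluated at those nodes, and solve for the coefficients. The node range and target function are illustrative choices:

```python
import numpy as np

# Interpolating f at integer nodes by integer shifts of the Gaussian
# g(x) = exp(-x^2); node range and target function are illustrative.
nodes = np.arange(-10, 11)
f = np.exp(-np.abs(nodes) / 3.0)          # target sampled at the nodes

# G[j, k] = g(j - k): each column is one integer-shifted Gaussian.
G = np.exp(-(nodes[:, None] - nodes[None, :]) ** 2.0)
c = np.linalg.solve(G, f)                 # coefficients of the shifted Gaussians

approx = G @ c
print(np.max(np.abs(approx - f)))         # interpolation residual at the nodes
```

The system matrix is symmetric positive definite (a Gaussian kernel matrix), so the solve is well conditioned and the nodes are matched to machine precision, consistent with the solvability and uniqueness results the abstract mentions.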

  19. Notes on spectrum and exponential decay in nonautonomous evolutionary equations

    Directory of Open Access Journals (Sweden)

    Christian Pötzsche

    2016-08-01

We first determine the dichotomy (Sacker-Sell) spectrum for certain nonautonomous linear evolutionary equations induced by a class of parabolic PDE systems. Having this information at hand, we underline the applicability of our second result: if the widths of the gaps in the dichotomy spectrum are bounded away from $0$, then one can rule out the existence of super-exponentially decaying (i.e. slow) solutions of semi-linear evolutionary equations.

  20. Extracting the exponential behaviors in the market data

    Science.gov (United States)

    Watanabe, Kota; Takayasu, Hideki; Takayasu, Misako

    2007-08-01

We introduce a mathematical criterion defining bubbles or crashes in financial market price fluctuations by considering exponential fitting of the given data. By applying this criterion we can automatically extract the periods in which bubbles and crashes are identified. From stock market data covering the so-called Internet bubble, it is found that the characteristic length of a bubble period is about 100 days.
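The exponential-fitting criterion can be sketched as a log-linear least-squares fit of a price series; the synthetic data and the growth-rate threshold below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

# Exponential fit p(t) ~ p0 * exp(g*t) via least squares on log-prices;
# synthetic data and the bubble threshold are illustrative assumptions.
rng = np.random.default_rng(0)
t = np.arange(100.0)
price = 50.0 * np.exp(0.02 * t) * np.exp(rng.normal(0.0, 0.005, t.size))

g, log_p0 = np.polyfit(t, np.log(price), 1)   # growth rate and intercept
print(g)                                      # recovered rate, close to 0.02

# Flag a "bubble" window when the fitted exponential growth rate is large.
is_bubble = g > 0.01
print(is_bubble)
```

Sliding such a fit over the series, and flagging windows whose fitted rate exceeds a threshold, gives the kind of automatic bubble/crash extraction the abstract describes.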

  1. Stretched exponentials and power laws in granular avalanching

    Science.gov (United States)

    Head, D. A.; Rodgers, G. J.

    1999-02-01

    We introduce a model for granular surface flow which exhibits both stretched exponential and power law avalanching over its parameter range. Two modes of transport are incorporated, a rolling layer consisting of individual particles and the overdamped, sliding motion of particle clusters. The crossover in behaviour observed in experiments on piles of rice is attributed to a change in the dominant mode of transport. We predict that power law avalanching will be observed whenever surface flow is dominated by clustered motion.

  2. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran Kumar; Mai, Paul Martin

    2016-01-01

Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
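A truncated exponential on [0, x_max] is easy to sample and check against its closed-form mean. A minimal sketch with illustrative parameters, not fits to the SRCMOD database:

```python
import numpy as np

# Truncated exponential p(x) ∝ exp(-x/x0) on [0, xmax]: inverse-CDF sampling
# and a check of the closed-form mean; parameters are illustrative.
x0, xmax = 1.0, 3.0      # scale and physical upper bound on slip (arbitrary units)
rng = np.random.default_rng(1)

u = rng.random(200_000)
Z = 1.0 - np.exp(-xmax / x0)              # probability mass kept by the truncation
samples = -x0 * np.log(1.0 - u * Z)       # inverse-CDF transform

mean_theory = x0 - xmax * np.exp(-xmax / x0) / Z
print(samples.max() <= xmax, samples.mean(), mean_theory)
```

The hard upper bound on every sample is the distribution's defining feature here, mirroring the physical constraint on maximum possible slip that the abstract invokes.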

  3. Exponentiation and deformations of Lie-admissible algebras

    International Nuclear Information System (INIS)

    Myung, H.C.

    1982-01-01

The exponential function is defined for a finite-dimensional real power-associative algebra with unit element. The application of the exponential function is focused on the power-associative (p,q)-mutation of a real or complex associative algebra. Explicit formulas are computed for the (p,q)-mutation of the real envelope of the spin 1 algebra and the Lie algebra so(3) of the rotation group, in light of earlier investigations of the spin-1/2 case. A slight variant of the mutated exponential is interpreted as a continuous function of the Lie algebra into some isotope of the corresponding linear Lie group. The second part of this paper is concerned with the representation and deformation of a Lie-admissible algebra. The second cohomology group of a Lie-admissible algebra is introduced as a generalization of those of associative and Lie algebras in the Hochschild and Chevalley-Eilenberg theory. Some elementary theory of algebraic deformation of Lie-admissible algebras is discussed in view of generalization of that of associative and Lie algebras. Lie-admissible deformations are also suggested by the representation of Lie-admissible algebras. Some explicit examples of Lie-admissible deformation are given in terms of the (p,q)-mutation of associative deformation of an associative algebra. Finally, we discuss Lie-admissible deformations of order one.

  5. Validation of predicted exponential concentration profiles of chemicals in soils

    International Nuclear Information System (INIS)

    Hollander, Anne; Baijens, Iris; Ragas, Ad; Huijbregts, Mark; Meent, Dik van de

    2007-01-01

Multimedia mass balance models assume well-mixed homogeneous compartments. Particularly for soils, this does not correspond to reality, which results in potentially large uncertainties in estimates of transport fluxes from soils. A theoretically expected exponential decrease of chemical concentrations with depth has been proposed, but hardly tested against empirical data. In this paper, we explored the correspondence between theoretically predicted soil concentration profiles and 84 field-measured profiles. In most cases, chemical concentrations in soils appear to decline exponentially with depth, and values for the chemical-specific soil penetration depth (d_p) are predicted within one order of magnitude. Overall, the reliability of multimedia models will improve when they account for depth-dependent soil concentrations, so we recommend taking into account the described theoretical exponential decrease of chemical concentrations with depth in chemical fate studies. In this model the d_p-values should be estimated either based on local conditions or on a fixed d_p-value, which we recommend to be 10 cm for chemicals with log K_ow > 3. - Multimedia mass model predictions will improve when taking into account depth-dependent soil concentrations
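An exponential depth profile C(z) = C0·exp(-z/d_p) makes the penetration depth recoverable from a log-linear fit of measured concentrations. A minimal sketch on synthetic noiseless data, using the 10 cm d_p value recommended above (depths and C0 are illustrative):

```python
import numpy as np

# Exponential depth profile C(z) = C0 * exp(-z / d_p); recovering the
# penetration depth d_p by a log-linear fit. Sampling depths and C0 are
# illustrative, with d_p = 10 cm as recommended for log K_ow > 3 chemicals.
z = np.array([2.5, 7.5, 12.5, 17.5, 25.0, 35.0])   # sampling depths, cm
d_p_true, C0 = 10.0, 40.0
C = C0 * np.exp(-z / d_p_true)

slope, intercept = np.polyfit(z, np.log(C), 1)
d_p_hat = -1.0 / slope
print(d_p_hat)   # recovers 10 cm on noiseless data
```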

  6. Identifying systematic DFT errors in catalytic reactions

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence of the applied exchange–correlation functional on the reaction energies rather than on errors versus the experimental data. As a result, improved energy corrections can now be determined for both gas phase and adsorbed reaction species, particularly interesting within heterogeneous catalysis. We show that for the CO2 reduction reactions, the main source of error is associated with the C=O bonds and not the typically energy-corrected OCO backbone.

  7. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  8. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  9. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

Human errors make a major contribution to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is however increasing the danger of system accidents. Models of the human operator have been proposed, but the models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)

  10. Analysis of gross error rates in operation of commercial nuclear power stations

    International Nuclear Information System (INIS)

    Joos, D.W.; Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

Experience in the operation of US commercial nuclear power plants is reviewed over a 25-month period. The reports accumulated in that period on events of human error and component failure are examined to evaluate gross operator error rates. The impact of such errors on plant operation and safety is examined through the use of proper taxonomies of errors, tasks and failures. Four categories of human error are considered; namely, operator, maintenance, installation and administrative. The computed error rates are used to examine appropriate operator models for the evaluation of operator reliability. Human error rates are found to be significant to a varying degree in both BWRs and PWRs. This emphasizes the importance of considering human factors in the safety and reliability analysis of nuclear systems. The results also indicate that human errors, and especially operator errors, do indeed follow the exponential reliability model. (Auth.)
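The exponential reliability model mentioned above, R(t) = exp(-λt) with a constant error rate λ, can be sketched from event counts; the counts and exposure time below are illustrative, not the reviewed 25-month operating record:

```python
import math

# Exponential operator-reliability model R(t) = exp(-lambda * t), with the
# error rate estimated from event counts; numbers are illustrative.
n_errors = 12          # gross operator errors observed
exposure = 25.0        # months of observation
lam = n_errors / exposure          # MLE of a constant (Poisson) error rate

def reliability(t_months):
    """Probability of no operator error over t_months under the model."""
    return math.exp(-lam * t_months)

print(lam, reliability(1.0))   # rate 0.48/month, about 0.62 over one month
```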

  11. Research on Copy-Move Image Forgery Detection Using Features of Discrete Polar Complex Exponential Transform

    Science.gov (United States)

    Gan, Yanfen; Zhong, Junliu

    2015-12-01

With the aid of sophisticated photo-editing software, such as Photoshop, the copy-move image forgery operation has been widely applied and has become a major concern in the field of information security in modern society. A lot of work on detecting this kind of forgery has achieved notable results, but detection results for geometrically transformed copy-move regions are still not satisfactory. In this paper, a new method based on the Polar Complex Exponential Transform is proposed. This method addresses issues in image geometric moments, focusing on constructing a rotation-invariant moment and extracting features of that moment. In order to reduce rounding errors of the transform from the Polar coordinate system to the Cartesian coordinate system, a new transformation method is presented and discussed in detail at the same time. The new method constructs a 9 × 9 shrunk template to transform the Cartesian coordinate system back to the Polar coordinate system, which reduces transform errors to a much greater degree. Forgery detection, such as copy-move image forgery detection, is a difficult procedure, but experiments show our method is a great improvement in detecting and identifying forged images affected by rotation transforms.

  12. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  13. Exponential frequency spectrum and Lorentzian pulses in magnetized plasmas

    International Nuclear Information System (INIS)

    Pace, D. C.; Shi, M.; Maggs, J. E.; Morales, G. J.; Carter, T. A.

    2008-01-01

    Two different experiments involving pressure gradients across the confinement magnetic field in a large plasma column are found to exhibit a broadband turbulence that displays an exponential frequency spectrum for frequencies below the ion cyclotron frequency. The exponential feature has been traced to the presence of solitary pulses having a Lorentzian temporal signature. These pulses arise from nonlinear interactions of drift-Alfven waves driven by the pressure gradients. In both experiments the width of the pulses is narrowly distributed resulting in exponential spectra with a single characteristic time scale. The temporal width of the pulses is measured to be a fraction of a period of the drift-Alfven waves. The experiments are performed in the Large Plasma Device (LAPD-U) [W. Gekelman et al., Rev. Sci. Instrum. 62, 2875 (1991)] operated by the Basic Plasma Science Facility at the University of California, Los Angeles. One experiment involves a controlled, pure electron temperature gradient associated with a microscopic (6 mm gradient length) hot electron temperature filament created by the injection of a small electron beam embedded in the center of a large, cold magnetized plasma. The other experiment is a macroscopic (3.5 cm gradient length) limiter-edge experiment in which a density gradient is established by inserting a metallic plate at the edge of the nominal plasma column of the LAPD-U. The temperature filament experiment permits a detailed study of the transition from coherent to turbulent behavior and the concomitant change from classical to anomalous transport. In the limiter experiment the turbulence sampled is always fully developed. The similarity of the results in the two experiments strongly suggests a universal feature of pressure-gradient driven turbulence in magnetized plasmas that results in nondiffusive cross-field transport. This may explain previous observations in helical confinement devices, research tokamaks, and arc plasmas.

  14. Life prediction for high temperature low cycle fatigue of two kinds of titanium alloys based on exponential function

    Science.gov (United States)

    Mu, G. Y.; Mi, X. Z.; Wang, F.

    2018-01-01

    The high temperature low cycle fatigue tests of TC4 titanium alloy and TC11 titanium alloy are carried out under strain control. The relationships between cyclic stress and life and between strain and life are analyzed. The high temperature low cycle fatigue life prediction model for the two titanium alloys is first established using the Manson-Coffin method. The relationship between the number of reversals to failure and the plastic strain range is nonlinear in double-logarithmic coordinates, whereas the Manson-Coffin method assumes a linear relation; a certain prediction error is therefore unavoidable with the Manson-Coffin method. In order to solve this problem, a new method based on an exponential function is proposed. The results show that the fatigue life of the two titanium alloys can be predicted accurately and effectively by either method, with prediction accuracy within a ±1.83 times scatter zone. The new exponential-function method proves more effective and accurate than the Manson-Coffin method for both alloys, giving better fatigue life predictions with a smaller standard deviation and scatter zone. For both methods, the predictions for TC4 titanium alloy are better than those for TC11 titanium alloy.
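The Manson-Coffin relation that the paper takes as its baseline, Δεp/2 = εf'(2Nf)^c, is a straight line in double-logarithmic coordinates, so its two constants can be fitted by ordinary least squares on log-transformed data (the strain-life points below are invented for illustration, not the TC4/TC11 measurements):

```python
import math

# Hypothetical (reversals to failure 2Nf, plastic strain amplitude) pairs.
data = [(1e2, 8.0e-3), (1e3, 3.2e-3), (1e4, 1.3e-3), (1e5, 5.0e-4)]

# Manson-Coffin: Dep/2 = ef' * (2Nf)^c, i.e. in log-log coordinates
# log(Dep/2) = log(ef') + c * log(2Nf).  Fit c and ef' by least squares.
xs = [math.log(n) for n, _ in data]
ys = [math.log(e) for _, e in data]
m = len(data)
xbar, ybar = sum(xs) / m, sum(ys) / m
c = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))          # fatigue ductility exponent
ef = math.exp(ybar - c * xbar)                    # fatigue ductility coefficient

def life_manson_coffin(strain):
    """Invert the fitted power law to predict reversals to failure."""
    return (strain / ef) ** (1.0 / c)
```

The paper's point is that when the log-log data are visibly curved, this straight-line fit is systematically biased, which is what motivates replacing the power law with an exponential function.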

  15. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  16. On the stability of some systems of exponential difference equations

    Directory of Open Access Journals (Sweden)

    N. Psarros

    2018-01-01

    In this paper we prove the stability of the zero equilibria of two systems of difference equations of exponential type, which are extensions of a one-dimensional biological model. The stability of these systems is investigated in the special case when one of the eigenvalues is equal to -1 and the other eigenvalue has absolute value less than 1, using centre manifold theory. In addition, we study the existence and uniqueness of positive equilibria, the attractivity and the global asymptotic stability of these equilibria for some related systems of difference equations.

  17. Exponential complexity and ontological theories of quantum mechanics

    International Nuclear Information System (INIS)

    Montina, A.

    2008-01-01

    Ontological theories of quantum mechanics describe a single system by means of well-defined classical variables and attribute the quantum uncertainties to our ignorance about the underlying reality represented by these variables. We consider the general class of ontological theories describing a quantum system by a set of variables with Markovian (either deterministic or stochastic) evolution. We provide proof that the number of continuous variables cannot be smaller than 2N-2, N being the Hilbert-space dimension. Thus, any ontological Markovian theory of quantum mechanics requires a number of variables which grows exponentially with the physical size. This result is relevant also in the framework of quantum Monte Carlo methods

  18. Exponential and power laws in public procurement markets

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav; Skuhrovec, J.

    2012-01-01

    Roč. 99, č. 2 (2012), 28005-1-28005-6 ISSN 0295-5075 R&D Projects: GA ČR GA402/09/0965 Grant - others:GA UK(CZ) 118310; SVV(CZ) 265 504; GA TA ČR(CZ) TD010133 Institutional support: RVO:67985556 Keywords : Public procurement * Scaling * Power law Subject RIV: AH - Economics Impact factor: 2.260, year: 2012 http://library.utia.cas.cz/separaty/2012/E/kristoufek-exponential and power laws in public procurement markets.pdf

  19. Galilean invariance in the exponential model of atomic collisions

    International Nuclear Information System (INIS)

    del Pozo, A.; Riera, A.; Yáñez, M.

    1986-01-01

    Using the X^(n+)(1s^2) + He^(2+) colliding systems as specific examples, we study the origin dependence of results in the application of the two-state exponential model, and we show the relevance of polarization effects in that study. Our analysis shows that polarization effects of the He^+(1s) orbital due to interaction with the X^((n+1)+) ion in the exit channel yield a very small contribution to the energy difference and render the dynamical coupling so strongly origin dependent that it invalidates the basic premises of the model. Further study, incorporating translation factors in the formalism, is needed.

  20. Finite differences with exponential filtering in the calculation of reactivity

    International Nuclear Information System (INIS)

    Suescun Diaz, Daniel; Senra Martinez, Aquilino

    2010-01-01

    A formulation for the calculation of reactivity using a recursive process is presented in this paper, together with a treatment to reduce the noise found in the nuclear power signal. Using the history of nuclear power, considered as the memory of that power, and an exponential filter adjusted by the least squares method, it is possible to reduce the nuclear power fluctuations without attenuation in the calculation of reactivity, and with a smaller delay than that of a first-order low-pass filter. (orig.)
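The filtering idea can be illustrated with a simpler relative of the paper's exponentially adjusted filter: an exponentially weighted moving average, in which the power history enters with geometrically decaying weights. The signal and smoothing constant below are synthetic; this is a sketch of the general idea, not the authors' least-squares-fitted filter:

```python
import random

def ewma(signal, alpha):
    """Exponentially weighted moving average: recent samples are weighted
    geometrically more than old ones, which gives less lag than a long
    moving-average window of comparable smoothing power."""
    out = [signal[0]]
    for x in signal[1:]:
        out.append(alpha * x + (1.0 - alpha) * out[-1])
    return out

# Synthetic noisy power signal around a constant 100 (arbitrary units).
random.seed(0)
power = [100.0 + random.gauss(0.0, 5.0) for _ in range(200)]
smooth = ewma(power, alpha=0.1)
```

Smaller `alpha` means a longer effective memory and stronger noise suppression, at the cost of more delay; the paper's point is that its least-squares-adjusted exponential filter gets the noise reduction with less delay than a first-order low-pass filter.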

  1. Finite differences with exponential filtering in the calculation of reactivity

    Energy Technology Data Exchange (ETDEWEB)

    Suescun Diaz, Daniel; Senra Martinez, Aquilino [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). COPPE - Programa de Engenharia Nuclear

    2010-08-15

    A formulation for the calculation of reactivity using a recursive process is presented in this paper, together with a treatment to reduce the noise found in the nuclear power signal. Using the history of nuclear power, considered as the memory of that power, and an exponential filter adjusted by the least squares method, it is possible to reduce the nuclear power fluctuations without attenuation in the calculation of reactivity, and with a smaller delay than that of a first-order low-pass filter. (orig.)

  2. On Exponential Hedging and Related Quadratic Backward Stochastic Differential Equations

    International Nuclear Information System (INIS)

    Sekine, Jun

    2006-01-01

    The dual optimization problem for the exponential hedging problem is addressed with a cone constraint. Without boundedness conditions on the terminal payoff and the drift of the Ito-type controlled process, the backward stochastic differential equation, which has a quadratic growth term in the drift, is derived as a necessary and sufficient condition for optimality via a variational method and dynamic programming. Further, solvable situations are given, in which the value and the optimizer are expressed in closed forms with the help of the Clark-Haussmann-Ocone formula

  3. New robust chaotic system with exponential quadratic term

    International Nuclear Information System (INIS)

    Bao Bocheng; Li Chunbiao; Liu Zhong; Xu Jianping

    2008-01-01

    This paper proposes a new robust chaotic system of three-dimensional quadratic autonomous ordinary differential equations obtained by introducing an exponential quadratic term. The system can display a double-scroll chaotic attractor with only two equilibria, and is found to be robustly chaotic over a very wide parameter domain, with a positive maximum Lyapunov exponent. Some basic dynamical properties and the chaotic behaviour of the novel attractor are studied. By numerical simulation, this paper verifies that the three-dimensional system can also evolve into periodic and chaotic behaviours under a constant controller. (general)

  4. Neural pulse frequency modulation of an exponentially correlated Gaussian process

    Science.gov (United States)

    Hutchinson, C. E.; Chon, Y.-T.

    1976-01-01

    The effect of NPFM (Neural Pulse Frequency Modulation) on a stationary Gaussian input, namely an exponentially correlated Gaussian input, is investigated with special emphasis on the determination of the average number of pulses in unit time, known also as the average frequency of pulse occurrence. For some classes of stationary input processes where the formulation of the appropriate multidimensional Markov diffusion model of the input-plus-NPFM system is possible, the average impulse frequency may be obtained by a generalization of the approach adopted. The results are approximate and numerical, but are in close agreement with Monte Carlo computer simulation results.

  5. Dark energy exponential potential models as curvature quintessence

    International Nuclear Information System (INIS)

    Capozziello, S; Cardone, V F; Piedipalumbo, E; Rubano, C

    2006-01-01

    It has been recently shown that, under some general conditions, it is always possible to find a fourth-order gravity theory capable of reproducing the same dynamics as a given dark energy model. Here, we discuss this approach for a dark energy model with a scalar field evolving under the action of an exponential potential. In the absence of matter, such a potential can be recovered from a fourth-order theory via a conformal transformation. Including the matter term, the function f(R) entering the generalized gravity Lagrangian can be reconstructed according to the dark energy model

  6. Exponential Martingales and Changes of Measure for Counting Processes

    DEFF Research Database (Denmark)

    Sokol, Alexander; Hansen, Niels Richard

    2015-01-01

    We give sufficient criteria for the Doléans-Dade exponential of a stochastic integral with respect to a counting process local martingale to be a true martingale. The criteria are adapted particularly to the case of counting processes and are sufficiently weak to be useful and verifiable, as we illustrate by several examples. In particular, the criteria allow for the construction of, for example, nonexplosive Hawkes processes, counting processes with stochastic intensities depending on diffusion processes, as well as inhomogeneous finite-state Markov processes.

  7. Polar exponential sensor arrays unify iconic and Hough space representation

    Science.gov (United States)

    Weiman, Carl F. R.

    1990-01-01

    The log-polar coordinate system, inherent in both polar exponential sensor arrays and log-polar remapped video imagery, is identical to the coordinate system of its corresponding Hough transform parameter space. The resulting unification of iconic and Hough domains simplifies computation for line recognition and eliminates the slope quantization problems inherent in the classical Cartesian Hough transform. The geometric organization of the algorithm is more amenable to massively parallel architectures than that of the Cartesian version. The neural architecture of the human visual cortex meets the geometric requirements to execute 'in-place' log-Hough algorithms of the kind described here.

  8. Exponential rarefaction of real curves with many components

    OpenAIRE

    Gayet , Damien; Welschinger , Jean-Yves

    2011-01-01

    Given a positive real Hermitian holomorphic line bundle L over a smooth real projective manifold X, the space of real holomorphic sections of the bundle L^d inherits for every positive integer d an L^2 scalar product which induces a Gaussian measure. When X is a curve or a surface, we estimate the volume of the cone of real sections whose vanishing locus contains many real components. In particular, the volume of the cone of maximal real sections decreases exponentially as d grows to...

  9. Exponential convergence and acceleration of Hartree-Fock calculations

    International Nuclear Information System (INIS)

    Bonaccorso, A.; Di Toro, M.; Lomnitz-Adler, J.

    1979-01-01

    It is shown that one can expect an exponential behaviour for the convergence of the Hartree-Fock solution during the HF iteration procedure. This property is used to extrapolate some collective degrees of freedom, in this case the shape, in order to speed up the self-consistent calculation. For axially deformed nuclei the method is applied to the quadrupole moment which corresponds to a simple scaling transformation on the single particle wave functions. Results are shown for the deformed nuclei 20 Ne and 28 Si with a Skyrme interaction. (Auth.)
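The extrapolation idea rests on iterates approaching their limit geometrically, x_n ≈ L + C·r^n; three successive iterates then determine L exactly via Aitken's Δ² formula. The scalar sketch below uses Aitken's formula as a stand-in for the paper's extrapolation of the quadrupole moment during the HF iteration (the sequence is synthetic):

```python
def aitken(x0, x1, x2):
    """Aitken delta-squared extrapolation: recovers the limit L exactly
    when the iterates follow x_n = L + C * r**n."""
    d1, d2 = x1 - x0, x2 - x1
    return x2 - d2 * d2 / (d2 - d1)

# Synthetic iterates converging geometrically to L = 3.0 with ratio r = 0.6.
L, C, r = 3.0, 2.0, 0.6
xs = [L + C * r ** n for n in range(3)]
est = aitken(*xs)   # recovers 3.0 from the first three iterates
```

In a self-consistent calculation the extrapolated collective quantity (here a scalar proxy for the shape) would be fed back as the next trial configuration, which is what shortens the iteration.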

  10. Exponential GARCH Modeling with Realized Measures of Volatility

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Huang, Zhuo

    We introduce the Realized Exponential GARCH model that can utilize multiple realized volatility measures for the modeling of a return series. The model specifies the dynamic properties of both returns and realized measures, and is characterized by a flexible modeling of the dependence between returns and volatility. We apply the model to DJIA stocks and an exchange traded fund that tracks the S&P 500 index and find that specifications with multiple realized measures dominate those that rely on a single realized measure. The empirical analysis suggests some convenient simplifications...

  11. Exponential Time Complexity of the Permanent and the Tutte Polynomial

    DEFF Research Database (Denmark)

    Dell, Holger; Husfeldt, Thore; Marx, Dániel

    2014-01-01

    We show conditional lower bounds for well-studied #P-hard problems: The number of satisfying assignments of a 2-CNF formula with n variables cannot be computed in time exp(o(n)), and the same is true for computing the number of all independent sets in an n-vertex graph. The permanent of an n× n... bounds are relative to (variants of) the Exponential Time Hypothesis (ETH), which says that the satisfiability of n-variable 3-CNF formulas cannot be decided in time exp(o(n)). We relax this hypothesis by introducing its counting version #ETH; namely, that the satisfying assignments cannot be counted...

  12. THE EXPONENTIAL STABILIZATION FOR A SEMILINEAR WAVE EQUATION WITH LOCALLY DISTRIBUTED FEEDBACK

    Institute of Scientific and Technical Information of China (English)

    JIA CHAOHUA; FENG DEXING

    2005-01-01

    This paper considers the exponential decay of the solution to a damped semilinear wave equation with variable coefficients in the principal part by the Riemannian multiplier method. A differential geometric condition that ensures the exponential decay is obtained.

  13. Integration of large chemical kinetic mechanisms via exponential methods with Krylov approximations to Jacobian matrix functions

    KAUST Repository

    Bisetti, Fabrizio

    2012-01-01

    with the computational cost associated with the time integration of stiff, large chemical systems, a novel approach is proposed. The approach combines an exponential integrator and Krylov subspace approximations to the exponential function of the Jacobian matrix

  14. Exponential relationship between DMIPP uptake and blood flow in normal and ischemic canine myocardium

    Energy Technology Data Exchange (ETDEWEB)

    Comans, E.F.I.; Lingen, A. van; Bax, J.J.; Sloof, G.W. [Free Univ. Hospital, Amsterdam (Netherlands). Dept. of Nuclear Medicine; Visser, F.C. [Free Univ. Hospital, Amsterdam (Netherlands). Dept. of Cardiology; Vusse, G.J. van der [Limburg Univ., Maastricht (Netherlands). Cardiovascular Research Inst.; Knapp, F.F. Jun. [Oak Ridge Lab., TN (United States). Nuclear Medicine Group

    1998-12-31

    In 10 open-chest dogs the left anterior descending coronary artery was cannulated and perfused at reduced flow via an extracorporeal bypass (ECB). Myocardial blood flow (MBF) was assessed with scandium-46 labeled microspheres. Forty minutes after i.v. injection of DMIPP, the heart was excised and cut into 120 samples. In each sample MBF (ml/g*min) and DMIPP uptake (percentage of the injected dose per gram: %id/g) were assessed. The relation between normalized MBF and DMIPP uptake was assessed using linear models, with a zero and with a non-zero intercept, and an exponential model function: A[1-e^(-MBF/Fc)], where A and Fc are the amplitude and flow constant, respectively. The goodness of fit for all models was expressed as the standard error of estimate (SEE). In all individual dogs the relation between DMIPP uptake and MBF was significantly better (p<0.001) represented by the exponential model than by the linear model with zero intercept. In 8 of 10 dogs the exponential model showed a better fit than the linear model with a non-zero intercept; the difference was significant (p<0.05) in 5 dogs. For pooled data, linear regression analysis with a non-zero intercept yielded DMIPP=0.54+0.44*MBF (SEE: 0.18) and with a zero intercept DMIPP=0.97*MBF (SEE: 0.27). The goodness of fit of the exponential model, DMIPP=1.07[1-e^(-MBF/0.35)] (SEE: 0.15), was significantly better (p<0.0001) than that of the linear models. In the normal to low MBF range, uptake of the dimethyl-branched fatty acid analogue DMIPP shows an exponential relationship with flow, which is more appropriate than a linear relationship from a physiological point of view. (orig./MG)
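A sketch of fitting the saturating exponential model DMIPP = A[1 - e^(-MBF/Fc)]: Fc is scanned over a grid and, for each candidate, the amplitude A has a closed-form least-squares solution. The data points below are synthetic, generated from the pooled-fit values A = 1.07 and Fc = 0.35 quoted above, purely to illustrate the procedure:

```python
import math

# Synthetic flow/uptake pairs generated from the quoted pooled-fit values;
# not the actual canine measurements.
A_true, Fc_true = 1.07, 0.35
flows = [0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2]
uptake = [A_true * (1.0 - math.exp(-f / Fc_true)) for f in flows]

best = None
for i in range(1, 200):                      # grid over the flow constant Fc
    Fc = i / 100.0
    basis = [1.0 - math.exp(-f / Fc) for f in flows]
    # For fixed Fc the model is linear in A, so A has a closed-form solution.
    A = sum(b * u for b, u in zip(basis, uptake)) / sum(b * b for b in basis)
    sse = sum((A * b - u) ** 2 for b, u in zip(basis, uptake))
    if best is None or sse < best[0]:
        best = (sse, A, Fc)
sse, A_fit, Fc_fit = best
```

The same one-parameter scan plus linear solve avoids a full two-parameter nonlinear optimizer, at the cost of grid resolution on Fc.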

  15. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  16. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  17. The McDonald exponentiated gamma distribution and its statistical properties

    OpenAIRE

    Al-Babtain, Abdulhakim A; Merovci, Faton; Elbatal, Ibrahim

    2015-01-01

    In this paper, we propose a five-parameter lifetime model called the McDonald exponentiated gamma distribution to extend the beta exponentiated gamma, Kumaraswamy exponentiated gamma and exponentiated gamma distributions, among several other models. We provide a comprehensive mathematical treatment of this distribution. We derive the moment generating function and the rth moment. We discuss estimation of the parameters by maximum likelihood and provide the information matrix. AMS Subject Classificatio...

  18. Flow of viscous fluid along an exponentially stretching curved surface

    Directory of Open Access Journals (Sweden)

    N.F. Okechi

    In this paper, we present the boundary layer analysis of flow induced by a rapidly stretching curved surface with exponential velocity. The governing boundary value problem is reduced into self-similar form using a new similarity transformation. The resulting equations are solved numerically using shooting and Runge-Kutta methods. The numerical results show that the fluid velocity as well as the skin friction coefficient increases with the surface curvature; a similar trend is also observed for the pressure. The dimensionless wall shear stress defined for this problem is greater than that of a linearly stretching curved surface, but becomes comparatively smaller for a surface stretching with a power-law velocity. In addition, the result for the plane surface is a special case of this study when the radius of curvature of the surface is sufficiently large. The numerical investigations presented in terms of the graphs are interpreted with the help of the underlying physics of the fluid flow and the consequences arising from the curved geometry. Keywords: Boundary layer flow, Curved surface, Exponential stretching, Curvature
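The shooting-plus-Runge-Kutta strategy mentioned above can be sketched on a simple stand-in boundary value problem, f'' = -f with f(0) = 0 and f(1) = 1 (not the paper's self-similar equations, which are not given in the abstract): integrate with classical RK4 from a guessed initial slope and adjust the guess by bisection until the far boundary condition is met.

```python
import math

def rk4(f, y, x, h):
    """One classical fourth-order Runge-Kutta step for y' = f(x, y)."""
    def add(u, v, c):
        return [ui + c * vi for ui, vi in zip(u, v)]
    k1 = f(x, y)
    k2 = f(x + h / 2, add(y, k1, h / 2))
    k3 = f(x + h / 2, add(y, k2, h / 2))
    k4 = f(x + h, add(y, k3, h))
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def shoot(slope, n=100):
    """Integrate f'' = -f from x=0 with f(0)=0, f'(0)=slope; return f(1)."""
    y, h = [0.0, slope], 1.0 / n
    for i in range(n):
        y = rk4(lambda x, y: [y[1], -y[0]], y, i * h, h)
    return y[0]

# Bisection on the unknown initial slope so that f(1) = 1.
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
slope = 0.5 * (lo + hi)   # exact answer is 1/sin(1), about 1.1884
```

For the nonlinear self-similar equations of the paper, the structure is identical; only the right-hand side of the ODE system and the far-field boundary condition change.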

  19. An interim report on the Zenith Exponential Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Absalom, R M; Cameron, I R; Kinchin, G H; Sanders, J E; Wilson, D J [Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1959-06-15

    The following memorandum gives an interim account of the exponential experiments with Zenith-type fuel elements being carried out at Winfrith. Results quoted are still subject to revision: however it is hoped that the description of the work at this stage will stimulate discussion and suggestions for further measurements before the experiment is dismantled later in the year. The measurements are being undertaken in order to form some initial understanding of the reactor physics of uranium-235-thorium-graphite systems of the type later to be studied in Zenith. There have been no previous investigations of this type of system in the U.K., though measurements on enriched uranium-graphite systems have been reported from the U.S.A. A practical result of the measurements will be a revision of the estimated critical loadings for Zenith, since the exponential systems studied cover the range of loadings proposed for the first critical assemblies. The theoretical work on these systems includes a two-group analysis being carried on in the Zenith group and a multigroup analysis being made by the H.T.G.C. Technical Assessments Group, including a Monte Carlo study of resonance capture.

  20. DNAzyme Feedback Amplification: Relaying Molecular Recognition to Exponential DNA Amplification.

    Science.gov (United States)

    Liu, Meng; Yin, Qingxin; McConnell, Erin M; Chang, Yangyang; Brennan, John D; Li, Yingfu

    2018-03-26

    Technologies capable of linking DNA amplification to molecular recognition are very desirable for ultrasensitive biosensing applications. We have developed a simple but powerful isothermal DNA amplification method, termed DNAzyme feedback amplification (DFA), that is capable of relaying molecular recognition to exponential DNA amplification. The method incorporates both an RNA-cleaving DNAzyme (RCD) and rolling circle amplification (RCA) carried out by a special DNA polymerase using a circular DNA template. DFA begins with a stimulus-dependent RCA reaction, producing tandemly linked RCDs in long-chain DNA products. These RCDs cleave an RNA-containing DNA sequence to form additional primers that hybridize to the circular DNA molecule, giving rise to DNA assemblies that act as the new inputs for RCA. The RCA reaction and the cleavage event keep on feeding each other autonomously, resulting in exponential growth of repetitive DNA sequences that can be easily detected. This method can be used for the detection of both nucleic acid based targets and non-nucleic acid analytes. In this article, we discuss the conceptual framework of the feedback amplification approach, the essential features of this method as well as remaining challenges and possible solutions. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Comparison of matrix exponential methods for fuel burnup calculations

    International Nuclear Information System (INIS)

    Oh, Hyung Suk; Yang, Won Sik

    1999-01-01

    Series expansion methods to compute the exponential of a matrix have been compared by applying them to fuel depletion calculations. Specifically, Taylor, Padé, Chebyshev, and rational Chebyshev approximations have been investigated by approximating the exponentials of burnup matrices by truncated series of each method with the scaling and squaring algorithm. The accuracy and efficiency of these methods have been tested by performing various numerical tests using one thermal reactor and two fast reactor depletion problems. The results indicate that all four series methods are accurate enough to be used for fuel depletion calculations, although the rational Chebyshev approximation is relatively less accurate. They also show that the rational approximations are more efficient than the polynomial approximations. Considering both computational accuracy and efficiency, the Padé approximation appears to be better than the other methods. Its accuracy is better than the rational Chebyshev approximation, while being comparable to the polynomial approximations. On the other hand, its efficiency is better than the polynomial approximations and is similar to the rational Chebyshev approximation. In particular, for fast reactor depletion calculations, it is faster than the polynomial approximations by a factor of ∼ 1.7. (author). 11 refs., 4 figs., 2 tabs
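As a sketch of one of the compared approaches, the Taylor series combined with the scaling-and-squaring algorithm mentioned above can be implemented in a few lines (matrix size, term count and number of squarings are arbitrary choices here, not the paper's settings):

```python
import math

def matmul(X, Y):
    """Plain dense matrix product for small square matrices (lists of rows)."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm_taylor_ss(A, terms=12, squarings=8):
    """Matrix exponential via a truncated Taylor series plus scaling and
    squaring: exp(A) = (exp(A / 2**s)) ** (2**s), which keeps the scaled
    matrix small enough for the short Taylor sum to be accurate."""
    n = len(A)
    s = 2.0 ** squarings
    B = [[a / s for a in row] for row in A]
    E = [[float(i == j) for j in range(n)] for i in range(n)]   # running sum
    T = [[float(i == j) for j in range(n)] for i in range(n)]   # current term
    for k in range(1, terms + 1):
        T = matmul(T, B)
        T = [[t / k for t in row] for row in T]
        E = [[e + t for e, t in zip(er, tr)] for er, tr in zip(E, T)]
    for _ in range(squarings):
        E = matmul(E, E)
    return E

# Sanity check on a diagonal matrix, where exp acts entrywise.
E = expm_taylor_ss([[1.0, 0.0], [0.0, -2.0]])
```

A Padé variant replaces the truncated Taylor sum by a rational approximant of the scaled matrix, which is the scheme the paper finds most favorable overall.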

  2. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    Science.gov (United States)

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consist only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  3. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  4. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between errors and violations, and between active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident-pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated

  5. Electronic prescribing reduces prescribing error in public hospitals.

    Science.gov (United States)

    Shawahna, Ramzi; Rahman, Nisar-Ur; Ahmad, Mahmood; Debray, Marcel; Yliperttula, Marjo; Declèves, Xavier

    2011-11-01

    To examine the incidence of prescribing errors in a main public hospital in Pakistan and to assess the impact of introducing an electronic prescribing system on the reduction of their incidence. Medication errors are persistent in today's healthcare system, and the impact of electronic prescribing on reducing errors has not been tested in the developing world. Prospective review of medication and discharge medication charts before and after the introduction of an electronic inpatient record and prescribing system. Inpatient records (n = 3300) and 1100 discharge medication sheets were reviewed for prescribing errors before and after the installation of the electronic prescribing system in 11 wards. Medications (13,328 and 14,064) were prescribed for inpatients, among which 3008 and 1147 prescribing errors were identified, giving overall error rates of 22.6% and 8.2% during paper-based and electronic prescribing, respectively. Medications (2480 and 2790) were prescribed for discharge patients, among which 418 and 123 errors were detected, giving overall error rates of 16.9% and 4.4% during paper-based and electronic prescribing, respectively. Electronic prescribing has a significant effect on the reduction of prescribing errors. Prescribing errors are commonplace in Pakistani public hospitals. The study evaluated the impact of introducing electronic inpatient records and electronic prescribing on the reduction of prescribing errors in a public hospital in Pakistan. © 2011 Blackwell Publishing Ltd.

  6. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion, given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion. Flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
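    The overdispersion being tested for is easy to demonstrate numerically. The sketch below is a simplified stand-in for the regression-based score test mentioned in the record: it uses the Pearson dispersion statistic (Pearson chi-square over residual degrees of freedom), which is near 1 for Poisson data and well above 1 for a gamma-Poisson (negative binomial) mixture. All sample sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def pearson_dispersion(y, mu):
    """Pearson chi-square / residual df: approximately 1 under a correctly
    specified Poisson model, substantially > 1 under overdispersion."""
    n = y.size
    return float(np.sum((y - mu) ** 2 / mu) / (n - 1))

n = 5000
mu_true = 4.0

# Equidispersed counts: Poisson with fixed rate.
poisson = rng.poisson(mu_true, n)

# Overdispersed counts: gamma-mixed Poisson (= negative binomial), with
# variance mu + mu**2 / shape > mu.
overdispersed = rng.poisson(rng.gamma(shape=2.0, scale=mu_true / 2.0, size=n))

d_pois = pearson_dispersion(poisson, poisson.mean())
d_over = pearson_dispersion(overdispersed, overdispersed.mean())
```

In a real analysis one would compute the same diagnostic from the fitted piecewise exponential model's Pearson residuals, then switch to quasi-likelihood or robust standard errors if the dispersion is materially above 1, as the record recommends.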

  7. Forecasting Inflow and Outflow of Money Currency in East Java Using a Hybrid Exponential Smoothing and Calendar Variation Model

    Science.gov (United States)

    Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut

    2018-03-01

    Money currency availability in Bank Indonesia can be examined through the inflow and outflow of money currency. The objective of this research is to forecast the inflow and outflow of money currency in each Representative Office (RO) of BI in East Java by using a hybrid exponential smoothing model, based on the state space approach, combined with a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research. The first concerns the hybrid model applied to simulated data that contain trend, seasonal and calendar-variation patterns. The second concerns the application of the hybrid model to forecasting the inflow and outflow of money currency in each RO of BI in East Java. The first study indicates that the exponential smoothing model cannot capture the calendar-variation pattern: it yields RMSE values of about 10 times the standard deviation of the error. The second study indicates that the hybrid model can capture the trend, seasonal and calendar-variation patterns, yielding RMSE values approaching the standard deviation of the error. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of money currency in Surabaya, Malang and Jember, and the outflow of money currency in Surabaya and Kediri. Conversely, the time series regression model performs better for three variables: the outflow of money currency in Malang and Jember, and the inflow of money currency in Kediri.
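    The hybrid idea (remove a known calendar effect deterministically, smooth what remains) can be sketched in a few lines. This is a hypothetical toy, not the record's state-space ETS model: the calendar effect is a made-up spike every 12th observation, and simple exponential smoothing stands in for the full smoothing family.

```python
import numpy as np

def ses(y, alpha=0.3):
    """Simple exponential smoothing one-step-ahead forecasts:
    f[t+1] = alpha * y[t] + (1 - alpha) * f[t], with f[0] = y[0]."""
    f = np.empty(y.size + 1)
    f[0] = y[0]
    for t in range(y.size):
        f[t + 1] = alpha * y[t] + (1 - alpha) * f[t]
    return f

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

rng = np.random.default_rng(1)
n = 240
t = np.arange(n)
# Synthetic series: trend + a calendar-variation spike every 12th point,
# standing in for the holiday effect described in the record.
spike = np.where(t % 12 == 11, 50.0, 0.0)
y = 100 + 0.5 * t + spike + rng.normal(0, 2.0, n)

# Hybrid: regress out the known calendar dummy first, smooth the remainder,
# then add the calendar effect back into the forecast.
f_hybrid = ses(y - spike, alpha=0.3)
hybrid_rmse = rmse(y[1:], f_hybrid[1:-1] + spike[1:])

# Plain smoothing applied directly to the raw series.
plain_rmse = rmse(y[1:], ses(y, alpha=0.3)[1:-1])
```

Consistent with the record's simulation finding, the plain smoother is blindsided by every calendar spike, while the hybrid's errors stay near the noise level.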

  8. Textbook Error: Short Circuiting on Electrochemical Cell

    Science.gov (United States)

    Bonicamp, Judith M.; Clark, Roy W.

    2007-01-01

    Short circuiting an electrochemical cell is an unreported but persistent error in electrochemistry textbooks. It is suggested that diagrams depicting a cell delivering usable current to a load be postponed, that the theory of open-circuit galvanic cells be explained, that the voltages be calculated from the tables of standard reduction potentials, and…

  9. Help prevent hospital errors

    Science.gov (United States)

    ... this page: //medlineplus.gov/ency/patientinstructions/000618.htm Help prevent hospital errors ... in the hospital. If You Are Having Surgery, Help Keep Yourself Safe Go to a hospital you ...

  10. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  11. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables

  12. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  13. Errors in energy bills

    International Nuclear Information System (INIS)

    Kop, L.

    2001-01-01

    On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills for her customers. It appeared that in the year 2000 many small, but also big errors were discovered in the bills of 42 businesses

  14. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG), as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  15. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  16. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  17. Three-Step Predictor-Corrector of Exponential Fitting Method for Nonlinear Schroedinger Equations

    International Nuclear Information System (INIS)

    Tang Chen; Zhang Fang; Yan Haiqing; Luo Tao; Chen Zhanqing

    2005-01-01

    We develop the three-step explicit and implicit schemes of exponential fitting methods. We use the three-step explicit exponential fitting scheme to predict an approximation, then use the three-step implicit exponential fitting scheme to correct this prediction. This combination is called the three-step predictor-corrector of exponential fitting method. The three-step predictor-corrector of exponential fitting method is applied to numerically compute the coupled nonlinear Schroedinger equation and the nonlinear Schroedinger equation with varying coefficients. The numerical results show that the scheme is highly accurate.

  18. Income inequality in Romania: The exponential-Pareto distribution

    Science.gov (United States)

    Oancea, Bogdan; Andrei, Tudorel; Pirjol, Dan

    2017-03-01

    We present a study of the distribution of gross personal income and of income inequality in Romania, using individual tax income data and both non-parametric and parametric methods. Comparing with official results based on household budget surveys (the Family Budgets Survey and the EU-SILC data), we find that the latter underestimate the income share of the high-income region, and the overall income inequality. A parametric study shows that the income distribution is well described by an exponential distribution in the low and middle income region, and by a Pareto distribution in the high income region with Pareto coefficient α = 2.53. We note an anomaly in the distribution in the low incomes region (∼9,250 RON), and present a model which explains it in terms of partial income reporting.
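    The two-regime fit the record describes (exponential bulk, Pareto tail) can be reproduced on synthetic data with the standard maximum-likelihood estimators for each regime. Everything below is illustrative: the scales, sample sizes, and crossover point are made up; only the Pareto index α = 2.53 is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

T_bulk = 20.0   # hypothetical exponential "temperature" of the low/middle region
x_min = 60.0    # hypothetical crossover into the Pareto tail
alpha = 2.53    # Pareto index reported in the record

# Synthetic incomes, one sample per regime.
bulk = rng.exponential(scale=T_bulk, size=90_000)
tail = x_min * (1.0 + rng.pareto(alpha, size=10_000))   # Pareto on [x_min, inf)

# Maximum-likelihood estimators for each regime:
T_hat = bulk.mean()                                     # exponential scale = sample mean
alpha_hat = tail.size / np.log(tail / x_min).sum()      # Hill estimator of the Pareto index
```

On real tax data the crossover itself must be located first (e.g. from where the log-survival function changes from linear in x to linear in log x); here it is fixed by construction.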

  19. Fitting and Analyzing Randomly Censored Geometric Extreme Exponential Distribution

    Directory of Open Access Journals (Sweden)

    Muhammad Yameen Danish

    2016-06-01

    The paper presents the Bayesian analysis of the two-parameter geometric extreme exponential distribution with randomly censored data. A continuous conjugate prior for the scale and shape parameters of the model does not exist; for computing the Bayes estimates, it is assumed that the scale and shape parameters have independent gamma priors. It is seen that closed-form expressions for the Bayes estimators are not possible; we suggest Lindley's approximation to obtain the Bayes estimates. However, Bayesian credible intervals cannot be constructed with this method, so we propose Gibbs sampling to obtain the Bayes estimates and also to construct the Bayesian credible intervals. A Monte Carlo simulation study is carried out to observe the behavior of the Bayes estimators and to compare them with the maximum likelihood estimators. One real data analysis is performed for illustration.

  20. Generalized variational formulations for extended exponentially fractional integral

    Directory of Open Access Journals (Sweden)

    Zuo-Jun Wang

    2016-01-01

    Recently, fractional variational principles and their applications have received special attention. For fractional variational problems based on different types of fractional integral and derivative operators, the corresponding fractional Lagrangian and Hamiltonian formulations and the relevant Euler-Lagrange-type equations have already been presented by scholars. The formulations of fractional variational principles can still be developed further. We make an attempt to generalize the formulations for fractional variational principles. As a result we obtain generalized and complementary fractional variational formulations, for the extended exponentially fractional integral as an example, and the corresponding Euler-Lagrange equations. Two illustrative examples are presented. It is observed that the formulations are in exact agreement with the Euler-Lagrange equations.

  1. Hausdorff dimension of exponential parameter rays and their endpoints

    International Nuclear Information System (INIS)

    Bailesteanu, Mihai; Balan, Horia Vlad; Schleicher, Dierk

    2008-01-01

    We investigate the set I of parameters κ for which the singular value of the map z ↦ e^z + κ converges to ∞. The set I consists of uncountably many parameter rays, plus landing points of some of these rays (Förster et al 2008 Proc. Am. Math. Soc. 136, in press (Preprint math.DS/0311427)). We show that the parameter rays have Hausdorff dimension 1, which implies (Qiu 1994 Acta Math. Sin. (N.S.) 10 362-8) that the ray endpoints in I alone have dimension 2. Analogous results were known for the dynamical planes of exponential maps (Karpińska 1999 C. R. Acad. Sci. Paris Sér. I: Math. 328 1039-44; Schleicher and Zimmer 2003 J. Lond. Math. Soc. 67 380-400); our result shows that this also holds in parameter space

  2. Winning Concurrent Reachability Games Requires Doubly-Exponential Patience

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Arnsfelt; Koucký, Michal; Miltersen, Peter Bro

    2009-01-01

    We exhibit a deterministic concurrent reachability game PURGATORY_n with n non-terminal positions and a binary choice for both players in every position, so that any positional strategy for Player 1 achieving the value of the game within a given ε ... that are less than (ε^2/(1 - ε))^(2^(n-2)). Also, even to achieve the value within say 1 - 2^(-n/2), doubly exponentially small behavior probabilities in the number of positions must be used. This behavior is close to worst case: we show that for any such game and 0 ... with all non-zero behavior probabilities being at least ε^(2^(O(n))). As a corollary to our results, we conclude that any (deterministic or nondeterministic) algorithm that, given a concurrent reachability game, explicitly manipulates ε-optimal strategies for Player 1 represented in several standard...

  3. An Exponentially Weighted Moving Average Control Chart for Bernoulli Data

    DEFF Research Database (Denmark)

    Spliid, Henrik

    2010-01-01

    We consider a production process in which units are produced in a sequential manner. The units can, for example, be manufactured items or services provided to clients. Each unit produced can be a failure with probability p or a success (non-failure) with probability (1-p). A novel exponentially weighted moving average (EWMA) control chart intended for surveillance of the probability of failure, p, is described. The chart is based on counting the number of non-failures produced between failures, in combination with a variance-stabilizing transformation. The distribution function of the transformation is given and its limit for small values of p is derived. Control of high-yield processes is discussed and the chart is shown to perform very well in comparison with both the most common alternative EWMA chart and the CUSUM chart. The construction and the use of the proposed EWMA chart ...
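    The monitoring idea is compact enough to sketch: the gaps between failures are geometric with mean (1-p)/p, so an EWMA of (transformed) gap counts tracks p. This is a hypothetical numpy toy, not the chart from the record: its specific variance-stabilizing transformation is not reproduced here, log(1 + count) stands in for it, and the failure rates and smoothing constant are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def ewma(x, lam=0.1, z0=0.0):
    """Exponentially weighted moving average: z[t] = lam*x[t] + (1-lam)*z[t-1]."""
    z = np.empty_like(x, dtype=float)
    prev = z0
    for i, xi in enumerate(x):
        prev = lam * xi + (1 - lam) * prev
        z[i] = prev
    return z

# Counts of successes between consecutive failures (geometric - 1),
# first in control (p = 0.01), then after a shift to p = 0.05.
p_in, p_out = 0.01, 0.05
counts_in = rng.geometric(p_in, size=500) - 1
counts_out = rng.geometric(p_out, size=500) - 1
counts = np.concatenate([counts_in, counts_out])

# Chart the transformed gaps; after the shift, gaps shrink and the
# EWMA drifts downward, signalling the higher failure probability.
z = ewma(np.log1p(counts), lam=0.1)
```

In a real chart one would add control limits from the (asymptotic) variance of the transformed statistic, which is exactly what the record's variance-stabilizing transformation is designed to make simple.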

  4. Exponential Lower Bounds for the PPSZ k-SAT Algorithm

    DEFF Research Database (Denmark)

    Chen, Shiteng; Scheder, Dominik Alban; Talebanfard, Navid

    2013-01-01

    In 1998, Paturi, Pudlák, Saks, and Zane presented PPSZ, an elegant randomized algorithm for k-SAT. Fourteen years on, this algorithm is still the fastest known worst-case algorithm. They proved that its expected running time on k-CNF formulas with n variables is at most 2^((1-ε_k)n), where ε_k ∈ Ω(1/k). So far, no exponential lower bounds at all have been known. In this paper, we construct hard instances for PPSZ. That is, we construct satisfiable k-CNF formulas over n variables on which the expected running time is at least 2^((1-ε_k)n), for ε_k ∈ O(log^2 k / k).

  5. Vacuum heating evaluation for plasmas of exponentially decreasing density profile

    International Nuclear Information System (INIS)

    Pestehe, S.J.; Mohammadnejad, M.

    2008-01-01

    Ultra-short pulse lasers have opened a regime of laser-plasma interaction where plasmas have scale lengths shorter than the laser wavelength, allowing the possibility of generating near-solid-density plasmas. The interaction of high-intensity laser beams with sharply bounded, high-density, small-scale-length plasmas is considered. Absorption of the laser energy associated with the mechanism of dragging electrons out of the plasma into the vacuum and sending them back into the plasma by the electric field component along the density gradient, the so-called vacuum heating, is studied. An exponentially decreasing electron density profile is assumed. The vector potential of the electromagnetic field propagating through the plasma is calculated and the behaviour of the electric and magnetic components of the electromagnetic field is studied. The fraction of laser power absorbed in this process is calculated and plotted versus the laser beam incidence angle, illumination energy, and the plasma scale length

  6. Exponential random graph models for networks with community structure.

    Science.gov (United States)

    Fronczak, Piotr; Fronczak, Agata; Bujok, Maksymilian

    2013-09-01

    Although community structure is an important characteristic of real-world networks, most traditional network models fail to reproduce this feature. The models are therefore useless as benchmark graphs for testing community detection algorithms, and inadequate for predicting various properties of real networks. With this paper we intend to fill the gap. We develop an exponential random graph approach to networks with community structure. To this end we build mainly upon the idea of blockmodels. We consider both the classical blockmodel and its degree-corrected counterpart and study many of their properties analytically. We show that in the degree-corrected blockmodel, node degrees display an interesting scaling property, which is reminiscent of what is observed in real-world fractal networks. A short description of Monte Carlo simulations of the models is also given in the hope of being useful to others working in the field.
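    The classical blockmodel mentioned in the record is the simplest exponential-family random graph with community structure: edge probabilities depend only on the block labels of the endpoints. A minimal sampling sketch (two equal blocks; all probabilities are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stochastic blockmodel: edges within a block are more likely than between.
n, p_in, p_out = 200, 0.10, 0.01
z = np.repeat([0, 1], n // 2)                       # block labels
P = np.where(z[:, None] == z[None, :], p_in, p_out)  # edge-probability matrix

# Sample a simple undirected graph: upper triangle, then symmetrize.
U = rng.random((n, n))
A = np.triu((U < P).astype(int), k=1)
A = A + A.T

# Empirical edge densities recover the planted structure.
within = A[np.ix_(z == 0, z == 0)].sum() / (100 * 99)
between = A[np.ix_(z == 0, z == 1)].sum() / (100 * 100)
```

The degree-corrected variant studied in the record additionally multiplies each entry of P by per-node propensity factors, which is what produces the degree-scaling behaviour the abstract highlights.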

  7. Galilean invariance in the exponential model of atomic collisions

    Energy Technology Data Exchange (ETDEWEB)

    del Pozo, A.; Riera, A.; Yáñez, M.

    1986-11-01

    Using the X^(n+)(1s^2) + He^(2+) colliding systems as specific examples, we study the origin dependence of results in the application of the two-state exponential model, and we show the relevance of polarization effects in that study. Our analysis shows that polarization effects of the He^+(1s) orbital, due to interaction with the X^((n+1)+) ion in the exit channel, yield a very small contribution to the energy difference and render the dynamical coupling so strongly origin-dependent that it invalidates the basic premises of the model. Further study, incorporating translation factors in the formalism, is needed.

  8. arXiv Exponentially Light Dark Matter from Coannihilation

    CERN Document Server

    D'Agnolo, Raffaele Tito; Ruderman, Joshua T.; Wang, Po-Jen

    Dark matter may be a thermal relic whose abundance is set by mutual annihilations among multiple species. Traditionally, this coannihilation scenario has been applied to weak scale dark matter that is highly degenerate with other states. We show that coannihilation among states with split masses points to dark matter that is exponentially lighter than the weak scale, down to the keV scale. We highlight the regime where dark matter does not participate in the annihilations that dilute its number density. In this "sterile coannihilation" limit, the dark matter relic density is independent of its couplings, implying a broad parameter space of thermal relic targets for future experiments. Light dark matter from coannihilation evades stringent bounds from the cosmic microwave background, but will be tested by future direct detection, fixed target, and long-lived particle experiments.

  9. Closed-Form Expressions for the Matrix Exponential

    Directory of Open Access Journals (Sweden)

    F. De Zela

    2014-04-01

    We discuss a method to obtain closed-form expressions for f(A), where f is an analytic function and A a square, diagonalizable matrix. The method exploits the Cayley-Hamilton theorem and has been previously reported using tools that are perhaps not sufficiently appealing to physicists. Here, we derive the results on which the method is based by using tools most commonly employed by physicists. We show the advantages of the method in comparison with standard approaches, especially when dealing with the exponential of low-dimensional matrices. In contrast to other approaches that require, e.g., solving differential equations, the present method only requires the construction of the inverse of the Vandermonde matrix. We show the advantages of the method by applying it to different cases, mostly restricting the calculational effort to the handling of two-by-two matrices.
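    The Vandermonde construction the abstract describes can be sketched directly: by Cayley-Hamilton, f(A) = Σ_k c_k A^k with the polynomial degree below the matrix dimension, and for distinct eigenvalues λ_i the coefficients solve the Vandermonde system Σ_k c_k λ_i^k = f(λ_i). A minimal sketch for f = exp, assuming A is diagonalizable with distinct eigenvalues (the degenerate case needs confluent Vandermonde rows, not handled here):

```python
import numpy as np

def expm_vandermonde(A):
    """exp(A) as a matrix polynomial, coefficients from a Vandermonde solve.

    Requires distinct eigenvalues so the Vandermonde matrix is invertible.
    """
    lam = np.linalg.eigvals(A)
    n = A.shape[0]
    V = np.vander(lam, n, increasing=True)   # V[i, k] = lam_i ** k
    c = np.linalg.solve(V, np.exp(lam))      # f(lam_i) = sum_k c_k lam_i**k
    E = np.zeros_like(A, dtype=complex)
    P = np.eye(n, dtype=complex)
    for ck in c:                             # Horner-free accumulation of sum c_k A**k
        E = E + ck * P
        P = P @ A
    return E.real if np.allclose(E.imag, 0) else E

# Two-by-two example: A = [[0, 1], [1, 0]] has eigenvalues +/-1, and
# exp(A) = cosh(1) I + sinh(1) A.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
E = expm_vandermonde(A)
```

For the 2×2 case the Vandermonde solve reduces to two equations in (c_0, c_1), which is exactly the kind of hand calculation the paper advocates.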

  10. Rotating Dilaton Black Strings Coupled to Exponential Nonlinear Electrodynamics

    Directory of Open Access Journals (Sweden)

    Ahmad Sheykhi

    2014-01-01

    We construct a new class of charged rotating black string solutions coupled to dilaton and exponential nonlinear electrodynamic fields, with cylindrical or toroidal horizons, in the presence of a Liouville-type potential for the dilaton field. Due to the presence of the dilaton field, the asymptotic behaviors of these solutions are neither flat nor (A)dS. We analyze the physical properties of the solutions in detail. We compute the conserved and thermodynamic quantities of the solutions and verify the first law of thermodynamics on the black string horizon. When the nonlinear parameter β² goes to infinity, our results reduce to those of black string solutions in Einstein-Maxwell-dilaton gravity.

  11. Academia-industry collaboration feeds exponential growth curve

    CERN Document Server

    Jones Bey Hassaun, A

    2004-01-01

    The use of silicon strip detectors in high-energy particle tracking is discussed. The functional strength of silicon for high-energy particle physics, as well as astrophysics, lies in the ability to detect the passage of charged particles with micron-scale spatial resolution. In addition to vertex detection, silicon strip detectors also provide full tracking detection, including momentum determination of particles in the magnetic field. Even if silicon detectors for basic science applications do not continue to grow larger, the technology is likely to follow a healthy exponential growth curve in terrestrial commercial applications, as researchers continue to adapt silicon detector technology for low-dose medical x-ray imaging. (Edited abstract)

  12. A simple derivation of r* for curved exponential families

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1997-01-01

    For curved exponential families we consider modified likelihood ratio statistics of the form r* = r + log(u/r)/r, where r is the signed root of the likelihood ratio statistic. We are testing a one-dimensional hypothesis, but in order to specify approximate ancillary statistics we consider the test as one in a series of tests. By requiring asymptotic independence and asymptotic normality of the test statistics in a large deviation region, a particular choice of the statistic u suggests itself. The derivation of this result is quite simple, involving only a standard saddlepoint approximation followed by a transformation. We give explicit formulas for the statistic u, and include a discussion of the case where some coordinates of the underlying variable are lattice.

  13. Hyponormal quantization of planar domains exponential transform in dimension two

    CERN Document Server

    Gustafsson, Björn

    2017-01-01

    This book exploits the classification of a class of linear bounded operators with rank-one self-commutators in terms of their spectral parameter, known as the principal function. The resulting dictionary between two dimensional planar shapes with a degree of shade and Hilbert space operators turns out to be illuminating and beneficial for both sides. An exponential transform, essentially a Riesz potential at critical exponent, is at the heart of this novel framework; its best rational approximants unveil a new class of complex orthogonal polynomials whose asymptotic distribution of zeros is thoroughly studied in the text. Connections with areas of potential theory, approximation theory in the complex domain and fluid mechanics are established. The text is addressed, with specific aims, at experts and beginners in a wide range of areas of current interest: potential theory, numerical linear algebra, operator theory, inverse problems, image and signal processing, approximation theory, mathematical physics.

  14. Exact error estimation for solutions of nuclide chain equations

    International Nuclear Information System (INIS)

    Tachihara, Hidekazu; Sekimoto, Hiroshi

    1999-01-01

    The exact solution of nuclide chain equations to an arbitrary number of significant figures is obtained for a linear chain by employing the Bateman method in multiple-precision arithmetic. Exact error estimation of the major calculation methods for a nuclide chain equation is done by using this exact solution as a standard. The Bateman, finite difference, Runge-Kutta and matrix exponential methods are investigated. The present study confirms the following. The original Bateman method has very low accuracy in some cases, because of large-scale cancellations. The revised Bateman method by Siewers reduces the occurrence of cancellations and thereby shows high accuracy. In the time-difference methods, such as the finite difference and Runge-Kutta methods, the solutions are mainly affected by truncation errors in the early decay time, and afterward by round-off errors. Even though a variable time mesh is employed to suppress the accumulation of round-off errors, it appears to be impractical. Judging from these estimations, the matrix exponential method is the best among all the methods except the Bateman method, whose calculation process for a linear chain is not identical with that for a general one. (author)
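    The Bateman solution used as the reference standard in this record has a short closed form for a linear chain with distinct decay constants. Below is a hypothetical sketch that evaluates it for the last member of a three-nuclide chain and cross-checks against a time-stepping integration of dN/dt = A N (standing in for the matrix-exponential route; the decay constants are illustrative).

```python
import numpy as np

def bateman(lams, N0, t):
    """N_n(t) for the last nuclide of a chain 1 -> 2 -> ... -> n with
    distinct decay constants lams, N_1(0) = N0 and the rest zero:
    N_n(t) = N0 * (prod_{i<n} lam_i) * sum_i exp(-lam_i t) / prod_{j!=i}(lam_j - lam_i)
    """
    lams = np.asarray(lams, dtype=float)
    total = 0.0
    for i in range(lams.size):
        denom = np.prod(np.delete(lams, i) - lams[i])
        total += np.exp(-lams[i] * t) / denom
    return N0 * np.prod(lams[:-1]) * total

# Decay matrix for the chain (lower triangular, eigenvalues -lam_i).
lams = [3.0, 1.0, 0.4]
A = np.array([[-3.0,  0.0,  0.0],
              [ 3.0, -1.0,  0.0],
              [ 0.0,  1.0, -0.4]])

# Fine-step RK4 integration of dN/dt = A N up to t = 2.
N = np.array([1.0, 0.0, 0.0])
steps, dt = 2000, 1e-3
for _ in range(steps):
    k1 = A @ N
    k2 = A @ (N + 0.5 * dt * k1)
    k3 = A @ (N + 0.5 * dt * k2)
    k4 = A @ (N + dt * k3)
    N = N + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

err3 = abs(N[2] - bateman(lams, 1.0, steps * dt))
```

The record's point about cancellation is visible in the formula: the summands have alternating-sign denominators, so for nearly equal decay constants the terms nearly cancel, which is why the original Bateman method loses accuracy in double precision and benefits from multiple-precision arithmetic.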

  15. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human and that people make mistakes all the time. However, what counts is that people learn from their mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reasons why they are made, improve and move on. The significance of studying errors is described by Corder as follows: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways we teachers can benefit from mistakes, giving proper feedback to help students improve.

  16. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high-resolution (single-byte) measurement of the error burst and good-data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard-decision (i.e., 1-bit error flags) and soft-decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  17. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that produce fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  18. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defence of libertarianism against two accusations according to which it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis for the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.

  19. Libertarismo & Error Categorial

    OpenAIRE

    PATARROYO G, CARLOS G

    2009-01-01

    This article offers a defence of libertarianism against two accusations according to which it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks in physicalist indeterminism the basis of the possibi...

  20. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for the development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software that is logically error-free and that, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  1. Additivity of statistical moments in the exponentially modified Gaussian model of chromatography

    International Nuclear Information System (INIS)

    Howerton, Samuel B.; Lee Chomin; McGuffin, Victoria L.

    2002-01-01

    A homologous series of saturated fatty acids ranging from C10 to C22 was separated by reversed-phase capillary liquid chromatography. The resultant zone profiles were found to be fit best by an exponentially modified Gaussian (EMG) function. To compare the EMG function and statistical moments for the analysis of the experimental zone profiles, a series of simulated profiles was generated by using fixed values for retention time and different values for the symmetrical (σ) and asymmetrical (τ) contributions to the variance. The simulated profiles were modified with respect to the integration limits, the number of points, and the signal-to-noise ratio. After modification, each profile was analyzed by using statistical moments and an iteratively fit EMG equation. These data indicate that the statistical moment method is much more susceptible to error when the degree of asymmetry is large, when the integration limits are inappropriately chosen, when the number of points is small, and when the signal-to-noise ratio is small. The experimental zone profiles were then analyzed by using the statistical moment and EMG methods. Although care was taken to minimize the sources of error discussed above, significant differences were found between the two methods. The differences in the second moment suggest that the symmetrical and asymmetrical contributions to broadening in the experimental zone profiles are not independent. As a consequence, the second moment is not equal to the sum of σ² and τ², as is commonly assumed. This observation has important implications for the elucidation of thermodynamic and kinetic information from chromatographic zone profiles
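
The moment-additivity assumption under test can be illustrated with a small Monte Carlo sketch (not the authors' code): for an ideal EMG profile, i.e. a Gaussian convolved with a one-sided exponential, the second central moment is exactly σ² + τ², and this is the independence that the experimental profiles were found to violate.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, tau = 1.0, 2.0        # hypothetical symmetric (σ) and asymmetric (τ) widths
t_r = 10.0                   # hypothetical Gaussian retention time

# An ideal EMG peak is a Gaussian convolved with an exponential decay, so an
# EMG-distributed arrival time is a normal draw plus an exponential draw.
samples = t_r + rng.normal(0.0, sigma, 200_000) + rng.exponential(tau, 200_000)

m1 = samples.mean()                    # first moment: t_r + tau = 12
m2 = ((samples - m1) ** 2).mean()      # second central moment: sigma^2 + tau^2 = 5
```

When σ and τ are truly independent contributions, the simulated second moment converges to σ² + τ² = 5; a systematic departure from this sum, as reported above, signals coupled broadening mechanisms.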

  2. Defining near misses : towards a sharpened definition based on empirical data about error handling processes

    NARCIS (Netherlands)

    Kessels-Habraken, M.M.P.; Schaaf, van der T.W.; Jonge, de J.; Rutte, C.G.

    2010-01-01

    Medical errors in health care still occur frequently. Unfortunately, errors cannot be completely prevented and 100% safety can never be achieved. Therefore, in addition to error reduction strategies, health care organisations could also implement strategies that promote timely error detection and

  3. EXPALS, Least Square Fit of Linear Combination of Exponential Decay Function

    International Nuclear Information System (INIS)

    Douglas Gardner, C.

    1980-01-01

    1 - Description of problem or function: This program fits by least squares a function which is a linear combination of real exponential decay functions. The function is y(k) = Σj a(j)·exp(-λ(j)·k). Values of the independent variable (k) and the dependent variable y(k) are specified as input data. Weights may be specified as input information or set by the program (w(k) = 1/y(k)). 2 - Method of solution: The Prony-Householder iteration method is used. For unequally-spaced data, a number of interpolation options are provided. This revision includes an option to call a differential correction subroutine REFINE to improve the approximation to unequally-spaced data when equal-interval interpolation is faulty. If convergence is achieved, the probable errors in the computed parameters are calculated also. 3 - Restrictions on the complexity of the problem: Generally, it is desirable to have at least 10n observations, where n equals the number of terms, and to input k+n significant figures if k significant figures are expected
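
The Prony step of such a fit can be sketched as follows, assuming equally spaced, noise-free samples. This is a minimal illustration of the idea (linear prediction, polynomial roots, then a Vandermonde solve for the amplitudes), not the EXPALS code, and the test signal is synthetic.

```python
import numpy as np

def prony_fit(y, p):
    """Recover p real exponential decays a_j * exp(-lam_j * k) from equally
    spaced samples y[0], y[1], ... (noise-free sketch of the Prony method)."""
    n = len(y)
    # Step 1: linear-prediction coefficients; each row is [y[k-1], ..., y[k-p]]
    A = np.array([y[k - 1::-1][:p] for k in range(p, n)])
    c = np.linalg.lstsq(A, -y[p:n], rcond=None)[0]
    # Step 2: roots z_j of z^p + c_1 z^(p-1) + ... + c_p give lam_j = -ln(z_j)
    z = np.roots(np.concatenate(([1.0], c)))
    lam = -np.log(z.real)
    # Step 3: amplitudes from a Vandermonde least-squares solve
    V = np.array([z.real ** k for k in range(n)])
    a = np.linalg.lstsq(V, y, rcond=None)[0]
    return np.sort(lam), a

k = np.arange(25, dtype=float)
y = 2.0 * np.exp(-0.5 * k) + 1.0 * np.exp(-0.1 * k)   # synthetic two-term signal
lam, a = prony_fit(y, 2)
```

On noise-free data the decay constants 0.1 and 0.5 and amplitudes 1 and 2 are recovered to machine precision; with real data, iterative refinement (as in EXPALS) is needed.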

  4. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    Science.gov (United States)

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the error amount modelled for CO, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling
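
The qualitative contrast between the two error types can be shown in a toy setting. The study above uses Poisson time-series models; the sketch below uses plain linear regression only to illustrate the textbook behaviour that classical error attenuates a slope toward the null while Berkson error leaves a linear slope unbiased (all parameter values are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 100_000, 1.0
sd_x, sd_u = 1.0, 1.0                      # equal true-exposure and error spread

# Classical error: we regress on w = x + u, a noisy measurement of the truth x.
x = rng.normal(0, sd_x, n)
y = beta * x + rng.normal(0, 0.5, n)
w = x + rng.normal(0, sd_u, n)
slope_classical = np.polyfit(w, y, 1)[0]   # attenuated by sd_x^2/(sd_x^2+sd_u^2)

# Berkson error: the true exposure x scatters around the assigned value w.
w2 = rng.normal(0, sd_x, n)
x2 = w2 + rng.normal(0, sd_u, n)
y2 = beta * x2 + rng.normal(0, 0.5, n)
slope_berkson = np.polyfit(w2, y2, 1)[0]   # unbiased for beta in a linear model
```

With equal exposure and error variances the classical slope converges to 0.5 (half the truth) while the Berkson slope stays near 1.0; in the nonlinear Poisson setting of the study, Berkson error can additionally bias per-unit estimates, as the abstract reports.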

  5. Stretched exponential relaxation in molecular and electronic glasses

    Science.gov (United States)

    Phillips, J. C.

    1996-09-01

    Stretched exponential relaxation, exp[-(t/τ)^β], fits many relaxation processes in disordered and quenched electronic and molecular systems, but it is widely believed that this function has no microscopic basis, especially in the case of molecular relaxation. For electronic relaxation the appearance of the stretched exponential is often described in the context of dispersive transport, where β is treated as an adjustable parameter, but in almost all cases it is generally assumed that no microscopic meaning can be assigned to β even at T_g, a glass transition temperature. We show that for molecular relaxation β(T_g) can be understood, providing that one separates extrinsic and intrinsic effects, and that the intrinsic effects are dominated by two magic numbers, β_SR = 3/5 for short-range forces, and β_K = 3/7 for long-range Coulomb forces, as originally observed by Kohlrausch for the decay of residual charge on a Leyden jar. Our mathematical model treats relaxation kinetics using the Lifshitz-Kac-Luttinger diffusion-to-traps depletion model in a configuration space of effective dimensionality, the latter being determined using axiomatic set theory and Phillips-Thorpe constraint theory. The experiments discussed include ns neutron scattering experiments, particularly those based on neutron spin echoes which measure S(Q,t) directly, and the traditional linear response measurements which span the range from μs to s, as collected and analysed phenomenologically by Angell, Ngai, Böhmer and others. The electronic materials discussed include a-Si:H, granular C60, semiconductor nanocrystallites, charge density waves in TaS3, spin glasses, and vortex glasses in high-temperature superconductors. The molecular materials discussed include polymers, network glasses, electrolytes and alcohols, Van der Waals supercooled liquids and glasses, orientational glasses, water, fused salts, and heme proteins.

  6. Stretched exponential relaxation in molecular and electronic glasses

    International Nuclear Information System (INIS)

    Phillips, J.C.

    1996-01-01

    Stretched exponential relaxation, exp[-(t/τ)^β], fits many relaxation processes in disordered and quenched electronic and molecular systems, but it is widely believed that this function has no microscopic basis, especially in the case of molecular relaxation. For electronic relaxation the appearance of the stretched exponential is often described in the context of dispersive transport, where β is treated as an adjustable parameter, but in almost all cases it is generally assumed that no microscopic meaning can be assigned to β even at T_g, a glass transition temperature. We show that for molecular relaxation β(T_g) can be understood, providing that one separates extrinsic and intrinsic effects, and that the intrinsic effects are dominated by two magic numbers, β_SR = 3/5 for short-range forces, and β_K = 3/7 for long-range Coulomb forces, as originally observed by Kohlrausch for the decay of residual charge on a Leyden jar. Our mathematical model treats relaxation kinetics using the Lifshitz-Kac-Luttinger diffusion-to-traps depletion model in a configuration space of effective dimensionality, the latter being determined using axiomatic set theory and Phillips-Thorpe constraint theory. The experiments discussed include ns neutron scattering experiments, particularly those based on neutron spin echoes which measure S(Q,t) directly, and the traditional linear response measurements which span the range from μs to s, as collected and analysed phenomenologically by Angell, Ngai, Böhmer and others. The electronic materials discussed include a-Si:H, granular C60, semiconductor nanocrystallites, charge density waves in TaS3, spin glasses, and vortex glasses in high-temperature superconductors. The molecular materials discussed include polymers, network glasses, electrolytes and alcohols, Van der Waals supercooled liquids and glasses, orientational glasses, water, fused salts, and heme proteins. In the intrinsic cases the theory of β(T_g) is often accurate to 2%, which
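
A stretched exponential exponent can be recovered from relaxation data by the usual double-log linearization, log(-log φ) = β·log t - β·log τ. A minimal sketch on synthetic data with the short-range magic number β = 3/5 and a hypothetical τ, unrelated to the experiments the review analyses:

```python
import numpy as np

# Synthetic relaxation data following the Kohlrausch (stretched exponential) law
tau, beta = 1.0, 3 / 5                    # beta = 3/5: the short-range magic number
t = np.logspace(-2, 2, 200)
phi = np.exp(-(t / tau) ** beta)

# Linearize: log(-log phi) = beta*log(t) - beta*log(tau), then fit a line
lhs = np.log(-np.log(phi))
slope, intercept = np.polyfit(np.log(t), lhs, 1)
beta_fit = slope
tau_fit = np.exp(-intercept / slope)
```

On clean data the fit returns β and τ exactly; on real relaxation data the same linearization is a quick diagnostic of whether a single stretched exponential describes the decay at all.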

  7. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at … the Reed-Solomon code contained 223 bytes of data (a byte … then you have a data storage system with error correction, that … practical codes, storing such a table is infeasible, as it is generally too large.

  8. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...
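
The series excerpted above discusses a Reed-Solomon code with 223 data bytes per codeword, presumably the classic RS(255, 223) code over GF(2^8) (CD players use shortened RS codes inside CIRC). A minimal sketch of the parameter arithmetic only — the code-rate bookkeeping, not an encoder:

```python
# Reed-Solomon RS(n, k) over GF(2^8): n-symbol codewords carrying k data bytes.
n, k = 255, 223
parity = n - k          # 32 parity (check) bytes appended per codeword
t = parity // 2         # corrects up to t = 16 byte errors per codeword
erasures = parity       # or up to n - k erasures when their locations are known
rate = k / n            # code rate, roughly 0.875
```

The n - k = 2t relation is why doubling the correction capability costs twice the parity overhead.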

  9. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  10. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon the individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power, aviation and shipping industries. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication and resource/task management, an excessive authority gradient, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors

  11. Exponential fading to white of black holes in quantum gravity

    International Nuclear Information System (INIS)

    Barceló, Carlos; Carballo-Rubio, Raúl; Garay, Luis J

    2017-01-01

    Quantization of the gravitational field may allow the existence of a decay channel of black holes into white holes with an explicit time-reversal symmetry. The definition of a meaningful decay probability for this channel is studied in spherically symmetric situations. As a first nontrivial calculation, we present the functional integration over a set of geometries using a single-variable function to interpolate between black-hole and white-hole geometries in a bounded region of spacetime. This computation gives a finite result which depends only on the Schwarzschild mass and a parameter measuring the width of the interpolating region. The associated probability distribution displays an exponential decay law in the latter parameter, with a mean lifetime inversely proportional to the Schwarzschild mass. In physical terms this would imply that matter collapsing to a black hole from a finite radius bounces back elastically and instantaneously, with negligible time delay as measured by external observers. These results invite us to reconsider the ultimate nature of astrophysical black holes, providing a possible mechanism for the formation of black stars instead of proper general relativistic black holes. The existence of both this decay channel and black stars can be tested in future observations of gravitational waves. (paper)

  12. Kinetically modified non-minimal inflation with exponential frame function

    Energy Technology Data Exchange (ETDEWEB)

    Pallis, C. [University of Cyprus, Department of Physics, Nicosia (Cyprus)

    2017-09-15

    We consider supersymmetric (SUSY) and non-SUSY models of chaotic inflation based on the φ^n potential with n = 2 or 4. We show that the coexistence of an exponential non-minimal coupling to gravity f_R = exp(c_R φ^p) with a kinetic mixing of the form f_K = c_K f_R^m can accommodate inflationary observables favored by the Planck and Bicep2/Keck Array results for p = 1 and 2, 1 ≤ m ≤ 15 and 2.6 x 10^-3 ≤ r_RK = c_R/c_K^(p/2) ≤ 1, where the upper limit is not imposed for p = 1. Inflation is of hilltop type and it can be attained for subplanckian inflaton values, with the corresponding effective theories retaining perturbative unitarity up to the Planck scale. The supergravity embedding of these models is achieved by employing two chiral gauge singlet superfields, a monomial superpotential and several (semi)logarithmic or semi-polynomial Kaehler potentials. (orig.)

  13. Mutant number distribution in an exponentially growing population

    Science.gov (United States)

    Keller, Peter; Antal, Tibor

    2015-01-01

    We present an explicit solution to a classic model of cell-population growth introduced by Luria and Delbrück (1943 Genetics 28 491-511) 70 years ago to study the emergence of mutations in bacterial populations. In this model a wild-type population is assumed to grow exponentially in a deterministic fashion. Proportional to the wild-type population size, mutants arrive randomly and initiate new sub-populations of mutants that grow stochastically according to a supercritical birth and death process. We give an exact expression for the generating function of the total number of mutants at a given wild-type population size. We present a simple expression for the probability of finding no mutants, and a recursion formula for the probability of finding a given number of mutants. In the ‘large population-small mutation’ limit we recover recent results of Kessler and Levine (2014 J. Stat. Phys. doi:10.1007/s10955-014-1143-3) for a fully stochastic version of the process.

  14. Mutant number distribution in an exponentially growing population

    International Nuclear Information System (INIS)

    Keller, Peter; Antal, Tibor

    2015-01-01

    We present an explicit solution to a classic model of cell-population growth introduced by Luria and Delbrück (1943 Genetics 28 491–511) 70 years ago to study the emergence of mutations in bacterial populations. In this model a wild-type population is assumed to grow exponentially in a deterministic fashion. Proportional to the wild-type population size, mutants arrive randomly and initiate new sub-populations of mutants that grow stochastically according to a supercritical birth and death process. We give an exact expression for the generating function of the total number of mutants at a given wild-type population size. We present a simple expression for the probability of finding no mutants, and a recursion formula for the probability of finding a given number of mutants. In the ‘large population-small mutation’ limit we recover recent results of Kessler and Levine (2014 J. Stat. Phys. doi:10.1007/s10955-014-1143-3) for a fully stochastic version of the process. (paper)
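
The probability of finding no mutants has a simple form in the crudest approximation to this model: growing one cell into N cells takes N - 1 divisions, each of which independently spawns a mutant lineage with probability μ, so P(no mutants) ≈ exp(-μN). A Monte Carlo sketch of this approximation (hypothetical μ and N; this is not the paper's exact generating-function result, which also tracks the stochastic growth of each mutant clone):

```python
import numpy as np

rng = np.random.default_rng(2)
mu = 1e-6                 # hypothetical mutation probability per division
n_final = 1_000_000       # final wild-type population grown from one cell

# Growing from 1 cell to n_final cells takes n_final - 1 divisions; each one
# independently produces a mutant lineage with probability mu.
events = rng.binomial(n_final - 1, mu, size=20_000)
p0_sim = np.mean(events == 0)
p0_theory = np.exp(-mu * (n_final - 1))   # about e^-1 for these numbers
```

The heavy-tailed "jackpot" fluctuations that made the Luria-Delbrück experiment famous come from the sizes of the surviving clones, which this zero-mutant calculation deliberately sidesteps.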

  15. Asymmetric Bimodal Exponential Power Distribution on the Real Line

    Directory of Open Access Journals (Sweden)

    Mehmet Niyazi Çankaya

    2018-01-01

    Full Text Available The asymmetric bimodal exponential power (ABEP) distribution is an extension of the generalized gamma distribution to the real line via adding two parameters that fit the shape of peakedness in bimodality on the real line. At special values of the peakedness parameters the distribution reduces to a combination of half-Laplace and half-normal distributions on the real line. The distribution has two parameters fitting the height of bimodality, so the capacity for bimodality is enhanced by using these parameters. A skewness parameter is added to model asymmetry in data. The location-scale form of this distribution is proposed. The Fisher information matrix of these parameters in ABEP is obtained explicitly. Properties of ABEP are examined. Real data examples are given to illustrate the modelling capacity of ABEP. Replicated artificial data from maximum likelihood estimates of the parameters of ABEP and of other distributions having an artificial data generation procedure are provided to test the similarity with real data. A brief simulation study is presented.

  16. Predictors of the peak width for networks with exponential links

    Science.gov (United States)

    Troutman, B.M.; Karlinger, M.R.

    1989-01-01

    We investigate optimal predictors of the peak (S) and distance to peak (T) of the width function of drainage networks under the assumption that the networks are topologically random with independent and exponentially distributed link lengths. Analytical results are derived using the fact that, under these assumptions, the width function is a homogeneous Markov birth-death process. In particular, exact expressions are derived for the asymptotic conditional expectations of S and T given network magnitude N and given mainstream length H. In addition, a simulation study is performed to examine various predictors of S and T, including N, H, and basin morphometric properties; non-asymptotic conditional expectations and variances are estimated. The best single predictor of S is N, of T is H, and of the scaled peak (S divided by the area under the width function) is H. Finally, expressions tested on a set of drainage basins from the state of Wyoming perform reasonably well in predicting S and T despite probable violations of the original assumptions. © 1989 Springer-Verlag.

  17. A Study on The Mixture of Exponentiated-Weibull Distribution

    Directory of Open Access Journals (Sweden)

    Adel Tawfik Elshahat

    2016-12-01

    Full Text Available Mixtures of measures or distributions occur frequently in the theory and applications of probability and statistics. In the simplest case it may, for example, be reasonable to assume that one is dealing with a mixture, in given proportions, of a finite number of normal populations with different means or variances. The mixture parameter may also be denumerably infinite, as in the theory of sums of a random number of random variables, or continuous, as in the compound Poisson distribution. The use of finite mixture distributions, to control for unobserved heterogeneity, has become increasingly popular among those estimating dynamic discrete choice models. One of the barriers to using mixture models is that parameters that could previously be estimated in stages must now be estimated jointly: using mixture distributions destroys any additive separability of the log-likelihood function. In this thesis, maximum likelihood estimators have been obtained for the parameters of the mixture of exponentiated Weibull distributions when a sample is available under a censoring scheme. The maximum likelihood estimators of the parameters and the asymptotic variance-covariance matrix have also been obtained. A numerical illustration of these new results is given.

  18. Exponential 6 parameterization for the JCZ3-EOS

    Energy Technology Data Exchange (ETDEWEB)

    McGee, B.C.; Hobbs, M.L.; Baer, M.R.

    1998-07-01

    A database has been created for use with the Jacobs-Cowperthwaite-Zwisler-3 equation-of-state (JCZ3-EOS) to determine thermochemical equilibrium for detonation and expansion states of energetic materials. The JCZ3-EOS uses the exponential-6 intermolecular potential function to describe interactions between molecules. All product species are characterized by r*, the radius of the minimum pair potential energy, and ε/k, the well-depth energy normalized by Boltzmann's constant. These parameters constitute the JCZS (S for Sandia) EOS database describing 750 gases (including all the gases in the JANNAF tables), and have been obtained by using Lennard-Jones potential parameters, a corresponding-states theory, pure-liquid shock Hugoniot data, and fit values using an empirical EOS. This database can be used with the CHEETAH 1.40 or CHEETAH 2.0 interface to the TIGER computer program that predicts the equilibrium state of gas- and condensed-phase product species. The large JCZS-EOS database permits intermolecular-potential-based equilibrium calculations of energetic materials with complex elemental composition.
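
A common way of writing the exponential-6 potential in terms of the two database parameters is U(r) = ε/(α - 6) · [6·exp(α(1 - r/r*)) - α·(r*/r)^6], which takes its minimum value -ε at r = r*. A quick numerical check of that defining property (the ε/k and r* values below are illustrative placeholders, not entries from the JCZS database, and the softness α is an assumed typical value):

```python
import math

def exp6(r, eps, r_star, alpha=13.0):
    """One common exponential-6 pair potential form: well depth eps at r = r_star,
    exponential repulsion with softness alpha, r^-6 attraction."""
    return (eps / (alpha - 6.0)) * (
        6.0 * math.exp(alpha * (1.0 - r / r_star)) - alpha * (r_star / r) ** 6
    )

eps, r_star = 125.0, 4.2   # hypothetical well depth (as eps/k, K) and radius (angstrom)
u_min = exp6(r_star, eps, r_star)                        # should equal -eps
h = 1e-6                                                 # central-difference step
du = (exp6(r_star + h, eps, r_star) - exp6(r_star - h, eps, r_star)) / (2 * h)
```

At r = r* the bracket evaluates to 6 - α, so U(r*) = -ε exactly and the slope vanishes, which is why (r*, ε/k) suffice to characterize each product species.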

  19. Exponential critical-state model for magnetization of hard superconductors

    International Nuclear Information System (INIS)

    Chen, D.; Sanchez, A.; Munoz, J.S.

    1990-01-01

    We have calculated the initial magnetization curves and hysteresis loops for hard type-II superconductors based on the exponential-law model, J_c(H_i) = k·exp(-|H_i|/H_0), where k and H_0 are constants. After discussing the general behavior of penetrated supercurrents in an infinitely long column specimen, we define a general cross-sectional shape based on two equal circles of radius a, which can be rendered into a circle, a rectangle, or many other shapes. With increasing parameter p (= ka/H_0), the computed M-H curves show obvious differences from those computed from Kim's model and approach the results of a simple infinitely narrow square pulse J_c(H_i). For high-T_c superconductors, our results can be applied to the study of the magnetic properties and the critical-current density of single crystals, as well as to the determination of the intergranular critical-current density from magnetic measurements

  20. Is blood pressure reduction a valid surrogate endpoint for stroke prevention? An analysis incorporating a systematic review of randomised controlled trials, a by-trial weighted errors-in-variables regression, the surrogate threshold effect (STE) and the biomarker-surrogacy (BioSurrogate) evaluation schema (BSES)

    Directory of Open Access Journals (Sweden)

    Lassere Marissa N

    2012-03-01

    Full Text Available Abstract Background Blood pressure is considered to be a leading example of a valid surrogate endpoint. The aims of this study were to (i) formally evaluate systolic and diastolic blood pressure reduction as a surrogate endpoint for stroke prevention and (ii) determine what blood pressure reduction would predict a stroke benefit. Methods We identified randomised trials of at least six months duration comparing any pharmacologic anti-hypertensive treatment to placebo or no treatment, and reporting baseline blood pressure, on-trial blood pressure, and fatal and non-fatal stroke. Trials with fewer than five strokes in at least one arm were excluded. Errors-in-variables weighted least squares regression modelled the reduction in stroke as a function of systolic blood pressure reduction and diastolic blood pressure reduction respectively. The lower 95% prediction band was used to determine the minimum systolic blood pressure and diastolic blood pressure difference, the surrogate threshold effect (STE), below which there would be no predicted stroke benefit. The STE was used to generate the surrogate threshold effect proportion (STEP), a surrogacy metric, which together with the R-squared trial-level association was used to evaluate blood pressure as a surrogate endpoint for stroke using the Biomarker-Surrogacy Evaluation Schema (BSES3). Results In 18 qualifying trials representing all pharmacologic drug classes of antihypertensives, assuming a reliability coefficient of 0.9, the surrogate threshold effect for a stroke benefit was 7.1 mmHg for systolic blood pressure and 2.4 mmHg for diastolic blood pressure. The trial-level association was 0.41 and 0.64 and the STEP was 66% and 78% for systolic and diastolic blood pressure respectively. The STE and STEP were more robust to measurement error in the independent variable than R-squared trial-level associations. Using the BSES3, assuming a reliability coefficient of 0.9, systolic blood pressure was a B+ grade and