WorldWideScience

Sample records for exponential error reduction

  1. FEL small signal gain reduction due to phase error of undulator

    International Nuclear Information System (INIS)

    Jia Qika

    2002-01-01

    The effects of undulator phase errors on the Free Electron Laser small signal gain are analyzed and discussed. The gain reduction factor due to phase error is given analytically for the low-gain regime. It shows that the degradation of the gain is similar to that of the spontaneous radiation and has a simple exponential relation with the square of the rms phase error, while the linearly varying part of the phase error induces a shift in the position of maximum gain. The result also shows that Madey's theorem still holds in the presence of phase error. The gain reduction factor due to phase error for the high-gain regime can also be given in a simple way
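
    The exponential dependence on the squared rms phase error described above can be sketched numerically. This is a minimal illustration; the function name and the exact prefactor convention are assumptions, not taken from the paper:

```python
import math

def gain_reduction(sigma_phi):
    """Gain reduction factor of the assumed form exp(-sigma_phi**2),
    where sigma_phi is the rms undulator phase error in radians.
    Only the exponential shape from the abstract is reproduced; the
    paper's exact convention may differ."""
    return math.exp(-sigma_phi ** 2)

# e.g. a 0.3 rad rms phase error costs under 10% of the small-signal gain
loss = 1.0 - gain_reduction(0.3)
```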

  2. Error analysis in Fourier methods for option pricing for exponential Lévy processes

    KAUST Repository

    Crocce, Fabian; Häppölä, Juho; Kiessling, Jonas; Tempone, Raul

    2015-01-01

    We derive an error bound for utilising the discrete Fourier transform method for solving partial integro-differential equations (PIDEs) that describe European option prices for exponential Lévy driven asset prices. We give sufficient conditions

  3. The Negative Sign and Exponential Expressions: Unveiling Students' Persistent Errors and Misconceptions

    Science.gov (United States)

    Cangelosi, Richard; Madrid, Silvia; Cooper, Sandra; Olson, Jo; Hartter, Beverly

    2013-01-01

    The purpose of this study was to determine whether or not certain errors made when simplifying exponential expressions persist as students progress through their mathematical studies. College students enrolled in college algebra, pre-calculus, and first- and second-semester calculus mathematics courses were asked to simplify exponential…

  4. Computable error estimates of a finite difference scheme for option pricing in exponential Lévy models

    KAUST Repository

    Kiessling, Jonas

    2014-05-06

    Option prices in exponential Lévy models solve certain partial integro-differential equations. This work focuses on developing novel, computable error approximations for a finite difference scheme that is suitable for solving such PIDEs. The scheme was introduced in (Cont and Voltchkova, SIAM J. Numer. Anal. 43(4):1596-1626, 2005). The main results of this work are new estimates of the dominating error terms, namely the time and space discretisation errors. In addition, the leading-order terms of the error estimates are determined in a form that is more amenable to computations. The payoff is only assumed to satisfy an exponential growth condition; it is not assumed to be Lipschitz continuous, as in previous works. If the underlying Lévy process has infinite jump activity, then the jumps smaller than some (Formula presented.) are approximated by diffusion. The resulting diffusion approximation error is also estimated, with its leading-order term in computable form, as is the dependence of the time and space discretisation errors on this approximation. Consequently, it is possible to determine how to jointly choose the space and time grid sizes and the cut-off parameter (Formula presented.). © 2014 Springer Science+Business Media Dordrecht.

  5. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
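
    The contrast between the two learning rules compared above can be sketched in a few lines. This is an illustrative toy implementation (function names and the learning-rate value are assumptions, not the paper's models):

```python
def ter_update(weights, cues, outcome, lr=0.1):
    # Total error reduction (Rescorla-Wagner style): all present cues
    # share one error term computed from the summed prediction.
    error = outcome - sum(weights[c] for c in cues)
    for c in cues:
        weights[c] += lr * error

def ler_update(weights, cues, outcome, lr=0.1):
    # Local error reduction: each cue is trained on its own discrepancy
    # from the delivered outcome, ignoring the other cues' predictions.
    for c in cues:
        weights[c] += lr * (outcome - weights[c])

w_ter = {"A": 0.8, "B": 0.8}
w_ler = {"A": 0.8, "B": 0.8}
ter_update(w_ter, ["A", "B"], 1.0)   # summed prediction 1.6 overshoots 1.0
ler_update(w_ler, ["A", "B"], 1.0)   # each cue alone still undershoots 1.0
```

On the same trial the TER rule weakens both cues (the compound over-predicts) while the LER rule strengthens them, which is the kind of divergence the model comparison exploits.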

  6. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  7. SHERPA: A systematic human error reduction and prediction approach

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1986-01-01

    This paper describes a Systematic Human Error Reduction and Prediction Approach (SHERPA), which is intended to provide guidelines for human error reduction and quantification in a wide range of human-machine systems. The approach utilizes as its basis current cognitive models of human performance. The first module in SHERPA performs task and human error analyses, which identify likely error modes, together with guidelines for the reduction of these errors by training, procedures and equipment redesign. The second module uses a SARAH approach to quantify the probability of occurrence of the errors identified earlier, and provides cost-benefit analyses to assist in choosing the appropriate error reduction approaches in the third module

  8. Bayesian Exponential Smoothing.

    OpenAIRE

    Forbes, C.S.; Snyder, R.D.; Shami, R.S.

    2000-01-01

    In this paper, a Bayesian version of the exponential smoothing method of forecasting is proposed. The approach is based on a state space model containing only a single source of error for each time interval. This model allows us to improve current practices surrounding exponential smoothing by providing both point predictions and measures of the uncertainty surrounding them.
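
    The single-source-of-error state space form underlying the abstract above can be sketched as follows. This is a minimal, non-Bayesian illustration of the recursion only (function and variable names are assumptions):

```python
def simple_exponential_smoothing(y, alpha, level0):
    """Single-source-of-error state space form of simple exponential
    smoothing: y_t = l_{t-1} + e_t and l_t = l_{t-1} + alpha * e_t,
    with one error e_t per time interval. Point forecasts only; the
    paper's Bayesian treatment of uncertainty is not reproduced here."""
    level = level0
    forecasts = []
    for obs in y:
        forecasts.append(level)   # one-step-ahead forecast is the level
        e = obs - level           # the single error for this interval
        level += alpha * e        # state update driven by that same error
    return forecasts

fc = simple_exponential_smoothing([10.0, 12.0, 11.0], alpha=0.5, level0=10.0)
```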

  9. Error analysis in Fourier methods for option pricing for exponential Lévy processes

    KAUST Repository

    Crocce, Fabian

    2015-01-07

    We derive an error bound for utilising the discrete Fourier transform method for solving partial integro-differential equations (PIDEs) that describe European option prices for exponential Lévy driven asset prices. We give sufficient conditions for the existence of an L∞ bound that separates the dynamical contribution from that arising from the type of the option in question. The bound achieved does not rely on information about the asymptotic behaviour of option prices at extreme asset values. In addition, we demonstrate improved numerical performance for selected examples of practical relevance when compared to established bounding methods.

  10. The District Nursing Clinical Error Reduction Programme.

    Science.gov (United States)

    McGraw, Caroline; Topping, Claire

    2011-01-01

    The District Nursing Clinical Error Reduction (DANCER) Programme was initiated in NHS Islington following an increase in the number of reported medication errors. The objectives were to reduce the actual degree of harm and the potential risk of harm associated with medication errors and to maintain the existing positive reporting culture, while robustly addressing performance issues. One hundred medication errors reported in 2007/08 were analysed using a framework that specifies the factors that predispose to adverse medication events in domiciliary care. Various contributory factors were identified and interventions were subsequently developed to address poor drug calculation and medication problem-solving skills and incorrectly transcribed medication administration record charts. Follow up data were obtained at 12 months and two years. The evaluation has shown that although medication errors do still occur, the programme has resulted in a marked shift towards a reduction in the associated actual degree of harm and the potential risk of harm.

  11. Tight Error Bounds for Fourier Methods for Option Pricing for Exponential Levy Processes

    KAUST Repository

    Crocce, Fabian

    2016-01-06

    Prices of European options whose underlying asset is driven by a Lévy process are solutions to partial integro-differential equations (PIDEs) that generalise the Black-Scholes equation by incorporating a non-local integral term to account for the discontinuities in the asset price. The Lévy-Khintchine formula provides an explicit representation of the characteristic function of a Lévy process (cf. [6]): one can derive an exact expression for the Fourier transform of the solution of the relevant PIDE. The rapid rate of convergence of the trapezoid quadrature and the associated speedup provide efficient methods for evaluating option prices, possibly for a range of parameter configurations simultaneously. A couple of works have been devoted to the error analysis and parameter selection for these transform-based methods. In [5] several payoff functions are considered for a rather general set of models, whose characteristic function is assumed to be known. [4] presents the framework and theoretical approach for the error analysis, and establishes polynomial convergence rates for approximations of the option prices. [1] presents FT-related methods with a curved integration contour. The classical flat FT-methods have, on the other hand, been extended to option pricing problems beyond the European framework [3]. We present a methodology for studying and bounding the error committed when using FT methods to compute option prices. We also provide a systematic way of choosing the parameters of the numerical method, minimising the error bound and guaranteeing adherence to a prescribed error tolerance. We focus on exponential Lévy processes that may be either of diffusive or of pure-jump type. Our contribution is to derive a tight error bound for a Fourier transform method when pricing options under risk-neutral Lévy dynamics. We present a simplified bound that separates the contributions of the payoff and of the process in an easily processed and extensible product form that
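
    The trapezoid-quadrature Fourier inversion discussed above can be sketched for the simplest exponential Lévy model (pure diffusion, i.e. Black-Scholes), where a closed-form reference price is available for validation. The damping parameter, grid spacing and grid size below are illustrative choices, not the optimised values such an error bound would dictate:

```python
import numpy as np
from math import erf, exp, log, pi, sqrt

def bs_call(S0, K, r, sigma, T):
    # closed-form Black-Scholes call, used only to validate the transform
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

def fourier_call(S0, K, r, sigma, T, alpha=1.5, eta=0.05, N=4096):
    """Trapezoid-rule Fourier inversion with Carr-Madan damping of the
    call payoff. Only the diffusive case is shown; a pure-jump Levy
    model would just swap in its characteristic function."""
    v = eta * np.arange(N)
    # risk-neutral characteristic function of log S_T (GBM case)
    phi = lambda u: np.exp(1j * u * (log(S0) + (r - 0.5 * sigma**2) * T)
                           - 0.5 * sigma**2 * u**2 * T)
    psi = (exp(-r * T) * phi(v - (alpha + 1) * 1j)
           / (alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v))
    k = log(K)
    integrand = np.real(np.exp(-1j * v * k) * psi)
    integrand[0] *= 0.5                    # trapezoid endpoint weight
    return exp(-alpha * k) / pi * integrand.sum() * eta

price = fourier_call(100.0, 100.0, 0.05, 0.2, 1.0)
```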

  12. ESTIMATION OF PARAMETERS AND RELIABILITY FUNCTION OF EXPONENTIATED EXPONENTIAL DISTRIBUTION: BAYESIAN APPROACH UNDER GENERAL ENTROPY LOSS FUNCTION

    Directory of Open Access Journals (Sweden)

    Sanjay Kumar Singh

    2011-06-01

    In this paper we propose Bayes estimators of the parameters of the exponentiated exponential distribution and its reliability function under the general entropy loss function for Type II censored samples. The proposed estimators have been compared with the corresponding Bayes estimators obtained under the squared error loss function and with maximum likelihood estimators in terms of their simulated risks (average loss over the sample space).
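
    The exponentiated exponential distribution referred to above has the closed-form CDF F(x) = (1 - e^{-lam x})^alpha, which makes simulation by inverse transform immediate. A minimal sketch (function names are assumptions; the paper's estimators are not reproduced):

```python
import math

def ee_cdf(x, alpha, lam):
    """CDF of the exponentiated exponential distribution:
    F(x) = (1 - exp(-lam*x))**alpha for x > 0.
    alpha = 1 recovers the ordinary exponential distribution."""
    return (1.0 - math.exp(-lam * x)) ** alpha

def ee_inv_cdf(u, alpha, lam):
    # inverse transform: solve u = F(x) for x, handy for simulation studies
    return -math.log(1.0 - u ** (1.0 / alpha)) / lam
```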

  13. Hyperbolic Cosine–Exponentiated Exponential Lifetime Distribution and its Application in Reliability

    Directory of Open Access Journals (Sweden)

    Omid Kharazmi

    2017-02-01

    Recently, Kharazmi and Saadatinik (2016) introduced a new family of lifetime distributions called the hyperbolic cosine-F (HCF) family. The present paper focuses on a special case of the HCF family with the exponentiated exponential distribution as the baseline distribution (HCEE). Various properties of the proposed distribution are derived, including explicit expressions for the moments, quantiles, mode, moment generating function, failure rate function, mean residual lifetime, order statistics and entropy. Parameters of the HCEE distribution are estimated by eight methods: maximum likelihood, Bayesian, maximum product of spacings, parametric bootstrap, non-parametric bootstrap, percentile, least-squares and weighted least-squares. A simulation study is conducted to examine the bias and mean square error of the maximum likelihood estimators. Finally, one real data set is analyzed for illustrative purposes, and it is observed that the proposed model fits better than the Weibull, gamma and generalized exponential distributions.

  14. Exponential noise reduction in Lattice QCD: new tools for new physics

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    The numerical computations of many quantities of theoretical and phenomenological interest are plagued by statistical errors which increase exponentially with the distance of the sources in the relevant correlators. Notable examples are baryon masses and matrix elements, the hadronic vacuum polarization and the light-by-light scattering contributions to the muon g-2, and the form factors of semileptonic B decays. Reliable and precise determinations of these quantities are very difficult if not impractical with state-of-the-art standard Monte Carlo integration schemes. I will discuss a recent proposal for factorizing the fermion determinant in lattice QCD that leads to a local action in the gauge field and in the auxiliary boson fields. Once combined with the corresponding factorization of the quark propagator, it paves the way for multi-level Monte Carlo integration in the presence of fermions opening new perspectives in lattice QCD and in its capability to unveil new physics. Exploratory results on the impac...

  15. Integration of large chemical kinetic mechanisms via exponential methods with Krylov approximations to Jacobian matrix functions

    KAUST Repository

    Bisetti, Fabrizio

    2012-06-01

    Recent trends in hydrocarbon fuel research indicate that the number of species and reactions in chemical kinetic mechanisms is rapidly increasing in an effort to provide predictive capabilities for fuels of practical interest. In order to cope with the computational cost associated with the time integration of stiff, large chemical systems, a novel approach is proposed. The approach combines an exponential integrator and Krylov subspace approximations to the exponential function of the Jacobian matrix. The components of the approach are described in detail and applied to the ignition of stoichiometric methane-air and iso-octane-air mixtures, here described by two widely adopted chemical kinetic mechanisms. The approach is found to be robust even at relatively large time steps and the global error displays a nominal third-order convergence. The performance of the approach is improved by utilising an adaptive algorithm for the selection of the Krylov subspace size, which guarantees an approximation to the matrix exponential within user-defined error tolerance. The Krylov projection of the Jacobian matrix onto a low-dimensional space is interpreted as a local model reduction with a well-defined error control strategy. Finally, the performance of the approach is discussed with regard to the optimal selection of the parameters governing the accuracy of its individual components. © 2012 Copyright Taylor and Francis Group, LLC.
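
    The core building block described above, approximating the action of the matrix exponential on a vector from a low-dimensional Krylov subspace, can be sketched with a plain Arnoldi iteration. This is a minimal illustration; the paper's adaptive selection of the subspace size and its error control are not implemented:

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm_v(A, v, m=20, dt=1.0):
    """Approximate exp(dt*A) @ v from an m-dimensional Krylov (Arnoldi)
    subspace: project A onto the subspace, exponentiate the small
    Hessenberg matrix, and lift the result back."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt step
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # happy breakdown: invariant subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(dt * H[:m, :m]) @ e1)
```

For a stiff chemical system A would be the (sparse) Jacobian; here the projection is exact once m reaches the dimension of the Krylov space.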

  16. Global impulsive exponential synchronization of stochastic perturbed chaotic delayed neural networks

    International Nuclear Information System (INIS)

    Hua-Guang, Zhang; Tie-Dong, Ma; Jie, Fu; Shao-Cheng, Tong

    2009-01-01

    In this paper, the global impulsive exponential synchronization problem of a class of chaotic delayed neural networks (DNNs) with stochastic perturbation is studied. Based on the Lyapunov stability theory, stochastic analysis approach and an efficient impulsive delay differential inequality, some new exponential synchronization criteria expressed in the form of the linear matrix inequality (LMI) are derived. The designed impulsive controller not only can globally exponentially stabilize the error dynamics in mean square, but also can control the exponential synchronization rate. Furthermore, to estimate the stable region of the synchronization error dynamics, a novel optimization control algorithm is proposed, which can deal with the minimum problem with two nonlinear terms coexisting in LMIs effectively. Simulation results finally demonstrate the effectiveness of the proposed method

  17. TRANSMUTED EXPONENTIATED EXPONENTIAL DISTRIBUTION

    OpenAIRE

    MEROVCI, FATON

    2013-01-01

    In this article, we generalize the exponentiated exponential distribution using the quadratic rank transmutation map studied by Shaw et al. [6] to develop a transmuted exponentiated exponential distribution. The properties of this distribution are derived and the estimation of the model parameters is discussed. An application to a real data set is finally presented for illustration

  18. Error reduction techniques for Monte Carlo neutron transport calculations

    International Nuclear Information System (INIS)

    Ju, J.H.W.

    1981-01-01

    Monte Carlo methods have been widely applied to problems in nuclear physics, mathematical reliability, communication theory, and other areas. The work in this thesis is developed mainly with neutron transport applications in mind. For nuclear reactor and many other applications, random walk processes have been used to estimate multi-dimensional integrals and obtain information about the solution of integral equations. When the analysis is statistically based such calculations are often costly, and the development of efficient estimation techniques plays a critical role in these applications. All of the error reduction techniques developed in this work are applied to model problems. It is found that the nearly optimal parameters selected by the analytic method for use with GWAN estimator are nearly identical to parameters selected by the multistage method. Modified path length estimation (based on the path length importance measure) leads to excellent error reduction in all model problems examined. Finally, it should be pointed out that techniques used for neutron transport problems may be transferred easily to other application areas which are based on random walk processes. The transport problems studied in this dissertation provide exceptionally severe tests of the error reduction potential of any sampling procedure. It is therefore expected that the methods of this dissertation will prove useful in many other application areas
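
    The payoff of such error reduction techniques can be illustrated with the standard importance-sampling idea, on a toy problem rather than the thesis's GWAN or path-length estimators: estimating the rare-event probability P(X > 4) for X ~ Exp(1) by sampling the tail directly (all names below are illustrative assumptions):

```python
import math
import random

def plain_mc(n, rng):
    # naive estimator: indicator of the rare event, high relative variance
    return sum(1.0 for _ in range(n) if rng.expovariate(1.0) > 4.0) / n

def importance_mc(n, rng):
    # sample Y = 4 + Exp(1), i.e. from the conditional tail density
    # g(y) = exp(-(y-4)); the likelihood ratio f(y)/g(y) = exp(-4) is
    # constant here, so this toy estimator has zero variance
    total = 0.0
    for _ in range(n):
        _y = 4.0 + rng.expovariate(1.0)
        total += math.exp(-4.0)
    return total / n
```

The exact answer is e^{-4} ≈ 0.0183; the plain estimator needs thousands of samples just to see a handful of hits, while the tail-sampled estimator is exact in this deliberately favourable case.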

  19. Matrix-exponential description of radiative transfer

    International Nuclear Information System (INIS)

    Waterman, P.C.

    1981-01-01

    By applying the matrix-exponential operator technique to the radiative-transfer equation in discrete form, new analytical solutions are obtained for the transmission and reflection matrices in the limiting cases x << 1 and x >> 1, where x is the optical depth of the layer. Orthogonality of the eigenvectors of the matrix exponential apparently yields new conditions for determining Chandrasekhar's characteristic roots. The exact law of reflection for the discrete eigenfunctions is also obtained. Finally, when used in conjunction with the doubling method, the matrix exponential should result in a reduction in both computation time and loss of precision
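
    The reason the matrix exponential meshes with the doubling method is the semigroup identity exp(2xA) = exp(xA) exp(xA): stacking two identical layers of optical depth x yields the operator for depth 2x. A toy check (the 2x2 generator below is made up for illustration, not taken from the paper):

```python
import numpy as np
from scipy.linalg import expm

# hypothetical two-stream generator: absorption on the diagonal,
# scattering coupling off the diagonal
A = np.array([[-1.0, 0.3],
              [0.3, -1.0]])
x = 0.4
T_x = expm(x * A)          # transfer operator for a layer of depth x
T_2x = expm(2.0 * x * A)   # transfer operator for depth 2x
doubling_holds = np.allclose(T_2x, T_x @ T_x)
```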

  20. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    Science.gov (United States)

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
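
    The behaviour described above, a shrinking worst-case error as interpolation nodes are added, is easy to observe numerically. A minimal sketch (the error measure here is the max absolute error on a fine grid, which is an assumption; the article's exact measure may differ):

```python
import numpy as np

def max_interp_error_exp(nodes):
    """Fit the interpolating polynomial of exp at the given nodes and
    return its maximum absolute error over the span of the nodes."""
    coeffs = np.polyfit(nodes, np.exp(nodes), len(nodes) - 1)
    x = np.linspace(min(nodes), max(nodes), 1001)
    return float(np.max(np.abs(np.polyval(coeffs, x) - np.exp(x))))

# more interpolation nodes on [0, 1] shrink the worst-case error
err3 = max_interp_error_exp(np.linspace(0.0, 1.0, 3))
err5 = max_interp_error_exp(np.linspace(0.0, 1.0, 5))
```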

  1. Stability Analysis and H∞ Model Reduction for Switched Discrete-Time Time-Delay Systems

    Directory of Open Access Journals (Sweden)

    Zheng-Fan Liu

    2014-01-01

    This paper is concerned with the problem of exponential stability and H∞ model reduction of a class of switched discrete-time systems with state time-varying delay. Some subsystems can be unstable. Based on the average dwell time technique and the Lyapunov-Krasovskii functional (LKF) approach, sufficient conditions for exponential stability with H∞ performance of such systems are derived in terms of linear matrix inequalities (LMIs). For high-order systems, sufficient conditions for the existence of a reduced-order model are derived in terms of LMIs. Moreover, the error system is guaranteed to be exponentially stable and an H∞ error performance is guaranteed. Numerical examples are given to demonstrate the effectiveness and reduced conservatism of the obtained results.

  2. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    Muhammad Zahid Rashid

    2011-04-01

    The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values of the parameters and different sample sizes.
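
    Of the seven estimation methods compared above, the maximum likelihood estimator has a simple closed form that can be sketched directly (function names are assumptions; the regression-based variants are not reproduced here):

```python
def two_param_exp_mle(sample):
    """Maximum likelihood estimates for the two-parameter exponential
    distribution: the location MLE is the sample minimum, and the
    scale MLE is the sample mean minus that minimum."""
    loc = min(sample)
    scale = sum(sample) / len(sample) - loc
    return loc, scale

def mse(estimates, truth):
    # the paper's comparison criterion: mean squared error across replications
    return sum((e - truth) ** 2 for e in estimates) / len(estimates)
```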

  3. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    Science.gov (United States)

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are define, measure, analyze, improve and control. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology for error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the pre-analytic, analytic and post-analytic phases was analysed. Improvement strategies were reviewed in the monthly intradepartmental meetings and control of the units with high error rates was provided. Fifty-six (52.4%) of 107 recorded errors in total were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory provided a reduction of the error rates, mainly in the pre-analytic and analytic phases.
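
    The arithmetic connecting an error rate to a sigma level can be sketched as follows. This uses the conventional 1.5-sigma shift; conventions and opportunity counts vary between studies, so these values need not match the sigma figures quoted in the abstract:

```python
from statistics import NormalDist

def dpmo(error_rate):
    # defects per million opportunities
    return error_rate * 1_000_000

def sigma_level(error_rate):
    """Approximate short-term sigma level for a given long-term error
    rate: the normal quantile of the yield, plus the customary
    1.5-sigma shift. E.g. 3.4 DPMO corresponds to six sigma."""
    return NormalDist().inv_cdf(1.0 - error_rate) + 1.5
```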

  4. Advancing the research agenda for diagnostic error reduction.

    Science.gov (United States)

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  5. Error Reduction in an Operating Environment - Comanche Peak Steam Electric Station

    International Nuclear Information System (INIS)

    Blevins, Mike; Gallman, Jim

    1998-01-01

    After having outlined that a program to manage human performance and to reduce human performance errors has reached an 88% error reduction rate and a 99% significant error reduction rate, the authors present this program. It takes three cornerstones of human performance management into account: training, leadership and procedures. Other aspects are introduced: communication, corrective action programs, a root cause analysis, seven steps of self checking, trending, and a human performance enhancement program. These other aspects and their relationships are discussed. Program strengths and downsides are outlined, as well as actions needed for success. Another approach is then proposed which comprises proactive interventions and indicators for human performance. These indicators are identified and introduced by analyzing the anatomy of an event. The limitations of this model are discussed

  6. Reduction in Chemotherapy Mixing Errors Using Six Sigma: Illinois CancerCare Experience.

    Science.gov (United States)

    Heard, Bridgette; Miller, Laura; Kumar, Pankaj

    2012-03-01

    Chemotherapy mixing errors (CTMRs), although rare, have serious consequences. Illinois CancerCare is a large practice with multiple satellite offices. The goal of this study was to reduce the number of CTMRs using Six Sigma methods. A Six Sigma team consisting of five participants (registered nurses and pharmacy technicians [PTs]) was formed. The team had 10 hours of Six Sigma training in the DMAIC (i.e., Define, Measure, Analyze, Improve, Control) process. Measurement of errors started from the time the CT order was verified by the PT to the time of CT administration by the nurse. Data collection included retrospective error tracking software, system audits, and staff surveys. Root causes of CTMRs included inadequate knowledge of CT mixing protocol, inconsistencies in checking methods, and frequent changes in staffing of clinics. Initial CTMRs (n = 33,259) constituted 0.050%, with 77% of these errors affecting patients. The action plan included checklists, education, and competency testing. The postimplementation error rate (n = 33,376, annualized) over a 3-month period was reduced to 0.019%, with only 15% of errors affecting patients. Initial Sigma was calculated at 4.2; this process resulted in the improvement of Sigma to 5.2, representing a 100-fold reduction. Financial analysis demonstrated a reduction in annualized loss of revenue (administration charges and drug wastage) from $11,537.95 (Medicare Average Sales Price) before the start of the project to $1,262.40. The Six Sigma process is a powerful technique in the reduction of CTMRs.

  7. ADVANCED MMIS TOWARD SUBSTANTIAL REDUCTION IN HUMAN ERRORS IN NPPS

    Directory of Open Access Journals (Sweden)

    POONG HYUN SEONG

    2013-04-01

    This paper aims to give an overview of the methods to inherently prevent human errors and to effectively mitigate the consequences of such errors by securing defense-in-depth during plant management through the advanced man-machine interface system (MMIS). It is needless to stress the significance of human error reduction during an accident in nuclear power plants (NPPs). Unexpected shutdowns caused by human errors not only threaten nuclear safety but also make public acceptance of nuclear power extremely lower. We have to recognize there must be the possibility of human errors occurring since humans are not essentially perfect particularly under stressful conditions. However, we have the opportunity to improve such a situation through advanced information and communication technologies on the basis of lessons learned from our experiences. As important lessons, authors explained key issues associated with automation, man-machine interface, operator support systems, and procedures. Upon this investigation, we outlined the concept and technical factors to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility of nuclear safety obviously belongs to humans not to machines. Therefore, safety culture including education and training, which is a kind of organizational factor, should be emphasized as well. In regard to safety culture for human error reduction, several issues that we are facing these days were described. We expect the ideas of the advanced MMIS proposed in this paper to lead in the future direction of related researches and finally supplement the safety of NPPs.

  8. Advanced MMIS Toward Substantial Reduction in Human Errors in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Seong, Poong Hyun; Kang, Hyun Gook [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Na, Man Gyun [Chosun Univ., Gwangju (Korea, Republic of); Kim, Jong Hyun [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of); Heo, Gyunyoung [Kyung Hee Univ., Yongin (Korea, Republic of); Jung, Yoensub [Korea Hydro and Nuclear Power Co., Ltd., Daejeon (Korea, Republic of)

    2013-04-15

    This paper aims to give an overview of the methods to inherently prevent human errors and to effectively mitigate the consequences of such errors by securing defense-in-depth during plant management through the advanced man-machine interface system (MMIS). It is needless to stress the significance of human error reduction during an accident in nuclear power plants (NPPs). Unexpected shutdowns caused by human errors not only threaten nuclear safety but also make public acceptance of nuclear power extremely lower. We have to recognize there must be the possibility of human errors occurring since humans are not essentially perfect particularly under stressful conditions. However, we have the opportunity to improve such a situation through advanced information and communication technologies on the basis of lessons learned from our experiences. As important lessons, authors explained key issues associated with automation, man-machine interface, operator support systems, and procedures. Upon this investigation, we outlined the concept and technical factors to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility of nuclear safety obviously belongs to humans not to machines. Therefore, safety culture including education and training, which is a kind of organizational factor, should be emphasized as well. In regard to safety culture for human error reduction, several issues that we are facing these days were described. We expect the ideas of the advanced MMIS proposed in this paper to lead in the future direction of related researches and finally supplement the safety of NPPs.

  9. Advanced MMIS Toward Substantial Reduction in Human Errors in NPPs

    International Nuclear Information System (INIS)

    Seong, Poong Hyun; Kang, Hyun Gook; Na, Man Gyun; Kim, Jong Hyun; Heo, Gyunyoung; Jung, Yoensub

    2013-01-01

    This paper gives an overview of methods to inherently prevent human errors and to effectively mitigate the consequences of such errors by securing defense-in-depth during plant management through an advanced man-machine interface system (MMIS). The significance of human error reduction during an accident in nuclear power plants (NPPs) needs no emphasis. Unexpected shutdowns caused by human errors not only threaten nuclear safety but also severely lower public acceptance of nuclear power. We must recognize that human errors remain possible, since humans are not perfect, particularly under stressful conditions. However, we have the opportunity to improve this situation through advanced information and communication technologies, building on lessons learned from experience. As important lessons, the authors explain key issues associated with automation, the man-machine interface, operator support systems, and procedures. On this basis, we outline the concept and technical factors needed to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility for nuclear safety belongs to humans, not to machines. Therefore, safety culture, including education and training, which is an organizational factor, should be emphasized as well. With regard to safety culture for human error reduction, several issues that we face today are described. We expect the ideas of the advanced MMIS proposed in this paper to guide the direction of future related research and ultimately to enhance the safety of NPPs.

  10. Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2012-01-01

    A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon time-weighted balanced stochastic model reduction method and singular perturbation model reduction technique. Compared...... by using the concept and properties of the reciprocal systems. The results are further illustrated by two practical numerical examples: a model of CD player and a model of the atmospheric storm track....

  11. Stochastic Frontier Models with Dependent Errors based on Normal and Exponential Margins || Modelos de frontera estocástica con errores dependientes basados en márgenes normal y exponencial

    Directory of Open Access Journals (Sweden)

    Gómez-Déniz, Emilio

    2017-06-01

    Full Text Available Following the recent work of Gómez-Déniz and Pérez-Rodríguez (2014), this paper extends the results obtained there to the normal-exponential distribution with dependence. Accordingly, the main aim of the present paper is to enhance stochastic production frontier and stochastic cost frontier modelling by proposing a bivariate distribution for dependent errors which allows us to nest the classical models. Closed-form expressions for the error term and technical efficiency are provided. An illustration using real data from the econometric literature is provided to show the applicability of the proposed model.

  12. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    Science.gov (United States)

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice. © RSNA, 2015.

  13. An exponential observer for the generalized Rossler chaotic system

    International Nuclear Information System (INIS)

    Sun, Y.-J.

    2009-01-01

    In this paper, the generalized Rossler chaotic system is considered and the state observation problem of such a system is investigated. Based on the time-domain approach, a state observer for the generalized Rossler chaotic system is developed to guarantee the global exponential stability of the resulting error system. Moreover, the guaranteed exponential convergence rate can be arbitrarily pre-specified. Finally, a numerical example is provided to illustrate the feasibility and effectiveness of the obtained result.

  14. Exponential Communication Complexity Advantage from Quantum Superposition of the Direction of Communication

    Science.gov (United States)

    Guérin, Philippe Allard; Feix, Adrien; Araújo, Mateus; Brukner, Časlav

    2016-09-01

    In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability.

  15. Reduction of low frequency error for SED36 and APS based HYDRA star trackers

    Science.gov (United States)

    Ouaknine, Julien; Blarre, Ludovic; Oddos-Marcel, Lionel; Montel, Johan; Julio, Jean-Marc

    2017-11-01

    In the framework of the CNES Pleiades satellite program, a reduction of the star tracker low-frequency error, which is the most penalizing error for satellite attitude control, was performed. For that purpose, the SED36 star tracker was developed, with a design based on the flight-qualified SED16/26. In this paper, the main features of the SED36 are first presented. Then, the process of reducing the low-frequency error is developed, particularly the optimization of the optical distortion calibration. The result is an attitude low-frequency error of 1.1" at 3 sigma along the transverse axes. The implementation of these improvements in HYDRA, the new multi-head APS star tracker developed by SODERN, is finally presented.

  16. Exponential characteristics spatial quadrature for discrete ordinates radiation transport in slab geometry

    International Nuclear Information System (INIS)

    Mathews, K.; Sjoden, G.; Minor, B.

    1994-01-01

    The exponential characteristic spatial quadrature for discrete ordinates neutral particle transport in slab geometry is derived and compared with current methods. It is similar to the linear characteristic (or, in slab geometry, the linear nodal) quadrature but differs by assuming an exponential distribution of the scattering source within each cell, S(x) = a exp(bx), whose parameters are root-solved to match the known (from the previous iteration) average and first moment of the source over the cell. Like the linear adaptive method, the exponential characteristic method is positive and nonlinear but more accurate and more readily extended to other cell shapes. The nonlinearity has not interfered with convergence. The authors introduce the "exponential moment functions," a generalization of the functions used by Walters in the linear nodal method, and use them to avoid numerical ill-conditioning. The method exhibits O(Δx⁴) truncation error on fine enough meshes; the error is insensitive to mesh size for coarse meshes. In a shielding problem, it is accurate to 10% using 16-mfp-thick cells; conventional methods err by 8 to 15 orders of magnitude. The exponential characteristic method is computationally more costly per cell than current methods but can be accurate with very thick cells, leading to increased computational efficiency on appropriate problems.
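The root-solving step described above, matching the cell-averaged source and its first moment with S(x) = a exp(bx), can be sketched on a unit cell. This is a minimal illustration; the function names, the [0, 1] cell normalization, and the use of bisection are assumptions for this sketch, not details from the paper:

```python
import math

def moment_ratio(b):
    # M1/M0 for S(x) = exp(b*x) on [0, 1]; this is the mean of x under an
    # exponential weight, so it increases monotonically in b (limit 1/2 at b = 0)
    if abs(b) < 1e-8:
        return 0.5 + b / 12.0
    return ((b - 1.0) * math.exp(b) + 1.0) / (b * (math.exp(b) - 1.0))

def fit_exponential_source(m0, m1, lo=-50.0, hi=50.0, tol=1e-12):
    """Root-solve a, b in S(x) = a*exp(b*x) so that its zeroth and first
    moments over the unit cell match m0 and m1. Assumes m1/m0 lies in the
    range of moment_ratio over [lo, hi]."""
    target = m1 / m0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if moment_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    b = 0.5 * (lo + hi)
    # M0 = a*(exp(b) - 1)/b  =>  a = m0*b/(exp(b) - 1)
    a = m0 * b / (math.exp(b) - 1.0) if abs(b) > 1e-8 else m0
    return a, b
```

Because the moment ratio is monotone in b, bisection is a robust, if not the fastest, choice for this nonlinear root-solve.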

  17. Is Radioactive Decay Really Exponential?

    OpenAIRE

    Aston, Philip J.

    2012-01-01

    Radioactive decay of an unstable isotope is widely believed to be exponential. This view is supported by experiments on rapidly decaying isotopes but is more difficult to verify for slowly decaying isotopes. The decay of 14C can be calibrated over a period of 12,550 years by comparing radiocarbon dates with dates obtained from dendrochronology. It is well known that this approach shows that radiocarbon dates of over 3,000 years are in error, which is generally attributed to past variation in ...

  18. Arima model and exponential smoothing method: A comparison

    Science.gov (United States)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study shows the comparison between the Autoregressive Moving Average (ARIMA) model and the Exponential Smoothing Method in making a prediction. The comparison is focused on the ability of both methods in making forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, the data from The Price of Crude Palm Oil (RM/tonne), Exchange Rates of Ringgit Malaysia (RM) in comparison to Great Britain Pound (GBP) and also The Price of SMR 20 Rubber Type (cents/kg) with three different time series are used in the comparison process. Then, forecasting accuracy of each model is measured by examining the prediction error produced, using Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the time series for Exchange Rates. On the contrary, the Exponential Smoothing Method can produce better forecasting for Exchange Rates, whose time series has a narrow range from one point to another, while it cannot produce a better prediction for a longer forecasting period.
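The three accuracy measures used in this comparison are simple to compute directly; a minimal sketch (the function name and argument order are illustrative, not from the paper):

```python
def forecast_errors(actual, predicted):
    """MSE, MAPE (in percent), and MAD for a pair of equal-length series,
    the three criteria used to compare ARIMA and exponential smoothing."""
    n = len(actual)
    errors = [a - p for a, p in zip(actual, predicted)]
    mse = sum(e * e for e in errors) / n          # Mean Squared Error
    mape = 100.0 * sum(abs(e / a) for e, a in zip(errors, actual)) / n
    mad = sum(abs(e) for e in errors) / n         # Mean Absolute Deviation
    return mse, mape, mad
```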

  19. An error reduction algorithm to improve lidar turbulence estimates for wind energy

    Directory of Open Access Journals (Sweden)

    J. F. Newman

    2017-02-01

    Full Text Available Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability. 
The accuracy of machine

  20. Real-Time Exponential Curve Fits Using Discrete Calculus

    Science.gov (United States)

    Rowe, Geoffrey

    2010-01-01

    An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
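A non-iterative fit of y = Ae^(Bt) + C in the spirit described above can be sketched for equally spaced samples: differencing removes the constant C, the ratio of successive differences isolates B, and an ordinary linear least-squares fit then recovers A and C. This is an illustrative reconstruction under those stated assumptions, not the author's exact algorithm:

```python
import math

def fit_exp_offset(t, y):
    """Non-iterative fit of y = A*exp(B*t) + C for equally spaced t.
    Differences d[i] = y[i+1] - y[i] cancel C and are themselves exponential
    in t, so the ratio of neighbouring differences gives exp(B*dt)."""
    dt = t[1] - t[0]
    d = [y[i + 1] - y[i] for i in range(len(y) - 1)]
    ratios = [d[i + 1] / d[i] for i in range(len(d) - 1)]
    B = math.log(sum(ratios) / len(ratios)) / dt
    # linear least squares y ~ A*u + C with u = exp(B*t)
    u = [math.exp(B * ti) for ti in t]
    n = len(t)
    su, sy = sum(u), sum(y)
    suu = sum(x * x for x in u)
    suy = sum(x * v for x, v in zip(u, y))
    A = (n * suy - su * sy) / (n * suu - su * su)
    C = (sy - A * su) / n
    return A, B, C
```

With noisy data the mean of the difference ratios is a crude estimator of exp(B*dt); the sketch is meant only to show how differencing turns the nonlinear problem into a linear one.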

  1. A quantification of the hazards of fitting sums of exponentials to noisy data

    International Nuclear Information System (INIS)

    Bromage, G.E.

    1983-06-01

    The ill-conditioned nature of sums-of-exponentials analyses is confirmed and quantified, using synthetic noisy data. In particular, the magnification of errors is plotted for various two-exponential models, to illustrate its dependence on the ratio of decay constants, and on the ratios of amplitudes of the contributing terms. On moving from two- to three-exponential models, the condition deteriorates badly. It is also shown that the use of 'direct' Prony-type analyses (rather than general iterative nonlinear optimisation) merely aggravates the condition. (author)
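The dependence of the conditioning on the ratio of decay constants can be illustrated with the Gram (normal-equations) matrix of a two-exponential model: its condition number, which governs how data perturbations are magnified into amplitude errors, explodes as the two decay constants approach each other. A sketch; the specific constants and time grid below are arbitrary choices for illustration, not the synthetic data of the paper:

```python
import math

def gram_condition(l1, l2, t):
    """Condition number of the 2x2 normal-equations (Gram) matrix for the
    model a1*exp(-l1*t) + a2*exp(-l2*t) with fixed decay constants."""
    f1 = [math.exp(-l1 * ti) for ti in t]
    f2 = [math.exp(-l2 * ti) for ti in t]
    g11 = sum(x * x for x in f1)
    g12 = sum(x * y for x, y in zip(f1, f2))
    g22 = sum(y * y for y in f2)
    # eigenvalues of a symmetric 2x2 matrix from trace and determinant
    tr, det = g11 + g22, g11 * g22 - g12 * g12
    lo = (tr - math.sqrt(tr * tr - 4.0 * det)) / 2.0
    hi = (tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0
    return hi / lo

t = [0.1 * i for i in range(100)]
well = gram_condition(1.0, 5.0, t)   # well-separated decay constants
ill = gram_condition(1.0, 1.1, t)    # nearly equal decay constants: ill-conditioned
```

Running this, the condition number for the nearly equal pair is orders of magnitude larger than for the well-separated pair, which is exactly the error magnification the abstract quantifies.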

  2. Exponentially convergent state estimation for delayed switched recurrent neural networks.

    Science.gov (United States)

    Ahn, Choon Ki

    2011-11-01

    This paper deals with the delay-dependent exponentially convergent state estimation problem for delayed switched neural networks. A set of delay-dependent criteria is derived under which the resulting estimation error system is exponentially stable. It is shown that the gain matrix of the proposed state estimator is characterised in terms of the solution to a set of linear matrix inequalities (LMIs), which can be checked readily by using some standard numerical packages. An illustrative example is given to demonstrate the effectiveness of the proposed state estimator.

  3. Removal of round off errors in the matrix exponential method for solving the heavy nuclide chain

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Noh, Jae Man; Joo, Hyung Kook

    2005-01-01

    Many nodal codes for core simulation adopt the micro-depletion procedure for the depletion analysis. Unlike the macro-depletion procedure, the micro-depletion procedure uses micro cross sections and number densities of important nuclides to generate the macro cross section of a spatial calculational node. Therefore, it needs to solve the chain equations of the nuclides of interest to obtain their number densities. There are several methods, such as the matrix exponential method (MEM) and the chain linearization method (CLM), for solving the nuclide chain equations. The former solves the chain equations exactly, even when cycles arising from alpha decay exist in the chain, while the latter solves the chain only approximately when such cycles exist. The former has another advantage over the latter. Many nodal codes for depletion analysis, such as MASTER, solve only hard-coded nuclide chains with the CLM. Therefore, if we want to extend the chain by adding more nuclides, we have to modify the source code. In contrast, in the MEM we can extend the chain just by modifying the input, because it is easy to implement an MEM solver for an arbitrary nuclide chain. In spite of these advantages of the MEM, many nodal codes adopt chain linearization because the MEM suffers a large round-off error when the flux level is very high or when short-lived or strong-absorber nuclides exist in the chain. In this paper, we propose a new technique to remove the round-off errors in the MEM and compare the performance of the two methods.
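For a simple two-nuclide chain, the matrix exponential method can be sketched with a scaling-and-squaring Taylor evaluation and checked against the analytic Bateman solution. This is an illustrative toy, not the authors' implementation or their round-off removal technique:

```python
import math

def mat_exp(A, n_squarings=20, terms=20):
    """exp(A) for a small matrix by scaling and squaring: evaluate the Taylor
    series of exp(A / 2**n_squarings), then square the result repeatedly.
    Scaling keeps the series well behaved and limits round-off growth."""
    k = len(A)
    s = 2 ** n_squarings
    B = [[a / s for a in row] for row in A]
    E = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(k)]
    T = [row[:] for row in E]
    for n in range(1, terms):
        T = [[sum(T[i][m] * B[m][j] for m in range(k)) / n
              for j in range(k)] for i in range(k)]          # T = B**n / n!
        E = [[E[i][j] + T[i][j] for j in range(k)] for i in range(k)]
    for _ in range(n_squarings):
        E = [[sum(E[i][m] * E[m][j] for m in range(k))
              for j in range(k)] for i in range(k)]
    return E

# two-nuclide chain N1 -> N2 -> (stable): dN/dt = A_rate * N, N0 = (1, 0)
l1, l2, t = 0.3, 0.05, 10.0
E = mat_exp([[-l1 * t, 0.0], [l1 * t, -l2 * t]])
n1, n2 = E[0][0], E[1][0]   # first column of exp(A_rate * t) times N0 = (1, 0)

# analytic Bateman solution for comparison
n1_ref = math.exp(-l1 * t)
n2_ref = l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))
```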

  4. Exponentially-convergent Monte Carlo for the 1-D transport equation

    International Nuclear Information System (INIS)

    Peterson, J. R.; Morel, J. E.; Ragusa, J. C.

    2013-01-01

    We define a new exponentially-convergent Monte Carlo method for solving the one-speed 1-D slab-geometry transport equation. This method is based upon the use of a linear discontinuous finite-element trial space in space and direction to represent the transport solution. A space-direction h-adaptive algorithm is employed to restore exponential convergence after stagnation occurs due to inadequate trial-space resolution. This method uses jumps in the solution at cell interfaces as an error indicator. Computational results are presented demonstrating the efficacy of the new approach. (authors)

  5. On the performance of dual-hop mixed RF/FSO wireless communication system in urban area over aggregated exponentiated Weibull fading channels with pointing errors

    Science.gov (United States)

    Wang, Yue; Wang, Ping; Liu, Xiaoxia; Cao, Tian

    2018-03-01

    The performance of decode-and-forward dual-hop mixed radio frequency / free-space optical system in urban area is studied. The RF link is modeled by the Nakagami-m distribution and the FSO link is described by the composite exponentiated Weibull (EW) fading channels with nonzero boresight pointing errors (NBPE). For comparison, the ABER results without pointing errors (PE) and those with zero boresight pointing errors (ZBPE) are also provided. The closed-form expression for the average bit error rate (ABER) in RF link is derived with the help of hypergeometric function, and that in FSO link is obtained by Meijer's G and generalized Gauss-Laguerre quadrature functions. Then, the end-to-end ABERs with binary phase shift keying modulation are achieved on the basis of the computed ABER results of RF and FSO links. The end-to-end ABER performance is further analyzed with different Nakagami-m parameters, turbulence strengths, receiver aperture sizes and boresight displacements. The result shows that with ZBPE and NBPE considered, FSO link suffers a severe ABER degradation and becomes the dominant limitation of the mixed RF/FSO system in urban area. However, aperture averaging can bring significant ABER improvement of this system. Monte Carlo simulation is provided to confirm the validity of the analytical ABER expressions.

  6. Image pre-filtering for measurement error reduction in digital image correlation

    Science.gov (United States)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward high-frequency component of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply a pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All the four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error and Butterworth filter produces the lowest random error among them. By using Wiener filter with over-estimated noise power, the random error can be reduced but the resultant systematic error is higher than that of low-pass filters. In general, Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. Binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. While used together with pre-filtering, B-spline interpolator produces lower systematic error than bicubic interpolator and similar level of the random
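A binomial pre-filter of the kind tested here is straightforward to construct: repeated convolution of [1, 1] with itself yields the kernel, which is then applied to the image before correlation. A 1-D sketch with replicated edges; the filter order and the edge handling are illustrative assumptions:

```python
def binomial_kernel(order):
    """Normalized 1-D binomial kernel, e.g. order 2 -> [1, 2, 1] / 4.
    Repeated convolution of [1, 1] approximates a Gaussian low-pass."""
    k = [1.0]
    for _ in range(order):
        k = [a + b for a, b in zip([0.0] + k, k + [0.0])]
    s = sum(k)
    return [v / s for v in k]

def filter_image_row(row, kernel):
    """Convolve one image row with the kernel (edge pixels replicated),
    suppressing high frequencies before sub-pixel correlation."""
    r = len(kernel) // 2
    padded = [row[0]] * r + list(row) + [row[-1]] * r
    return [sum(kernel[j] * padded[i + j] for j in range(len(kernel)))
            for i in range(len(row))]
```

Applying the same kernel along rows and then columns gives the separable 2-D version used in practice.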

  7. Exponentially asymptotical synchronization in uncertain complex dynamical networks with time delay

    Energy Technology Data Exchange (ETDEWEB)

    Luo Qun; Yang Han; Li Lixiang; Yang Yixian [Information Security Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876 (China); Han Jiangxue, E-mail: luoqun@bupt.edu.c [National Engineering Laboratory for Disaster Backup and Recovery, Beijing University of Posts and Telecommunications, Beijing 100876 (China)

    2010-12-10

    Over the past decade, complex dynamical network synchronization has attracted more and more attention and important developments have been made. In this paper, we explore the scheme of globally exponentially asymptotical synchronization in complex dynamical networks with time delay. Based on Lyapunov stability theory and through defining the error function between adjacent nodes, four novel adaptive controllers are designed under four situations where the Lipschitz constants of the state function in nodes are known or unknown and the network structure is certain or uncertain, respectively. These controllers could not only globally asymptotically synchronize all nodes in networks, but also ensure that the error functions do not exceed the pre-scheduled exponential function. Finally, simulations of the synchronization among the chaotic system in the small-world and scale-free network structures are presented, which prove the effectiveness and feasibility of our controllers.
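The guarantee stated above, that the error functions never exceed a pre-scheduled exponential, can be written compactly; the symbols below are illustrative notation assumed for this sketch, not taken from the paper:

```latex
\|e_{ij}(t)\| \le M\, e^{-\alpha t}, \qquad t \ge 0,
```

where \(e_{ij}(t)\) is the error function between adjacent nodes \(i\) and \(j\), \(M > 0\) depends on the initial conditions, and \(\alpha > 0\) is the pre-scheduled exponential decay rate enforced by the adaptive controllers.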

  8. The 3 faces of clinical reasoning: Epistemological explorations of disparate error reduction strategies.

    Science.gov (United States)

    Monteiro, Sandra; Norman, Geoff; Sherbino, Jonathan

    2018-03-13

    There is general consensus that clinical reasoning involves 2 stages: a rapid stage where 1 or more diagnostic hypotheses are advanced and a slower stage where these hypotheses are tested or confirmed. The rapid hypothesis generation stage is considered inaccessible for analysis or observation. Consequently, recent research on clinical reasoning has focused specifically on improving the accuracy of the slower, hypothesis confirmation stage. Three perspectives have developed in this line of research, and each proposes different error reduction strategies for clinical reasoning. This paper considers these 3 perspectives and examines the underlying assumptions. Additionally, this paper reviews the evidence, or lack thereof, behind each class of error reduction strategies. The first perspective takes an epidemiological stance, appealing to the benefits of incorporating population data and evidence-based medicine in everyday clinical reasoning. The second builds on the heuristic and bias research programme, appealing to a special class of dual process reasoning models that theorizes a rapid, error-prone cognitive process for problem solving alongside a slower, more logical cognitive process capable of correcting those errors. Finally, the third perspective borrows from an exemplar model of categorization that explicitly relates clinical knowledge and experience to diagnostic accuracy. © 2018 John Wiley & Sons, Ltd.

  9. Reduction in pediatric identification band errors: a quality collaborative.

    Science.gov (United States)

    Phillips, Shannon Connor; Saysana, Michele; Worley, Sarah; Hain, Paul D

    2012-06-01

    Accurate and consistent placement of a patient identification (ID) band is used in health care to reduce errors associated with patient misidentification. Multiple safety organizations have devoted time and energy to improving patient ID, but no multicenter improvement collaboratives have shown scalability of previously successful interventions. We hoped to reduce by half the pediatric patient ID band error rate, defined as absent, illegible, or inaccurate ID band, across a quality improvement learning collaborative of hospitals in 1 year. On the basis of a previously successful single-site intervention, we conducted a self-selected 6-site collaborative to reduce ID band errors in heterogeneous pediatric hospital settings. The collaborative had 3 phases: preparatory work and employee survey of current practice and barriers, data collection (ID band failure rate), and intervention driven by data and collaborative learning to accelerate change. The collaborative audited 11377 patients for ID band errors between September 2009 and September 2010. The ID band failure rate decreased from 17% to 4.1% (77% relative reduction). Interventions including education of frontline staff regarding correct ID bands as a safety strategy; a change to softer ID bands, including "luggage tag" type ID bands for some patients; and partnering with families and patients through education were applied at all institutions. Over 13 months, a collaborative of pediatric institutions significantly reduced the ID band failure rate. This quality improvement learning collaborative demonstrates that safety improvements tested in a single institution can be disseminated to improve quality of care across large populations of children.

  10. Survival analysis approach to account for non-exponential decay rate effects in lifetime experiments

    International Nuclear Information System (INIS)

    Coakley, K.J.; Dewey, M.S.; Huber, M.G.; Huffer, C.R.; Huffman, P.R.; Marley, D.E.; Mumm, H.P.; O'Shaughnessy, C.M.; Schelhammer, K.W.; Thompson, A.K.; Yue, A.T.

    2016-01-01

    In experiments that measure the lifetime of trapped particles, in addition to loss mechanisms with exponential survival probability functions, particles can be lost by mechanisms with non-exponential survival probability functions. Failure to account for such loss mechanisms produces systematic measurement error and associated systematic uncertainties in these measurements. In this work, we develop a general competing risks survival analysis method to account for the joint effect of loss mechanisms with either exponential or non-exponential survival probability functions, and a method to quantify the size of systematic effects and associated uncertainties for lifetime estimates. As a case study, we apply our survival analysis formalism and method to the Ultra Cold Neutron lifetime experiment at NIST. In this experiment, neutrons can escape a magnetic trap before they decay due to a wall loss mechanism with an associated non-exponential survival probability function.
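The systematic effect described, a bias in the fitted lifetime when a non-exponential loss channel is ignored, can be illustrated numerically. The wall-loss survival function and the two-point log-slope estimator below are hypothetical choices for illustration, not the NIST analysis:

```python
import math

def observed_survival(t, tau, wall):
    """Joint survival under competing risks: exponential decay with
    lifetime tau times a (possibly non-exponential) wall-loss survival."""
    return math.exp(-t / tau) * wall(t)

def wall(t):
    # hypothetical non-exponential wall-loss survival function
    return math.exp(-((t / 2000.0) ** 2))

tau_true = 880.0   # illustrative trap lifetime, seconds

# naive single-exponential estimate from the log-slope between two times,
# ignoring the wall-loss channel entirely
t1, t2 = 100.0, 900.0
s1 = observed_survival(t1, tau_true, wall)
s2 = observed_survival(t2, tau_true, wall)
tau_apparent = (t2 - t1) / math.log(s1 / s2)
bias = tau_true - tau_apparent   # systematic error from the ignored loss
```

The apparent lifetime comes out well below the true one, which is the kind of systematic shift the competing risks formalism is designed to quantify and remove.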

  11. Survival analysis approach to account for non-exponential decay rate effects in lifetime experiments

    Energy Technology Data Exchange (ETDEWEB)

    Coakley, K.J., E-mail: kevincoakley@nist.gov [National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305 (United States); Dewey, M.S.; Huber, M.G. [National Institute of Standards and Technology, 100 Bureau Drive, Stop 8461, Gaithersburg, MD 20899 (United States); Huffer, C.R.; Huffman, P.R. [North Carolina State University, 2401 Stinson Drive, Box 8202, Raleigh, NC 27695 (United States); Triangle Universities Nuclear Laboratory, 116 Science Drive, Box 90308, Durham, NC 27708 (United States); Marley, D.E. [National Institute of Standards and Technology, 100 Bureau Drive, Stop 8461, Gaithersburg, MD 20899 (United States); North Carolina State University, 2401 Stinson Drive, Box 8202, Raleigh, NC 27695 (United States); Mumm, H.P. [National Institute of Standards and Technology, 100 Bureau Drive, Stop 8461, Gaithersburg, MD 20899 (United States); O' Shaughnessy, C.M. [University of North Carolina at Chapel Hill, 120 E. Cameron Ave., CB #3255, Chapel Hill, NC 27599 (United States); Triangle Universities Nuclear Laboratory, 116 Science Drive, Box 90308, Durham, NC 27708 (United States); Schelhammer, K.W. [North Carolina State University, 2401 Stinson Drive, Box 8202, Raleigh, NC 27695 (United States); Triangle Universities Nuclear Laboratory, 116 Science Drive, Box 90308, Durham, NC 27708 (United States); Thompson, A.K.; Yue, A.T. [National Institute of Standards and Technology, 100 Bureau Drive, Stop 8461, Gaithersburg, MD 20899 (United States)

    2016-03-21

    In experiments that measure the lifetime of trapped particles, in addition to loss mechanisms with exponential survival probability functions, particles can be lost by mechanisms with non-exponential survival probability functions. Failure to account for such loss mechanisms produces systematic measurement error and associated systematic uncertainties in these measurements. In this work, we develop a general competing risks survival analysis method to account for the joint effect of loss mechanisms with either exponential or non-exponential survival probability functions, and a method to quantify the size of systematic effects and associated uncertainties for lifetime estimates. As a case study, we apply our survival analysis formalism and method to the Ultra Cold Neutron lifetime experiment at NIST. In this experiment, neutrons can escape a magnetic trap before they decay due to a wall loss mechanism with an associated non-exponential survival probability function.

  12. Reduction of sources of error and simplification of the Carbon-14 urea breath test

    International Nuclear Information System (INIS)

    Bellon, M.S.

    1997-01-01

    Full text: Carbon-14 urea breath testing is established in the diagnosis of H. pylori infection. The aim of this study was to investigate possible further simplification and to identify error sources in the 14C urea kit used extensively at the Royal Adelaide Hospital. Thirty-six patients with validated H. pylori status were tested with breath samples taken at 10, 15, and 20 min. Using the single sample value at 15 min, there was no change in the diagnostic category. Reduction of errors in analysis depends on attention to the following details: stability of the absorption solution (now > 2 months); compatibility of the scintillation cocktail and absorption solution (with particular regard to photoluminescence and chemiluminescence); reduction in chemical quenching (moisture reduction); understanding of the counting hardware and its relevance; and appropriate response to deviations in quality assurance. With this experience, we are confident of the performance and reliability of the RAPID-14 urea breath test kit now available commercially.

  13. Forecasting Financial Extremes: A Network Degree Measure of Super-Exponential Growth.

    Directory of Open Access Journals (Sweden)

    Wanfeng Yan

    Full Text Available Investors in the stock market are typically greedy during bull markets and fearful during bear markets. This greed or fear spreads quickly across investors, a phenomenon known as the herding effect, which often drives rapid movements in stock prices. During such market regimes, stock prices change at a super-exponential rate and are normally followed by a trend reversal that corrects the preceding overreaction. In this paper, we construct an indicator of the magnitude of the super-exponential growth of stock prices by measuring the degree of the price network generated from the price time series. Twelve major international stock indices have been investigated. Error diagram tests show that this new indicator has strong predictive power for financial extremes, both peaks and troughs. By varying the parameters used to construct the error diagram, we show that the predictive power is very robust. The new indicator outperforms the LPPL pattern recognition indicator.

  14. Forecasting Financial Extremes: A Network Degree Measure of Super-Exponential Growth.

    Science.gov (United States)

    Yan, Wanfeng; van Tuyll van Serooskerken, Edgar

    2015-01-01

    Investors in the stock market are typically greedy during bull markets and fearful during bear markets. This greed or fear spreads quickly across investors, a phenomenon known as the herding effect, which often drives rapid movements in stock prices. During such market regimes, stock prices change at a super-exponential rate and are normally followed by a trend reversal that corrects the preceding overreaction. In this paper, we construct an indicator of the magnitude of the super-exponential growth of stock prices by measuring the degree of the price network generated from the price time series. Twelve major international stock indices have been investigated. Error diagram tests show that this new indicator has strong predictive power for financial extremes, both peaks and troughs. By varying the parameters used to construct the error diagram, we show that the predictive power is very robust. The new indicator outperforms the LPPL pattern recognition indicator.
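    The abstract only names the degree of a price network; as a rough illustration of why a degree measure can flag super-exponential growth, the toy sketch below (an assumption-laden stand-in, not the authors' exact construction) builds a natural visibility graph on log-prices. A linear log-price (plain exponential growth) yields a mean degree near 2, while a convex log-price (super-exponential growth) yields a nearly complete graph:

```python
def nvg_degrees(y):
    """Natural visibility graph: nodes i and j are linked if every point
    between them lies strictly below the line joining (i, y[i]) and (j, y[j])."""
    n = len(y)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if all(y[k] < y[i] + (y[j] - y[i]) * (k - i) / (j - i)
                   for k in range(i + 1, j)):
                deg[i] += 1
                deg[j] += 1
    return deg

t = list(range(30))
# Integer-valued linear log-price (exponential price) avoids float ties
# on the visibility line; the convex series mimics super-exponential growth.
log_exp = [5 * x for x in t]
log_super = [0.05 * x ** 1.5 for x in t]

mean_deg_exp = sum(nvg_degrees(log_exp)) / len(t)
mean_deg_super = sum(nvg_degrees(log_super)) / len(t)
print(mean_deg_exp, mean_deg_super)
```

    Only the qualitative contrast matters here: convexity of the log-price, the signature of faster-than-exponential growth, shows up directly as a much higher mean degree.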

  15. Critical mutation rate has an exponential dependence on population size in haploid and diploid populations.

    Directory of Open Access Journals (Sweden)

    Elizabeth Aston

    Full Text Available Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is 'survival of the flattest', and has been observed in digital organisms, in theory, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate (at which individuals with greater robustness to mutation are favoured over individuals with greater fitness), and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This contrasts with previous studies, which found the critical mutation rate to be independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and the error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of the critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. The effect of population size is particularly strong in small populations with 100 individuals or less; the

  16. The decline and fall of Type II error rates

    Science.gov (United States)

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
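    For a concrete illustration (a textbook one-sided z-test rather than the general linear models of the paper), the Type II error is β(n) = Φ(z₁₋α − δ√n) for effect size δ in standard-deviation units, and it shrinks faster than geometrically as n grows:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def type2_error(n, delta=0.5, z_crit=1.6448536269514722):
    """Type II error of a one-sided z-test at alpha = 0.05 with effect
    size delta (in standard-deviation units) and sample size n."""
    return normal_cdf(z_crit - delta * math.sqrt(n))

betas = {n: type2_error(n) for n in (10, 20, 40, 80)}
for n, beta in betas.items():
    print(f"n = {n:3d}  Type II error = {beta:.6f}")
```

    Each doubling of n cuts the Type II error by an ever larger factor, which is the faster-than-geometric decline the abstract describes.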

  17. Research trend on human error reduction

    International Nuclear Information System (INIS)

    Miyaoka, Sadaoki

    1990-01-01

    Human error has been a problem in all industries. In 1988, the Bureau of Mines, Department of the Interior, USA, carried out a worldwide survey on human error in all industries in relation to fatal accidents in mines. The results varied with the method of collecting data, but the proportion of total accidents attributed to human error spanned the wide range of 20∼85%, averaging 35%. The rate of occurrence of accidents and troubles in Japanese nuclear power stations is shown; the rate of occurrence of human error is 0∼0.5 cases/reactor-year and has not varied much. The proportion of the total attributable to human error has therefore tended to increase, and reducing human error has become important for lowering the future rate of accidents and troubles. After the TMI accident in 1979 in the USA, research on the man-machine interface became active, and after the Chernobyl accident in 1986 in the USSR, the problems of organization and management have been studied. In Japan, 'Safety 21' was drawn up by the Advisory Committee for Energy, and the annual reports on nuclear safety also pointed out the importance of human factors. The state of research on human factors in Japan and abroad and three targets for reducing human error are reported. (K.I.)

  18. A Time-Independent Born-Oppenheimer Approximation with Exponentially Accurate Error Estimates

    CERN Document Server

    Hagedorn, G A

    2004-01-01

    We consider a simple molecular-type quantum system in which the nuclei have one degree of freedom and the electrons have two levels. The Hamiltonian has the form \[ H(\epsilon) = -\frac{\epsilon^4}{2}\,\frac{\partial^2}{\partial y^2} + h(y), \] where $h(y)$ is a $2\times 2$ real symmetric matrix. Near a local minimum of an electron level ${\cal E}(y)$ that is not at a level crossing, we construct quasimodes that are exponentially accurate in the square of the Born-Oppenheimer parameter $\epsilon$ by optimal truncation of the Rayleigh-Schr\"odinger series. That is, we construct $E_\epsilon$ and $\Psi_\epsilon$ such that $\|\Psi_\epsilon\| = O(1)$ and \[ \|(H(\epsilon) - E_\epsilon)\,\Psi_\epsilon\| \le \Lambda\,\exp(-\Gamma/\epsilon^2) \] for some $\Gamma > 0$.

  19. Chapter 3: The analysis of exponential experiments

    International Nuclear Information System (INIS)

    Brown, G.; Moore, P.F.G.; Richmond, R.

    1963-01-01

    A description is given of the methods used by the BICEP group for the analysis of exponential experiments on graphite-moderated natural uranium lattices. These differ in some respects from the methods formerly employed at A.E.R.E. and have resulted in a reduction by a factor of four in the time taken to carry out and analyse an experiment. (author)

  20. Extended Poisson Exponential Distribution

    Directory of Open Access Journals (Sweden)

    Anum Fatima

    2015-09-01

    Full Text Available A new mixture of the Modified Exponential (ME) and Poisson distributions is introduced in this paper. Taking the maximum of a sample of Modified Exponential random variables, where the sample size follows a zero-truncated Poisson distribution, we derive the new distribution, named the Extended Poisson Exponential distribution. This distribution possesses both increasing and decreasing failure rates. The Poisson-Exponential, Modified Exponential and Exponential distributions are special cases of this distribution. We also investigate some mathematical properties of the distribution, along with information entropies and order statistics. The parameters are estimated by maximum likelihood. Finally, we illustrate a real data application of our distribution.
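    The construction can be checked numerically. In the sketch below a plain exponential is used as a stand-in for the Modified Exponential baseline (whose exact form is not given in this record); for a maximum over a zero-truncated Poisson number of draws with baseline CDF G, the resulting CDF is F(x) = (exp(λG(x)) − 1)/(exp(λ) − 1):

```python
import math, random

random.seed(1)
LAM, THETA = 2.0, 1.0   # Poisson parameter and stand-in exponential rate

def zt_poisson(lam):
    """Zero-truncated Poisson sample: inversion sampler, rejecting zeros."""
    while True:
        n, term = 0, math.exp(-lam)
        cum, u = term, random.random()
        while u > cum:
            n += 1
            term *= lam / n
            cum += term
        if n >= 1:
            return n

def sample_max(lam, theta):
    """Maximum of a zero-truncated-Poisson-sized exponential sample."""
    return max(random.expovariate(theta) for _ in range(zt_poisson(lam)))

samples = [sample_max(LAM, THETA) for _ in range(50_000)]

def analytic_cdf(x):
    g = 1.0 - math.exp(-THETA * x)   # baseline CDF G(x), exponential stand-in
    return (math.exp(LAM * g) - 1.0) / (math.exp(LAM) - 1.0)

x0 = 1.0
empirical = sum(s <= x0 for s in samples) / len(samples)
print(empirical, analytic_cdf(x0))
```

    The empirical CDF of the simulated maxima matches the closed-form mixture CDF, confirming the max-over-truncated-Poisson construction.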

  1. Quantitative shearography: error reduction by using more than three measurement channels

    International Nuclear Information System (INIS)

    Charrett, Tom O. H.; Francis, Daniel; Tatam, Ralph P.

    2011-01-01

    Shearography is a noncontact optical technique used to measure surface displacement derivatives. Full surface strain characterization can be achieved using shearography configurations employing at least three measurement channels. Each measurement channel is sensitive to a single displacement gradient component defined by its sensitivity vector. A matrix transformation is then required to convert the measured components to the orthogonal displacement gradients required for quantitative strain measurement. This transformation, conventionally performed using three measurement channels, amplifies any errors present in the measurement. This paper investigates the use of additional measurement channels using the results of a computer model and an experimental shearography system. Results are presented showing that the addition of a fourth channel can reduce the errors in the computed orthogonal components by up to 33% and that, by using 10 channels, reductions of around 45% should be possible.
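    The core computation, converting N ≥ 3 measured components into orthogonal gradients, is an overdetermined least-squares solve, and the benefit of extra channels can be sketched directly. The sensitivity vectors, noise level, and gradient values below are hypothetical, not taken from the paper:

```python
import math, random

random.seed(2)

def solve3(A, b):
    """Cramer's rule for a 3x3 linear system A x = b."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    out = []
    for col in range(3):
        m = [row[:] for row in A]
        for r in range(3):
            m[r][col] = b[r]
        out.append(det(m) / d)
    return out

def recover(sens, meas):
    """Least-squares conversion of N >= 3 channel measurements into the
    three orthogonal displacement-gradient components (normal equations)."""
    AtA = [[sum(s[i] * s[j] for s in sens) for j in range(3)] for i in range(3)]
    Atb = [sum(s[i] * m for s, m in zip(sens, meas)) for i in range(3)]
    return solve3(AtA, Atb)

true_grad = [1.0, -0.5, 0.25]   # hypothetical orthogonal gradient components
sens = [[1, 0, 0.3], [0, 1, 0.3], [0.5, 0.5, 1.0], [1, 1, 0.5]]  # made-up sensitivity vectors
SIGMA = 0.01                    # per-channel measurement noise

def rms_error(n_channels, trials=2000):
    total = 0.0
    for _ in range(trials):
        meas = [sum(c * g for c, g in zip(sens[k], true_grad)) + random.gauss(0, SIGMA)
                for k in range(n_channels)]
        est = recover(sens[:n_channels], meas)
        total += sum((e - g) ** 2 for e, g in zip(est, true_grad))
    return math.sqrt(total / trials)

err3, err4 = rms_error(3), rms_error(4)
print(err3, err4)
```

    Adding the fourth channel shrinks the RMS error of the recovered gradients, qualitatively in line with the reductions reported above.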

  2. Filtering of Discrete-Time Switched Neural Networks Ensuring Exponential Dissipative and $l_{2}$ - $l_{\\infty }$ Performances.

    Science.gov (United States)

    Choi, Hyun Duck; Ahn, Choon Ki; Karimi, Hamid Reza; Lim, Myo Taeg

    2017-10-01

    This paper studies delay-dependent exponential dissipative and l2-l∞ filtering problems for discrete-time switched neural networks (DSNNs) including time-delayed states. By introducing a novel discrete-time inequality, which is a discrete-time version of the continuous-time Wirtinger-type inequality, we establish new sets of linear matrix inequality (LMI) criteria such that discrete-time filtering error systems are exponentially stable with guaranteed performances in the exponential dissipative and l2-l∞ senses. The design of the desired exponential dissipative and l2-l∞ filters for DSNNs can be achieved by solving the proposed sets of LMI conditions. Via numerical simulation results, we show the validity of the desired discrete-time filter design approach.

  3. Exponential time-dependent perturbation theory in rotationally inelastic scattering

    International Nuclear Information System (INIS)

    Cross, R.J.

    1983-01-01

    An exponential form of time-dependent perturbation theory (the Magnus approximation) is developed for rotationally inelastic scattering. A phase-shift matrix is calculated as an integral in time over the anisotropic part of the potential. The trajectory used for this integral is specified by the diagonal part of the potential matrix, the arithmetic average of the initial and final velocities, and the average orbital angular momentum. The exponential of the phase-shift matrix gives the scattering matrix and the various cross sections. A special representation is used in which the orbital angular momentum is either treated classically or may be frozen out to yield the orbital sudden approximation. Calculations on Ar+N2 and Ar+TlF show that the theory generally gives very good agreement with accurate calculations, even where the orbital sudden approximation (coupled-states) results are seriously in error.
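    The structural point that exponentiating the phase-shift matrix yields the scattering matrix can be sketched for a 2x2 case: because the phase-shift matrix is real symmetric (Hermitian), S = exp(iη) is automatically unitary, so flux is conserved. The matrix entries below are arbitrary placeholders, not a physical potential:

```python
import cmath, math

def s_matrix(eta):
    """S = exp(i * eta) for a real symmetric 2x2 phase-shift matrix eta,
    computed via an explicit eigendecomposition of eta."""
    a, b, c = eta[0][0], eta[0][1], eta[1][1]
    theta = 0.5 * math.atan2(2.0 * b, a - c)     # eigenvector rotation angle
    d = math.hypot(a - c, 2.0 * b)
    l1, l2 = (a + c + d) / 2.0, (a + c - d) / 2.0  # eigenvalues (phase shifts)
    e1, e2 = cmath.exp(1j * l1), cmath.exp(1j * l2)
    ct, st = math.cos(theta), math.sin(theta)
    return [
        [e1 * ct * ct + e2 * st * st, (e1 - e2) * ct * st],
        [(e1 - e2) * ct * st, e1 * st * st + e2 * ct * ct],
    ]

# Placeholder phase-shift matrix; any real symmetric matrix works.
S = s_matrix([[0.8, 0.3], [0.3, -0.2]])
print(S)
```

    The eigenvalues of η play the role of eigenphase shifts; exponentiating them on the eigenbasis reproduces the unitary S-matrix structure the abstract relies on.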

  4. Medical error reduction and tort reform through private, contractually-based quality medicine societies.

    Science.gov (United States)

    MacCourt, Duncan; Bernstein, Joseph

    2009-01-01

    The current medical malpractice system is broken. Many patients injured by malpractice are not compensated, whereas some patients who recover in tort have not suffered medical negligence; furthermore, the system's failures demoralize patients and physicians. But most importantly, the system perpetuates medical error because the adversarial nature of litigation induces a so-called "Culture of Silence" in physicians eager to shield themselves from liability. This silence leads to the pointless repetition of error, as the open discussion and analysis of the root causes of medical mistakes does not take place as fully as it should. In 1993, President Clinton's Task Force on National Health Care Reform considered a solution characterized by Enterprise Medical Liability (EML), Alternative Dispute Resolution (ADR), some limits on recovery for non-pecuniary damages (Caps), and offsets for collateral source recovery. Yet this list of ingredients did not include a strategy to surmount the difficulties associated with each element. Specifically, EML might be efficient, but none of the enterprises contemplated to assume responsibility, i.e., hospitals and payers, control physician behavior enough so that it would be fair to foist liability on them. Likewise, although ADR might be efficient, it will be resisted by individual litigants who perceive themselves as harmed by it. Finally, while limitations on collateral source recovery and damages might effectively reduce costs, patients and trial lawyers likely would not accept them without recompense. The task force also did not place error reduction at the center of malpractice tort reform -a logical and strategic error, in our view. In response, we propose a new system that employs the ingredients suggested by the task force but also addresses the problems with each. We also explicitly consider steps to rebuff the Culture of Silence and promote error reduction. We assert that patients would be better off with a system where

  5. An exponential distribution

    International Nuclear Information System (INIS)

    Anon

    2009-01-01

    In this presentation, the author deals with the probabilistic evaluation of product life using the example of the exponential distribution. The exponential distribution is a special one-parameter case of the Weibull distribution.

  6. Characterization of electromagnetic fields in the αSPECT spectrometer and reduction of systematic errors

    International Nuclear Information System (INIS)

    Ayala Guardia, Fidel

    2011-10-01

    The aSPECT spectrometer has been designed to measure, with high precision, the recoil proton spectrum of free neutron decay. From this spectrum, the electron antineutrino angular correlation coefficient a can be extracted with high accuracy. The goal of the experiment is to determine the coefficient a with a total relative error smaller than 0.3%, well below the current literature value of 5%. First measurements with the aSPECT spectrometer were performed at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent background instabilities prevented us from reporting a new value of a. The contents of this thesis are based on the latest measurements performed with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, background instabilities were considerably reduced. Furthermore, diverse modifications intended to minimize systematic errors and to achieve a more reliable setup were successfully performed. Unfortunately, saturation effects of the detector electronics turned out to be too large to determine a meaningful result. However, this and other systematics were identified and decreased, or even eliminated, for future aSPECT beam times. The central part of this work focuses on the analysis and improvement of systematic errors related to the aSPECT electromagnetic fields. This work yielded many improvements, particularly in the reduction of systematic effects due to electric fields. The systematics related to the aSPECT magnetic field were also minimized and determined down to a level that permits improvement on the present literature value of a. Furthermore, a custom NMR magnetometer was developed and improved during this thesis, which will reduce magnetic-field-related uncertainties to a negligible level, allowing a to be determined with the targeted total relative error of 0.3%.

  7. The introduction of an acute physiological support service for surgical patients is an effective error reduction strategy.

    Science.gov (United States)

    Clarke, D L; Kong, V Y; Naidoo, L C; Furlong, H; Aldous, C

    2013-01-01

    Acute surgical patients are particularly vulnerable to human error. The Acute Physiological Support Team (APST) was created with the twin objectives of identifying high-risk acute surgical patients in the general wards and reducing both the incidence and the impact of error in these patients. A number of error taxonomies were used to understand the causes of human error, and a simple risk stratification system was adopted to identify patients particularly at risk of error. During the period November 2012-January 2013, a total of 101 surgical patients were cared for by the APST at Edendale Hospital. The average age was forty years. There were 36 females and 65 males, comprising 66 general surgical patients and 35 trauma patients. Fifty-six patients were referred on the day of their admission. The average length of stay in the APST was four days. Eleven patients were haemodynamically unstable on presentation and twelve were clinically septic. The reasons for referral were sepsis (4), respiratory distress (3), acute kidney injury (AKI) (38), post-operative monitoring (39), pancreatitis (3), ICU down-referral (7), hypoxia (5), low GCS (1), and coagulopathy (1). The mortality rate was 13%. A total of thirty-six patients experienced 56 errors. A total of 143 interventions were initiated by the APST, including institution or adjustment of intravenous fluids (101), blood transfusion (12), antibiotics (9), management of neutropenic sepsis (1), central line insertion (3), optimization of oxygen therapy (7), correction of electrolyte abnormality (8), and correction of coagulopathy (2). CONCLUSION: Our intervention combined current taxonomies of error with a simple risk stratification system and is a variant of the defence-in-depth strategy of error reduction. We effectively identified and corrected a significant number of human errors in high-risk acute surgical patients. This audit has helped us understand the common sources of error in the general surgical wards and will inform

  8. Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.

    Science.gov (United States)

    Zaitsev, M; Steinhoff, S; Shah, N J

    2003-06-01

    A methodology is presented for the reduction of both systematic and random errors in T(1) determination using TAPIR, a Look-Locker-based fast T(1) mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T(1) determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.
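    As a minimal illustration of Look-Locker-style T(1) recovery (not the TAPIR sequence itself; the signal model, timings, and tissue values below are invented), three equally spaced samples of M(t) = A − B·exp(−t/T1*) determine the apparent rate in closed form, after which the standard Look-Locker correction T1 = T1*·(B/A − 1) recovers the true T1:

```python
import math

# Hypothetical tissue/sequence values (ms); not the TAPIR protocol itself.
A, B, T1 = 1.0, 2.5, 900.0
T1_star = T1 / (B / A - 1.0)          # apparent (Look-Locker) relaxation time

def signal(t):
    """Look-Locker style recovery curve M(t) = A - B * exp(-t / T1_star)."""
    return A - B * math.exp(-t / T1_star)

# Three equally spaced samples give the apparent rate in closed form:
#   (y3 - y2) / (y2 - y1) = exp(-dt / T1_star)
t1, dt = 100.0, 150.0
y1, y2, y3 = signal(t1), signal(t1 + dt), signal(t1 + 2 * dt)
T1_star_fit = -dt / math.log((y3 - y2) / (y2 - y1))

# Recover B and A from the samples, then apply the Look-Locker correction.
E = math.exp(-t1 / T1_star_fit)
B_fit = (y2 - y1) / (E * (1.0 - math.exp(-dt / T1_star_fit)))
A_fit = y1 + B_fit * E
T1_fit = T1_star_fit * (B_fit / A_fit - 1.0)
print(T1_star_fit, T1_fit)
```

    In practice many samples are acquired and fitted, and, as the abstract notes, imperfect inversion must be mapped and corrected; the closed form above only shows why the sampled recovery curve pins down T1.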

  9. Multivariate Matrix-Exponential Distributions

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis

    2010-01-01

    be written as linear combinations of the elements in the exponential of a matrix. For this reason we shall refer to multivariate distributions with rational Laplace transform as multivariate matrix-exponential distributions (MVME). The marginal distributions of an MVME are univariate matrix-exponential distributions. We prove a characterization that states that a distribution is an MVME distribution if and only if all non-negative, non-null linear combinations of the coordinates have a univariate matrix-exponential distribution. This theorem is analogous to a well-known characterization theorem...

  10. Exponential characteristic spatial quadrature for discrete ordinates radiation transport with rectangular cells

    International Nuclear Information System (INIS)

    Minor, B.; Mathews, K.

    1995-01-01

    The exponential characteristic (EC) spatial quadrature for discrete ordinates neutral particle transport previously introduced in slab geometry is extended here to x-y geometry with rectangular cells. The method is derived and compared with current methods. It is similar to the linear characteristic (LC) quadrature (a linear-linear moments method) but differs by assuming an exponential distribution of the scattering source within each cell, S(x, y) = a exp(bx + cy), whose parameters are root-solved to match the known (from the previous iteration) spatial average and first moments of the source over the cell. Similarly, EC assumes exponential distributions of flux along cell edges through which particles enter the cell, with parameters chosen to match the average and first moments of flux, as passed from the adjacent, upstream cells (or as determined by boundary conditions). Like the linear adaptive (LA) method, EC is positive and nonlinear. It is more accurate than LA and does not require subdivision of cells. The nonlinearity has not interfered with convergence. The exponential moment functions, which were introduced with the slab geometry method, are extended to arbitrary dimensions (numbers of arguments) and used to avoid numerical ill conditioning. As in slab geometry, the method approaches O(Δx⁴) global truncation error on fine-enough meshes, while the error is insensitive to mesh size for coarse meshes. Performance of the method is compared with that of the step characteristic, LC, linear nodal, step adaptive, and LA schemes. The EC method is a strong performer with scattering ratios ranging from 0 to 0.9 (the range tested), particularly so for lower scattering ratios. As in slab geometry, EC is computationally more costly per cell than current methods but can be accurate with very thick cells, leading to increased computational efficiency on appropriate problems.
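    The root-solving step has a simple slab-geometry (one-dimensional) analogue: given a cell-average and first moment, find (a, b) so that s(x) = a·exp(bx) reproduces both on the unit cell. The bisection sketch below is illustrative, not the production EC solver:

```python
import math

def moments(a, b):
    """Cell-average and first moment of s(x) = a * exp(b * x) on [0, 1]."""
    if abs(b) < 1e-12:
        return a, a / 2.0
    avg = a * (math.exp(b) - 1.0) / b
    first = a * (math.exp(b) * (b - 1.0) + 1.0) / (b * b)
    return avg, first

def fit_exponential(avg, first, lo=-60.0, hi=60.0):
    """Root-solve (bisection) for (a, b) such that s(x) = a * exp(b * x)
    reproduces the given average and first moment on the unit cell.
    The moment ratio first/avg is monotone increasing in b, from 0 to 1."""
    target = first / avg
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        m_avg, m_first = moments(1.0, mid)
        if m_first / m_avg < target:
            lo = mid
        else:
            hi = mid
    b = 0.5 * (lo + hi)
    m_avg, _ = moments(1.0, b)
    return avg / m_avg, b

# Round-trip check against the moments of a known exponential source.
avg, first = moments(2.0, 1.5)
a_fit, b_fit = fit_exponential(avg, first)
print(a_fit, b_fit)
```

    The 2-D method matches average plus two first moments with S(x, y) = a exp(bx + cy), but the moment-matching root-solve is the same idea in more variables.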

  11. The exponential edge-gradient effect in x-ray computed tomography

    International Nuclear Information System (INIS)

    Joseph, P.M.

    1981-01-01

    The exponential edge-gradient effect must arise in any X-ray transmission CT scanner whenever long sharp edges of high contrast are encountered. The effect is non-linear and is due to the interaction of the exponential law of X-ray attenuation and the finite width of the scanning beam in the x-y plane. The error induced in the projection values is proved to be always negative. While the most common effect is lucent streaks emerging from single straight edges, it is demonstrated that dense streaks from pairs of edges are possible. It is shown that an exact correction of the error is possible only under very special (and rather unrealistic) circumstances in which an infinite number of samples per beam width are available and all thin rays making up the beam can be considered parallel. As a practical matter, nevertheless, increased sample density is highly desirable in making good approximate corrections; this is demonstrated with simulated scans. Two classes of approximate correction algorithms are described and their effectiveness evaluated on simulated CT phantom scans. One such algorithm is also shown to work well with a real scan of a physical phantom on a machine that provides approximately four samples per beam width. (author)
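    The sign of the error follows from Jensen's inequality: the detector averages transmitted intensity (exponentials) across the finite beam width, so the measured projection −ln⟨exp(−μt)⟩ can never exceed the true average attenuation ⟨μt⟩. A two-ray sketch of a beam straddling a sharp edge (the attenuation values are arbitrary):

```python
import math

# Thin-ray line integrals (mu * t) across the width of one detector beam:
# half the beam crosses dense material, half passes beside the edge.
rays = [2.0, 0.0]

true_average = sum(rays) / len(rays)
# The detector averages transmitted intensity, then the scanner takes -log.
measured = -math.log(sum(math.exp(-p) for p in rays) / len(rays))
projection_error = measured - true_average     # provably never positive
print(true_average, measured, projection_error)

# With no edge inside the beam (uniform rays) the effect vanishes.
uniform = -math.log(sum(math.exp(-p) for p in [1.0, 1.0]) / 2)
```

    The negative projection error at edges is what reconstructs as the characteristic lucent streaks described above.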

  12. Liver fibrosis: stretched exponential model outperforms mono-exponential and bi-exponential models of diffusion-weighted MRI.

    Science.gov (United States)

    Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin

    2018-07-01

    To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3 T. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model; the true diffusion coefficient (Dt), pseudo-diffusion coefficient (Dp) and perfusion fraction (f) from a bi-exponential model; and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performance of the parameters for HF staging was evaluated with Obuchowski measures and receiver operating characteristic (ROC) analysis. The measurement variability of the DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measure, 0.770 ± 0.03), significantly higher than that of ADC (0.597 ± 0.05). Key points: the stretched exponential model outperformed the mono-exponential and bi-exponential DWI models, and acquisition of six b values is sufficient to obtain accurate DDC and α.

  13. Fast Fourier Transform Pricing Method for Exponential Lévy Processes

    KAUST Repository

    Crocce, Fabian; Happola, Juho; Kiessling, Jonas; Tempone, Raul

    2014-01-01

    We describe a set of partial integro-differential equations (PIDEs) whose solutions represent the prices of European options when the underlying asset is driven by an exponential Lévy process. Exploiting the Lévy-Khintchine formula, we give a Fourier-based method for solving this class of PIDEs. We present a novel L1 error bound for solving a range of PIDEs in asset pricing and use this bound to set parameters for numerical methods.

  14. Fast Fourier Transform Pricing Method for Exponential Lévy Processes

    KAUST Repository

    Crocce, Fabian

    2014-05-04

    We describe a set of partial integro-differential equations (PIDEs) whose solutions represent the prices of European options when the underlying asset is driven by an exponential Lévy process. Exploiting the Lévy-Khintchine formula, we give a Fourier-based method for solving this class of PIDEs. We present a novel L1 error bound for solving a range of PIDEs in asset pricing and use this bound to set parameters for numerical methods.

  15. Error reduction techniques for measuring long synchrotron mirrors

    International Nuclear Information System (INIS)

    Irick, S.

    1998-07-01

    Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are reduced sampling interval, uncertainty of tangential position, and sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured on a Fizeau interferometer without a specially made fringe nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long x-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP

  16. Approximation of the exponential integral (well function) using sampling methods

    Science.gov (United States)

    Baalousha, Husam Musa

    2015-04-01

    Exponential integral (also known as well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximating the exponential integral, based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the Orthogonal Array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
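    A minimal version of the LHS idea can be sketched as follows: transform the exponential integral to a unit interval, E1(x) = ∫₀¹ exp(−x/u)/u du, then draw one stratified sample per stratum (the stratum count below is arbitrary, and this is a one-dimensional toy rather than the paper's full scheme):

```python
import math, random

random.seed(3)

def exp_integral_lhs(x, n=100_000):
    """Latin-Hypercube-style estimate of the exponential integral using
    one uniform sample per stratum on E1(x) = int_0^1 exp(-x/u)/u du."""
    total = 0.0
    for i in range(n):
        u = (i + random.random()) / n      # uniform within stratum i
        total += math.exp(-x / u) / u
    return total / n

approx = exp_integral_lhs(1.0)
print(approx)   # reference value: E1(1) = 0.219384...
```

    Stratifying the samples removes most of the variance of plain Monte Carlo on this smooth integrand, which is why the sampling-based approximations converge quickly.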

  17. ERROR VS REJECTION CURVE FOR THE PERCEPTRON

    OpenAIRE

    PARRONDO, JMR; VAN DEN BROECK, Christian

    1993-01-01

    We calculate the generalization error epsilon for a perceptron J, trained by a teacher perceptron T, on input patterns S that form a fixed angle arccos (J.S) with the student. We show that the error is reduced from a power law to an exponentially fast decay by rejecting input patterns that lie within a given neighbourhood of the decision boundary J.S = 0. On the other hand, the error vs. rejection curve epsilon(rho), where rho is the fraction of rejected patterns, is shown to be independent ...

  18. Exponential dependence of potential barrier height on biased voltages of inorganic/organic static induction transistor

    International Nuclear Information System (INIS)

    Zhang Yong; Yang Jianhong; Cai Xueyuan; Wang Zaixing

    2010-01-01

    The exponential dependence of the potential barrier height φ c on the biased voltages of the inorganic/organic static induction transistor (SIT/OSIT), obtained through a normalized approach in the low-current regime, is presented. It gives a more accurate description than the linear expression of the potential barrier height. As verified against numerical calculations and experimental results, the exponential dependence of φ c on the applied biases can be used to derive the I-V characteristics. For both SIT and OSIT, the calculated results using the presented relationship agree with the experimental results. Compared with the previous linear relationship, the exponential description of φ c effectively reduces the error between the theoretical and experimental I-V characteristics. (semiconductor devices)

  19. Exponential Cardassian universe

    International Nuclear Information System (INIS)

    Liu Daojun; Sun Changbo; Li Xinzhou

    2006-01-01

    The expectation of explaining cosmological observations without requiring new energy sources is worthy of investigation. In this Letter, a new kind of Cardassian models, called exponential Cardassian models, for the late-time universe is investigated in the context of the spatially flat FRW universe scenario. We fit the exponential Cardassian models to current type Ia supernovae data and find they are consistent with the observations. Furthermore, we point out that the equation-of-state parameter for the effective dark fluid component in exponential Cardassian models can naturally cross the cosmological-constant divide w = -1 that observations mildly favor, without introducing exotic material that destroys the weak energy condition.

  20. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring the errors to which the ER algorithm converges, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitude and the phase to reconstruct the missing areas.
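    As a rough illustration of the ER iteration itself (a generic sketch of the Fienup-style error reduction loop, not the authors' patch-selection and magnitude-estimation scheme), one can alternate between a magnitude constraint in the Fourier domain and the known-sample constraint in the spatial domain:

```python
import numpy as np

def error_reduction_inpaint(observed, known_mask, magnitude, n_iter=300):
    """Fill missing samples of a 1-D signal by error reduction (ER):
    alternately impose the given Fourier magnitude (keeping the
    retrieved phase) and re-impose the known samples."""
    g = observed.copy()
    for _ in range(n_iter):
        G = np.fft.fft(g)
        G = magnitude * np.exp(1j * np.angle(G))  # Fourier-domain constraint
        g = np.fft.ifft(G).real
        g[known_mask] = observed[known_mask]      # spatial-domain constraint
    return g

# toy example: restore two missing samples of a smooth signal,
# assuming the true Fourier magnitude is known
n = 32
t = np.arange(n)
x = np.cos(2 * np.pi * t / n) + 0.5 * np.sin(4 * np.pi * t / n)
mask = np.ones(n, bool)
mask[[10, 11]] = False                 # two "missing" samples
degraded = np.where(mask, x, 0.0)
restored = error_reduction_inpaint(degraded, mask, np.abs(np.fft.fft(x)))
print(np.abs(restored - x).max())
```

    In the paper the magnitude is not known but estimated from similar known patches; here it is supplied directly to keep the sketch short.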

  1. Optimal Exponential Synchronization of Chaotic Systems with Multiple Time Delays via Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Feng-Hsiag Hsiao

    2013-01-01

    Full Text Available This study presents an effective approach to realize the optimal exponential synchronization of multiple time-delay chaotic (MTDC) systems. First, a neural network (NN) model is employed to approximate the MTDC system. Then, a linear differential inclusion (LDI) state-space representation is established for the dynamics of the NN model. Based on this LDI state-space representation, this study proposes a delay-dependent exponential stability criterion of the error system, derived in terms of Lyapunov’s direct method, to ensure that the trajectories of the slave system can approach those of the master system. Subsequently, the stability condition of this criterion is reformulated into a linear matrix inequality (LMI). Based on the LMI, a fuzzy controller is synthesized not only to realize the exponential synchronization but also to achieve the optimal performance by minimizing the disturbance attenuation level. Finally, a numerical example with simulations is provided to illustrate the concepts discussed throughout this work.

  2. Estimation of exponential convergence rate and exponential stability for neural networks with time-varying delay

    International Nuclear Information System (INIS)

    Tu Fenghua; Liao Xiaofeng

    2005-01-01

    We study the problem of estimating the exponential convergence rate and exponential stability for neural networks with time-varying delay. Some criteria for exponential stability are derived by using the linear matrix inequality (LMI) approach. They are less conservative than the existing ones. Some analytical methods are employed to investigate the bounds on the interconnection matrix and activation functions so that the systems are exponentially stable

  3. Characterization of electromagnetic fields in the aSPECT spectrometer and reduction of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Ayala Guardia, Fidel

    2011-10-15

    The aSPECT spectrometer has been designed to measure, with high precision, the recoil proton spectrum of free neutron decay. From this spectrum, the electron antineutrino angular correlation coefficient a can be extracted with high accuracy. The goal of the experiment is to determine the coefficient a with a total relative error smaller than 0.3%, well below the current literature value of 5%. First measurements with the aSPECT spectrometer were performed at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent background instabilities prevented us from reporting a new value of a. The contents of this thesis are based on the latest measurements performed with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, background instabilities were considerably reduced. Furthermore, diverse modifications intended to minimize systematic errors and to achieve a more reliable setup were successfully performed. Unfortunately, saturation effects of the detector electronics turned out to be too high to determine a meaningful result. However, this and other systematics were identified and decreased, or even eliminated, for future aSPECT beamtimes. The central part of this work is focused on the analysis and improvement of systematic errors related to the aSPECT electromagnetic fields. This work yielded many improvements, particularly in the reduction of the systematic effects due to electric fields. The systematics related to the aSPECT magnetic field were also minimized and determined down to a level that permits improving the present literature value of a. Furthermore, a custom NMR magnetometer was developed and improved during this thesis, which will reduce magnetic-field-related uncertainties to a negligible level, allowing a to be determined with a total relative error of 0.3% or better.

  4. Schur Complement Reduction in the Mixed-Hybrid Approximation of Darcy's Law: Rounding Error Analysis

    Czech Academy of Sciences Publication Activity Database

    Maryška, Jiří; Rozložník, Miroslav; Tůma, Miroslav

    2000-01-01

    Roč. 117, - (2000), s. 159-173 ISSN 0377-0427 R&D Projects: GA AV ČR IAA2030706; GA ČR GA201/98/P108 Institutional research plan: AV0Z1030915 Keywords : potential fluid flow problem * symmetric indefinite linear systems * Schur complement reduction * iterative methods * rounding error analysis Subject RIV: BA - General Mathematics Impact factor: 0.455, year: 2000

  5. Continuous exponential martingales and BMO

    CERN Document Server

    Kazamaki, Norihiko

    1994-01-01

    In three chapters on Exponential Martingales, BMO-martingales, and Exponential of BMO, this book explains in detail the beautiful properties of continuous exponential martingales that play an essential role in various questions concerning the absolute continuity of probability laws of stochastic processes. The second and principal aim is to provide a full report on the exciting results on BMO in the theory of exponential martingales. The reader is assumed to be familiar with the general theory of continuous martingales.

  6. Error Analysis for Fourier Methods for Option Pricing

    KAUST Repository

    Hä ppö lä , Juho

    2016-01-01

    We provide a bound for the error committed when using a Fourier method to price European options when the underlying follows an exponential Lévy dynamic. The price of the option is described by a partial integro-differential equation (PIDE

  7. Dynamics of exponential maps

    OpenAIRE

    Rempe, Lasse

    2003-01-01

    This thesis contains several new results about the dynamics of exponential maps $z\\mapsto \\exp(z)+\\kappa$. In particular, we prove that periodic external rays of exponential maps with nonescaping singular value always land. This is an analog of a theorem of Douady and Hubbard for polynomials. We also answer a question of Herman, Baker and Rippon by showing that the boundary of an unbounded exponential Siegel disk always contains the singular value. In addition to the presentation of new resul...

  8. Exact error estimation for solutions of nuclide chain equations

    International Nuclear Information System (INIS)

    Tachihara, Hidekazu; Sekimoto, Hiroshi

    1999-01-01

    The exact solution of nuclide chain equations to an arbitrary number of significant figures is obtained for a linear chain by employing the Bateman method in multiple-precision arithmetic. The exact error estimation of major calculation methods for a nuclide chain equation is done by using this exact solution as a standard. The Bateman, finite difference, Runge-Kutta and matrix exponential methods are investigated. The present study confirms the following. The original Bateman method has very low accuracy in some cases because of large-scale cancellations. The revised Bateman method by Siewers reduces the occurrence of cancellations and thereby shows high accuracy. In the time difference methods, i.e. the finite difference and Runge-Kutta methods, the solutions are mainly affected by truncation errors in the early decay time, and afterward by round-off errors. Even though a variable time mesh is employed to suppress the accumulation of round-off errors, it appears to be impractical. Judging from these estimations, the matrix exponential method is the best among all the methods except the Bateman method, whose calculation process for a linear chain is not identical with that for a general one. (author)
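    For a two-member chain A → B the Bateman solution is available in closed form, which makes it easy to sketch the comparison with the matrix exponential method (a minimal illustration with arbitrary decay constants, not the multiple-precision setup of the paper):

```python
import math

def expm2(M, terms=20, squarings=20):
    """Matrix exponential of a 2x2 matrix via scaling and squaring
    with a truncated Taylor series (adequate for this small example)."""
    s = 2 ** squarings
    A = [[m / s for m in row] for row in M]
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    E = [[1.0, 0.0], [0.0, 1.0]]      # running sum, starts at identity
    term = [[1.0, 0.0], [0.0, 1.0]]   # A^k / k!
    for k in range(1, terms):
        term = matmul(term, A)
        term = [[t / k for t in row] for row in term]
        E = [[E[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    for _ in range(squarings):
        E = matmul(E, E)              # undo the scaling: exp(A)^(2^s)
    return E

# chain A -> B with decay constants lam1, lam2; dN/dt = M0 N
lam1, lam2, t, N0 = 0.3, 0.1, 5.0, 1000.0
E = expm2([[-lam1 * t, 0.0], [lam1 * t, -lam2 * t]])
N1, N2 = E[0][0] * N0, E[1][0] * N0

# Bateman closed-form reference solution
N1_ref = N0 * math.exp(-lam1 * t)
N2_ref = N0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
print(N1, N1_ref, N2, N2_ref)
```

    For longer chains the Bateman sums suffer the cancellations mentioned above, while the matrix exponential route stays numerically benign.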

  9. Reduction of very large reaction mechanisms using methods based on simulation error minimization

    Energy Technology Data Exchange (ETDEWEB)

    Nagy, Tibor; Turanyi, Tamas [Institute of Chemistry, Eoetvoes University (ELTE), P.O. Box 32, H-1518 Budapest (Hungary)

    2009-02-15

    A new species reduction method called the Simulation Error Minimization Connectivity Method (SEM-CM) was developed. According to the SEM-CM algorithm, a mechanism building procedure is started from the important species. Strongly connected sets of species, identified on the basis of the normalized Jacobian, are added and several consistent mechanisms are produced. The combustion model is simulated with each of these mechanisms and the mechanism causing the smallest error (i.e. deviation from the model that uses the full mechanism), considering the important species only, is selected. Then, in several steps other strongly connected sets of species are added, the size of the mechanism is gradually increased and the procedure is terminated when the error becomes smaller than the required threshold. A new method for the elimination of redundant reactions is also presented, which is called the Principal Component Analysis of Matrix F with Simulation Error Minimization (SEM-PCAF). According to this method, several reduced mechanisms are produced by using various PCAF thresholds. The reduced mechanism having the least CPU time requirement among the ones having almost the smallest error is selected. Application of SEM-CM and SEM-PCAF together provides a very efficient way to eliminate redundant species and reactions from large mechanisms. The suggested approach was tested on a mechanism containing 6874 irreversible reactions of 345 species that describes methane partial oxidation to high conversion. The aim is to accurately reproduce the concentration-time profiles of 12 major species with less than 5% error at the conditions of an industrial application. The reduced mechanism consists of 246 reactions of 47 species and its simulation is 116 times faster than using the full mechanism. The SEM-CM was found to be more effective than the classic Connectivity Method, and also than the DRG, two-stage DRG, DRGASA, basic DRGEP and extended DRGEP methods. (author)

  10. Novel Exponentially Fitted Two-Derivative Runge-Kutta Methods with Equation-Dependent Coefficients for First-Order Differential Equations

    Directory of Open Access Journals (Sweden)

    Yanping Yang

    2016-01-01

    Full Text Available The construction of exponentially fitted two-derivative Runge-Kutta (EFTDRK) methods for the numerical solution of first-order differential equations is investigated. The revised EFTDRK methods proposed, with equation-dependent coefficients, take into consideration the errors produced in the internal stages to the update. The local truncation errors and stability of the new methods are analyzed. Numerical results are reported to show the accuracy of the new methods.

  11. Reduction of Truncation Errors in Planar Near-Field Aperture Antenna Measurements Using the Gerchberg-Papoulis Algorithm

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    2008-01-01

    A simple and effective procedure for the reduction of truncation errors in planar near-field measurements of aperture antennas is presented. The procedure relies on the consideration that, due to the scan plane truncation, the calculated plane wave spectrum of the field radiated by the antenna is...

  12. Delay-Dependent Exponential Optimal Synchronization for Nonidentical Chaotic Systems via Neural-Network-Based Approach

    Directory of Open Access Journals (Sweden)

    Feng-Hsiag Hsiao

    2013-01-01

    Full Text Available A novel approach is presented to realize the optimal exponential synchronization of nonidentical multiple time-delay chaotic (MTDC) systems via a fuzzy control scheme. A neural-network (NN) model is first constructed for the MTDC system. Then, a linear differential inclusion (LDI) state-space representation is established for the dynamics of the NN model. Based on this LDI state-space representation, a delay-dependent exponential stability criterion of the error system, derived in terms of Lyapunov's direct method, is proposed to guarantee that the trajectories of the slave system can approach those of the master system. Subsequently, the stability condition of this criterion is reformulated into a linear matrix inequality (LMI). According to the LMI, a fuzzy controller is synthesized not only to realize the exponential synchronization but also to achieve the optimal performance by minimizing the disturbance attenuation level at the same time. Finally, a numerical example with simulations is given to demonstrate the effectiveness of our approach.

  13. Direction-dependent exponential biassing

    International Nuclear Information System (INIS)

    Bending, R.C.

    1974-01-01

    When Monte Carlo methods are applied to penetration problems, the use of variance reduction techniques is essential if realistic computing times are to be achieved. A technique known as direction-dependent exponential biassing is described which is simple to apply and therefore suitable for problems with difficult geometry. The material cross section in any region is multiplied by a factor which depends on the particle direction, so that particles travelling in a preferred direction "see" a smaller cross section than those travelling in the opposite direction. A theoretical study shows that substantial gains may be obtained, and that the choice of biassing parameter is not critical. The method has been implemented alongside other importance sampling techniques in the general Monte Carlo code SPARTAN, and results obtained for simple problems using this code are included. 4 references. (U.S.)
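    The idea can be sketched for the simplest case, a purely absorbing 1-D slab, where sampling path lengths from a reduced cross section and correcting each score with the likelihood ratio leaves the estimate unbiased (a minimal sketch, not the SPARTAN implementation):

```python
import math
import random

def transmission_analog(sigma, T, n, rng):
    """Analog Monte Carlo estimate of slab transmission exp(-sigma*T):
    count particles whose free path exceeds the slab thickness."""
    hits = sum(1 for _ in range(n) if rng.expovariate(sigma) > T)
    return hits / n

def transmission_biased(sigma, sigma_b, T, n, rng):
    """Exponentially biased estimate: sample path lengths from the
    smaller cross section sigma_b (so more particles reach depth T)
    and correct each score with the likelihood ratio
    (sigma/sigma_b) * exp(-(sigma - sigma_b) * s)."""
    total = 0.0
    for _ in range(n):
        s = rng.expovariate(sigma_b)
        if s > T:
            total += (sigma / sigma_b) * math.exp(-(sigma - sigma_b) * s)
    return total / n

rng = random.Random(1)
sigma, T = 2.0, 3.0
exact = math.exp(-sigma * T)
est = transmission_biased(sigma, sigma / 6.0, T, 100_000, rng)
est_analog = transmission_analog(sigma, T, 100_000, rng)
print(exact, est, est_analog)
```

    For deep penetration (here exp(-6) ≈ 0.0025) the biased estimator scores on far more histories than the analog one, which is the variance reduction the abstract describes; making sigma_b direction-dependent extends the same trick to multi-dimensional geometry.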

  14. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    OpenAIRE

    Rahnamaei, Z.; Nematollahi, N.; Farnoosh, R.

    2012-01-01

    We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameter by Maximum Likelihood and Bayesian methods. By a simulation study we compute the mentioned estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  15. Generalized approach to non-exponential relaxation

    Indian Academy of Sciences (India)

    Non-exponential relaxation is a universal feature of systems as diverse as glasses, spin ... which changes from a simple exponential to a stretched exponential and a power law by increasing the constraints in the system.

  16. Two statistics for evaluating parameter identifiability and error reduction

    Science.gov (United States)

    Doherty, John; Hunt, Randall J.

    2009-01-01

    Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero, and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
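    A minimal sketch of the first statistic (my own illustration, assuming a small dense Jacobian): the identifiability of each parameter is the norm of its unit vector's projection onto the space spanned by the first k right singular vectors of the weighted sensitivity matrix:

```python
import numpy as np

def identifiability(J, weights, k):
    """Parameter identifiability from a sensitivity (Jacobian) matrix J
    (n_obs x n_par) with observation weights: the direction cosine
    between each unit parameter vector and its projection onto the
    calibration solution space spanned by the first k right singular
    vectors."""
    Q = np.diag(weights) @ J
    _, _, Vt = np.linalg.svd(Q)
    V_k = Vt[:k].T                       # columns span the solution space
    return np.sqrt((V_k ** 2).sum(axis=1))

# toy Jacobian: the third parameter has no influence on any observation,
# so it should come out completely non-identifiable
J = np.array([[1.0, 0.5, 0.0],
              [0.2, 1.0, 0.0],
              [0.4, 0.3, 0.0]])
ident = identifiability(J, np.ones(3), k=2)
print(ident)
```

    With k equal to the numerical rank, insensitive parameters land in the null space and score zero, while parameters fully contained in the solution space score one.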

  17. Practical pulse engineering: Gradient ascent without matrix exponentiation

    Science.gov (United States)

    Bhole, Gaurav; Jones, Jonathan A.

    2018-06-01

    Since 2005, there has been a huge growth in the use of engineered control pulses to perform desired quantum operations in systems such as nuclear magnetic resonance quantum information processors. These approaches, which build on the original gradient ascent pulse engineering algorithm, remain computationally intensive because of the need to calculate matrix exponentials for each time step in the control pulse. In this study, we discuss how the propagators for each time step can be approximated using the Trotter-Suzuki formula, and a further speedup achieved by avoiding unnecessary operations. The resulting procedure can provide substantial speed gain with negligible costs in the propagator error, providing a more practical approach to pulse engineering.
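    The Trotter-Suzuki idea can be sketched for a single time step with two non-commuting Hamiltonian terms (an illustrative symmetric second-order splitting, not the authors' full pulse-engineering code):

```python
import numpy as np

def expm_hermitian(H):
    """exp(-i H) for a Hermitian matrix via eigendecomposition."""
    w, U = np.linalg.eigh(H)
    return (U * np.exp(-1j * w)) @ U.conj().T

def trotter_step(A, B, dt):
    """Symmetric (second-order) Trotter-Suzuki approximation:
    exp(-i (A+B) dt) ~ exp(-i A dt/2) exp(-i B dt) exp(-i A dt/2),
    with per-step error O(dt^3)."""
    half = expm_hermitian(A * dt / 2)
    return half @ expm_hermitian(B * dt) @ half

# two non-commuting Hermitian terms (Pauli x and Pauli z)
A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[1, 0], [0, -1]], dtype=complex)

def error(dt):
    exact = expm_hermitian((A + B) * dt)
    return np.linalg.norm(trotter_step(A, B, dt) - exact, 2)

print(error(0.1), error(0.05))
```

    Halving the time step should shrink the per-step error by roughly a factor of eight, which is what makes the splitting a cheap substitute for exponentiating the full Hamiltonian at every step.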

  18. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    Z. Rahnamaei

    2012-01-01

    Full Text Available We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameter by Maximum Likelihood and Bayesian methods. By a simulation study we compute the mentioned estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  19. Advancing the research agenda for diagnostic error reduction

    NARCIS (Netherlands)

    Zwaan, L.; Schiff, G.D.; Singh, H.

    2013-01-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research.

  20. An Exponential Growth Learning Trajectory: Students' Emerging Understanding of Exponential Growth through Covariation

    Science.gov (United States)

    Ellis, Amy B.; Ozgur, Zekiye; Kulow, Torrey; Dogan, Muhammed F.; Amidon, Joel

    2016-01-01

    This article presents an Exponential Growth Learning Trajectory (EGLT), a trajectory identifying and characterizing middle grade students' initial and developing understanding of exponential growth as a result of an instructional emphasis on covariation. The EGLT explicates students' thinking and learning over time in relation to a set of tasks…

  1. Lagrange α-exponential stability and α-exponential convergence for fractional-order complex-valued neural networks.

    Science.gov (United States)

    Jian, Jigui; Wan, Peng

    2017-07-01

    This paper deals with the problem of Lagrange α-exponential stability and α-exponential convergence for a class of fractional-order complex-valued neural networks. To this end, some new fractional-order differential inequalities are established, which improve and generalize previously known criteria. By using the new inequalities and coupling with the Lyapunov method, some effective criteria are derived to guarantee the Lagrange α-exponential stability and α-exponential convergence of the addressed network. Moreover, the framework of the α-exponential convergence ball is also given, where the convergence rate is related to the parameters and the fractional order of the system. These results, for which the existence and uniqueness of the equilibrium points need not be considered, generalize and improve the earlier publications and can be applied to monostable and multistable fractional-order complex-valued neural networks. Finally, one example with numerical simulations is given to show the effectiveness of the obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. The exponentiated generalized Pareto distribution | Adeyemi | Ife ...

    African Journals Online (AJOL)

    Recently Gupta et al. (1998) introduced the exponentiated exponential distribution as a generalization of the standard exponential distribution. In this paper, we introduce a three-parameter generalized Pareto distribution, the exponentiated generalized Pareto distribution (EGP). We present a comprehensive treatment of the ...

  3. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2016-01-01

    Full Text Available Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches.

  4. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Science.gov (United States)

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068

  5. Effect of benzalkonium chloride on viability and energy metabolism in exponential- and stationary-growth-phase cells of Listeria monocytogenes.

    Science.gov (United States)

    Luppens, S B; Abee, T; Oosterom, J

    2001-04-01

    The difference in the killing of exponential- and stationary-phase cells of Listeria monocytogenes by benzalkonium chloride (BAC) was investigated by plate counting and linked to relevant bioenergetic parameters. At a low concentration of BAC (8 mg/liter), a similar reduction in viable cell numbers was observed for stationary-phase cells and exponential-phase cells (an approximately 0.22-log-unit reduction), although their membrane potential and pH gradient were dissipated. However, at higher concentrations of BAC, exponential-phase cells were more susceptible than stationary-phase cells. At 25 mg/liter, the difference in survival on plates was more than 3 log units. For both types of cells, killing, i.e., more than a 1-log-unit reduction in survival on plates, coincided with complete inhibition of acidification and respiration and total depletion of ATP pools. Killing efficiency was not influenced by the presence of glucose, brain heart infusion medium, or oxygen. Our results suggest that growth phase is one of the major factors that determine the susceptibility of L. monocytogenes to BAC.

  6. Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.

    Science.gov (United States)

    Monica, Stefania; Ferrari, Gianluigi

    2018-05-17

    Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is exponentially growing. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms with a reduction of the localization error up to 66%.
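    A minimal sketch of such a linear LS error model (the calibration numbers below are invented for illustration, not the paper's measurement data): fit e(d) = a·d + b to calibration residuals, then invert the model to correct new range estimates:

```python
import numpy as np

# hypothetical calibration data: true distances (m) and UWB range
# estimates whose error grows roughly linearly with distance
true_d = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
measured = np.array([1.08, 2.14, 3.22, 4.27, 5.33, 6.41, 7.45, 8.52])

# least-squares fit of the linear error model  e(d) = a*d + b
a, b = np.polyfit(true_d, measured - true_d, 1)

def correct(d_meas):
    """Invert the fitted model: d_meas ~ d + a*d + b, so the corrected
    distance is (d_meas - b) / (1 + a)."""
    return (d_meas - b) / (1.0 + a)

print(a, b)
print(correct(measured))
```

    Feeding the corrected distances into a standard multilateration solver is what produces the localization-error reduction reported in the abstract.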

  7. Effects of variable transformations on errors in FORM results

    International Nuclear Information System (INIS)

    Qin Quan; Lin Daojin; Mei Gang; Chen Hao

    2006-01-01

    On the basis of studies on the second partial derivatives of the variable transformation functions for nine different non-normal variables, the paper comprehensively discusses the effects of the transformation on FORM results and shows that the signs and magnitudes of the errors in FORM results depend on the distributions of the basic variables, on whether the basic variables represent resistances or actions, and on the design point locations in the standard normal space. The transformations of exponential or Gamma resistance variables can generate +24% errors in the FORM failure probability, and the transformation of Frechet action variables can generate -31% errors.

  8. Thermoluminescence dating of chinese porcelain using a regression method of saturating exponential in pre-dose technique

    International Nuclear Information System (INIS)

    Wang Weida; Xia Junding; Zhou Zhixin; Leung, P.L.

    2001-01-01

    Thermoluminescence (TL) dating using a regression method of saturating exponential in the pre-dose technique is described. 23 porcelain samples from past dynasties of China were dated by this method. The results show that the TL ages are in reasonable agreement with archaeological dates within a standard deviation of 27%. Such an error is acceptable in porcelain dating.

  9. Universality in stochastic exponential growth.

    Science.gov (United States)

    Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R

    2014-07-11

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.

  10. Modeling of Single Event Transients With Dual Double-Exponential Current Sources: Implications for Logic Cell Characterization

    Science.gov (United States)

    Black, Dolores A.; Robinson, William H.; Wilcox, Ian Z.; Limbrick, Daniel B.; Black, Jeffrey D.

    2015-08-01

    Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. An accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. A small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. The parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
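    The dual-source model can be sketched as two double-exponential pulses summed in parallel (all amplitudes and time constants below are illustrative placeholders, not parameters extracted from any real technology):

```python
import math

def double_exp(t, i0, tau_rise, tau_fall):
    """Classic double-exponential SET current pulse; zero at t = 0,
    rising on tau_rise and decaying on tau_fall."""
    if t < 0:
        return 0.0
    return i0 * (math.exp(-t / tau_fall) - math.exp(-t / tau_rise))

def dual_double_exp(t, fast=(1.0, 5e-12, 50e-12), slow=(0.2, 20e-12, 500e-12)):
    """Two double-exponential sources in parallel: a fast prompt
    component plus a slower diffusion-like tail (parameters are
    hypothetical, chosen only to show the shape)."""
    return double_exp(t, *fast) + double_exp(t, *slow)

# total collected charge by trapezoidal integration over 2 ns
ts = [k * 1e-12 for k in range(0, 2001)]
q = sum((dual_double_exp(t0) + dual_double_exp(t1)) / 2 * 1e-12
        for t0, t1 in zip(ts, ts[1:]))
print(q)
```

    The point of the second source is visible in the shape: a single double-exponential cannot reproduce both the sharp prompt peak and the long tail, whereas the sum can, for the same total collected charge.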

  11. An Unusual Exponential Graph

    Science.gov (United States)

    Syed, M. Qasim; Lovatt, Ian

    2014-01-01

This paper is an addition to the series of papers on the exponential function begun by Albert Bartlett. In particular, we ask how the graph of the exponential function y = e^(-t/τ) would appear if y were plotted versus ln t rather than the normal practice of plotting ln y versus t. In answering this question, we find a new way to…

  12. Fast quantum modular exponentiation

    International Nuclear Information System (INIS)

    Meter, Rodney van; Itoh, Kohei M.

    2005-01-01

    We present a detailed analysis of the impact on quantum modular exponentiation of architectural features and possible concurrent gate execution. Various arithmetic algorithms are evaluated for execution time, potential concurrency, and space trade-offs. We find that to exponentiate an n-bit number, for storage space 100n (20 times the minimum 5n), we can execute modular exponentiation 200-700 times faster than optimized versions of the basic algorithms, depending on architecture, for n=128. Addition on a neighbor-only architecture is limited to O(n) time, whereas non-neighbor architectures can reach O(log n), demonstrating that physical characteristics of a computing device have an important impact on both real-world running time and asymptotic behavior. Our results will help guide experimental implementations of quantum algorithms and devices
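The classical kernel being accelerated here is ordinary square-and-multiply modular exponentiation; the quantum circuits implement reversible versions of this schoolbook recurrence. A minimal reference implementation (equivalent to Python's built-in three-argument `pow`) looks like:

```python
def modexp(base, exponent, modulus):
    """Right-to-left binary (square-and-multiply) modular exponentiation.
    Uses O(log exponent) modular multiplications."""
    result = 1
    base %= modulus
    while exponent:
        if exponent & 1:                    # multiply step for each set bit
            result = (result * base) % modulus
        base = (base * base) % modulus      # squaring step
        exponent >>= 1
    return result
```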

  13. Fully exponentially correlated wavefunctions for small atoms

    Energy Technology Data Exchange (ETDEWEB)

    Harris, Frank E. [Department of Physics, University of Utah, Salt Lake City, UT 84112 and Quantum Theory Project, University of Florida, P.O. Box 118435, Gainesville, FL 32611 (United States)

    2015-01-22

    Fully exponentially correlated atomic wavefunctions are constructed from exponentials in all the interparticle coordinates, in contrast to correlated wavefunctions of the Hylleraas form, in which only the electron-nuclear distances occur exponentially, with electron-electron distances entering only as integer powers. The full exponential correlation causes many-configuration wavefunctions to converge with expansion length more rapidly than either orbital formulations or correlated wavefunctions of the Hylleraas type. The present contribution surveys the effectiveness of fully exponentially correlated functions for the three-body system (the He isoelectronic series) and reports their application to a four-body system (the Li atom)

  14. Bandwagon effects and error bars in particle physics

    Science.gov (United States)

    Jeng, Monwhea

    2007-02-01

    We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit "bandwagon effects": reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.
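To see why an exponential (two-sided, i.e. Laplace-like) tail makes large deviations so much more common than a Gaussian tail, compare the two survival functions at equal variance; the unit-variance Laplace parameterization below is an illustrative choice, not the paper's fitted form:

```python
import math

def normal_tail(k):
    """P(|Z| > k) for a standard normal deviate."""
    return math.erfc(k / math.sqrt(2.0))

def laplace_tail(k):
    """P(|X| > k) for a two-sided exponential (Laplace) distribution
    with unit variance, i.e. scale b = 1/sqrt(2)."""
    return math.exp(-k * math.sqrt(2.0))
```

At five standard deviations the normal tail is below 1e-6 while the Laplace tail is near 1e-3, so multi-sigma disagreements between experiments are orders of magnitude more likely under the exponential model.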

  15. Bandwagon effects and error bars in particle physics

    International Nuclear Information System (INIS)

    Jeng, Monwhea

    2007-01-01

    We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit 'bandwagon effects': reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations

  16. Transverse exponential stability and applications

    NARCIS (Netherlands)

    Andrieu, Vincent; Jayawardhana, Bayu; Praly, Laurent

    2016-01-01

    We investigate how the following properties are related to each other: i) A manifold is “transversally” exponentially stable; ii) The “transverse” linearization along any solution in the manifold is exponentially stable; iii) There exists a field of positive definite quadratic forms whose

  17. Reduction of truncation errors in planar near-field aperture antenna measurements using the method of alternating orthogonal projections

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    2006-01-01

    A simple and effective procedure for the reduction of truncation error in planar near-field to far-field transformations is presented. The starting point is the consideration that the actual scan plane truncation implies a reliability of the reconstructed plane wave spectrum of the field radiated...

  18. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion. Flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
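A minimal stand-in for such a check (not the regression-based score test used in the paper) is the Pearson dispersion statistic for a Poisson fit, with a quasi-likelihood-style correction that inflates standard errors by the square root of the dispersion:

```python
def dispersion_statistic(y, mu, n_params):
    """Pearson dispersion estimate for a Poisson fit: values well above 1
    suggest overdispersion (conditional variance exceeding the mean)."""
    n = len(y)
    pearson_chi2 = sum((yi - mi) ** 2 / mi for yi, mi in zip(y, mu))
    return pearson_chi2 / (n - n_params)

def corrected_se(se, dispersion):
    """Quasi-likelihood-style correction: scale each standard error
    by sqrt(dispersion)."""
    return [s * dispersion ** 0.5 for s in se]
```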

  19. Error estimates in horocycle averages asymptotics: challenges from string theory

    NARCIS (Netherlands)

    Cardella, M.A.

    2010-01-01

    For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotic to the Riemann hypothesis. We study similar asymptotics, for modular functions with not that mild growing conditions, such as of polynomial growth and of exponential growth

  20. Finite Difference Solution of Elastic-Plastic Thin Rotating Annular Disk with Exponentially Variable Thickness and Exponentially Variable Density

    Directory of Open Access Journals (Sweden)

    Sanjeev Sharma

    2013-01-01

Elastic-plastic stresses, strains, and displacements have been obtained for a thin rotating annular disk with exponentially variable thickness and exponentially variable density, for nonlinear strain-hardening material, by the finite difference method using the von Mises yield criterion. Results have been computed numerically and depicted graphically. From the numerical results, it can be concluded that a disk whose thickness decreases radially and density increases radially is on the safer side of design as compared to the disk with exponentially varying thickness and exponentially varying density, as well as to a flat disk.

  1. Simultaneous optical image compression and encryption using error-reduction phase retrieval algorithm

    International Nuclear Information System (INIS)

    Liu, Wei; Liu, Shutian; Liu, Zhengjun

    2015-01-01

We report a simultaneous image compression and encryption scheme based on solving a typical optical inverse problem. The secret images to be processed are multiplexed as the input intensities of a cascaded diffractive optical system. At the output plane, compressed complex-valued data with far fewer measurements can be obtained by utilizing an error-reduction phase retrieval algorithm. The magnitude of the output image can serve as the final ciphertext while its phase serves as the decryption key. Therefore the compression and encryption are simultaneously completed without additional encoding and filtering operations. The proposed strategy can be straightforwardly applied to the existing optical security systems that involve diffraction and interference. Numerical simulations are performed to demonstrate the validity and security of the proposal. (paper)
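The error-reduction step itself is the classic Gerchberg–Saxton alternation between a Fourier-magnitude constraint and an object-domain constraint; a toy 1D sketch (naive DFT, an arbitrary support constraint, and none of the paper's cascaded diffractive setup) might look like:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def error_reduction_step(x, target_mag, support):
    """One error-reduction iteration: impose the measured Fourier
    magnitudes (keeping the current phases), then re-impose the
    object-domain support constraint."""
    X = dft(x)
    X = [m * (Xk / abs(Xk)) if abs(Xk) > 1e-12 else m
         for m, Xk in zip(target_mag, X)]
    x = idft(X)
    return [xj if s else 0.0 for xj, s in zip(x, support)]

def fourier_error(x, target_mag):
    """Squared mismatch between current and target Fourier magnitudes."""
    return sum((abs(Xk) - m) ** 2 for Xk, m in zip(dft(x), target_mag))
```

A known property of this iteration is that the error is non-increasing, which is what makes it usable as the reconstruction engine in schemes like the one above.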

  2. Exponential Hilbert series of equivariant embeddings

    OpenAIRE

    Johnson, Wayne A.

    2018-01-01

    In this article, we study properties of the exponential Hilbert series of a $G$-equivariant projective variety, where $G$ is a semisimple, simply-connected complex linear algebraic group. We prove a relationship between the exponential Hilbert series and the degree and dimension of the variety. We then prove a combinatorial identity for the coefficients of the polynomial representing the exponential Hilbert series. This formula is used in examples to prove further combinatorial identities inv...

  3. Exponential and Logarithmic Functions

    OpenAIRE

    Todorova, Tamara

    2010-01-01

    Exponential functions find applications in economics in relation to growth and economic dynamics. In these fields, quite often the choice variable is time and economists are trying to determine the best timing for certain economic activities to take place. An exponential function is one in which the independent variable appears in the exponent. Very often that exponent is time. In highly mathematical courses, it is a truism that students learn by doing, not by reading. Tamara Todorova’s Pr...

  4. Master-slave exponential synchronization of delayed complex-valued memristor-based neural networks via impulsive control.

    Science.gov (United States)

    Li, Xiaofan; Fang, Jian-An; Li, Huiyuan

    2017-09-01

    This paper investigates master-slave exponential synchronization for a class of complex-valued memristor-based neural networks with time-varying delays via discontinuous impulsive control. Firstly, the master and slave complex-valued memristor-based neural networks with time-varying delays are translated to two real-valued memristor-based neural networks. Secondly, an impulsive control law is constructed and utilized to guarantee master-slave exponential synchronization of the neural networks. Thirdly, the master-slave synchronization problems are transformed into the stability problems of the master-slave error system. By employing linear matrix inequality (LMI) technique and constructing an appropriate Lyapunov-Krasovskii functional, some sufficient synchronization criteria are derived. Finally, a numerical simulation is provided to illustrate the effectiveness of the obtained theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Exponential x-ray transform

    International Nuclear Information System (INIS)

    Hazou, I.A.

    1986-01-01

In emission computed tomography one wants to determine the location and intensity of radiation emitted by sources in the presence of an attenuating medium. If the attenuation is known everywhere and equals a constant α in a convex neighborhood of the support of f, then the problem reduces to that of inverting the exponential x-ray transform P_α. The exponential x-ray transform P_μ with the attenuation μ variable, is of interest mathematically. For the exponential x-ray transform in two dimensions, it is shown that for a large class of approximate δ functions E, convolution kernels K exist for use in the convolution backprojection algorithm. For the case where the attenuation is constant, exact formulas are derived for calculating the convolution kernels from radial point spread functions. From these an exact inversion formula for the constantly attenuated transform is obtained
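For reference, the constant-attenuation transform P_α discussed here is conventionally written as a weighted line integral (a standard textbook form; sign and parameterization conventions vary between authors):

```latex
(P_\alpha f)(\theta, s) = \int_{\mathbb{R}} f\left(s\theta^{\perp} + t\theta\right)\, e^{\alpha t}\, \mathrm{d}t, \qquad \theta \in S^1,
```

with the variable-attenuation transform P_μ obtained by replacing the weight e^{αt} with the exponential of the attenuation μ integrated along the ray.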

  6. On the formation of exponential discs

    International Nuclear Information System (INIS)

    Yoshii, Yuzuru; Sommer-Larsen, Jesper

    1989-01-01

Spiral galaxy discs are characterized by approximately exponential surface luminosity profiles. In this paper the evolutionary equations for a star-forming, viscous disc are solved analytically or semi-analytically. It is shown that approximately exponential stellar surface density profiles result if the viscous time-scale t_ν is comparable to the star-formation time-scale t_* everywhere in the disc. The analytical solutions are used to further illuminate why the above mechanism leads to exponential stellar profiles under certain conditions. The sensitivity of the solution to variations of various parameters is investigated, showing that the initial gas surface density distribution has to be fairly regular in order that final exponential stellar surface density profiles result. (author)

  7. Errors and mistakes in the traditional optimum design of experiments on exponential absorption

    International Nuclear Information System (INIS)

    Burge, E.J.

    1977-01-01

    The treatment of statistical errors in absorption experiments using particle counters, given by Rose and Shapiro (1948), is shown to be incorrect for non-zero background counts. For the simplest case of only one absorber thickness, revised conditions are computed for the optimum geometry and the best apportionment of counting times for the incident and transmitted beams for a wide range of relative backgrounds (0, 10 -5 -10 2 ). The two geometries of Rose and Shapiro are treated, (I) beam area fixed, absorber thickness varied, and (II) beam area and absorber thickness both varied, but with effective volume of absorber constant. For case (I) the new calculated errors in the absorption coefficients are shown to be about 0.7 of the Rose and Shapiro values for the largest background, and for case (II) about 0.4. The corresponding fractional times for background counts are (I) 0.7 and (II) 0.07 of those given by Rose and Shapiro. For small backgrounds the differences are negligible. Revised values are also computed for the sensitivity of the accuracy to deviations from optimum transmission. (Auth.)

  8. Reduction of weighing errors caused by tritium decay heating

    International Nuclear Information System (INIS)

    Shaw, J.F.

    1978-01-01

The deuterium-tritium source gas mixture for laser targets is formulated by weight. Experiments show that the maximum weighing error caused by tritium decay heating is 0.2% for a 104-cm³ mix vessel. Air cooling the vessel reduces the weighing error by 90%.

  9. On Using Exponential Parameter Estimators with an Adaptive Controller

    Science.gov (United States)

    Patre, Parag; Joshi, Suresh M.

    2011-01-01

    Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.

  10. Robust D-optimal designs under correlated error, applicable invariantly for some lifetime distributions

    International Nuclear Information System (INIS)

    Das, Rabindra Nath; Kim, Jinseog; Park, Jeong-Soo

    2015-01-01

    In quality engineering, the most commonly used lifetime distributions are log-normal, exponential, gamma and Weibull. Experimental designs are useful for predicting the optimal operating conditions of the process in lifetime improvement experiments. In the present article, invariant robust first-order D-optimal designs are derived for correlated lifetime responses having the above four distributions. Robust designs are developed for some correlated error structures. It is shown that robust first-order D-optimal designs for these lifetime distributions are always robust rotatable but the converse is not true. Moreover, it is observed that these designs depend on the respective error covariance structure but are invariant to the above four lifetime distributions. This article generalizes the results of Das and Lin [7] for the above four lifetime distributions with general (intra-class, inter-class, compound symmetry, and tri-diagonal) correlated error structures. - Highlights: • This paper presents invariant robust first-order D-optimal designs under correlated lifetime responses. • The results of Das and Lin [7] are extended for the four lifetime (log-normal, exponential, gamma and Weibull) distributions. • This paper also generalizes the results of Das and Lin [7] to more general correlated error structures

  11. Analysis of gross error rates in operation of commercial nuclear power stations

    International Nuclear Information System (INIS)

    Joos, D.W.; Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    Experience in operation of US commercial nuclear power plants is reviewed over a 25-month period. The reports accumulated in that period on events of human error and component failure are examined to evaluate gross operator error rates. The impact of such errors on plant operation and safety is examined through the use of proper taxonomies of error, tasks and failures. Four categories of human errors are considered; namely, operator, maintenance, installation and administrative. The computed error rates are used to examine appropriate operator models for evaluation of operator reliability. Human error rates are found to be significant to a varying degree in both BWR and PWR. This emphasizes the import of considering human factors in safety and reliability analysis of nuclear systems. The results also indicate that human errors, and especially operator errors, do indeed follow the exponential reliability model. (Auth.)
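Under the exponential reliability model, the analysis reduces to estimating a constant error rate λ from observed events and exposure time; a minimal sketch (the numbers in the usage below are invented, not the study's data):

```python
import math

def error_rate_mle(n_errors, total_exposure_hours):
    """Maximum-likelihood estimate of a constant error rate lambda,
    assuming errors arrive as a Poisson process in operating time."""
    return n_errors / total_exposure_hours

def reliability(t_hours, lam):
    """Exponential reliability model: probability of operating
    t_hours with no error, R(t) = exp(-lambda * t)."""
    return math.exp(-lam * t_hours)
```

For example, 5 recorded errors over 1000 plant-hours give λ = 0.005 per hour, and R(t) then decays exponentially with operating time.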

  12. Implicit and fully implicit exponential finite difference methods

    Indian Academy of Sciences (India)

Burgers' equation; exponential finite difference method; implicit exponential finite difference method; ... This paper describes two new techniques which give improved exponential finite difference solutions of Burgers' equation.

  13. SU-F-T-241: Reduction in Planning Errors Via a Process Control Developed Using the Eclipse Scripting API

    Energy Technology Data Exchange (ETDEWEB)

    Barbee, D; McCarthy, A; Galavis, P; Xu, A [NYU Langone Medical Center, New York, NY (United States)

    2016-06-15

Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g. minimum EDW MUs), in addition to ensuring that policy and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The number of potential failure modes the PlanCheck script is currently capable of checking for in the following categories: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse scripting API enabled plan checks to occur within the planning system, resulting in reduction in error rates and improved efficiency. Future work includes: initiating full FMEA for planning workflow, extending categories to include additional checks outside of ESAPI via Aria

  14. Chemical model reduction under uncertainty

    KAUST Repository

    Malpica Galassi, Riccardo

    2017-03-06

    A general strategy for analysis and reduction of uncertain chemical kinetic models is presented, and its utility is illustrated in the context of ignition of hydrocarbon fuel–air mixtures. The strategy is based on a deterministic analysis and reduction method which employs computational singular perturbation analysis to generate simplified kinetic mechanisms, starting from a detailed reference mechanism. We model uncertain quantities in the reference mechanism, namely the Arrhenius rate parameters, as random variables with prescribed uncertainty factors. We propagate this uncertainty to obtain the probability of inclusion of each reaction in the simplified mechanism. We propose probabilistic error measures to compare predictions from the uncertain reference and simplified models, based on the comparison of the uncertain dynamics of the state variables, where the mixture entropy is chosen as progress variable. We employ the construction for the simplification of an uncertain mechanism in an n-butane–air mixture homogeneous ignition case, where a 176-species, 1111-reactions detailed kinetic model for the oxidation of n-butane is used with uncertainty factors assigned to each Arrhenius rate pre-exponential coefficient. This illustration is employed to highlight the utility of the construction, and the performance of a family of simplified models produced depending on chosen thresholds on importance and marginal probabilities of the reactions.

  15. Continuous multivariate exponential extension

    International Nuclear Information System (INIS)

    Block, H.W.

    1975-01-01

    The Freund-Weinman multivariate exponential extension is generalized to the case of nonidentically distributed marginal distributions. A fatal shock model is given for the resulting distribution. Results in the bivariate case and the concept of constant multivariate hazard rate lead to a continuous distribution related to the multivariate exponential distribution (MVE) of Marshall and Olkin. This distribution is shown to be a special case of the extended Freund-Weinman distribution. A generalization of the bivariate model of Proschan and Sullo leads to a distribution which contains both the extended Freund-Weinman distribution and the MVE

  16. Exponential Frequency Spectrum in Magnetized Plasmas

    International Nuclear Information System (INIS)

    Pace, D. C.; Shi, M.; Maggs, J. E.; Morales, G. J.; Carter, T. A.

    2008-01-01

    Measurements of a magnetized plasma with a controlled electron temperature gradient show the development of a broadband spectrum of density and temperature fluctuations having an exponential frequency dependence at frequencies below the ion cyclotron frequency. The origin of the exponential frequency behavior is traced to temporal pulses of Lorentzian shape. Similar exponential frequency spectra are also found in limiter-edge plasma turbulence associated with blob transport. This finding suggests a universal feature of magnetized plasma turbulence leading to nondiffusive, cross-field transport, namely, the presence of Lorentzian shaped pulses
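The link between Lorentzian-shaped pulses and exponential frequency spectra is the classical Fourier pair (written here with one common normalization; the paper's conventions may differ):

```latex
L(t) = \frac{A\,\tau}{(t-t_0)^2 + \tau^2}
\quad\Longleftrightarrow\quad
\widehat{L}(\omega) = \int_{-\infty}^{\infty} L(t)\, e^{-i\omega t}\, \mathrm{d}t
= A\pi\, e^{-i\omega t_0}\, e^{-\tau|\omega|},
```

so isolated pulses of characteristic width τ produce a power spectrum falling off as e^{-2τ|ω|}, i.e. exponentially in frequency.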

  17. The Parity of Set Systems under Random Restrictions with Applications to Exponential Time Problems

    DEFF Research Database (Denmark)

    Björklund, Andreas; Dell, Holger; Husfeldt, Thore

    2015-01-01

    problems. We find three applications of our reductions: 1. An exponential-time algorithm: We show how to decide Hamiltonicity in directed n-vertex graphs with running time 1.9999^n provided that the graph has at most 1.0385^n Hamiltonian cycles. We do so by reducing to the algorithm of Björklund...

  18. Phenomenology of stochastic exponential growth

    Science.gov (United States)

    Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya

    2017-06-01

    Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM, instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
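For contrast with the more general models advocated above, the GBM baseline can be sampled exactly at grid points via its log-space update; all parameters below are arbitrary illustrative choices. Note that for GBM the variance of log-size grows linearly in time, which is why its mean-rescaled distributions do not settle into a stationary shape:

```python
import math
import random

def simulate_gbm_finals(s0=1.0, mu=0.1, sigma=0.2, t_end=1.0,
                        n_steps=100, n_paths=500, seed=7):
    """Exact grid-point sampling of geometric Brownian motion
    dS = mu*S dt + sigma*S dW, via the log-space update
    S <- S * exp((mu - sigma^2/2) dt + sigma*sqrt(dt)*Z)."""
    random.seed(seed)
    dt = t_end / n_steps
    drift = (mu - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    finals = []
    for _ in range(n_paths):
        log_s = math.log(s0)
        for _ in range(n_steps):
            log_s += drift + vol * random.gauss(0.0, 1.0)
        finals.append(math.exp(log_s))
    return finals
```

The sample mean of log(S_T) clusters around (mu - sigma^2/2)*t_end, the exponential growth exponent of the typical trajectory.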

  19. Yield shear stress model of magnetorheological fluids based on exponential distribution

    International Nuclear Information System (INIS)

    Guo, Chu-wen; Chen, Fei; Meng, Qing-rui; Dong, Zi-xin

    2014-01-01

The magnetic chain model that considers the interaction between particles and the external magnetic field in a magnetorheological fluid has been widely accepted. Based on the chain model, a yield shear stress model of magnetorheological fluids was proposed by introducing the exponential distribution to describe the distribution of angles between the direction of the magnetic field and the chains formed by magnetic particles. The main influencing factors were considered in the model, such as magnetic flux density, intensity of the magnetic field, particle size, volume fraction of particles, the angle of the magnetic chain, and so on. The effect of magnetic flux density on the yield shear stress was discussed. The yield stress of aqueous Fe₃O₄ magnetorheological fluids with volume fractions of 7.6% and 16.2% was measured by a device of our own design. The results indicate that the proposed model can be used for calculation of yield shear stress with acceptable errors. - Highlights: • A yield shear stress model of magnetorheological fluids was proposed. • Use exponential distribution to describe the distribution of magnetic chain angles. • Experimental and predicted results were in good agreement for 2 types of MR fluids

  20. Exponential current pulse generation for efficient very high-impedance multisite stimulation.

    Science.gov (United States)

    Ethier, S; Sawan, M

    2011-02-01

We describe in this paper an intracortical current-pulse generator for high-impedance microstimulation. This dual-chip system features a stimuli generator and a high-voltage electrode driver. The stimuli generator produces flexible rising exponential pulses in addition to standard rectangular stimuli. This novel stimulation waveform is expected to provide superior energy efficiency for action potential triggering while releasing less toxic reduced ions in the cortical tissues. The proposed fully integrated electrode driver is used as the output stage, where high-voltage supplies are generated on-chip to significantly increase the voltage compliance for stimulation through high-impedance electrode-tissue interfaces. The stimuli generator has been implemented in 0.18-μm CMOS technology while a 0.8-μm CMOS/DMOS process has been used to integrate the high-voltage output stage. Experimental results show that the rectangular pulses cover a range of 1.6 to 167.2 μA with a DNL and an INL of 0.098 and 0.163 least-significant bit, respectively. The maximal dynamic range of the generated exponential pulse reaches 34.36 dB at full scale within an error of ±0.5 dB, while all of its parameters (amplitude, duration, and time constant) are independently programmable over wide ranges. This chip consumes a maximum of 88.3 μW in the exponential mode. High-voltage supplies of 8.95 and -8.46 V are generated by the output stage, boosting the voltage swing up to 13.6 V for a load as high as 100 kΩ.
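One plausible parameterization of such a rising exponential stimulus (an illustrative guess at the waveform shape; the chip's actual DAC coding is not described here) starts at zero and reaches the programmed amplitude at the end of the pulse, with amplitude, duration, and time constant as independent parameters:

```python
import math

def rising_exponential_pulse(i_peak, duration, tau, n_samples=64):
    """Sample a rising exponential current pulse that starts at 0 and
    reaches i_peak at t = duration. The three arguments mirror the
    independently programmable parameters named in the abstract."""
    scale = i_peak / (math.exp(duration / tau) - 1.0)
    return [scale * (math.exp((k * duration / (n_samples - 1)) / tau) - 1.0)
            for k in range(n_samples)]
```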

  1. On the conditions of exponential stability in active disturbance rejection control based on singular perturbation analysis

    Science.gov (United States)

    Shao, S.; Gao, Z.

    2017-10-01

Stability of active disturbance rejection control (ADRC) is analysed in the presence of unknown, nonlinear, and time-varying dynamics. In the framework of singular perturbations, the closed-loop error dynamics are semi-decoupled into a relatively slow subsystem (the feedback loop) and a relatively fast subsystem (the extended state observer), respectively. It is shown, analytically and geometrically, that there exists a unique exponentially stable solution if the size of the initial observer error is sufficiently small, i.e., of the same order as the inverse of the observer bandwidth. The process of developing the uniformly asymptotic solution of the system reveals the condition on the stability of the ADRC and the relationship between the rate of change in the total disturbance and the size of the estimation error. The differentiability of the total disturbance is the only assumption made.

  2. A Bayesian approach for the stochastic modeling error reduction of magnetic material identification of an electromagnetic device

    International Nuclear Information System (INIS)

    Abdallh, A; Crevecoeur, G; Dupré, L

    2012-01-01

Magnetic material properties of an electromagnetic device can be recovered by solving an inverse problem where measurements are adequately interpreted by a mathematical forward model. The accuracy of these forward models dramatically affects the accuracy of the material properties recovered by the inverse problem. The more accurate the forward model is, the more accurate the recovered data are. However, the more accurate ‘fine’ models demand high computational time and memory storage. Alternatively, less accurate ‘coarse’ models can be used, at the cost of higher expected recovery errors. This paper uses the Bayesian approximation error approach for improving the inverse problem results when coarse models are utilized. The proposed approach adapts the objective function to be minimized with the a priori misfit between fine and coarse forward model responses. In this paper, two different electromagnetic devices, namely a switched reluctance motor and an EI core inductor, are used as case studies. The proposed methodology is validated on both purely numerical and real experimental results. The results show a significant reduction in the recovery error within an acceptable computational time. (paper)

  3. Exponential Synchronization of Networked Chaotic Delayed Neural Network by a Hybrid Event Trigger Scheme.

    Science.gov (United States)

    Fei, Zhongyang; Guan, Chaoxu; Gao, Huijun

    2018-06-01

    This paper is concerned with the exponential synchronization of a master-slave chaotic delayed neural network under an event-trigger control scheme. The model is established in a network control framework, where both external disturbance and network-induced delay are taken into consideration. The aim is to synchronize the master and slave systems under limited communication capacity and network bandwidth. In order to save network resources, we adopt a hybrid event-trigger approach, which not only reduces the number of data packets sent out, but also rules out the Zeno phenomenon. By using an appropriate Lyapunov functional, a sufficient criterion for the stability of the error system with an extended dissipativity performance index is proposed. Moreover, the hybrid event-trigger scheme and controller are co-designed for the network-based delayed neural network to guarantee exponential synchronization between the master and slave systems. The effectiveness and potential of the proposed results are demonstrated through a numerical example.

  4. Exponential Expansion in Evolutionary Economics

    DEFF Research Database (Denmark)

    Frederiksen, Peter; Jagtfelt, Tue

    2013-01-01

    This article attempts to solve current problems of conceptual fragmentation within the field of evolutionary economics. One of the problems, as noted by a number of observers, is that the field suffers from an assemblage of fragmented and scattered concepts (Boschma and Martin 2010). A solution to this problem is proposed in the form of a model of exponential expansion. The model outlines the overall structure and function of the economy as exponential expansion. The pictographic model describes four axiomatic concepts and their exponential nature. The interactive, directional, emerging and expanding concepts are described in detail. Taken together it provides the rudimentary aspects of an economic system within an analytical perspective. It is argued that the main dynamic processes of the evolutionary perspective can be reduced to these four concepts. The model and concepts are evaluated in the light...

  5. Method for nonlinear exponential regression analysis

    Science.gov (United States)

    Junkin, B. G.

    1972-01-01

    Two computer programs, developed according to two general types of exponential models, for conducting nonlinear exponential regression analysis are described. A least-squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. The programs are written in FORTRAN 5 for the Univac 1108 computer.
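    The FORTRAN programs themselves are not reproduced in the record, but the linearization-by-Taylor-expansion least-squares idea can be sketched in Python. The model form y ≈ a·exp(b·x), the starting values, and the data below are illustrative assumptions, not taken from the original report:

    ```python
    import numpy as np

    def gauss_newton_exp(x, y, a0, b0, iters=50):
        """Fit y ~ a*exp(b*x) by Gauss-Newton: at each step, linearize the
        model with a first-order Taylor expansion around the current
        parameters and solve a linear least-squares problem for the update."""
        a, b = a0, b0
        for _ in range(iters):
            f = a * np.exp(b * x)
            # Jacobian of the model with respect to (a, b)
            J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
            da, db = np.linalg.lstsq(J, y - f, rcond=None)[0]
            a, b = a + da, b + db
        return a, b

    # Illustrative data generated from known parameters (a=3, b=-1.5)
    x = np.linspace(0.0, 2.0, 50)
    y = 3.0 * np.exp(-1.5 * x)
    a, b = gauss_newton_exp(x, y, a0=1.0, b0=-1.0)
    print(a, b)   # converges to roughly a=3, b=-1.5
    ```

    On noiseless data the iteration converges to the generating parameters; with noisy data the same loop returns the nonlinear least-squares estimate.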

  6. Exponential L2-L∞ Filtering for a Class of Stochastic System with Mixed Delays and Nonlinear Perturbations

    Directory of Open Access Journals (Sweden)

    Zhaohui Chen

    2013-01-01

    Full Text Available The delay-dependent exponential L2-L∞ performance analysis and filter design are investigated for stochastic systems with mixed delays and nonlinear perturbations. Based on the delay partitioning and integral partitioning technique, an improved delay-dependent sufficient condition for the existence of the L2-L∞ filter is established, by choosing an appropriate Lyapunov-Krasovskii functional and constructing a new integral inequality. The full-order filter design approaches are obtained in terms of linear matrix inequalities (LMIs). By solving the LMIs and using matrix decomposition, the desired filter gains can be obtained, which ensure that the filter error system is exponentially stable with a prescribed L2-L∞ performance γ. Numerical examples are provided to illustrate the effectiveness and significant improvement of the proposed method.

  7. Blowing-up Semilinear Wave Equation with Exponential ...

    Indian Academy of Sciences (India)

    Blowing-up Semilinear Wave Equation with Exponential Nonlinearity in Two Space ... We investigate the initial value problem for some semi-linear wave equation in two space dimensions with exponential nonlinearity growth.

  8. Electronic prescribing reduces prescribing error in public hospitals.

    Science.gov (United States)

    Shawahna, Ramzi; Rahman, Nisar-Ur; Ahmad, Mahmood; Debray, Marcel; Yliperttula, Marjo; Declèves, Xavier

    2011-11-01

    To examine the incidence of prescribing errors in a main public hospital in Pakistan and to assess the impact of introducing an electronic prescribing system on the reduction of their incidence. Medication errors are persistent in today's healthcare system. The impact of electronic prescribing on reducing errors has not been tested in the developing world. Prospective review of medication and discharge medication charts before and after the introduction of an electronic inpatient record and prescribing system. Inpatient records (n = 3300) and 1100 discharge medication sheets were reviewed for prescribing errors before and after the installation of the electronic prescribing system in 11 wards. Medications (13,328 and 14,064) were prescribed for inpatients, among which 3008 and 1147 prescribing errors were identified, giving overall error rates of 22·6% and 8·2% during paper-based and electronic prescribing, respectively. Medications (2480 and 2790) were prescribed for discharge patients, among which 418 and 123 errors were detected, giving overall error rates of 16·9% and 4·4% during paper-based and electronic prescribing, respectively. Electronic prescribing has a significant effect on the reduction of prescribing errors. Prescribing errors are commonplace in Pakistani public hospitals. The study evaluated the impact of introducing electronic inpatient records and electronic prescribing on the reduction of prescribing errors in a public hospital in Pakistan. © 2011 Blackwell Publishing Ltd.

  9. Characterizing quantum correlations. Entanglement, uncertainty relations and exponential families

    Energy Technology Data Exchange (ETDEWEB)

    Niekamp, Soenke

    2012-04-20

    This thesis is concerned with different characterizations of multi-particle quantum correlations and with entropic uncertainty relations. The effect of statistical errors on the detection of entanglement is investigated. First, general results on the statistical significance of entanglement witnesses are obtained. Then, using an error model for experiments with polarization-entangled photons, it is demonstrated that Bell inequalities with lower violation can have higher significance. The question for the best observables to discriminate between a state and the equivalence class of another state is addressed. Two measures for the discrimination strength of an observable are defined, and optimal families of observables are constructed for several examples. A property of stabilizer bases is shown which is a natural generalization of mutual unbiasedness. For sets of several dichotomic, pairwise anticommuting observables, uncertainty relations using different entropies are constructed in a systematic way. Exponential families provide a classification of states according to their correlations. In this classification scheme, a state is considered as k-correlated if it can be written as thermal state of a k-body Hamiltonian. Witness operators for the detection of higher-order interactions are constructed, and an algorithm for the computation of the nearest k-correlated state is developed.

  10. Characterizing quantum correlations. Entanglement, uncertainty relations and exponential families

    International Nuclear Information System (INIS)

    Niekamp, Soenke

    2012-01-01

    This thesis is concerned with different characterizations of multi-particle quantum correlations and with entropic uncertainty relations. The effect of statistical errors on the detection of entanglement is investigated. First, general results on the statistical significance of entanglement witnesses are obtained. Then, using an error model for experiments with polarization-entangled photons, it is demonstrated that Bell inequalities with lower violation can have higher significance. The question for the best observables to discriminate between a state and the equivalence class of another state is addressed. Two measures for the discrimination strength of an observable are defined, and optimal families of observables are constructed for several examples. A property of stabilizer bases is shown which is a natural generalization of mutual unbiasedness. For sets of several dichotomic, pairwise anticommuting observables, uncertainty relations using different entropies are constructed in a systematic way. Exponential families provide a classification of states according to their correlations. In this classification scheme, a state is considered as k-correlated if it can be written as thermal state of a k-body Hamiltonian. Witness operators for the detection of higher-order interactions are constructed, and an algorithm for the computation of the nearest k-correlated state is developed.

  11. Discretization vs. Rounding Error in Euler's Method

    Science.gov (United States)

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
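    The trade-off described above can be illustrated with a minimal sketch; the test problem y' = y and the step counts are assumptions for illustration. It shows the first-order discretization error shrinking with stepsize — while in practice, ever-smaller steps increase the step count and eventually let accumulated rounding error dominate:

    ```python
    import math

    def euler(f, y0, t0, t1, n):
        """Euler's method with n steps of size h = (t1 - t0) / n."""
        h = (t1 - t0) / n
        t, y = t0, y0
        for _ in range(n):
            y += h * f(t, y)
            t += h
        return y

    # Test problem y' = y, y(0) = 1; the exact value at t = 1 is e.
    errs = [abs(euler(lambda t, y: y, 1.0, 0.0, 1.0, n) - math.e)
            for n in (10, 100, 1000)]
    print(errs)   # discretization error shrinks roughly like 1/n (first order)
    ```

    For double precision the rounding-error floor only becomes visible at extremely small stepsizes, which is exactly the tension the article explores.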

  12. Zero inflated negative binomial-generalized exponential distributionand its applications

    Directory of Open Access Journals (Sweden)

    Sirinapa Aryuyuen

    2014-08-01

    Full Text Available In this paper, we propose a new zero inflated distribution, namely, the zero inflated negative binomial-generalized exponential (ZINB-GE) distribution. The new distribution is used for count data with extra zeros and is an alternative for the analysis of over-dispersed count data. Some characteristics of the distribution are given, such as the mean, variance, skewness, and kurtosis. Parameters of the ZINB-GE distribution are estimated by the maximum likelihood estimation (MLE) method. Simulated and observed data are employed to examine this distribution. The results show that the MLE method seems to have high efficiency for large sample sizes. Moreover, the mean square error of the parameter estimates increases as the zero proportion grows. For the real data sets, this new zero inflated distribution provides a better fit than the zero inflated Poisson and zero inflated negative binomial distributions.

  13. A 60-dB linear VGA with novel exponential gain approximation

    International Nuclear Information System (INIS)

    Zhou Jiaye; Tan Xi; Wang Junyu; Tang Zhangwen; Min Hao

    2009-01-01

    A CMOS variable gain amplifier (VGA) that adopts a novel exponential gain approximation is presented. No additional exponential gain control circuit is required in the proposed VGA, which is used in a direct conversion receiver. A wide gain control voltage range from 0.4 to 1.8 V and high linearity performance are achieved. The three-stage VGA with automatic gain control (AGC) and DC offset cancellation (DCOC) is fabricated in a 0.18-μm CMOS technology and shows a linear gain range of more than 58 dB with a linearity error of less than ±1 dB. The 3-dB bandwidth is over 8 MHz at all gain settings. The measured input-referred third intercept point (IIP3) of the proposed VGA varies from -18.1 to 13.5 dBm, and the measured noise figure varies from 27 to 65 dB at a frequency of 1 MHz. The dynamic range of the closed-loop AGC exceeds 56 dB, where the output signal-to-noise-and-distortion ratio (SNDR) reaches 20 dB. The whole circuit, occupying 0.3 mm² of chip area, dissipates less than 3.7 mA from a 1.8-V supply.

  14. Re-Normalization Method of Doppler Lidar Signal for Error Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nakgyu; Baik, Sunghoon; Park, Seungkyu; Kim, Donglyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Dukhyeon [Hanbat National Univ., Daejeon (Korea, Republic of)

    2014-05-15

    In this paper, we present a re-normalization method for the fluctuations of Doppler signals caused by various noises, mainly due to the frequency locking error, in a Doppler lidar system. For the Doppler lidar system, we used an injection-seeded pulsed Nd:YAG laser as the transmitter and an iodine filter as the Doppler frequency discriminator. For the Doppler frequency shift measurement, the transmission ratio using the injection-seeded laser is locked to stabilize the frequency. If the frequency locking system is not perfect, the Doppler signal has some error due to the frequency locking error. The re-normalization of the Doppler signals was performed to reduce this error using an additional laser beam to an iodine cell. We confirmed that the re-normalized Doppler signal yields much more stable experimental data than the averaged Doppler signal; with our calibration method, the standard deviation was reduced to 4.838 × 10⁻³.

  15. Optimal complex exponentials BEM and channel estimation in doubly selective channel

    International Nuclear Information System (INIS)

    Song, Lijun; Lei, Xia; Yu, Feng; Jin, Maozhu

    2016-01-01

    Over a doubly selective channel, an optimal complex exponential BEM (CE-BEM) is required to characterize the transmission in the transform domain, in order to reduce the huge number of parameters estimated when directly estimating the impulse response in the time domain. This paper proposes an improved CE-BEM to alleviate the high-frequency sampling error caused by the conventional CE-BEM. On the one hand, with the improved CE-BEM the sampling points lie within the Doppler spread spectrum and the maximum sampling frequency equals the maximum Doppler shift. On the other hand, we optimize the basis functions and the basis dimension in the CE-BEM, and obtain a closed-form solution for the EM-based channel estimation using the resulting optimal BEM. Finally, the numerical results and theoretical analysis show that the basis dimension depends mainly on the maximum Doppler shift and the signal-to-noise ratio (SNR). For a fixed number of pilot symbols, a higher basis dimension gives a smaller modeling error but lower parameter-estimation accuracy, which implies a trade-off between the modeling error and the estimation accuracy; once the basis dimension is fixed, the basis functions determine how accurately the Doppler spread spectrum is described.

  16. Quantum algorithms and quantum maps - implementation and error correction

    International Nuclear Information System (INIS)

    Alber, G.; Shepelyansky, D.

    2005-01-01

    Full text: We investigate the dynamics of the quantum tent map under the influence of errors and explore the possibilities of quantum error correcting methods for the purpose of stabilizing this quantum algorithm. It is known that static but uncontrollable inter-qubit couplings between the qubits of a quantum information processor lead to a rapid Gaussian decay of the fidelity of the quantum state. We present a new error correcting method which slows down this fidelity decay to a linear-in-time exponential one. One of its advantages is that it does not require redundancy so that all physical qubits involved can be used for logical purposes. We also study the influence of decoherence due to spontaneous decay processes which can be corrected by quantum jump-codes. It is demonstrated how universal encoding can be performed in these code spaces. For this purpose we discuss a new entanglement gate which can be used for lowest level encoding in concatenated error-correcting architectures. (author)

  17. Numerical solution of matrix exponential in burn-up equation using mini-max polynomial approximation

    International Nuclear Information System (INIS)

    Kawamoto, Yosuke; Chiba, Go; Tsuji, Masashi; Narabayashi, Tadashi

    2015-01-01

    Highlights: • We propose a new numerical solution of the matrix exponential in burn-up depletion calculations. • Depletion calculations with extremely short half-lived nuclides can be performed in a numerically stable manner with this method. • The computational time is shorter than that of other conventional methods. - Abstract: Nuclear fuel burn-up depletion calculations are essential to compute the nuclear fuel composition transition. In burn-up calculations, the matrix exponential method has been widely used. In the present paper, we propose a new numerical solution of the matrix exponential, a Mini-Max Polynomial Approximation (MMPA) method. This method is numerically stable for burn-up matrices with extremely short half-lived nuclides, as is the Chebyshev Rational Approximation Method (CRAM), and it has several advantages over CRAM. We also propose a multi-step calculation, a computational time reduction scheme of the MMPA method, which can simultaneously perform burn-up calculations over several time periods. The applicability of these methods has been theoretically and numerically proved for general burn-up matrices. The numerical verification has been performed, and it has been shown that these methods have high precision, equivalent to that of CRAM.
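    Neither MMPA nor CRAM is reproduced in the record, but the underlying role of the matrix exponential in depletion calculations can be sketched with a toy two-nuclide decay chain. The decay constants and the scaling-and-squaring Taylor evaluation below are illustrative stand-ins for the paper's polynomial approximation, verified against the Bateman analytic solution:

    ```python
    import numpy as np

    def expm_taylor(A, squarings=10, terms=20):
        """Toy matrix exponential: scale A down by 2**squarings, sum a
        truncated Taylor series, then repeatedly square the result."""
        B = A / (2.0 ** squarings)
        E = np.eye(A.shape[0])
        T = np.eye(A.shape[0])
        for k in range(1, terms):
            T = T @ B / k
            E = E + T
        for _ in range(squarings):
            E = E @ E
        return E

    # Two-nuclide decay chain: parent (lam1) -> daughter (lam2) -> removed
    lam1, lam2, t = 0.3, 0.05, 10.0
    A = np.array([[-lam1, 0.0],
                  [lam1, -lam2]])
    N = expm_taylor(A * t) @ np.array([1.0, 0.0])   # composition at time t

    # Bateman analytic solution for the daughter nuclide, for comparison
    daughter = lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
    print(N, daughter)
    ```

    Real burn-up matrices are large and stiff (half-lives spanning many orders of magnitude), which is why specialized approximations such as CRAM or the proposed MMPA are used instead of a plain Taylor series.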

  18. On Uniform Exponential Trichotomy in Banach Spaces

    Directory of Open Access Journals (Sweden)

    Kovacs Monteola Ilona

    2014-06-01

    Full Text Available In this paper we consider three concepts of uniform exponential trichotomy on the half-line in the general framework of evolution operators in Banach spaces. We obtain a systematic classification of uniform exponential trichotomy concepts and the connections between them.

  19. Beam induced vacuum measurement error in BEPC II

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    When the beam in the BEPCII storage ring aborts suddenly, the measured pressure of the cold cathode gauges and ion pumps drops suddenly and then decreases gradually to the base pressure. This shows that there is a beam-induced positive error in the pressure measurement during beam operation. The error is the difference between the measured and real pressures. Right after the beam aborts, the error disappears immediately and the measured pressure then equals the real pressure. For one gauge, we can fit a non-linear pressure-time curve to its measured pressure data starting 20 seconds after a sudden beam abort. From this negative-exponential pumping-down curve, the real pressure at the moment the beam starts to abort is extrapolated. With the data of several sudden beam aborts we obtained the errors of that gauge at different beam currents and found that the error is directly proportional to the beam current, as expected. A linear data fit gives the proportionality coefficient of the equation, which we derived to evaluate the real pressure throughout operation with varied beam currents.
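    The extrapolation step can be sketched as follows; the pressures, time constant, and sampling times are synthetic, and the base pressure is assumed known, which linearizes the exponential fit:

    ```python
    import numpy as np

    # Synthetic pump-down data after a beam abort, following
    # p(t) = p_base + (p0 - p_base) * exp(-t / tau)   (invented values)
    p_base, p0_true, tau = 1e-9, 5e-8, 30.0
    t = np.linspace(20.0, 120.0, 25)      # samples from 20 s after the abort
    p = p_base + (p0_true - p_base) * np.exp(-t / tau)

    # With the base pressure known, the fit linearizes:
    # log(p - p_base) = log(p0 - p_base) - t / tau
    slope, intercept = np.polyfit(t, np.log(p - p_base), 1)
    p0_est = p_base + np.exp(intercept)   # extrapolated real pressure at t = 0
    print(p0_est)
    ```

    Extrapolating to t = 0 recovers the real pressure at the moment the abort began, which is then compared with the gauge reading during beam operation to get the beam-induced error.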

  20. Time-resolved infrared stimulated luminescence signals in feldspars: Analysis based on exponential and stretched exponential functions

    International Nuclear Information System (INIS)

    Pagonis, V.; Morthekai, P.; Singhvi, A.K.; Thomas, J.; Balaram, V.; Kitis, G.; Chen, R.

    2012-01-01

    Time-resolved infrared-stimulated luminescence (TR-IRSL) signals from feldspar samples have been the subject of several recent experimental studies. These signals are of importance in the field of luminescence dating, since they exhibit smaller fading effects than the commonly employed continuous-wave infrared signals (CW-IRSL). This paper presents a semi-empirical analysis of TR-IRSL data from feldspar samples, by using a linear combination of exponential and stretched exponential (SE) functions. The best possible estimates of the five parameters in this semi-empirical approach are obtained using five popular commercially available software packages, and by employing a variety of global optimization techniques. The results from all types of software and from the different fitting algorithms were found to be in close agreement with each other, indicating that a global optimum solution has likely been reached during the fitting process. Four complete sets of TR-IRSL data on well-characterized natural feldspars were fitted by using such a linear combination of exponential and SE functions. The dependence of the extracted fitting parameters on the stimulation temperature is discussed within the context of a recently proposed model of luminescence processes in feldspar. Three of the four feldspar samples studied in this paper are K-rich, and these exhibited different behavior at higher stimulation temperatures, than the fourth sample which was a Na-rich feldspar. The new method of analysis proposed in this paper can help isolate mathematically the more thermally stable components, and hence could lead to better dating applications in these materials. - Highlights: ► TR-IRSL from four feldspars were analyzed using exponential and stretched exponential functions. ► A variety of global optimization techniques give good agreement. ► Na-rich sample behavior is different from the three K-rich samples. ► Experimental data are fitted for stimulation temperatures

  1. Multivariate Marshall and Olkin Exponential Minification Process ...

    African Journals Online (AJOL)

    A stationary bivariate minification process with a bivariate Marshall-Olkin exponential distribution, earlier studied by Miroslav et al [15], is in this paper extended to a multivariate minification process with the multivariate Marshall and Olkin exponential distribution as its stationary marginal distribution. The innovation and the ...

  2. Improvement of the exponential experiment system for the automatical and accurate measurement of the exponential decay constant

    International Nuclear Information System (INIS)

    Shin, Hee Sung; Jang, Ji Woon; Lee, Yoon Hee; Hwang, Yong Hwa; Kim, Ho Dong

    2004-01-01

    The previous exponential experiment system has been improved for automatic and accurate axial movement of the neutron source and detector by attaching an automatic control system consisting of a Programmable Logic Controller (PLC) and a stepping motor set. An automatic control program, which controls the MCA and PLC consistently, has also been developed on the basis of the GENIE 2000 library. Exponential experiments have been carried out for Kori unit 1 spent fuel assemblies C14, J14 and G23, and Kori unit 2 spent fuel assembly J44, using the improved measurement system. As a result, the average exponential decay constants for the 4 assemblies are determined to be 0.1302, 0.1267, 0.1247, and 0.1210, respectively, with the application of Poisson regression

  3. An Analysis of Medication Errors at the Military Medical Center: Implications for a Systems Approach for Error Reduction

    National Research Council Canada - National Science Library

    Scheirman, Katherine

    2001-01-01

    An analysis was accomplished of all inpatient medication errors at a military academic medical center during the year 2000, based on the causes of medication errors as described by current research in the field...

  4. The technological singularity and exponential medicine

    Directory of Open Access Journals (Sweden)

    Iraj Nabipour

    2016-01-01

    Full Text Available The "technological singularity" is forecast to occur in 2045. It is the point when non-biological intelligence becomes more intelligent than humans and each generation of intelligent machines re-designs itself to be smarter. Beyond this point, there is a symbiosis between machines and humans. This co-existence will produce incredible impacts on medicine, whose sparks could be seen in the healthcare industry and the future of medicine from 2025. Ray Kurzweil, the great futurist, suggested that three revolutions in science and technology, consisting of genetics and molecular science, nanotechnology, and robotics (artificial intelligence), provide an exponential growth rate for medicine. This "exponential medicine" is going to create more disruptive technologies in the healthcare industry. Exponential medicine shifts the paradigm of medical philosophy and produces significant impacts on the healthcare system and the patient-physician relationship.

  5. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    Science.gov (United States)

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling
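    A minimal simulation conveys the qualitative distinction between the two error types. This is a linear toy model with assumed unit variances, not the study's Poisson GLM of emergency department visits; note that in a linear model Berkson error leaves the slope essentially unbiased, whereas the log-linear Poisson setting of the study can bias it away from the null:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    beta = 0.4                                   # true effect per unit exposure

    x_true = rng.normal(size=n)                  # true exposure (log scale)
    y = beta * x_true + rng.normal(size=n)       # toy continuous outcome

    # Classical-type error: observed = true + independent noise
    x_obs = x_true + rng.normal(size=n)
    slope_classical = np.polyfit(x_obs, y, 1)[0]

    # Berkson-type error: true = observed + independent noise
    x_berk = rng.normal(size=n)
    y_berk = beta * (x_berk + rng.normal(size=n)) + rng.normal(size=n)
    slope_berkson = np.polyfit(x_berk, y_berk, 1)[0]

    print(slope_classical, slope_berkson)
    ```

    With equal exposure and error variances the classical slope attenuates by the reliability ratio 1/(1+1) = 0.5, landing near 0.2, while the Berkson slope stays near the true 0.4; both estimates also become noisier, mirroring the reduced statistical significance reported above.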

  6. Exponential Stability of Switched Positive Homogeneous Systems

    Directory of Open Access Journals (Sweden)

    Dadong Tian

    2017-01-01

    Full Text Available This paper studies the exponential stability of switched positive nonlinear systems defined by cooperative and homogeneous vector fields. In order to capture the decay rate of such systems, we first consider the subsystems. A sufficient condition for exponential stability of subsystems with time-varying delays is derived. In particular, for the corresponding delay-free systems, we prove that this sufficient condition is also necessary. Then, we present a sufficient condition of exponential stability under minimum dwell time switching for the switched positive nonlinear systems. Some results in the previous literature are extended. Finally, a numerical example is given to demonstrate the effectiveness of the obtained results.

  7. Central limit theorem and deformed exponentials

    International Nuclear Information System (INIS)

    Vignat, C; Plastino, A

    2007-01-01

    The central limit theorem (CLT) can be ranked among the most important ones in probability theory and statistics and plays an essential role in several basic and applied disciplines, notably in statistical thermodynamics. We show that there exists a natural extension of the CLT from exponentials to so-called deformed exponentials (also denoted as q-Gaussians). Our proposal applies exactly in the usual conditions in which the classical CLT is used. (fast track communication)

  8. Identifying systematic DFT errors in catalytic reactions

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence of the applied exchange–correlation functional on the reaction energies rather than on errors versus the experimental data. As a result, improved energy corrections can now be determined for both gas phase and adsorbed reaction species, particularly interesting within heterogeneous catalysis. We show that for the CO2 reduction reactions, the main source of error is associated with the C=O bonds and not the typically energy corrected OCO backbone.

  9. Lake Area Analysis Using Exponential Smoothing Model and Long Time-Series Landsat Images in Wuhan, China

    Directory of Open Access Journals (Sweden)

    Gonghao Duan

    2018-01-01

    Full Text Available The loss of lake area significantly influences climate change in a region, and this loss represents a serious and unavoidable challenge to maintaining ecological sustainability under the circumstances of lakes being filled in. Therefore, mapping and forecasting changes in the lakes is critical for protecting the environment and mitigating ecological problems in the urban district. We created an accessible map displaying area changes for 82 lakes in Wuhan using remote sensing data in conjunction with visual interpretation, by combining field data with Landsat 2/5/7/8 Thematic Mapper (TM) time-series images for the period 1987–2013. In addition, we applied a quadratic exponential smoothing model to forecast lake area changes in Wuhan. The map provides, for the first time, estimates of lake development in Wuhan using data required for local-scale studies. The model predicted a lake area reduction of 18.494 km² in 2015. The average error reached 0.23 with a correlation coefficient of 0.98, indicating that the model is reliable. The paper provides a numerical analysis and forecasting method for a better understanding of lake area changes. The modeling and mapping results can help assess aquatic habitat suitability and property planning for Wuhan lakes.
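    The abstract does not specify the exact smoothing formulation; a common reading of "quadratic exponential smoothing" is Brown's one-parameter quadratic smoothing, sketched here on a hypothetical shrinking-area series (the data and the choice of α are invented for illustration, not the Wuhan values):

    ```python
    def brown_quadratic_forecast(series, alpha, m):
        """Brown's one-parameter quadratic exponential smoothing: three
        cascaded smoothings, then an order-2 polynomial forecast m steps
        beyond the end of the series."""
        s1 = s2 = s3 = series[0]
        for x in series[1:]:
            s1 = alpha * x + (1 - alpha) * s1
            s2 = alpha * s1 + (1 - alpha) * s2
            s3 = alpha * s2 + (1 - alpha) * s3
        a = 3 * s1 - 3 * s2 + s3
        b = alpha / (2 * (1 - alpha) ** 2) * (
            (6 - 5 * alpha) * s1 - 2 * (5 - 4 * alpha) * s2 + (4 - 3 * alpha) * s3)
        c = (alpha / (1 - alpha)) ** 2 * (s1 - 2 * s2 + s3)
        return a + b * m + 0.5 * c * m ** 2

    # Hypothetical shrinking lake-area series (km^2), one value per year
    areas = [24.0, 23.1, 22.4, 21.5, 20.9, 20.1, 19.6]
    forecast = brown_quadratic_forecast(areas, alpha=0.5, m=2)
    print(forecast)   # projects the decline two years beyond the series
    ```

    The quadratic term lets the forecast follow a curving trend rather than a straight line, which is why a second-order scheme suits a decline that accelerates or levels off.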

  10. Error Analysis for Fourier Methods for Option Pricing

    KAUST Repository

    Häppölä, Juho

    2016-01-06

    We provide a bound for the error committed when using a Fourier method to price European options when the underlying follows an exponential Levy dynamic. The price of the option is described by a partial integro-differential equation (PIDE). Applying a Fourier transformation to the PIDE yields an ordinary differential equation that can be solved analytically in terms of the characteristic exponent of the Levy process. Then, a numerical inverse Fourier transform allows us to obtain the option price. We present a novel bound for the error and use this bound to set the parameters for the numerical method. We analyze the properties of the bound for a dissipative and pure-jump example. The bound presented is independent of the asymptotic behaviour of option prices at extreme asset prices. The error bound can be decomposed into a product of terms resulting from the dynamics and the option payoff, respectively. The analysis is supplemented by numerical examples that demonstrate results comparable to and superior to the existing literature.

  11. Effect of benzalkonium chloride on viability and energy metabolism in exponential- and stationary-growth-phase cells of Listeria monocytogenes

    NARCIS (Netherlands)

    Luppens, S.B.I.; Abee, T.; Oosterom, J.

    2001-01-01

    The difference in killing exponential- and stationary-phase cells of Listeria monocytogenes by benzalkonium chloride (BAC) was investigated by plate counting and linked to relevant bioenergetic parameters. At a low concentration of BAC (8 mg liter-1), a similar reduction in viable cell numbers was

  12. Spatial-temporal analysis of wind power forecast errors for West-Coast Norway

    Energy Technology Data Exchange (ETDEWEB)

    Revheim, Paal Preede; Beyer, Hans Georg [Agder Univ. (UiA), Grimstad (Norway). Dept. of Engineering Sciences

    2012-07-01

    In this paper the spatial-temporal structure of forecast errors for wind power in West-Coast Norway is analyzed. Starting from a qualitative analysis of the forecast error reduction, with respect to single-site data, for the lumped conditions of groups of sites, the spatial and temporal correlations of the wind power forecast errors within and between these groups are studied in detail. Based on this, time-series regression models are set up to analytically describe the error reduction. The models give an expected reduction in forecast error between 48.4% and 49%. (orig.)
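
    The error reduction from lumping sites together can be illustrated with a toy equicorrelated-error model (the correlation value and site count below are hypothetical, not the paper's data): the pooled error variance is rho + (1 - rho)/n times the single-site variance, so correlation limits how much aggregation can help.

```python
import math
import random
import statistics

random.seed(1)
n_sites, n_hours, rho = 10, 20_000, 0.4   # hypothetical equicorrelation setup

# equicorrelated site errors: e_i = sqrt(rho)*common + sqrt(1-rho)*local
errors = []
for _ in range(n_hours):
    common = random.gauss(0.0, 1.0)
    errors.append([math.sqrt(rho) * common
                   + math.sqrt(1.0 - rho) * random.gauss(0.0, 1.0)
                   for _ in range(n_sites)])

single_rmse = statistics.pstdev(e[0] for e in errors)
pooled_rmse = statistics.pstdev(sum(e) / n_sites for e in errors)
reduction = 1.0 - pooled_rmse / single_rmse

# theory: pooled variance = (rho + (1 - rho)/n) * single-site variance
print(f"relative error reduction: {reduction:.1%}")   # ≈ 32% for these values
```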

  13. The McDonald exponentiated gamma distribution and its statistical properties

    OpenAIRE

    Al-Babtain, Abdulhakim A; Merovci, Faton; Elbatal, Ibrahim

    2015-01-01

    Abstract In this paper, we propose a five-parameter lifetime model called the McDonald exponentiated gamma distribution to extend beta exponentiated gamma, Kumaraswamy exponentiated gamma and exponentiated gamma, among several other models. We provide a comprehensive mathematical treatment of this distribution. We derive the moment generating function and the rth moment. We discuss estimation of the parameters by maximum likelihood and provide the information matrix. AMS Subject Classificatio...

  14. Exponential Shear Flow of Linear, Entangled Polymeric Liquids

    DEFF Research Database (Denmark)

    Neergaard, Jesper; Park, Kyungho; Venerus, David C.

    2000-01-01

    A previously proposed reptation model is used to interpret exponential shear flow data taken on an entangled polystyrene solution. Both shear and normal stress measurements are made during exponential shear using mechanical means. The model is capable of explaining all trends seen in the data, and suggests a novel analysis of the data. This analysis demonstrates that exponential shearing flow is no more capable of stretching polymer chains than is inception of steady shear at comparable instantaneous shear rates. In fact, all exponential shear flow stresses measured are bounded quantitatively...

  15. Dual exponential polynomials and linear differential equations

    Science.gov (United States)

    Wen, Zhi-Tao; Gundersen, Gary G.; Heittokangas, Janne

    2018-01-01

    We study linear differential equations with exponential polynomial coefficients, where exactly one coefficient is of order greater than all the others. The main result shows that a nontrivial exponential polynomial solution of such an equation has a certain dual relationship with the maximum order coefficient. Several examples illustrate our results and exhibit possibilities that can occur.

  16. Simultaneous determination of exponential background and Gaussian peak functions in gamma ray scintillation spectrometers by maximum likelihood technique

    International Nuclear Information System (INIS)

    Eisler, P.; Youl, S.; Lwin, T.; Nelson, G.

    1983-01-01

    Simultaneous fitting of peaks and background functions from gamma-ray spectrometry using multichannel pulse height analysis is considered. The specific case of a Gaussian peak and an exponential background is treated in detail with respect to simultaneous estimation of both functions, using a technique which incorporates the maximum likelihood method as well as a graphical method. Theoretical expressions for the standard errors of the estimates are also obtained. The technique is demonstrated for two experimental data sets. (orig.)
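
    The simultaneous-estimation objective can be sketched as a joint Poisson negative log-likelihood over all channels; the parameter values and the crude grid search below are illustrative stand-ins for the paper's maximum likelihood technique:

```python
import math

def model(x, amp, mu, sigma, bkg, tau):
    """Expected counts in channel x: Gaussian peak on an exponential background."""
    return amp * math.exp(-0.5 * ((x - mu) / sigma) ** 2) + bkg * math.exp(-x / tau)

def poisson_nll(params, channels, counts):
    """Joint Poisson negative log-likelihood over all channels
    (the constant log(y!) terms are dropped)."""
    total = 0.0
    for x, y in zip(channels, counts):
        m = model(x, *params)
        total += m - y * math.log(m)
    return total

truth = (200.0, 60.0, 4.0, 500.0, 30.0)        # amp, mu, sigma, bkg, tau
channels = range(1, 120)
counts = [model(x, *truth) for x in channels]  # idealised, noise-free spectrum

# simultaneous fit sketch: crude grid over peak amplitude and position,
# with the remaining parameters held at their true values for brevity
best = min((poisson_nll((a, m, 4.0, 500.0, 30.0), channels, counts), a, m)
           for a in range(150, 251, 10) for m in range(50, 71))
print(best[1:])   # → (200, 60): the grid point at the true peak wins
```

    In practice a proper optimiser replaces the grid, but the key point of the record survives: peak and background parameters enter one likelihood and are estimated together rather than sequentially.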

  17. At least some errors are randomly generated (Freud was wrong)

    Science.gov (United States)

    Sellen, A. J.; Senders, J. W.

    1986-01-01

    An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three digital numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness of fit tests for a Poisson distribution for the number of errors per 50 trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task driven factors producing exogenous errors. Some errors, at least, are the result of constant probability generating mechanisms with error rate idiosyncratically determined for each subject.
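
    The two goodness-of-fit ideas used here (Poisson-distributed errors per block, exponentially distributed inter-error intervals) can be checked against a simulated constant-probability error generator; the error rate below is hypothetical:

```python
import random

random.seed(42)
p_error = 0.03                      # hypothetical constant per-trial error rate
trials = [random.random() < p_error for _ in range(50_000)]

# errors per 50-trial block: for a constant-probability generator this
# should look Poisson-like, i.e. variance close to the mean
blocks = [sum(trials[i:i + 50]) for i in range(0, len(trials), 50)]
mean = sum(blocks) / len(blocks)
var = sum((b - mean) ** 2 for b in blocks) / len(blocks)
dispersion = var / mean
print(f"dispersion index var/mean = {dispersion:.2f}")   # ≈ 1 for Poisson

# intervals between errors: geometric, the discrete analogue of exponential
gaps, last = [], -1
for i, e in enumerate(trials):
    if e:
        gaps.append(i - last)
        last = i
mean_gap = sum(gaps) / len(gaps)
print(f"mean inter-error interval = {mean_gap:.1f}")     # ≈ 1/p ≈ 33
```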

  18. Periodic oscillation and exponential stability of delayed CNNs

    Science.gov (United States)

    Cao, Jinde

    2000-05-01

    Both the global exponential stability and the periodic oscillation of a class of delayed cellular neural networks (DCNNs) are further studied in this Letter. By applying some new analysis techniques and constructing suitable Lyapunov functionals, some simple and new sufficient conditions are given ensuring global exponential stability and the existence of a periodic oscillatory solution of DCNNs. These conditions can be applied to design globally exponentially stable DCNNs and periodic oscillatory DCNNs, and they are easily checked in practice by simple algebraic methods. These results play an important role in the design and applications of DCNNs.

  19. Contribution of mono-exponential, bi-exponential and stretched exponential model-based diffusion-weighted MR imaging in the diagnosis and differentiation of uterine cervical carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Meng; Yu, Xiaoduo; Chen, Yan; Ouyang, Han; Zhou, Chunwu [Chinese Academy of Medical Sciences, Department of Diagnostic Radiology, Cancer Institute and Hospital, Peking Union Medical College, Beijing (China); Wu, Bing; Zheng, Dandan [GE MR Research China, Beijing (China)

    2017-06-15

    To investigate the potential of various metrics derived from mono-exponential model (MEM), bi-exponential model (BEM) and stretched exponential model (SEM)-based diffusion-weighted imaging (DWI) in diagnosing and differentiating the pathological subtypes and grades of uterine cervical carcinoma. 71 newly diagnosed patients with cervical carcinoma (50 cases of squamous cell carcinoma [SCC] and 21 cases of adenocarcinoma [AC]) and 32 healthy volunteers received DWI with multiple b values. The apparent diffusion coefficient (ADC), pure molecular diffusion (D), pseudo-diffusion coefficient (D*), perfusion fraction (f), water molecular diffusion heterogeneity index (alpha), and distributed diffusion coefficient (DDC) were calculated and compared between tumour and normal cervix, among different pathological subtypes and grades. All of the parameters were significantly lower in cervical carcinoma than normal cervical stroma except alpha. SCC showed lower ADC, D, f and DDC values and higher D* value than AC; D and DDC values of SCC and ADC and D values of AC were lower in the poorly differentiated group than those in the well-moderately differentiated group. Compared with MEM, diffusion parameters from BEM and SEM may offer additional information in cervical carcinoma diagnosis, predicting pathological tumour subtypes and grades, while f and D showed promising significance. (orig.)
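
    The three signal models behind these metrics have standard forms: mono-exponential S = S0·exp(-b·ADC), the bi-exponential IVIM form with perfusion fraction f, and the stretched form S = S0·exp(-(b·DDC)^alpha). A sketch with illustrative numbers (not patient data) showing why a two-point ADC depends on which model actually generated the signal:

```python
import math

def adc_two_point(s_low, s_high, b_low, b_high):
    """Mono-exponential model S(b) = S0 * exp(-b * ADC):
    apparent diffusion coefficient from two b-values."""
    return math.log(s_low / s_high) / (b_high - b_low)

# hypothetical signals generated from the three models in the abstract
S0 = 1000.0
b = [0, 200, 500, 800, 1000]                      # s/mm^2
mono = [S0 * math.exp(-bi * 1.0e-3) for bi in b]  # D = ADC = 1.0e-3 mm^2/s
ivim = [S0 * (0.1 * math.exp(-bi * 10.0e-3)       # f = 0.1, D* = 10e-3
              + 0.9 * math.exp(-bi * 1.0e-3)) for bi in b]
stretched = [S0 * math.exp(-(bi * 1.0e-3) ** 0.8) for bi in b]   # alpha = 0.8

adc_mono = adc_two_point(mono[0], mono[-1], b[0], b[-1])
adc_ivim = adc_two_point(ivim[0], ivim[-1], b[0], b[-1])
adc_str = adc_two_point(stretched[0], stretched[2], b[0], b[2])

print(adc_mono)   # recovers 1.0e-3 exactly
print(adc_ivim)   # inflated by the pseudo-diffusion (perfusion) compartment
print(adc_str)    # b-range dependent: the decay is not mono-exponential
```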

  20. Computable error estimates of a finite difference scheme for option pricing in exponential Lévy models

    KAUST Repository

    Kiessling, Jonas; Tempone, Raul

    2014-01-01

    jump activity, then the jumps smaller than some (Formula presented.) are approximated by diffusion. The resulting diffusion approximation error is also estimated, with leading order term in computable form, as well as the dependence of the time

  1. Does proton decay follow the exponential law

    International Nuclear Information System (INIS)

    Sanchez-Gomez, J.L.; Alvarez-Estrada, R.F.; Fernandez, L.A.

    1984-01-01

    In this paper, we discuss the exponential law for proton decay. By using a simple model based upon SU(5) GUT and the current theories of hadron structure, we explicitly show that the corrections to the Wigner-Weisskopf approximation are quite negligible for present-day protons, so that their eventual decay should follow the exponential law. Previous works are critically analyzed. (orig.)

  2. An experimental investigation on the effects of exponential window and impact force level on harmonic reduction in impact-synchronous model analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chao, Ong Zhi; Cheet, Lim Hong; Yee, Khoo Shin [Mechanical Engineering Department, Faculty of Engineering, University of Malaya, Kuala Lumpur (Malaysia); Rahman, Abdul Ghaffar Abdul [Faculty of Mechanical Engineering, University Malaysia Pahang, Pekan (Malaysia); Ismail, Zubaidah [Civil Engineering Department, Faculty of Engineering, University of Malaya, Kuala Lumpur (Malaysia)

    2016-08-15

    A novel method called Impact-synchronous modal analysis (ISMA) was proposed previously which allows modal testing to be performed during operation. This technique focuses on signal processing of the upstream data to provide cleaner Frequency response function (FRF) estimation prior to modal extraction. Two important parameters, i.e., the windowing function and the impact force level, were identified and their effects on the effectiveness of this technique were experimentally investigated. When performing modal testing under running conditions, the cyclic load signals dominate the measured response for the entire time history. The exponential window is effective in minimizing leakage and in attenuating signals of the non-synchronous running speed, its harmonics and noise to zero at the end of each time record window block. Besides, with the information of the calculated cyclic force, a suitable amount of impact force to be applied to the system can be decided prior to performing ISMA. The maximum allowable impact force can be determined from a nonlinearity test using the coherence function. By applying impact forces higher than the cyclic loads, along with an ideal decay rate in ISMA, significant harmonic reduction is achieved in the FRF estimation. Subsequently, the dynamic characteristics of the system are successfully extracted from a cleaner FRF, and the results obtained are comparable with Experimental modal analysis (EMA).
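
    The role of the exponential window, driving the record to near zero at the block end so that the non-decaying cyclic component leaks less in the FFT, can be sketched on a synthetic signal (the mode and cyclic-load frequencies and the decay constant below are made up, not the paper's test rig):

```python
import math

fs, n = 1000.0, 1000          # sample rate (Hz) and record length (samples)
tau = 0.1                     # exponential window time constant (s)

# synthetic record: a decaying 120 Hz mode (the impact response) plus a
# persistent 50 Hz cyclic load, the situation ISMA faces on a running machine
t = [i / fs for i in range(n)]
mode = [math.exp(-2.0 * ti) * math.sin(2 * math.pi * 120.0 * ti) for ti in t]
cyclic = [0.5 * math.sin(2 * math.pi * 50.0 * ti) for ti in t]
signal = [m + c for m, c in zip(mode, cyclic)]

# the exponential window forces the record to ~zero at the block end, so the
# non-decaying cyclic component no longer wraps around and leaks in the FFT
window = [math.exp(-ti / tau) for ti in t]
windowed = [s * w for s, w in zip(signal, window)]

print(abs(signal[-1]), abs(windowed[-1]))   # end-of-block amplitudes
```

    The added artificial damping must of course be removed from the extracted modal damping afterwards, which is why the decay rate is chosen deliberately rather than as aggressively as possible.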

  3. An experimental investigation on the effects of exponential window and impact force level on harmonic reduction in impact-synchronous model analysis

    International Nuclear Information System (INIS)

    Chao, Ong Zhi; Cheet, Lim Hong; Yee, Khoo Shin; Rahman, Abdul Ghaffar Abdul; Ismail, Zubaidah

    2016-01-01

    A novel method called Impact-synchronous modal analysis (ISMA) was proposed previously which allows modal testing to be performed during operation. This technique focuses on signal processing of the upstream data to provide cleaner Frequency response function (FRF) estimation prior to modal extraction. Two important parameters, i.e., the windowing function and the impact force level, were identified and their effects on the effectiveness of this technique were experimentally investigated. When performing modal testing under running conditions, the cyclic load signals dominate the measured response for the entire time history. The exponential window is effective in minimizing leakage and in attenuating signals of the non-synchronous running speed, its harmonics and noise to zero at the end of each time record window block. Besides, with the information of the calculated cyclic force, a suitable amount of impact force to be applied to the system can be decided prior to performing ISMA. The maximum allowable impact force can be determined from a nonlinearity test using the coherence function. By applying impact forces higher than the cyclic loads, along with an ideal decay rate in ISMA, significant harmonic reduction is achieved in the FRF estimation. Subsequently, the dynamic characteristics of the system are successfully extracted from a cleaner FRF, and the results obtained are comparable with Experimental modal analysis (EMA).

  4. Exponential asymptotics of homoclinic snaking

    International Nuclear Information System (INIS)

    Dean, A D; Matthews, P C; Cox, S M; King, J R

    2011-01-01

    We study homoclinic snaking in the cubic-quintic Swift–Hohenberg equation (SHE) close to the onset of a subcritical pattern-forming instability. Application of the usual multiple-scales method produces a leading-order stationary front solution, connecting the trivial solution to the patterned state. A localized pattern may therefore be constructed by matching between two distant fronts placed back-to-back. However, the asymptotic expansion of the front is divergent, and hence should be truncated. By truncating optimally, such that the resultant remainder is exponentially small, an exponentially small parameter range is derived within which stationary fronts exist. This is shown to be a direct result of the 'locking' between the phase of the underlying pattern and its slowly varying envelope. The locking mechanism remains unobservable at any algebraic order, and can only be derived by explicitly considering beyond-all-orders effects in the tail of the asymptotic expansion, following the method of Kozyreff and Chapman as applied to the quadratic-cubic SHE (Chapman and Kozyreff 2009 Physica D 238 319–54, Kozyreff and Chapman 2006 Phys. Rev. Lett. 97 44502). Exponentially small, but exponentially growing, contributions appear in the tail of the expansion, which must be included when constructing localized patterns in order to reproduce the full snaking diagram. Implicit within the bifurcation equations is an analytical formula for the width of the snaking region. Due to the linear nature of the beyond-all-orders calculation, the bifurcation equations contain an analytically indeterminable constant, estimated in the previous work by Chapman and Kozyreff using a best fit approximation. A more accurate estimate of the equivalent constant in the cubic-quintic case is calculated from the iteration of a recurrence relation, and the subsequent analytical bifurcation diagram compared with numerical simulations, with good agreement

  5. Exponential Growth of Nonlinear Ballooning Instability

    International Nuclear Information System (INIS)

    Zhu, P.; Hegna, C. C.; Sovinec, C. R.

    2009-01-01

    Recent ideal magnetohydrodynamic (MHD) theory predicts that a perturbation evolving from a linear ballooning instability will continue to grow exponentially in the intermediate nonlinear phase at the same linear growth rate. This prediction is confirmed in ideal MHD simulations. When the Lagrangian compression, a measure of the ballooning nonlinearity, becomes of the order of unity, the intermediate nonlinear phase is entered, during which the maximum plasma displacement amplitude as well as the total kinetic energy continues to grow exponentially at the rate of the corresponding linear phase.

  6. Estimation of the reliability function for two-parameter exponentiated Rayleigh or Burr type X distribution

    Directory of Open Access Journals (Sweden)

    Anupam Pathak

    2014-11-01

    Full Text Available Abstract: Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model which has a wide variety of applications in many areas, and its main advantage is its ability in the context of lifetime events among other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are the ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased and maximum likelihood estimators of the reliability functions R(t)=P(X>t) and P=P(X>Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique of obtaining these parametric functions is introduced, in which a major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through a simulation study, a comparison is made of the performance of these estimators with respect to bias, Mean Square Error (MSE), 95% confidence length and the corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and P.
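
    Under the common Burr type X parameterisation F(x) = (1 - exp(-(λx)²))^θ, the reliability function being estimated has the closed form R(t) = 1 - (1 - e^{-(λt)²})^θ. A quick Monte Carlo cross-check of that formula under assumed parameter values (this is a sanity sketch, not the paper's UMVUE or MLE procedure):

```python
import math
import random

theta, lam = 2.0, 0.5   # assumed shape and scale parameters

def reliability(t):
    """Closed form R(t) = 1 - F(t), with F(t) = (1 - exp(-(lam*t)**2))**theta."""
    return 1.0 - (1.0 - math.exp(-(lam * t) ** 2)) ** theta

def sample():
    """Inverse-CDF sampling: solve F(x) = u for x."""
    u = random.random()
    return math.sqrt(-math.log(1.0 - u ** (1.0 / theta))) / lam

random.seed(0)
t0, n = 2.0, 100_000
empirical = sum(sample() > t0 for _ in range(n)) / n
print(empirical, reliability(t0))   # Monte Carlo vs closed form, both ≈ 0.60
```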

  7. Robust Image Regression Based on the Extended Matrix Variate Power Exponential Distribution of Dependent Noise.

    Science.gov (United States)

    Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu

    2017-09-01

    Dealing with partial occlusion or illumination is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector and each element is assumed to be independently corrupted. This ignores the dependence between the elements of the error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This distribution has heavy-tailed regions and can be used to describe a matrix pattern of l×m-dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it actually alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with Lq regularization. The alternating direction method of multipliers is applied to solve this model. To get a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm works much more robustly than some existing regression-based methods.

  8. Exponential Operators, Dobinski Relations and Summability

    International Nuclear Information System (INIS)

    Blasiak, P; Gawron, A; Horzela, A; Penson, K A; Solomon, A I

    2006-01-01

    We investigate properties of exponential operators preserving the particle number, using combinatorial methods developed in order to solve the boson normal ordering problem. In particular, we apply generalized Dobinski relations and methods of multivariate Bell polynomials which enable us to understand the meaning of perturbation-like expansions of exponential operators. Such expansions, obtained as formal power series, are everywhere divergent, but the Padé summation method is shown to give results which agree very well with exact solutions obtained for simplified quantum models of one-mode bosonic systems.

  9. Exponential Data Fitting and its Applications

    CERN Document Server

    Pereyra, Victor

    2010-01-01

    Real and complex exponential data fitting is an important activity in many different areas of science and engineering, ranging from Nuclear Magnetic Resonance Spectroscopy and Lattice Quantum Chromodynamics to Electrical and Chemical Engineering, Vision and Robotics. The most commonly used norm in the approximation by linear combinations of exponentials is the l2 norm (sum of squares of residuals), in which case one obtains a nonlinear separable least squares problem. A number of different methods have been proposed through the years to solve these types of problems and new applications appear
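
    The separable least squares structure mentioned here, with the linear amplitudes eliminated analytically so only the nonlinear rates remain as free parameters, can be sketched for a two-exponential fit. The data are synthetic and noise-free, and a coarse grid stands in for a proper nonlinear solver such as variable projection:

```python
import math

# synthetic, noise-free data from a two-exponential decay
a_true, lam_true = (3.0, 1.5), (0.7, 2.5)
ts = [0.05 * i for i in range(80)]
ys = [sum(a * math.exp(-l * t) for a, l in zip(a_true, lam_true)) for t in ts]

def linear_amplitudes(lams):
    """For fixed rates the amplitudes are linear, so they follow from a
    2x2 normal-equations solve -- the 'separable' part of the problem."""
    c = [(math.exp(-lams[0] * t), math.exp(-lams[1] * t)) for t in ts]
    g11 = sum(r[0] * r[0] for r in c)
    g12 = sum(r[0] * r[1] for r in c)
    g22 = sum(r[1] * r[1] for r in c)
    b1 = sum(r[0] * y for r, y in zip(c, ys))
    b2 = sum(r[1] * y for r, y in zip(c, ys))
    det = g11 * g22 - g12 * g12
    return ((g22 * b1 - g12 * b2) / det, (g11 * b2 - g12 * b1) / det)

def residual(lams):
    a = linear_amplitudes(lams)
    return sum((y - a[0] * math.exp(-lams[0] * t)
                  - a[1] * math.exp(-lams[1] * t)) ** 2
               for t, y in zip(ts, ys))

# only the two nonlinear rates are searched; the amplitudes come for free
best = min((residual((l1 / 10, l2 / 10)), l1 / 10, l2 / 10)
           for l1 in range(1, 30) for l2 in range(l1 + 1, 40))
rates = (best[1], best[2])
amps = linear_amplitudes(rates)
print(rates, amps)   # → rates (0.7, 2.5), amplitudes ≈ (3.0, 1.5)
```

    Reducing the search from four parameters to two is exactly the payoff of the separable formulation; with noisy data the grid would be replaced by a Gauss-Newton-type iteration on the rates alone.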

  10. Exponentially tapered Josephson flux-flow oscillator

    DEFF Research Database (Denmark)

    Benabdallah, A.; Caputo, J. G.; Scott, Alwyn C.

    1996-01-01

    We introduce an exponentially tapered Josephson flux-flow oscillator that is tuned by applying a bias current to the larger end of the junction. Numerical and analytical studies show that above a threshold level of bias current the static solution becomes unstable and gives rise to a train of fluxons moving toward the unbiased smaller end, as in the standard flux-flow oscillator. An exponentially shaped junction provides several advantages over a rectangular junction, including: (i) smaller linewidth, (ii) increased output power, and (iii) no trapped flux because of the type of current injection...

  11. Ranking Exponential Trapezoidal Fuzzy Numbers by Median Value

    Directory of Open Access Journals (Sweden)

    S. Rezvani

    2013-12-01

    Full Text Available In this paper, we present a method for ranking two exponential trapezoidal fuzzy numbers. A median value is proposed for the ranking of exponential trapezoidal fuzzy numbers. For validation, the results of the proposed approach are compared with those of different existing approaches.

  12. SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER

    International Nuclear Information System (INIS)

    QIAN, S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

    2007-01-01

    Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) an independent slide pitch test using a non-tilted reference beam, (3) a non-tilted reference test combined with a tilted sample, (4) a penta-prism scanning mode without reference beam correction, (5) a non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.

  13. Exponentiation for products of Wilson lines within the generating function approach

    International Nuclear Information System (INIS)

    Vladimirov, A.A.

    2015-01-01

    We present the generating function approach to the perturbative exponentiation of correlators of a product of Wilson lines and loops. The exponentiated expression is presented in closed form as an algebraic function of correlators of known operators, which can be seen as a generating function for web diagrams. The expression naturally splits into two parts: the exponentiation kernel, which accumulates all non-trivial information about web diagrams, and the defect of exponentiation, which reconstructs the matrix exponent and is a function of the exponentiation kernel. A detailed comparison of the presented approach with existing approaches to exponentiation is given as well. We also give examples of calculations within the generating function exponentiation; namely, we consider different configurations of light-like Wilson lines in the multi-gluon-exchange-webs (MGEW) approximation. Within this approximation the corresponding correlators can be calculated exactly at any order of the perturbative expansion by only algebraic manipulations. The MGEW approximation shows violation of the dipole formula for infrared singularities at three-loop order.

  14. Increased Patient Satisfaction and a Reduction in Pre-Analytical Errors Following Implementation of an Electronic Specimen Collection Module in Outpatient Phlebotomy.

    Science.gov (United States)

    Kantartjis, Michalis; Melanson, Stacy E F; Petrides, Athena K; Landman, Adam B; Bates, David W; Rosner, Bernard A; Goonan, Ellen; Bixho, Ida; Tanasijevic, Milenko J

    2017-08-01

    Patient satisfaction in outpatient phlebotomy settings typically depends on wait time and venipuncture experience, and many patients equate their experiences with their overall satisfaction with the hospital. We compared patient service times and preanalytical errors pre- and postimplementation of an integrated electronic health record (EHR)-laboratory information system (LIS) and electronic specimen collection module. We also measured patient wait time and assessed patient satisfaction using a 5-question survey. The percentage of patients waiting less than 10 minutes increased from 86% preimplementation to 93% postimplementation of the EHR-LIS (P ≤.001). The median total service time decreased significantly, from 6 minutes (IQR, 4-8 minutes), to 5 minutes (IQR, 3-6 minutes) (P = .005). The preanalytical errors decreased significantly, from 3.20 to 1.93 errors per 1000 specimens (P ≤.001). Overall patient satisfaction improved, with an increase in excellent responses for all 5 questions (P ≤.001). We found several benefits of implementing an electronic specimen collection module, including decreased wait and service times, improved patient satisfaction, and a reduction in preanalytical errors. © American Society for Clinical Pathology, 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  15. Effects of Exponential Trends on Correlations of Stock Markets

    Directory of Open Access Journals (Sweden)

    Ai-Jing Lin

    2014-01-01

    Full Text Available Detrended fluctuation analysis (DFA) is a scaling analysis method used to estimate long-range power-law correlation exponents in time series. In this paper, DFA is employed to discuss the long-range correlations of stock markets. The effects of exponential trends on the correlations of the Hang Seng Index (HSI) are investigated with emphasis. We find that the long-range correlations and the positions of the crossovers of lower-order DFA appear to have no immunity to additive exponential trends. Further, our analysis suggests that an increase in the DFA order increases the efficiency of eliminating exponential trends. In addition, the empirical study shows that the correlations and crossovers are associated with the DFA order and the magnitude of the exponential trends.
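
    A minimal first-order DFA implementation, applied here to white noise, for which the scaling exponent should come out near 0.5 (the scales and series length are arbitrary illustrative choices, and no exponential trend is added):

```python
import math
import random

def dfa(series, scales):
    """First-order DFA: scaling exponent alpha from a log-log fit of the
    fluctuation function F(n) against window size n."""
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:                          # integrated (cumulative) profile
        acc += x - mean
        profile.append(acc)
    logs_n, logs_f = [], []
    for n in scales:
        ss, windows = 0.0, 0
        xs = list(range(n))
        xm = (n - 1) / 2.0
        denom = sum((x - xm) ** 2 for x in xs)
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            ym = sum(seg) / n                 # local least-squares line
            beta = sum((x - xm) * (y - ym) for x, y in zip(xs, seg)) / denom
            ss += sum((y - ym - beta * (x - xm)) ** 2 for x, y in zip(xs, seg))
            windows += 1
        logs_n.append(math.log(n))
        logs_f.append(0.5 * math.log(ss / (windows * n)))
    nm = sum(logs_n) / len(logs_n)
    fm = sum(logs_f) / len(logs_f)
    return (sum((a - nm) * (b - fm) for a, b in zip(logs_n, logs_f))
            / sum((a - nm) ** 2 for a in logs_n))

random.seed(3)
white = [random.gauss(0.0, 1.0) for _ in range(10_000)]
alpha_w = dfa(white, [16, 32, 64, 128, 256])
print(round(alpha_w, 2))   # ≈ 0.5: white noise is uncorrelated
```

    Adding an exponential trend to `white` before calling `dfa` is the experiment the paper performs; the first-order detrending above cannot remove it, which is why the authors turn to higher DFA orders.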

  16. Science in an Exponential World

    Science.gov (United States)

    Szalay, Alexander

    The amount of scientific information is doubling every year. This exponential growth is fundamentally changing every aspect of the scientific process - the collection, analysis and dissemination of scientific information. Our traditional paradigm for scientific publishing assumes a linear world, where the number of journals and articles remains approximately constant. The talk presents the challenges of this new paradigm and shows examples of how some disciplines are trying to cope with the data avalanche. In astronomy, the Virtual Observatory is emerging as a way to do astronomy in the 21st century. Other disciplines are also in the process of creating their own Virtual Observatories, on every imaginable scale of the physical world. We will discuss how long this exponential growth can continue.

  17. Exponential stability in a scalar functional differential equation

    Directory of Open Access Journals (Sweden)

    Pituk Mihály

    2006-01-01

    Full Text Available We establish a criterion for the global exponential stability of the zero solution of the scalar retarded functional differential equation whose linear part generates a monotone semiflow on the phase space with respect to the exponential ordering, and the nonlinearity has at most linear growth.

  18. Exponential H(infinity) synchronization of general discrete-time chaotic neural networks with or without time delays.

    Science.gov (United States)

    Qi, Donglian; Liu, Meiqin; Qiu, Meikang; Zhang, Senlin

    2010-08-01

    This brief studies exponential H(infinity) synchronization of a class of general discrete-time chaotic neural networks with external disturbance. On the basis of the drive-response concept and H(infinity) control theory, and using Lyapunov-Krasovskii (or Lyapunov) functionals, state feedback controllers are established that not only guarantee exponentially stable synchronization between two general chaotic neural networks with or without time delays, but also reduce the effect of external disturbance on the synchronization error to a minimal H(infinity) norm constraint. The proposed controllers can be obtained by solving convex optimization problems represented by linear matrix inequalities. Most discrete-time chaotic systems with or without time delays, such as Hopfield neural networks, cellular neural networks, bidirectional associative memory networks, recurrent multilayer perceptrons, Cohen-Grossberg neural networks, Chua's circuits, etc., can be transformed into this general chaotic neural network, for which the H(infinity) synchronization controller can then be designed in a unified way. Finally, some illustrative examples with their simulations have been utilized to demonstrate the effectiveness of the proposed methods.

  19. Exponential integrators in time-dependent density-functional calculations

    Science.gov (United States)

    Kidd, Daniel; Covington, Cody; Varga, Kálmán

    2017-12-01

    The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy of the exponential integrator methods are less enhanced but still match or outperform the best of the conventional methods tested.
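
    A first-order exponential time differencing (ETD1) step, the simplest member of the exponential-integrator family discussed here, applied to a stiff linear test problem with a known exact solution (the test equation is an illustrative stand-in, not the Kohn-Sham system of the paper):

```python
import math

# stiff test problem u' = c*u + g(t), with c = -50 and g(t) = 50*cos(t);
# exact solution for u(0) = 0:
#   u(t) = (2500*cos(t) + 50*sin(t))/2501 - (2500/2501)*exp(-50*t)
c = -50.0
g = lambda t: 50.0 * math.cos(t)
exact = lambda t: ((2500.0 * math.cos(t) + 50.0 * math.sin(t)) / 2501.0
                   - (2500.0 / 2501.0) * math.exp(-50.0 * t))

def etd1(u0, dt, steps):
    """First-order exponential time differencing: the stiff linear part is
    propagated exactly via exp(c*dt); only g is frozen over each step."""
    u, t = u0, 0.0
    e = math.exp(c * dt)
    for _ in range(steps):
        u = e * u + (e - 1.0) / c * g(t)
        t += dt
    return u

err = abs(etd1(0.0, 0.01, 100) - exact(1.0))
print(err)   # small global error at t = 1 despite the stiffness
```

    Treating the linear (here, kinetic-energy-like) part exactly is what lets exponential integrators take steps far larger than an explicit scheme's stability limit would allow, the property the paper exploits for nonlinear-potential-driven dynamics.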

  20. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    Science.gov (United States)

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  1. Confronting quasi-exponential inflation with WMAP seven

    International Nuclear Information System (INIS)

    Pal, Barun Kumar; Pal, Supratik; Basu, B.

    2012-01-01

    We confront quasi-exponential models of inflation with the WMAP seven-year dataset using the Hamilton-Jacobi formalism. With a phenomenological Hubble parameter representing quasi-exponential inflation, we develop the formalism and subject the analysis to confrontation with WMAP seven using the publicly available code CAMB. The observable parameters are found to fare extremely well with WMAP seven. We also obtain a ratio of tensor to scalar amplitudes that may be detectable by PLANCK.

  2. Blowing-up semilinear wave equation with exponential nonlinearity ...

    Indian Academy of Sciences (India)

    H1-norm. Hence, it is legitimate to consider an exponential nonlinearity. Moreover, the choice of an exponential nonlinearity emerges from a possible control of solutions via a Moser–Trudinger type inequality [1, 16, 19]. In fact, Nakamura and Ozawa [17] proved global well-posedness and scattering for small Cauchy data in ...

  3. Viète's Formula and an Error Bound without Taylor's Theorem

    Science.gov (United States)

    Boucher, Chris

    2018-01-01

    This note presents a derivation of Viète's classic product approximation of pi that relies on only the Pythagorean Theorem. We also give a simple error bound for the approximation that, while not optimal, still reveals the exponential convergence of the approximation and whose derivation does not require Taylor's Theorem.
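
Viète's product can be evaluated with the half-angle recurrence c_{k+1} = sqrt(2 + c_k); the short sketch below (not taken from the note itself) exhibits the exponential convergence that the note's error bound reveals:

```python
import math

def viete_pi(n_terms):
    """Viete's product: 2/pi = prod_k (c_k / 2), with c_1 = sqrt(2)
    and c_{k+1} = sqrt(2 + c_k), obtainable by repeated bisection of
    a circumscribing polygon using only the Pythagorean Theorem."""
    c, prod = 0.0, 1.0
    for _ in range(n_terms):
        c = math.sqrt(2.0 + c)
        prod *= c / 2.0
    return 2.0 / prod

errs = [abs(viete_pi(n) - math.pi) for n in (5, 10, 15)]
print(errs)  # each extra factor cuts the error by roughly 4x
```

The roughly constant factor-of-four error reduction per term is exactly the exponential convergence the note discusses.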

  4. Error characterization and quantum control benchmarking in liquid state NMR using quantum information processing techniques

    Science.gov (United States)

    Laforest, Martin

    Quantum information processing has been the subject of countless discoveries since the early 1990's. It is believed to be the way of the future for computation: using quantum systems permits one to perform computation exponentially faster than on a regular classical computer. Unfortunately, quantum systems that are not isolated do not behave well. They tend to lose their quantum nature due to the presence of the environment. If key information is known about the noise present in the system, methods such as quantum error correction have been developed in order to reduce the errors introduced by the environment during a given quantum computation. In order to harness the quantum world and implement the theoretical ideas of quantum information processing and quantum error correction, it is imperative to understand and quantify the noise present in the quantum processor and benchmark the quality of the control over the qubits. Usual techniques to estimate the noise or the control are based on quantum process tomography (QPT), which, unfortunately, demands an exponential amount of resources. This thesis presents work towards the characterization of noisy processes in an efficient manner. The protocols are developed from a purely abstract setting with no system-dependent variables. To circumvent the exponential nature of quantum process tomography, three different efficient protocols are proposed and experimentally verified. The first protocol uses the idea of quantum error correction to extract relevant parameters about a given noise model, namely the correlation between the dephasing of two qubits. Following that is a protocol using randomization and symmetrization to extract the probability that a given number of qubits are simultaneously corrupted in a quantum memory, regardless of the specifics of the error and which qubits are affected. Finally, a last protocol, still using randomization ideas, is developed to estimate the average fidelity per computational gate for

  5. Reduction of digital errors of digital charge division type position-sensitive detectors

    International Nuclear Information System (INIS)

    Uritani, A.; Yoshimura, K.; Takenaka, Y.; Mori, C.

    1994-01-01

    It is well known that ''digital errors'', i.e. differential non-linearity, appear in a position profile of radiation interactions when the profile is obtained with a digital charge-division-type position-sensitive detector. Two methods are presented to reduce the digital errors. They are the methods using logarithmic amplifiers and a weighting function. The validities of these two methods have been evaluated mainly by computer simulation. These methods can considerably reduce the digital errors. The best results are obtained when both methods are applied. ((orig.))

  6. (Anti)symmetric multivariate exponential functions and corresponding Fourier transforms

    International Nuclear Information System (INIS)

    Klimyk, A U; Patera, J

    2007-01-01

    We define and study symmetrized and antisymmetrized multivariate exponential functions. They are defined as determinants and antideterminants of matrices whose entries are exponential functions of one variable. These functions are eigenfunctions of the Laplace operator on the corresponding fundamental domains satisfying certain boundary conditions. To symmetric and antisymmetric multivariate exponential functions there correspond Fourier transforms. There are three types of such Fourier transforms: expansions into the corresponding Fourier series, integral Fourier transforms and multivariate finite Fourier transforms. Eigenfunctions of the integral Fourier transforms are found
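
A minimal numerical sketch of the determinant construction described above (assuming real exponents for simplicity; the paper's fundamental domains and boundary conditions are not modeled here):

```python
import numpy as np

def antisym_exp(lambdas, xs):
    """Antisymmetrized multivariate exponential: the determinant of
    the matrix with entries exp(lambda_j * x_k), a Slater-determinant-
    like construction as in the abstract above."""
    M = np.exp(np.outer(lambdas, xs))
    return np.linalg.det(M)

lam = [1.0, 2.0, 3.0]
a = antisym_exp(lam, [0.1, 0.4, 0.7])
b = antisym_exp(lam, [0.4, 0.1, 0.7])
print(a, b)  # swapping two variables flips the sign: b == -a
```

The symmetrized version replaces the determinant by a permanent (the "antideterminant" of the abstract with all signs positive).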

  7. The many faces of the quantum Liouville exponentials

    Science.gov (United States)

    Gervais, Jean-Loup; Schnittger, Jens

    1994-01-01

    First, it is proven that the three main operator approaches to the quantum Liouville exponentials (that is, those of Gervais-Neveu, more recently developed further by Gervais; Braaten-Curtright-Ghandour-Thorn; and Otto-Weigt) are equivalent, since they are related by simple basis transformations in the Fock space of the free field depending upon the zero-mode only. Second, the GN-G expressions for quantum Liouville exponentials, where the U_q(sl(2)) quantum-group structure is manifest, are shown to be given by q-binomial sums over powers of the chiral fields in the J = 1/2 representation. Third, the Liouville exponentials are expressed as operator tau functions, whose chiral expansion exhibits a q-Gauss decomposition, which is the direct quantum analogue of the classical solution of Leznov and Saveliev. It involves q-exponentials of quantum-group generators with group "parameters" equal to chiral components of the quantum metric. Fourth, we point out that the OPE of the J = 1/2 Liouville exponential provides the quantum version of the Hirota bilinear equation.

  8. Geometry of q-Exponential Family of Probability Distributions

    Directory of Open Access Journals (Sweden)

    Shun-ichi Amari

    2011-06-01

    The Gibbs distribution of statistical physics is an exponential family of probability distributions, which has a mathematical basis of duality in the form of the Legendre transformation. Recent studies of complex systems have found many distributions obeying the power law rather than the standard Gibbs-type distributions. The Tsallis q-entropy is a typical example capturing such phenomena. We treat the q-Gibbs distribution, or the q-exponential family, by generalizing the exponential function to the q-family of power functions, which is useful for studying various complex or non-standard physical phenomena. We give a new mathematical structure to the q-exponential family different from those previously given. It has a dually flat geometrical structure derived from the Legendre transformation, and conformal geometry is useful for understanding it. The q-version of the maximum entropy theorem is naturally induced from the q-Pythagorean theorem. We also show that the maximizer of the q-escort distribution is a Bayesian MAP (Maximum A Posteriori Probability) estimator.
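
The q-deformation of the exponential that underlies the q-exponential family can be sketched in a few lines; the function and test values below are the standard Tsallis definitions, not taken from the paper:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential: exp_q(x) = [1 + (1-q)x]_+^(1/(1-q)).
    The ordinary exponential is recovered in the limit q -> 1, while
    q > 1 produces power-law (heavy) tails."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

print(q_exp(1.0, 1.0))   # e
print(q_exp(-5.0, 2.0))  # (1 + 5)^(-1): a power-law tail, not exponential
```

For q = 2 the tail decays like 1/x rather than e^(-x), which is the power-law behavior the abstract refers to.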

  9. Kullback-Leibler divergence and the Pareto-Exponential approximation.

    Science.gov (United States)

    Weinberg, G V

    2016-01-01

    Recent radar research interests in the Pareto distribution as a model for X-band maritime surveillance radar clutter returns have resulted in analysis of the asymptotic behaviour of this clutter model. In particular, it is of interest to understand when the Pareto distribution is well approximated by an Exponential distribution. The justification for this is that under the latter clutter model assumption, simpler radar detection schemes can be applied. An information theory approach is introduced to investigate the Pareto-Exponential approximation. By analysing the Kullback-Leibler divergence between the two distributions it is possible to not only assess when the approximation is valid, but to determine, for a given Pareto model, the optimal Exponential approximation.
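
The Pareto-Exponential comparison can be reproduced numerically. The sketch below is illustrative, not the paper's analysis: it computes KL(Pareto || Exponential) by trapezoidal quadrature for a Pareto Type II (Lomax) density with shape alpha and scale beta = alpha, a parameterization (invented here for the demo) under which the density approaches Exp(1) as alpha grows:

```python
import math

def kl_divergence(p, q, xs):
    """Numerical Kullback-Leibler divergence KL(p||q) = integral of
    p*log(p/q), via the trapezoidal rule on the grid xs."""
    total = 0.0
    for a, b in zip(xs, xs[1:]):
        fa = p(a) * math.log(p(a) / q(a))
        fb = p(b) * math.log(p(b) / q(b))
        total += 0.5 * (fa + fb) * (b - a)
    return total

def lomax(alpha, beta):  # Pareto Type II density on x >= 0
    return lambda x: alpha * beta**alpha / (x + beta) ** (alpha + 1)

def expon(rate):
    return lambda x: rate * math.exp(-rate * x)

xs = [1e-6 + i * 0.001 for i in range(20000)]  # grid on [0, 20]
# As the Pareto shape grows (with beta = alpha), the Lomax density
# approaches Exp(1) and the divergence shrinks toward zero.
kls = [kl_divergence(lomax(a, a), expon(1.0), xs) for a in (2.0, 10.0, 50.0)]
print(kls)
```

The shrinking divergence quantifies when the simpler Exponential-based detector is an acceptable substitute for the Pareto model, which is the question the paper formalizes.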

  10. Daily Orthogonal Kilovoltage Imaging Using a Gantry-Mounted On-Board Imaging System Results in a Reduction in Radiation Therapy Delivery Errors

    Energy Technology Data Exchange (ETDEWEB)

    Russo, Gregory A., E-mail: gregory.russo@bmc.org [Department of Radiation Oncology, Boston Medical Center and Boston University School of Medicine, Boston, Massachusetts (United States); Qureshi, Muhammad M.; Truong, Minh-Tam; Hirsch, Ariel E.; Orlina, Lawrence; Bohrs, Harry; Clancy, Pauline; Willins, John; Kachnic, Lisa A. [Department of Radiation Oncology, Boston Medical Center and Boston University School of Medicine, Boston, Massachusetts (United States)

    2012-11-01

    Purpose: To determine whether the use of routine image guided radiation therapy (IGRT) using pretreatment on-board imaging (OBI) with orthogonal kilovoltage X-rays reduces treatment delivery errors. Methods and Materials: A retrospective review of documented treatment delivery errors from 2003 to 2009 was performed. Following implementation of IGRT in 2007, patients received daily OBI with orthogonal kV X-rays prior to treatment. The frequency of errors in the pre- and post-IGRT time frames was compared. Treatment errors (TEs) were classified as IGRT-preventable or non-IGRT-preventable. Results: A total of 71,260 treatment fractions were delivered to 2764 patients. A total of 135 (0.19%) TEs occurred in 39 (1.4%) patients (3.2% in 2003, 1.1% in 2004, 2.5% in 2005, 2% in 2006, 0.86% in 2007, 0.24% in 2008, and 0.22% in 2009). In 2007, the TE rate decreased by >50% and has remained low (P = .00007, compared to before 2007). Errors were classified as being potentially preventable with IGRT (e.g., incorrect site, patient, or isocenter) vs. not. No patients had any IGRT-preventable TEs from 2007 to 2009, whereas there were 9 from 2003 to 2006 (1 in 2003, 2 in 2004, 2 in 2005, and 4 in 2006; P = .0058) before the implementation of IGRT. Conclusions: IGRT implementation has a patient safety benefit with a significant reduction in treatment delivery errors. As such, we recommend the use of IGRT in routine practice to complement existing quality assurance measures.

  11. Is the basic law of radioactive decay exponential?

    International Nuclear Information System (INIS)

    Gopych, P.M.; Zalyubovskii, I.I.

    1988-01-01

    Basic theoretical approaches to the explanation of the observed exponential nature of the decay law are discussed together with the hypothesis that it is not exponential. The significance of this question and its connection with fundamental problems of modern physics are considered. The results of experiments relating to investigation of the form of the decay law are given

  12. Non-linear quantization error reduction for the temperature measurement subsystem on-board LISA Pathfinder

    Science.gov (United States)

    Sanjuan, J.; Nofrarias, M.

    2018-04-01

    Laser Interferometer Space Antenna (LISA) Pathfinder is a mission to test the technology enabling gravitational wave detection in space and to demonstrate that sub-femto-g free fall levels are possible. To do so, the distance between two free falling test masses is measured to unprecedented sensitivity by means of laser interferometry. Temperature fluctuations are one of the noise sources limiting the free fall accuracy and the interferometer performance and need to be known at the ~10 μK Hz^(-1/2) level in the sub-millihertz frequency range in order to validate the noise models for the future space-based gravitational wave detector LISA. The temperature measurement subsystem on LISA Pathfinder is in charge of monitoring the thermal environment at key locations with noise levels of 7.5 μK Hz^(-1/2) at the sub-millihertz. However, its performance worsens by one to two orders of magnitude when slowly changing temperatures are measured due to errors introduced by analog-to-digital converter non-linearities. In this paper, we present a method to reduce this effect by data post-processing. The method is applied to experimental data available from on-ground validation tests to demonstrate its performance and the potential benefit for in-flight data. The analog-to-digital converter effects are reduced by a factor between three and six in the frequencies where the errors play an important role. An average 2.7 fold noise reduction is demonstrated in the 0.3 mHz-2 mHz band.

  13. Dynamic Transcriptional Regulation of Fis in Salmonella During the Exponential Phase.

    Science.gov (United States)

    Wang, Hui; Wang, Lei; Li, Ping; Hu, Yilang; Zhang, Wei; Tang, Bo

    2015-12-01

    Fis is one of the most important global regulators and has attracted extensive research attention. Many studies have focused on comparing the Fis global regulatory networks for exploring Fis function during different growth stages, such as the exponential and stationary stages. Although the Fis protein in bacteria is mainly expressed in the exponential phase, the dynamic transcriptional regulation of Fis during the exponential phase remains poorly understood. To address this question, we used RNA-seq technology to identify the Fis-regulated genes in the S. enterica serovar Typhimurium during the early exponential phase, and qRT-PCR was performed to validate the transcriptional data. A total of 1495 Fis-regulated genes were successfully identified, including 987 Fis-repressed genes and 508 Fis-activated genes. Comparing the results of this study with those of our previous study, we found that the transcriptional regulation of Fis differed between the early- and mid-exponential phases. The results also showed that the strong positive regulation of Fis on Salmonella pathogenicity island genes in the mid-exponential phase transitioned into an insignificant effect in the early exponential phase. To validate these results, we performed a cell infection assay and found that Δfis only exhibited a 1.49-fold decreased capacity compared with the LT2 wild-type strain, indicating a large difference from the 6.31-fold decrease observed in the mid-exponential phase. Our results provide strong evidence for a need to thoroughly understand the dynamic transcriptional regulation of Fis in Salmonella during the exponential phase.

  14. Laminar phase flow for an exponentially tapered Josephson oscillator

    DEFF Research Database (Denmark)

    Benabdallah, A.; Caputo, J. G.; Scott, Alwyn C.

    2000-01-01

    Exponential tapering and inhomogeneous current feed were recently proposed as means to improve the performance of a Josephson flux flow oscillator. Extensive numerical results backed up by analysis are presented here that support this claim and demonstrate that exponential tapering reduces the small-current instability region and leads to a laminar flow regime where the voltage wave form is periodic, giving the oscillator minimal spectral width. Tapering also leads to an increased output power. Since exponential tapering is not expected to increase the difficulty of fabricating a flux flow...

  15. Medical Error Avoidance in Intraoperative Neurophysiological Monitoring: The Communication Imperative.

    Science.gov (United States)

    Skinner, Stan; Holdefer, Robert; McAuliffe, John J; Sala, Francesco

    2017-11-01

    Error avoidance in medicine follows similar rules that apply within the design and operation of other complex systems. The error-reduction concepts that best fit the conduct of testing during intraoperative neuromonitoring are forgiving design (reversibility of signal loss to avoid/prevent injury) and system redundancy (reduction of false reports by the multiplication of the error rate of tests independently assessing the same structure). However, error reduction in intraoperative neuromonitoring is complicated by the dichotomous roles (and biases) of the neurophysiologist (test recording and interpretation) and surgeon (intervention). This "interventional cascade" can be given as follows: test → interpretation → communication → intervention → outcome. Observational and controlled trials within operating rooms demonstrate that optimized communication, collaboration, and situational awareness result in fewer errors. Well-functioning operating room collaboration depends on familiarity and trust among colleagues. Checklists represent one method to initially enhance communication and avoid obvious errors. All intraoperative neuromonitoring supervisors should strive to use sufficient means to secure situational awareness and trusted communication/collaboration. Face-to-face audiovisual teleconnections may help repair deficiencies when a particular practice model disallows personal operating room availability. All supervising intraoperative neurophysiologists need to reject an insular or deferential or distant mindset.

  16. Calorimeter prediction based on multiple exponentials

    International Nuclear Information System (INIS)

    Smith, M.K.; Bracken, D.S.

    2002-01-01

    Calorimetry allows very precise measurements of nuclear material to be carried out, but it also requires relatively long measurement times to do so. The ability to accurately predict the equilibrium response of a calorimeter would significantly reduce the amount of time required for calorimetric assays. An algorithm has been developed that is effective at predicting the equilibrium response. This multi-exponential prediction algorithm is based on an iterative technique using commercial fitting routines that fit a constant plus a variable number of exponential terms to calorimeter data. Details of the implementation and the results of trials on a large number of calorimeter data sets will be presented
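
The constant-plus-exponentials idea can be sketched without the commercial fitting routines the abstract mentions. The snippet below is an illustrative stand-in, not the authors' algorithm: it fits a + b*exp(-t/tau) to synthetic calorimeter-like data by scanning tau over a grid and solving a linear least-squares problem for (a, b), then reads off the predicted equilibrium response a before the instrument has actually settled. All numbers are invented.

```python
import numpy as np

def fit_const_plus_exp(t, y, taus):
    """Fit y ~ a + b*exp(-t/tau): scan tau over a grid and solve the
    remaining linear least-squares problem for (a, b) at each tau;
    keep the tau with the smallest residual."""
    best = None
    for tau in taus:
        X = np.column_stack([np.ones_like(t), np.exp(-t / tau)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = float(np.sum((X @ coef - y) ** 2))
        if best is None or r < best[0]:
            best = (r, tau, coef)
    return best  # (residual, tau, (a, b))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2000.0, 200)  # measurement times (hypothetical units)
y = 5.0 - 1.5 * np.exp(-t / 900.0) + rng.normal(0.0, 1e-3, t.size)

_, tau, (a, b) = fit_const_plus_exp(t, y, np.linspace(300.0, 1500.0, 241))
print(a)  # predicted equilibrium, recovered well before the signal settles
```

At the last sample the signal is still ~3% away from equilibrium, yet the fitted constant recovers it; the paper's algorithm iterates over a variable number of exponential terms rather than the single term used here.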

  17. A Model of Self-Monitoring Blood Glucose Measurement Error.

    Science.gov (United States)

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of SMBG error PDF. The blood glucose range is divided into zones where error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum-likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant SD absolute error; zone 2 with constant SD relative error. Goodness-of-fit tests confirmed that identified PDF models are valid and superior to Gaussian models used so far in the literature. The proposed methodology allows to derive realistic models of SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.

  18. Exponential convergence on a continuous Monte Carlo transport problem

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-01-01

    For more than a decade, it has been known that exponential convergence on discrete transport problems was possible using adaptive Monte Carlo techniques. An adaptive Monte Carlo method that empirically produces exponential convergence on a simple continuous transport problem is described

  19. Stochastic B-series and order conditions for exponential integrators

    DEFF Research Database (Denmark)

    Arara, Alemayehu Adugna; Debrabant, Kristian; Kværnø, Anne

    2018-01-01

    We discuss stochastic differential equations with a stiff linear part and their approximation by stochastic exponential integrators. Representing the exact and approximate solutions using B-series and rooted trees, we derive the order conditions for stochastic exponential integrators. The resulting...

  20. Sampling from the normal and exponential distributions

    International Nuclear Information System (INIS)

    Chaplin, K.R.; Wills, C.A.

    1982-01-01

    Methods for generating random numbers from the normal and exponential distributions are described. These involve dividing each function into subregions, and for each of these developing a method of sampling usually based on an acceptance rejection technique. When sampling from the normal or exponential distribution, each subregion provides the required random value with probability equal to the ratio of its area to the total area. Procedures written in FORTRAN for the CYBER 175/CDC 6600 system are provided to implement the two algorithms
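
The acceptance-rejection idea can be illustrated with a textbook special case (not the CYBER/FORTRAN procedures of the record): sample Exp(1) by inverse transform, then use it as a rejection envelope for the standard normal.

```python
import math
import random

def sample_exponential(rng):
    """Exponential(1) via the inverse transform: -log(U)."""
    return -math.log(1.0 - rng.random())

def sample_normal(rng):
    """Standard normal by acceptance-rejection with an Exp(1) envelope:
    accept X ~ Exp(1) with probability exp(-(X-1)^2/2), then attach a
    random sign (a classic variant of the subregion rejection idea)."""
    while True:
        x = sample_exponential(rng)
        if rng.random() <= math.exp(-0.5 * (x - 1.0) ** 2):
            return x if rng.random() < 0.5 else -x

rng = random.Random(12345)
samples = [sample_normal(rng) for _ in range(200000)]
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples)
print(mean, var)  # close to 0 and 1
```

The acceptance probability here is about 76%, so on average each normal draw costs roughly 1.3 exponential draws; subregion schemes like the one in the record trade extra bookkeeping for higher acceptance rates.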

  1. Exponential stability of delayed fuzzy cellular neural networks with diffusion

    International Nuclear Information System (INIS)

    Huang Tingwen

    2007-01-01

    The exponential stability of delayed fuzzy cellular neural networks (FCNN) with diffusion is investigated. Exponential stability, significant for applications of neural networks, is obtained under conditions that are easily verified by a new approach. Earlier results on the exponential stability of FCNN with time-dependent delay, a special case of the model studied in this paper, are improved without using the time-varying term condition: dτ(t)/dt < μ

  2. Accelerating cosmologies from exponential potentials

    International Nuclear Information System (INIS)

    Neupane, Ishwaree P.

    2003-11-01

    It is learnt that exponential potentials of the form V ∼ exp(-2cφ/M_p) arising from the hyperbolic or flux compactification of higher-dimensional theories are of interest for getting short periods of accelerated cosmological expansion. Using a similar potential, but derived for the combined case of hyperbolic-flux compactification, we study four-dimensional flat (or open) FRW cosmologies and give analytic (and numerical) solutions with exponential behavior of the scale factor. We show that, for the M-theory motivated potentials, the cosmic acceleration of the universe can be eternal if the spatial curvature of the 4d spacetime is negative, while the acceleration is only transient for a spatially flat universe. We also briefly discuss the mass of massive Kaluza-Klein modes and the dynamical stabilization of the compact hyperbolic extra dimensions. (author)

  3. Three-Step Predictor-Corrector of Exponential Fitting Method for Nonlinear Schroedinger Equations

    International Nuclear Information System (INIS)

    Tang Chen; Zhang Fang; Yan Haiqing; Luo Tao; Chen Zhanqing

    2005-01-01

    We develop the three-step explicit and implicit schemes of exponential fitting methods. We use the three-step explicit exponential fitting scheme to predict an approximation, then use the three-step implicit exponential fitting scheme to correct this prediction. This combination is called the three-step predictor-corrector of exponential fitting method. The three-step predictor-corrector of exponential fitting method is applied to numerically compute the coupled nonlinear Schroedinger equation and the nonlinear Schroedinger equation with varying coefficients. The numerical results show that the scheme is highly accurate.
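
The predict-then-correct structure can be sketched with classical Adams coefficients standing in for the paper's exponential-fitting ones (which are not reproduced here): a three-step Adams-Bashforth step predicts, a three-step Adams-Moulton step corrects. The test equation y' = -y and all step counts are chosen for illustration only.

```python
import math

def f(t, y):
    return -y  # test equation y' = -y, exact solution exp(-t)

def pec_solve(y0, h, n):
    """Three-step predictor-corrector skeleton (PEC mode):
    Adams-Bashforth 3 predicts, Adams-Moulton 3 corrects."""
    ys = [y0]
    for k in range(1, 3):
        ys.append(math.exp(-k * h))  # seed the multistep start-up exactly
    fs = [f(i * h, ys[i]) for i in range(3)]
    for i in range(2, n):
        # predictor (explicit)
        yp = ys[i] + h / 12.0 * (23 * fs[i] - 16 * fs[i - 1] + 5 * fs[i - 2])
        fp = f((i + 1) * h, yp)
        # corrector (implicit formula evaluated at the prediction)
        yc = ys[i] + h / 24.0 * (9 * fp + 19 * fs[i] - 5 * fs[i - 1] + fs[i - 2])
        ys.append(yc)
        fs.append(f((i + 1) * h, yc))
    return ys

h, n = 0.01, 100
ys = pec_solve(1.0, h, n)
print(abs(ys[-1] - math.exp(-1.0)))  # small global error at t = 1
```

The paper's exponential-fitting schemes replace the constant Adams coefficients by frequency-dependent ones tuned to oscillatory solutions of the Schroedinger equation, but the predictor-corrector wiring is the same.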

  4. Global robust exponential stability analysis for interval recurrent neural networks

    International Nuclear Information System (INIS)

    Xu Shengyuan; Lam, James; Ho, Daniel W.C.; Zou Yun

    2004-01-01

    This Letter investigates the problem of robust global exponential stability analysis for interval recurrent neural networks (RNNs) via the linear matrix inequality (LMI) approach. The values of the time-invariant uncertain parameters are assumed to be bounded within given compact sets. An improved condition for the existence of a unique equilibrium point and its global exponential stability of RNNs with known parameters is proposed. Based on this, a sufficient condition for the global robust exponential stability for interval RNNs is obtained. Both of the conditions are expressed in terms of LMIs, which can be checked easily by various recently developed convex optimization algorithms. Examples are provided to demonstrate the reduced conservatism of the proposed exponential stability condition

  5. Life prediction for high temperature low cycle fatigue of two kinds of titanium alloys based on exponential function

    Science.gov (United States)

    Mu, G. Y.; Mi, X. Z.; Wang, F.

    2018-01-01

    The high temperature low cycle fatigue tests of TC4 titanium alloy and TC11 titanium alloy are carried out under strain control. The relationships between cyclic stress and life and between strain and life are analyzed. The high temperature low cycle fatigue life prediction model of the two titanium alloys is established using the Manson-Coffin method. The relationship between the number of reversals to failure and the plastic strain range is nonlinear in double logarithmic coordinates, whereas the Manson-Coffin method assumes it is linear; a certain prediction error is therefore unavoidable with the Manson-Coffin method. To solve this problem, a new method based on an exponential function is proposed. The results show that the fatigue life of both titanium alloys can be predicted accurately and effectively by the two methods, with prediction accuracy within a ±1.83-times scatter band. The new method based on the exponential function proves more effective and accurate than the Manson-Coffin method for both alloys, giving a smaller standard deviation and scatter band. For both methods, the life predictions for TC4 titanium alloy prove better than those for TC11 titanium alloy.

  6. Sub-exponential mixing of random billiards driven by thermostats

    International Nuclear Information System (INIS)

    Yarmola, Tatiana

    2013-01-01

    We study the class of open continuous-time mechanical particle systems introduced in the paper by Khanin and Yarmola (2013 Commun. Math. Phys. 320 121–47). Using the discrete-time results from Khanin and Yarmola (2013 Commun. Math. Phys. 320 121–47) we demonstrate rigorously that, in continuous time, a unique steady state exists and is sub-exponentially mixing. Moreover, all initial distributions converge to the steady state and, for a large class of initial distributions, convergence to the steady state is sub-exponential. The main obstacle to exponential convergence is the existence of slow particles in the system. (paper)

  7. Adiabatic approximation with exponential accuracy for many-body systems and quantum computation

    International Nuclear Information System (INIS)

    Lidar, Daniel A.; Rezakhani, Ali T.; Hamma, Alioscia

    2009-01-01

    We derive a version of the adiabatic theorem that is especially suited for applications in adiabatic quantum computation, where it is reasonable to assume that the adiabatic interpolation between the initial and final Hamiltonians is controllable. Assuming that the Hamiltonian is analytic in a finite strip around the real-time axis, that some number of its time derivatives vanish at the initial and final times, and that the target adiabatic eigenstate is nondegenerate and separated by a gap from the rest of the spectrum, we show that one can obtain an error between the final adiabatic eigenstate and the actual time-evolved state which is exponentially small in the evolution time, where this time itself scales as the square of the norm of the time derivative of the Hamiltonian divided by the cube of the minimal gap.

  8. An exact formulation of the time-ordered exponential using path-sums

    International Nuclear Information System (INIS)

    Giscard, P.-L.; Lui, K.; Thwaite, S. J.; Jaksch, D.

    2015-01-01

    We present the path-sum formulation for the time-ordered exponential of a time-dependent matrix. The path-sum formulation gives the time-ordered exponential as a branched continued fraction of finite depth and breadth. The terms of the path-sum have an elementary interpretation as self-avoiding walks and self-avoiding polygons on a graph. Our result is based on a representation of the time-ordered exponential as the inverse of an operator, the mapping of this inverse to sums of walks on a graph, and the algebraic structure of sets of walks. We give examples demonstrating our approach. We establish a super-exponential decay bound for the magnitude of the entries of the time-ordered exponential of sparse matrices. We give explicit results for matrices with commonly encountered sparse structures

  9. An exact formulation of the time-ordered exponential using path-sums

    Science.gov (United States)

    Giscard, P.-L.; Lui, K.; Thwaite, S. J.; Jaksch, D.

    2015-05-01

    We present the path-sum formulation for the time-ordered exponential of a time-dependent matrix. The path-sum formulation gives the time-ordered exponential as a branched continued fraction of finite depth and breadth. The terms of the path-sum have an elementary interpretation as self-avoiding walks and self-avoiding polygons on a graph. Our result is based on a representation of the time-ordered exponential as the inverse of an operator, the mapping of this inverse to sums of walks on a graph, and the algebraic structure of sets of walks. We give examples demonstrating our approach. We establish a super-exponential decay bound for the magnitude of the entries of the time-ordered exponential of sparse matrices. We give explicit results for matrices with commonly encountered sparse structures.
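
For orientation, a time-ordered exponential can also be approximated directly as an ordered product of short-time factors; the path-sum formulation of the abstract is exact and quite different, so the sketch below is only a naive baseline. The test case (invented here) uses a commuting family A(t) = w(t)·J with J the rotation generator, for which U(T) is a rotation by the integral of w.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def time_ordered_exp(A_of_t, T, n):
    """First-order product approximation of the time-ordered exponential
    U(T) = T exp(int_0^T A(t) dt): multiply factors (I + A(t_k) dt) with
    later times applied on the left."""
    dt = T / n
    U = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(n):
        a = A_of_t((k + 0.5) * dt)  # midpoint sampling of A(t)
        F = [[1.0 + a[0][0] * dt, a[0][1] * dt],
             [a[1][0] * dt, 1.0 + a[1][1] * dt]]
        U = matmul(F, U)
    return U

w = lambda t: 1.0 + t
A = lambda t: [[0.0, -w(t)], [w(t), 0.0]]
U = time_ordered_exp(A, 1.0, 20000)
theta = 1.5  # integral of (1 + t) over [0, 1]
print(U[0][0] - math.cos(theta), U[1][0] - math.sin(theta))  # small errors
```

The product approximation needs tens of thousands of steps for modest accuracy, which is one motivation for exact, structure-exploiting formulations like the path-sum.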

  10. Exponential gain of randomness certified by quantum contextuality

    Science.gov (United States)

    Um, Mark; Zhang, Junhua; Wang, Ye; Wang, Pengfei; Kim, Kihwan

    2017-04-01

We demonstrate a protocol for exponential gain of randomness certified by quantum contextuality in a trapped-ion system. Genuine randomness can be produced by quantum principles and certified by quantum inequalities. Recently, randomness expansion protocols based on Bell tests and the Kochen-Specker (KS) theorem have been demonstrated. These schemes have been theoretically developed to exponentially expand randomness and to amplify randomness from a weak initial random seed. Here, we report experimental evidence of such exponential expansion of randomness. In the experiment, we use three states of a 138Ba+ ion: a ground state and two quadrupole states. In the 138Ba+ ion system there is no detection loophole, and we apply a method to rule out certain hidden-variable models that obey a kind of extended noncontextuality.

  11. Thin film thickness measurement error reduction by wavelength selection in spectrophotometry

    International Nuclear Information System (INIS)

    Tsepulin, Vladimir G; Perchik, Alexey V; Tolstoguzov, Victor L; Karasik, Valeriy E

    2015-01-01

    Fast and accurate volumetric profilometry of thin film structures is an important problem in the electronic visual display industry. We propose to use spectrophotometry with a limited number of working wavelengths to achieve high-speed control and an approach to selecting the optimal working wavelengths to reduce the thickness measurement error. A simple expression for error estimation is presented and tested using a Monte Carlo simulation. The experimental setup is designed to confirm the stability of film thickness determination using a limited number of wavelengths

  12. A method for nonlinear exponential regression analysis

    Science.gov (United States)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
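
    The procedure described, log-linear starting values followed by iterated Taylor-series (Gauss-Newton) corrections, can be sketched in a few lines. This is an illustrative reimplementation with invented test data, not Junkin's program:

    ```python
    import math

    def fit_exponential(t, y, iters=50):
        """Fit y ≈ a*exp(-b*t) by Gauss-Newton iteration: linearize the model
        in (a, b) with a first-order Taylor expansion and repeatedly apply the
        normal-equation correction.  Starting values come from a linear fit of
        log(y) against t, as in the technique described above."""
        # linear fit: log(y) = log(a) - b*t   (design columns 1 and -t)
        s11 = float(len(t))
        s12 = sum(-ti for ti in t)
        s22 = sum(ti * ti for ti in t)
        L = [math.log(yi) for yi in y]
        r1 = sum(L)
        r2 = sum(-ti * li for ti, li in zip(t, L))
        det = s11 * s22 - s12 * s12
        a = math.exp((r1 * s22 - r2 * s12) / det)
        b = (s11 * r2 - s12 * r1) / det
        for _ in range(iters):
            f = [a * math.exp(-b * ti) for ti in t]
            r = [yi - fi for yi, fi in zip(y, f)]
            Ja = [fi / a for fi in f]                      # df/da
            Jb = [-ti * fi for ti, fi in zip(t, f)]        # df/db
            g11 = sum(u * u for u in Ja)
            g12 = sum(u * v for u, v in zip(Ja, Jb))
            g22 = sum(v * v for v in Jb)
            h1 = sum(u * ri for u, ri in zip(Ja, r))
            h2 = sum(v * ri for v, ri in zip(Jb, r))
            det = g11 * g22 - g12 * g12
            da = (h1 * g22 - h2 * g12) / det               # correction step
            db = (g11 * h2 - g12 * h1) / det
            a, b = a + da, b + db
            if abs(da) + abs(db) < 1e-12:
                break
        return a, b

    ts = [0.1 * i for i in range(30)]
    ys = [2.5 * math.exp(-0.8 * ti) for ti in ts]   # noiseless decay data
    print(fit_exponential(ts, ys))                   # ≈ (2.5, 0.8)
    ```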

  13. Quantum Zeno effect for exponentially decaying systems

    International Nuclear Information System (INIS)

    Koshino, Kazuki; Shimizu, Akira

    2004-01-01

    The quantum Zeno effect - suppression of decay by frequent measurements - was believed to occur only when the response of the detector is so quick that the initial tiny deviation from the exponential decay law is detectable. However, we show that it can occur even for exactly exponentially decaying systems, for which this condition is never satisfied, by considering a realistic case where the detector has a finite energy band of detection. The conventional theories correspond to the limit of an infinite bandwidth. This implies that the Zeno effect occurs more widely than expected thus far

  14. Application of heterogeneous method for the interpretation of exponential experiments

    International Nuclear Information System (INIS)

    Birkhoff, G.; Bondar, L.

    1977-01-01

The present paper briefly reviews work, carried out mainly during 1967 and 1968, on the application of heterogeneous methods to the interpretation of exponential experiments with ORGEL-type lattices (lattices of natural-uranium cluster elements with organic coolant, moderated by heavy water). Within this work a heterogeneous computer program in (r,γ) geometry was written, based on the NORDHEIM method, using a uniform moderator, three energy groups, and monopole and dipole sources. This code is especially adapted to regular square lattices in a cylindrical tank. Full use of the lattice symmetry was made to reduce the numerical work of the theory. A further reduction was obtained by introducing a group-averaged extrapolation distance at the external boundary. Channel parameters were evaluated with the PINOCCHIO code. Comparisons of calculated and measured thermal neutron fluxes showed good agreement. Equivalence of heterogeneous and homogeneous theory was found for lattices comprising a minimum of 32, 24, and 16 fuel elements for under-, well-, and over-moderated lattices, respectively. Heterogeneous calculations of high-leakage lattices suffered from the lack of good methods for computing the axial and radial streaming parameters. Interpretation of buckling measurements in the subcritical facility EXPO already requires a more accurate evaluation of the streaming effects than we made. The potential of heterogeneous theory in the field of exponential experiments is thought to be limited by the precision with which the streaming parameters can be calculated

  15. Improved variable reduction in partial least squares modelling by Global-Minimum Error Uninformative-Variable Elimination.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2017-08-22

The calibration performance of Partial Least Squares regression (PLS) can be improved by eliminating uninformative variables. For PLS, many variable elimination methods have been developed. One is the Uninformative-Variable Elimination for PLS (UVE-PLS). However, the number of variables retained by UVE-PLS is usually still large. In UVE-PLS, variable elimination is repeated as long as the root mean squared error of cross validation (RMSECV) is decreasing, and the set of variables at this first local minimum is retained. In this paper, a modification of UVE-PLS is proposed and investigated, in which UVE is repeated until no further reduction in variables is possible, followed by a search for the global RMSECV minimum. The method is called Global-Minimum Error Uninformative-Variable Elimination for PLS, denoted GME-UVE-PLS or simply GME-UVE. After each iteration, the predictive ability of the PLS model built with the remaining variable set is assessed by RMSECV. The variable set with the global RMSECV minimum is then finally selected. The goal is to obtain smaller sets of variables with predictability similar to or better than that of the classical UVE-PLS method. The performance of the GME-UVE-PLS method is investigated using four data sets, i.e. a simulated set, NIR and NMR spectra, and a theoretical molecular-descriptor set, resulting in twelve profile-response (X-y) calibrations. The selective and predictive performances of the models resulting from GME-UVE-PLS are statistically compared to those from UVE-PLS and 1-step UVE using one-sided paired t-tests. The results demonstrate that variable reduction with the proposed GME-UVE-PLS method usually eliminates significantly more variables than the classical UVE-PLS, while the predictive abilities of the resulting models are better. With GME-UVE-PLS, a lower number of uninformative variables, without a chemical meaning for the response, may be retained than with UVE-PLS. The selectivity of the classical UVE method

  16. Error field considerations for BPX

    International Nuclear Information System (INIS)

    LaHaye, R.J.

    1992-01-01

Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low-level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing-mode-stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field-line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed, along with possible correcting coils for reducing such field errors

  17. Exponential networked synchronization of master-slave chaotic systems with time-varying communication topologies

    International Nuclear Information System (INIS)

    Yang Dong-Sheng; Liu Zhen-Wei; Liu Zhao-Bing; Zhao Yan

    2012-01-01

The networked synchronization problem of a class of master-slave chaotic systems with time-varying communication topologies is investigated in this paper. Based on algebraic graph theory and matrix theory, a simple linear state feedback controller is designed to synchronize the master chaotic system and the slave chaotic systems over a time-varying communication topology. The exponential stability of the closed-loop networked synchronization error system is guaranteed by applying Lyapunov stability theory. The derived novel criteria are in the form of linear matrix inequalities (LMIs), which are easy to check and greatly reduce the computational burden of determining the feedback matrices. This paper provides an alternative networked secure communication scheme which can be extended conveniently. An illustrative example is given to demonstrate the effectiveness of the proposed networked synchronization method. (general)

  18. Exponential smoothing weighted correlations

    Science.gov (United States)

    Pozzi, F.; Di Matteo, T.; Aste, T.

    2012-06-01

    In many practical applications, correlation matrices might be affected by the "curse of dimensionality" and by an excessive sensitiveness to outliers and remote observations. These shortcomings can cause problems of statistical robustness especially accentuated when a system of dynamic correlations over a running window is concerned. These drawbacks can be partially mitigated by assigning a structure of weights to observational events. In this paper, we discuss Pearson's ρ and Kendall's τ correlation matrices, weighted with an exponential smoothing, computed on moving windows using a data-set of daily returns for 300 NYSE highly capitalized companies in the period between 2001 and 2003. Criteria for jointly determining optimal weights together with the optimal length of the running window are proposed. We find that the exponential smoothing can provide more robust and reliable dynamic measures and we discuss that a careful choice of the parameters can reduce the autocorrelation of dynamic correlations whilst keeping significance and robustness of the measure. Weighted correlations are found to be smoother and recovering faster from market turbulence than their unweighted counterparts, helping also to discriminate more effectively genuine from spurious correlations.
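
    An exponentially smoothed Pearson correlation of the kind discussed above can be sketched directly: assign normalized weights proportional to α^(n-1-k), so that recent observations dominate, then compute the weighted moments. A minimal illustration on toy data (the parameter choices are arbitrary, not the paper's):

    ```python
    import math

    def exp_weights(n, alpha):
        """Exponential smoothing weights w_k ∝ alpha^(n-1-k): the most recent
        observation (k = n-1) gets the largest weight; weights sum to 1."""
        w = [alpha ** (n - 1 - k) for k in range(n)]
        s = sum(w)
        return [wi / s for wi in w]

    def weighted_pearson(x, y, w):
        """Pearson's rho computed with weighted means, variances, covariance."""
        mx = sum(wi * xi for wi, xi in zip(w, x))
        my = sum(wi * yi for wi, yi in zip(w, y))
        cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
        vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y))
        return cov / math.sqrt(vx * vy)

    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [1.1, 1.9, 3.2, 3.9, 5.1]
    w = exp_weights(len(x), alpha=0.9)
    print(weighted_pearson(x, y, w))   # close to +1 for this nearly linear pair
    ```

    Sliding this computation over a running window, as in the paper, gives a dynamic correlation series that reacts faster to recent data than the unweighted estimator.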

  19. Forecasting Inflow and Outflow of Money Currency in East Java Using a Hybrid Exponential Smoothing and Calendar Variation Model

    Science.gov (United States)

    Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut

    2018-03-01

The availability of money currency in Bank Indonesia can be examined through the inflow and outflow of currency. The objective of this research is to forecast the inflow and outflow of currency at each Representative Office (RO) of BI in East Java by using a hybrid of exponential smoothing, based on the state space approach, and a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research. The first is a simulation study of the hybrid model using data containing trend, seasonal, and calendar variation patterns. The second is the application of the hybrid model to forecasting the inflow and outflow of currency at each RO of BI in East Java. The first study indicates that the exponential smoothing model cannot capture the calendar variation pattern, yielding RMSE values ten times the standard deviation of the error. The second indicates that the hybrid model can capture the trend, seasonal, and calendar variation patterns, yielding RMSE values approaching the standard deviation of the error. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of currency in Surabaya, Malang, and Jember, and the outflow of currency in Surabaya and Kediri. Conversely, the time series regression model performs better for three variables: the outflow of currency in Malang and Jember, and the inflow of currency in Kediri.
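
    The exponential smoothing component alone, which the study finds unable to capture calendar variation, reduces to a one-line level update. A minimal sketch of simple exponential smoothing (illustrative only, not the state-space hybrid used in the paper):

    ```python
    def ses(y, alpha):
        """Simple exponential smoothing: level update
        l_t = alpha*y_t + (1 - alpha)*l_{t-1}.
        Returns the smoothed level after each observation; the forecast
        beyond the sample is flat at the last level, which is exactly why
        pure SES cannot reproduce calendar-variation spikes."""
        level = y[0]
        out = [level]
        for obs in y[1:]:
            level = alpha * obs + (1 - alpha) * level
            out.append(level)
        return out

    series = [10.0, 12.0, 11.0, 13.0, 12.5]
    print(ses(series, alpha=0.5))   # [10.0, 11.0, 11.0, 12.0, 12.25]
    ```

    The hybrid in the paper adds regression terms for the calendar effects on top of a smoothing model of this kind.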

  20. Residual, restarting and Richardson iteration for the matrix exponential

    NARCIS (Netherlands)

    Bochev, Mikhail A.; Grimm, Volker; Hochbruck, Marlis

    2013-01-01

    A well-known problem in computing some matrix functions iteratively is the lack of a clear, commonly accepted residual notion. An important matrix function for which this is the case is the matrix exponential. Suppose the matrix exponential of a given matrix times a given vector has to be computed.

  1. Residual, restarting and Richardson iteration for the matrix exponential

    NARCIS (Netherlands)

    Bochev, Mikhail A.

    2010-01-01

A well-known problem in computing some matrix functions iteratively is the lack of a clear, commonly accepted residual notion. An important matrix function for which this is the case is the matrix exponential. Assume the matrix exponential of a given matrix times a given vector has to be computed. We

  2. Exponential B-splines and the partition of unity property

    DEFF Research Database (Denmark)

    Christensen, Ole; Massopust, Peter

    2012-01-01

    We provide an explicit formula for a large class of exponential B-splines. Also, we characterize the cases where the integer-translates of an exponential B-spline form a partition of unity up to a multiplicative constant. As an application of this result we construct explicitly given pairs of dual...

  3. Waveform inversion with exponential damping using a deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2016-09-06

The lack of low-frequency components in seismic data usually leads full waveform inversion into local minima of its objective function. An exponential damping of the data, on the other hand, generates artificial low frequencies, which can be used to admit long-wavelength updates in waveform inversion. Another feature of exponential damping is that the energy of each trace also decreases exponentially with source-receiver offset, where the least-squares misfit function does not work well. Thus, we propose a deconvolution-based objective function for waveform inversion with exponential damping. Since the deconvolution filter includes a division process, it can properly address the unbalanced energy levels of the individual traces of the damped wavefield. Numerical examples demonstrate that our proposed FWI based on the deconvolution filter can generate a convergent long-wavelength structure from the artificial low-frequency components introduced by the exponential damping.

  4. The Matrix exponential, Dynamic Systems and Control

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad

The matrix exponential can be found in various connections in the analysis and control of dynamic systems. In this short note we list a few examples. The matrix exponential usually pops up in connection with the sampling process, whether in a deterministic or a stochastic setting, or as a tool for determining a Gramian matrix. This note is intended to be used in connection with teaching the course in Stochastic Adaptive Control (02421) given at Informatics and Mathematical Modelling (IMM), The Technical University of Denmark. This work is the result of a study of the literature.
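
    The role of the matrix exponential in the sampling process can be illustrated for a scalar system: under zero-order hold, the discretized dynamics come directly from exp(aT). A minimal sketch (a standard textbook construction, not taken from the note itself):

    ```python
    import math

    def zoh_discretize(a, b, T):
        """Zero-order-hold sampling of the scalar system x' = a*x + b*u:
        x[k+1] = ad*x[k] + bd*u[k], with
        ad = exp(a*T) and bd = (exp(a*T) - 1)/a * b   (for a != 0).
        For matrix systems ad = expm(A*T) plays the same role."""
        ad = math.exp(a * T)
        bd = (ad - 1.0) / a * b
        return ad, bd

    ad, bd = zoh_discretize(a=-2.0, b=1.0, T=0.1)
    print(ad, bd)   # ad ≈ exp(-0.2) ≈ 0.8187
    ```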

  5. Analytical results of variance reduction characteristics of biased Monte Carlo for deep-penetration problems

    International Nuclear Information System (INIS)

    Murthy, K.P.N.; Indira, R.

    1986-01-01

An analytical formulation is presented for calculating the mean and variance of transmission for a model deep-penetration problem. With this formulation, the variance reduction characteristics of two biased Monte Carlo schemes are studied. The first is the usual exponential biasing, for which it is shown that the optimal biasing parameter depends sensitively on the scattering properties of the shielding medium. The second is a scheme that couples exponential biasing to the recently proposed scattering-angle biasing. It is demonstrated that the coupled scheme performs better than exponential biasing alone.
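
    Exponential biasing can be sketched for the simplest transmission problem: estimating the probability that an exponentially distributed path length (in mean free paths) exceeds the shield thickness. Sampling from a stretched exponential and correcting with the likelihood-ratio weight is the essence of the scheme; this is a hedged toy model, not the authors' formulation:

    ```python
    import math
    import random

    def transmission_mc(thickness, lam, n=200000, seed=1):
        """Importance-sampling estimate of P(X > thickness) for X ~ Exp(1),
        using exponential biasing: sample from rate `lam` < 1 (stretched
        paths) and weight each transmitted sample by the likelihood ratio
        w = exp(-x) / (lam * exp(-lam * x)).
        The exact answer is exp(-thickness)."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n):
            x = rng.expovariate(lam)
            if x > thickness:
                total += math.exp(-x) / (lam * math.exp(-lam * x))
        return total / n

    # lam chosen so the biased mean free path is comparable to the thickness
    print(transmission_mc(thickness=10.0, lam=0.1))   # ≈ exp(-10) ≈ 4.54e-5
    ```

    An unbiased analog-sampling estimate of the same probability would see almost no transmitted histories at this depth, which is the variance problem the biasing addresses.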

  6. Non-exponential extinction of radiation by fractional calculus modelling

    International Nuclear Information System (INIS)

    Casasanta, G.; Ciani, D.; Garra, R.

    2012-01-01

Possible deviations from exponential attenuation of radiation in a random medium have been recently studied in several works. These deviations from the classical Beer-Lambert law were justified from a stochastic point of view by Kostinski (2001). In his model he introduced spatial correlation among the random variables, i.e. a space memory. In this note we introduce a different approach, including a memory formalism in the classical Beer-Lambert law through fractional calculus modelling. We find a generalized Beer-Lambert law in which the exponential memoryless extinction is only a special case of non-exponential extinction solutions described by Mittag-Leffler functions. We also justify this result from a stochastic point of view, using the space-fractional Poisson process. Moreover, we discuss some concrete advantages of this approach from an experimental point of view, giving an estimate of the deviation from the exponential extinction law as the optical depth varies. This is also an interesting model for understanding the meaning of the fractional derivative as an instrument to transmit the randomness of microscopic dynamics to the macroscopic scale.
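
    The Mittag-Leffler function that replaces the exponential in the generalized Beer-Lambert law can be evaluated from its power series for moderate optical depths; the order α = 1 recovers the classical exponential extinction. A small sketch (numerical illustration only, not the paper's analysis):

    ```python
    import math

    def mittag_leffler(alpha, z, terms=100):
        """Power series E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1);
        converges for moderate |z|.  alpha = 1 gives E_1(z) = exp(z),
        i.e. the classical Beer-Lambert attenuation when z = -tau."""
        return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

    tau = 2.0
    print(mittag_leffler(1.0, -tau))    # ≈ exp(-2) ≈ 0.1353, the memoryless case
    print(mittag_leffler(0.8, -tau))    # non-exponential extinction for alpha < 1
    ```

    For large optical depths the series becomes ill-conditioned and other evaluation schemes are needed, but this suffices to see the departure from exponential decay.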

  7. Study of thermal conductivity and thermal rectification in exponential mass graded lattices

    Energy Technology Data Exchange (ETDEWEB)

    Shah, Tejal N. [Bhavan' s Sheth R.A. College of Science, Khanpur, Ahmedabad 380 001, Gujarat (India); Gajjar, P.N., E-mail: pngajjar@rediffmail.com [Department of Physics, University School of Sciences, Gujarat University, Ahmedabad 380 009, Gujarat (India)

    2012-01-09

The concept of exponential mass variation of oscillators along a chain of N oscillators is proposed in the present Letter. The temperature profile and thermal conductivity of one-dimensional (1D) exponentially mass-graded harmonic and anharmonic lattices are studied on the basis of the Fermi–Pasta–Ulam (FPU) β model. The present findings conclude that the exponentially mass-graded chain provides higher conductivity than the linearly mass-graded chain. The exponentially mass-graded anharmonic chain generates a thermal rectification of 70–75%, better than linearly mass-graded materials reported so far. Thus, instead of linearly mass-graded material, exponentially mass-graded material will be a better and genuine choice for controlling heat flow at the nano-scale. -- Highlights: ► In PRE 82 (2010) 040101, the use of mass-graded material as a thermal device is explored. ► The concept of exponentially mass-graded material is proposed. ► The rectification obtained is about 70–75%, better than linearly mass-graded materials. ► Exponentially mass-graded material will be a better choice for thermal devices at the nano-scale.

  8. Exponential Correlation of IQ and the Wealth of Nations

    Science.gov (United States)

    Dickerson, Richard E.

    2006-01-01

    Plots of mean IQ and per capita real Gross Domestic Product for groups of 81 and 185 nations, as collected by Lynn and Vanhanen, are best fitted by an exponential function of the form: GDP = "a" * 10["b"*(IQ)], where "a" and "b" are empirical constants. Exponential fitting yields markedly higher correlation coefficients than either linear or…
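
    The quoted exponential form GDP = a * 10^(b*IQ) is fitted by linearizing with a base-10 logarithm and running ordinary least squares on the transformed data. A sketch on synthetic data (the constants below are illustrative, not Lynn and Vanhanen's):

    ```python
    import math

    def fit_exponential_base10(iq, gdp):
        """Least-squares fit of GDP = a * 10**(b*IQ), linearized as
        log10(GDP) = log10(a) + b*IQ and solved by simple linear regression."""
        n = len(iq)
        ly = [math.log10(g) for g in gdp]
        mx = sum(iq) / n
        my = sum(ly) / n
        b = sum((x - mx) * (y - my) for x, y in zip(iq, ly)) / \
            sum((x - mx) ** 2 for x in iq)
        a = 10 ** (my - b * mx)
        return a, b

    # synthetic data generated from known constants is recovered exactly
    iq_vals = [70, 80, 90, 100, 110]
    gdp_vals = [2.0 * 10 ** (0.02 * q) for q in iq_vals]
    print(fit_exponential_base10(iq_vals, gdp_vals))   # ≈ (2.0, 0.02)
    ```

    Comparing the residuals of this fit against those of a plain linear fit is how the higher correlation coefficients reported in the abstract would be assessed.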

  9. New Results of Global Exponential Stabilization for BLDCMs System

    OpenAIRE

    Fengxia Tian; Fangchao Zhen; Guopeng Zhou; Xiaoxin Liao

    2015-01-01

    The global exponential stabilization for brushless direct current motor (BLDCM) system is studied. Four linear and simple feedback controllers are proposed to realize the global stabilization of BLDCM with exponential convergence rate; the control law used in each theorem is less conservative and more concise. Finally, an example is given to demonstrate the correctness of the proposed results.

  10. A note on exponential convergence of neural networks with unbounded distributed delays

    Energy Technology Data Exchange (ETDEWEB)

    Chu Tianguang [Intelligent Control Laboratory, Center for Systems and Control, Department of Mechanics and Engineering Science, Peking University, Beijing 100871 (China)]. E-mail: chutg@pku.edu.cn; Yang Haifeng [Intelligent Control Laboratory, Center for Systems and Control, Department of Mechanics and Engineering Science, Peking University, Beijing 100871 (China)

    2007-12-15

    This note examines issues concerning global exponential convergence of neural networks with unbounded distributed delays. Sufficient conditions are derived by exploiting exponentially fading memory property of delay kernel functions. The method is based on comparison principle of delay differential equations and does not need the construction of any Lyapunov functionals. It is simple yet effective in deriving less conservative exponential convergence conditions and more detailed componentwise decay estimates. The results of this note and [Chu T. An exponential convergence estimate for analog neural networks with delay. Phys Lett A 2001;283:113-8] suggest a class of neural networks whose globally exponentially convergent dynamics is completely insensitive to a wide range of time delays from arbitrary bounded discrete type to certain unbounded distributed type. This is of practical interest in designing fast and reliable neural circuits. Finally, an open question is raised on the nature of delay kernels for attaining exponential convergence in an unbounded distributed delayed neural network.

  11. A note on exponential convergence of neural networks with unbounded distributed delays

    International Nuclear Information System (INIS)

    Chu Tianguang; Yang Haifeng

    2007-01-01

    This note examines issues concerning global exponential convergence of neural networks with unbounded distributed delays. Sufficient conditions are derived by exploiting exponentially fading memory property of delay kernel functions. The method is based on comparison principle of delay differential equations and does not need the construction of any Lyapunov functionals. It is simple yet effective in deriving less conservative exponential convergence conditions and more detailed componentwise decay estimates. The results of this note and [Chu T. An exponential convergence estimate for analog neural networks with delay. Phys Lett A 2001;283:113-8] suggest a class of neural networks whose globally exponentially convergent dynamics is completely insensitive to a wide range of time delays from arbitrary bounded discrete type to certain unbounded distributed type. This is of practical interest in designing fast and reliable neural circuits. Finally, an open question is raised on the nature of delay kernels for attaining exponential convergence in an unbounded distributed delayed neural network

  12. Filtering Methods for Error Reduction in Spacecraft Attitude Estimation Using Quaternion Star Trackers

    Science.gov (United States)

    Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil

    2011-01-01

    Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors, 1) extended Kalman filter (EKF) augmented with Markov states, 2) Unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.

  13. Audit of medication errors by anesthetists in North Western Nigeria ...

    African Journals Online (AJOL)

    ... errors do occur in the everyday practice of anesthetists in Nigeria as in other countries and can lead to morbidity and mortality in our patients. Routine audit and reporting of critical incidents including errors in drug administration should be encouraged. Reduction of medication errors is an important aspect of patient safety, ...

  14. Asymptotic estimates and exponential stability for higher-order monotone difference equations

    Directory of Open Access Journals (Sweden)

    Pituk Mihály

    2005-01-01

Asymptotic estimates are established for higher-order scalar difference equations and inequalities the right-hand sides of which generate a monotone system with respect to the discrete exponential ordering. It is shown that in some cases the exponential estimates can be replaced with a more precise limit relation. As corollaries, a generalization of discrete Halanay-type inequalities and explicit sufficient conditions for the global exponential stability of the zero solution are given.

  15. Asymptotic estimates and exponential stability for higher-order monotone difference equations

    Directory of Open Access Journals (Sweden)

    Mihály Pituk

    2005-03-01

Asymptotic estimates are established for higher-order scalar difference equations and inequalities the right-hand sides of which generate a monotone system with respect to the discrete exponential ordering. It is shown that in some cases the exponential estimates can be replaced with a more precise limit relation. As corollaries, a generalization of discrete Halanay-type inequalities and explicit sufficient conditions for the global exponential stability of the zero solution are given.

  16. Late-time acceleration with steep exponential potentials

    Energy Technology Data Exchange (ETDEWEB)

    Shahalam, M. [Zhejiang University of Technology, Institute for Advanced Physics and Mathematics, Hangzhou (China); Yang, Weiqiang [Liaoning Normal University, Department of Physics, Dalian (China); Myrzakulov, R. [Eurasian National University, Department of General and Theoretical Physics, Eurasian International Center for Theoretical Physics, Astana (Kazakhstan); Wang, Anzhong [Zhejiang University of Technology, Institute for Advanced Physics and Mathematics, Hangzhou (China); Baylor University, GCAP-CASPER, Department of Physics, Waco, TX (United States)

    2017-12-15

In this letter, we study the cosmological dynamics of a potential steeper than exponential. Our analysis shows that a simple extension of an exponential potential allows one to capture late-time cosmic acceleration and retain the tracker behavior. We also perform statefinder and Om diagnostics to distinguish dark energy models among themselves and from ΛCDM. In addition, to put observational constraints on the model parameters, we modify the publicly available CosmoMC code and use an integrated database of baryon acoustic oscillation data, the latest Type Ia supernovae from the Joint Light Curves sample, and the local Hubble constant value measured by the Hubble Space Telescope. (orig.)

  17. Late-time acceleration with steep exponential potentials

    International Nuclear Information System (INIS)

    Shahalam, M.; Yang, Weiqiang; Myrzakulov, R.; Wang, Anzhong

    2017-01-01

In this letter, we study the cosmological dynamics of a potential steeper than exponential. Our analysis shows that a simple extension of an exponential potential allows one to capture late-time cosmic acceleration and retain the tracker behavior. We also perform statefinder and Om diagnostics to distinguish dark energy models among themselves and from ΛCDM. In addition, to put observational constraints on the model parameters, we modify the publicly available CosmoMC code and use an integrated database of baryon acoustic oscillation data, the latest Type Ia supernovae from the Joint Light Curves sample, and the local Hubble constant value measured by the Hubble Space Telescope. (orig.)

  18. Effect of double-shell structure on reduction of field errors in the STP-3(M) reversed-field pinch

    International Nuclear Information System (INIS)

    Yamada, S.; Masamune, S.; Nagata, A.; Arimoto, H.; Oshiyama, H.; Sato, K.I.

    1988-08-01

Reversed-field pinch (RFP) operation on STP-3(M) proved that the addition of a quasi-stationary vertical field B⊥, together with a large reduction of the irregular magnetic field at the shell gap, could remarkably improve the plasma confinement properties. Here, the gaps of a thick shell are wholly covered by a single primary coil having a shell shape. The measured field error at the gap is as small as 7.5% of the poloidal field. The application of B⊥ sets the plasma in a more nearly perfect equilibrium. In this operation, the plasma resistivity decreased by a factor of 2 and the electron temperature rose to 0.8 keV. (author)

  19. Analytic results for asymmetric random walk with exponential transition probabilities

    International Nuclear Information System (INIS)

    Gutkowicz-Krusin, D.; Procaccia, I.; Ross, J.

    1978-01-01

    We present here exact analytic results for a random walk on a one-dimensional lattice with asymmetric, exponentially distributed jump probabilities. We derive the generating functions of such a walk for a perfect lattice and for a lattice with absorbing boundaries. We obtain solutions for some interesting moment properties, such as mean first passage time, drift velocity, dispersion, and branching ratio for absorption. The symmetric exponential walk is solved as a special case. The scaling of the mean first passage time with the size of the system for the exponentially distributed walk is determined by the symmetry and is independent of the range
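
    The drift velocity of such a walk can be checked by direct simulation: for exponentially distributed jump lengths the mean displacement per step has a simple closed form. A hedged sketch (a toy Monte Carlo check, not the paper's generating-function analysis):

    ```python
    import random

    def simulate_drift(p, lam_r, lam_l, steps=200000, seed=7):
        """Monte Carlo estimate of the drift (mean displacement per step) of
        an asymmetric walk that jumps right with probability p by an
        Exp(lam_r)-distributed distance, and left otherwise by an
        Exp(lam_l)-distributed distance.
        Analytic drift per step: p/lam_r - (1-p)/lam_l."""
        rng = random.Random(seed)
        x = 0.0
        for _ in range(steps):
            if rng.random() < p:
                x += rng.expovariate(lam_r)
            else:
                x -= rng.expovariate(lam_l)
        return x / steps

    est = simulate_drift(p=0.6, lam_r=1.0, lam_l=2.0)
    print(est)   # ≈ 0.6/1.0 - 0.4/2.0 = 0.4
    ```

    Setting lam_r = lam_l and p = 0.5 recovers the symmetric exponential walk solved as a special case in the paper.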

  20. Exponential Sensitivity and its Cost in Quantum Physics.

    Science.gov (United States)

    Gilyén, András; Kiss, Tamás; Jex, Igor

    2016-02-10

    State selective protocols, like entanglement purification, lead to an essentially non-linear quantum evolution, unusual in naturally occurring quantum processes. Sensitivity to initial states in quantum systems, stemming from such non-linear dynamics, is a promising perspective for applications. Here we demonstrate that chaotic behaviour is a rather generic feature in state selective protocols: exponential sensitivity can exist for all initial states in an experimentally realisable optical scheme. Moreover, any complex rational polynomial map, including the example of the Mandelbrot set, can be directly realised. In state selective protocols, one needs an ensemble of initial states, the size of which decreases with each iteration. We prove that exponential sensitivity to initial states in any quantum system has to be related to downsizing the initial ensemble also exponentially. Our results show that magnifying initial differences of quantum states (a Schrödinger microscope) is possible; however, there is a strict bound on the number of copies needed.

  1. Exponential and Critical Experiments Vol. II. Proceedings of the Symposium on Exponential and Critical Experiments

    International Nuclear Information System (INIS)

    1964-01-01

In September 1963 the International Atomic Energy Agency organized the Symposium on Exponential and Critical Experiments in Amsterdam, Netherlands, at the invitation of the Government of the Netherlands. The Symposium enabled scientists from Member States to discuss the results of such experiments, which provide the physics data necessary for the design of power reactors. Great advances made in recent years in this field have provided scientists with highly sophisticated and reliable experimental and theoretical methods. This trend is reflected in the presentation, at the Symposium, of many new experimental techniques resulting in more detailed and accurate information and a reduction of costs. Both the number of experimental parameters and their range of variation have been extended, and a closer degree of simulation of the actual power reactor has been achieved, for example, by means of high-temperature critical assemblies. Basic types of lattices have continued to be the objective of many investigations, and extensive theoretical analyses have been carried out to provide a more thorough understanding of the neutron physics involved. Twenty-nine countries and 3 international organizations were represented by 198 participants. Seventy-one papers were presented. These numbers alone show the wide interest which the topic commands in the field of reactor design. We hope that this publication, which includes the papers presented at the Symposium and a record of the discussions, will prove useful as a work of reference to scientists working in this field.

  2. On Extended Exponential General Linear Methods PSQ with S>Q ...

    African Journals Online (AJOL)

This paper is concerned with the construction and numerical analysis of Extended Exponential General Linear Methods. These methods, in contrast to others in the literature, consider methods with step number greater than the stage order (S>Q). Numerical experiments in this study indicate that Extended Exponential ...

  3. Inference for exponentiated general class of distributions based on record values

    Directory of Open Access Journals (Sweden)

    Samah N. Sindi

    2017-09-01

The main objective of this paper is to suggest and study a new exponentiated general class (EGC) of distributions. Maximum likelihood, Bayesian and empirical Bayesian estimators of the parameter of the EGC of distributions based on lower record values are obtained. Furthermore, Bayesian prediction of future records is considered. Based on lower record values, the exponentiated Weibull distribution, its special cases and the exponentiated Gompertz distribution are applied to the EGC of distributions.

  4. Exploring parameter constraints on quintessential dark energy: The exponential model

    International Nuclear Information System (INIS)

    Bozek, Brandon; Abrahamse, Augusta; Albrecht, Andreas; Barnard, Michael

    2008-01-01

We present an analysis of a scalar field model of dark energy with an exponential potential using the Dark Energy Task Force (DETF) simulated data models. Using Markov Chain Monte Carlo sampling techniques we examine the ability of each simulated data set to constrain the parameter space of the exponential potential for data sets based on a cosmological constant and a specific exponential scalar field model. We compare our results with the constraining power calculated by the DETF using their 'w0-wa' parametrization of the dark energy. We find that respective increases in constraining power from one stage to the next produced by our analysis give results consistent with DETF results. To further investigate the potential impact of future experiments, we also generate simulated data for an exponential model background cosmology which cannot be distinguished from a cosmological constant at DETF 'Stage 2', and show that for this cosmology good DETF Stage 4 data would exclude a cosmological constant by better than 3σ

  5. Possible stretched exponential parametrization for humidity absorption in polymers.

    Science.gov (United States)

    Hacinliyan, A; Skarlatos, Y; Sahin, G; Atak, K; Aybar, O O

    2009-04-01

    Polymer thin films have irregular transient current characteristics under constant voltage. In hydrophilic and hydrophobic polymers, the irregularity is also known to depend on the humidity absorbed by the polymer sample. Different stretched exponential models are studied and it is shown that the absorption of humidity as a function of time can be adequately modelled by a class of these stretched exponential absorption models.
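A minimal sketch of the kind of parametrization the abstract describes (my own illustration, with assumed parameter values): fit the stretched-exponential uptake M(t) = M∞(1 − exp(−(t/τ)^β)) to noise-free synthetic absorption data by a coarse grid search, with the saturation level M∞ taken as known for simplicity.

```python
import math

def stretched_exp(t, m_inf, tau, beta):
    # M(t) = M_inf * (1 - exp(-(t/tau)^beta)): stretched-exponential uptake
    return m_inf * (1.0 - math.exp(-((t / tau) ** beta)))

# synthetic "absorbed humidity" data generated with known parameters
true = dict(m_inf=1.0, tau=50.0, beta=0.6)
times = [1.0 * k for k in range(1, 201)]
data = [stretched_exp(t, **true) for t in times]

def sse(tau, beta):
    # sum of squared errors with M_inf fixed at 1.0
    return sum((stretched_exp(t, 1.0, tau, beta) - y) ** 2
               for t, y in zip(times, data))

# coarse grid search over (tau, beta); a real fit would refine this
best = min(((sse(tau, beta), tau, beta)
            for tau in range(10, 101, 5)
            for beta in [0.4, 0.5, 0.6, 0.7, 0.8]),
           key=lambda r: r[0])
print(best[1], best[2])  # recovered tau and beta
```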

  6. Approaches to reducing photon dose calculation errors near metal implants

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.; Mirkovic, Dragan; Kry, Stephen F., E-mail: sfkry@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Liu, Xinming [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Stingo, Francesco C. [Department of Biostatistics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States)

    2016-09-15

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact

  8. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  9. The Existence of Weak D-Pullback Exponential Attractor for Nonautonomous Dynamical System

    Directory of Open Access Journals (Sweden)

    Yongjun Li

    2016-01-01

First, for a process {U(t,τ) : t ≥ τ}, we introduce a new concept, called the weak D-pullback exponential attractor, which is a family of sets {M(t) : t ≤ T}, for any T ∈ R, satisfying the following: (i) M(t) is compact; (ii) M(t) is positively invariant, that is, U(t,τ)M(τ) ⊂ M(t); and (iii) there exist k, l > 0 such that dist(U(t,τ)B(τ), M(t)) ≤ k e^(-l(t-τ)); that is, M(t) pullback exponentially attracts B(τ). Then we give a method to obtain the existence of weak D-pullback exponential attractors for a process. As an application, we obtain the existence of a weak D-pullback exponential attractor for the reaction-diffusion equation in H_0^1 with exponential growth of the external force.

  10. Novel MGF-based expressions for the average bit error probability of binary signalling over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2014-04-01

The main idea in the moment generating function (MGF) approach is to alternatively express the conditional bit error probability (BEP) in a desired exponential form so that possibly multi-fold performance averaging is readily converted into a computationally efficient single-fold averaging - sometimes into a closed-form - by means of using the MGF of the signal-to-noise ratio. However, as presented in [1] and specifically indicated in [2] and also to the best of our knowledge, there does not exist an MGF-based approach in the literature to represent Wojnar's generic BEP expression in a desired exponential form. This paper presents novel MGF-based expressions for calculating the average BEP of binary signalling over generalized fading channels, specifically by expressing Wojnar's generic BEP expression in a desirable exponential form. We also propose MGF-based expressions to explore the amount of dispersion in the BEP for binary signalling over generalized fading channels.
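A textbook instance of the single-fold MGF averaging described above (not this paper's generalized expressions): for BPSK over Rayleigh fading, the average BEP is Pb = (1/π)∫₀^{π/2} M(−1/sin²θ) dθ with M(s) = 1/(1 − s·γ̄), which can be checked against the known closed form ½(1 − √(γ̄/(1+γ̄))).

```python
import math

def mgf_rayleigh(s, avg_snr):
    # MGF of the instantaneous SNR under Rayleigh fading: M(s) = 1/(1 - s*avg_snr)
    return 1.0 / (1.0 - s * avg_snr)

def bep_mgf(avg_snr, n=2000):
    # single finite-range average: Pb = (1/pi) * int_0^{pi/2} M(-1/sin^2 t) dt
    h = (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h  # midpoint rule avoids sin(0) at the endpoint
        total += mgf_rayleigh(-1.0 / math.sin(t) ** 2, avg_snr)
    return total * h / math.pi

avg_snr = 10.0
closed_form = 0.5 * (1.0 - math.sqrt(avg_snr / (1.0 + avg_snr)))
print(bep_mgf(avg_snr), closed_form)  # the two agree
```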

  11. Global exponential stability for discrete-time neural networks with variable delays

    International Nuclear Information System (INIS)

    Chen Wuhua; Lu Xiaomei; Liang Dongying

    2006-01-01

This Letter provides new exponential stability criteria for discrete-time neural networks with variable delays. The main technique is to reduce the exponential convergence estimation of the neural network solution to that of one component of the corresponding solution by constructing a Lyapunov function based on an M-matrix. By introducing a tuning-parameter diagonal matrix, the delay-independent and delay-dependent exponential stability conditions are unified in the same mathematical formula. The effectiveness of the new results is illustrated by three examples

  12. New results for exponential synchronization of linearly coupled ordinary differential systems

    International Nuclear Information System (INIS)

    Tong Ping; Chen Shi-Hua

    2017-01-01

    This paper investigates the exponential synchronization of linearly coupled ordinary differential systems. The intrinsic nonlinear dynamics may not satisfy the QUAD condition or weak-QUAD condition. First, it gives a new method to analyze the exponential synchronization of the systems. Second, two theorems and their corollaries are proposed for the local or global exponential synchronization of the coupled systems. Finally, an application to the linearly coupled Hopfield neural networks and several simulations are provided for verifying the effectiveness of the theoretical results. (paper)

  13. Exponential decay and exponential recovery of modal gains in high count rate channel electron multipliers

    International Nuclear Information System (INIS)

    Hahn, S.F.; Burch, J.L.

    1980-01-01

    A series of data on high count rate channel electron multipliers revealed an initial drop and subsequent recovery of gains in exponential fashion. The FWHM of the pulse height distribution at the initial stage of testing can be used as a good criterion for the selection of operating bias voltage of the channel electron multiplier

  14. The exponential critical state of high-Tc ceramics

    International Nuclear Information System (INIS)

    Castro, H.; Rinderer, L.

    1994-01-01

    The critical current in high-Tc materials is strongly reduced by a magnetic field. We studied this dependency for tubular YBCO samples. We find an exponential drop as the field is increased from zero up to some tens of oersted. This behavior was already observed by others, however little work has been done in this direction. We define what we call the ''exponential critical state'' of HTSC and compare the prediction for the magnetization with experimental data. Furthermore, the ''Kim critical state'' is obtained as the small field limit. (orig.)
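The two field dependences mentioned above are easy to compare directly; a sketch with assumed scale values (not the paper's measurements) showing that the exponential critical state Jc(B) = Jc0·exp(−B/B0) reduces to the Kim form Jc0/(1 + B/B0) at small fields, since both agree to first order in B/B0:

```python
import math

J_C0, B0 = 1.0e8, 40.0  # assumed scales (A/m^2, Oe); illustrative only

def jc_exponential(b):
    # exponential critical state: Jc(B) = Jc0 * exp(-B/B0)
    return J_C0 * math.exp(-b / B0)

def jc_kim(b):
    # Kim critical state: Jc(B) = Jc0 / (1 + B/B0)
    return J_C0 / (1.0 + b / B0)

# relative difference is tiny for B << B0 and grows to ~13% at B = B0
for b in [0.4, 4.0, 40.0]:
    print(b, abs(jc_exponential(b) - jc_kim(b)) / J_C0)
```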

  15. Review of "Going Exponential: Growing the Charter School Sector's Best"

    Science.gov (United States)

    Garcia, David

    2011-01-01

    This Progressive Policy Institute report argues that charter schools should be expanded rapidly and exponentially. Citing exponential growth organizations, such as Starbucks and Apple, as well as the rapid growth of molds, viruses and cancers, the report advocates for similar growth models for charter schools. However, there is no explanation of…

  16. Testable Implications of Quasi-Hyperbolic and Exponential Time Discounting

    OpenAIRE

    Echenique, Federico; Imai, Taisuke; Saito, Kota

    2014-01-01

    We present the first revealed-preference characterizations of the models of exponential time discounting, quasi-hyperbolic time discounting, and other time-separable models of consumers’ intertemporal decisions. The characterizations provide non-parametric revealed-preference tests, which we take to data using the results of a recent experiment conducted by Andreoni and Sprenger (2012). For such data, we find that less than half the subjects are consistent with exponential discounting, and on...

  17. Dual processing and diagnostic errors.

    Science.gov (United States)

    Norman, Geoff

    2009-09-01

In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.

  18. Exponential stability of switched linear systems with time-varying delay

    Directory of Open Access Journals (Sweden)

    Satiracoo Pairote

    2007-11-01

We use a Lyapunov-Krasovskii functional approach to establish the exponential stability of linear systems with time-varying delay. Our delay-dependent condition allows one to compute simultaneously the two bounds that characterize the exponential stability rate of the solution. A simple procedure for constructing the switching rule is also presented.

  19. Stretched versus compressed exponential kinetics in α-helix folding

    International Nuclear Information System (INIS)

    Hamm, Peter; Helbing, Jan; Bredenbeck, Jens

    2006-01-01

In a recent paper (J. Bredenbeck, J. Helbing, J.R. Kumita, G.A. Woolley, P. Hamm, α-helix formation in a photoswitchable peptide tracked from picoseconds to microseconds by time resolved IR spectroscopy, Proc. Natl. Acad. Sci. USA 102 (2005) 2379), we have investigated the folding of a photo-switchable α-helix with a kinetics that could be fit by a stretched exponential function exp(−(t/τ)^β). The stretching factor β became smaller as the temperature was lowered, a result which has been interpreted in terms of activated diffusion on a rugged energy surface. In the present paper, we discuss under which conditions diffusion problems occur with stretched (β < 1) or compressed (β > 1) exponential kinetics. We show that diffusion problems do have a strong tendency to yield stretched exponential kinetics, yet that there are conditions (strong perturbation from equilibrium, performing the experiment in the folding direction) under which compressed exponential kinetics would be expected instead. We discuss the kinetics on free energy surfaces predicted by simple initiation-propagation models (zipper models) of α-helix folding, as well as by folding funnel models. We show that our recent experiment was performed under conditions for which models with a strong downhill driving force, such as the zipper model, would predict compressed, rather than stretched, exponential kinetics, in disagreement with the experimental observation. We therefore propose that the free energy surface along a reaction coordinate that governs the folding kinetics must be relatively flat and has a shape similar to a 1D golf course. We discuss how this conclusion can be unified with the thermodynamically well established zipper model by introducing an additional kinetic reaction coordinate
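The qualitative difference between the two regimes is easy to see numerically; a small sketch (my own illustration) of the Kohlrausch function exp(−(t/τ)^β) for β < 1 versus β > 1: all curves cross at t = τ, the stretched case decays faster at early times, and it retains a much slower tail at late times.

```python
import math

def kww(t, tau, beta):
    # Kohlrausch-Williams-Watts relaxation: exp(-(t/tau)^beta)
    return math.exp(-((t / tau) ** beta))

tau = 1.0
stretched, compressed = 0.5, 2.0  # beta < 1 vs beta > 1

# all curves meet at t = tau, where the argument is 1 regardless of beta
assert abs(kww(tau, tau, stretched) - kww(tau, tau, compressed)) < 1e-12

# early times: the stretched exponential has already decayed further;
# late times: it retains a slower-than-exponential tail
early, late = 0.1 * tau, 5.0 * tau
print(kww(early, tau, stretched) < kww(early, tau, compressed))  # True
print(kww(late, tau, stretched) > kww(late, tau, compressed))    # True
```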

  20. Design of a 9-loop quasi-exponential waveform generator.

    Science.gov (United States)

    Banerjee, Partha; Shukla, Rohit; Shyam, Anurag

    2015-12-01

In an under-damped L-C-R series circuit, the current follows a damped sinusoidal waveform. But if a number of sinusoidal waveforms of decreasing time period, generated in an L-C-R circuit, are combined within the first quarter cycle of the time period, then a quasi-exponential output current waveform can be achieved. In an L-C-R series circuit, a quasi-exponential current waveform shows a rising current derivative and thereby finds many applications in pulsed power. Here, we describe the design and experimental details of a 9-loop quasi-exponential waveform generator, including the design of its magnetic switches. In the experiment, an output current of 26 kA has been achieved, and it is shown how well the experimentally obtained output current profile matches the numerically computed output.

  1. Nonlinear adaptive control system design with asymptotically stable parameter estimation error

    Science.gov (United States)

    Mishkov, Rumen; Darmonski, Stanislav

    2018-01-01

    The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and capability to directly control the estimates transient response time. The method proposed modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the parametric identifiability system property introduced. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied in a nonlinear adaptive speed tracking vector control of a three-phase induction motor.

  2. Exponential Potential versus Dark Matter

    Science.gov (United States)

    1993-10-15

A two parameter exponential potential explains the anomalous kinematics of galaxies and galaxy clusters without need for the myriad ad hoc dark matter models currently in vogue. It also explains much about the scales and structures of galaxies and galaxy clusters while being quite negligible on the scale of the solar system. Keywords: Galaxy, Dark matter, Galaxy cluster, Gravitation, Quantum gravity.

  3. Demonstration of the exponential decay law using beer froth

    International Nuclear Information System (INIS)

    Leike, A.

    2002-01-01

    The volume of beer froth decays exponentially with time. This property is used to demonstrate the exponential decay law in the classroom. The decay constant depends on the type of beer and can be used to differentiate between different beers. The analysis shows in a transparent way the techniques of data analysis commonly used in science - consistency checks of theoretical models with the data, parameter estimation and determination of confidence intervals. (author)
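The standard classroom analysis of such data is a log-linear least-squares fit of V(t) = V0·exp(−t/τ); a sketch on synthetic froth readings with an assumed decay constant (not the paper's measured values):

```python
import math, random

random.seed(42)
TAU, V0 = 276.0, 100.0  # assumed values; tau of order a few hundred seconds

# simulated froth-volume readings with a little measurement noise
times = [15.0 * k for k in range(1, 21)]
volumes = [V0 * math.exp(-t / TAU) * (1 + 0.01 * random.gauss(0, 1))
           for t in times]

# log-linear least squares: ln V = ln V0 - t/tau, so tau = -1/slope
xs, ys = times, [math.log(v) for v in volumes]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
tau_fit = -1.0 / slope
print(round(tau_fit, 1))  # close to the assumed TAU
```

The same fit applied to two beers with different decay constants is what lets one "differentiate between different beers", as the abstract notes.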

  4. THE ATKINSON INDEX, THE MORAN STATISTIC, AND TESTING EXPONENTIALITY

    OpenAIRE

Mimoto, Nao; Zitikis, Ricardas; Department of Statistics and Probability, Michigan State University; Department of Statistical and Actuarial Sciences, University of Western Ontario

    2008-01-01

    Constructing tests for exponentiality has been an active and fruitful research area, with numerous applications in engineering, biology and other sciences concerned with life-time data. In the present paper, we construct and investigate powerful tests for exponentiality based on two well known quantities: the Atkinson index and the Moran statistic. We provide an extensive study of the performance of the tests and compare them with those already available in the literature.

  5. Error Mitigation in Computational Design of Sustainable Energy Materials

    DEFF Research Database (Denmark)

    Christensen, Rune

    by individual C=O bonds. Energy corrections applied to C=O bonds significantly reduce systematic errors and can be extended to adsorbates. A similar study is performed for intermediates in the oxygen evolution and oxygen reduction reactions. An identified systematic error on peroxide bonds is found to also...... be present in the OOH* adsorbate. However, the systematic error will almost be canceled by inclusion of van der Waals energy. The energy difference between key adsorbates is thus similar to that previously found. Finally, a method is developed for error estimation in computationally inexpensive neural...

  6. Noise suppress or express exponential growth for hybrid Hopfield neural networks

    International Nuclear Information System (INIS)

    Zhu Song; Shen Yi; Chen Guici

    2010-01-01

In this Letter, we show that noise can make a given hybrid Hopfield neural network whose solution may grow exponentially become a new stochastic hybrid Hopfield neural network whose solution grows at most polynomially. On the other hand, we also show that noise can make a given hybrid Hopfield neural network whose solution grows at most polynomially become a new stochastic hybrid Hopfield neural network whose solution grows exponentially. In other words, we reveal that noise can suppress or express exponential growth for hybrid Hopfield neural networks.

  7. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    Science.gov (United States)

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithm. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear predictive coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
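A minimal sketch of the optimization step described above, with a synthetic stand-in objective (the paper's regression-model objective is not available here): simulated annealing over two recursion parameters in the unit square, with a rippled bowl whose minimum location is my own assumption for illustration.

```python
import math, random

random.seed(7)

def objective(a, b):
    # stand-in for the regression-model objective scoring noise-reduction
    # quality for recursion parameters (a, b); minimum placed at (0.98, 0.80)
    return ((a - 0.98) ** 2 + (b - 0.80) ** 2
            + 0.01 * math.sin(40 * a) * math.sin(40 * b))  # local optima

def anneal(steps=20000, temp0=1.0):
    a, b = random.random(), random.random()
    best = (objective(a, b), a, b)
    for k in range(steps):
        temp = temp0 * (1.0 - k / steps) + 1e-9  # linear cooling schedule
        # propose a small random move, clipped to the unit square
        na = min(1.0, max(0.0, a + random.gauss(0, 0.05)))
        nb = min(1.0, max(0.0, b + random.gauss(0, 0.05)))
        delta = objective(na, nb) - objective(a, b)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if delta < 0 or random.random() < math.exp(-delta / temp):
            a, b = na, nb
        if objective(a, b) < best[0]:
            best = (objective(a, b), a, b)
    return best

score, a_opt, b_opt = anneal()
print(round(a_opt, 2), round(b_opt, 2))
```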

  8. Recent developments in exponential random graph (p*) models for social networks

    NARCIS (Netherlands)

    Robins, Garry; Snijders, Tom; Wang, Peng; Handcock, Mark; Pattison, Philippa

    This article reviews new specifications for exponential random graph models proposed by Snijders et al. [Snijders, T.A.B., Pattison, P., Robins, G.L., Handcock, M., 2006. New specifications for exponential random graph models. Sociological Methodology] and demonstrates their improvement over

  9. EXPALS, Least Square Fit of Linear Combination of Exponential Decay Function

    International Nuclear Information System (INIS)

    Douglas Gardner, C.

    1980-01-01

    1 - Description of problem or function: This program fits by least squares a function which is a linear combination of real exponential decay functions. The function is y(k) = summation over j of a(j) * exp(-lambda(j) * k). Values of the independent variable (k) and the dependent variable y(k) are specified as input data. Weights may be specified as input information or set by the program (w(k) = 1/y(k)). 2 - Method of solution: The Prony-Householder iteration method is used. For unequally-spaced data, a number of interpolation options are provided. This revision includes an option to call a differential correction subroutine REFINE to improve the approximation to unequally-spaced data when equal-interval interpolation is faulty. If convergence is achieved, the probable errors in the computed parameters are calculated also. 3 - Restrictions on the complexity of the problem: Generally, it is desirable to have at least 10n observations where n equals the number of terms and to input k+n significant figures if k significant figures are expected
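The Prony step at the heart of EXPALS can be sketched for the smallest nontrivial case (two exponential terms, noise-free, equally spaced data; all names here are mine, not the program's): solve the linear-prediction recurrence for the sum and product of the exponential bases, take the roots of the resulting quadratic, then solve a linear system for the amplitudes.

```python
import math

# noiseless samples of y(k) = 3*exp(-l1*k) + 1*exp(-l2*k) at k = 0..3,
# with exp(-l1) = 0.9 and exp(-l2) = 0.5
y = [3 * 0.9 ** k + 1 * 0.5 ** k for k in range(4)]

# Step 1: linear-prediction coefficients s = mu1 + mu2, p = mu1 * mu2 from
#   y[k+2] = s*y[k+1] - p*y[k]
det = y[0] * y[2] - y[1] ** 2
s = (y[0] * y[3] - y[1] * y[2]) / det
p = (y[1] * y[3] - y[2] ** 2) / det

# Step 2: the exponential bases are the roots of z^2 - s*z + p = 0
disc = math.sqrt(s * s - 4 * p)
mu1, mu2 = (s + disc) / 2, (s - disc) / 2

# Step 3: amplitudes from a 2x2 linear system using y[0] and y[1]
a1 = (y[1] - mu2 * y[0]) / (mu1 - mu2)
a2 = y[0] - a1

print(mu1, mu2, a1, a2)      # recovered bases and amplitudes
print(-math.log(mu1))        # recovered decay constant lambda_1
```

With noisy or longer records, the recurrence is solved in the least-squares sense over all samples, which is where the iteration and weighting options described above come in.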

  10. Global exponential stability for nonautonomous cellular neural networks with delays

    International Nuclear Information System (INIS)

    Zhang Qiang; Wei Xiaopeng; Xu Jin

    2006-01-01

    In this Letter, by utilizing Lyapunov functional method and Halanay inequalities, we analyze global exponential stability of nonautonomous cellular neural networks with delay. Several new sufficient conditions ensuring global exponential stability of the network are obtained. The results given here extend and improve the earlier publications. An example is given to demonstrate the effectiveness of the obtained results

  11. BAYESIAN ESTIMATION OF THE SHAPE PARAMETER OF THE GENERALISED EXPONENTIAL DISTRIBUTION UNDER DIFFERENT LOSS FUNCTIONS

    Directory of Open Access Journals (Sweden)

    SANKU DEY

    2010-11-01

The generalized exponential (GE) distribution proposed by Gupta and Kundu (1999) is an important lifetime distribution in survival analysis. In this article, we propose to obtain Bayes estimators and their associated risks based on a class of non-informative priors under the assumption of three loss functions, namely, the quadratic loss function (QLF), the squared log-error loss function (SLELF) and the general entropy loss function (GELF). The motivation is to explore the most appropriate loss function among these three. The performances of the estimators are, therefore, compared on the basis of their risks obtained under QLF, SLELF and GELF separately. The relative efficiency of the estimators is also obtained. Finally, Monte Carlo simulations are performed to compare the performances of the Bayes estimates under different situations.
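One concrete case of such Bayes estimation, under assumptions of my own (scale fixed at 1 and the single non-informative prior π(α) ∝ 1/α, whereas the article considers a class of priors and three losses): the posterior for the GE shape α is then Gamma(n, rate T) with T = −Σ ln(1 − e^(−x_i)), so the Bayes estimate is n/T under quadratic loss and (n − 1)/T under general entropy loss with c = 1.

```python
import math, random

random.seed(0)
ALPHA_TRUE, n = 2.0, 2000  # GE(alpha) with scale lambda = 1 assumed known

# inverse-CDF sampling: F(x) = (1 - e^-x)^alpha  =>  x = -ln(1 - u^(1/alpha))
xs = [-math.log(1.0 - random.random() ** (1.0 / ALPHA_TRUE)) for _ in range(n)]

# with prior pi(alpha) ~ 1/alpha, the posterior is Gamma(n, rate T)
T = -sum(math.log(1.0 - math.exp(-x)) for x in xs)

alpha_qlf = n / T          # posterior mean: Bayes estimate under quadratic loss
alpha_gelf = (n - 1) / T   # general entropy loss, c = 1: (E[alpha^-1])^-1

print(alpha_qlf, alpha_gelf)  # both close to ALPHA_TRUE; GELF is the smaller
```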

  12. Necessary and Sufficient Condition for Local Exponential Synchronization of Nonlinear Systems

    NARCIS (Netherlands)

    Andrieu, Vincent; Jayawardhana, Bayu; Tarbouriech, Sophie

    2015-01-01

    Based on recent works on transverse exponential stability, some necessary and sufficient conditions for the existence of a (locally) exponential synchronizer are established. We show that the existence of a structured synchronizer is equivalent to the existence of a stabilizer for the individual

  13. Academic Sacred Cows and Exponential Growth.

    Science.gov (United States)

    Heterick, Robert C., Jr.

    1991-01-01

    The speech notes the linear growth of resources versus the exponential growth of costs in higher education. It identifies opportunities arising from information technology to transform teaching and learning through creation of a new scholarly information delivery system. An integrated triad of communications, computing, and library organizations…

  14. The evolution of stellar exponential discs

    NARCIS (Netherlands)

    Ferguson, AMN; Clarke, CJ

    2001-01-01

    Models of disc galaxies which invoke viscosity-driven radial flows have long been known to provide a natural explanation for the origin of stellar exponential discs, under the assumption that the star formation and viscous time-scales are comparable. We present models which invoke simultaneous star

  15. Exponential Lower Bounds For Policy Iteration

    OpenAIRE

    Fearnley, John

    2010-01-01

We study policy iteration for infinite-horizon Markov decision processes. It has recently been shown that policy iteration style algorithms have exponential lower bounds in a two-player game setting. We extend these lower bounds to Markov decision processes with the total reward and average-reward optimality criteria.

  16. Exponential synchronization of complex networks with nonidentical time-delayed dynamical nodes

    International Nuclear Information System (INIS)

    Cai Shuiming; He Qinbin; Hao Junjun; Liu Zengrong

    2010-01-01

    In this Letter, exponential synchronization of a complex network with nonidentical time-delayed dynamical nodes is considered. Two effective control schemes are proposed to drive the network to synchronize globally exponentially onto any smooth goal dynamics. By applying open-loop control to all nodes and adding some intermittent controllers to partial nodes, some simple criteria for exponential synchronization of such network are established. Meanwhile, a pinning scheme deciding which nodes need to be pinned and a simply approximate formula for estimating the least number of pinned nodes are also provided. By introducing impulsive effects to the open-loop controlled network, another synchronization scheme is developed for the network with nonidentical time-delayed dynamical nodes, and an estimate of the upper bound of impulsive intervals ensuring global exponential stability of the synchronization process is also given. Numerical simulations are presented finally to demonstrate the effectiveness of the theoretical results.

  17. Statistical estimation for truncated exponential families

    CERN Document Server

    Akahira, Masafumi

    2017-01-01

    This book presents new findings on nonregular statistical estimation. Unlike other books on this topic, its major emphasis is on helping readers understand the meaning and implications of both regularity and irregularity through a certain family of distributions. In particular, it focuses on a truncated exponential family of distributions with a natural parameter and truncation parameter as a typical nonregular family. This focus includes the (truncated) Pareto distribution, which is widely used in various fields such as finance, physics, hydrology, geology, astronomy, and other disciplines. The family is essential in that it links both regular and nonregular distributions, as it becomes a regular exponential family if the truncation parameter is known. The emphasis is on presenting new results on the maximum likelihood estimation of a natural parameter or truncation parameter if one of them is a nuisance parameter. In order to obtain more information on the truncation, the Bayesian approach is also considere...

  18. Matrix-exponential distributions in applied probability

    CERN Document Server

    Bladt, Mogens

    2017-01-01

    This book contains an in-depth treatment of matrix-exponential (ME) distributions and their sub-class of phase-type (PH) distributions. Loosely speaking, an ME distribution is obtained through replacing the intensity parameter in an exponential distribution by a matrix. The ME distributions can also be identified as the class of non-negative distributions with rational Laplace transforms. If the matrix has the structure of a sub-intensity matrix for a Markov jump process we obtain a PH distribution which allows for nice probabilistic interpretations facilitating the derivation of exact solutions and closed form formulas. The full potential of ME and PH unfolds in their use in stochastic modelling. Several chapters on generic applications, like renewal theory, random walks and regenerative processes, are included together with some specific examples from queueing theory and insurance risk. We emphasize our intention towards applications by including an extensive treatment on statistical methods for PH distribu...

  19. Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.

    Science.gov (United States)

    Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D

    2016-10-01

    Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  20. Heterogeneous dipolar theory of the exponential pile

    International Nuclear Information System (INIS)

    Mastrangelo, P.V.

    1981-01-01

We present a heterogeneous theory of the exponential pile, closely related to NORDHEIM-SCALETTAR's. It is well adapted to lattices whose pitch is relatively large (D2O, graphite) and the dimensions of whose channels are not negligible. The anisotropy of neutron diffusion is taken into account by the introduction of dipolar parameters. We express the contribution of each channel to the total flux in the moderator by means of multipolar coefficients. In order to be able to apply conditions of continuity between the fluxes and their derivatives, on the side of the moderator, we develop in a Fourier series the fluxes found at the periphery of each channel. Using Wronski's relations for Bessel functions, we express the multipolar coefficients at the surface of each channel, on the side of the moderator, by means of the harmonics of each flux and their derivatives. We retain only monopolar (A_0^(g)) and dipolar (A_1^(g)) coefficients; those of higher order are ignored. We deduce from these coefficients the systems of homogeneous equations of the exponential pile with monopoles alone and with monopoles plus dipoles. It should be noted that the systems of homogeneous equations of the critical pile are contained in those of the exponential pile. In another article, we develop the calculation of the monopolar and dipolar heterogeneous parameters. (orig.)

  1. Exponentiation and deformations of Lie-admissible algebras

    International Nuclear Information System (INIS)

    Myung, H.C.

    1982-01-01

    The exponential function is defined for a finite-dimensional real power-associative algebra with unit element. The application of the exponential function is focused on the power-associative (p,q)-mutation of a real or complex associative algebra. Explicit formulas are computed for the (p,q)-mutation of the real envelope of the spin 1 algebra and the Lie algebra so(3) of the rotation group, in light of earlier investigations of the spin 1/2. A slight variant of the mutated exponential is interpreted as a continuous function of the Lie algebra into some isotope of the corresponding linear Lie group. The second part of this paper is concerned with the representation and deformation of a Lie-admissible algebra. The second cohomology group of a Lie-admissible algebra is introduced as a generalization of those of associative and Lie algebras in the Hochschild and Chevalley-Eilenberg theory. Some elementary theory of algebraic deformation of Lie-admissible algebras is discussed in view of generalization of that of associative and Lie algebras. Lie-admissible deformations are also suggested by the representation of Lie-admissible algebras. Some explicit examples of Lie-admissible deformation are given in terms of the (p,q)-mutation of associative deformation of an associative algebra. Finally, we discuss Lie-admissible deformations of order one

  2. Unwrapped phase inversion with an exponential damping

    KAUST Repository

    Choi, Yun Seok

    2015-07-28

Full-waveform inversion (FWI) suffers from the phase wrapping (cycle skipping) problem when the frequency of data is not low enough. Unless we obtain a good initial velocity model, the phase wrapping problem in FWI causes a result corresponding to a local minimum, usually far away from the true solution, especially at depth. Thus, we have developed an inversion algorithm based on a space-domain unwrapped phase, and we also used exponential damping to mitigate the nonlinearity associated with the reflections. We construct the 2D phase residual map, which usually contains the wrapping discontinuities, especially if the model is complex and the frequency is high. We then unwrap the phase map and remove these cycle-based jumps. However, if the phase map has several residues, the unwrapping process becomes very complicated. We apply a strong exponential damping to the wavefield to eliminate most of the residues in the phase map, thus making the unwrapping process simple. We finally invert the unwrapped phases using the back-propagation algorithm to calculate the gradient. We progressively reduce the damping factor to obtain a high-resolution image. Numerical examples showed that the unwrapped phase inversion with a strong exponential damping generated convergent long-wavelength updates without low-frequency information. This model can be used as a good starting model for a subsequent inversion with a reduced damping, eventually leading to conventional waveform inversion.

  3. Fast Modular Exponentiation and Elliptic Curve Group Operation in Maple

    Science.gov (United States)

    Yan, S. Y.; James, G.

    2006-01-01

The modular exponentiation, y ≡ x^k (mod n) with x, y, k, n integers and n > 1, is the most fundamental operation in RSA and ElGamal public-key cryptographic systems. Thus the efficiency of RSA and ElGamal depends entirely on the efficiency of the modular exponentiation. The same situation arises also in elliptic…
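
    The fast algorithm behind this operation is binary square-and-multiply, which needs only O(log k) modular multiplications. The sketch below is a generic illustration, not the Maple code from the article; Python's built-in pow(x, k, n) implements the same idea natively:

```python
def modexp(x, k, n):
    """Compute x**k mod n by right-to-left binary square-and-multiply."""
    assert n > 1 and k >= 0
    result, base = 1, x % n
    while k:
        if k & 1:                      # current exponent bit is set
            result = (result * base) % n
        base = (base * base) % n       # square for the next bit
        k >>= 1
    return result
```

    A naive loop would take k multiplications; for the 2048-bit exponents typical of RSA, the difference between k and log2(k) steps is what makes the cryptosystem usable at all.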

  4. Globally exponential stability of neural network with constant and variable delays

    International Nuclear Information System (INIS)

    Zhao Weirui; Zhang Huanshui

    2006-01-01

This Letter presents new sufficient conditions for the global exponential stability of neural networks with delays. We show that these results generalize recently published global exponential stability results. In particular, several different global exponential stability conditions in the literature, which were proved using different Lyapunov functionals, are generalized and unified by using the same Lyapunov functional and the technique of integral inequalities. A comparison between our results and the previous results shows that our results establish a new set of stability criteria for delayed neural networks. Those conditions are less restrictive than those given in the earlier references.

  5. Redundant measurements for controlling errors

    International Nuclear Information System (INIS)

    Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program

  6. Stability of the Exponential Functional Equation in Riesz Algebras

    Directory of Open Access Journals (Sweden)

    Bogdan Batko

    2014-01-01

We deal with the stability of the exponential Cauchy functional equation F(x+y) = F(x)F(y) in the class of functions F: G → L mapping a group (G, +) into a Riesz algebra L. The main aim of this paper is to prove that the exponential Cauchy functional equation is stable in the sense of Hyers-Ulam and is not superstable in the sense of Baker. To prove the stability we use the Yosida Spectral Representation Theorem.

  7. Collisional avalanche exponentiation of runaway electrons in electrified plasmas

    International Nuclear Information System (INIS)

    Jayakumar, R.; Fleischmann, H.H.; Zweben, S.J.

    1993-01-01

    In contrast to earlier expectations, it is estimated that generation of runaway electrons from close collisions of existing runaways with cold plasma electrons can be significant even for small electric fields, whenever runaways can gain energies of about 20 MeV or more. In that case, the runaway population will grow exponentially with the energy spectrum showing an exponential decrease towards higher energies. Energy gains of the required magnitude may occur in large tokamak devices as well as in cosmic-ray generation. (orig.)

  8. Non-exponential dynamic relaxation in strongly nonequilibrium nonideal plasmas

    International Nuclear Information System (INIS)

    Morozov, I V; Norman, G E

    2003-01-01

Relaxation of kinetic energy to the equilibrium state is simulated by the molecular dynamics method for nonideal two-component non-degenerate plasmas. Three limiting examples of initial states of strongly nonequilibrium plasma are considered: zero electron velocities, zero ion velocities, and zero velocities of both electrons and ions. The initial non-exponential stage, its duration τ_nB, and the subsequent exponential stages of the relaxation process are studied for a wide range of the nonideality parameter and the ion mass.

  9. Semi-Blind Error Resilient SLM for PAPR Reduction in OFDM Using Spread Spectrum Codes

    Science.gov (United States)

    Elhelw, Amr M.; Badran, Ehab F.

    2015-01-01

High peak to average power ratio (PAPR) is one of the major problems of OFDM systems. Selected mapping (SLM) is a promising choice that can elegantly tackle this problem. Nevertheless, a side information (SI) index is required to be transmitted, which reduces the overall throughput. This paper proposes a semi-blind error resilient SLM system that utilizes spread spectrum codes for embedding the SI index in the transmitted symbols. The codes are embedded in an innovative manner which does not increase the average energy per symbol. The use of such codes allows the correction of probable errors in the SI index detection. A new receiver, which does not require perfect channel state information (CSI) for the detection of the SI index and has relatively low computational complexity, is proposed. Simulation results show that the proposed system performs well both in terms of SI index detection error and bit error rate. PMID:26018504
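
    A plain (non-blind) SLM baseline helps make the PAPR/SI trade-off concrete. The sketch below is generic and does not include the paper's spread-spectrum SI embedding; the candidate count, phase alphabet, and FFT size are illustrative assumptions:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def slm_select(symbols, n_candidates=8, seed=0):
    """Classic SLM: multiply the frequency-domain symbols by random phase
    sequences, IFFT each candidate, and keep the lowest-PAPR one.
    Returns the chosen time-domain signal and the side-information index."""
    rng = np.random.default_rng(seed)
    phases = rng.choice([1, -1, 1j, -1j], size=(n_candidates, len(symbols)))
    phases[0] = 1                      # candidate 0 = unmodified symbols
    best, best_papr, si = None, np.inf, 0
    for i, ph in enumerate(phases):
        cand = np.fft.ifft(symbols * ph)
        if papr_db(cand) < best_papr:
            best, best_papr, si = cand, papr_db(cand), i
    return best, si
```

    Since candidate 0 is the unmodified signal, the selected candidate never has a higher PAPR than plain OFDM; the receiver then needs the index si, which is exactly the SI the paper embeds via spread spectrum codes instead of sending it explicitly.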

  10. Semi-Blind Error Resilient SLM for PAPR Reduction in OFDM Using Spread Spectrum Codes.

    Directory of Open Access Journals (Sweden)

    Amr M Elhelw

High peak to average power ratio (PAPR) is one of the major problems of OFDM systems. Selected mapping (SLM) is a promising choice that can elegantly tackle this problem. Nevertheless, a side information (SI) index is required to be transmitted, which reduces the overall throughput. This paper proposes a semi-blind error resilient SLM system that utilizes spread spectrum codes for embedding the SI index in the transmitted symbols. The codes are embedded in an innovative manner which does not increase the average energy per symbol. The use of such codes allows the correction of probable errors in the SI index detection. A new receiver, which does not require perfect channel state information (CSI) for the detection of the SI index and has relatively low computational complexity, is proposed. Simulation results show that the proposed system performs well both in terms of SI index detection error and bit error rate.

  11. Method for numerical simulation of two-term exponentially correlated colored noise

    International Nuclear Information System (INIS)

    Yilmaz, B.; Ayik, S.; Abe, Y.; Gokalp, A.; Yilmaz, O.

    2006-01-01

A method for numerical simulation of two-term exponentially correlated colored noise is proposed. The method is an extension of the traditional method for one-term exponentially correlated colored noise. The validity of the algorithm is tested by comparing numerical simulations with analytical results in two physical applications.
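
    One simple way to realize a two-term exponential correlation, C(t) = A1 exp(-|t|/tau1) + A2 exp(-|t|/tau2), is to sum two independent Ornstein-Uhlenbeck processes, each advanced with the exact one-step exponential update used in the traditional one-term method. This is a hedged sketch of that idea, not necessarily the authors' algorithm; all parameter values are illustrative:

```python
import numpy as np

def two_term_colored_noise(n_steps, dt, amps=(1.0, 0.5), taus=(0.2, 2.0), seed=0):
    """Stationary noise with autocorrelation ~ sum_i amps[i]*exp(-|t|/taus[i]),
    built as the sum of two independent OU processes (exact discrete update)."""
    rng = np.random.default_rng(seed)
    total = np.zeros(n_steps)
    for A, tau in zip(amps, taus):
        rho = np.exp(-dt / tau)              # exact one-step decay factor
        x = np.empty(n_steps)
        x[0] = np.sqrt(A) * rng.standard_normal()
        for k in range(1, n_steps):
            x[k] = rho * x[k-1] + np.sqrt(A * (1.0 - rho**2)) * rng.standard_normal()
        total += x
    return total
```

    Because each OU update is exact for its own exponential correlation time, the summed process has total variance A1 + A2 and the desired two-term correlation at every lag, independent of the step size dt.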

  12. Exponentiated Lomax Geometric Distribution: Properties and Applications

    Directory of Open Access Journals (Sweden)

    Amal Soliman Hassan

    2017-09-01

In this paper, a new four-parameter lifetime distribution, called the exponentiated Lomax geometric (ELG) distribution, is introduced. The new lifetime distribution contains the Lomax geometric and exponentiated Pareto geometric distributions as new sub-models. Explicit algebraic formulas for the probability density function and the survival and hazard functions are derived. Various structural properties of the new model are derived, including the quantile function, Rényi entropy, moments, probability weighted moments, order statistics, and Lorenz and Bonferroni curves. The estimation of the model parameters is performed by the maximum likelihood method, and inference for a large sample is discussed. The flexibility and potentiality of the new model in comparison with some other distributions are shown via an application to a real data set. We hope that the new model will be an adequate model for applications in various studies.

  13. The Exponentiated Gumbel Type-2 Distribution: Properties and Application

    Directory of Open Access Journals (Sweden)

    I. E. Okorie

    2016-01-01

We introduce a generalized version of the standard Gumbel type-2 distribution. The new lifetime distribution is called the Exponentiated Gumbel (EG) type-2 distribution. The EG type-2 distribution has three nested submodels, namely, the Gumbel type-2 distribution, the Exponentiated Fréchet (EF) distribution, and the Fréchet distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood estimation is proposed for estimating the model parameters. The usefulness and flexibility of the Exponentiated Gumbel (EG) type-2 distribution are illustrated with a real lifetime data set. Results based on the log-likelihood and information statistics values showed that the EG type-2 distribution provides a better fit to the data than the other competing distributions. Also, the consistency of the parameters of the new distribution was demonstrated through a simulation study. The EG type-2 distribution is therefore recommended for effective modelling of lifetime data.

  14. Turbulent particle transport in streams: can exponential settling be reconciled with fluid mechanics?

    Science.gov (United States)

    McNair, James N; Newbold, J Denis

    2012-05-07

    Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances. Copyright © 2012 Elsevier Ltd. All rights reserved.
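
    The contrast between the two models can be explored numerically. Below is a hedged Monte Carlo sketch of an LEM-style particle (downstream advection plus a vertical random walk with settling, reflecting at the surface, absorbing at the bed); all parameter values are purely illustrative, not taken from the paper:

```python
import numpy as np

def settling_distances(n, depth=1.0, u=1.0, w_s=0.05, D=0.02, dt=0.01,
                       z0=0.5, seed=0):
    """Distances travelled before settling, for n particles released at
    elevation z0: advection at speed u, vertical diffusivity D with settling
    velocity w_s, reflection at the surface, absorption at the bed."""
    rng = np.random.default_rng(seed)
    z = np.full(n, z0)
    x = np.zeros(n)
    alive = np.ones(n, dtype=bool)
    out = np.empty(n)
    sigma = np.sqrt(2.0 * D * dt)
    while alive.any():
        idx = np.flatnonzero(alive)
        z[idx] += -w_s * dt + sigma * rng.standard_normal(idx.size)
        x[idx] += u * dt
        z[idx] = np.where(z[idx] > depth, 2.0 * depth - z[idx], z[idx])  # reflect
        hit = idx[z[idx] <= 0.0]                                         # absorb
        out[hit] = x[hit]
        alive[hit] = False
    return out
```

    In such a simulation the survival curve of settling distances approaches an exponential only in the far field, consistent with the LEM prediction, while at short distances it depends on the release elevation z0, contradicting the ESM's single release-independent exponential.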

  15. Influence of model errors in optimal sensor placement

    Science.gov (United States)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

The paper investigates the role of model errors and parametric uncertainties in optimal or near-optimal sensor placements for structural health monitoring (SHM) and modal testing. The near-optimal set of measurement locations is obtained by Information Entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of prediction error as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are first assumed; then a proposal depending on both distance and modal vectors is presented. With reference to a simple case study, the effect of model uncertainties on the results is described, and the reliability and robustness of the proposed correlation function in the case of model errors are tested with reference to 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is considered through the use of independent assessment criteria. In conclusion, the results obtained by applying the proposed procedure to a real 5-span steel footbridge are described. The proposed method also allows better estimation of higher modes when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in the sensor position when uncertainties occur.

  16. Exponential attractors for a nonclassical diffusion equation

    Directory of Open Access Journals (Sweden)

    Qiaozhen Ma

    2009-01-01

In this article, we prove the existence of exponential attractors for a nonclassical diffusion equation in $H^{2}(\Omega)\cap H^{1}_{0}(\Omega)$ when the space dimension is less than 4.

  17. Global robust exponential stability for interval neural networks with delay

    International Nuclear Information System (INIS)

    Cui Shihua; Zhao Tao; Guo Jie

    2009-01-01

In this paper, new sufficient conditions for the global robust exponential stability of neural networks with either constant delays or time-varying delays are given. We show the sufficient conditions for the existence, uniqueness and global robust exponential stability of the equilibrium point by employing Lyapunov stability theory and the linear matrix inequality (LMI) technique. Numerical examples are given to demonstrate the validity of our results.

  18. Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.

    Science.gov (United States)

    Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M

    2006-10-01

Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006) and the specimen nondiagnostic rate increased from 5.8% to 19.8% (P …). The Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.

  19. Exponential operations and aggregation operators of interval neutrosophic sets and their decision making methods.

    Science.gov (United States)

    Ye, Jun

    2016-01-01

    An interval neutrosophic set (INS) is a subclass of a neutrosophic set and a generalization of an interval-valued intuitionistic fuzzy set, and then the characteristics of INS are independently described by the interval numbers of its truth-membership, indeterminacy-membership, and falsity-membership degrees. However, the exponential parameters (weights) of all the existing exponential operational laws of INSs and the corresponding exponential aggregation operators are crisp values in interval neutrosophic decision making problems. As a supplement, this paper firstly introduces new exponential operational laws of INSs, where the bases are crisp values or interval numbers and the exponents are interval neutrosophic numbers (INNs), which are basic elements in INSs. Then, we propose an interval neutrosophic weighted exponential aggregation (INWEA) operator and a dual interval neutrosophic weighted exponential aggregation (DINWEA) operator based on these exponential operational laws and introduce comparative methods based on cosine measure functions for INNs and dual INNs. Further, we develop decision-making methods based on the INWEA and DINWEA operators. Finally, a practical example on the selecting problem of global suppliers is provided to illustrate the applicability and rationality of the proposed methods.

  20. Defining near misses : towards a sharpened definition based on empirical data about error handling processes

    NARCIS (Netherlands)

    Kessels-Habraken, M.M.P.; Schaaf, van der T.W.; Jonge, de J.; Rutte, C.G.

    2010-01-01

    Medical errors in health care still occur frequently. Unfortunately, errors cannot be completely prevented and 100% safety can never be achieved. Therefore, in addition to error reduction strategies, health care organisations could also implement strategies that promote timely error detection and

  1. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    International Nuclear Information System (INIS)

    Baidillah, Marlin R; Takei, Masahiro

    2017-01-01

A nonlinear normalization model, called the exponential model, for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions has been developed. The exponential model normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived by using an exponential fitting curve based on simulation, and a scaling function is added to adjust for the experimental system conditions. The exponential model normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e. the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for both low- and high-contrast dielectric distributions.

  2. Meet and Join Matrices in the Poset of Exponential Divisors

    Indian Academy of Sciences (India)

... exponential divisor (GCED) and the least common exponential multiple (LCEM) do not always exist. In this paper we embed this poset in a lattice. As an application we study the GCED and LCEM matrices, analogues of GCD and LCM matrices, which are both special cases of meet and join matrices on lattices.

  3. Research on Copy-Move Image Forgery Detection Using Features of Discrete Polar Complex Exponential Transform

    Science.gov (United States)

    Gan, Yanfen; Zhong, Junliu

    2015-12-01

With the aid of sophisticated photo-editing software such as Photoshop, copy-move image forgery has been widely practised and has become a major concern in the field of information security in modern society. Much work on detecting this kind of forgery has achieved notable results, but detection of geometrically transformed copy-move regions remains unsatisfactory. In this paper, a new method based on the Polar Complex Exponential Transform is proposed. This method addresses issues in image geometric moments, focusing on constructing rotation-invariant moments and extracting features from them. In order to reduce rounding errors in the transform from the polar coordinate system to the Cartesian coordinate system, a new transformation method is presented and discussed in detail. The new method constructs a 9 × 9 shrunk template to transform the Cartesian coordinate system back to the polar coordinate system, which reduces transform errors to a much greater degree. Forgery detection, such as copy-move image forgery detection, is a difficult procedure, but experiments show that our method is a marked improvement in detecting and identifying forged images affected by rotation.

  4. THE EXPONENTIAL STABILIZATION FOR A SEMILINEAR WAVE EQUATION WITH LOCALLY DISTRIBUTED FEEDBACK

    Institute of Scientific and Technical Information of China (English)

    JIA CHAOHUA; FENG DEXING

    2005-01-01

    This paper considers the exponential decay of the solution to a damped semilinear wave equation with variable coefficients in the principal part, using the Riemannian multiplier method. A differential geometric condition that ensures the exponential decay is obtained.

  5. Exponential stability of neural networks with asymmetric connection weights

    International Nuclear Information System (INIS)

    Yang Jinxiang; Zhong Shouming

    2007-01-01

    This paper investigates the exponential stability of a class of neural networks with asymmetric connection weights. By dividing the network state variables into various parts according to the characteristics of the neural networks, some new sufficient conditions for exponential stability are derived by constructing a Lyapunov function and using the method of variation of constants. The new conditions are associated with the initial values and are described by some blocks of the interconnection matrix, and do not depend on the other blocks. Examples are given to further illustrate the theory.

  6. Fuel elements assembling for the DON project exponential experience

    International Nuclear Information System (INIS)

    Anca Abati, R. de

    1966-01-01

    The fuel unit used in the DON exponential experiment is described, together with the manufacturing installations and tools and the stages of fabrication. Each of these 74 elements contains 19 cartridges loaded with sintered urania and uranium carbide, plus indium, gold and manganese probes. They were arranged in calandria-like tubes and the process tube, the latter containing a cooling liquid simulating the reactor organic coolant. Besides being used in the DON reactor exponential experiment, they were used in critical tests by the substitution method in the French reactor AQUILON II. (Author) 6 refs

  7. Limit laws for exponential families

    OpenAIRE

    Balkema, August A.; Klüppelberg, Claudia; Resnick, Sidney I.

    1999-01-01

    For a real random variable [math] with distribution function [math], define [math]. The distribution [math] generates a natural exponential family of distribution functions [math], where [math]. We study the asymptotic behaviour of the distribution functions [math] as [math] increases to [math]. If [math] then [math] pointwise on [math]. It may still be possible to obtain a non-degenerate weak limit law [math] by choosing suitable scaling and centring constants [math] an...

  8. Exponential power spectra, deterministic chaos and Lorentzian pulses in plasma edge dynamics

    International Nuclear Information System (INIS)

    Maggs, J E; Morales, G J

    2012-01-01

    Exponential spectra have been observed in the edges of tokamaks, stellarators, helical devices and linear machines. The observation of exponential power spectra is significant because such a spectral character has been closely associated with the phenomenon of deterministic chaos by the nonlinear dynamics community. The proximate cause of exponential power spectra in both magnetized plasma edges and nonlinear dynamics models is the occurrence of Lorentzian pulses in the time signals of fluctuations. Lorentzian pulses are produced by chaotic behavior in the separatrix regions of plasma E × B flow fields or the limit cycle regions of nonlinear models. Chaotic advection, driven by the potential fields of drift waves in plasmas, results in transport. The observation of exponential power spectra and Lorentzian pulses suggests that fluctuations and transport at the edge of magnetized plasmas arise from deterministic, rather than stochastic, dynamics. (paper)

  9. Exponential inflation with F(R) gravity

    Science.gov (United States)

    Oikonomou, V. K.

    2018-03-01

    In this paper, we shall consider an exponential inflationary model in the context of vacuum F(R) gravity. By using well-known reconstruction techniques, we shall investigate which F(R) gravity can realize the exponential inflation scenario at leading order in terms of the scalar curvature, and we shall calculate the slow-roll indices and the corresponding observational indices, in the context of slow-roll inflation. We also provide some general formulas for the slow-roll and the corresponding observational indices in terms of the e-foldings number. In addition, for the calculation of the slow-roll and observational indices, we use quite general formulas that do not require all the slow-roll indices to be much smaller than unity. Finally, we investigate the phenomenological viability of the model by comparing it with the latest Planck and BICEP2/Keck-Array observational data. As we demonstrate, the model is compatible with the current observational data for a wide range of the free parameters of the model.

  10. Background does not significantly affect power-exponential fitting of gastric emptying curves

    International Nuclear Information System (INIS)

    Jonderko, K.

    1987-01-01

    Using a procedure enabling the assessment of background radiation, the course of changes in background activity during gastric emptying measurements was investigated. Attention was focused on the changes in the shape of power-exponential fitted gastric emptying curves after correction for background. The observed pattern of background counts made it possible to explain the shifts, associated with background correction, in the parameters characterizing the power-exponential curves. It was concluded that background has a negligible effect on the power-exponential fitting of gastric emptying curves. (author)

  11. Application of bias factor method with use of virtual experimental value to prediction uncertainty reduction in void reactivity worth of breeding light water reactor

    International Nuclear Information System (INIS)

    Kugo, Teruhiko; Mori, Takamasa; Kojima, Kensuke; Takeda, Toshikazu

    2007-01-01

    We have carried out critical experiments for MOX-fuelled tight-lattice LWR cores at the FCA facility, constructing the XXII-1 series cores. Utilizing these experiments, we have evaluated the reduction of prediction uncertainty in the coolant void reactivity worth of the breeding LWR core based on the bias factor method, focusing on the prediction uncertainty due to cross-section errors. In the present study, we introduce the concept of a virtual experimental value into the conventional bias factor method to overcome a problem of that method, namely that the prediction uncertainty increases when the experimental core has a reactivity worth, and consequently sensitivity coefficients, of opposite sign to those of the real core. To extend the applicability of the bias factor method, we adopt an exponentiated experimental value as the virtual experimental value and formulate the prediction uncertainty reduction obtained by extending the bias factor method with this concept. Numerical evaluation shows that the prediction uncertainty due to cross-section errors is reduced by the use of the virtual experimental value. We conclude that the introduction of the virtual experimental value makes effective use of experimental data and extends the applicability of the bias factor method. (author)

  12. Exponential rise of dynamical complexity in quantum computing through projections.

    Science.gov (United States)

    Burgarth, Daniel Klaus; Facchi, Paolo; Giovannetti, Vittorio; Nakazato, Hiromichi; Pascazio, Saverio; Yuasa, Kazuya

    2014-10-10

    The ability of quantum systems to host exponentially complex dynamics has the potential to revolutionize science and technology. Therefore, much effort has been devoted to developing protocols for computation, communication and metrology that exploit this scaling, despite formidable technical difficulties. Here we show that the mere frequent observation of a small part of a quantum system can turn its dynamics from a very simple one into an exponentially complex one, capable of universal quantum computation. After discussing examples, we go on to show that this effect is generally to be expected: almost any quantum dynamics becomes universal once 'observed' as outlined above. Conversely, we show that any complex quantum dynamics can be 'purified' into a simpler one in larger dimensions. We conclude by demonstrating that even local noise can lead to an exponentially complex dynamics.

  13. Reduction of errors during practice facilitates fundamental movement skill learning in children with intellectual disabilities.

    Science.gov (United States)

    Capio, C M; Poolton, J M; Sit, C H P; Eguia, K F; Masters, R S W

    2013-04-01

    Children with intellectual disabilities (ID) have been found to have inferior motor proficiency in fundamental movement skills (FMS). This study examined the effects of training the FMS of overhand throwing by manipulating the amount of practice errors. Participants included 39 children with ID aged 4-11 years who were allocated to either an error-reduced (ER) training programme or a more typical programme in which errors were frequent (error-strewn, ES). Throwing movement form, throwing accuracy, and throwing frequency during free play were evaluated. The ER programme improved movement form and increased throwing activity during free play to a greater extent than the ES programme. Furthermore, ER learners were found to be capable of engaging in a secondary cognitive task while maintaining robust throwing accuracy. The findings support the use of movement skills training programmes that constrain practice errors in children with ID, suggesting that such an approach results in improved performance and heightened movement engagement in free play. © 2012 The Authors. Journal of Intellectual Disability Research © 2012 Blackwell Publishing Ltd.

  14. Spherical Bessel transform via exponential sum approximation of spherical Bessel function

    Science.gov (United States)

    Ikeno, Hidekazu

    2018-02-01

    A new algorithm for the numerical evaluation of the spherical Bessel transform is proposed in this paper. In this method, the spherical Bessel function is approximately represented as an exponential sum with complex parameters. This representation is obtained by expressing an integral representation of the spherical Bessel function in the complex plane, and discretizing contour integrals along steepest-descent paths and a contour path parallel to the real axis using a numerical quadrature rule with the double-exponential transformation. The number of terms in the expression is reduced using the modified balanced truncation method. The residual part of the integrand is also expanded in exponential functions using a Prony-like method. The spherical Bessel transform can then be evaluated analytically at arbitrary points in a half-open interval.
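The payoff of an exponential representation is easiest to see for l = 0, where j₀(x) = Im(e^{ix})/x holds exactly. The sketch below uses this to evaluate a spherical Bessel transform of a Gaussian and checks it against the closed form (√π/4)e^{-k²/4}; the paper's general exponential-sum construction, balanced truncation, and double-exponential quadrature are not reproduced here.

```python
import cmath, math

def sbt_gaussian(k, r_max=12.0, n=4000):
    """F(k) = int_0^inf exp(-r^2) j0(k*r) r^2 dr, evaluated through the
    exact exponential representation j0(x) = Im(exp(i*x))/x, so the
    integrand becomes r*exp(-r^2)*exp(i*k*r)/k. Simple trapezoidal
    quadrature; a sketch, not the paper's algorithm."""
    h = r_max / n
    acc = 0.0 + 0.0j
    for m in range(n + 1):
        r = m * h
        w = 0.5 if m in (0, n) else 1.0   # trapezoid end weights
        acc += w * r * math.exp(-r * r) * cmath.exp(1j * k * r)
    return (acc.imag * h) / k

k = 1.5
numeric = sbt_gaussian(k)
exact = math.sqrt(math.pi) / 4 * math.exp(-k * k / 4)  # known closed form
```

Because the oscillatory kernel has been absorbed into a complex exponential, the integrand is smooth and the quadrature converges rapidly.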

  15. Exponential p-stability of impulsive stochastic differential equations with delays

    International Nuclear Information System (INIS)

    Yang Zhiguo; Xu Daoyi; Xiang Li

    2006-01-01

    In this Letter, we establish a method to study the exponential p-stability of the zero solution of impulsive stochastic differential equations with delays. By establishing an L-operator inequality and using the properties of M-cone and stochastic analysis technique, we obtain some new conditions ensuring the exponential p-stability of the zero solution of impulsive stochastic differential equations with delays. Two illustrative examples have been provided to show the effectiveness of our results

  16. Exponential-Polynomial Families and the Term Structure of Interest Rates

    OpenAIRE

    Filipovic, Damir

    2000-01-01

    Exponential-polynomial families like the Nelson-Siegel or Svensson family are widely used to estimate the current forward rate curve. We investigate whether these methods go well with inter-temporal modelling. We characterize the consistent Itô processes which have the property of providing an arbitrage-free interest rate model when representing the parameters of some bounded exponential-polynomial type function. This includes in particular diffusion processes. We show that there is a strong li...

  17. Separation of type and grade in cervical tumours using non-mono-exponential models of diffusion-weighted MRI

    International Nuclear Information System (INIS)

    Winfield, Jessica M.; Collins, David J.; Morgan, Veronica A.; DeSouza, Nandita M.; Orton, Matthew R.; Ind, Thomas E.J.; Attygalle, Ayoma; Hazell, Steve

    2017-01-01

    Assessment of empirical diffusion-weighted MRI (DW-MRI) models in cervical tumours to investigate whether fitted parameters distinguish between types and grades of tumours. Forty-two patients (24 squamous cell carcinomas, 14 well/moderately differentiated, 10 poorly differentiated; 15 adenocarcinomas, 13 well/moderately differentiated, two poorly differentiated; three rare types) were imaged at 3 T using nine b-values (0 to 800 s mm⁻²). Mono-exponential, stretched exponential, kurtosis, statistical, and bi-exponential models were fitted. Model preference was assessed using Bayesian Information Criterion analysis. Differences in fitted parameters between tumour types/grades and correlation between fitted parameters were assessed using two-way analysis of variance and Pearson's linear correlation coefficient, respectively. Non-mono-exponential models were preferred by 83 % of tumours with bi-exponential and stretched exponential models preferred by the largest numbers of tumours. Apparent diffusion coefficient (ADC) and diffusion coefficients from non-mono-exponential models were significantly lower in poorly differentiated tumours than well/moderately differentiated tumours. α (stretched exponential), K (kurtosis), f and D* (bi-exponential) were significantly different between tumour types. Strong correlation was observed between ADC and diffusion coefficients from other models. Non-mono-exponential models were preferred to the mono-exponential model in DW-MRI data from cervical tumours. Parameters of non-mono-exponential models showed significant differences between types and grades of tumours. (orig.)

  18. Separation of type and grade in cervical tumours using non-mono-exponential models of diffusion-weighted MRI

    Energy Technology Data Exchange (ETDEWEB)

    Winfield, Jessica M.; Collins, David J.; Morgan, Veronica A.; DeSouza, Nandita M. [The Royal Marsden NHS Foundation Trust, MRI Unit, Sutton, Surrey (United Kingdom); The Institute of Cancer Research, Cancer Research UK Cancer Imaging Centre, Division of Radiotherapy and Imaging, London (United Kingdom); Orton, Matthew R. [The Institute of Cancer Research, Cancer Research UK Cancer Imaging Centre, Division of Radiotherapy and Imaging, London (United Kingdom); Ind, Thomas E.J. [The Royal Marsden NHS Foundation Trust, Gynaecology Unit, London (United Kingdom); Attygalle, Ayoma; Hazell, Steve [The Royal Marsden NHS Foundation Trust, Department of Histopathology, London (United Kingdom)

    2017-02-15

    Assessment of empirical diffusion-weighted MRI (DW-MRI) models in cervical tumours to investigate whether fitted parameters distinguish between types and grades of tumours. Forty-two patients (24 squamous cell carcinomas, 14 well/moderately differentiated, 10 poorly differentiated; 15 adenocarcinomas, 13 well/moderately differentiated, two poorly differentiated; three rare types) were imaged at 3 T using nine b-values (0 to 800 s mm⁻²). Mono-exponential, stretched exponential, kurtosis, statistical, and bi-exponential models were fitted. Model preference was assessed using Bayesian Information Criterion analysis. Differences in fitted parameters between tumour types/grades and correlation between fitted parameters were assessed using two-way analysis of variance and Pearson's linear correlation coefficient, respectively. Non-mono-exponential models were preferred by 83 % of tumours with bi-exponential and stretched exponential models preferred by the largest numbers of tumours. Apparent diffusion coefficient (ADC) and diffusion coefficients from non-mono-exponential models were significantly lower in poorly differentiated tumours than well/moderately differentiated tumours. α (stretched exponential), K (kurtosis), f and D* (bi-exponential) were significantly different between tumour types. Strong correlation was observed between ADC and diffusion coefficients from other models. Non-mono-exponential models were preferred to the mono-exponential model in DW-MRI data from cervical tumours. Parameters of non-mono-exponential models showed significant differences between types and grades of tumours. (orig.)

  19. Electronic portal image assisted reduction of systematic set-up errors in head and neck irradiation

    International Nuclear Information System (INIS)

    Boer, Hans C.J. de; Soernsen de Koste, John R. van; Creutzberg, Carien L.; Visser, Andries G.; Levendag, Peter C.; Heijmen, Ben J.M.

    2001-01-01

    Purpose: To quantify systematic and random patient set-up errors in head and neck irradiation and to investigate the impact of an off-line correction protocol on the systematic errors. Material and methods: Electronic portal images were obtained for 31 patients treated for primary supra-glottic larynx carcinoma who were immobilised using a polyvinyl chloride cast. The observed patient set-up errors were input to the shrinking action level (SAL) off-line decision protocol and appropriate set-up corrections were applied. To assess the impact of the protocol, the positioning accuracy without application of set-up corrections was reconstructed. Results: The set-up errors obtained without set-up corrections (1 standard deviation (SD)=1.5-2 mm for random and systematic errors) were comparable to those reported in other studies on similar fixation devices. On an average, six fractions per patient were imaged and the set-up of half the patients was changed due to the decision protocol. Most changes were detected during weekly check measurements, not during the first days of treatment. The application of the SAL protocol reduced the width of the distribution of systematic errors to 1 mm (1 SD), as expected from simulations. A retrospective analysis showed that this accuracy should be attainable with only two measurements per patient using a different off-line correction protocol, which does not apply action levels. Conclusions: Off-line verification protocols can be particularly effective in head and neck patients due to the smallness of the random set-up errors. The excellent set-up reproducibility that can be achieved with such protocols enables accurate dose delivery in conformal treatments
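The systematic and random components quantified in such studies are conventionally separated as follows: the systematic error is the spread of the per-patient mean deviations, and the random error is the typical per-patient day-to-day spread. A minimal sketch with hypothetical portal-image data (the SAL decision protocol itself is not implemented):

```python
import math

def setup_error_stats(deviations_per_patient):
    """Population systematic error (Sigma): SD of the per-patient mean
    set-up deviations. Population random error (sigma): RMS of the
    per-patient SDs. Standard decomposition used in portal-imaging
    studies; the SAL off-line correction protocol is not reproduced."""
    means, sds = [], []
    for devs in deviations_per_patient:
        n = len(devs)
        m = sum(devs) / n
        means.append(m)
        sds.append(math.sqrt(sum((d - m) ** 2 for d in devs) / (n - 1)))
    grand = sum(means) / len(means)
    Sigma = math.sqrt(sum((m - grand) ** 2 for m in means) / (len(means) - 1))
    sigma = math.sqrt(sum(s * s for s in sds) / len(sds))
    return Sigma, sigma

# hypothetical set-up deviations (mm) for 3 patients over 3 fractions
data = [[1.0, 1.5, 2.0], [-0.5, 0.0, 0.5], [0.0, 1.0, -1.0]]
Sigma, sigma = setup_error_stats(data)
```

An off-line correction protocol can only shrink Sigma (the reproducible per-patient offsets); the random component sigma is unaffected, which is why such protocols pay off most when sigma is already small, as in head and neck fixation.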

  20. Error reduction in health care: a systems approach to improving patient safety

    National Research Council Canada - National Science Library

    Spath, Patrice

    2011-01-01

    ... The book pinpoints how to reduce and eliminate medical mistakes that threaten the health and safety of patients and teaches how to identify the root cause of medical errors, implement strategies...

  1. Additivity of statistical moments in the exponentially modified Gaussian model of chromatography

    International Nuclear Information System (INIS)

    Howerton, Samuel B.; Lee Chomin; McGuffin, Victoria L.

    2002-01-01

    A homologous series of saturated fatty acids ranging from C10 to C22 was separated by reversed-phase capillary liquid chromatography. The resultant zone profiles were found to be best fit by an exponentially modified Gaussian (EMG) function. To compare the EMG function and statistical moments for the analysis of the experimental zone profiles, a series of simulated profiles was generated using fixed values for the retention time and different values for the symmetrical (σ) and asymmetrical (τ) contributions to the variance. The simulated profiles were varied with respect to the integration limits, the number of points, and the signal-to-noise ratio. Each profile was then analyzed using statistical moments and an iteratively fitted EMG equation. These data indicate that the statistical moment method is much more susceptible to error when the degree of asymmetry is large, when the integration limits are inappropriately chosen, when the number of points is small, and when the signal-to-noise ratio is small. The experimental zone profiles were then analyzed using the statistical moment and EMG methods. Although care was taken to minimize the sources of error discussed above, significant differences were found between the two methods. The differences in the second moment suggest that the symmetrical and asymmetrical contributions to broadening in the experimental zone profiles are not independent. As a consequence, the second moment is not equal to the sum of σ² and τ², as is commonly assumed. This observation has important implications for the elucidation of thermodynamic and kinetic information from chromatographic zone profiles.
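For an ideal EMG peak the additivity assumptions discussed above read: first moment = retention time + τ, and second central moment = σ² + τ². The sketch below verifies both numerically from the standard EMG density (written with λ = 1/τ; the peak parameters are hypothetical):

```python
import math

def emg_pdf(x, mu, sigma, tau):
    """Exponentially modified Gaussian density (standard textbook form
    with rate lambda = 1/tau)."""
    lam = 1.0 / tau
    arg = (mu + lam * sigma**2 - x) / (math.sqrt(2) * sigma)
    return 0.5 * lam * math.exp(0.5 * lam * (2*mu + lam*sigma**2 - 2*x)) * math.erfc(arg)

mu, sigma, tau = 5.0, 0.4, 0.9  # hypothetical peak parameters

# numeric zeroth/first moments by trapezoidal integration over a wide window
lo, hi, n = mu - 10, mu + 30, 40000
h = (hi - lo) / n
m0 = m1 = 0.0
for i in range(n + 1):
    x = lo + i * h
    w = 0.5 if i in (0, n) else 1.0
    p = emg_pdf(x, mu, sigma, tau)
    m0 += w * p
    m1 += w * p * x
m0 *= h; m1 *= h
mean = m1 / m0

# second central moment
m2 = 0.0
for i in range(n + 1):
    x = lo + i * h
    w = 0.5 if i in (0, n) else 1.0
    m2 += w * emg_pdf(x, mu, sigma, tau) * (x - mean)**2
m2 *= h / m0
```

For an ideal EMG the numbers come out as mean = μ + τ and m2 = σ² + τ²; the abstract's point is that real chromatographic peaks need not obey this additivity because σ and τ are not independent.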

  2. Microbial activity in aquatic environments measured by dimethyl sulfoxide reduction and intercomparison with commonly used methods.

    Science.gov (United States)

    Griebler, C; Slezak, D

    2001-01-01

    A new method to determine microbial (bacterial and fungal) activity in various freshwater habitats is described. Based on microbial reduction of dimethyl sulfoxide (DMSO) to dimethyl sulfide (DMS), our DMSO reduction method allows measurement of the respiratory activity in interstitial water, as well as in the water column. DMSO is added to water samples at a concentration (0.75% [vol/vol] or 106 mM) high enough to compete with other naturally occurring electron acceptors, as determined with oxygen and nitrate, without stimulating or inhibiting microbial activity. Addition of NaN₃, KCN, and formaldehyde, as well as autoclaving, inhibited the production of DMS, which proves that the reduction of DMSO is a biotic process. DMSO reduction is readily detectable via the formation of DMS even at low microbial activities. All water samples showed significant DMSO reduction over several hours. Microbially reduced DMSO is recovered in the form of DMS from water samples by a purge and trap system and is quantified by gas chromatography and detection with a flame photometric detector. The DMSO reduction method was compared with other methods commonly used for assessment of microbial activity. DMSO reduction activity correlated well with bacterial production in predator-free batch cultures. Cell-production-specific DMSO reduction rates did not differ significantly in batch cultures with different nutrient regimes but were different in different growth phases. Overall, a cell-production-specific DMSO reduction rate of 1.26 × 10⁻¹⁷ ± 0.12 × 10⁻¹⁷ mol of DMS per produced cell (mean ± standard error; R² = 0.78) was calculated. We suggest that the relationship of DMSO reduction rates to thymidine and leucine incorporation is linear (the R² values ranged from 0.783 to 0.944), whereas there is an exponential relationship between DMSO reduction rates and glucose uptake, as well as incorporation (the R² values ranged from 0.821 to 0.931). Based on our results, we

  3. Harmonic analysis on exponential solvable Lie groups

    CERN Document Server

    Fujiwara, Hidenori

    2015-01-01

    This book is the first one that brings together recent results on the harmonic analysis of exponential solvable Lie groups. There still are many interesting open problems, and the book contributes to the future progress of this research field. As well, various related topics are presented to motivate young researchers. The orbit method invented by Kirillov is applied to study basic problems in the analysis on exponential solvable Lie groups. This method tells us that the unitary dual of these groups is realized as the space of their coadjoint orbits. This fact is established using the Mackey theory for induced representations, and that mechanism is explained first. One of the fundamental problems in the representation theory is the irreducible decomposition of induced or restricted representations. Therefore, these decompositions are studied in detail before proceeding to various related problems: the multiplicity formula, Plancherel formulas, intertwining operators, Frobenius reciprocity, and associated alge...

  4. CMB constraints on β-exponential inflationary models

    Science.gov (United States)

    Santos, M. A.; Benetti, M.; Alcaniz, J. S.; Brito, F. A.; Silva, R.

    2018-03-01

    We analyze a class of generalized inflationary models proposed in ref. [1], known as β-exponential inflation. We show that this kind of potential can arise in the context of brane cosmology, where the field describing the size of the extra-dimension is interpreted as the inflaton. We discuss the observational viability of this class of model in light of the latest Cosmic Microwave Background (CMB) data from the Planck Collaboration through a Bayesian analysis, and impose tight constraints on the model parameters. We find that the CMB data alone prefer weakly the minimal standard model (ΛCDM) over the β-exponential inflation. However, when current local measurements of the Hubble parameter, H0, are considered, the β-inflation model is moderately preferred over the ΛCDM cosmology, making the study of this class of inflationary models interesting in the context of the current H0 tension.

  5. KIOPS: A fast adaptive Krylov subspace solver for exponential integrators

    OpenAIRE

    Gaudreault, Stéphane; Rainwater, Greg; Tokman, Mayya

    2018-01-01

    This paper presents a new algorithm KIOPS for computing linear combinations of φ-functions that appear in exponential integrators. This algorithm is suitable for large-scale problems in computational physics where little or no information about the spectrum or norm of the Jacobian matrix is known a priori. We first show that such problems can be solved efficiently by computing a single exponential of a modified matrix. Then our approach is to compute an appropriate basis for ...
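The core building block behind Krylov-based exponential integrators is approximating exp(A)v from a small Krylov subspace. A minimal Arnoldi sketch using NumPy (KIOPS's adaptive φ-function and incomplete-orthogonalization machinery are not reproduced; the test matrix is hypothetical):

```python
import numpy as np

def expm_taylor(M, terms=60):
    """Matrix exponential by plain Taylor series -- adequate for the tiny,
    well-scaled matrices used here (not a production method)."""
    E = np.eye(M.shape[0])
    T = np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

def krylov_expv(A, v, m):
    """Approximate exp(A) @ v via an m-dimensional Krylov subspace
    (Arnoldi): exp(A)v ~ beta * V_m * exp(H_m) * e1."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    return beta * V[:, :m] @ expm_taylor(Hm)[:, 0]

rng = np.random.default_rng(0)
n = 50
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # hypothetical test matrix
v = rng.standard_normal(n)

approx = krylov_expv(A, v, m=30)
exact = expm_taylor(A) @ v
err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

The point of the Krylov projection is that only matrix-vector products with A are needed, so no spectral information is required in advance, which is the setting KIOPS targets.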

  6. Equation for disentangling time-ordered exponentials with arbitrary quadratic generators

    International Nuclear Information System (INIS)

    Budanov, V.G.

    1987-01-01

    In many quantum-mechanical constructions, it is necessary to disentangle an operator-valued time-ordered exponential with time-dependent generators quadratic in the creation and annihilation operators. By disentangling, one understands the finding of the matrix elements of the time-ordered exponential or, in a more general formulation. The solution of the problem can also be reduced to calculation of a matrix time-ordered exponential that solves the corresponding classical problem. However, in either case the evolution equations in their usual form do not enable one to take into account explicitly the symmetry of the system. In this paper the methods of Weyl analysis are used to find an ordinary differential equation on a matrix Lie algebra that is invariant with respect to the adjoint action of the dynamical symmetry group of a quadratic Hamiltonian and replaces the operator evolution equation for the Green's function

  7. The mechanism of double-exponential growth in hyper-inflation

    Science.gov (United States)

    Mizuno, T.; Takayasu, M.; Takayasu, H.

    2002-05-01

    Analyzing historical data of price indices, we find an extraordinary growth phenomenon in several examples of hyper-inflation, in which price changes are approximated nicely by double-exponential functions of time. In order to explain such behavior we apply the general coarse-graining technique of physics, the Monte Carlo renormalization group method, to the price dynamics. Starting from a microscopic stochastic equation describing dealers' actions in open markets, we obtain a macroscopic noiseless equation of price consistent with the observation. The effect of auto-catalytic shortening of characteristic time caused by mob psychology is shown to be responsible for the double-exponential behavior.
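The double-exponential signature P(t) = exp(b·aᵗ) is easy to test on data: log log P is then linear in t with slope log a. A toy check with synthetic prices (the parameters a and b are illustrative, not taken from the paper):

```python
import math

# synthetic hyper-inflation price index P(t) = exp(b * a**t)
a, b = 1.3, 0.01
prices = [math.exp(b * a**t) for t in range(20)]

# log(log P(t)) = log(b) + t*log(a): linear in t with slope log(a)
loglog = [math.log(math.log(p)) for p in prices]
slopes = [loglog[t + 1] - loglog[t] for t in range(len(loglog) - 1)]
```

On real price-index data the same double-log transform is what reveals whether growth is merely exponential (log P linear) or double-exponential (log log P linear).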

  8. EXCHANGE-RATES FORECASTING: EXPONENTIAL SMOOTHING TECHNIQUES AND ARIMA MODELS

    Directory of Open Access Journals (Sweden)

    Dezsi Eva

    2011-07-01

    Exchange rate forecasting is, and has been, a challenging task in finance. Statistical and econometric models are widely used in the analysis and forecasting of foreign exchange rates. This paper investigates the behavior of daily exchange rates of the Romanian Leu against the Euro, United States Dollar, British Pound, Japanese Yen, Chinese Renminbi and Russian Ruble. Smoothing techniques are generated and compared with each other: the Simple Exponential Smoothing technique, the Double Exponential Smoothing technique, the Simple Holt-Winters and Additive Holt-Winters techniques, as well as the Autoregressive Integrated Moving Average model.
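Of the techniques compared, simple exponential smoothing is the most compact: the one-step-ahead forecast is a weighted average of the latest observation and the previous forecast, s_t = α·x_{t-1} + (1 − α)·s_{t-1}. A minimal sketch with hypothetical rate data (the series and α are illustrative, not the paper's):

```python
def simple_exponential_smoothing(series, alpha):
    """One-step-ahead forecasts: each forecast blends the most recent
    observation with the previous forecast. Initialised with the first
    observation (a common convention)."""
    forecast = [series[0]]
    for x in series[:-1]:
        forecast.append(alpha * x + (1 - alpha) * forecast[-1])
    return forecast

rates = [4.20, 4.25, 4.22, 4.30, 4.28]   # hypothetical daily exchange rates
fc = simple_exponential_smoothing(rates, alpha=0.5)
```

Larger α tracks the series more closely; smaller α smooths more heavily, which is the single tuning decision of this method.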

  9. A cluster expansion approach to exponential random graph models

    International Nuclear Information System (INIS)

    Yin, Mei

    2012-01-01

    The exponential family of random graphs are among the most widely studied network models. We show that any exponential random graph model may alternatively be viewed as a lattice gas model with a finite Banach space norm. The system may then be treated using cluster expansion methods from statistical mechanics. In particular, we derive a convergent power series expansion for the limiting free energy in the case of small parameters. Since the free energy is the generating function for the expectations of other random variables, this characterizes the structure and behavior of the limiting network in this parameter region

  10. Quantification and isotopic analysis of intracellular sulfur metabolites in the dissimilatory sulfate reduction pathway

    Science.gov (United States)

    Sim, Min Sub; Paris, Guillaume; Adkins, Jess F.; Orphan, Victoria J.; Sessions, Alex L.

    2017-06-01

    Microbial sulfate reduction exhibits a normal isotope effect, leaving unreacted sulfate enriched in ³⁴S and producing sulfide that is depleted in ³⁴S. However, the magnitude of sulfur isotope fractionation is quite variable. The resulting changes in sulfur isotope abundance have been used to trace microbial sulfate reduction in modern and ancient ecosystems, but the intracellular mechanism(s) underlying the wide range of fractionations remains unclear. Here we report the concentrations and isotopic ratios of sulfur metabolites in the dissimilatory sulfate reduction pathway of Desulfovibrio alaskensis. Intracellular sulfate and APS levels change depending on the growth phase, peaking at the end of exponential phase, while sulfite accumulates in the cell during stationary phase. During exponential growth, intracellular sulfate and APS are strongly enriched in ³⁴S. The fractionation between internal and external sulfate is up to 49‰, while at the same time that between external sulfate and sulfide is just a few permil. We interpret this pattern to indicate that enzymatic fractionations remain large but the net fractionation between sulfate and sulfide is muted by the closed-system limitation of intracellular sulfate. This 'reservoir effect' diminishes upon cessation of exponential phase growth, allowing the expression of larger net sulfur isotope fractionations. Thus, the relative rates of sulfate exchange across the membrane versus intracellular sulfate reduction should govern the overall (net) fractionation that is expressed. A strong reservoir effect due to vigorous sulfate reduction might be responsible for the well-established inverse correlation between sulfur isotope fractionation and the cell-specific rate of sulfate reduction, while at the same time intraspecies differences in sulfate uptake and/or exchange rates could account for the significant scatter in this relationship. Our approach, together with ongoing investigations of the kinetic isotope
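The closed-system 'reservoir effect' invoked above follows Rayleigh distillation: as the remaining fraction f of the intracellular sulfate pool is drawn down, the residual pool is enriched according to R/R₀ = f^(α−1). A sketch with a hypothetical fractionation factor (α, δ₀ and the f values are illustrative, not the paper's measurements):

```python
def rayleigh_residual_delta(delta0, f, alpha):
    """delta-34S (permil) of residual sulfate in a closed pool when a
    fraction f remains unreacted (Rayleigh distillation, R/R0 = f**(alpha-1))."""
    R0 = delta0 / 1000.0 + 1.0        # isotope ratio relative to a standard
    R = R0 * f ** (alpha - 1.0)
    return (R - 1.0) * 1000.0

alpha = 0.975   # hypothetical fractionation factor (~25 permil effect)
d0 = 0.0        # initial delta-34S of the pool, permil
deltas = [rayleigh_residual_delta(d0, f, alpha) for f in (1.0, 0.5, 0.1)]
```

The progressive ³⁴S enrichment of the shrinking internal pool is what mutes the net external sulfate-sulfide fractionation even while the enzymatic fractionations stay large.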

  11. Fast Outage Probability Simulation for FSO Links with a Generalized Pointing Error Model

    KAUST Repository

    Ben Issaid, Chaouki

    2017-02-07

    Over the past few years, free-space optical (FSO) communication has gained significant attention. In fact, FSO can provide cost-effective and unlicensed links, with high bandwidth capacity and low error rate, making it an exciting alternative to traditional wireless radio-frequency communication systems. However, the system performance is affected not only by the presence of atmospheric turbulence, which occurs due to random fluctuations in the air refractive index, but also by the existence of pointing errors. Metrics such as the outage probability, which quantifies the probability that the instantaneous signal-to-noise ratio is smaller than a given threshold, can be used to analyze the performance of this system. In this work, we consider weak and strong turbulence regimes, and we study the outage probability of an FSO communication system under a generalized pointing error model with both a nonzero boresight component and different horizontal and vertical jitter effects. More specifically, we use an importance sampling approach based on the exponential twisting technique to offer fast and accurate results.
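    As a minimal sketch of the exponential-twisting idea (not the paper's FSO channel model: the SNR here is simply assumed exponential, and the rate and threshold are illustrative), importance sampling tilts the sampling density toward the rare outage region and reweights each sample by the likelihood ratio:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    lam = 1.0          # rate of the (assumed) exponential SNR model
    gamma_th = 1e-4    # outage threshold: P(SNR < gamma_th) is a rare event
    exact = 1.0 - np.exp(-lam * gamma_th)

    # Exponential twisting: sample from a tilted exponential with rate lam_t,
    # chosen so the rare region {x < gamma_th} is hit with high probability.
    lam_t = 1.0 / gamma_th
    n = 50_000
    x = rng.exponential(1.0 / lam_t, size=n)

    # Likelihood ratio f(x) / f_t(x) between the original and tilted densities.
    weights = (lam / lam_t) * np.exp((lam_t - lam) * x)
    est = np.mean((x < gamma_th) * weights)

    rel_err = abs(est - exact) / exact
    ```

    A naive Monte Carlo run of the same size would see almost no outage events at this threshold; the tilted sampler hits the region on most draws and recovers the probability with small relative error.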

  12. A method for searching the possible deviations from exponential decay law

    International Nuclear Information System (INIS)

    Tran Dai Nghiep; Vu Hoang Lam; Tran Vien Ha

    1993-01-01

    A continuous kinetic function approach is proposed for analyzing experimental decay curves. In the case of purely exponential behaviour, the values of the kinetic function are the same at different ages of the investigated radionuclide. Deviations from the main decay curve can then be found by comparing experimental kinetic function values with those expected in the purely exponential case. (author). 12 refs
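    The idea can be illustrated with a small numerical sketch (synthetic curves with assumed decay constants, not the authors' data): for a purely exponential curve, the local decay constant, estimated as -d(ln N)/dt, is the same at every age, while an admixed second component makes it drift:

    ```python
    import numpy as np

    t = np.linspace(0.0, 10.0, 101)
    lam = 0.3
    pure = np.exp(-lam * t)

    # "Kinetic function": the local decay constant -d(ln N)/dt, estimated
    # by finite differences. For a purely exponential curve it is flat.
    def kinetic_function(t, n):
        return -np.diff(np.log(n)) / np.diff(t)

    k_pure = kinetic_function(t, pure)

    # A second, slower component makes the kinetic function drift with age.
    mixed = 0.9 * np.exp(-lam * t) + 0.1 * np.exp(-0.05 * t)
    k_mixed = kinetic_function(t, mixed)

    flat_spread = float(k_pure.max() - k_pure.min())
    mixed_spread = float(k_mixed.max() - k_mixed.min())
    ```

    Comparing the spread of the kinetic function across ages against the flat pure-exponential baseline is exactly the kind of test the abstract describes.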

  13. Reduction of measurement errors in OCT scanning

    Science.gov (United States)

    Morel, E. N.; Tabla, P. M.; Sallese, M.; Torga, J. R.

    2018-03-01

    Optical coherence tomography (OCT) is a non-destructive optical technique that uses a broadband light source focused on a point in the sample to determine the distance (strictly, the optical path difference, OPD) between this point and a reference surface. The point can be on the surface or at an interior interface of a (transparent or semitransparent) sample, allowing topographies and/or tomographies of different materials. The traditional experimental scheme for this technique is the Michelson interferometer, in which a beam of light is divided into two arms, one for the reference and the other for the sample. The overlap of the light reflected from the sample and from the reference generates an interference signal that carries information about the OPD between the arms. In this work, we use an experimental configuration in which the reference signal and the signal reflected from the sample travel along the same arm, improving the quality of the interference signal. Among the most important aspects of this improvement, the noise and errors produced by relative reference-sample movement and by the dispersion of the refractive index are considerably reduced. It is thus possible to obtain 3D images of surfaces with a spatial resolution on the order of microns. Results on the topography of metallic surfaces, glass and inks printed on paper are presented.

  14. Thermoluminescence under an exponential heating function: I. Theory

    International Nuclear Information System (INIS)

    Kitis, G; Chen, R; Pagonis, V; Carinou, E; Kamenopoulou, V

    2006-01-01

    Constant temperature hot gas readers are widely employed in thermoluminescence dosimetry. In such readers the sample is heated according to an exponential heating function. The single glow-peak shape derived under this heating condition is not described by the TL kinetics equation corresponding to a linear heating rate. In the present work TL kinetics expressions, for first and general order kinetics, describing single glow-peak shapes under an exponential heating function are derived. All expressions were modified from their original form of I(n_0, E, s, b, T) into I(I_m, E, T_m, b, T) in order to become more efficient for glow-curve deconvolution analysis. The efficiency of all algorithms was extensively tested using synthetic glow-peaks
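    The setting can be sketched numerically (the trap parameters and heating profile below are illustrative assumptions, not the paper's expressions): integrate first-order kinetics dn/dt = -n·s·exp(-E/kT(t)) under an exponential heating function T(t) = T_max - (T_max - T_0)·exp(-t/τ) and read off the glow-peak intensity I(t) = -dn/dt:

    ```python
    import numpy as np

    # Assumed parameters for illustration (not from the paper).
    E = 1.0            # trap depth, eV
    s = 1e12           # frequency factor, 1/s
    kB = 8.617e-5      # Boltzmann constant, eV/K
    T0, Tmax, tau = 300.0, 600.0, 30.0   # exponential heating profile

    dt = 1e-3
    t = np.arange(0.0, 120.0, dt)
    T = Tmax - (Tmax - T0) * np.exp(-t / tau)   # exponential heating function

    # First-order kinetics: dn/dt = -n * s * exp(-E / (kB * T(t)))
    rate = s * np.exp(-E / (kB * T))
    n = np.empty_like(t)
    n[0] = 1.0
    for i in range(1, len(t)):
        n[i] = n[i - 1] * (1.0 - dt * rate[i - 1])
        if n[i] < 0.0:          # guard against explicit-Euler overshoot
            n[i] = 0.0

    intensity = n * rate        # TL intensity I(t) = -dn/dt
    peak_index = int(np.argmax(intensity))
    ```

    The result is a single glow peak whose maximum sits strictly inside the heating ramp, with the trap population depleted well before the plateau temperature is reached.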

  15. Rank-shaping regularization of exponential spectral analysis for application to functional parametric mapping

    International Nuclear Information System (INIS)

    Turkheimer, Federico E; Hinz, Rainer; Gunn, Roger N; Aston, John A D; Gunn, Steve R; Cunningham, Vincent J

    2003-01-01

    Compartmental models are widely used for the mathematical modelling of dynamic studies acquired with positron emission tomography (PET). The numerical problem involves the estimation of a sum of decaying real exponentials convolved with an input function. In exponential spectral analysis (SA), the nonlinear estimation of the exponential functions is replaced by the linear estimation of the coefficients of a predefined set of exponential basis functions. This set-up guarantees fast estimation and attainment of the global optimum. SA, however, is hampered by high sensitivity to noise and, because of the positivity constraints implemented in the algorithm, cannot be extended to reference region modelling. In this paper, SA limitations are addressed by a new rank-shaping (RS) estimator that defines an appropriate regularization over an unconstrained least-squares solution obtained through singular value decomposition of the exponential base. Shrinkage parameters are conditioned on the expected signal-to-noise ratio. Through application to simulated and real datasets, it is shown that RS ameliorates and extends SA properties in the case of the production of functional parametric maps from PET studies
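    A toy version of the regularized SVD step (synthetic two-exponential data; the basis grid, noise level and filter shape are assumptions, not the paper's settings) shows how shrinking singular components stabilizes the unconstrained least-squares solution over an exponential basis:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0.05, 60.0, 120)

    # Predefined exponential basis (spectral analysis): columns exp(-beta_j * t)
    betas = np.logspace(-2, 0.5, 30)
    B = np.exp(-np.outer(t, betas))

    # Synthetic "tissue curve": two exponentials plus noise
    y_clean = 2.0 * np.exp(-0.1 * t) + 1.0 * np.exp(-0.8 * t)
    y = y_clean + 0.05 * rng.standard_normal(t.size)

    # Unconstrained least squares via SVD, then shrink each singular
    # component with a smooth filter factor in [0, 1] ("rank shaping").
    U, sing, Vt = np.linalg.svd(B, full_matrices=False)
    coef = U.T @ y
    alpha = sing[0] * 1e-3                      # shrinkage scale (assumed)
    filt = sing**2 / (sing**2 + alpha**2)       # Tikhonov-style filter factors
    x_ols = Vt.T @ (coef / sing)                # unregularized solution
    x_rs = Vt.T @ (filt * coef / sing)          # rank-shaped solution

    fit_rs = B @ x_rs
    ```

    Because every filter factor lies in [0, 1], the shrunk coefficient vector can never have a larger norm than the raw least-squares one, while the fit to the data remains close.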

  16. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Mark [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Tuen Mun Hospital, Hong Kong (China); Grehn, Melanie [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Cremers, Florian [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Siebert, Frank-Andre [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Wurster, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Department for Radiation Oncology, University Medicine Greifswald, Greifswald (Germany); Huttenlocher, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Dunst, Jürgen [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Department for Radiation Oncology, University Clinic Copenhagen, Copenhagen (Denmark); Hildebrandt, Guido [Department for Radiation Oncology, University Medicine Rostock, Rostock (Germany); Schweikard, Achim [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Rades, Dirk [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Ernst, Floris [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); and others

    2017-03-15

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  17. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    International Nuclear Information System (INIS)

    Chan, Mark; Grehn, Melanie; Cremers, Florian; Siebert, Frank-Andre; Wurster, Stefan; Huttenlocher, Stefan; Dunst, Jürgen; Hildebrandt, Guido; Schweikard, Achim; Rades, Dirk; Ernst, Floris

    2017-01-01

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  18. Ultrafast convolution/superposition using tabulated and exponential kernels on GPU

    Energy Technology Data Exchange (ETDEWEB)

    Chen Quan; Chen Mingli; Lu Weiguo [TomoTherapy Inc., 1240 Deming Way, Madison, Wisconsin 53717 (United States)

    2011-03-15

    Purpose: Collapsed-cone convolution/superposition (CCCS) dose calculation is the workhorse for IMRT dose calculation. The authors present a novel algorithm for computing CCCS dose on the modern graphics processing unit (GPU). Methods: The GPU algorithm includes a novel TERMA calculation that has no write conflicts and has linear computational complexity. The CCCS algorithm uses either tabulated or exponential cumulative-cumulative kernels (CCKs) as reported in the literature. The authors have demonstrated that the use of exponential kernels can reduce the computational complexity by an order of the problem dimension while achieving excellent accuracy. Special attention is paid to the unique architecture of the GPU, especially the memory access pattern, which increases performance by more than tenfold. Results: As a result, the tabulated kernel implementation on the GPU is two to three times faster than other GPU implementations reported in the literature. The implementation of CCCS showed significant speedup on the GPU over a single-core CPU. On tabulated CCKs, speedups as high as 70 are observed; on exponential CCKs, speedups as high as 90 are observed. Conclusions: Overall, the GPU algorithm using exponential CCKs is 1000-3000 times faster than a highly optimized single-threaded CPU implementation using tabulated CCKs, while the dose differences are within 0.5% and 0.5 mm. This ultrafast CCCS algorithm will allow many time-sensitive applications to use accurate dose calculation.

  19. Nonresponse Error in Mail Surveys: Top Ten Problems

    Directory of Open Access Journals (Sweden)

    Jeanette M. Daly

    2011-01-01

    Full Text Available Conducting mail surveys can result in nonresponse error, which occurs when the potential participant is unwilling to participate or impossible to contact. Nonresponse can reduce the precision of the study and may bias results. The purpose of this paper is to describe a top-ten list of mailed survey problems affecting the response rate, encountered over time in different research projects utilizing the Dillman Total Design Method. Ten nonresponse error problems were identified, such as an inserter machine getting the sequence out of order, capitalization errors in databases, and mailings discarded by the postal service. These ten mishaps can potentiate nonresponse errors, but there are ways to minimize their frequency. The suggestions offered stem from our own experiences during research projects. Our goal is to increase researchers' knowledge of nonresponse error problems and to offer solutions which can decrease nonresponse error in future projects.

  20. Pilot Error in Air Carrier Mishaps: Longitudinal Trends Among 558 Reports, 1983–2002

    Science.gov (United States)

    Baker, Susan P.; Qiang, Yandong; Rebok, George W.; Li, Guohua

    2009-01-01

    Background Many interventions have been implemented in recent decades to reduce pilot error in flight operations. This study aims to identify longitudinal trends in the prevalence and patterns of pilot error and other factors in U.S. air carrier mishaps. Method National Transportation Safety Board investigation reports were examined for 558 air carrier mishaps during 1983–2002. Pilot errors and circumstances of mishaps were described and categorized. Rates were calculated per 10 million flights. Results The overall mishap rate remained fairly stable, but the proportion of mishaps involving pilot error decreased from 42% in 1983–87 to 25% in 1998–2002, a 40% reduction. The rate of mishaps related to poor decisions declined from 6.2 to 1.8 per 10 million flights, a 71% reduction; much of this decrease was due to a 76% reduction in poor decisions related to weather. Mishandling wind or runway conditions declined by 78%. The rate of mishaps involving poor crew interaction declined by 68%. Mishaps during takeoff declined by 70%, from 5.3 to 1.6 per 10 million flights. The latter reduction was offset by an increase in mishaps while the aircraft was standing, from 2.5 to 6.0 per 10 million flights, and during pushback, which increased from 0 to 3.1 per 10 million flights. Conclusions Reductions in pilot errors involving decision making and crew coordination are important trends that may reflect improvements in training and technological advances that facilitate good decisions. Mishaps while aircraft are standing and during push-back have increased and deserve special attention. PMID:18225771

  1. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    Full Text Available This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA control chart when observations are from an exponential distribution using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.
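    The ARL comparison can be sketched for a single independent stream of exponential observations (the copula coupling is omitted, and the target mean, smoothing constant and control limits below are illustrative assumptions using the usual asymptotic EWMA variance):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def ewma_run_length(mean, lam=0.2, L=3.0, cap=2000):
        """Steps until the EWMA statistic leaves the control limits."""
        target = 1.0                            # in-control exponential mean (assumed)
        sigma_z = np.sqrt(lam / (2.0 - lam))    # asymptotic EWMA std (unit variance)
        ucl, lcl = target + L * sigma_z, target - L * sigma_z
        z = target
        for i in range(1, cap + 1):
            z = lam * rng.exponential(mean) + (1.0 - lam) * z
            if z > ucl or z < lcl:
                return i
        return cap

    n_runs = 300
    arl_in = float(np.mean([ewma_run_length(1.0) for _ in range(n_runs)]))
    arl_out = float(np.mean([ewma_run_length(2.0) for _ in range(n_runs)]))
    ```

    A shifted process mean drives the EWMA statistic across the control limit quickly, so the out-of-control ARL is far below the in-control ARL, which is the behavior the chart comparison in the paper quantifies per copula.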

  2. Collisional avalanche exponentiation of run-away electrons in electrified plasmas

    International Nuclear Information System (INIS)

    Jayakumar, R.; Fleischmann, H.H.; Zweben, S.J.; Cornell Univ., Ithaca, NY

    1992-07-01

    In contrast to earlier expectations, it is estimated that generation of runaway electrons from close collisions of existing runaways with cold plasma electrons can be significant even for small electric fields, whenever runaways can gain energies of about 20 MeV or more. In that case, the runaway population will grow exponentially, with the energy spectrum showing an exponential decrease towards higher energies. Energy gains of the required magnitude may occur in large Tokamak devices as well as in cosmic-ray generation

  3. On the non-hyperbolicity of a class of exponential polynomials

    Directory of Open Access Journals (Sweden)

    Gaspar Mora

    2017-10-01

    Full Text Available In this paper we have constructed a class of non-hyperbolic exponential polynomials that contains all the partial sums of the Riemann zeta function. An exponential polynomial has also been defined to illustrate the complexity of the structure of the set defined by the closure of the real projections of its zeros. The sensitivity of this set when the vector of delays is perturbed has been analysed. These results have immediate implications in the theory of neutral differential equations.

  4. Reduction in specimen labeling errors after implementation of a positive patient identification system in phlebotomy.

    Science.gov (United States)

    Morrison, Aileen P; Tanasijevic, Milenko J; Goonan, Ellen M; Lobo, Margaret M; Bates, Michael M; Lipsitz, Stuart R; Bates, David W; Melanson, Stacy E F

    2010-06-01

    Ensuring accurate patient identification is central to preventing medical errors, but it can be challenging. We implemented a bar code-based positive patient identification system for use in inpatient phlebotomy. A before-after design was used to evaluate the impact of the identification system on the frequency of mislabeled and unlabeled samples reported in our laboratory. Labeling errors fell from 5.45 in 10,000 before implementation to 3.2 in 10,000 afterward (P = .0013). An estimated 108 mislabeling events were prevented by the identification system in 1 year. Furthermore, a workflow step requiring manual preprinting of labels, which was accompanied by potential labeling errors in about one quarter of blood "draws," was removed as a result of the new system. After implementation, a higher percentage of patients reported having their wristband checked before phlebotomy. Bar code technology significantly reduced the rate of specimen identification errors.

  5. Convergence and stability of the exponential Euler method for semi-linear stochastic delay differential equations.

    Science.gov (United States)

    Zhang, Ling

    2017-01-01

    The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method for semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation solution converges to the analytic solution with strong order 1/2 for SLSDDEs. On the one hand, the classical stability theorem for SLSDDEs is given by Lyapunov functions. However, in this paper we study the exponential stability in mean square of the exact solution to SLSDDEs by using the definition of the logarithmic norm. On the other hand, the implicit Euler scheme for SLSDDEs is known to be exponentially stable in mean square for any step size. However, in this article we propose an explicit method and show that the exponential Euler method for SLSDDEs shares the same stability for any step size by the property of the logarithmic norm.

  6. Convergence and stability of the exponential Euler method for semi-linear stochastic delay differential equations

    Directory of Open Access Journals (Sweden)

    Ling Zhang

    2017-10-01

    Full Text Available Abstract The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method for semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation solution converges to the analytic solution with strong order 1/2 for SLSDDEs. On the one hand, the classical stability theorem for SLSDDEs is given by Lyapunov functions. However, in this paper we study the exponential stability in mean square of the exact solution to SLSDDEs by using the definition of the logarithmic norm. On the other hand, the implicit Euler scheme for SLSDDEs is known to be exponentially stable in mean square for any step size. However, in this article we propose an explicit method and show that the exponential Euler method for SLSDDEs shares the same stability for any step size by the property of the logarithmic norm.
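    One common form of such a scheme (the drift split, delay handling, and coefficients below are illustrative assumptions, not the paper's exact construction) applies the exact exponential flow of the linear part at each step; for the purely linear deterministic case the scheme is then exact:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Semi-linear SDDE: dX = [a X(t) + f(X(t - tau))] dt + g dW(t)
    # One common form of the exponential Euler step (assumed here):
    #   X_{n+1} = exp(a*h) * (X_n + h * f(X_{n-m}) + g * dW_n)
    a, tau, g = -2.0, 1.0, 0.1
    f = lambda u: 0.5 * np.sin(u)

    h = 0.01
    m = int(round(tau / h))
    n_steps = 2000

    x = np.empty(n_steps + 1)
    x[0] = 1.0
    hist = lambda k: x[k - m] if k >= m else 1.0   # constant initial history

    E = np.exp(a * h)
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))
        x[k + 1] = E * (x[k] + h * f(hist(k)) + g * dW)

    # Sanity check: with f = 0 and g = 0 the scheme integrates dX = aX exactly.
    y = 1.0
    for _ in range(n_steps):
        y = E * y
    exact = np.exp(a * h * n_steps)
    ```

    With a strongly contractive linear part (a < 0) the simulated path stays bounded in mean square for this step size, reflecting the unconditional stability property the abstract attributes to the exponential Euler method.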

  7. Analysis of error in Monte Carlo transport calculations

    International Nuclear Information System (INIS)

    Booth, T.E.

    1979-01-01

    The Monte Carlo method for neutron transport calculations suffers, in part, from the inherent statistical errors associated with the method. Without an estimate of these errors in advance of the calculation, it is difficult to decide which estimator and biasing scheme to use. Recently, integral equations have been derived that, when solved, predict errors in Monte Carlo calculations in nonmultiplying media. The present work allows error prediction in nonanalog Monte Carlo calculations of multiplying systems, even when supercritical. Nonanalog techniques such as biased kernels, particle splitting, and Russian roulette are incorporated. The equations derived here allow prediction of how much a specific variance reduction technique reduces the number of histories required, to be weighed against the change in time required for the calculation of each history. 1 figure, 1 table

  8. Exponential rate of convergence in current reservoirs

    OpenAIRE

    De Masi, Anna; Presutti, Errico; Tsagkarogiannis, Dimitrios; Vares, Maria Eulalia

    2015-01-01

    In this paper, we consider a family of interacting particle systems on $[-N,N]$ that arises as a natural model for current reservoirs and Fick's law. We study the exponential rate of convergence to the stationary measure, which we prove to be of the order $N^{-2}$.

  9. Improvement of the physically-based groundwater model simulations through complementary correction of its errors

    Directory of Open Access Journals (Sweden)

    Jorge Mauricio Reyes Alcalde

    2017-04-01

    Full Text Available Physically-based groundwater models (PBM), such as MODFLOW, are used as groundwater resource evaluation tools under the assumption that the produced differences (residuals or errors) are white noise. In practice, however, these numerical simulations usually show not only random errors but also systematic errors. In this work a numerical procedure has been developed to deal with PBM systematic errors, studying their structure in order to model their behavior and correct the results by external and complementary means, through a framework called the Complementary Correction Model (CCM). The application of the CCM to a PBM shows a decrease in local biases, a better distribution of errors and reductions in their temporal and spatial correlations, with a 73% reduction in global RMSN over the original PBM. This methodology offers an interesting way to update a PBM while avoiding the work and cost of interfering with its internal structure.
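    The CCM idea of modeling structured residuals externally can be sketched as follows (synthetic heads and a polynomial error model are assumptions for illustration; the paper's PBM is MODFLOW-like and is not reproduced here):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic "observed" heads and a physically-based model output that
    # carries a smooth systematic bias plus white noise.
    x = np.linspace(0.0, 10.0, 200)
    observed = 50.0 + 2.0 * x
    pbm = observed + 1.5 * np.sin(0.5 * x) + 0.3 * rng.standard_normal(x.size)

    residual = observed - pbm

    # Complementary correction: fit an external model to the PBM residuals
    # (here a low-order polynomial) and add it back to the simulation.
    coeffs = np.polyfit(x, residual, deg=5)
    corrected = pbm + np.polyval(coeffs, x)

    rmse = lambda e: float(np.sqrt(np.mean(e**2)))
    rmse_before = rmse(observed - pbm)
    rmse_after = rmse(observed - corrected)
    ```

    The external correction absorbs the systematic (structured) part of the error while leaving the white-noise component, so the RMSE drops toward the noise floor without touching the internal model.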

  10. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

    is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem a truth model is used that includes gravity with spherical, J2 and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random-walk components. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors, and are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
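    The consistency test behind such chi-square comparisons can be sketched directly (a 3-state toy problem with assumed covariances, not the orbit scenarios of the abstract): the statistic e'P⁻¹e should average the state dimension when P correctly describes the errors, and drift away when P is mis-modeled:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Simulated state-estimate errors in 3D (illustrative stand-in for the
    # orbit-determination errors discussed in the abstract).
    P_true = np.diag([4.0, 1.0, 0.25])
    errors = rng.multivariate_normal(np.zeros(3), P_true, size=5000)

    # Empirical state error covariance matrix from the sample itself.
    P_emp = np.cov(errors, rowvar=False)

    # A mis-modeled "theoretical" covariance (too optimistic by 4x).
    P_theory = P_true / 4.0

    # Chi-square statistic e' P^{-1} e for each error vector.
    chi2 = lambda e, P: np.einsum('ij,jk,ik->i', e, np.linalg.inv(P), e)
    mean_emp = float(np.mean(chi2(errors, P_emp)))       # near the state dim (3)
    mean_theory = float(np.mean(chi2(errors, P_theory))) # inflated: inconsistent P
    ```

    A covariance that matches the actual error statistics yields a mean chi-square near the state dimension; the overconfident covariance inflates it by the mis-scaling factor, which is the signature the total-vector analyses look for.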

  11. Reduction of multi-dimensional laboratory data to a two-dimensional plot: a novel technique for the identification of laboratory error.

    Science.gov (United States)

    Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A

    2007-01-01

    The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.

  12. Truncated exponential-rigid-rotor model for strong electron and ion rings

    International Nuclear Information System (INIS)

    Larrabee, D.A.; Lovelace, R.V.; Fleischmann, H.H.

    1979-01-01

    A comprehensive study of exponential-rigid-rotor equilibria for strong electron and ion rings indicates the presence of a sizeable percentage of untrapped particles in all equilibria with aspect ratios R/a ≲ 4. Such aspect ratios are required in fusion-relevant rings. Significant changes in the equilibria are observed when untrapped particles are excluded by the use of a truncated exponential-rigid-rotor distribution function. (author)

  13. Re-analysis of exponential rigid-rotor astron equilibria

    International Nuclear Information System (INIS)

    Lovelace, R.V.; Larrabee, D.A.; Fleischmann, H.H.

    1978-01-01

    Previous studies of exponential rigid-rotor astron equilibria include particles which are not trapped in the self-field of the configuration. The modification of these studies required to exclude untrapped particles is derived

  14. Recursions of Symmetry Orbits and Reduction without Reduction

    Directory of Open Access Journals (Sweden)

    Andrei A. Malykh

    2011-04-01

    Full Text Available We consider a four-dimensional PDE possessing partner symmetries mainly on the example of complex Monge-Ampère equation (CMA. We use simultaneously two pairs of symmetries related by a recursion relation, which are mutually complex conjugate for CMA. For both pairs of partner symmetries, using Lie equations, we introduce explicitly group parameters as additional variables, replacing symmetry characteristics and their complex conjugates by derivatives of the unknown with respect to group parameters. We study the resulting system of six equations in the eight-dimensional space, that includes CMA, four equations of the recursion between partner symmetries and one integrability condition of this system. We use point symmetries of this extended system for performing its symmetry reduction with respect to group parameters that facilitates solving the extended system. This procedure does not imply a reduction in the number of physical variables and hence we end up with orbits of non-invariant solutions of CMA, generated by one partner symmetry, not used in the reduction. These solutions are determined by six linear equations with constant coefficients in the five-dimensional space which are obtained by a three-dimensional Legendre transformation of the reduced extended system. We present algebraic and exponential examples of such solutions that govern Legendre-transformed Ricci-flat Kähler metrics with no Killing vectors. A similar procedure is briefly outlined for Husain equation.

  15. Reduced phase error through optimized control of a superconducting qubit

    International Nuclear Information System (INIS)

    Lucero, Erik; Kelly, Julian; Bialczak, Radoslaw C.; Lenander, Mike; Mariantoni, Matteo; Neeley, Matthew; O'Connell, A. D.; Sank, Daniel; Wang, H.; Weides, Martin; Wenner, James; Cleland, A. N.; Martinis, John M.; Yamamoto, Tsuyoshi

    2010-01-01

    Minimizing phase and other errors in experimental quantum gates allows higher fidelity quantum processing. To quantify and correct for phase errors, in particular, we have developed an experimental metrology - amplified phase error (APE) pulses - that amplifies and helps identify phase errors in general multilevel qubit architectures. In order to correct for both phase and amplitude errors specific to virtual transitions and leakage outside of the qubit manifold, we implement 'half derivative', an experimental simplification of derivative reduction by adiabatic gate (DRAG) control theory. The phase errors are lowered by about a factor of five using this method to ∼1.6 deg. per gate, and can be tuned to zero. Leakage outside the qubit manifold, to the qubit |2> state, is also reduced to ∼10⁻⁴ for 20% faster gates.

  16. Preliminary performance analysis of exponential experimental system for the determination of neutron effective multiplication factor of PWR spent fuel

    International Nuclear Information System (INIS)

    Shin, Heesung; Lee, Sang-Yun; Ro, Seung-Gy; Seo, Gi-Seok; Kim, Ho-Dong

    2002-01-01

    An exponential experiment system composed of a neutron detector, a signal analysis system and a 10 mCi Cf-252 neutron source has been installed in the storage pool of PIEF at KAERI in order to experimentally determine the neutron effective multiplication factors of PWR spent fuel assemblies. Preliminary functional characteristic tests of the experimental system were performed for the C15, J14 and J44 assemblies loaded in the pool. In these preliminary tests, the average neutron counts obtained over 3 minutes in the plateau of the C15, J14 and J44 assemblies were about 1900, 3800 and 3200, respectively. A dip in the neutron flux density distribution is noticed at the spacer grid positions, where neutron counts are reduced to about 70% of those at the fuel positions. The measured axial neutron distribution shapes are compared with the result for the P14 assembly and with Cs-137 gamma scanning data obtained at KAERI. The measured spacer grid positions are consistent with the design specifications within a 2.3% error. The exponential decay constants for the C15 assembly were determined to be 0.152 and 0.165 for detector and source scanning, respectively. (author)
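The decay-constant extraction described above can be sketched as a log-linear fit of axial counts. This is an illustration only: the positions and counts below are made-up numbers, with the decay constant chosen to match the 0.152 value reported for the C15 assembly.

```python
import numpy as np

# Hypothetical axial scan: counts falling off exponentially with position.
positions = np.arange(0.0, 10.0)           # axial position (arbitrary units)
true_gamma = 0.152                          # decay constant reported for C15
counts = 1900.0 * np.exp(-true_gamma * positions)

# For a pure exponential, a least-squares line fit to log(counts)
# recovers the decay constant as the negative slope.
slope, intercept = np.polyfit(positions, np.log(counts), 1)
gamma_est = -slope
print(round(gamma_est, 3))                  # → 0.152
```

In a real measurement one would weight the fit by counting statistics and exclude the spacer-grid dips noted in the abstract.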

  17. Time-Weighted Balanced Stochastic Model Reduction

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2011-01-01

    A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous and discrete time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recently...

  18. A nanostructured surface increases friction exponentially at the solid-gas interface.

    Science.gov (United States)

    Phani, Arindam; Putkaradze, Vakhtang; Hawk, John E; Prashanthi, Kovur; Thundat, Thomas

    2016-09-06

    According to Stokes' law, a moving solid surface experiences viscous drag that is linearly related to its velocity and the viscosity of the medium. The viscous interactions result in dissipation that is known to scale as the square root of the kinematic viscosity times the density of the gas. We observed that when an oscillating surface is modified with nanostructures, the experimentally measured dissipation shows an exponential dependence on kinematic viscosity. The surface nanostructures alter the solid-gas interplay greatly, amplifying the dissipation response exponentially for even minute variations in viscosity. The nanostructured resonator thus allows discrimination within an otherwise narrow range of gaseous viscosities, making dissipation an ideal parameter for the analysis of gaseous media. We attribute the observed exponential enhancement to the stochastic nature of interactions of many coupled nanostructures with the gas media.

  19. A nanostructured surface increases friction exponentially at the solid-gas interface

    Science.gov (United States)

    Phani, Arindam; Putkaradze, Vakhtang; Hawk, John E.; Prashanthi, Kovur; Thundat, Thomas

    2016-09-01

    According to Stokes’ law, a moving solid surface experiences viscous drag that is linearly related to its velocity and the viscosity of the medium. The viscous interactions result in dissipation that is known to scale as the square root of the kinematic viscosity times the density of the gas. We observed that when an oscillating surface is modified with nanostructures, the experimentally measured dissipation shows an exponential dependence on kinematic viscosity. The surface nanostructures alter the solid-gas interplay greatly, amplifying the dissipation response exponentially for even minute variations in viscosity. The nanostructured resonator thus allows discrimination within an otherwise narrow range of gaseous viscosities, making dissipation an ideal parameter for the analysis of gaseous media. We attribute the observed exponential enhancement to the stochastic nature of interactions of many coupled nanostructures with the gas media.

  20. Decreasing scoring errors on Wechsler Scale Vocabulary, Comprehension, and Similarities subtests: a preliminary study.

    Science.gov (United States)

    Linger, Michele L; Ray, Glen E; Zachar, Peter; Underhill, Andrea T; LoBello, Steven G

    2007-10-01

    Studies of graduate students learning to administer the Wechsler scales have generally shown that training is not associated with the development of scoring proficiency. Many studies report on the reduction of aggregated administration and scoring errors, a strategy that does not highlight the reduction of errors on subtests identified as most prone to error. This study evaluated the development of scoring proficiency specifically on the Wechsler (WISC-IV and WAIS-III) Vocabulary, Comprehension, and Similarities subtests during training by comparing a set of 'early test administrations' to 'later test administrations.' Twelve graduate students enrolled in an intelligence-testing course participated in the study. Scoring errors (e.g., incorrect point assignment) were evaluated on the students' actual practice administration test protocols. Errors on all three subtests declined significantly when scoring errors on 'early' sets of Wechsler scales were compared to those made on 'later' sets. However, correcting these subtest scoring errors did not cause significant changes in subtest scaled scores. Implications for clinical instruction and future research are discussed.

  1. "First, know thyself": cognition and error in medicine.

    Science.gov (United States)

    Elia, Fabrizio; Aprà, Franco; Verhovez, Andrea; Crupi, Vincenzo

    2016-04-01

    Although error is an integral part of the world of medicine, physicians have always been little inclined to take their own mistakes into account, and the extraordinary technological progress observed in the last decades does not seem to have resulted in a significant reduction in the percentage of diagnostic errors. The failure to reduce diagnostic errors, notwithstanding the considerable investment in human and economic resources, has paved the way to new strategies made available by the development of cognitive psychology, the branch of psychology that aims at understanding the mechanisms of human reasoning. This new approach led us to realize that we are not fully rational agents able to take decisions on the basis of logical and probabilistically appropriate evaluations. In us, two different and mostly independent modes of reasoning coexist: a fast or non-analytical mode, which tends to be largely automatic and fast-reactive, and a slow or analytical mode, which permits rationally founded answers. One of the features of the fast mode of reasoning is the employment of standardized rules, termed "heuristics." Heuristics lead physicians to correct choices in a large percentage of cases. Unfortunately, cases exist wherein the heuristic triggered fails to fit the target problem, so that the fast mode of reasoning can lead us to unreflectively perform actions exposing us and others to variable degrees of risk. Cognitive errors arise as a result of these cases. Our review illustrates how cognitive errors can cause diagnostic problems in clinical practice.

  2. Linear, Step by Step Managerial Performance, versus Exponential Performance

    Directory of Open Access Journals (Sweden)

    George MOLDOVEANU

    2011-04-01

    Full Text Available The paper proposes the transition from the concept of potential management, whose dimension the authors determined in earlier work (Roşca, Moldoveanu, 2009b), to the concept of linear, step-by-step performance as an objective result of the management process. In this way, we “answer” the theorists and practitioners who support exponential managerial performance. The authors, as detractors of exponential performance, are influenced by the current crisis (Roşca, Moldoveanu, 2009a), by the lack of organizational excellence in many companies, particularly Romanian ones, and by “the finality” reached in evolved companies that developed at an uncontrollable speed.

  3. Generator of an exponential function with respect to time

    International Nuclear Information System (INIS)

    Janin, Paul; Puyal, Claude.

    1981-01-01

    This invention deals with an exponential function generator, and an application of this generator to simulating the criticality of a nuclear reactor for reactimeter calibration purposes. This generator, which is particularly suitable for simulating the criticality of a nuclear reactor to calibrate a reactimeter, can also be used in any field of application necessitating the generation of an exponential function in real time. In certain fields of thermodynamics, it is necessary to represent temperature gradients as a function of time. The generator might find applications here. Another application is nuclear physics where it is necessary to represent the attenuation of a neutron flux density with respect to time [fr

  4. When economic growth is less than exponential

    DEFF Research Database (Denmark)

    Groth, Christian; Koch, Karl-Josef; Steger, Thomas

    2010-01-01

    This paper argues that growth theory needs a more general notion of "regularity" than that of exponential growth. We suggest that paths along which the rate of decline of the growth rate is proportional to the growth rate itself deserve attention. This opens up for considering a richer set...

  5. When Economic Growth is Less than Exponential

    DEFF Research Database (Denmark)

    Groth, Christian; Koch, Karl-Josef; Steger, Thomas M.

    This paper argues that growth theory needs a more general notion of "regularity" than that of exponential growth. We suggest that paths along which the rate of decline of the growth rate is proportional to the growth rate itself deserve attention. This opens up for considering a richer set...

  6. Students' Understanding of Exponential and Logarithmic Functions.

    Science.gov (United States)

    Weber, Keith

    Exponential and logarithmic functions are pivotal mathematical concepts that play central roles in advanced mathematics. Unfortunately, these are also concepts that give students serious difficulty. This report describes a theory of how students acquire an understanding of these functions by prescribing a set of mental constructions that a student…

  7. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed us to distinguish partially aware errors (i.e., errors that were noticed but misclassified) from fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. The Effect of Random Error on Diagnostic Accuracy Illustrated with the Anthropometric Diagnosis of Malnutrition

    Science.gov (United States)

    2016-01-01

    Background It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results A Monte Carlo simulation was conducted of anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the populations and showed that the standard deviation increased with each error, becoming exponentially greater with the magnitude of the error. The potential magnitude of the resulting errors in reported prevalence of malnutrition was compared with published international data and found to be of sufficient magnitude to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions The effect of random error in public health surveys, and in the data upon which diagnostic cut-off points are derived to define “health”, has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments, measurer selection, training & supervision, routine estimation of the likely magnitude of errors using standardization tests, use of statistical likelihood of error to exclude data from analysis and full reporting of these procedures in order to judge the reliability of survey reports. PMID:28030627
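The mechanism described above is easy to reproduce. This minimal sketch (our own, not the paper's simulation) adds zero-mean random measurement error to a healthy population of z-scores and shows that the apparent prevalence of values below a diagnostic cut-off (here z < -2, as used for anthropometric indices) is substantially inflated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
true_z = rng.normal(0.0, 1.0, n)           # true z-scores, SD = 1
cutoff = -2.0

prev_true = np.mean(true_z < cutoff)       # ~2.3% by construction

# Impose random measurement error with SD equal to half the population SD.
measured_z = true_z + rng.normal(0.0, 0.5, n)
prev_measured = np.mean(measured_z < cutoff)

# The spread widens to sqrt(1 + 0.5**2) ≈ 1.118, so the reported
# prevalence rises well above the true value even though the error
# is unbiased.
print(prev_true, prev_measured)
```

Note that increasing n tightens both estimates around their expected values but does nothing to remove the bias, which is the paper's point about sample size.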

  9. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

    The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction processes. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP), and the repair (correction) process a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Nevertheless, applications of the model to the inspection and maintenance of physical systems are foreseen. The paper includes a complete numerical example of the model application to a software reliability analysis
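The abstract does not spell out MERF itself, so the sketch below only illustrates the generic NHPP ingredient such models build on, using the classic exponential (Goel-Okumoto) mean-value function m(t) = a(1 - exp(-b t)) as a stand-in; a (expected total faults) and b (detection rate) are invented illustrative values.

```python
import math

def expected_failures(t, a=100.0, b=0.05):
    # Goel-Okumoto mean-value function: cumulative expected failures by time t.
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t, a=100.0, b=0.05):
    # For an NHPP, the probability of no failure in (t, t+x] is
    # R(x | t) = exp(-(m(t+x) - m(t))).
    return math.exp(-(expected_failures(t + x, a, b) - expected_failures(t, a, b)))

print(round(expected_failures(20.0), 2))   # ≈ 63.21 faults expected by t = 20
print(reliability(1.0, 20.0))              # probability of a failure-free next unit of time
```

The MERF/EARF model layers a multinomial correction process on top of this failure process; the NHPP reliability formula above is the shared backbone.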

  10. Test Exponential Pile

    Science.gov (United States)

    Fermi, Enrico

    The Patent contains an extremely detailed description of an atomic pile employing natural uranium as fissile material and graphite as moderator. It starts with the discussion of the theory of the intervening phenomena, in particular the evaluation of the reproduction or multiplication factor, K, that is the ratio of the number of fast neutrons produced in one generation by the fissions to the original number of fast neutrons, in a system of infinite size. The possibility of having a self-maintaining chain reaction in a system of finite size depends both on the facts that K is greater than unity and the overall size of the system is sufficiently large to minimize the percentage of neutrons escaping from the system. After the description of a possible realization of such a pile (with many detailed drawings), the various kinds of neutron losses in a pile are depicted. Particularly relevant is the reported "invention" of the exponential experiment: since theoretical calculations can determine whether or not a chain reaction will occur in a given system, but can be invalidated by uncertainties in the parameters of the problem, an experimental test of the pile is proposed, aimed at ascertaining if the pile under construction would be divergent (i.e. with a neutron multiplication factor K greater than 1) by making measurements on a smaller pile. The idea is to measure, by a detector containing an indium foil, the exponential decrease of the neutron density along the length of a column of uranium-graphite lattice, where a neutron source is placed near its base. Such an exponential decrease is greater or less than that expected due to leakage, according to whether the K factor is less or greater than 1, so that this experiment is able to test the criticality of the pile, its accuracy increasing with the size of the column.
In order to perform this measure a mathematical description of the effect of neutron production, diffusion, and absorption on the neutron density in the
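The logic of the exponential experiment can be sketched with the standard buckling relation. All numbers below are invented for illustration (they are not Fermi's data): in a square column of side a with a source at the base, the asymptotic flux falls off as exp(-kappa z); transverse leakage alone would give kappa0² = 2(π/a)², and positive material buckling (K > 1) makes the measured decay slower than that.

```python
import math

# Assumed column geometry (cm); extrapolation distances ignored.
a = 150.0
kappa0 = math.sqrt(2.0) * math.pi / a      # decay expected from leakage only

# Hypothetical measured decay constant from indium-foil counts (per cm).
kappa_measured = 0.028

# kappa^2 = kappa0^2 - Bm^2, so the sign of the material buckling Bm^2
# tells whether an infinite lattice of this material would be divergent.
material_buckling = kappa0**2 - kappa_measured**2
print("K > 1 (divergent lattice)" if material_buckling > 0
      else "K < 1 (subcritical lattice)")
```

A slower-than-leakage decay (kappa_measured < kappa0, as here) indicates K > 1, which is exactly the criticality test the patent describes.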

  11. Almost sure exponential stability of stochastic fuzzy cellular neural networks with delays

    International Nuclear Information System (INIS)

    Zhao Hongyong; Ding Nan; Chen Ling

    2009-01-01

    This paper is concerned with the problem of exponential stability analysis for fuzzy cellular neural network with delays. By constructing suitable Lyapunov functional and using stochastic analysis we present some sufficient conditions ensuring almost sure exponential stability for the network. Moreover, an example is given to demonstrate the advantages of our method.

  12. Afrika Statistika ISSN 2316-090X A note on a new exponential ...

    African Journals Online (AJOL)

    Let {Xn,n ≥ 1} be a sequence of random variables defined on a fixed probability space. (Ω, F, P). An exponential inequality ... Now as to the probability inequality field, as pointed out by Sung et al. (2011), exponential inequalities for ..... associated random variables. Journal of Statistical Planning and inference, 138, 4132-.

  13. Errors in laboratory medicine: practical lessons to improve patient safety.

    Science.gov (United States)

    Howanitz, Peter J

    2005-10-01

    Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification

  14. FMEA: a model for reducing medical errors.

    Science.gov (United States)

    Chiozza, Maria Laura; Ponzetti, Clemente

    2009-06-01

    Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, first introduced in the aerospace industry in the 1960s. Early applications in the health care industry, dating back to the 1990s, included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO) licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).
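The RPN arithmetic at the heart of FMEA is simple: each failure mode is scored for severity (S), occurrence (O), and detectability (D), typically on 1-10 scales, and RPN = S × O × D. The failure modes and scores below are invented for illustration, not taken from the reviewed studies.

```python
# Hypothetical laboratory failure modes: (severity, occurrence, detectability).
failure_modes = {
    "specimen mislabeled":        (9, 4, 5),
    "wrong tube anticoagulant":   (6, 3, 4),
    "analyzer calibration drift": (5, 2, 6),
}

# RPN = S * O * D for each failure mode.
rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}

# Rank failure modes so corrective actions target the highest RPN first;
# re-scoring after an intervention quantifies the risk reduction.
for name, value in sorted(rpn.items(), key=lambda kv: -kv[1]):
    print(f"{value:4d}  {name}")
```

A "significant reduction of the RPN", as reported in the abstract, means the re-scored S, O, or D values fall after process changes (e.g., better detection lowers D).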

  15. Probabilistic error bounds for reduced order modeling

    Energy Technology Data Exchange (ETDEWEB)

    Abdo, M.G.; Wang, C.; Abdel-Khalik, H.S., E-mail: abdo@purdue.edu, E-mail: wang1730@purdue.edu, E-mail: abdelkhalik@purdue.edu [Purdue Univ., School of Nuclear Engineering, West Lafayette, IN (United States)

    2015-07-01

    Reduced order modeling has proven to be an effective tool when repeated execution of reactor analysis codes is required. ROM operates on the assumption that the intrinsic dimensionality of the associated reactor physics models is sufficiently small when compared to the nominal dimensionality of the input and output data streams. By employing a truncation technique with roots in linear algebra matrix decomposition theory, ROM effectively discards all components of the input and output data that have negligible impact on reactor attributes of interest. This manuscript introduces a mathematical approach to quantify the errors resulting from the discarded ROM components. As supported by numerical experiments, the introduced analysis proves that the contribution of the discarded components could be upper-bounded with an overwhelmingly high probability. The reverse of this statement implies that the ROM algorithm can self-adapt to determine the level of the reduction needed such that the maximum resulting reduction error is below a given tolerance limit that is set by the user. (author)
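The premise above (discarding components with negligible impact) can be illustrated with a plain SVD truncation on synthetic data; this toy example is ours, not the manuscript's probabilistic bound. For the truncated SVD, the Eckart-Young theorem makes the spectral-norm error exactly the first discarded singular value, which is the kind of quantity the manuscript's analysis upper-bounds with high probability.

```python
import numpy as np

# Build a 200 x 50 matrix with a rapidly decaying singular spectrum,
# i.e. low intrinsic dimensionality.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((200, 200)))
V, _ = np.linalg.qr(rng.standard_normal((50, 50)))
s = 10.0 ** -np.arange(50, dtype=float)       # singular values 1, 0.1, 0.01, ...
A = U[:, :50] * s @ V.T

r = 5                                         # retained ROM rank
Ur, sr, Vr = np.linalg.svd(A, full_matrices=False)
A_rom = Ur[:, :r] * sr[:r] @ Vr[:r]

# Spectral-norm truncation error equals the (r+1)-th singular value,
# so the discarded components' contribution is known and controllable.
err = np.linalg.norm(A - A_rom, 2)
print(err <= sr[r] * 1.000001)                # → True
```

In the ROM setting the singular values are not known exactly, which is why a probabilistic (sampling-based) upper bound on the discarded contribution, as in the manuscript, is needed to pick r adaptively.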

  16. Flowshop Scheduling Problems with a Position-Dependent Exponential Learning Effect

    Directory of Open Access Journals (Sweden)

    Mingbao Cheng

    2013-01-01

    Full Text Available We consider a permutation flowshop scheduling problem with a position-dependent exponential learning effect. The objective is to minimize the performance criteria of makespan and total flow time. For the two-machine flow shop scheduling case, we show that Johnson’s rule is not an optimal algorithm for minimizing the makespan given the exponential learning effect. Furthermore, by using the shortest total processing times first (STPT) rule, we construct the worst-case performance ratios for both criteria. Finally, a polynomial-time algorithm is proposed for special cases of the studied problem.
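The setting can be sketched concretely. A common position-dependent learning model (the paper's exact model may differ) has a job in sequence position r take p·α^(r-1) time, 0 < α < 1. The code below computes the two-machine makespan under this model, builds the classical Johnson order, and compares it against exhaustive search; with learning, brute force never does worse and can strictly beat Johnson's order, which is the non-optimality the paper establishes.

```python
from itertools import permutations

def makespan(seq, p1, p2, alpha=0.8):
    # Two-machine permutation flowshop with a position-dependent
    # learning effect: job j in position r takes p[j] * alpha**(r-1).
    t1 = t2 = 0.0
    for r, j in enumerate(seq, start=1):
        t1 += p1[j] * alpha ** (r - 1)        # machine 1 finishes job j
        t2 = max(t1, t2) + p2[j] * alpha ** (r - 1)
    return t2

def johnson_order(p1, p2):
    # Classical Johnson's rule (optimal when there is no learning effect).
    jobs = sorted(range(len(p1)), key=lambda j: min(p1[j], p2[j]))
    front, back = [], []
    for j in jobs:
        (front.append(j) if p1[j] <= p2[j] else back.insert(0, j))
    return front + back

p1 = [4.0, 1.0, 5.0, 2.0]                     # illustrative processing times
p2 = [3.0, 4.0, 2.0, 6.0]
seq_j = johnson_order(p1, p2)
best = min(permutations(range(4)), key=lambda s: makespan(s, p1, p2))
print(makespan(tuple(seq_j), p1, p2), makespan(best, p1, p2))
```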

  17. Policy Effects in Hyperbolic vs. Exponential Models of Consumption and Retirement.

    Science.gov (United States)

    Gustman, Alan L; Steinmeier, Thomas L

    2012-06-01

    This paper constructs a structural retirement model with hyperbolic preferences and uses it to estimate the effect of several potential Social Security policy changes. Estimated effects of policies are compared using two models, one with hyperbolic preferences and one with standard exponential preferences. Sophisticated hyperbolic discounters may accumulate substantial amounts of wealth for retirement. We find it is frequently difficult to distinguish empirically between models with the two types of preferences on the basis of asset accumulation paths or consumption paths around the period of retirement. Simulations suggest that, despite the much higher initial time preference rate, individuals with hyperbolic preferences may actually value a real annuity more than individuals with exponential preferences who have accumulated roughly equal amounts of assets. This appears to be especially true for individuals with relatively high time preference rates or who have low assets for whatever reason. This affects the tradeoff between current benefits and future benefits on which many of the retirement incentives of the Social Security system rest. Simulations involving increasing the early entitlement age and increasing the delayed retirement credit do not show a great deal of difference whether exponential or hyperbolic preferences are used, but simulations for eliminating the earnings test show a non-trivially greater effect when exponential preferences are used.
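The contrast between the two preference types can be sketched with the common quasi-hyperbolic (beta-delta) approximation; the paper's structural model is far richer, and all parameter values below are illustrative.

```python
def exponential_weights(delta, horizon):
    # Standard exponential discounting: weight delta**t on period t.
    return [delta ** t for t in range(horizon)]

def quasi_hyperbolic_weights(beta, delta, horizon):
    # Beta-delta discounting: full weight today, beta * delta**t thereafter.
    return [1.0 if t == 0 else beta * delta ** t for t in range(horizon)]

expo = exponential_weights(0.95, 5)
hyper = quasi_hyperbolic_weights(0.7, 0.95, 5)

# The hyperbolic agent discounts the entire future sharply relative to the
# present (beta < 1) but is just as patient *between* future periods:
ratio_expo = expo[2] / expo[1]     # 0.95
ratio_hyper = hyper[2] / hyper[1]  # also 0.95
print(expo[1], hyper[1])           # 0.95 vs 0.665: present bias
```

This present bias is why a hyperbolic agent, once retired, can value an annuity (which locks in future consumption) more than an exponential agent with the same assets.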

  18. Error analysis for mesospheric temperature profiling by absorptive occultation sensors

    Directory of Open Access Journals (Sweden)

    M. J. Rieder

    Full Text Available An error analysis for mesospheric profiles retrieved from absorptive occultation data has been performed, starting with realistic error assumptions as would apply to intensity data collected by available high-precision UV photodiode sensors. Propagation of statistical errors was investigated through the complete retrieval chain from measured intensity profiles to atmospheric density, pressure, and temperature profiles. We assumed unbiased errors as the occultation method is essentially self-calibrating and straight-line propagation of occulted signals as we focus on heights of 50–100 km, where refractive bending of the sensed radiation is negligible. Throughout the analysis the errors were characterized at each retrieval step by their mean profile, their covariance matrix and their probability density function (pdf). This furnishes, compared to a variance-only estimation, a much improved insight into the error propagation mechanism. We applied the procedure to a baseline analysis of the performance of a recently proposed solar UV occultation sensor (SMAS – Sun Monitor and Atmospheric Sounder) and provide, using a reasonable exponential atmospheric model as background, results on error standard deviations and error correlation functions of density, pressure, and temperature profiles. Two different sensor photodiode assumptions are discussed: diamond diodes (DD) with 0.03% and silicon diodes (SD) with 0.1% (unattenuated) intensity measurement noise at 10 Hz sampling rate. A factor-of-2 margin was applied to these noise values in order to roughly account for unmodeled cross section uncertainties. Within the entire height domain (50–100 km) we find temperature to be retrieved to better than 0.3 K (DD) / 1 K (SD) accuracy, respectively, at 2 km height resolution. The results indicate that absorptive occultations acquired by a SMAS-type sensor could provide mesospheric profiles of fundamental variables such as temperature with

  19. Error analysis for mesospheric temperature profiling by absorptive occultation sensors

    Directory of Open Access Journals (Sweden)

    M. J. Rieder

    2001-01-01

    Full Text Available An error analysis for mesospheric profiles retrieved from absorptive occultation data has been performed, starting with realistic error assumptions as would apply to intensity data collected by available high-precision UV photodiode sensors. Propagation of statistical errors was investigated through the complete retrieval chain from measured intensity profiles to atmospheric density, pressure, and temperature profiles. We assumed unbiased errors as the occultation method is essentially self-calibrating and straight-line propagation of occulted signals as we focus on heights of 50–100 km, where refractive bending of the sensed radiation is negligible. Throughout the analysis the errors were characterized at each retrieval step by their mean profile, their covariance matrix and their probability density function (pdf). This furnishes, compared to a variance-only estimation, a much improved insight into the error propagation mechanism. We applied the procedure to a baseline analysis of the performance of a recently proposed solar UV occultation sensor (SMAS – Sun Monitor and Atmospheric Sounder) and provide, using a reasonable exponential atmospheric model as background, results on error standard deviations and error correlation functions of density, pressure, and temperature profiles. Two different sensor photodiode assumptions are discussed: diamond diodes (DD) with 0.03% and silicon diodes (SD) with 0.1% (unattenuated) intensity measurement noise at 10 Hz sampling rate. A factor-of-2 margin was applied to these noise values in order to roughly account for unmodeled cross section uncertainties. Within the entire height domain (50–100 km) we find temperature to be retrieved to better than 0.3 K (DD) / 1 K (SD) accuracy, respectively, at 2 km height resolution. The results indicate that absorptive occultations acquired by a SMAS-type sensor could provide mesospheric profiles of fundamental variables such as temperature with

  20. Bias Errors due to Leakage Effects When Estimating Frequency Response Functions

    Directory of Open Access Journals (Sweden)

    Andreas Josefsson

    2012-01-01

    Full Text Available Frequency response functions are often utilized to characterize a system's dynamic response. For a wide range of engineering applications, it is desirable to determine frequency response functions for a system under stochastic excitation. In practice, the measurement data is contaminated by noise and some form of averaging is needed in order to obtain a consistent estimator. With Welch's method, the discrete Fourier transform is used and the data is segmented into smaller blocks so that averaging can be performed when estimating the spectrum. However, this segmentation introduces leakage effects. As a result, the estimated frequency response function suffers from both systematic (bias) and random errors due to leakage. In this paper the bias errors in the H1- and H2-estimates are studied and a new method is proposed to derive an approximate expression for the relative bias error at the resonance frequency with different window functions. The method is based on using a sum of real exponentials to describe the window's deterministic autocorrelation function. Simple expressions are derived for a rectangular window and a Hanning window. The theoretical expressions are verified with numerical simulations and a very good agreement is found between the results from the proposed bias expressions and the empirical results.
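The leakage bias discussed above can be demonstrated numerically (this is our own sketch, not the paper's derivation). White noise drives a lightly damped second-order system; the H1 estimate Sxy/Sxx is formed with Hanning-windowed Welch segments, and because the segment length gives a frequency resolution coarser than the resonance bandwidth, the estimated peak magnitude comes out biased low while off-resonance values are accurate.

```python
import numpy as np

rng = np.random.default_rng(0)
r, w0 = 0.99, 0.6                 # pole radius and resonance (rad/sample)
n_seg, nfft = 2000, 256           # many averages; short segments -> leakage

x = rng.standard_normal(n_seg * nfft)
# IIR system: y[n] = 2 r cos(w0) y[n-1] - r^2 y[n-2] + x[n]
y = np.zeros_like(x)
for n in range(len(x)):
    y[n] = x[n]
    if n >= 1:
        y[n] += 2 * r * np.cos(w0) * y[n - 1]
    if n >= 2:
        y[n] -= r * r * y[n - 2]

# Welch-style averaged auto- and cross-spectra with a Hanning window.
win = np.hanning(nfft)
X = np.fft.rfft(x.reshape(n_seg, nfft) * win, axis=1)
Y = np.fft.rfft(y.reshape(n_seg, nfft) * win, axis=1)
Sxx = np.mean(np.abs(X) ** 2, axis=0)
Sxy = np.mean(np.conj(X) * Y, axis=0)
H1 = Sxy / Sxx                    # averaged H1 estimate

w = np.arange(nfft // 2 + 1) * 2 * np.pi / nfft
H_true = 1.0 / (1 - 2 * r * np.cos(w0) * np.exp(-1j * w)
                + r**2 * np.exp(-2j * w))

k_res = int(np.argmax(np.abs(H_true)))
print(np.abs(H1[k_res]) / np.abs(H_true[k_res]))   # < 1: peak biased low
```

Longer segments (larger nfft) shrink this bias, which is the resolution/leakage trade-off the paper's bias expressions quantify.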

  1. Delay-dependent exponential stability of cellular neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Zhang Qiang; Wei Xiaopeng; Xu Jin

    2005-01-01

    The global exponential stability of cellular neural networks (CNNs) with time-varying delays is analyzed. Two new sufficient conditions ensuring global exponential stability for delayed CNNs are obtained. The conditions presented here are related to the size of delay. The stability results improve the earlier publications. Two examples are given to demonstrate the effectiveness of the obtained results

  2. Nonlinear control of ships minimizing the position tracking errors

    Directory of Open Access Journals (Sweden)

    Svein P. Berge

    1999-07-01

    Full Text Available In this paper, a nonlinear tracking controller with integral action for ships is presented. The controller is based on state feedback linearization. Exponential convergence of the vessel-fixed position and velocity errors is proven by using Lyapunov stability theory. Since we only have two control devices, a rudder and a propeller, we choose to control the longship and the sideship position errors to zero while the heading is stabilized indirectly. A Virtual Reference Point (VRP) is defined at the bow or ahead of the ship. The VRP is used for tracking control. It is shown that the distance from the center of rotation to the VRP will influence the stability of the zero dynamics. By selecting the VRP at the bow or even ahead of the bow, the damping in yaw can be increased and the zero dynamics is stabilized. Hence, the heading angle will be less sensitive to wind, currents and waves. The control law is simulated by using a nonlinear model of the Japanese training ship Shiojimaru with excellent results. Wind forces are added to demonstrate the robustness and performance of the integral controller.

  3. Blood transfusion sampling and a greater role for error recovery.

    Science.gov (United States)

    Oldham, Jane

    Patient identification errors in pre-transfusion blood sampling ('wrong blood in tube') are a persistent area of risk. These errors can potentially result in life-threatening complications. Current measures to address root causes of incidents and near misses have not resolved this problem and there is a need to look afresh at this issue. PROJECT PURPOSE: This narrative review of the literature is part of a wider system-improvement project designed to explore and seek a better understanding of the factors that contribute to transfusion sampling error as a prerequisite to examining current and potential approaches to error reduction. A broad search of the literature was undertaken to identify themes relating to this phenomenon. KEY DISCOVERIES: Two key themes emerged from the literature. Firstly, despite multi-faceted causes of error, the consistent element is the ever-present potential for human error. Secondly, current focus on error prevention could potentially be augmented with greater attention to error recovery. Exploring ways in which clinical staff taking samples might learn how to better identify their own errors is proposed to add to current safety initiatives.

  4. Sub-exponential spin-boson decoherence in a finite bath

    International Nuclear Information System (INIS)

    Wong, V.; Gruebele, M.

    2002-01-01

    We investigate the decoherence of a two-level system coupled to harmonic baths of 4-21 degrees of freedom, to baths with internal anharmonic couplings, and to baths with an additional 'solvent shell' (modes coupled to other bath modes, but not to the system). The discrete spectral densities are chosen to mimic the highly fluctuating spectral densities computed for real systems such as proteins. System decoherence is computed by exact quantum dynamics. With realistic parameter choices (finite temperature, reasonably large couplings), sub-exponential decoherence of the two-level system is observed. Empirically, the time-dependence of decoherence can be fitted by power laws with small exponents. Intrabath anharmonic couplings are more effective at smoothing the spectral density and restoring exponential dynamics than additional bath modes or solvent shells. We conclude that at high temperature, the most important physical basis for exponential decays is anharmonicity of those few bath modes interacting most strongly with the system, not a large number of oscillators interacting with the system. We relate the current numerical simulations to models of anharmonically coupled oscillators, which also predict power law dynamics. The potential utility of power law decays in quantum computation and condensed phase coherent control is also discussed

  5. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran Kumar; Mai, Paul Martin

    2016-01-01

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
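
The truncated exponential law can be illustrated with a short simulation: drawing slip values from a truncated exponential on [0, b] by inversion sampling and checking the sample mean against the closed-form mean. The rate and truncation point below are illustrative values, not parameters fitted to the SRCMOD data:

```python
import math
import random

random.seed(42)
lam, b = 1.5, 2.0                  # illustrative rate and truncation (max slip)
Z = 1.0 - math.exp(-lam * b)       # normalization of the truncated density

def sample_slip():
    """Inversion sampling from the truncated exponential on [0, b]."""
    u = random.random()
    return -math.log(1.0 - u * Z) / lam

slips = [sample_slip() for _ in range(200_000)]

# Closed-form mean of the truncated exponential on [0, b]
mean_theory = 1.0 / lam - b * math.exp(-lam * b) / Z
mean_sample = sum(slips) / len(slips)
```

The hard upper bound `b` is what distinguishes this law from a plain exponential and mirrors the physical cap on maximum possible slip mentioned in the abstract.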

  6. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran K. S.

    2016-07-13

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.

  7. Reduced heme levels underlie the exponential growth defect of the Shewanella oneidensis hfq mutant.

    Directory of Open Access Journals (Sweden)

    Christopher M Brennan

    Full Text Available The RNA chaperone Hfq fulfills important roles in small regulatory RNA (sRNA function in many bacteria. Loss of Hfq in the dissimilatory metal reducing bacterium Shewanella oneidensis strain MR-1 results in slow exponential phase growth and a reduced terminal cell density at stationary phase. We have found that the exponential phase growth defect of the hfq mutant in LB is the result of reduced heme levels. Both heme levels and exponential phase growth of the hfq mutant can be completely restored by supplementing LB medium with 5-aminolevulinic acid (5-ALA, the first committed intermediate synthesized during heme synthesis. Increasing expression of gtrA, which encodes the enzyme that catalyzes the first step in heme biosynthesis, also restores heme levels and exponential phase growth of the hfq mutant. Taken together, our data indicate that reduced heme levels are responsible for the exponential growth defect of the S. oneidensis hfq mutant in LB medium and suggest that the S. oneidensis hfq mutant is deficient in heme production at the 5-ALA synthesis step.

  8. Subcriticality determination of low-enriched UO2 lattices in water by exponential experiment

    International Nuclear Information System (INIS)

    Suzaki, Takenori

    1991-01-01

    To determine the static k (effective neutron multiplication factor) ranging from the critical to extremely subcritical states, the exponential experiments were performed using various sizes of light-water moderated and reflected low-enriched UO2 lattice cores. For comparison, the pulsed neutron source experiments were also carried out. In the manner of Gozani's bracketing method applied to the pulsed source experiment, a formula to obtain k from the measured spatial-decay constant was derived on the basis of diffusion theory. Parameters in the formulas needed to obtain k from the respective experiments were evaluated by 4-group neutron diffusion calculations. The results of the exponential experiments agreed well with those of the pulsed source experiments, the 4-group diffusion calculations and the 137-group Monte Carlo calculations. Therefore, the present data-processing method developed for the exponential experiment was demonstrated to be valid. Besides, through the examination of the parameters used in the data processing, it was found that the dependence of the parameter values upon k is weak in the exponential experiment compared with that in the pulsed source experiment. This indicates the superiority of the exponential experiment over the pulsed source experiment for subcriticality determination over a wide range. (author)
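
The core data-processing step, extracting a spatial-decay constant from an exponentially decaying flux profile, can be sketched with a log-linear least-squares fit. The profile, positions, and decay constant below are synthetic, purely for illustration:

```python
import math

# Synthetic axial flux profile phi(z) = A * exp(-gamma * z); the decay
# constant gamma (per cm) and amplitude A are hypothetical values.
gamma_true, A = 0.045, 1.0e6
zs = [10.0 * i for i in range(12)]             # detector positions (cm)
phis = [A * math.exp(-gamma_true * z) for z in zs]

# Least-squares slope of ln(phi) versus z recovers the spatial-decay constant
n = len(zs)
logs = [math.log(p) for p in phis]
zbar = sum(zs) / n
lbar = sum(logs) / n
slope = sum((z - zbar) * (l - lbar) for z, l in zip(zs, logs)) / \
        sum((z - zbar) ** 2 for z in zs)
gamma_fit = -slope
```

In the experiment this fitted decay constant, rather than the raw flux, is what enters the diffusion-theory formula for k.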

  9. Critical slowing down and error analysis in lattice QCD simulations

    Energy Technology Data Exchange (ETDEWEB)

    Virotta, Francesco

    2012-02-21

    In this work we investigate the critical slowing down of lattice QCD simulations. We perform a preliminary study in the quenched approximation where we find that our estimate of the exponential auto-correlation time scales as τ_exp(a) ∝ a^(-5), where a is the lattice spacing. In unquenched simulations with O(a) improved Wilson fermions we do not obtain a scaling law but find results compatible with the behavior that we find in the pure gauge theory. The discussion is supported by a large set of ensembles both in pure gauge and in the theory with two degenerate sea quarks. We have moreover investigated the effect of slow algorithmic modes in the error analysis of the expectation value of typical lattice QCD observables (hadronic matrix elements and masses). In the context of simulations affected by slow modes we propose and test a method to obtain reliable estimates of statistical errors. The method is supposed to help in the typical algorithmic setup of lattice QCD, namely when the total statistics collected is of O(10) τ_exp. This is the typical case when simulating close to the continuum limit where the computational costs for producing two independent data points can be extremely large. We finally discuss the scale setting in N_f = 2 simulations using the Kaon decay constant f_K as physical input. The method is explained together with a thorough discussion of the error analysis employed. A description of the publicly available code used for the error analysis is included.

  10. Critical slowing down and error analysis in lattice QCD simulations

    International Nuclear Information System (INIS)

    Virotta, Francesco

    2012-01-01

    In this work we investigate the critical slowing down of lattice QCD simulations. We perform a preliminary study in the quenched approximation where we find that our estimate of the exponential auto-correlation time scales as τ_exp(a) ∝ a^(-5), where a is the lattice spacing. In unquenched simulations with O(a) improved Wilson fermions we do not obtain a scaling law but find results compatible with the behavior that we find in the pure gauge theory. The discussion is supported by a large set of ensembles both in pure gauge and in the theory with two degenerate sea quarks. We have moreover investigated the effect of slow algorithmic modes in the error analysis of the expectation value of typical lattice QCD observables (hadronic matrix elements and masses). In the context of simulations affected by slow modes we propose and test a method to obtain reliable estimates of statistical errors. The method is supposed to help in the typical algorithmic setup of lattice QCD, namely when the total statistics collected is of O(10) τ_exp. This is the typical case when simulating close to the continuum limit where the computational costs for producing two independent data points can be extremely large. We finally discuss the scale setting in N_f = 2 simulations using the Kaon decay constant f_K as physical input. The method is explained together with a thorough discussion of the error analysis employed. A description of the publicly available code used for the error analysis is included.
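
The autocorrelation times at issue here can be illustrated on a synthetic AR(1) chain, for which the integrated autocorrelation time is known in closed form, τ_int = (1 + ρ) / (2(1 − ρ)). The chain parameters and the fixed summation window are illustrative choices; real lattice QCD analyses use more careful automatic windowing:

```python
import random

random.seed(1)

# AR(1) chain x_t = rho * x_{t-1} + noise; for rho = 0.9 the integrated
# autocorrelation time is (1 + rho) / (2 * (1 - rho)) = 9.5.
rho, N = 0.9, 50_000
x = [0.0]
for _ in range(N - 1):
    x.append(rho * x[-1] + random.gauss(0.0, 1.0))

mean = sum(x) / N
var = sum((v - mean) ** 2 for v in x) / N

def autocorr(t):
    """Normalized autocorrelation at lag t."""
    s = sum((x[i] - mean) * (x[i + t] - mean) for i in range(N - t))
    return s / ((N - t) * var)

# Sum normalized autocorrelations up to a fixed window W
W = 60
tau_int = 0.5 + sum(autocorr(t) for t in range(1, W))
tau_theory = (1 + rho) / (2 * (1 - rho))      # = 9.5
```

With total statistics of only O(10) τ_exp, this sum becomes very noisy, which is exactly the regime the proposed error-analysis method targets.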

  11. Double-exponential decay of orientational correlations in semiflexible polyelectrolytes.

    Science.gov (United States)

    Bačová, P; Košovan, P; Uhlík, F; Kuldová, J; Limpouchová, Z; Procházka, K

    2012-06-01

    In this paper we revisited the problem of the persistence length of polyelectrolytes. We performed a series of Molecular Dynamics simulations using the Debye-Hückel approximation for electrostatics to test several equations which go beyond the classical description of Odijk, Skolnick and Fixman (OSF). The data confirm earlier observations that in the limit of large contour separations the decay of orientational correlations can be described by a single-exponential function and the decay length can be described by the OSF relation. However, at short contour separations the behaviour is more complex. Recent equations which introduce more complicated expressions and an additional length scale could describe the results very well on both the short and the long length scale. The equation of Manghi and Netz, when used without adjustable parameters, could capture the qualitative trend but deviated in a quantitative comparison. Better quantitative agreement within the estimated error could be obtained using three equations with one adjustable parameter: 1) the equation of Manghi and Netz; 2) the equation proposed by us in this paper; 3) the equation proposed by Cannavacciuolo and Pedersen. Two characteristic length scales can be identified in the data: the intrinsic or bare persistence length and the electrostatic persistence length. All three equations use a single parameter to describe a smooth crossover from the short-range behaviour dominated by the intrinsic stiffness of the chain to the long-range OSF-like behaviour.

  12. Esscher transforms and the minimal entropy martingale measure for exponential Lévy models

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Sgarra, C.

    In this paper we offer a systematic survey and comparison of the Esscher martingale transform for linear processes, the Esscher martingale transform for exponential processes, and the minimal entropy martingale measure for exponential Lévy models and present some new results in order to give...

  13. Singularity-Free Neural Control for the Exponential Trajectory Tracking in Multiple-Input Uncertain Systems with Unknown Deadzone Nonlinearities

    Directory of Open Access Journals (Sweden)

    J. Humberto Pérez-Cruz

    2014-01-01

    Full Text Available The trajectory tracking for a class of uncertain nonlinear systems in which the number of possible states is equal to the number of inputs and each input is preceded by an unknown symmetric deadzone is considered. The unknown dynamics is identified by means of a continuous time recurrent neural network in which the control singularity is conveniently avoided by guaranteeing the invertibility of the coupling matrix. Given this neural network-based mathematical model of the uncertain system, a singularity-free feedback linearization control law is developed in order to compel the system state to follow a reference trajectory. By means of Lyapunov-like analysis, the exponential convergence of the tracking error to a bounded zone can be proven. Likewise, the boundedness of all closed-loop signals can be guaranteed.

  14. Exponential Synchronization of Uncertain Complex Dynamical Networks with Delay Coupling

    International Nuclear Information System (INIS)

    Wang Lifu; Kong Zhi; Jing Yuanwei

    2010-01-01

    This paper studies the global exponential synchronization of uncertain complex delayed dynamical networks. The network model considered is general dynamical delay networks with unknown network structure and unknown coupling functions but bounded. Novel delay-dependent linear controllers are designed via the Lyapunov stability theory. Especially, it is shown that the controlled networks are globally exponentially synchronized with a given convergence rate. An example of typical dynamical network of this class, having the Lorenz system at each node, has been used to demonstrate and verify the novel design proposed. And, the numerical simulation results show the effectiveness of proposed synchronization approaches. (general)

  15. Minimizing the effect of exponential trends in detrended fluctuation analysis

    International Nuclear Information System (INIS)

    Xu Na; Shang Pengjian; Kamae, Santi

    2009-01-01

    The detrended fluctuation analysis (DFA) and its extensions (MF-DFA) have been used extensively to determine possible long-range correlations in time series. However, recent studies have reported the susceptibility of DFA to trends, which give rise to spurious crossovers and prevent reliable estimation of the scaling exponents. In this report, a smoothing algorithm based on the discrete Fourier transform (DFT) is proposed to minimize the effect of exponential trends and distortion in the log-log plots obtained by MF-DFA techniques. The effectiveness of the technique is demonstrated on monofractal and multifractal data corrupted with exponential trends.
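
A minimal pure-Python sketch of the DFA procedure itself: integrate the series, detrend each window with a least-squares line, and read the scaling exponent off the log-log slope. It is applied here to white noise, for which the exponent is ≈ 0.5; all parameters are illustrative, and the DFT-based smoothing step proposed in the report is omitted:

```python
import math
import random

random.seed(7)
N = 4096
noise = [random.gauss(0.0, 1.0) for _ in range(N)]   # white noise: alpha ~ 0.5

def dfa_fluct(series, s):
    """RMS fluctuation F(s) of the integrated, window-detrended profile."""
    # Step 1: integrate the mean-removed series (the "profile")
    m = sum(series) / len(series)
    y, tot = [], 0.0
    for v in series:
        tot += v - m
        y.append(tot)
    # Step 2: detrend each non-overlapping window of size s with a LS line
    nwin = len(y) // s
    sq = 0.0
    for w in range(nwin):
        seg = y[w * s:(w + 1) * s]
        xs = range(s)
        xb, yb = (s - 1) / 2.0, sum(seg) / s
        beta = sum((xi - xb) * (yi - yb) for xi, yi in zip(xs, seg)) / \
               sum((xi - xb) ** 2 for xi in xs)
        sq += sum((yi - (yb + beta * (xi - xb))) ** 2
                  for xi, yi in zip(xs, seg))
    return math.sqrt(sq / (nwin * s))

# Step 3: scaling exponent = slope of log F(s) versus log s
scales = [16, 32, 64, 128]
logS = [math.log(s) for s in scales]
logF = [math.log(dfa_fluct(noise, s)) for s in scales]
sb, fb = sum(logS) / len(logS), sum(logF) / len(logF)
alpha = sum((a - sb) * (b - fb) for a, b in zip(logS, logF)) / \
        sum((a - sb) ** 2 for a in logS)
```

An exponential trend added to `noise` would bend this log-log plot and produce a spurious crossover, which is the distortion the proposed smoothing algorithm is designed to remove.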

  16. On exponential stability and periodic solutions of CNNs with delays

    Science.gov (United States)

    Cao, Jinde

    2000-03-01

    In this Letter, the author analyses further problems of global exponential stability and the existence of periodic solutions of cellular neural networks with delays (DCNNs). Some simple and new sufficient conditions are given ensuring global exponential stability and the existence of periodic solutions of DCNNs by applying some new analysis techniques and constructing suitable Lyapunov functionals. These conditions have important leading significance in the design and applications of globally stable DCNNs and periodic oscillatory DCNNs and are weaker than those in the earlier works [Phys. Rev. E 60 (1999) 3244], [J. Comput. Syst. Sci. 59 (1999)].

  17. Intersection of the Exponential and Logarithmic Curves

    Science.gov (United States)

    Boukas, Andreas; Valahas, Theodoros

    2009-01-01

    The study of the number of intersection points of y = a[superscript x] and y = log[subscript a]x can be an interesting topic to present in a single-variable calculus class. In this article, the authors present a classroom presentation outline involving the basic algebra and the elementary calculus of the exponential and logarithmic functions. The…

  18. Errors in chest x-ray interpretation

    International Nuclear Information System (INIS)

    Woznitza, N.; Piper, K.

    2015-01-01

    Full text: Reporting of adult chest x-rays by appropriately trained radiographers is frequently used in the United Kingdom as one method to maintain a patient-focused radiology service in times of increasing workload. With models of advanced practice being developed in Australia, New Zealand and Canada, the spotlight is on the evidence base which underpins radiographer reporting. It is essential that any radiographer who extends their scope of practice to incorporate definitive clinical reporting perform at a level comparable to a consultant radiologist. In any analysis of performance it is important to quantify levels of sensitivity and specificity and to evaluate areas of error and variation. A critical review of the errors made by reporting radiographers in the interpretation of adult chest x-rays will be performed, examining performance in structured clinical examinations, clinical audit and a diagnostic accuracy study from research undertaken by the authors, and including studies which have compared the performance of reporting radiographers and consultant radiologists. Overall performance will be examined and common errors discussed using a case-based approach. Methods of error reduction, including multidisciplinary team meetings and ongoing learning, will be considered

  19. Circuit simulation of exponential transmission line for petawatt Z-pinch plasma drivers

    International Nuclear Information System (INIS)

    Zeng Zhengzhong

    2011-01-01

    It was demonstrated, based on PSPICE circuit simulation, that the sectioning number for the circuit simulation of an exponential transmission line should be determined as twice the line's one-way electromagnetic wave transport time (electric length) divided by the wave-front of the input pulse, owing to elimination of the wave reflections caused by artificial impedance discontinuity in the line's circuit simulation model, which employs a serial and sectional transmission line with impedances constant in each section but stair-step-varied between sections, and with total electric length the same as that of the exponential line under simulation. A pulse with a 112.2 ns wave-front propagating through an exponential water transmission line of 1234.2 ns one-way transport time gives the best sectioning number of 22, when the constant impedance of each section is given by the geometric mean of the two ends' impedances of the corresponding section on the exponential line under simulation. This sectioning rule is equivalent to the statement that the two-way transport time of each section should be equal to the input pulse's wave-front. (authors)
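
The sectioning rule stated above is a one-line computation, shown here together with the geometric-mean impedance assignment for each section. The transport time and wave-front are the values quoted in the abstract; the end impedances of the line are hypothetical:

```python
import math

# Sectioning rule from the abstract:
# sections = 2 * (one-way transport time) / (input pulse wave-front)
transport_time_ns = 1234.2
wavefront_ns = 112.2
n_sections = round(2 * transport_time_ns / wavefront_ns)

# Stair-step impedances for an exponential line running from Z_in to Z_out
# (hypothetical end impedances, ohms): each section takes the geometric
# mean of its two end impedances on the exponential profile.
Z_in, Z_out = 1.0, 10.0
k = math.log(Z_out / Z_in) / n_sections
edges = [Z_in * math.exp(k * i) for i in range(n_sections + 1)]
Z_sections = [math.sqrt(edges[i] * edges[i + 1]) for i in range(n_sections)]
```

With these numbers the rule indeed yields 22 sections, each with a two-way transport time equal to the 112.2 ns wave-front.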

  20. ERROR REDUCTION IN DUCT LEAKAGE TESTING THROUGH DATA CROSS-CHECKS

    Energy Technology Data Exchange (ETDEWEB)

    ANDREWS, J.W.

    1998-12-31

    One way to reduce uncertainty in scientific measurement is to devise a protocol in which more quantities are measured than are absolutely required, so that the result is overconstrained. This report develops such a method for combining data from two different tests for air leakage in residential duct systems. An algorithm, which depends on the uncertainty estimates for the measured quantities, optimizes the use of the excess data. In many cases it can significantly reduce the error bar on at least one of the two measured duct leakage rates (supply or return), and it provides a rational method of reconciling any conflicting results from the two leakage tests.
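
The report's algorithm is not given in the abstract, but the general idea of exploiting overconstrained data can be sketched with inverse-variance weighting of two independent estimates of the same leakage rate (all numbers are hypothetical):

```python
# Two independent estimates of the same supply-duct leakage rate and their
# one-sigma uncertainties (hypothetical values, cfm)
q1, s1 = 120.0, 20.0        # from test A
q2, s2 = 100.0, 10.0        # from test B

# Inverse-variance weighting gives the minimum-variance combination and a
# smaller error bar than either measurement alone.
w1, w2 = 1.0 / s1**2, 1.0 / s2**2
q_hat = (w1 * q1 + w2 * q2) / (w1 + w2)
s_hat = (w1 + w2) ** -0.5
```

Here `q_hat` lands nearer the more precise test and `s_hat` is below both input uncertainties, which is the sense in which the excess data reduce the error bar.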

  1. Chemical model reduction under uncertainty

    KAUST Repository

    Najm, Habib; Galassi, R. Malpica; Valorani, M.

    2016-01-01

    We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.

  2. Chemical model reduction under uncertainty

    KAUST Repository

    Najm, Habib

    2016-01-05

    We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.

  3. Error in the delivery of radiation therapy: Results of a quality assurance review

    International Nuclear Information System (INIS)

    Huang, Grace; Medlam, Gaylene; Lee, Justin; Billingsley, Susan; Bissonnette, Jean-Pierre; Ringash, Jolie; Kane, Gabrielle; Hodgson, David C.

    2005-01-01

    Purpose: To examine error rates in the delivery of radiation therapy (RT), technical factors associated with RT errors, and the influence of a quality improvement intervention on the RT error rate. Methods and materials: We undertook a review of all RT errors that occurred at the Princess Margaret Hospital (Toronto) from January 1, 1997, to December 31, 2002. Errors were identified according to incident report forms that were completed at the time the error occurred. Error rates were calculated per patient, per treated volume (≥1 volume per patient), and per fraction delivered. The association between tumor site and error was analyzed. Logistic regression was used to examine the association between technical factors and the risk of error. Results: Over the study interval, there were 555 errors among 28,136 patient treatments delivered (error rate per patient = 1.97%, 95% confidence interval [CI], 1.81-2.14%) and among 43,302 treated volumes (error rate per volume = 1.28%, 95% CI, 1.18-1.39%). The proportion of fractions with errors from July 1, 2000, to December 31, 2002, was 0.29% (95% CI, 0.27-0.32%). Patients with sarcoma or head-and-neck tumors experienced error rates significantly higher than average (5.54% and 4.58%, respectively); however, when the number of treated volumes was taken into account, the head-and-neck error rate was no longer higher than average (1.43%). The use of accessories was associated with an increased risk of error, and internal wedges were more likely to be associated with an error than external wedges (relative risk = 2.04; 95% CI, 1.11-3.77%). Eighty-seven errors (15.6%) were directly attributed to incorrect programming of the 'record and verify' system. Changes to planning and treatment processes aimed at reducing errors within the head-and-neck site group produced a substantial reduction in the error rate. Conclusions: Errors in the delivery of RT are uncommon and usually of little clinical significance. Patient subgroups and
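
The per-patient error rate and its confidence interval quoted above can be reproduced with a normal-approximation binomial CI:

```python
import math

# Counts from the study: 555 errors among 28,136 patient treatments
errors, patients = 555, 28136

p = errors / patients                          # per-patient error rate
se = math.sqrt(p * (1 - p) / patients)         # binomial standard error
lo, hi = p - 1.96 * se, p + 1.96 * se          # normal-approximation 95% CI
```

This recovers the reported 1.97% rate with a 95% CI of roughly 1.81-2.14%, matching the abstract.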

  4. Errors in dual x-ray beam differential absorptiometry

    International Nuclear Information System (INIS)

    Bolin, F.; Preuss, L.; Gilbert, K.; Bugenis, C.

    1977-01-01

    Errors pertinent to the dual beam absorptiometry system have been studied and five areas are given in detail: (1) scattering, in which a computer analysis of multiple scattering shows little error due to this effect; (2) geometrical configuration effects, in which the slope of the sample is shown to influence the accuracy of the measurement; (3) Poisson variations, wherein it is shown that a simultaneous reduction can be obtained in both dosage and statistical error; (4) absorption coefficients, in which the effect of variation in absorption coefficient compilations is shown to have a critical effect on the interpretations of experimental data; and (5) filtering, wherein is shown the need for filters on dual beam systems using a characteristic x-ray output. A zero filter system is outlined

  5. Textbook Error: Short Circuiting on Electrochemical Cell

    Science.gov (United States)

    Bonicamp, Judith M.; Clark, Roy W.

    2007-01-01

    Short circuiting an electrochemical cell is an unreported but persistent error in electrochemistry textbooks. It is suggested that diagrams depicting a cell delivering usable current to a load be postponed, that the theory of open-circuit galvanic cells be explained, and that the voltages from the tables of standard reduction potentials be calculated and…

  6. Einstein, the exponential metric, and a proposed gravitational Michelson-Morley experiment

    International Nuclear Information System (INIS)

    Yilmaz, H.

    1979-01-01

    An early but potentially important remark of Einstein on the exponential nature of time-dilation is discussed. Using the same argument for the length-contraction, plus two alternative kinematical assumptions, the Schwarzschild and exponential metrics are derived. A gravitational Michelson-Morley experiment with one arm directed along the vertical is proposed to test the metrics. The experiment may be considered as a laboratory test of the Schwarzschild field and possibly a test of the black-hole interpretation of collapsed matter

  7. Forecasting Financial Extremes: A Network Degree Measure of Super-exponential Growth

    OpenAIRE

    Wanfeng Yan; Edgar van Tuyll van Serooskerken

    2015-01-01

    Investors in stock market are usually greedy during bull markets and scared during bear markets. The greed or fear spreads across investors quickly. This is known as the herding effect, and often leads to a fast movement of stock prices. During such market regimes, stock prices change at a super-exponential rate and are normally followed by a trend reversal that corrects the previous over reaction. In this paper, we construct an indicator to measure the magnitude of the super-exponential grow...

  8. A modified exponential behavioral economic demand model to better describe consumption data.

    Science.gov (United States)

    Koffarnus, Mikhail N; Franck, Christopher T; Stein, Jeffrey S; Bickel, Warren K

    2015-12-01

    Behavioral economic demand analyses that quantify the relationship between the consumption of a commodity and its price have proven useful in studying the reinforcing efficacy of many commodities, including drugs of abuse. An exponential equation proposed by Hursh and Silberberg (2008) has proven useful in quantifying the dissociable components of demand intensity and demand elasticity, but is limited as an analysis technique by the inability to correctly analyze consumption values of zero. We examined an exponentiated version of this equation that retains all the beneficial features of the original Hursh and Silberberg equation, but can accommodate consumption values of zero and improves its fit to the data. In Experiment 1, we compared the modified equation with the unmodified equation under different treatments of zero values in cigarette consumption data collected online from 272 participants. We found that the unmodified equation produces different results depending on how zeros are treated, while the exponentiated version incorporates zeros into the analysis, accounts for more variance, and is better able to estimate actual unconstrained consumption as reported by participants. In Experiment 2, we simulated 1,000 datasets with demand parameters known a priori and compared the equation fits. Results indicated that the exponentiated equation was better able to replicate the true values from which the test data were simulated. We conclude that an exponentiated version of the Hursh and Silberberg equation provides better fits to the data, is able to fit all consumption values including zero, and more accurately produces true parameter values. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
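
The exponentiated form described above models consumption Q directly rather than log Q, so zero-consumption data points pose no problem: Q = Q0 · 10^(k·(e^(−α·Q0·C) − 1)). A short sketch with illustrative parameter values (not fitted to the cigarette data from the study):

```python
import math

def demand(C, Q0, alpha, k):
    """Exponentiated demand curve: Q = Q0 * 10**(k * (exp(-alpha*Q0*C) - 1))."""
    return Q0 * 10.0 ** (k * (math.exp(-alpha * Q0 * C) - 1.0))

# Illustrative parameters: demand intensity Q0, elasticity alpha, range k
Q0, alpha, k = 20.0, 0.01, 2.0
prices = [0.0, 0.5, 1.0, 5.0, 10.0]
qs = [demand(C, Q0, alpha, k) for C in prices]
```

At zero price the curve returns the intensity Q0 exactly, and consumption falls smoothly toward a floor of Q0·10^(−k) as price rises; because the model predicts Q itself, observed zeros can be included in the fit without any ad hoc transformation.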

  9. Exponential stability of fuzzy cellular neural networks with constant and time-varying delays

    International Nuclear Information System (INIS)

    Liu Yanqing; Tang Wansheng

    2004-01-01

    In this Letter, the global exponential stability of delayed fuzzy cellular neural networks (FCNN) with either constant or time-varying delays is studied. Firstly, we establish the existence and uniqueness of the equilibrium point using the theory of topological degree and the properties of nonsingular M-matrices, and give sufficient conditions for global exponential stability by constructing a suitable Lyapunov functional. Secondly, criteria guaranteeing the global exponential stability of FCNN with time-varying delays are given, and an estimate of the exponential convergence rate with respect to the rate of variation of the delays is presented, again by constructing a suitable Lyapunov functional.

  10. A perturbational h4 exponential finite difference scheme for the convective diffusion equation

    International Nuclear Information System (INIS)

    Chen, G.Q.; Gao, Z.; Yang, Z.F.

    1993-01-01

    A perturbational h⁴ compact exponential finite difference scheme with a diagonally dominant coefficient matrix and upwind effect is developed for the convective diffusion equation. Second-order perturbations are applied to the convective coefficients and source term of an h² exponential finite difference scheme, proposed in this paper on the basis of a transformation that eliminates the upwind effect of the convective diffusion equation. Four numerical examples, including one- to three-dimensional model equations of fluid flow and a problem of natural convective heat transfer, illustrate the excellent behavior of the present exponential schemes. In addition, the h⁴ accuracy of the perturbational scheme is verified using double-precision arithmetic.
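    The paper's perturbational h⁴ scheme is not reproduced here, but the basic idea of an exponential (exponentially fitted) finite difference scheme can be illustrated with the classical Il'in-Allen-Southwell scheme for the 1D constant-coefficient convective diffusion equation a·u′ = D·u″, which is nodally exact; a sketch with illustrative parameters:

```python
import numpy as np

# Steady 1D convection-diffusion: a*u' = D*u'' on (0,1), u(0)=0, u(1)=1.
a, D, n = 1.0, 0.05, 20           # cell Peclet number rho = a*h/(2D) = 0.5
h = 1.0 / n
rho = a * h / (2.0 * D)
D_eff = D * rho / np.tanh(rho)    # exponentially fitted diffusion coefficient

# Central-difference stencil with the fitted coefficient (interior nodes).
N = n - 1
lower = -D_eff / h**2 - a / (2.0 * h)
diag = 2.0 * D_eff / h**2
upper = -D_eff / h**2 + a / (2.0 * h)
A = (np.diag(np.full(N, diag))
     + np.diag(np.full(N - 1, lower), -1)
     + np.diag(np.full(N - 1, upper), 1))
b = np.zeros(N)
b[-1] = -upper * 1.0              # boundary condition u(1) = 1

u = np.linalg.solve(A, b)

x = np.linspace(h, 1.0 - h, N)
exact = (np.exp(a * x / D) - 1.0) / (np.exp(a / D) - 1.0)
print(np.max(np.abs(u - exact)))  # nodally exact up to round-off
```

The exponential fitting replaces D by D·ρ·coth(ρ), which makes the discrete solution coincide with the exact solution at the grid nodes for constant coefficients, regardless of the cell Peclet number.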

  11. Mean Square Exponential Stability of Stochastic Switched System with Interval Time-Varying Delays

    Directory of Open Access Journals (Sweden)

    Manlika Rajchakit

    2012-01-01

    Full Text Available This paper is concerned with the mean square exponential stability of switched stochastic systems with interval time-varying delays. The time delay is any continuous function belonging to a given interval, but not necessarily differentiable. By constructing a suitable augmented Lyapunov-Krasovskii functional combined with the Leibniz-Newton formula, a switching rule for the mean square exponential stability of switched stochastic systems with interval time-varying delays and new delay-dependent sufficient conditions for the mean square exponential stability of the switched stochastic system are first established in terms of LMIs. A numerical example is given to show the effectiveness of the obtained result.

  12. Progressive Exponential Clustering-Based Steganography

    Directory of Open Access Journals (Sweden)

    Li Yue

    2010-01-01

    Full Text Available Cluster indexing-based steganography is an important branch of data-hiding techniques. Such schemes normally achieve good balance between high embedding capacity and low embedding distortion. However, most cluster indexing-based steganographic schemes utilise less efficient clustering algorithms for embedding data, which causes redundancy and leaves room for increasing the embedding capacity further. In this paper, a new clustering algorithm, called progressive exponential clustering (PEC, is applied to increase the embedding capacity by avoiding redundancy. Meanwhile, a cluster expansion algorithm is also developed in order to further increase the capacity without sacrificing imperceptibility.

  13. Power-law versus exponential relaxation of {sup 29}Si nucleus spins in Si:B crystals

    Energy Technology Data Exchange (ETDEWEB)

    Koplak, O.V. [Institute of Problems of Chemical Physics, 142432 Chernogolovka, Moscow (Russian Federation); Taras Shevchenko Kiev National University and National Academy of Sciences, 01033 Kiev (Ukraine); Talantsev, A.D., E-mail: adt@icp.ac.ru [Institute of Problems of Chemical Physics, 142432 Chernogolovka, Moscow (Russian Federation); Morgunov, R.B. [Institute of Problems of Chemical Physics, 142432 Chernogolovka, Moscow (Russian Federation); Sholokhov Moscow State University for the Humanities, 109240 Moscow (Russian Federation)

    2016-02-15

    The Si:B micro-crystals enriched with the ²⁹Si isotope have been studied by high-resolution nuclear magnetic resonance (NMR) in the 300–800 K temperature range. The recovery of nuclear magnetization saturated by radiofrequency pulses follows pure power-law kinetics at 300 K, while an admixture of exponential relaxation appears at 500 K. The power-law relaxation corresponds to direct electron–nuclear relaxation due to the inhomogeneous distribution of paramagnetic centers, while the exponential kinetics corresponds to the nuclear spin diffusion mechanism. The inhomogeneous distribution of deformation defects is the most probable reason for the power-law kinetics of nuclear spin relaxation. - Highlights: • ²⁹Si nuclear magnetization relaxation follows a mixed power-exponential law. • The power law corresponds to direct electron–nuclear relaxation. • The admixture of exponential relaxation corresponds to nuclear spin diffusion. • Inhomogeneously distributed deformation defects are responsible for the power law. • Homogeneously distributed boron acceptors are responsible for the exponential part.
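    The practical distinction between the two relaxation laws is that a power law is a straight line in log-log coordinates while an exponential is a straight line in semi-log coordinates. A sketch on synthetic (not the paper's NMR) recovery data:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.logspace(0, 3, 40)                             # delay times (arbitrary units)
m = t ** -0.5 * np.exp(rng.normal(0, 0.02, t.size))   # synthetic power-law decay

def r2(x, y):
    """R^2 of a straight-line least-squares fit."""
    c = np.polyfit(x, y, 1)
    res = y - np.polyval(c, x)
    return 1.0 - np.sum(res**2) / np.sum((y - y.mean())**2)

r2_powerlaw = r2(np.log(t), np.log(m))    # power law: line in log-log
r2_exponential = r2(t, np.log(m))         # exponential: line in semi-log

print(r2_powerlaw > r2_exponential)       # True for power-law kinetics
```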

  14. The systems approach to error reduction: factors influencing inoculation injury reporting in the operating theatre.

    Science.gov (United States)

    Cutter, Jayne; Jordan, Sue

    2013-11-01

    To examine the frequency of, and factors influencing, reporting of mucocutaneous and percutaneous injuries in operating theatres. Surgeons and peri-operative nurses risk acquiring blood-borne viral infections during surgical procedures. Appropriate first-aid and prophylactic treatment after an injury can significantly reduce the risk of infection. However, studies indicate that injuries often go unreported. The 'systems approach' to error reduction relies on reporting incidents and near misses. Failure to report will compromise safety. A postal survey of all surgeons and peri-operative nurses engaged in exposure prone procedures in nine Welsh hospitals, face-to-face interviews with selected participants and telephone interviews with Infection Control Nurses. The response rate was 51.47% (315/612). Most respondents reported one or more percutaneous (183/315, 58.1%) and/or mucocutaneous injuries (68/315, 21.6%) in the 5 years preceding the study. Only 54.9% (112/204) reported every injury. Surgeons were poorer at reporting: 70/133 (52.6%) reported all or >50% of their injuries compared with 65/71 nurses (91.5%). Injuries are frequently under-reported, possibly compromising safety in operating theatres. A significant number of inoculation injuries are not reported. Factors influencing under-reporting were identified. This knowledge can assist managers in improving reporting and encouraging a robust safety culture within operating departments. © 2012 John Wiley & Sons Ltd.

  15. Exponential stability of delayed recurrent neural networks with Markovian jumping parameters

    International Nuclear Information System (INIS)

    Wang Zidong; Liu Yurong; Yu Li; Liu Xiaohui

    2006-01-01

    In this Letter, the global exponential stability analysis problem is considered for a class of recurrent neural networks (RNNs) with time delays and Markovian jumping parameters. The jumping parameters considered here are generated by a continuous-time, discrete-state homogeneous Markov process with a finite state space. The purpose of the problem addressed is to derive easy-to-test conditions under which the dynamics of the neural network is stochastically exponentially stable in the mean square, independent of the time delay. By employing a new Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish the desired sufficient conditions; therefore the global exponential stability in the mean square of the delayed RNNs can be easily checked using the numerically efficient Matlab LMI toolbox, with no tuning of parameters required. A numerical example is exploited to show the usefulness of the derived LMI-based stability conditions.

  16. Fractal-based exponential distribution of urban density and self-affine fractal forms of cities

    International Nuclear Information System (INIS)

    Chen Yanguang; Feng Jian

    2012-01-01

    Highlights: ► The model of urban population density differs from the common exponential function. ► Fractal landscape influences the exponential distribution of urban density. ► The exponential distribution of urban population suggests a self-affine fractal. ► Urban space can be divided into three layers with scaling and non-scaling regions. ► The dimension of urban form with a characteristic scale can be treated as 2. - Abstract: Urban population density always follows the exponential distribution and can be described with Clark's model. Because of this, the spatial distribution of urban population used to be regarded as a non-fractal pattern. However, Clark's model differs from the exponential function in mathematics because urban population is distributed on the fractal support of landform and land-use form. By using mathematical transforms and empirical evidence, we argue that there are self-affine scaling relations and local power laws behind the exponential distribution of urban density. The scale parameter of Clark's model, indicating the characteristic radius of cities, is not a real constant but depends on the urban field we defined. So the exponential model suggests a local fractal structure with two kinds of fractal parameters. The parameters can be used to characterize urban space filling, spatial correlation, self-affine properties, and self-organized evolution. The case study of the city of Hangzhou, China, is employed to verify the theoretical inference. Based on the empirical analysis, a three-ring model of cities is presented and a city is conceptually divided into three layers from core to periphery. The scaling region and non-scaling region appear alternately in the city. This model may be helpful for future urban studies and city planning.
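    The exponential density law at the heart of Clark's model, ρ(r) = ρ0·exp(−r/r0), is linear in semi-log form, so its characteristic radius can be recovered by a straight-line fit. A minimal sketch on synthetic data (all parameter values illustrative, not from the Hangzhou case study):

```python
import numpy as np

rng = np.random.default_rng(1)
rho0, r0 = 10000.0, 5.0            # peak density (per km^2), characteristic radius (km)
r = np.linspace(0.5, 20.0, 40)     # distances from the city center
rho = rho0 * np.exp(-r / r0) * np.exp(rng.normal(0, 0.05, r.size))

# Clark's model is linear in semi-log form: ln(rho) = ln(rho0) - r/r0,
# so the slope of a straight-line fit gives -1/r0.
slope, intercept = np.polyfit(r, np.log(rho), 1)
r0_hat = -1.0 / slope
rho0_hat = np.exp(intercept)

print(round(r0_hat, 2), round(rho0_hat))
```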

  17. Exponential relationship between DMIPP uptake and blood flow in normal and ischemic canine myocardium

    Energy Technology Data Exchange (ETDEWEB)

    Comans, E.F.I.; Lingen, A. van; Bax, J.J.; Sloof, G.W. [Free Univ. Hospital, Amsterdam (Netherlands). Dept. of Nuclear Medicine; Visser, F.C. [Free Univ. Hospital, Amsterdam (Netherlands). Dept. of Cardiology; Vusse, G.J. van der [Limburg Univ., Maastricht (Netherlands). Cardiovascular Research Inst.; Knapp, F.F. Jun. [Oak Ridge Lab., TN (United States). Nuclear Medicine Group

    1998-12-31

    In 10 open-chest dogs the left anterior descending coronary artery was cannulated and perfused at reduced flow via an extracorporeal bypass (ECB). Myocardial blood flow (MBF) was assessed with scandium-46 labeled microspheres. Forty minutes after i.v. injection of DMIPP, the heart was excised and cut into 120 samples. In each sample MBF (ml/g*min) and DMIPP uptake (percentage of the injected dose per gram: %id/g) were assessed. The relation between normalized MBF and DMIPP uptake was assessed using linear models, with a zero and with a non-zero intercept, and an exponential model function: A[1-e^(-MBF/Fc)], where A and Fc are the amplitude and flow constant, respectively. The goodness of fit for all models was expressed as the standard error of estimate (SEE). In all individual dogs the relation between DMIPP uptake and MBF was significantly better (p<0.001) represented by the exponential model than by the linear model with zero intercept. In 8 of 10 dogs the exponential model showed a better fit than the linear model with a non-zero intercept; the difference was significant (p<0.05) in 5 dogs. For pooled data, linear regression analysis with a non-zero intercept yielded DMIPP=0.54+0.44*MBF (SEE: 0.18) and with a zero intercept DMIPP=0.97*MBF (SEE: 0.27). The goodness of fit of the exponential model, DMIPP=1.07[1-e^(-MBF/0.35)] (SEE: 0.15), was significantly better (p<0.0001) than that of the linear models. In the normal to low MBF range, uptake of the dimethyl-branched fatty acid analogue DMIPP shows an exponential relationship with flow, which is more appropriate than a linear relationship from a physiological point of view. (orig./MG)
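    The pooled-data comparison above can be sketched on synthetic data generated from the reported exponential model DMIPP = 1.07[1 − e^(−MBF/0.35)]; the noise level is an assumption:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
mbf = np.linspace(0.05, 2.0, 120)      # normalized myocardial blood flow
uptake = 1.07 * (1 - np.exp(-mbf / 0.35)) + rng.normal(0, 0.05, mbf.size)

def exp_model(f, amp, fc):
    """Saturating uptake model A*(1 - exp(-MBF/Fc))."""
    return amp * (1 - np.exp(-f / fc))

(amp, fc), _ = curve_fit(exp_model, mbf, uptake, p0=(1.0, 0.5))

def see(resid, n_par):
    """Standard error of estimate."""
    return np.sqrt(np.sum(resid**2) / (resid.size - n_par))

see_exp = see(uptake - exp_model(mbf, amp, fc), 2)
see_lin = see(uptake - np.polyval(np.polyfit(mbf, uptake, 1), mbf), 2)
print(see_exp < see_lin)   # the exponential model fits better
```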

  18. Conditionally exponential convex functions on locally compact groups

    International Nuclear Information System (INIS)

    Okb El-Bab, A.S.

    1992-09-01

    The main results of the thesis are: 1) The construction of a compact base for the convex cone of all conditionally exponential convex functions. 2) The determination of the extreme parts of this cone. Some supplementary lemmas are proved for this purpose. (author). 8 refs

  19. On root mean square approximation by exponential functions

    OpenAIRE

    Sharipov, Ruslan

    2014-01-01

    The problem of root mean square approximation of a square integrable function by finite linear combinations of exponential functions is considered. It is subdivided into linear and nonlinear parts. The linear approximation problem is solved. Then the nonlinear problem is studied in some particular example.
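    The linear part of the problem, finding the best coefficients once the exponents are fixed, is an ordinary least-squares problem. A sketch with a hypothetical target function and hypothetical exponents:

```python
import numpy as np

x = np.linspace(0.0, 5.0, 200)
f = 1.0 / (1.0 + x)                     # hypothetical target function

# Fix the exponents (the nonlinear part); solve only for the coefficients.
lams = np.array([0.5, 1.0, 2.0])
basis = np.exp(-np.outer(x, lams))      # columns: exp(-lam_k * x)

coef, *_ = np.linalg.lstsq(basis, f, rcond=None)
approx = basis @ coef
rms = np.sqrt(np.mean((f - approx) ** 2))
print(rms)
```

The genuinely hard part, optimizing the exponents themselves, is the nonlinear subproblem the abstract refers to.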

  20. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  1. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^(−(dn−1)) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
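    For the distance-3 repetition code under the stochastic Pauli model mentioned above, majority-vote decoding gives a logical error rate of 3p² − 2p³ for independent bit-flip probability p; a coherent rotation by ε maps to p = sin²(ε/2) in this approximation. A sketch (the coherent, non-Pauli corrections analyzed in the paper are not modeled here):

```python
import numpy as np
from itertools import product

def logical_error_majority(d, p):
    """Exact logical error rate of a distance-d repetition code with
    independent bit-flip probability p under majority-vote decoding."""
    total = 0.0
    for flips in product([0, 1], repeat=d):
        if sum(flips) > d // 2:
            total += p ** sum(flips) * (1 - p) ** (d - sum(flips))
    return total

eps = 0.2                       # coherent over-rotation angle (radians)
p = np.sin(eps / 2) ** 2        # equivalent Pauli bit-flip probability
pl = logical_error_majority(3, p)

print(np.isclose(pl, 3 * p**2 - 2 * p**3))   # closed form for d = 3
```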

  2. Equivalent Method of Solving Quantum Efficiency of Reflection-Mode Exponential Doping GaAs Photocathode

    International Nuclear Information System (INIS)

    Jun, Niu; Zhi, Yang; Ben-Kang, Chang

    2009-01-01

    The mathematical expression for the electron diffusion and drift length L_DE of an exponential-doping photocathode is deduced. Substituting L_DE for L_D in the quantum efficiency equation of the reflection-mode uniform-doping cathode yields an equivalent quantum efficiency equation for the reflection-mode exponential-doping cathode. Theoretical simulation and experimental analysis show that the results of the equivalent equation agree with the quantum efficiency of the exponential-doping cathode. The equivalent equation avoids complicated calculation and thereby simplifies solving the quantum efficiency of the exponential-doping photocathode

  3. The true quantum face of the "exponential" decay: Unstable systems in rest and in motion

    Science.gov (United States)

    Urbanowski, K.

    2017-12-01

    Results of theoretical studies and numerical calculations presented in the literature suggest that the survival probability P0(t) has the exponential form starting from times much smaller than the lifetime τ up to times t ≫ τ, and that P0(t) exhibits inverse power-law behavior at late times, for times longer than the so-called crossover time T ≫ τ (the crossover time T is the time when the late-time deviations of P0(t) from the exponential form begin to dominate). A more detailed analysis of the problem shows that in fact the survival probability P0(t) cannot take the pure exponential form in any time interval, including times smaller than or of the order of the lifetime τ, and that it has an oscillating form. We also study the survival probability of moving relativistic unstable particles with definite momentum. These studies show that the late-time deviations of the survival probability of these particles from the exponential-like form of the decay law, that is, the transition-time region between the exponential-like and non-exponential forms of the survival probability, should occur much earlier than follows from the standard classical considerations.

  4. Special deformed exponential functions leading to more consistent Klauder's coherent states

    International Nuclear Information System (INIS)

    El Baz, M.; Hassouni, Y.

    2001-08-01

    We give a general approach for the construction of deformed oscillators, which can be seen as describing deformed bosons. Based on new definitions of certain quantum series, we demonstrate that they reduce to the ordinary exponential functions in the limit where the deformation parameter goes to one. We also prove that these series converge to a complex function within a convergence radius that we calculate. Klauder's Coherent States are explicitly found through these functions, which we designate as deformed exponential functions. (author)
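    The limit behavior described above can be checked numerically for one common convention of deformed exponential, built from the basic number [k]_q = (1 − q^k)/(1 − q); this particular convention is an assumption, not necessarily the one used by the authors:

```python
import numpy as np

def q_factorial(n, q):
    """[n]_q! with the basic number [k]_q = (1 - q**k) / (1 - q)."""
    result = 1.0
    for k in range(1, n + 1):
        result *= (1 - q**k) / (1 - q)
    return result

def q_exp(x, q, terms=60):
    """Deformed exponential: sum of x**n / [n]_q!."""
    return sum(x**n / q_factorial(n, q) for n in range(terms))

# As the deformation parameter q -> 1, the series recovers exp(x).
print(abs(q_exp(1.0, 0.9999) - np.e))
```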

  5. Validation of predicted exponential concentration profiles of chemicals in soils

    International Nuclear Information System (INIS)

    Hollander, Anne; Baijens, Iris; Ragas, Ad; Huijbregts, Mark; Meent, Dik van de

    2007-01-01

    Multimedia mass balance models assume well-mixed homogeneous compartments. Particularly for soils, this does not correspond to reality, which results in potentially large uncertainties in estimates of transport fluxes from soils. A theoretically expected exponential decrease of chemical concentrations with depth has been proposed, but hardly tested against empirical data. In this paper, we explored the correspondence between theoretically predicted soil concentration profiles and 84 field-measured profiles. In most cases, chemical concentrations in soils appear to decline exponentially with depth, and values for the chemical-specific soil penetration depth (d_p) are predicted within one order of magnitude. Overall, the reliability of multimedia models will improve when they account for depth-dependent soil concentrations, so we recommend taking the described theoretical exponential decrease of chemical concentrations with depth into account in chemical fate studies. In this model the d_p values should be estimated either from local conditions or set to a fixed value, which we recommend to be 10 cm for chemicals with log K_ow > 3. - Multimedia mass balance model predictions will improve when taking depth-dependent soil concentrations into account
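    With the exponential profile C(z) = C0·e^(−z/d_p), the fraction of the total soil burden lying above depth z is 1 − e^(−z/d_p). A sketch using the recommended d_p = 10 cm:

```python
import math

def fraction_above(z_cm, d_p_cm=10.0):
    """Fraction of the total soil burden above depth z for an
    exponential concentration profile C(z) = C0 * exp(-z / d_p)."""
    # integral_0^z exp(-s/d_p) ds divided by integral_0^inf exp(-s/d_p) ds
    return 1.0 - math.exp(-z_cm / d_p_cm)

print(round(fraction_above(10.0), 3))   # ~0.632 within one penetration depth
print(round(fraction_above(30.0), 3))   # ~0.95 within three
```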

  6. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Exponential potentials, scaling solutions and inflation

    International Nuclear Information System (INIS)

    Wands, D.; Copeland, E.J.; Liddle, A.R.

    1993-01-01

    The goal of driving a period of rapid inflation in the early universe in a model motivated by grand unified theories has been given new life in recent years in the context of extended gravity theories. Extended inflation is one model based on a Brans-Dicke type gravity which can allow a very general first-order phase transition to complete by changing the expansion of the false-vacuum-dominated universe from an exponential to a power-law expansion. This inflation is conformally equivalent to general relativity where the vacuum energy density depends exponentially on a dilaton field. With this in mind, the authors consider in this paper the evolution of a scalar field σ with a potential V(σ) = V₀ exp(−λκ^(1/2)σ) in a spatially flat (k = 0) Friedmann-Robertson-Walker metric in the presence of a barotropic (P = (γ − 1)ρ) fluid. Here κ = 8πG, and λ is a dimensionless constant describing the steepness of the potential. It is well known that if the potential is sufficiently flat (λ small), the energy density of the scalar field dominates and the universe undergoes power-law inflation. The behavior of fields with a steep potential seems to be less well known, although the results the authors present here are not new. 11 refs., 2 figs
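    For the scalar-dominated case (no barotropic fluid, units with κ = 1), the well-known power-law solution a ∝ t^(2/λ²) can be verified symbolically; the value of V₀ below is the one required by the field equations for this solution:

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)

# Candidate exact solution (kappa = 1 units, scalar field only):
p = 2 / lam**2                      # a(t) ~ t**p, power-law inflation if p > 1
phi = (2 / lam) * sp.log(t)         # scalar field on the solution
V0 = (2 / lam**2) * (6 / lam**2 - 1)
V = V0 * sp.exp(-lam * phi)         # potential evaluated on the solution
dV_dphi = -lam * V                  # V'(phi) for V = V0*exp(-lam*phi)
H = p / t                           # Hubble rate for a ~ t**p

# Klein-Gordon equation: phi'' + 3*H*phi' + V'(phi) = 0
kg = sp.simplify(sp.diff(phi, t, 2) + 3 * H * sp.diff(phi, t) + dV_dphi)

# Flat Friedmann constraint: H**2 = (1/3)*(phi'**2/2 + V)
fr = sp.simplify(H**2 - sp.Rational(1, 3) * (sp.diff(phi, t)**2 / 2 + V))

print(kg, fr)   # both zero: a ~ t**(2/lambda**2) solves the system
```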

  8. Smith-Purcell oscillator in an exponential gain regime

    International Nuclear Information System (INIS)

    Schachter, L.; Ron, A.

    1988-01-01

    A Smith-Purcell oscillator with a thick electron beam is analyzed in its exponential gain regime. A threshold current of less than 1 A is found for a 1 mm wavelength; this threshold is much lower than that of a similar oscillator operating in a linear gain regime

  9. Studying the method of linearization of exponential calibration curves

    International Nuclear Information System (INIS)

    Bunzh, Z.A.

    1989-01-01

    The results of study of the method for linearization of exponential calibration curves are given. The calibration technique and comparison of the proposed method with piecewise-linear approximation and power series expansion, are given

  10. Discrete-ordinate method with matrix exponential for a pseudo-spherical atmosphere: Vector case

    International Nuclear Information System (INIS)

    Doicu, A.; Trautmann, T.

    2009-01-01

    The paper is devoted to the extension of the matrix-exponential formalism for scalar radiative transfer to the vector case. Using basic results from the theory of matrix-exponential functions, we provide a compact and versatile formulation of vector radiative transfer. As in the scalar case, we operate with the concept of the layer equation incorporating the level values of the Stokes vector. The matrix exponentials which enter the expression of the layer equation are computed using the matrix eigenvalue method and the Padé approximation. A discussion of the computational efficiency of the proposed method for both an aerosol-loaded atmosphere and a cloudy atmosphere is also provided
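    The two routes to the matrix exponential mentioned above, the eigenvalue method and the Padé approximation, can be compared on a small test matrix (SciPy's expm uses scaling-and-squaring with Padé approximants); a sketch:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 5))
A = (A + A.T) / 2                    # symmetric => real eigendecomposition

# Eigenvalue method: expm(A) = V * diag(exp(w)) * V^T for symmetric A.
w, V = np.linalg.eigh(A)
expm_eig = V @ np.diag(np.exp(w)) @ V.T

# Pade-based routine from SciPy (scaling and squaring with Pade approximants).
expm_pade = expm(A)

print(np.allclose(expm_eig, expm_pade))
```

The eigenvalue route is attractive when the matrix is symmetric or otherwise well-conditioned for diagonalization; the Padé route is the robust general-purpose fallback.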

  11. Estimating exponential scheduling preferences

    DEFF Research Database (Denmark)

    Hjorth, Katrine; Börjesson, Maria; Engelson, Leonid

    2015-01-01

    Different assumptions about travelers' scheduling preferences yield different measures of the cost of travel time variability. Only few forms of scheduling preferences provide non-trivial measures which are additive over links in transport networks where link travel times are arbitrarily ... of car drivers' route and mode choice under uncertain travel times. Our analysis exposes some important methodological issues related to complex non-linear scheduling models: One issue is identifying the point in time where the marginal utility of being at the destination becomes larger than the marginal utility of being at the origin. Another issue is that models with the exponential marginal utility formulation suffer from empirical identification problems. Though our results are not decisive, they partly support the constant-affine specification, in which the value of travel time variability ...

  12. Exponentially-convergent Monte Carlo via finite-element trial spaces

    International Nuclear Information System (INIS)

    Morel, Jim E.; Tooley, Jared P.; Blamer, Brandon J.

    2011-01-01

    Exponentially-Convergent Monte Carlo (ECMC) methods, also known as adaptive Monte Carlo and residual Monte Carlo methods, were the subject of intense research over a decade ago, but they never became practical for solving realistic problems. We believe that the failure of previous efforts may be related to the choice of trial spaces that were global and thus highly oscillatory. As an alternative, we consider finite-element trial spaces, which have the ability to treat fully realistic problems. As a first step towards more general methods, we apply piecewise-linear trial spaces to the spatially continuous two-stream transport equation. Using this approach, we achieve exponential convergence and computationally demonstrate several fundamental properties of finite-element-based ECMC methods. Finally, our results indicate that the finite-element approach clearly deserves further investigation. (author)
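    The defining property of residual Monte Carlo is that each stage estimates the error from the residual equation, so the statistical noise contracts together with the residual and the iterate converges exponentially in the number of stages. A toy sketch for a generic fixed-point system x = Hx + b (not the transport setting of the paper; all details illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
H = 0.4 * rng.random((n, n)) / n          # contraction: row sums below 0.4
b = rng.random(n)
x_true = np.linalg.solve(np.eye(n) - H, b)

def neumann_walk_estimate(r, n_walks=1000, length=20):
    """Estimate e solving e = H*e + r via truncated-Neumann random walks
    with uniform transitions and importance weights."""
    est = np.zeros(n)
    for i in range(n):
        acc = 0.0
        for _ in range(n_walks):
            state, weight, score = i, 1.0, 0.0
            for _ in range(length):
                score += weight * r[state]
                nxt = rng.integers(n)
                weight *= n * H[state, nxt]
                state = nxt
            acc += score
        est[i] = acc / n_walks
    return est

x = np.zeros(n)
errors = []
for stage in range(6):
    residual = b + H @ x - x                  # residual of the current iterate
    x = x + neumann_walk_estimate(residual)   # MC solve of the error equation
    errors.append(np.linalg.norm(x - x_true))

print(errors[0], errors[-1])   # the error contracts roughly geometrically
```

Because the Monte Carlo noise at each stage is proportional to the current residual, a fixed number of histories per stage yields geometric error reduction, which is the mechanism the ECMC literature exploits.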

  13. Financing exponential growth at H3

    OpenAIRE

    Silva, João Ricardo Ferreira Hipolito da

    2012-01-01

    H3 is a fast-food chain that introduced the concept of gourmet hamburgers in the Portuguese market. This case-study illustrates its financing strategy that supported an exponential growth represented by opening 33 restaurants within approximately 3 years of its inception. H3 is now faced with the challenge of structuring its foreign ventures and change its financial approach. The main covered topics are the options an entrepreneur has for financing a new venture and how it evolves along th...

  14. Exponentially Light Dark Matter from Coannihilation

    OpenAIRE

    D'Agnolo, Raffaele Tito; Mondino, Cristina; Ruderman, Joshua T.; Wang, Po-Jen

    2018-01-01

    Dark matter may be a thermal relic whose abundance is set by mutual annihilations among multiple species. Traditionally, this coannihilation scenario has been applied to weak scale dark matter that is highly degenerate with other states. We show that coannihilation among states with split masses points to dark matter that is exponentially lighter than the weak scale, down to the keV scale. We highlight the regime where dark matter does not participate in the annihilations that dilute its numb...

  15. Fitting of alpha-efficiency versus quenching parameter by exponential functions in liquid scintillation counting

    International Nuclear Information System (INIS)

    Sosa, M.; Manjón, G.; Mantero, J.; García-Tenorio, R.

    2014-01-01

    The objective of this work is to propose an exponential fit for the low alpha-counting efficiency as a function of a sample quenching parameter using a Quantulus liquid scintillation counter. The sample quenching parameter in a Quantulus is the Spectral Quench Parameter of the External Standard (SQP(E)), defined as the channel number below which 99% of the Compton spectrum generated by a gamma emitter (¹⁵²Eu) lies. Although the literature usually reports a polynomial fit of the alpha counting efficiency, it is shown here that an exponential function is a better description. - Highlights: • We have studied the quenching in alpha measurement by liquid scintillation counting. • We have reviewed typical fittings of alpha counting efficiency versus quenching parameter. • Exponential fitting of the data is proposed as the better fit. • We consider that the exponential fit has a physical basis.

  16. Fitting of alpha-efficiency versus quenching parameter by exponential functions in liquid scintillation counting

    Energy Technology Data Exchange (ETDEWEB)

    Sosa, M. [Departamento de Ingeniería Física, Campus León, Universidad de Guanajuato, 37150 León, Guanajuato (Mexico); Universidad de Sevilla, Departamento de Física Aplicada II, E.T.S. Arquitectura, Av. Reina Mercedes, 2, 41012 Sevilla (Spain); Manjón, G., E-mail: manjon@us.es [Universidad de Sevilla, Departamento de Física Aplicada II, E.T.S. Arquitectura, Av. Reina Mercedes, 2, 41012 Sevilla (Spain); Mantero, J.; García-Tenorio, R. [Universidad de Sevilla, Departamento de Física Aplicada II, E.T.S. Arquitectura, Av. Reina Mercedes, 2, 41012 Sevilla (Spain)

    2014-05-01

    The objective of this work is to propose an exponential fit for the low alpha-counting efficiency as a function of a sample quenching parameter using a Quantulus liquid scintillation counter. The sample quenching parameter in a Quantulus is the Spectral Quench Parameter of the External Standard (SQP(E)), defined as the channel number below which 99% of the Compton spectrum generated by a gamma emitter (¹⁵²Eu) lies. Although the literature usually reports a polynomial fit of the alpha counting efficiency, it is shown here that an exponential function is a better description. - Highlights: • We have studied the quenching in alpha measurement by liquid scintillation counting. • We have reviewed typical fittings of alpha counting efficiency versus quenching parameter. • Exponential fitting of the data is proposed as the better fit. • We consider that the exponential fit has a physical basis.

  17. Adaptive Outlier-tolerant Exponential Smoothing Prediction Algorithms with Applications to Predict the Temperature in Spacecraft

    OpenAIRE

    Hu Shaolin; Zhang Wei; Li Ye; Fan Shunxi

    2011-01-01

    The exponential smoothing prediction algorithm is widely used in spaceflight control and process monitoring, as well as in economic forecasting. Two key problems remain open: one concerns the rule for selecting the smoothing parameter, and the other is how to mitigate the bad influence of outliers on the prediction. In this paper a new practical outlier-tolerant algorithm is built to adaptively select a proper parameter, and the exponential smoothing pr...
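A minimal sketch of the outlier-tolerant idea follows. This is not the paper's algorithm; the clipping rule, the running-scale update, and all constants are illustrative assumptions. The guard simply clips any residual that exceeds a multiple of a running scale estimate before it enters the smoother, so a single spike cannot drag the prediction away.

```python
def outlier_tolerant_ewma(xs, alpha=0.3, k=3.0, scale0=1.0):
    """Exponential smoothing with a simple outlier guard:
    residuals larger than k times a running scale are clipped
    before the usual EWMA update (illustrative sketch only)."""
    s = xs[0]
    scale = scale0
    out = [s]
    for x in xs[1:]:
        r = x - s
        if abs(r) > k * scale:              # clip gross outliers
            r = k * scale if r > 0 else -k * scale
        s += alpha * r                      # standard EWMA update
        scale = 0.9 * scale + 0.1 * abs(r)  # crude running scale estimate
        out.append(s)
    return out

data = [10.0] * 20
data[10] = 100.0                            # injected spike (outlier)
smoothed = outlier_tolerant_ewma(data)
```

With the spike clipped, the smoothed series barely moves off the baseline of 10, whereas a plain EWMA would jump by tens of units.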

  18. Parameter estimation of the zero inflated negative binomial beta exponential distribution

    Science.gov (United States)

    Sirichantra, Chutima; Bodhisuwan, Winai

    2017-11-01

    The zero inflated negative binomial-beta exponential (ZINB-BE) distribution is developed as an alternative distribution for excessive zero counts with overdispersion. The ZINB-BE distribution is a mixture of two distributions, the Bernoulli and the negative binomial-beta exponential. In this work, some characteristics of the proposed distribution, such as the mean and variance, are presented. Maximum likelihood estimation is applied to parameter estimation of the proposed distribution. Finally, results of a Monte Carlo simulation study suggest that the estimators are highly efficient when the sample size is large.

  19. Exponential formula for the reachable sets of quantum stochastic differential inclusions

    International Nuclear Information System (INIS)

    Ayoola, E.O.

    2001-07-01

    We establish an exponential formula for the reachable sets of quantum stochastic differential inclusions (QSDI) which are locally Lipschitzian with convex values. Our main results partially rely on an auxiliary result concerning the density, in the topology of the locally convex space of solutions, of the set of trajectories whose matrix elements are continuously differentiable. By applying the exponential formula, we obtain results concerning convergence of the discrete approximations of the reachable set of the QSDI. This extends similar results of Wolenski for classical differential inclusions to the present noncommutative quantum setting. (author)

  20. Adaptive exponential synchronization of delayed neural networks with reaction-diffusion terms

    International Nuclear Information System (INIS)

    Sheng Li; Yang Huizhong; Lou Xuyang

    2009-01-01

    This paper presents an exponential synchronization scheme for a class of neural networks with time-varying and distributed delays and reaction-diffusion terms. An adaptive synchronization controller is derived to achieve the exponential synchronization of the drive-response structure of neural networks by using the Lyapunov stability theory. At the same time, the update laws of parameters are proposed to guarantee the synchronization of delayed neural networks with all parameters unknown. It is shown that the approaches developed here extend and improve the ideas presented in recent literatures.

  1. On the relation between Lyapunov exponents and exponential decay of correlations

    International Nuclear Information System (INIS)

    Slipantschuk, Julia; Bandtlow, Oscar F; Just, Wolfram

    2013-01-01

    Chaotic dynamics with sensitive dependence on initial conditions may result in exponential decay of correlation functions. We show that for one-dimensional interval maps the corresponding quantities, that is, Lyapunov exponents and exponential decay rates, are related. More specifically, for piecewise linear expanding Markov maps observed via piecewise analytic functions, we show that the decay rate is bounded above by twice the Lyapunov exponent, that is, we establish lower bounds for the subleading eigenvalue of the corresponding Perron–Frobenius operator. In addition, we comment on similar relations for general piecewise smooth expanding maps. (paper)
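The quantities related in this record can be probed numerically. As a simple illustration (a one-dimensional chaotic map, though not the piecewise linear Markov setting of the paper), the fully chaotic logistic map x → 4x(1−x) has Lyapunov exponent exactly ln 2, which the orbit average of log|f′(x)| recovers:

```python
import math

def lyapunov_logistic(x0=0.2, n=50000, burn=1000):
    """Estimate the Lyapunov exponent of x -> 4x(1-x) as the
    orbit average of log|f'(x)| = log|4 - 8x|."""
    x = x0
    for _ in range(burn):                 # discard transient
        x = 4.0 * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(4.0 - 8.0 * x))
        x = 4.0 * x * (1.0 - x)
    return acc / n

est = lyapunov_logistic()                 # typically close to ln 2 ~ 0.693
```

Estimating the correlation decay rate alongside this (e.g. from autocorrelations of an observable) would allow a numerical check of the bound relating the two.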

  2. Stretched exponential distributions in Nature and Economy: ``Fat tails'' with characteristic scales

    OpenAIRE

    Laherrère, Jean; Sornette, D.

    1998-01-01

    To account quantitatively for many reported ``natural'' fat tail distributions in Nature and Economy, we propose the stretched exponential family as a complement to the often used power law distributions. It has many advantages, among them economy: only two adjustable parameters, each with a clear physical interpretation. Furthermore, it derives from a simple and generic mechanism in terms of multiplicative processes. We show that stretched exponentials describe very well the distributi...
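The two-parameter stretched exponential survival law S(x) = exp[−(x/x0)^c] can be fitted by linearization, since log(−log S) = c·log x − c·log x0. A sketch on synthetic Weibull data (sample size, seed, and parameters are arbitrary choices for illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
c_true = 0.7                                   # stretch exponent
xs = np.sort(rng.weibull(c_true, 20000))       # Weibull samples, unit scale x0 = 1

# Empirical survival function with plotting positions (i + 0.5)/n.
n = len(xs)
surv = 1.0 - (np.arange(n) + 0.5) / n

# Linearize: log(-log S) = c*log(x) - c*log(x0); fit interior quantiles
# to avoid the noisy extreme tails.
mask = (surv > 0.01) & (surv < 0.99)
slope, intercept = np.polyfit(np.log(xs[mask]),
                              np.log(-np.log(surv[mask])), 1)
```

The fitted slope estimates the stretch exponent c, and −intercept/slope estimates log x0; this is the classic Weibull probability-plot construction.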

  3. Analysis of friction autofluctuations of a drilling string with exponential resistance law

    Energy Technology Data Exchange (ETDEWEB)

    Belokobyl' skiy, S.V.; Prokopov, V.K.

    1981-01-01

    An analysis is made of the friction autofluctuations of a drilling string with an exponential resistance law, from which a spasmodic resistance law is obtained as a particular case. It is demonstrated that, for certain parameters, the amplitude of autofluctuations under the exponential resistance law exceeds the range of fluctuations under the spasmodic law. Dependences of the autofluctuation period and of the movement time on the parameters are constructed. Dangerous regimes of autofluctuations are identified.

  4. The PoET (Prevention of Error-Based Transfers) Project.

    Science.gov (United States)

    Oliver, Jill; Chidwick, Paula

    2017-01-01

    The PoET (Prevention of Error-based Transfers) Project is one of the Ethics Quality Improvement Projects (EQIPs) taking place at William Osler Health System. This specific project is designed to reduce transfers from long-term care to hospital that are caused by legal and ethical errors related to consent, capacity and substitute decision-making. The project is currently operating in eight long-term care homes in the Central West Local Health Integration Network and has seen a 56% reduction in multiple transfers before death in hospital.

  5. W-transform for exponential stability of second order delay differential equations without damping terms.

    Science.gov (United States)

    Domoshnitsky, Alexander; Maghakyan, Abraham; Berezansky, Leonid

    2017-01-01

    In this paper a method for studying stability of the equation [Formula: see text] not including explicitly the first derivative is proposed. We demonstrate that although the corresponding ordinary differential equation [Formula: see text] is not exponentially stable, the delay equation can be exponentially stable.

  6. Global exponential stability of fuzzy cellular neural networks with delays and reaction-diffusion terms

    International Nuclear Information System (INIS)

    Wang Jian; Lu Junguo

    2008-01-01

    In this paper, we study the global exponential stability of fuzzy cellular neural networks with delays and reaction-diffusion terms. By constructing a suitable Lyapunov functional and utilizing some inequality techniques, we obtain a sufficient condition for the uniqueness and global exponential stability of the equilibrium solution for a class of fuzzy cellular neural networks with delays and reaction-diffusion terms. The result imposes constraint conditions on the network parameters independently of the delay parameter. The result is also easy to check and plays an important role in the design and application of globally exponentially stable fuzzy neural circuits

  7. Globally exponential stability condition of a class of neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Liao, T.-L.; Yan, J.-J.; Cheng, C.-J.; Hwang, C.-C.

    2005-01-01

    In this Letter, the globally exponential stability of a class of neural networks, including Hopfield neural networks and cellular neural networks with time-varying delays, is investigated. Based on the Lyapunov stability method, a novel and less conservative exponential stability condition is derived. The condition is delay-dependent and is easily applied by simply checking that the Hamiltonian matrix has no eigenvalues on the imaginary axis, instead of directly solving an algebraic Riccati equation. Furthermore, the exponential stability degree is more easily assigned than in the results reported in the literature. Some examples are given to demonstrate the validity and advantages of the presented stability condition.

  8. Diffusion-weighted MR imaging of pancreatic cancer: A comparison of mono-exponential, bi-exponential and non-Gaussian kurtosis models.

    Science.gov (United States)

    Kartalis, Nikolaos; Manikis, Georgios C; Loizou, Louiza; Albiin, Nils; Zöllner, Frank G; Del Chiaro, Marco; Marias, Kostas; Papanikolaou, Nikolaos

    2016-01-01

    To compare two Gaussian diffusion-weighted MRI (DWI) models including mono-exponential and bi-exponential, with the non-Gaussian kurtosis model in patients with pancreatic ductal adenocarcinoma. After written informed consent, 15 consecutive patients with pancreatic ductal adenocarcinoma underwent free-breathing DWI (1.5T, b-values: 0, 50, 150, 200, 300, 600 and 1000 s/mm²). Mean values of DWI-derived metrics ADC, D, D*, f, K and DK were calculated from multiple regions of interest in all tumours and non-tumorous parenchyma and compared. Area under the curve was determined for all metrics. Mean ADC and DK showed significant differences between tumours and non-tumorous parenchyma (both P < 0.001). Area under the curve for ADC, D, D*, f, K, and DK were 0.77, 0.52, 0.53, 0.62, 0.42, and 0.84, respectively. ADC and DK could differentiate tumours from non-tumorous parenchyma, with the latter showing a higher diagnostic accuracy. Correction for kurtosis effects has the potential to increase the diagnostic accuracy of DWI in patients with pancreatic ductal adenocarcinoma.
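The mono-exponential and kurtosis signal models referenced in this record can be written out explicitly. The b-values below match those listed in the abstract; the tissue parameters (D, K) are hypothetical values chosen only to illustrate the fitting step, not the study's measurements.

```python
import numpy as np

b = np.array([0.0, 50.0, 150.0, 200.0, 300.0, 600.0, 1000.0])  # s/mm^2
S0 = 1000.0
D = 1.5e-3      # hypothetical diffusion coefficient, mm^2/s
K = 0.8         # hypothetical kurtosis

# Gaussian (mono-exponential) model and non-Gaussian kurtosis model:
S_mono = S0 * np.exp(-b * D)
S_kurt = S0 * np.exp(-b * D + (b * D) ** 2 * K / 6.0)

# ADC from a log-linear least-squares fit to the mono-exponential signal:
slope, intercept = np.polyfit(b, np.log(S_mono), 1)
adc = -slope
```

At high b-values the positive kurtosis term lifts the signal above the mono-exponential curve, which is why a pure ADC fit misestimates D when kurtosis is present.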

  9. A Formal Approach to the Selection by Minimum Error and Pattern Method for Sensor Data Loss Reduction in Unstable Wireless Sensor Network Communications.

    Science.gov (United States)

    Kim, Changhwa; Shin, DongHyun

    2017-05-12

    There are wireless networks in which communications are typically unsafe. Most terrestrial wireless sensor networks belong to this category of networks. Another example of an unsafe communication network is an underwater acoustic sensor network (UWASN). In UWASNs in particular, communication failures occur frequently and the failure durations can range from seconds up to a few hours, days, or even weeks. These communication failures can cause data losses significant enough to seriously damage human life or property, depending on their application areas. In this paper, we propose a framework to reduce sensor data loss during communication failures and we present a formal approach to the Selection by Minimum Error and Pattern (SMEP) method that plays the most important role in the reduction of sensor data loss under the proposed framework. The SMEP method is compared with other methods to validate its effectiveness through experiments using real-field sensor data sets. Based on our experimental results and performance comparisons, the SMEP method has been validated to be better than the others in terms of the average sensor data value error rate caused by sensor data loss.

  10. Strictly local one-dimensional topological quantum error correction with symmetry-constrained cellular automata

    Directory of Open Access Journals (Sweden)

    Nicolai Lang, Hans Peter Büchler

    2018-01-01

    Full Text Available Active quantum error correction on topological codes is one of the most promising routes to long-term qubit storage. In view of future applications, the scalability of the used decoding algorithms in physical implementations is crucial. In this work, we focus on the one-dimensional Majorana chain and construct a strictly local decoder based on a self-dual cellular automaton. We study numerically and analytically its performance and exploit these results to contrive a scalable decoder with exponentially growing decoherence times in the presence of noise. Our results pave the way for scalable and modular designs of actively corrected one-dimensional topological quantum memories.

  11. Field error reduction experiment on the REPUTE-1 RFP device

    International Nuclear Information System (INIS)

    Toyama, H.; Shinohara, S.; Yamagishi, K.

    1989-01-01

    The vacuum chamber of the RFP device REPUTE-1 is a welded structure using 18 sets of 1 mm thick Inconel bellows (inner minor radius 22 cm) and 2.4 mm thick port segments arranged in toroidal geometry as shown in Fig. 1. The vacuum chamber is surrounded by 5 mm thick stainless steel shells. The time constant of the shell is 1 ms for vertical field penetration. The pulse length in REPUTE-1 is so far 3.2 ms (about 3 times longer than shell skin time). The port bypass plates have been attached as shown in Fig. 2 to reduce field errors so that the pulse length becomes longer and the loop voltage becomes lower. (author) 5 refs., 4 figs

  12. Unsteady MHD flow in porous media past over exponentially ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology ... rotation and magnetic field on the flow past an exponentially accelerated vertical plate with ... Let (u, v, w) be the components of the velocity vector V. Then using the equation.

  13. Construction of extended exponential general linear methods 524 ...

    African Journals Online (AJOL)

    This paper introduces a new approach for constructing higher-order EEGLM, which have become very popular due to their enviable stability properties. This paper also shows that method 524 is stable, with its characteristic roots lying in the unit circle. Numerical experiments indicate that Extended Exponential ...

  14. Double-Exponentially Decayed Photoionization in CREI Effect: Numerical Experiment on 3D H2+

    International Nuclear Information System (INIS)

    Feng, Li; Ting-Ying, Wang; Gui-Zhong, Zhang; Wang-Hua, Xiang; III, W. T. Hill

    2008-01-01

    On the platform of the 3D H2+ system, we perform a numerical simulation of its photoionization rate under excitation by weak to intense laser intensities with varying pulse durations and wavelengths. A novel method is proposed for calculating the photoionization rate: a double exponential decay of the ionization probability is best suited for fitting this rate. Confirmation of the well-documented charge-resonance-enhanced ionization (CREI) effect at medium laser intensity, and the finding of ionization saturation at high light intensity, corroborate the robustness of the suggested double-exponential decay process. Surveying the spatial and temporal variations of the electron wavefunctions uncovers a mechanism for the double-exponentially decayed photoionization probability as the onset of electron ionization along an extra degree of freedom. Hence, the new method clarifies the origins of peak features in the photoionization rate versus internuclear separation. It is believed that this multi-exponentially decayed ionization mechanism is applicable to systems with more degrees of motion.

  15. Transient photoresponse in amorphous In-Ga-Zn-O thin films under stretched exponential analysis

    Science.gov (United States)

    Luo, Jiajun; Adler, Alexander U.; Mason, Thomas O.; Bruce Buchholz, D.; Chang, R. P. H.; Grayson, M.

    2013-04-01

    We investigated transient photoresponse and Hall effect in amorphous In-Ga-Zn-O thin films and observed a stretched exponential response which allows characterization of the activation energy spectrum with only three fit parameters. Measurements of as-grown films and 350 K annealed films were conducted at room temperature by recording conductivity, carrier density, and mobility over day-long time scales, both under illumination and in the dark. Hall measurements verify approximately constant mobility, even as the photoinduced carrier density changes by orders of magnitude. The transient photoconductivity data fit well to a stretched exponential during both illumination and dark relaxation, but with slower response in the dark. The inverse Laplace transforms of these stretched exponentials yield the density of activation energies responsible for transient photoconductivity. An empirical equation is introduced, which determines the linewidth of the activation energy band from the stretched exponential parameter β. Dry annealing at 350 K is observed to slow the transient photoresponse.

  16. Extended q -Gaussian and q -exponential distributions from gamma random variables

    Science.gov (United States)

    Budini, Adrián A.

    2015-05-01

    The family of q -Gaussian and q -exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q -Gaussian and q -exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q -Gaussian and modified q -exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.

  17. Delay-range-dependent exponential H∞ synchronization of a class of delayed neural networks

    International Nuclear Information System (INIS)

    Karimi, Hamid Reza; Maass, Peter

    2009-01-01

    This article aims to present a multiple delayed state-feedback control design for the exponential H∞ synchronization problem of a class of delayed neural networks with multiple time-varying discrete delays. On the basis of the drive-response concept, and by introducing a descriptor technique and using a Lyapunov-Krasovskii functional, new delay-range-dependent sufficient conditions for exponential H∞ synchronization of the drive-response structure of neural networks are derived in terms of linear matrix inequalities (LMIs). Explicit expressions for the controller gain matrices are parameterized based on the solvability conditions such that the drive system and the response system can be exponentially synchronized. A numerical example is included to illustrate the applicability of the proposed design method.

  18. Topological quantum error correction in the Kitaev honeycomb model

    Science.gov (United States)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.

  19. Global exponential stability of mixed discrete and distributively delayed cellular neural network

    International Nuclear Information System (INIS)

    Yao Hong-Xing; Zhou Jia-Yan

    2011-01-01

    This paper concerns the analysis of global exponential stability for a class of recurrent neural networks with mixed discrete and distributed delays. It first proves the existence and uniqueness of the equilibrium point; then, by employing the Lyapunov-Krasovskii functional and the Young inequality, it gives a sufficient condition for global exponential stability of cellular neural networks with mixed discrete and distributed delays. In addition, an example is provided to illustrate the applicability of the result. (general)

  20. Exponential convergence rate estimation for uncertain delayed neural networks of neutral type

    International Nuclear Information System (INIS)

    Lien, C.-H.; Yu, K.-W.; Lin, Y.-F.; Chung, Y.-J.; Chung, L.-Y.

    2009-01-01

    The global exponential stability for a class of uncertain delayed neural networks (DNNs) of neutral type is investigated in this paper. Delay-dependent and delay-independent criteria are proposed to guarantee the robust stability of DNNs via LMI and Razumikhin-like approaches. For a given delay, the maximal allowable exponential convergence rate will be estimated. Some numerical examples are given to illustrate the effectiveness of our results. The simulation results reveal significant improvement over the recent results.

  1. Robust exponential stabilization of nonholonomic wheeled mobile robots with unknown visual parameters

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The visual servoing stabilization of a nonholonomic mobile robot with unknown camera parameters is investigated. A new kind of uncertain chained model of a nonholonomic kinematic system is obtained based on visual feedback and the standard chained form of a type (1,2) mobile robot. Then, a novel time-varying feedback controller is proposed for exponentially stabilizing the position and orientation of the robot using visual feedback and a switching strategy when the camera parameters are not known. The exponential s...

  2. Economic impact of medication error: a systematic review.

    Science.gov (United States)

    Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P

    2017-05-01

    Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies with one study measuring litigation costs. Four studies included costs incurred in primary care with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.

  3. The Use of Modeling Approach for Teaching Exponential Functions

    Science.gov (United States)

    Nunes, L. F.; Prates, D. B.; da Silva, J. M.

    2017-12-01

    This work presents a discussion related to the teaching and learning of mathematical content connected with the study of exponential functions, in a group of freshman students enrolled in the first semester of the Science and Technology Bachelor's programme (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). The modelling approach, strongly advocated in the literature as a contextualization tool, was used as an educational teaching tool to produce contextualization in the teaching-learning process of exponential functions for these students. Some simple models elaborated with the GeoGebra software were used and, to obtain a qualitative evaluation of the investigation and its results, Didactic Engineering was adopted as the research methodology. As a consequence of this detailed research, some interesting details about the teaching and learning process were observed, discussed and described.

  4. The scientific foundation for tobacco harm reduction, 2006-2011

    OpenAIRE

    Rodu, Brad

    2011-01-01

    Abstract Over the past five years there has been exponential expansion of interest in tobacco harm reduction (THR), with a concomitant increase in the number of published studies. The purpose of this manuscript is to review and analyze influential contributions to the scientific and medical literature relating to THR, and to discuss issues that continue to stimulate debate. Numerous epidemiologic studies and subsequent meta-analyses confirm that smokeless tobacco (ST) use is associated with m...

  5. Sleep-Dependent Reductions in Reality-Monitoring Errors Arise from More Conservative Decision Criteria

    Science.gov (United States)

    Westerberg, Carmen E.; Hawkins, Christopher A.; Rendon, Lauren

    2018-01-01

    Reality-monitoring errors occur when internally generated thoughts are remembered as external occurrences. We hypothesized that sleep-dependent memory consolidation could reduce them by strengthening connections between items and their contexts during an afternoon nap. Participants viewed words and imagined their referents. Pictures of the…

  6. Mono-Exponential Fitting in T2-Relaxometry: Relevance of Offset and First Echo.

    Directory of Open Access Journals (Sweden)

    David Milford

    Full Text Available T2 relaxometry has become an important tool in quantitative MRI. Little focus has been put on the effect of the refocusing flip angle on the offset parameter, which was introduced to account for a signal floor due to noise or to long-T2 components. The aim of this study was to show that B1 imperfections contribute significantly to the offset. We further introduce a simple method to reduce the systematic error in T2 by discarding the first echo and using the offset fitting approach. Signal curves for T2 relaxometry were simulated based on extended phase graph theory and evaluated with 4 different methods (inclusion and exclusion of the first echo, while fitting with and without the offset). We further performed T2 relaxometry in a phantom on a 9.4T magnetic resonance imaging scanner and used the same post-processing methods as for the extended phase graph simulated data. Single spin echo sequences were used to determine the correct T2 time. The simulation data showed that the systematic error in T2 and the offset depend on the refocusing pulse, the echo spacing and the echo train length. The systematic error could be reduced by discarding the first echo. Further reduction of the systematic T2 error was reached by using the offset as a fitting parameter. The phantom experiments confirmed these findings. The fitted offset parameter in T2 relaxometry is influenced by imperfect refocusing pulses. Using the offset as a fitting parameter and discarding the first echo is a fast and easy method to minimize the error in T2, particularly for low to intermediate echo train lengths.
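The first-echo-discarding strategy from this record can be sketched on synthetic data. The echo times, true parameters, and the 15% first-echo amplitude deviation below are all illustrative assumptions standing in for the B1-related stimulated-echo effect, not the paper's simulation; the model fitted is A·exp(−TE/T2) + offset.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(te, a, t2, c):
    """Mono-exponential T2 decay with an offset term."""
    return a * np.exp(-te / t2) + c

te = np.arange(1, 21) * 10.0           # echo times in ms (hypothetical)
a_true, t2_true, c_true = 1000.0, 80.0, 50.0
s = model(te, a_true, t2_true, c_true)
s[0] *= 0.85                           # hypothetical first-echo deviation
                                       # (mimicking imperfect refocusing)

# Fit with all echoes vs. with the first echo discarded:
p_all, _ = curve_fit(model, te, s, p0=(900.0, 60.0, 10.0))
p_drop, _ = curve_fit(model, te[1:], s[1:], p0=(900.0, 60.0, 10.0))
```

Discarding the corrupted first echo restores an essentially exact T2 here, while including it biases the fitted T2, which is the qualitative effect the study quantifies.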

  7. The Dickey-Fuller test for exponential random walks

    NARCIS (Netherlands)

    Davies, P.L.; Krämer, W.

    2003-01-01

    A common test in econometrics is the Dickey-Fuller test, which is based on the test statistic . We investigate the behavior of the test statistic if the data yt are given by an exponential random walk exp(Zt) where Zt = Zt-1 + [sigma][epsilon]t and the [epsilon]t are independent and identically
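The statistic in question can be sketched from first principles: for a simulated exponential random walk exp(Z_t), take logs and run the no-constant Dickey-Fuller regression of Δy_t on y_{t−1}; the t-ratio of the slope is the DF statistic. This is a generic illustration of the test, not the paper's experiment, and the sample size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
z = np.cumsum(rng.standard_normal(500))   # random walk Z_t
y = np.log(np.exp(z))                     # exponential random walk, log-transformed back

# No-constant Dickey-Fuller regression: dy_t = rho * y_{t-1} + e_t
dy, ylag = np.diff(y), y[:-1]
rho = (ylag @ dy) / (ylag @ ylag)         # OLS slope
resid = dy - rho * ylag
se = np.sqrt((resid @ resid) / (len(dy) - 1) / (ylag @ ylag))
df_stat = rho / se                        # the DF t-ratio
```

Under the unit-root null, df_stat follows the nonstandard Dickey-Fuller distribution rather than Student's t, which is why tabulated critical values (about −1.95 at 5% for the no-constant case) are used.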

  8. Exponential Family Techniques for the Lognormal Left Tail

    DEFF Research Database (Denmark)

    Asmussen, Søren; Jensen, Jens Ledet; Rojas-Nandayapa, Leonardo

    E[Xe^(−θX)]/L(θ) = x. The asymptotic formulas involve the Lambert W function. The established relations are used to provide two different numerical methods for evaluating the left tail probability of the lognormal sum Sn = X1 + ⋯ + Xn: a saddlepoint approximation and an exponential twisting importance sampling estimator. For the latter we...

  9. Memory Reduction via Delayed Simulation

    Directory of Open Access Journals (Sweden)

    Michael Holtmann

    2011-02-01

    Full Text Available We address a central (and classical) issue in the theory of infinite games: the reduction of the memory size that is needed to implement winning strategies in regular infinite games (i.e., controllers that ensure correct behavior against actions of the environment, when the specification is a regular omega-language). We propose an approach which attacks this problem before the construction of a strategy, by first reducing the game graph that is obtained from the specification. For the cases of specifications represented by "request-response" requirements and general "fairness" conditions, we show that an exponential gain in the size of memory is possible.

  10. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is proposed and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are improved. Copyright © 2011. Published by Elsevier Ltd.

  11. Robust exponential stability and domains of attraction in a class of interval neural networks

    International Nuclear Information System (INIS)

    Yang Xiaofan; Liao Xiaofeng; Bai Sen; Evans, David J

    2005-01-01

    This paper addresses robust exponential stability as well as domains of attraction in a class of interval neural networks. A sufficient condition for an equilibrium point to be exponentially stable is established, and an estimate of the domains of attraction of exponentially stable equilibrium points is presented. Both the condition and the estimate are formulated in terms of the parameter intervals, the neurons' activation functions, and the equilibrium point; hence, they are easily checkable. In addition, our results depend neither on monotonicity of the activation functions nor on coupling conditions between the neurons. Consequently, these results are of practical importance in evaluating the performance of interval associative memory networks.

  12. Exponential decay rate of the power spectrum for solutions of the Navier--Stokes equations

    International Nuclear Information System (INIS)

    Doering, C.R.; Titi, E.S.

    1995-01-01

    Using a method developed by Foias and Temam [J. Funct. Anal. 87, 359 (1989)], exponential decay of the spatial Fourier power spectrum for solutions of the incompressible Navier--Stokes equations is established, and explicit rigorous lower bounds on a small length scale defined by the exponential decay rate are obtained.

  13. On exponential stability of bidirectional associative memory neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Park, Ju H.; Lee, S.M.; Kwon, O.M.

    2009-01-01

    For bidirectional associative memory neural networks with time-varying delays, the problems of determining the exponential stability and estimating the exponential convergence rate are investigated by employing the Lyapunov functional method and the linear matrix inequality (LMI) technique. A novel stability criterion, which gives information on the delay-dependent property, is derived. A numerical example is given to demonstrate the effectiveness of the obtained results.

  14. Instanton-based techniques for analysis and reduction of error floors of LDPC codes

    International Nuclear Information System (INIS)

    Chertkov, Michael; Chilappagari, Shashi K.; Stepanov, Mikhail G.; Vasic, Bane

    2008-01-01

    We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and a variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instanton of the same code but different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.

  15. Instanton-based techniques for analysis and reduction of error floor of LDPC codes

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory; Chilappagari, Shashi K [Los Alamos National Laboratory; Stepanov, Mikhail G [Los Alamos National Laboratory; Vasic, Bane [SENIOR MEMBER, IEEE

    2008-01-01

    We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and a variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instanton of the same code but different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.

  16. Sustaining the Exponential Growth of Embedded Digital Signal Processing Capability

    National Research Council Canada - National Science Library

    Shaw, Gary A; Richards, Mark A

    2004-01-01

    .... We conjecture that as IC shrinkage and attendant performance improvements begin to slow, the exponential rate of improvement we have become accustomed to for embedded applications will be sustainable...

  17. Image enhancement by spectral-error correction for dual-energy computed tomography.

    Science.gov (United States)

    Park, Kyung-Kook; Oh, Chang-Hyun; Akay, Metin

    2011-01-01

    Dual-energy CT (DECT) was recently reintroduced to exploit the additional spectral information of X-ray attenuation, aiming at accurate density measurement and material differentiation. However, the spectral information lies in the difference between the low- and high-energy images or measurements, so it is difficult to acquire accurate spectral information because high pixel noise is amplified in the resulting difference image. In this work, an image enhancement technique for DECT is proposed, based on the fact that the attenuation of a higher-density material decreases more rapidly as X-ray energy increases. We define a spectral error as a pixel pair of the low- and high-energy images that deviates far from the expected attenuation trend. After analyzing the spectral-error sources of DECT images, we propose a DECT image enhancement method which consists of three steps: water-reference offset correction, spectral-error correction, and anti-correlated noise reduction. The main idea of this work is to make spectral errors distributed like random noise over the true attenuation, so that they can be suppressed by the well-known anti-correlated noise reduction. The proposed method suppressed noise in liver lesions and improved contrast between liver lesions and liver parenchyma in DECT contrast-enhanced abdominal images and their two-material decomposition.
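The anti-correlated noise reduction step can be illustrated schematically in 1-D (this is an illustrative sketch, not the authors' algorithm; the signal shapes, noise level, and moving-average filter are all assumptions): noise that is anti-correlated between the low- and high-energy measurements cancels in their sum and concentrates in their difference, so smoothing the difference and recombining suppresses it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = np.linspace(0.0, 1.0, n)

# Illustrative smooth "true" low/high-energy attenuation profiles.
s_low = 2.0 + 0.5 * np.sin(2 * np.pi * x)
s_high = 1.5 + 0.4 * np.sin(2 * np.pi * x)

# Anti-correlated noise: appears with opposite sign in the two measurements.
e = 0.2 * rng.standard_normal(n)
low = s_low + e
high = s_high - e

# The noise cancels in the sum and doubles in the difference.
total = low + high        # ~ s_low + s_high (noise-free)
diff = low - high         # ~ (s_low - s_high) + 2e

# Suppress the anti-correlated component by smoothing the difference.
window = 25
kernel = np.ones(window) / window
diff_smooth = np.convolve(diff, kernel, mode="same")

# Recombine into denoised low/high signals.
low_f = 0.5 * (total + diff_smooth)
high_f = 0.5 * (total - diff_smooth)

print(np.std(low - s_low), np.std(low_f - s_low))  # filtered error is smaller
```

Because the anti-correlated component carries essentially all of the noise while the sum is noise-free, filtering only the difference leaves the (diagnostically relevant) common structure untouched.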

  18. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    Science.gov (United States)

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  19. Mean-value identities as an opportunity for Monte Carlo error reduction.

    Science.gov (United States)

    Fernandez, L A; Martin-Mayor, V

    2009-05-01

    In the Monte Carlo simulation of both lattice field theories and of models of statistical mechanics, identities verified by exact mean values, such as Schwinger-Dyson equations, Guerra relations, Callen identities, etc., provide well-known and sensitive tests of thermalization bias as well as checks of pseudo-random-number generators. We point out that they can be further exploited as control variates to reduce statistical errors. The strategy is general, very simple, and almost costless in CPU time. The method is demonstrated in the two-dimensional Ising model at criticality, where the CPU gain factor lies between 2 and 4.
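The control-variate idea in the abstract can be shown on a toy example (not the Ising measurement of the paper): any observable with an exactly known mean — here X with E[X] = 0 under a standard normal — can be subtracted with an optimal coefficient to reduce the statistical error of another estimate, at essentially no extra cost.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.standard_normal(n)

y = np.exp(x)   # target observable; exact mean is E[e^X] = e^{1/2}
c = x           # control variate with exactly known mean E[X] = 0

# Optimal coefficient minimising the variance of y - b * (c - E[c]).
b = np.cov(y, c)[0, 1] / np.var(c)
y_cv = y - b * (c - 0.0)

print(np.var(y), np.var(y_cv))          # control variate cuts the variance
print(y.mean(), y_cv.mean(), np.exp(0.5))  # both estimators remain unbiased
```

In the paper's setting the role of c is played by quantities whose exact mean values are fixed by Schwinger-Dyson, Guerra, or Callen identities; the mechanics of the variance reduction are the same.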

  20. State of the art report of exponential experiments with PWR spent nuclear fuel

    International Nuclear Information System (INIS)

    Ro, Seung Gy; Park, Sung Won; Park, Kwang Joon; Kim, Jong Hoon; Hong, Kwon Pyo; Shin, Hee Sung

    2000-09-01

    The exponential experiment method is discussed for verifying the computer code system used in nuclear criticality analysis, which makes it possible to apply burnup credit in the storage, transportation, and handling of spent nuclear fuel. In this report, the neutron flux density distribution in an exponential experiment system, consisting of a PWR spent fuel in a water pool, is measured using a 252Cf neutron source and a mini-fission chamber, and the exponential decay coefficient is determined from it. In addition, a method is described for determining the absolute thermal neutron flux density by means of the Cd cut-off technique in association with a gold foil. A method for analyzing in detail the energy distribution of γ-rays from the gold-foil activation detector is also described.
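As a sketch of how an exponential decay coefficient is extracted from a measured axial flux distribution (synthetic data here; the positions, decay coefficient, and noise level are hypothetical stand-ins for the 252Cf/mini-fission-chamber measurements), the log of the flux is linear in position and a straight-line fit gives the coefficient:

```python
import numpy as np

rng = np.random.default_rng(2)

gamma_true = 0.12                         # hypothetical decay coefficient (1/cm)
z = np.linspace(5.0, 60.0, 12)            # detector positions (cm)
phi = 1.0e6 * np.exp(-gamma_true * z)     # ideal exponential flux
phi *= 1.0 + 0.02 * rng.standard_normal(z.size)  # 2% counting noise

# In the exponential region, ln(phi) is linear in z with slope -gamma.
slope, intercept = np.polyfit(z, np.log(phi), 1)
gamma_est = -slope
print(gamma_est)   # ≈ 0.12
```

In practice the fit would be restricted to the asymptotic region of the assembly, where harmonics of the flux have died out and the single-exponential model holds.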