Masuyama, Hiroyuki
2014-01-01
In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally,...
Angular truncation errors in integrating nephelometry
International Nuclear Information System (INIS)
Moosmueller, Hans; Arnott, W. Patrick
2003-01-01
Ideal integrating nephelometers integrate light scattered by particles over all directions. However, real nephelometers truncate light scattered in near-forward and near-backward directions below a certain truncation angle (typically 7 deg.). This results in truncation errors, with the forward truncation error becoming important for large particles. Truncation errors are commonly calculated using Mie theory, which offers little physical insight and no generalization to nonspherical particles. We show that large-particle forward truncation errors can be calculated and understood using geometric optics and diffraction theory. For small truncation angles (i.e., <10 deg.), as is typical for modern nephelometers, diffraction theory by itself is sufficient. Forward truncation errors are larger, by nearly a factor of 2, for absorbing particles than for nonabsorbing particles, because for large absorbing particles most of the scattered light is due to diffraction, as transmission is suppressed. Nephelometer calibration procedures are also discussed, as they influence the effective truncation error.
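The diffraction-theory shortcut described in this abstract can be sketched numerically. The sketch below is illustrative (not the authors' code): it uses the Airy-pattern encircled-energy formula E(x) = 1 - J0(x)^2 - J1(x)^2, with x = k·a·sin(θt), to estimate the fraction of diffracted light lost below the truncation angle. The particle radii, wavelength, and quadrature step count are assumed values.

```python
import math

def bessel_j(n, x, steps=2000):
    """Bessel J_n via the integral representation (1/pi) * int_0^pi cos(n*t - x*sin(t)) dt."""
    h = math.pi / steps
    # Trapezoidal rule over [0, pi].
    total = 0.5 * (math.cos(0.0) + math.cos(n * math.pi - x * math.sin(math.pi)))
    for k in range(1, steps):
        t = k * h
        total += math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

def forward_truncation_error(radius_um, wavelength_um, trunc_angle_deg):
    """Fraction of the *diffracted* light falling below the truncation angle,
    from the Airy-pattern encircled energy E(x) = 1 - J0(x)^2 - J1(x)^2,
    x = k * a * sin(theta_t)."""
    k = 2.0 * math.pi / wavelength_um
    x = k * radius_um * math.sin(math.radians(trunc_angle_deg))
    return 1.0 - bessel_j(0, x) ** 2 - bessel_j(1, x) ** 2

# Larger particles lose more forward-scattered light at a fixed 7 deg truncation angle.
for r in (0.5, 2.0, 10.0):
    print(r, forward_truncation_error(r, 0.55, 7.0))
```

The encircled-energy function is monotone in x, which reproduces the abstract's point that forward truncation losses grow with particle size; for absorbing particles, for which scattering is dominated by diffraction, this fraction applies to nearly all of the scattered light.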
Masuyama, Hiroyuki
2015-01-01
This paper studies the last-column-block-augmented northwest-corner truncation (LC-block-augmented truncation, for short) of discrete-time block-monotone Markov chains under subgeometric drift conditions. The main result of this paper is to present an upper bound for the total variation distance between the stationary probability vectors of a block-monotone Markov chain and its LC-block-augmented truncation. The main result is extended to Markov chains that themselves may not be block monoton...
Repair for scattering expansion truncation errors in transport calculations
International Nuclear Information System (INIS)
Emmett, M.B.; Childs, R.L.; Rhoades, W.A.
1980-01-01
Legendre expansion of angular scattering distributions is usually limited to P3 in practical transport calculations. This truncation often results in non-trivial errors, especially alternating negative and positive lateral scattering peaks. The effect is especially prominent in forward-peaked situations such as the within-group component of Compton scattering of gammas. Increasing the expansion to P7 often makes the peaks larger and narrower. Ward demonstrated an accurate repair, but his method requires special cross-section sets and codes. The DOT IV code provides a fully compatible, but heuristic, repair of the erroneous scattering. An analytical Klein-Nishina estimator, newly available in the MORSE code, allows a test of this method. In the MORSE calculation, particle scattering histories are calculated in the usual way, with scoring by an estimator routine at each collision site. Results for both the conventional P3 estimator and the analytical estimator were obtained. In the DOT calculation, the source moments are expanded into the directional representation at each iteration. Optionally, a sorting procedure removes all negatives and removes enough small positive values to restore particle conservation. The effect of this is to replace the alternating positive and negative values with positive values of plausible magnitude. The accuracy of those values is examined herein.
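The negative-lobe artifact and its heuristic repair can be reproduced in a few lines. The sketch below is a simplified variant of the DOT IV idea: reconstruct a forward-peaked kernel from its P3 Legendre moments, clip the negative lobes, and rescale the remaining positive values to restore particle conservation (the actual code sorts and removes small positives rather than rescaling; the toy kernel exp(5(μ-1)) and the grid size are assumptions of this illustration).

```python
import math

def legendre(l, mu):
    """Legendre polynomials P0..P3."""
    return [1.0, mu, (3 * mu ** 2 - 1) / 2, (5 * mu ** 3 - 3 * mu) / 2][l]

# Forward-peaked scattering kernel (toy stand-in for within-group Compton):
f = lambda mu: math.exp(5.0 * (mu - 1.0))

# P3 moments c_l = (2l+1)/2 * integral of f(mu) P_l(mu) dmu, by midpoint rule.
n = 4000
dmu = 2.0 / n
mus = [-1.0 + (i + 0.5) * dmu for i in range(n)]
c = [(2 * l + 1) / 2 * sum(f(m) * legendre(l, m) for m in mus) * dmu for l in range(4)]

# Truncated reconstruction oscillates and goes negative near mu = -1.
recon = [sum(c[l] * legendre(l, m) for l in range(4)) for m in mus]

# Heuristic repair: zero the negatives, rescale positives to conserve particles.
target = sum(recon) * dmu
clipped = [max(r, 0.0) for r in recon]
scale = target / (sum(clipped) * dmu)
repaired = [r * scale for r in clipped]
print(min(recon), min(repaired))
```

The repaired distribution is everywhere non-negative and integrates to the same total as the P3 reconstruction, which is exactly the "positive values of plausible magnitude" behavior the abstract describes.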
Local and accumulated truncation errors in a class of perturbative numerical methods
International Nuclear Information System (INIS)
Adam, G.; Adam, S.; Corciovei, A.
1980-01-01
The approach to the solution of the radial Schroedinger equation using piecewise perturbative theory with a step function reference potential leads to a class of powerful numerical methods, conveniently abridged as SF-PNM(K), where K denotes the order at which the perturbation series was truncated. In the present paper rigorous results are given for the local truncation errors, and bounds are derived for the accumulated truncation errors associated with SF-PNM(K), K = 0, 1, 2. They allow us to establish the smoothness conditions which have to be fulfilled by the potential in order to ensure a safe use of SF-PNM(K), and to understand the experimentally observed behaviour of the numerical results with the step size h. (author)
Truncated predictor feedback for time-delay systems
Zhou, Bin
2014-01-01
This book provides a systematic approach to the design of predictor-based controllers for (time-varying) linear systems with either (time-varying) input or state delays. Unlike traditional predictor-based controllers, which are infinite-dimensional static feedback laws and may cause difficulties in their practical implementation, this book develops a truncated predictor feedback (TPF) approach that involves only finite-dimensional static state feedback. Features and topics: A novel approach referred to as truncated predictor feedback for the stabilization of (time-varying) time-delay systems, in both the continuous-time setting and the discrete-time setting, is built systematically; semi-global and global stabilization problems of linear time-delay systems subject to either magnitude saturation or energy constraints are solved in a systematic manner; both stabilization of a single system and consensus of a group of systems (multi-agent systems) are treated in a unified manner by applying the truncated pre...
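The core idea of TPF can be illustrated with a toy discrete-time simulation. Full predictor feedback would feed back a prediction of the future state that includes a convolution over past controls; TPF keeps only the finite-dimensional term. The plant, gain, and delay below are ad-hoc choices for this sketch (within the small-gain stability region), not values from the book.

```python
# Toy truncated predictor feedback for the delayed plant x_{k+1} = a*x_k + b*u_{k-d}.
# The full predictor is x_{k+d} = a**d * x_k + (sum over past controls);
# TPF drops the control-history sum and uses u_k = g * a**d * x_k only.
a, b, d, g = 1.0, 1.0, 5, -0.2     # marginally stable plant, assumed ad-hoc gain
x = 1.0                            # initial state
u_hist = [0.0] * d                 # buffer holding u_{k-d}, ..., u_{k-1}
trace = []
for k in range(1000):
    u = g * (a ** d) * x           # truncated predictor feedback law
    x = a * x + b * u_hist.pop(0)  # delayed control enters the plant
    u_hist.append(u)
    trace.append(x)
print(abs(trace[-1]))
```

The closed loop is x_{k+1} = x_k - 0.2 x_{k-5}, which is stable because the gain is below the small-gain threshold 2·sin(π/22) ≈ 0.285 for this delay; the state decays to zero despite the dropped integral term.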
Frequency interval balanced truncation of discrete-time bilinear systems
DEFF Research Database (Denmark)
Jazlan, Ahmad; Sreeram, Victor; Shaker, Hamid Reza
2016-01-01
This paper presents the development of a new model reduction method for discrete-time bilinear systems based on the balanced truncation framework. In many model reduction applications, it is advantageous to analyze the characteristics of the system with emphasis on particular frequency intervals...... are the solution to a pair of new generalized Lyapunov equations. The conditions for solvability of these new generalized Lyapunov equations are derived and a numerical solution method for solving these generalized Lyapunov equations is presented. Numerical examples which illustrate the usage of the new...... generalized frequency interval controllability and observability gramians as part of the balanced truncation framework are provided to demonstrate the performance of the proposed method....
International Nuclear Information System (INIS)
Barros, R.C. de; Larsen, E.W.
1991-01-01
A generalization of the one-group Spectral Green's Function (SGF) method is developed for multigroup, slab-geometry discrete ordinates (SN) problems. The multigroup SGF method is free from spatial truncation errors; it generates numerical values for the cell-edge and cell-average angular fluxes that agree with the analytic solution of the multigroup SN equations. Numerical results are given to illustrate the method's accuracy.
MEASUREMENT ERROR EFFECT ON THE POWER OF CONTROL CHART FOR ZERO-TRUNCATED POISSON DISTRIBUTION
Directory of Open Access Journals (Sweden)
Ashit Chakraborty
2013-09-01
Full Text Available Measurement error is the difference between the true value and the measured value of a quantity; it exists in practice and may considerably affect the performance of control charts in some cases. Measurement error variability has uncertainty which can come from several sources. In this paper, we have studied the effect of these sources of variability on the power characteristics of the control chart and obtained the values of the average run length (ARL) for the zero-truncated Poisson distribution (ZTPD). An expression for the power of the control chart for variable sample size under a standardized normal variate for the ZTPD is also derived.
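The quantities involved can be sketched directly from the definitions: the zero-truncated Poisson pmf P(X=k) = e^{-λ} λ^k / (k! (1 - e^{-λ})) for k ≥ 1, and the in-control ARL of a one-sided chart, ARL = 1/P(signal). The λ and control-limit values below are assumed for illustration; this is not the paper's measurement-error model.

```python
import math

def ztpd_pmf(k, lam):
    """Zero-truncated Poisson: P(X=k) = e^-lam * lam^k / (k! * (1 - e^-lam)), k >= 1."""
    if k < 1:
        return 0.0
    return math.exp(-lam) * lam ** k / (math.factorial(k) * (1.0 - math.exp(-lam)))

def arl(lam, ucl):
    """In-control average run length of a chart signaling when X > UCL:
    ARL = 1 / P(X > UCL)."""
    p_signal = 1.0 - sum(ztpd_pmf(k, lam) for k in range(1, ucl + 1))
    return 1.0 / p_signal

print(arl(2.0, 6))   # expected time between false alarms, in samples
```

Tightening the control limit shortens the in-control ARL (more false alarms), which is the trade-off measurement error perturbs in the paper's analysis.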
DEFF Research Database (Denmark)
Martini, Enrica; Breinbjerg, Olav; Maci, Stefano
2006-01-01
A simple and effective procedure for the reduction of truncation error in planar near-field to far-field transformations is presented. The starting point is the consideration that the actual scan plane truncation implies a reliability of the reconstructed plane wave spectrum of the field radiated...
DEFF Research Database (Denmark)
Martini, Enrica; Breinbjerg, Olav; Maci, Stefano
2008-01-01
A simple and effective procedure for the reduction of truncation errors in planar near-field measurements of aperture antennas is presented. The procedure relies on the consideration that, due to the scan plane truncation, the calculated plane wave spectrum of the field radiated by the antenna is...
Accurate evolutions of inspiralling neutron-star binaries: assessment of the truncation error
International Nuclear Information System (INIS)
Baiotti, Luca; Giacomazzo, Bruno; Rezzolla, Luciano
2009-01-01
We have recently presented an investigation in full general relativity of the dynamics and gravitational-wave emission from binary neutron stars which inspiral and merge, producing a black hole surrounded by a torus (Baiotti et al 2008 Phys. Rev. D 78 084033). We discuss here in more detail the convergence properties of the results presented in Baiotti et al (2008 Phys. Rev. D 78 084033) and, in particular, the deterioration of the convergence rate at the merger and during the survival of the merged object, when strong shocks are formed and turbulence develops. We also show that physically reasonable and numerically convergent results obtained at low resolution nevertheless suffer from large truncation errors and hence are of little physical use. We summarize our findings in an 'error budget', which includes the different sources of possible inaccuracies we have investigated and provides a first quantitative assessment of the precision in the modelling of compact fluid binaries.
International Nuclear Information System (INIS)
Abreu, M.P.; Filho, H.A.; Barros, R.C.
1993-01-01
The authors describe a new nodal method for multigroup slab-geometry discrete ordinates (SN) eigenvalue problems that is completely free from all spatial truncation errors. The unknowns in the method are the node-edge angular fluxes, the node-average angular fluxes, and the effective multiplication factor keff. The numerical values obtained for these quantities are exactly those of the dominant analytic solution of the SN eigenvalue problem, apart from finite arithmetic considerations. This method is based on the use of the standard balance equation and two nonstandard auxiliary equations. In the nonmultiplying regions, e.g., the reflector, we use the multigroup spectral Green's function (SGF) auxiliary equations. In the fuel regions, we use the multigroup spectral diamond (SD) auxiliary equations. The SD auxiliary equation is an extension of the conventional auxiliary equation used in the diamond difference (DD) method. This hybrid characteristic of the SD-SGF method improves both the numerical stability and the convergence rate.
Directory of Open Access Journals (Sweden)
Hesham Mostafa
2017-09-01
Full Text Available Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and memory requirements for the pipelining are drastically reduced. Further reduction in addition operations owing to the sparsity in the forward neural and backpropagating error signal paths contributes to highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan 6 FPGA interfacing with an external 1 Gb DDR2 DRAM, which shows small degradation in test error performance compared to an equivalently sized binary ANN trained off-line using standard backpropagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks.
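The multiplication-free claim follows from the value sets alone: with binary activations h ∈ {0, 1} and ternary errors δ ∈ {-1, 0, +1}, every entry of the outer-product weight update -lr·δ·h is -lr, 0, or +lr, selectable by sign tests. The sketch below illustrates only that arithmetic identity; the dead-zone ternarization rule and threshold are assumptions, not the paper's exact quantizer.

```python
def ternarize(err, threshold):
    """Quantize a backpropagated error vector to {-1, 0, +1} (assumed dead-zone sign rule)."""
    return [0 if abs(e) < threshold else (1 if e > 0 else -1) for e in err]

def multiply_free_update(W, h, delta, lr):
    """Outer-product update dW[i][j] = -lr * delta[j] * h[i], computed without
    multiplications: h is binary and delta is ternary, so each entry is
    -lr, 0, or +lr, chosen by a zero test and a sign test."""
    for i, hi in enumerate(h):
        if hi == 0:
            continue                 # binary activation gates the whole row
        for j, dj in enumerate(delta):
            if dj > 0:
                W[i][j] -= lr        # delta = +1  ->  subtract lr
            elif dj < 0:
                W[i][j] += lr        # delta = -1  ->  add lr
    return W

W = [[0.0, 0.0], [0.0, 0.0]]
delta = ternarize([0.5, -0.01], 0.1)   # -> [1, 0]
multiply_free_update(W, [1, 0], delta, 0.1)
print(W)
```

The update matches the floating-point outer product exactly for these value sets, which is why the hardware needs only adders on the weight-update path.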
A one-time truncate and encode multiresolution stochastic framework
Energy Technology Data Exchange (ETDEWEB)
Abgrall, R.; Congedo, P.M.; Geraci, G., E-mail: gianluca.geraci@inria.fr
2014-01-15
In this work a novel adaptive strategy for stochastic problems, inspired by the classical Harten framework, is presented. The proposed algorithm allows building, in a very general manner, stochastic numerical schemes starting from any type of deterministic scheme, and handles a large class of problems, from unsteady to discontinuous solutions. Its formulation recovers the same results as the interpolation theory of the classical multiresolution approach, extended to uncertainty quantification problems. The present strategy permits building numerical schemes of higher accuracy than other classical uncertainty quantification techniques, with a strong reduction of the numerical cost and memory requirements. Moreover, the flexibility of the proposed approach allows employing any kind of probability density function, even discontinuous and time-varying ones, without introducing further complications in the algorithm. The advantages of the present strategy are demonstrated on several numerical problems where different forms of uncertainty distributions are taken into account, such as discontinuous and unsteady custom-defined probability density functions. In addition to algebraic and ordinary differential equations, numerical results for the challenging 1D Kraichnan–Orszag problem are reported in terms of accuracy and convergence. Finally, a two-degree-of-freedom aeroelastic model for a subsonic case is presented. Though quite simple, the model recovers some key physical aspects of the fluid/structure interaction, thanks to the quasi-steady aerodynamic approximation employed. The injected uncertainty is chosen so as to obtain a complete parameterization of the mass matrix. All the numerical results are compared with a classical Monte Carlo solution and with a non-intrusive Polynomial Chaos method.
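The "truncate and encode" idea inherited from Harten's framework can be shown on one multiresolution level in 1D: keep a coarse grid, predict the fine samples by interpolation, and store only the details (prediction errors) that exceed a tolerance. This is a generic illustration of the mechanism, with linear prediction assumed; it is not the paper's stochastic scheme.

```python
def encode_truncate(values, tol):
    """One Harten-style level: even-index samples form the coarse grid; each odd
    sample is stored only as its detail (sample minus the linear prediction from
    its coarse neighbours), with details below tol truncated to zero."""
    coarse = values[0::2]
    details = []
    for m in range(len(values) // 2):
        pred = 0.5 * (coarse[m] + coarse[min(m + 1, len(coarse) - 1)])
        d = values[2 * m + 1] - pred
        details.append(d if abs(d) >= tol else 0.0)
    return coarse, details

def decode(coarse, details):
    """Rebuild the fine grid from the coarse samples plus the kept details."""
    out = []
    for m, d in enumerate(details):
        pred = 0.5 * (coarse[m] + coarse[min(m + 1, len(coarse) - 1)])
        out.extend([coarse[m], pred + d])
    if len(coarse) > len(details):   # odd-length input keeps a trailing coarse sample
        out.append(coarse[-1])
    return out
```

Because only details smaller than tol are dropped, the reconstruction error is bounded by the truncation tolerance, which is the property that lets the adaptive scheme trade accuracy against cost and memory.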
Truncation of the many body hierarchy and relaxation times in the McKean model
International Nuclear Information System (INIS)
Schmitt, K.J.
1987-01-01
In the McKean model the BBGKY hierarchy is equivalent to a simple hierarchy of coupled equations for the p-particle correlation functions. Truncation effects and the convergence of the one-particle distribution towards its exact shape have been studied. In the long-time limit the equations can be solved in closed form. It turns out that the p-particle correlation decays p times faster than the non-equilibrium one-particle distribution.
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
Time Error Analysis of SOE System Using Network Time Protocol
International Nuclear Information System (INIS)
Keum, Jong Yong; Park, Geun Ok; Park, Heui Youn
2005-01-01
To determine the accuracy of time in the fully digitalized SOE (Sequence of Events) system, we used a formal specification of the Network Time Protocol (NTP) Version 3, which is used to synchronize timekeeping among a set of distributed computers. By constructing a simple experimental environment and experimenting with Internet time synchronization, we analyzed the time errors of the local clocks of the SOE system synchronized with a time server via computer networks.
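The NTP Version 3 arithmetic the abstract relies on fits in one function: from the four timestamps of a request/response exchange, the client computes its clock offset and the round-trip delay. The timestamps below are invented for illustration (client 0.5 s behind the server, 40 ms symmetric paths, 20 ms server processing).

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Standard NTP (RFC 1305, Version 3) calculation:
    t0 = client transmit, t1 = server receive, t2 = server transmit,
    t3 = client receive (t0, t3 on the client clock; t1, t2 on the server clock).
    offset = ((t1 - t0) + (t2 - t3)) / 2
    delay  = (t3 - t0) - (t2 - t1)"""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Client clock 0.5 s behind the server, 40 ms each way, 20 ms server processing:
off, dly = ntp_offset_delay(100.000, 100.540, 100.560, 100.100)
print(off, dly)   # offset ~0.5 s, round-trip delay ~0.08 s
```

The offset estimate is exact when the two path delays are symmetric; path asymmetry is one of the residual error sources such an analysis has to bound.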
Hazelbag, Christijan M.; Klungel, Olaf H.; van Staa, Tjeerd P.; de Boer, Anthonius; Groenwold, Rolf H H
2015-01-01
PURPOSE: To assess the impact of random left truncation of data on the estimation of time-dependent exposure effects. METHODS: A simulation study was conducted in which the relation between exposure and outcome was based on an immediate exposure effect, a first-time exposure effect, or a cumulative
Truncation scheme of time-dependent density-matrix approach II
Energy Technology Data Exchange (ETDEWEB)
Tohyama, Mitsuru [Kyorin University School of Medicine, Mitaka, Tokyo (Japan); Schuck, Peter [Institut de Physique Nucleaire, IN2P3-CNRS, Universite Paris-Sud, Orsay (France); Laboratoire de Physique et de Modelisation des Milieux Condenses, CNRS et Universite Joseph Fourier, Grenoble (France)
2017-09-15
A truncation scheme of the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy for reduced density matrices, where a three-body density matrix is approximated by two-body density matrices, is improved to take into account a normalization effect. The truncation scheme is tested for the Lipkin model. It is shown that the obtained results are in good agreement with the exact solutions. (orig.)
International Nuclear Information System (INIS)
Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee
2003-01-01
In d-MLC based IMRT, leaves move along a trajectory that lies within a user-defined tolerance (TOL) about the ideal trajectory specified in a d-MLC sequence file. The MLC controller measures leaf positions multiple times per second and corrects them if they deviate from ideal positions by a value greater than TOL. The magnitude of leaf-positional errors resulting from finite mechanical precision depends on the performance of the MLC motors executing leaf motions and is generally larger if leaves are forced to move at higher speeds. The maximum value of leaf-positional errors can be limited by decreasing TOL. However, due to the inherent time delay in the MLC controller, this may not happen at all times. Furthermore, decreasing the leaf tolerance results in a larger number of beam hold-offs, which, in turn, leads to a longer delivery time and, paradoxically, to higher chances of leaf-positional errors (≤TOL). On the other hand, the magnitude of leaf-positional errors depends on the complexity of the fluence map to be delivered. Recently, it has been shown that it is possible to determine the actual distribution of leaf-positional errors either by imaging moving MLC apertures with a digital imager or by analyzing a MLC log file saved by a MLC controller. This leads to an important question: what is the relation between the distribution of leaf-positional errors and fluence errors? In this work, we introduce an analytical method to determine this relation in dynamic IMRT delivery. We model MLC errors as Random-Leaf-Positional (RLP) errors described by a truncated normal distribution defined by two characteristic parameters: a standard deviation σ and a cut-off value Δx0 (Δx0 ∼ TOL). We quantify fluence errors for two cases: (i) Δx0 >> σ (unrestricted normal distribution) and (ii) Δx0 << σ (Δx0-limited normal distribution). We show that the average fluence error of an IMRT field is proportional to (i) σ/ALPO and (ii) Δx0/ALPO, respectively, where
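The two regimes of the RLP model can be checked by direct sampling. The sketch below (an illustration of the model, not the authors' code) draws truncated normal errors by rejection: with Δx0 >> σ the mean absolute error approaches that of an unrestricted normal, σ·sqrt(2/π), while with Δx0 << σ the distribution is nearly uniform on [-Δx0, Δx0] and the mean absolute error approaches Δx0/2. All parameter values are assumed.

```python
import random

def sample_rlp_errors(sigma, cutoff, n, seed=1):
    """Random-Leaf-Positional errors: N(0, sigma) truncated to [-cutoff, +cutoff]
    via rejection sampling (illustrative implementation of the model)."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        e = rng.gauss(0.0, sigma)
        if abs(e) <= cutoff:
            out.append(e)
    return out

mean_abs = lambda xs: sum(abs(x) for x in xs) / len(xs)
wide = sample_rlp_errors(sigma=1.0, cutoff=10.0, n=20000)   # cutoff >> sigma
tight = sample_rlp_errors(sigma=10.0, cutoff=1.0, n=20000)  # cutoff << sigma
print(mean_abs(wide), mean_abs(tight))   # ~0.80*sigma vs ~0.5*cutoff
```

This is why the fluence-error scaling switches from being controlled by σ in case (i) to being controlled by Δx0 in case (ii).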
International Nuclear Information System (INIS)
Lydia, Emilio J.; Barros, Ricardo C.
2011-01-01
In this paper we describe a response matrix method for one-speed slab-geometry discrete ordinates (SN) neutral particle transport problems that is completely free from spatial truncation errors. The unknowns in the method are the cell-edge angular fluxes of particles. The numerical results generated for these quantities are exactly those obtained from the analytic solution of the SN problem apart from finite arithmetic considerations. Our method is based on a spectral analysis that we perform in the SN equations with scattering inside a discretization cell of the spatial grid set up on the slab. As a result of this spectral analysis, we are able to obtain an expression for the local general solution of the SN equations. With this local general solution, we determine the response matrix and use the prescribed boundary conditions and continuity conditions to sweep across the discretization cells from left to right and from right to left across the slab, until a prescribed convergence criterion is satisfied. (author)
Time-dependent phase error correction using digital waveform synthesis
Doerry, Armin W.; Buskirk, Stephen
2017-10-10
The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, amplifier power droop can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified, and a corresponding complementary distortion can be applied to the waveform to facilitate negation of the error during subsequent processing of the waveform. A time-domain correction can be applied by a phase error correction look-up table incorporated into a waveform phase generator.
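The pre-distortion principle is a one-liner on complex baseband samples: multiply by exp(-jφerr(t)) so the downstream component's exp(+jφerr(t)) cancels. The chirp and the quadratic droop profile below are assumed for illustration only.

```python
import cmath, math

def predistort(waveform, phase_error):
    """Multiply each complex sample by exp(-j*phi_err) so that a downstream
    stage applying exp(+j*phi_err) (e.g. amplifier power droop) cancels out."""
    return [w * cmath.exp(-1j * p) for w, p in zip(waveform, phase_error)]

n = 256
chirp = [cmath.exp(1j * math.pi * k * k / n) for k in range(n)]  # toy LFM waveform
droop = [0.3 * (k / n) ** 2 for k in range(n)]                   # assumed phase droop, rad

# Pre-distort, then apply the droop as the "amplifier" would:
after_amp = [w * cmath.exp(1j * p) for w, p in zip(predistort(chirp, droop), droop)]
residual = max(abs(cmath.phase(a / c)) for a, c in zip(after_amp, chirp))
print(residual)   # residual phase error, radians
```

In practice the droop profile would come from measurement and be stored in a look-up table indexed by time, exactly as the abstract describes.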
Zero-Error Capacity of a Class of Timing Channels
DEFF Research Database (Denmark)
Kovacevic, M.; Popovski, Petar
2014-01-01
We analyze the problem of zero-error communication through timing channels that can be interpreted as discrete-time queues with bounded waiting times. The channel model includes the following assumptions: 1) time is slotted; 2) at most N particles are sent in each time slot; 3) every particle is ...
Structural damage detection robust against time synchronization errors
International Nuclear Information System (INIS)
Yan, Guirong; Dyke, Shirley J
2010-01-01
Structural damage detection based on wireless sensor networks can be affected significantly by time synchronization errors among sensors. Precise time synchronization of sensor nodes has been viewed as crucial for addressing this issue. However, precise time synchronization over a long period of time is often impractical in large wireless sensor networks due to two inherent challenges. First, time synchronization needs to be performed periodically, requiring frequent wireless communication among sensors at significant energy cost. Second, significant time synchronization errors may result from node failures which are likely to occur during long-term deployment over civil infrastructures. In this paper, a damage detection approach is proposed that is robust against time synchronization errors in wireless sensor networks. The paper first examines the ways in which time synchronization errors distort identified mode shapes, and then proposes a strategy for reducing distortion in the identified mode shapes. Modified values for these identified mode shapes are then used in conjunction with flexibility-based damage detection methods to localize damage. This alternative approach relaxes the need for frequent sensor synchronization and can tolerate significant time synchronization errors caused by node failures. The proposed approach is successfully demonstrated through numerical simulations and experimental tests in a lab
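The way a synchronization error distorts an identified mode shape can be shown in two lines of complex arithmetic: a time offset τ at a sensor rotates its mode-shape component by the phase ωτ at the modal frequency. The correction sketched below (keep the magnitude, take the sign from the real part) is one simple illustrative fix for real-valued mode shapes, not necessarily the paper's exact strategy; the frequency, offset, and shape values are assumed.

```python
import cmath, math

# A sync error tau at sensor 2 appears, at modal frequency w, as a rotation
# exp(-1j*w*tau) of that sensor's identified mode-shape component.
w, tau = 2 * math.pi * 5.0, 0.004          # 5 Hz mode, 4 ms sync error (assumed)
true_shape = [1.0, -0.6]                   # true real-valued mode shape (assumed)
measured = [true_shape[0], true_shape[1] * cmath.exp(-1j * w * tau)]

# Illustrative repair: discard the spurious phase, keep magnitude and sign.
corrected = [math.copysign(abs(c), c.real) for c in measured]
print(corrected)
```

Because only the phase (not the magnitude) of the component is corrupted, a phase-insensitive reduction like this tolerates sync errors that would otherwise bias flexibility-based damage indices.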
Sources of variability and systematic error in mouse timing behavior.
Gallistel, C R; King, Adam; McDonald, Robert
2004-01-01
In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.
Hyvärinen, A
1985-01-01
). This indicates that a substantial part of the variation comes from intralaboratory variation with time rather than from constant interlaboratory differences. Normality and consistency of statistical distributions were best achieved in the long-term intralaboratory sets of the data, under which conditions the statistical estimates of error variability were also most characteristic of the individual laboratories rather than necessarily being similar to one another. Mixing of data from different laboratories may give heterogeneous and nonparametric distributions and hence is not advisable.(ABSTRACT TRUNCATED AT 400 WORDS)
ANTI-CORRELATED TIME LAGS IN THE Z SOURCE GX 5-1: POSSIBLE EVIDENCE FOR A TRUNCATED ACCRETION DISK
Energy Technology Data Exchange (ETDEWEB)
Sriram, K.; Choi, C. S. [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of); Rao, A. R., E-mail: astrosriram@yahoo.co.in [Tata Institute of Fundamental Research, Mumbai 400005 (India)
2012-06-01
We investigate the nature of the inner accretion disk in the neutron star source GX 5-1 by making a detailed study of time lags between X-rays of different energies. Using cross-correlation analysis, we found anti-correlated hard and soft time lags of the order of a few tens to a few hundred seconds, and the corresponding intensity states were mostly the horizontal branch (HB) and upper normal branch. Model-independent and model-dependent spectral analyses showed that during these time lags the structure of the accretion disk varied significantly. Both the eastern and western approaches were used to unfold the X-ray continuum, and systematic changes were observed in the soft and hard spectral components. These changes, along with a systematic shift in the frequency of quasi-periodic oscillations (QPOs), make it substantially evident that the geometry of the accretion disk is truncated. A simultaneous energy spectral and power density spectral study shows that the production of the horizontal branch oscillations (HBOs) is closely related to the Comptonizing region rather than the disk component in the accretion disk. We found that as the HBO frequency decreases from the hard apex to the upper HB, the disk temperature increases along with an increase in the coronal temperature, which is in sharp contrast with the changes found in black hole binaries, where a decrease in the QPO frequency is accompanied by a decrease in the disk temperature and a simultaneous increase in the coronal temperature. We discuss the results in the context of re-condensation of coronal material in the inner region of the disk.
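The anti-correlated-lag measurement is a cross-correlation scan: compute the correlation between the soft and hard light curves at each trial lag and locate the most negative value. The toy light curves below (a delayed, inverted sinusoid) are invented to demonstrate the mechanics; the real analysis uses observed count-rate curves.

```python
import math

def cross_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t+lag] over their overlap."""
    n = len(x)
    pairs = [(x[t], y[t + lag]) for t in range(n) if 0 <= t + lag < n]
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    dx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    dy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return num / (dx * dy)

# Toy light curves: hard X-rays are an inverted copy of the soft, delayed 30 bins.
soft = [math.sin(0.1 * t) for t in range(500)]
hard = [-math.sin(0.1 * (t - 30)) for t in range(500)]
best = min(range(-60, 61), key=lambda L: cross_corr(soft, hard, L))
print(best, cross_corr(soft, hard, best))   # lag of strongest anti-correlation
```

The recovered lag (here 30 bins, with correlation near -1) is the analogue of the tens-to-hundreds-of-seconds anti-correlated lags reported for GX 5-1.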
Error Analysis Of Clock Time (T), Declination (*) And Latitude ...
African Journals Online (AJOL)
), latitude (Φ), longitude (λ) and azimuth (A); which are aimed at establishing fixed positions and orientations of survey points and lines on the earth surface. The paper attempts the analysis of the individual and combined effects of error in time ...
Heat conduction errors and time lag in cryogenic thermometer installations
Warshawsky, I.
1973-01-01
Installation practices are recommended that will increase rate of heat exchange between the thermometric sensing element and the cryogenic fluid and that will reduce the rate of undesired heat transfer to higher-temperature objects. Formulas and numerical data are given that help to estimate the magnitude of heat-conduction errors and of time lag in response.
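The two quantities the report tabulates, conduction error and time lag, can be illustrated with a lumped first-order sensor model: a sensor coupled to the fluid through one thermal resistance and to a warm mount through another. All numeric values below are assumed for illustration; they are not the report's data.

```python
# Lumped model: m_c * dT/dt = (T_fluid - T)/R_fluid + (T_mount - T)/R_cond.
m_c = 0.5                        # sensor heat capacity, J/K (assumed)
R_fluid = 2.0                    # sensor-to-fluid thermal resistance, K/W (assumed)
R_cond = 50.0                    # lead/mount conduction resistance, K/W (assumed)
T_fluid, T_mount = 77.0, 300.0   # liquid nitrogen bath vs room-temperature mount, K

g_f, g_m = 1.0 / R_fluid, 1.0 / R_cond
# Steady state: conductance-weighted mean of the two temperatures.
T_ss = (g_f * T_fluid + g_m * T_mount) / (g_f + g_m)
conduction_error = T_ss - T_fluid          # K, reads high by this amount
tau = m_c / (g_f + g_m)                    # time constant (lag), s
print(conduction_error, tau)
```

The model makes the installation advice quantitative: lowering R_fluid (better fluid coupling) and raising R_cond (longer, thinner leads) shrink both the steady-state conduction error and the time constant.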
Directory of Open Access Journals (Sweden)
Laura Marchal-Crespo
2017-06-01
Full Text Available Research on motor learning suggests that training with haptic guidance enhances learning of the timing components of motor tasks, whereas error amplification is better for learning the spatial components. We present a novel mixed guidance controller that combines haptic guidance and error amplification to simultaneously promote learning of the timing and spatial components of complex motor tasks. The controller is realized using a force field around the desired position. This force field has a stable manifold tangential to the trajectory that guides subjects in velocity-related aspects. The force field has an unstable manifold perpendicular to the trajectory, which amplifies the perpendicular (spatial error. We also designed a controller that applies randomly varying, unpredictable disturbing forces to enhance the subjects’ active participation by pushing them away from their “comfort zone.” We conducted an experiment with thirty-two healthy subjects to evaluate the impact of four different training strategies on motor skill learning and self-reported motivation: (i No haptics, (ii mixed guidance, (iii perpendicular error amplification and tangential haptic guidance provided in sequential order, and (iv randomly varying disturbing forces. Subjects trained two motor tasks using ARMin IV, a robotic exoskeleton for upper limb rehabilitation: follow circles with an ellipsoidal speed profile, and move along a 3D line following a complex speed profile. Mixed guidance showed no detectable learning advantages over the other groups. Results suggest that the effectiveness of the training strategies depends on the subjects’ initial skill level. Mixed guidance seemed to benefit subjects who performed the circle task with smaller errors during baseline (i.e., initially more skilled subjects, while training with no haptics was more beneficial for subjects who created larger errors (i.e., less skilled subjects. Therefore, perhaps the high functional
Distance error correction for time-of-flight cameras
Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian
2017-06-01
The measurement accuracy of time-of-flight cameras is limited due to properties of the scene and systematic errors. These errors can accumulate to multiple centimeters, which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of the range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip makes it possible to acquire a large amount of distance measurements for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach, the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
Applications of Fast Truncated Multiplication in Cryptography
Directory of Open Access Journals (Sweden)
Laszlo Hars
2006-12-01
Truncated multiplications compute truncated products, contiguous subsequences of the digits of integer products. For an n-digit multiplication algorithm of time complexity O(n^α), with 1 < α ≤ 2, there is a truncated multiplication algorithm that is a constant factor faster when computing a short enough truncated product. Applying these fast truncated multiplications, several cryptographic long-integer arithmetic algorithms are improved, including integer reciprocals, divisions, Barrett and Montgomery multiplications, and 2n-digit modular multiplication on hardware for n-digit half products. For example, Montgomery multiplication is performed in 2.6 Karatsuba multiplication time.
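The core identity behind a low-order truncated product can be sketched in a few lines of Python (a schoolbook digit loop for clarity; the paper's interest is in subquadratic algorithms, which admit analogous constant-factor-faster truncated variants):

```python
def digits(x, B=10):
    """Base-B digits of x, least significant first."""
    d = []
    while x:
        d.append(x % B)
        x //= B
    return d or [0]

def short_product_low(x, y, n, B=10):
    """Least-significant n digits of x*y, using only the digit
    products with i + j < n; terms with i + j >= n are multiples
    of B**n and cannot affect the result modulo B**n."""
    acc = 0
    for i, xi in enumerate(digits(x, B)):
        for j, yj in enumerate(digits(y, B)):
            if i + j < n:
                acc += xi * yj * B ** (i + j)
    return acc % B ** n

assert short_product_low(987654, 123456, 6) == (987654 * 123456) % 10**6
```

Roughly half of the schoolbook digit products suffice for the low half of the result, which is the constant-factor saving that carries over to faster multiplication algorithms.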
Indirect inference with time series observed with error
DEFF Research Database (Denmark)
Rossi, Eduardo; Santucci de Magistris, Paolo
We analyze the properties of the indirect inference estimator when the observed series are contaminated by measurement error. We show that the indirect inference estimates are asymptotically biased when the nuisance parameters of the measurement error distribution are neglected in the indirect estimation. We propose to solve this inconsistency by jointly estimating the nuisance and the structural parameters. Under standard assumptions, this estimator is consistent and asymptotically normal. A condition for the identification of ARMA plus noise is obtained. The proposed methodology is used to estimate the parameters of continuous-time stochastic volatility models with auxiliary specifications based on realized volatility measures. Monte Carlo simulations show the bias reduction of the indirect estimates obtained when the microstructure noise is explicitly modeled. Finally, an empirical...
How Truncating Are 'Truncating Languages'? Evidence from Russian and German.
Rathcke, Tamara V
Russian and German have previously been described as 'truncating', or cutting off target frequencies of the phrase-final pitch trajectories when the time available for voicing is compromised. However, supporting evidence is rare and limited to only a few pitch categories. This paper reports a production study conducted to document pitch adjustments to linguistic materials, in which the amount of voicing available for the realization of a pitch pattern varies from relatively long to extremely short. Productions of nuclear H+L*, H* and L*+H pitch accents followed by a low boundary tone were investigated in the two languages. The results of the study show that speakers of both 'truncating languages' do not utilize truncation exclusively when accommodating to different segmental environments. On the contrary, they employ several strategies - among them truncation, but also compression and temporal re-alignment - to produce the target pitch categories under increasing time pressure. Given that speakers can systematically apply all three adjustment strategies to produce some pitch patterns (H* L% in German and Russian) while not using truncation in others (H+L* L% particularly in Russian), we question the effectiveness of the typological classification of these two languages as 'truncating'. Moreover, the phonetic detail of truncation varies considerably, both across and within the two languages, indicating that truncation cannot be easily modeled as a unified phenomenon. The results further suggest that the phrase-final pitch adjustments are sensitive to the phonological composition of the tonal string and the status of a particular tonal event (associated vs. boundary tone), and do not apply to falling vs. rising pitch contours across the board, as previously put forward for German. Implications for the intonational phonology and prosodic typology are addressed in the discussion. © 2017 S. Karger AG, Basel.
Relationship between Brazilian airline pilot errors and time of day.
de Mello, M T; Esteves, A M; Pires, M L N; Santos, D C; Bittencourt, L R A; Silva, R S; Tufik, S
2008-12-01
Flight safety is one of the most important and frequently discussed issues in aviation. Recent accident inquiries have raised questions as to how the work of flight crews is organized and the extent to which these conditions may have been contributing factors to accidents. Fatigue is based on physiologic limitations, which are reflected in performance deficits. The purpose of the present study was to provide an analysis of the periods of the day in which pilots working for a commercial airline presented major errors. Errors made by 515 captains and 472 co-pilots were analyzed using data from flight operation quality assurance systems. To analyze the times of day (shifts) during which incidents occurred, we divided the light-dark cycle (24:00) into four periods: morning, afternoon, night, and early morning. The differences of risk during the day were reported as the ratio of morning to afternoon, morning to night and morning to early morning error rates. For the purposes of this research, level 3 events alone were taken into account, since these were the most serious in which company operational limits were exceeded or when established procedures were not followed. According to airline flight schedules, 35% of flights take place in the morning period, 32% in the afternoon, 26% at night, and 7% in the early morning. Data showed that the risk of errors increased by almost 50% in the early morning relative to the morning period (ratio of 1:1.46). For the period of the afternoon, the ratio was 1:1.04 and for the night a ratio of 1:1.05 was found. These results showed that the period of the early morning represented a greater risk of attention problems and fatigue.
Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E
2011-06-22
Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the CO modelled error amount, a range of error types was simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling
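The distinction between the two error types on the log (multiplicative) scale can be sketched with a minimal Monte Carlo illustration; the series length, mean level, and error standard deviations below are assumed for illustration and are not the study's values:

```python
import math
import random
import statistics

random.seed(0)
n_days = 2000
sigma_x, sigma_u = 0.4, 0.3  # illustrative spreads (assumed, not from the study)

# Reference pollutant series on the log scale (multiplicative error model)
log_true = [random.gauss(math.log(10.0), sigma_x) for _ in range(n_days)]

# Classical-type error: observed = true + noise, so the observed
# series is MORE variable than the truth (regression slopes attenuate).
log_obs = [z + random.gauss(0.0, sigma_u) for z in log_true]

# Berkson-type error: true = assigned + noise, so the assigned
# series is LESS variable than the individual truth.
log_assigned = log_true
log_true_berkson = [z + random.gauss(0.0, sigma_u) for z in log_assigned]

assert statistics.variance(log_obs) > statistics.variance(log_true)
assert statistics.variance(log_assigned) < statistics.variance(log_true_berkson)
```

The variance inequalities are the mechanism behind the contrasting biases reported in the abstract: classical error inflates exposure variability and pulls risk ratios toward the null, while Berkson error leaves per-unit slopes roughly unbiased but shrinks the IQR of the assigned series.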
Dopamine reward prediction errors reflect hidden state inference across time
Starkweather, Clara Kwon; Babayan, Benedicte M.; Uchida, Naoshige; Gershman, Samuel J.
2017-01-01
Midbrain dopamine neurons signal reward prediction error (RPE), or actual minus expected reward. The temporal difference (TD) learning model has been a cornerstone in understanding how dopamine RPEs could drive associative learning. Classically, TD learning imparts value to features that serially track elapsed time relative to observable stimuli. In the real world, however, sensory stimuli provide ambiguous information about the hidden state of the environment, leading to the proposal that TD learning might instead compute a value signal based on an inferred distribution of hidden states (a ‘belief state’). In this work, we asked whether dopaminergic signaling supports a TD learning framework that operates over hidden states. We found that dopamine signaling exhibited a striking difference between two tasks that differed only with respect to whether reward was delivered deterministically. Our results favor an associative learning rule that combines cached values with hidden state inference. PMID:28263301
New results to BDD truncation method for efficient top event probability calculation
International Nuclear Information System (INIS)
Mo, Yuchang; Zhong, Farong; Zhao, Xiangfu; Yang, Quansheng; Cui, Gang
2012-01-01
A Binary Decision Diagram (BDD) is a graph-based data structure that calculates an exact top event probability (TEP). It has been a very difficult task to develop an efficient BDD algorithm that can solve a large problem, since its memory consumption is very high. Recently, in order to solve a large reliability problem within limited computational resources, Jung presented an efficient method to maintain a small BDD size by a BDD truncation during a BDD calculation. In this paper, it is first identified that Jung's BDD truncation algorithm can be improved for a more practical use. Then, a more efficient truncation algorithm is proposed, which can generate a truncated BDD of smaller size and an approximate TEP with smaller truncation error. Empirical results showed this new algorithm uses slightly less running time and slightly more storage than Jung's algorithm. It was also found that designing a truncation algorithm with ideal features for every possible fault tree is very difficult, if not impossible. The so-called ideal features of this paper would be that, as the truncation limit decreases, the size of the truncated BDD converges to the size of the exact BDD, but is never larger than the exact BDD.
Silva-Romo, Gilberto; Mendoza-Rosales, Claudia Cristina; Campos-Madrigal, Emiliano; Hernández-Marmolejo, Yoalli Bianii; de la Rosa-Mora, Orestes Antonio; de la Torre-González, Alam Israel; Bonifacio-Serralde, Carlos; López-García, Nallely; Nápoles-Valenzuela, Juan Ivan
2018-04-01
In the central sector of the Sierra Madre del Sur in Southern Mexico, between approximately 36 and 16 Ma ago and progressing from west to east, a diachronic process of formation of ∼north-south trending fault-bounded basins occurred. No tectono-sedimentary event is recognized in the study region in the period between 25 and 20 Ma, the period during which subduction erosion has been proposed to have truncated the continental crust of southern Mexico. The chronology, geometry and style of the formation of the Eocene-Miocene fault-bounded basins are more congruent with crustal truncation by the detachment of the Chortís block, thus bringing into question the subduction-erosion hypothesis for the Southern Mexico margin. Between Taxco and Tehuacán, using seven new laser ablation-inductively coupled plasma mass spectrometry (LA-ICP-MS) U-Pb ages on magmatic zircons, we refine the stratigraphy of the Tepenene, Tehuitzingo, Atzumba and Tepelmeme basins. The analyzed basins present similar tectono-sedimentary evolutions, as follows: Stage 1, depocenter formation and filling by clastic rocks accumulated as alluvial fans; and Stage 2, lacustrine sedimentation characterized by calcareous and/or evaporite beds. Based on our results, we propose the following hypothesis: in Southern Mexico, during Eocene-Miocene times, the diachronic formation of fault-bounded basins with a general north-south trend occurred within the framework of the convergence between the North and South America plates, with the basins forming in the continental crust left behind as the Chortís block slipped towards the east. On the other hand, the beginning of the basin-formation process, related to left-lateral strike-slip faults during Eocene-Oligocene times, can be associated with a thermomechanical maturation process that caused the brittle/ductile transition level in the continental crust to shallow.
Space, time, and the third dimension (model error)
Moss, Marshall E.
1979-01-01
The space-time tradeoff of hydrologic data collection (the ability to substitute spatial coverage for temporal extension of records or vice versa) is controlled jointly by the statistical properties of the phenomena that are being measured and by the model that is used to meld the information sources. The control exerted on the space-time tradeoff by the model and its accompanying errors has seldom been studied explicitly. The technique, known as Network Analyses for Regional Information (NARI), permits such a study of the regional regression model that is used to relate streamflow parameters to the physical and climatic characteristics of the drainage basin. The NARI technique shows that model improvement is a viable and sometimes necessary means of improving regional data collection systems. Model improvement provides an immediate increase in the accuracy of regional parameter estimation and also increases the information potential of future data collection. Model improvement, which can only be measured in a statistical sense, cannot be quantitatively estimated prior to its achievement; thus an attempt to upgrade a particular model entails a certain degree of risk on the part of the hydrologist.
Directory of Open Access Journals (Sweden)
Qin Guo-jie
2014-08-01
Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing a cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse response (FIR) filter structure. The correction method for the interpolation compensation filter coefficients is deduced. A 4 GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique effectively attenuates the spurs and improves the dynamic performance of the system.
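The idea of compensating a fixed sample-time offset with an interpolation FIR can be sketched as follows (a 4-tap cubic-Lagrange fractional-delay filter rather than the paper's cubic-spline design; the test-tone frequency and skew values below are made up for illustration):

```python
import math

def cubic_frac_delay(x, d):
    """Evaluate x at fractional offset d (0 <= d < 1) past each sample,
    using a 4-tap cubic-Lagrange FIR; interior samples only."""
    h = [-d * (d - 1) * (d - 2) / 6,        # tap on x[n-1]
         (d + 1) * (d - 1) * (d - 2) / 2,   # tap on x[n]
         -(d + 1) * d * (d - 2) / 2,        # tap on x[n+1]
         (d + 1) * d * (d - 1) / 6]         # tap on x[n+2]
    return [sum(h[k] * x[n - 1 + k] for k in range(4))
            for n in range(1, len(x) - 2)]

skew = 0.2   # channel's sample-time error, in sample periods (assumed)
f = 0.02     # test-tone frequency, cycles per sample (assumed)
skewed = [math.sin(2 * math.pi * f * (n + skew)) for n in range(64)]

# Interpolating at offset 1 - skew lands back on the ideal sample grid
corrected = cubic_frac_delay(skewed, 1.0 - skew)
ideal = [math.sin(2 * math.pi * f * (n + 1)) for n in range(1, len(skewed) - 2)]
residual = max(abs(a - b) for a, b in zip(corrected, ideal))
assert residual < 1e-3
```

In an actual time-interleaved ADC the offset d would come from a calibration step rather than being known a priori, and the filter would run only on the skewed channel's samples.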
The timing of spontaneous detection and repair of naming errors in aphasia.
Schuchard, Julia; Middleton, Erica L; Schwartz, Myrna F
2017-08-01
This study examined the timing of spontaneous self-monitoring in the naming responses of people with aphasia. Twelve people with aphasia completed a 615-item naming test twice, in separate sessions. Naming attempts were scored for accuracy and error type, and verbalizations indicating detection were coded as negation (e.g., "no, not that") or repair attempts (i.e., a changed naming attempt). Focusing on phonological and semantic errors, we measured the timing of the errors and of the utterances that provided evidence of detection. The effects of error type and detection response type on error-to-detection latencies were analyzed using mixed-effects regression modeling. We first asked whether phonological errors and semantic errors differed in the timing of the detection process or repair planning. Results suggested that the two error types primarily differed with respect to repair planning. Specifically, repair attempts for phonological errors were initiated more quickly than repair attempts for semantic errors. We next asked whether this difference between the error types could be attributed to the tendency for phonological errors to have a high degree of phonological similarity with the subsequent repair attempts, thereby speeding the programming of the repairs. Results showed that greater phonological similarity between the error and the repair was associated with faster repair times for both error types, providing evidence of error-to-repair priming in spontaneous self-monitoring. When controlling for phonological overlap, significant effects of error type and repair accuracy on repair times were also found. These effects indicated that correct repairs of phonological errors were initiated particularly quickly, whereas repairs of semantic errors were initiated relatively slowly, regardless of their accuracy. We discuss the implications of these findings for theoretical accounts of self-monitoring and the role of speech error repair in learning. Copyright
R Programs for Truncated Distributions
Directory of Open Access Journals (Sweden)
Saralees Nadarajah
2006-08-01
Truncated distributions arise naturally in many practical situations. In this note, we provide programs for computing six quantities of interest (probability density function, mean, variance, cumulative distribution function, quantile function and random numbers) for any truncated distribution: whether it is left truncated, right truncated or doubly truncated. The programs are written in R, a freely downloadable statistical package.
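The note's programs are written in R; three of the same quantities for a doubly truncated standard normal can be sketched in stdlib-only Python (the truncation endpoints below are arbitrary):

```python
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncated_pdf(x, a, b):
    """Density of a standard normal truncated to [a, b]:
    the parent density renormalized by the mass on [a, b]."""
    if not (a <= x <= b):
        return 0.0
    return norm_pdf(x) / (norm_cdf(b) - norm_cdf(a))

def truncated_cdf(x, a, b):
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return (norm_cdf(x) - norm_cdf(a)) / (norm_cdf(b) - norm_cdf(a))

def truncated_quantile(p, a, b, tol=1e-10):
    """Inverse of truncated_cdf by bisection on [a, b]."""
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if truncated_cdf(mid, a, b) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

assert abs(truncated_cdf(1.5, -1.0, 1.5) - 1.0) < 1e-12
```

Renormalization raises the density everywhere inside the truncation interval, and the quantile function only needs the CDF, which is why a generic bisection suffices for any truncated distribution.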
de Kruijf, Marcel; Coffey, Aidan; O'Mahony, Jim
2017-05-01
The inability of Mycobacterium avium subspecies paratuberculosis (MAP) to produce endogenous mycobactin in vitro is most likely due to the presence of a truncated mbtA gene within the mycobactin cluster of MAP. The main goal of this study was to investigate this unique mbtA truncation as a potential novel PCR diagnostic marker for MAP. Novel primers were designed that were located within the truncated region and the contiguous MAP2179 gene. Primers were evaluated against non-MAP isolates and no amplicons were generated. The detection limit of this mbtA-MAP2179 target was evaluated using a range of MAP DNA concentrations, MAP inoculated faecal material and 20 MAP isolates. The performance of mbtA-MAP2179 was compared to the established f57 target. The detection limits recorded for MAP K-10 DNA and for MAP K-10 inoculated faecal samples were 0.34 pg and 10^4 CFU/g, respectively, for both f57 and mbtA-MAP2179. A detection limit of 10^3 CFU/g was recorded for both targets, but not achieved consistently. The detection of MAP from inoculated faecal material was successful at 10^3 CFU/g for mbtA-MAP2179 when FAM probe real-time PCR was used. A MAP cell concentration of 10^2 CFU/g was detected successfully, but again not consistently achieved. All 20 mycobacterial isolates were successfully identified as MAP by f57 and mbtA-MAP2179. Interestingly, the mbtA-MAP2179 real-time PCR assay resulted in the formation of a unique melting curve profile that contained two melting curve peaks rather than one single peak. This melting curve phenomenon was attributed to the asymmetrical GC% distribution within the mbtA-MAP2179 amplicon. This study investigated the implementation of the mbtA-MAP2179 target as a novel diagnostic marker, and the detection limits obtained with mbtA-MAP2179 were comparable to the established f57 target, making mbtA-MAP2179 an adequate confirmatory target. Moreover, the mbtA-MAP2179 target could be implemented in multiplex real-time PCR assays and
Methods of Run-Time Error Detection in Distributed Process Control Software
DEFF Research Database (Denmark)
Drejer, N.
In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types, it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design of the error detection methods includes a high-level software specification, which has the purpose of illustrating that the design can be used in practice.
CLIM : A cross-level workload-aware timing error prediction model for functional units
Jiao, Xun; Rahimi, Abbas; Jiang, Yu; Wang, Jianguo; Fatemi, Hamed; De Gyvez, Jose Pineda; Gupta, Rajesh K.
2018-01-01
Timing errors, caused by timing violations of sensitized circuit paths, have emerged as an important threat to the reliability of synchronous digital circuits. To protect circuits from these timing errors, designers typically use a conservative timing margin, which leads to operational
Anticipating cognitive effort: roles of perceived error-likelihood and time demands.
Dunn, Timothy L; Inzlicht, Michael; Risko, Evan F
2017-11-13
Why are some actions evaluated as effortful? In the present set of experiments we address this question by examining individuals' perception of effort when faced with a trade-off between two putative cognitive costs: how much time a task takes vs. how error-prone it is. Specifically, we were interested in whether individuals anticipate engaging in a small amount of hard work (i.e., low time requirement, but high error-likelihood) vs. a large amount of easy work (i.e., high time requirement, but low error-likelihood) as being more effortful. In between-subject designs, Experiments 1 through 3 demonstrated that individuals anticipate options that are high in perceived error-likelihood (yet less time consuming) as more effortful than options that are perceived to be more time consuming (yet low in error-likelihood). Further, when asked to evaluate which of the two tasks was (a) more effortful, (b) more error-prone, and (c) more time consuming, effort-based and error-based choices closely tracked one another, but this was not the case for time-based choices. Utilizing a within-subject design, Experiment 4 demonstrated an overall pattern of judgments similar to that of Experiments 1 through 3. However, both judgments of error-likelihood and time demand similarly predicted effort judgments. Results are discussed within the context of extant accounts of cognitive control, with considerations of how error-likelihood and time demands may independently and conjunctively factor into judgments of cognitive effort.
Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.
Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J
2012-08-01
Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.
Design Margin Elimination Through Robust Timing Error Detection at Ultra-Low Voltage
Reyserhove, Hans; Dehaene, Wim
2017-01-01
This paper discusses a timing error masking-aware ARM Cortex M0 microcontroller system. Through in-path timing error detection, operation at the point-of-first-failure is possible without corrupting the pipeline state, effectively eliminating traditional timing margins. Error events are flagged and gathered to allow dynamic voltage scaling. The error-aware microcontroller was implemented in a 40 nm CMOS process and realizes ultra-low voltage operation down to 0.29 V at 5 MHz consuming 12.90p...
Period, epoch, and prediction errors of ephemerides from continuous sets of timing measurements
Deeg, H. J.
2015-06-01
Space missions such as Kepler and CoRoT have led to large numbers of eclipse or transit measurements in nearly continuous time series. This paper shows how to obtain the period error in such measurements from a basic linear least-squares fit, and how to correctly derive the timing error in the prediction of future transit or eclipse events. Assuming strict periodicity, a formula for the period error of these time series is derived, σP = σT (12/(N³ − N))^(1/2), where σP is the period error, σT the timing error of a single measurement, and N the number of measurements. Compared to the iterative method for period error estimation by Mighell & Plavchan (2013), this much simpler formula leads to smaller period errors, whose correctness has been verified through simulations. For the prediction of times of future periodic events, the usual linear ephemerides, in which the epoch error is quoted for the first time measurement, are prone to an overestimation of the error of that prediction. This may be avoided by a correction for the duration of the time series. An alternative is the derivation of ephemerides whose reference epoch and epoch error are given for the centre of the time series. For long continuous or near-continuous time series whose acquisition is completed, such central epochs should be the preferred way for the quotation of linear ephemerides. While this work was motivated by the analysis of eclipse timing measures in space-based light curves, it should be applicable to any other problem with an uninterrupted sequence of discrete timings for which the determination of a zero point, of a constant period and of the associated errors is needed.
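The period-error formula follows from the variance of the slope in an ordinary least-squares fit of timings against cycle number, since Σ(i − ī)² = N(N² − 1)/12 for cycle numbers i = 0, …, N−1. A minimal check in Python:

```python
import math

def period_error(sigma_t, n):
    """sigma_P = sigma_T * sqrt(12 / (N**3 - N)) for N equally weighted timings."""
    return sigma_t * math.sqrt(12.0 / (n ** 3 - n))

# The OLS slope error is sigma / sqrt(Sxx), with Sxx = sum_i (i - mean)^2,
# which for i = 0..N-1 equals N*(N**2 - 1)/12 -- the same expression.
n = 50
sxx = sum((i - (n - 1) / 2.0) ** 2 for i in range(n))
assert abs(period_error(1.0, n) - 1.0 / math.sqrt(sxx)) < 1e-12
```

The N^(-3/2) scaling is why long, uninterrupted timing series pin down the period so much better than a pair of widely separated epochs with the same individual timing error.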
Clustered survival data with left-truncation
DEFF Research Database (Denmark)
Eriksson, Frank; Martinussen, Torben; Scheike, Thomas H.
2015-01-01
Left-truncation occurs frequently in survival studies, and it is well known how to deal with this for univariate survival times. However, there are few results on how to estimate dependence parameters and regression effects in semiparametric models for clustered survival data with delayed entry. Surprisingly, existing methods only deal with special cases. In this paper, we clarify different kinds of left-truncation and suggest estimators for semiparametric survival models under specific truncation schemes. The large-sample properties of the estimators are established. Small-sample properties...
On the determinants of measurement error in time-driven costing
Cardinaels, E.; Labro, E.
2008-01-01
Although time estimates are used extensively for costing purposes, they are prone to measurement error. In an experimental setting, we research how measurement error in time estimates varies with: (1) the level of aggregation in the definition of costing system activities (aggregated or
Analysis of truncation limit in probabilistic safety assessment
International Nuclear Information System (INIS)
Cepin, Marko
2005-01-01
A truncation limit defines the boundary between what is considered in the probabilistic safety assessment and what is neglected. The truncation limit in focus here is the cutoff on the size of the minimal cut set contribution. A new method was developed that defines the truncation limit in probabilistic safety assessment. The method specifies truncation limits more stringently than existing documents dealing with truncation criteria in probabilistic safety assessment do. The results of this paper indicate that the truncation limits for more complex probabilistic safety assessments, which consist of a larger number of basic events, should be more severe than presently recommended in existing documents if more accuracy is desired. The truncation limits defined by the new method reduce the relative errors of importance measures and produce more accurate results for probabilistic safety assessment applications. The reduced relative errors of importance measures can prevent situations where the acceptability of a change to the equipment under investigation, judged according to RG 1.174, would be shifted from the region where changes can be accepted to the region where changes cannot be accepted if the results were calculated with a smaller truncation limit.
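As a toy illustration of such a cutoff (the cut sets and basic-event probabilities below are invented, not from the paper), the truncation limit discards minimal cut sets whose probability falls below it, which biases the rare-event top-event estimate low:

```python
# Hypothetical fault-tree fragment: minimal cut sets over basic events A-D.
def cutset_prob(basic_probs, cutset):
    p = 1.0
    for event in cutset:
        p *= basic_probs[event]
    return p

def top_event_prob(basic_probs, cutsets, truncation_limit=0.0):
    """Rare-event approximation: sum of retained cut set probabilities,
    plus the probability mass discarded by the truncation limit."""
    probs = [cutset_prob(basic_probs, cs) for cs in cutsets]
    kept = [p for p in probs if p >= truncation_limit]
    return sum(kept), sum(probs) - sum(kept)

basic = {"A": 1e-3, "B": 2e-3, "C": 5e-4, "D": 1e-2}
cutsets = [("A",), ("B", "C"), ("C", "D"), ("A", "B", "D")]
exact, _ = top_event_prob(basic, cutsets)
approx, lost = top_event_prob(basic, cutsets, truncation_limit=1e-5)
```

A coarser limit always underestimates the top-event probability, and importance measures computed from the truncated result inherit that error.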
The importance of time-stepping errors in ocean models
Williams, P. D.
2011-12-01
Many ocean models use leapfrog time stepping. The Robert-Asselin (RA) filter is usually applied after each leapfrog step, to control the computational mode. However, it will be shown in this presentation that the RA filter generates very large amounts of numerical diapycnal mixing. In some ocean models, the numerical diapycnal mixing from the RA filter is as large as the physical diapycnal mixing. This lowers our confidence in the fidelity of the simulations. In addition to the above problem, the RA filter also damps the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the RA filter does not conserve the mean state, averaged over the three time slices on which it operates. The presenter has recently proposed a simple modification to the RA filter, which does conserve the three-time-level mean state. The modified filter has become known as the Robert-Asselin-Williams (RAW) filter. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the numerical damping of the physical solution and increases the amplitude accuracy by two orders, yielding third-order accuracy. The phase accuracy is unaffected and remains second-order. The RAW filter can easily be incorporated into existing models of the ocean, typically via the insertion of just a single line of code. Better simulations are obtained, at almost no additional computational expense. Results will be shown from recent implementations of the RAW filter in various ocean models. For example, in the UK Met Office Hadley Centre ocean model, sea-surface temperature and sea-ice biases in the North Atlantic Ocean are found to be reduced. These improvements are encouraging for the use of the RAW filter in other ocean models.
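A minimal sketch of the mechanism described above, on an assumed linear test problem (my choice of test case, not the presenter's): leapfrog integration of the oscillation equation dx/dt = iωx, filtered after each step with the Robert-Asselin (RA) filter, and with Williams' RAW modification that also corrects the newest time level so the three-level mean is conserved.

```python
import numpy as np

def integrate(omega, dt, nsteps, nu=0.2, alpha=1.0):
    """alpha = 1.0 gives the classical RA filter; alpha ~ 0.53 is a
    typical RAW choice."""
    x_prev = 1.0 + 0.0j                 # exact solution exp(i*omega*t) at t = 0
    x_curr = np.exp(1j * omega * dt)    # exact value at t = dt
    for _ in range(nsteps - 1):
        x_next = x_prev + 2j * omega * dt * x_curr   # leapfrog step
        d = 0.5 * (x_prev - 2.0 * x_curr + x_next)   # filter displacement
        x_curr = x_curr + nu * alpha * d             # RA part (middle level)
        x_next = x_next - nu * (1.0 - alpha) * d     # RAW correction (newest level)
        x_prev, x_curr = x_curr, x_next
    return x_curr

omega, dt, n = 1.0, 0.2, 200
exact = np.exp(1j * omega * dt * n)
err_ra = abs(integrate(omega, dt, n, alpha=1.0) - exact)
err_raw = abs(integrate(omega, dt, n, alpha=0.53) - exact)
```

With these settings the RA run shows the amplitude damping discussed in the abstract, while the RAW run stays much closer to the exact oscillation; the change is literally the single extra line updating `x_next`.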
Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.
Wei, Qinglai; Li, Benkai; Song, Ruizhuo
2018-04-01
In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described. This is the first time that approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, showing that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
Real-time detection and elimination of nonorthogonality error in interference fringe processing
International Nuclear Information System (INIS)
Hu Haijiang; Zhang Fengdeng
2011-01-01
In interference fringe measurement systems, the nonorthogonality error is a main error source that influences the precision and accuracy of the measurement system, and its detection and elimination have been an important goal. A novel method that uses only zero-crossing detection and counting is proposed to detect and eliminate the nonorthogonality error in real time. Because it invokes no trigonometric or inverse trigonometric functions, the method can be realized simply with digital logic devices, and it can be widely used in bidirectional subdivision systems for Moiré fringes and other optical instruments.
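The zero-crossing idea can be illustrated with synthetic quadrature signals (all signal parameters below are assumed for illustration). Channel A is sin(θ) and channel B is cos(θ + δ), where δ is the nonorthogonality error; δ is recovered purely from zero-crossing times, with no trigonometric or inverse trigonometric calls on the samples.

```python
import numpy as np

def zero_crossings(sig, t):
    """Linearly interpolated times of rising (negative-to-positive) zero crossings."""
    s = np.signbit(sig)
    idx = np.where(s[:-1] & ~s[1:])[0]
    frac = sig[idx] / (sig[idx] - sig[idx + 1])
    return t[idx] + frac * (t[1] - t[0])

f, delta_true = 50.0, 0.03                  # Hz, radians (assumed values)
t = np.linspace(0, 0.2, 200001)
a = np.sin(2 * np.pi * f * t)
b = np.cos(2 * np.pi * f * t + delta_true)

za, zb = zero_crossings(a, t), zero_crossings(b, t)
T = np.mean(np.diff(za))                    # signal period from channel A
# A rising crossing of B follows one of A by (3/4 - delta/(2*pi)) of a period:
lag = np.mean([zb[np.searchsorted(zb, ta)] - ta for ta in za[:-2]])
delta_est = 2 * np.pi * (0.75 - lag / T)
```

Everything here reduces to comparing crossing counts and times against quarter-period spacing, which is what makes a pure digital-logic implementation plausible.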
Stochastic goal-oriented error estimation with memory
Ackmann, Jan; Marotzke, Jochem; Korn, Peter
2017-11-01
We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
Clinical relevance of and risk factors associated with medication administration time errors
Teunissen, R.; Bos, J.; Pot, H.; Pluim, M.; Kramers, C.
2013-01-01
PURPOSE: The clinical relevance of and risk factors associated with errors related to medication administration time were studied. METHODS: In this explorative study, 66 medication administration rounds were studied on two wards (surgery and neurology) of a hospital. Data on medication errors were
FEM for time-fractional diffusion equations, novel optimal error analyses
Mustapha, Kassem
2016-01-01
A semidiscrete Galerkin finite element method applied to time-fractional diffusion equations with time-space dependent diffusivity on bounded convex spatial domains will be studied. The main focus is on achieving optimal error results with respect to both the convergence order of the approximate solution and the regularity of the initial data. By using novel energy arguments, for each fixed time $t$, optimal error bounds in the spatial $L^2$- and $H^1$-norms are derived for both cases: smooth...
International Nuclear Information System (INIS)
Sanghavi, Suniti; Stephens, Graeme
2015-01-01
In the presence of aerosol and/or clouds, the use of appropriate truncation methods becomes indispensable for accurate but cost-efficient radiative transfer computations. Truncation methods allow the reduction of the large number (usually several hundreds) of Fourier components associated with particulate scattering functions to a more manageable number, thereby making it possible to carry out radiative transfer computations with a modest number of streams. While several truncation methods have been discussed for scalar radiative transfer, few rigorous studies have been made of truncation methods for the vector case. Here, we formally derive the vector form of Wiscombe's delta-m truncation method. Two main sources of error associated with delta-m truncation are identified as the delta-separation error (DSE) and the phase-truncation error (PTE). The view angles most affected by truncation error occur in the vicinity of the direction of exact backscatter. This view geometry occurs commonly in satellite based remote sensing applications, and is hence of considerable importance. In order to deal with these errors, we adapt the δ-fit approach of Hu et al. (2000) [17] to vector radiative transfer. The resulting δBGE-fit is compared with the vectorized delta-m method. For truncation at l=25 of an original phase matrix consisting of over 300 Fourier components, the use of the δBGE-fit minimizes the error due to truncation at these view angles, while practically eliminating error at other angles. We also show how truncation errors have a distorting effect on hyperspectral absorption line shapes. The choice of the δBGE-fit method over delta-m truncation minimizes errors in absorption line depths, thus affording greater accuracy for sensitive retrievals such as those of XCO2 from OCO-2 or GOSAT measurements. - Highlights: • Derives vector form for delta-m truncation method. • Adapts δ-fit truncation approach to vector RTE as δBGE-fit. • Compares truncation
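For the scalar case, the delta-m moment scaling can be sketched with a Henyey-Greenstein phase function, whose Legendre moments are g^l (an analytic stand-in of my choosing; the paper's contribution is the vector form for full phase matrices):

```python
import numpy as np

def delta_m(moments, M):
    """Keep M Legendre moments; delta-scale with truncated fraction
    f = moments[M] (Wiscombe's choice)."""
    f = moments[M]
    return f, (moments[:M] - f) / (1.0 - f)

g = 0.85
chi = g ** np.arange(301)      # HG moments, roughly 300 terms as in the paper
f, chi_trunc = delta_m(chi, M=25)

# delta-m preserves the first M moments via chi_l = f + (1 - f) * chi_trunc_l,
# e.g. the asymmetry parameter (l = 1) is recovered exactly:
g_recovered = f + (1.0 - f) * chi_trunc[1]
```

The forward-peak fraction f is absorbed into a Dirac delta, which is exactly the "delta-separation" step whose residual error (DSE) the abstract discusses.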
Adjoint-Based a Posteriori Error Estimation for Coupled Time-Dependent Systems
Asner, Liya; Tavener, Simon; Kay, David
2012-01-01
We consider time-dependent parabolic problems coupled across a common interface which we formulate using a Lagrange multiplier construction and solve by applying a monolithic solution technique. We derive an adjoint-based a posteriori error representation for a quantity of interest given by a linear functional of the solution. We establish the accuracy of our error representation formula through numerical experimentation and investigate the effect of error in the adjoint solution. Crucially, the error representation affords a distinction between temporal and spatial errors and can be used as a basis for a blockwise time-space refinement strategy. Numerical tests illustrate the efficacy of the refinement strategy by capturing the distinctive behavior of a localized traveling wave solution. The saddle point systems considered here are equivalent to those arising in the mortar finite element technique for parabolic problems. © 2012 Society for Industrial and Applied Mathematics.
Tracking errors in a prototype real-time tumour tracking system
International Nuclear Information System (INIS)
Sharp, Gregory C; Jiang, Steve B; Shimizu, Shinichi; Shirato, Hiroki
2004-01-01
In motion-compensated radiation therapy, radio-opaque markers can be implanted in or near a tumour and tracked in real-time using fluoroscopic imaging. Tracking these implanted markers gives highly accurate position information, except when tracking fails due to poor or ambiguous imaging conditions. This study investigates methods for automatic detection of tracking errors, and assesses the frequency and impact of tracking errors on treatments using the prototype real-time tumour tracking system. We investigated four indicators for automatic detection of tracking errors, and found that the distance between corresponding rays was most effective. We also found that tracking errors cause a loss of gating efficiency of between 7.6% and 10.2%. The incidence of treatment beam delivery during tracking errors was estimated at between 0.8% and 1.25%.
Time-discrete higher order ALE formulations: a priori error analysis
Bonito, Andrea; Kyza, Irene; Nochetto, Ricardo H.
2013-01-01
We derive optimal a priori error estimates for discontinuous Galerkin (dG) time discrete schemes of any order applied to an advection-diffusion model defined on moving domains and written in the Arbitrary Lagrangian Eulerian (ALE) framework. Our
Supervised learning based model for predicting variability-induced timing errors
Jiao, X.; Rahimi, A.; Narayanaswamy, B.; Fatemi, H.; Pineda de Gyvez, J.; Gupta, R.K.
2015-01-01
Circuit designers typically combat variations in hardware and workload by increasing conservative guardbanding that leads to operational inefficiency. Reducing this excessive guardband is highly desirable, but causes timing errors in synchronous circuits. We propose a methodology for supervised
Error Recovery in the Time-Triggered Paradigm with FTT-CAN.
Marques, Luis; Vasconcelos, Verónica; Pedreiras, Paulo; Almeida, Luís
2018-01-11
Data networks are naturally prone to interferences that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design-time. These networks offer prompt error detection, but slow error recovery that can only be compensated with bandwidth overprovisioning. In contrast, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover transmission errors in a time-triggered Controller Area Network (CAN) network, based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and (2) limit the direct and indirect interference in the message set, preserving overall system schedulability. Extensive simulations with multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, when compared with traditional approaches available in the literature based on adding extra pre-defined transmission slots.
Nutt, John G.; Horak, Fay B.
2011-01-01
Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431
Identifying and Correcting Timing Errors at Seismic Stations in and around Iran
International Nuclear Information System (INIS)
Syracuse, Ellen Marie; Phillips, William Scott; Maceira, Monica; Begnaud, Michael Lee
2017-01-01
A fundamental component of seismic research is the use of phase arrival times, which are central to event location, Earth model development, and phase identification, as well as derived products. Hence, the accuracy of arrival times is crucial. However, errors in the timing of seismic waveforms and the arrival times based on them may go unidentified by the end user, particularly when seismic data are shared between different organizations. Here, we present a method used to analyze travel-time residuals for stations in and around Iran to identify time periods that are likely to contain station timing problems. For the 14 stations with the strongest evidence of timing errors lasting one month or longer, timing corrections are proposed to address the problematic time periods. Finally, two additional stations are identified with incorrect locations in the International Registry of Seismograph Stations, and one is found to have erroneously reported arrival times in 2011.
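The screening idea, examining running means of travel-time residuals at a station and flagging periods that depart from the long-term baseline, can be sketched as follows (all values here are invented for illustration, not from the Iranian station data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 365
residuals = rng.normal(0.0, 0.5, n_days)   # seconds: noise-only baseline
residuals[150:210] += 4.0                  # injected 4-s clock error, days 150-209

window = 30                                # roughly month-long running mean
means = np.convolve(residuals, np.ones(window) / window, mode="valid")
baseline = np.median(residuals)            # robust long-term reference
flagged = np.where(np.abs(means - baseline) > 1.5)[0]  # window start indices

suspect_start, suspect_end = int(flagged.min()), int(flagged.max()) + window
```

The flagged span brackets the injected problem period; in practice the proposed correction would be the negative of the mean residual offset over that span.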
Temporal Prediction Errors Affect Short-Term Memory Scanning Response Time.
Limongi, Roberto; Silva, Angélica M
2016-11-01
The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production, where time estimation error is the dependent variable of interest. The perspective of predictive behavior regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and predictive coding, a Bayes-based theory of brain function. We state the hypothesis that our finding could be associated with weak frontostriatal connections and weak striatal activity.
Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.
Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut
2016-01-01
Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.
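To make the error-feedback idea concrete, here is a deliberately simplified linear one-step-ahead predictor that feeds its previous forecast error back as an extra input (this toy is not RPNN-EF, which is a higher-order ridge polynomial network; the series and learning rate are my choices):

```python
import numpy as np

series = np.sin(0.3 * np.arange(400))    # stand-in for a univariate time series

w = np.zeros(4)                          # weights for [1, y[t-1], y[t-2], e[t-1]]
lr, e_prev = 0.05, 0.0
errors = []
for t in range(2, len(series)):
    x = np.array([1.0, series[t - 1], series[t - 2], e_prev])
    y_hat = w @ x                        # one-step-ahead forecast
    e = series[t] - y_hat
    w += lr * e * x                      # LMS weight update
    e_prev = e                           # feed the error back as the next input
    errors.append(e)

rmse_early = float(np.sqrt(np.mean(np.square(errors[:50]))))
rmse_late = float(np.sqrt(np.mean(np.square(errors[-50:]))))
```

The error-feedback input gives the model an ARMA-like recurrent term; on this predictable series the training error shrinks substantially from the early to the late window.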
A Time--Independent Born--Oppenheimer Approximation with Exponentially Accurate Error Estimates
Hagedorn, G A
2004-01-01
We consider a simple molecular-type quantum system in which the nuclei have one degree of freedom and the electrons have two levels. The Hamiltonian has the form \[ H(\epsilon) = -\frac{\epsilon^4}{2}\,\frac{\partial^2}{\partial y^2} + h(y), \] where $h(y)$ is a $2\times 2$ real symmetric matrix. Near a local minimum of an electron level ${\cal E}(y)$ that is not at a level crossing, we construct quasimodes that are exponentially accurate in the square of the Born-Oppenheimer parameter $\epsilon$ by optimal truncation of the Rayleigh-Schrödinger series. That is, we construct $E_\epsilon$ and $\Psi_\epsilon$ such that $\|\Psi_\epsilon\| = O(1)$ and \[ \|(H(\epsilon) - E_\epsilon)\,\Psi_\epsilon\| \le C\,e^{-\Gamma/\epsilon^2} \quad\text{for some } \Gamma > 0. \]
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
2014-01-01
An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV...
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
DEFF Research Database (Denmark)
Kertzscher, Gustavo; Andersen, Claus Erik; Siebert, Frank-André
2011-01-01
treatment errors, including interchanged pairs of afterloader guide tubes and 2–20mm source displacements, were monitored using a real-time fiber-coupled carbon doped aluminum oxide (Al2O3:C) crystal dosimeter that was positioned in the reconstructed tumor region. The error detection capacity was evaluated...
International Nuclear Information System (INIS)
Kertzscher, Gustavo; Andersen, Claus E.; Siebert, Frank-Andre; Nielsen, Soren Kynde; Lindegaard, Jacob C.; Tanderup, Kari
2011-01-01
Background and purpose: The feasibility of a real-time in vivo dosimeter to detect errors has previously been demonstrated. The purpose of this study was to: (1) quantify the sensitivity of the dosimeter to detect imposed treatment errors under well controlled and clinically relevant experimental conditions, and (2) test a new statistical error decision concept based on full uncertainty analysis. Materials and methods: Phantom studies of two gynecological cancer PDR and one prostate cancer HDR patient treatment plans were performed using tandem ring applicators or interstitial needles. Imposed treatment errors, including interchanged pairs of afterloader guide tubes and 2-20 mm source displacements, were monitored using a real-time fiber-coupled carbon doped aluminum oxide (Al2O3:C) crystal dosimeter that was positioned in the reconstructed tumor region. The error detection capacity was evaluated at three dose levels: dwell position, source channel, and fraction. The error criterion incorporated the correlated source position uncertainties and other sources of uncertainty, and it was applied both for the specific phantom patient plans and for a general case (source-detector distance 5-90 mm and position uncertainty 1-4 mm). Results: Out of 20 interchanged guide tube errors, time-resolved analysis identified 17 while fraction level analysis identified two. Channel and fraction level comparisons could leave 10 mm dosimeter displacement errors unidentified. Dwell position dose rate comparisons correctly identified displacements ≥5 mm. Conclusion: This phantom study demonstrates that Al2O3:C real-time dosimetry can identify applicator displacements ≥5 mm and interchanged guide tube errors during PDR and HDR brachytherapy. The study demonstrates the shortcoming of a constant error criterion and the advantage of a statistical error criterion.
First photoelectron timing error evaluation of a new scintillation detector model
International Nuclear Information System (INIS)
Petrick, N.; Clinthorne, N.H.; Rogers, W.L.; Hero, A.O. III
1991-01-01
In this paper, a previously developed general timing system model for a scintillation detector is experimentally evaluated. The detector consists of a scintillator and a photodetector such as a photomultiplier tube or an avalanche photodiode. The model uses a Poisson point process to characterize the light output from the scintillator. This timing model was used to simulate a BGO scintillator with a Burle 8575 PMT using first photoelectron timing detection. Evaluation of the model consisted of comparing the RMS error from the simulations with the error from the actual detector system. The authors find that the general model compares well with the actual error results for the BGO/8575 PMT detector. In addition, the optimal threshold is found to depend on the energy of the scintillation: in the low-energy part of the spectrum a low threshold is optimal, while for higher-energy pulses the optimal threshold increases.
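The Poisson light model can be sketched in a few lines of Monte Carlo (the decay time is roughly BGO-like, but the photoelectron yields are assumed values, not the authors'): the RMS spread of the first-photoelectron arrival time shrinks as the light yield, i.e. the deposited energy, grows.

```python
import numpy as np

rng = np.random.default_rng(2)
tau = 300.0   # ns, roughly the BGO scintillation decay constant

def kth_pe_time_rms(mean_pe, k=1, trials=5000):
    """RMS spread of the k-th photoelectron arrival time, with the number of
    photoelectrons Poisson-distributed and emission times exponential(tau)."""
    times = []
    for _ in range(trials):
        n = rng.poisson(mean_pe)
        if n >= k:                       # event only triggers with >= k photoelectrons
            arrivals = np.sort(rng.exponential(tau, n))
            times.append(arrivals[k - 1])
    return float(np.std(times))

rms_low = kth_pe_time_rms(mean_pe=20)    # low-energy deposit (assumed yield)
rms_high = kth_pe_time_rms(mean_pe=200)  # high-energy deposit (assumed yield)
```

Raising `k` in this sketch mimics raising the threshold, which is how the energy-dependent optimal threshold discussed in the abstract can be explored.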
First photoelectron timing error evaluation of a new scintillation detector model
International Nuclear Information System (INIS)
Petrick, N.; Clinthorne, N.H.; Rogers, W.L.; Hero, A.O. III
1990-01-01
In this paper, a general timing system model for a scintillation detector that was developed earlier is experimentally evaluated. The detector consists of a scintillator and a photodetector such as a photomultiplier tube or an avalanche photodiode. The model uses a Poisson point process to characterize the light output from the scintillator. This timing model was used to simulate a BGO scintillator with a Burle 8575 PMT using first photoelectron timing detection. Evaluation of the model consisted of comparing the RMS error from the simulations with the error from the actual detector system. We find that the general model compares well with the actual error results for the BGO/8575 PMT detector. In addition, the optimal threshold is found to depend on the energy of the scintillation: in the low-energy part of the spectrum we find a low threshold is optimal, while for higher-energy pulses the optimal threshold increases.
On the effect of systematic errors in near real time accountancy
International Nuclear Information System (INIS)
Avenhaus, R.
1987-01-01
Systematic measurement errors have a decisive impact on nuclear materials accountancy. This has been demonstrated on various occasions for a fixed number of inventory periods, i.e. for situations where the overall probability of detection is taken as the measure of effectiveness. In the framework of Near Real Time Accountancy (NRTA), however, such analyses have not yet been performed. In this paper sequential test procedures are considered which are based on the so-called MUF residuals. It is shown that, if the decision maker does not know the systematic error variance, the average run lengths tend towards infinity if this variance is equal to or larger than that of the random error. Furthermore, if the decision maker knows this variance, the average run length under constant loss or diversion is not shorter than that without loss or diversion. These results cast some doubt on the present practice of data evaluation, where systematic errors are tacitly assumed to persist for an infinite time. In fact, information about the time dependence of the variances of these errors has to be gathered so that the efficiency of NRTA evaluation methods can be estimated realistically.
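A sequential test on standardized residuals can be sketched with Page's one-sided CUSUM as a stand-in (the paper's MUF-residual procedures differ in detail; the shift size and threshold below are illustrative, not from the paper):

```python
import numpy as np

def cusum_alarm(residuals, k=0.5, h=8.0):
    """First period at which the one-sided CUSUM statistic exceeds h, else None."""
    s = 0.0
    for t, r in enumerate(residuals):
        s = max(0.0, s + r - k)
        if s > h:
            return t
    return None

rng = np.random.default_rng(3)
no_loss = rng.normal(0.0, 1.0, 200)      # standardized residuals, no diversion
with_loss = rng.normal(1.0, 1.0, 200)    # constant loss shifts the residual mean

t_no = cusum_alarm(no_loss)
t_with = cusum_alarm(with_loss)
```

Under a constant shift the statistic drifts upward and alarms within a few tens of periods, whereas the no-loss sequence typically runs out without an alarm; the average run lengths discussed in the abstract are exactly the expectations of these stopping times.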
Structure and dating errors in the geologic time scale and periodicity in mass extinctions
Stothers, Richard B.
1989-01-01
Structure in the geologic time scale reflects a partly paleontological origin. As a result, ages of Cenozoic and Mesozoic stage boundaries exhibit a weak 28-Myr periodicity that is similar to the strong 26-Myr periodicity detected in mass extinctions of marine life by Raup and Sepkoski. Radiometric dating errors in the geologic time scale, to which the mass extinctions are stratigraphically tied, do not necessarily lessen the likelihood of a significant periodicity in mass extinctions, but do spread the acceptable values of the period over the range 25-27 Myr for the Harland et al. time scale or 25-30 Myr for the DNAG time scale. If the Odin time scale is adopted, acceptable periods fall between 24 and 33 Myr, but are not robust against dating errors. Some indirect evidence from independently-dated flood-basalt volcanic horizons tends to favor the Odin time scale.
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.
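The regression calibration idea, replacing the error-prone measurement by its conditional expectation given the observed value, can be sketched in a linear-outcome toy (the paper develops calibration for Cox models with failure-time outcomes; this shows only the generic attenuation-correction mechanism, with all parameters assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta = 20000, 0.8
x = rng.normal(0, 1, n)                  # true covariate/mediator
w = x + rng.normal(0, 0.7, n)            # error-prone measurement, var_u = 0.49
y = beta * x + rng.normal(0, 1, n)

# Naive slope on w is attenuated by the reliability ratio lambda
beta_naive = np.cov(w, y)[0, 1] / np.var(w)

lam = np.var(x) / (np.var(x) + 0.49)     # sigma_u^2 treated as known here
x_hat = lam * w                          # E[X | W] under normality, zero means
beta_rc = np.cov(x_hat, y)[0, 1] / np.var(x_hat)
```

Regressing on the calibrated value recovers the true slope, while the naive fit is biased toward zero by the factor lambda; the paper's mean-variance and follow-up time calibrations are refinements of this substitution for the induced Cox partial likelihood.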
Properties of truncated multiplicity distributions
International Nuclear Information System (INIS)
Lupia, S.
1995-01-01
Truncation effects on multiplicity distributions are discussed. Observables sensitive to the tail, like factorial moments, factorial cumulants and their ratio, are shown to be strongly affected by truncation. A possible way to overcome this problem by looking at the head of the distribution is suggested. (author)
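The tail sensitivity of factorial moments and cumulants can be checked numerically. A sketch assuming a Poisson multiplicity distribution, for which all normalized factorial moments equal 1 and factorial cumulants of order ≥ 2 vanish; the cutoff n_max plays the role of the truncation:

```python
import math

def pmf_poisson(lam, n_max):
    # Poisson probabilities P(0..n_max); a low n_max emulates truncation
    return [math.exp(-lam) * lam ** n / math.factorial(n) for n in range(n_max + 1)]

def factorial_moments(p, q_max):
    """Normalized factorial moments F_q = <n(n-1)...(n-q+1)> / <n>^q of a pmf."""
    z = sum(p)
    p = [x / z for x in p]                       # renormalize after truncation
    mean = sum(n * x for n, x in enumerate(p))
    F = [1.0]
    for q in range(1, q_max + 1):
        # the product is automatically zero for n < q
        fq = sum(x * math.prod(range(n - q + 1, n + 1)) for n, x in enumerate(p))
        F.append(fq / mean ** q)
    return F

def factorial_cumulants(F):
    # standard recursion between normalized factorial moments and cumulants
    K = [1.0, F[1]]
    for q in range(2, len(F)):
        K.append(F[q] - sum(math.comb(q - 1, m) * K[q - m] * F[m] for m in range(1, q)))
    return K

F_full = factorial_moments(pmf_poisson(10.0, 60), 5)   # essentially untruncated
F_trunc = factorial_moments(pmf_poisson(10.0, 14), 5)  # tail cut at n = 14
```

Truncating the Poisson at n = 14 leaves F_2 nearly unchanged but pulls F_5 well below 1, illustrating why tail-sensitive observables are strongly affected while the head of the distribution is not.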
Properties of truncated multiplicity distributions
Energy Technology Data Exchange (ETDEWEB)
Lupia, S. [Turin Univ. (Italy). Dipt. di Fisica]
1995-12-31
Truncation effects on multiplicity distributions are discussed. Observables sensitive to the tail, like factorial moments, factorial cumulants and their ratio, are shown to be strongly affected by truncation. A possible way to overcome this problem by looking at the head of the distribution is suggested. (author)
Mixtures of truncated basis functions
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2012-01-01
In this paper we propose a framework, called mixtures of truncated basis functions (MoTBFs), for representing general hybrid Bayesian networks. The proposed framework generalizes both the mixture of truncated exponentials (MTEs) framework and the mixture of polynomials (MoPs) framework. Similar t...
Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G
2014-10-01
Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this study, without accounting for measurement error, reports that more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed, including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Near real-time geocoding of SAR imagery with orbit error removal.
Smith, A.J.E.
2003-01-01
When utilizing knowledge of the spacecraft trajectory for near real-time geocoding of Synthetic Aperture Radar (SAR) images, the main problem is that predicted satellite orbits have to be used, which may be in error by several kilometres. As part of the development of a Dutch autonomous mobile
Time-series analysis of Nigeria rice supply and demand: Error ...
African Journals Online (AJOL)
The study examined a time-series analysis of Nigeria rice supply and demand with a view to determining any long-run equilibrium between them using the Error Correction Model approach (ECM). The data used for the study represents the annual series of 1960-2007 (47 years) for rice supply and demand in Nigeria, ...
Directory of Open Access Journals (Sweden)
Michael Short
2017-07-01
Full Text Available Embedded systems consist of one or more processing units which are completely encapsulated by the devices under their control, and they often have stringent timing constraints associated with their functional specification. Previous research has considered the performance of different types of task scheduling algorithm and developed associated timing analysis techniques for such systems. Although preemptive scheduling techniques have traditionally been favored, rapid increases in processor speeds combined with improved insights into the behavior of non-preemptive scheduling techniques have seen an increased interest in their use for real-time applications such as multimedia, automation and control. However when non-preemptive scheduling techniques are employed there is a potential lack of error confinement should any timing errors occur in individual software tasks. In this paper, the focus is upon adding fault tolerance in systems using non-preemptive deadline-driven scheduling. Schedulability conditions are derived for fault-tolerant periodic and sporadic task sets experiencing bounded error arrivals under non-preemptive deadline scheduling. A timing analysis algorithm is presented based upon these conditions and its run-time properties are studied. Computational experiments show it to be highly efficient in terms of run-time complexity and competitive ratio when compared to previous approaches.
A Post-Truncation Parameterization of Truncated Normal Technical Inefficiency
Christine Amsler; Peter Schmidt; Wen-Jen Tsay
2013-01-01
In this paper we consider a stochastic frontier model in which the distribution of technical inefficiency is truncated normal. In standard notation, technical inefficiency u is distributed as N^+ (μ,σ^2). This distribution is affected by some environmental variables z that may or may not affect the level of the frontier but that do affect the shortfall of output from the frontier. We will distinguish the pre-truncation mean (μ) and variance (σ^2) from the post-truncation mean μ_*=E(u) and var...
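The pre- versus post-truncation distinction can be made concrete with the standard truncated-normal moment formulas. A sketch, where λ is the inverse Mills ratio carrying the truncation effect:

```python
import math

def post_truncation_moments(mu, sigma):
    """Post-truncation mean and variance of u ~ N+(mu, sigma^2)."""
    a = mu / sigma
    phi = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)  # standard normal pdf at a
    Phi = 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))         # standard normal cdf at a
    lam = phi / Phi                                          # inverse Mills ratio
    mean = mu + sigma * lam                                  # mu_* = E(u) > mu
    var = sigma ** 2 * (1.0 - lam * (lam + a))
    return mean, var
```

With μ = 0 this reduces to the half-normal case, μ_* = σ√(2/π) and variance σ²(1 − 2/π); environmental variables z that shift the pre-truncation μ therefore move both post-truncation moments.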
GonzáLez, Pablo J.; FernáNdez, José
2011-10-01
Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application to geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR remains atmospheric propagation error, which is why multitemporal interferometric techniques using series of interferograms have been developed. However, none of the standard multitemporal interferometric techniques, namely PS and SB (Persistent Scatterers and Small Baselines, respectively), provides an estimate of its precision. Here, we present a method to compute reliable estimates of the precision of the deformation time series. We implement it for the SB multitemporal interferometric technique (a favorable technique for natural terrains, the most usual target of geophysical applications). The method uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors for the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in the last decades. We detect deformation around Timanfaya volcano (lengthening of the line of sight, i.e., subsidence), where the last eruption occurred in 1730-1736. The deformation closely follows the surface temperature anomalies, indicating that magma crystallization (cooling and contraction) of the 300-year-old shallow magmatic body under Timanfaya volcano is still ongoing.
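The weighting-plus-resampling scheme can be sketched on a toy small-baseline network. This is a hypothetical five-interferogram, four-acquisition example with made-up values; the paper's full pixel-wise weighting is simplified here to per-interferogram variances:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical SB network over 4 acquisition dates: 3 incremental displacements
# each interferogram measures the sum of increments between its two dates
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
n_incr = 3
A = np.zeros((len(pairs), n_incr))
for k, (i, j) in enumerate(pairs):
    A[k, i:j] = 1.0

true_incr = np.array([2.0, -1.0, 0.5])        # mm, assumed truth
sigma = np.array([0.3, 0.3, 0.5, 0.4, 0.4])   # per-interferogram errors (mm)
obs = A @ true_incr + rng.normal(0, sigma)

W = np.diag(1.0 / sigma ** 2)                 # weights from interferogram variances
est = np.linalg.solve(A.T @ W @ A, A.T @ W @ obs)

# Monte Carlo resampling: propagate interferogram errors into the time series
draws = np.array([
    np.linalg.solve(A.T @ W @ A, A.T @ W @ (obs + rng.normal(0, sigma)))
    for _ in range(500)
])
est_err = draws.std(axis=0)                   # estimated error per increment
```

The redundancy of the network (five observations for three unknowns) is what lets the weighted inversion and the resampled spread provide a per-increment precision estimate.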
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV...... application despite the large sample. Unit root tests based on the IV estimator have better finite sample properties in this context....
Guermond, J.-L.; Salgado, Abner J.
2011-01-01
In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established. © 2011 Society for Industrial and Applied Mathematics.
Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation
DEFF Research Database (Denmark)
Tahavori, Maryamsadat; Shaker, Hamid Reza
2012-01-01
A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon time-weighted balanced stochastic model reduction method and singular perturbation model reduction technique. Compared...... by using the concept and properties of the reciprocal systems. The results are further illustrated by two practical numerical examples: a model of CD player and a model of the atmospheric storm track....
Application of a truncated normal failure distribution in reliability testing
Groves, C., Jr.
1968-01-01
Statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimations. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
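The age-dependent characteristic referred to above is visible in the hazard (instantaneous failure) rate. A sketch for a normal time-to-failure distribution truncated at zero; the truncation constant cancels in the ratio f/R, so the untruncated pdf and survival function suffice for t ≥ 0:

```python
import math

def sn_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def sn_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def hazard(t, mu, sigma):
    """Hazard rate of time-to-failure T ~ N(mu, sigma^2) truncated to T >= 0."""
    z = (t - mu) / sigma
    return (sn_pdf(z) / sigma) / (1.0 - sn_cdf(z))
```

The hazard is increasing in t (normal lifetimes have an increasing failure rate), which is the age-dependent behavior that a high-reliability test plan can exploit.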
Model and Reduction of Inactive Times in a Maintenance Workshop Following a Diagnostic Error
Directory of Open Access Journals (Sweden)
T. Beda
2011-04-01
Full Text Available The majority of maintenance workshops in manufacturing factories are hierarchical. This arrangement permits a quick response in the event of a breakdown. The maintenance workshop reacts by evaluating the characteristics of the breakdown. In effect, a diagnostic error at a given level of the decision-making process delays the restoration of the normal operating state. The consequences are not just financial losses, but a loss in customer satisfaction as well. The goal of this paper is to model the inactive time of a maintenance workshop in the case that an unpredicted catalectic breakdown has occurred and a diagnostic error has also been made at a certain level of decision-making during the treatment of the breakdown. We show that the expression obtained for the inactive times depends only on the characteristics of the workshop. Next, we propose a method to reduce the inactive times.
Review of current GPS methodologies for producing accurate time series and their error sources
He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping
2017-05-01
The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand for detecting millimeter to sub-millimeter level ground displacement signals in order to further understand regional-scale geodetic phenomena, hence requiring further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step-by-step and mainly with three different strategies in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automate all phases of the analysis of GPS time series. This idea is greatly motivated by the large number of GPS receivers installed around the world for diverse applications ranging from surveying small deformations of civil engineering structures (e
Neural Network Based Real-time Correction of Transducer Dynamic Errors
Roj, J.
2013-12-01
In order to carry out real-time dynamic error correction of transducers described by a linear differential equation, a novel recurrent neural network was developed. The network structure is based on solving this equation with respect to the input quantity when using the state variables. It is shown that such a real-time correction can be carried out using simple linear perceptrons. Due to the use of a neural technique, knowledge of the dynamic parameters of the transducer is not necessary. Theoretical considerations are illustrated by the results of simulation studies performed for the modeled second order transducer. The most important properties of the neural dynamic error correction, when emphasizing the fundamental advantages and disadvantages, are discussed.
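Stripped of the neural machinery, the underlying idea is solving the transducer's differential equation for the input. A non-neural sketch for a first-order transducer τ·dy/dt + y = x; the paper treats a second-order model and, unlike this sketch, does not require knowledge of τ:

```python
def correct_first_order(y, tau, dt):
    """Estimate input x from output y of a first-order transducer tau*dy/dt + y = x."""
    x_hat = [y[0]]
    for n in range(1, len(y)):
        dydt = (y[n] - y[n - 1]) / dt   # backward-difference derivative of the output
        x_hat.append(y[n] + tau * dydt)
    return x_hat
```

Feeding the sluggish output plus its scaled derivative through the inverse model recovers the step input almost immediately, whereas the raw output still lags by the time constant.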
Minimum Time Trajectory Optimization of CNC Machining with Tracking Error Constraints
Directory of Open Access Journals (Sweden)
Qiang Zhang
2014-01-01
Full Text Available An off-line optimization approach for high-precision minimum-time feedrate in CNC machining is proposed. Besides the ordinarily considered velocity, acceleration, and jerk constraints, a dynamic performance constraint for each servo drive is also considered in this optimization problem to improve the tracking precision along the optimized feedrate trajectory. Tracking error is applied to indicate the servo dynamic performance of each axis. By using variable substitution, the tracking-error-constrained minimum-time trajectory planning problem is formulated as a nonlinear path-constrained optimal control problem. The bang-bang structure of the optimal trajectory is proved in this paper; then a novel constraint-handling method is proposed to realize a convex-optimization-based solution of the nonlinear constrained optimal control problem. A simple ellipse feedrate planning test is presented to demonstrate the effectiveness of the approach. Then the practicability and robustness of the trajectory generated by the proposed approach are demonstrated by a butterfly contour machining example.
A new accuracy measure based on bounded relative error for time series forecasting.
Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M
2017-01-01
Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
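A sketch of UMBRAE as described: each error is bounded relative to a benchmark forecast (the naive forecast is a common choice), the bounded values are averaged, and the mean is unscaled. The tie-handling value of 0.5 when both errors are zero is an assumption of this sketch:

```python
def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error: <1 beats the benchmark, >1 loses to it."""
    braes = []
    for a, f, b in zip(actual, forecast, benchmark):
        e, e_star = abs(a - f), abs(a - b)
        # bounded relative absolute error lies in [0, 1]
        braes.append(e / (e + e_star) if e + e_star > 0 else 0.5)
    mbrae = sum(braes) / len(braes)
    # unscale back; diverges only if the benchmark is perfect everywhere while the forecast errs
    return mbrae / (1.0 - mbrae)
```

A perfect forecast scores 0, a forecast identical to the benchmark scores exactly 1, and no single outlier can dominate, since each bounded term is capped at 1 before averaging.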
Comparing Response Times and Error Rates in a Simultaneous Masking Paradigm
Directory of Open Access Journals (Sweden)
F Hermens
2014-08-01
Full Text Available In simultaneous masking, performance on a foveally presented target is impaired by one or more flanking elements. Previous studies have demonstrated strong effects of the grouping of the target and the flankers on the strength of masking (e.g., Malania, Herzog & Westheimer, 2007). These studies have predominantly examined performance by measuring offset discrimination thresholds, and it is therefore unclear whether other measures of performance provide similar outcomes. A recent study, which examined the role of grouping on error rates and response times in a speeded vernier offset discrimination task similar to that used by Malania et al. (2007), suggested a possible dissociation between the two measures, with error rates mimicking threshold performance, but response times showing differential results (Panis & Hermens, 2014). We here report the outcomes of three experiments examining this possible dissociation, and demonstrate an overall similar pattern of results for error rates and response times across a broad range of mask layouts. Moreover, the pattern of results in our experiments strongly correlates with threshold performance reported earlier (Malania et al., 2007). Our results suggest that outcomes in a simultaneous masking paradigm do not critically depend on the outcome measure used, and therefore provide evidence for a common underlying mechanism.
5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.
2010-01-01
... record keeper errors; time limitations. 1605.22 Section 1605.22 Administrative Personnel FEDERAL... § 1605.22 Claims for correction of Board or TSP record keeper errors; time limitations. (a) Filing claims... after that time, the Board or TSP record keeper may use its sound discretion in deciding whether to...
Lystrom, David J.
1972-01-01
The magnitude, frequency, and types of errors inherent in real-time streamflow data are presented in part I. It was found that real-time data are generally less accurate than are historical data, primarily because real-time data are often used before errors can be detected and corrections applied.
Error analysis of semidiscrete finite element methods for inhomogeneous time-fractional diffusion
Jin, B.; Lazarov, R.; Pasciak, J.; Zhou, Z.
2014-01-01
© 2014 Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. We consider the initial-boundary value problem for an inhomogeneous time-fractional diffusion equation with a homogeneous Dirichlet boundary condition, vanishing initial data and a nonsmooth right-hand side in a bounded convex polyhedral domain. We analyse two semidiscrete schemes based on the standard Galerkin and lumped mass finite element methods. Almost optimal error estimates are obtained for right-hand side data f(x, t) ∈ L^∞(0, T; H^q(Ω)), −1 < q ≤ 1, for both semidiscrete schemes. For the lumped mass method, the optimal L^2(Ω)-norm error estimate requires symmetric meshes. Finally, two-dimensional numerical experiments are presented to verify our theoretical results.
Error analysis of semidiscrete finite element methods for inhomogeneous time-fractional diffusion
Jin, B.
2014-05-30
© 2014 Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. We consider the initial-boundary value problem for an inhomogeneous time-fractional diffusion equation with a homogeneous Dirichlet boundary condition, vanishing initial data and a nonsmooth right-hand side in a bounded convex polyhedral domain. We analyse two semidiscrete schemes based on the standard Galerkin and lumped mass finite element methods. Almost optimal error estimates are obtained for right-hand side data f(x, t) ∈ L^∞(0, T; H^q(Ω)), −1 < q ≤ 1, for both semidiscrete schemes. For the lumped mass method, the optimal L^2(Ω)-norm error estimate requires symmetric meshes. Finally, two-dimensional numerical experiments are presented to verify our theoretical results.
Effects of dating errors on nonparametric trend analyses of speleothem time series
Directory of Open Access Journals (Sweden)
M. Mudelsee
2012-10-01
Full Text Available A fundamental problem in paleoclimatology is to take fully into account the various error sources when examining proxy records with quantitative methods of statistical time series analysis. Records from dated climate archives such as speleothems add extra uncertainty from the age determination to the other sources, which consist of measurement and proxy errors. This paper examines three stalagmite time series of oxygen isotopic composition (δ^{18}O) from two caves in western Germany: the series AH-1 from the Atta Cave and the series Bu1 and Bu4 from the Bunker Cave. These records carry regional information about past changes in winter precipitation and temperature. U/Th and radiocarbon dating reveals that they cover the later part of the Holocene, the past 8.6 thousand years (ka). We analyse centennial- to millennial-scale climate trends by means of nonparametric Gasser–Müller kernel regression. Error bands around fitted trend curves are determined by combining (1) block bootstrap resampling, to preserve the noise properties (shape, autocorrelation) of the δ^{18}O residuals, and (2) timescale simulations (models StalAge and iscam). The timescale error influences on centennial- to millennial-scale trend estimation are not excessively large. We find a "mid-Holocene climate double-swing", from warm to cold to warm winter conditions (6.5 ka to 6.0 ka to 5.1 ka), with warm–cold amplitudes of around 0.5‰ δ^{18}O; this finding is documented by all three records with high confidence. We also quantify the Medieval Warm Period (MWP), the Little Ice Age (LIA) and the current warmth. Our analyses cannot unequivocally support the conclusion that current regional winter climate is warmer than that during the MWP.
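The error-band construction can be sketched for the bootstrap part alone (timescale simulation omitted). This uses a Nadaraya-Watson smoother as a stand-in for Gasser–Müller regression, with arbitrary bandwidth and block length:

```python
import numpy as np

rng = np.random.default_rng(1)

def kernel_trend(t, y, bw):
    # Gaussian-kernel regression; a simple stand-in for Gasser-Mueller weights
    W = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bw) ** 2)
    return (W @ y) / W.sum(axis=1)

def block_bootstrap_band(t, y, bw, block=25, n_boot=200):
    """1-sigma error band for the fitted trend via moving-block bootstrap of residuals."""
    trend = kernel_trend(t, y, bw)
    resid = y - trend
    n = len(y)
    boots = []
    for _ in range(n_boot):
        # resampling contiguous blocks preserves the autocorrelation of the residuals
        starts = rng.integers(0, n - block, size=n // block + 1)
        r = np.concatenate([resid[s:s + block] for s in starts])[:n]
        boots.append(kernel_trend(t, trend + r, bw))
    return trend, np.array(boots).std(axis=0)
```

Resampling whole blocks rather than individual residuals is what keeps the band honest when the proxy noise is autocorrelated; independent resampling would understate the trend uncertainty.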
Schmidt, Maria A; Morgan, Robert
2008-10-01
To investigate bolus timing artifacts that impair depiction of renal arteries at contrast material-enhanced magnetic resonance (MR) angiography and to determine the effect of contrast agent infusion rates on artifact generation. Renal contrast-enhanced MR angiography was simulated for a variety of infusion schemes, assuming both correct and incorrect timing between data acquisition and contrast agent injection. In addition, the ethics committee approved the retrospective evaluation of clinical breath-hold renal contrast-enhanced MR angiographic studies obtained with automated detection of contrast agent arrival. Twenty-two studies were evaluated for their ability to depict the origin of renal arteries in patent vessels and for any signs of timing errors. Simulations showed that a completely artifactual stenosis or an artifactual overestimation of an existing stenosis at the renal artery origin can be caused by timing errors of the order of 5 seconds in examinations performed with contrast agent infusion rates compatible with or higher than those of hand injections. Lower infusion rates make the studies more likely to accurately depict the origin of the renal arteries. In approximately one-third of all clinical examinations, different contrast agent uptake rates were detected on the left and right sides of the body, and thus allowed us to confirm that it is often impossible to optimize depiction of both renal arteries. In three renal arteries, a signal void was found at the origin in a patent vessel, and delayed contrast agent arrival was confirmed. Computer simulations and clinical examinations showed that timing errors impair the accurate depiction of renal artery origins. (c) RSNA, 2008.
Influence of planning time and treatment complexity on radiation therapy errors.
Gensheimer, Michael F; Zeng, Jing; Carlson, Joshua; Spady, Phil; Jordan, Loucille; Kane, Gabrielle; Ford, Eric C
2016-01-01
Radiation treatment planning is a complex process with potential for error. We hypothesized that shorter time from simulation to treatment would result in rushed work and higher incidence of errors. We examined treatment planning factors predictive for near-miss events. Treatments delivered from March 2012 through October 2014 were analyzed. Near-miss events were prospectively recorded and coded for severity on a 0 to 4 scale; only grade 3-4 (potentially severe/critical) events were studied in this report. For 4 treatment types (3-dimensional conformal, intensity modulated radiation therapy, stereotactic body radiation therapy [SBRT], neutron), logistic regression was performed to test influence of treatment planning time and clinical variables on near-miss events. There were 2257 treatment courses during the study period, with 322 grade 3-4 near-miss events. SBRT treatments had more frequent events than the other 3 treatment types (18% vs 11%, P = .04). For the 3-dimensional conformal group (1354 treatments), univariate analysis showed several factors predictive of near-miss events: longer time from simulation to first treatment (P = .01), treatment of primary site versus metastasis (P < .001), longer treatment course (P < .001), and pediatric versus adult patient (P = .002). However, on multivariate regression only pediatric versus adult patient remained predictive of events (P = 0.02). For the intensity modulated radiation therapy, SBRT, and neutron groups, time between simulation and first treatment was not found to be predictive of near-miss events on univariate or multivariate regression. When controlling for treatment technique and other clinical factors, there was no relationship between time spent in radiation treatment planning and near-miss events. SBRT and pediatric treatments were more error-prone, indicating that clinical and technical complexity of treatments should be taken into account when targeting safety interventions. Copyright © 2015 American
Accounting for baseline differences and measurement error in the analysis of change over time.
Braun, Julia; Held, Leonhard; Ledergerber, Bruno
2014-01-15
If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and also, time-dependent covariates can be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question if a joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. Copyright © 2013 John Wiley & Sons, Ltd.
Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick
2007-08-01
To assess the impact of a closed-loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Before-and-after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Closed-loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Prescribing errors were identified in 3.8% of 2450 medication orders pre-intervention and 2.0% of 2353 orders afterwards (p<0.001). Medical staff required 15 s to prescribe a regular inpatient drug pre-intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre-intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; chi(2) test). A closed-loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication-related tasks increased.
Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick
2007-01-01
Objectives To assess the impact of a closed‐loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Design, setting and participants Before‐and‐after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Intervention Closed‐loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Main outcome measures Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Results Prescribing errors were identified in 3.8% of 2450 medication orders pre‐intervention and 2.0% of 2353 orders afterwards (p<0.001). Medical staff required 15 s to prescribe a regular inpatient drug pre‐intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre‐intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ2 test). Conclusions A closed‐loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication‐related tasks increased. PMID:17693676
Time-discrete higher order ALE formulations: a priori error analysis
Bonito, Andrea
2013-03-16
We derive optimal a priori error estimates for discontinuous Galerkin (dG) time discrete schemes of any order applied to an advection-diffusion model defined on moving domains and written in the Arbitrary Lagrangian Eulerian (ALE) framework. Our estimates hold without any restrictions on the time steps for dG with exact integration or Reynolds' quadrature. They involve a mild restriction on the time steps for the practical Runge-Kutta-Radau methods of any order. The key ingredients are the stability results shown earlier in Bonito et al. (Time-discrete higher order ALE formulations: stability, 2013) along with a novel ALE projection. Numerical experiments illustrate and complement our theoretical results. © 2013 Springer-Verlag Berlin Heidelberg.
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John
2017-08-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^{α}, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since it requires the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
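The filter construction mentioned in the abstract (building power-law noise by filtering white noise in the time domain, then forming the data covariance matrix for the Gaussian likelihood) can be sketched as follows. This is an illustrative reconstruction, not Langbein's code; the function names and the Hosking-style recursion for the 1/f^α filter are assumptions based on the standard time-domain treatment of power-law noise.

```python
import numpy as np

def powerlaw_filter(alpha, n):
    """Impulse response of the fractional-integration filter that turns
    white noise into 1/f^alpha power-law noise (Hosking-style recursion)."""
    h = np.ones(n)
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1.0 + alpha / 2.0) / k
    return h

def powerlaw_covariance(alpha, sigma, n):
    """Covariance of pure power-law noise: C = sigma^2 * T T^T, where T is
    the lower-triangular Toeplitz matrix built from the filter."""
    h = powerlaw_filter(alpha, n)
    T = np.zeros((n, n))
    for i in range(n):
        T[i, : i + 1] = h[: i + 1][::-1]
    return sigma**2 * (T @ T.T)

def neg_log_likelihood(resid, C):
    """Gaussian negative log-likelihood of residuals via Cholesky."""
    L = np.linalg.cholesky(C)
    z = np.linalg.solve(L, resid)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * (logdet + z @ z + len(resid) * np.log(2.0 * np.pi))
```

For α = 0 the filter collapses to a unit impulse and the covariance reduces to σ²·I, which is a quick sanity check on the construction.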
Truncated Calogero-Sutherland models
Pittman, S. M.; Beau, M.; Olshanii, M.; del Campo, A.
2017-05-01
A one-dimensional quantum many-body system consisting of particles confined in a harmonic potential and subject to finite-range two-body and three-body inverse-square interactions is introduced. The range of the interactions is set by truncation beyond a number of neighbors and can be tuned to interpolate between the Calogero-Sutherland model and a system with nearest- and next-nearest-neighbor interactions discussed by Jain and Khare. The model also includes the Tonks-Girardeau gas describing impenetrable bosons as well as an extension with truncated interactions. While the ground state wave function takes a truncated Bijl-Jastrow form, collective modes of the system are found in terms of multivariable symmetric polynomials. We numerically compute the density profile, one-body reduced density matrix, and momentum distribution of the ground state as a function of the range r and the interaction strength.
Error characterization for asynchronous computations: Proxy equation approach
Sallai, Gabriella; Mittal, Ankita; Girimaji, Sharath
2017-11-01
Numerical techniques for asynchronous fluid flow simulations are currently under development to enable efficient utilization of massively parallel computers. These numerical approaches attempt to accurately solve the time evolution of transport equations using spatial information at different time levels. The truncation error of asynchronous methods can be divided into two parts: delay dependent (EA), or asynchronous, error and delay independent (ES), or synchronous, error. The focus of this study is a specific asynchronous error mitigation technique called the proxy-equation approach. The aim of this study is to examine these errors as a function of the characteristic wavelength of the solution. Mitigation of asynchronous effects requires that the asynchronous error be smaller than the synchronous truncation error. For a simple convection-diffusion equation, proxy-equation error analysis identifies a critical initial wavenumber, λc. At smaller wavenumbers, synchronous errors are larger than asynchronous errors. We examine various approaches to increase the value of λc in order to improve the range of applicability of the proxy-equation approach.
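A toy illustration of the asynchronous-error idea for the convection-diffusion case: a minimal sketch (not the study's proxy-equation code) in which one grid interface reads its neighbour's value a few time steps late, so the deviation from the fully synchronous run isolates the delay-dependent error EA. All names and parameter values are illustrative.

```python
import numpy as np

def solve_cd(nx=64, nsteps=500, c=1.0, nu=0.01, k=2, delay=0):
    """FTCS scheme for u_t + c u_x = nu u_xx on a periodic domain.
    With delay > 0, the right-hand neighbour of one grid point is read
    'delay' steps late, mimicking an asynchronous processor boundary."""
    dx = 2.0 * np.pi / nx
    dt = 0.01  # satisfies dt <= 2*nu/c**2 and nu*dt/dx**2 <= 1/2 for defaults
    x = dx * np.arange(nx)
    u = np.sin(k * x)                            # single-wavenumber initial condition
    hist = [u.copy() for _ in range(delay + 1)]  # ring buffer of past states
    iface = nx // 2                              # the "processor boundary" point
    for _ in range(nsteps):
        lagged = hist[0]                         # state from 'delay' steps ago
        up = np.roll(u, -1).copy()               # u[i+1]
        um = np.roll(u, 1)                       # u[i-1]
        up[iface] = lagged[(iface + 1) % nx]     # delayed neighbour value
        u = (u - c * dt * (up - um) / (2.0 * dx)
               + nu * dt * (up - 2.0 * u + um) / dx**2)
        hist.pop(0)
        hist.append(u.copy())
    return x, u

# Asynchronous error = deviation of a delayed run from the synchronous run.
_, u_sync = solve_cd(delay=0)
_, u_async = solve_cd(delay=2)
err = float(np.max(np.abs(u_async - u_sync)))
```

With `delay=0` the lagged value equals the current neighbour, so the scheme reduces exactly to the synchronous FTCS update; the comparison above therefore measures only the delay-dependent part of the error.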
Errors in 'BED'-derived estimates of HIV incidence will vary by place, time and age.
Directory of Open Access Journals (Sweden)
Timothy B Hallett
2009-05-01
The BED Capture Enzyme Immunoassay, believed to distinguish recent HIV infections, is being used to estimate HIV incidence, although an important property of the test--how specificity changes with time since infection--has not been measured. We construct hypothetical scenarios for the performance of the BED test, consistent with current knowledge, and explore how this could influence errors in BED estimates of incidence using a mathematical model of six African countries. The model is also used to determine the conditions and the sample sizes required for the BED test to reliably detect trends in HIV incidence. If the chance of misclassification by BED increases with time since infection, the overall proportion of individuals misclassified could vary widely between countries, over time, and across age-groups, in a manner determined by the historic course of the epidemic and the age-pattern of incidence. Under some circumstances, changes in BED estimates over time can approximately track actual changes in incidence, but large sample sizes (50,000+) will be required for recorded changes to be statistically significant. The relationship between BED test specificity and time since infection has not been fully measured, but, if it decreases, errors in estimates of incidence could vary by place, time and age-group. This means that post-assay adjustment procedures using parameters from different populations or at different times may not be valid. Further research is urgently needed into the properties of the BED test, and the rate of misclassification in a wide range of populations.
Phase correction and error estimation in InSAR time series analysis
Zhang, Y.; Fattahi, H.; Amelung, F.
2017-12-01
During the last decade several InSAR time series approaches have been developed in response to the non-ideal acquisition strategy of SAR satellites, such as large spatial and temporal baselines with non-regular acquisitions. The small baseline tubes and regular acquisitions of new SAR satellites such as Sentinel-1 allow us to form fully connected networks of interferograms and simplify the time series analysis into a weighted least squares inversion of an over-determined system. Such robust inversion allows us to focus more on understanding the different components of the InSAR time series and their uncertainties. We present an open-source python-based package for InSAR time series analysis, called PySAR (https://yunjunz.github.io/PySAR/), with unique functionalities for obtaining unbiased ground displacement time series, geometrical and atmospheric correction of InSAR data and quantifying the InSAR uncertainty. Our implemented strategy contains several features including: 1) improved spatial coverage using a coherence-based network of interferograms, 2) unwrapping error correction using phase closure or bridging, 3) tropospheric delay correction using weather models and empirical approaches, 4) DEM error correction, 5) optimal selection of reference date and automatic outlier detection, 6) InSAR uncertainty due to the residual tropospheric delay, decorrelation and residual DEM error, and 7) variance-covariance matrix of final products for geodetic inversion. We demonstrate the performance using SAR datasets acquired by Cosmo-Skymed and TerraSAR-X, Sentinel-1 and ALOS/ALOS-2, with applications to the highly non-linear volcanic deformation in Japan and Ecuador (figure 1). Our result shows precursory deformation before the 2015 eruptions of Cotopaxi volcano, with a maximum uplift of 3.4 cm on the western flank (fig. 1b), with a standard deviation of 0.9 cm (fig. 1a), supporting the finding by Morales-Rivera et al. (2017, GRL); and a post-eruptive subsidence on the same
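The weighted least squares inversion of a fully connected interferogram network mentioned above can be sketched in a few lines. This is a simplified illustration, not PySAR's implementation; the pair convention (each interferogram modelled as phi[j] − phi[i], with the first date fixed as reference) is an assumption.

```python
import numpy as np

def invert_network(ifgs, pairs, n_dates, weights=None):
    """Weighted least-squares inversion of an interferogram network.
    ifgs[k] is the unwrapped phase of pair (i, j) = pairs[k], modelled as
    phi[j] - phi[i]; phi[0] is fixed to 0 as the reference date."""
    A = np.zeros((len(pairs), n_dates - 1))
    for k, (i, j) in enumerate(pairs):
        if i > 0:
            A[k, i - 1] = -1.0
        if j > 0:
            A[k, j - 1] = 1.0
    w = np.ones(len(pairs)) if weights is None else np.asarray(weights, float)
    sw = np.sqrt(w)[:, None]                 # row scaling implements the weighting
    b = np.asarray(ifgs, dtype=float)
    phi, *_ = np.linalg.lstsq(sw * A, np.sqrt(w) * b, rcond=None)
    return np.concatenate(([0.0], phi))
```

With a fully connected network the system is over-determined, so inconsistent (e.g. unwrapping-error-affected) interferograms are averaged out by the least-squares fit rather than propagated directly into the time series.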
Bound on quantum computation time: Quantum error correction in a critical environment
International Nuclear Information System (INIS)
Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.
2010-01-01
We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.
Post-event human decision errors: operator action tree/time reliability correlation
Energy Technology Data Exchange (ETDEWEB)
Hall, R E; Fragola, J; Wreathall, J
1982-11-01
This report documents an interim framework for the quantification of the probability of errors of decision on the part of nuclear power plant operators after the initiation of an accident. The framework can easily be incorporated into an event tree/fault tree analysis. The method presented consists of a structure called the operator action tree and a time reliability correlation which assumes the time available for making a decision to be the dominating factor in situations requiring cognitive human response. This limited approach decreases the magnitude and complexity of the decision modeling task. Specifically, in the past, some human performance models have attempted prediction by trying to emulate sequences of human actions, or by identifying and modeling the information processing approach applicable to the task. The model developed here is directed at describing the statistical performance of a representative group of hypothetical individuals responding to generalized situations.
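A minimal sketch of a time reliability correlation of the kind described: the probability of non-response decays with the time available for a decision. The lognormal response-time distribution and the parameter names are illustrative assumptions, not the report's actual curve.

```python
import math

def nonresponse_probability(t, median_s, sigma):
    """Time-reliability correlation: probability that the correct decision
    has NOT yet been made by time t (seconds), assuming a lognormal
    response-time distribution with the given median and log-scale sigma."""
    if t <= 0:
        return 1.0
    z = (math.log(t) - math.log(median_s)) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # lognormal survival function
```

In an event tree this value would be attached to the operator-action branch for a sequence that allows time t before the action must be complete; by construction the probability is 0.5 when t equals the median response time and falls off for longer available times.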
A Robust Ultra-Low Voltage CPU Utilizing Timing-Error Prevention
Directory of Open Access Journals (Sweden)
Markus Hiienkari
2015-04-01
To minimize energy consumption of a digital circuit, logic can be operated at sub- or near-threshold voltage. Operation in this region is challenging due to device and environment variations, and the resulting performance may not be adequate for all applications. This article presents two variants of a 32-bit RISC CPU targeted for near-threshold voltage. Both CPUs are placed on the same die and manufactured in a 28 nm CMOS process. They employ timing-error prevention with clock stretching to enable operation with minimal safety margins while maximizing performance and energy efficiency at a given operating point. Measurements show minimum energy of 3.15 pJ/cyc at 400 mV, which corresponds to 39% energy saving compared to operation based on static signoff timing.
DEFF Research Database (Denmark)
Kertzscher Schwencke, Gustavo Adolfo Vladimir; Andersen, Claus E.; Tanderup, Kari
2014-01-01
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT), where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time-efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most ..., and the AEDA's capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
Parinussa, R.M.; Meesters, A.G.C.A.; Liu, Y.Y.; Dorigo, W.; Wagner, W.; de Jeu, R.A.M.
2011-01-01
A time-efficient solution to estimate the error of satellite surface soil moisture from the land parameter retrieval model is presented. The errors are estimated using an analytical solution for soil moisture retrievals from this radiative-transfer-based model that derives soil moisture from
Li, Xingxing
2014-05-01
displacements is accompanied by a drift due to the potential uncompensated errors. Li et al. (2013) presented a temporal point positioning (TPP) method to quickly capture coseismic displacements with a single GPS receiver in real-time. The TPP approach can overcome the convergence problem of precise point positioning (PPP), and also avoids the integration and de-trending process of the variometric approach. The performance of TPP is demonstrated to be at few centimeters level of displacement accuracy for even twenty minutes interval with real-time precise orbit and clock products. In this study, we firstly present and compare the observation models and processing strategies of the current existing single-receiver methods for real-time GPS seismology. Furthermore, we propose several refinements to the variometric approach in order to eliminate the drift trend in the integrated coseismic displacements. The mathematical relationship between these methods is discussed in detail and their equivalence is also proved. The impact of error components such as satellite ephemeris, ionospheric delay, tropospheric delay, and geometry change on the retrieved displacements are carefully analyzed and investigated. Finally, the performance of these single-receiver approaches for real-time GPS seismology is validated using 1 Hz GPS data collected during the Tohoku-Oki earthquake (Mw 9.0, March 11, 2011) in Japan. It is shown that few centimeters accuracy of coseismic displacements is achievable. Keywords: High-rate GPS; real-time GPS seismology; a single receiver; PPP; variometric approach; temporal point positioning; error analysis; coseismic displacement; fault slip inversion;
Design of a real-time spectroscopic rotating compensator ellipsometer without systematic errors
Energy Technology Data Exchange (ETDEWEB)
Broch, Laurent, E-mail: laurent.broch@univ-lorraine.fr [Laboratoire de Chimie Physique-Approche Multi-echelle des Milieux Complexes (LCP-A2MC, EA 4632), Universite de Lorraine, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France); Stein, Nicolas [Institut Jean Lamour, Universite de Lorraine, UMR 7198 CNRS, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France); Zimmer, Alexandre [Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR 6303 CNRS, Universite de Bourgogne, 9 avenue Alain Savary BP 47870, F-21078 Dijon Cedex (France); Battie, Yann; Naciri, Aotmane En [Laboratoire de Chimie Physique-Approche Multi-echelle des Milieux Complexes (LCP-A2MC, EA 4632), Universite de Lorraine, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France)
2014-11-28
We describe a spectroscopic ellipsometer in the visible domain (400–800 nm) based on rotating compensator technology using two detectors. The classical analyzer is replaced by a fixed Rochon birefringent beamsplitter which splits the incident light wave into two perpendicularly polarized waves, one oriented at + 45° and the other at − 45° with respect to the plane of incidence. Both emergent optical signals are analyzed by two identical CCD detectors which are synchronized by an optical encoder fixed on the shaft of the step-by-step motor of the compensator. The final spectrum is the result of the two averaged Ψ and Δ spectra acquired by both detectors. We show that Ψ and Δ spectra are acquired without systematic errors over a spectral range from 400 to 800 nm. The acquisition time can be adjusted down to 25 ms. The setup was validated by monitoring the first steps of bismuth telluride film electrocrystallization. The results show that the induced experimental growth parameters, such as film thickness and volumic fraction of deposited material, can be extracted with better trueness. - Highlights: • High-speed rotating compensator ellipsometer equipped with 2 detectors. • Ellipsometric angles without systematic errors. • In-situ monitoring of electrocrystallization of bismuth telluride thin layer. • High accuracy of fitted physical parameters.
Directory of Open Access Journals (Sweden)
Johann A. Briffa
2014-06-01
In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
International Nuclear Information System (INIS)
Gómez de León, F C; Meroño Pérez, P A
2010-01-01
The traditional method for measuring the velocity and the angular vibration in the shaft of rotating machines using incremental encoders is based on counting the pulses at given time intervals. This method is generically called the time interval measurement system (TIMS). A variant of this method that we have developed in this work consists of measuring the corresponding time of each pulse from the encoder and sampling the signal by means of an A/D converter as if it were an analog signal, that is to say, in discrete time. For this reason, we have denominated this method as the discrete time interval measurement system (DTIMS). This measurement system provides a substantial improvement in the precision and frequency resolution compared with the traditional method of counting pulses. In addition, this method permits modification of the width of some pulses in order to obtain a mark-phase on every lap. This paper explains the theoretical fundamentals of the DTIMS and its application for measuring the angular vibrations of rotating machines. It also displays the required relationship between the sampling rate of the signal, the number of pulses of the encoder and the rotating velocity in order to obtain the required resolution and to delimit the methodological errors in the measurement
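The core idea, recovering shaft speed from the measured time of each encoder pulse rather than from pulse counts in a fixed interval, can be sketched as follows (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def angular_velocity_from_pulses(timestamps, pulses_per_rev):
    """Instantaneous shaft speed (rad/s) from the measured time of each
    encoder pulse: one pulse interval spans 2*pi/pulses_per_rev radians."""
    dt = np.diff(np.asarray(timestamps, dtype=float))
    dtheta = 2.0 * np.pi / pulses_per_rev
    return dtheta / dt
```

Because every pulse interval yields a velocity sample, the angular resolution is set by the encoder line count rather than by the counting window, which is what gives the time-of-each-pulse approach its improved frequency resolution over plain pulse counting.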
Klevtsov, S. I.
2018-05-01
The impact of physical factors, such as temperature, leads to changes in the parameters of a technical object. Monitoring these changes is necessary to prevent a dangerous situation, and the control is carried out in real time. To predict the change in a parameter, a time series is used in this paper. Forecasting makes it possible to detect a dangerous change in a parameter before the moment when the change actually occurs, giving the control system more time to prevent a dangerous situation. A simple time series model was chosen, which keeps the algorithm simple; the algorithm is executed in the microprocessor module in the background. The efficiency of using the time series depends on its characteristics, which must be adjusted. In this work, the influence of these characteristics on the prediction error of the controlled parameter was studied, taking the behavior of the parameter into account, and the values of the forecast lag were determined. The results of the research, if applied, will improve the efficiency of monitoring a technical object during its operation.
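As an illustration of forecasting a monitored parameter with a simple time series model, the sketch below uses Holt's double exponential smoothing to produce a forecast several steps ahead. The method choice and the smoothing parameters are assumptions for illustration, since the abstract does not name the exact model used.

```python
def holt_forecast(series, alpha, beta, lag):
    """Holt's double exponential smoothing; returns the forecast 'lag'
    steps ahead, used to flag a dangerous parameter excursion early."""
    level = series[0]
    trend = series[1] - series[0]
    for x in series[1:]:
        prev = level
        level = alpha * x + (1.0 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1.0 - beta) * trend
    return level + lag * trend
```

A monitoring loop would compare `holt_forecast(samples, a, b, lag)` against the danger threshold on every new sample; the recursion is O(1) per sample, which suits background execution on a microprocessor module.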
Vintage errors: do real-time economic data improve election forecasts?
Directory of Open Access Journals (Sweden)
Mark Andreas Kayser
2015-07-01
Economic performance is a key component of most election forecasts. When fitting models, however, most forecasters unwittingly assume that the actual state of the economy, a state best estimated by the multiple periodic revisions to official macroeconomic statistics, drives voter behavior. The difference in macroeconomic estimates between revised and original data vintages can be substantial, commonly over 100% (two-fold) for economic growth estimates, making the choice of which data release to use important for the predictive validity of a model. We systematically compare the predictions of four forecasting models for numerous US presidential elections using real-time and vintage data. We find that newer data are not better data for election forecasting: forecasting error increases with data revisions. This result suggests that voter perceptions of economic growth are influenced more by media reports about the economy, which are based on initial economic estimates, than by the actual state of the economy.
Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A
2018-04-15
For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
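The SIMEX idea referenced above (simulate extra measurement error at increasing multiples λ of the known error variance, then extrapolate the sequence of estimates back to λ = −1, the error-free limit) can be sketched generically. This is a simplified illustration using a variance estimator, not the paper's Cox-model extension; all names and defaults are assumptions.

```python
import numpy as np

def simex(estimator, noisy_data, sigma, lambdas=(0.5, 1.0, 1.5, 2.0),
          n_sim=200, rng=None):
    """Generic SIMEX: re-estimate after simulating EXTRA error of variance
    lambda * sigma**2, fit a quadratic trend in lambda, and extrapolate to
    lambda = -1, the error-free limit."""
    rng = np.random.default_rng(rng)
    data = np.asarray(noisy_data, dtype=float)
    lam = np.concatenate(([0.0], lambdas))
    est = [float(estimator(data))]
    for l in lambdas:
        extra = rng.normal(0.0, np.sqrt(l) * sigma, size=(n_sim,) + data.shape)
        est.append(float(np.mean([estimator(data + e) for e in extra])))
    coef = np.polyfit(lam, est, 2)           # quadratic extrapolant
    return float(np.polyval(coef, -1.0))
```

For the variance example the naive estimate is inflated by the error variance σ², the simulated estimates grow linearly in λ, and extrapolation to λ = −1 recovers the error-free value; the same recipe applies with a hazard-ratio estimator in place of `np.var`.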
International Nuclear Information System (INIS)
Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki
2016-01-01
The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
Truncation Depth Rule-of-Thumb for Convolutional Codes
Moision, Bruce
2009-01-01
In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
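The rule of thumb stated above is a one-liner; the sketch below simply encodes it and makes explicit that the rate-1/2 case reduces to the classical 5·m.

```python
def truncation_depth(memory, rate):
    """Truncation depth rule of thumb from the text: 2.5 * m / (1 - r).
    For rate r = 1/2 this reduces to the classical 5 * m."""
    return 2.5 * memory / (1.0 - rate)
```

For example, a memory-6 code at rate 3/4 needs a truncation depth of 60 rather than the 30 the old five-times-memory rule would suggest, which is the practical point of the corrected formula.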
Truncated States Obtained by Iteration
International Nuclear Information System (INIS)
Cardoso, W. B.; Almeida, N. G. de
2008-01-01
We introduce the concept of truncated states obtained via iterative processes (TSI) and study their statistical features, making an analogy with dynamical systems theory (DST). As a specific example, we have studied TSI for the doubling and the logistic functions, which are standard functions in studying chaos. TSI for both the doubling and logistic functions exhibit certain similar patterns when their statistical features are compared from the point of view of DST.
Suba, Eric J; Pfeifer, John D; Raab, Stephen S
2007-10-01
Patient identification errors in surgical pathology often involve switches of prostate or breast needle core biopsy specimens among patients. We assessed strategies for decreasing the occurrence of these uncommon and yet potentially catastrophic events. Root cause analyses were performed following 3 cases of patient identification error involving prostate needle core biopsy specimens. Patient identification errors in surgical pathology result from slips and lapses of automatic human action that may occur at numerous steps during pre-laboratory, laboratory and post-laboratory work flow processes. Patient identification errors among prostate needle biopsies may be difficult to entirely prevent through the optimization of work flow processes. A DNA time-out, whereby DNA polymorphic microsatellite analysis is used to confirm patient identification before radiation therapy or radical surgery, may eliminate patient identification errors among needle biopsies.
Directory of Open Access Journals (Sweden)
Siwon Song
2012-09-01
The climatology of mean bias errors (relative to 1-day forecasts) was examined in a 20-year hindcast set from version 1 of the Climate Forecast System (CFS), for forecast lead times of 2, 4, 8, 16, ... 256 days, verifying in different seasons. Results mostly confirm the simple expectation that atmospheric model biases should be evident at short lead (2–4 days), while soil moisture errors develop over days-weeks and ocean errors emerge over months. A further simplification is also evident: surface temperature bias patterns have nearly fixed geographical structure, growing with different time scales over land and ocean. The geographical pattern has mostly warm and dry biases over land and cool bias over the oceans, with two main exceptions: (1) deficient stratocumulus clouds cause warm biases in eastern subtropical oceans, and (2) high latitude land is too cold in boreal winter. Further study of the east Pacific cold tongue-Intertropical Convergence Zone (ITCZ) complex shows a possible interaction between a rapidly-expressed atmospheric model bias (poleward shift of deep convection, beginning at day 2) and slow ocean dynamics (erroneously cold upwelling along the equator, in leads > 1 month). Further study of the high latitude land cold bias shows that it is a thermal wind balance aspect of the deep polar vortex, not just a near-surface temperature error under the wintertime inversion, suggesting that its development time scale of weeks to months may involve long timescale processes in the atmosphere, not necessarily in the land model. Winter zonal wind errors are small in magnitude, but a refractive index map shows that this can cause modest errors in Rossby wave ducting. Finally, as a counterpoint to our initial expectations about error growth, a case of non-monotonic error growth is shown: velocity potential bias grows with lead on a time scale of weeks, then decays over months. It is hypothesized that compensations between land and ocean errors may
Truncated Groebner fans and lattice ideals
Lauritzen, Niels
2005-01-01
We outline a generalization of the Groebner fan of a homogeneous ideal with maximal cells parametrizing truncated Groebner bases. This "truncated" Groebner fan is usually much smaller than the full Groebner fan and offers the natural framework for conversion between truncated Groebner bases. The generic Groebner walk generalizes naturally to this setting by using the Buchberger algorithm with truncation on facets. We specialize to the setting of lattice ideals. Here facets along the generic w...
Positioning performance analysis of the time sum of arrival algorithm with error features
Gong, Feng-xun; Ma, Yan-qiu
2018-03-01
The theoretical positioning accuracy of multilateration (MLAT) with the time difference of arrival (TDOA) algorithm is very high. However, there are some problems in practical applications. Here we analyze the location performance of the time sum of arrival (TSOA) algorithm in terms of the root mean square error (RMSE) and geometric dilution of precision (GDOP) in an additive white Gaussian noise (AWGN) environment. The TSOA localization model is constructed and used to present the distribution of the location ambiguity region with 4 base stations. The location performance analysis then starts from the 4-base-station case by calculating the variation of RMSE and GDOP. Subsequently, the location parameters (number of base stations, base-station layout, and so on) are varied to show how the performance of the TSOA location algorithm changes, revealing the TSOA location characteristics. The changing trends of RMSE and GDOP demonstrate the anti-noise performance and robustness of the TSOA localization algorithm. This anti-noise performance can be used to reduce the blind zone and the false-location rate of MLAT systems.
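For the GDOP part of the analysis above, a minimal numerical sketch is possible: linearize the TSOA measurements and evaluate GDOP from the geometry matrix. The measurement model assumed here (sum of ranges to a common reference station and to one other station) and the 4-station layout are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def tsoa_gdop(pos, stations):
    """GDOP at `pos` for TSOA-style measurements r_i = |p - s_0| + |p - s_i|,
    where s_0 is a reference station (unit measurement noise assumed)."""
    p = np.asarray(pos, dtype=float)
    s = np.asarray(stations, dtype=float)
    u0 = (p - s[0]) / np.linalg.norm(p - s[0])
    # each row is the gradient of one sum-of-ranges measurement w.r.t. p
    H = np.array([u0 + (p - si) / np.linalg.norm(p - si) for si in s[1:]])
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
gdop_center = tsoa_gdop((50.0, 50.0), stations)
```

Mapping `tsoa_gdop` over a grid of candidate positions yields the kind of GDOP variation map the abstract describes.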
Roy, Debjit; Mandal, Saptarshi; De, Chayan K; Kumar, Kaushalendra; Mandal, Prasun K
2018-04-18
CdSe-based core/gradient alloy shell/shell semiconductor quantum dots (CGASS QDs) have been shown to be optically quite superior compared to core-shell QDs. However, very little is known about CGASS QDs at the single particle level. Photoluminescence blinking dynamics of four differently emitting (blue (λem = 510), green (λem = 532), orange (λem = 591), and red (λem = 619)) single CGASS QDs having average sizes 600 nm). In this manuscript, we report nearly suppressed PL blinking behaviour of CGASS QDs with average sizes correlation between the event durations and found that residual memory exists in both the ON- and OFF-event durations. Positively correlated successive ON-ON and OFF-OFF event durations and negatively correlated (anti-correlated) ON-OFF event durations perhaps suggest the involvement of more than one type of trapping process within the blinking framework. The timescale corresponding to the additional exponential term has been assigned to hole trapping for ON-event duration statistics. Similarly, for OFF-event duration statistics, this component suggests hole detrapping. We found that the average duration of the exponential process for the ON-event durations is an order of magnitude higher than that of the OFF-event durations. This indicates that the holes are trapped for a significantly long time. When electron trapping is followed by such a hole trapping, long ON-event durations result. We have observed long ON-event durations, as high as 50 s. The competing charge tunnelling model has been used to account for the observed blinking behaviour in these CGASS QDs. Quite interestingly, the PLQY of all of these differently emitting QDs (an ensemble level property) could be correlated with the truncation time (a property at the single particle level). A respective concomitant increase-decrease of ON-OFF event truncation times with increasing PLQY is also indicative of a varying degree of suppression of the Auger recombination processes in these four
Feature Migration in Time: Reflection of Selective Attention on Speech Errors
Nozari, Nazbanou; Dell, Gary S.
2012-01-01
This article describes an initial study of the effect of focused attention on phonological speech errors. In 3 experiments, participants recited 4-word tongue twisters and focused attention on 1 (or none) of the words. The attended word was singled out differently in each experiment; participants were under instructions to avoid errors on the…
MO-FG-202-07: Real-Time EPID-Based Detection Metric For VMAT Delivery Errors
International Nuclear Information System (INIS)
Passarge, M; Fix, M K; Manser, P; Stampanoni, M F M; Siebers, J V
2016-01-01
Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error...
MO-FG-202-07: Real-Time EPID-Based Detection Metric For VMAT Delivery Errors
Energy Technology Data Exchange (ETDEWEB)
Passarge, M; Fix, M K; Manser, P [Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern (Switzerland); Stampanoni, M F M [Institute for Biomedical Engineering, ETH Zurich, and PSI, Villigen (Switzerland); Siebers, J V [Department of Radiation Oncology, University of Virginia, Charlottesville, VA (United States)
2016-06-15
Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error...
Schillinger, Kerstin; Mesoudi, Alex; Lycett, Stephen J
2014-01-01
Ethnographic research highlights that there are constraints placed on the time available to produce cultural artefacts in differing circumstances. Given that copying error, or cultural 'mutation', can have important implications for the evolutionary processes involved in material culture change, it is essential to explore empirically how such 'time constraints' affect patterns of artefactual variation. Here, we report an experiment that systematically tests whether, and how, varying time constraints affect shape copying error rates. A total of 90 participants copied the shape of a 3D 'target handaxe form' using a standardized foam block and a plastic knife. Three distinct 'time conditions' were examined, whereupon participants had either 20, 15, or 10 minutes to complete the task. One aim of this study was to determine whether reducing production time produced a proportional increase in copy error rates across all conditions, or whether the concept of a task specific 'threshold' might be a more appropriate manner to model the effect of time budgets on copy-error rates. We found that mean levels of shape copying error increased when production time was reduced. However, there were no statistically significant differences between the 20 minute and 15 minute conditions. Significant differences were only obtained between conditions when production time was reduced to 10 minutes. Hence, our results more strongly support the hypothesis that the effects of time constraints on copying error are best modelled according to a 'threshold' effect, below which mutation rates increase more markedly. Our results also suggest that 'time budgets' available in the past will have generated varying patterns of shape variation, potentially affecting spatial and temporal trends seen in the archaeological record. Hence, 'time-budgeting' factors need to be given greater consideration in evolutionary models of material culture change.
Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series
Zhang, Zhihua
2014-01-01
Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula of approximation errors of hyperbolic cross truncations for bivariate stochastic Fourier cosine series. Moreover we propose a kind of Fourier cosine expansions with polynomials factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842
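The hyperbolic cross truncation named above keeps only the cosine coefficients whose index product is small, rather than a full tensor-product block. A minimal sketch of the index set; the convention {(j, k) : (j+1)(k+1) ≤ N} is one common formulation and is assumed here, not necessarily the paper's exact normalization:

```python
def hyperbolic_cross(N):
    """Index set {(j, k): (j+1)*(k+1) <= N} for truncating a bivariate
    Fourier cosine series; far fewer terms than the full N x N grid."""
    return [(j, k) for j in range(N) for k in range(N)
            if (j + 1) * (k + 1) <= N]

idx = hyperbolic_cross(32)
sparsity = len(idx) / (32 * 32)   # fraction of the full tensor grid retained
```

For N = 32 the cross keeps 119 of 1024 coefficients, which is why such truncations are attractive when coefficients decay fast in both indices.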
Directory of Open Access Journals (Sweden)
Jun Yang
2014-01-01
Full Text Available To improve the precision of CNC machine tools, a thermal error model for the motorized spindle was proposed based on time series analysis, considering the length of cutting tools and thermal declination angles, and real-time error compensation was implemented. A five-point method was applied to measure radial thermal declinations and axial expansion of the spindle with eddy current sensors, solving the problem that a three-point measurement cannot obtain the radial thermal angle errors. Then the stationarity of the thermal error sequences was determined by the augmented Dickey-Fuller test, and the autocorrelation/partial autocorrelation functions were applied to identify the model pattern. By combining the Yule-Walker equations with information criteria, the order and parameters of the models were solved effectively, which improved the prediction accuracy and generalization ability. The results indicated that the prediction accuracy of the time series model could reach up to 90%. In addition, the maximum axial error decreased from 39.6 μm to 7 μm after error compensation, and the machining accuracy was improved by 89.7%. Moreover, the X/Y-direction accuracy can reach up to 77.4% and 86%, respectively, which demonstrated that the proposed methods of measurement, modeling, and compensation were effective.
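The Yule-Walker step of the modeling pipeline above can be sketched generically with numpy alone. The ADF stationarity test, order selection by information criteria, and the spindle measurements themselves are omitted; the AR(1) data below are simulated purely to show that the equations recover the coefficient:

```python
import numpy as np

def yule_walker_ar(x, p):
    """Fit AR(p) coefficients by solving the Yule-Walker equations
    built from sample autocovariances of the (stationary) sequence x."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

# simulate an AR(1) error sequence x_t = 0.8 x_{t-1} + e_t and recover phi
rng = np.random.default_rng(0)
e = rng.standard_normal(20000)
x = np.zeros_like(e)
for t in range(1, len(e)):
    x[t] = 0.8 * x[t - 1] + e[t]
phi = yule_walker_ar(x, 1)
```

A fitted model of this form supplies the one-step-ahead thermal error prediction used for compensation.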
Whittle, Rebecca; Peat, George; Belcher, John; Collins, Gary S; Riley, Richard D
2018-05-18
Measurement error in predictor variables may threaten the validity of clinical prediction models. We sought to evaluate the possible extent of the problem. A secondary objective was to examine whether predictors are measured at the intended moment of model use. A systematic search of Medline was used to identify a sample of articles reporting the development of a clinical prediction model published in 2015. After screening according to predefined inclusion criteria, information on predictors, strategies to control for measurement error and intended moment of model use were extracted. Susceptibility to measurement error for each predictor was classified into low and high risk. Thirty-three studies were reviewed, including 151 different predictors in the final prediction models. Fifty-one (33.7%) predictors were categorised as at high risk of error; however, this was not accounted for in the model development. Only 8 (24.2%) studies explicitly stated the intended moment of model use and when the predictors were measured. Reporting of measurement error and intended moment of model use is poor in prediction model studies. There is a need to identify circumstances where ignoring measurement error in prediction models is consequential and whether accounting for the error will improve the predictions. Copyright © 2018. Published by Elsevier Inc.
Time Domain Equalizer Design Using Bit Error Rate Minimization for UWB Systems
Directory of Open Access Journals (Sweden)
Syed Imtiaz Husain
2009-01-01
Full Text Available Ultra-wideband (UWB) communication systems occupy huge bandwidths with very low power spectral densities. This feature makes UWB channels highly rich in resolvable multipaths. To exploit the temporal diversity, the receiver is commonly implemented through a Rake. The aim of capturing enough signal energy to maintain an acceptable output signal-to-noise ratio (SNR) dictates a very complicated Rake structure with a large number of fingers. Channel shortening with a time domain equalizer (TEQ) can simplify the Rake receiver design by reducing the number of significant taps in the effective channel. In this paper, we first derive the bit error rate (BER) of a multiuser and multipath UWB system in the presence of a TEQ at the receiver front end. This BER is then written in a form suitable for traditional optimization. We then present a TEQ design which minimizes the BER of the system to perform efficient channel shortening. The performance of the proposed algorithm is compared with some generic TEQ designs and other Rake structures in UWB channels. It is shown that the proposed algorithm maintains a lower BER along with efficiently shortening the channel.
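Channel shortening itself can be illustrated with the classical maximum shortening-SNR (MSSNR) design, one of the "generic TEQ designs" the paper compares against, which maximizes effective-channel energy inside a target window via a generalized eigenproblem. The channel taps, TEQ length, window and delay below are made-up values for illustration; this is not the paper's BER-minimizing design:

```python
import numpy as np

def mssnr_teq(h, lw, win, delay):
    """Maximum shortening-SNR TEQ: choose w so that the effective channel
    conv(h, w) concentrates its energy in a `win`-tap window at `delay`."""
    n = len(h) + lw - 1
    H = np.zeros((n, lw))
    for i in range(lw):
        H[i:i + len(h), i] = h        # column i = channel delayed by i taps
    inside = np.zeros(n, dtype=bool)
    inside[delay:delay + win] = True
    A = H[inside].T @ H[inside]       # energy inside the target window
    B = H[~inside].T @ H[~inside]     # energy outside (the "wall")
    evals, evecs = np.linalg.eig(np.linalg.solve(B + 1e-9 * np.eye(lw), A))
    w = np.real(evecs[:, np.argmax(np.real(evals))])
    return w / np.linalg.norm(w)

h = np.array([0.5, 1.0, 0.8, 0.6, 0.5, 0.4])   # toy multipath channel
w = mssnr_teq(h, lw=8, win=3, delay=1)
c = np.convolve(h, w)                           # shortened effective channel
ratio = float(np.sum(c[1:4] ** 2) / np.sum(c ** 2))
```

After shortening, a Rake needs fingers only for the few dominant taps inside the window, which is the simplification the abstract describes.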
Zero-truncated negative binomial - Erlang distribution
Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana
2017-11-01
The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are included. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by using maximum likelihood estimation. Finally, the proposed distribution is applied to real data on methamphetamine in Bangkok, Thailand. Based on the results, the zero-truncated negative binomial-Erlang distribution provides a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative binomial and zero-truncated Poisson-Lindley distributions for these data.
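The zero-truncation mechanism itself is simple: remove the zero class and renormalize the base pmf, p_zt(k) = p(k) / (1 − p(0)) for k ≥ 1. The sketch below applies it to a plain negative binomial; the Erlang mixing of the paper's distribution is omitted, and the parameter values are illustrative:

```python
import math

def nb_pmf(k, r, p):
    """Negative binomial pmf: P(K = k), k failures before the r-th success."""
    return math.comb(k + r - 1, k) * (p ** r) * ((1.0 - p) ** k)

def zt_nb_pmf(k, r, p):
    """Zero-truncated negative binomial: condition on K >= 1."""
    if k < 1:
        return 0.0
    return nb_pmf(k, r, p) / (1.0 - nb_pmf(0, r, p))

# the truncated pmf renormalizes to 1 over k = 1, 2, ...
total = sum(zt_nb_pmf(k, 3, 0.4) for k in range(1, 200))
```

The same two-line conditioning step produces each of the competing zero-truncated distributions the abstract compares.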
Feng, Jianyuan; Turksoy, Kamuran; Samadi, Sediqeh; Hajizadeh, Iman; Littlejohn, Elizabeth; Cinar, Ali
2017-12-01
Supervision and control systems rely on signals from sensors to receive information to monitor the operation of a system and adjust manipulated variables to achieve the control objective. However, sensor performance is often limited by working conditions, and sensors may also be subjected to interference by other devices. Many different types of sensor errors such as outliers, missing values, drifts and corruption with noise may occur during process operation. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals, and replace erroneous or missing values detected with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model, which leverage the advantages of automatic measurement error elimination with ORKF and data-driven prediction with LW-PLS. The system includes a nominal angle analysis (NAA) method to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in process operation. The performance of the system is illustrated with clinical data from continuous glucose monitoring (CGM) sensors worn by people with type 1 diabetes. More than 50,000 CGM sensor errors were added to original CGM signals from 25 clinical experiments, and the performance of the error detection and functional redundancy algorithms was analyzed. The results indicate that the proposed system can successfully detect most of the erroneous signals and substitute them with reasonable estimated values computed by the functional redundancy system.
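As a rough illustration of detecting an erroneous sensor value and replacing it with a model-based estimate, the sketch below runs a scalar random-walk Kalman filter with innovation gating. This is a simplified stand-in for the paper's ORKF + LW-PLS system, with made-up noise levels and simulated data:

```python
import numpy as np

def gated_kf(z, q=0.05, r=1.0, thresh=3.0):
    """Scalar random-walk Kalman filter with innovation gating: a measurement
    whose normalized innovation exceeds `thresh` is treated as a sensor error
    and replaced by the model prediction."""
    x, p = z[0], 1.0
    est = []
    for zk in z[1:]:
        p = p + q                     # predict (random-walk state model)
        s = p + r                     # innovation variance
        nu = zk - x                   # innovation
        if abs(nu) / np.sqrt(s) > thresh:
            nu = 0.0                  # gate: ignore the erroneous measurement
        k = p / s
        x = x + k * nu
        p = (1.0 - k) * p
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0.0, 0.1, 300))
z = truth + rng.normal(0.0, 0.3, 300)
z[100] += 25.0                        # inject one gross sensor error
est = gated_kf(z)
```

The gated estimate sails past the injected spike, which is the behavior a functional-redundancy layer builds on.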
On the construction of a time base and the elimination of averaging errors in proxy records
Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.
2009-04-01
Proxies are sources of climate information which are stored in natural archives (e.g. ice cores, sediment layers on ocean floors and animals with calcareous marine skeletons). Measuring these proxies produces very short records and mostly involves sampling solid substrates, which is subject to the following two problems. Problem 1: Natural archives are equidistantly sampled on a distance grid along their accretion axis. Starting from these distance series, a time series needs to be constructed, as comparison of different data records is only meaningful on a time grid. The time series will be non-equidistant, as the accretion rate is non-constant. Problem 2: A typical example of sampling solid substrates is drilling. Because of the dimensions of the drill, the holes drilled will not be infinitesimally small. Consequently, samples are not taken at a point in distance, but rather over a volume in distance. This holds for most sampling methods in solid substrates. As a consequence, when the continuous proxy signal is sampled, it will be averaged over the volume of the sample, resulting in an underestimation of the amplitude. Whether this averaging effect is significant depends on the volume of the sample and the variations of interest of the proxy signal. Starting from the measured signal, the continuous signal needs to be reconstructed in order to eliminate these averaging errors. The aim is to provide an efficient identification algorithm to identify the non-linearities in the distance-time relationship, called time base distortions, and to correct for the averaging effects. Because this is a parametric method, an assumption about the proxy signal needs to be made: the proxy record on a time base is assumed to be harmonic; this is a reasonable assumption because natural archives often exhibit a seasonal cycle. In a first approach the averaging effects are assumed to be in one direction only, i.e. the direction of the axis on which the measurements were performed. The...
Directory of Open Access Journals (Sweden)
David M Williams
Full Text Available Advances in animal tracking technologies have reduced but not eliminated positional error. While aware of such inherent error, scientists often proceed with analyses that assume exact locations. The results of such analyses then represent one realization in a distribution of possible outcomes. Evaluating results within the context of that distribution can strengthen or weaken our confidence in conclusions drawn from the analysis in question. We evaluated the habitat-specific positional error of stationary GPS collars placed under a range of vegetation conditions that produced a gradient of canopy cover. We explored how variation of positional error in different vegetation cover types affects a researcher's ability to discern scales of movement in analyses of first-passage time for white-tailed deer (Odocoileus virginianus). We placed 11 GPS collars in 4 different vegetative canopy cover types classified as the proportion of cover above the collar (0-25%, 26-50%, 51-75%, and 76-100%). We simulated the effect of positional error on individual movement paths using cover-specific error distributions at each location. The different cover classes did not introduce any directional bias in positional observations (1 m≤mean≤6.51 m, 0.24≤p≤0.47), but the standard deviation of positional error of fixes increased significantly with increasing canopy cover class for the 0-25%, 26-50%, and 51-75% classes (SD = 2.18 m, 3.07 m, and 4.61 m, respectively) and then leveled off in the 76-100% cover class (SD = 4.43 m). We then added cover-specific positional errors to individual deer movement paths and conducted first-passage time analyses on the noisy and original paths. First-passage time analyses were robust to habitat-specific error in a forest-agriculture landscape. For deer in a fragmented forest-agriculture environment, and species that move across similar geographic extents, we suggest that first-passage time analysis is robust with regard to...
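First-passage time itself is straightforward to compute from a fix sequence. Below is a simplified, forward-along-the-path sketch (the classical formulation scans the trajectory in both directions from each location); the toy straight-line path is purely illustrative:

```python
import numpy as np

def first_passage_times(xy, t, radius):
    """First-passage time at each location: time until the trajectory first
    moves farther than `radius` from that location (forward along the path)."""
    xy = np.asarray(xy, dtype=float)
    fpt = np.full(len(xy), np.nan)    # NaN where the path never exits the circle
    for i in range(len(xy)):
        d = np.linalg.norm(xy[i + 1:] - xy[i], axis=1)
        if np.any(d > radius):
            j = int(np.argmax(d > radius))
            fpt[i] = t[i + 1 + j] - t[i]
    return fpt

# toy path: unit-speed straight line, one fix per time unit
xy = np.column_stack([np.arange(10.0), np.zeros(10)])
t = np.arange(10.0)
fpt = first_passage_times(xy, t, radius=2.5)
```

Re-running such a computation on paths perturbed with cover-specific positional error, as in the study, shows how sensitive the inferred movement scales are to that error.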
International Nuclear Information System (INIS)
Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.; Svoboda, Kurt R.
2015-01-01
Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the 3 subpopulations of SMN axons differently, but the dorsal projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early born primary motoneuron (PMN), we performed dual labeling studies, where both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the levels of nicotine and developmental exposure window. - Highlights: • Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner.
Energy Technology Data Exchange (ETDEWEB)
Menelaou, Evdokia; Paul, Latoya T. [Department of Biological Sciences, Louisiana State University, Baton Rouge, LA 70803 (United States); Perera, Surangi N. [Joseph J. Zilber School of Public Health, University of Wisconsin — Milwaukee, Milwaukee, WI 53205 (United States); Svoboda, Kurt R., E-mail: svobodak@uwm.edu [Department of Biological Sciences, Louisiana State University, Baton Rouge, LA 70803 (United States); Joseph J. Zilber School of Public Health, University of Wisconsin — Milwaukee, Milwaukee, WI 53205 (United States)
2015-04-01
Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the 3 subpopulations of SMN axons differently, but the dorsal projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early born primary motoneuron (PMN), we performed dual labeling studies, where both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the levels of nicotine and developmental exposure window. - Highlights: • Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner.
Statistical estimation for truncated exponential families
Akahira, Masafumi
2017-01-01
This book presents new findings on nonregular statistical estimation. Unlike other books on this topic, its major emphasis is on helping readers understand the meaning and implications of both regularity and irregularity through a certain family of distributions. In particular, it focuses on a truncated exponential family of distributions with a natural parameter and truncation parameter as a typical nonregular family. This focus includes the (truncated) Pareto distribution, which is widely used in various fields such as finance, physics, hydrology, geology, astronomy, and other disciplines. The family is essential in that it links both regular and nonregular distributions, as it becomes a regular exponential family if the truncation parameter is known. The emphasis is on presenting new results on the maximum likelihood estimation of a natural parameter or truncation parameter if one of them is a nuisance parameter. In order to obtain more information on the truncation, the Bayesian approach is also considere...
Classification With Truncated Distance Kernel.
Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas
2018-05-01
This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
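The TL1 kernel has a one-line definition, k(x, y) = max(ρ − ‖x − y‖₁, 0). A minimal sketch of the Gram-matrix computation follows; the toy data and the value of ρ are illustrative:

```python
import numpy as np

def tl1_kernel(X, Y, rho):
    """Truncated L1-distance kernel: k(x, y) = max(rho - ||x - y||_1, 0).
    Pairs farther apart than rho (in L1) get kernel value exactly 0."""
    D = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)
    return np.maximum(rho - D, 0.0)

X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
K = tl1_kernel(X, X, rho=3.0)   # Gram matrix for a toy data set
```

Because kernel evaluation is all a standard toolbox needs, such a Gram matrix can be supplied to, e.g., an SVM that accepts precomputed kernels, even though TL1 is not positive semidefinite, exactly as the abstract notes.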
Truncation of CPC solar collectors and its effect on energy collection
Carvalho, M. J.; Collares-Pereira, M.; Gordon, J. M.; Rabl, A.
1985-01-01
Analytic expressions are derived for the angular acceptance function of two-dimensional compound parabolic concentrator solar collectors (CPC's) of arbitrary degree of truncation. Taking into account the effect of truncation on both optical and thermal losses in real collectors, the increase in monthly and yearly collectible energy is also evaluated. Prior analyses that have ignored the correct behavior of the angular acceptance function at large angles for truncated collectors are shown to be in error by 0-2 percent in calculations of yearly collectible energy for stationary collectors.
Energy Technology Data Exchange (ETDEWEB)
Ghezzehei, T.A.
2008-05-29
Application of time domain reflectometry (TDR) in soil hydrology often involves the conversion of TDR-measured dielectric permittivity to water content using universal calibration equations (empirical or physically based). Deviations of soil-specific calibrations from the universal calibrations have been noted and are usually attributed to peculiar composition of soil constituents, such as high content of clay and/or organic matter. Although it is recognized that soil disturbance by TDR waveguides may have an impact on measurement errors, to our knowledge, there has not been any quantification of this effect. In this paper, we introduce a method that estimates this error by combining two models: one that describes soil compaction around cylindrical objects and another that translates change in bulk density to evolution of soil water retention characteristics. Our analysis indicates that the compaction pattern depends on the mechanical properties of the soil at the time of installation. The relative error in water content measurement depends on the compaction pattern as well as the water content and water retention properties of the soil. Illustrative calculations based on measured soil mechanical and hydrologic properties from the literature indicate that the measurement errors of using a standard three-prong TDR waveguide could be up to 10%. We also show that the error scales linearly with the ratio of rod radius to the interradius spacing.
Lamp with a truncated reflector cup
Li, Ming; Allen, Steven C.; Bazydola, Sarah; Ghiu, Camil-Daniel
2013-10-15
A lamp assembly, and method for making same. The lamp assembly includes first and second truncated reflector cups. The lamp assembly also includes at least one base plate disposed between the first and second truncated reflector cups, and a light engine disposed on a top surface of the at least one base plate. The light engine is configured to emit light to be reflected by one of the first and second truncated reflector cups.
Computing correct truncated excited state wavefunctions
Bacalis, N. C.; Xiong, Z.; Zang, J.; Karaoulanis, D.
2016-12-01
We demonstrate that, if a wave function's truncated expansion is small, then the standard excited states computational method, of optimizing one "root" of a secular equation, may lead to an incorrect wave function - despite the correct energy according to the theorem of Hylleraas, Undheim and McDonald - whereas our proposed method [J. Comput. Meth. Sci. Eng. 8, 277 (2008)] (independent of orthogonality to lower lying approximants) leads to correct reliable small truncated wave functions. The demonstration is done in He excited states, using truncated series expansions in Hylleraas coordinates, as well as standard configuration-interaction truncated expansions.
Time-Weighted Balanced Stochastic Model Reduction
DEFF Research Database (Denmark)
Tahavori, Maryamsadat; Shaker, Hamid Reza
2011-01-01
A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous and discrete time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recently...
Pitts, Eric P
2011-01-01
This study looked at the medication ordering error frequency and the length of inpatient hospital stay in a subpopulation of stroke patients (n = 60) as a function of the time of patient admission to an inpatient rehabilitation hospital service. A total of 60 inpatient rehabilitation patients, 30 arriving before 4 pm and 30 arriving after 4 pm, with an admitting diagnosis of stroke, were randomly selected from a larger sample (N = 426). There was a statistically significant increase in medication ordering errors and in the number of inpatient rehabilitation hospital days in the group of patients who arrived after 4 pm.
One-Class Classification-Based Real-Time Activity Error Detection in Smart Homes.
Das, Barnan; Cook, Diane J; Krishnan, Narayanan C; Schmitter-Edgecombe, Maureen
2016-08-01
Caring for individuals with dementia is frequently associated with extreme physical and emotional stress, which often leads to depression. Smart home technology and advances in machine learning techniques can provide innovative solutions to reduce caregiver burden. One key service that caregivers provide is prompting individuals with memory limitations to initiate and complete daily activities. We hypothesize that sensor technologies combined with machine learning techniques can automate the process of providing reminder-based interventions. The first step towards automated interventions is to detect when an individual faces difficulty with activities. We propose machine learning approaches based on one-class classification that learn normal activity patterns. When we apply these classifiers to activity patterns that were not seen before, the classifiers are able to detect activity errors, which represent potential prompt situations. We validate our approaches on smart home sensor data obtained from older adult participants, some of whom faced difficulties performing routine activities and thus committed errors.
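As an illustrative aside (not the authors' actual classifier), the core one-class idea, learning only normal activity patterns and flagging anything too far from them, can be sketched with a simple nearest-neighbor rule; the three-feature activity encoding and the threshold below are hypothetical:

```python
import numpy as np

def fit_one_class(normal_features):
    """'Training' simply stores the normal activity feature vectors."""
    return np.asarray(normal_features, dtype=float)

def detect_errors(model, features, threshold):
    """Flag feature vectors whose distance to the nearest stored normal
    sample exceeds the threshold (a potential prompt situation)."""
    features = np.atleast_2d(np.asarray(features, dtype=float))
    # pairwise Euclidean distances to every stored normal sample
    d = np.linalg.norm(features[:, None, :] - model[None, :, :], axis=2)
    return d.min(axis=1) > threshold

rng = np.random.default_rng(0)
# hypothetical 3-feature encoding of one activity execution:
# (duration in min, motion-sensor events, door-sensor events)
normal = rng.normal([10.0, 50.0, 5.0], [1.0, 5.0, 1.0], size=(200, 3))
model = fit_one_class(normal)

ok = [10.2, 52.0, 5.5]    # resembles normal executions
bad = [30.0, 5.0, 20.0]   # pattern never seen before
flags = detect_errors(model, [ok, bad], threshold=10.0)
print(flags.tolist())
```

More elaborate one-class learners (one-class SVMs, density estimators) follow the same contract: fit on normal data only, score unseen patterns by their deviation.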
Study of run time errors of the ATLAS Pixel Detector in the 2012 data taking period
AUTHOR|(INSPIRE)INSPIRE-00339072
2013-05-16
The high-resolution silicon Pixel detector is critical to event vertex reconstruction and particle track reconstruction in the ATLAS detector. During pixel data-taking operation, some modules (silicon pixel sensor + front-end chip + module control chip (MCC)) go into an auto-disabled state, in which they stop sending data for storage. The modules become operational again after reconfiguration. The source of the problem is not fully understood; one possible source is the occurrence of single event upsets (SEUs) in the MCC. Such a module goes into either a Timeout or a Busy state. This report studies the different types and rates of errors occurring in Pixel data-taking operation, including the dependence of the error rate on the Pixel detector geometry.
Perspective on rainbow-ladder truncation
International Nuclear Information System (INIS)
Eichmann, G.; Alkofer, R.; Krassnigg, A.; Cloët, I. C.; Roberts, C. D.
2008-01-01
Prima facie the systematic implementation of corrections to the rainbow-ladder truncation of QCD's Dyson-Schwinger equations will uniformly reduce in magnitude those calculated mass-dimensioned results for pseudoscalar and vector meson properties that are not tightly constrained by symmetries. The aim and interpretation of studies employing rainbow-ladder truncation are reconsidered in this light.
Stability of Slopes Reinforced with Truncated Piles
Directory of Open Access Journals (Sweden)
Shu-Wei Sun
2016-01-01
Piles are extensively used as a means of slope stabilization. A novel engineering technique of truncated piles, unlike traditional piles, is introduced in this paper. A simplified numerical method based on the shear strength reduction method is proposed to analyze the stability of slopes stabilized with truncated piles. The influential factors, including pile diameter, pile spacing, depth of truncation, and the existence of a weak layer, are systematically investigated from a practical point of view. The results show that an optimum ratio exists between the depth of truncation and the pile length above a slip surface, below which truncation has no influence on the stability of the piled slope. This optimum ratio is larger for slopes stabilized with more flexible piles and with larger pile spacing. In addition, truncated piles are more suitable for slopes with a thin weak layer than for homogeneous slopes. In practical engineering, the piles could be truncated reasonably while ensuring the reinforcement effect. The truncated part of the piles can be filled with the surrounding soil and compacted, reducing costs by using fewer materials.
Freedman, Laurence S; Midthune, Douglas; Dodd, Kevin W; Carroll, Raymond J; Kipnis, Victor
2015-11-30
Most statistical methods that adjust analyses for measurement error assume that the target exposure T is a fixed quantity for each individual. However, in many applications, the value of T for an individual varies with time. We develop a model that accounts for such variation, describing the model within the framework of a meta-analysis of validation studies of dietary self-report instruments, where the reference instruments are biomarkers. We demonstrate that in this application, the estimates of the attenuation factor and correlation with true intake, key parameters quantifying the accuracy of the self-report instrument, are sometimes substantially modified under the time-varying exposure model compared with estimates obtained under a traditional fixed-exposure model. We conclude that accounting for the time element in measurement error problems is potentially important. Copyright © 2015 John Wiley & Sons, Ltd.
Jaeger, R. J.; Agarwal, G. C.; Gottlieb, G. L.
1978-01-01
Subjects can correct their own errors of movement more quickly than they can react to external stimuli by using three general categories of feedback: (1) knowledge of results, primarily visually mediated; (2) proprioceptive or kinaesthetic such as from muscle spindles and joint receptors, and (3) corollary discharge or efference copy within the central nervous system. The effects of these feedbacks on simple reaction time, choice reaction time, and error correction time were studied in four normal human subjects. The movement used was plantarflexion and dorsiflexion of the ankle joint. The feedback loops were modified, by changing the sign of the visual display to alter the subject's perception of results, and by applying vibration at 100 Hz simultaneously to both the agonist and antagonist muscles of the ankle joint. The central processing was interfered with when the subjects were given moderate doses of alcohol (blood alcohol concentration levels of up to 0.07%). Vibration and alcohol increase both the simple and choice reaction times but not the error correction time.
Ni, Yizhao; Lingren, Todd; Hall, Eric S; Leonard, Matthew; Melton, Kristin; Kirkendall, Eric S
2018-05-01
Timely identification of medication administration errors (MAEs) promises great benefits for mitigating medication errors and associated harm. Despite previous efforts utilizing computerized methods to monitor medication errors, sustaining effective and accurate detection of MAEs remains challenging. In this study, we developed a real-time MAE detection system and evaluated its performance prior to system integration into institutional workflows. Our prospective observational study included automated MAE detection of 10 high-risk medications and fluids for patients admitted to the neonatal intensive care unit at Cincinnati Children's Hospital Medical Center during a 4-month period. The automated system extracted real-time medication use information from the institutional electronic health records and identified MAEs using logic-based rules and natural language processing techniques. The MAE summary was delivered via a real-time messaging platform to promote reduction of patient exposure to potential harm. System performance was validated using a physician-generated gold standard of MAE events, and results were compared with those of current practice (incident reporting and trigger tools). Physicians identified 116 MAEs from 10,104 medication administrations during the study period. Compared to current practice, the sensitivity with automated MAE detection was improved significantly from 4.3% to 85.3% (P = .009), with a positive predictive value of 78.0%. Furthermore, the system showed potential to reduce the duration of patient exposure to potential harm following MAE events, from 256 min to 35 min.
Analysis of the upper-truncated Weibull distribution for wind speed
International Nuclear Information System (INIS)
Kantar, Yeliz Mert; Usta, Ilhan
2015-01-01
Highlights: • Upper-truncated Weibull distribution is proposed to model wind speed. • Upper-truncated Weibull distribution nests the Weibull distribution as a special case. • Maximum likelihood is the best estimation method for the upper-truncated Weibull distribution. • Fitting accuracy of the upper-truncated Weibull is analyzed on wind speed data. - Abstract: Accurately modeling wind speed is critical to estimating the wind energy potential of a region. Several statistical distributions have been studied in order to model wind speed data smoothly. Truncated distributions are conditional distributions that result from restricting the domain of a statistical distribution, and they include the base distribution as a limiting case. This paper proposes, for the first time, the use of the upper-truncated Weibull distribution for modeling wind speed data and for estimating wind power density. In addition, a comparison is made between the upper-truncated Weibull distribution and the well-known Weibull distribution using wind speed data measured in various regions of Turkey. The results indicate that the upper-truncated Weibull distribution performs better than the Weibull distribution in estimating the wind speed distribution and wind power. Therefore, the upper-truncated Weibull distribution can be an alternative for use in the assessment of wind energy potential.
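As an aside, fitting an upper-truncated Weibull by maximum likelihood (the estimation method the study recommends) can be sketched as follows; the shape, scale, and truncation point are illustrative values, not the Turkish wind data:

```python
import numpy as np
from scipy.optimize import minimize

def nll_truncated_weibull(params, x, T):
    """Negative log-likelihood of the upper-truncated Weibull,
    f(x) = w(x; k, lam) / W(T; k, lam) for 0 < x < T,
    where w and W are the Weibull pdf and cdf."""
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    log_pdf = (np.log(k) - k * np.log(lam)
               + (k - 1) * np.log(x) - (x / lam) ** k)
    log_norm = np.log1p(-np.exp(-(T / lam) ** k))  # log W(T)
    return -(log_pdf.sum() - x.size * log_norm)

# synthetic "wind speed" sample from a Weibull truncated above at T m/s
rng = np.random.default_rng(42)
k_true, lam_true, T = 2.0, 6.0, 10.0
x = lam_true * rng.weibull(k_true, 20000)
x = x[x < T][:5000]  # keep only values below the truncation point

res = minimize(nll_truncated_weibull, x0=[1.0, x.mean()],
               args=(x, T), method="Nelder-Mead")
k_hat, lam_hat = res.x
print(round(k_hat, 2), round(lam_hat, 2))
```

With a few thousand observations the estimates land close to the generating parameters; the same likelihood extends directly to wind power density calculations.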
Balaji, K. A.; Prabu, K.
2018-03-01
There is an immense demand for high-bandwidth, high-data-rate systems, which is met by wireless optical communication, or free-space optics (FSO). FSO has therefore gained a pivotal role in research, with the added advantages of cost-effectiveness and licence-free, huge bandwidth. Unfortunately, the optical signal in free space suffers from irradiance and phase fluctuations due to atmospheric turbulence and pointing errors, which deteriorate the signal and degrade the performance of the communication system over longer distances. In this paper, we consider a polarization shift keying (POLSK) system combined with wavelength and time diversity over the Málaga (M) distribution to mitigate turbulence-induced fading. We derive closed-form mathematical expressions for estimating the system's outage probability and average bit error rate (BER). From the results we infer that wavelength and time diversity schemes enhance the system's performance.
Smalle, Eleonore H M; Muylle, Merel; Szmalec, Arnaud; Duyck, Wouter
2017-11-01
Speech errors typically respect the speaker's implicit knowledge of language-wide phonotactics (e.g., /t/ cannot be a syllable onset in the English language). Previous work demonstrated that adults can learn novel experimentally induced phonotactic constraints by producing syllable strings in which the allowable position of a phoneme depends on another phoneme within the sequence (e.g., /t/ can only be an onset if the medial vowel is /i/), but not earlier than the second day of training. Thus far, no work has been done with children. In the current 4-day experiment, a group of Dutch-speaking adults and 9-year-old children were asked to rapidly recite sequences of novel word forms (e.g., kieng nief siet hiem) that were consistent with the phonotactics of the spoken Dutch language. Within the procedure of the experiment, some consonants (i.e., /t/ and /k/) were restricted to the onset or coda position depending on the medial vowel (i.e., /i/ or "ie" vs. /øː/ or "eu"). Speech errors in adults revealed a learning effect for the novel constraints on the second day of learning, consistent with earlier findings. A post hoc analysis at the trial level showed that learning was statistically reliable after an exposure of 120 sequence trials (including a consolidation period). However, children already began learning the constraints on the first day. More precisely, the effect appeared significantly after an exposure of 24 sequences. These findings indicate that children are rapid implicit learners of novel phonotactics, which bears important implications for theorizing about developmental sensitivities in language learning. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Directory of Open Access Journals (Sweden)
I Alimohammadi
2012-12-01
Background and Aims: Traffic noise is one of the most important sources of urban noise pollution, causing various physical and mental effects, including impairment of daily activities, sleep disturbance, hearing loss, and reduced job performance. It can therefore significantly reduce concentration and increase the rate of traffic accidents. Individual differences, such as personality type, affect the impact of noise. Methods: Traffic noise was measured and recorded on 10 arterial streets in Tehran; the average measured sound pressure level was 72.9 dB, and a two-hour recording was played for participants in an acoustic room. The sample consisted of 80 participants (40 cases and 40 controls) who were students of Tehran University of Medical Sciences. Personality type was determined using the Eysenck Personality Inventory (EPI) questionnaire. Movement anticipation time error was measured before and after exposure to the traffic noise using the ZBA computerized test. Results: The results revealed that movement anticipation time errors before exposure to traffic noise differed significantly between introverts and extraverts: introverts made smaller errors than extraverts before exposure, whereas extraverts made smaller errors than introverts after exposure. Conclusion: According to the results, noise affects performance differently depending on personality type. Extraverts may be expected to adapt better to noise during mental performance than people with the opposite personality trait.
BozorgMagham, Amir E.; Ross, Shane D.; Schmale, David G.
2013-09-01
The language of Lagrangian coherent structures (LCSs) provides a new means for studying transport and mixing of passive particles advected by an atmospheric flow field. Recent observations suggest that LCSs govern the large-scale atmospheric motion of airborne microorganisms, paving the way for more efficient models and management strategies for the spread of infectious diseases affecting plants, domestic animals, and humans. In addition, having reliable predictions of the timing of hyperbolic LCSs may contribute to improved aerobiological sampling of microorganisms with unmanned aerial vehicles and LCS-based early warning systems. Chaotic atmospheric dynamics lead to unavoidable forecasting errors in the wind velocity field, which compounds errors in LCS forecasting. In this study, we reveal the cumulative effects of errors of (short-term) wind field forecasts on the finite-time Lyapunov exponent (FTLE) fields and the associated LCSs when realistic forecast plans impose certain limits on the forecasting parameters. Objectives of this paper are to (a) quantify the accuracy of prediction of FTLE-LCS features and (b) determine the sensitivity of such predictions to forecasting parameters. Results indicate that forecasts of attracting LCSs exhibit less divergence from the archive-based LCSs than the repelling features. This result is important since attracting LCSs are the backbone of long-lived features in moving fluids. We also show under what circumstances one can trust the forecast results if one merely wants to know if an LCS passed over a region and does not need to precisely know the passage time.
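The FTLE fields underlying this kind of LCS analysis can be illustrated on the standard analytic double-gyre benchmark (a toy flow, not the atmospheric forecast data used in the study); a minimal sketch, advecting a particle grid and differentiating the flow map:

```python
import numpy as np

# Double-gyre velocity field, a standard analytic benchmark in the
# LCS literature; parameters are the commonly used illustrative values
A, eps, om = 0.1, 0.25, 0.2 * np.pi

def vel(t, p):
    x, y = p[..., 0], p[..., 1]
    a = eps * np.sin(om * t)
    f = a * x**2 + (1 - 2 * a) * x
    df = 2 * a * x + (1 - 2 * a)
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * df
    return np.stack([u, v], axis=-1)

def flow_map(p0, t0, T, n=200):
    """Advect a grid of particles with fixed-step RK4."""
    p, t, dt = p0.copy(), t0, T / n
    for _ in range(n):
        k1 = vel(t, p)
        k2 = vel(t + dt / 2, p + dt / 2 * k1)
        k3 = vel(t + dt / 2, p + dt / 2 * k2)
        k4 = vel(t + dt, p + dt * k3)
        p += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return p

T_h = 15.0  # integration horizon
xs, ys = np.linspace(0, 2, 61), np.linspace(0, 1, 31)
X, Y = np.meshgrid(xs, ys)
P = flow_map(np.stack([X, Y], axis=-1), 0.0, T_h)

# Cauchy-Green tensor C = F^T F from finite differences of the flow map;
# FTLE = log(largest eigenvalue of C) / (2 |T|)
dx, dy = xs[1] - xs[0], ys[1] - ys[0]
Fxx = np.gradient(P[..., 0], dx, axis=1); Fxy = np.gradient(P[..., 0], dy, axis=0)
Fyx = np.gradient(P[..., 1], dx, axis=1); Fyy = np.gradient(P[..., 1], dy, axis=0)
half_tr = 0.5 * (Fxx**2 + Fxy**2 + Fyx**2 + Fyy**2)
det = Fxx * Fyy - Fxy * Fyx
lam_max = half_tr + np.sqrt(np.maximum(half_tr**2 - det**2, 0.0))
ftle = np.log(np.maximum(lam_max, 1e-300)) / (2 * T_h)
print(ftle.shape, float(ftle.max()))
```

Ridges of the resulting FTLE field approximate the repelling LCSs; the study's sensitivity question amounts to how much these ridges move when the velocity field is replaced by a short-term forecast.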
Bui, Huu Phuoc; Tomar, Satyendra; Courtecuisse, Hadrien; Audette, Michel; Cotin, Stéphane; Bordas, Stéphane P A
2018-05-01
An error-controlled mesh refinement procedure for needle insertion simulations is presented. As an example, the procedure is applied to simulations of electrode implantation for deep brain stimulation. We take into account the brain shift phenomenon occurring when a craniotomy is performed. We observe that the error in the computation of the displacement and stress fields is localised around the needle tip and the needle shaft during needle insertion simulation. By suitably and adaptively refining the mesh in this region, our approach enables us to control, and thus reduce, the error whilst maintaining a coarser mesh in other parts of the domain. Through academic and practical examples we demonstrate that our adaptive approach, compared with a uniform coarse mesh, increases the accuracy of the displacement and stress fields around the needle shaft and, for a given accuracy, saves computational time with respect to a uniform finer mesh. This facilitates real-time simulations. The proposed methodology has direct implications for increasing the accuracy and controlling the computational expense of simulating percutaneous procedures such as biopsy, brachytherapy, regional anaesthesia, or cryotherapy. Moreover, the proposed approach can be helpful in the development of robotic surgery, because the simulation taking place in the control loop of a robot needs to be accurate and to occur in real time. Copyright © 2018 John Wiley & Sons, Ltd.
Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari
2013-10-01
Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and a HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.
time-series analysis of nigeria rice supply and demand: error ...
African Journals Online (AJOL)
O. Ojogho Ph.D
the two series have been changing, which may continue for a longer time than foreseen {Figure (1c)}. Figure (1c) shows a forecast of rice supply and demand for Nigeria. It shows that beyond 2010, rice supply will permanently lead rice demand. This indicates that they either have time-varying means, time-varying variances or ...
Energy Technology Data Exchange (ETDEWEB)
Bahl, Björn; Söhler, Theo; Hennen, Maike; Bardow, André, E-mail: andre.bardow@ltt.rwth-aachen.de [Institute of Technical Thermodynamics, RWTH Aachen University, Aachen (Germany)
2018-01-08
Two-stage synthesis problems simultaneously consider here-and-now decisions (e.g., optimal investment) and wait-and-see decisions (e.g., optimal operation). The optimal synthesis of energy systems reveals such a two-stage character. The synthesis of energy systems involves multiple large time series such as energy demands and energy prices. Since problem size increases with the size of the time series, synthesis of energy systems leads to complex optimization problems. To reduce the problem size without losing solution quality, we propose a method for time-series aggregation to identify typical periods. Typical periods retain the chronology of time steps, which enables modeling of energy systems, e.g., with storage units or start-up cost. The aim of the proposed method is to obtain few typical periods with few time steps per period, while accurately representing the objective function of the full time series, e.g., cost. Thus, we determine the error of time-series aggregation as the cost difference between operating the optimal design for the aggregated time series and for the full time series. Thereby, we rigorously bound the maximum performance loss of the optimal energy system design. In an initial step, the proposed method identifies the best length of typical periods by autocorrelation analysis. Subsequently, an adaptive procedure determines aggregated typical periods employing the clustering algorithm k-medoids, which groups similar periods into clusters and selects one representative period per cluster. Moreover, the number of time steps per period is aggregated by a novel clustering algorithm maintaining chronology of the time steps in the periods. The method is iteratively repeated until the error falls below a threshold value. A case study based on a real-world synthesis problem of an energy system shows that time-series aggregation from 8,760 time steps to 2 typical periods with 2 time steps each results in an error smaller than the optimality gap of
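The k-medoids step of such an aggregation can be sketched on synthetic demand periods; the series shape, period length, and number of clusters below are illustrative assumptions, and the paper's adaptive error-bounding loop and intra-period time-step aggregation are omitted:

```python
import numpy as np

def k_medoids(periods, k, iters=50):
    """Minimal k-medoids: group similar periods into clusters and pick one
    representative (medoid) period per cluster."""
    D = np.linalg.norm(periods[:, None, :] - periods[None, :, :], axis=2)
    # farthest-point initialisation keeps the starting medoids spread out
    idx = [int(D.sum(axis=1).argmax())]
    while len(idx) < k:
        idx.append(int(D[:, idx].min(axis=1).argmax()))
    medoids = np.array(idx)
    for _ in range(iters):
        labels = D[:, medoids].argmin(axis=1)
        new = []
        for j in range(k):
            members = np.flatnonzero(labels == j)
            # medoid = member minimising total distance within its cluster
            new.append(members[D[np.ix_(members, members)].sum(axis=1).argmin()])
        new = np.array(new)
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, D[:, medoids].argmin(axis=1)

# synthetic "energy demand": 28 daily periods of 24 hourly steps,
# alternating weekday-like and weekend-like shapes
t = np.arange(24)
weekday = 50 + 30 * np.exp(-((t - 12) ** 2) / 18)
weekend = 40 + 10 * np.sin(np.pi * t / 24)
rng = np.random.default_rng(1)
days = np.array([(weekday if d % 7 < 5 else weekend) + rng.normal(0, 1, 24)
                 for d in range(28)])

medoids, labels = k_medoids(days, k=2)
rep = days[medoids[labels]]  # each day replaced by its typical period
print(len(medoids), round(float(np.abs(rep - days).mean()), 2))
```

Because each medoid is an actual period from the data, chronology within the typical period is preserved, which is what makes storage and start-up constraints representable.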
Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.
2008-07-01
We analyze the long-time behavior of a quantum computer running a quantum error correction (QEC) code in the presence of a correlated environment. Starting from a Hamiltonian formulation of realistic noise models, and assuming that QEC is indeed possible, we find formal expressions for the probability of a given syndrome history and the associated residual decoherence encoded in the reduced density matrix. Systems with nonzero gate times (“long gates”) are included in our analysis by using an upper bound on the noise. In order to introduce the local error probability for a qubit, we assume that propagation of signals through the environment is slower than the QEC period (hypercube assumption). This allows an explicit calculation in the case of a generalized spin-boson model and a quantum frustration model. The key result is a dimensional criterion: If the correlations decay sufficiently fast, the system evolves toward a stochastic error model for which the threshold theorem of fault-tolerant quantum computation has been proven. On the other hand, if the correlations decay slowly, the traditional proof of this threshold theorem does not hold. This dimensional criterion bears many similarities to criteria that occur in the theory of quantum phase transitions.
NLO renormalization in the Hamiltonian truncation
Elias-Miró, Joan; Rychkov, Slava; Vitale, Lorenzo G.
2017-09-01
Hamiltonian truncation (also known as "truncated spectrum approach") is a numerical technique for solving strongly coupled quantum field theories, in which the full Hilbert space is truncated to a finite-dimensional low-energy subspace. The accuracy of the method is limited only by the available computational resources. The renormalization program improves the accuracy by carefully integrating out the high-energy states, instead of truncating them away. In this paper, we develop the most accurate ever variant of Hamiltonian Truncation, which implements renormalization at the cubic order in the interaction strength. The novel idea is to interpret the renormalization procedure as a result of integrating out exactly a certain class of high-energy "tail states." We demonstrate the power of the method with high-accuracy computations in the strongly coupled two-dimensional quartic scalar theory and benchmark it against other existing approaches. Our work will also be useful for the future goal of extending Hamiltonian truncation to higher spacetime dimensions.
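The basic mechanics of Hamiltonian truncation can be illustrated on a quantum-mechanical toy analogue of the quartic theory, the anharmonic oscillator, diagonalized in a truncated harmonic-oscillator basis. This is a far simpler setting than the two-dimensional field theory of the paper, and no renormalization of the truncated states is attempted here:

```python
import numpy as np

def ground_energy(N, g):
    """Lowest eigenvalue of H = p^2/2 + x^2/2 + g*x^4 in a basis truncated
    to the first N harmonic-oscillator states (hbar = m = omega = 1)."""
    n = np.arange(N - 1)
    X = np.zeros((N, N))
    X[n, n + 1] = X[n + 1, n] = np.sqrt((n + 1) / 2.0)  # position operator
    H0 = np.diag(np.arange(N) + 0.5)                     # free Hamiltonian
    H = H0 + g * np.linalg.matrix_power(X, 4)            # quartic interaction
    return float(np.linalg.eigvalsh(H)[0])

# the truncated estimate settles rapidly as the cutoff N grows
for N in (10, 20, 40, 80):
    print(N, ground_energy(N, g=1.0))
```

The renormalization program discussed in the paper improves on exactly this kind of raw cutoff: rather than discarding the high-energy states, their effect on the low-energy block is integrated out order by order.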
International Nuclear Information System (INIS)
Lee, Jeong Hun; Park, Tong Kyu; Jeon, Seong Su
2014-01-01
The Rhodium SPND is accurate in steady-state conditions but responds slowly to changes in neutron flux. The slow response time of the Rhodium SPND precludes its direct use for control and protection purposes, especially when a nuclear power plant is used for load following. Some acceleration methods exist to shorten the response time of Rhodium SPNDs, but they could not reflect the neutron flux distribution in the reactor core. On the other hand, some methods for core power distribution monitoring could not account for the slow response time of Rhodium SPNDs and for noise effects. In this paper, the time-dependent neutron diffusion equation is used directly to estimate the reactor power distribution, and the extended Kalman filter method is used to correct the neutron flux measured with Rhodium SPNDs and to shorten their response time. The extended Kalman filter is an effective tool for reducing the measurement error of Rhodium SPNDs, and even a simple FDM solution of the time-dependent neutron diffusion equation can be an effective measure. This method reduces the random errors of the detectors and can follow the reactor power level without cross-section changes, meaning that the monitoring system need not recalculate cross sections at every time step, so computing time will be shortened. To minimize the delay of Rhodium SPNDs, the conversion function h should be evaluated in a follow-up study. The neutron-Rh-103 reaction has several decay chains with half-lives over 40 seconds, causing detection delay; the time-dependent neutron diffusion equation will be combined with these decay chains. Changes in power level and distribution corresponding to control rod movement will be tested with a more sophisticated reference code, as well as the xenon effect. With these efforts, the final result is expected to serve as a powerful monitoring tool for the nuclear reactor core.
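A minimal sketch of the filtering idea follows, using a plain linear Kalman filter and a first-order lag as a crude stand-in for the slow rhodium activation/decay dynamics; all parameters are illustrative, and the paper instead couples the (extended) filter to a time-dependent neutron diffusion solver:

```python
import numpy as np

# State: [flux phi, detector signal s]; the detector follows the flux with
# a first-order lag of time constant tau (illustrative surrogate model)
dt, tau = 1.0, 60.0
F = np.array([[1.0, 0.0],
              [dt / tau, 1.0 - dt / tau]])  # state transition
Hm = np.array([[0.0, 1.0]])                 # only s is measured
Q = np.diag([1e-4, 1e-8])                   # process noise covariance
R = np.array([[2.5e-5]])                    # measurement noise covariance

rng = np.random.default_rng(0)
n = 200
phi_true = np.where(np.arange(n) < 100, 1.0, 1.5)  # step in flux at k = 100
s = 1.0
x, P = np.array([1.0, 1.0]), np.eye(2)
phi_hat = np.zeros(n)
for k in range(n):
    s += dt / tau * (phi_true[k] - s)       # true (slow) detector response
    z = s + rng.normal(0.0, 0.005)          # noisy reading
    x = F @ x                               # predict
    P = F @ P @ F.T + Q
    K = P @ Hm.T @ np.linalg.inv(Hm @ P @ Hm.T + R)   # update
    x = x + K @ (z - Hm @ x)
    P = (np.eye(2) - K @ Hm) @ P
    phi_hat[k] = x[0]

# the filtered flux estimate reacts to the step much faster than the
# raw lagging detector signal
print(round(float(s), 3), round(float(phi_hat[-1]), 3))
```

The filter effectively inverts the detector's known lag dynamics, which is why the flux estimate tracks the step long before the raw signal has settled.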
Goldsmith, K A; Chalder, T; White, P D; Sharpe, M; Pickles, A
2018-06-01
Clinical trials are expensive and time-consuming and so should also be used to study how treatments work, allowing for the evaluation of theoretical treatment models and refinement and improvement of treatments. These treatment processes can be studied using mediation analysis. Randomised treatment makes some of the assumptions of mediation models plausible, but the mediator-outcome relationship could remain subject to bias. In addition, mediation is assumed to be a temporally ordered longitudinal process, but estimation in most mediation studies to date has been cross-sectional and unable to explore this assumption. This study used longitudinal structural equation modelling of mediator and outcome measurements from the PACE trial of rehabilitative treatments for chronic fatigue syndrome (ISRCTN 54285094) to address these issues. In particular, autoregressive and simplex models were used to study measurement error in the mediator, different time lags in the mediator-outcome relationship, unmeasured confounding of the mediator and outcome, and the assumption of a constant mediator-outcome relationship over time. Results showed that allowing for measurement error and unmeasured confounding were important. Contemporaneous rather than lagged mediator-outcome effects were more consistent with the data, possibly due to the wide spacing of measurements. Assuming a constant mediator-outcome relationship over time increased precision.
Schultz, Michael; Verbesselt, Jan; Herold, Martin; Avitabile, Valerio
2013-10-01
Researchers who use remotely sensed data can spend half of their total effort on preprocessing prior to analysis. If this preprocessing does not match the application, the time spent on data analysis can increase considerably and inaccuracies can result. Despite the existence of a number of methods for preprocessing Landsat time series, each method has shortcomings, particularly for mapping forest changes under varying illumination, data availability and atmospheric conditions. Based on the requirements for mapping forest changes defined by the United Nations (UN) Reducing Emissions from Deforestation and Forest Degradation (REDD) program, accurate reporting of the spatio-temporal properties of these changes is necessary. We compared the impact of three fundamentally different radiometric preprocessing techniques, MODerate resolution atmospheric TRANsmission (MODTRAN), Second Simulation of a Satellite Signal in the Solar Spectrum (6S) and simple Dark Object Subtraction (DOS), on mapping forest changes using Landsat time series data. A modification of Breaks For Additive Season and Trend (BFAST) Monitor was used to jointly map the spatial and temporal agreement of forest changes at test sites in Ethiopia and Viet Nam. The suitability of the preprocessing methods for the forest change drivers present was assessed using recently captured ground truth and high resolution data (1000 points). A method for creating robust generic forest maps used for the sampling design is presented. An assessment of error sources identified haze as a major source of commission error in the time series analysis.
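Of the three radiometric schemes compared, dark object subtraction is simple enough to sketch in a few lines. This is a generic illustration, not the study's processing chain; the band count, percentile choice, and synthetic haze offset are assumptions.

```python
import numpy as np

def dark_object_subtraction(image, percentile=0.1):
    """image: (bands, rows, cols) array of at-sensor radiance or DN values.

    The darkest pixels in each band are assumed to have near-zero surface
    reflectance, so their signal is taken as atmospheric path radiance and
    subtracted from the entire band.
    """
    corrected = np.empty_like(image, dtype=float)
    for b, band in enumerate(image):
        dark = np.percentile(band, percentile)   # robust 'darkest object'
        corrected[b] = np.clip(band - dark, 0.0, None)
    return corrected

# Synthetic 4-band scene with a constant additive haze offset of 0.02
rng = np.random.default_rng(1)
scene = rng.uniform(0.05, 0.30, size=(4, 100, 100)) + 0.02
out = dark_object_subtraction(scene)
print(np.round(out.min(axis=(1, 2)), 4))   # per-band minima driven to ~0
```

Physics-based codes such as MODTRAN or 6S model the atmosphere explicitly; DOS trades that rigor for simplicity, which is exactly the trade-off the study evaluates.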
Jolivet, R.; Simons, M.
2018-02-01
Interferometric synthetic aperture radar time series methods aim to reconstruct time-dependent ground displacements over large areas from sets of interferograms in order to detect transient, periodic, or small-amplitude deformation. Because of computational limitations, most existing methods consider each pixel independently, ignoring important spatial covariances between observations. We describe a framework to reconstruct time series of ground deformation while considering all pixels simultaneously, allowing us to account for spatial covariances, imprecise orbits, and residual atmospheric perturbations. We describe spatial covariances by an exponential decay function dependent on pixel-to-pixel distance. We approximate the impact of imprecise orbit information and residual long-wavelength atmosphere as a low-order polynomial function. Tests on synthetic data illustrate the importance of incorporating full covariances between pixels in order to avoid biased parameter reconstruction. An example of application to the northern Chilean subduction zone highlights the potential of this method.
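The exponential spatial covariance model mentioned above is easy to sketch. The function below builds the pixel-to-pixel covariance matrix C(d) = sigma^2 * exp(-d / lam) that would weight a joint inversion; sigma and lam are illustrative placeholders, not values from the paper.

```python
import numpy as np

def exponential_covariance(coords, sigma=1.0, lam=20.0):
    """coords: (n_pixels, 2) array of pixel positions (e.g. km).

    Returns the full covariance matrix with exponential decay in
    pixel-to-pixel distance: C_ij = sigma^2 * exp(-d_ij / lam).
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sigma**2 * np.exp(-d / lam)

coords = np.array([[0.0, 0.0], [10.0, 0.0], [50.0, 0.0]])
C = exponential_covariance(coords)
print(np.round(C, 3))
# nearby pixels (10 km apart) covary strongly; distant ones (50 km) weakly
```

In a full inversion this matrix (or its inverse) enters a generalized least-squares estimate, which is what makes the all-pixels-at-once formulation expensive but unbiased.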
Rosch, E.
1975-01-01
The task of time estimation, an activity occasionally performed by pilots during actual flight, was investigated with the objective of providing human factors investigators with an unobtrusive and minimally loading additional task that is sensitive to differences in flying conditions and flight instrumentation associated with the main task of piloting an aircraft simulator. Previous research indicated that the duration and consistency of time estimates are associated with the cognitive, perceptual, and motor loads imposed by concurrent simple tasks. The relationships between the length and variability of time estimates and concurrent task variables under a more complex situation involving simulated flight were clarified. The wrap-around effect with respect to baseline duration, a consequence of mode switching at intermediate levels of concurrent task distraction, should contribute substantially to estimate variability and have a complex effect on the shape of the resulting distribution of estimates.
Theoretical analysis of balanced truncation for linear switched systems
DEFF Research Database (Denmark)
Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef
2012-01-01
In this paper we present a theoretical analysis of model reduction of linear switched systems based on balanced truncation, as presented in [1,2]. More precisely, (1) we provide a bound on the estimation error using the L2 gain, and (2) we provide a system theoretic interpretation of Gramians and their singular values [...]. The main tool for showing this independence is realization theory of linear switched systems. [1] H. R. Shaker and R. Wisniewski, "Generalized gramian framework for model/controller order reduction of switched systems", International Journal of Systems Science, Vol. 42, Issue 8, 2011, 1277-1291. [2] H. R. Shaker and R. Wisniewski, "Switched Systems Reduction Framework Based on Convex Combination of Generalized Gramians", Journal of Control Science and Engineering, 2009.
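For readers unfamiliar with the underlying technique, here is a minimal sketch of balanced truncation for a single (non-switched) stable LTI system; the paper's contribution concerns extending such Gramian-based reduction and its error bounds to switched systems. The example system is arbitrary.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    # Gramians: A Wc + Wc A' = -B B',  A' Wo + Wo A = -C' C
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                 # s: Hankel singular values
    T = (Lc @ Vt.T) * s**-0.5                 # balancing transformation
    Tinv = (s**-0.5)[:, None] * (U.T @ Lo.T)
    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
    return Ab[:r, :r], Bb[:r, :], Cb[:, :r], s   # keep r dominant states

# Arbitrary stable 3-state example, reduced to 2 states
A = np.array([[-1.0, 0.5, 0.0],
              [0.0, -2.0, 0.5],
              [0.0, 0.0, -5.0]])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 1.0, 1.0]])
Ar, Br, Cr, s = balanced_truncation(A, B, C, r=2)
dc_full = (-C @ np.linalg.solve(A, B)).item()
dc_red = (-Cr @ np.linalg.solve(Ar, Br)).item()
print("Hankel singular values:", np.round(s, 4))
print(f"DC gain: full {dc_full:.4f}, reduced {dc_red:.4f}")
```

For LTI systems the H-infinity error of the reduced model is bounded by twice the sum of the discarded Hankel singular values; obtaining analogous guarantees in the switched setting is the subject of the paper.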
Adam J. Gaylord; Dana M. Sanchez
2014-01-01
Direct behavioral observations of multiple free-ranging animals over long periods of time and large geographic areas are prohibitively difficult. However, recent improvements in technology, such as Global Positioning System (GPS) collars equipped with motion-sensitive activity monitors, create the potential to remotely monitor animal behavior. Accelerometer-equipped...
Reduction in the ionospheric error for a single-frequency GPS timing solution using tomography
Directory of Open Access Journals (Sweden)
Cathryn N. Mitchell
2009-06-01
Full Text Available
Single-frequency Global Positioning System (GPS) receivers do not accurately compensate for the ionospheric delay imposed upon a GPS signal. They rely upon models to compensate for the ionosphere. This delay compensation can be improved by measuring it directly with a dual-frequency receiver, or by monitoring the ionosphere using real-time maps. This investigation uses a 4D tomographic algorithm, Multi-Instrument Data Analysis System (MIDAS), to correct for the ionospheric delay and compares the results to existing single- and dual-frequency techniques. Maps of the ionospheric electron density across Europe are produced by using data collected from a fixed network of dual-frequency GPS receivers. Single-frequency pseudorange observations are corrected by using the maps to find the excess propagation delay on the GPS L1 signals. Days during the solar-maximum year 2002 and the October 2003 storm have been chosen to display results when the ionospheric delays are large and variable. Results that improve upon the use of existing ionospheric models are achieved by applying MIDAS to fixed and mobile single-frequency GPS timing solutions. The approach offers the potential for corrections to be broadcast over a local region, or provided via the internet, and allows timing accuracies to within 10 ns to be achieved.
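The size of the correction involved can be sketched from the standard first-order relation between total electron content (TEC) and group delay, delay [m] = 40.3 TEC / f^2; MIDAS supplies the TEC along each path. The TEC values below are illustrative, but the arithmetic shows why storm-time delays threaten a 10 ns timing target.

```python
# First-order ionospheric group delay from total electron content (TEC).
# 40.3 m^3/s^2 is the standard first-order ionospheric coefficient.
C_LIGHT = 299_792_458.0        # m/s
F_L1 = 1575.42e6               # GPS L1 carrier frequency, Hz
TECU = 1e16                    # electrons/m^2 per TEC unit

def iono_delay(tec_tecu, freq=F_L1):
    """Return the first-order ionospheric delay as (metres, nanoseconds)."""
    delay_m = 40.3 * tec_tecu * TECU / freq**2
    return delay_m, delay_m / C_LIGHT * 1e9

for tec in (10, 50, 100):      # quiet-day .. storm-time TEC (illustrative)
    m, ns = iono_delay(tec)
    print(f"{tec:4d} TECU -> {m:6.2f} m  ({ns:5.1f} ns)")
```

A dual-frequency receiver measures this delay directly from the dispersion between L1 and L2; the tomographic approach instead estimates the TEC term for single-frequency users.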
International Nuclear Information System (INIS)
Stripling, H.F.; Anitescu, M.; Adams, M.L.
2013-01-01
Highlights: ► We develop an abstract framework for computing the adjoint to the neutron/nuclide burnup equations posed as a system of differential algebraic equations. ► We validate use of the adjoint for computing both sensitivity to uncertain inputs and estimates of global time discretization error. ► Flexibility of the framework is leveraged to add heat transfer physics and compute its adjoint without a reformulation of the adjoint system. ► Such flexibility is crucial for high performance computing applications. -- Abstract: We develop a general framework for computing the adjoint variable to nuclear engineering problems governed by a set of differential–algebraic equations (DAEs). The nuclear engineering community has a rich history of developing and applying adjoints for sensitivity calculations; many such formulations, however, are specific to a certain set of equations, variables, or solution techniques. Any change or addition to the physics model would require a reformulation of the adjoint problem and substantial difficulties in its software implementation. In this work we propose an abstract framework that allows for the modification and expansion of the governing equations, leverages the existing theory of adjoint formulation for DAEs, and results in adjoint equations that can be used to efficiently compute sensitivities for parametric uncertainty quantification. Moreover, as we justify theoretically and demonstrate numerically, the same framework can be used to estimate global time discretization error. We first motivate the framework and show that the coupled Bateman and transport equations, which govern the time-dependent neutronic behavior of a nuclear reactor, may be formulated as a DAE system with a power constraint. We then use a variational approach to develop the parameter-dependent adjoint framework and apply existing theory to give formulations for sensitivity and global time discretization error estimates using the adjoint.
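The adjoint idea behind the framework can be checked on a scalar toy problem (this illustrates adjoint-based sensitivity only, not the paper's DAE formulation). For dy/dt = -p*y with objective J = y(T), the adjoint satisfies dl/dt = p*l backward from l(T) = 1, and dJ/dp = -integral of l*y over [0, T], which can be compared with the exact derivative.

```python
import numpy as np

# Forward problem dy/dt = -p*y, y(0) = y0; objective J = y(T).
# Adjoint: dl/dt = p*l integrated backward from l(T) = 1.
# Sensitivity: dJ/dp = -int_0^T l(t) y(t) dt; exact answer -T*y0*exp(-p*T).
p, y0, T, n = 0.7, 2.0, 3.0, 200
t = np.linspace(0.0, T, n + 1)
y = y0 * np.exp(-p * t)                  # forward solution (analytic here)
lam = np.exp(p * (t - T))                # adjoint solution (analytic here)
f = lam * y                              # integrand of the sensitivity
dJdp_adjoint = -float(((f[1:] + f[:-1]) / 2 * np.diff(t)).sum())  # trapezoid
dJdp_exact = -T * y0 * np.exp(-p * T)
print(f"adjoint sensitivity {dJdp_adjoint:.8f}, exact {dJdp_exact:.8f}")
```

The attraction, which carries over to the DAE setting, is that one backward adjoint solve yields sensitivities with respect to any number of parameters.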
Grinband, Jack; Savitskaya, Judith; Wager, Tor D; Teichert, Tobias; Ferrera, Vincent P; Hirsch, Joy
2011-07-15
The dorsal medial frontal cortex (dMFC) is highly active during choice behavior. Though many models have been proposed to explain dMFC function, the conflict monitoring model is the most influential. It posits that dMFC is primarily involved in detecting interference between competing responses thus signaling the need for control. It accurately predicts increased neural activity and response time (RT) for incompatible (high-interference) vs. compatible (low-interference) decisions. However, it has been shown that neural activity can increase with time on task, even when no decisions are made. Thus, the greater dMFC activity on incompatible trials may stem from longer RTs rather than response conflict. This study shows that (1) the conflict monitoring model fails to predict the relationship between error likelihood and RT, and (2) the dMFC activity is not sensitive to congruency, error likelihood, or response conflict, but is monotonically related to time on task. Copyright © 2010 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Barbara Wachowicz
Full Text Available Circadian rhythms and restricted sleep length affect cognitive functions and, consequently, the performance of day-to-day activities. To date, no more than a few studies have explored the consequences of these factors for oculomotor behaviour. We implemented a spatial cuing paradigm in an eye-tracking experiment conducted at four times of the day after one week of rested wakefulness and after one week of chronic partial sleep restriction. Our aim was to verify whether these conditions affect the number of various types of saccadic task errors. Interestingly, we found that failures in response selection, i.e. premature responses and direction errors, were prone to time-of-day variations, whereas failures in response execution, i.e. omissions and commissions, were considerably affected by sleep deprivation. The former can be linked to the cue facilitation mechanism, and the latter to wake-state instability and a diminished capacity for top-down inhibition. Together, these results may be interpreted in terms of the distinctive sensitivity of the orienting and alerting systems to fatigue. Saccadic eye movements proved to be a novel and effective measure with which to study the susceptibility of attentional systems to time factors; this approach is therefore recommended for future research.
Estimation of error on the cross-correlation, phase and time lag between evenly sampled light curves
Misra, R.; Bora, A.; Dewangan, G.
2018-04-01
Temporal analysis of radiation from astrophysical sources such as active galactic nuclei, X-ray binaries and gamma-ray bursts provides information on the geometry and sizes of the emitting regions. Establishing that two light curves in different energy bands are correlated, and measuring the phase and time lag between them, is an important and frequently used temporal diagnostic. Generally the estimates are made by dividing the light curves into a large number of adjacent intervals to find the variance, or by using numerically expensive simulations. In this work we present alternative expressions for estimating the errors on the cross-correlation, phase and time lag between two shorter light curves when they cannot be divided into segments. The estimates presented here therefore allow for analysis of light curves with a relatively small number of points, as well as for obtaining information on the longest time scales available. The expressions have been tested using 200 light curves simulated from both white and 1/f stochastic processes with measurement errors. We also present an application to the XMM-Newton light curves of the active galactic nucleus Akn 564. The example shows that the estimates presented here allow for analysis of light curves with a relatively small (∼1000) number of points.
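The basic estimator being assigned errors can be sketched as follows; the paper's analytic error expressions are not reproduced here, and the synthetic light curves, noise level, and lag are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, true_lag = 1000, 7
# Smoothed noise as a stand-in light curve; the second band is a delayed,
# independently noisy copy of the first.
s = np.convolve(rng.standard_normal(n + 200), np.ones(10) / 10, "same")
a = s[100:100 + n] + 0.05 * rng.standard_normal(n)
b = np.roll(s, true_lag)[100:100 + n] + 0.05 * rng.standard_normal(n)

def ccf(x, y, max_lag):
    """Cross-correlation of standardized series at lags -max_lag..max_lag."""
    m = len(x)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([np.mean(x[max(0, -k):m - max(0, k)] *
                          y[max(0, k):m - max(0, -k)]) for k in lags])
    return lags, r

lags, r = ccf(a, b, 20)
recovered = lags[np.argmax(r)]
print("recovered lag:", recovered)
```

The hard part, which the paper addresses, is attaching an error bar to the peak position and correlation amplitude when the series is too short to split into independent segments.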
Cointegration and Error Correction Modelling in Time-Series Analysis: A Brief Introduction
Directory of Open Access Journals (Sweden)
Helmut Thome
2015-07-01
Full Text Available Criminological research is often based on time-series data showing some type of trend movement. Trending time series may correlate strongly even in cases where no causal relationship exists (spurious causality). To avoid this problem researchers often apply some technique of detrending their data, such as differencing the series. This approach, however, may bring up another problem: that of spurious non-causality. Both problems can, in principle, be avoided if the series under investigation are “difference-stationary” (if the trend movements are stochastic) and “cointegrated” (if the stochastically changing trend movements in different variables correspond to each other). The article gives a brief introduction to key instruments and interpretative tools applied in cointegration modelling.
Vortex breakdown in a truncated conical bioreactor
Energy Technology Data Exchange (ETDEWEB)
Balci, Adnan; Brøns, Morten [DTU Compute, Technical University of Denmark, DK-2800 Kgs. Lyngby (Denmark); Herrada, Miguel A [E.S.I, Universidad de Sevilla, Camino de los Descubrimientos s/n, E-41092 (Spain); Shtern, Vladimir N, E-mail: mobr@dtu.dk [Shtern Research and Consulting, Houston, TX 77096 (United States)
2015-12-15
This numerical study explains the eddy formation and disappearance in a slow steady axisymmetric air–water flow in a vertical truncated conical container, driven by the rotating top disk. Numerous topological metamorphoses occur as the water height, H_w, and the bottom-sidewall angle, α, vary. It is found that the sidewall convergence (divergence) from the top to the bottom stimulates (suppresses) the development of vortex breakdown (VB) in both water and air. At α = 60°, the flow topology changes eighteen times as H_w varies. The changes are due to (a) competing effects of AMF (the air meridional flow) and swirl, which drive meridional motions of opposite directions in water, and (b) feedback of water flow on AMF. For small H_w, the AMF effect dominates. As H_w increases, the swirl effect dominates and causes VB. The water flow feedback produces and modifies air eddies. The results are of fundamental interest and can be relevant for aerial bioreactors. (paper)
Directory of Open Access Journals (Sweden)
Liu Yu-Sun
2011-01-01
Full Text Available The performance of the wrap-around Viterbi decoding algorithm with finite truncation depth and fixed decoding trellis length is investigated for tail-biting convolutional codes in the mobile WiMAX standard. Upper bounds on the error probabilities induced by finite truncation depth and the uncertainty of the initial state are derived for the AWGN channel. The truncation depth and the decoding trellis length that yield negligible performance loss are obtained for all transmission rates over the Rayleigh channel using computer simulations. The results show that the circular decoding algorithm with an appropriately chosen truncation depth and a decoding trellis just a fraction longer than the original received code words can achieve almost the same performance as the optimal maximum likelihood decoding algorithm in mobile WiMAX. A rule of thumb for the values of the truncation depth and the trellis tail length is also proposed.
Real-time minimal-bit-error probability decoding of convolutional codes
Lee, L.-N.
1974-01-01
A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.
Böhning, Dankmar; Kuhnert, Ronny
2006-12-01
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
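A minimal numerical sketch of the capture-recapture use mentioned above, with a single zero-truncated Poisson component rather than a mixture: fit lam by maximum likelihood on the observed (nonzero) counts, then plug it into the Horvitz-Thompson estimator N_hat = n / (1 - exp(-lam)). The population size and rate below are invented for the demonstration.

```python
import numpy as np
from scipy.optimize import brentq

# Simulated capture counts: Poisson(lam) over a population of size N,
# with zero counts unobservable (illustrative N and lam).
rng = np.random.default_rng(4)
N, lam_true = 2000, 1.5
counts = rng.poisson(lam_true, N)
observed = counts[counts > 0]
n, xbar = len(observed), observed.mean()

# The zero-truncated Poisson mean is lam / (1 - exp(-lam)); the MLE of lam
# equates it with the sample mean of the observed counts.
lam_hat = brentq(lambda lam: lam / (1 - np.exp(-lam)) - xbar, 1e-8, 50.0)
N_hat = n / (1 - np.exp(-lam_hat))                 # Horvitz-Thompson
print(f"lam_hat {lam_hat:.3f}, N_hat {N_hat:.0f} (true N {N})")
```

The article's equivalence result matters when a single Poisson component underfits: one can then work with the theoretically convenient mixture of truncated densities and map the solution back.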
Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K
2016-11-25
Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
Chen, Wei; Shen, Jana K.
2014-01-01
Constant pH molecular dynamics offers a means to rigorously study the effects of solution pH on dynamical processes. Here we address two critical questions arising from the most recent developments of the all-atom continuous constant pH molecular dynamics (CpHMD) method: 1) What is the effect of spatial electrostatic truncation on the sampling of protonation states? 2) Is the enforcement of electrical neutrality necessary for constant pH simulations? We first examined how the generalized reaction field and force shifting schemes modify the electrostatic forces on the titration coordinates. Free energy simulations of model compounds were then carried out to delineate the errors in the deprotonation free energy and salt-bridge stability due to electrostatic truncation and system net charge. Finally, CpHMD titration of a mini-protein HP36 was used to understand the manifestation of the two types of errors in the calculated pKa values. The major finding is that enforcing charge neutrality under all pH conditions and at all times via co-titrating ions significantly improves the accuracy of protonation-state sampling. We suggest that such finding is also relevant for simulations with particle-mesh Ewald, considering the known artifacts due to charge-compensating background plasma. PMID:25142416
Truncation correction for oblique filtering lines
International Nuclear Information System (INIS)
Hoppe, Stefan; Hornegger, Joachim; Lauritsch, Guenter; Dennerlein, Frank; Noo, Frederic
2008-01-01
State-of-the-art filtered backprojection (FBP) algorithms often define the filtering operation to be performed along oblique filtering lines in the detector. A limited scan field of view leads to the truncation of those filtering lines, which causes artifacts in the final reconstructed volume. In contrast to the case where filtering is performed solely along the detector rows, no methods are available for the case of oblique filtering lines. In this work, the authors present two novel truncation correction methods which effectively handle data truncation in this case. Method 1 (basic approach) handles data truncation in two successive preprocessing steps by applying a hybrid data extrapolation method, which is a combination of a water cylinder extrapolation and a Gaussian extrapolation. It is independent of any specific reconstruction algorithm. Method 2 (kink approach) uses similar concepts for data extrapolation as the basic approach but needs to be integrated into the reconstruction algorithm. Experiments are presented from simulated data of the FORBILD head phantom, acquired along a partial-circle-plus-arc trajectory. The theoretically exact M-line algorithm is used for reconstruction. Although the discussion is focused on theoretically exact algorithms, the proposed truncation correction methods can be applied to any FBP algorithm that exposes oblique filtering lines.
Formal truncations of connected kernel equations
International Nuclear Information System (INIS)
Dixon, R.M.
1977-01-01
The Connected Kernel Equations (CKEs) of Alt, Grassberger and Sandhas (AGS); Kouri, Levin and Tobocman (KLT); and Bencze, Redish and Sloan (BRS) are compared against reaction theory criteria after formal channel space and/or operator truncations have been introduced. The Channel Coupling Class concept is used to study the structure of these CKEs. The related wave function formalisms of Sandhas, of L'Huillier, Redish and Tandy, and of Kouri, Krueger and Levin are also presented. New N-body connected kernel equations which are generalizations of the Lovelace three-body equations are derived. A method for systematically constructing fewer-body models from the N-body BRS and generalized Lovelace (GL) equations is developed. The formally truncated AGS, BRS, KLT and GL equations are analyzed by employing the criteria of reciprocity and two-cluster unitarity. Reciprocity considerations suggest that formal truncations of the BRS, KLT and GL equations can lead to reciprocity-violating results. This study suggests that atomic problems should employ three-cluster connected truncations and that two-cluster connected truncations should be a useful starting point for nuclear systems.
Energy Technology Data Exchange (ETDEWEB)
Alonso, Juan J. [Stanford University; Iaccarino, Gianluca [Stanford University
2013-08-25
The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A
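A minimal sketch of the nonintrusive stochastic collocation ingredient mentioned above: propagate one Gaussian uncertain input through a "black box" model by evaluating it only at Gauss-Hermite quadrature nodes, then recover output statistics from the weights. The model function is a stand-in for an expensive solver, and the node count is an arbitrary choice.

```python
import numpy as np

def f(x):
    """Stand-in for an expensive deterministic solver with one random input."""
    return np.exp(0.3 * x)

# Probabilists' Gauss-Hermite rule: integrates against exp(-x^2/2); dividing
# by sqrt(2*pi) turns the weights into a discrete standard-normal measure.
nodes, weights = np.polynomial.hermite_e.hermegauss(12)
weights = weights / np.sqrt(2 * np.pi)

vals = f(nodes)                      # 12 solver evaluations, no code changes
mean = np.dot(weights, vals)
var = np.dot(weights, vals**2) - mean**2
print(f"collocation mean {mean:.6f}, exact {np.exp(0.045):.6f}")
print(f"collocation var  {var:.6f}, exact {np.exp(0.09) * (np.exp(0.09) - 1):.6f}")
```

For X ~ N(0,1) the exact lognormal moments are known, so the quadrature answer can be checked directly; with many uncertain inputs, sparse grids or polynomial chaos keep the number of solver runs manageable, which is where the efficiency gains targeted by the project arise.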
Thermospheric mass density model error variance as a function of time scale
Emmert, J. T.; Sutton, E. K.
2017-12-01
In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
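The spectral analysis described above can be sketched with synthetic residuals of known slope (here a 1/f^2 power law; the 27-day enhancement seen in the real data is not modeled): estimate the periodogram and fit the slope in log-log space.

```python
import numpy as np

# Synthesize a residual series with |X(f)|^2 ~ f^-alpha by shaping complex
# Gaussian noise in the frequency domain, then recover alpha from the
# periodogram with a least-squares fit in log-log coordinates.
rng = np.random.default_rng(6)
n, alpha = 8192, 2.0
freq = np.fft.rfftfreq(n, d=1.0)                # cycles per sample
amp = np.zeros_like(freq)
amp[1:] = freq[1:] ** (-alpha / 2.0)            # amplitude ~ f^(-alpha/2)
noise = rng.standard_normal(freq.size) + 1j * rng.standard_normal(freq.size)
resid = np.fft.irfft(amp * noise / np.sqrt(2.0), n)  # synthetic residuals

p = np.abs(np.fft.rfft(resid)) ** 2             # periodogram
mask = freq > 0
slope, _ = np.polyfit(np.log(freq[mask]), np.log(p[mask]), 1)
print(f"fitted spectral slope {slope:.2f} (input {-alpha})")
```

For real data-minus-model residuals the same pipeline yields the empirical power-law exponent, to which a term for the solar-rotation enhancement can then be added.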
A real-time error-free color-correction facility for digital consumers
Shaw, Rodney
2008-01-01
It has been well known since the earliest days of color photography that color-balance in general, and facial reproduction (flesh tones) in particular, are of dominant interest to the consumer, and significant research resources have been expended in satisfying this need. The general problem is a difficult one, spanning the factors that govern perception and personal preference, the physics and chemistry of color reproduction, as well as wide field of color measurement specification, and analysis. However, with the advent of digital photography and its widespread acceptance in the consumer market, and with the possibility of a much greater degree of individual control over color reproduction, the field is taking on a new consumer-driven impetus, and the provision of user facilities for preferred color choice now constitutes an intense field of research. In addition, due to the conveniences of digital technology, the collection of large data bases and statistics relating to individual color preferences have now become a relatively straightforward operation. Using a consumer preference approach of this type, we have developed a user-friendly facility whereby unskilled consumers may manipulate the color of their personal digital images according to their preferred choice. By virtue of its ease of operation and the real-time nature of the color-correction transforms, this facility can readily be inserted anywhere a consumer interacts with a digital image, from camera, printer, or scanner, to web or photo-kiosk. Here the underlying scientific principles are explored in detail, and these are related to the practical color-preference outcomes. Examples are given of the application to the correction of images with unsatisfactory color balance, and especially to flesh tones and faces, and the nature of the consumer controls and their corresponding image transformations are explored.
Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a...
New Schemes for Positive Real Truncation
Directory of Open Access Journals (Sweden)
Kari Unneland
2007-07-01
Full Text Available Model reduction, based on balanced truncation, of stable and of positive real systems is considered. An overview of some existing techniques is given: Lyapunov balancing and stochastic balancing, the latter of which includes Riccati balancing. A novel scheme for positive real balanced truncation is then proposed, which combines the existing Lyapunov balancing and Riccati balancing. Using Riccati balancing, the solutions of two Riccati equations are needed to obtain positive real reduced-order systems. For the suggested method, only one Lyapunov equation and one Riccati equation are solved in order to obtain positive real reduced-order systems, which is less computationally demanding. Further, it is shown that in order to get positive real reduced-order systems, only one Riccati equation needs to be solved. Finally, this is used to obtain positive real frequency-weighted balanced truncation.
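For reference, here is a minimal sketch of the classical square-root Lyapunov balanced truncation that the scheme above builds on (the positive-real/Riccati variants replace the Lyapunov solves with Riccati solves and are not shown); the example system is made up for illustration:

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

def lyapunov_balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system to order r."""
    # Controllability gramian P:  A P + P A^T + B B^T = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    # Observability gramian Q:  A^T Q + Q A + C^T C = 0
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    # Square-root method: SVD of the product of Cholesky factors
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)        # s = Hankel singular values
    # Truncated balancing projectors
    Sr = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt[:r].T @ Sr           # right projector
    W = Lq @ U[:, :r] @ Sr           # left projector
    return W.T @ A @ T, W.T @ B, C @ T, s

# Illustrative 2-state stable system reduced to order 1
A = np.array([[-1.0, 0.0], [0.0, -10.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
Ar, Br, Cr, hsv = lyapunov_balanced_truncation(A, B, C, r=1)
```

The states with the smallest Hankel singular values are discarded; the reduced system inherits stability from the original.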
Chen, Chi-Jim; Pai, Tun-Wen; Cheng, Mox
2015-01-01
A sweeping fingerprint sensor converts fingerprints on a row-by-row basis through image reconstruction techniques. However, the reconstructed fingerprint image might appear truncated and distorted when the finger is swept across the sensor at a non-linear speed. If truncated fingerprint images were enrolled as reference targets and collected by an automated fingerprint identification system (AFIS), successful prediction rates for fingerprint matching applications would decrease significantly. In this paper, a novel and effective methodology with low computational time complexity was developed for detecting truncated fingerprints in real time. Several filtering rules were implemented to validate the existence of truncated fingerprints. In addition, a machine learning method, the support vector machine (SVM), based on the principle of structural risk minimization, was applied to reject pseudo-truncated fingerprints containing characteristics similar to truncated ones. The experimental results show that an accuracy rate of 90.7% was achieved by successfully identifying truncated fingerprint images from testing images before AFIS enrollment procedures. The proposed effective and efficient methodology can be extensively applied to all existing fingerprint matching systems as a preliminary quality control prior to the construction of fingerprint templates. PMID:25835186
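A minimal sketch of the SVM rejection step is shown below with scikit-learn; the two features and their distributions are invented stand-ins, not the paper's actual descriptors:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200

# Hypothetical per-image features (stand-ins for the paper's filtering
# rules), e.g. foreground aspect ratio and a border ridge-density score.
truly_truncated = np.column_stack([rng.normal(2.0, 0.3, n),
                                   rng.normal(0.8, 0.1, n)])
pseudo_truncated = np.column_stack([rng.normal(1.2, 0.3, n),
                                    rng.normal(0.3, 0.1, n)])
X = np.vstack([truly_truncated, pseudo_truncated])
y = np.r_[np.ones(n), np.zeros(n)]       # 1 = truncated, 0 = pseudo

# SVM classifier used to reject pseudo-truncated fingerprints
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
train_acc = clf.score(X, y)
```

In a real deployment the classifier would of course be evaluated on held-out images, as the 90.7% test accuracy in the abstract suggests.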
Energy Technology Data Exchange (ETDEWEB)
Mongioj, Valeria (Dept. of Medical Physics, Fondazione IRCCS Istituto Nazionale Tumori, Milan (Italy)), e-mail: valeria.mongioj@istitutotumori.mi.it; Orlandi, Ester (Dept. of Radiotherapy, Fondazione IRCCS Istituto Nazionale Tumori, Milan (Italy)); Palazzi, Mauro (Dept. of Radiotherapy, A.O. Niguarda Ca' Granda, Milan (Italy)) (and others)
2011-01-15
Introduction. The aims of this study were to analyze the systematic and random interfractional set-up errors during Intensity Modulated Radiation Therapy (IMRT) in 20 consecutive nasopharyngeal carcinoma (NPC) patients by means of an Electronic Portal Imaging Device (EPID), to define appropriate Planning Target Volume (PTV) and Planning Risk Volume (PRV) margins, and to investigate the set-up displacement trend as a function of time during the fractionated RT course. Material and methods. Before EPID clinical implementation, an anthropomorphic phantom was shifted intentionally 5 mm in all directions and the EPIs were compared with the digitally reconstructed radiographs (DRRs) to test the system's capability to recognize displacements observed in clinical studies. Then, 578 clinical images were analyzed, with a mean of 29 images for each patient. Results. Phantom data showed that the system was able to correct shifts with an accuracy of 1 mm. As regards clinical data, the estimated population systematic errors were 1.3 mm for the left-right (L-R), 1 mm for the superior-inferior (S-I) and 1.1 mm for the anterior-posterior (A-P) directions, respectively. Population random errors were 1.3 mm, 1.5 mm and 1.3 mm for the L-R, S-I and A-P directions, respectively. The PTV margin was at least 3.4, 3 and 3.2 mm for the L-R, S-I and A-P directions, respectively. PRV margins for brainstem and spinal cord were 2.3, 2 and 2.1 mm and 3.8, 3.5 and 3.2 mm for the L-R, A-P and S-I directions, respectively. Set-up error displacements showed no significant changes as the therapy progressed (p>0.05), although displacements >3 mm were found more frequently when severe weight loss or tumor nodal shrinkage occurred. Discussion. These results enable us to choose margins that guarantee, with sufficient accuracy, coverage of the PTVs while sparing organs at risk. Collected data confirmed the need for a strict check of patient position reproducibility in case of anatomical changes
An iterative reconstruction from truncated projection data
International Nuclear Information System (INIS)
Anon.
1985-01-01
Various methods have been proposed for tomographic reconstruction from truncated projection data. In this paper, a reconstruction method is discussed which consists of iterations of filtered back-projection, reprojection and some nonlinear processing. First, the method is so constructed that it converges to a fixed point. Then, to examine its effectiveness, comparisons are made by computer experiments with two existing reconstruction methods for truncated projection data, that is, the method of extrapolation based on the smoothness assumption followed by filtered back-projection, and modified additive ART
Stellar Disk Truncations: HI Density and Dynamics
Trujillo, Ignacio; Bakos, Judit
2010-06-01
Using The HI Nearby Galaxy Survey (THINGS) 21-cm observations of a sample of nearby (nearly face-on) galaxies, we explore whether the stellar disk truncation phenomenon produces any signature in the HI gas density and/or the gas dynamics. Recent cosmological simulations suggest that the break in the surface brightness distribution originates from a warp at the truncation position. This warp should produce a flaring of the gas distribution, increasing the velocity dispersion of the HI component beyond the break. We do not find, however, any evidence of this increase in the gas velocity dispersion profile.
Directory of Open Access Journals (Sweden)
Jianzhong Zhou
2017-12-01
Full Text Available Model simulation and control of pumped storage units (PSU) are essential to improve the dynamic quality of a power station. Only if the PSU models reflect the actual transient process can novel control methods be properly applied in engineering. The contributions of this paper are that (1) a real-time accurate equivalent circuit model (RAECM) of PSU via error compensation is proposed to reconcile the conflict between real-time online simulation and accuracy under various operating conditions, and (2) an adaptive predicted fuzzy PID controller (APFPID) based on RAECM is put forward to overcome the instability of conventional control under no-load conditions with low water head. All hydraulic factors in the pipeline system are fully considered based on the equivalent lumped-circuits theorem. The pretreatment, which consists of an improved Suter transformation and a BP neural network, and an online simulation method featuring two iterative loops are proposed to improve the solving accuracy for the pump-turbine. Moreover, modified formulas for compensating error are derived with variable-spatial discretization to further improve the accuracy of the real-time simulation. The implicit RadauIIA method is verified to be more suitable for PSUGS owing to its wider stability domain. Then, the APFPID controller is constructed based on the integration of fuzzy PID and model predictive control. Rolling prediction by RAECM is proposed to replace rolling optimization, with its computational speed guaranteed. Finally, simulation and on-site measurements are compared to prove the trustworthiness of RAECM under various running conditions. Comparative experiments also indicate that the APFPID controller outperforms other controllers in most cases, especially under low-water-head conditions. Satisfying results of RAECM have been achieved in engineering, and it provides a novel model reference for PSUGS.
The effect of truncation on very small cardiac SPECT camera systems
International Nuclear Information System (INIS)
Rohmer, Damien; Eisner, Robert L.; Gullberg, Grant T.
2006-01-01
Background: The limited transaxial field-of-view (FOV) of a very small cardiac SPECT camera system causes view-dependent truncation of the projection of structures exterior to, but near, the heart. Basic tomographic principles suggest that the reconstruction of non-attenuated truncated data gives a distortion-free image in the interior of the truncated region, but the DC term of the Fourier spectrum of the reconstructed image is incorrect, meaning that the intensity scale of the reconstruction is inaccurate. The purpose of this study was to characterize the reconstructed image artifacts from truncated data, and to quantify their effects on the measurement of tracer uptake in the myocardium. Particular attention was given to instances where the heart wall is close to hot structures (structures of high activity uptake). Methods: The MCAT phantom was used to simulate a 2D slice of the heart region. Truncated and non-truncated projections were formed both with and without attenuation. The reconstructions were analyzed for artifacts in the myocardium caused by truncation, and for the extent to which attenuation increases those artifacts. Results: The inaccuracy due to truncation is primarily caused by an incorrect DC component. For visualizing the left ventricular wall, this error is not worse than the effect of attenuation. The addition of a small hot bowel-like structure near the left ventricle causes few changes in counts on the wall. Larger artifacts due to the truncation are located at the boundary of the truncation and can be eliminated by sinogram interpolation. Finally, algebraic reconstruction methods are shown to give better reconstruction results than an analytical filtered back-projection reconstruction algorithm. Conclusion: Small inaccuracies in reconstructed images from small FOV camera systems should have little effect on clinical interpretation. However, changes in the degree of inaccuracy in counts from slice to slice are due to changes in
Yu, Yifei; Luo, Linqing; Li, Bo; Guo, Linfeng; Yan, Jize; Soga, Kenichi
2015-10-01
The measured distance error caused by double peaks in BOTDR (Brillouin optical time domain reflectometer) systems is a kind of Brillouin scattering spectrum (BSS) deformation, discussed and simulated for the first time in this paper, to the best of the authors' knowledge. Double peaks, as a kind of Brillouin spectrum deformation, are important for the enhancement of spatial resolution, measurement accuracy, and crack detection. Due to the variances of the peak powers of the BSS along the fiber, the measured starting point of a step-shaped frequency transition region is shifted, resulting in distance errors. Zero-padded short-time Fourier transform (STFT) can restore the transition-induced double peaks in the asymmetric and deformed BSS, thus offering more accurate and quicker measurements than the conventional Lorentz-fitting method. The recovery method based on double-peak detection and the corresponding BSS deformation can be applied to calculate the real starting point, which can improve the distance accuracy of the STFT-based BOTDR system.
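The zero-padded STFT idea can be illustrated on a toy signal with a step change in frequency, standing in for the Brillouin frequency shift along the fiber (real BOTDR frequencies sit near 11 GHz; the sampling rate and frequencies below are arbitrary illustrative values):

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0                              # toy sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
# Frequency steps from 100 Hz to 150 Hz halfway along, mimicking a
# step-shaped frequency transition region along the fiber.
x = np.where(t < 1.0, np.sin(2 * np.pi * 100 * t),
                      np.sin(2 * np.pi * 150 * t))

# Zero-padding (nfft > nperseg) interpolates the spectrum, so windows
# spanning the transition resolve the two peaks more finely.
f, seg_t, Z = stft(x, fs=fs, nperseg=256, nfft=1024)
mag = np.abs(Z)
peak_before = f[np.argmax(mag[:, 2])]    # window well before the step
peak_after = f[np.argmax(mag[:, -3])]    # window well after the step
```

Locating the first window in which both peaks appear gives the starting point of the transition region, which is the quantity the distance correction relies on.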
Determination of αS from scaling violations of truncated moments of structure functions
International Nuclear Information System (INIS)
Forte, Stefano; Latorre, J.I.; Magnea, Lorenzo; Piccione, Andrea
2002-01-01
We determine the strong coupling α_S(M_Z) from scaling violations of truncated moments of the nonsinglet deep inelastic structure function F_2. Truncated moments are determined from BCDMS and NMC data using a neural network parametrization which retains the full experimental information on errors and correlations. Our method minimizes all sources of theoretical uncertainty and bias which characterize extractions of α_S from scaling violations. We obtain α_S(M_Z) = 0.124 +0.004/−0.007 (exp.) +0.003/−0.004 (th.)
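A truncated moment is simply a Mellin moment with the lower integration limit raised from 0 to some x0, which avoids the experimentally unmeasured small-x region. The sketch below computes one numerically for a toy F_2 parametrization; the functional form is invented for illustration, not the neural network fit of the paper:

```python
from scipy.integrate import quad

def F2(x):
    # Toy nonsinglet structure function shape (illustrative only)
    return x ** 0.5 * (1 - x) ** 3

def truncated_moment(n, x0):
    """Truncated Mellin moment: integral over [x0, 1] of x^(n-2) * F2(x)."""
    val, _err = quad(lambda x: x ** (n - 2) * F2(x), x0, 1.0)
    return val

m_full = truncated_moment(2, 0.0)      # ordinary second moment
m_trunc = truncated_moment(2, 0.1)     # small-x region removed
```

The scale dependence of such truncated moments, computed at several Q^2 values, is what carries the α_S sensitivity in the extraction described above.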
A SUZAKU OBSERVATION OF NGC 4593: ILLUMINATING THE TRUNCATED DISK
International Nuclear Information System (INIS)
Markowitz, A. G.; Reeves, J. N.
2009-01-01
We report results from a 2007 Suzaku observation of the Seyfert 1 AGN NGC 4593. The narrow Fe Kα emission line has a FWHM width of ~4000 km s^-1, indicating emission from ≳5000 R_g. There is no evidence for a relativistically broadened Fe K line, consistent with the presence of a radiatively efficient outer disk which is truncated or transitions to an interior radiatively inefficient flow. The Suzaku observation caught the source in a low-flux state; comparison to a 2002 XMM-Newton observation indicates that the hard X-ray flux decreased by a factor of 3.6, while the Fe Kα line intensity and width σ each roughly halved. Two model-dependent explanations for the changes in the Fe Kα line profile are explored. In one, the Fe Kα line width has decreased from ~10,000 to ~4000 km s^-1 from 2002 to 2007, suggesting that the thin-disk truncation/transition radius has increased from 1000-2000 to ≳5000 R_g. However, there are indications from other compact accreting systems that such truncation radii tend to be associated only with accretion rates relative to Eddington much lower than that of NGC 4593. In the second model, the line profile in the XMM-Newton observation consists of a time-invariant narrow component plus a broad component originating from the inner part of the truncated disk (~300 R_g) which has responded to the drop in continuum flux. The Compton reflection component strength R is ~1.1, consistent with the measured Fe Kα line total equivalent width with an Fe abundance 1.7 times the solar value. The modest soft excess, modeled well either by thermal bremsstrahlung emission or by Comptonization of soft seed photons in an optically thin plasma, has fallen by a factor of ~20 from 2002 to 2007, ruling out emission from a region 5 lt-yr in size.
A Multistep Extending Truncation Method towards Model Construction of Infinite-State Markov Chains
Directory of Open Access Journals (Sweden)
Kemin Wang
2014-01-01
Full Text Available The model checking of infinite-state continuous-time Markov chains (CTMCs) inevitably encounters the state explosion problem when constructing the CTMC model; our approach is to work with a truncated model of the infinite one. To obtain a truncated model sufficient for model checking system properties expressed in Continuous Stochastic Logic (CSL), we propose a multistep extending truncation method for model construction of CTMCs and implement it in the INFAMY model checker; the experimental results show that our method is effective.
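The multistep extending idea, enlarging the truncation until the quantity of interest stops changing, can be sketched for the stationary distribution of a simple birth-death CTMC. This M/M/1-style example is only an illustration; the paper's method targets CSL model checking in INFAMY, which is far more general:

```python
import numpy as np

def stationary(Q):
    """Solve pi @ Q = 0 with sum(pi) = 1 for a finite generator Q."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def truncated_generator(n, lam=1.0, mu=2.0):
    """Birth-death (M/M/1-type) generator truncated at state n-1."""
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i + 1] = lam      # birth
        Q[i + 1, i] = mu       # death
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def extend_until_converged(tol=1e-8, step=5, nmax=200):
    """Grow the truncation until successive solutions agree to tol."""
    n = step
    prev = stationary(truncated_generator(n))
    while n < nmax:
        n += step
        cur = stationary(truncated_generator(n))
        # total variation distance between successive solutions
        tv = 0.5 * np.abs(cur[:prev.size] - prev).sum() + 0.5 * cur[prev.size:].sum()
        if tv < tol:
            return n, cur
        prev = cur
    return n, cur
```

For this chain the exact stationary distribution is geometric with ratio lam/mu = 0.5, so the converged truncation reproduces pi[0] ≈ 0.5.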
Ng, Kar Yong; Awang, Norhashidah
2018-01-06
Frequent haze occurrences in Malaysia have made the management of PM10 (particulate matter with aerodynamic diameter less than 10 μm) pollution a critical task. This requires knowledge of the factors associated with PM10 variation and good forecasts of PM10 concentrations. Hence, this paper demonstrates the prediction of 1-day-ahead daily average PM10 concentrations based on predictor variables including meteorological parameters and gaseous pollutants. Three different models were built: a multiple linear regression (MLR) model with lagged predictor variables (MLR1), an MLR model with lagged predictor variables and lagged PM10 concentrations (MLR2), and a regression with time series error (RTSE) model. The findings revealed that humidity, temperature, wind speed, wind direction, carbon monoxide and ozone were the main factors explaining the PM10 variation in Peninsular Malaysia. Comparison among the three models showed that the MLR2 model was on a par with the RTSE model in terms of forecasting accuracy, while the MLR1 model was the worst.
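A minimal sketch of the MLR2-style model (lagged predictors plus lagged PM10) on synthetic data; the predictors, coefficients and noise level are invented for illustration and are not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Hypothetical daily predictors (stand-ins for the meteorological data)
temp = rng.normal(30.0, 3.0, n)      # temperature, deg C
wind = rng.normal(2.0, 0.5, n)       # wind speed, m/s

# Simulated PM10 that persists day to day and responds to yesterday's
# weather (coefficients chosen arbitrarily for the illustration).
pm10 = np.empty(n)
pm10[0] = 50.0
for i in range(1, n):
    pm10[i] = 0.6 * pm10[i - 1] + 0.8 * temp[i - 1] - 5.0 * wind[i - 1] \
              + rng.normal(0.0, 2.0)

# MLR2-style design matrix: intercept, lagged predictors, lagged PM10
X = np.column_stack([np.ones(n - 1), temp[:-1], wind[:-1], pm10[:-1]])
y = pm10[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta                       # 1-day-ahead in-sample predictions
rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
```

Dropping the `pm10[:-1]` column turns this into the MLR1 model, which is exactly the comparison the paper makes.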
Truncation in diffraction pattern analysis. Pt. 1
International Nuclear Information System (INIS)
Delhez, R.; Keijser, T.H. de; Mittemeijer, E.J.; Langford, J.I.
1986-01-01
An evaluation of the concept of a line profile is provoked by truncation of the range of intensity measurement in practice. The measured truncated line profile can be considered either as part of the total intensity distribution which peaks at or near the reciprocal-lattice points (approach 1), or as part of a component line profile which is confined to a single reciprocal-lattice point (approach 2). Some false conceptions in line-profile analysis can then be avoided and recipes can be developed for the extrapolation of the tails of the truncated line profile. Fourier analysis of line profiles, according to the first approach, implies a Fourier series development of the total intensity distribution defined within [l - 1/2, l + 1/2] (l indicates the node considered in reciprocal space); the second approach implies a Fourier transformation of the component line profile defined within (-∞, +∞). Exact descriptions of size broadening are provided by both approaches, whereas combined size and strain broadening can only be evaluated adequately within the first approach. Straightforward methods are given for obtaining truncation-corrected values for the average crystallite size. (orig.)
Balanced truncation for linear switched systems
DEFF Research Database (Denmark)
Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef
2013-01-01
In this paper, we present a theoretical analysis of the model reduction algorithm for linear switched systems from Shaker and Wisniewski (2011, 2009). This algorithm is reminiscent of the balanced truncation method for linear parameter-varying systems (Wood et al., 1996) [3]. Specifically...
Family Therapy for the "Truncated" Nuclear Family.
Zuk, Gerald H.
1980-01-01
The truncated nuclear family consists of a two-generation group in which conflict has produced a polarization of values. The single-parent family is at special risk. Go-between process enables the therapist to depolarize sharply conflicted values and reduce pathogenic relating. (Author)
International Nuclear Information System (INIS)
Zwan, B. J.; Colvill, E.; Booth, J.; O'Connor, D. J.; Keall, P.; Greer, P. B.
2016-01-01
Purpose: The added complexity of real-time adaptive multi-leaf collimator (MLC) tracking increases the likelihood of undetected MLC delivery errors. In this work we develop and test a system for real-time delivery verification and error detection for MLC tracking radiotherapy using an electronic portal imaging device (EPID). Methods: The delivery verification system relies on acquisition and real-time analysis of transit EPID image frames acquired at 8.41 fps. In-house software was developed to extract the MLC positions from each image frame. Three comparison metrics were used to verify the MLC positions in real-time: (1) field size, (2) field location and (3) field shape. The delivery verification system was tested for 8 VMAT MLC tracking deliveries (4 prostate and 4 lung) where real patient target motion was reproduced using a Hexamotion motion stage and a Calypso system. Sensitivity and detection delay were quantified for various types of MLC and system errors. Results: For both the prostate and lung test deliveries the MLC-defined field size was measured with an accuracy of 1.25 cm^2 (1 SD). The field location was measured with an accuracy of 0.6 mm and 0.8 mm (1 SD) for lung and prostate respectively. Field location errors (i.e. tracking in the wrong direction) with a magnitude of 3 mm were detected within 0.4 s of occurrence in the X direction and 0.8 s in the Y direction. Systematic MLC gap errors were detected as small as 3 mm. The method was not found to be sensitive to random MLC errors or individual MLC calibration errors up to 5 mm. Conclusion: EPID imaging may be used for independent real-time verification of MLC trajectories during MLC tracking deliveries. Thresholds have been determined for error detection and the system has been shown to be sensitive to a range of delivery errors.
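The three comparison metrics reduce to simple geometry on binary aperture masks extracted from EPID frames. A toy sketch follows; the pixel size, tolerances, shape score (intersection-over-union) and mask shapes are assumptions for illustration, not the study's calibration:

```python
import numpy as np

PIXEL_AREA_CM2 = 0.025   # hypothetical EPID pixel area

def aperture_metrics(mask):
    """Field size (cm^2) and centroid (px) of a binary aperture mask."""
    area = mask.sum() * PIXEL_AREA_CM2
    ys, xs = np.nonzero(mask)
    return area, (xs.mean(), ys.mean())

def check_frame(planned, measured, size_tol=1.25, loc_tol_px=3, shape_tol=0.9):
    """Verify one frame against plan: field size, location and shape."""
    a_p, c_p = aperture_metrics(planned)
    a_m, c_m = aperture_metrics(measured)
    size_ok = abs(a_p - a_m) <= size_tol
    loc_ok = (abs(c_p[0] - c_m[0]) <= loc_tol_px
              and abs(c_p[1] - c_m[1]) <= loc_tol_px)
    # Intersection-over-union as a simple shape-agreement score
    iou = np.logical_and(planned, measured).sum() / np.logical_or(planned, measured).sum()
    return size_ok and loc_ok and iou >= shape_tol
```

Running such a check on every frame as it arrives, and raising an alarm when any metric exceeds its threshold, is the essence of the real-time verification loop described above.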
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
Energy Technology Data Exchange (ETDEWEB)
Kim, K.S. [Samsung Techwin Co., Ltd., Seoul (Korea); Kim, D.Y. [Bucheon College, Bucheon (Korea); Kim, S.H. [University of Seoul, Seoul (Korea)
2002-05-01
In this paper, the implementation of a new AF (Automatic Focusing) system for a digital still camera is introduced. The proposed system operates in real time while adjusting focus after measuring the distance to an object using a passive sensor, which differs from typical methods. In addition, measurement errors were minimized by using empirically acquired data, and the optimal measuring time was obtained using EV (Exposure Value), which is calculated from the CCD luminance signal. Moreover, this system adopted an auxiliary light source for focusing in completely dark conditions, which are very hard for CCD image processing. Since this is an open-loop system adjusting focus immediately after the distance measurement, it guarantees real-time operation. The performance of this new AF system was verified by comparing the focusing value curve obtained from the AF experiment with the one from measurement by MF (Manual Focusing). In both cases, an edge detector was used for various objects and backgrounds. (author). 9 refs., 11 figs., 5 tabs.
Schumann, G.; di Baldassarre, G.; Alsdorf, D.; Bates, P. D.
2009-04-01
In February 2000, the Shuttle Radar Topography Mission (SRTM) measured the elevation of most of the Earth's surface with spatially continuous sampling and an absolute vertical accuracy better than 9 m. The vertical error has been shown to change with topographic complexity, being less important over flat terrain. This allows water surface slopes to be measured and associated discharge volumes to be estimated for open channels in large basins, such as the Amazon. Building on these capabilities, this paper demonstrates that near real-time coarse resolution radar imagery of a recent flood event on a 98 km reach of the River Po (Northern Italy) combined with SRTM terrain height data leads to a water slope remarkably similar to that derived by combining the radar image with highly accurate airborne laser altimetry. Moreover, it is shown that this space-borne flood wave approximation compares well to a hydraulic model and thus allows the performance of the latter, calibrated on a previous event, to be assessed when applied to an event of different magnitude in near real-time. These results are not only of great importance to real-time flood management and flood forecasting but also support the upcoming Surface Water and Ocean Topography (SWOT) mission that will routinely provide water levels and slopes with higher precision around the globe.
Nair, S. P.; Righetti, R.
2015-05-01
Recent elastography techniques focus on imaging information on properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain-versus-time relationships for tissues undergoing creep compression are non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method, which we call Resimulation of Noise (RoN), to provide a measure of non-linear LSE parameter estimate reliability. RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
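The RoN idea, estimating the spread of a non-linear LSE estimate by re-fitting after resimulating noise around the fitted curve, can be sketched as follows; the single-exponential strain model and noise level are illustrative stand-ins for the article's poroelastic creep model:

```python
import numpy as np
from scipy.optimize import curve_fit

def creep_strain(t, a, tau):
    # Hypothetical single-exponential axial strain response (stand-in)
    return a * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
data = creep_strain(t, 1.0, 2.0) + rng.normal(0.0, 0.02, t.size)

# Step 1: ordinary non-linear LSE fit to the single measured realization
popt, _ = curve_fit(creep_strain, t, data, p0=(1.0, 1.0))
sigma = (data - creep_strain(t, *popt)).std(ddof=2)

# Step 2 (RoN): resimulate noise around the fitted curve, refit many
# times, and take the spread of refitted time constants as the
# reliability measure for this single experiment realization.
taus = []
for _ in range(200):
    synthetic = creep_strain(t, *popt) + rng.normal(0.0, sigma, t.size)
    p, _ = curve_fit(creep_strain, t, synthetic, p0=popt)
    taus.append(p[1])
tau_hat, tau_spread = popt[1], float(np.std(taus))
```

A small `tau_spread` indicates a reliable time-constant estimate; unlike a correlation coefficient, it directly quantifies the estimator's dispersion.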
A Residual Approach for Balanced Truncation Model Reduction (BTMR) of Compartmental Systems
Directory of Open Access Journals (Sweden)
William La Cruz
2014-05-01
Full Text Available This paper presents a residual approach to the square-root balanced truncation algorithm for model order reduction of continuous, linear and time-invariant compartmental systems. Specifically, the new approach uses a residual method to approximate the controllability and observability gramians, whose computation is an essential step of the square-root balanced truncation algorithm and requires a great computational cost. Numerical experiments are included to highlight the efficacy of the proposed approach.
Generalized Gaussian Error Calculus
Grabe, Michael
2010-01-01
For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...
Rose, Julian A. R.; Tong, Jenna R.; Allain, Damien J.; Mitchell, Cathryn N.
2011-01-01
Signals from Global Positioning System (GPS) satellites at the horizon or at low elevations are often excluded from a GPS solution because they experience considerable ionospheric delays and multipath effects. Their exclusion can degrade the overall satellite geometry for the calculations, resulting in greater errors; an effect known as the Dilution of Precision (DOP). In contrast, signals from high elevation satellites experience less ionospheric delays and multipath effects. The aim is to find a balance in the choice of elevation mask, to reduce the propagation delays and multipath whilst maintaining good satellite geometry, and to use tomography to correct for the ionosphere and thus improve single-frequency GPS timing accuracy. GPS data, collected from a global network of dual-frequency GPS receivers, have been used to produce four GPS timing solutions, each with a different ionospheric compensation technique. One solution uses a 4D tomographic algorithm, Multi-Instrument Data Analysis System (MIDAS), to compensate for the ionospheric delay. Maps of ionospheric electron density are produced and used to correct the single-frequency pseudorange observations. This method is compared to a dual-frequency solution and two other single-frequency solutions: one does not include any ionospheric compensation and the other uses the broadcast Klobuchar model. Data from the solar maximum year 2002 and October 2003 have been investigated to display results when the ionospheric delays are large and variable. The study focuses on Europe and results are produced for the chosen test site, VILL (Villafranca, Spain). The effects of excluding all of the GPS satellites below various elevation masks, ranging from 5° to 40°, on timing solutions for fixed (static) and mobile (moving) situations are presented. The greatest timing accuracies when using the fixed GPS receiver technique are obtained by using a 40° mask, rather than a 5° mask. The mobile GPS timing solutions are most
On truncations of the exact renormalization group
Morris, T R
1994-01-01
We investigate the Exact Renormalization Group (ERG) description of (Z_2 invariant) one-component scalar field theory, in the approximation in which all momentum dependence is discarded in the effective vertices. In this context we show how one can perform a systematic search for non-perturbative continuum limits without making any assumption about the form of the lagrangian. Concentrating on the non-perturbative three dimensional Wilson fixed point, we then show that the sequence of truncations n=2,3,..., obtained by expanding about the field φ=0 and discarding all powers φ^{2n+2} and higher, yields solutions that at first converge to the answer obtained without truncation, but then cease to further converge beyond a certain point. No completely reliable method exists to reject the many spurious solutions that are also found. These properties are explained in terms of the analytic behaviour of the untruncated solutions -- which we describe in some detail.
Truncated Wigner dynamics and conservation laws
Drummond, Peter D.; Opanchuk, Bogdan
2017-10-01
Ultracold Bose gases can be used to experimentally test many-body theory predictions. Here we point out that both exact conservation laws and dynamical invariants exist in the topical case of the one-dimensional Bose gas, and these provide an important validation of methods. We show that the first four quantum conservation laws are exactly conserved in the approximate truncated Wigner approach to many-body quantum dynamics. Center-of-mass position variance is also exactly calculable. This is nearly exact in the truncated Wigner approximation, apart from small terms that vanish as N^{-3/2} as N → ∞ with fixed momentum cutoff. Examples of this are calculated in experimentally relevant, mesoscopic cases.
No chiral truncation of quantum log gravity?
Andrade, Tomás; Marolf, Donald
2010-03-01
At the classical level, chiral gravity may be constructed as a consistent truncation of a larger theory called log gravity by requiring that left-moving charges vanish. In turn, log gravity is the limit of topologically massive gravity (TMG) at a special value of the coupling (the chiral point). We study the situation at the level of linearized quantum fields, focussing on a unitary quantization. While the TMG Hilbert space is continuous at the chiral point, the left-moving Virasoro generators become ill-defined and cannot be used to define a chiral truncation. In a sense, the left-moving asymptotic symmetries are spontaneously broken at the chiral point. In contrast, in a non-unitary quantization of TMG, both the Hilbert space and charges are continuous at the chiral point and define a unitary theory of chiral gravity at the linearized level.
Approximate truncation robust computed tomography—ATRACT
International Nuclear Information System (INIS)
Dennerlein, Frank; Maier, Andreas
2013-01-01
We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm targets the reconstruction of volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest, without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, an original formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed, and reconstruction results from both simulated projections and first clinical data sets are presented. (paper)
Lampón, Natalia; Tutor-Crespo, María J; Romero, Rafael; Tutor, José C
2011-07-01
Recently, the use of the truncated area under the curve from 0 to 2 h (AUC(0-2)) of mycophenolic acid (MPA) has been proposed for therapeutic monitoring in liver transplant recipients. The aim of our study was the evaluation of the clinical usefulness of truncated AUC(0-2) in kidney transplant patients. Plasma MPA was measured in samples taken before the morning dose of mycophenolate mofetil, and one-half and 2 h post-dose, completing 63 MPA concentration-time profiles from 40 adult kidney transplant recipients. The AUC from 0 to 12 h (AUC(0-12)) was calculated using the validated algorithm of Pawinski et al. The truncated AUC(0-2) was calculated using the linear trapezoidal rule, and extrapolated to 0-12 h (trapezoidal extrapolated AUC(0-12)) as previously described. Algorithm calculated and trapezoidal extrapolated AUC(0-12) values showed high correlation (r=0.995) and acceptable dispersion (ma68=0.71 μg·h/mL), median prediction error (6.6%) and median absolute prediction error (12.6%). The truncated AUC(0-2) had acceptable diagnostic efficiency (87%) in the classification of subtherapeutic, therapeutic or supratherapeutic values with respect to AUC(0-12). However, due to the high inter-individual variation of the drug absorption-rate, the dispersion between both pharmacokinetic variables (ma68=6.9 μg·h/mL) was unacceptable. The substantial dispersion between truncated AUC(0-2) and AUC(0-12) values may be a serious objection for the routine use of MPA AUC(0-2) in clinical practice.
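The truncated AUC(0-2) in the study above is computed with the linear trapezoidal rule over the three sampling times (pre-dose, 0.5 h and 2 h post-dose). A minimal sketch with hypothetical MPA concentrations (the Pawinski algorithm coefficients are not reproduced here):

```python
def trapezoid_auc(times_h, conc):
    """Linear trapezoidal rule: sum of 0.5*(c1+c2)*(t2-t1) over segments."""
    return sum(0.5 * (c1 + c2) * (t2 - t1)
               for (t1, c1), (t2, c2) in zip(zip(times_h, conc),
                                             zip(times_h[1:], conc[1:])))

# Hypothetical MPA profile (ug/mL) at 0, 0.5 and 2 h post-dose.
times = [0.0, 0.5, 2.0]
mpa = [2.0, 10.0, 5.0]

auc_0_2 = trapezoid_auc(times, mpa)  # truncated AUC(0-2), ug*h/mL
print(auc_0_2)  # 0.5*(2+10)*0.5 + 0.5*(10+5)*1.5 = 3.0 + 11.25 = 14.25
```

The study's caveat applies directly to this sketch: with only two short post-dose segments, inter-individual variation in absorption rate between 2 h and 12 h is invisible to AUC(0-2), which is why its extrapolation to AUC(0-12) disperses.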
Truncated Dual-Cap Nucleation Site Development
Matson, Douglas M.; Sander, Paul J.
2012-01-01
During heterogeneous nucleation within a metastable mushy-zone, several geometries for nucleation site development must be considered. Traditional spherical dual-cap and crevice models are compared to a truncated dual cap to determine the activation energy and critical cluster growth kinetics in ternary Fe-Cr-Ni steel alloys. The activation energy results indicate that nucleation is more probable at grain boundaries within the solid than at the solid-liquid interface.
On the Truncated Pareto Distribution with applications
Zaninetti, Lorenzo; Ferraro, Mario
2008-01-01
The Pareto probability distribution is widely applied in different fields such as finance, physics, hydrology, geology and astronomy. This note deals with an application of the Pareto distribution to astrophysics, and more precisely to the statistical analysis of the masses of stars and the diameters of asteroids. In particular, a comparison between the usual Pareto distribution and its truncated version is presented. Finally a possible physical mechanism that produces Pareto tails for the distributio...
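The truncated Pareto mentioned above restricts the usual power law to a finite range [lo, hi], which removes the divergent tail. A minimal sketch (parameter values are hypothetical) of inverse-CDF sampling from it:

```python
import random

def truncated_pareto_sample(alpha, lo, hi, u):
    """Inverse-CDF draw from a Pareto law truncated to [lo, hi].

    CDF: F(x) = (1 - (lo/x)**alpha) / (1 - (lo/hi)**alpha),
    solved for x at F(x) = u, with u uniform on [0, 1).
    """
    c = 1.0 - (lo / hi) ** alpha
    return lo * (1.0 - u * c) ** (-1.0 / alpha)

random.seed(0)
samples = [truncated_pareto_sample(1.5, 1.0, 100.0, random.random())
           for _ in range(10000)]
print(min(samples), max(samples))  # every draw falls inside [1, 100]
```

Unlike the untruncated Pareto, all moments of the truncated version exist, which is what makes it usable for bounded populations such as asteroid diameters.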
Directory of Open Access Journals (Sweden)
Jia Ning
2017-11-01
The uncertainty of wind power results in wind power forecasting errors (WPFE) which lead to difficulties in formulating dispatching strategies to maintain the power balance. Demand response (DR) is a promising tool to balance power by alleviating the impact of WPFE. This paper offers a control method of combining DR and automatic generation control (AGC) units to smooth the system’s imbalance, considering the real-time DR potential (DRP) and security constraints. A schematic diagram is proposed from the perspective of a dispatching center that manages smart appliances including air conditioner (AC), water heater (WH), electric vehicle (EV) loads, and AGC units to maximize the wind accommodation. The presented model schedules the AC, WH, and EV loads without compromising the consumers’ comfort preferences. Meanwhile, the ramp constraint of generators and power flow transmission constraint are considered to guarantee the safety and stability of the power system. To demonstrate the performance of the proposed approach, simulations are performed in an IEEE 24-node system. The results indicate that considerable benefits can be realized by coordinating the DR and AGC units to mitigate the WPFE impacts.
International Nuclear Information System (INIS)
Gregory, R.B.
1991-01-01
We have recently described modifications to the program CONTIN for the solution of Fredholm integral equations with convoluted kernels of the type that occur in the analysis of positron annihilation lifetime data. In this article, modifications to the program to correct for source terms in the sample and reference decay curves and for shifts in the position of the zero-time channel of the sample and reference data are described. Unwanted source components, expressed as a discrete sum of exponentials, may be removed from both the sample and reference data by modification of the sample data alone, without the need for direct knowledge of the instrument resolution function. Shifts in the position of the zero-time channel of up to half the channel width of the multichannel analyzer can be corrected. Analyses of computer-simulated test data indicate that the quality of the reconstructed annihilation rate probability density functions is improved by employing a reference material with a short lifetime, and indicate that reference materials which generate free positrons by quenching positronium formation (i.e. strong oxidizing agents) have lifetimes that are too long (400-450 ps) to provide reliable estimates of the lifetime parameters for the short-lived components with the methods described here. Well-annealed single crystals of metals with lifetimes less than 200 ps, such as molybdenum (123 ps) and aluminium (166 ps), do not introduce significant errors in estimates of the lifetime parameters and are to be preferred as reference materials. The performance of our modified version of CONTIN is illustrated by application to positron annihilation in polytetrafluoroethylene. (orig.)
Chaos and noise in a truncated Toda potential
International Nuclear Information System (INIS)
Habib, S.; Kandrup, H.E.; Mahon, M.E.
1996-01-01
Results are reported from a numerical investigation of orbits in a truncated Toda potential that is perturbed by weak friction and noise. Aside from the perturbations displaying a simple scaling in the amplitude of the friction and noise, it is found that even very weak friction and noise can induce an extrinsic diffusion through cantori on a time scale that is much shorter than that associated with intrinsic diffusion in the unperturbed system. The results have applications in galactic dynamics and in the formation of a beam halo in charged particle beams. copyright 1996 The American Physical Society
Directory of Open Access Journals (Sweden)
P.A.V.B. Swamy
2017-02-01
Using the net effect of all relevant regressors omitted from a model to form its error term is incorrect because the coefficients and error term of such a model are non-unique. Non-unique coefficients cannot possess consistent estimators. Uniqueness can be achieved if, instead, one uses certain “sufficient sets” of (relevant) regressors omitted from each model to represent the error term. In this case, the unique coefficient on any non-constant regressor takes the form of the sum of a bias-free component and omitted-regressor biases. Measurement-error bias can also be incorporated into this sum. We show that if our procedures are followed, accurate estimation of bias-free components is possible.
Sirenko, Kostyantyn
2013-07-01
Exact absorbing and periodic boundary conditions allow grating problems' infinite physical domains to be truncated without introducing any errors. This work presents exact absorbing boundary conditions for 3D diffraction gratings and describes their discretization within a high-order time-domain discontinuous Galerkin finite element method (TD-DG-FEM). The error introduced by the boundary condition discretization matches that of the TD-DG-FEM; this results in an optimal solver in terms of accuracy and computation time. Numerical results demonstrate the superiority of this solver over TD-DG-FEM with perfectly matched layer (PML)-based domain truncation. © 2013 IEEE.
Fanchon, Louise M; Apte, Adytia; Schmidtlein, C Ross; Yorke, Ellen; Hu, Yu-Chi; Dogan, Snjezana; Hatt, Mathieu; Visvikis, Dimitris; Humm, John L; Solomon, Stephen B; Kirov, Assen S
2017-10-01
The purpose of this study is to quantify tumor displacement during real-time PET/CT guided biopsy and to investigate correlations between tumor displacement and false-negative results. 19 patients who underwent real-time 18F-FDG PET-guided biopsy and were found positive for malignancy were included in this study under IRB approval. PET/CT images were acquired for all patients within minutes prior to biopsy to visualize the FDG-avid region and plan the needle insertion. The biopsy needle was inserted and a post-insertion CT scan was acquired. The two CT scans acquired before and after needle insertion were registered using a deformable image registration (DIR) algorithm. The DIR deformation vector field (DVF) was used to calculate the mean displacement between the pre-insertion and post-insertion CT scans for a region around the tip of the biopsy needle. For 12 patients one biopsy core from each was tracked during histopathological testing to investigate correlations of the mean displacement between the two CT scans and false-negative or true-positive biopsy results. For 11 patients, two PET scans were acquired: one at the beginning of the procedure, pre-needle insertion, and an additional one with the needle in place. The pre-insertion PET scan was corrected for intraprocedural motion by applying the DVF. The corrected PET was compared with the post-needle insertion PET to validate the correction method. The mean displacement of tissue around the needle between the pre-biopsy CT and the post-needle insertion CT was 5.1 mm (min = 1.1 mm, max = 10.9 mm and SD = 3.0 mm). For mean displacements larger than 7.2 mm, the biopsy cores gave false-negative results. Correcting pre-biopsy PET using the DVF improved the PET/CT registration in 8 of 11 cases. The DVF obtained from DIR of the CT scans can be used for evaluation and correction of the error in needle placement with respect to the FDG-avid area. Misregistration between the pre-biopsy PET and the CT acquired with the
Hoede, C.; Li, Z.
2001-01-01
In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
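The comparison of a received word with code words that the abstract describes is minimum-distance decoding: each candidate code word induces an error vector, and the decoder prefers the one with the fewest errors. A minimal sketch with a hypothetical length-5 repetition code:

```python
def hamming(a, b):
    """Number of positions in which two (0,1)-vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(received, codebook):
    """Minimum-distance decoding: choose the code word whose error vector
    (the XOR with the received word) has the fewest ones."""
    best = min(codebook, key=lambda c: hamming(received, c))
    error = tuple(r ^ c for r, c in zip(received, best))
    return best, error

# Hypothetical length-5 binary repetition code.
codebook = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)]
word, err = decode((0, 1, 0, 0, 0), codebook)
print(word, err)  # decodes to the all-zero word; error vector flags bit 1
```

Ties between code words at equal distance are resolved arbitrarily here (by `min`), which is exactly the ambiguity in "deciding on the original code word" that motivates studying the set of error vectors.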
Phase retrieval via incremental truncated amplitude flow algorithm
Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao
2017-10-01
This paper considers the phase retrieval problem of recovering an unknown signal from given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF), which combines the ITWF algorithm and the TAF algorithm, is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF respectively, and improves the performance in the gradient stage by applying the incremental method proposed in ITWF to the loop stage of TAF. Moreover, the original sampling vector and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verified the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it can obtain a higher success rate and faster convergence speed compared with other algorithms. Especially, for noiseless random Gaussian signals, ITAF can recover any real-valued signal accurately from magnitude measurements whose number is about 2.5 times the signal length, which is close to the theoretical limit (about 2 times the signal length). And it usually converges to the optimal solution within 20 iterations, far fewer than state-of-the-art algorithms require.
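The gradient stage underlying the family of methods above can be sketched with plain (untruncated, non-incremental) amplitude flow, which minimizes f(x) = (1/2m) Σ (|aᵢ·x| - yᵢ)². This is only the baseline the paper builds on, not ITAF itself, and the warm start below stands in for the spectral/truncated initialization:

```python
import numpy as np

def amplitude_flow(A, y, x0, mu=0.2, iters=500):
    """Vanilla amplitude-flow gradient descent (real-valued case).
    ITAF adds truncation and incremental updates on top of this loop."""
    m = len(y)
    x = x0.copy()
    for _ in range(iters):
        z = A @ x
        x -= mu * (A.T @ (z - y * np.sign(z))) / m
    return x

rng = np.random.default_rng(1)
n, m = 20, 50                       # ~2.5x measurements, as in the abstract
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = np.abs(A @ x_true)              # magnitude-only measurements

x0 = x_true + 0.3 * rng.standard_normal(n)   # warm start in lieu of spectral init
x_hat = amplitude_flow(A, y, x0)
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
print(err / np.linalg.norm(x_true))
```

Note the global sign ambiguity: magnitude measurements cannot distinguish x from -x, so recovery error is measured up to sign.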
Energy Technology Data Exchange (ETDEWEB)
Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))
1990-01-01
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
Bouchard, Amy E; Corriveau, Hélène; Milot, Marie-Hélène
2015-01-01
With age, a decline in the temporal aspect of movement is observed such as a longer movement execution time and a decreased timing accuracy. Robotic training can represent an interesting approach to help improve movement timing among the elderly. Two types of robotic training-haptic guidance (HG; demonstrating the correct movement for a better movement planning and improved execution of movement) and error amplification (EA; exaggerating movement errors to have a more rapid and complete learning) have been positively used in young healthy subjects to boost timing accuracy. For healthy seniors, only HG training has been used so far where significant and positive timing gains have been obtained. The goal of the study was to evaluate and compare the impact of both HG and EA robotic trainings on the improvement of seniors' movement timing. Thirty-two healthy seniors (mean age 68 ± 4 years) learned to play a pinball-like game by triggering a one-degree-of-freedom hand robot at the proper time to make a flipper move and direct a falling ball toward a randomly positioned target. During HG and EA robotic trainings, the subjects' timing errors were decreased and increased, respectively, based on the subjects' timing errors in initiating a movement. Results showed that only HG training benefited learning, but the improvement did not generalize to untrained targets. Also, age had no influence on the efficacy of HG robotic training, meaning that the oldest subjects did not benefit more from HG training than the younger senior subjects. Using HG to teach the correct timing of movement seems to be a good strategy to improve motor learning for the elderly as for younger people. However, more studies are needed to assess the long-term impact of HG robotic training on improvement in movement timing.
International Nuclear Information System (INIS)
McDonald, D.W.
1977-01-01
Thermocouples with ferromagnetic thermoelements (iron, Alumel, Nisil) are used extensively in industry. We have observed the generation of voltage spikes within ferromagnetic wires when the wires are placed in an alternating magnetic field. This effect has implications for thermocouple thermometry, where it was first observed. For example, the voltage generated by this phenomenon will contaminate the thermocouple thermal emf, resulting in temperature measurement error.
Sokolenko, Stanislav; Aucoin, Marc G
2015-09-04
The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. The presented algorithm was able to identify systematic errors as small
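The core idea above, that a dilution affects every metabolite in a sample at once, can be sketched as follows. The smoother, threshold and data are all hypothetical stand-ins (a crude moving average rather than the paper's nonparametric fit), but the flagging rule is the one described: flag timepoints where the median percent deviation across all trends exceeds a threshold:

```python
import statistics

def moving_average(values, w=3):
    """Crude smoother standing in for a proper nonparametric fit."""
    half = w // 2
    return [statistics.mean(values[max(0, i - half):i + half + 1])
            for i in range(len(values))]

def systematic_error_timepoints(trends, threshold_pct=8.0):
    """Flag timepoints where the median percent deviation from the smoothed
    fit, taken across ALL metabolite trends at once, exceeds the threshold.
    A deviation shared by every metabolite suggests a sample-level
    (dilution) effect rather than random noise."""
    deviations = []
    for trend in trends:
        fit = moving_average(trend)
        deviations.append([100.0 * (obs - f) / f for obs, f in zip(trend, fit)])
    flagged = []
    for t in range(len(trends[0])):
        med = statistics.median(dev[t] for dev in deviations)
        if abs(med) > threshold_pct:
            flagged.append(t)
    return flagged

# Hypothetical trends: three metabolites, sample 3 diluted by 15% in all of them.
base = [[10, 12, 14, 16, 18, 20], [5, 4.5, 4, 3.5, 3, 2.5], [8, 8, 8, 8, 8, 8]]
diluted = [[v * (0.85 if i == 3 else 1.0) for i, v in enumerate(tr)] for tr in base]
print(systematic_error_timepoints(diluted))  # → [3]
```

Because the deviation is computed per metabolite but the decision is taken on the median across metabolites, an error in a single trend (e.g. one mis-quantified peak) does not trigger a flag.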
Lucke, Robert L.; Sirlin, Samuel W.; San Martin, A. M.
1992-01-01
For most imaging sensors, a constant (dc) pointing error is unimportant (unless large), but time-dependent (ac) errors degrade performance by either distorting or smearing the image. When properly quantified, the separation of the root-mean-square effects of random line-of-sight motions into dc and ac components can be used to obtain the minimum necessary line-of-sight stability specifications. The relation between stability requirements and sensor resolution is discussed, with a view to improving communication between the data analyst and the control systems engineer.
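The dc/ac separation described above rests on the identity that total mean-square motion splits exactly into a squared bias (dc) plus a jitter variance (ac). A minimal sketch on a hypothetical line-of-sight time series:

```python
import math
import statistics

def dc_ac_split(samples):
    """Split rms line-of-sight motion into dc (bias) and ac (jitter) parts,
    so that rms_total**2 == dc**2 + ac_rms**2."""
    dc = statistics.fmean(samples)
    ac_rms = math.sqrt(statistics.fmean((s - dc) ** 2 for s in samples))
    rms_total = math.sqrt(statistics.fmean(s ** 2 for s in samples))
    return dc, ac_rms, rms_total

# Hypothetical pointing-error samples (microradians).
los = [3.0, 4.0, 5.0, 4.0, 3.0, 5.0]
dc, ac, total = dc_ac_split(los)
print(dc, ac, total)
```

Only the ac part smears or distorts the image, so a stability specification written on `ac_rms` alone can be looser than one written on `rms_total`, which is the practical payoff of the decomposition.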
Firewalls as artefacts of inconsistent truncations of quantum geometries
Energy Technology Data Exchange (ETDEWEB)
Germani, Cristiano [Max-Planck-Institut fuer Physik, Muenchen (Germany); Arnold Sommerfeld Center, Ludwig-Maximilians-University, Muenchen (Germany); Institut de Ciencies del Cosmos, Universitat de Barcelona (Spain); Sarkar, Debajyoti [Max-Planck-Institut fuer Physik, Muenchen (Germany); Arnold Sommerfeld Center, Ludwig-Maximilians-University, Muenchen (Germany)
2016-01-15
In this paper we argue that a firewall is simply a manifestation of an inconsistent truncation of non-perturbative effects that unitarize the semiclassical black hole. Namely, we show that a naive truncation of quantum corrections to the Hawking spectrum at order O(e^{-S}) inexorably leads to a "localised" divergent energy density near the black hole horizon. Nevertheless, in the same approximation, a distant observer only sees a discretised spectrum and concludes that unitarity is achieved by O(e^{-S}) effects. This is due to the fact that, instead, the correct quantum corrections to the Hawking spectrum go like O(g^{tt} e^{-S}). Therefore, while at a distance far away from the horizon, where g^{tt} ∼ 1, quantum corrections are perturbative, they do diverge close to the horizon, where g^{tt} → ∞. Nevertheless, these "corrections" nicely re-sum so that correlation functions are smooth at the would-be black hole horizon. Thus, we conclude that the appearance of firewalls is just a signal of the breaking of the semiclassical approximation at the Page time, even for large black holes. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Firewalls as artefacts of inconsistent truncations of quantum geometries
Germani, Cristiano; Sarkar, Debajyoti
2016-01-01
In this paper we argue that a firewall is simply a manifestation of an inconsistent truncation of non-perturbative effects that unitarize the semiclassical black hole. Namely, we show that a naive truncation of quantum corrections to the Hawking spectrum at order ${\cal O}(e^{-S})$ inexorably leads to a "localised" divergent energy density near the black hole horizon. Nevertheless, in the same approximation, a distant observer only sees a discretised spectrum and concludes that unitarity is achieved by ${\cal O}(e^{-S})$ effects. This is due to the fact that instead, the correct quantum corrections to the Hawking spectrum go like ${\cal O}(g^{tt} e^{-S})$. Therefore, while at a distance far away from the horizon, where $g^{tt}\approx 1$, quantum corrections {\it are} perturbative, they {\it do} diverge close to the horizon, where $g^{tt}\rightarrow \infty$. Nevertheless, these "corrections" nicely re-sum so that correlation functions are smooth at the would-be black hole horizon. Thus, we conclude that the appearance of firewalls is just a signal of the breaking of the semiclassical approximation at the Page time, even for large black holes.
Hamiltonian truncation approach to quenches in the Ising field theory
Directory of Open Access Journals (Sweden)
T. Rakovszky
2016-10-01
In contrast to lattice systems, where powerful numerical techniques such as matrix product state based methods are available to study non-equilibrium dynamics, the non-equilibrium behaviour of continuum systems is much harder to simulate. We demonstrate here that Hamiltonian truncation methods can be efficiently applied to this problem by studying the quantum quench dynamics of the 1+1 dimensional Ising field theory using a truncated free fermionic space approach. After benchmarking the method with integrable quenches corresponding to changing the mass in a free Majorana fermion field theory, we study the effect of an integrability-breaking perturbation by the longitudinal magnetic field. In both the ferromagnetic and paramagnetic phases of the model we find persistent oscillations with frequencies set by the low-lying particle excitations, not only for small but even for moderate-size quenches. In the ferromagnetic phase these particles are the various non-perturbative confined bound states of the domain wall excitations, while in the paramagnetic phase the single magnon excitation governs the dynamics, allowing us to capture the time evolution of the magnetisation using a combination of known results from perturbation theory and form factor based methods. We point out that the dominance of low-lying excitations allows for the numerical or experimental determination of the mass spectra through the study of the quench dynamics.
DEFF Research Database (Denmark)
Vilsen, Søren B.; Tvedebrink, Torben; Mogensen, Helle Smidt
2015-01-01
We present a model fitting the distribution of non-systematic errors in STR second generation sequencing (SGS) analysis. The model fits the distribution of non-systematic errors, i.e. the noise, using a one-inflated, zero-truncated negative binomial model. The model is a two component model...
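The one-inflated, zero-truncated negative binomial named above can be written down directly: zero-truncation rescales the negative binomial to k ≥ 1, and one-inflation mixes in extra probability mass at k = 1. A minimal sketch of the pmf (the parameter values are hypothetical, not fitted values from the paper):

```python
from math import comb

def nb_pmf(k, r, p):
    """Negative binomial: probability of k failures before r successes,
    success probability p."""
    return comb(k + r - 1, k) * (p ** r) * ((1 - p) ** k)

def oiztnb_pmf(k, r, p, omega):
    """One-inflated, zero-truncated negative binomial on k = 1, 2, ...
    Zero-truncation divides by 1 - P(0); one-inflation adds weight omega
    at k = 1 (single spurious reads dominate the noise)."""
    if k < 1:
        return 0.0
    zt = nb_pmf(k, r, p) / (1.0 - nb_pmf(0, r, p))
    return omega * (k == 1) + (1.0 - omega) * zt

# Hypothetical parameters; the pmf should sum to 1 over k >= 1.
total = sum(oiztnb_pmf(k, r=2, p=0.6, omega=0.3) for k in range(1, 200))
print(total)
```

The construction guarantees normalization by design: the zero-truncated part sums to 1 over k ≥ 1, so mixing it with a point mass at 1 using weights (1 - ω, ω) still sums to 1.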
Yang, Yuli; Ma, Hao; Aïssa, Sonia
2012-01-01
In addressing the issue of taking full advantage of the shared spectrum under imposed limitations in a cognitive radio (CR) network, we exploit a cross-layer design for the communications of secondary users (SUs), which combines adaptive modulation and coding (AMC) at the physical layer with truncated automatic repeat request (ARQ) protocol at the data link layer. To achieve high spectral efficiency (SE) while maintaining a target packet loss probability (PLP), switching among different transmission modes is performed to match the time-varying propagation conditions pertaining to the secondary link. Herein, by minimizing the SU's packet error rate (PER) with each transmission mode subject to the spectrum-sharing constraints, we obtain the optimal power allocation at the secondary transmitter (ST) and then derive the probability density function (pdf) of the received SNR at the secondary receiver (SR). Based on these statistics, the SU's packet loss rate and average SE are obtained in closed form, considering transmissions over block-fading channels with different distributions. Our results quantify the relation between the performance of a secondary link exploiting the cross-layer-designed adaptive transmission and the interference inflicted on the primary user (PU) in CR networks. © 1967-2012 IEEE.
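A standard simplification behind cross-layer designs like the one above: with truncated ARQ allowing at most N retransmissions, a packet is lost only if every attempt fails, so PLP = PER^(N+1), and AMC picks the highest-rate mode whose PLP meets the target. The mode table below is hypothetical, not from the paper:

```python
def packet_loss_prob(per, max_retx):
    """Truncated ARQ: a packet is lost only if the initial transmission and
    all max_retx retransmissions fail, so PLP = PER ** (max_retx + 1)."""
    return per ** (max_retx + 1)

def select_mode(modes, target_plp, max_retx):
    """Pick the highest-rate AMC mode whose PLP meets the target."""
    feasible = [m for m in modes
                if packet_loss_prob(m["per"], max_retx) <= target_plp]
    return max(feasible, key=lambda m: m["rate"]) if feasible else None

# Hypothetical AMC modes: rate in bits/symbol, PER at the current SNR.
modes = [
    {"name": "BPSK-1/2",  "rate": 0.5, "per": 1e-3},
    {"name": "QPSK-3/4",  "rate": 1.5, "per": 1e-2},
    {"name": "16QAM-3/4", "rate": 3.0, "per": 2e-1},
]
chosen = select_mode(modes, target_plp=1e-3, max_retx=1)
print(chosen["name"])  # QPSK-3/4: fastest mode whose PER**2 still meets 1e-3
```

This shows why the link layer relaxes the physical layer: each allowed retransmission lets AMC tolerate a higher per-attempt PER, and hence a higher-rate mode, without violating the target PLP.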
Directory of Open Access Journals (Sweden)
Carmen Tabernero
2014-05-01
The current economic crisis is triggering a new scenario of uncertainty, which is affecting the organizational behavior of individuals and working teams. In contexts of uncertainty, organizational performance suffers a significant decline: workers are faced with the perceived threat of job loss, individuals distrust their organization and perceive that they must compete with their peers. This paper analyzes the effect of uncertainty on both the performance and the affective states of workers, as well as the cognitive, affective and personality strategies (goals and error orientation) used to cope with uncertainty as either learning opportunities or situations to be avoided. Moreover, this paper explores gender differences both in coping styles in situations of uncertainty and in the results of a training program based on error affect inoculation in which positive emotional responses were emphasized. Finally, we discuss the relevance of generating practices and experiences of team cooperation that build trust and promote collective efficacy in work teams.
International Nuclear Information System (INIS)
Chung, Dae Wook; Shin, Won Ky; You, Young Woo; Yang, Hui Chang
1998-01-01
In most cases, the surveillance test intervals (STIs), allowed outage times (AOTs) and testing strategies of safety components in a nuclear power plant are prescribed in the plant technical specifications. In general, it is required that a standby safety system be redundant (i.e., composed of multiple components) and that these components be tested under either a staggered or a sequential test strategy. In this study, a linear model is presented to incorporate the effects of test-related human errors into the evaluation of unavailability. The average unavailabilities of 1/4 and 2/4 redundant systems are computed considering human error and testing strategy. The adverse effects of testing on system unavailability, such as component wear and test-induced transients, have been modelled. The final outcome of this study is an optimized human error domain, obtained from a three-dimensional human error sensitivity analysis by selecting finely classified segments. The results of the sensitivity analysis show that the STI and AOT can be optimized provided the human error probability is maintained within an allowable range. (authors)
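The kind of unavailability evaluation discussed above can be illustrated with a simple numerical model. This is a hedged sketch, not the paper's linear model: it lumps the human error probability into a per-train unavailability term, assumes independent trains, and uses hypothetical failure rates and intervals.

```python
from math import comb

def train_unavailability(lam, sti_hours, q_human, test_downtime=0.0):
    """Average single-train unavailability over a surveillance test interval (STI):
    undetected random failures contribute lam*STI/2 on average, a test-related
    human error q_human stays undetected until the next test, and test_downtime
    hours per interval are spent with the train out of service."""
    return lam * sti_hours / 2.0 + q_human + test_downtime / sti_hours

def system_unavailability(u, n=4, k=1):
    """k-out-of-n:G system (k trains needed): unavailable when at least n-k+1
    trains are simultaneously down. Independence between trains is an
    assumption of this sketch."""
    return sum(comb(n, j) * u**j * (1 - u)**(n - j) for j in range(n - k + 1, n + 1))
```

With hypothetical numbers (lam = 1e-5/h, STI = 720 h, q_human = 1e-3), the 2/4 success criterion yields a noticeably larger unavailability than 1/4, and raising q_human dominates the per-train term, which is the qualitative effect the sensitivity analysis explores.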
International Nuclear Information System (INIS)
Knuefer; Lindauer
1980-01-01
Moreover, at spectacular events a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study, in particular, show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)
Joint survival probability via truncated invariant copula
International Nuclear Information System (INIS)
Kim, Jeong-Hoon; Ma, Yong-Ki; Park, Chan Yeol
2016-01-01
Highlights: • We have studied an issue of dependence structure between default intensities. • We use a multivariate shot noise intensity process, where jumps occur simultaneously and their sizes are correlated. • We obtain the joint survival probability of the integrated intensities by using a copula. • We apply our theoretical result to pricing basket default swap spread. - Abstract: Given an intensity-based credit risk model, this paper studies dependence structure between default intensities. To model this structure, we use a multivariate shot noise intensity process, where jumps occur simultaneously and their sizes are correlated. Through very lengthy algebra, we obtain explicitly the joint survival probability of the integrated intensities by using the truncated invariant Farlie–Gumbel–Morgenstern copula with exponential marginal distributions. We also apply our theoretical result to pricing basket default swap spreads. This result can provide a useful guide for credit risk management.
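The copula construction underlying the abstract can be sketched as follows. This uses the plain (untruncated) Farlie-Gumbel-Morgenstern copula with exponential marginal survival functions; the paper's truncated invariant FGM copula and the shot-noise intensity dynamics add structure beyond this minimal illustration, and the rates used are hypothetical.

```python
import math

def fgm_copula(u, v, theta):
    """Farlie-Gumbel-Morgenstern copula; valid for -1 <= theta <= 1."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def joint_survival(t1, t2, lam1, lam2, theta):
    """P(T1 > t1, T2 > t2) obtained by coupling exponential marginal survival
    functions exp(-lam*t) through the FGM copula. theta = 0 recovers
    independent defaults; theta > 0 models positively dependent intensities."""
    s1 = math.exp(-lam1 * t1)
    s2 = math.exp(-lam2 * t2)
    return fgm_copula(s1, s2, theta)
```

A joint survival probability of this form is exactly the ingredient needed to price a basket default swap, since the spread depends on the probability that several names survive past each payment date.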
Shell model truncation schemes for rotational nuclei
International Nuclear Information System (INIS)
Halse, P.; Jaqua, L.; Barrett, B.R.
1990-01-01
The suitability of the pair condensate approach for rotational states is studied in a single j = 17/2 shell of identical nucleons interacting through a quadrupole-quadrupole hamiltonian. The ground band and a K = 2 excited band are both studied in detail. A direct comparison of the exact states with those constituting the SD and SDG subspaces is used to identify the important degrees of freedom for these levels. The range of pairs necessary for a good description is found to be highly state dependent; S and D pairs are the major constituents of the low-spin ground band levels, while G pairs are needed for those in the γ-band. Energy spectra are obtained for each truncated subspace. SDG pairs allow accurate reproduction of the binding energy and K = 2 excitation energy, but still give a moment of inertia which is about 30% too small, even for the lowest levels.
Entanglement entropy from the truncated conformal space
Directory of Open Access Journals (Sweden)
T. Palmai
2016-08-01
Full Text Available A new numerical approach to entanglement entropies of the Rényi type is proposed for one-dimensional quantum field theories. The method extends the truncated conformal spectrum approach and we will demonstrate that it is especially suited to study the crossover from massless to massive behavior when the subsystem size is comparable to the correlation length. We apply it to different deformations of massless free fermions, corresponding to the scaling limit of the Ising model in transverse and longitudinal fields. For massive free fermions the exactly known crossover function is reproduced already in very small system sizes. The new method treats ground states and excited states on the same footing, and the applicability for excited states is illustrated by reproducing Rényi entropies of low-lying states in the transverse field Ising model.
Exact error estimation for solutions of nuclide chain equations
International Nuclear Information System (INIS)
Tachihara, Hidekazu; Sekimoto, Hiroshi
1999-01-01
The exact solution of nuclide chain equations to an arbitrary number of significant figures is obtained for a linear chain by employing the Bateman method in multiple-precision arithmetic. An exact error estimation of the major calculation methods for nuclide chain equations is performed by using this exact solution as a standard. The Bateman, finite difference, Runge-Kutta and matrix exponential methods are investigated. The present study confirms the following. The original Bateman method has very low accuracy in some cases because of large-scale cancellations. The revised Bateman method by Siewers reduces the occurrence of cancellations and thereby shows high accuracy. In the time-stepping methods, i.e., the finite difference and Runge-Kutta methods, the solutions are affected mainly by truncation errors in the early decay time and afterward by round-off errors. Even though a variable time mesh is employed to suppress the accumulation of round-off errors, it appears to be impractical. Judging from these estimations, the matrix exponential method is the best among all the methods except the Bateman method, whose calculation process for a linear chain is not identical with that for a general one. (author)
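The Bateman solution discussed above can be sketched in a few lines. This version uses ordinary double precision rather than the multiple-precision arithmetic the paper employs, so it exhibits exactly the cancellation hazard the abstract describes when decay constants are nearly equal; the decay constants below are hypothetical.

```python
import math

def bateman(t, lambdas, n1_0=1.0):
    """Amount of the last nuclide of a linear decay chain at time t, for
    pairwise-distinct decay constants `lambdas`, starting from n1_0 atoms of
    the first nuclide. The partial-fraction denominators (lambda_j - lambda_i)
    are the source of the large-scale cancellations the paper analyzes."""
    n = len(lambdas)
    prefactor = n1_0 * math.prod(lambdas[:-1])
    total = 0.0
    for i in range(n):
        denom = math.prod(lambdas[j] - lambdas[i] for j in range(n) if j != i)
        total += math.exp(-lambdas[i] * t) / denom
    return prefactor * total
```

For a two-member chain this reduces to the familiar closed form N2(t) = λ1/(λ2 − λ1)·(exp(−λ1·t) − exp(−λ2·t)), which provides a quick sanity check.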
Real-time high-resolution PC-based system for measurement of errors on compact disks
Tehranchi, Babak; Howe, Dennis G.
1994-10-01
Hardware and software utilities are developed to directly monitor the Eight-to-Fourteen Modulation (EFM) demodulated data bytes at the input of a CD player's Cross-Interleaved Reed-Solomon Code (CIRC) block decoder. The hardware is capable of identifying erroneous data with single-byte resolution in the serial data stream read from a Compact Disc by a CDD 461 Philips CD-ROM drive. In addition, the system produces graphical maps that show the physical location of the measured errors on the entire disc, or, via a zooming and panning feature, on user-selectable local disc regions.
Adaptive bit plane quadtree-based block truncation coding for image compression
Li, Shenda; Wang, Jin; Zhu, Qing
2018-04-01
Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low-bit-rate compression, at the cost of lower decoded image quality, especially for images with rich texture. To solve this problem, this paper proposes a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on the MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
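The AMBTC quantizer that the adaptive bit-plane step builds on is simple enough to sketch: each block is reduced to a one-bit-per-pixel bitmap plus two reconstruction levels chosen to preserve the block mean and first absolute central moment. The pixel values below are illustrative.

```python
def ambtc_block(block):
    """Absolute moment BTC of one image block (flat list of pixel values):
    pixels at or above the block mean map to the 'high' level, the rest to the
    'low' level, with each level set to the mean of its own group so that the
    block mean is preserved. Returns (low, high, bitmap)."""
    n = len(block)
    mean = sum(block) / n
    highs = [p for p in block if p >= mean]
    lows = [p for p in block if p < mean]
    high = sum(highs) / len(highs) if highs else mean
    low = sum(lows) / len(lows) if lows else mean
    bitmap = [1 if p >= mean else 0 for p in block]
    return low, high, bitmap

def ambtc_decode(low, high, bitmap):
    """Reconstruct the block from its two levels and bitmap."""
    return [high if b else low for b in bitmap]
```

The compressed block costs one bit per pixel plus two level values, which is why BTC variants are attractive when encoding speed matters more than rate-distortion optimality.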
Solving Schwinger-Dyson equations by truncation in zero-dimensional scalar quantum field theory
International Nuclear Information System (INIS)
Okopinska, A.
1991-01-01
Three sets of Schwinger-Dyson equations, for all Green's functions, for connected Green's functions, and for proper vertices, are considered in scalar quantum field theory. A truncation scheme applied to the three sets gives three different approximation series for the Green's functions. For the theory in zero-dimensional space-time, the results for the respective two-point Green's functions are compared with the exact value calculated numerically. The best convergence of the truncation scheme is obtained for the case of proper vertices.
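In zero dimensions the "path integral" is an ordinary integral, so the exact two-point function can be computed numerically and compared against a truncated expansion. The toy below compares the exact result with the lowest-order truncation of the coupling expansion; it illustrates why zero dimensions is a convenient testbed, not the paper's specific Schwinger-Dyson truncation scheme.

```python
import math

def partition_moment(g, power, half_range=10.0, steps=20001):
    """Trapezoidal approximation of the integral of
    x**power * exp(-x**2/2 - g*x**4) over the real line."""
    h = 2.0 * half_range / (steps - 1)
    total = 0.0
    for i in range(steps):
        x = -half_range + i * h
        weight = 0.5 if i in (0, steps - 1) else 1.0
        total += weight * x**power * math.exp(-x * x / 2.0 - g * x**4)
    return total * h

def two_point_exact(g):
    """'Exact' (numerical) two-point function <x^2> for coupling g >= 0."""
    return partition_moment(g, 2) / partition_moment(g, 0)

def two_point_first_order(g):
    """Lowest-order truncation of the expansion in g: <x^2> ~ 1 - 12*g,
    from <x^2(1 - g x^4)>_0 / <1 - g x^4>_0 = (1 - 15g)/(1 - 3g)."""
    return 1.0 - 12.0 * g
```

For small g the truncation tracks the exact value closely, and the O(g^2) discrepancy is precisely the kind of truncation error whose convergence the paper studies across the three sets of equations.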
Ginzburg, Irina
2017-01-01
established that the truncation errors in the three transport coefficients kT, Sk, and Ku decay with second-order accuracy. While the physical values of the three transport coefficients are set by the Péclet number, their truncation corrections additionally depend on two adjustable relaxation rates and two adjustable equilibrium weight families, which independently determine the convective and diffusive discretization stencils. We identify flow- and dimension-independent optimal strategies for the adjustable parameters and confront them with stability requirements. Through specific choices of the two relaxation rates and weights, we expect our results to be directly applicable to forward-time central-difference and leap-frog central-convective Du Fort-Frankel-diffusion schemes. In a straight channel, a quasi-exact validation of the truncation predictions through the numerical moments becomes possible thanks to the specular-forward no-flux boundary rule. In the staircase description of a cylindrical capillary, we account for the spurious boundary-layer diffusion and dispersion caused by the tangential constraint of the bounce-back no-flux boundary rule.
Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment
Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.
2016-11-01
This paper reports five years of real-time soft error rate experimentation conducted with the same setup, at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memories manufactured in CMOS bulk 40 nm technology have been subjected to the natural radiation background. The intensity of the atmospheric neutron flux has been continuously measured on site during these experiments using dedicated neutron monitors. As a result, the neutron and alpha components of the soft error rate (SER) have been very accurately extracted from these measurements, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level evidence, for the first time, a possible correlation between neutron flux changes induced by daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.
Karon, Brad S; Meeusen, Jeffrey W; Bryant, Sandra C
2015-08-25
We retrospectively studied the impact of glucose meter error on the efficacy of glycemic control after cardiovascular surgery. Adult patients undergoing intravenous insulin glycemic control therapy after cardiovascular surgery, with 12-24 consecutive glucose meter measurements used to make insulin dosing decisions, had glucose values analyzed to determine glycemic variability by both standard deviation (SD) and continuous overall net glycemic action (CONGA), and percentage glucose values in target glucose range (110-150 mg/dL). Information was recorded for 70 patients during each of 2 periods, with different glucose meters used to measure glucose and dose insulin during each period but no other changes to the glycemic control protocol. Accuracy and precision of each meter were also compared using whole blood specimens from ICU patients. Glucose meter 1 (GM1) had median bias of 11 mg/dL compared to a laboratory reference method, while glucose meter 2 (GM2) had a median bias of 1 mg/dL. GM1 and GM2 differed little in precision (CV = 2.0% and 2.7%, respectively). Compared to the period when GM1 was used to make insulin dosing decisions, patients whose insulin dose was managed by GM2 demonstrated reduced glycemic variability as measured by both SD (13.7 vs 21.6 mg/dL, P meter error (bias) was associated with decreased glycemic variability and increased percentage of values in target glucose range for patients placed on intravenous insulin therapy following cardiovascular surgery. © 2015 Diabetes Technology Society.
Wigner distribution function of circularly truncated light beams
Bastiaans, M.J.; Nijhawan, O.P.; Gupta, A.K.; Musla, A.K.; Singh, Kehar
1998-01-01
Truncating a light beam is expressed as a convolution of its Wigner distribution function and the WDF of the truncating aperture. The WDF of a circular aperture is derived and an approximate expression - which is exact in the space and the spatial-frequency origin and whose integral over the spatial
International Nuclear Information System (INIS)
Winterflood, A.H.
1980-01-01
In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)
Directory of Open Access Journals (Sweden)
Hugues Santin-Janin
Full Text Available BACKGROUND: Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. METHODOLOGY/PRINCIPAL FINDINGS: The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performs poorly at reducing the bias of the classical estimator of the synchrony strength. CONCLUSION/SIGNIFICANCE: The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for
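The downward bias of the naive zero-lag correlation can be reproduced in a few lines. This is a simulation sketch under assumed Gaussian dynamics and sampling error, not the paper's state-space estimator; all parameter values are hypothetical.

```python
import random
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def simulate(n_years=2000, rho_env=0.8, obs_sd=1.0, seed=1):
    """Two standardized log-abundance series driven by a shared (Moran-style)
    environmental term with true synchrony rho_env, each observed with
    independent sampling error of s.d. obs_sd. Returns (true corr, observed corr);
    the observed correlation is attenuated toward rho_env / (1 + obs_sd**2)."""
    rng = random.Random(seed)
    true1, true2, obs1, obs2 = [], [], [], []
    for _ in range(n_years):
        shared = rng.gauss(0, 1)
        x1 = rho_env ** 0.5 * shared + (1 - rho_env) ** 0.5 * rng.gauss(0, 1)
        x2 = rho_env ** 0.5 * shared + (1 - rho_env) ** 0.5 * rng.gauss(0, 1)
        true1.append(x1); true2.append(x2)
        obs1.append(x1 + rng.gauss(0, obs_sd))
        obs2.append(x2 + rng.gauss(0, obs_sd))
    return pearson(true1, true2), pearson(obs1, obs2)
```

With a true synchrony of 0.8 and unit sampling noise, the naive estimate drops to roughly 0.4, which is the magnitude of bias the paper warns can mask a synchrony pattern entirely.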
The Dynamics of Truncated Black Hole Accretion Disks. I. Viscous Hydrodynamic Case
Energy Technology Data Exchange (ETDEWEB)
Hogg, J. Drew; Reynolds, Christopher S. [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States)
2017-07-10
Truncated accretion disks are commonly invoked to explain the spectro-temporal variability in accreting black holes in both small systems, i.e., state transitions in galactic black hole binaries (GBHBs), and large systems, i.e., low-luminosity active galactic nuclei (LLAGNs). In the canonical truncated disk model of moderately low accretion rate systems, gas in the inner region of the accretion disk occupies a hot, radiatively inefficient phase, which leads to a geometrically thick disk, while the gas in the outer region occupies a cooler, radiatively efficient phase that resides in the standard geometrically thin disk. Observationally, there is strong empirical evidence to support this phenomenological model, but a detailed understanding of the dynamics of truncated disks is lacking. We present a well-resolved viscous, hydrodynamic simulation that uses an ad hoc cooling prescription to drive a thermal instability and, hence, produce the first sustained truncated accretion disk. With this simulation, we perform a study of the dynamics, angular momentum transport, and energetics of a truncated disk. We find that the time variability introduced by the quasi-periodic transition of gas from efficient cooling to inefficient cooling impacts the evolution of the simulated disk. A consequence of the thermal instability is that an outflow is launched from the hot/cold gas interface, which drives large, sub-Keplerian convective cells into the disk atmosphere. The convective cells introduce a viscous θ − ϕ stress that is less than the generic r − ϕ viscous stress component, but greatly influences the evolution of the disk. In the truncated disk, we find that the bulk of the accreted gas is in the hot phase.
Ignee, Andre; Jedrejczyk, Maciej; Schuessler, Gudrun; Jakubowski, Wieslaw; Dietrich, Christoph F
2010-01-01
Time intensity curves for real-time contrast-enhanced low-MI ultrasound are a promising technique, since they add objective data to the more subjective conventional contrast-enhanced technique. Current developments have shown that the amount of uptake under modern targeted therapy strategies correlates with therapy response. Nevertheless, no basic research has been done concerning the reliability and validity of the method. Video sequences of at least 60 s were recorded for 31 consecutive patients. Parameters analysed: area under the curve, maximum intensity, mean transit time, perfusion index, time to peak, rise time. The influence of depth, lateral shift, as well as size and shape of the region of interest was analysed. The parameters time to peak and rise time showed good stability at different depths. Overall, there was a variation >50% for all other parameters. Mean transit time, time to peak and rise time were stable from 3 to 10 cm depths, whereas all other parameters showed satisfying results only at 4-6 cm. Time to peak and rise time were also stable against lateral shifting, whereas all other parameters again had variations over 50%. Size and shape of the region of interest did not influence the results. (1) It is important to compare regions of interest, e.g. in a tumour vs. representative parenchyma, at the same depths. (2) Time intensity curves should not be analysed at a depth of less than 4 cm. (3) The parameters area under the curve, perfusion index and maximum intensity should not be analysed at a depth of more than 6 cm. (4) Size and shape of a region of interest in liver parenchyma do not affect time intensity curves. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Ignee, Andre [Department of Internal Medicine and Diagnostic Imaging, Caritas Hospital, Uhlandstr. 7, 97990 Bad Mergentheim (Germany)], E-mail: andre.ignee@gmx.de; Jedrejczyk, Maciej [Department of Diagnostic Imaging, 2nd Division of Medical Faculty, Medical University, Ul. Kondratowicza 8, 03-242 Warsaw (Poland)], E-mail: mjedrzejczyk@interia.pl; Schuessler, Gudrun [Department of Internal Medicine and Diagnostic Imaging, Caritas Hospital, Uhlandstr. 7, 97990 Bad Mergentheim (Germany)], E-mail: gudrunschuessler@gmx.de; Jakubowski, Wieslaw [Department of Diagnostic Imaging, 2nd Division of Medical Faculty, Medical University, Ul. Kondratowicza 8, 03-242 Warsaw (Poland)], E-mail: ewajbmd@go2.pl; Dietrich, Christoph F. [Department of Internal Medicine and Diagnostic Imaging, Caritas Hospital, Uhlandstr. 7, 97990 Bad Mergentheim (Germany)], E-mail: christoph.dietrich@ckbm.de
2010-01-15
Introduction: Time intensity curves for real-time contrast-enhanced low-MI ultrasound are a promising technique, since they add objective data to the more subjective conventional contrast-enhanced technique. Current developments have shown that the amount of uptake under modern targeted therapy strategies correlates with therapy response. Nevertheless, no basic research has been done concerning the reliability and validity of the method. Patients and methods: Video sequences of at least 60 s were recorded for 31 consecutive patients. Parameters analysed: area under the curve, maximum intensity, mean transit time, perfusion index, time to peak, rise time. The influence of depth, lateral shift, as well as size and shape of the region of interest was analysed. Results: The parameters time to peak and rise time showed good stability at different depths. Overall, there was a variation >50% for all other parameters. Mean transit time, time to peak and rise time were stable from 3 to 10 cm depths, whereas all other parameters showed satisfying results only at 4-6 cm. Time to peak and rise time were also stable against lateral shifting, whereas all other parameters again had variations over 50%. Size and shape of the region of interest did not influence the results. Discussion: (1) It is important to compare regions of interest, e.g. in a tumour vs. representative parenchyma, at the same depths. (2) Time intensity curves should not be analysed at a depth of less than 4 cm. (3) The parameters area under the curve, perfusion index and maximum intensity should not be analysed at a depth of more than 6 cm. (4) Size and shape of a region of interest in liver parenchyma do not affect time intensity curves.
Lyu, Jiang-Tao; Zhou, Chen
2017-12-01
Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important procedure for ionospheric refraction correction. Traditionally, ionospheric models and ionospheric detection instruments, such as ionosondes or GPS receivers, are employed for obtaining the electron density. However, neither method is capable of satisfying the correction accuracy requirements of advanced space target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized for calculating the electron density integral exactly along the propagation path of the radar wave, which yields an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated with a P-band radar located in midlatitude China. The experimental results show that this novel technique is more accurate than traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
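The dual-frequency range combination can be sketched from the first-order dispersion relation ΔR = 40.3·TEC/f², where TEC is the electron density integral (total electron content) along the path. The frequencies, range, and TEC below are hypothetical round numbers, not values from the experiment.

```python
K = 40.3  # m^3/s^2: first-order ionospheric group-delay constant

def tec_from_dual_frequency(r1, r2, f1, f2):
    """Slant TEC (electrons/m^2) from apparent ranges r1, r2 (m) measured at
    carrier frequencies f1, f2 (Hz). Solving r_i = r0 + K*TEC/f_i**2 for TEC
    eliminates the unknown geometric range r0."""
    return (r1 - r2) * f1**2 * f2**2 / (K * (f2**2 - f1**2))

def corrected_range(r, f, tec):
    """Remove the first-order ionospheric group delay from an apparent range."""
    return r - K * tec / f**2
```

Because both measurements share the same geometric range, their difference isolates the dispersive term, which is why two adjacent frequencies suffice to recover the electron density integral and hence the range correction.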
Siebert, Johan N; Ehrler, Frederic; Combescure, Christophe; Lacroix, Laurence; Haddad, Kevin; Sanchez, Oliver; Gervaix, Alain; Lovis, Christian; Manzano, Sergio
2017-02-01
During pediatric cardiopulmonary resuscitation (CPR), vasoactive drug preparation for continuous infusion is both complex and time-consuming, placing children at higher risk than adults for medication errors. Following an evidence-based, ergonomics-driven approach, we developed a mobile device app called Pediatric Accurate Medication in Emergency Situations (PedAMINES), intended to guide caregivers step by step from preparation to delivery of drugs requiring continuous infusion. The aim of our study was to determine whether the use of PedAMINES reduces drug preparation time (TDP) and time to delivery (TDD; primary outcome), as well as medication errors (secondary outcomes), when compared with conventional preparation methods. The study was a randomized controlled crossover trial with 2 parallel groups comparing PedAMINES with a conventional, internationally used drug infusion rate table in the preparation of continuous drug infusions. We used a simulation-based pediatric CPR cardiac arrest scenario with a high-fidelity manikin in the shock room of a tertiary care pediatric emergency department. After epinephrine-induced return of spontaneous circulation, pediatric emergency nurses were first asked to prepare a continuous infusion of dopamine, using either PedAMINES (intervention group) or the infusion table (control group), and second, a continuous infusion of norepinephrine, crossing over to the other method. The primary outcome was the elapsed time in seconds, in each allocation group, from the oral prescription by the physician to TDD by the nurse. TDD included TDP. The secondary outcome was the medication dosage error rate during the sequence from drug preparation to drug injection. A total of 20 nurses were randomized into 2 groups. During the first study period, mean TDP while using PedAMINES and conventional preparation methods was 128.1 s (95% CI 102-154) and 308.1 s (95% CI 216-400), respectively (180 s reduction, P=.002). Mean TDD was 214 s (95% CI 171-256) and
Barcaru, A.; Anroedh-Sampat, A.; Janssen, H.-G.; Vivó-Truyols, G.
2014-01-01
In this paper we present a model relating experimental factors (column lengths, diameters and thicknesses, modulation times, pressures and temperature programs) with retention times. Unfortunately, an analytical solution to calculate the retention in temperature-programmed GC×GC is impossible, thus making it necessary to perform a
A Novel SCCA Approach via Truncated ℓ1-norm and Truncated Group Lasso for Brain Imaging Genetics.
Du, Lei; Liu, Kefei; Zhang, Tuo; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L; Han, Junwei; Guo, Lei; Saykin, Andrew J; Shen, Li
2017-09-18
Brain imaging genetics, which studies the linkage between genetic variations and structural or functional measures of the human brain, has become increasingly important in recent years. Discovering the bi-multivariate relationship between genetic markers such as single-nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is one major task in imaging genetics. Sparse Canonical Correlation Analysis (SCCA) has been a popular technique in this area for its powerful capability in identifying bi-multivariate relationships coupled with feature selection. The existing SCCA methods impose either the ℓ1-norm or its variants to induce sparsity. The ℓ0-norm penalty is a perfect sparsity-inducing tool which, however, leads to an NP-hard problem. In this paper, we propose the truncated ℓ1-norm penalized SCCA to improve the performance and effectiveness of ℓ1-norm based SCCA methods. In addition, we propose an efficient optimization algorithm to solve this novel SCCA problem. The proposed method is an adaptive shrinkage method via tuning τ. It can avoid time-intensive parameter tuning if given a reasonably small τ. Furthermore, we extend it to the truncated group lasso (TGL), and propose the TGL-SCCA model to improve group-lasso-based SCCA methods. The experimental results, compared with four benchmark methods, show that our SCCA methods identify better or similar correlation coefficients, and better canonical loading profiles than the competing methods. This demonstrates the effectiveness and efficiency of our methods in discovering interesting imaging genetic associations. The Matlab code and sample data are freely available at http://www.iu.edu/~shenlab/tools/tlpscca/ . © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
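The truncated ℓ1 penalty itself is easy to state. The sketch below shows how it caps the cost of large coefficients and, after rescaling, approximates the ℓ0 count of nonzeros; the full SCCA optimization is beyond this snippet, and the coefficient vectors used are illustrative.

```python
def l1_penalty(w):
    """Ordinary l1 penalty: large coefficients keep paying proportionally."""
    return sum(abs(x) for x in w)

def truncated_l1_penalty(w, tau):
    """Truncated l1 penalty (TLP): each coefficient contributes min(|w_i|, tau),
    so coefficients beyond tau incur a constant cost and are not shrunk further."""
    return sum(min(abs(x), tau) for x in w)

def approx_l0(w, tau):
    """TLP / tau approaches the number of nonzero coefficients as tau shrinks,
    which is how the truncated penalty mimics the NP-hard l0 norm."""
    return truncated_l1_penalty(w, tau) / tau
```

This is the design choice the abstract alludes to: unlike the ℓ1 norm, which over-shrinks large loadings, the truncated version penalizes all sufficiently large coefficients equally, so the estimator behaves adaptively once a reasonably small τ is fixed.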
Video Error Correction Using Steganography
Robie, David L.; Mersereau, Russell M.
2002-12-01
The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
Hightower, Rebecca E
2008-01-01
Since the publication of the first analysis of Medicare payment error rates in 1998, the Office of Inspector General and the Centers for Medicare & Medicaid Services have focused resources on Medicare payment error prevention programs, now referred to as the Hospital Payment Monitoring Program. The purpose of the Hospital Payment Monitoring Program is to educate providers of Medicare Part A services in strategies to improve medical record documentation and decrease the potential for payment errors through appropriate claims completion. Although the payment error rates by state (and dollars paid in error) have decreased significantly, opportunities for improvement remain, as demonstrated in this study of nine hospitals with a high proportion of short-term admissions over time. Previous studies by the Quality Improvement Organization had focused on inpatient stays of 1 day or less, a primary target due to the large amount of Medicare dollars spent on these admissions. Random review of Louisiana Medicare admissions revealed persistent medical record documentation and process issues regardless of length of stay, as well as the opportunity for significant future savings to the Medicare Trust Fund. The purpose of this study was to determine whether opportunities for improvement in reduction of payment error continue to exist for inpatient admissions of greater than 1 day, despite focused education provided by Louisiana Health Care Review, the Louisiana Medicare Quality Improvement Organization, from 1999 to 2005, and to work individually with the nine selected hospitals to assist them in reducing the number of unnecessary short-term admissions and billing errors in each hospital by a minimum of 50% by the end of the study period. The setting was inpatient short-term acute care hospitals. A sample of claims for short-term stays (defined as an inpatient admission with a length of stay of 3 days or less, excluding deaths, interim bills for those still inpatients, and those who left against
Evolution of truncated moments of singlet parton distributions
International Nuclear Information System (INIS)
Forte, S.; Magnea, L.; Piccione, A.; Ridolfi, G.
2001-01-01
We define truncated Mellin moments of parton distributions by restricting the integration range over the Bjorken variable to the experimentally accessible subset x_0 ≤ x ≤ 1 of the allowed kinematic range 0 ≤ x ≤ 1. We derive the evolution equations satisfied by truncated moments in the general (singlet) case in terms of an infinite triangular matrix of anomalous dimensions which couple each truncated moment to all higher moments with orders differing by integers. We show that the evolution of any moment can be determined to arbitrarily good accuracy by truncating the system of coupled moments to a sufficiently large but finite size, and show how the equations can be solved in a way suitable for numerical applications. We discuss in detail the accuracy of the method in view of applications to precision phenomenology.
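The truncated moment itself, q_N(x_0) = ∫_{x_0}^1 x^(N-1) q(x) dx, is straightforward to evaluate numerically. The sketch below uses a toy (non-physical) distribution shape and a simple midpoint rule, purely to illustrate the definition; the shape and parameters are our assumptions, not from the paper.

```python
def truncated_moment(q, N, x0, steps=200_000):
    """Midpoint-rule approximation of the truncated Mellin moment
    integral_{x0}^{1} x**(N-1) * q(x) dx."""
    h = (1.0 - x0) / steps
    total = 0.0
    for i in range(steps):
        x = x0 + (i + 0.5) * h
        total += x ** (N - 1) * q(x)
    return total * h

# Toy distribution shape (illustrative only, not a fitted PDF set):
q = lambda x: x ** -0.5 * (1.0 - x) ** 3

second_full = truncated_moment(q, 2, 1e-6)   # nearly the full moment
second_trunc = truncated_moment(q, 2, 0.1)   # experimentally accessible range
```

Restricting the lower limit from ~0 to 0.1 visibly reduces the moment, which is exactly the information the evolution equations for truncated moments must track.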
Flexible scheme to truncate the hierarchy of pure states.
Zhang, P-P; Bentley, C D B; Eisfeld, A
2018-04-07
The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.
Measuring a Truncated Disk in Aquila X-1
King, Ashley L.; Tomsick, John A.; Miller, Jon M.; Chenevez, Jerome; Barret, Didier; Boggs, Steven E.; Chakrabarty, Deepto; Christensen, Finn E.; Craig, William W.; Feurst, Felix;
2016-01-01
We present NuSTAR and Swift observations of the neutron star Aquila X-1 during the peak of its 2014 July outburst. The spectrum is soft with strong evidence for a broad Fe Kα line. Modeled with a relativistically broadened reflection model, we find that the inner disk is truncated with an inner radius of 15 ± 3 R_G. The disk is likely truncated by either the boundary layer and/or a magnetic field. Associating the truncated inner disk with pressure from a magnetic field gives an upper limit of B < 5 ± 2 × 10^8 G. Although the radius is truncated far from the stellar surface, material is still reaching the neutron star surface, as evidenced by the X-ray burst present in the NuSTAR observation.
Squeezing in multi-mode nonlinear optical state truncation
International Nuclear Information System (INIS)
Said, R.S.; Wahiddin, M.R.B.; Umarov, B.A.
2007-01-01
In this Letter, we show that multi-mode qubit states produced via nonlinear optical state truncation driven by classical external pumpings satisfy a squeezing condition. We restrict our discussion to the two- and three-mode cases.
Investigation of propagation dynamics of truncated vector vortex beams.
Srinivas, P; Perumangatt, C; Lal, Nijil; Singh, R P; Srinivasan, B
2018-06-01
In this Letter, we experimentally investigate the propagation dynamics of truncated vector vortex beams generated using a Sagnac interferometer. Upon focusing, the truncated vector vortex beam is found to regain its original intensity structure within the Rayleigh range. In order to explain such behavior, the propagation dynamics of a truncated vector vortex beam is simulated by decomposing it into the sum of integral charge beams with associated complex weights. We also show that the polarization of the truncated composite vector vortex beam is preserved all along the propagation axis. The experimental observations are consistent with theoretical predictions based on previous literature and are in good agreement with our simulation results. The results hold importance as vector vortex modes are eigenmodes of the optical fiber.
Truncated Newton-Raphson Methods for Quasicontinuum Simulations
National Research Council Canada - National Science Library
Liang, Yu; Kanapady, Ramdev; Chung, Peter W
2006-01-01
.... In this research, we report the effectiveness of the truncated Newton-Raphson method and quasi-Newton method with low-rank Hessian update strategy that are evaluated against the full Newton-Raphson...
On the propagation of truncated localized waves in dispersive silica
Salem, Mohamed; Bagci, Hakan
2010-01-01
Propagation characteristics of truncated Localized Waves propagating in dispersive silica and free space are numerically analyzed. It is shown that those characteristics are affected by the changes in the relation between the transverse spatial
Energy Technology Data Exchange (ETDEWEB)
Simon, Vitor Hugo
1997-12-01
The goal of this work was the development of an algorithm for Truncated Plurigaussian Stochastic Simulation and its validation on a complex geologic model. The reservoir data come from the Aux Vases zone at Rural Hill Field in Illinois, USA, and from the 2D geological interpretation described by WEIMER et al. (1982), three sets of samples with different grid densities were taken. These sets were used to condition the simulation and to refine the estimates of the non-stationary matrix of facies proportions used to truncate the Gaussian random functions (RFs). The Truncated Plurigaussian Model is an extension of the Truncated Gaussian Model (TG). In this new model it is possible to use several facies with different spatial structures, while retaining the simplicity of the TG. The geological interpretation used as a validation model was chosen because it shows a set of NW/SE elongated tidal channels cutting the NE/SW shoreline deposits, interleaved by impermeable facies. These characteristics of the spatial structures of sedimentary facies served to evaluate the simulation model. Two independent Gaussian RFs were used, with an 'erosive model' as the truncation strategy. Non-conditional simulations were also carried out, using linearly combined Gaussian RFs with varying correlation coefficients. The influence of parameters such as the number of Gaussian RFs, the correlation coefficient, and the truncation strategy on the outcome of the simulation was analyzed, as well as the physical meaning of these parameters from a geological point of view. It was shown, step by step, using an example, how to construct the theoretical model and an algorithm for simulating with the Truncated Plurigaussian Model. The conclusion of this work is that, even with a simple algorithm for the Conditional Truncated Plurigaussian and a complex geological model, it is possible to obtain a useful product. (author)
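The truncation step of a plurigaussian simulation can be sketched minimally. This is an illustration under simplifying assumptions: the two Gaussian fields are sampled without spatial correlation (a real run would simulate each with its own variogram model), and the thresholds and the 'erosive' rule (field 1 decides first; field 2 is consulted only where field 1 falls below its threshold) are our illustrative choices.

```python
import numpy as np

def erosive_truncation(g1, g2, t1=0.0, t2=0.5):
    """Map two Gaussian random fields to three facies with an 'erosive' rule:
    facies 0 (e.g. channel) wherever g1 > t1; elsewhere facies 1 or 2
    depending on whether g2 exceeds t2."""
    return np.where(g1 > t1, 0, np.where(g2 > t2, 1, 2))

rng = np.random.default_rng(42)
g1 = rng.standard_normal(10_000)
g2 = rng.standard_normal(10_000)
facies = erosive_truncation(g1, g2)
# With t1 = 0, about half of the cells become facies 0; making the thresholds
# spatially varying is how a non-stationary facies-proportion matrix enters.
```

The key design point is that the facies geometry comes entirely from the spatial structure of g1 and g2 plus the truncation rule, which is what lets several facies carry different variogram models.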
Enhancing propagation characteristics of truncated localized waves in silica
Salem, Mohamed
2011-07-01
The spectral characteristics of truncated Localized Waves propagating in dispersive silica are analyzed. Numerical experiments show that the immunity of the truncated Localized Waves propagating in dispersive silica to decay and distortion is enhanced as the non-linearity of the relation between the transverse spatial spectral components and the wave vector gets stronger, in contrast to free-space propagating waves, which suffer from early decay and distortion. © 2011 IEEE.
Siebert, Johan N; Ehrler, Frederic; Lovis, Christian; Combescure, Christophe; Haddad, Kevin; Gervaix, Alain; Manzano, Sergio
2017-08-22
During pediatric cardiopulmonary resuscitation (CPR), vasoactive drug preparation for continuous infusions is complex and time-consuming. The need for individual weight-based drug dose calculation and preparation places children at higher risk than adults for medication errors. Following an evidence-based and ergonomics-driven approach, we developed a mobile device app called Pediatric Accurate Medication in Emergency Situations (PedAMINES), intended to guide caregivers step-by-step from preparation to delivery of drugs requiring continuous infusion. In a prior single-center randomized controlled trial, medication errors were reduced from 70% to 0% by using PedAMINES when compared with conventional preparation methods. The purpose of this study is to determine whether the use of PedAMINES in both university and smaller hospitals reduces medication dosage errors (primary outcome), time to drug preparation (TDP), and time to drug delivery (TDD) (secondary outcomes) during pediatric CPR when compared with conventional preparation methods. This is a multicenter, prospective, randomized controlled crossover trial with 2 parallel groups comparing PedAMINES with a conventional and internationally used drug infusion rate table in the preparation of continuous drug infusion. The evaluation setting uses a simulation-based pediatric CPR cardiac arrest scenario with a high-fidelity manikin. The study, involving 120 certified nurses (the sample size), will take place in the resuscitation rooms of 3 tertiary pediatric emergency departments and 3 smaller hospitals. After epinephrine-induced return of spontaneous circulation, nurses will be asked to prepare a continuous infusion of dopamine using either PedAMINES (intervention group) or the infusion table (control group) and then prepare a continuous infusion of norepinephrine by crossing over to the other method. The primary outcome is the medication dosage error rate. The secondary outcome is the time in seconds elapsed since the oral
First online real-time evaluation of motion-induced 4D dose errors during radiotherapy delivery
DEFF Research Database (Denmark)
Ravkilde, Thomas; Skouboe, Simon; Hansen, Rune
2018-01-01
PURPOSE: In radiotherapy, dose deficits caused by tumor motion often far outweigh the discrepancies typically allowed in plan-specific quality assurance (QA). Yet, tumor motion is not usually included in present QA. We here present a novel method for online treatment verification by real-time motion-including 4D dose reconstruction and dose evaluation, and demonstrate its use during stereotactic body radiotherapy (SBRT) delivery with and without MLC tracking. METHODS: Five volumetric modulated arc therapy (VMAT) plans were delivered with and without MLC tracking to a motion stage carrying a Delta4 dosimeter. The VMAT plans have previously been used for (non-tracking) liver SBRT with intra-treatment tumor motion recorded by kilovoltage intrafraction monitoring (KIM). The motion stage reproduced the KIM-measured tumor motions in 3D while optical monitoring guided the MLC tracking. Linac...
DEFF Research Database (Denmark)
Cano-Fácila, Francisco José; Pivnenko, Sergey; Sierra-Castaner, Manuel
2012-01-01
spectrum (PWS) is reliable only within a known region. The second domain is the field distribution over the antenna under test (AUT) plane in which the desired field is assumed to be concentrated on the antenna aperture. The method can be applied to any scanning geometry, but in this paper, only the planar...
A truncated accretion disk in the galactic black hole candidate source H1743-322
International Nuclear Information System (INIS)
Sriram, Kandulapati; Agrawal, Vivek Kumar; Rao, Arikkala Raghurama
2009-01-01
To investigate the geometry of the accretion disk in the source H1743-322, we have carried out a detailed X-ray temporal and spectral study using RXTE pointed observations. We have selected all data pertaining to the Steep Power Law (SPL) state during the 2003 outburst of this source. We find anti-correlated hard X-ray lags in three of the observations and the changes in the spectral and timing parameters (like the QPO frequency) confirm the idea of a truncated accretion disk in this source. Compiling data from similar observations of other sources, we find a correlation between the fractional change in the QPO frequency and the observed delay. We suggest that these observations indicate a definite size scale in the inner accretion disk (the radius of the truncated disk) and we explain the observed correlation using various disk parameters like Compton cooling time scale, viscous time scale etc. (research papers)
The truncated Wigner method for Bose-condensed gases: limits of validity and applications
International Nuclear Information System (INIS)
Sinatra, Alice; Lobo, Carlos; Castin, Yvan
2002-01-01
We study the truncated Wigner method applied to a weakly interacting spinless Bose-condensed gas which is perturbed away from thermal equilibrium by a time-dependent external potential. The principle of the method is to generate an ensemble of classical fields ψ(r) which samples the Wigner quasi-distribution function of the initial thermal equilibrium density operator of the gas, and then to evolve each classical field with the Gross-Pitaevskii equation. In the first part of the paper we improve the sampling technique over our previous work (Sinatra et al 2000 J. Mod. Opt. 47 2629-44) and we test its accuracy against the exactly solvable model of the ideal Bose gas. In the second part of the paper we investigate the conditions of validity of the truncated Wigner method. For short evolution times it is known that the time-dependent Bogoliubov approximation is valid for almost pure condensates. The requirement that the truncated Wigner method reproduces the Bogoliubov prediction leads to the constraint that the number of field modes in the Wigner simulation must be smaller than the number of particles in the gas. For longer evolution times the nonlinear dynamics of the noncondensed modes of the field plays an important role. To demonstrate this we analyse the case of a three-dimensional spatially homogeneous Bose-condensed gas and we test the ability of the truncated Wigner method to correctly reproduce the Beliaev-Landau damping of an excitation of the condensate. We have identified the mechanism which limits the validity of the truncated Wigner method: the initial ensemble of classical fields, driven by the time-dependent Gross-Pitaevskii equation, thermalizes to a classical field distribution at a temperature T_class which is larger than the initial temperature T of the quantum gas. When T_class significantly exceeds T a spurious damping is observed in the Wigner simulation. This leads to the second validity condition for the truncated Wigner method, T_class - T
Kaluza-Klein theories without truncation
International Nuclear Information System (INIS)
Becker, Katrin; Becker, Melanie; Robbins, Daniel
2015-01-01
In this note we will present a closed expression for the space-time effective action for all bosonic fields (massless and massive) obtained from the compactification of gravity or supergravity theories (such as type II or eleven-dimensional supergravities) from D to d space-time dimensions.
Vinay BC; Nikhitha MK; Patel Sunil B
2015-01-01
This review article explains the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring of medication errors, their consequences, and the prevention and management of medication errors, clearly and legibly, with tables that are easy to understand.
The Stars and Gas in Outer Parts of Galaxy Disks : Extended or Truncated, Flat or Warped?
van der Kruit, P. C.; Funes, JG; Corsini, EM
2008-01-01
I review observations of truncations of stellar disks and models for their origin, compare observations of truncations in moderately inclined galaxies to those in edge-on systems and discuss the relation between truncations and H I-warps and their systematics and origin. Truncations are a common
Probability distributions with truncated, log and bivariate extensions
Thomopoulos, Nick T
2018-01-01
This volume presents a concise and practical overview of statistical methods and tables not readily available in other publications. It begins with a review of the commonly used continuous and discrete probability distributions. Several useful distributions that are not so common and less understood are described with examples and applications in full detail: discrete normal, left-partial, right-partial, left-truncated normal, right-truncated normal, lognormal, bivariate normal, and bivariate lognormal. Table values are provided with examples that enable researchers to easily apply the distributions to real applications and sample data. The left- and right-truncated normal distributions offer a wide variety of shapes in contrast to the symmetrically shaped normal distribution, and a newly developed spread ratio enables analysts to determine which of the three distributions best fits a particular set of sample data. The book will be highly useful to anyone who does statistical and probability analysis. This in...
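The moments of the left- and right-truncated normal distributions described above follow from standard closed-form expressions; the sketch below implements them with only the standard library. The numerical values are illustrative and not taken from the book's tables.

```python
import math

def phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncnorm_mean_var(mu, sigma, a, b):
    """Mean and variance of N(mu, sigma^2) truncated to [a, b],
    from the standard closed-form expressions."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    Z = Phi(beta) - Phi(alpha)                       # retained probability mass
    d = (phi(alpha) - phi(beta)) / Z
    mean = mu + sigma * d
    var = sigma ** 2 * (1.0 + (alpha * phi(alpha) - beta * phi(beta)) / Z - d ** 2)
    return mean, var

# Asymmetric truncation shifts the mean and always reduces the variance:
m, v = truncnorm_mean_var(10.0, 2.0, a=7.0, b=15.0)
```

Cutting more mass from the left tail than the right (a is 1.5σ below the mean, b is 2.5σ above) pushes the mean upward, which is the kind of shape asymmetry the book's spread ratio is designed to diagnose.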
Decrease in medical command errors with use of a "standing orders" protocol system.
Holliman, C J; Wuerz, R C; Meador, S A
1994-05-01
The purpose of this study was to determine the physician medical command error rates and paramedic error rates after implementation of a "standing orders" protocol system for medical command. These patient-care error rates were compared with the previously reported rates for a "required call-in" medical command system (Ann Emerg Med 1992; 21(4):347-350). A secondary aim of the study was to determine whether the on-scene time interval was increased by the standing orders system. A prospective audit of prehospital advanced life support (ALS) trip sheets was conducted at an urban ALS paramedic service with on-line physician medical command from three local hospitals. All ALS run sheets from the start of the standing orders system (April 1, 1991) for a 1-year period ending on March 30, 1992 were reviewed as part of an ongoing quality assurance program. Cases were identified as nonjustifiably deviating from regional emergency medical services (EMS) protocols by agreement of three physician reviewers (the same methodology as a previously reported command error study in the same ALS system). Medical command and paramedic errors were identified from the prehospital ALS run sheets and categorized. Two thousand one ALS runs were reviewed; 24 physician errors (1.2% of the 1,928 "command" runs) and eight paramedic errors (0.4% of runs) were identified. The physician error rate was decreased from the 2.6% rate in the previous study (P < .0001 by chi-square analysis). The on-scene time interval did not increase with the "standing orders" system.(ABSTRACT TRUNCATED AT 250 WORDS)
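A comparison of error rates like the one reported can be sketched as a two-proportion chi-square test. The standing-orders counts (24 errors in 1,928 command runs) come from the abstract; the call-in-era counts below are hypothetical placeholders chosen only to be consistent with the reported 2.6% rate, since the abstract does not give the earlier denominators.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Standing-orders period (from the abstract):
errors_new, runs_new = 24, 1928
# Call-in period: HYPOTHETICAL counts consistent with the stated 2.6% rate.
errors_old, runs_old = 52, 2000

stat = chi2_2x2(errors_new, runs_new - errors_new,
                errors_old, runs_old - errors_old)
# stat exceeds the 3.84 critical value (alpha = 0.05, 1 df), consistent in
# direction with the reported significant drop in physician error rate.
```

With real denominators for the earlier study, the same statistic would reproduce the paper's P-value calculation.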
Calvo, Esteban; García, Juan A.; García, Ignacio; Aísa, Luis A.
2009-09-01
Phase-Doppler anemometry (PDA) is a powerful tool for two-phase flow measurements and testing. Particle concentration and mass flux can also be evaluated using the raw particle data supplied by this technique. The calculation starts from each particle velocity, diameter, transit time data, and the total measurement time. There are two main evaluation strategies. The first one uses the probe volume effective cross section, and it is usually simplified assuming that particles follow quasi one-directional trajectories. In the text, it will be called the cross section method. The second one includes a set of methods which will be denoted as “Generalized Integral Methods” (GIM). Concentration algorithms such as the transit time method (TTM) and the integral volume method (IVM) are particular cases of the GIM. In any case, a previous calibration of the measurement volume geometry is necessary to apply the referred concentration evaluation methods. In this study, concentrations and mass fluxes both evaluated by the cross-section method and the TTM are compared. Experimental data are obtained from a particle-laden jet generated by a convergent nozzle. Errors due to trajectory dispersion, burst splitting, and multi-particle signals are discussed.
Riesz Representation Theorem on Bilinear Spaces of Truncated Laurent Series
Directory of Open Access Journals (Sweden)
Sabarinsyah
2017-06-01
Full Text Available In this study a generalization of the Riesz representation theorem on non-degenerate bilinear spaces, particularly on spaces of truncated Laurent series, was developed. It was shown that any linear functional on a non-degenerate bilinear space is representable by a unique element of the space if and only if its kernel is closed. Moreover, an explicit equivalent condition can be identified for the closedness property of the kernel when the bilinear space is a space of truncated Laurent series.
Vortex breakdown in a truncated conical bioreactor
DEFF Research Database (Denmark)
Balci, Adnan; Brøns, Morten; Herrada, Miguel A.
2015-01-01
It is found that the sidewall convergence (divergence) from the top to the bottom stimulates (suppresses) the development of vortex breakdown (VB) in both water and air. At α = 60°, the flow topology changes eighteen times as Hw varies. The changes are due to (a) competing effects of AMF (the air meridional...
Bell, L C; Does, M D; Stokes, A M; Baxter, L C; Schmainda, K M; Dueck, A C; Quarles, C C
2017-09-01
The optimal TE must be calculated to minimize the variance in CBV measurements made with DSC MR imaging. Simulations can be used to determine the influence of the TE on CBV, but they may not adequately recapitulate the in vivo heterogeneity of precontrast T2*, contrast agent kinetics, and the biophysical basis of contrast agent-induced T2* changes. The purpose of this study was to combine quantitative multiecho DSC MRI T2* time curves with error analysis in order to compute the optimal TE for a traditional single-echo acquisition. Eleven subjects with high-grade gliomas were scanned at 3T with a dual-echo DSC MR imaging sequence to quantify contrast agent-induced T2* changes in this retrospective study. Optimized TEs were calculated with propagation-of-error analysis for high-grade glial tumors, normal-appearing white matter, and arterial input function estimation. The optimal TE is a weighted average of the T2* values that occur as a contrast agent bolus traverses a voxel. The mean optimal TEs were 30.0 ± 7.4 ms for high-grade glial tumors, 36.3 ± 4.6 ms for normal-appearing white matter, and 11.8 ± 1.4 ms for arterial input function estimation; by repeated-measures ANOVA, the differences among the optimal TE values for high-grade gliomas and the mean values of all 3 ROIs were statistically significant. The optimal TE for the arterial input function estimation is much shorter; this finding implies that quantitative DSC MR imaging acquisitions would benefit from multiecho acquisitions. In the case of a single-echo acquisition, the optimal TE prescribed should be 30-35 ms (without a preload) and 20-30 ms (with a standard full-dose preload). © 2017 by American Journal of Neuroradiology.
Energy Technology Data Exchange (ETDEWEB)
Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-04
We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission at photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can rewrite this in terms of fractional errors as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B_0, where B is the transmitted backlighter (BL) signal and B_0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB_0/B_0, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB_0/B_0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
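The propagated fractional error can be evaluated directly from the expressions quoted in the abstract. The numbers below are purely illustrative; only the formulas come from the text.

```python
import math

def opacity_with_error(B, dB, B0, dB0, rhoL, drhoL):
    """Opacity k = -ln(T)/(rho*L) with T = B/B0, and its fractional error
    dk/k = |1/ln T| * (dB/B + dB0/B0) + d(rhoL)/(rhoL)."""
    T = B / B0
    k = -math.log(T) / rhoL
    frac_err = abs(1.0 / math.log(T)) * (dB / B + dB0 / B0) + drhoL / rhoL
    return k, frac_err

# Illustrative numbers: 30% transmission, few-percent signal uncertainties.
k, frac = opacity_with_error(B=0.3, dB=0.01, B0=1.0, dB0=0.02,
                             rhoL=1e-3, drhoL=5e-5)
# As T -> 1, ln T -> 0 and the fractional error diverges, which is why the
# transmission must be kept well below unity in the measurable range noted
# at the end of the abstract.
```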
Generation of truncated recombinant form of tumor necrosis factor ...
African Journals Online (AJOL)
Generation of truncated recombinant form of tumor necrosis factor ... as 6×His tagged using the E. coli BL21 (DE3) expression system. The protein was ... proapoptotic signaling cascade through TNFR1 [5], which is ...
Scavenger receptor AI/II truncation, lung function and COPD
DEFF Research Database (Denmark)
Thomsen, M; Nordestgaard, B G; Tybjaerg-Hansen, A
2011-01-01
The scavenger receptor A-I/II (SRA-I/II) on alveolar macrophages is involved in recognition and clearance of modified lipids and inhaled particulates. A rare variant of the SRA-I/II gene, Arg293X, truncates the distal collagen-like domain, which is essential for ligand recognition. We tested whet...
Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2010-01-01
Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficul...
Maximum nondiffracting propagation distance of aperture-truncated Airy beams
Chu, Xingchun; Zhao, Shanghong; Fang, Yingwu
2018-05-01
Airy beams have attracted the attention of many researchers due to their nondiffracting, self-healing, and transverse accelerating properties. A key issue in research on Airy beams and their applications is how to evaluate their nondiffracting propagation distance. In this paper, the critical transverse extent of physically realizable Airy beams is analyzed under the local spatial frequency methodology. The maximum nondiffracting propagation distance of aperture-truncated Airy beams is formulated and analyzed based on their local spatial frequency. The validity of the formula is verified by comparing the maximum nondiffracting propagation distances of an aperture-truncated ideal Airy beam, an aperture-truncated exponentially decaying Airy beam, and an exponentially decaying Airy beam. Results show that the formula can be used to accurately evaluate the maximum nondiffracting propagation distance of an aperture-truncated ideal Airy beam. It can therefore guide the selection of appropriate parameters to generate Airy beams with long nondiffracting propagation distances, which have potential applications in fields such as laser weapons and optical communications.
Multiple-scattering theory with a truncated basis set
International Nuclear Information System (INIS)
Zhang, X.; Butler, W.H.
1992-01-01
Multiple-scattering theory (MST) is an extremely efficient technique for calculating the electronic structure of an assembly of atoms. The wave function in MST is expanded in terms of spherical waves centered on each atom and indexed by their orbital and azimuthal quantum numbers, l and m. The secular equation which determines the characteristic energies can be truncated at a value of the orbital angular momentum l_max for which the higher angular momentum phase shifts, δ_l (l > l_max), are sufficiently small. Generally, the wave-function coefficients which are calculated from the secular equation are also truncated at l_max. Here we point out that this truncation of the wave function is not necessary and is in fact inconsistent with the truncation of the secular equation. A consistent procedure is described in which the states with higher orbital angular momenta are retained but with their phase shifts set to zero. We show that this treatment gives smooth, continuous, and correctly normalized wave functions and that the total charge density calculated from the corresponding Green function agrees with the Lloyd formula result. We also show that this augmented wave function can be written as a linear combination of Andersen's muffin-tin orbitals in the case of muffin-tin potentials, and can be used to generalize the muffin-tin orbital idea to full-cell potentials.
Analytic Method for Pressure Recovery in Truncated Diffusers ...
African Journals Online (AJOL)
A prediction method is presented for the static pressure recovery in subsonic axisymmetric truncated conical diffusers. In the analysis, a turbulent boundary layer is assumed at the diffuser inlet and a potential core exists throughout the flow. When flow separation occurs, this approach cannot be used to predict the maximum ...
International Nuclear Information System (INIS)
Fiske, David R
2006-01-01
Computing spherical harmonic decompositions is a ubiquitous technique that arises in a wide variety of disciplines and a large number of scientific codes. Because spherical harmonics are defined by integrals over spheres, however, one must perform some sort of interpolation in order to compute them when data are stored on a cubic lattice. Misner (2004 Class. Quantum Grav. 21 S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid, which has been found in real applications to be both efficient and robust to the presence of mesh refinement boundaries. At the same time, however, practical applications of the algorithm require knowledge of how the truncation errors of the algorithm depend on the various parameters in the algorithm. Based on analytic arguments and experience using the algorithm in real numerical simulations, I explore these dependences and provide a rule of thumb for choosing the parameters based on the truncation errors of the underlying data. I also demonstrate that symmetries in the spherical harmonics themselves allow for an even more efficient implementation of the algorithm than was suggested by Misner in his original paper
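The core of such grid-to-sphere decompositions, interpolating lattice data onto a spherical quadrature grid and then integrating against a spherical harmonic, can be illustrated with a minimal sketch. This is plain trilinear interpolation with Gauss-Legendre quadrature, not Misner's algorithm; the grid size, spacing and test function are illustrative assumptions.

```python
import numpy as np

# Lattice data: f(x, y, z) = 1 + z has only Y_00 and Y_10 content, so its
# monopole coefficient over the unit sphere is exactly sqrt(4*pi).
n, h = 33, 0.125                          # grid size and spacing (assumed)
axis = (np.arange(n) - n // 2) * h        # axis spans [-2, 2]
_, _, Z = np.meshgrid(axis, axis, axis, indexing="ij")
data = 1.0 + Z

def trilinear(data, origin, h, pts):
    """Trilinearly interpolate lattice data at Cartesian points pts (m, 3)."""
    g = (pts - origin) / h                # fractional grid coordinates
    i0 = np.floor(g).astype(int)
    t = g - i0
    out = np.zeros(len(pts))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, t[:, 0], 1 - t[:, 0])
                     * np.where(dy, t[:, 1], 1 - t[:, 1])
                     * np.where(dz, t[:, 2], 1 - t[:, 2]))
                out += w * data[i0[:, 0] + dx, i0[:, 1] + dy, i0[:, 2] + dz]
    return out

# Unit-sphere quadrature: Gauss-Legendre in cos(theta), uniform in phi.
mu, wmu = np.polynomial.legendre.leggauss(16)
n_phi = 32
phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
MU, PHI = np.meshgrid(mu, phi, indexing="ij")
st = np.sqrt(1.0 - MU**2)
pts = np.column_stack([(st * np.cos(PHI)).ravel(),
                       (st * np.sin(PHI)).ravel(),
                       MU.ravel()])
weights = np.repeat(wmu, n_phi) * (2.0 * np.pi / n_phi)

vals = trilinear(data, axis[0], h, pts)
# a_00 = integral of f * Y_00 over the sphere, with Y_00 = 1/sqrt(4*pi)
a00 = float(np.sum(vals * weights)) / np.sqrt(4.0 * np.pi)
```

Because trilinear interpolation reproduces linear functions exactly and the quadrature is exact for low-degree polynomials, the recovered coefficient matches sqrt(4*pi) to machine precision; for real data the interpolation order sets the truncation error, which is the dependence the paper quantifies.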
Quark-gluon vertex dressing and meson masses beyond ladder-rainbow truncation
International Nuclear Information System (INIS)
Matevosyan, Hrayr H.; Thomas, Anthony W.; Tandy, Peter C.
2007-01-01
We include a generalized infinite class of quark-gluon vertex dressing diagrams in a study of how dynamics beyond the ladder-rainbow truncation influences the Bethe-Salpeter description of light-quark pseudoscalar and vector mesons. The diagrammatic specification of the vertex is mapped into a corresponding specification of the Bethe-Salpeter kernel, which preserves chiral symmetry. This study adopts the algebraic format afforded by the simple interaction kernel used in previous work on this topic. The new feature of the present work is that in every diagram summed for the vertex and the corresponding Bethe-Salpeter kernel, each quark-gluon vertex is required to be the self-consistent vertex solution. We also adopt from previous work the effective accounting for the role of the explicitly non-Abelian three-gluon coupling in a global manner through one parameter determined from recent lattice-QCD data for the vertex. Within the current model, the more consistent dressed vertex limits the ladder-rainbow truncation error for vector mesons to be never more than 10% as the current quark mass is varied from the u/d region to the b region
Social aspects of clinical errors.
Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave
2009-08-01
Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.
Hrnkova, Miroslava; Zilka, Norbert; Minichova, Zuzana; Koson, Peter; Novak, Michal
2007-01-26
Human truncated tau protein is an active constituent of the neurofibrillary degeneration in sporadic Alzheimer's disease. We have shown that modified tau protein, when expressed as a transgene in rats, induced the AD-characteristic tau cascade consisting of tau hyperphosphorylation, formation of argyrophilic tangles and sarcosyl-insoluble tau complexes. These pathological changes led to functional impairment characterized by a variety of neurobehavioural symptoms. In the present study we have focused on the behavioural alterations induced by transgenic expression of human truncated tau. Transgenic rats underwent a battery of behavioural tests involving cognitive- and sensorimotor-dependent tasks accompanied by neurological assessment at the ages of 4.5, 6 and 9 months. Behavioural examination of these rats showed altered spatial navigation in the Morris water maze, resulting in less time spent in the target quadrant, whereas open-field behaviour was not influenced by transgene expression. However, the beam walking test revealed that transgenic rats developed progressive sensorimotor disturbances related to the age of the tested animals. The disturbances were most pronounced at the age of 9 months (p<0.01). Neurological alterations indicating impaired reflex responses were a further feature of the behavioural phenotype of this novel transgenic rat. These results allow us to suggest that neurodegeneration, caused by the non-mutated human truncated tau derived from sporadic human AD, results in neuronal dysfunction, consequently leading to progressive neurobehavioural impairment.
Grosvenor, Anita J; Haigh, Brendan J; Dyer, Jolon M
2014-11-01
The extent to which nutritional and functional benefit is derived from proteins in food is related to their breakdown and digestion in the body after consumption. Further, detailed information about food protein truncation during digestion is critical to understanding and optimising the availability of bioactives, to controlling and limiting allergen release, and to minimising or monitoring the effects of processing and food preparation. However, tracking the complex array of products formed during the digestion of proteins is not easily accomplished using classical proteomics. Here we present and develop a novel proteomic approach using isobaric labelling to map and track protein truncation and peptide release during simulated gastric digestion, using bovine lactoferrin as a model food protein. The relative abundance of related peptides was tracked throughout a digestion time course, and the effect of pasteurisation on peptide release assessed. The new approach to food digestion proteomics developed here therefore appears to be highly suitable not only for tracking the truncation and relative abundance of released peptides during gastric digestion, but also for determining the effects of protein modification on digestibility and potential bioavailability.
The lamppost model: effects of photon trapping, the bottom lamp and disc truncation
Niedźwiecki, Andrzej; Zdziarski, Andrzej A.
2018-04-01
We study the lamppost model, in which the primary X-ray sources in accreting black-hole systems are located symmetrically on the rotation axis, on both sides of the black hole surrounded by an accretion disc. We show the importance of the emission of the source on the side opposite to the observer. Due to gravitational light bending, its emission can increase the direct (i.e., not re-emitted by the disc) flux by as much as an order of magnitude. This happens for nearly face-on observers when the disc is even moderately truncated. For truncated discs, we also consider the effects of emission of the top source gravitationally bent around the black hole. We also present results for the attenuation of the observed radiation with respect to that emitted by the lamppost as functions of the lamppost height, black-hole spin and the degree of disc truncation. This attenuation, which is due to time dilation, gravitational redshift and the loss of photons crossing the black-hole horizon, can be as severe as several orders of magnitude for low lamppost heights. We also consider the contribution to the observed flux due to re-emission by optically thick matter within the innermost stable circular orbit.
Varying coefficient subdistribution regression for left-truncated semi-competing risks data.
Li, Ruosha; Peng, Limin
2014-10-01
Semi-competing risks data frequently arise in biomedical studies when the time to a disease landmark event is subject to dependent censoring by death, the observation of which, however, is not precluded by the occurrence of the landmark event. In observational studies, the analysis of such data can be further complicated by left truncation. In this work, we study a varying coefficient subdistribution regression model for left-truncated semi-competing risks data. Our method appropriately accounts for the specific truncation and censoring features of the data and, moreover, has the flexibility to accommodate potentially varying covariate effects. The proposed method can be easily implemented, and the resulting estimators are shown to have nice asymptotic properties. We also present inference procedures, such as Kolmogorov-Smirnov-type and Cramér-von Mises-type hypothesis tests for the covariate effects. Simulation studies and an application to the Denmark diabetes registry demonstrate good finite-sample performance and practical utility of the proposed method.
Modifications of Geometric Truncation of the Scattering Phase Function
Radkevich, A.
2017-12-01
The phase function (PF) of light scattering on large atmospheric particles has a very strong peak in the forward direction, constituting a challenge for accurate numerical calculations of radiance. Such accurate (and fast) evaluations are important in problems of remote sensing of the atmosphere. A scaling transformation replaces the original PF with a sum of a delta function and a new regular, smooth PF. A number of methods to construct such a PF have been suggested. The delta-M and delta-fit methods require evaluation of the PF moments, which poses a numerical problem if a strongly anisotropic PF is given as a function of angle. Geometric truncation keeps the original PF unchanged outside the forward peak cone, replacing it with a constant within the cone. This approach is designed to preserve the asymmetry parameter. It has two disadvantages: 1) the PF has a discontinuity at the cone; 2) the choice of the cone is subjective, and no recommendations have been provided on the choice of the truncation angle. This choice affects both the truncation fraction and the value of the phase function within the forward cone. Both issues are addressed in this study. A simple functional form of the replacement PF is suggested. This functional form allows for a number of modifications; this study considers three versions that provide a continuous PF. The considered modifications each have one of three properties: preserving the asymmetry parameter, providing continuity of the first derivative of the PF, or preserving the mean scattering angle. The second problem mentioned above is addressed with a heuristic approach providing an unambiguous criterion for selection of the truncation angle. The approach showed good performance on liquid water and ice clouds with different particle size distributions. The suggested modifications were tested on different cloud PFs using both discrete ordinates and Monte Carlo methods. It was shown that the modifications provide better accuracy in the radiance computation compared to the original geometric truncation.
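As a minimal sketch of plain geometric truncation (not the paper's modified forms), the following uses a Henyey-Greenstein phase function as an assumed stand-in, replaces it with a constant inside the truncation cone, and computes the resulting delta-peak fraction and the asymmetry parameter of the smooth remainder; the asymmetry value and truncation angle are illustrative.

```python
import numpy as np

def trap(y, x):
    # simple trapezoid rule (keeps the sketch NumPy-version agnostic)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def hg_phase(mu, g):
    # Henyey-Greenstein PF, normalized so 0.5 * integral over mu equals 1
    return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5

def geometric_truncation(g=0.85, theta_t_deg=10.0, n=200001):
    """Constant PF inside the truncation cone; split the result into a
    delta-peak fraction and a renormalized smooth remainder."""
    mu = np.linspace(-1.0, 1.0, n)
    mu_t = np.cos(np.radians(theta_t_deg))
    p = np.where(mu > mu_t, hg_phase(mu_t, g), hg_phase(mu, g))
    norm = 0.5 * trap(p, mu)
    f_delta = 1.0 - norm                 # scattered fraction moved to the delta
    p_smooth = p / norm                  # renormalized smooth phase function
    g_smooth = 0.5 * trap(mu * p_smooth, mu)  # its asymmetry parameter
    return f_delta, g_smooth

f_delta, g_smooth = geometric_truncation()
```

This makes the two sensitivities discussed in the abstract concrete: both f_delta and g_smooth move with the chosen truncation angle, which is why an unambiguous selection criterion matters.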
Zhang, X.; Anagnostou, E. N.; Schwartz, C. S.
2017-12-01
Satellite precipitation products tend to have significant biases over complex terrain. Our research investigates a statistical approach for satellite precipitation adjustment based solely on numerical weather simulations. This approach has been evaluated in two mid-latitude (Zhang et al. 2013*1, Zhang et al. 2016*2) and three tropical mountainous regions by using the WRF model to adjust two high-resolution satellite products: i) the National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center morphing technique (CMORPH) and ii) the Global Satellite Mapping of Precipitation (GSMaP). Results show the adjustment effectively reduces the satellite underestimation of high rain rates, which provides a solid proof-of-concept for continuing research on NWP-based satellite correction. In this study we investigate the feasibility of using NCAR Real-time Ensemble Forecasts*3 for adjusting near-real-time satellite precipitation datasets over complex terrain areas in the Continental United States (CONUS), such as the Olympic Peninsula, the California coastal mountain ranges, the Rocky Mountains and the Southern Appalachians. The research will focus on flood-inducing storms that occurred from May 2015 to December 2016 and four satellite precipitation products (CMORPH, GSMaP, PERSIANN-CCS and IMERG). The error correction performance evaluation will be based on comparisons against the gauge-adjusted Stage IV precipitation data. *1 Zhang, Xinxuan, et al. "Using NWP simulations in satellite rainfall estimation of heavy precipitation events over mountainous areas." Journal of Hydrometeorology 14.6 (2013): 1844-1858. *2 Zhang, Xinxuan, et al. "Hydrologic Evaluation of NWP-Adjusted CMORPH Estimates of Hurricane-Induced Precipitation in the Southern Appalachians." Journal of Hydrometeorology 17.4 (2016): 1087-1099. *3 Schwartz, Craig S., et al. "NCAR's experimental real-time convection-allowing ensemble prediction system." Weather and Forecasting 30.6 (2015): 1645-1654.
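The abstract does not specify the adjustment scheme; as a hedged stand-in, quantile mapping against a model-simulated rain-rate distribution illustrates one simple way such a statistical correction can reduce underestimation of high rain rates (the gamma distributions and the 0.7 underestimation factor are invented for illustration, not taken from the study).

```python
import numpy as np

def quantile_map(sat, ref):
    """Map each satellite value onto the reference (here: model-simulated)
    distribution at the same empirical quantile."""
    ranks = np.searchsorted(np.sort(sat), sat, side="right") / len(sat)
    return np.quantile(ref, np.clip(ranks, 0.0, 1.0))

rng = np.random.default_rng(0)
ref = rng.gamma(2.0, 5.0, 5000)          # stand-in "NWP" rain rates, mm/h
sat = 0.7 * rng.gamma(2.0, 5.0, 5000)    # satellite underestimating high rates
adj = quantile_map(sat, ref)             # adjusted satellite rain rates
```

After mapping, the upper quantiles of the adjusted product track the reference distribution, which is the qualitative behavior the abstract reports for high rain rates.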
Directory of Open Access Journals (Sweden)
MA. Lendita Kryeziu
2015-06-01
Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reason why they are made, improve and move on. The significance of studying errors is described by Corder as: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the importance and the aim of this paper lie in analyzing errors in the process of second language acquisition and the way we teachers can benefit from mistakes to help students improve themselves while giving the proper feedback.
Compact disk error measurements
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross Interleaved - Reed - Solomon - Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
Truncated power control for improving TCP/IP performance over CDMA wireless links
DEFF Research Database (Denmark)
Cianca, Ernestina; Prasad, Ramjee; De Sanctis, Mauro
2005-01-01
The issue of the performance degradation of transmission control protocol/Internet Protocol (TCP/IP) over wireless links due to the presence of noncongestion-related packet losses has been addressed with a physical layer approach. The effectiveness of automatic repeat request techniques...... in enhancing TCP/IP performance depends on the tradeoff between frame transmission delay and residual errors after retransmissions. The paper shows how a truncated power control can be effectively applied to improve that tradeoff so that a higher transmission reliability is provided without increasing...... the frame transmission delay through the radio link layer and without increasing the energy consumption. An analytical framework has been developed to show the feasibility and effectiveness of the proposed power control. The analytical results, which are carried out assuming a constant multiuser...
He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie
2010-11-22
In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and the insufficient surface measurements in BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and combining this with a variable splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information on the permissible source region and multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.
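The IVTCG algorithm itself is not reproduced here; as a generic illustration of the ℓ1-regularized inverse problem it solves, the following sketch uses plain iterative soft-thresholding (ISTA) to recover a sparse "source" from underdetermined linear measurements (the system size, sparsity pattern and regularization weight are assumptions for the demo).

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=2000):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - y))   # gradient step on quadratic term
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # l1 shrink
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))    # far fewer measurements than unknowns
x_true = np.zeros(100)
x_true[[7, 42]] = [1.5, -2.0]         # a sparse "light source"
x_hat = ista(A, A @ x_true)           # noiseless measurements
```

Even with 40 measurements for 100 unknowns, the ℓ1 term drives the reconstruction onto the correct sparse support, which is the same property the quadratic-plus-ℓ1 formulation exploits in BLT.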
International Nuclear Information System (INIS)
Fanchon, L; Apte, A; Dzyubak, O; Mageras, G; Yorke, E; Solomon, S; Kirov, A; Visvikis, D; Hatt, M
2015-01-01
Purpose: PET/CT guidance is used for biopsies of metabolically active lesions, which are not well seen on CT alone, or to target the metabolically active tissue in tumor ablations. It has also been shown that PET/CT guided biopsies provide an opportunity to verify the location of the lesion border at the place of needle insertion. However, the error in needle placement with respect to the metabolically active region may be affected by motion between the PET/CT scan performed at the start of the procedure and the CT scan performed with the needle in place, and this error has not been previously quantified. Methods: Specimens from 31 PET/CT guided biopsies were investigated and correlated to the intraoperative PET scan under an IRB-approved, HIPAA-compliant protocol. For 4 of the cases, in which larger motion was suspected, a second PET scan was obtained with the needle in place. The CT and the PET images obtained before and after the needle insertion were used to calculate the displacement of the voxels along the needle path. CTpost was registered to CTpre using free-form deformable registration and then fused with PETpre. The shifts between the PET image contours (42% of SUVmax) for PETpre and PETpost were obtained at the needle position. Results: For these extreme cases the displacement of the CT voxels along the needle path ranged from 2.9 to 8 mm with a mean of 5 mm. The shift of the PET image segmentation contours (42% of SUVmax) at the needle position ranged from 2.3 to 7 mm between the two scans. Conclusion: Evaluation of the mis-registration between the CT with the needle in place and the pre-biopsy PET can be obtained using deformable registration of the respective CT scans and can be used to indicate the need for a second PET in real time. This work is supported in part by a grant from Biospace Lab, S.A.
Lethal mutants and truncated selection together solve a paradox of the origin of life.
Directory of Open Access Journals (Sweden)
David B Saakian
Full Text Available BACKGROUND: Many attempts have been made to describe the origin of life, one of which is Eigen's cycle of autocatalytic reactions [Eigen M (1971) Naturwissenschaften 58, 465-523], in which primordial life molecules are replicated with limited accuracy through autocatalytic reactions. For successful evolution, the information carrier (either RNA or DNA or their precursor) must be transmitted to the next generation with a minimal number of misprints. In Eigen's theory, the maximum chain length that could be maintained is restricted to 100-1000 nucleotides, while for the most primitive genome the length is around 7000-20,000. This is the famous error catastrophe paradox. How to solve this puzzle is an interesting and important problem in the theory of the origin of life. METHODOLOGY/PRINCIPAL FINDINGS: We use methods of statistical physics to solve this paradox by carefully analyzing the implications of neutral and lethal mutants, and of truncated selection (i.e., when fitness is zero beyond a certain Hamming distance from the master sequence), for the critical chain length. While neutral mutants play an important role in evolution, they do not provide a solution to the paradox. We have found that lethal mutants and truncated selection together can solve the error catastrophe paradox. There is a principal difference between the prebiotic molecule self-replication and proto-cell self-replication stages in the origin of life. CONCLUSIONS/SIGNIFICANCE: We have applied methods of statistical physics to make an important breakthrough in the molecular theory of the origin of life. Our results will inspire further studies on the molecular theory of the origin of life and biological evolution.
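Eigen's error threshold underlying the paradox can be stated in one line: selection maintains the master sequence only while the genomic error rate L*(1-q) stays below the log of the selective advantage. A minimal sketch, where the numerical values of sigma and the per-site error rate are illustrative assumptions:

```python
import math

def eigen_max_length(sigma, per_site_error):
    """Eigen's error-threshold estimate: the master sequence is maintained
    only while L * per_site_error < ln(sigma)."""
    return math.log(sigma) / per_site_error

# Illustrative (assumed) numbers: a selective advantage sigma = 20 and a
# prebiotic per-nucleotide copying error rate of about 5e-3 cap the genome
# at a few hundred nucleotides, far below the 7000-20,000 a primitive
# genome needs, which is exactly the paradox the paper addresses.
L_max = eigen_max_length(20.0, 5e-3)
```

Lethal mutants and truncated selection modify this bound because copies beyond the cutoff contribute nothing to the mutant background competing with the master sequence.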
Hellström, Åke; Rammsayer, Thomas H
2015-10-01
Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli. Also, this order affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the latter is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström's sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St-Co, Co-St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St-Co than for Co-St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.
Linear and Quadratic Interpolators Using Truncated-Matrix Multipliers and Squarers
Directory of Open Access Journals (Sweden)
E. George Walters III
2015-11-01
Full Text Available This paper presents a technique for designing linear and quadratic interpolators for function approximation using truncated multipliers and squarers. Initial coefficient values are found using a Chebyshev-series approximation and then adjusted through exhaustive simulation to minimize the maximum absolute error of the interpolator output. This technique is suitable for any function and any precision up to 24 bits (IEEE single precision). Designs for linear and quadratic interpolators that implement the 1/x, 1/√x, log2(1+2^x), log2(x) and 2^x functions are presented and analyzed as examples. Results show that a proposed 24-bit interpolator computing 1/x with a design specification of ±1 unit in the last place of the product (ulp) error uses 16.4% less area and 15.3% less power than a comparable standard interpolator with the same error specification. Sixteen-bit linear interpolators for other functions are shown to use up to 17.3% less area and 12.1% less power, and 16-bit quadratic interpolators are shown to use up to 25.8% less area and 24.7% less power.
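The coefficient-table idea behind such interpolators can be sketched in software (this ignores the truncated-matrix multiplier hardware and fixed-point effects that are the paper's focus; the segment count, fitting grid and least-squares fitting, rather than Chebyshev plus exhaustive adjustment, are assumptions of the sketch):

```python
import numpy as np

def build_segments(func, lo=1.0, hi=2.0, n_seg=256):
    """One (offset, c0, c1) linear fit per equal-width segment."""
    edges = np.linspace(lo, hi, n_seg + 1)
    coeffs = []
    for a, b in zip(edges[:-1], edges[1:]):
        x = np.linspace(a, b, 64)                 # fitting grid (assumption)
        c1, c0 = np.polyfit(x - a, func(x), 1)    # least-squares, not minimax
        coeffs.append((a, c0, c1))
    return coeffs, (hi - lo) / n_seg

def interp(coeffs, width, x):
    i = min(int((x - coeffs[0][0]) / width), len(coeffs) - 1)
    a, c0, c1 = coeffs[i]
    return c0 + c1 * (x - a)

coeffs, w = build_segments(lambda x: 1.0 / x)     # reciprocal on [1, 2)
xs = np.linspace(1.0, 2.0, 20001)[:-1]
max_err = max(abs(interp(coeffs, w, x) - 1.0 / x) for x in xs)
```

With 256 segments the worst-case approximation error here is a few parts in 10^6; the paper's exhaustive coefficient adjustment then trades such approximation error against the additional error introduced by truncated multipliers and squarers.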
Propagation of truncated modified Laguerre-Gaussian beams
Deng, D.; Li, J.; Guo, Q.
2010-01-01
By expanding the circ function into a finite sum of complex Gaussian functions and applying the Collins formula, the propagation of hard-edge diffracted modified Laguerre-Gaussian beams (MLGBs) through a paraxial ABCD system is studied, and an approximate closed-form propagation expression for hard-edge diffracted MLGBs is obtained. The transverse intensity distribution of an MLGB carrying finite power can be characterized by a single bright and symmetric ring during propagation when the aperture radius is very large. Starting from the definition of the generalized truncated second-order moments, the beam quality factor of MLGBs passing through a hard-edged circular aperture is investigated in a cylindrical coordinate system, which turns out to depend on the truncation radius and the beam orders.
Rotating D0-branes and consistent truncations of supergravity
International Nuclear Information System (INIS)
Anabalón, Andrés; Ortiz, Thomas; Samtleben, Henning
2013-01-01
The fluctuations around the D0-brane near-horizon geometry are described by two-dimensional SO(9) gauged maximal supergravity. We work out the U(1)^4 truncation of this theory, whose scalar sector consists of five dilaton and four axion fields. We construct the full non-linear Kaluza–Klein ansatz for the embedding of the dilaton sector into type IIA supergravity. This yields a consistent truncation around a geometry which is the warped product of a two-dimensional domain wall and the sphere S^8. As an application, we consider the solutions corresponding to rotating D0-branes, which in the near-horizon limit approach AdS_2 × M_8 geometries, and discuss their thermodynamical properties. More generally, we study the appearance of such solutions in the presence of non-vanishing axion fields.
Intersection spaces, spatial homology truncation, and string theory
Banagl, Markus
2010-01-01
Intersection cohomology assigns groups which satisfy a generalized form of Poincaré duality over the rationals to a stratified singular space. The present monograph introduces a method that assigns to certain classes of stratified spaces cell complexes, called intersection spaces, whose ordinary rational homology satisfies generalized Poincaré duality. The cornerstone of the method is a process of spatial homology truncation, whose functoriality properties are analyzed in detail. The material on truncation is autonomous and may be of independent interest to homotopy theorists. The cohomology of intersection spaces is not isomorphic to intersection cohomology and possesses algebraic features such as perversity-internal cup-products and cohomology operations that are not generally available for intersection cohomology. A mirror-symmetric interpretation, as well as applications to string theory concerning massless D-branes arising in type IIB theory during a Calabi-Yau conifold transition, are discussed.
Wang, Chia-Yih; Carriquiry, Alicia L; Chen, Te-Ching; Loria, Catherine M; Pfeiffer, Christine M; Liu, Kiang; Sempos, Christopher T; Perrine, Cria G; Cogswell, Mary E
2015-05-01
High US sodium intake and national reduction efforts necessitate developing a feasible and valid monitoring method across the distribution of low-to-high sodium intake. We examined a statistical approach using timed urine voids to estimate the population distribution of usual 24-h sodium excretion. A sample of 407 adults, aged 18-39 y (54% female, 48% black), collected each void in a separate container for 24 h; 133 repeated the procedure 4-11 d later. Four timed voids (morning, afternoon, evening, overnight) were selected from each 24-h collection. We developed gender-specific equations to calibrate total sodium excreted in each of the one-void (e.g., morning) and combined two-void (e.g., morning + afternoon) urines to 24-h sodium excretion. The calibrated sodium excretions were used to estimate the population distribution of usual 24-h sodium excretion. Participants were then randomly assigned to modeling (n = 160) or validation (n = 247) groups to examine the bias in estimated population percentiles. Median bias in predicting selected percentiles (5th, 25th, 50th, 75th, 95th) of usual 24-h sodium excretion with one-void urines ranged from -367 to 284 mg (-7.7 to 12.2% of the observed usual excretions) for men and -604 to 486 mg (-14.6 to 23.7%) for women, and with two-void urines from -338 to 263 mg (-6.9 to 10.4%) and -166 to 153 mg (-4.1 to 8.1%), respectively. Four of the 6 two-void urine combinations produced no significant bias in predicting selected percentiles. Our approach to estimate the population usual 24-h sodium excretion, which uses calibrated timed-void sodium to account for day-to-day variation and covariance between measurement errors, produced percentile estimates with relatively low biases across low-to-high sodium excretions. This may provide a low-burden, low-cost alternative to 24-h collections in monitoring population sodium intake among healthy young adults and merits further investigation in other population subgroups. © 2015 American
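The calibration idea, fitting an equation relating timed-void sodium to 24-h excretion in a modeling group and then using it to estimate distribution percentiles in a validation group, can be sketched on synthetic data (all numbers below, including the excretion mean/SD and the void's share of daily excretion, are invented for illustration and are not the study's equations):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 407                                            # sample size as in the study
usual24 = rng.normal(3400.0, 900.0, n)             # usual 24-h excretion, mg (invented)
void = 0.25 * usual24 + rng.normal(0.0, 150.0, n)  # one timed void: noisy share

model = np.arange(n) < 160                         # modeling / validation split
b1, b0 = np.polyfit(void[model], usual24[model], 1)  # calibration equation
pred = b0 + b1 * void[~model]                      # calibrated 24-h estimates

# bias of a predicted percentile vs. the observed one in the validation group
bias50 = np.percentile(pred, 50) - np.percentile(usual24[~model], 50)
```

Central percentiles calibrate well in this toy version; tail percentiles are harder because regression predictions shrink the spread, which is one reason the study evaluates bias at the 5th and 95th percentiles separately.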
Statistical errors in Monte Carlo estimates of systematic errors
Roe, Byron P.
2007-01-01
For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
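The unisim/multisim comparison above can be illustrated on a toy linear model where the exact systematic variance is known in closed form. The sensitivities, run count, and statistical noise level below are invented for illustration; this is a sketch of the two variation schemes, not the derivation in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: observable = sum_i a_i * s_i + statistical noise, with k
# systematic parameters s_i ~ N(0, 1) and known sensitivities a_i.
# (All numbers here are assumptions for the sketch, not from the paper.)
a = np.array([0.5, 1.0, 2.0])   # sensitivity to each systematic parameter
n_runs = 2000                   # multisim MC runs
stat_sigma = 0.3                # statistical error of one MC run

def observable(shifts):
    """Observable for given systematic shifts, plus MC statistical noise."""
    return a @ shifts + rng.normal(0.0, stat_sigma)

k = len(a)
nominal = 0.0

# Unisim: vary one parameter at a time by +1 sigma, one MC run each,
# and add the resulting shifts in quadrature.
unisim_var = 0.0
for i in range(k):
    shifts = np.zeros(k)
    shifts[i] = 1.0
    unisim_var += (observable(shifts) - nominal) ** 2

# Multisim: every run varies all parameters at once, drawn from their
# assumed (normal) distributions; the spread of the results estimates
# the total systematic variance.
deltas = np.array([observable(rng.normal(0.0, 1.0, k)) - nominal
                   for _ in range(n_runs)])
multisim_var = deltas.var()

true_var = (a**2).sum()   # exact systematic variance of this linear model
print(unisim_var, multisim_var, true_var)
```

In this toy setup both estimates land near the exact value because the per-run statistical error is small compared with the individual systematic shifts; inflating `stat_sigma` degrades the unisim estimate first, consistent with the comparison described in the abstract.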
Generation of truncated recombinant form of tumor necrosis factor ...
African Journals Online (AJOL)
Purpose: To produce truncated recombinant form of tumor necrosis factor receptor 1 (TNFR1), cysteine-rich domain 2 (CRD2) and CRD3 regions of the receptor were generated using pET28a and E. coli/BL21. Methods: DNA coding sequence of CRD2 and CRD3 was cloned into pET28a vector and the corresponding ...
Dual scan CT image recovery from truncated projections
Sarkar, Shubhabrata; Wahi, Pankaj; Munshi, Prabhat
2017-12-01
There are computerized tomography (CT) scanners available commercially for imaging small objects and they are often categorized as mini-CT X-ray machines. One major limitation of these machines is their inability to scan large objects with good image quality because of the truncation of projection data. An algorithm is proposed in this work which enables such machines to scan large objects while maintaining the quality of the recovered image.
Filter Factors of Truncated TLS Regularization with Multiple Observations
Czech Academy of Sciences Publication Activity Database
Hnětynková, I.; Plešinger, Martin; Žáková, J.
2017-01-01
Roč. 62, č. 2 (2017), s. 105-120 ISSN 0862-7940 R&D Projects: GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords : truncated total least squares * multiple right-hand sides * eigenvalues of rank-d update * ill-posed problem * regularization * filter factors Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 0.618, year: 2016 http://hdl.handle.net/10338.dmlcz/146698
Patra, M.; Karttunen, M.E.J.; Hyvönen, M.T.; Falck, E.; Lindqvist, P.; Vattulainen, I.
2003-01-01
We study the influence of truncating the electrostatic interactions in a fully hydrated pure dipalmitoylphosphatidylcholine (DPPC) bilayer through 20 ns molecular dynamics simulations. The computations in which the electrostatic interactions were truncated are compared to similar simulations using
A protein-truncating R179X variant in RNF186 confers protection against ulcerative colitis
Rivas, Manuel A.; Graham, Daniel; Sulem, Patrick; Stevens, Christine; Desch, A. Nicole; Goyette, Philippe; Gudbjartsson, Daniel; Jonsdottir, Ingileif; Thorsteinsdottir, Unnur; Degenhardt, Frauke; Mucha, Soeren; Kurki, Mitja I.; Li, Dalin; D'Amato, Mauro; Annese, Vito; Vermeire, Severine; Weersma, Rinse K.; Halfvarson, Jonas; Paavola-Sakki, Paulina; Lappalainen, Maarit; Lek, Monkol; Cummings, Beryl; Tukiainen, Taru; Haritunians, Talin; Halme, Leena; Koskinen, Lotta L. E.; Ananthakrishnan, Ashwin N.; Luo, Yang; Heap, Graham A.; Visschedijk, Marijn C.; MacArthur, Daniel G.; Neale, Benjamin M.; Ahmad, Tariq; Anderson, Carl A.; Brant, Steven R.; Duerr, Richard H.; Silverberg, Mark S.; Cho, Judy H.; Palotie, Aarno; Saavalainen, Paivi; Kontula, Kimmo; Farkkila, Martti; McGovern, Dermot P. B.; Franke, Andre; Stefansson, Kari; Rioux, John D.; Xavier, Ramnik J.; Daly, Mark J.
Protein-truncating variants protective against human disease provide in vivo validation of therapeutic targets. Here we used targeted sequencing to conduct a search for protein-truncating variants conferring protection against inflammatory bowel disease exploiting knowledge of common variants
Modeling coherent errors in quantum error correction
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n - 1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
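The quadratic-versus-linear accumulation that lets coherent errors outpace a Pauli model can already be seen on a single unencoded qubit. The sketch below (with an invented rotation angle and cycle count, and no error correction) compares n identical coherent rotations by ε, which compose to a single rotation by nε, against the Pauli twirl of the same channel, where flip probabilities accumulate incoherently; it is a minimal illustration, not the repetition-code analysis of the paper.

```python
import numpy as np

# Coherent accumulation: rotation amplitudes add, so after n steps the
# flip probability is sin^2(n * eps) ~ n^2 eps^2 for small angles.
# Pauli (twirled) accumulation: each step flips with probability
# sin^2(eps), so the flip probability grows only like ~ n eps^2.
eps = 0.01   # per-step rotation error (assumed value)
n = 50       # number of cycles (assumed value)

p_coherent = np.sin(n * eps) ** 2
p_pauli = 1 - (1 - np.sin(eps) ** 2) ** n

print(p_coherent, p_pauli)   # coherent error dominates by a large factor
```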
Evidence for Truncated Exponential Probability Distribution of Earthquake Slip
Thingbaijam, Kiran Kumar; Mai, Paul Martin
2016-01-01
Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
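A truncated exponential slip law is easy to sample and to check against its closed-form mean. The scale and truncation point below are invented placeholders, not SRCMOD fits; the sketch shows inverse-CDF sampling of the truncated law and compares the sample mean with the analytic value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Truncated exponential on [0, s_max]: p(s) ∝ exp(-s/s0), 0 <= s <= s_max,
# where s_max plays the role of the physical upper bound on slip.
s0, s_max = 1.0, 3.0   # assumed scale and truncation point (m)

def sample_truncexp(n):
    """Inverse-CDF sampling of the truncated exponential."""
    u = rng.uniform(0.0, 1.0, n)
    c = 1.0 - np.exp(-s_max / s0)      # probability mass on [0, s_max]
    return -s0 * np.log(1.0 - c * u)

slip = sample_truncexp(200_000)

# Closed-form mean of the truncated law, for comparison:
c = 1.0 - np.exp(-s_max / s0)
mean_theory = s0 - s_max * np.exp(-s_max / s0) / c
print(slip.mean(), mean_theory)
```

Note that all samples respect the truncation bound by construction, which is the defining property distinguishing this law from the plain exponential.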
Truncatable bootstrap equations in algebraic form and critical surface exponents
Energy Technology Data Exchange (ETDEWEB)
Gliozzi, Ferdinando [Dipartimento di Fisica, Università di Torino and Istituto Nazionale di Fisica Nucleare - sezione di Torino, Via P. Giuria 1, Torino, I-10125 (Italy)
2016-10-10
We describe examples of drastic truncations of conformal bootstrap equations encoding much more information than that obtained by a direct numerical approach. A three-term truncation of the four point function of a free scalar in any space dimensions provides algebraic identities among conformal block derivatives which generate the exact spectrum of the infinitely many primary operators contributing to it. In boundary conformal field theories, we point out that the appearance of free parameters in the solutions of bootstrap equations is not an artifact of truncations, rather it reflects a physical property of permeable conformal interfaces which are described by the same equations. Surface transitions correspond to isolated points in the parameter space. We are able to locate them in the case of 3d Ising model, thanks to a useful algebraic form of 3d boundary bootstrap equations. It turns out that the low-lying spectra of the surface operators in the ordinary and the special transitions of 3d Ising model form two different solutions of the same polynomial equation. Their interplay yields an estimate of the surface renormalization group exponents, y_h = 0.72558(18) for the ordinary universality class and y_h = 1.646(2) for the special universality class, which compare well with the most recent Monte Carlo calculations. Estimates of other surface exponents as well as OPE coefficients are also obtained.
Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars
2018-02-01
The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. The selection of the integration
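The order dependence of global truncation error described above can be reproduced on a scalar test problem. The sketch below compares explicit Euler (order 1) with classical RK4 (order 4) on y' = -y with exact solution e^{-t}; it is a generic illustration of scheme order, not the MPTRAC advection module.

```python
import numpy as np

def euler_step(f, t, y, h):
    """One explicit Euler step (first order)."""
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, f, y0, t_end, h):
    """March from t = 0 to t_end with fixed step size h."""
    t, y = 0.0, y0
    while t < t_end - 1e-12:
        y = step(f, t, y, h)
        t += h
    return y

f = lambda t, y: -y
exact = np.exp(-1.0)

err_euler = abs(integrate(euler_step, f, 1.0, 1.0, 0.01) - exact)
err_rk4 = abs(integrate(rk4_step, f, 1.0, 1.0, 0.01) - exact)
print(err_euler, err_rk4)
```

At the same step size the RK4 global error is many orders of magnitude below the Euler error, which is why higher-order schemes of similar cost per step dominate the accuracy groups mentioned in the abstract.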
Errors in clinical laboratories or errors in laboratory medicine?
Plebani, Mario
2006-01-01
Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes
Creel, Scott; Creel, Michael
2009-11-01
1. Sampling error in annual estimates of population size creates two widely recognized problems for the analysis of population growth. First, if sampling error is mistakenly treated as process error, one obtains inflated estimates of the variation in true population trajectories (Staples, Taper & Dennis 2004). Second, treating sampling error as process error is thought to overestimate the importance of density dependence in population growth (Viljugrein et al. 2005; Dennis et al. 2006). 2. In ecology, state-space models are used to account for sampling error when estimating the effects of density and other variables on population growth (Staples et al. 2004; Dennis et al. 2006). In econometrics, regression with instrumental variables is a well-established method that addresses the problem of correlation between regressors and the error term, but requires fewer assumptions than state-space models (Davidson & MacKinnon 1993; Cameron & Trivedi 2005). 3. We used instrumental variables to account for sampling error and fit a generalized linear model to 472 annual observations of population size for 35 Elk Management Units in Montana, from 1928 to 2004. We compared this model with state-space models fit with the likelihood function of Dennis et al. (2006). We discuss the general advantages and disadvantages of each method. Briefly, regression with instrumental variables is valid with fewer distributional assumptions, but state-space models are more efficient when their distributional assumptions are met. 4. Both methods found that population growth was negatively related to population density and winter snow accumulation. Summer rainfall and wolf (Canis lupus) presence had much weaker effects on elk (Cervus elaphus) dynamics [though limitation by wolves is strong in some elk populations with well-established wolf populations (Creel et al. 2007; Creel & Christianson 2008)]. 5. Coupled with predictions for Montana from global and regional climate models, our results
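The instrumental-variables idea in point 2 can be sketched with a toy measurement-error model: ordinary least squares on an error-contaminated regressor is attenuated toward zero, while two-stage least squares with a valid instrument recovers the slope. All numbers below are synthetic; this is not the elk data set analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 50_000
beta = -0.5                             # true (density-dependence-like) slope
z = rng.normal(size=n)                  # instrument: correlated with the true
                                        # state, independent of sampling error
x_true = z + rng.normal(size=n)         # true population state
x_obs = x_true + rng.normal(size=n)     # state observed with sampling error
y = beta * x_true + rng.normal(scale=0.5, size=n)

# OLS of y on the noisy regressor is biased toward zero (attenuation):
beta_ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs)

# 2SLS: first stage projects x_obs on z; second stage regresses y on the
# fitted values, which are purged of the measurement error.
x_hat = z * (np.cov(z, x_obs)[0, 1] / np.var(z))
beta_iv = np.cov(x_hat, y)[0, 1] / np.var(x_hat)

print(beta_ols, beta_iv)
```

Here `beta_ols` shrinks toward zero while `beta_iv` sits near the true -0.5, mirroring why treating sampling error as process error distorts inferences about density dependence.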
Clock error models for simulation and estimation
International Nuclear Information System (INIS)
Meditch, J.S.
1981-10-01
Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction
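A minimal version of such a clock error model is a two-state linear system (phase offset and frequency offset) tracked by a Kalman filter, matching the simulate-then-estimate structure the abstract describes. The noise intensities below are invented placeholders, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(3)

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # phase integrates frequency
H = np.array([[1.0, 0.0]])              # only the phase is measured
Q = np.diag([1e-6, 1e-8])               # oscillator (process) noise, assumed
R = np.array([[1e-2]])                  # measurement noise, assumed

x_true = np.zeros(2)                    # true [phase, frequency] offsets
x_est = np.zeros(2)                     # filter estimate
P = np.eye(2)                           # estimate covariance

errs = []
for _ in range(500):
    # Simulate the clock and a noisy phase measurement.
    x_true = F @ x_true + rng.multivariate_normal([0.0, 0.0], Q)
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), 1)

    # Kalman predict / update.
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + (K @ (z - H @ x_est)).ravel()
    P = (np.eye(2) - K @ H) @ P

    errs.append(x_true[0] - x_est[0])

rms_filtered = np.sqrt(np.mean(np.square(errs)))
print(rms_filtered)
```

Because the filter averages many measurements while tracking the slow frequency drift, the filtered phase error ends up well below the raw per-measurement noise of 0.1.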
DEFF Research Database (Denmark)
Rinalducci, Sara; Campostrini, Natascia; Antonioli, Paolo
2005-01-01
Different spot profiles were observed in 2D gel electrophoresis of thylakoid membranes performed either under complete darkness or by leaving the sample for a short time to low visible light. In the latter case, a large number of new spots with lower molecular masses, ranging between 15,000 and 25,000 Da, were observed, and high-molecular-mass aggregates, seen as a smearing in the upper part of the gel, appeared in the region around 250 kDa. Identification of protein(s) contained in these new spots by MS/MS revealed that most of them are simply truncated proteins deriving from native ones...
Unified theory of fermion pair to boson mappings in full and truncated spaces
International Nuclear Information System (INIS)
Ginocchio, J.N.; Johnson, C.W.
1995-01-01
After a brief review of various mappings of fermion pairs to bosons, we rigorously derive a general approach. Following the methods of Marumori and Otsuka, Arima, and Iachello, our approach begins with mapping states and constructs boson representations that preserve fermion matrix elements. In several cases these representations factor into finite, Hermitian boson images times a projection or norm operator that embodies the Pauli principle. We pay particular attention to truncated boson spaces, and describe general methods for constructing Hermitian and approximately finite boson image Hamiltonians. This method is akin to that of Otsuka, Arima, and Iachello introduced in connection with the interacting boson model, but is more rigorous, general, and systematic
Acceptance Sampling Plans Based on Truncated Life Tests for Sushila Distribution
Directory of Open Access Journals (Sweden)
Amer Ibrahim Al-Omari
2018-03-01
Full Text Available An acceptance sampling plan problem based on truncated life tests, when the lifetime follows a Sushila distribution, is considered in this paper. For various acceptance numbers, confidence levels and values of the ratio between fixed experiment time and particular mean lifetime, the minimum sample sizes required to ascertain a specified mean life were found. The operating characteristic function values of the suggested sampling plans and the producer’s risk are presented. Some tables are provided and the results are illustrated by an example of a real data set.
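The minimum-sample-size computation described above reduces to a binomial search once the per-item failure probability at the truncation time is fixed by the lifetime model. Since the Sushila CDF is not reproduced here, the sketch below substitutes an exponential lifetime as a stand-in; the acceptance logic (accept the lot if at most c items fail before the test is truncated) is the same.

```python
from math import comb, exp

def accept_prob(n, c, p):
    """P(at most c failures among n items): binomial acceptance probability."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def min_sample_size(c, p, conf):
    """Smallest n whose acceptance probability is <= 1 - conf at failure
    probability p, i.e. a bad lot is accepted with probability < 1 - conf."""
    n = c + 1
    while accept_prob(n, c, p) > 1 - conf:
        n += 1
    return n

# Test truncated at t = 0.5 * specified mean life; exponential stand-in
# gives per-item failure probability p = 1 - exp(-t/mu). Illustrative only.
p = 1 - exp(-0.5)
n_needed = min_sample_size(c=2, p=p, conf=0.95)
print(n_needed)
```

Swapping in the Sushila CDF for `p` would reproduce the structure of the tables in the paper: one minimum n per (acceptance number, confidence level, time ratio) triple.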
DEFF Research Database (Denmark)
Christiansen, Niels Hørbye; Voie, Per Erlend Torbergsen; Høgsberg, Jan Becker
2015-01-01
simultaneously, this method is very demanding in terms of numerical efficiency and computational power. Therefore, this method has not yet proved to be feasible. It has recently been shown how a hybrid method combining classical numerical models and artificial neural networks (ANN) can provide a dramatic...... prior to the experiment and with a properly trained ANN it is no problem to obtain accurate simulations much faster than real time-without any need for large computational capacity. The present study demonstrates how this hybrid method can be applied to the active truncated experiments yielding a system...
Statistical errors in Monte Carlo estimates of systematic errors
Energy Technology Data Exchange (ETDEWEB)
Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu
2007-01-01
For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
Learning from prescribing errors
Dean, B
2002-01-01
The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...
LGI2 truncation causes a remitting focal epilepsy in dogs.
Directory of Open Access Journals (Sweden)
Eija H Seppälä
2011-07-01
Full Text Available One quadrillion synapses are laid in the first two years of postnatal construction of the human brain, which are then pruned until age 10 to 500 trillion synapses composing the final network. Genetic epilepsies are the most common neurological diseases with onset during pruning, affecting 0.5% of 2-10-year-old children, and these epilepsies are often characterized by spontaneous remission. We previously described a remitting epilepsy in the Lagotto romagnolo canine breed. Here, we identify the gene defect and affected neurochemical pathway. We reconstructed a large Lagotto pedigree of around 34 affected animals. Using genome-wide association in 11 discordant sib-pairs from this pedigree, we mapped the disease locus to a 1.7 Mb region of homozygosity in chromosome 3 where we identified a protein-truncating mutation in the Lgi2 gene, a homologue of the human epilepsy gene LGI1. We show that LGI2, like LGI1, is neuronally secreted and acts on metalloproteinase-lacking members of the ADAM family of neuronal receptors, which function in synapse remodeling, and that LGI2 truncation, like LGI1 truncations, prevents secretion and ADAM interaction. The resulting epilepsy onsets at around seven weeks (equivalent to human two years), and remits by four months (human eight years), versus onset after age eight in the majority of human patients with LGI1 mutations. Finally, we show that Lgi2 is expressed highly in the immediate post-natal period until halfway through pruning, unlike Lgi1, which is expressed in the latter part of pruning and beyond. LGI2 acts at least in part through the same ADAM receptors as LGI1, but earlier, ensuring electrical stability (absence of epilepsy) during pruning years, preceding this same function performed by LGI1 in later years. LGI2 should be considered a candidate gene for common remitting childhood epilepsies, and LGI2-to-LGI1 transition for mechanisms of childhood epilepsy remission.
Directory of Open Access Journals (Sweden)
Rampratap S. Kushwaha
2004-01-01
Full Text Available The present studies were conducted to determine whether a synthetic truncated apoC-I peptide that inhibits CETP activity in baboons would raise plasma HDL cholesterol levels in nonhuman primates with low HDL levels. We used 2 cynomolgus monkeys and 3 baboons fed a cholesterol- and fat-enriched diet. In cynomolgus monkeys, we injected synthetic truncated apoC-I inhibitor peptide at a dose of 20 mg/kg and, in baboons, at doses of 10, 15, and 20 mg/kg at weekly intervals. Blood samples were collected 3 times a week and VLDL + LDL and HDL cholesterol concentrations were measured. In cynomolgus monkeys, administration of the inhibitor peptide caused a rapid decrease in VLDL + LDL cholesterol concentrations (30%–60%) and an increase in HDL cholesterol concentrations (10%–20%). VLDL + LDL cholesterol concentrations returned to baseline levels in approximately 15 days. In baboons, administration of the synthetic inhibitor peptide caused a decrease in VLDL + LDL cholesterol (20%–60%) and an increase in HDL cholesterol (10%–20%). VLDL + LDL cholesterol returned to baseline levels by day 21, whereas HDL cholesterol concentrations remained elevated for up to 26 days. ApoA-I concentrations increased, whereas apoE and triglyceride concentrations decreased. Subcutaneous and intravenous administrations of the inhibitor peptide had similar effects on LDL and HDL cholesterol concentrations. There was no change in body weight, food consumption, or plasma IgG levels of any baboon during the study. These studies suggest that the truncated apoC-I peptide can be used to raise HDL in humans.
On the propagation of truncated localized waves in dispersive silica
Salem, Mohamed
2010-01-01
Propagation characteristics of truncated Localized Waves propagating in dispersive silica and free space are numerically analyzed. It is shown that those characteristics are affected by the changes in the relation between the transverse spatial spectral components and the wave vector. Numerical experiments demonstrate that as the non-linearity of this relation gets stronger, the pulses propagating in silica become more immune to decay and distortion whereas the pulses propagating in free-space suffer from early decay and distortion. © 2010 Optical Society of America.
Truncated conformal space approach to scaling Lee-Yang model
International Nuclear Information System (INIS)
Yurov, V.P.; Zamolodchikov, Al.B.
1989-01-01
A numerical approach to 2D relativistic field theories is suggested. Considering a field theory model as an ultraviolet conformal field theory perturbed by a suitable relevant scalar operator, one studies it in finite volume (on a circle). The perturbed Hamiltonian acts in the conformal field theory space of states and its matrix elements can be extracted from the conformal field theory. Truncation of the space at a reasonable level results in a finite-dimensional problem for numerical analysis. The nonunitary field theory with the ultraviolet region controlled by the minimal conformal theory μ(2/5) is studied in detail. 9 refs.; 17 figs
Error Estimation and Accuracy Improvements in Nodal Transport Methods
International Nuclear Information System (INIS)
Zamonsky, O.M.
2000-01-01
The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until present. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.
Symmetric truncations of the shallow-water equations
International Nuclear Information System (INIS)
Rouhi, A.; Abarbanel, H.D.I.
1993-01-01
Conservation of potential vorticity in Eulerian fluids reflects particle interchange symmetry in the Lagrangian fluid version of the same theory. The algebra associated with this symmetry in the shallow-water equations is studied here, and we give a method for truncating the degrees of freedom of the theory which preserves a maximal number of invariants associated with this algebra. The finite-dimensional symmetry associated with keeping only N modes of the shallow-water flow is SU(N). In the limit where the number of modes goes to infinity (N→∞) all the conservation laws connected with potential vorticity conservation are recovered. We also present a Hamiltonian which is invariant under this truncated symmetry and which reduces to the familiar shallow-water Hamiltonian when N→∞. All this provides a finite-dimensional framework for numerical work with the shallow-water equations which preserves not only energy and enstrophy but all other known conserved quantities consistent with the finite number of degrees of freedom. The extension of these ideas to other nearly two-dimensional flows is discussed
Learning Mixtures of Truncated Basis Functions from Data
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre; Pérez-Bernabé, Inmaculada
2014-01-01
In this paper we investigate methods for learning hybrid Bayesian networks from data. First we utilize a kernel density estimate of the data in order to translate the data into a mixture of truncated basis functions (MoTBF) representation using a convex optimization technique. We also propose an alternative learning method that relies on the cumulative distribution function of the data. Empirical results demonstrate the usefulness of the approaches: even though the methods produce estimators that are slightly poorer than the state of the art (in terms of log-likelihood), they are significantly faster, and therefore indicate that the MoTBF framework can be used for inference and learning in reasonably sized domains. Furthermore, we show how a particular subclass of MoTBF potentials (learnable by the proposed methods) can be exploited to significantly reduce complexity during inference.
International Nuclear Information System (INIS)
Anon.
1991-01-01
This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
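The probability ellipse mentioned above can be computed directly from a 2-D error covariance matrix: the ellipse containing the error with probability p has semi-axes sqrt(s * eigenvalue), where s = -2 ln(1 - p) is the chi-square quantile with two degrees of freedom. A minimal sketch (function name and numbers are illustrative):

```python
import numpy as np

def error_ellipse(cov, p=0.95):
    """Semi-major/minor axes and orientation of the ellipse that
    contains a zero-mean 2-D Gaussian error with probability p."""
    s = -2.0 * np.log(1.0 - p)            # chi-square quantile, 2 dof
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    a, b = np.sqrt(s * vals[::-1])        # semi-major, semi-minor axis
    theta = np.arctan2(vecs[1, -1], vecs[0, -1])  # major-axis angle
    return a, b, theta

a, b, theta = error_ellipse(np.array([[4.0, 0.0], [0.0, 1.0]]))
```

For a circular error distribution (equal variances, zero correlation) the ellipse degenerates to the probability circle discussed in the chapter.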
International Nuclear Information System (INIS)
Picard, R.R.
1989-01-01
Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
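The propagation-of-error computations the chapter covers follow the standard first-order (delta-method) formula for independent measurements, Var(f) ≈ Σ (∂f/∂xᵢ)² Var(xᵢ). A minimal sketch with an invented materials balance (all variances hypothetical):

```python
import math

def propagate_variance(grads, variances):
    """First-order variance of f(x1, ..., xn) for independent inputs:
    Var(f) ~= sum of (df/dxi)^2 * Var(xi)."""
    return sum(g * g * v for g, v in zip(grads, variances))

# materials balance MB = receipts - shipments - inventory_change,
# so every partial derivative is +1 or -1
var_mb = propagate_variance([1.0, -1.0, -1.0], [0.04, 0.09, 0.01])
sigma_mb = math.sqrt(var_mb)
```

Because the balance is linear, the first-order formula is exact here; for nonlinear quantities such as a concentration times a mass it is only a leading-order approximation.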
Martínez-Legaz, Juan Enrique; Soubeyran, Antoine
2003-01-01
We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, if the agent keeps a memory of his errors, an acceptable solution is asymptotically reached under mild assumptions. Moreover, one can take advantage of big errors for faster learning.
Bhadra, Anindya; Carroll, Raymond J
2016-07-01
In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show for the cases of truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62 and 54 % increase in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
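A double-truncated normal of the kind that appears in the article's complete conditional can be sampled exactly by inverse-CDF transformation (a textbook method, shown here only to illustrate the distribution; the article's Gibbs sampler draws from a mixture of such components). A sketch using only the standard library:

```python
import random
from statistics import NormalDist

def truncnorm_sample(mu, sigma, lo, hi, rng=random.random):
    """Inverse-CDF draw from N(mu, sigma^2) truncated to [lo, hi]."""
    nd = NormalDist(mu, sigma)
    u_lo, u_hi = nd.cdf(lo), nd.cdf(hi)
    u = u_lo + (u_hi - u_lo) * rng()      # uniform on [F(lo), F(hi))
    return nd.inv_cdf(u)

random.seed(0)
xs = [truncnorm_sample(0.0, 1.0, -0.5, 1.5) for _ in range(5000)]
```

Every draw lands inside the truncation interval by construction, which is exactly the property that makes Gibbs updates with truncated-normal conditionals convenient.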
Medication errors: prescribing faults and prescription errors.
Velo, Giampaolo P; Minuz, Pietro
2009-06-01
1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.
Reducing Approximation Error in the Fourier Flexible Functional Form
Directory of Open Access Journals (Sweden)
Tristan D. Skolrud
2017-12-01
The Fourier Flexible form provides a global approximation to an unknown data generating process. In terms of limiting function specification error, this form is preferable to functional forms based on second-order Taylor series expansions. The Fourier Flexible form is a truncated Fourier series expansion appended to a second-order expansion in logarithms. By replacing the logarithmic expansion with a Box-Cox transformation, we show that the Fourier Flexible form can reduce approximation error by 25% on average in the tails of the data distribution. The new functional form allows for nested testing of a larger set of commonly implemented functional forms.
van Gent, P. L.; Schrijer, F. F. J.; van Oudheusden, B. W.
2018-04-01
Pseudo-tracking refers to the construction of imaginary particle paths from PIV velocity fields and the subsequent estimation of the particle (material) acceleration. In view of the variety of existing and possible alternative ways to perform the pseudo-tracking method, it is not straightforward to select a suitable combination of numerical procedures for its implementation. To address this situation, this paper extends the theoretical framework for the approach. The developed theory is verified by applying various implementations of pseudo-tracking to a simulated PIV experiment. The findings of the investigations allow us to formulate the following insights and practical recommendations: (1) the velocity errors along the imaginary particle track are primarily a function of velocity measurement errors and spatial velocity gradients; (2) the particle path may best be calculated with second-order accurate numerical procedures while ensuring that the CFL condition is met; (3) least-square fitting of a first-order polynomial is a suitable method to estimate the material acceleration from the track; and (4) a suitable track length may be selected on the basis of the variation in material acceleration with track length.
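Recommendation (3) above, least-squares fitting of a first-order polynomial to the velocity samples along the track, amounts to taking the fitted slope as the material acceleration. A sketch on synthetic data (all numbers illustrative):

```python
import numpy as np

def material_acceleration(t, u):
    """Slope of a first-order least-squares polynomial fitted to
    velocity samples along a (pseudo-)track."""
    slope, _intercept = np.polyfit(t, u, 1)
    return slope

t = np.linspace(0.0, 0.1, 11)                      # track times [s]
u_true = 2.0 + 5.0 * t                             # 5 m/s^2 acceleration
rng = np.random.default_rng(1)
u_meas = u_true + rng.normal(0.0, 0.01, t.size)    # PIV-like noise
a_hat = material_acceleration(t, u_meas)
```

The fit averages out uncorrelated velocity noise over the track, which is why a longer track reduces random error, at the cost of bias once the true acceleration varies along the path.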
Energy Technology Data Exchange (ETDEWEB)
Marrazzo, Livia; Arilli, Chiara; Casati, Marta [Careggi University Hospital, Medical Physic Unit, Florence (Italy); Pasler, Marlies [Lake Constance Radiation Oncology Center, Singen-Friedrichshafen (Germany); Kusters, Martijn; Canters, Richard [Radboud University Medical Center, Department of Radiation Oncology, Nijmegen (Netherlands); Fedeli, Luca; Calusi, Silvia [University of Florence, Department of Experimental and Clinical Biomedical Sciences "Mario Serio", Florence (Italy); Talamonti, Cinzia; Pallotta, Stefania [Careggi University Hospital, Medical Physic Unit, Florence (Italy); University of Florence, Department of Experimental and Clinical Biomedical Sciences "Mario Serio", Florence (Italy); Simontacchi, Gabriele [Careggi University Hospital, Radiation Oncology Unit, Florence (Italy); Livi, Lorenzo [University of Florence, Department of Experimental and Clinical Biomedical Sciences "Mario Serio", Florence (Italy); Careggi University Hospital, Radiation Oncology Unit, Florence (Italy)
2018-03-15
This study aimed to test the sensitivity of a transmission detector for online dose monitoring of intensity-modulated radiation therapy (IMRT) for detecting small delivery errors. Furthermore, the correlation of changes in detector output induced by small delivery errors with other metrics commonly employed to quantify the deviations between calculated and delivered dose distributions was investigated. Transmission detector measurements were performed at three institutions. Seven types of errors were induced in nine clinical step-and-shoot (S&S) IMRT plans by modifying the number of monitor units (MU) and introducing small deviations in leaf positions. Signal reproducibility was investigated for short- and long-term stability. Calculated dose distributions were compared in terms of γ passing rates and dose-volume histogram (DVH) metrics (e.g., D_mean, D_x%, V_x%). The correlation between detector signal variations, γ passing rates, and DVH parameters was investigated. Both short- and long-term reproducibility were within 1%. Dose variations down to 1 MU (Δsignal 1.1 ± 0.4%) as well as changes in field size and positions down to 1 mm (Δsignal 2.6 ± 1.0%) were detected, thus indicating high error-detection sensitivity. A moderate correlation of detector signal was observed with γ passing rates (R² = 0.57-0.70), while a good correlation was observed with DVH metrics (R² = 0.75-0.98). The detector is capable of detecting small delivery errors in MU and leaf positions, and is thus a highly sensitive dose monitoring device for S&S IMRT in clinical practice. The results of this study indicate a good correlation of detector signal with DVH metrics; therefore, clinical action levels can be defined based on the presented data. (orig.)
Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong
2017-11-01
Multi-scale modeling of the localized groundwater flow problems in a large-scale aquifer has been extensively investigated under the context of cost-benefit controversy. An alternative is to couple the parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from the deficiency in the coupling methods, as well as from the inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at parent scale is delivered downward onto the child boundary nodes by means of the spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising to handle the multi-scale groundwater flow problems with complex stresses and heterogeneity.
Design and Synthesis of a Series of Truncated Neplanocin Fleximers
Directory of Open Access Journals (Sweden)
Sarah C. Zimmermann
2014-12-01
In an effort to study the effects of flexibility on enzyme recognition and activity, we have developed several different series of flexible nucleoside analogues in which the purine base is split into its respective imidazole and pyrimidine components. The focus of this particular study was to synthesize the truncated neplanocin A fleximers to investigate their potential anti-protozoan activities by inhibition of S-adenosylhomocysteine hydrolase (SAHase). The three fleximers tested displayed poor anti-trypanocidal activities, with EC50 values around 200 μM. Further studies of the corresponding ribose fleximers, most closely related to the natural nucleoside substrates, revealed low affinity for the known T. brucei nucleoside transporters P1 and P2, which may be the reason for the lack of trypanocidal activity observed.
Administering truncated receive functions in a parallel messaging interface
Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E
2014-12-09
Administering truncated receive functions in a parallel messaging interface (`PMI`) of a parallel computer comprising a plurality of compute nodes coupled for data communications through the PMI and through a data communications network, including: sending, through the PMI on a source compute node, a quantity of data from the source compute node to a destination compute node; specifying, by an application on the destination compute node, a portion of the quantity of data to be received by the application on the destination compute node and a portion of the quantity of data to be discarded; receiving, by the PMI on the destination compute node, all of the quantity of data; providing, by the PMI on the destination compute node to the application on the destination compute node, only the portion of the quantity of data to be received by the application; and discarding, by the PMI on the destination compute node, the portion of the quantity of data to be discarded.
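The claimed behaviour, receiving the whole message but delivering only the application-specified portion, can be sketched in a few lines (illustrative Python, not the actual PMI interface):

```python
def truncated_receive(message: bytes, keep: int):
    """Consume a fully received message, deliver its first `keep`
    bytes, and report how many trailing bytes were discarded."""
    delivered = message[:keep]
    discarded = len(message) - len(delivered)
    return delivered, discarded

data, dropped = truncated_receive(b"0123456789", keep=4)
```

The key point of the patented scheme is that the discard happens inside the messaging layer, so the application never has to allocate a buffer for data it does not want.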
Effect of truncated cone roughness element density on hydrodynamic drag
Womack, Kristofer; Schultz, Michael; Meneveau, Charles
2017-11-01
An experimental study was conducted on rough-wall, turbulent boundary layer flow with roughness elements whose idealized shape models barnacles, which cause hydrodynamic drag in many applications. Varying planform densities of truncated cone roughness elements were investigated. Element densities studied ranged from 10% to 79%. Detailed turbulent boundary layer velocity statistics were recorded with a two-component LDV system on a three-axis traverse. Hydrodynamic roughness length (z0) and skin-friction coefficient (Cf) were determined and compared with estimates from existing roughness element drag prediction models, including that of Macdonald et al. (1998) and other recent models. The roughness elements used in this work model idealized barnacles, so implications of this data set for ship powering are considered. This research was supported by the Office of Naval Research and by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.
Pair truncation for rotational nuclei: j=17/2 model
International Nuclear Information System (INIS)
Halse, P.; Jaqua, L.; Barrett, B.R.
1989-01-01
The suitability of the pair condensate approach for rotational states is studied in a single j=17/2 shell of identical nucleons interacting through a quadrupole-quadrupole Hamiltonian. The ground band and a K=2 excited band are both studied in detail. A direct comparison of the exact states with those constituting the SD and SDG subspaces is used to identify the important degrees of freedom for these levels. The range of pairs necessary for a good description is found to be highly state dependent; S and D pairs are the major constituents of the low-spin ground-band levels, while G pairs are needed for those in the γ band. Energy spectra are obtained for each truncated subspace. SDG pairs allow accurate reproduction of the binding energy and K=2 excitation energy, but still give a moment of inertia which is about 30% too small even for the lowest levels
Solution of the Stieltjes truncated matrix moment problem
Directory of Open Access Journals (Sweden)
Vadim M. Adamyan
2005-01-01
The truncated Stieltjes matrix moment problem, consisting in the description of all matrix distributions \(\boldsymbol{\sigma}(t)\) on \([0,\infty)\) with given first \(2n+1\) power moments \((\mathbf{C}_j)_{j=0}^{2n}\), is solved using known results on the corresponding Hamburger problem, for which \(\boldsymbol{\sigma}(t)\) are defined on \((-\infty,\infty)\). The criterion of solvability of the Stieltjes problem is given and all its solutions in the non-degenerate case are described by selection of the appropriate solutions among those of the Hamburger problem for the same set of moments. The results on extensions of non-negative operators are used and a purely algebraic algorithm for the solution of both the Hamburger and Stieltjes problems is proposed.
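In the scalar (1x1) case, the solvability criterion for the truncated Stieltjes problem reduces to positive semidefiniteness of two Hankel matrices built from the moments, one plain and one index-shifted. A sketch (the matrix-valued problem replaces scalar entries by blocks):

```python
import numpy as np

def stieltjes_solvable(c, tol=1e-12):
    """Scalar Stieltjes check for moments c = (c_0, ..., c_{2n}):
    both Hankel matrices (c_{i+j}) and (c_{i+j+1}) must be PSD."""
    c = np.asarray(c, dtype=float)
    n = (len(c) - 1) // 2
    h0 = np.array([[c[i + j] for j in range(n + 1)] for i in range(n + 1)])
    h1 = np.array([[c[i + j + 1] for j in range(n)] for i in range(n)])

    def psd(m):
        return m.size == 0 or np.linalg.eigvalsh(m).min() >= -tol

    return bool(psd(h0) and psd(h1))

ok = stieltjes_solvable([1, 1, 2, 6, 24])    # moments k! of Exp(1) on [0, inf)
bad = stieltjes_solvable([1, -1, 2, 6, 24])  # negative mean: no measure on [0, inf)
```

The shifted matrix (c_{i+j+1}) is what distinguishes the Stieltjes problem (support on the half-line) from the Hamburger problem (support on the whole line), where only the plain Hankel matrix must be PSD.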
Generalized Truncated Methods for an Efficient Solution of Retrial Systems
Directory of Open Access Journals (Sweden)
Ma Jose Domenech-Benlloch
2008-01-01
We are concerned with the analytic solution of multiserver retrial queues including the impatience phenomenon. As there are no closed-form solutions to these systems, approximate methods are required. We propose two different generalized truncated methods to effectively solve this type of system. The proposed methods are based on the homogenization of the state space beyond a given number of users in the retrial orbit. We compare the proposed methods with the best-known methods that have appeared in the literature over a wide range of scenarios. We conclude that the proposed methods generally outperform previous proposals in terms of accuracy for the most common performance parameters used in retrial systems, with a moderate growth in the computational cost.
Developmental regulation of human truncated nerve growth factor receptor
Energy Technology Data Exchange (ETDEWEB)
DiStefano, P.S.; Clagett-Dame, M.; Chelsea, D.M.; Loy, R. (Abbott Laboratories, Abbott Park, IL (USA))
1991-01-01
Monoclonal antibodies (designated XIF1 and IIIG5) recognizing distinct epitopes of the human truncated nerve growth factor receptor (NGF-Rt) were used in a two-site radiometric immunosorbent assay to monitor levels of NGF-Rt in human urine as a function of age. Urine samples were collected from 70 neurologically normal subjects ranging in age from 1 month to 68 years. By using this sensitive two-site radiometric immunosorbent assay, NGF-Rt levels were found to be highest in urine from 1-month-old subjects. By 2.5 months, NGF-Rt values were half of those seen at 1 month and decreased more gradually between 0.5 and 15 years. Between 15 and 68 years, urine NGF-Rt levels were relatively constant at 5% of 1-month values. No evidence for diurnal variation of adult NGF-Rt was apparent. Pregnant women in their third trimester showed significantly elevated urine NGF-Rt values compared with age-matched normals. Affinity labeling of NGF-Rt with ¹²⁵I-NGF followed by immunoprecipitation with ME20.4-IgG and gel autoradiography indicated that neonatal urine contained high amounts of truncated receptor (Mr = 50 kDa); decreasingly lower amounts of NGF-Rt were observed on gel autoradiograms with development, indicating that the two-site radiometric immunosorbent assay correlated well with the affinity labeling technique for measuring NGF-Rt. NGF-Rt in urines from 1-month-old and 36-year-old subjects showed no differences in affinities for NGF or for the monoclonal antibody IIIG5. These data show that NGF-Rt is developmentally regulated in human urine, and are discussed in relation to the development and maturation of the peripheral nervous system.
Medication Administration Errors Involving Paediatric In-Patients in a ...
African Journals Online (AJOL)
The drug most associated with error was gentamicin, with 29 errors (1.2%). Conclusion: During the study, a high frequency of error was observed. There is a need to modify the way information is handled and shared by professionals, as wrong-time error was the most implicated error. Attention should also be given to IV ...
International Nuclear Information System (INIS)
Blumhagen, Jan O.; Ladebeck, Ralf; Fenchel, Matthias; Braun, Harald; Quick, Harald H.; Faul, David; Scheffler, Klaus
2014-01-01
Purpose: In quantitative PET imaging, it is critical to accurately measure and compensate for the attenuation of the photons absorbed in the tissue. While in PET/CT the linear attenuation coefficients can be easily determined from a low-dose CT-based transmission scan, in whole-body MR/PET the computation of the linear attenuation coefficients is based on the MR data. However, a constraint of the MR-based attenuation correction (AC) is the MR-inherent field-of-view (FoV) limitation due to static magnetic field (B₀) inhomogeneities and gradient nonlinearities. Therefore, the MR-based human AC map may be truncated or geometrically distorted toward the edges of the FoV and, consequently, the PET reconstruction with MR-based AC may be biased. This is especially impactful laterally, where the patient's arms rest beside the body and are not fully considered. Methods: A method is proposed to extend the MR FoV by determining an optimal readout gradient field which locally compensates B₀ inhomogeneities and gradient nonlinearities. This technique was used to reduce truncation in AC maps of 12 patients, and the impact on the PET quantification was analyzed and compared to truncated data without applying the FoV extension and additionally to an established approach of PET-based FoV extension. Results: The truncation artifacts in the MR-based AC maps were successfully reduced in all patients, and the mean body volume was thereby increased by 5.4%. In some cases large patient-dependent changes in SUV of up to 30% were observed in individual lesions when compared to the standard truncated attenuation map. Conclusions: The proposed technique successfully extends the MR FoV in MR-based attenuation correction and shows an improvement of PET quantification in whole-body MR/PET hybrid imaging. In comparison to the PET-based completion of the truncated body contour, the proposed method is also applicable to specialized PET tracers with little uptake in the arms and might reduce the
An arbitrary-order staggered time integrator for the linear acoustic wave equation
Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo
2018-02-01
We suggest a staggered time integrator whose order of accuracy can be arbitrarily extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed, based on an error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several times, but also allows us to flexibly set the modelling parameters such as the time step length, grid interval and P-wave speed. It is demonstrated that the proposed method can almost eliminate temporal dispersive errors during long-term simulations regardless of the heterogeneity of the media and time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in practical use for imaging algorithms or inverse problems.
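The second-order temporal truncation error that the simplest staggered scheme carries can be verified numerically on the model problem u'' = -u (a sketch; the paper's integrator generalizes this leapfrog pattern to arbitrary order for the acoustic wave equation): halving the time step should reduce the final error by roughly a factor of four.

```python
import math

def leapfrog_error(dt, t_end=10.0):
    """Integrate u'' = -u with a staggered (leapfrog) scheme from
    u(0) = 1, u'(0) = 0 and return the error against cos(t_end)."""
    n = int(round(t_end / dt))
    u = 1.0
    v = -0.5 * dt * u                # half-step kick: v lives at t = dt/2
    for _ in range(n):
        u += dt * v                  # drift u to the next full step
        v -= dt * u                  # kick v to the next half step
    return abs(u - math.cos(t_end))

e_coarse = leapfrog_error(0.01)
e_fine = leapfrog_error(0.005)
```

The observed error ratio near four is the numerical signature of a second-order truncation error; the paper's higher-order staggered integrators push this exponent up at a modest extra cost per step.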
Prescription Errors in Psychiatry
African Journals Online (AJOL)
Arun Kumar Agnihotri
clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.
Moreno, Isabel; Ochoa, Dolores; Román, Manuel; Cabaleiro, Teresa; Abad-Santos, Francisco
2016-01-01
Bioequivalence studies of drugs with a long half-life require long periods of time for pharmacokinetic sampling. The latest update of the European guideline allows the area under the curve (AUC) truncated at 72 hr to be used as an alternative to AUC0-t as the primary parameter. The objective of this study was to evaluate the effect of truncating the AUC at 48, 24 and 12 hr on the acceptance of the bioequivalence criterion as compared with truncation at 72 hr in bioequivalence trials. The effect of truncated AUC on the within-individual coefficient of variation (CVw) and on the ratio of the formulations was also analysed. Twenty-eight drugs were selected from bioequivalence trials. Pharmacokinetic data were analysed using WinNonLin 2.0 based on the trapezoidal method. Analysis of variance (ANOVA) was performed to obtain the ratios and 90% confidence intervals for AUC at different time-points. The degree of agreement of AUC0-72 in relation to AUC0-48 and AUC0-24, according to the Landis and Koch classification, was 'almost perfect'. Statistically significant differences were observed when the CVw of AUC truncated at 72, 48 and 24 hr was compared with the CVw of AUC0-12. There were no statistically significant differences in the AUC ratio at any time-point. Compared to AUC0-72, Pearson's correlation coefficient for mean AUC, AUC ratio and AUC CVw was worse for AUC0-12 than AUC0-24 or AUC0-48. These preliminary results could suggest that AUC truncation at 24 or 48 hr is adequate to determine whether two formulations are bioequivalent. © 2015 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).
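The truncated AUCs compared in the study come from the trapezoidal method applied up to the truncation time. A minimal sketch (the concentration profile is invented for illustration):

```python
def auc_trapezoid(times, conc, t_trunc=None):
    """Trapezoidal AUC from time zero up to t_trunc (None = full profile).
    Assumes `times` is sorted and contains a sample at t_trunc."""
    total = 0.0
    for (t0, c0), (t1, c1) in zip(zip(times, conc), zip(times[1:], conc[1:])):
        if t_trunc is not None and t1 > t_trunc:
            break
        total += 0.5 * (c0 + c1) * (t1 - t0)
    return total

times = [0, 1, 2, 4, 8, 12, 24, 48, 72]                 # hours
conc = [0.0, 8.0, 10.0, 9.0, 6.0, 4.0, 2.0, 0.8, 0.3]   # concentration
auc24 = auc_trapezoid(times, conc, t_trunc=24)
auc72 = auc_trapezoid(times, conc)
```

Bioequivalence ratios built from AUC0-24 and AUC0-72 then differ only through the small tail area beyond the truncation time, which is the empirical observation the abstract reports.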
Knöpfler, Andreas; Mayer, Michael; Heck, Bernhard
2014-05-01
Within the last decades, positioning using GNSS (Global Navigation Satellite Systems; e.g., GPS) has become a standard tool in many (geo-)sciences. The positioning methods Precise Point Positioning and differential point positioning based on carrier phase observations have been developed for a broad variety of applications with different demands, for example on accuracy. In high-precision applications, a lot of effort was invested to mitigate different error sources: the products for satellite orbits and satellite clocks were improved; the misbehaviour of satellite and receiver antennas compared to an ideal antenna is modelled by calibration values on an absolute level; and the modelling of the ionosphere and the troposphere is updated year by year. Therefore, within the processing of data from CORS (continuously operating reference sites) equipped with geodetic hardware, using a sophisticated strategy, the latest products and models nowadays enable positioning accuracies at the low-mm level. Despite the considerable improvements that have been achieved within GNSS data processing, a generally valid multipath model is still lacking. Therefore, site-specific multipath still represents a major error source in precise GNSS positioning. Furthermore, the calibration information of receiving GNSS antennas, which is for instance derived by a robot or chamber calibration, is, strictly speaking, valid only for the location of the calibration. The calibrated antenna can show a slightly different behaviour at the CORS due to near-field multipath effects. One very promising strategy to mitigate multipath effects as well as imperfectly calibrated receiver antennas is to stack observation residuals of several days; thereby, multipath-loaded observation residuals are analysed, for example with respect to signal direction, to find and reduce systematic constituents. This presentation will give a short overview of existing stacking approaches. In addition, first results of the stacking approach
Park, G.; Gao, X.; Sorooshian, S.
2005-12-01
The atmospheric model is sensitive to land surface interactions, and its coupling with Land Surface Models (LSMs) leads to a better ability to forecast weather under extreme climate conditions, such as droughts and floods (Atlas et al. 1993; Beljaars et al. 1996). However, it is still questionable how accurately the surface exchanges can be simulated using LSMs, since terrestrial properties and processes have high variability and heterogeneity. Examinations with long-term and multi-site surface observations, including both remotely sensed and ground observations, are highly needed to make an objective evaluation of the effectiveness and uncertainty of LSMs under different circumstances. Among the several atmospheric forcings required for the offline simulation of LSMs, incident surface solar radiation is one of the most significant components, since it plays a major role in the total incoming energy into the land surface. The North American Land Data Assimilation System (NLDAS) and North American Regional Reanalysis (NARR) are two important data sources providing high-resolution surface solar radiation data for the use of research communities. In this study, these data are evaluated against field observations (AmeriFlux) to identify their advantages, deficiencies and sources of errors. The NLDAS incident solar radiation shows good agreement in the monthly mean prior to the summer of 2001, while it overestimates after the summer of 2001 and its bias is close to that of the EDAS. Two main error sources are identified: 1) GOES solar radiation was not used in the NLDAS for several months in 2001 and 2003, and 2) GOES incident solar radiation, when available, was positively biased in 2002. The known snow detection problem is sometimes identified in the NLDAS, since it is inherited from GOES incident solar radiation. The NARR consistently overestimates incident surface solar radiation, which might produce erroneous outputs if used in the LSMs. Further attention is given to
Kartush, J M
1996-11-01
Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.
Hubbeling, Dieneke
2016-09-01
This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome.
Trapping of low-mass planets outside the truncated inner edges of protoplanetary discs
Miranda, Ryan; Lai, Dong
2018-02-01
We investigate the migration of a low-mass (≲10 M⊕) planet near the inner edge of a protoplanetary disc using two-dimensional viscous hydrodynamics simulations. We employ an inner boundary condition representing the truncation of the disc at the stellar corotation radius. As described by Tsang, wave reflection at the inner disc boundary modifies the Type I migration torque on the planet, allowing migration to be halted before the planet reaches the inner edge of the disc. For low-viscosity discs (α ≲ 10^-3), planets may be trapped with semi-major axes as large as three to five times the inner disc radius. In general, planets are trapped closer to the inner edge as either the planet mass or the disc viscosity parameter α increases, and farther from the inner edge as the disc thickness is increased. This planet trapping mechanism may impact the formation and migration history of close-in compact multiplanet systems.
The robustness of truncated Airy beam in PT Gaussian potentials media
Wang, Xianni; Fu, Xiquan; Huang, Xianwei; Yang, Yijun; Bai, Yanfeng
2018-03-01
The robustness of a truncated Airy beam in parity-time (PT) symmetric Gaussian potentials media is numerically investigated. A high-peak-power beam sheds from the Airy beam due to the modulation by the media, while the Airy wavefront still retains its self-bending and non-diffracting characteristics under the influence of the modulation parameters. Increasing the modulation factor results in a smaller maximum power of the center beam, while the opposite trend occurs with increasing modulation depth. However, the parabolic trajectory of the Airy wavefront is not influenced. Owing to these unique features, the Airy beam can be used as a long-distance transmission source in PT symmetric Gaussian potentials media.
Eliminating US hospital medical errors.
Kumar, Sameer; Steinebach, Marc
2008-01-01
Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and to present a closed-loop, mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams, and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible into the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and in improving education related to the quality of service delivery, to minimize clinical errors. This will increase fixed costs, especially in the shorter time frame. This paper focuses the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run while also ensuring patient safety.
International Nuclear Information System (INIS)
Li Benxian; Wang Xiaofeng; Xia Dandan; Chu Qingxin; Liu Xiaoyang; Lu Fengguo; Zhao Xudong
2011-01-01
Cuprous oxide (Cu2O) was synthesized via reactions between cupric oxide (CuO) and copper metal (Cu) at a low temperature of 300 deg. C. This process is green, environmentally friendly and energy efficient. Cu2O crystals with truncated octahedra morphology were grown under high pressure using sodium hydroxide (NaOH) and potassium hydroxide (KOH) in a molar ratio of 1:1 as a flux. The growth mechanism of the Cu2O polyhedral microcrystals is proposed and discussed. - Graphical Abstract: Cu2O crystals with a truncated octahedral shape were one-step synthesized in high yield via the high pressure flux method for the first time, which is green and environmentally friendly. The mechanisms of synthesis and crystal growth are discussed in this paper. Highlights: → Cuprous oxide was one-step green synthesized by the high pressure flux method. → The approach was based on the reverse dismutation reaction between cupric oxide and copper metal. → This process is green, environmentally friendly and energy efficient. → The synthesized Cu2O crystals were of truncated octahedra morphology.
Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data
Directory of Open Access Journals (Sweden)
Na Wei
2016-05-01
With sparse and uneven site distribution, Global Positioning System (GPS) data are just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6-7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) with no additional regularization. The optimal truncation degree should be decreased to degree 4-5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one, or any combination, of unresolved higher degrees, which is beneficial for identifying the major error source among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion if the neglected higher degrees are well known from other sources.
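The way unresolved higher degrees alias into retained low-degree estimates can be illustrated with a toy least-squares sketch (an illustration only, not the paper's spherical-harmonic inversion; the quadratic field and sampling grids here are hypothetical): a straight line is fitted to a purely quadratic signal. With symmetric sampling the quadratic component is orthogonal to the line and the slope estimate stays at zero, but with uneven sampling (analogous to a sparse, one-sided station network) the unresolved degree leaks into the estimated slope.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y ~ a0 + a1*x via the normal equations."""
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx
    a1 = (n * sxy - sx * sy) / det
    a0 = (sy - a1 * sx) / n
    return a0, a1

# True field: purely quadratic, i.e. an "unresolved higher degree" with
# no linear part at all.  Any nonzero fitted slope is pure aliasing error.
even = [i / 10.0 for i in range(-10, 11)]    # symmetric sampling
uneven = [i / 10.0 for i in range(-3, 11)]   # one-sided gap in coverage

_, a1_even = fit_line(even, [x * x for x in even])
_, a1_uneven = fit_line(uneven, [x * x for x in uneven])
```

With the symmetric grid `a1_even` is essentially zero, while the uneven grid produces a clearly nonzero `a1_uneven`: the truncated (linear) model absorbs part of the unresolved quadratic signal.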
On the effect of numerical errors in large eddy simulations of turbulent flows
International Nuclear Information System (INIS)
Kravchenko, A.G.; Moin, P.
1997-01-01
Aliased and dealiased numerical simulations of a turbulent channel flow are performed using spectral and finite difference methods. Analytical and numerical studies show that aliasing errors are more destructive for spectral and high-order finite-difference calculations than for low-order finite-difference simulations. Numerical errors have different effects for different forms of the nonlinear terms in the Navier-Stokes equations. For divergence and convective forms, spectral methods are energy-conserving only if dealiasing is performed. For skew-symmetric and rotational forms, both spectral and finite-difference methods are energy-conserving even in the presence of aliasing errors. It is shown that discrepancies between the results of dealiased spectral and standard nondealiased finite-difference methods are due to both aliasing and truncation errors, with the latter being the leading source of differences. The relative importance of aliasing and truncation errors as compared to subgrid scale model terms in large eddy simulations is analyzed and discussed. For low-order finite-difference simulations, truncation errors can exceed the magnitude of the subgrid scale term. 25 refs., 17 figs., 1 tab
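The aliasing mechanism behind these errors can be sketched in a few lines (a generic illustration; the grid size and wavenumbers are arbitrary choices, not taken from the paper): on an n-point uniform grid, a high wavenumber k produced by a nonlinear product is indistinguishable from a lower one, so unresolved modes fold back onto resolved modes.

```python
import math

def sample_cosine(k, n):
    """Samples of cos(k*x) on the uniform grid x_j = 2*pi*j/n."""
    return [math.cos(k * 2.0 * math.pi * j / n) for j in range(n)]

def aliased_wavenumber(k, n):
    """Lowest wavenumber indistinguishable from k on an n-point grid."""
    m = k % n
    return min(m, n - m)
```

For example, on an 8-point grid the product cos(6x)*cos(6x) = (1 + cos(12x))/2 generates wavenumber 12, which is sampled identically to wavenumber 4; likewise cos(10x) is sampled identically to cos(2x). This is why un-dealiased products corrupt the resolved spectrum.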
Impact of degree truncation on the spread of a contagious process on networks.
Harling, Guy; Onnela, Jukka-Pekka
2018-03-01
Understanding how person-to-person contagious processes spread through a population requires accurate information on connections between population members. However, such connectivity data, when collected via interview, is often incomplete due to partial recall, respondent fatigue or study design, e.g., fixed choice designs (FCD) truncate out-degree by limiting the number of contacts each respondent can report. Past research has shown how FCD truncation affects network properties, but its implications for predicted speed and size of spreading processes remain largely unexplored. To study the impact of degree truncation on predictions of spreading process outcomes, we generated collections of synthetic networks containing specific properties (degree distribution, degree-assortativity, clustering), and also used empirical social network data from 75 villages in Karnataka, India. We simulated FCD using various truncation thresholds and ran a susceptible-infectious-recovered (SIR) process on each network. We found that spreading processes propagated on truncated networks resulted in slower and smaller epidemics, with a sudden decrease in prediction accuracy at a level of truncation that varied by network type. Our results have implications beyond FCD to truncation due to any limited sampling from a larger network. We conclude that knowledge of network structure is important for understanding the accuracy of predictions of process spread on degree truncated networks.
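The out-degree truncation imposed by a fixed choice design can be sketched as follows (a minimal illustration assuming an adjacency-list representation; the function and parameter names are hypothetical, not from the study's code):

```python
import random

def truncate_out_degree(adj, max_reports, seed=0):
    """Simulate a fixed choice design (FCD): each node reports at most
    `max_reports` of its true contacts, chosen uniformly at random.
    `adj` maps node -> list of contacts; a dropped tie disappears only
    from the reporting node's list (out-degree truncation)."""
    rng = random.Random(seed)
    truncated = {}
    for node, contacts in adj.items():
        if len(contacts) <= max_reports:
            truncated[node] = list(contacts)
        else:
            truncated[node] = rng.sample(contacts, max_reports)
    return truncated
```

Running a spreading process (e.g., SIR) on the truncated network rather than the full one is what produces the slower, smaller simulated epidemics described above, since high-degree hubs lose proportionally the most reported ties.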
Estimation of Panel Data Regression Models with Two-Sided Censoring or Truncation
DEFF Research Database (Denmark)
Alan, Sule; Honore, Bo E.; Hu, Luojia
2014-01-01
This paper constructs estimators for panel data regression models with individual specific heterogeneity and two-sided censoring and truncation. Following Powell (1986) the estimation strategy is based on moment conditions constructed from re-censored or re-truncated residuals. While these moment...
Inference for shared-frailty survival models with left-truncated data
van den Berg, G.J.; Drepper, B.
2016-01-01
Shared-frailty survival models specify that systematic unobserved determinants of duration outcomes are identical within groups of individuals. We consider random-effects likelihood-based statistical inference if the duration data are subject to left-truncation. Such inference with left-truncated
On truncated Taylor series and the position of their spurious zeros
DEFF Research Database (Denmark)
Christiansen, Søren; Madsen, Per A.
2006-01-01
A truncated Taylor series, or a Taylor polynomial, which may appear when treating the motion of gravity water waves, is obtained by truncating an infinite Taylor series for a complex, analytical function. For such a polynomial the position of the complex zeros is considered in case the Taylor...
A Lynden-Bell integral estimator for the tail index of right-truncated ...
African Journals Online (AJOL)
By means of a Lynden-Bell integral with deterministic threshold, Worms and Worms [A Lynden-Bell integral estimator for extremes of randomly truncated data. Statist. Probab. Lett. 2016; 109: 106-117] recently introduced an asymptotically normal estimator of the tail index for randomly right-truncated Pareto-type data.
Resonant Excitation of a Truncated Metamaterial Cylindrical Shell by a Thin Wire Monopole
DEFF Research Database (Denmark)
Kim, Oleksiy S.; Erentok, Aycan; Breinbjerg, Olav
2009-01-01
A truncated metamaterial cylindrical shell excited by a thin wire monopole is investigated using the integral equation technique as well as the finite element method. Simulations reveal a strong field singularity at the edge of the truncated cylindrical shell, which critically affects the matching...
Immature truncated O-glycophenotype of cancer directly induces oncogenic features
DEFF Research Database (Denmark)
Radhakrishnan, Prakash; Dabelsteen, Sally; Madsen, Frey Brus
2014-01-01
Aberrant expression of immature truncated O-glycans is a characteristic feature observed on virtually all epithelial cancer cells, and a very high frequency is observed in early epithelial premalignant lesions that precede the development of adenocarcinomas. Expression of the truncated O-glycan s...
Bounded real and positive real balanced truncation using Σ-normalised coprime factors
Trentelman, H.L.
2009-01-01
In this article, we will extend the method of balanced truncation using normalised right coprime factors of the system transfer matrix to balanced truncation with preservation of half line dissipativity. Special cases are preservation of positive realness and bounded realness. We consider a half
Influence of miscut on crystal truncation rod scattering
International Nuclear Information System (INIS)
Munkholm, A.; Brennan, S.
1999-01-01
X-rays can be used to measure the roughness of a surface through the study of crystal truncation rod scattering. It is shown that for a simple cubic lattice the presence of a miscut surface with a regular step array has no effect on the scattered intensity of a single rod, and that a distribution of terrace widths on the surface has the same effect as adding roughness to the surface. For a perfect crystal without miscut, the scattered intensity is the sum of the intensity from all the rods with the same in-plane momentum transfer. For all real crystals, the scattered intensity is better described as that from a single rod. It is shown that data-collection strategies must correctly account for the sample miscut or there is a potential for improperly measuring the rod intensity. This can result in an asymmetry in the rod intensity above and below the Bragg peak, which can be misinterpreted as being due to a relaxation of the surface. The calculations presented here are compared with data for silicon (001) wafers with 0.1° and 4° miscuts. (orig.)
Weakly nonlinear sloshing in a truncated circular conical tank
International Nuclear Information System (INIS)
Gavrilyuk, I P; Hermann, M; Lukovsky, I A; Solodun, O V; Timokha, A N
2013-01-01
Sloshing of an ideal incompressible liquid in a rigid truncated (tapered) conical tank is considered when the tank performs small-magnitude oscillatory motions with the forcing frequency close to the lowest natural sloshing frequency. The multimodal method, the non-conformal mapping technique and the Moiseev type asymptotics are employed to derive a finite-dimensional system of weakly nonlinear ordinary differential (modal) equations. This modal system is a generalization of that by Gavrilyuk et al 2005 Fluid Dyn. Res. 37 399-429. Using the derived modal equations, we classify the resonant steady-state wave regimes occurring due to horizontal harmonic tank excitations. The frequency ranges in which the 'planar' and/or 'swirling' steady-state sloshing are stable are detected, as well as a range in which no steady-state wave regime is stable and irregular (chaotic) liquid motions occur. The results on the frequency ranges are qualitatively supported by experiments by Matta E 2002 PhD Thesis Politecnico di Torino, Torino. (paper)
Adaptive designs based on the truncated product method
Directory of Open Access Journals (Sweden)
Neuhäuser Markus
2005-09-01
Background: Adaptive designs are becoming increasingly important in clinical research. One approach subdivides the study into several (two or more) stages and combines the p-values of the different stages using Fisher's combination test. Methods: As an alternative to Fisher's test, the recently proposed truncated product method (TPM) can be applied to combine the p-values. The TPM uses the product of only those p-values that do not exceed some fixed cut-off value. Here, these two competing analyses are compared. Results: When an early termination due to insufficient effects is not appropriate, such as in dose-response analyses, the probability of stopping the trial early with rejection of the null hypothesis is increased when the TPM is applied. Therefore, the expected total sample size is decreased. This decrease in sample size is not connected with a loss in power. The TPM turns out to be less advantageous when an early termination of the study due to insufficient effects is possible, owing to a decrease in the probability of stopping the trial early. Conclusion: It is recommended to apply the TPM rather than Fisher's combination test whenever an early termination due to insufficient effects is not suitable within the adaptive design.
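The truncated product method can be sketched as follows (a minimal illustration assuming independent Uniform(0,1) p-values under the global null; the combined p-value is evaluated here by Monte Carlo simulation rather than the closed-form null distribution used in practice, and all names are illustrative):

```python
import math
import random

def tpm_statistic(pvalues, tau=0.05):
    """Truncated product statistic: product of the p-values that do not
    exceed the cut-off tau.  If none fall below tau the statistic is 1."""
    w = 1.0
    for p in pvalues:
        if p <= tau:
            w *= p
    return w

def tpm_combined_pvalue(pvalues, tau=0.05, n_sim=20000, seed=1):
    """Monte Carlo p-value of the TPM statistic under the global null
    (independent uniform p-values): fraction of simulated statistics
    at least as extreme (small) as the observed one."""
    rng = random.Random(seed)
    k = len(pvalues)
    w_obs = tpm_statistic(pvalues, tau)
    hits = 0
    for _ in range(n_sim):
        w_sim = tpm_statistic([rng.random() for _ in range(k)], tau)
        if w_sim <= w_obs:
            hits += 1
    return hits / n_sim

def fisher_statistic(pvalues):
    """Fisher's combination statistic: -2 * sum(log p_i),
    chi-square distributed with 2k degrees of freedom under the null."""
    return -2.0 * sum(math.log(p) for p in pvalues)
```

For instance, combining stage p-values [0.01, 0.04, 0.30, 0.50] with tau = 0.05 uses only the first two, giving a small combined p-value; Fisher's statistic, by contrast, is diluted by the two large p-values entering the sum.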
Consistent Kaluza-Klein truncations via exceptional field theory
Energy Technology Data Exchange (ETDEWEB)
Hohm, Olaf [Center for Theoretical Physics, Massachusetts Institute of Technology,Cambridge, MA 02139 (United States); Samtleben, Henning [Université de Lyon, Laboratoire de Physique, UMR 5672, CNRS,École Normale Supérieure de Lyon, 46, allée d’Italie, F-69364 Lyon cedex 07 (France)
2015-01-26
We present the generalized Scherk-Schwarz reduction ansatz for the full supersymmetric exceptional field theory in terms of group valued twist matrices subject to consistency equations. With this ansatz the field equations precisely reduce to those of lower-dimensional gauged supergravity parametrized by an embedding tensor. We explicitly construct a family of twist matrices as solutions of the consistency equations. They induce gauged supergravities with gauge groups SO(p,q) and CSO(p,q,r). Geometrically, they describe compactifications on internal spaces given by spheres and (warped) hyperboloids H^{p,q}, thus extending the applicability of generalized Scherk-Schwarz reductions beyond homogeneous spaces. Together with the dictionary that relates exceptional field theory to D=11 and IIB supergravity, respectively, the construction defines an entire new family of consistent truncations of the original theories. These include not only compactifications on spheres of different dimensions (such as AdS_5×S^5), but also various hyperboloid compactifications giving rise to a higher-dimensional embedding of supergravities with non-compact and non-semisimple gauge groups.
Proteolysis of truncated hemolysin A yields a stable dimerization interface
Energy Technology Data Exchange (ETDEWEB)
Novak, Walter R.P.; Bhattacharyya, Basudeb; Grilley, Daniel P.; Weaver, Todd M. (Wabash); (UW)
2017-02-21
Wild-type and variant forms of HpmA265 (truncated hemolysin A) from
Directory of Open Access Journals (Sweden)
Maria Eugenia Chaves Maldonado
2016-01-01
In his unfinished and posthumously published book Apologie pour l'histoire, Marc Bloch bestowed on future historians a seminal legacy of critical reflections on the concept of time as the object of historical analysis. During the last decades, the concept of time in History has experienced a renewed interest among professional historians, in particular in reference to the category of anachronism. The Italian historian Carlo Ginzburg and the French art historian Georges Didi-Huberman are among those engaged in this debate. This article offers a reading of two works by these historians with the purpose of underlining the fundamental influence that Marc Bloch's ideas on time had on Ginzburg's and Didi-Huberman's critical interventions.
The error in total error reduction.
Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R
2014-02-01
Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
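The contrast between total and local error reduction can be sketched with simple delta-rule updates (an illustration under linear-summation assumptions; the trial format, names, and learning rate are arbitrary, not taken from the paper's model fits):

```python
def train_ter(trials, n_cues, lr=0.1):
    """Total error reduction (Rescorla-Wagner style): every cue present
    on a trial is updated by the SAME discrepancy between the outcome
    and the summed prediction of all present cues."""
    w = [0.0] * n_cues
    for cues, outcome in trials:
        pred = sum(w[i] for i in cues)   # compound (total) prediction
        err = outcome - pred             # one shared error signal
        for i in cues:
            w[i] += lr * err
    return w

def train_ler(trials, n_cues, lr=0.1):
    """Local error reduction: each cue is updated by the discrepancy
    between the outcome and that cue's OWN prediction."""
    w = [0.0] * n_cues
    for cues, outcome in trials:
        for i in cues:
            err = outcome - w[i]         # per-cue (local) error
            w[i] += lr * err
    return w
```

On repeated compound trials where two cues are paired with an outcome of 1.0, TER makes the cues share the prediction (each weight settles near 0.5, an overshadowing-like outcome), whereas LER drives each cue's weight independently toward 1.0 — the kind of divergent prediction that lets data discriminate between the two accounts.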
and Correlated Error-Regressor
African Journals Online (AJOL)
Nekky Umera
in queuing theory and econometrics, where the usual assumption of independent error terms may not be plausible in most cases. Also, when using time-series data on a number of micro-economic units, such as households and service oriented channels, where the stochastic disturbance terms in part reflect variables which ...
Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano
2013-01-01
Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...
National Research Council Canada - National Science Library
Byrne, Michael D
2006-01-01
.... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...
International Nuclear Information System (INIS)
Wahlstroem, B.
1993-01-01
Human errors contribute substantially to the risk of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but these models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)
Effects of errors on the dynamic aperture of the Advanced Photon Source storage ring
International Nuclear Information System (INIS)
Bizek, H.; Crosbie, E.; Lessner, E.; Teng, L.; Wirsbinski, J.
1991-01-01
The individual tolerance limits for alignment errors and magnet fabrication errors in the 7-GeV Advanced Photon Source storage ring are determined by computer-simulated tracking. Limits are established for dipole strength and roll errors, quadrupole strength and alignment errors, sextupole strength and alignment errors, as well as higher order multipole strengths in dipole and quadrupole magnets. The effects of girder misalignments on the dynamic aperture are also studied. Computer simulations are obtained with the tracking program RACETRACK, with errors introduced from a user-defined Gaussian distribution, truncated at ±5 standard deviation units. For each error, the average and rms spread of the stable amplitudes are determined for ten distinct machines, defined as ten different seeds to the random distribution, and for five distinct initial directions of the tracking particle. 4 refs., 4 figs., 1 tab
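The error assignment described above — Gaussian-distributed errors truncated at ±5 standard deviations, with each "machine" defined by a different random seed — can be sketched as follows (a hedged illustration using simple rejection sampling; RACETRACK's actual generator is not reproduced here, and the function names are hypothetical):

```python
import random

def truncated_gauss(rng, sigma, cutoff=5.0):
    """Draw from a zero-mean Gaussian, rejecting any draw beyond
    +/- cutoff standard deviations (rejection sampling)."""
    while True:
        x = rng.gauss(0.0, sigma)
        if abs(x) <= cutoff * sigma:
            return x

def assign_errors(n_magnets, sigma, seed=0, cutoff=5.0):
    """One 'machine' = one seeded random assignment of magnet errors,
    so distinct seeds define distinct machines for tracking studies."""
    rng = random.Random(seed)
    return [truncated_gauss(rng, sigma, cutoff) for _ in range(n_magnets)]
```

Truncation matters in tolerance studies because an untruncated Gaussian occasionally produces an unphysically large error that would dominate the simulated dynamic aperture of a single machine.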
Analysis of error patterns in clinical radiotherapy
International Nuclear Information System (INIS)
Macklis, Roger; Meier, Tim; Barrett, Patricia; Weinhous, Martin
1996-01-01
Purpose: Until very recently, prescription errors and adverse treatment events have rarely been studied or reported systematically in oncology. We wished to understand the spectrum and severity of radiotherapy errors that take place on a day-to-day basis in a high-volume academic practice and to understand the resource needs and quality assurance challenges placed on a department by rapid upswings in contract-based clinical volumes requiring additional operating hours, procedures, and personnel. The goal was to define clinical benchmarks for operating safety and to detect error-prone treatment processes that might function as 'early warning' signs. Methods: A multi-tiered prospective and retrospective system for clinical error detection and classification was developed, with formal analysis of the antecedents and consequences of all deviations from prescribed treatment delivery, no matter how trivial. A department-wide record-and-verify system was operational during this period and was used as one method of treatment verification and error detection. Brachytherapy discrepancies were analyzed separately. Results: During the analysis year, over 2000 patients were treated with over 93,000 individual fields. A total of 59 errors, affecting 170 individual treated fields, were reported or detected during this period. After review, all of these errors were classified as Level 1 (minor discrepancy with essentially no potential for negative clinical implications). This total treatment delivery error rate (170/93,332, or 0.18%) is significantly better than corresponding error rates reported for other hospital and oncology treatment services, perhaps reflecting the relatively sophisticated error avoidance and detection procedures used in modern clinical radiation oncology. Error rates were independent of linac model and manufacturer, time of day (normal operating hours versus late evening or early morning) or clinical machine volumes. There was some relationship to
Meyers, S. R.; Siewert, S. E.; Singer, B. S.; Sageman, B. B.; Condon, D. J.; Obradovich, J. D.; Jicha, B.; Sawyer, D. A.
2010-12-01
We develop a new intercalibrated astrochronologic and radioisotopic time scale for the Cenomanian/Turonian (C/T) boundary interval near the GSSP in Colorado, where orbitally-influenced rhythmic strata host bentonites that contain sanidine and zircon suitable for 40Ar/39Ar and U-Pb dating. This provides a rare opportunity to directly intercalibrate two independent radioisotopic chronometers against an astrochronologic age model. We present paired 40Ar/39Ar and U-Pb ages from four bentonites spanning the Vascoceras diartianum to Pseudaspidoceras flexuosum biozones, utilizing both newly collected material and legacy sanidine samples of Obradovich (1993). Full 2σ uncertainties (decay constant, standard age, analytical sources) for the 40Ar/39Ar ages, using a weighted mean of 33-103 concordant age determinations and an age of 28.201 Ma for Fish Canyon sanidine (FCs), range from ±0.15 to 0.19 Ma, with ages from 93.67 to 94.43 Ma. The traditional FCs age of 28.02 Ma yields ages from 93.04 to 93.78 Ma with full uncertainties of ±1.58 Ma. Using the ET535 tracer, single zircon CA-TIMS 206Pb/238U ages determined from each bentonite record a range of ages (up to 2.1 Ma), however, in three of the four bentonites the youngest single crystal ages are statistically indistinguishable from the 40Ar/39Ar ages calculated relative to 28.201 Ma FCs, supporting this calibration. Using the new radioisotopic data and published astrochronology (Sageman et al., 2006) we develop an integrated C/T boundary time scale using a Bayesian statistical approach that builds upon the strength of each geochronologic method. Whereas the radioisotopic data provide an age with a well-defined uncertainty for each bentonite, the orbital time scale yields a more highly resolved estimate of the duration between stratigraphic horizons, including the radioisotopically dated beds. The Bayesian algorithm yields a C/T time scale that is statistically compatible with the astrochronologic and radioisotopic data
Redundant measurements for controlling errors
International Nuclear Information System (INIS)
Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.
1979-07-01
Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program
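The value of redundancy for exposing hard-to-detect measurement problems can be sketched as a simple consistency check between two independent measurement sets (an illustration only; the values, sample sizes, and flagging threshold below are hypothetical, not BNFP data):

```python
import math
import random
import statistics

def flag_discrepancy(m1, m2, sigma1, sigma2, k=4.0):
    """Flag a potential measurement problem when the means of two
    redundant measurement sets disagree by more than k combined
    standard errors of the difference."""
    se = math.sqrt(sigma1 ** 2 / len(m1) + sigma2 ** 2 / len(m2))
    return abs(statistics.mean(m1) - statistics.mean(m2)) > k * se

# Two redundant measurement techniques observing the same quantity.
rng = random.Random(42)
true_value = 100.0
process = [rng.gauss(true_value, 1.0) for _ in range(200)]       # process control
account = [rng.gauss(true_value, 1.0) for _ in range(200)]       # accountability
biased = [rng.gauss(true_value + 1.0, 1.0) for _ in range(200)]  # hidden bias
```

Comparing the two unbiased sets raises no flag, while the set with a hidden one-unit bias is flagged immediately; averaging redundant measurements also shrinks the standard error of the estimate, which is the "increase in measurement sensitivity" noted above.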
Sensation seeking and error processing.
Zheng, Ya; Sheng, Wenbin; Xu, Jing; Zhang, Yuanyuan
2014-09-01
Sensation seeking is defined by a strong need for varied, novel, complex, and intense stimulation, and a willingness to take risks for such experience. Several theories propose that the insensitivity to negative consequences incurred by risks is one of the hallmarks of sensation-seeking behaviors. In this study, we investigated the time course of error processing in sensation seeking by recording event-related potentials (ERPs) while high and low sensation seekers performed an Eriksen flanker task. Whereas there were no group differences in ERPs to correct trials, sensation seeking was associated with a blunted error-related negativity (ERN), which was female-specific. Further, different subdimensions of sensation seeking were related to ERN amplitude differently. These findings indicate that the relationship between sensation seeking and error processing is sex-specific. Copyright © 2014 Society for Psychophysiological Research.
Nair, Bala G; Peterson, Gene N; Newman, Shu-Fang; Wu, Wei-Ying; Kolios-Morris, Vickie; Schwid, Howard A
2012-06-01
Continuation of perioperative beta-blockers for surgical patients who are receiving beta-blockers prior to arrival for surgery is an important quality measure (SCIP-Card-2). For this measure to be considered successful, the name, date, and time of the perioperative beta-blocker must be documented. Alternately, if the beta-blocker is not given, the medical reason for not administering it must be documented. Before the study was conducted, the institution lacked a highly reliable process to document the date and time of self-administration of beta-blockers prior to hospital admission. Because of this, compliance with the beta-blocker quality measure was poor (~65%). To improve this measure, the anesthesia care team was made responsible for documenting perioperative beta-blockade. Clear documentation guidelines were outlined, and an electronic Anesthesia Information Management System (AIMS) was configured to facilitate complete documentation of the beta-blocker quality measure. In addition, real-time electronic alerts were generated using Smart Anesthesia Messenger (SAM), an internally developed decision-support system, to notify users concerning incomplete beta-blocker documentation. Weekly compliance for perioperative beta-blocker documentation before the study was 65.8 +/- 16.6%, which served as the baseline value. When the anesthesia care team started documenting perioperative beta-blockade in AIMS, compliance was 60.5 +/- 8.6% (p = .677 as compared with baseline). Electronic alerts with SAM improved documentation compliance to 94.6 +/- 3.5% (p documentation and (2) enhance features in the electronic medical systems to alert the user concerning incomplete documentation.
Time dependent linear transport III: convergence of the discrete ordinate method
International Nuclear Information System (INIS)
Wilson, D.G.
1983-01-01
In this paper the uniform pointwise convergence of the discrete ordinate method for weak and strong solutions of the time dependent, linear transport equation posed in a multidimensional, rectangular parallelepiped with partially reflecting walls is established. The first result is that a sequence of discrete ordinate solutions converges uniformly on the quadrature points to a solution of the continuous problem provided that the corresponding sequence of truncation errors for the solution of the continuous problem converges to zero in the same manner. The second result is that continuity of the solution with respect to the velocity variables guarantees that the truncation errors in the quadrature formula go to zero and hence that the discrete ordinate approximations converge to the solution of the continuous problem as the discrete ordinates become dense. An existence theory for strong solutions of the continuous problem follows as a result
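The second result turns on quadrature truncation errors vanishing as the ordinate set becomes dense. A minimal sketch of that mechanism, using plain Python and a composite midpoint rule on a smooth integrand (not the transport equation itself):

```python
import math

def midpoint_rule(f, a, b, n):
    """Composite midpoint quadrature with n subintervals."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Integral of cos over [0, 1] is sin(1); compare against the quadrature.
exact = math.sin(1.0)
errors = [abs(midpoint_rule(math.cos, 0.0, 1.0, n) - exact) for n in (4, 16, 64)]

# Truncation error shrinks as the quadrature points become dense (O(h^2) here).
assert errors[0] > errors[1] > errors[2]
```

For a continuous integrand, the truncation errors go to zero as the nodes densify, which is exactly the hypothesis the convergence theorem above rests on.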
Medication errors: an overview for clinicians.
Wittich, Christopher M; Burkle, Christopher M; Lanier, William L
2014-08-01
Medication error is an important cause of patient morbidity and mortality, yet it can be a confusing and underappreciated concept. This article provides a review for practicing physicians that focuses on medication error (1) terminology and definitions, (2) incidence, (3) risk factors, (4) avoidance strategies, and (5) disclosure and legal consequences. A medication error is any error that occurs at any point in the medication use process. It has been estimated by the Institute of Medicine that medication errors cause 1 of 131 outpatient and 1 of 854 inpatient deaths. Medication factors (eg, similar sounding names, low therapeutic index), patient factors (eg, poor renal or hepatic function, impaired cognition, polypharmacy), and health care professional factors (eg, use of abbreviations in prescriptions and other communications, cognitive biases) can precipitate medication errors. Consequences faced by physicians after medication errors can include loss of patient trust, civil actions, criminal charges, and medical board discipline. Methods to prevent medication errors from occurring (eg, use of information technology, better drug labeling, and medication reconciliation) have been used with varying success. When an error is discovered, patients expect disclosure that is timely, given in person, and accompanied with an apology and communication of efforts to prevent future errors. Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. Copyright © 2014 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
Analysis of errors in forensic science
Directory of Open Access Journals (Sweden)
Mingxiao Du
2017-01-01
Full Text Available Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, general requirements for the competence of testing and calibration laboratories, during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, the Federal Rules of Evidence 702 mandate that judges consider factors such as peer review, to ensure the reliability of the expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.
Metcalfe, Janet
2017-01-01
Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…
International Nuclear Information System (INIS)
Wang Yue; Wang Jian-Guo; Chen Zai-Gao
2015-01-01
Based on conformal construction of physical model in a three-dimensional Cartesian grid, an integral-based conformal convolutional perfectly matched layer (CPML) is given for solving the truncation problem of the open port when the enlarged cell technique conformal finite-difference time-domain (ECT-CFDTD) method is used to simulate the wave propagation inside a perfect electric conductor (PEC) waveguide. The algorithm has the same numerical stability as the ECT-CFDTD method. For the long-time propagation problems of an evanescent wave in a waveguide, several numerical simulations are performed to analyze the reflection error by sweeping the constitutive parameters of the integral-based conformal CPML. Our numerical results show that the integral-based conformal CPML can be used to efficiently truncate the open port of the waveguide. (paper)
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-03
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
A qualitative description of human error
International Nuclear Information System (INIS)
Li Zhaohuan
1992-11-01
Human error contributes importantly to the risk of reactor operation. Insight and analytical models are the main parts of human reliability analysis, covering the concept of human error, its nature, the mechanism of its generation, its classification, and human performance influence factors. For an operating reactor, human error is defined as a task-human-machine mismatch. A human error event centers on the erroneous action and its unfavorable result. According to the time available for performing a task, operations are divided into time-limited and time-open; the HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making and action, and an erroneous action may be generated at any stage of this process. More natural ways to classify human errors are presented. Human performance influence factors, including personal, organizational and environmental factors, are also listed
A qualitative description of human error
Energy Technology Data Exchange (ETDEWEB)
Zhaohuan, Li [Academia Sinica, Beijing, BJ (China). Inst. of Atomic Energy
1992-11-01
Human error contributes importantly to the risk of reactor operation. Insight and analytical models are the main parts of human reliability analysis, covering the concept of human error, its nature, the mechanism of its generation, its classification, and human performance influence factors. For an operating reactor, human error is defined as a task-human-machine mismatch. A human error event centers on the erroneous action and its unfavorable result. According to the time available for performing a task, operations are divided into time-limited and time-open; the HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making and action, and an erroneous action may be generated at any stage of this process. More natural ways to classify human errors are presented. Human performance influence factors, including personal, organizational and environmental factors, are also listed.
International Nuclear Information System (INIS)
Yan, Y.T.
1996-11-01
A brief review of the Zlib development is given. Emphasis is placed on the Zlib nerve system, which uses One-Step Index Pointers (OSIPs) for efficient computation and flexible use of the Truncated Power Series Algebra (TPSA), and on the treatment of parameterized maps with an object-oriented language (e.g., C++). A parameterized map can be a Vector Power Series (Vps) or a Lie generator represented by an exponent of a Truncated Power Series (Tps), each coefficient of which is an object of truncated power series
A Line Search Multilevel Truncated Newton Algorithm for Computing the Optical Flow
Directory of Open Access Journals (Sweden)
Lluís Garrido
2015-06-01
Full Text Available We describe the implementation details and give the experimental results of three optimization algorithms for dense optical flow computation. In particular, using a line search strategy, we evaluate the performance of the unilevel truncated Newton method (LSTN, a multiresolution truncated Newton (MR/LSTN and a full multigrid truncated Newton (FMG/LSTN. We use three image sequences and four models of optical flow for performance evaluation. The FMG/LSTN algorithm is shown to lead to better optical flow estimation with less computational work than both the LSTN and MR/LSTN algorithms.
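The core of a truncated Newton method is an inner conjugate-gradient loop that is cut off ("truncated") after a few iterations, combined with a line search on the outer step. A generic sketch on a toy 2-D function rather than an optical-flow energy; the function, tolerances, and iteration counts are illustrative, not those of the paper:

```python
def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    return [-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
            200*(x[1] - x[0]**2)]

def hessvec(x, v, eps=1e-6):
    # Hessian-vector product via finite differences of the gradient.
    g0, g1 = grad(x), grad([x[i] + eps*v[i] for i in range(2)])
    return [(g1[i] - g0[i]) / eps for i in range(2)]

def truncated_newton_step(x, max_cg=5):
    # Solve H d = -g approximately with a few CG iterations (the "truncation").
    g = grad(x)
    d, r = [0.0, 0.0], [-gi for gi in g]
    p, rs = r[:], r[0]*r[0] + r[1]*r[1]
    for _ in range(max_cg):
        Hp = hessvec(x, p)
        pHp = p[0]*Hp[0] + p[1]*Hp[1]
        if pHp <= 0:
            break  # negative curvature: stop and keep the current d
        alpha = rs / pHp
        d = [d[i] + alpha*p[i] for i in range(2)]
        r = [r[i] - alpha*Hp[i] for i in range(2)]
        rs_new = r[0]*r[0] + r[1]*r[1]
        if rs_new < 1e-12:
            break
        p = [r[i] + (rs_new/rs)*p[i] for i in range(2)]
        rs = rs_new
    if d == [0.0, 0.0]:
        d = [-gi for gi in g]  # fall back to steepest descent
    # Backtracking (Armijo) line search along d.
    t, f0 = 1.0, rosenbrock(x)
    slope = g[0]*d[0] + g[1]*d[1]
    while rosenbrock([x[i] + t*d[i] for i in range(2)]) > f0 + 1e-4*t*slope:
        t *= 0.5
        if t < 1e-12:
            break
    return [x[i] + t*d[i] for i in range(2)]

x = [-1.2, 1.0]
for _ in range(200):
    x = truncated_newton_step(x)
# x should now be near the minimizer (1, 1)
```

In a multiresolution or full-multigrid variant, the same step would be applied on a pyramid of problem sizes, with coarse solutions warm-starting finer levels.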
McVay, Jennifer C.; Kane, Michael J.
2012-01-01
A combined experimental, individual-differences, and thought-sampling study tested the predictions of executive attention (e.g., Engle & Kane, 2004) and coordinative binding (e.g., Oberauer, Süß, Wilhelm, & Sander, 2007) theories of working memory capacity (WMC). We assessed 288 subjects’ WMC and their performance and mind-wandering rates during a sustained-attention task; subjects completed either a go/no-go version requiring executive control over habit, or a vigilance version that did not. We further combined the data with those from McVay and Kane (2009) to: (1) gauge the contributions of WMC and attentional lapses to the worst-performance rule and the tail, or τ parameter, of response time (RT) distributions; (2) assess which parameters from a quantitative evidence-accumulation RT model were predicted by WMC and mind-wandering reports, and (3) consider intra-subject RT patterns – particularly, speeding – as potential objective markers of mind wandering. We found that WMC predicted action and thought control in only some conditions, that attentional lapses (indicated by TUT reports and drift-rate variability in evidence accumulation) contributed to τ, performance accuracy, and WMC’s association with them, and that mind-wandering experiences were not predicted by trial-to-trial RT changes, and so they cannot always be inferred from objective performance measures. PMID:22004270
Uncorrected refractive errors.
Naidoo, Kovin S; Jaggernath, Jyoti
2012-01-01
Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.
Directory of Open Access Journals (Sweden)
Kovin S Naidoo
2012-01-01
Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.
ERROR HANDLING IN INTEGRATION WORKFLOWS
Directory of Open Access Journals (Sweden)
Alexey M. Nazarenko
2017-01-01
Full Text Available Simulation experiments performed while solving multidisciplinary engineering and scientific problems require joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions while the overall workflow does not change. Automation of simulations like these requires implementing a workflow where tool execution and data exchange is usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow), which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtasks interaction). The patterns, in their turn, can be interpreted as higher level subtasks. This paper considers execution control and data exchange rules that should be imposed by the integration environment in the case of an error encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement to the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate results data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next level subtasks. The cases where workflow behavior may be different, depending on user's purposes, when an error takes place, and possible error handling options that can be specified by the user are also noted in the work.
Medication errors: definitions and classification
Aronson, Jeffrey K
2009-01-01
To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526
Residual-based Methods for Controlling Discretization Error in CFD
2015-08-24
Equation (25) expresses the cell average of f as a weighted quadrature sum over points, where J is the Jacobian of the coordinate transformation and the weights can be found from the transformation. References cited include Layton, W., Lee, H.K., and Peterson, J. (2002), "A Defect-Correction Method for the Incompressible Navier-Stokes Equations," Applied Mathematics and Computation, Vol. 129, pp. 1-19, and Lee, D. and Tsuei, Y.M. (1992), "A Formula for Estimation of Truncation Errors of Convective Terms in a ...
Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation
International Nuclear Information System (INIS)
Lychak, Oleh V; Holyns’kiy, Ivan S
2016-01-01
The use of the Williams’ series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is the development of a method for estimating the standard deviation of random errors of the Williams’ series parameters obtained from the measured components of the stress field. Criteria for choosing the optimal number of terms in the truncated Williams’ series, so that its parameters are derived with minimal errors, are also proposed. The method was used to evaluate the Williams’ parameters obtained from data measured by the digital image correlation technique in testing a three-point bending specimen. (paper)
Preventing Errors in Laterality
Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie
2014-01-01
An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...
International Nuclear Information System (INIS)
Reason, J.
1988-01-01
This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between errors and violations, and between active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated
Characteristics of pediatric chemotherapy medication errors in a national error reporting database.
Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R
2007-07-01
Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.
Probabilistic error bounds for reduced order modeling
Energy Technology Data Exchange (ETDEWEB)
Abdo, M.G.; Wang, C.; Abdel-Khalik, H.S., E-mail: abdo@purdue.edu, E-mail: wang1730@purdue.edu, E-mail: abdelkhalik@purdue.edu [Purdue Univ., School of Nuclear Engineering, West Lafayette, IN (United States)
2015-07-01
Reduced order modeling has proven to be an effective tool when repeated execution of reactor analysis codes is required. ROM operates on the assumption that the intrinsic dimensionality of the associated reactor physics models is sufficiently small when compared to the nominal dimensionality of the input and output data streams. By employing a truncation technique with roots in linear algebra matrix decomposition theory, ROM effectively discards all components of the input and output data that have negligible impact on reactor attributes of interest. This manuscript introduces a mathematical approach to quantify the errors resulting from the discarded ROM components. As supported by numerical experiments, the introduced analysis proves that the contribution of the discarded components could be upper-bounded with an overwhelmingly high probability. The reverse of this statement implies that the ROM algorithm can self-adapt to determine the level of the reduction needed such that the maximum resulting reduction error is below a given tolerance limit that is set by the user. (author)
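The flavor of such a bound is easiest to see where the decomposition is explicit: once the singular values are known, the Frobenius-norm error of discarding trailing components is the root-sum-square of the discarded values, and the reduction level can be chosen against a user tolerance. A stdlib sketch with hypothetical singular values, illustrating the self-adapting rank choice rather than the paper's probabilistic bound:

```python
import math

# Singular values of a hypothetical model operator, in decreasing order.
sigmas = [10.0, 5.0, 1.0, 0.1, 0.01]

def truncation_error(sigmas, k):
    """Frobenius-norm error of keeping the k leading singular components."""
    return math.sqrt(sum(s * s for s in sigmas[k:]))

def rank_for_tolerance(sigmas, tol):
    """Smallest rank whose truncation error is below the user tolerance."""
    for k in range(len(sigmas) + 1):
        if truncation_error(sigmas, k) <= tol:
            return k
    return len(sigmas)

# The reduction "self-adapts": keep just enough components for the tolerance.
k = rank_for_tolerance(sigmas, tol=0.2)
assert truncation_error(sigmas, k) <= 0.2
```

The paper's contribution is the probabilistic version of this idea: upper-bounding the discarded contribution with high probability when the decomposition is only sampled, not known exactly.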
A generalized right truncated bivariate Poisson regression model with applications to health data.
Islam, M Ataharul; Chowdhury, Rafiqul I
2017-01-01
A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over- or underdispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute, and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model.
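The building block of such models is the truncated Poisson mass function, renormalized over the retained support. A univariate stdlib sketch of the right-truncation mechanics (the paper's model is bivariate with regression structure; the parameter values below are illustrative):

```python
import math

def truncated_poisson_pmf(k, lam, right):
    """pmf of a Poisson(lam) right-truncated at `right` (support 0..right)."""
    if not 0 <= k <= right:
        return 0.0
    norm = sum(math.exp(-lam) * lam**j / math.factorial(j)
               for j in range(right + 1))
    return (math.exp(-lam) * lam**k / math.factorial(k)) / norm

lam, right = 3.0, 5
probs = [truncated_poisson_pmf(k, lam, right) for k in range(right + 1)]

# The renormalized pmf sums to one over the truncated support.
assert abs(sum(probs) - 1.0) < 1e-12

# Right truncation pulls the mean below the untruncated Poisson mean.
mean_trunc = sum(k * p for k, p in enumerate(probs))
assert mean_trunc < lam
```

Fitting a regression model then replaces the constant lam with exp(x'beta) and maximizes the likelihood built from these renormalized terms.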
Reduction of variable-truncation artifacts from beam occlusion during in situ x-ray tomography
DEFF Research Database (Denmark)
Borg, Leise; Jørgensen, Jakob Sauer; Frikel, Jürgen
2017-01-01
Many in situ x-ray tomography studies require experimental rigs which may partially occlude the beam and cause parts of the projection data to be missing. In a study of fluid flow in porous chalk using a percolation cell with four metal bars, drastic streak artifacts arise in the filtered backprojection (FBP) reconstruction at certain orientations. Projections with non-trivial variable truncation caused by the metal bars are the source of these variable-truncation artifacts. To understand the artifacts, a mathematical model of variable-truncation data as a function of metal bar radius and distance ... and artifact-reduction methods are designed in the context of FBP reconstruction, motivated by computational efficiency practical for large, real synchrotron data. While a specific variable-truncation case is considered, the proposed methods can be applied to general data cut-offs arising in different in situ x-ray ...
Propagation of a general-type beam through a truncated fractional Fourier transform optical system.
Zhao, Chengliang; Cai, Yangjian
2010-03-01
Paraxial propagation of a general-type beam through a truncated fractional Fourier transform (FRT) optical system is investigated. Analytical formulas for the electric field and effective beam width of a general-type beam in the FRT plane are derived based on the Collins formula. Our formulas can be used to study the propagation of a variety of laser beams--such as Gaussian, cos-Gaussian, cosh-Gaussian, sine-Gaussian, sinh-Gaussian, flat-topped, Hermite-cosh-Gaussian, Hermite-sine-Gaussian, higher-order annular Gaussian, Hermite-sinh-Gaussian and Hermite-cos-Gaussian beams--through a FRT optical system with or without truncation. The propagation properties of a Hermite-cos-Gaussian beam passing through a rectangularly truncated FRT optical system are studied as a numerical example. Our results clearly show that the truncated FRT optical system provides a convenient way for laser beam shaping.
truncSP: An R Package for Estimation of Semi-Parametric Truncated Linear Regression Models
Directory of Open Access Journals (Sweden)
Maria Karlsson
2014-05-01
Full Text Available Problems with truncated data occur in many areas, complicating estimation and inference. Regarding linear regression models, the ordinary least squares estimator is inconsistent and biased for these types of data and is therefore unsuitable for use. Alternative estimators, designed for the estimation of truncated regression models, have been developed. This paper presents the R package truncSP. The package contains functions for the estimation of semi-parametric truncated linear regression models using three different estimators: the symmetrically trimmed least squares, quadratic mode, and left truncated estimators, all of which have been shown to have good asymptotic and finite sample properties. The package also provides functions for the analysis of the estimated models. Data from the environmental sciences are used to illustrate the functions in the package.
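Why ordinary least squares is inconsistent and biased under truncation is easy to see by simulation: fitting a line only to observations whose response exceeds a cut-off attenuates the slope. A stdlib sketch on simulated data (illustrative of the problem the package addresses, not of the truncSP estimators themselves):

```python
import random
import statistics

random.seed(7)

def ols_slope(xs, ys):
    """Slope of a simple least-squares line fit."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# True model: y = 1 + 2x + e, with e ~ N(0, 1).
xs, ys = [], []
for _ in range(20000):
    x = random.uniform(0, 1)
    xs.append(x)
    ys.append(1 + 2 * x + random.gauss(0, 1))

full = ols_slope(xs, ys)

# Left-truncated sample: observations with y <= 1.5 are never recorded.
txy = [(x, y) for x, y in zip(xs, ys) if y > 1.5]
trunc = ols_slope([x for x, _ in txy], [y for _, y in txy])

# OLS recovers the slope on the full sample but attenuates it under truncation.
assert abs(full - 2.0) < 0.1
assert trunc < full - 0.3
```

Estimators such as symmetrically trimmed least squares are designed to restore consistency in exactly this situation.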
Comparison of Prediction-Error-Modelling Criteria
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
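A single-step prediction-error criterion of the least-squares type can be sketched for a scalar state-space model: run a Kalman filter, collect the one-step innovations, and sum their squares as a function of the candidate parameter. All model choices below are illustrative, not the paper's setup:

```python
import random

random.seed(1)
a, q, r = 0.9, 0.5, 1.0   # true transition and process/measurement noise variances

# Simulate the scalar system x_{k+1} = a x_k + w_k, y_k = x_k + v_k.
x, ys = 0.0, []
for _ in range(500):
    x = a * x + random.gauss(0, q ** 0.5)
    ys.append(x + random.gauss(0, r ** 0.5))

def pem_criterion(a_hat, ys, q=0.5, r=1.0):
    """Sum of squared one-step prediction errors from a scalar Kalman filter."""
    xhat, p, sse = 0.0, 1.0, 0.0
    for y in ys:
        # Predict.
        xpred = a_hat * xhat
        ppred = a_hat * a_hat * p + q
        # One-step prediction error (innovation).
        e = y - xpred
        sse += e * e
        # Update.
        k = ppred / (ppred + r)
        xhat = xpred + k * e
        p = (1 - k) * ppred
    return sse

# The least-squares PEM criterion is smaller near the true parameter.
assert pem_criterion(0.9, ys) < pem_criterion(0.2, ys)
```

A maximum-likelihood variant would additionally weight each squared innovation by its predicted variance and accumulate the corresponding log-determinant term.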
Position Error Covariance Matrix Validation and Correction
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
Help prevent hospital errors (MedlinePlus patient instructions, //medlineplus.gov/ency/patientinstructions/000618.htm): guidance on what patients can do to stay safe in the hospital, including when having surgery.
2012-03-01
This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...
International Nuclear Information System (INIS)
Jeach, J.L.
1976-01-01
When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
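The interaction described above can be checked with a small simulation: repeated weighings of a fixed calibration standard (a hypothetical 100.123 g mass) are rounded to a fine or a coarse grid, and the correlation between rounding error and weighing error is computed. This is an illustrative sketch, not the MERDA moment-estimation method itself:

```python
import random

random.seed(1)

def rounding_vs_weighing(grid, n=20000):
    """Correlation and mean of the rounding error when repeated readings
    of one calibration standard are rounded to a grid of given width."""
    true_mass = 100.123            # hypothetical standard mass, in grams
    weigh, rnd = [], []
    for _ in range(n):
        e = random.gauss(0.0, 0.05)             # weighing error, sd 0.05 g
        reading = true_mass + e
        rounded = grid * round(reading / grid)  # recorded value
        weigh.append(e)
        rnd.append(rounded - reading)           # rounding error
    mw = sum(weigh) / n
    mr = sum(rnd) / n
    cov = sum((a - mw) * (b - mr) for a, b in zip(weigh, rnd))
    var_w = sum((a - mw) ** 2 for a in weigh)
    var_r = sum((b - mr) ** 2 for b in rnd)
    return cov / (var_w * var_r) ** 0.5, mr

corr_fine, mean_fine = rounding_vs_weighing(0.01)     # grid << weighing error
corr_coarse, mean_coarse = rounding_vs_weighing(1.0)  # grid >> weighing error
print(f"fine grid:   corr {corr_fine:+.2f}, mean {mean_fine:+.4f}")
print(f"coarse grid: corr {corr_coarse:+.2f}, mean {mean_coarse:+.4f}")
```

With coarse grouping the recorded value barely responds to the weighing noise, so the rounding error is almost perfectly anticorrelated with it and its mean sits far from zero, which is exactly why the abstract warns against ignoring it.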
Spotting software errors sooner
International Nuclear Information System (INIS)
Munro, D.
1989-01-01
Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)
International Nuclear Information System (INIS)
Kop, L.
2001-01-01
On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills of its customers. It appeared that in the year 2000 many small, but also some big, errors were discovered in the bills of 42 businesses.
Medical Errors Reduction Initiative
National Research Council Canada - National Science Library
Mutter, Michael L
2005-01-01
The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...
Energy Technology Data Exchange (ETDEWEB)
Zamonsky, O M [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)
2000-07-01
The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, for certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.
Lee, Ho; Fahimian, Benjamin P.; Xing, Lei
2017-03-01
This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method’s performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
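The stripe-interpolation step can be sketched in miniature. The paper uses 1D B-spline interpolation/extrapolation; the toy version below uses plain linear interpolation on a single detector row, where scatter samples are known only at the blocked-stripe positions and `None` marks unblocked positions to be estimated (all names and values are illustrative):

```python
def interpolate_scatter(profile, blocked):
    """Estimate a 1D scatter profile from samples known only at the
    indices in `blocked` (the shaded stripe regions).  A linear scheme
    stands in for the 1D B-spline fit used in the paper."""
    known = sorted(blocked)
    out = []
    for i in range(len(profile)):
        if i in blocked:
            out.append(profile[i])
            continue
        left = max((k for k in known if k < i), default=None)
        right = min((k for k in known if k > i), default=None)
        if left is None:            # extrapolate flat at the edges
            out.append(profile[right])
        elif right is None:
            out.append(profile[left])
        else:                       # linear interpolation between stripes
            t = (i - left) / (right - left)
            out.append((1 - t) * profile[left] + t * profile[right])
    return out

row = [None, 10.0, None, None, None, 20.0, None]
est = interpolate_scatter(row, {1, 5})
print(est)  # [10.0, 10.0, 12.5, 15.0, 17.5, 20.0, 20.0]
```

Averaging the scatter maps of the two adjacent blocked projections, as the paper does for unblocked projections, then amounts to averaging two such estimated profiles.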
Applying Intelligent Algorithms to Automate the Identification of Error Factors.
Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han
2018-05-03
Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task that requires an analyst with a professional medical background, so methods are needed to extract medical error factors and to reduce the difficulty of extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted and closely related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Additionally, compared to BPNN, partial least squares regression, and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy and was able to promptly identify the error factors from the error-related items. The combination of error-related items, their different levels, and the GA-BPNN model was proposed as an error-factor identification technology that can automatically identify medical error factors.
Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris
2014-07-01
Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to
DEFF Research Database (Denmark)
Rasmussen, Jens
1983-01-01
An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.
Institute of Scientific and Technical Information of China (English)
CHENG Shi-lun; YANG Zhen
2008-01-01
To maximize throughput and to satisfy users' requirements in cognitive radios, a cross-layer optimization problem combining adaptive modulation and power control at the physical layer and truncated automatic repeat request at the medium access control layer is proposed. Simulation results show the combination of power control, adaptive modulation, and truncated automatic repeat request can regulate transmitter powers and increase the total throughput effectively.
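For the medium-access-control part, the effect of truncated ARQ can be captured with standard truncated-geometric bookkeeping (a generic sketch, not the paper's cross-layer optimizer; parameter values are illustrative):

```python
def truncated_arq(per, max_retx, rate):
    """Expected goodput and residual loss for truncated ARQ.

    A packet is retransmitted on error at most `max_retx` times, so at most
    max_retx + 1 transmission attempts are made; `per` is the packet error
    rate of a single attempt and `rate` the physical-layer rate in bit/s.
    """
    attempts = max_retx + 1
    p_loss = per ** attempts                  # all attempts fail
    # Expected number of transmissions per packet (truncated geometric).
    e_tx = (1 - per ** attempts) / (1 - per)
    goodput = rate * (1 - p_loss) / e_tx
    return goodput, p_loss

g, pl = truncated_arq(per=0.1, max_retx=2, rate=1e6)
print(f"goodput = {g:.0f} bit/s, residual loss = {pl:.0e}")
```

With per = 0.1 and at most two retransmissions, the residual packet loss drops to 10⁻³ while the goodput settles at rate × (1 − per); adaptive modulation and power control act on `per` and `rate` in this trade-off.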
Reduction of variable-truncation artifacts from beam occlusion during in situ x-ray tomography
Borg, Leise; Jørgensen, Jakob S.; Frikel, Jürgen; Sporring, Jon
2017-12-01
Many in situ x-ray tomography studies require experimental rigs which may partially occlude the beam and cause parts of the projection data to be missing. In a study of fluid flow in porous chalk using a percolation cell with four metal bars drastic streak artifacts arise in the filtered backprojection (FBP) reconstruction at certain orientations. Projections with non-trivial variable truncation caused by the metal bars are the source of these variable-truncation artifacts. To understand the artifacts a mathematical model of variable-truncation data as a function of metal bar radius and distance to sample is derived and verified numerically and with experimental data. The model accurately describes the arising variable-truncation artifacts across simulated variations of the experimental setup. Three variable-truncation artifact-reduction methods are proposed, all aimed at addressing sinogram discontinuities that are shown to be the source of the streaks. The ‘reduction to limited angle’ (RLA) method simply keeps only non-truncated projections; the ‘detector-directed smoothing’ (DDS) method smooths the discontinuities; while the ‘reflexive boundary condition’ (RBC) method enforces a zero derivative at the discontinuities. Experimental results using both simulated and real data show that the proposed methods effectively reduce variable-truncation artifacts. The RBC method is found to provide the best artifact reduction and preservation of image features using both visual and quantitative assessment. The analysis and artifact-reduction methods are designed in context of FBP reconstruction motivated by computational efficiency practical for large, real synchrotron data. While a specific variable-truncation case is considered, the proposed methods can be applied to general data cut-offs arising in different in situ x-ray tomography experiments.
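The idea behind the RBC method can be illustrated on a single sinogram row. The much-simplified sketch below removes the discontinuity by leaving each cut with zero slope, extending the boundary samples flat into the truncated gap (the actual method operates on the full variable-truncation sinogram before FBP):

```python
def rbc_fill(row, lo, hi):
    """Fill the truncated interval [lo, hi) of a sinogram row so that the
    signal leaves each cut with zero slope: the boundary samples are
    extended flat towards the midpoint of the gap.  A simplified stand-in
    for the reflexive-boundary-condition (RBC) idea."""
    filled = list(row)
    mid = (lo + hi) // 2
    for i in range(lo, mid):
        filled[i] = row[lo - 1]        # flat extension from the left edge
    for i in range(mid, hi):
        filled[i] = row[hi]            # flat extension from the right edge
    return filled

row = [1.0, 2.0, None, None, None, None, 7.0, 8.0]
filled = rbc_fill(row, 2, 6)
print(filled)  # [1.0, 2.0, 2.0, 2.0, 7.0, 7.0, 7.0, 8.0]
```

The zero derivative at the two cut positions is what suppresses the streaks; the interior of the gap contributes little to the reconstruction.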
International Nuclear Information System (INIS)
Workman, R. L.; Tiator, L.; Wunderlich, Y.; Doring, M.; Haberzettl, H.
2017-01-01
Here, we compare the methods of amplitude reconstruction, for a complete experiment and a truncated partial-wave analysis, applied to the photoproduction of pseudoscalar mesons. The approach is pedagogical, showing in detail how the amplitude reconstruction (observables measured at a single energy and angle) is related to a truncated partial-wave analysis (observables measured at a single energy and a number of angles).
Energy Technology Data Exchange (ETDEWEB)
Viswanathan, K. K.; Aziz, Z. A.; Javed, Saira; Yaacob, Y. [Universiti Teknologi Malaysia, Johor Bahru (Malaysia); Pullepu, Babuji [S R M University, Chennai (India)
2015-05-15
Free vibration of symmetric angle-ply laminated truncated conical shell is analyzed to determine the effects of frequency parameter and angular frequencies under different boundary condition, ply angles, different material properties and other parameters. The governing equations of motion for truncated conical shell are obtained in terms of displacement functions. The displacement functions are approximated by cubic and quintic splines resulting into a generalized eigenvalue problem. The parametric studies have been made and discussed.
H.B. Kekre; Sudeep Thepade; Karan Dhamejani; Sanchit Khandelwal; Adnan Azmi
2012-01-01
The paper presents a performance analysis of Multilevel Block Truncation Coding based Face Recognition among widely used color spaces. In [1], Multilevel Block Truncation Coding was applied on the RGB color space up to four levels for face recognition. Better results were obtained when the proposed technique was implemented using Kekre’s LUV (K’LUV) color space [25]. This was the motivation to test the proposed technique using assorted color spaces. For experimental analysis, two face databas...
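A single level of Block Truncation Coding, the primitive underlying the multilevel scheme, can be sketched as follows: each block is reduced to a bitmap plus two quantization levels chosen so that the block mean and standard deviation are preserved (generic one-level BTC, not the multilevel K'LUV variant of the paper):

```python
def btc_block(pixels):
    """One-level Block Truncation Coding of a pixel block: quantize to two
    levels chosen to preserve the block's mean and standard deviation."""
    n = len(pixels)
    mean = sum(pixels) / n
    sd = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5
    bitmap = [1 if p >= mean else 0 for p in pixels]
    q = sum(bitmap)                    # number of pixels at/above the mean
    if q in (0, n):                    # flat block: one level suffices
        return bitmap, mean, mean
    high = mean + sd * ((n - q) / q) ** 0.5
    low = mean - sd * (q / (n - q)) ** 0.5
    return bitmap, low, high

block = [10, 12, 50, 52, 11, 49, 51, 13]
bitmap, low, high = btc_block(block)
recon = [high if b else low for b in bitmap]
# The two-level reconstruction preserves the block mean exactly:
print(sum(block) / len(block), sum(recon) / len(recon))
```

In the multilevel face-recognition scheme, such bitmaps computed at several thresholds per color channel form the feature vector.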
2008-01-01
One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177
Thermodynamics of Error Correction
Directory of Open Access Journals (Sweden)
Pablo Sartori
2015-12-01
Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Directory of Open Access Journals (Sweden)
Andrea Caliciotti
2018-04-01
Full Text Available In this paper, we report data and experiments related to the research article entitled "An adaptive truncation criterion, for linesearch-based truncated Newton methods in large scale nonconvex optimization" by Caliciotti et al. [1]. In particular, in Caliciotti et al. [1], large scale unconstrained optimization problems are considered by applying linesearch-based truncated Newton methods. In this framework, a key point is the reduction of the number of inner iterations needed, at each outer iteration, to approximately solve the Newton equation. A novel adaptive truncation criterion is introduced in Caliciotti et al. [1] to this aim. Here, we report the details concerning numerical experiences over a commonly used test set, namely CUTEst (Gould et al., 2015 [2]). Moreover, comparisons are reported in terms of performance profiles (Dolan and Moré, 2002 [3]), adopting different parameter settings. Finally, our linesearch-based scheme is compared with a renowned trust region method, namely TRON (Lin and Moré, 1999 [4]).
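The role of the truncation criterion is visible in a minimal linesearch-based truncated Newton method. The sketch below uses the classical residual ("forcing term") test to stop the inner conjugate-gradient iterations, which is where the adaptive criterion of Caliciotti et al. would substitute a refined rule; the Rosenbrock test function and all tolerances are illustrative:

```python
def rosen(x):
    a, b = x
    return (1 - a) ** 2 + 100 * (b - a * a) ** 2

def grad_rosen(x):
    """Gradient of the 2D Rosenbrock function."""
    a, b = x
    return [-2 * (1 - a) - 400 * a * (b - a * a), 200 * (b - a * a)]

def hess_vec(x, v, eps=1e-6):
    """Hessian-vector product by forward-differencing the gradient."""
    g0 = grad_rosen(x)
    g1 = grad_rosen([x[i] + eps * v[i] for i in range(2)])
    return [(g1[i] - g0[i]) / eps for i in range(2)]

def truncated_newton(x, iters=100):
    for _ in range(iters):
        g = grad_rosen(x)
        gnorm = sum(gi * gi for gi in g) ** 0.5
        if gnorm < 1e-8:
            break
        # Inner CG for H d = -g, truncated by a residual test.
        eta = min(0.5, gnorm ** 0.5)           # classical forcing term
        d, r = [0.0, 0.0], [-gi for gi in g]
        p = list(r)
        rr = sum(ri * ri for ri in r)
        for _ in range(20):
            Hp = hess_vec(x, p)
            pHp = sum(p[i] * Hp[i] for i in range(2))
            if pHp <= 0:                       # negative curvature: stop
                break
            alpha = rr / pHp
            d = [d[i] + alpha * p[i] for i in range(2)]
            r = [r[i] - alpha * Hp[i] for i in range(2)]
            rr_new = sum(ri * ri for ri in r)
            if rr_new ** 0.5 <= eta * gnorm:   # truncation criterion
                break
            p = [r[i] + (rr_new / rr) * p[i] for i in range(2)]
            rr = rr_new
        if d == [0.0, 0.0]:
            d = [-gi for gi in g]              # steepest-descent fallback
        # Backtracking (Armijo) linesearch.
        t, f0 = 1.0, rosen(x)
        slope = sum(g[i] * d[i] for i in range(2))
        while rosen([x[i] + t * d[i] for i in range(2)]) > f0 + 1e-4 * t * slope:
            t *= 0.5
            if t < 1e-12:
                break
        x = [x[i] + t * d[i] for i in range(2)]
    return x

xmin = truncated_newton([-1.2, 1.0])
print(xmin)  # close to the minimizer [1, 1]
```

Tightening or loosening `eta` trades inner iterations against outer progress, which is the balance the adaptive criterion targets.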
The Statistical Analysis of Failure Time Data
Kalbfleisch, John D
2011-01-01
Contains additional discussion and examples on left truncation, as well as material on more general censoring and truncation patterns. The martingale and counting process formulation will be in a new chapter. Develops multivariate failure time data in a separate chapter and extends the material on Markov and semi-Markov formulations. Presents new examples and applications of data analysis.
Directory of Open Access Journals (Sweden)
Antonio Boldrini
2013-06-01
Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience at the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that promote fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research
Modified Truncated Multiplicity Analysis to Improve Verification of Uranium Fuel Cycle Materials
International Nuclear Information System (INIS)
LaFleur, A.; Miller, K.; Swinhoe, M.; Belian, A.; Croft, S.
2015-01-01
Accurate verification of 235U enrichment and mass in UF6 storage cylinders and the UO2F2 holdup contained in the process equipment is needed to improve international safeguards and nuclear material accountancy at uranium enrichment plants. Small UF6 cylinders (1.5'' and 5'' diameter) are used to store the full range of enrichments from depleted to highly-enriched UF6. For independent verification of these materials, it is essential that the 235U mass and enrichment measurements do not rely on facility operator declarations. Furthermore, in order to be deployed by IAEA inspectors to detect undeclared activities (e.g., during complementary access), it is also imperative that the measurement technique is quick, portable, and sensitive to a broad range of 235U masses. Truncated multiplicity analysis is a technique that reduces the variance in the measured count rates by only considering moments 1, 2, and 3 of the multiplicity distribution. This is especially important for reducing the uncertainty in the measured doubles and triples rates in environments with a high cosmic ray background relative to the uranium signal strength. However, we believe that the existing truncated multiplicity analysis throws away too much useful data by truncating the distribution after the third moment. This paper describes a modified truncated multiplicity analysis method that determines the optimal moment to truncate the multiplicity distribution based on the measured data. Experimental measurements of small UF6 cylinders and UO2F2 working reference materials were performed at Los Alamos National Laboratory (LANL). The data were analyzed using traditional and modified truncated multiplicity analysis to determine the optimal moment to truncate the multiplicity distribution to minimize the uncertainty in the measured count rates. The results from this analysis directly support nuclear safeguards at enrichment plants and provide a more accurate verification method for UF6
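Truncating the multiplicity distribution after the third moment amounts to keeping only the first three reduced factorial moments of the measured multiplicity histogram. A minimal sketch (illustrative Poisson data, not neutron-coincidence rates):

```python
import math
import random

random.seed(2)

def poisson(lam):
    """Knuth's method for a Poisson draw (stand-in multiplicity data)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def reduced_factorial_moments(counts, kmax=3):
    """First kmax reduced factorial moments of a multiplicity sample:
    m_k = E[n(n-1)...(n-k+1)] / k!.  Truncated multiplicity analysis keeps
    only these low-order moments to reduce the variance of the rates."""
    total = len(counts)
    return [sum(math.perm(n, k) for n in counts) / total / math.factorial(k)
            for k in range(1, kmax + 1)]

sample = [poisson(2.0) for _ in range(20000)]
moments = reduced_factorial_moments(sample)
print(moments)
```

For a Poisson distribution with mean λ, the k-th reduced factorial moment is λ^k/k!, which the sample estimates recover; the modified analysis in the paper chooses the truncation order from the data rather than fixing it at three.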
Flow equation of quantum Einstein gravity in a higher-derivative truncation
International Nuclear Information System (INIS)
Lauscher, O.; Reuter, M.
2002-01-01
Motivated by recent evidence indicating that quantum Einstein gravity (QEG) might be nonperturbatively renormalizable, the exact renormalization group equation of QEG is evaluated in a truncation of theory space which generalizes the Einstein-Hilbert truncation by the inclusion of a higher-derivative term (R²). The beta functions describing the renormalization group flow of the cosmological constant, Newton's constant, and the R² coupling are computed explicitly. The fixed point properties of the 3-dimensional flow are investigated, and they are confronted with those of the 2-dimensional Einstein-Hilbert flow. The non-Gaussian fixed point predicted by the latter is found to generalize to a fixed point on the enlarged theory space. In order to test the reliability of the R² truncation near this fixed point we analyze the residual scheme dependence of various universal quantities; it turns out to be very weak. The two truncations are compared in detail, and their numerical predictions are found to agree with a surprisingly high precision. Because of the consistency of the results it appears increasingly unlikely that the non-Gaussian fixed point is an artifact of the truncation. If it is present in the exact theory, QEG is probably nonperturbatively renormalizable and "asymptotically safe." We discuss how the conformal factor problem of Euclidean gravity manifests itself in the exact renormalization group approach and show that, in the R² truncation, the investigation of the fixed point is not afflicted with this problem. Also the Gaussian fixed point of the Einstein-Hilbert truncation is analyzed; it turns out that it does not generalize to a corresponding fixed point on the enlarged theory space.
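Schematically, the R² truncation restricts the flow of the effective average action to an ansatz of the form (a sketch; conventions and signs vary between references):

```latex
\Gamma_k[g] \;=\; \int \mathrm{d}^4x \, \sqrt{g}\,
\left\{ \frac{1}{16\pi G_k}\left(-R + 2\Lambda_k\right) + \beta_k\, R^2 \right\}
```

so that the exact renormalization group equation closes on the three running couplings $G_k$, $\Lambda_k$, and $\beta_k$, whose beta functions span the 3-dimensional flow discussed above; setting $\beta_k = 0$ recovers the Einstein-Hilbert truncation.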
LIBERTARISMO & ERROR CATEGORIAL
Directory of Open Access Journals (Sweden)
Carlos G. Patarroyo G.
2009-01-01
Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis of the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.
1985-01-01
A mathematical theory for the development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software that is logically error-free and that, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES: a user can write in English and the system converts this to computer languages. It is employed by several large corporations.
Directory of Open Access Journals (Sweden)
Philippe Roch
2004-01-01
Full Text Available We previously reported the crucial role played by loop 3 of defensin isolated from the Mediterranean mussel, Mytilus galloprovincialis, in antibacterial and antifungal activities. We have now investigated the antiprotozoan and antiviral activities of some previously reported fragments B, D, E, P and Q. Two fragments (D and P) efficiently killed Trypanosoma brucei (ID50 4–12 μM) and Leishmania major (ID50 12–45 μM) in a time/dose-dependent manner. Killing of T. brucei started as early as 1 h after initiation of contact with fragment D and reached 55% mortality after 6 h. Killing was temperature dependent, and a temperature of 4°C efficiently impaired the ability to kill T. brucei. Fragments bound to the entire external epithelium of T. brucei. Prevention of HIV-1 infection was obtained only with fragments P and Q at 20 μM. Even though fragment P was active on both targets, the specificity of fragments D and Q suggests that the antiprotozoan and antiviral activities are mediated by different mechanisms. Truncated sequences of mussel defensin, including amino acid replacements to maintain the 3D structure and increase the positive net charge, also possess antiprotozoan and antiviral capabilities. New alternative and/or complementary antibiotics can be derived from the vast reservoir of natural antimicrobial peptides (AMPs) contained in marine invertebrates.
Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.
Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J
2018-02-15
Electrically active brain regions can be located applying MUltiple SIgnal Classification (MUSIC) on magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm, which prevents estimation of the true number of brain-signal sources accurately. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time-courses, and initial estimate for the dimension of the signal space. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications. Copyright © 2017 Elsevier Inc. All rights reserved.
Parametric instability analysis of truncated conical shells using the Haar wavelet method
Dai, Qiyi; Cao, Qingjie
2018-05-01
In this paper, the Haar wavelet method is employed to analyze the parametric instability of truncated conical shells under static and time-dependent periodic axial loads. The present work is based on the Love first-approximation theory for classical thin shells. The displacement field is expressed as a Haar wavelet series in the axial direction and trigonometric functions in the circumferential direction. The partial differential equations are then reduced to a system of coupled Mathieu-type ordinary differential equations describing the dynamic instability behavior of the shell. Using Bolotin's method, the first-order and second-order approximations of the principal instability regions are determined. The correctness of the present method is examined by comparing the results with those in the literature, and very good agreement is observed. The difference between the first-order and second-order approximations of the principal instability regions for tensile and compressive loads is also investigated. Finally, numerical results are presented to bring out the influence of various parameters, such as static load factors, boundary conditions and shell geometrical characteristics, on the domains of parametric instability of conical shells.
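The notion of a principal instability region can be illustrated with a direct Floquet computation for a single Mathieu equation, the scalar analogue of the coupled Mathieu-type system above (illustrative parameters; Bolotin's method bounds the same regions analytically):

```python
import math

def mathieu_unstable(a, q, steps=2000):
    """Floquet test for the Mathieu equation x'' + (a - 2 q cos 2t) x = 0:
    integrate two independent solutions over one period (pi) with RK4 and
    check the monodromy trace; |trace| > 2 means parametric instability."""
    def f(t, y):
        x, v = y
        return (v, -(a - 2 * q * math.cos(2 * t)) * x)

    def integrate(y):
        t, h = 0.0, math.pi / steps
        for _ in range(steps):
            k1 = f(t, y)
            k2 = f(t + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
            k3 = f(t + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
            k4 = f(t + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
            y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
                 y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
            t += h
        return y

    u = integrate((1.0, 0.0))       # solution with x(0)=1, x'(0)=0
    w = integrate((0.0, 1.0))       # solution with x(0)=0, x'(0)=1
    trace = u[0] + w[1]             # trace of the monodromy matrix
    return abs(trace) > 2.0

print(mathieu_unstable(1.0, 0.5))   # inside the principal tongue at a = 1
print(mathieu_unstable(3.0, 0.1))   # between tongues: stable
```

Scanning `(a, q)` with this test traces out the instability tongues that the first- and second-order Bolotin approximations bound.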
Chin, Wen Cheong; Lee, Min Cherng; Yap, Grace Lee Ching
2016-01-01
High frequency financial data modelling has become one of the important research areas in financial econometrics. However, possible structural breaks in volatile financial time series often trigger inconsistency issues in volatility estimation. In this study, we propose a structural-break heavy-tailed heterogeneous autoregressive (HAR) volatility econometric model enhanced with jump-robust estimators. The breakpoints in the volatility are captured by dummy variables after detection by the Bai-Perron sequential multiple-breakpoint procedure. To further deal with possible abrupt jumps in the volatility, the jump-robust volatility estimators are composed using the nearest neighbor truncation approach, namely the minimum and median realized volatility. With the structural-break improvements in both the models and the volatility estimators, the empirical findings show that the modified HAR model provides the best in-sample and out-of-sample forecast evaluations as compared with the standard HAR models. Accurate volatility forecasts have a direct influence on applications in risk management and investment portfolio analysis.
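The nearest-neighbor-truncation estimator can be sketched directly. The median realized volatility (MedRV) below uses the median of three adjacent absolute returns, so a single jump never dominates any triple; the simulated path and jump size are illustrative:

```python
import math
import random

random.seed(3)

def med_rv(returns):
    """MedRV: jump-robust integrated-variance estimator built by
    nearest-neighbor truncation -- each term is the squared median of
    three adjacent absolute returns."""
    n = len(returns)
    scale = math.pi / (6 - 4 * math.sqrt(3) + math.pi) * n / (n - 2)
    body = sum(
        sorted((abs(returns[i - 1]), abs(returns[i]), abs(returns[i + 1])))[1] ** 2
        for i in range(1, n - 1)
    )
    return scale * body

# Constant-volatility returns plus one large jump (illustrative sizes):
# plain realized variance absorbs the jump, MedRV largely ignores it.
n, sigma = 10000, 0.01
returns = [random.gauss(0.0, sigma) for _ in range(n)]
returns[5000] += 0.5                  # a single jump
rv = sum(x * x for x in returns)      # plain realized variance
medrv = med_rv(returns)
print(f"true IV {n * sigma**2:.2f}, RV {rv:.2f}, MedRV {medrv:.2f}")
```

The minimum realized volatility (MinRV) follows the same pattern with the minimum of two adjacent absolute returns and its own scaling constant.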
Directory of Open Access Journals (Sweden)
Cláudia da Silva
2011-12-01
Full Text Available PURPOSE: To correlate the variables errors, time, speed and reading comprehension in students with learning disorders and in students without learning difficulties. METHODS: The participants were 40 students aged 8 to 12 years, of both genders, from the 2nd to 4th grades of municipal elementary education, divided into GI, comprising 20 students without learning difficulties, and GII, comprising 20 students with learning disorders. Texts selected on the recommendation of 2nd-to-4th-grade teachers of the municipal school system were used for oral reading. Comprehension was assessed through four questions presented after the reading of the text, which the students answered orally. RESULTS: There were differences between GI and GII in the number of errors, reading speed, reading comprehension and total reading time. The correlation between total reading time and errors made during reading was positive, and the correlation between total reading time and reading speed was negative. For GII, there was a difference, with a negative correlation between total reading time and reading speed. CONCLUSION: For students with learning disorders, performance on the correlated variables is altered, interfering with reading development and, consequently, with comprehension of the text read.
The Errors of Our Ways: Understanding Error Representations in Cerebellar-Dependent Motor Learning.
Popa, Laurentiu S; Streng, Martha L; Hewitt, Angela L; Ebner, Timothy J
2016-04-01
The cerebellum is essential for error-driven motor learning and is strongly implicated in detecting and correcting for motor errors. Therefore, elucidating how motor errors are represented in the cerebellum is essential in understanding cerebellar function, in general, and its role in motor learning, in particular. This review examines how motor errors are encoded in the cerebellar cortex in the context of a forward internal model that generates predictions about the upcoming movement and drives learning and adaptation. In this framework, sensory prediction errors, defined as the discrepancy between the predicted consequences of motor commands and the sensory feedback, are crucial for both on-line movement control and motor learning. While many studies support the dominant view that motor errors are encoded in the complex spike discharge of Purkinje cells, others have failed to relate complex spike activity with errors. Given these limitations, we review recent findings in the monkey showing that complex spike modulation is not necessarily required for motor learning or for simple spike adaptation. Also, new results demonstrate that the simple spike discharge provides continuous error signals that both lead and lag the actual movements in time, suggesting errors are encoded as both an internal prediction of motor commands and the actual sensory feedback. These dual error representations have opposing effects on simple spike discharge, consistent with the signals needed to generate sensory prediction errors used to update a forward internal model.
NLO error propagation exercise: statistical results
International Nuclear Information System (INIS)
Pack, D.J.; Downing, D.J.
1985-09-01
Error propagation is the extrapolation and cumulation of uncertainty (variance) over total amounts of special nuclear material, for example, uranium or 235 U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235 U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio, from April 1 to July 1, 1983, in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation over uncorrelated primary error sources as suggested by Jaech; random-effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235 U inventory differences. Further, error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
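For a product of independent measurements, the "variance approximation by Taylor series expansion" mentioned above (the delta method) reduces to adding relative variances. A sketch with entirely hypothetical measurement values and uncertainties, none taken from the exercise itself:

```python
import math

def product_rel_variance(rel_sds):
    """First-order Taylor (delta-method) relative variance of a product of
    independent measurements: the relative variances simply add."""
    return sum(s ** 2 for s in rel_sds)

# Hypothetical item: net weight 100 kg (0.1% rel. sd), uranium concentration
# 0.85 (0.5% rel. sd), 235U enrichment 0.0095 (0.3% rel. sd).
weight, conc, enrich = 100.0, 0.85, 0.0095
u235_mass = weight * conc * enrich
rel_var = product_rel_variance([0.001, 0.005, 0.003])
sd = u235_mass * math.sqrt(rel_var)
limit_of_error = 2.0 * sd   # a two-sigma, LEID-style limit on this one item
print(u235_mass, sd, limit_of_error)
```

Cumulation over a material balance then follows the same pattern: variances of uncorrelated primary error sources add across transactions, giving the limit of error on the inventory difference.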
A high-order time-accurate interrogation method for time-resolved PIV
International Nuclear Information System (INIS)
Lynch, Kyle; Scarano, Fulvio
2013-01-01
A novel method is introduced for increasing the accuracy and extending the dynamic range of time-resolved particle image velocimetry (PIV). The approach extends the concept of particle tracking velocimetry by multiple frames to the pattern tracking by cross-correlation analysis as employed in PIV. The working principle is based on tracking the patterned fluid element, within a chosen interrogation window, along its individual trajectory throughout an image sequence. In contrast to image-pair interrogation methods, the fluid trajectory correlation concept deals with variable velocity along curved trajectories and non-zero tangential acceleration during the observed time interval. As a result, the velocity magnitude and its direction are allowed to evolve in a nonlinear fashion along the fluid element trajectory. The continuum deformation (namely spatial derivatives of the velocity vector) is accounted for by adopting local image deformation. The principle offers important reductions of the measurement error based on three main points: by enlarging the temporal measurement interval, the relative error becomes reduced; secondly, the random and peak-locking errors are reduced by the use of least-squares polynomial fits to individual trajectories; finally, the introduction of high-order (nonlinear) fitting functions provides the basis for reducing the truncation error. Lastly, the instantaneous velocity is evaluated as the temporal derivative of the polynomial representation of the fluid parcel position in time. The principal features of this algorithm are compared with a single-pair iterative image deformation method. Synthetic image sequences are considered with steady flow (translation, shear and rotation) illustrating the increase of measurement precision. An experimental data set obtained by time-resolved PIV measurements of a circular jet is used to verify the robustness of the method on image sequences affected by camera noise and three-dimensional motions. In
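The core fitting idea above (a least-squares polynomial over several frames reduces truncation error relative to a two-frame estimate) can be sketched on a synthetic 1-D trajectory. This illustrates only the polynomial-fit step, not the cross-correlation pattern tracking itself:

```python
import numpy as np

# Track a fluid parcel along x(t) = sin(t); sample five frames around t0.
t0, dt = 1.0, 0.1
t = t0 + dt * np.arange(-2, 3)
x = np.sin(t)                 # noiseless "tracked" positions
true_v = np.cos(t0)

# Two-frame central difference (image-pair analogue): O(dt^2) truncation error.
v_pair = (x[3] - x[1]) / (2.0 * dt)

# Cubic least-squares fit to the whole trajectory, then differentiate at t0.
coef = np.polyfit(t - t0, x, deg=3)
v_poly = np.polyval(np.polyder(coef), 0.0)

print(abs(v_pair - true_v), abs(v_poly - true_v))  # poly error is far smaller
```

With noisy positions the least-squares fit additionally averages random error across frames, which is the second error-reduction mechanism the abstract describes.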
Indian Academy of Sciences (India)
Science and Automation at ... the Reed-Solomon code contained 223 bytes of data (a byte ... then you have a data storage system with error correction, that ... practical codes, storing such a table is infeasible, as it is generally too large.
Indian Academy of Sciences (India)
Resonance – Journal of Science Education, Volume 2, Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article, March ... Author Affiliations: Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India
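The snippet above refers to RS(255, 223): 223 data bytes plus 32 parity bytes, correcting up to 16 byte errors. A minimal sketch of the evaluation-code view of Reed-Solomon, over the prime field GF(257) and handling erasures only (real RS(255, 223) works over GF(2^8) with a full error-locating decoder):

```python
P = 257  # prime, so every nonzero element has an inverse via Fermat

def lagrange_eval(pts, x0):
    """Evaluate the unique degree < len(pts) polynomial through pts at x0, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(msg, n):
    """Systematic encoding: message symbols sit at positions 0..k-1, and the
    interpolating polynomial is evaluated at positions k..n-1 for parity."""
    k = len(msg)
    pts = list(enumerate(msg))
    return msg + [lagrange_eval(pts, x) for x in range(k, n)]

def decode_erasures(received, k):
    """received: (position, symbol) pairs; any k survivors pin down the
    polynomial, so up to n - k erasures are tolerated."""
    pts = received[:k]
    return [lagrange_eval(pts, x) for x in range(k)]

msg = [10, 20, 30, 40]                            # k = 4
cw = encode(msg, n=8)                             # tolerates 4 erasures
survivors = [(i, cw[i]) for i in (1, 3, 4, 7)]    # positions 0, 2, 5, 6 lost
print(decode_erasures(survivors, k=4) == msg)     # True
```

The "table" the snippet says is infeasible to store is a syndrome-to-error lookup table; practical decoders compute error locations algebraically instead.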
Modeling the Effect of APC Truncation on Destruction Complex Function in Colorectal Cancer Cells
Barua, Dipak; Hlavacek, William S.
2013-01-01
In colorectal cancer cells, APC, a tumor suppressor protein, is commonly expressed in truncated form. Truncation of APC is believed to disrupt degradation of β-catenin, which is regulated by a multiprotein complex called the destruction complex. The destruction complex comprises APC, Axin, β-catenin, serine/threonine kinases, and other proteins. The kinases and , which are recruited by Axin, mediate phosphorylation of β-catenin, which initiates its ubiquitination and proteasomal degradation. The mechanism of regulation of β-catenin degradation by the destruction complex and the role of truncation of APC in colorectal cancer are not entirely understood. Through formulation and analysis of a rule-based computational model, we investigated the regulation of β-catenin phosphorylation and degradation by APC and the effect of APC truncation on the function of the destruction complex. The model integrates available mechanistic knowledge about site-specific interactions and phosphorylation of destruction complex components and is consistent with an array of published data. We find that the phosphorylated truncated form of APC can outcompete Axin for binding to β-catenin, provided that Axin is limiting, and thereby sequester β-catenin away from Axin and the Axin-recruited kinases and . Full-length APC also competes with Axin for binding to β-catenin; however, full-length APC is able, through its SAMP repeats, which bind Axin and which are missing in truncated oncogenic forms of APC, to bring β-catenin into indirect association with Axin and Axin-recruited kinases. Because our model indicates that the positive effects of truncated APC on β-catenin levels depend on phosphorylation of APC, at the first 20-amino acid repeat, and because phosphorylation of this site is mediated by , we suggest that is a potential target for therapeutic intervention in colorectal cancer. Specific inhibition of is predicted to limit binding of β-catenin to truncated
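The competition effect described above can be caricatured with simple mass-action partitioning (not the paper's rule-based, site-specific model). All concentrations and dissociation constants below are hypothetical, and the binders are assumed in excess so free concentrations approximate totals:

```python
def axin_bound_fraction(axin, apc_trunc, kd_axin, kd_apc):
    """Equilibrium fraction of beta-catenin bound to Axin when Axin and
    truncated APC compete for the same beta-catenin pool."""
    wa = axin / kd_axin
    wt = apc_trunc / kd_apc
    return wa / (1.0 + wa + wt)

# Hypothetical numbers (arbitrary units): Axin limiting, phosphorylated
# truncated APC abundant with comparable affinity.
with_apc = axin_bound_fraction(axin=0.01, apc_trunc=1.0, kd_axin=0.1, kd_apc=0.1)
no_apc = axin_bound_fraction(axin=0.01, apc_trunc=0.0, kd_axin=0.1, kd_apc=0.1)
print(with_apc, no_apc)   # abundant truncated APC sequesters beta-catenin
```

Even this toy partitioning reproduces the qualitative point: when Axin is limiting and truncated APC is abundant, the Axin-bound (degradation-competent) fraction of β-catenin collapses.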
Closed-form kinetic parameter estimation solution to the truncated data problem
International Nuclear Information System (INIS)
Zeng, Gengsheng L; Kadrmas, Dan J; Gullberg, Grant T
2010-01-01
In a dedicated cardiac single photon emission computed tomography (SPECT) system, the detectors are focused on the heart and the background is truncated in the projections. Reconstruction using truncated data results in biased images, leading to inaccurate kinetic parameter estimates. This paper has developed a closed-form kinetic parameter estimation solution to the dynamic emission imaging problem. This solution is insensitive to the bias in the reconstructed images that is caused by the projection data truncation. This paper introduces two new ideas: (1) it includes background bias as an additional parameter to estimate, and (2) it presents a closed-form solution for compartment models. The method is based on the following two assumptions: (i) the amount of the bias is directly proportional to the truncated activities in the projection data, and (ii) the background concentration is directly proportional to the concentration in the myocardium. In other words, the method assumes that the image slice contains only the heart and the background, without other organs, that the heart is not truncated, and that the background radioactivity is directly proportional to the radioactivity in the blood pool. As long as the background activity can be modeled, the proposed method is applicable regardless of the number of compartments in the model. For simplicity, the proposed method is presented and verified using a single compartment model with computer simulations using both noiseless and noisy projections.
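A simplified single-compartment sketch of the key idea above: the truncation artifact is modeled as an extra term proportional to the blood activity and estimated jointly with the kinetic parameters. Unlike the paper's closed-form solution, this toy version grids the nonlinear rate k2 and solves linearly for K1 and the bias; all curves are synthetic:

```python
import numpy as np

def one_compartment_tac(cp, dt, K1, k2):
    """Discrete C_T(t) = K1 * convolution of Cp with exp(-k2 t)."""
    t = np.arange(cp.size) * dt
    kernel = np.exp(-k2 * t)
    return K1 * np.convolve(cp, kernel)[:cp.size] * dt

dt = 1.0
t = np.arange(60) * dt
cp = np.exp(-0.1 * t)                      # synthetic input function
true_K1, true_k2, true_bias = 0.8, 0.3, 0.05
# Measured tissue curve plus a truncation bias proportional to blood activity.
tac = one_compartment_tac(cp, dt, true_K1, true_k2) + true_bias * cp

best = None
for k2 in np.arange(0.05, 0.6, 0.05):      # grid over the nonlinear parameter
    A = np.column_stack([one_compartment_tac(cp, dt, 1.0, k2), cp])
    (K1, bias), _res, *_ = np.linalg.lstsq(A, tac, rcond=None)
    r = np.sum((A @ [K1, bias] - tac) ** 2)
    if best is None or r < best[0]:
        best = (r, k2, K1, bias)
print(best[1:])   # approximately (0.3, 0.8, 0.05)
```

Because the bias enters the model linearly, it is absorbed by the fit rather than corrupting K1 and k2, which is the behavior the abstract's insensitivity claim rests on.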
Transiently truncated and differentially regulated expression of midkine during mouse embryogenesis
International Nuclear Information System (INIS)
Chen Qin; Yuan Yuanyang; Lin Shuibin; Chang Youde; Zhuo Xinming; Wei Wei; Tao Ping; Ruan Lingjuan; Li Qifu; Li Zhixing
2005-01-01
Midkine (MK) is a retinoic acid-responsive cytokine, mostly expressed in embryonic tissues. Aberrant expression of MK has been found in numerous cancers. In humans, a truncated MK is expressed specifically in tumor/cancer tissues. Here we report the discovery of a novel truncated form of MK transiently expressed during normal mouse embryonic development. In addition, MK is concentrated at the interface between developing epithelium and mesenchyme as well as in highly proliferating cells. Its expression, which is closely coordinated with angiogenesis and vasculogenesis, is spatiotemporally regulated, peaking during the period of extensive organogenesis and in undifferentiated cells and tailing off in maturing cells, implying a role in nascent blood vessel (endothelial) signaling of tissue differentiation and in stem cell renewal/differentiation. Cloning and sequencing analysis revealed that the embryonic truncated MK, in which the conserved domain is deleted in-frame, presumably producing a novel secreted small peptide, is different from the truncated form in human cancer tissues, whose deletion results in a frameshift mutation. Our data suggest that MK may play a role in epithelium-mesenchyme interactions, blood vessel signaling, and the decision between proliferation and differentiation. Detection of the transiently expressed truncated MK reveals a novel function in development and sheds light on its role in carcinogenesis
Analytic reconstruction algorithms for triple-source CT with horizontal data truncation
International Nuclear Information System (INIS)
Chen, Ming; Yu, Hengyong
2015-01-01
Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units
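The shift-invariance claimed in the conclusion can be illustrated with an FFT-domain ramp (Ram-Lak) filter, the generic filtering step of filtered backprojection. This is a minimal 1-D sketch; the actual algorithm's fan-beam weights and truncation handling are omitted:

```python
import numpy as np

def ramp_filter(proj):
    """Ramp (Ram-Lak) filtering of one detector row in the FFT domain.

    Multiplication by |f| in frequency space is a circular convolution in
    detector space, hence exactly shift-invariant row by row."""
    freqs = np.fft.fftfreq(proj.size)
    return np.real(np.fft.ifft(np.fft.fft(proj) * np.abs(freqs)))

p = np.sin(np.linspace(0.0, 4.0 * np.pi, 128)) + 0.3
# Filtering a shifted row equals shifting the filtered row (circularly),
# which is why each row can be filtered independently and in parallel.
print(np.allclose(ramp_filter(np.roll(p, 5)), np.roll(ramp_filter(p), 5)))  # True
```

Shift-invariance means the per-row filter has no data-dependent state, so rows (and views) map directly onto independent GPU threads, matching the parallelization remark above.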