Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Kukush, Alexander; Schneeweiss, Hans
2004-01-01
We compare the asymptotic covariance matrix of the ML estimator in a nonlinear measurement error model to the asymptotic covariance matrices of the CS and SQS estimators studied in Kukush et al. (2002). For small measurement error variances, they are equal up to the order of the measurement error variance and thus nearly equally efficient.
Semiparametric maximum likelihood for nonlinear regression with measurement errors.
Suh, Eun-Young; Schafer, Daniel W
2002-06-01
This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.
Asymptotic correctability of Bell-diagonal quantum states and maximum tolerable bit error rates
Ranade, Kedar S.; Alber, Gernot
2005-01-01
We discuss the general conditions that quantum state purification protocols have to fulfill in order to be capable of purifying Bell-diagonal qubit-pair states, provided they consist of steps that map Bell-diagonal states to Bell-diagonal states and finally apply a suitably chosen Calderbank-Shor-Steane code to the outcome of such steps. As a main result, a necessary condition and a sufficient condition for asymptotic correctability are presented, which relate this problem to the magnitude of a characteristic exponent governing the relation between bit and phase errors under the purification steps. These conditions allow a straightforward determination of the maximum tolerable bit error rates of quantum key distribution protocols whose security analysis can be reduced to the purification of Bell-diagonal states.
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under a model of multiple independent reader sessions with detection errors due to unreliable radio […] is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection error.
Estimation of bias errors in measured airplane responses using maximum likelihood method
Klein, Vladislav; Morgan, Dan R.
1987-01-01
A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be without random errors. The resulting algorithm is verified with a simulation and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.
Johann A. Briffa
2014-06-01
In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
[Author not listed]
2009-01-01
In order to restrain the mid-spatial frequency error in the magnetorheological finishing (MRF) process, a novel part-random path is designed based on the theory of the maximum entropy method (MEM). Using a KDMRF-1000F polishing machine, one flat workpiece (98 mm in diameter) is polished. The mid-spatial frequency error in the region using the part-random path is much lower than that using the common raster path. After one MRF iteration (7.46 min), the peak-to-valley (PV) error is 0.062 waves (1 wave = 632.8 nm), the root-mean-square (RMS) error is 0.010 waves, and no obvious mid-spatial frequency error is found. The result shows that the part-random path is a novel path which yields high form accuracy and low mid-spatial frequency error in the MRF process.
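The peak-to-valley and root-mean-square figures quoted above can be reproduced from a residual error map. The following is an illustrative sketch only (the function name and the flat list of surface errors in nanometres are my assumptions, not code from the paper):

```python
import math

# Illustrative sketch (not from the paper): peak-to-valley (PV) and
# root-mean-square (RMS) of a residual surface error map, in waves.
def pv_rms(errors_nm, wavelength_nm=632.8):
    """Return (PV, RMS) in waves for a flat list of surface errors in nm."""
    waves = [e / wavelength_nm for e in errors_nm]
    pv = max(waves) - min(waves)
    rms = math.sqrt(sum(w * w for w in waves) / len(waves))
    return pv, rms
```

For example, a map spanning one full wavelength of error has PV = 1 wave, regardless of its RMS.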
Challenge and Error: Critical Events and Attention-Related Errors
Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel
2011-01-01
Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention lapse; Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
Bokanowski, Olivier (Laboratoire Jacques-Louis Lions, Université Paris-Diderot, Paris 7, France); Picarelli, Athena (Projet Commands, INRIA Saclay & ENSTA ParisTech, France); Zidani, Hasnaa (Unité de Mathématiques appliquées (UMA), ENSTA ParisTech, France)
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.
Uncertainty relations and approximate quantum error correction
Renes, Joseph M.
2016-09-01
The uncertainty principle can be understood as constraining the probability of winning a game in which Alice measures one of two conjugate observables, such as position or momentum, on a system provided by Bob, and he is to guess the outcome. Two variants are possible: either Alice tells Bob which observable she measured, or he has to furnish guesses for both cases. Here I derive uncertainty relations for both, formulated directly in terms of Bob's guessing probabilities. For the former these relate to the entanglement that can be recovered by action on Bob's system alone. This gives an explicit quantum circuit for approximate quantum error correction using the guessing measurements for "amplitude" and "phase" information, implicitly used in the recent construction of efficient quantum polar codes. I also find a relation on the guessing probabilities for the latter game, which has application to wave-particle duality relations.
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013, doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
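The time-domain filter idea referenced above can be sketched as follows. This is a hedged illustration using Hosking-type fractional-differencing coefficients, which produce 1/f^α noise when convolved with white noise; the function names are mine, not Langbein's:

```python
# Hedged sketch: impulse response of a 1/f^alpha filter via the
# fractional-differencing recursion h_0 = 1, h_k = h_{k-1}*(k-1+alpha/2)/k.
# alpha = 0 gives white noise; alpha = 2 gives a random walk.
def powerlaw_filter(alpha, n):
    """First n impulse-response coefficients of a 1/f^alpha filter."""
    h = [1.0]
    for k in range(1, n):
        h.append(h[-1] * (k - 1 + alpha / 2.0) / k)
    return h

def filter_noise(white, alpha):
    """Convolve a white-noise sequence with the power-law filter."""
    h = powerlaw_filter(alpha, len(white))
    return [sum(h[j] * white[i - j] for j in range(i + 1))
            for i in range(len(white))]
```

With alpha = 2 every coefficient equals 1, so a unit impulse is turned into a unit step, i.e. integrated white noise.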
Houle, D; Meyer, K
2015-08-01
We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest.
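A minimal sketch of the REML-MVN sampling step, assuming the estimate theta_hat and its covariance V (the inverse information matrix) are given. This is illustrative only, not WOMBAT's implementation; the helper names are mine:

```python
import random

# Hedged sketch of REML-MVN sampling: draw replicate parameter vectors
# from N(theta_hat, V) and propagate each draw through any function of G
# to obtain its sampling distribution. The Cholesky factor turns
# independent standard-normal draws into correlated ones.
def cholesky(V):
    """Lower-triangular L with L @ L.T = V, for a small SPD matrix."""
    n = len(V)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (V[i][i] - s) ** 0.5
            else:
                L[i][j] = (V[i][j] - s) / L[j][j]
    return L

def mvn_draw(theta_hat, V, rng=random):
    """One draw from N(theta_hat, V) via the Cholesky factor of V."""
    L = cholesky(V)
    z = [rng.gauss(0.0, 1.0) for _ in theta_hat]
    return [m + sum(L[i][k] * z[k] for k in range(i + 1))
            for i, m in enumerate(theta_hat)]
```

Repeating `mvn_draw` many times and evaluating an evolvability statistic on each draw yields an empirical sampling distribution for that statistic.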
Factoring Algebraic Error for Relative Pose Estimation
Lindstrom, P; Duchaineau, M
2009-03-09
We address the problem of estimating the relative pose, i.e. translation and rotation, of two calibrated cameras from image point correspondences. Our approach is to factor the nonlinear algebraic pose error functional into translational and rotational components, and to optimize translation and rotation independently. This factorization admits subproblems that can be solved using direct methods with practical guarantees on global optimality. That is, for a given translation, the corresponding optimal rotation can be determined directly, and vice versa. We show that these subproblems are equivalent to computing the least eigenvector of second- and fourth-order symmetric tensors. When neither translation nor rotation is known, alternating translation and rotation optimization leads to a simple, efficient, and robust algorithm for pose estimation that improves on the well-known 5- and 8-point methods.
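The least-eigenvector subproblem mentioned above can be illustrated for a small symmetric matrix. This sketch uses shifted power iteration with a Gershgorin bound rather than the authors' tensor formulation, and is an assumption-laden illustration only:

```python
# Illustrative sketch: least eigenvector of a small symmetric matrix A via
# power iteration on the shifted matrix B = s*I - A, where s upper-bounds
# A's largest eigenvalue (Gershgorin bound). The dominant eigenvector of B
# is the least eigenvector of A.
def least_eigenvector(A, iters=500):
    n = len(A)
    s = max(A[i][i] + sum(abs(A[i][j]) for j in range(n) if j != i)
            for i in range(n))
    B = [[(s if i == j else 0.0) - A[i][j] for j in range(n)]
         for i in range(n)]
    v = [1.0 + i for i in range(n)]  # asymmetric start avoids dead directions
    for _ in range(iters):
        w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

In practice one would use a library eigensolver; the point here is only that the minimization reduces to an eigenvector computation.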
In-medium dispersion relations of charmonia studied by the maximum entropy method
Ikeda, Atsuro; Asakawa, Masayuki; Kitazawa, Masakiyo
2017-01-01
We study in-medium spectral properties of charmonia in the vector and pseudoscalar channels at nonzero momenta on quenched lattices, especially focusing on their dispersion relation and the weight of the peak. We measure the lattice Euclidean correlation functions with nonzero momenta on the anisotropic quenched lattices and study the spectral functions with the maximum entropy method. The dispersion relations of charmonia and the momentum dependence of the weight of the peak are analyzed with the maximum entropy method together with the errors estimated probabilistically in this method. We find a significant increase of the masses of charmonia in medium. We also find that the functional form of the charmonium dispersion relations is not changed from that in the vacuum within the error even at T ≃1.6 Tc for all the channels we analyze.
In-medium dispersion relations of charmonia studied by maximum entropy method
Ikeda, Atsuro; Kitazawa, Masakiyo
2016-01-01
We study in-medium spectral properties of charmonia in the vector and pseudoscalar channels at nonzero momenta on quenched lattices, especially focusing on their dispersion relation and the weight of the peak. We measure the lattice Euclidean correlation functions with nonzero momenta on the anisotropic quenched lattices and study the spectral functions with the maximum entropy method. The dispersion relations of charmonia and the momentum dependence of the weight of the peak are analyzed with the maximum entropy method together with the errors estimated probabilistically in this method. We find a significant increase of the masses of charmonia in medium. It is also found that the functional form of the charmonium dispersion relations is not changed from that in the vacuum within the error even at $T\simeq1.6T_c$ for all the channels we analyzed.
Impacts of motivational valence on the error-related negativity elicited by full and partial errors.
Maruo, Yuya; Schacht, Annekathrin; Sommer, Werner; Masaki, Hiroaki
2016-02-01
Affect and motivation influence the error-related negativity (ERN) elicited by full errors; however, it is unknown whether they also influence ERNs to correct responses accompanied by covert incorrect response activation (partial errors). Here we compared a neutral condition with conditions, where correct responses were rewarded or where incorrect responses were punished with gains and losses of small amounts of money, respectively. Data analysis distinguished ERNs elicited by full and partial errors. In the reward and punishment conditions, ERN amplitudes to both full and partial errors were larger than in the neutral condition, confirming participants' sensitivity to the significance of errors. We also investigated the relationships between ERN amplitudes and the behavioral inhibition and activation systems (BIS/BAS). Regardless of reward/punishment condition, participants scoring higher on BAS showed smaller ERN amplitudes in full error trials. These findings provide further evidence that the ERN is related to motivational valence and that similar relationships hold for both full and partial errors.
Alcohol dependence and anxiety increase error-related brain activity
Schellekens, A.F.A.; Bruijn, E.R.A. de; Lankveld, C.A.A. van; Hulstijn, W.; Buitelaar, J.K.; Jong, C.A.J. de; Verkes, R.J.
2010-01-01
Aims Detection of errors is crucial for efficient goal-directed behaviour. The ability to monitor behaviour is found to be diminished in patients with substance dependence, as reflected in decreased error-related brain activity, i.e. error-related negativity (ERN). The ERN is also decreased in other
A Relative View on Tracking Error
Hallerbach, Winfried G.P.M.; Pouchkarev, Igor
2005-01-01
When delegating an investment decision to a professional manager, investors often anchor their mandate to a specific benchmark. The manager's exposure to risk is controlled by means of a tracking error volatility constraint. It depends on market conditions whether this constraint is eas
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation.
Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng
2016-09-20
A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool to solve the non-linear state estimate problem. However, the UKF usually plays well in Gaussian noises. Its performance may deteriorate substantially in the presence of non-Gaussian noises, especially when the measurements are disturbed by some heavy-tailed impulsive noises. By making use of the maximum correntropy criterion (MCC), the proposed algorithm can enhance the robustness of UKF against impulsive noises. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimation and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is adopted to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
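The core maximum correntropy idea can be illustrated in one dimension. The following sketch is not the MCUKF itself; it only shows how a Gaussian kernel down-weights heavy-tailed outliers relative to a quadratic (Kalman-style) cost, with function names and parameters chosen by me for illustration:

```python
import math

# Hedged sketch of the maximum correntropy criterion (MCC), not the full
# MCUKF: each residual is weighted by a Gaussian kernel, so impulsive
# outliers receive exponentially smaller influence than under a quadratic
# cost.
def correntropy_weight(residual, sigma=2.0):
    """Gaussian kernel weight for a residual; 1.0 at zero, ~0 for outliers."""
    return math.exp(-residual * residual / (2.0 * sigma * sigma))

def mcc_mean(samples, sigma=2.0, iters=20):
    """Robust location estimate via fixed-point kernel-weighted averaging."""
    m = sum(samples) / len(samples)
    for _ in range(iters):
        w = [correntropy_weight(x - m, sigma) for x in samples]
        m = sum(wi * xi for wi, xi in zip(w, samples)) / sum(w)
    return m
```

For the samples [0, 0.1, -0.1, 100] the ordinary mean is 25, while the MCC estimate stays near zero because the impulsive outlier is almost entirely down-weighted.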
Maximum error-bounded Piecewise Linear Representation for online stream approximation
Xie, Qing
2014-04-04
Given a time series data stream, the generation of an error-bounded Piecewise Linear Representation (error-bounded PLR) is to construct a number of consecutive line segments to approximate the stream, such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in the L∞ norm as the approximation criterion, which constrains the approximation error on each corresponding data point, and we aim to design algorithms that generate the minimal number of segments. In the literature, optimal approximation algorithms have been effectively designed in a transformed space rather than the time-value space, while optimal solutions based on the original time domain (i.e., time-value space) are still lacking. In this article, we propose two linear-time algorithms to construct error-bounded PLR for data streams based on the time domain, named OptimalPLR and GreedyPLR, respectively. OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and GreedyPLR is an alternative solution for the requirements of high efficiency and resource-constrained environments. In order to evaluate the superiority of OptimalPLR, we theoretically analyzed and compared OptimalPLR with the state-of-the-art optimal solution in the transformed space, which also achieves linear complexity. We proved the theoretical equivalence between the time-value space and the transformed space, and also found that OptimalPLR is superior in processing efficiency in practice. Extensive empirical evaluation supports and demonstrates the effectiveness and efficiency of our proposed algorithms.
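A simplified greedy variant of error-bounded PLR can be sketched as follows. Unlike the paper's GreedyPLR and OptimalPLR, this assumed version anchors each segment at its first data point, so it guarantees the L∞ bound at every point but may emit more segments than the optimum:

```python
# Simplified greedy sketch of error-bounded PLR (illustration, not the
# paper's algorithms): each segment is anchored at its first point, and
# feasible slope bounds [lo, hi] are narrowed as points arrive; a new
# segment starts when the bounds become empty.
def greedy_plr(points, eps):
    """points: list of (t, y) with strictly increasing t.
    Returns segments as (t_start, y_start, slope, t_end)."""
    segments = []
    i, n = 0, len(points)
    while i < n:
        t0, y0 = points[i]
        lo, hi = float("-inf"), float("inf")
        j = i + 1
        while j < n:
            t, y = points[j]
            dt = t - t0
            new_lo = max(lo, (y - eps - y0) / dt)
            new_hi = min(hi, (y + eps - y0) / dt)
            if new_lo > new_hi:
                break  # no single anchored line fits; close the segment
            lo, hi = new_lo, new_hi
            j += 1
        slope = 0.0 if j == i + 1 else (lo + hi) / 2.0
        segments.append((t0, y0, slope, points[j - 1][0]))
        i = j
    return segments
```

On exactly linear data this produces a single segment; an outlier beyond eps forces a segment break at that point.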
Qing-ping Deng; Xue-jun Xu; Shu-min Shen
2000-01-01
This paper deals with Crouzeix-Raviart nonconforming finite element approximation of the Navier-Stokes equations in a plane bounded domain, using the so-called velocity-pressure mixed formulation. Quasi-optimal maximum norm error estimates of the velocity and its first derivatives and of the pressure are derived for the nonconforming C-R scheme of the stationary Navier-Stokes problem. The analysis is based on the weighted inf-sup condition and the technique of weighted Sobolev norms. Moreover, the optimal L2-error estimate for the nonconforming finite element approximation is obtained.
Lee, C.-H.; Herget, C. J.
1976-01-01
This short paper considers the parameter-identification problem of general discrete-time, nonlinear, multiple input-multiple output dynamic systems with Gaussian white distributed measurement errors. Knowledge of the system parameterization is assumed to be available. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems.
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data.
Stahl, Jutta; Acharki, Manuela; Kresimon, Miriam; Völler, Frederike; Gibbons, Henning
2015-08-01
Showing excellent performance and avoiding poor performance are the main characteristics of perfectionists. Perfectionism-related variations in neural correlates of performance monitoring (N = 94) were investigated in a flanker task by assessing two perfectionism-related trait dimensions: personal standard perfectionism (PSP), reflecting the intrinsic motivation to show error-free performance, and evaluative concern perfectionism (ECP), representing the worry of being evaluated poorly on the basis of bad performance. A moderating effect of ECP and PSP on error processing, an important performance monitoring system, was investigated by examining the error(-related) negativity (Ne/ERN) and the error positivity (Pe). The smallest Ne/ERN difference (error minus correct) was obtained for pure-ECP participants (high ECP, low PSP), whereas the largest difference was shown by those with high ECP and high PSP (i.e., mixed perfectionists). Pe was positively correlated with PSP only. Our results support the cognitive-bias hypothesis, suggesting that pure-ECP participants reduce response-related attention to avoid intense error processing by minimising the subjective threat of negative evaluations. The PSP-related variations in late error processing are consistent with the goal-oriented tendency of participants high in PSP to optimise their behaviour.
Abnormal error monitoring in math-anxious individuals: evidence from error-related brain potentials.
Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Angels
2013-01-01
This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.
Scaling Relation for Occulter Manufacturing Errors
Sirbu, Dan; Shaklan, Stuart B.; Kasdin, N. Jeremy; Vanderbei, Robert J.
2015-01-01
An external occulter is a spacecraft flown along the line-of-sight of a space telescope to suppress starlight and enable high-contrast direct imaging of exoplanets. The shape of an external occulter must be specially designed to optimally suppress starlight, and deviations from the ideal shape due to manufacturing errors can result in loss of suppression in the shadow. Due to the long separation distances and large dimensions involved for a space occulter, laboratory testing is conducted with scaled versions of occulters etched on silicon wafers. Using numerical simulations for a flight Fresnel occulter design, we show how the suppression performance of an occulter mask scales with the available propagation distance for expected random manufacturing defects along the edge of the occulter petal. We derive an analytical model for predicting performance degradation due to such manufacturing defects across the petal edges of an occulter mask and compare it with the numerical simulations. We discuss the scaling of an extended occulter test-bed.
Relate the earthquake parameters to the maximum tsunami runup
Sharghivand, Naeimeh; Kânoǧlu, Utku
2016-04-01
After the 1 September 1992 Nicaraguan tsunami manifested itself with an initial shoreline recession, there was a paradigm shift from the solitary wave to the N-wave (Tadepalli and Synolakis, 1994, Proc. R. Soc. A: Math. Phys. Eng. Sci., 445, 99-112) as the initial waveform of tsunamis (Kanoglu et al., 2015, Phil. Trans. R. Soc. A, 373: 20140369). The N-wave initial waveform shows specific features, which might enhance maximum runup at a target coastline. Tadepalli and Synolakis (1994) showed that the leading depression N-wave (LDN) runs up higher than its mirror image, the leading elevation N-wave (LEN). Later, Kanoglu et al. (2013, Proc. R. Soc. A: Math. Phys. Eng. Sci., 469, 20130015) considered two-dimensional propagation of a finite-crest-length N-wave over a flat bottom and showed a focusing effect of an N-wave in the direction of the leading depression, which enhances the runup. Recently, preliminary results by Kanoglu (2016, EGU Abstract) suggested that later waves could be higher on the leading depression side of an N-wave, i.e., the sequencing defined by Okal and Synolakis (2016, Geophys. J. Int. 204, 719-735) is more pronounced on the leading depression side. Here, we consider submarine earthquakes and estimate the initial ocean surface profiles through Okada's formulation (1985, Bull. Seismol. Soc. Am. 75, 1135-1154). We parameterize earthquake source parameters such as the length and the width of the fault, the focal depth, the rake (slip) and the dip angles, and the slip amount. Then, we relate the ocean surface profiles calculated through Okada (1985) to the generalized N-wave profile defined by Tadepalli and Synolakis (1994) and identify the N-wave parameters. Since Tadepalli and Synolakis (1994) presented the maximum runup for an N-wave-type initial condition for the canonical problem (a wave propagating first over a constant-depth segment and then over a sloping beach) and Kanoglu (2004, J. Fluid Mech., 513, 363-372) did so for a sloping beach, their results allow us to
Loyka, Sergey; Gagnon, Francois
2009-01-01
Motivated by a recent surge of interest in convex optimization techniques, convexity/concavity properties of error rates of the maximum likelihood detector operating in the AWGN channel are studied and extended to frequency-flat slow-fading channels. Generic conditions are identified under which the symbol error rate (SER) is convex/concave for arbitrary multi-dimensional constellations. In particular, the SER is convex in SNR for any one- and two-dimensional constellation, and also in higher dimensions at high SNR. Pairwise error probability and bit error rate are shown to be convex at high SNR, for arbitrary constellations and bit mapping. Universal bounds for the SER 1st and 2nd derivatives are obtained, which hold for arbitrary constellations and are tight for some of them. Applications of the results are discussed, which include optimum power allocation in spatial multiplexing systems, optimum power/time sharing to decrease or increase (jamming problem) error rate, an implication for fading channels ("fa...
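For the one-dimensional case, the claimed convexity of the SER in SNR can be spot-checked numerically with BPSK, whose SER in AWGN has the standard closed form Q(sqrt(2*SNR)) = 0.5*erfc(sqrt(SNR)). This sketch is an illustration of the property, not the paper's proof:

```python
import math

def ser_bpsk(snr):
    """Symbol error rate of BPSK in AWGN: Q(sqrt(2*snr)) = 0.5*erfc(sqrt(snr))."""
    return 0.5 * math.erfc(math.sqrt(snr))

def second_derivative(f, x, h=1e-3):
    """Central finite-difference estimate of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# The SER should be convex in SNR over the whole range for this 1-D constellation.
snrs = [0.1 * k for k in range(1, 101)]  # linear-scale SNR from 0.1 to 10
convex = all(second_derivative(ser_bpsk, g) > 0.0 for g in snrs)
```

The same finite-difference check can be repeated for other constellations by replacing `ser_bpsk` with the corresponding SER expression.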
Training errors and running related injuries
Nielsen, Rasmus Østergaard; Buist, Ida; Sørensen, Henrik
2012-01-01
The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries.
Relative azimuth inversion by way of damped maximum correlation estimates
Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.
2012-01-01
Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths are unable to do so with arbitrary precision quickly because of limitations in the algorithms (e.g. grid search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to identify the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform independent, Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
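The core idea (invert for the rotation angle that maximizes correlation with a reference instrument) can be sketched as follows. This is an illustrative Python reimplementation, not the authors' Java package, and the choice of optimizer is an assumption:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def best_azimuth(ref_north, x, y):
    """Estimate the relative azimuth (radians) of a sensor's horizontal
    components (x, y) by finding the rotation that maximizes correlation
    with the reference instrument's north component."""
    def neg_corr(theta):
        # Rotate the sensor horizontals back through theta toward the reference frame.
        rotated = x * np.cos(theta) - y * np.sin(theta)
        return -np.corrcoef(ref_north, rotated)[0, 1]
    res = minimize_scalar(neg_corr, bounds=(-np.pi, np.pi), method="bounded")
    return res.x

# Synthetic check: a sensor misoriented by 30 degrees from the reference.
rng = np.random.default_rng(0)
ref_n = rng.standard_normal(2048)
ref_e = rng.standard_normal(2048)
alpha = np.deg2rad(30.0)
x = ref_n * np.cos(alpha) + ref_e * np.sin(alpha)
y = -ref_n * np.sin(alpha) + ref_e * np.cos(alpha)
est = best_azimuth(ref_n, x, y)  # close to alpha
```

In practice the estimate would be repeated over overlapping windows, as described above, so the scatter of the window estimates indicates the confidence even at low SNR.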
Assessment of relative error sources in IR DIAL measurement accuracy
Menyuk, N.; Killinger, D. K.
1983-01-01
An assessment is made of the role the various error sources play in limiting the accuracy of infrared differential absorption lidar measurements used for the remote sensing of atmospheric species. An overview is presented of the relative contribution of each error source including the inadequate knowledge of the absorption coefficient, differential spectral reflectance, and background interference as well as measurement errors arising from signal fluctuations.
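For independent error sources like these, the overall measurement uncertainty is commonly budgeted by combining the individual relative contributions in quadrature. A minimal sketch; the budget values below are hypothetical placeholders, not figures from this work:

```python
import math

def total_relative_error(contributions):
    """Root-sum-square combination of independent relative error contributions."""
    return math.sqrt(sum(c * c for c in contributions))

# Hypothetical error budget (fractional contributions): absorption-coefficient
# uncertainty, differential reflectance, background interference, signal fluctuations.
budget = {"absorption": 0.03, "reflectance": 0.02, "background": 0.01, "signal": 0.04}
total = total_relative_error(budget.values())  # dominated by the largest terms
```

Note that the quadrature total is smaller than the simple sum of the contributions, which is why identifying and reducing the single dominant term matters most.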
Medical error and related factors during internship and residency.
Ahmadipour, Habibeh; Nahid, Mortazavi
2015-01-01
It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study at Kerman University of Medical Sciences, Iran, in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and on the medical errors committed. The data were analysed with SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. Among residents, the most common error was misdiagnosis; among interns, it was errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent their occurrence. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors.
Kaganovich, Igor D., E-mail: ikaganov@pppl.gov [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Massidda, Scott; Startsev, Edward A.; Davidson, Ronald C. [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Vay, Jean-Luc [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Friedman, Alex [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, CA 94550 (United States)
2012-06-21
Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the
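The stated scaling (a relative tilt error of about one percent limiting compression to a factor of order one hundred) can be illustrated with a toy one-dimensional drift calculation. This is a sketch under simplifying assumptions (a linear variation of the tilt error along the pulse and no thermal spread), not the paper's cubic-error model:

```python
import numpy as np

def max_compression(rel_err, n=4001, nt=2001):
    """Toy 1-D neutralized drift: slices at z in [0, 1] receive an ideal
    velocity tilt v = (1 - z) / T (all would focus at z = 1 at time T),
    perturbed by a tilt error that varies linearly along the pulse."""
    T = 1.0
    z = np.linspace(0.0, 1.0, n)
    v = (1.0 - z) / T * (1.0 + rel_err * z)  # position-dependent tilt error
    widths = []
    for t in np.linspace(0.9 * T, 1.1 * T, nt):
        p = z + v * t                         # ballistic drift of each slice
        widths.append(p.max() - p.min())      # instantaneous bunch length
    return 1.0 / min(widths)                  # initial bunch length is 1

c = max_compression(0.01)  # ~1% tilt error limits compression to O(100)
```

Consistent with the scaling law described above, the achievable compression in this toy model is set by the inverse of the relative tilt error, up to a numerical factor of order unity.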
Ionospheric error contribution to GNSS single-frequency navigation at the 2014 solar maximum
Orus Perez, Raul
2017-04-01
For single-frequency users of the global satellite navigation system (GNSS), one of the main error contributors is the ionospheric delay, which impacts the received signals. As is well-known, GPS and Galileo transmit global models to correct the ionospheric delay, while the international GNSS service (IGS) computes precise post-process global ionospheric maps (GIM) that are considered reference ionospheres. Moreover, accurate ionospheric maps have been recently introduced, which allow for the fast convergence of the real-time precise point position (PPP) globally. Therefore, testing of the ionospheric models is a key issue for code-based single-frequency users, which constitute the main user segment. Therefore, the testing proposed in this paper is straightforward and uses the PPP modeling applied to single- and dual-frequency code observations worldwide for 2014. The usage of PPP modeling allows us to quantify—for dual-frequency users—the degradation of the navigation solutions caused by noise and multipath with respect to the different ionospheric modeling solutions, and allows us, in turn, to obtain an independent assessment of the ionospheric models. Compared to the dual-frequency solutions, the GPS and Galileo ionospheric models present worse global performance, with horizontal root mean square (RMS) differences of 1.04 and 0.49 m and vertical RMS differences of 0.83 and 0.40 m, respectively. While very precise global ionospheric models can improve the dual-frequency solution globally, resulting in a horizontal RMS difference of 0.60 m and a vertical RMS difference of 0.74 m, they exhibit a strong dependence on the geographical location and ionospheric activity.
Error-disturbance uncertainty relations studied in neutron optics
Sponar, Stephan; Sulyok, Georg; Demirel, Bulent; Hasegawa, Yuji
2016-09-01
Heisenberg's uncertainty principle is probably the most famous statement of quantum physics, and its essential aspects are well described by formulations in terms of standard deviations. However, a naive Heisenberg-type error-disturbance relation is not valid. An alternative universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa's relation is not optimal. Recently, Branciard has derived a tight error-disturbance uncertainty relation (EDUR), describing the optimal trade-off between error and disturbance. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin-component, to test EDURs. We demonstrate that Heisenberg's original EDUR is violated, whereas Ozawa's and Branciard's EDURs are valid in a wide range of experimental parameters, applying a new measurement procedure referred to as the two-state method.
Maximum Relative Entropy Updating and the Value of Learning
Patryk Dziurosz-Serafinowicz
2015-03-01
We examine the possibility of justifying the principle of maximum relative entropy (MRE), considered as an updating rule, by looking at the value-of-learning theorem established in classical decision theory. This theorem captures an intuitive requirement for learning: learning should lead to new degrees of belief that are expected to be helpful and never harmful in making decisions. We call this requirement the value of learning. We consider the extent to which learning rules based on MRE could satisfy this requirement and so could be a rational means for pursuing practical goals. First, by representing MRE updating as a conditioning model, we show that MRE satisfies the value of learning in cases where learning prompts a complete redistribution of one's degrees of belief over a partition of propositions. Second, we show that the value of learning may not be generally satisfied by MRE updates in cases of updating on a change in one's conditional degrees of belief. We explain that this is so because, contrary to what the value of learning requires, one's prior degrees of belief might not be equal to the expectation of one's posterior degrees of belief. This, in turn, points towards a more general moral: that the justification of MRE updating in terms of the value of learning may be sensitive to the context of a given learning experience. Moreover, this lends support to the idea that MRE is neither a universal nor a mechanical updating rule, but rather a rule whose application and justification may be context-sensitive.
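For a moment-type constraint, an MRE update of a prior distribution takes the familiar exponential-tilting form. A minimal numerical sketch; the fair-die prior and the constraint value 4.5 are illustrative choices, not from the paper:

```python
import numpy as np
from scipy.optimize import brentq

def mre_update(prior, f, target):
    """Minimize KL(q || prior) subject to E_q[f] = target.
    The solution has the form q_i proportional to prior_i * exp(lam * f_i)."""
    def moment_gap(lam):
        w = prior * np.exp(lam * f)
        return np.dot(f, w) / w.sum() - target
    lam = brentq(moment_gap, -50.0, 50.0)  # solve for the Lagrange multiplier
    q = prior * np.exp(lam * f)
    return q / q.sum()

prior = np.full(6, 1.0 / 6.0)        # fair-die prior
faces = np.arange(1.0, 7.0)
q = mre_update(prior, faces, 4.5)    # posterior tilted toward the high faces
```

Conditioning is recovered as the special case where the constraint concentrates all probability on one element of a partition, which is the "complete redistribution" situation discussed above.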
System-related factors contributing to diagnostic errors.
Thammasitboon, Satid; Thammasitboon, Supat; Singhal, Geeta
2013-10-01
Several studies in primary care, internal medicine, and emergency departments show that rates of errors in test requests and result interpretations are unacceptably high and translate into missed, delayed, or erroneous diagnoses. Ineffective follow-up of diagnostic test results could lead to patient harm if appropriate therapeutic interventions are not delivered in a timely manner. The frequency of system-related factors that contribute directly to diagnostic errors depends on the types and sources of errors involved. Recent studies reveal that the errors and patient harm in the diagnostic testing loop have occurred mainly at the pre- and post-analytic phases, which are directed primarily by clinicians who may have limited expertise in the rapidly expanding field of clinical pathology. These errors may include inappropriate test requests, failure/delay in receiving results, and erroneous interpretation and application of test results to patient care. Efforts to address system-related factors often focus on technical errors in laboratory testing or failures in delivery of intended treatment. System-improvement strategies related to diagnostic errors tend to focus on technical aspects of laboratory medicine or delivery of treatment after completion of the diagnostic process. System failures and cognitive errors, more often than not, coexist and together contribute to the incidents of errors in diagnostic process and in laboratory testing. The use of highly structured hand-off procedures and pre-planned follow-up for any diagnostic test could improve efficiency and reliability of the follow-up process. Many feedback pathways should be established so that providers can learn if or when a diagnosis is changed. Patients can participate in the effort to reduce diagnostic errors. Providers should educate their patients about diagnostic probabilities and uncertainties. The patient-safety strategies focusing on the interface between diagnostic system and therapeutic
Medication Errors In Relation To Education & Years of Nursing Experience
Shweta D Singh
2012-06-01
Medication error is defined as any preventable event that might cause or lead to inappropriate use or harm of the patient. The purpose of this study was to determine the relationship between the level of education and medication errors, and between years of work experience and medication errors. With a better understanding of these relationships, nursing professionals can learn what characteristics tend to make a nurse prone to medication errors and can develop methods and procedures to reduce incidence. The survey was conducted in 6 hospitals in Anand city. Approval had been obtained from the hospitals where the study was to be conducted. The survey form was divided into 5 sections, each comprising a minimum of 3 questions relating to the respondents' basic information and their perceptions of medication error. The results of the study suggested that there is a direct relationship between education/experience and medication errors. The study showed that medication errors occur due to a lack of qualified nursing staff. The results showed that medication errors were reported due to increased workload on nurses because of the shortage of nurses in hospitals.
Assessment of the relative error in the automation task by sessile drop method
T. О. Levitskaya
2015-11-01
Assessment of the relative error in the automation of the sessile drop method. Further development of the sessile drop method is directly related to the development of new techniques and specially developed algorithms enabling automatic computer calculation of surface properties. Improvement of the mathematical apparatus of the sessile drop method, transformation of the drop contour equation to a form suitable for computation, automation of the drop surface calculation method, and analysis of the relative errors in the calculation of surface tension are relevant and important in experimental determinations. The relative error of the surface tension measurement, as well as the error caused by the ellipsoidness of the drop in plan, were determined in the task of automating the sessile drop method. It should be noted that if the maximum diameter of the drop (l) is large, or if the ratio of l to the drop height above the equatorial diameter (h) is large, the relative error in the measurement of surface tension by the sessile drop method does not depend much on the equatorial diameter of the drop or on its ellipsoidness. In this case, the accuracy of the determination of the surface tension varies from 1.0 to 0.5%. At lower values the ellipsoidness of the drop begins to affect the relative error of the surface tension (from 1.2 to 0.8%), but in this case the ellipsoidness of the drop is smaller. Therefore, in subsequent experiments we used larger drops. On the basis of the assessment of the relative error in determining the liquid surface tension by the sessile drop method caused by the ellipsoidness of the drop in plan, tables have been compiled showing the measurement accuracy required for the drop parameters (h and l) to keep the overall relative error within bounds. Previously, the surface tension used to be calculated with a relative error in the range of 2-3%.
Error-disturbance uncertainty relations in neutron spin measurements
Sponar, Stephan
2016-05-01
Heisenberg’s uncertainty principle, in a formulation in terms of uncertainties intrinsic to any quantum system, has been rigorously proven and demonstrated in various quantum systems. Nevertheless, Heisenberg’s original formulation of the uncertainty principle was given in terms of a reciprocal relation between the error of a position measurement and the disturbance thereby induced on a subsequent momentum measurement. However, a naive generalization of a Heisenberg-type error-disturbance relation to arbitrary observables is not valid. An alternative universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa’s relation is not optimal. Recently, Branciard derived a tight error-disturbance uncertainty relation (EDUR), describing the optimal trade-off between error and disturbance under certain conditions. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin-component, to test EDURs. We demonstrate that Heisenberg’s original EDUR is violated and that Ozawa’s and Branciard’s EDURs are valid in a wide range of experimental parameters, as well as the tightness of Branciard’s relation.
Quantum rms error and Heisenberg’s error-disturbance relation
Busch Paul
2014-01-01
Reports on experiments recently performed in Vienna [Erhard et al., Nature Phys. 8, 185 (2012)] and Toronto [Rozema et al., Phys. Rev. Lett. 109, 100404 (2012)] include claims of a violation of Heisenberg’s error-disturbance relation. In contrast, a Heisenberg-type tradeoff relation for joint measurements of position and momentum has been formulated and proven in [Phys. Rev. Lett. 111, 160405 (2013)]. Here I show how the apparent conflict is resolved by a careful consideration of the quantum generalization of the notion of root-mean-square error. The claim of a violation of Heisenberg’s principle is untenable, as it is based on a historically wrong attribution of an incorrect relation to Heisenberg, which is in fact trivially violated. We review a new general trade-off relation for the necessary errors in approximate joint measurements of incompatible qubit observables that is in the spirit of Heisenberg’s intuitions. The experiments mentioned may directly be used to test this new error inequality.
Nieuwenhuis, Sander; Ridderinkhof, K. Richard; Talsma, Durk; Coles, Michael G.H.; Holroyd, Clay B.; Kok, Albert; Molen, van der Maurits W.
2002-01-01
When participants commit errors or receive feedback signaling that they have made an error, a negative brain potential is elicited. According to Holroyd and Coles’s (in press) neurocomputational model of error processing, this error-related negativity (ERN) is elicited when the brain first detects t
Measurement errors and scaling relations in astrophysics: a review
Andreon, S
2012-01-01
This review article considers some of the most common methods used in astronomy for regressing one quantity against another in order to estimate the model parameters or to predict an observationally expensive quantity using trends between object values. These methods have to tackle some of the awkward features prevalent in astronomical data, namely heteroscedastic (point-dependent) errors, intrinsic scatter, non-ignorable data collection and selection effects, data structure and non-uniform population (often called Malmquist bias), non-Gaussian data, outliers and mixtures of regressions. We outline how least square fits, weighted least squares methods, Maximum Likelihood, survival analysis, and Bayesian methods have been applied in the astrophysics literature when one or more of these features is present. In particular we concentrate on errors-in-variables regression and we advocate Bayesian techniques.
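Of the methods surveyed, weighted least squares is the simplest to sketch for heteroscedastic (point-dependent) errors: each point is weighted by the inverse of its error variance. A minimal illustration on synthetic data (all numbers below are made up):

```python
import numpy as np

def weighted_least_squares(x, y, sigma):
    """Fit y = a + b*x with per-point standard errors sigma,
    weighting each point by 1 / sigma^2."""
    w = 1.0 / sigma**2
    X = np.column_stack([np.ones_like(x), x])
    # Solve the normal equations (X^T W X) beta = X^T W y.
    A = X.T @ (w[:, None] * X)
    b = X.T @ (w * y)
    return np.linalg.solve(A, b)

# Synthetic scaling relation with errors that grow along the x axis.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
sigma = 0.1 + 0.3 * x                  # heteroscedastic per-point errors
y = 2.0 + 0.5 * x + rng.standard_normal(200) * sigma
intercept, slope = weighted_least_squares(x, y, sigma)
```

As the review stresses, this simple estimator does not account for intrinsic scatter, selection effects, or errors in x; handling those requires the likelihood-based or Bayesian approaches it advocates.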
THE EFFECT OF THE STATIC RELATIVE STRENGTH ON THE MAXIMUM RELATIVE RECEIVING OF OXYGEN
Abdulla Elezi
2011-09-01
Based on research on a sample of 263 students aged 18 years, using a battery of 9 tests for the evaluation of static relative strength and the criterion variable maximum relative oxygen uptake (VO2 ml/kg/min) based on the Astrand test, and on regression analysis to determine the influence of static relative strength on the criterion variable, it can generally be concluded that 2 of the 9 predictor variables have a statistically significant partial effect. In hierarchical order, they are: the variable of static relative leg strength, endurance of the fingers (angle of the lower leg and thigh 90 degrees) (SRL2), with an arithmetic mean of 25.04 seconds, and the variable of static relative strength of the arms and shoulders, push-up endurance on the balance beam (angle of the forearm and upper arm 90 degrees) (SRA2), with an arithmetic mean of 17.75 seconds. Of the predictor variables with a statistically significant influence on the criterion variable, one is from the static relative leg strength domain (SRL2) and the other is from the static relative strength of the arm and shoulder area (SRA2). From the analysis of these relations we can conclude that the isometric contractions of the four-headed thigh muscle and of the three-headed upper arm muscle are predominantly responsible for the successful execution of work on a bicycle ergometer, and not for the maximum relative oxygen uptake.
Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher
2017-05-18
Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.
Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating
Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen
2012-01-01
This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…
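In kernel equating, the PRE is conventionally defined by comparing the first several moments of the equated-score distribution with those of the reference-form score distribution. As a hedged illustration (the function name and moment convention are ours, not the article's), PRE for the p-th moment can be sketched as:

```python
import numpy as np

def percent_relative_error(equated_scores, reference_scores, p):
    """PRE(p): percent difference between the p-th raw moments of the
    equated-score and reference-score distributions (kernel equating)."""
    mu_eq = np.mean(np.asarray(equated_scores, dtype=float) ** p)
    mu_ref = np.mean(np.asarray(reference_scores, dtype=float) ** p)
    return 100.0 * (mu_eq - mu_ref) / mu_ref
```

A PRE near zero for the first few moments indicates that the equated scores closely reproduce the reference score distribution.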
Hicks, Rodney W; Becker, Shawn C
2006-01-01
Medication errors can be harmful, especially if they involve the intravenous (IV) route of administration. A mixed-methodology study using a 5-year review of 73,769 IV-related medication errors from a national medication error reporting program indicates that between 3% and 5% of these errors were harmful. The leading type of error was omission, and the leading cause of error involved clinician performance deficit. Using content analysis, three themes (product shortage, calculation errors, and tubing interconnectivity) emerge and appear to predispose patients to harm. Nurses often participate in IV therapy, and these findings have implications for practice and patient safety. Voluntary medication error-reporting programs afford an opportunity to improve patient care and to further understanding about the nature of IV-related medication errors.
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.
2017-08-01
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Maximum relative height of elastic interfaces in random media.
Rambeau, Joachim; Bustingorry, Sebastian; Kolton, Alejandro B; Schehr, Grégory
2011-10-01
The distribution of the maximal relative height (MRH) of self-affine one-dimensional elastic interfaces in a random potential is studied. We analyze the ground-state configuration at zero driving force, and the critical configuration exactly at the depinning threshold, both for the random-manifold and random-periodic universality classes. These configurations are sampled by exact numerical methods, and their MRH distributions are compared with those with the same roughness exponent and boundary conditions, but produced by independent Fourier modes with normally distributed amplitudes. Using Pickands' theorem we derive an exact analytical description for the right tail of the latter. After properly rescaling the MRH distributions we find that corrections from the Gaussian independent modes approximation are, in general, small, as previously found for the average width distribution of depinning configurations. In the large size limit all corrections are finite except for the ground state in the random-periodic class whose MRH distribution becomes, for periodic boundary conditions, indistinguishable from the Airy distribution. We find that the MRH distributions are, in general, sensitive to changes of boundary conditions.
Relative Effects of Trajectory Prediction Errors on the AAC Autoresolver
Lauderdale, Todd
2011-01-01
Trajectory prediction is fundamental to automated separation assurance. Every missed alert, false alert and loss of separation can be traced to one or more errors in trajectory prediction. These errors are a product of many different sources including wind prediction errors, inferred pilot intent errors, surveillance errors, navigation errors and aircraft weight estimation errors. This study analyzes the impact of six different types of errors on the performance of an automated separation assurance system composed of a geometric conflict detection algorithm and the Advanced Airspace Concept Autoresolver resolution algorithm. Results show that, of the error sources considered in this study, top-of-descent errors were the leading contributor to missed alerts and failed resolution maneuvers. Descent-speed errors were another significant contributor, as were cruise-speed errors in certain situations. The results further suggest that increasing horizontal detection and resolution standards are not effective strategies for mitigating these types of error sources.
Circumventing rain-related errors in scatterometer wind observations
Kilpatrick, Thomas J.; Xie, Shang-Ping
2016-08-01
Satellite scatterometer observations of surface winds over the global oceans are critical for climate research and applications like weather forecasting. However, rain-related errors remain an important limitation, largely precluding satellite study of winds in rainy areas. Here we utilize a novel technique to compute divergence and curl from satellite observations of surface winds and surface wind stress in rainy areas. This technique circumvents rain-related errors by computing line integrals around rainy patches, using valid wind vector observations that border the rainy patches. The area-averaged divergence and wind stress curl inside each rainy patch are recovered via the divergence and curl theorems. We process the 10 year Quick Scatterometer (QuikSCAT) data set and show that the line-integral method brings the QuikSCAT winds into better agreement with an atmospheric reanalysis, largely removing both the "divergence bias" and "anticyclonic curl bias" in rainy areas noted in previous studies. The corrected QuikSCAT wind stress curl reduces the North Pacific midlatitude Sverdrup transport by 20-30%. We test several methods of computing divergence and curl on winds from an atmospheric model simulation and show that the line-integral method has the smallest errors. We anticipate that scatterometer winds processed with the line-integral method will improve ocean model simulations and help illuminate the coupling between atmospheric convection and circulation.
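The line-integral idea rests on the divergence theorem: the area-averaged divergence inside a rainy patch equals the outward flux of the wind through the patch boundary divided by the patch area. A minimal planar sketch, assuming a polygonal patch traversed counterclockwise and valid wind observations on its rim (the function and its discretization are illustrative, not the paper's QuikSCAT processing code):

```python
import numpy as np

def mean_divergence_from_boundary(boundary_xy, u, v, area):
    """Area-averaged divergence inside a closed patch from wind vectors on
    its boundary, via the divergence theorem:
        <div> = (1/A) * closed integral of (u . n) dl.
    boundary_xy: (N, 2) vertices of the polygon, counterclockwise order.
    u, v: callables giving the wind components at a point (x, y)."""
    pts = np.asarray(boundary_xy, dtype=float)
    flux = 0.0
    for i in range(len(pts)):
        p0, p1 = pts[i], pts[(i + 1) % len(pts)]
        mid = 0.5 * (p0 + p1)          # midpoint rule on each segment
        dx, dy = p1 - p0
        # for a CCW polygon, the outward normal of edge (dx, dy) is (dy, -dx),
        # so the flux element (u . n) dl reduces to u*dy - v*dx
        flux += u(*mid) * dy - v(*mid) * dx
    return flux / area
```

For the linear test field u = x, v = y (divergence 2 everywhere), the midpoint rule recovers the exact value on a unit square.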
Relative and Interaction Effects of Errors in Physics Practical
Owolabi Olabode Thomas
2013-07-01
The importance of physics in human endeavour cannot be glossed over, for it plays a vital and essential role in all human endeavour, especially in science and technology. The study was designed to examine the relative and interaction effects of errors in physics practical in Nigerian secondary schools. A quasi-experimental design of the three-group pre-test, post-test, control type was employed. The sample for the study consisted of sixty (60) students from three selected secondary schools in Nigeria. Equal numbers of male and female students were selected using a stratified random sampling technique. Physics Practical Questions (PPQ) were validated and used before and after treatment in the groups. The findings revealed that when students are exposed to the idea of errors in practical physics, the degree of accuracy increases, thereby enhancing their performance in the subject. Physics and related courses are recommended for both male and female students in secondary schools, since sex is not a major issue in physics practical work. If students are taught how to obtain accurate results and errors are minimised in physics practical, good performance in the subject will be enhanced.
Medication Errors In Relation To Education & Years of Nursing Experience
2012-01-01
Medication error is defined as any preventable event that might cause or lead to inappropriate use or harming of the patient. The purpose of this study was to determine the relationship between the level of education and medication errors, and between years of work experience and medication errors. With a better understanding of these relationships, nursing professionals can learn what characteristics tend to make a nurse prone to medication errors and can develop methods and procedures to reduce incidence...
Error-Related Negativities During Spelling Judgments Expose Orthographic Knowledge
Harris, Lindsay N.; Perfetti, Charles A.; Rickles, Benjamin
2014-01-01
In two experiments, we demonstrate that error-related negativities (ERNs) recorded during spelling decisions can expose individual differences in lexical knowledge. The first experiment found that the ERN was elicited during spelling decisions and that its magnitude was correlated with independent measures of subjects’ spelling knowledge. In the second experiment, we manipulated the phonology of misspelled stimuli and observed that ERN magnitudes were larger when misspelled words altered the phonology of their correctly spelled counterparts than when they preserved it. Thus, when an error is made in a decision about spelling, the brain processes indexed by the ERN reflect both phonological and orthographic input to the decision process. In both experiments, ERN effect sizes were correlated with assessments of lexical knowledge and reading, including offline spelling ability and spelling-mediated vocabulary knowledge. These results affirm the interdependent nature of orthographic, semantic, and phonological knowledge components while showing that spelling knowledge uniquely influences the ERN during spelling decisions. Finally, the study demonstrates the value of ERNs in exposing individual differences in lexical knowledge. PMID:24389506
A Maximum-error Specification Oriented Gross Error Identification Method
普仕凡; 韩旭; 李智生; 李钊
2014-01-01
A maximum-error specification oriented gross error identification method based on a generalized Pauta criterion is proposed, which provides a reference for gross error identification under a maximum-error specification. It is assumed that the target stochastic observation sequence is subject to an IID normal distribution. Then, through a risk analysis of mistaking the maximum observation value for gross error data, some modifications are made to the classic Pauta criterion, and the generalized Pauta criterion is introduced. A calculation method for the gross error identification threshold is also given. Practical application test results show that the method is feasible.
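The classic Pauta criterion is the familiar 3-sigma rule: an observation is flagged as a gross error when its deviation from the sample mean exceeds three sample standard deviations. A minimal sketch of this classic rule follows; the paper's generalized criterion replaces the fixed multiplier with a threshold derived from its risk analysis, which is represented here only by the hypothetical parameter k:

```python
import numpy as np

def pauta_gross_errors(x, k=3.0):
    """Flag gross errors by the Pauta (3-sigma) criterion.
    k = 3 recovers the classic rule; the paper's generalized criterion
    would instead derive the multiplier from a risk analysis of the
    maximum observation (not reproduced here)."""
    x = np.asarray(x, dtype=float)
    resid = np.abs(x - x.mean())
    return resid > k * x.std(ddof=1)   # boolean mask of flagged samples
```

For a sequence of values near 10 with one value of 100, only the outlier exceeds the 3-sigma band.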
Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex
2012-06-01
Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU ≫ ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
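The stated scaling can be written as C_max ∝ 1/sqrt((δv/v)·(ΔE_b/E_b)). A one-line sketch with an assumed proportionality constant of 1 (the abstract gives only the proportionality, not the prefactor):

```python
import math

def max_compression_ratio(rel_velocity_error, rel_energy_spread, prefactor=1.0):
    """Order-of-magnitude scaling stated in the abstract:
    C_max ~ prefactor / sqrt((delta_v/v) * (dE_b/E_b)).
    The prefactor of 1 is an assumption for illustration only."""
    return prefactor / math.sqrt(rel_velocity_error * rel_energy_spread)
```

For example, a 1% relative velocity error combined with a 0.01% relative energy spread bounds the compression near a factor of a thousand.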
Vilmos Simon
2013-01-01
The aim of this study is to define optimal tooth modifications, introduced by appropriately chosen head-cutter geometry and machine tool settings, to simultaneously minimize the tooth contact pressure and the angular displacement error of the driven gear (transmission error) of face-hobbed spiral bevel gears. As a result of these modifications, the gear pair becomes mismatched, and a point contact replaces the theoretical line contact. In the applied loaded tooth contact analysis it is assumed that the point contact under load spreads over a surface along the whole or part of the "potential" contact line. A computer program was developed to implement the formulation provided above. Using this program, the influence of tooth modifications introduced by variations in machine tool settings and in head-cutter data on load and pressure distributions, transmission errors, and fillet stresses is investigated and discussed. The correlation between the ease-off obtained by pinion tooth modifications and the corresponding tooth contact pressure distribution is investigated and the obtained results are presented.
An Exact Maximum Likelihood Error Registration Algorithm for Radar Network
丰昌政; 薛强
2012-01-01
To address the error registration problems of the least-squares and Kalman filter methods in radar network systems, an exact maximum likelihood error registration algorithm for radar networks is proposed. Using a maximum likelihood registration algorithm based on circular polar projection and the geometric relationships among the radar stations, the systematic errors of the radar network are estimated by a maximum likelihood hybrid Gauss-Newton iterative method, and simulations were carried out. The simulation results show that the registration method has good consistency and can be used for error registration in multi-radar networks.
T. Gnanasekaran
2008-01-01
Problem statement: In this study we propose a method to improve the performance of the Maximum A-Posteriori Probability (MAP) algorithm, which is used in turbo decoders. Previously, the performance of the turbo decoder was improved by scaling the channel reliability value. Approach: A modification of the MAP algorithm is proposed in this study, which achieves further improvement in forward error correction by scaling the extrinsic information in both decoders without introducing any complexity. The encoder is modified with a new puncturing matrix, which yields Unequal Error Protection (UEP). This modified MAP algorithm is analyzed with the traditional turbo code system with Equal Error Protection (EEP) and also with Unequal Error Protection (UEP), in both AWGN and fading channels. Results: MAP and modified MAP achieve a coding gain of 0.6 dB over EEP in the AWGN channel. MAP and modified MAP achieve coding gains of 0.4 dB and 0.9 dB over EEP, respectively, in the Rayleigh fading channel. Modified MAP in UEP classes 1 and 2 gained 0.8 dB and 0.6 dB, respectively, in the AWGN channel, whereas in the fading channel classes 1 and 2 gained 0.4 dB and 0.6 dB, respectively. Conclusion/Recommendations: The modified MAP algorithm improves the Bit Error Rate (BER) performance in EEP as well as UEP, in both AWGN and fading channels. We propose the modified MAP error correction algorithm with UEP for broadband communication.
22 CFR 201.68 - Maximum prices for commodity-related services.
2010-04-01
Section 201.68, Foreign Relations, Agency for International Development, Rules and Procedures Applicable to Commodity Transactions Financed by USAID, Price Provisions, § 201.68 Maximum prices for commodity-related services. ... each such service is eligible for USAID financing under § 201.67 or § 201.68(a) of this part.
Preliminary investigation on the relation between maximum wave height and wave spectra
Tao, Aifeng; Wen, Cheng; Wu, Yuqing; Wu, Haoran; Li, Shuo; Cao, Guangsui
2016-04-01
The maximum wave height is important not only for the determination of design wave parameters but also for marine disaster defense. However, it cannot be predicted straightforwardly at present, since the general numerical models for wave forecasting are all based on phase-averaged spectral models. It therefore becomes very useful to clarify the relationship between the maximum wave height and wave spectral parameters, such as the average wave steepness, the spectral width, and the spectral type (single-peak or multi-peak spectra). To carry out this research, plenty of observed wave data are required. We collected ten years of wave data measured from a ship in the North Sea, one year of wave pressure data from nine points around Korea, and four years of buoy data from three points along the Chinese coast. Preliminary results on the relations between maximum waves and spectra obtained from the above-mentioned observed data are presented here.
Relative measurement error analysis in the process of the Nakagami-m fading parameter estimation
Milentijević Vladeta
2011-01-01
An approach to the relative measurement error analysis in the process of estimating the moments of a Nakagami-m fading signal is presented in this paper. Relative error expressions are also derived for the cases when the MRC (Maximal Ratio Combining) diversity technique is performed at the receiver. Capitalizing on them, results are graphically presented and discussed to show the influence of various parameters, such as diversity order and fading severity, on the relative measurement error bounds.
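For context, a standard moment-based estimator of the Nakagami-m parameter is the inverse normalized variance of the instantaneous power, m = (E[r²])² / (E[r⁴] − (E[r²])²). A minimal sketch of this estimator, checked on synthetic envelope samples (this is the textbook moment estimator, not necessarily the exact estimator analyzed in the paper):

```python
import numpy as np

def nakagami_m_inv(r):
    """Inverse-normalized-variance moment estimator of the Nakagami-m
    fading parameter from envelope samples r:
        m_hat = E[r^2]^2 / (E[r^4] - E[r^2]^2)."""
    p = np.asarray(r, dtype=float) ** 2        # instantaneous power
    return p.mean() ** 2 / (np.mean(p ** 2) - p.mean() ** 2)

# Synthetic check: a Nakagami-m envelope is r = sqrt(G), G ~ Gamma(m, Omega/m)
rng = np.random.default_rng(0)
m_true, omega, n = 2.0, 1.0, 200_000
r = np.sqrt(rng.gamma(shape=m_true, scale=omega / m_true, size=n))
m_hat = nakagami_m_inv(r)
```

With a large sample the estimate lands close to the true fading parameter; the paper's relative-error analysis quantifies exactly how far off such moment estimates can be.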
CREME96 and Related Error Rate Prediction Methods
Adams, James H., Jr.
2012-01-01
Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular Parallelepiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and
Descriptive vector, relative error matrix, and interaction analysis of multivariable plants
Monshizadeh-Naini, Nima; Fatehi, Alireza; Kahki-Sedigh, Ali
In this paper, we introduce a vector which is able to describe the Niederlinski Index (NI), the Relative Gain Array (RGA), and the characteristic equation of the relative error matrix. The spectral radius and the structured singular value of the relative error matrix are investigated. The cases where
Gharekhani, Afshin; Kanani, Negin; Khalili, Hossein; Dashti-Khavidaki, Simin
2014-09-01
Medication errors are ongoing problems among hospitalized patients, especially those with multiple co-morbidities and polypharmacy, such as patients with renal diseases. This study evaluated the frequency, types and direct related costs of medication errors in a nephrology ward and the role played by clinical pharmacists. During this study, clinical pharmacists detected, managed, and recorded the medication errors. Prescribing errors, including inappropriate drug, dose, or treatment duration, were gathered. To assess transcription errors, the equivalence of nursing charts and physicians' orders was evaluated. Administration errors were assessed by observing drugs' preparation, storage, and administration by nurses. The changes in medication costs after implementing clinical pharmacists' interventions were compared with the calculated medication costs if the medication errors had continued up to patients' discharge time. More than 85% of patients experienced medication error. The rate of medication errors was 3.5 errors per patient and 0.18 errors per ordered medication. More than 95% of medication errors occurred at the prescription node. The most common prescribing errors were omission (26.9%), unauthorized drugs (18.3%), and low drug dosage or frequency (17.3%). Most of the medication errors involved cardiovascular drugs (24%), followed by vitamins and electrolytes (22.1%) and antimicrobials (18.5%). The number of medication errors was correlated with the number of ordered medications and the length of hospital stay. Clinical pharmacists' interventions decreased patients' direct medication costs by 4.3%. About 22% of medication errors led to patient harm. In conclusion, clinical pharmacists' contributions in nephrology wards were of value in preventing medication errors and reducing medication costs.
Application of the maximum relative entropy method to the physics of ferromagnetic materials
Giffin, Adom; Cafaro, Carlo; Ali, Sean Alan
2016-08-01
It is known that the Maximum relative Entropy (MrE) method can be used both to update and to approximate probability distribution functions in statistical inference problems. In this manuscript, we apply the MrE method to infer the magnetic properties of ferromagnetic materials. In addition to comparing our approach to more traditional methodologies based upon the Ising model and Mean Field Theory, we also test the effectiveness of the MrE method on conventionally unexplored ferromagnetic materials with defects.
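In the discrete single-constraint case, the MrE update of a prior q subject to an expectation constraint E_p[f] = F has the exponentially tilted form p_i ∝ q_i·exp(λ f_i), with λ fixed by the constraint. A minimal sketch of this generic update (not the paper's ferromagnetic application; the bisection bounds are assumptions):

```python
import numpy as np

def mre_update(q, f, target, lam_lo=-50.0, lam_hi=50.0, tol=1e-12):
    """Maximum relative Entropy update: among distributions p satisfying
    sum(p * f) = target, find the one closest to the prior q in relative
    entropy. The solution is exponential tilting p_i ~ q_i * exp(lam * f_i);
    lam is found by bisection, since the tilted mean is increasing in lam."""
    q = np.asarray(q, dtype=float)
    f = np.asarray(f, dtype=float)

    def tilted(lam):
        w = q * np.exp(lam * (f - f.max()))    # shift exponent for stability
        return w / w.sum()

    lo, hi = lam_lo, lam_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tilted(mid) @ f < target:
            lo = mid
        else:
            hi = mid
    return tilted(0.5 * (lo + hi))
```

With a uniform prior over f = (0, 1, 2) and a target mean of 1.0 the update returns the prior itself (λ = 0), while other targets shift mass accordingly.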
Amplitude of Accommodation and its Relation to Refractive Errors
Abraham Lekha
2005-01-01
Aims: To evaluate the relationship between amplitude of accommodation and refractive errors in the peri-presbyopic age group. Materials and Methods: Three hundred and sixteen right eyes of 316 consecutive patients in the age group 35-50 years who attended our outpatient clinic were studied. Emmetropes, hypermetropes and myopes with best-corrected visual acuity of 6/6 and J1 in both eyes were included. The amplitude of accommodation (AA) was calculated by measuring the near point of accommodation (NPA). In patients with more than ±2 diopter sphere correction for distance, the NPA was also measured using appropriate soft contact lenses. Results: There was a statistically significant difference in AA between myopes and hypermetropes. Conclusion: Our study showed a higher amplitude of accommodation among myopes between 35 and 44 years compared to emmetropes and hypermetropes.
Error processing in heroin addicts:an event-related potential study
林彬
2012-01-01
Objective: To investigate the relationship between impulsive behaviors and the error-related negativity (ERN) component of event-related potentials of error processing in heroin addicts. Methods: Using standard psychological experimental paradigms, the Iowa gambling task (IGT) was performed both in heroin
Yasso, B; Li, Y; Alexander, A; Mel'nikova, N B; Mukhina, I V
2014-01-01
A comparison of the relative bioavailability and intensity of penetration of glucosamine sulfate after oral, injection, and topical administration of the dosage form Hondroxid Maximum, a cream containing a micellar system for transdermal delivery of glucosamine, was carried out in an experiment on Sprague-Dawley rats. Based on the pharmacokinetic profiles of glucosamine in rat blood plasma after daily administration, three times a day for 1 week, of the cream Hondroxid Maximum at 400 mg/kg and a single injection of a 4% glucosamine sulfate solution at 400 mg/kg, the relative bioavailability was found to be 61.6%. The calculated penetration rate of glucosamine into the plasma through the rat skin over 4 hours was 26.9 μg/cm² × h, and the penetration of glucosamine through the skin into the plasma after a single dose of cream over 4 hours was 4.12%. Comparative analysis of literature and experimental data, and calculations based on them, suggests that Hondroxid Maximum, a cream with a transdermal glucosamine complex, can, when used in accordance with the instructions, provide an average concentration of glucosamine in the synovial fluid of an inflamed joint in the range 0.7-1.5 μg/ml, much higher than the concentration of endogenous glucosamine in human synovial joint fluid (0.02-0.07 μg/ml). Theoretical calculations taking experimental data into account show that Hondroxid Maximum can reach the bioavailability level of modern injection forms and exceed the bioavailability level of modern oral forms of glucosamine by up to 2 times.
A non-orthogonal SVD-based decomposition for phase invariant error-related potential estimation.
Phlypo, Ronald; Jrad, Nisrine; Rousseau, Sandra; Congedo, Marco
2011-01-01
The estimation of the Error Related Potential from a set of trials is a challenging problem. Indeed, the Error Related Potential is of low amplitude compared to the ongoing electroencephalographic activity. In addition, simple summing over the different trials is prone to errors, since the waveform does not appear at an exact latency with respect to the trigger. In this work, we propose a method to cope with the discrepancy of these latencies of the Error Related Potential waveform and offer a framework in which the estimation of the Error Related Potential waveform reduces to a simple Singular Value Decomposition of an analytic waveform representation of the observed signal. The followed approach is promising, since we are able to explain a higher portion of the variance of the observed signal with fewer components in the expansion.
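A simplified sketch of this idea: representing each trial by its analytic (complex) signal makes small latency jitter act mostly on the phase, so a singular value decomposition across trials can recover a latency-tolerant waveform estimate. The function below is an illustrative reduction of the approach, not the authors' exact non-orthogonal decomposition:

```python
import numpy as np
from scipy.signal import hilbert

def erp_template_svd(trials):
    """Estimate an ERP waveform from jittered trials.
    trials: (n_trials, n_samples) real EEG epochs, roughly time-locked.
    Each trial is mapped to its analytic signal, so small latency shifts
    mostly rotate the phase; the dominant SVD component across trials then
    gives a jitter-tolerant waveform estimate (returned as a magnitude
    envelope, with the fraction of variance it explains)."""
    analytic = hilbert(trials, axis=1)                 # complex analytic signal
    u, s, vh = np.linalg.svd(analytic, full_matrices=False)
    template = np.abs(vh[0])                           # dominant temporal component
    explained = s[0] ** 2 / np.sum(s ** 2)
    return template, explained
```

On synthetic trials consisting of a Gaussian deflection with a few samples of latency jitter plus noise, the dominant component peaks near the true waveform latency.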
Maximum relative speeds of living organisms: Why do bacteria perform as fast as ostriches?
Meyer-Vernet, Nicole; Rospars, Jean-Pierre
2016-12-01
Self-locomotion is central to animal behaviour and survival. It is generally analysed by focusing on preferred speeds and gaits under particular biological and physical constraints. In the present paper we focus instead on the maximum speed and we study its order-of-magnitude scaling with body size, from bacteria to the largest terrestrial and aquatic organisms. Using data for about 460 species of various taxonomic groups, we find a maximum relative speed of the order of magnitude of ten body lengths per second over a 10^20-fold mass range of running and swimming animals. This result implies a locomotor time scale of the order of one tenth of a second, virtually independent of body size, anatomy and locomotion style, whose ubiquity requires an explanation building on basic properties of motile organisms. From first-principles estimates, we relate this generic time scale to other basic biological properties, using in particular the recent generalisation of the muscle specific tension to molecular motors. Finally, we go a step further by relating this time scale to still more basic quantities, such as environmental conditions on Earth, in addition to fundamental physical and chemical constants.
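The ten-body-lengths-per-second rule is easy to sanity-check with rough published size and top-speed figures (the numbers below are illustrative approximations, not data from the paper):

```python
# Order-of-magnitude check of the ~10 body lengths per second rule.
# Sizes and speeds are rough illustrative values, not the paper's data set.
organisms = {
    # name: (body length [m], approximate maximum speed [m/s])
    "E. coli": (2e-6, 25e-6),
    "ostrich": (2.0, 18.0),
    "cheetah": (1.2, 29.0),
}

relative_speeds = {
    name: speed / length for name, (length, speed) in organisms.items()
}
```

Despite a roughly 10^17-fold mass difference between a bacterium and an ostrich, all three relative speeds fall within an order of magnitude of ten body lengths per second.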
Relating faults in diagnostic reasoning with diagnostic errors and patient harm.
Zwaan, L.; Thijs, A.; Wagner, C.; Wal, G. van der; Timmermans, D.R.M.
2012-01-01
Purpose: The relationship between faults in diagnostic reasoning, diagnostic errors, and patient harm has hardly been studied. This study examined suboptimal cognitive acts (SCAs; i.e., faults in diagnostic reasoning), related them to the occurrence of diagnostic errors and patient harm, and studied
Error-related EEG patterns during tactile human-machine interaction
Lehne, M.; Ihme, K.; Brouwer, A.M.; Erp, J.B.F. van; Zander, T.O.
2009-01-01
Recently, the use of brain-computer interfaces (BCIs) has been extended from active control to passive detection of cognitive user states. These passive BCI systems can be especially useful for automatic error detection in human-machine systems by recording EEG potentials related to human error proc
Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates
Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approx...
Assessment of the relative error in sessile drop method automation task
Levitskaya T.О.
2015-01-01
Further development of the sessile drop method is directly related to the development of new techniques and specially developed algorithms enabling automatic computer calculation of surface properties. The paper addresses improvement of the mathematical apparatus of the sessile drop method, transformation of the drop profile equation to a form suitable for computation, automation of the drop surface calculation, and analysis of the relative errors in the calculation...
Relative measurement error analysis in the process of the Nakagami-m fading parameter estimation
Milentijević Vladeta; Denić Dragan; Stefanović Mihajlo; Panić Stefan R.; Radenković Dragan
2011-01-01
An approach to relative measurement error analysis in the estimation of the moments of a Nakagami-m fading signal is presented in this paper. Relative error expressions are also derived for the cases when the MRC (Maximal Ratio Combining) diversity technique is employed at the receiver. Capitalizing on them, results are graphically presented and discussed to show the influence of various parameters, such as diversity order and fading severity, on the relative measurement...
Jiovanna Contreras Roura
2012-06-01
growth, recurrent infections, self-mutilation, immunodeficiencies, unexplainable haemolytic anemia, gout-related arthritis, family history, consanguinity and adverse reactions to purine-analogue drugs. The study of these diseases generally begins by quantifying serum uric acid and uric acid present in the urine, the final product of purine metabolism in human beings. Diet and drug consumption are among the pathological, physiological and clinical conditions capable of changing the level of this compound. This review was intended to disseminate information on the inborn purine metabolism errors as well as to facilitate the interpretation of the uric acid levels and other biochemical markers making the diagnosis of these diseases possible. Tables are presented relating these diseases to the excretory levels of uric acid and other biochemical markers, the altered enzymes, the clinical symptoms, the mode of inheritance and, in some cases, the suggested treatment. This paper allowed us to affirm that variations in the uric acid levels and the presence of other biochemical markers in urine are important tools in screening some inborn purine metabolism errors, and also other related pathological conditions.
Special relativity and theory of gravity via maximum symmetry and localization
GUO HanYing
2008-01-01
Just as the Euclid, Riemann and Lobachevski geometries stand on an almost equal footing, so, based on the principle of relativity of maximum symmetry proposed by Professor Lu Qikeng and the postulate of invariant universal constants c and R, the de Sitter/anti-de Sitter (dS/AdS) special relativity on dS/AdS-space with radius R can be set up on an almost equal footing with Einstein's special relativity on Minkowski space, recovered in the limit R → ∞. Thus the dS-space is coin-like: a law of inertia in the Beltrami atlas with Beltrami time simultaneity for the principle of relativity on one side, and the proper-time simultaneity and a Robertson-Walker-like dS-space with entropy and an accelerated expanding S^3 fitting the cosmological principle on the other side. If our universe is asymptotic to the Robertson-Walker-like dS-space of R ≈ (3/Λ)^(1/2), it should be slightly closed in O(Λ) with entropy bound S ≈ 3πc^3 kB/(ΛGh). Conversely, via its asymptotic behavior, it can fix Beltrami inertial frames without 'an argument in a circle' and acts as the origin of inertia. There is a triality of conformal extensions of three kinds of special relativity and their null physics on the projective boundary of a 5-d AdS-space, a null cone modulo projective equivalence [N] ≅ ∂(AdS5). Thus there should be a dS-space on the boundary of S5 × AdS5 as a vacuum of supergravity. In the light of Einstein's 'Galilean regions', gravity should be based on the localized principle of relativity of full maximum symmetry with gauge-like dynamics. Thus, this may lead to a theory of gravity of corresponding local symmetry. A simple model of dS-gravity characterized by a dimensionless constant g ≈ (ΛGh/3c^3)^(1/2) ~ 10^-61 shows the features on umbilical manifolds of local dS-invariance. Some gravitational effects beyond general relativity may play a role as dark matter. The dark universe and its asymptotic behavior may already indicate that dS special relativity and dS-gravity are the foundation of large-scale physics.
Mohammad H. Radfar
2006-11-01
We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate these components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signals' vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters, along with the extracted fundamental frequencies, are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on underdetermined blind source separation. We compare our model with both an underdetermined blind source separation method and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.
Lin, Yanli; Moran, Tim P; Schroder, Hans S; Moser, Jason S
2015-10-01
Anxious apprehension/worry is associated with exaggerated error monitoring; however, the precise mechanisms underlying this relationship remain unclear. The current study tested the hypothesis that the worry-error monitoring relationship involves left-lateralized linguistic brain activity by examining the relationship between worry and error monitoring, indexed by the error-related negativity (ERN), as a function of hand of error (Experiment 1) and stimulus orientation (Experiment 2). Results revealed that worry was exclusively related to the ERN on right-handed errors committed by the linguistically dominant left hemisphere. Moreover, the right-hand ERN-worry relationship emerged only when stimuli were presented horizontally (known to activate verbal processes) but not vertically. Together, these findings suggest that the worry-ERN relationship involves left hemisphere verbal processing, elucidating a potential mechanism to explain error monitoring abnormalities in anxiety. Implications for theory and practice are discussed.
Sabitha Gauni
2014-03-01
In the field of wireless communication, there is always a demand for reliability, improved range and speed. Many wireless systems such as OFDM, CDMA2000 and WCDMA provide a solution to this problem when incorporated with Multiple-Input Multiple-Output (MIMO) technology. Due to the complexity of the signal processing, MIMO is highly expensive in terms of area consumption. In this paper, a method of MIMO receiver design is proposed to reduce the area consumed by the processing elements involved in complex signal processing: a solution for area reduction in the MIMO Maximum Likelihood (MLE) receiver using Sorted QR Decomposition and a unitary transformation method is analyzed. It provides a unified approach, reduces ISI, and offers better performance at low cost. The receiver pre-processor architecture based on Minimum Mean Square Error (MMSE) is compared while using iterative SQRD and the unitary transformation method for vectoring. Unitary transformations are transformations of matrices which maintain the Hermitian nature of the matrix and the multiplication and addition relationships between the operators. This helps to reduce the computational complexity significantly. The dynamic range of all variables is tightly bounded and the algorithm is well suited for fixed-point arithmetic.
Online detection of error-related potentials boosts the performance of mental typewriters
Schmidt, Nico M.; Blankertz, Benjamin; Treder, Matthias S.
2012-01-01
BMC Neuroscience, ISSN 1471-2202, 13 (2012), art. 19, doi:10.1186/1471-2202-13-19. Background: Increasing the communication speed of brain-computer interfaces (BCIs) is a major aim of current BCI research. The idea to automatically detect error-related potentials (ErrPs) in order to veto...
SEBA SUSAN; NANDINI AGGARWAL; SHEFALI CHAND; AYUSH GUPTA
2016-12-01
In this paper we investigate information-theoretic image coding techniques that assign longer codes to improbable, imprecise and non-distinct intensities in the image. When applied to cropped facial images of subjects with different facial expressions, the variable-length coding techniques highlight the set of low-probability intensities that characterize the facial expression, such as the creases in the forehead, the widening of the eyes and the opening and closing of the mouth. A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities, when used as a mask, decode the facial expression correctly, providing an effective platform for future emotion categorization experiments.
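The core information-theoretic idea can be sketched in a few lines (a generic Shannon-code-length illustration, not the authors' maximum-entropy-partitioning scheme): each intensity's code length is its self-information, so rare intensities get long codes and can be thresholded into a mask:

```python
import numpy as np

def intensity_code_lengths(image, n_levels=256):
    """Shannon code lengths (bits) per intensity level: improbable
    intensities get longer codes, flagging 'surprising' pixels such as
    expression creases."""
    img = np.asarray(image).ravel()
    counts = np.bincount(img, minlength=n_levels).astype(float)
    p = counts / counts.sum()
    lengths = np.full(n_levels, np.inf)
    nz = p > 0
    lengths[nz] = -np.log2(p[nz])           # self-information per level
    return lengths

def improbable_mask(image, threshold_bits=7.0):
    """Mask of pixels whose intensity code length exceeds a threshold
    (the threshold value here is an illustrative choice)."""
    lengths = intensity_code_lengths(image)
    return lengths[np.asarray(image)] > threshold_bits
```

Applied to a face image, the mask picks out the pixels carrying the most information per occurrence, which is what makes it a candidate feature for expression analysis.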
Maximum jaw opening capacity in adolescents in relation to general joint mobility.
Westling, L; Helkimo, E
1992-09-01
Mandibular jaw opening was related to general joint mobility in a non-patient adolescent group. The angular rotation of the mandible at maximum jaw opening was slightly larger in females than in males and significantly larger in hypermobile individuals. No significant relationship between linear measures of maximal mandibular opening capacity and peripheral joint mobility was found at either the active (AROM) or the passive range of mandibular opening (PROM). PROM was strongly correlated with mandibular length. Clinical signs in the great jaw-closing muscles could not be associated with decreased AROM. The mean difference between PROM and AROM (DPA) was 1.2 mm. Frequent clenching and/or grinding was correlated with increased DPA only in hypermobile adolescents (r = 0.49***). All those with DPA exceeding 5 mm had reciprocal clicking.
Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong
2016-06-16
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.
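The alignment problem the abstract describes can be sketched as follows. Under white Gaussian noise, maximum-likelihood delay estimation against a reference reduces to maximizing cross-correlation; iterating "align to the running average, then re-average" is a simplified, ML-flavoured stand-in for the joint-ML schemes proposed in the paper (function and parameter names are illustrative):

```python
import numpy as np

def align_trials_ml(trials, max_shift=20, n_iter=5):
    """Estimate per-trial delays by cross-correlation against the running
    average, then realign and re-average. Returns integer shifts (up to a
    common offset) and the aligned average waveform."""
    X = np.asarray(trials, dtype=float)
    shifts = np.zeros(len(X), dtype=int)
    ref = X.mean(axis=0)                    # initial (smeared) reference
    for _ in range(n_iter):
        for i, x in enumerate(X):
            # circular cross-correlation over candidate shifts
            corr = [np.dot(np.roll(x, -s), ref)
                    for s in range(-max_shift, max_shift + 1)]
            shifts[i] = int(np.argmax(corr)) - max_shift
        aligned = np.array([np.roll(x, -s) for x, s in zip(X, shifts)])
        ref = aligned.mean(axis=0)          # sharper reference each pass
    return shifts, ref
```

This captures why plain averaging is "very sensitive to variable delays": without the realignment step, the reference stays smeared and the SNR gain of averaging is lost.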
Punishment has a lasting impact on error-related brain activity.
Riesel, Anja; Weinberg, Anna; Endrass, Tanja; Kathmann, Norbert; Hajcak, Greg
2012-02-01
The current study examined whether punishment has direct and lasting effects on error-related brain activity, and whether this effect is larger with increasing trait anxiety. Participants were told that errors on a flanker task would be punished in some blocks but not others. Punishment was applied following 50% of errors in punished blocks during the first half of the experiment (i.e., acquisition), but never in the second half (i.e., extinction). The ERN was enhanced in the punished blocks in both experimental phases--this enhancement remained stable throughout the extinction phase. More anxious individuals were characterized by larger punishment-related modulations in the ERN. The study reveals evidence for lasting, punishment-based modulations of the ERN that increase with anxiety. These data suggest avenues for research to examine more specific learning-related mechanisms that link anxiety to overactive error monitoring.
Error-Related Negativity and Tic History in Pediatric Obsessive-Compulsive Disorder
Hanna, Gregory L.; Carrasco, Melisa; Harbin, Shannon M.; Nienhuis, Jenna K.; LaRosa, Christina E.; Chen, Poyu; Fitzgerald, Kate D.; Gehring, William J.
2012-01-01
Objective: The error-related negativity (ERN) is a negative deflection in the event-related potential after an incorrect response, which is often increased in patients with obsessive-compulsive disorder (OCD). However, the relation of the ERN to comorbid tic disorders has not been examined in patients with OCD. This study compared ERN amplitudes…
An event-related potential investigation of error monitoring in adults with a history of psychosis.
Chan, Chi C; Trachik, Benjamin J; Bedwell, Jeffrey S
2015-09-01
Previous research suggests that deficits in error monitoring contribute to psychosis and poor functioning. Consistent with the NIMH Research Domain Criteria initiative, this study examined electrophysiological brain activity, appraisal of self-performance, and personality traits related to psychosis during error monitoring in individuals with and without a history of psychosis across disorders. Error-related negativity (ERN), correct response negativity (CRN), error positivity (Pe), and correct response positivity (Pc) were recorded in 14 individuals with a history of psychosis (PSY) and 12 individuals with no history of psychosis (CTR) during a flanker task. Participants continuously rated their performance and completed the Schizotypal Personality Questionnaire-Brief Revised (SPQ-BR). Compared with CTR, PSY exhibited reduced ERN and Pe amplitudes and was also less accurate at evaluating their performance. Group differences were specific to error trials. Across all participants, smaller Pe amplitudes were associated with greater scores on the SPQ-BR Cognitive-Perceptual factor and less accuracy in subjective identification of errors. Individuals with a history of psychosis, regardless of diagnosis, demonstrated abnormal neural activity and imprecise confidence in response during error monitoring. Results suggest that disruptions in neural circuitry may underlie specific clinical symptoms across diagnostic categories.
Event-related potentials for post-error and post-conflict slowing.
Andrew Chang
In a reaction time task, people typically slow down following an error or conflict, called post-error slowing (PES) and post-conflict slowing (PCS), respectively. Despite many studies of the cognitive mechanisms, the neural responses underlying PES and PCS continue to be debated. In this study, we combined high-density array EEG and a stop-signal task to examine event-related potentials of PES and PCS in sixteen young adult participants. The results showed that the amplitude of N2 is greater during PES but not PCS. In contrast, the peak latency of N2 is longer for PCS but not PES. Furthermore, the error positivity (Pe), but not the error-related negativity (ERN), was greater in the stop-error trials preceding PES than in non-PES trials, suggesting that PES is related to participants' awareness of the error. Together, these findings extend earlier work on cognitive control by specifying the neural correlates of PES and PCS in the stop-signal task.
Reppert, Michael; Tokmakoff, Andrei
The structural characterization of intrinsically disordered peptides (IDPs) presents a challenging biophysical problem. Extreme heterogeneity and rapid conformational interconversion make traditional methods difficult to interpret. Due to its ultrafast (ps) shutter speed, amide I vibrational spectroscopy has received considerable interest as a novel technique to probe IDP structure and dynamics. Historically, amide I spectroscopy has been limited to delivering global secondary structural information. More recently, however, the method has been adapted to study structure at the local level through incorporation of isotope labels into the protein backbone at specific amide bonds. Thanks to the acute sensitivity of amide I frequencies to local electrostatic interactions, particularly hydrogen bonds, spectroscopic data on isotope-labeled residues directly report on local peptide conformation. Quantitative information can be extracted using electrostatic frequency maps, which translate molecular dynamics trajectories into amide I spectra for comparison with experiment. Here we present our recent efforts in the development of a rigorous approach to incorporating amide I spectroscopic restraints into refined molecular dynamics structural ensembles using maximum entropy and related approaches. By combining force field predictions with experimental spectroscopic data, we construct refined structural ensembles for a family of short, strongly disordered, elastin-like peptides in aqueous solution.
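The maximum-entropy refinement step has a standard generic form that can be sketched briefly (this is a textbook MaxEnt reweighting illustration, not the authors' amide I refinement code): ensemble members are reweighted as minimally as possible, in the relative-entropy sense, so that the weighted average of a predicted observable matches the experimental value, which gives Boltzmann-like weights with a single Lagrange multiplier:

```python
import numpy as np

def maxent_reweight(predicted, target, lam_bounds=(-50.0, 50.0), n_iter=100):
    """Maximum-entropy reweighting of an ensemble: find weights
    w_i proportional to exp(-lam * f_i) such that sum_i w_i * f_i hits the
    experimental target. lam is found by bisection, since the weighted
    average is monotonically decreasing in lam."""
    f = np.asarray(predicted, dtype=float)

    def weighted_avg(lam):
        w = np.exp(-lam * (f - f.mean()))   # shifted for numerical stability
        w /= w.sum()
        return np.dot(w, f)

    lo, hi = lam_bounds
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if weighted_avg(mid) > target:
            lo = mid                        # need more weight on small f
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(-lam * (f - f.mean()))
    return w / w.sum()
```

In a refinement like the one described above, `f` would hold per-structure predicted amide I observables from the force-field ensemble and `target` the measured value; the returned weights define the refined ensemble.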
Rotating proto-neutron stars: spin evolution, maximum mass and I-Love-Q relations
Martinon, Grégoire; Gualtieri, Leonardo; Ferrari, Valeria
2014-01-01
Shortly after its birth in a gravitational collapse, a proto-neutron star enters a phase of quasi-stationary evolution characterized by large gradients of the thermodynamical variables and intense neutrino emission. In a few tens of seconds the gradients smooth out while the star contracts and cools down, until it becomes a neutron star. In this paper we study this phase of the proto-neutron star life including rotation, and employing finite temperature equations of state. We model the evolution of the rotation rate, and determine the relevant quantities characterizing the star. Our results show that an isolated neutron star cannot reach, at the end of the evolution, the maximum values of mass and rotation rate allowed by the zero-temperature equation of state. Moreover, a mature neutron star evolved in isolation cannot rotate too rapidly, even if it is born from a proto-neutron star rotating at the mass-shedding limit. We also show that the I-Love-Q relations are violated in the first second of life, but th...
Torpey, Dana C.; Hajcak, Greg; Kim, Jiyon; Kujawa, Autumn J.; Dyson, Margaret W.; Olino, Thomas M.; Klein, Daniel N.
2013-01-01
Background: There is increasing interest in error-related brain activity in anxiety disorders. The error-related negativity (ERN) is a negative deflection in the event-related potential approximately 50 ms after errors compared to correct responses. Recent studies suggest that the ERN may be a biomarker for anxiety, as it is positively…
Senior High School Students' Errors on the Use of Relative Words
Bao, Xiaoli
2015-01-01
Relative clause is one of the most important language points in College English Examination. Teachers have been attaching great importance to the teaching of relative clause, but the outcomes are not satisfactory. Based on Error Analysis theory, this article aims to explore the reasons why senior high school students find it difficult to choose…
Spatial reconstruction by patients with hippocampal damage is dominated by relational memory errors.
Watson, Patrick D; Voss, Joel L; Warren, David E; Tranel, Daniel; Cohen, Neal J
2013-07-01
Hippocampal damage causes profound yet circumscribed memory impairment across diverse stimulus types and testing formats. Here, within a single test format involving a single class of stimuli, we identified different performance errors to better characterize the specifics of the underlying deficit. The task involved study and reconstruction of object arrays across brief retention intervals. The most striking feature of the performance of patients with hippocampal damage was that they tended to reverse the relative positions of item pairs within arrays of any size, effectively "swapping" pairs of objects. These "swap errors" were the primary error type in amnesia, almost never occurred in healthy comparison participants, and actually contributed to poor performance on more traditional metrics (such as distance between studied and reconstructed location). Patients made swap errors even in trials involving only a single pair of objects. The selectivity and severity of this particular deficit creates serious challenges for theories of memory and hippocampus.
Eggers, G. L.; Lewis, K. W.; Simons, F. J.; Olhede, S.
2013-12-01
Venus does not possess a plate-tectonic system like that observed on Earth, and many surface features--such as tesserae and coronae--lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere, requiring a study of topography and gravity, and how they relate. Past studies of topography dealt with mapping and classification of visually observed features, and studies of gravity dealt with inverting the relation between topography and gravity anomalies to recover surface density and elastic thickness in either the space (correlation) or the spectral (admittance, coherence) domain. In the former case, geological features could be delineated but not classified quantitatively. In the latter case, rectangular or circular data windows were used, lacking geological definition. While the estimates of lithospheric strength on this basis were quantitative, they lacked robust error estimates. Here, we remapped the surface into 77 regions visually and qualitatively defined from a combination of Magellan topography, gravity, and radar images. We parameterize the spectral covariance of the observed topography, treating it as a Gaussian process assumed to be stationary over the mapped regions, using a three-parameter isotropic Matérn model, and perform maximum-likelihood based inversions for the parameters. We discuss the parameter distribution across the Venusian surface and across terrain types such as coronae, dorsae, tesserae, and their relation with mean elevation and latitudinal position. We find that the three-parameter model, while mathematically established and applicable to Venus topography, is overparameterized, and thus reduce the results to a two-parameter description of the peak spectral variance and the range-to-half-peak variance (as a function of wavenumber). With the reduction, the clustering of geological region types in two-parameter space becomes promising. Finally, we perform inversions for the joint spectral variance of
Niemeyer, Kyle E; Raju, Mandhapati P
2016-01-01
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane ...
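The error-propagation step of DRGEP has a compact graph formulation: the overall importance of a species to a target is the maximum over all paths of the product of direct interaction coefficients along the path, which a Dijkstra-style max-product search finds efficiently. A minimal sketch (the direct coefficients themselves would come from reaction-rate analysis, and the names here are illustrative):

```python
import heapq

def drgep_importance(direct, target):
    """Overall importance coefficients R[target -> B]: the maximum over all
    graph paths of the product of direct interaction coefficients, found
    with a Dijkstra-style max-product search. `direct` maps each species to
    a dict of {neighbour: coefficient in [0, 1]}."""
    R = {target: 1.0}
    heap = [(-1.0, target)]                 # max-heap via negated values
    while heap:
        neg, a = heapq.heappop(heap)
        if -neg < R.get(a, 0.0):
            continue                        # stale entry
        for b, r_ab in direct.get(a, {}).items():
            cand = -neg * r_ab              # path product through a
            if cand > R.get(b, 0.0):
                R[b] = cand
                heapq.heappush(heap, (-cand, b))
    return R

def skeletal_species(direct, targets, threshold):
    """DRGEP pruning: keep species important to at least one target."""
    keep = set(targets)
    for t in targets:
        R = drgep_importance(direct, t)
        keep |= {s for s, v in R.items() if v >= threshold}
    return keep
```

In the full DRGEPSA workflow described above, this pruning pass runs first, and sensitivity analysis is then applied only to the surviving borderline species.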
Adom Giffin
2014-09-01
In this paper, we continue our efforts to show how maximum relative entropy (MrE) can be used as a universal updating algorithm. Here, our purpose is to tackle a joint state and parameter estimation problem where our system is nonlinear and in a non-equilibrium state, i.e., perturbed by varying external forces. Traditional parameter estimation can be performed by using filters, such as the extended Kalman filter (EKF). However, as shown with a toy example of a system with first-order non-homogeneous ordinary differential equations, assumptions made by the EKF algorithm (such as the Markov assumption) may not be valid. The problem can be solved with exponential smoothing, e.g., the exponentially weighted moving average (EWMA). Although this has been shown to produce acceptable filtering results in real exponential systems, it still cannot simultaneously estimate both the state and its parameters and has its own assumptions that are not always valid, for example when jump discontinuities exist. We show that by applying MrE as a filter, we can not only develop the closed form solutions, but we can also infer the parameters of the differential equation simultaneously with the means. This is useful in real, physical systems, where we want to not only filter the noise from our measurements, but we also want to simultaneously infer the parameters of the dynamics of a nonlinear and non-equilibrium system. Although there were many assumptions made throughout the paper to illustrate that EKF and exponential smoothing are special cases of MrE, we are not “constrained” by these assumptions. In other words, MrE is completely general and can be used in broader ways.
Dysfunctional error-related processing in incarcerated youth with elevated psychopathic traits
J. Michael Maurer
2016-06-01
Adult psychopathic offenders show an increased propensity towards violence, impulsivity, and recidivism. A subsample of youth with elevated psychopathic traits represent a particularly severe subgroup characterized by extreme behavioral problems and comparable neurocognitive deficits as their adult counterparts, including perseveration deficits. Here, we investigate response-locked event-related potential (ERP) components (the error-related negativity [ERN/Ne] related to early error-monitoring processing and the error-related positivity [Pe] involved in later error-related processing) in a sample of incarcerated juvenile male offenders (n = 100) who performed a response inhibition Go/NoGo task. Psychopathic traits were assessed using the Hare Psychopathy Checklist: Youth Version (PCL:YV). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Using linear regression analyses, PCL:YV scores were unrelated to the ERN/Ne, but were negatively related to Pe mean amplitude. Specifically, the PCL:YV Facet 4 subscale reflecting antisocial traits emerged as a significant predictor of reduced amplitude of a subcomponent underlying the Pe identified with PCA. This is the first evidence to suggest a negative relationship between adolescent psychopathy scores and Pe mean amplitude.
Nutrient maximums related to low oxygen concentrations in the southern Canada Basin
JIN Ming-ming; SHI Jiuxin; LU Yong; CHEN Jianfang; GAO Guoping; WU Jingfeng; ZHANG Haisheng
2005-01-01
Among the world oceans, the phenomenon of nutrient maximums at 70-200 m occurs only in the region of the Canada Basin. The prevailing hypothesis was that the direct injection of low-temperature, high-nutrient brines from the Chukchi Sea shelf (<50 m) in winter produced the nutrient maximums. However, we found five problems with the direct injection process. Jin et al. earlier considered that the formation of nutrient maximums could be a process of locally long-term regeneration. Here we propose a regeneration-mixture process. Data on temperature, salinity, oxygen and nutrients were collected at three stations in the southern Canada Basin during the summer 1999 cruise. We identified the cores of the surface, near-surface, and potential temperature maximum waters and of the Arctic Bottom Water by the diagrams and vertical profiles of salinity, potential temperature, oxygen and nutrients. The historical 129I data indicated that the surface and near-surface waters were Pacific-origin, but the waters below the potential temperature maximum core depth were Atlantic-origin. Along with the correlation of nutrient maximums and very low oxygen contents in the near-surface water, we hypothesize that the putative organic matter was decomposed to inorganic nutrients and that the Pacific water was mixed with the Atlantic water in the transition zone. The idea of the regeneration-mixture process agrees with the historical observations of no apparent seasonal changes, the smooth nutrient profiles, the lowest saturation of CaCO3 above 400 m, the low rate of CFC-11 ventilation, and 3H-3He ages of 8-18 a around the nutrient maximum depths.
Marcelo Matida Hamata
2009-02-01
Fabrication of occlusal splints in centric relation for temporomandibular disorder (TMD) patients is arguable, since this position has been defined for the asymptomatic stomatognathic system. Thus, maximum intercuspation might be employed in patients with occlusal stability, eliminating the need for interocclusal records. This study compared occlusal splints fabricated in centric relation and in maximum intercuspation for muscle pain reduction in TMD patients. Twenty patients with TMD of myogenous origin and bruxism were divided into 2 groups treated with splints in maximum intercuspation (group I) or centric relation (group II). Clinical, electrognathographic and electromyographic examinations were performed before and 3 months after therapy. Data were analyzed by Student's t test. Differences at the 5% level of probability were considered statistically significant. There was a remarkable reduction in pain symptomatology, without statistically significant differences (p>0.05) between the groups. There was mandibular repositioning during therapy, as demonstrated by the change in occlusal contacts on the splints. Electrognathographic examination demonstrated a significant increase in maximum left lateral movement for group I and right lateral movement for group II (p<0.05). There were no statistically significant differences (p>0.05) in the electromyographic activities at rest after utilization of both splints. In conclusion, both occlusal splints were effective for pain control and presented similar action. The results suggest that maximum intercuspation may be used for fabrication of occlusal splints in patients with occlusal stability without large discrepancies between centric relation and maximum intercuspation. Moreover, this technique is simpler and less expensive.
Experimental violation and reformulation of the Heisenberg's error-disturbance uncertainty relation.
Baek, So-Young; Kaneda, Fumihiro; Ozawa, Masanao; Edamatsu, Keiichi
2013-01-01
The uncertainty principle formulated by Heisenberg in 1927 describes a trade-off between the error of a measurement of one observable and the disturbance caused on another complementary observable such that their product should be no less than the limit set by Planck's constant. However, Ozawa in 1988 showed a model of position measurement that breaks Heisenberg's relation and in 2003 revealed an alternative relation for error and disturbance to be proven universally valid. Here, we report an experimental test of Ozawa's relation for a single-photon polarization qubit, exploiting a more general class of quantum measurements than the class of projective measurements. The test is carried out by linear optical devices and realizes an indirect measurement model that breaks Heisenberg's relation throughout the range of our experimental parameter and yet validates Ozawa's relation.
Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation
Prentice, J. S. C.
2012-01-01
An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
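The grid-refinement idea behind such error control can be illustrated with a minimal sketch (the Jacobi solver, grid sizes, and manufactured test problem below are our illustrative assumptions, not the paper's algorithm): solving the same problem at two resolutions exposes the O(h^2) accuracy of the five-point scheme, which is what absolute/relative error estimation from multiple grids relies on.

```python
import numpy as np

def solve_poisson(n, sweeps=20000):
    """Five-point finite-difference solution of u_xx + u_yy = f on the unit
    square with u = 0 on the boundary, by Jacobi iteration on an n x n
    interior grid. f is manufactured so the exact solution is
    sin(pi x) sin(pi y); returns the maximum absolute error against it."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    f = -2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
    u = np.zeros((n + 2, n + 2))  # array includes the zero Dirichlet boundary
    for _ in range(sweeps):       # Jacobi: u_C = (sum of 4 neighbors - h^2 f) / 4
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:] - h**2 * f)
    return np.abs(u[1:-1, 1:-1] - np.sin(np.pi * X) * np.sin(np.pi * Y)).max()
```

Halving h cuts the maximum error by roughly a factor of 4, so the difference between two resolutions serves as a computable error estimate.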
Developmental changes in error monitoring : An event-related potential study
Wiersema, Jan R.; van der Meere, Jacob J.; Roeyers, Herbert
2007-01-01
The aim of the study was to investigate the developmental trajectory of error monitoring. For this purpose, children (age 7-8), young adolescents (age 13-14) and adults (age 23-24) performed a Go/No-Go task and were compared on overt reaction time (RT) performance and on event-related potentials (ERPs).
Experimental test of error-disturbance uncertainty relations by weak measurement.
Kaneda, Fumihiro; Baek, So-Young; Ozawa, Masanao; Edamatsu, Keiichi
2014-01-17
We experimentally test the error-disturbance uncertainty relation (EDR) in generalized, strength-variable measurement of a single photon polarization qubit, making use of weak measurement that keeps the initial signal state practically unchanged. We demonstrate that the Heisenberg EDR is violated, yet the Ozawa and Branciard EDRs are valid throughout the range of our measurement strength.
A new accuracy measure based on bounded relative error for time series forecasting.
Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M
2017-01-01
Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
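A minimal sketch of the proposed measure, following the construction described in the abstract (a bounded relative absolute error against a benchmark forecast, averaged, then unscaled). The function name and the lack of a guard for ties where both forecast and benchmark are exactly right are our simplifications:

```python
import numpy as np

def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error (sketch).
    BRAE_t = |e_t| / (|e_t| + |e*_t|) lies in [0, 1], so outliers cannot
    dominate the mean; the final step unscales the mean back to a
    relative-error-like quantity."""
    e = np.abs(np.asarray(actual, float) - np.asarray(forecast, float))
    e_star = np.abs(np.asarray(actual, float) - np.asarray(benchmark, float))
    brae = e / (e + e_star)       # bounded relative absolute error per point
    mbrae = brae.mean()
    return mbrae / (1.0 - mbrae)  # unscaling step
```

With a naive benchmark, UMBRAE = 1 means the evaluated forecast errs, on average, about as much as the benchmark; values below 1 favor the forecast.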
Tracing Error-Related Knowledge in Interview Data: Negative Knowledge in Elder Care Nursing
Gartmeier, Martin; Gruber, Hans; Heid, Helmut
2010-01-01
This paper empirically investigates elder care nurses' negative knowledge. This form of experiential knowledge is defined as the outcome of error-related learning processes, focused on how something is not, on what not to do in certain situations or on deficits in one's knowledge or skills. Besides this definition, we presume the existence of…
Age-related changes in error processing in young children: A school-based investigation
Jennie K. Grammer
2014-07-01
Growth in executive functioning (EF) skills plays a role in children's academic success, and the transition to elementary school is an important time for the development of these abilities. Despite this, evidence concerning the development of the ERP components linked to EF, including the error-related negativity (ERN) and the error positivity (Pe), over this period is inconclusive. Data were recorded in a school setting from 3- to 7-year-old children (N = 96, mean age = 5 years 11 months) as they performed a Go/No-Go task. Results revealed the presence of the ERN and Pe on error relative to correct trials at all age levels. Older children showed increased response inhibition as evidenced by faster, more accurate responses. Although developmental changes in the ERN were not identified, the Pe increased with age. In addition, girls made fewer mistakes and showed elevated Pe amplitudes relative to boys. Based on a representative school-based sample, findings indicate that the ERN is present in children as young as 3, and that development can be seen in the Pe between ages 3 and 7. Results varied as a function of gender, providing insight into the range of factors associated with developmental changes in the complex relations between behavioral and electrophysiological measures of error processing.
Error-related ERP components and individual differences in punishment and reward sensitivity
Boksem, Maarten A. S.; Tops, Mattie; Wester, Anne E.; Meijman, Theo F.; Lorist, Monique M.
2006-01-01
Although the focus of the discussion regarding the significance of the error-related negativity (ERN/Ne) has been on the cognitive factors reflected in this component, there is now a growing body of research that describes influences of motivation, affective style and other factors of personality on
TGAS Error Renormalization from the RR Lyrae Period-Luminosity Relation
Gould, Andrew; Sesar, Branimir
2016-01-01
The Gaia team has applied a renormalization to their internally-derived parallax errors $\sigma_{\rm int}(\pi)$: $$\sigma_{tgas}(\pi) = \sqrt{[A\sigma_{int}(\pi)]^2 + \sigma_0^2}; \quad (A,\sigma_0) = (1.4, 0.20\ \rm mas)$$ based on comparison to Hipparcos astrometry. We use a completely independent method based on the RR Lyrae $K$-band period-luminosity relation to derive a substantially different result, with smaller ultimate errors: $$(A,\sigma_0) = (1.1, 0.12\ \rm mas) \quad (this\ paper).$$ We argue that our estimate is likely to be more accurate and therefore that the reported TGAS parallax errors should be reduced according to the prescription: $$\sigma_{true}(\pi) = \sqrt{(0.79\,\sigma_{tgas}(\pi))^2 - (0.10\ \rm mas)^2}.$$
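The prescriptions above are one-line transformations; a minimal sketch (the function names are ours) with the Gaia-team constants as defaults:

```python
import math

def tgas_error(sigma_int, A=1.4, sigma_0=0.20):
    """Gaia-team renormalization of the internal parallax error (mas)."""
    return math.sqrt((A * sigma_int) ** 2 + sigma_0 ** 2)

def true_error(sigma_tgas):
    """Reduced error per the paper's prescription (mas); requires
    sigma_tgas > 0.10 / 0.79 mas so the square root stays real."""
    return math.sqrt((0.79 * sigma_tgas) ** 2 - 0.10 ** 2)
```

Chaining the two, true_error(tgas_error(s)) approximately reproduces the paper's alternative calibration sqrt((1.1 s)^2 + (0.12 mas)^2), since 1.1/1.4 ≈ 0.79.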
Error-related brain activity reveals self-centric motivation: culture matters.
Kitayama, Shinobu; Park, Jiyoung
2014-02-01
To secure the interest of the personal self (vs. social others) is considered a fundamental human motive, but the nature of the motivation to secure the self-interest is not well understood. To address this issue, we assessed electrocortical responses of European Americans and Asians as they performed a flanker task while instructed to earn as many reward points as possible either for the self or for their same-sex friend. For European Americans, error-related negativity (ERN)--an event-related potential component contingent on error responses--was significantly greater in the self condition than in the friend condition. Moreover, post-error slowing--an index of cognitive control to reduce errors--was observed in the self condition but not in the friend condition. Neither of these self-centric effects was observed among Asians, consistent with prior cross-cultural behavioral evidence. Interdependent self-construal mediated the effect of culture on the ERN self-centric effect. Our findings provide the first evidence for a neural correlate of self-centric motivation, which becomes more salient outside of interdependent social relations.
Fishman, Inna; Ng, Rowena
2013-04-01
While the personality trait of extraversion has been linked to enhanced reward sensitivity and its putative neural correlates, little is known about whether extraverts' neural circuits are particularly sensitive to social rewards, given their preference for social engagement and social interactions. Using event-related potentials (ERPs), this study examined the relationship between the variation on the extraversion spectrum and a feedback-related ERP component (the error-related negativity or ERN) known to be sensitive to the value placed on errors and reward. Participants completed a forced-choice task, in which either rewarding or punitive feedback regarding their performance was provided, through either social (facial expressions) or non-social (verbal written) mode. The ERNs elicited by error trials in the social - but not in non-social - blocks were found to be associated with the extent of one's extraversion. However, the directionality of the effect was in contrast with the original prediction: namely, extraverts exhibited smaller ERNs than introverts during social blocks, whereas all participants produced similar ERNs in the non-social, verbal feedback condition. This finding suggests that extraverts exhibit diminished engagement in response monitoring - or find errors to be less salient - in the context of social feedback, perhaps because they find social contexts more predictable and thus more pleasant and less anxiety provoking.
Sosic-Vasic, Zrinka; Ulrich, Martin; Ruchsow, Martin; Vasic, Nenad; Grön, Georg
2012-01-01
The present study investigated the association between traits of the Five Factor Model of Personality (Neuroticism, Extraversion, Openness for Experiences, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex, possibly reflecting the negative inter-correlation between both scales observed on the behavioral level. Under present statistical thresholds no significant results were obtained for the remaining scales. Aligning the personality trait of Conscientiousness with task accomplishment striving behavior, the correlation in the left IFG/aI possibly reflects an inter-individually different involvement whenever task-set related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were more strongly affected by these violations of a given task-set, expressed by individually different, negatively valenced signals conveyed by the ACC upon occurrence of an error. Present results illustrate that for predicting individual responses to errors, underlying personality traits should be taken into account, and also lend external validity to the personality trait approach, suggesting that personality constructs do reflect more than mere descriptive taxonomies.
25(OH)D3 Levels Relative to Muscle Strength and Maximum Oxygen Uptake in Athletes
Książek Anna
2016-04-01
Vitamin D is mainly known for its effects on bone and calcium metabolism. The discovery of vitamin D receptors in many extraskeletal cells suggests that it may also play a significant role in other organs and systems. The aim of our study was to assess the relationship between 25(OH)D3 levels, lower limb isokinetic strength and maximum oxygen uptake in well-trained professional football players. We enrolled 43 Polish premier league soccer players. The mean age was 22.7±5.3 years. Our study showed decreased serum 25(OH)D3 levels in 74.4% of the professional players. The results also demonstrated a lack of statistically significant correlation between 25(OH)D3 levels and lower limb muscle strength, with the exception of peak torque of the left knee extensors at an angular velocity of 150°/s (r=0.41). No significant correlations were found between hand grip strength and maximum oxygen uptake. Based on our study we concluded that in well-trained professional soccer players, there was no correlation between serum levels of 25(OH)D3 and muscle strength or maximum oxygen uptake.
Siegert, S.; Herrojo Ruiz, M.; Brücke, C.; Hueble, J.; Schneider, H.G.; Ullsperger, M.; Kühn, A.A.
2014-01-01
Error monitoring is essential for optimizing motor behavior. It has been linked to the medial frontal cortex, in particular to the anterior midcingulate cortex (aMCC). The aMCC subserves its performance-monitoring function in interaction with the basal ganglia (BG) circuits, as has been demonstrated
Li, Dingcheng
2011-01-01
Coreference resolution (CR) and entity relation detection (ERD) aim at finding predefined relations between pairs of entities in text. CR focuses on resolving identity relations while ERD focuses on detecting non-identity relations. Both CR and ERD are important as they can potentially improve other natural language processing (NLP) related tasks…
Sambrook, Thomas D; Goslin, Jeremy
2014-08-01
Reinforcement learning models make use of reward prediction errors (RPEs), the difference between an expected and obtained reward. There is evidence that the brain computes RPEs, but an outstanding question is whether positive RPEs ("better than expected") and negative RPEs ("worse than expected") are represented in a single integrated system. An electrophysiological component, the feedback-related negativity, has been claimed to encode an RPE, but its relative sensitivity to the utility of positive and negative RPEs remains unclear. This study explored the question by varying the utility of positive and negative RPEs in a design that controlled for other closely related properties of feedback and could distinguish utility from salience. It revealed a mediofrontal sensitivity to utility, for positive RPEs at 275-310 ms and for negative RPEs at 310-390 ms. These effects were preceded and succeeded by a response consistent with an unsigned prediction error, or "salience" coding.
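The reward prediction error at issue here is the standard reinforcement-learning quantity delta = r - V; a minimal sketch (function name and learning rate are illustrative, not the authors' model) of how the signed RPE, as opposed to its unsigned "salience" magnitude, drives value updating:

```python
def rw_update(value, reward, alpha=0.1):
    """One Rescorla-Wagner / TD(0)-style value update.
    rpe > 0: outcome better than expected; rpe < 0: worse than expected.
    abs(rpe) would be the unsigned ("salience") magnitude."""
    rpe = reward - value          # signed reward prediction error
    return value + alpha * rpe, rpe
```

Repeated updates with a fixed reward drive the value estimate toward that reward, shrinking the RPE toward zero.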
Retrieval of relative humidity profiles and its associated error from Megha-Tropiques measurements
Sivira, R.; Brogniez, H.; Mallet, C.; Oussar, Y.
2013-05-01
The combination of the two microwave radiometers, SAPHIR and MADRAS, on board the Megha-Tropiques platform is explored to define a retrieval method that estimates not only the relative humidity profile but also the associated confidence intervals. A comparison of three retrieval models was performed under equal conditions of input and output data sets, through their statistical values (error variance, correlation coefficient and error mean), yielding a seven-layer relative humidity profile. The three models show the same behavior with respect to layers, with mid-tropospheric layers reaching the best statistical values, suggesting a model-independent problem. Finally, the study of the probability density function of the relative humidity at a given atmospheric pressure gives further insight into the confidence intervals.
Cavanagh, James F
2015-04-15
Recent work has suggested that reward prediction errors elicit a positive voltage deflection in the scalp-recorded electroencephalogram (EEG); an event sometimes termed a reward positivity. However, a strong test of this proposed relationship remains to be defined. Other important questions remain unaddressed: such as the role of the reward positivity in predicting future behavioral adjustments that maximize reward. To answer these questions, a three-armed bandit task was used to investigate the role of positive prediction errors during trial-by-trial exploration and task-set based exploitation. The feedback-locked reward positivity was characterized by delta band activities, and these related EEG features scaled with the degree of a computationally derived positive prediction error. However, these phenomena were also dissociated: the computational model predicted exploitative action selection and related response time speeding whereas the feedback-locked EEG features did not. Compellingly, delta band dynamics time-locked to the subsequent bandit (the P3) successfully predicted these behaviors. These bandit-locked findings included an enhanced parietal to motor cortex delta phase lag that correlated with the degree of response time speeding, suggesting a mechanistic role for delta band activities in motivating action selection. This dissociation in feedback vs. bandit locked EEG signals is interpreted as a differentiation in hierarchically distinct types of prediction error, yielding novel predictions about these dissociable delta band phenomena during reinforcement learning and decision making.
Cai, Chenglin; Li, Xiaohui; Wu, Haitao
2010-12-01
To solve the problems encountered by the novel wide-area differential method based on satellite clock and ephemeris relative correction (CERC) in non-geostationary orbit satellite constellations, a virtual reference satellite (VRS) differential principle using relative correction of satellite ephemeris errors is proposed. It is referred to as the VRS differential principle, and the elaboration focuses on the construction of the pseudo-range errors of the VRS. Through qualitative analysis, it can be found that the impact of the satellite's clock and ephemeris errors on positioning can basically be removed and the users' positioning errors are near zero. Through simulation analysis of the differential performance, it is verified that the differential method is universal in all kinds of satellite navigation systems with geostationary orbit (GEO) constellations, medium Earth orbit (MEO) constellations or hybrid orbit constellations, and that it is insensitive to abnormalities in a satellite ephemeris and clock. Moreover, the real-time positioning accuracy of differential users can be maintained within several decimeters after the pseudo-range measurement noise is effectively weakened or eliminated.
Van der Borght, Liesbet; Houtman, Femke; Burle, Boris; Notebaert, Wim
2016-03-01
Electrophysiologically, errors are characterized by a negative deflection, the error related negativity (ERN), which is followed by the error positivity (Pe). However, it has been suggested that this latter component consists of two subcomponents, with an early frontocentral Pe reflecting a continuation of the ERN, and a centro-parietal Pe reflecting error awareness. Using Laplacian transformed averages, a correct-related negativity (CRN; similar to the ERN), can be found on correct trials. As this technique allows for the decomposition of the recorded scalp potentials resulting in a better dissociation of the underlying brain activities, Laplacian transformation was used in the present study to differentiate between both the ERN/CRN and both Pe components. Additionally, task difficulty was manipulated. Our results show a clearly distinguishable early and late Pe. Both the ERN/CRN and the early Pe varied with task difficulty, showing decreased ERN/early Pe in the difficult condition. However, the late Pe was not influenced by our difficulty manipulation. This suggests that the early and the late Pe reflect qualitatively different processes.
Littel, Marianne; van den Berg, Ivo; Luijten, Maartje; van Rooij, Antonius J; Keemink, Lianne; Franken, Ingmar H A
2012-09-01
Excessive computer gaming has recently been proposed as a possible pathological illness. However, research on this topic is still in its infancy and underlying neurobiological mechanisms have not yet been identified. The determination of underlying mechanisms of excessive gaming might be useful for the identification of those at risk, a better understanding of the behavior, and the development of interventions. Excessive gaming has often been compared with pathological gambling and substance use disorder. Both disorders are characterized by high levels of impulsivity, which incorporates deficits in error processing and response inhibition. The present study aimed to investigate error processing and response inhibition in excessive gamers and controls using a Go/NoGo paradigm combined with event-related potential recordings. Results indicated that excessive gamers show reduced error-related negativity amplitudes in response to incorrect trials relative to correct trials, implying poor error processing in this population. Furthermore, excessive gamers display higher levels of self-reported impulsivity as well as more impulsive responding, as reflected by less behavioral inhibition on the Go/NoGo task. The present study indicates that excessive gaming partly parallels impulse control and substance use disorders with respect to impulsivity measured at the self-report, behavioral, and electrophysiological levels. Although the present study does not allow drawing firm conclusions on causality, it might be that trait impulsivity, poor error processing and diminished behavioral response inhibition underlie the excessive gaming patterns observed in certain individuals. They might be less sensitive to negative consequences of gaming and therefore continue their behavior despite adverse consequences. © 2012 The Authors, Addiction Biology © 2012 Society for the Study of Addiction.
Bissonette, Gregory B; Roesch, Matthew R
2016-01-01
Many brain areas are activated by the possibility and receipt of reward. Are all of these brain areas reporting the same information about reward? Or are these signals related to other functions that accompany reward-guided learning and decision-making? Through carefully controlled behavioral studies, it has been shown that reward-related activity can represent reward expectations related to future outcomes, errors in those expectations, motivation, and signals related to goal- and habit-driven behaviors. These dissociations have been accomplished by manipulating the predictability of positively and negatively valued events. Here, we review single neuron recordings in behaving animals that have addressed this issue. We describe data showing that several brain areas, including orbitofrontal cortex, anterior cingulate, and basolateral amygdala signal reward prediction. In addition, anterior cingulate, basolateral amygdala, and dopamine neurons also signal errors in reward prediction, but in different ways. For these areas, we will describe how unexpected manipulations of positive and negative value can dissociate signed from unsigned reward prediction errors. All of these signals feed into striatum to modify signals that motivate behavior in ventral striatum and guide responding via associative encoding in dorsolateral striatum.
Roesch, Matthew R.
2017-01-01
Many brain areas are activated by the possibility and receipt of reward. Are all of these brain areas reporting the same information about reward? Or are these signals related to other functions that accompany reward-guided learning and decision-making? Through carefully controlled behavioral studies, it has been shown that reward-related activity can represent reward expectations related to future outcomes, errors in those expectations, motivation, and signals related to goal- and habit-driven behaviors. These dissociations have been accomplished by manipulating the predictability of positively and negatively valued events. Here, we review single neuron recordings in behaving animals that have addressed this issue. We describe data showing that several brain areas, including orbitofrontal cortex, anterior cingulate, and basolateral amygdala signal reward prediction. In addition, anterior cingulate, basolateral amygdala, and dopamine neurons also signal errors in reward prediction, but in different ways. For these areas, we will describe how unexpected manipulations of positive and negative value can dissociate signed from unsigned reward prediction errors. All of these signals feed into striatum to modify signals that motivate behavior in ventral striatum and guide responding via associative encoding in dorsolateral striatum. PMID:26276036
Outlier Removal and the Relation with Reporting Errors and Quality of Psychological Research
Bakker, Marjan; Wicherts, Jelte M.
2014-01-01
Background: The removal of outliers to acquire a significant result is a questionable research practice that appears to be commonly used in psychology. In this study, we investigated whether the removal of outliers in psychology papers is related to weaker evidence (against the null hypothesis of no effect), a higher prevalence of reporting errors, and smaller sample sizes in these papers, compared to papers in the same journals that did not report the exclusion of outliers from the analyses. Methods and Findings: We retrieved a total of 2667 statistical results of null hypothesis significance tests from 153 articles in main psychology journals, and compared results from articles in which outliers were removed (N = 92) with results from articles that reported no exclusion of outliers (N = 61). We preregistered our hypotheses and methods and analyzed the data at the level of articles. Results show no significant difference between the two types of articles in median p value, sample sizes, or prevalence of all reporting errors, large reporting errors, and reporting errors that concerned the statistical significance. However, we did find a discrepancy between the reported degrees of freedom of t tests and the reported sample size in 41% of articles that did not report removal of any data values. This suggests common failure to report data exclusions (or missingness) in psychological articles. Conclusions: We failed to find that the removal of outliers from the analysis in psychological articles was related to weaker evidence (against the null hypothesis of no effect), sample size, or the prevalence of errors. However, our control sample might be contaminated due to nondisclosure of excluded values in articles that did not report exclusion of outliers. Results therefore highlight the importance of more transparent reporting of statistical analyses. PMID:25072606
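The degrees-of-freedom discrepancy described above can be checked mechanically; a minimal sketch (the records and test types are hypothetical): for an independent two-sample t test the df is N - 2, for a one-sample or paired test it is N - 1.

```python
# Consistency check between a reported t-test df and the stated sample size.
# Records below are hypothetical. For an independent two-sample t test,
# df = N - 2; for a one-sample or paired test, df = N - 1.
def df_consistent(reported_df, n_total, test="two-sample"):
    """True if the reported degrees of freedom match the stated sample size."""
    expected = n_total - 2 if test == "two-sample" else n_total - 1
    return reported_df == expected

records = [
    (58, 60, "two-sample"),  # 60 - 2 = 58: consistent
    (54, 60, "two-sample"),  # 60 - 2 = 58 != 54: suggests unreported exclusions
    (29, 30, "paired"),      # 30 - 1 = 29: consistent
]

flags = [df_consistent(df, n, t) for df, n, t in records]
print(flags)  # [True, False, True]
```

A mismatch like the second record is the kind of discrepancy the study found in 41% of articles that reported no exclusions.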
Novel relations between the ergodic capacity and the average bit error rate
Yilmaz, Ferkan
2011-11-01
Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems, and recent research has accordingly focused on designing and implementing wireless technologies around these two performance indicators. However, to the best of our knowledge, direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.
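For intuition about why the two measures can be related at all, here is a Monte Carlo sketch for BPSK over a Rayleigh fading channel (illustrative only; the paper's closed-form relations are not reproduced):

```python
# Monte Carlo sketch of the two performance measures over one and the same
# fading distribution: BPSK over Rayleigh fading at 10 dB average SNR.
# Illustrative only; the paper derives exact analytical links between them.
import math
import random

random.seed(0)
gamma_bar = 10 ** (10.0 / 10)  # average SNR (10 dB)
N = 200_000

cap_sum = 0.0
ber_sum = 0.0
for _ in range(N):
    g = random.expovariate(1.0) * gamma_bar   # Rayleigh fading: SNR ~ Exponential
    cap_sum += math.log2(1 + g)               # instantaneous capacity
    ber_sum += 0.5 * math.erfc(math.sqrt(g))  # BPSK instantaneous bit error prob.

print(f"ergodic capacity ~ {cap_sum / N:.3f} bit/s/Hz")
print(f"average BER      ~ {ber_sum / N:.5f}")
```

Both averages are functionals of the same SNR distribution, which is the structural fact that lets one quantity be expressed in terms of the other without knowing the channel statistics end to end.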
Novel Relations between the Ergodic Capacity and the Average Bit Error Rate
Yilmaz, Ferkan
2012-01-01
Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems, and recent research has accordingly focused on designing and implementing wireless technologies around these two performance indicators. However, to the best of our knowledge, direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their…
Terhune, Claire E; Hylander, William L; Vinyard, Christopher J; Taylor, Andrea B
2015-05-01
Maximum jaw gape is a performance variable related to feeding and non-feeding oral behaviors, such as canine gape displays, and is influenced by several factors including jaw-muscle fiber architecture, muscle position on the skull, and jaw morphology. Maximum gape, jaw length, and canine height are strongly correlated across catarrhine primates, but relationships between gape and other aspects of masticatory apparatus morphology are less clear. We examine the effects of jaw-adductor fiber architecture, jaw-muscle leverage, and jaw form on gape in an intraspecific sample of sexually dimorphic crab-eating macaques (Macaca fascicularis). As M. fascicularis males have relatively larger maximum gapes than females, we predict that males will have muscle and jaw morphologies that facilitate large gape, but these morphologies may come at some expense to bite force. Male crab-eating macaques have relatively longer jaw-muscle fibers, masseters with decreased leverage, and temporomandibular joint morphologies that facilitate the production of wide gapes. Because relative canine height is correlated with maximum gape in catarrhines, and males have relatively longer canines than females, these results support the hypothesis that male M. fascicularis have experienced selection to increase maximum gape. The sexes do not differ in relative masseter physiologic cross-sectional area (PCSA), but males compensate for a potential trade-off between muscle excursion versus muscle force with increased temporalis weight and PCSA. This musculoskeletal configuration is likely functionally significant for behaviors involving aggressive canine biting and displays in male M. fascicularis and provides additional evidence supporting the multifactorial nature of the catarrhine masticatory apparatus. Our results have implications for the evolution of craniofacial morphology in catarrhine primates and reinforce the importance of evaluating additional factors other than feeding behavior and diet
2010-04-01
20 CFR Part 404, Subpart C, Appendix V: Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits (Employees' Benefits, edition of 2010-04-01).
Software platform for managing the classification of error-related potentials of observers
Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.
2015-09-01
Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers) contain a negative waveform in the Evoked Potentials (EPs) of the actors that commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, or k-nearest neighbour. Next, the classifier can be used to classify any EP curve that has been entered into the database.
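Of the three classifier types the platform offers, k-nearest neighbour is the simplest to sketch; the feature values and labels below are hypothetical placeholders, not real EEG data:

```python
# k-nearest neighbour over per-electrode EP features, the simplest of the
# three classifier types the platform offers. Feature tuples (e.g. ERN
# amplitude in microvolts, latency in ms) and labels are hypothetical.
from collections import Counter
import math

def knn_predict(train, labels, x, k=3):
    """Majority vote among the k training vectors nearest to x (Euclidean)."""
    dists = sorted((math.dist(x, t), lab) for t, lab in zip(train, labels))
    vote = Counter(lab for _, lab in dists[:k])
    return vote.most_common(1)[0][0]

train = [(-6.0, 250.0), (-5.5, 240.0), (-1.0, 260.0), (-0.5, 255.0)]
labels = ["error", "error", "correct", "correct"]

print(knn_predict(train, labels, (-5.0, 245.0)))  # "error"
```

In practice the platform computes up to six features per electrode and applies feature selection first; the voting step itself stays exactly this simple.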
Correcting a fundamental error in greenhouse gas accounting related to bioenergy
Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc
2012-01-01
… and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.
Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation
Tahavori, Maryamsadat; Shaker, Hamid Reza
2012-01-01
A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon the time-weighted balanced stochastic model reduction method and the singular perturbation model reduction technique. Compared … by using the concept and properties of the reciprocal systems. The results are further illustrated by two practical numerical examples: a model of a CD player and a model of the atmospheric storm track.
何洋; 纪昌明; 田开华; 张验科; 李传刚
2016-01-01
To better study the distribution law of runoff forecast errors, the maximum entropy principle is applied and a maximum entropy model of the runoff forecast error distribution is established. Taking the runoff forecast series of the Guandi Reservoir as an example, the probability density function and distribution curve of the runoff forecast error are calculated for different forecast lead times. These distribution curves are compared with theoretical normal distribution curves and sample histograms; the results show that the error distribution obtained by the maximum entropy method better describes the distribution characteristics of runoff forecast errors. Considering the wet-dry variation of runoff within the year, the runoff series is divided into dry, flood, and transition seasons, the error distribution of each period is analyzed separately, and the confidence level of the forecast error at different confidence intervals is given, so as to better grasp the distribution law of runoff forecast errors and to provide a new way to improve the accuracy of runoff forecasting.
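The maximum-entropy construction behind the model can be sketched as follows (an illustration only; the grid, moment targets, and step sizes are hypothetical, not the paper's data). Over a discrete grid of forecast-error values, the maxent distribution under mean and variance constraints has the exponential form p(x) proportional to exp(a*x + b*x**2), and a simple iteration on the moment mismatch recovers the multipliers:

```python
# Discrete maximum-entropy fit: find the distribution of maximum entropy on a
# grid of forecast-error values subject to given mean and variance. The maxent
# solution has the form p(x) ~ exp(a*x + b*x**2); iterate the Lagrange
# multipliers (a, b) until the moments match. All numbers are hypothetical.
import math

xs = [i / 10 for i in range(-50, 51)]  # error grid from -5 to 5
target_mean, target_var = 0.0, 1.0     # hypothetical moment constraints

a, b = 0.0, -0.5                       # initial Lagrange multipliers
for _ in range(2000):
    w = [math.exp(a * x + b * x * x) for x in xs]
    z = sum(w)
    p = [wi / z for wi in w]
    m = sum(pi * x for pi, x in zip(p, xs))
    v = sum(pi * (x - m) ** 2 for pi, x in zip(p, xs))
    a += 0.1 * (target_mean - m)       # nudge multipliers toward the
    b += 0.05 * (target_var - v)       # target moments

print(f"fitted mean {m:.3f}, variance {v:.3f}")
```

With only mean and variance constrained, the result approximates a discretized normal; the paper's point is that the maxent density, fitted to the empirical moments of each season's errors, tracks the error histogram more faithfully than an assumed normal.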
Ferreira,Amanda de Freitas; Henriques,João César Guimarães; Almeida,Guilherme de Araújo; Machado,Asbel Rodrigues; Machado, Naila Aparecida de Godoi; Fernandes Neto,Alfredo Júlio
2009-01-01
This research consisted of a quantitative assessment, and aimed to measure the possible discrepancies between the maxillomandibular positions for centric relation (CR) and maximum intercuspation (MI), using computed tomography volumetric cone beam (cone beam method). The sample of the study consisted of 10 asymptomatic young adult patients divided into two types of standard occlusion: normal occlusion and Angle Class I occlusion. In order to obtain the centric relation, a JIG device and mandible manipulation were used…
Hu, X.; Prabhu, S.; Atamturktur, S.; Cogan, S.
2017-02-01
Model-based damage detection entails the calibration of damage-indicative parameters in a physics-based computer model of an undamaged structural system against measurements collected from its damaged counterpart. The approach relies on the premise that changes identified in the damage-indicative parameters during calibration reveal the structural damage in the system. In model-based damage detection, model calibration has traditionally been treated as a process, solely operating on the model output without incorporating available knowledge regarding the underlying mechanistic behavior of the structural system. In this paper, the authors propose a novel approach for model-based damage detection by implementing the Extended Constitutive Relation Error (ECRE), a method developed for error localization in finite element models. The ECRE method was originally conceived to identify discrepancies between experimental measurements and model predictions for a structure in a given healthy state. Implementing ECRE for damage detection leads to the evaluation of a structure in varying healthy states and determination of discrepancy between model predictions and experiments due to damage. The authors developed an ECRE-based damage detection procedure in which the model error and structural damage are identified in two distinct steps and demonstrate feasibility of the procedure in identifying the presence, location and relative severity of damage on a scaled two-story steel frame for damage scenarios of varying type and severity.
Sheng, Shiqi; Tu, Z C
2015-02-01
We present a unified perspective on nonequilibrium heat engines by generalizing nonlinear irreversible thermodynamics. For tight-coupling heat engines, a generic constitutive relation for nonlinear response accurate up to the quadratic order is derived from the stalling condition and the symmetry argument. By applying this generic nonlinear constitutive relation to finite-time thermodynamics, we obtain the necessary and sufficient condition for the universality of efficiency at maximum power, which states that a tight-coupling heat engine takes the universal efficiency at maximum power up to the quadratic order if and only if either the engine symmetrically interacts with two heat reservoirs or the elementary thermal energy flowing through the engine matches the characteristic energy of the engine. Hence we solve the following paradox: On the one hand, the quadratic term in the universal efficiency at maximum power for tight-coupling heat engines turned out to be a consequence of symmetry [Esposito, Lindenberg, and Van den Broeck, Phys. Rev. Lett. 102, 130602 (2009); Sheng and Tu, Phys. Rev. E 89, 012129 (2014)]; On the other hand, typical heat engines such as the Curzon-Ahlborn endoreversible heat engine [Curzon and Ahlborn, Am. J. Phys. 43, 22 (1975)] and the Feynman ratchet [Tu, J. Phys. A 41, 312003 (2008)] recover the universal efficiency at maximum power regardless of any symmetry.
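The quadratic-order agreement cited above for the Curzon-Ahlborn engine can be verified numerically: since eta_CA = 1 - sqrt(1 - eta_C), its Taylor expansion matches the universal form eta_C/2 + eta_C**2/8 through second order.

```python
# Numerical check: the Curzon-Ahlborn efficiency eta_CA = 1 - sqrt(1 - eta_C)
# agrees with the universal efficiency at maximum power, eta_C/2 + eta_C**2/8,
# through quadratic order in the Carnot efficiency eta_C.
import math

for eta_c in (0.05, 0.1, 0.2):
    eta_ca = 1 - math.sqrt(1 - eta_c)        # Curzon-Ahlborn efficiency
    universal = eta_c / 2 + eta_c ** 2 / 8   # universal quadratic expansion
    diff = abs(eta_ca - universal)
    print(f"eta_C={eta_c:.2f}: CA={eta_ca:.6f}, universal={universal:.6f}, "
          f"diff={diff:.2e}")
```

The residual shrinks like eta_C**3 (the next Taylor term is eta_C**3/16), which is the precise sense in which the universality holds "up to the quadratic order".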
Online detection of error-related potentials boosts the performance of mental typewriters
Schmidt Nico M
2012-02-01
Background: Increasing the communication speed of brain-computer interfaces (BCIs) is a major aim of current BCI research. The idea to automatically detect error-related potentials (ErrPs) in order to veto erroneous decisions of a BCI has existed for more than one decade, but this approach has so far been little investigated in online mode. Methods: In our study with eleven participants, an ErrP detection mechanism was implemented in an electroencephalography (EEG)-based, gaze-independent visual speller. Results: Single-trial ErrPs were detected with a mean accuracy of 89.1% (AUC 0.90). The spelling speed was increased on average by 49.0% using ErrP detection. The improvement in spelling speed due to error detection was largest for participants with low spelling accuracy. Conclusion: The performance of BCIs can be increased by using an automatic error detection mechanism. The benefit for patients with motor disorders is potentially high since they often have rather low spelling accuracies compared to healthy people.
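Why the improvement is largest at low spelling accuracy can be seen with a toy cost model (an illustrative assumption of this sketch, not the study's analysis): an undetected error must be deleted manually, while a vetoed error only costs the repeated selection.

```python
# Toy model: why ErrP vetoing helps most at low spelling accuracy.
# Assumptions (illustrative, not from the study): an undetected error costs an
# extra backspace selection that always succeeds; a vetoed selection is simply
# repeated; false vetoes discard correct selections.
def selections_per_symbol(p, sens=0.0, spec=1.0):
    """Expected selections per correctly spelled symbol, given spelling
    accuracy p, ErrP sensitivity (errors vetoed) and specificity
    (correct selections not falsely vetoed)."""
    cost_per_attempt = 1 + (1 - p) * (1 - sens)  # +1 backspace if error missed
    symbols_per_attempt = p * spec               # correct and not vetoed
    return cost_per_attempt / symbols_per_attempt

for p in (0.7, 0.9):
    base = selections_per_symbol(p)                      # no ErrP detection
    veto = selections_per_symbol(p, sens=0.9, spec=0.95)
    gain = (base - veto) / base * 100
    print(f"accuracy {p:.0%}: ErrP veto saves {gain:.1f}% of selections")
```

Consistent with the study's observation, the relative saving in this model shrinks as the baseline spelling accuracy rises.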
Individual differences in driver inattention: the attention-related driving errors scale.
Ledesma, Rubén D; Montes, Silvana A; Poó, Fernando M; López-Ramón, María F
2010-04-01
Driver inattention is one of the most common causes of traffic collisions. The aim of this work was to study the reliability and validity of the Attention-Related Driving Errors Scale (ARDES), a novel self-report measure that assesses individual differences in driving errors resulting from failures of attention. The relationship between driver inattention and general psychological variables that could be connected to these phenomena was also explored. Participants were a convenience sample of drivers drawn from the general population of Mar del Plata, Argentina (n = 301). Drivers responded to ARDES items, a sociodemographic questionnaire, and several validation measures. The internal structure of ARDES was assessed by factor analysis and internal consistency analysis. Analysis of covariance (ANCOVA) was applied to examine differences in ARDES scores due to sociodemographic variables. Logistic regression analysis was used to determine the association between ARDES and self-reported traffic crashes and tickets. Pearson's correlations were calculated between ARDES and validation measures. Factor analysis suggested the existence of one underlying factor. The 19 items proved to have discriminative power. The scale's internal consistency was high (Cronbach's alpha = .86). ARDES discriminated those who had reported road crashes and traffic tickets from those who had not. Correlations with validation measures were robust and theoretically consistent. Findings suggested that driving errors are strongly associated with general error proneness, lack of attention when performing everyday activities, and dissociative personality traits. The present study provides preliminary evidence for the validity and reliability of the ARDES scores. Further validation studies should be conducted applying other methodologies and sources of information, such as traffic records, driving simulations, or naturalistic methodologies.
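The internal-consistency figure reported above (Cronbach's alpha = .86) follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of totals); a minimal sketch with a hypothetical 4-item response matrix, not ARDES data:

```python
# Cronbach's alpha, the internal-consistency statistic reported for ARDES.
# The 4-item x 6-respondent matrix below is hypothetical, not ARDES data.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of scores per scale item (equal-length lists)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

items = [  # hypothetical 1-5 ratings, 4 items x 6 respondents
    [1, 2, 2, 3, 4, 5],
    [2, 2, 3, 3, 4, 4],
    [1, 1, 2, 4, 4, 5],
    [2, 3, 2, 3, 5, 4],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Alpha rises when items covary strongly relative to their individual variances; values around .86, as for the 19 ARDES items, indicate high internal consistency.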
Cano Rodilla, Carmen; Beauducel, André; Leue, Anja
2016-01-01
In their innovative study, Inzlicht and Al-Khindi (2012) demonstrated that participants who were allowed to misattribute their arousal and negative affect induced by errors to a placebo beverage had a reduced error-related negativity (ERN/Ne) compared to controls not being allowed to misattribute their arousal following errors. These results contribute to the ongoing debate that affect and motivation are interwoven with the cognitive processing of errors. Evidence that the misattribution of negative affect modulates the ERN/Ne is essential for understanding the mechanisms behind ERN/Ne. Therefore, and because of the growing debate on reproducibility of empirical findings, we aimed at replicating the misattribution effects on the ERN/Ne in a go/nogo task. Students were randomly assigned to a misattribution group (n = 48) or a control group (n = 51). Participants of the misattribution group consumed a beverage said to have side effects that would increase their physiological arousal, so that they could misattribute the negative affect induced by errors to the beverage. Participants of the control group correctly believed that the beverage had no side effects. As Inzlicht and Al-Khindi (2012), we did not observe performance differences between both groups. However, ERN/Ne differences between misattribution and control group could not be replicated, although the statistical power of the replication study was high. Evidence regarding the replication of performance and the non-replication of ERN/Ne findings was confirmed by Bayesian statistics. PMID:27708571
Task-dependent signal variations in EEG error-related potentials for brain-computer interfaces
Iturrate, I.; Montesano, L.; Minguez, J.
2013-04-01
Objective. A major difficulty of brain-computer interface (BCI) technology is dealing with the noise of EEG and its signal variations. Previous works studied time-dependent non-stationarities for BCIs in which the user’s mental task was independent of the device operation (e.g., the mental task was motor imagery and the operational task was a speller). However, there are some BCIs, such as those based on error-related potentials, where the mental and operational tasks are dependent (e.g., the mental task is to assess the device action and the operational task is the device action itself). The dependence between the mental task and the device operation could introduce a new source of signal variations when the operational task changes, which has not been studied yet. The aim of this study is to analyse task-dependent signal variations and their effect on EEG error-related potentials. Approach. The work analyses the EEG variations on the three design steps of BCIs: an electrophysiology study to characterize the existence of these variations, a feature distribution analysis and a single-trial classification analysis to measure the impact on the final BCI performance. Results and significance. The results demonstrate that a change in the operational task produces variations in the potentials, even when EEG activity exclusively originated in brain areas related to error processing is considered. Consequently, the extracted features from the signals vary, and a classifier trained with one operational task presents a significant loss of performance for other tasks, requiring calibration or adaptation for each new task. In addition, a new calibration for each of the studied tasks rapidly outperforms adaptive techniques designed in the literature to mitigate the EEG time-dependent non-stationarities.
Online detection of error-related potentials boosts the performance of mental typewriters
2012-01-01
Background: Increasing the communication speed of brain-computer interfaces (BCIs) is a major aim of current BCI research. The idea to automatically detect error-related potentials (ErrPs) in order to veto erroneous decisions of a BCI has existed for more than one decade, but this approach has so far been little investigated in online mode. Methods: In our study with eleven participants, an ErrP detection mechanism was implemented in an electroencephalography (EEG)-based gaze-independent…
Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation
Tahavori, Maryamsadat; Shaker, Hamid Reza
2012-01-01
A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon the time-weighted balanced stochastic model reduction method and the singular perturbation model reduction technique. Compared … to the other analogous counterparts, the proposed method is shown to provide more accurate results in terms of time-weighted norms when applied to practical examples. It is shown that important properties of the time-weighted stochastic balanced reduction technique extend to the mixed reduction method…
Error-related negativity in the skilled brain of pianists reveals motor simulation.
Proverbio, Alice Mado; Cozzi, Matteo; Orlandi, Andrea; Carminati, Manuel
2017-03-27
Evidence has been provided of a crucial role of multimodal audio-visuomotor processing in subserving musical ability. In this paper we investigated whether musical audiovisual stimulation might trigger the activation of motor information in the brain of professional pianists, due to the presence of permanent gesture/sound associations. To this aim, EEG was recorded in 24 pianists and naïve participants engaged in the detection of rare targets while watching hundreds of video clips showing a pair of hands in the act of playing, along with a compatible or incompatible piano soundtrack. Hand size and apparent distance allowed self-ownership and agency illusions, and therefore motor simulation. Event-related potentials (ERPs) and relative source reconstruction showed the presence of an error-related negativity (ERN) to incongruent trials at anterior frontal scalp sites, only in pianists, with no difference in naïve participants. The ERN was mostly explained by an anterior cingulate cortex (ACC) source. Other sources included "hands" IT regions, the superior temporal gyrus (STG) involved in conjoined auditory and visuomotor processing, SMA and cerebellum (representing and controlling motor subroutines), and regions involved in body part representation (somatosensory cortex, uncus, cuneus and precuneus). The findings demonstrate that instrument-specific audiovisual stimulation is able to trigger error detection and correction neural responses via motor resonance and mirroring, being a possible aid in learning and rehabilitation. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Tjew-A-Sin, Mandy; Tops, Mattie; Heslenfeld, Dirk J; Koole, Sander L
2016-03-23
The error-related negativity (ERN or Ne) is a negative event-related brain potential that peaks about 20-100 ms after people perform an incorrect response in choice reaction time tasks. Prior research has shown that the ERN may be enhanced by situational and dispositional factors that promote intrinsic motivation. Building on and extending this work the authors hypothesized that simulated interpersonal touch may increase task engagement and thereby increase ERN amplitude. To test this notion, 20 participants performed a Go/No-Go task while holding a teddy bear or a same-sized cardboard box. As expected, the ERN was significantly larger when participants held a teddy bear rather than a cardboard box. This effect was most pronounced for people high (rather than low) in trait intrinsic motivation, who may depend more on intrinsically motivating task cues to maintain task engagement. These findings highlight the potential benefits of simulated interpersonal touch in stimulating attention to errors, especially among people who are intrinsically motivated.
Izawa, Kazuhiro P.; Watanabe, Satoshi; Hirano, Yasuyuki; Matsushima, Shinya; Suzuki, Tomohiro; Oka, Koichiro; Kida, Keisuke; Suzuki, Kengo; Osada, Naohiko; Omiya, Kazuto; Brubaker, Peter H.; Shimizu, Hiroyuki; Akashi, Yoshihiro J.
2015-01-01
Maximum gait speed and physical activity (PA) relate to mortality and morbidity, but little is known about gender-related differences in these factors in elderly hospitalized cardiac inpatients. This study aimed to determine differences in maximum gait speed and daily measured PA based on sex, and the relationship between these measures, in elderly cardiac inpatients. A consecutive 268 elderly Japanese cardiac inpatients (mean age, 73.3 years) were enrolled and divided by sex into female (n = 75, 28%) and male (n = 193, 72%) groups. Patient characteristics and maximum gait speed, average step count, and PA energy expenditure (PAEE) in kilocalories per day for 2 days assessed by accelerometer were compared between groups. Gait speed correlated positively with in-hospital PA measured by average daily step count (r = 0.46, P < 0.001) and average daily PAEE (r = 0.47, P < 0.001) in all patients. After adjustment for left ventricular ejection fraction, step counts and PAEE were significantly lower in females than males (2651.35 ± 1889.92 vs 4037.33 ± 1866.81 steps, P < 0.001; 52.74 ± 51.98 vs 99.33 ± 51.40 kcal, P < 0.001), respectively. Maximum gait speed was slower and PA lower in elderly female versus male inpatients. Minimum gait speed and step count values in this study might be minimum target values for elderly male and female Japanese cardiac inpatients. PMID:25789953
Ferreira, Amanda de Freitas; Henriques, João César Guimarães; Almeida, Guilherme Araújo; Machado, Asbel Rodrigues; Machado, Naila Aparecida de Godoi; Fernandes Neto, Alfredo Júlio
2009-01-01
This research consisted of a quantitative assessment, and aimed to measure possible discrepancies between the maxillomandibular positions of centric relation (CR) and maximum intercuspation (MI), using cone-beam volumetric computed tomography. The study sample consisted of 10 asymptomatic young adult patients divided into two types of standard occlusion: normal occlusion and Angle Class I occlusion. In order to obtain centric relation, a JIG device and mandibular manipulation were used to deprogram the habitual conditions of the jaw. The evaluations were conducted on both frontal and lateral tomographic images showing the condyle/articular fossa relation. The images were processed in the software included with the NewTom 3G device (QR NNT software version 2.00), and 8 tomographic images were obtained per patient, four lateral and four frontal, exhibiting the TMAs (in CR and MI, on both the right and left sides). By means of tools included in other software, linear and angular measurements were performed and statistically analyzed by Student's t-test. According to the methodology and the analysis performed in asymptomatic patients, it was not possible to detect statistically significant differences between the positions of centric relation and maximum intercuspation. However, the resources of cone beam tomography are of extreme relevance to further studies that use heterogeneous samples in order to compare the results.
Ledesma, Rubén Daniel; Montes, Silvana Andrea; Poó, Fernando Martín; López-Ramón, María Fernanda
2015-03-01
The aim of this research was (a) to study driver inattention as a trait-like variable and (b) to provide new evidence of validity for the Attention-Related Driving Errors Scale (ARDES). Driving inattention is approached from an individual differences perspective. We are interested in how drivers vary in their propensity to experience failures of attention and in the methods to measure these differences. In a first sample (n = 301), we tested, via confirmatory factor analysis, a new theoretical model for the ARDES. In a second sample (n = 201), we evaluated the relationship between inattention and internal and external sources of distraction and social desirability bias in ARDES responses. A subsample (n = 65) was reevaluated to study temporal stability of the ARDES scores. Errors measured by the ARDES can be classified according to the driving task level at which they occur (navigation, maneuvering, or control). Differences in ARDES scores based on collision history were observed. ARDES was related to internal sources of distraction and was independent of the level of exposure to distracting activities. Test-retest showed a high degree of stability in ARDES scores. Low correlations were found with a social desirability measure. ARDES appears to measure a personal trait that remains relatively stable over time and is relatively independent of distracting activities. New evidence of validity emerged for this self-report. ARDES can be used to measure individual differences in driving inattention and to help tailor preventive interventions for inattentive drivers. It can serve as an instrument of driver self-assessment in educational and training contexts. © 2014, Human Factors and Ergonomics Society.
The relative impact of sizing errors on steam generator tube failure probability
Cizelj, L.; Dvorsek, T. [Jozef Stefan Inst., Ljubljana (Slovenia)]
1998-07-01
Outside Diameter Stress Corrosion Cracking (ODSCC) at tube support plates is currently the major degradation mechanism affecting steam generator tubes made of Inconel 600. This has led to the development and licensing of degradation-specific maintenance approaches, which address the two main failure modes of the degraded piping: tube rupture and excessive leakage through degraded tubes. A methodology for assessing the efficiency of a given set of possible maintenance approaches has already been proposed by the authors. It pointed out the better performance of degradation-specific over generic approaches in (1) lower probability of single and multiple steam generator tube rupture (SGTR), (2) lower estimated accidental leak rates and (3) fewer tubes plugged. A sensitivity analysis was also performed, pointing out the relative contributions of uncertain input parameters to the tube rupture probabilities. The dominant contribution was assigned to the uncertainties inherent in the regression models used to correlate defect size and tube burst pressure. The uncertainties, which can be estimated from in-service inspections, are further analysed in this paper. The defect growth was found to have a significant and, to some extent, unrealistic impact on the probability of single tube rupture. Since the defect growth estimates were based on past inspection records, they depend strongly on the sizing errors. Therefore, an attempt was made to filter out the sizing errors and to arrive at more realistic estimates of the defect growth. The impact of different assumptions regarding sizing errors on the tube rupture probability was studied using a realistic numerical example. The data used were obtained from a series of inspection results from the Krsko NPP, which has 2 Westinghouse D-4 steam generators. The results obtained are considered useful in the safety assessment and maintenance of affected steam generators. (author)
Tomislav Milekovic
BACKGROUND: Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user's movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) an adaptive BMI decoding algorithm can be updated to make fewer errors in the future. METHODOLOGY/PRINCIPAL FINDINGS: Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300-400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of the detection information for outcome errors and 74% of the detection information for execution errors available from all ECoG electrodes could be retained. CONCLUSIONS/SIGNIFICANCE: The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation.
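The precision figures above depend on how detected error events are matched to true events within the stated temporal tolerance. A minimal sketch of one such one-to-one, tolerance-based scoring scheme (the function name and the greedy matching rule are illustrative assumptions, not the paper's exact procedure):

```python
def detection_stats(true_events, detections, tol=0.35):
    """Greedily match detections to true error events (times in seconds)
    within a temporal tolerance, one-to-one, and count outcomes.
    Returns (true positives, false positives, false negatives)."""
    unmatched = sorted(true_events)  # true events not yet claimed
    tp = 0
    for d in sorted(detections):
        hit = next((t for t in unmatched if abs(t - d) <= tol), None)
        if hit is not None:
            unmatched.remove(hit)  # each true event matched at most once
            tp += 1
    fp = len(detections) - tp
    fn = len(unmatched)
    return tp, fp, fn
```

From these counts, precision (tp / (tp + fp)) can be reported as a function of the chosen tolerance.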
Seo-Hee Kim
The present study used event-related potentials (ERPs) to investigate deficits in error-monitoring by college students with schizotypal traits. Scores on the Schizotypal Personality Questionnaire (SPQ) were used to categorize the participants into schizotypal-trait (n = 17) and normal control (n = 20) groups. The error-monitoring abilities of the participants were evaluated using the Simon task, which consists of congruent (locations of stimulus and response are the same) and incongruent (locations of stimulus and response are different) conditions. The schizotypal-trait group committed more errors on the Simon task and exhibited smaller error-related negativity (ERN) amplitudes than did the control group. Additionally, ERN amplitude measured at FCz was negatively correlated with the error rate on the Simon task in the schizotypal-trait group but not in the control group. The two groups did not differ in terms of correct-related negativity (CRN), error positivity (Pe) and correct-related positivity (Pc) amplitudes. The present results indicate that individuals with schizotypal traits have deficits in error-monitoring and that reduced ERN amplitudes may represent a biological marker of schizophrenia.
Using brain potentials to understand prism adaptation: the error-related negativity and the P300
Stephane Joseph Maclean
2015-06-01
Prism adaptation (PA) is both a perceptual-motor learning task and a promising rehabilitation tool for visuo-spatial neglect (VSN), a spatial attention disorder often experienced after stroke that results in slowed and/or inaccurate motor responses to contralesional targets. During PA, individuals are exposed to prism-induced shifts of the visual field while performing a visuo-guided reaching task. After adaptation, with goggles removed, visuo-motor responding is shifted in the direction opposite to that initially induced by the prisms. This visuo-motor aftereffect has been used to study visuo-motor learning and adaptation and has been applied clinically to reduce VSN severity by improving motor responding to stimuli in contralesional (usually left-sided) space. In order to optimize PA's use for VSN patients, it is important to elucidate the neural and cognitive processes that alter visuomotor function during PA. In the present study, healthy young adults underwent PA while event-related potentials (ERPs) were recorded at the termination of each reach (screen-touch), then binned according to accuracy (hit vs. miss) and phase of exposure block (early, middle, late). Results show that two ERP components were evoked by screen-touch: an early error-related negativity (ERN), and a P300. The ERN was consistently evoked on miss trials during adaptation, while the P300 amplitude was largest during the early phase of adaptation for both hit and miss trials. This study provides evidence of two neural signals sensitive to visual feedback during PA that may subserve changes in visuomotor responding. Prior ERP research suggests that the ERN reflects an error-processing system in medial-frontal cortex, while the P300 is suggested to reflect a system for context updating and learning. Future research is needed to elucidate the role of these ERP components in improving visuomotor responses among individuals with VSN.
Rebecca J. Brooker
2014-07-01
Temperamentally fearful children are at increased risk for the development of anxiety problems relative to less-fearful children. This risk is even greater when early environments include high levels of harsh parenting behaviors. However, the mechanisms by which harsh parenting may impact fearful children's risk for anxiety problems are largely unknown. Recent neuroscience work has suggested that punishment is associated with an exaggerated error-related negativity (ERN), an event-related potential linked to performance monitoring, even after the threat of punishment is removed. In the current study, we examined the possibility that harsh parenting interacts with fearfulness, impacting anxiety risk via neural processes of performance monitoring. We found that greater fearfulness and harsher parenting at 2 years of age predicted greater fearfulness and greater ERN amplitudes at age 4. Supporting the role of cognitive processes in this association, greater fearfulness and harsher parenting also predicted less efficient neural processing during preschool. This study provides initial evidence that performance monitoring may be a candidate process by which early parenting interacts with fearfulness to predict risk for anxiety problems.
Automatic detection of MLC relative position errors for VMAT using the EPID-based picket fence test
Christophides, Damianos; Davies, Alex; Fleckney, Mark
2016-12-01
Multi-leaf collimators (MLCs) ensure the accurate delivery of treatments requiring complex beam fluences, such as intensity-modulated radiotherapy and volumetric modulated arc therapy. The purpose of this work is to automate the detection of MLC relative position errors ⩾0.5 mm using electronic portal imaging device-based picket fence tests, and to compare the results to the qualitative assessment currently in use. Picket fence tests with and without intentional MLC errors were measured weekly on three Varian linacs. The picket fence images analysed covered a time period of between 14 and 20 months, depending on the linac. An algorithm was developed that calculated the MLC error for each leaf-pair present in the picket fence images. The baseline error distributions of each linac were characterised for an initial period of 6 months and compared with the intentional MLC errors using statistical metrics. The distributions of the median and the one-sample Kolmogorov-Smirnov test p-value exhibited no overlap between baseline and intentional errors, and were used retrospectively to automatically detect MLC errors in routine clinical practice. Agreement was found between the MLC errors detected by the automatic method and the fault reports during clinical use, as well as interventions for MLC repair and calibration. In conclusion, the method presented provides for full automation of MLC quality assurance, based on individual linac performance characteristics. The use of the automatic method has been shown to provide early warning of MLC errors that resulted in clinical downtime.
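The detection step above, comparing each image's per-leaf-pair errors against a linac's baseline distribution via the median and a one-sample Kolmogorov-Smirnov test, can be sketched as follows. The thresholds and the normal baseline model are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np
from scipy import stats

def flag_mlc_errors(leaf_errors, baseline_mean, baseline_std,
                    median_limit=0.5, p_limit=0.05):
    """Flag a picket-fence image whose per-leaf-pair errors (mm) deviate
    from the linac's baseline distribution, either in typical magnitude
    (median) or in overall shape (one-sample KS test against the
    baseline normal model). Thresholds here are illustrative only."""
    median_err = np.median(np.abs(leaf_errors))
    # One-sample KS test of the measured errors against the baseline model
    _, p = stats.kstest(leaf_errors, 'norm',
                        args=(baseline_mean, baseline_std))
    return bool(median_err >= median_limit or p < p_limit)
```

In practice the baseline mean and spread would be estimated per linac from the initial characterisation period, as the paper describes.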
Read, Michael L; Morgan, Philip B; Maldonado-Codina, Carole
2009-11-01
This work sought to undertake a comprehensive investigation of the measurement errors associated with contact angle assessment of curved hydrogel contact lens surfaces. The contact angle coefficient of repeatability (COR) associated with three measurement conditions (image analysis COR, intralens COR, and interlens COR) was determined by measuring the contact angles (using both sessile drop and captive bubble methods) for three silicone hydrogel lenses (senofilcon A, balafilcon A, lotrafilcon A) and one conventional hydrogel lens (etafilcon A). Image analysis COR values were about 2 degrees, whereas intralens COR values (95% confidence intervals) ranged from 4.0 degrees (3.3 degrees, 4.7 degrees) (lotrafilcon A, captive bubble) to 10.2 degrees (8.4 degrees, 12.1 degrees) (senofilcon A, sessile drop). Interlens COR values ranged from 4.5 degrees (3.7 degrees, 5.2 degrees) (lotrafilcon A, captive bubble) to 16.5 degrees (13.6 degrees, 19.4 degrees) (senofilcon A, sessile drop). Measurement error associated with image analysis was shown to be small as an absolute measure, although proportionally more significant for lenses with low contact angles. Sessile drop contact angles were typically less repeatable than captive bubble contact angles. For sessile drop measures, repeatability was poorer with the silicone hydrogel lenses than with the conventional hydrogel lens; this phenomenon was not observed for the captive bubble method, suggesting that methodological factors related to the sessile drop technique (such as surface dehydration and blotting) may play a role in the increased variability of contact angle measurements observed with silicone hydrogel contact lenses.
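A coefficient of repeatability like those reported above is commonly computed, under the Bland-Altman convention, as 1.96 times the standard deviation of the within-pair differences between repeated measurements. The abstract does not state the paper's exact formula, so the following is a hedged sketch of that common definition:

```python
import numpy as np

def coefficient_of_repeatability(m1, m2):
    """Bland-Altman coefficient of repeatability for paired repeat
    measurements of the same quantity: 1.96 * SD of the within-pair
    differences. One common convention, assumed here for illustration."""
    d = np.asarray(m1, dtype=float) - np.asarray(m2, dtype=float)
    return 1.96 * d.std(ddof=1)
```

For intralens vs interlens COR, the pairs would be repeat measurements on the same lens vs measurements on different lenses of the same material.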
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Larecki, Wieslaw; Banach, Zbigniew
2014-01-01
This paper analyzes the propagation of waves of weak discontinuity in a phonon gas described by four-moment maximum entropy phonon hydrodynamics involving a nonlinear isotropic phonon dispersion relation. For the considered hyperbolic equations of phonon gas hydrodynamics, the eigenvalue problem is analyzed and the condition of genuine nonlinearity is discussed. The speed of the wave front propagating into the region in thermal equilibrium is first determined in terms of an integral formula dependent on the phonon dispersion relation and subsequently explicitly calculated for the Dubey dispersion-relation model: |k| = (ω/c)(1 + bω²). The specification of the parameters c and b for sodium fluoride (NaF) and semimetallic bismuth (Bi) then makes it possible to compare the calculated dependence of the wave-front speed on the sample's temperature with the empirical relations of Coleman and Newman (1988) describing, for NaF and Bi, the variation of the second-sound speed with temperature. It is demonstrated that the calculated temperature dependence of the wave-front speed resembles the empirical relation, and that the parameters c and b obtained from fitting the empirical relation and the original material parameters of Dubey (1973), respectively, are of the same order of magnitude, the difference being in the values of the numerical factors. It is also shown that the calculated temperature dependence is in good agreement with the predictions of Hardy and Jaswal's theory (Hardy and Jaswal, 1971) of second-sound propagation. This suggests that the nonlinearity of the phonon dispersion relation should be taken into account in theories aiming at the description of wave-type phonon heat transport, and that the Dubey nonlinear isotropic dispersion-relation model can be very useful for this purpose.
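The Dubey model and the frequency dependence it implies for phonon propagation speeds can be written out explicitly; the group-velocity expression below simply follows from differentiating the dispersion relation and is a sketch, not a formula quoted from the paper:

```latex
|\mathbf{k}| \;=\; \frac{\omega}{c}\,\bigl(1 + b\,\omega^{2}\bigr),
\qquad
v_g \;=\; \frac{d\omega}{d|\mathbf{k}|} \;=\; \frac{c}{1 + 3\,b\,\omega^{2}} .
```

The nonlinearity parameter b thus lowers the group velocity at higher frequencies, which is the mechanism through which the dispersion model feeds into the temperature dependence of the wave-front speed.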
Kimmel, David G.; McGlaughon, Benjamin D.; Leonard, Jeremy; Paerl, Hans W.; Taylor, J. Christopher; Cira, Emily K.; Wetz, Michael S.
2015-05-01
Estuaries often have distinct zones of high chlorophyll a concentrations, known as chlorophyll maximum (CMAX). The persistence of these features is often attributed to physical (mixing and light availability) and chemical (nutrient availability) features, but the role of mesozooplankton grazing is rarely explored. We measured the spatial and temporal variability of the CMAX and mesozooplankton community in the eutrophic Neuse River Estuary, North Carolina. We also conducted grazing experiments to determine the relative impact of mesozooplankton grazing on the CMAX during the phytoplankton growing season (spring through late summer). The CMAX was consistently located upriver of the zone of maximum zooplankton abundance, with an average spatial separation of 18 km. Grazing experiments in the CMAX region revealed negligible effect of mesozooplankton on chlorophyll a during March, and no effect during June or August. These results suggest that the spatial separation of the peak in chlorophyll a concentration and mesozooplankton abundance results in minimal impact of mesozooplankton grazing, contributing to persistence of the CMAX for prolonged time periods. In the Neuse River Estuary, the low mesozooplankton abundance in the CMAX region is attributed to lack of a low salinity tolerant species, predation by the ctenophore Mnemiopsis leidyi, and/or physiologic impacts on mesozooplankton growth rates due to temperature (in the case of low wintertime abundances). The consequences of this lack of overlap result in exacerbation of the effects of eutrophication; namely a lack of trophic transfer to mesozooplankton in this region and the sinking of phytodetritus to the benthos that fuels hypoxia.
Ganushchak, Lesya Y; Schiller, Niels O
2008-01-01
During speech production, we continuously monitor what we say. In situations in which speech errors potentially have more severe consequences, e.g. during a public presentation, our verbal self-monitoring system may pay closer attention to preventing errors than in situations in which speech errors are more acceptable, such as a casual conversation. In an event-related potential study, we investigated whether or not motivation affected participants' performance using a picture naming task in a semantic blocking paradigm. The semantic context of the to-be-named pictures was manipulated; blocks were semantically related (e.g., cat, dog, horse, etc.) or semantically unrelated (e.g., cat, table, flute, etc.). Motivation was manipulated independently by monetary reward. The motivation manipulation did not affect error rate during picture naming. However, the high-motivation condition yielded increased amplitude and latency values of the error-related negativity (ERN) compared to the low-motivation condition, presumably indicating higher monitoring activity. Furthermore, participants showed semantic interference effects in reaction times and error rates. The ERN amplitude was also larger during semantically related than unrelated blocks, presumably indicating that semantic relatedness induces more conflict between possible verbal responses.
Driving error and anxiety related to iPod mp3 player use in a simulated driving experience.
Harvey, Ashley R; Carden, Randy L
2009-08-01
Driver distraction due to cellular phone usage has repeatedly been shown to increase the risk of vehicular accidents; however, the literature regarding the use of other personal electronic devices while driving is relatively sparse. It was hypothesized that the usage of an mp3 player would result in an increase in not only driving error while operating a driving simulator, but driver anxiety scores as well. It was also hypothesized that anxiety scores would be positively related to driving errors when using an mp3 player. 32 participants drove through a set course in a driving simulator twice, once with and once without an iPod mp3 player, with the order counterbalanced. Number of driving errors per course, such as leaving the road, impacts with stationary objects, loss of vehicular control, etc., and anxiety were significantly higher when an iPod was in use. Anxiety scores were unrelated to number of driving errors.
Prediction of human errors by maladaptive changes in event-related brain networks
Eichele, T.; Debener, S.; Calhoun, V.D.; Specht, K.; Engel, A.K.; Hugdahl, K.; Cramon, D.Y. von; Ullsperger, M.
2008-01-01
Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we
Grossi Márcio L
2007-04-01
Background: Vertical facial pattern may be related to the direction of pull of the masticatory muscles, yet its effect on occlusal force and elastic deformation of the mandible is still unclear. This study tested whether variation in vertical facial pattern is related to variation in maximum occlusal force (MOF) and medial mandibular flexure (MMF) in 51 fully dentate adults. Methods: Data from cephalometric analysis according to the method of Ricketts were used to divide the subjects into three groups: Dolichofacial (n = 6), Mesofacial (n = 10) and Brachyfacial (n = 35). Bilateral MOF was measured using a cross-arch force transducer placed in the first molar region. For MMF, impressions of the mandibular occlusal surface were made in rest (R) and maximum opening (O) positions. The impressions were scanned, and reference points were selected on the occlusal surfaces of the contralateral first molars. MMF was calculated by subtracting the intermolar distance in O from the intermolar distance in R. Data were analysed by ANCOVA (fixed factors: facial pattern, sex; covariate: body mass index (BMI); alpha = 0.05). Results: No significant difference in MOF or MMF was found among the three facial patterns (P = 0.62 and P = 0.72, respectively). BMI was not a significant covariate for MOF or MMF (P > 0.05). Sex was a significant factor only for MOF (P = 0.007); males had higher MOF values than females. Conclusion: These results suggest that MOF and MMF did not vary as a function of vertical facial pattern in this Brazilian sample.
Lauer, J. W.; Parker, G.
2005-05-01
The floodplains of meandering rivers represent reservoirs that both store and release sediment. Bed material is generally released from cut banks and replaced in nearby point bars wherever migration occurs. Measuring the associated bed material flux is important for tracing the movement of contaminants that may be mixed with the bed material. Approximations of this flux can be made using a representative channel depth and sequences of aerial photography to estimate average absolute migration rates (or reworked areas) between photographs. Error in the aerial photographs leads to a positive bias in computed release rates. A method for removing this bias is introduced that uses the apparent offset of fixed linear features such as roads (along smaller rivers) or abandoned channel courses (along larger rivers). Measuring the rate of release of fine sediment is important both for predicting the long term morphodynamic evolution of the channel/floodplain system and for tracing the movement of contaminants that may be adsorbed to the fine sediment. While fine sediment can be mixed throughout the depth of the floodplain, it is most concentrated in the upper portion of older parts of the floodplain where it has had time to accumulate through overbank deposition. Its release rate can be estimated using migration rates computed from aerial photography in combination with local measurements of bank topography, both of which are highly variable even within a given reach. Where detailed bank topography is available for an entire reach, estimating the release of fine sediment is relatively straightforward. However, detailed topography is often unavailable along the banks of large lowland rivers, forcing estimates of the fine material flux to be made using a relatively small number of physically surveyed cross-sections. It is not immediately clear how many cross sections are required for a good estimate. This study performs Monte Carlo simulations on a detailed topographic dataset
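The question of how many surveyed cross-sections suffice can be explored with the kind of Monte Carlo subsampling the study performs on its detailed topographic dataset. A minimal sketch, in which the sampling scheme and error metric are assumptions for illustration, not the study's exact procedure:

```python
import numpy as np

def subsample_error(values, n_sections, n_trials=2000, seed=0):
    """Monte Carlo estimate of the mean relative error in a
    reach-averaged quantity (e.g., bank height) when only n_sections
    randomly placed cross-sections are surveyed, relative to the
    'truth' from the full topographic dataset. Illustrative only."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    truth = values.mean()
    means = np.array([
        rng.choice(values, n_sections, replace=False).mean()
        for _ in range(n_trials)
    ])
    return np.abs(means - truth).mean() / truth
```

Plotting this error against n_sections would show how quickly the estimate converges as more cross-sections are surveyed.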
Reducing patient identification errors related to glucose point-of-care testing
Gaurav Alreja
2011-01-01
Background: Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Materials and Methods: Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server, which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds, and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. Results: When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%), in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Conclusion: Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT.
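The gating logic described above, checking the scanned or entered ID against current ADT registration data before any result is filed, can be sketched as follows; the function and data shapes are illustrative, not the actual interface manager API:

```python
def transmit_result(patient_id, glucose_mg_dl, adt_registry):
    """Gate a point-of-care glucose result on a match against current
    ADT (admission/discharge/transfer) registration data, so that ID
    errors surface at the bedside before a result is filed. The names
    and data shapes here are hypothetical, for illustration only."""
    if patient_id not in adt_registry:
        # Block testing until the identification error is resolved
        raise ValueError(f"unmatched patient ID: {patient_id!r}")
    return {"patient_id": patient_id, "glucose_mg_dl": glucose_mg_dl}
```

The key design point is that the check happens before testing, so an unmatched ID halts the workflow instead of producing an orphaned or misfiled result.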
Ureña-López, L. Arturo; Robles, Victor H.; Matos, T.
2017-08-01
Recent analysis of the rotation curves of a large sample of galaxies with very diverse stellar properties reveals a relation between the radial acceleration purely due to the baryonic matter and the one inferred directly from the observed rotation curves. Assuming the dark matter (DM) exists, this acceleration relation is tantamount to an acceleration relation between DM and baryons. This leads us to a universal maximum acceleration for all halos. Using the latter in DM profiles that predict inner cores implies that the central surface density μ_DM = ρ_s r_s must be a universal constant, as suggested by previous studies of selected galaxies, revealing a strong correlation between the density ρ_s and scale r_s parameters in each profile. We then explore the consequences of the constancy of μ_DM in the context of the ultralight scalar field dark matter (SFDM) model. We find that for this model μ_DM = 648 M⊙ pc⁻² and that the so-called WaveDM soliton profile should be a universal feature of DM halos. Comparing with the data from the Milky Way and Andromeda satellites, we find that they are all consistent with a boson mass of the scalar field particle of the order of 10⁻²¹ eV/c², which puts the SFDM model in agreement with recent cosmological constraints.
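The step from a universal maximum acceleration to a universal surface density is essentially dimensional: for a cored profile with characteristic density ρ_s and scale radius r_s, the peak gravitational acceleration scales as G ρ_s r_s. A hedged sketch of this scaling argument (not the paper's exact derivation):

```latex
a_{\max} \;\sim\; \frac{G\,M(r_s)}{r_s^{2}}
\;\sim\; \frac{G\,\rho_s\,r_s^{3}}{r_s^{2}}
\;=\; G\,\rho_s\,r_s
\quad\Rightarrow\quad
\mu_{\rm DM} \equiv \rho_s\,r_s \;=\; \mathrm{const.}
```

A universal a_max therefore fixes the product ρ_s r_s, which is the anticorrelation between the two profile parameters described above.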
Polli, Frida E.; Barton, Jason J. S.; Thakkar, Katharine N.; Greve, Douglas N.; Goff, Donald C.; Rauch, Scott L.; Manoach, Dara S.
2008-01-01
To perform well on any challenging task, it is necessary to evaluate your performance so that you can learn from errors. Recent theoretical and experimental work suggests that the neural sequelae of error commission in a dorsal anterior cingulate circuit index a type of contingency- or reinforcement-based learning, while activation in a rostral…
Frequency-domain generalized singular perturbation method for relative error model order reduction
Hamid Reza SHAKER
2009-01-01
A new mixed method for relative error model order reduction is proposed. In the proposed method, the frequency-domain balanced stochastic truncation method is improved by applying the generalized singular perturbation method to the frequency-domain balanced system in the reduction procedure. The frequency-domain balanced stochastic truncation method, which was proposed in [15] and [17] by the author, is based on two recently developed methods, namely frequency-domain balanced truncation within a desired frequency bound and inner-outer factorization techniques. The method proposed in this paper is a carry-over of frequency-domain balanced stochastic truncation and is of interest for practical model order reduction because, in this context, it keeps the accuracy of the approximation as high as possible without sacrificing computational efficiency or important system properties. It is shown that some important properties of the frequency-domain stochastic balanced reduction technique carry over to the proposed method through the concept and properties of reciprocal systems. Numerical results show the accuracy, simplicity and flexibility of the method.
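For context, plain Lyapunov-balanced truncation, the baseline that stochastic and frequency-domain variants refine, can be sketched via the standard square-root method. This is not the paper's frequency-domain stochastic algorithm, only the common starting point it builds on:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable system (A, B, C)
    to order r. Gramians come from the continuous Lyapunov equations
    A P + P A' = -B B' and A' Q + Q A = -C' C; the r largest Hankel
    singular values determine the retained states."""
    P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                     # s = Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt.T[:, :r] @ S                      # right projection
    Ti = S @ U[:, :r].T @ Lq.T                    # left projection (Ti @ T = I)
    return Ti @ A @ T, Ti @ B, C @ T
```

The stochastic and frequency-weighted variants replace the Gramians (and here the generalized singular perturbation step replaces plain truncation), but the balance-then-project structure is the same.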
Rodriguez-Vallejo, Manuel; Monsoriu, Juan A; Ferrando, Vicente; Furlan, Walter D
2016-01-01
Purpose: To assess the peripheral refraction induced by Fractal Contact Lenses (FCLs) in myopic eyes by means of a two-dimensional Relative Peripheral Refractive Error (RPRE) map. Methods: FCL prototypes were specially manufactured and characterized. This study involved twenty-six myopic subjects ranging from -0.50 D to -7.00 D. The two-dimensional RPRE was measured with an open-field autorefractor by means of tracking targets distributed in a square grid from -30 degrees (deg) nasal to 30 deg temporal and 15 deg superior to -15 deg inferior. Corneal topographies were taken in order to assess correlations between corneal asphericity, lens decentration and the RPRE represented in vector components M, J0 and J45. Results: The mean power of the FCL therapeutic zones was 1.32 +/- 0.28 D. Significant correlations were found between corneal asphericity and vector components of the RPRE in the naked eyes. FCLs were decentered a mean of 0.7 +/- 0.19 mm to the temporal cornea. M decreased asymmetrically between nas...
Adaptation of hybrid human-computer interaction systems using EEG error-related potentials.
Chavarriaga, Ricardo; Biasiucci, Andrea; Förster, Kilian; Roggen, Daniel; Tröster, Gerhard; Millán, José del R.
2010-01-01
Performance improvement in both humans and artificial systems strongly relies on the ability to recognize erroneous behavior or decisions. This paper, which builds upon previous studies on EEG error-related signals, presents a hybrid approach for human-computer interaction that uses human gestures to send commands to a computer and exploits brain activity to provide implicit feedback about the recognition of such commands. Using a simple computer game as a case study, we show that EEG activity evoked by erroneous gesture recognition can be classified in single trials above random levels. Automatic artifact rejection techniques are used, taking into account that subjects are allowed to move during the experiment. Moreover, we present a simple adaptation mechanism that uses the EEG signal to label newly acquired samples and can be used to re-calibrate the gesture recognition system in a supervised manner. Offline analyses show that, although the achieved EEG decoding accuracy is far from perfect, these signals convey sufficient information to significantly improve the overall system performance.
Data on simulated interpersonal touch, individual differences and the error-related negativity
Mandy Tjew-A-Sin
2016-06-01
The dataset includes data from the electroencephalogram study reported in our paper: ‘Effects of simulated interpersonal touch and trait intrinsic motivation on the error-related negativity’ (doi:10.1016/j.neulet.2016.01.044) (Tjew-A-Sin et al., 2016 [1]). The data were collected at the psychology laboratories at the Vrije Universiteit Amsterdam in 2012 among a Dutch-speaking student sample. The dataset consists of the measures described in the paper, as well as additional (exploratory) measures including the Five-Factor Personality Inventory, the Connectedness to Nature Scale, the Rosenberg Self-esteem Scale and a scale measuring life stress. The data can be used for replication purposes, meta-analyses, and exploratory analyses, as well as cross-cultural comparisons of touch and/or ERN effects. The authors also welcome collaborative research based on re-analyses of the data. The data described are available in the DANS archive: http://persistent-identifier.nl/?identifier=urn:nbn:nl:ui:13-tzbk-gg.
Correcting a fundamental error in greenhouse gas accounting related to bioenergy
Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy
2012-01-01
Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy. PMID:23576835
杜金华; 王莎
2013-01-01
The authors first introduce three typical word posterior probability (WPP) features for translation error detection and classification, namely fixed-position WPP, sliding-window WPP, and alignment-based WPP, and analyze their impact on detection performance. Each WPP feature is then combined with linguistic features (word identity, part of speech, and syntactic features extracted by the LG parser) in a maximum entropy classifier to predict translation errors, with experimental validation and comparison on Chinese-to-English NIST datasets. The results show that the influence of the different WPP features on the classification error rate (CER) is significant, and that combining WPP with linguistic features significantly reduces the CER and improves the error-prediction capability of the classifier.
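A minimal sketch of a maximum-entropy (logistic) classifier combining a WPP feature with a part-of-speech indicator, as in the setup above. The training examples and feature values below are synthetic stand-ins; in the real task they would come from the SMT system's N-best lists and a POS tagger:

```python
import math

# Tiny maximum-entropy (logistic) word-error classifier trained by gradient
# ascent. Features: bias, word posterior probability (WPP), POS indicator.
def featurize(wpp, pos):
    return [1.0, wpp, 1.0 if pos == "NN" else 0.0]

train = [((0.95, "NN"), 0), ((0.90, "VB"), 0),   # 0 = correct word
         ((0.20, "NN"), 1), ((0.15, "IN"), 1)]   # 1 = translation error

w = [0.0, 0.0, 0.0]
for _ in range(2000):                            # plain stochastic updates
    for (wpp, pos), y in train:
        x = featurize(wpp, pos)
        p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        w = [wi + 0.5 * (y - p) * xi for wi, xi in zip(w, x)]

def predict(wpp, pos):
    x = featurize(wpp, pos)
    p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
    return "error" if p > 0.5 else "correct"

print(predict(0.10, "NN"))  # low WPP -> error
print(predict(0.97, "NN"))  # high WPP -> correct
```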
DuPuis, David; Ram, Nilam; Willner, Cynthia J; Karalunas, Sarah; Segalowitz, Sidney J; Gatzke-Kopp, Lisa M
2015-05-01
Event-related potentials (ERPs) have been proposed as biomarkers capable of reflecting individual differences in neural processing not necessarily detectable at the behavioral level. However, the role of ERPs in developmental research could be hampered by current methodological approaches to quantification. ERPs are extracted as an average waveform over many trials; however, actual amplitudes would be misrepresented by an average if there was high trial-to-trial variability in signal latency. Low signal temporal consistency is thought to be a characteristic of immature neural systems, although consistency is not routinely measured in ERP research. The present study examined the differential contributions of signal strength and temporal consistency across trials in the error-related negativity (ERN) in 6-year-old children, as well as the developmental changes that occur in these measures. The 234 children were assessed annually in kindergarten, 1st, and 2nd grade. At all assessments signal strength and temporal consistency were highly correlated with the average ERN amplitude, and were not correlated with each other. Consistent with previous findings, ERN deflections in the averaged waveform increased with age. This was found to be a function of developmental increases in signal temporal consistency, whereas signal strength showed a significant decline across this time period. In addition, average ERN amplitudes showed low-to-moderate stability across the three assessments whereas signal strength was highly stable. In contrast, signal temporal consistency did not evidence rank-order stability across these ages. Signal strength appears to reflect a stable individual trait whereas developmental changes in temporal consistency may be experientially influenced.
Larson, Michael J; Steffen, Patrick R; Primosch, Mark
2013-01-01
Meditation is associated with positive health behaviors and improved cognitive control. One mechanism for the relationship between meditation and cognitive control is changes in activity of the anterior cingulate cortex-mediated neural pathways. The error-related negativity (ERN) and error positivity (Pe) components of the scalp-recorded event-related potential (ERP) represent cingulate-mediated functions of performance monitoring that may be modulated by mindfulness meditation. We utilized a flanker task, an experimental design, and a brief mindfulness intervention in a sample of 55 healthy non-meditators (n = 28 randomly assigned to the mindfulness group and n = 27 randomly assigned to the control group) to examine autonomic nervous system functions as measured by blood pressure and indices of cognitive control as measured by response times, error rates, post-error slowing, and the ERN and Pe components of the ERP. Systolic blood pressure significantly differentiated groups following the mindfulness intervention and following the flanker task. There were non-significant differences between the mindfulness and control groups for response times, post-error slowing, and error rates on the flanker task. Amplitude and latency of the ERN did not differ between groups; however, amplitude of the Pe was significantly smaller in individuals in the mindfulness group than in the control group. Findings suggest that a brief mindfulness intervention is associated with reduced autonomic arousal and decreased amplitude of the Pe, an ERP associated with error awareness, attention, and motivational salience, but does not alter amplitude of the ERN or behavioral performance. Implications for brief mindfulness interventions and state vs. trait affect theories of the ERN are discussed. Future research examining graded levels of mindfulness and tracking error awareness will clarify relationship between mindfulness and performance monitoring.
Tamaki, Hirofumi; Satoh, Hiroki; Hori, Satoko; Sawada, Yasufumi
2012-01-01
Confusion of drug names is one of the most common causes of drug-related medical errors. A similarity measure of drug names, "vwhtfrag", was developed to discriminate whether drug name pairs are likely to cause confusion errors, and to provide information that would be helpful to avoid errors. The aim of the present study was to evaluate and improve vwhtfrag. Firstly, we evaluated the correlation of vwhtfrag with subjective similarity or error rate of drug name pairs in psychological experiments. Vwhtfrag showed a higher correlation to subjective similarity (college students: r=0.84) or error rate than did other conventional similarity measures (htco, cos1, edit). Moreover, name pairs that showed coincidences of the initial character strings had a higher subjective similarity than those which had coincidences of the end character strings and had the same vwhtfrag. Therefore, we developed a new similarity measure (vwhtfrag+), in which coincidence of initial character strings in name pairs is weighted by 1.53 times over coincidence of end character strings. Vwhtfrag+ showed a higher correlation to subjective similarity than did unmodified vwhtfrag. Further studies appear warranted to examine in detail whether vwhtfrag+ has superior ability to discriminate drug name pairs likely to cause confusion errors.
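The abstract specifies only the weighting idea behind vwhtfrag+ (matches of initial character strings weighted 1.53 times over matches of end character strings), not the formula itself. The sketch below is therefore a hypothetical position-weighted bigram similarity built around that idea, not the actual published measure:

```python
# Hypothetical position-weighted similarity inspired by vwhtfrag+: shared
# character bigrams count head_weight (1.53) times as much when they occur in
# the first half of both names. This is NOT the published vwhtfrag formula.
def bigrams(name):
    return [name[i:i + 2] for i in range(len(name) - 1)]

def weighted_similarity(a, b, head_weight=1.53):
    ga, gb = bigrams(a), bigrams(b)
    def weight(g, name):
        # a bigram weighs more when it first occurs in the name's first half
        return head_weight if name.index(g) < len(name) / 2 else 1.0
    score = sum(min(weight(g, a), weight(g, b)) for g in set(ga) & set(gb))
    best = head_weight * max(len(ga), len(gb))
    return score / best if best else 1.0

# A known look-alike pair scores higher than an unrelated pair:
print(weighted_similarity("celebrex", "celexa") >
      weighted_similarity("celebrex", "lanoxin"))  # True
```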
Menelaou, Evdokia; Paul, Latoya T. [Department of Biological Sciences, Louisiana State University, Baton Rouge, LA 70803 (United States); Perera, Surangi N. [Joseph J. Zilber School of Public Health, University of Wisconsin — Milwaukee, Milwaukee, WI 53205 (United States); Svoboda, Kurt R., E-mail: svobodak@uwm.edu [Department of Biological Sciences, Louisiana State University, Baton Rouge, LA 70803 (United States); Joseph J. Zilber School of Public Health, University of Wisconsin — Milwaukee, Milwaukee, WI 53205 (United States)
2015-04-01
Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later-born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the three subpopulations of SMN axons differently, but the dorsal-projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal-projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early-born primary motoneurons (PMNs), we performed dual labeling studies in which both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the level of nicotine and the developmental exposure window.
Lindström, Björn R; Mattsson-Mårn, Isak Berglund; Golkar, Armita; Olsson, Andreas
2013-01-01
Cognitive control is needed when mistakes have consequences, especially when such consequences are potentially harmful. However, little is known about how the aversive consequences of deficient control affect behavior. To address this issue, participants performed a two-choice response time task where error commissions were expected to be punished by electric shocks during certain blocks. By manipulating (1) the perceived punishment risk (no, low, high) associated with error commissions, and (2) response conflict (low, high), we showed that motivation to avoid punishment enhanced performance during high response conflict. As a novel index of the processes enabling successful cognitive control under threat, we explored electromyographic activity in the corrugator supercilii (cEMG) muscle of the upper face. The corrugator supercilii is partially controlled by the anterior midcingulate cortex (aMCC) which is sensitive to negative affect, pain and cognitive control. As hypothesized, the cEMG exhibited several key similarities with the core temporal and functional characteristics of the Error-Related Negativity (ERN) ERP component, the hallmark index of cognitive control elicited by performance errors, and which has been linked to the aMCC. The cEMG was amplified within 100 ms of error commissions (the same time-window as the ERN), particularly during the high punishment risk condition where errors would be most aversive. Furthermore, similar to the ERN, the magnitude of error cEMG predicted post-error response time slowing. Our results suggest that cEMG activity can serve as an index of avoidance motivated control, which is instrumental to adaptive cognitive control when consequences are potentially harmful.
Alsharif, W; Davis, M; McGee, A; Rainford, L
2017-05-01
To investigate MR radiographers' current knowledge base and confidence level in relation to quality-related errors within MR images. Thirty-five MR radiographers within 16 MRI departments in the Kingdom of Saudi Arabia (KSA) independently reviewed a prepared set of 25 MR images, naming the error, specifying the error-correction strategy, and scoring their confidence in recognising the error and suggesting a correction strategy on a scale of 1-100. The datasets were obtained from MRI departments in the KSA to represent the range of images which depicted excellent, acceptable and poor image quality. The findings demonstrated a low level of radiographer knowledge in identifying the types of quality errors and in suggesting appropriate strategies to rectify them. Only 20% (n = 7) of the radiographers could correctly name the quality errors in 70% of the dataset, and none of the radiographers correctly specified the error-correction strategy in more than 68% of the MR datasets. The radiographers' confidence in stating the type of image quality error differed significantly across error types. The findings of this study suggest there is a need to establish a national association for MR radiographers to monitor training and the development of postgraduate MRI education in Saudi Arabia, to improve the current status of MR radiographers' knowledge, and to direct high-quality service delivery. Copyright © 2016 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.
Van Zeijl, H.W.; Bijnen, F.G.C.; Slabbekoorn, J.
2004-01-01
To validate the Front-To-Backwafer Alignment (FTBA) calibration and to investigate process-related overlay errors, electrical overlay test structures that require FTBA are used [1]. Anisotropic KOH etching through the wafer is applied to transfer the back-wafer pattern to the front wafer. Consequently,
Benau, Erik M; Moelter, Stephen T
2016-09-01
The Error-Related Negativity (ERN) and Correct-Response Negativity (CRN) are brief event-related potential (ERP) components, elicited after the commission of a response, that are associated with motivation, emotion, and affect. The Error Positivity (Pe) typically appears after the ERN and corresponds to awareness of having committed an error. Although motivation has long been established as an important factor in the expression and morphology of the ERN, physiological state has rarely been explored as a variable in these investigations. In the present study, we investigated whether self-reported physiological state (SRPS; wakefulness, hunger, or thirst) corresponds with ERN amplitude and type of lexical stimuli. Participants completed a SRPS questionnaire and then a speeded lexical decision task with words and pseudowords that were either food-related or neutral. Though similar in frequency and length, food-related stimuli elicited increased accuracy and faster errors, and generated a larger ERN and smaller CRN than neutral words. Self-reported thirst correlated with improved accuracy and smaller ERN and CRN amplitudes. The Pe and Pc (correct positivity) were not affected by physiological state or by stimulus content. The results indicate that physiological state and manipulations of lexical content may serve as important avenues for future research. Studies that apply more sensitive measures of physiological and motivational state (e.g., biomarkers for satiety) or direct manipulations of satiety may be a useful approach for further research into response monitoring. Copyright © 2016 Elsevier Inc. All rights reserved.
Tops, Mattie; Boksem, Maarten A. S.; Wester, Anne E.; Lorist, Monicque M.; Meijman, Theo F.
2006-01-01
Previous results suggest that both cortisol mobilization and the error-related negativity (ERN/Ne) reflect goal engagement, i.e. the mobilization and allocation of attentional and physiological resources. Personality measures of negative affectivity have been associated both to high cortisol levels
Taylor, Matthew A.; Skourides, Andreas; Alvero, Alicia M.
2012-01-01
Interval recording procedures are used by persons who collect data through observation to estimate the cumulative occurrence and nonoccurrence of behavior/events. Although interval recording procedures can increase the efficiency of observational data collection, they can also induce error from the observer. In the present study, 50 observers were…
SCIAMACHY WFM-DOAS XCO2: reduction of scattering related errors
R. Sussmann
2012-10-01
Global observations of column-averaged dry-air mole fractions of carbon dioxide (CO2), denoted XCO2, retrieved from SCIAMACHY on board ENVISAT can provide important and missing global information on the distribution and magnitude of regional CO2 surface fluxes. This application has challenging precision and accuracy requirements. In a previous publication (Heymann et al., 2012), it has been shown by analysing seven years of SCIAMACHY WFM-DOAS XCO2 (WFMDv2.1) that unaccounted-for thin cirrus clouds can result in significant errors. In order to enhance the quality of the SCIAMACHY XCO2 data product, we have developed a new version of the retrieval algorithm (WFMDv2.2), which is described in this manuscript. It is based on an improved cloud filtering and correction method using the 1.4 μm strong water vapour absorption and 0.76 μm O2-A bands. The new algorithm has been used to generate a SCIAMACHY XCO2 data set covering the years 2003–2009. The new XCO2 data set has been validated using ground-based observations from the Total Carbon Column Observing Network (TCCON). The validation shows a significant improvement of the new product (v2.2) in comparison to the previous product (v2.1). For example, the standard deviation of the difference to TCCON at Darwin, Australia, has been reduced from 4 ppm to 2 ppm. The monthly regional-scale scatter of the data (defined as the mean intra-monthly standard deviation of all quality-filtered XCO2 retrievals within a radius of 350 km around various locations) has also been reduced, typically by a factor of about 1.5. Overall, the validation of the new WFMDv2.2 XCO2 data product can be summarised by a single-measurement precision of 3.8 ppm, an estimated regional-scale (radius of 500 km) precision of monthly averages of 1.6 ppm and an estimated regional-scale relative accuracy of 0.8 ppm. In addition to the comparison with the limited number of TCCON sites, we also present a comparison with NOAA's global CO2 modelling
McInerney, F. A.; Bloch, J. I.; Secord, R.; Wing, S. L.; Kraus, M. J.; Boyer, D. M.
2009-12-01
The Paleocene-Eocene Thermal Maximum (PETM) presents an opportunity to characterize continental hydrologic changes during rapid and extreme global warming. The Bighorn Basin, Wyoming, USA, has long been recognized for the PETM sequences preserved there and sits in an ideal location for recording hydrologic changes in the interior of North America. The southeast Bighorn Basin is of particular interest because it contains not only alluvial paleosols and vertebrate fossils, but also macrofloral remains from the PETM. The carbon isotope excursion associated with this event is preserved in this part of the Basin in leaf wax lipids, tooth enamel, and bulk organic matter. To characterize the hydrologic changes that occurred during the PETM we are applying a suite of isotopic, paleobotanical and paleopedological approaches to sections in the southeast Bighorn Basin. Reported here are results from the combined hydrogen and oxygen isotope analysis aimed at reconstructing relative humidity. Oxygen isotope ratios (δ18O) of biogenic apatite from mammalian tooth enamel and fish scales vary with environment, physiology and diet. Because mammals are homeothermic, they primarily track surface water values with predictable physiological offsets. Hydrogen isotope ratios (δD) of leaf-wax lipids (long-chain n-alkanes) reflect both meteoric water δD values and additional D-enrichment caused by evapotranspiration. The enrichment factor between water δD and n-alkane δD can therefore be used as a proxy for relative humidity (RH). In this study, δ18O of surface water is estimated using the δ18O of Coryphodon tooth enamel. We use these δ18O values to estimate surface water δD values using the Global Meteoric Water Line (δD = 8δ18O + 10). We then calculate relative humidity from n-alkane δD values using a Craig-Gordon type isotopic model for D-enrichment caused by transpiration from leaves. Results of the combined hydrogen-oxygen isotope paleohygrometer indicate a general rise in
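The isotope arithmetic described above can be sketched as follows. The GMWL conversion is as stated in the text, and the enrichment factor is the standard delta-notation definition; the full Craig-Gordon inversion to relative humidity involves additional equilibrium and kinetic fractionation terms and is omitted here. The numeric inputs are hypothetical:

```python
# Isotope arithmetic for the combined hydrogen-oxygen paleohygrometer.
def water_dD_from_d18O(d18O):
    """Estimate surface-water deltaD (permil) from delta18O via the
    Global Meteoric Water Line: deltaD = 8 * delta18O + 10."""
    return 8.0 * d18O + 10.0

def apparent_enrichment(dD_wax, dD_water):
    """epsilon_wax/water (permil): standard delta-notation enrichment factor
    between leaf-wax n-alkanes and their source water."""
    return ((1000.0 + dD_wax) / (1000.0 + dD_water) - 1.0) * 1000.0

# Hypothetical inputs: enamel-derived d18O of -8 permil and a leaf-wax
# deltaD of -160 permil (illustrative values, not the study's data).
dD_water = water_dD_from_d18O(-8.0)   # -54.0 permil
print(dD_water)
print(apparent_enrichment(-160.0, dD_water))  # net wax-water enrichment
```

A larger (less negative) enrichment factor would indicate stronger evapotranspirative D-enrichment, i.e. lower relative humidity, which is the quantity the Craig-Gordon model then inverts for.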
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content, a sort of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. First, we derive minimum residual error rates when the stored data comes from a uniform binary source. Second, we determine the minimum amo...
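For a uniform binary source read through a memoryless binary symmetric channel with flip probability below 1/2, maximum-likelihood retrieval reduces to nearest-neighbor retrieval in Hamming distance, since the likelihood p^d (1-p)^(n-d) decreases with the distance d. A minimal sketch (the stored patterns are illustrative):

```python
# ML retrieval over a binary symmetric channel = minimum Hamming distance:
# p^d * (1-p)^(n-d) is monotonically decreasing in d for p < 0.5, so the
# most likely stored pattern is the one closest to the noisy probe.
def ml_retrieve(stored, probe):
    return min(stored, key=lambda m: sum(a != b for a, b in zip(m, probe)))

memory = ["101100", "010011", "111000"]
print(ml_retrieve(memory, "101101"))  # "101100": one bit flip away
```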
Rongjun Yu
2014-05-01
Humans make predictions and use feedback to update their subsequent predictions. The feedback-related negativity (FRN) has been found to be sensitive to negative feedback as well as negative prediction error, such that the FRN is larger for outcomes that are worse than expected. The present study examined prediction errors in both appetitive and aversive conditions. We found that the FRN was more negative for reward omission versus wins and for loss omission versus losses, suggesting that the FRN might classify outcomes in a more-or-less-than-expected fashion rather than along the better-or-worse-than-expected dimension. Our findings challenge the previous notion that the FRN only encodes negative feedback and ‘worse than expected’ negative prediction error.
Kujawa, Autumn; Weinberg, Anna; Bunford, Nora; Fitzgerald, Kate D; Hanna, Gregory L; Monk, Christopher S; Kennedy, Amy E; Klumpp, Heide; Hajcak, Greg; Phan, K Luan
2016-11-03
Increased error monitoring, as measured by the error-related negativity (ERN), has been shown to persist after treatment for obsessive-compulsive disorder in youth and adults; however, no previous studies have examined the ERN following treatment for related anxiety disorders. We used a flanker task to elicit the ERN in 28 youth and young adults (8-26 years old) with primary diagnoses of generalized anxiety disorder (GAD) or social anxiety disorder (SAD) and 35 healthy controls. Patients were assessed before and after treatment with cognitive-behavioral therapy (CBT) or selective serotonin reuptake inhibitors (SSRIs), and healthy controls were assessed at a comparable interval. The ERN increased across assessments in the combined sample. Patients with SAD exhibited an enhanced ERN relative to healthy controls prior to and following treatment, even when analyses were limited to SAD patients who responded to treatment. Patients with GAD did not significantly differ from healthy controls at either assessment. Results provide preliminary evidence that enhanced error monitoring persists following treatment for SAD in youth and young adults, and support conceptualizations of increased error monitoring as a trait-like vulnerability that may contribute to risk for recurrence and impaired functioning later in life. Future work is needed to further evaluate the ERN in GAD across development, including whether an enhanced ERN develops in adulthood or is most apparent when worries focus on internal sources of threat. Copyright © 2016 Elsevier Inc. All rights reserved.
Zhenhe eZhou
2013-09-01
Internet addiction disorder (IAD) is an impulse disorder, or is at least related to impulse control disorder. Deficits in executive functioning, including response monitoring, have been proposed as a hallmark feature of impulse control disorders. The error-related negativity (ERN) reflects an individual's ability to monitor behavior. Since IAD belongs to the compulsive-impulsive spectrum of disorders, it should theoretically present the response monitoring functional deficits characteristic of disorders such as substance dependence, ADHD or alcohol abuse when tested with an Eriksen flanker task. To date, no studies of response monitoring functional deficits in IAD have been reported. The purpose of the present study was to examine whether IAD displays such deficits in a modified Eriksen flanker task. Twenty-three subjects were recruited as the IAD group, and 23 healthy persons matched for age, gender and education were recruited as the control group. All participants completed the modified Eriksen flanker task while event-related potentials (ERPs) were recorded. The IAD group made more total errors than controls (P < 0.01), and reaction times for total error responses were shorter in the IAD group than in controls (P < 0.01). The mean ERN amplitudes for total error responses at frontal and central electrode sites were reduced in the IAD group compared with the control group (all P < 0.01). These results reveal that IAD displays response monitoring functional deficits and shares the ERN characteristics of compulsive-impulsive spectrum disorders.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Kuo, Grace M; Touchette, Daniel R; Marinac, Jacqueline S
2013-03-01
To describe and evaluate drug errors and related clinical pharmacist interventions. Cross-sectional observational study with an online data collection form. American College of Clinical Pharmacy practice-based research network (ACCP PBRN). A total of 62 clinical pharmacists from the ACCP PBRN who provided direct patient care in the inpatient and outpatient practice settings. Clinical pharmacist participants identified drug errors in their usual practices and submitted online error reports over a period of 14 consecutive days during 2010. The 62 clinical pharmacists submitted 924 reports; of these, 779 reports from 53 clinical pharmacists had complete data. Drug errors occurred in both the inpatient (61%) and outpatient (39%) settings. Therapeutic categories most frequently associated with drug errors were systemic antiinfective (25%), hematologic (21%), and cardiovascular (19%) drugs. Approximately 95% of drug errors did not result in patient harm; however, 33 drug errors resulted in treatment or medical intervention, 6 resulted in hospitalization, 2 required treatment to sustain life, and 1 resulted in death. The types of drug errors were categorized as prescribing (53%), administering (13%), monitoring (13%), dispensing (10%), documenting (7%), and miscellaneous (4%). Clinical pharmacist interventions included communication (54%), drug changes (35%), and monitoring (9%). Approximately 89% of clinical pharmacist recommendations were accepted by the prescribers: 5% with drug therapy modifications, 28% due to clinical pharmacist prescriptive authority, and 56% without drug therapy modifications. This study provides insight into the role clinical pharmacists play with regard to drug error interventions using a national practice-based research network. Most drug errors reported by clinical pharmacists in the United States did not result in patient harm; however, severe harm and death due to drug errors were reported. Drug error types, therapeutic categories, and
Urban, Michal; Leššo, Roman; Pelclová, Daniela
2016-07-01
The purpose of the article was to study unintentional pharmaceutical-related poisonings committed by laypersons that were reported to the Toxicological Information Centre in the Czech Republic. Identifying the frequency, sources, reasons and consequences of medication errors in laypersons could help to reduce the overall rate of medication errors. Records of medication error enquiries from 2013 to 2014 were extracted from the electronic database, and the following variables were reviewed: drug class, dosage form, dose, age of the subject, cause of the error, time interval from ingestion to the call, symptoms, prognosis at the time of the call and first aid recommended. Of the calls, 1354 met the inclusion criteria. Among them, central nervous system-affecting drugs (23.6%), respiratory drugs (18.5%) and alimentary drugs (16.2%) were the most common drug classes involved in the medication errors. The highest proportion of the patients was in the youngest age subgroup, 0-5 years old (46%). The reasons for the medication errors involved misinterpretation of the leaflet or a mistaken dose (53.6%), mixing up medications (19.2%), attempting to reduce pain with repeated doses (6.4%), erroneous routes of administration (2.2%), psychiatric/elderly patients (2.7%), others (9.0%) or unknown (6.9%). A high proportion of children among the patients may be due to the fact that children's dosages for many drugs vary by their weight, and more medications come in a variety of concentrations. Most overdoses could be prevented by safer labelling, proper cap closure systems for liquid products and medication reconciliation by both physicians and pharmacists.
Carcelero, E; Tuset, M; Martin, M; De Lazzari, E; Codina, C; Miró, J; Gatell, Jm
2011-09-01
The aim of the study was to identify antiretroviral-related errors in the prescribing of medication to HIV-infected inpatients and to ascertain the degree of acceptance of the pharmacist's interventions. An observational, prospective, 1-year study was conducted in a 750-bed tertiary-care teaching hospital by a pharmacist trained in HIV pharmacotherapy. Interactions with antiretrovirals were checked for contraindicated combinations. Inpatient antiretroviral prescriptions were compared with outpatient dispensing records for reconciliation. Renal and hepatic function was monitored to determine the need for dose adjustments. The prescriptions for 247 admissions (189 patients) were reviewed. Sixty antiretroviral-related problems were identified in 41 patients (21.7%). The most common problem was contraindicated combinations (n=20; 33.3%), followed by incorrect dose (n=10; 16.7%), dose omission (n=9; 15%), lack of dosage reduction in patients with renal or hepatic impairment (n=6; 10% and n=1; 1.7%, respectively), omission of an antiretroviral (n=6; 10%), addition of an alternative antiretroviral (n=5; 8.3%) and incorrect schedule according to outpatient treatment (n=3; 5%). Fifteen out of 20 errors were made during admission. A multivariate analysis showed that factors associated with an increased risk of antiretroviral-related problems included renal impairment [odds ratio (OR) 3.95; 95% confidence interval (CI) 1.39-11.23], treatment with atazanavir (OR 3.53; 95% CI 1.61-7.76) and admission to a unit other than an infectious diseases unit (OR 2.50; 95% CI 1.28-4.88). Use of a nonnucleoside reverse transcriptase inhibitor was a protective factor (OR 0.33; 95% CI 0.13-0.81). Ninety-two per cent of the pharmacist's interventions were accepted. Antiretroviral-related errors affected more than one-in-five patients. The most common causes of error were contraindicated or not recommended drug-drug combinations and dose-related errors. A clinical pharmacist trained in HIV
da Cunha, Antonio Ribeiro
2015-05-01
This study aimed to assess measurements of temperature and relative humidity obtained with a HOBO data logger, under various conditions of exposure to solar radiation, comparing them with those obtained through the use of a temperature/relative humidity probe and a copper-constantan thermocouple psychrometer, which are considered the standards for obtaining such measurements. Data were collected over a 6-day period (from 25 March to 1 April, 2010), during which the equipment was monitored continuously and simultaneously. We employed the following combinations of equipment and conditions: a HOBO data logger in full sunlight; a HOBO data logger shielded within a white plastic cup with windows for air circulation; a HOBO data logger shielded within a gill-type shelter (a multi-plate plastic prototype); a copper-constantan thermocouple psychrometer exposed to natural ventilation and protected from sunlight; and a temperature/relative humidity probe under a commercial, multi-plate radiation shield. Comparisons between the measurements obtained with the various devices were made on the basis of statistical indicators: linear regression, with coefficient of determination; index of agreement; maximum absolute error; and mean absolute error. The prototype multi-plate (gill-type) shelter used to protect the HOBO data logger was found to provide the best protection against the effects of solar radiation on measurements of temperature and relative humidity. The precision and accuracy of a device that measures temperature and relative humidity depend on an efficient shelter that minimizes the interference caused by solar radiation, thereby avoiding erroneous analysis of the data obtained.
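The four statistical indicators used in these comparisons are straightforward to reproduce. A minimal sketch follows; the readings are invented for illustration, and Willmott's index of agreement is assumed to be the "index of agreement" meant in the abstract:

```python
def comparison_stats(ref, test):
    """r^2 from linear regression, Willmott's index of agreement,
    maximum absolute error and mean absolute error of test vs. ref."""
    n = len(ref)
    mean_r = sum(ref) / n
    mean_t = sum(test) / n
    sxx = sum((x - mean_r) ** 2 for x in ref)
    syy = sum((y - mean_t) ** 2 for y in test)
    sxy = sum((x - mean_r) * (y - mean_t) for x, y in zip(ref, test))
    r2 = sxy ** 2 / (sxx * syy)            # coefficient of determination
    num = sum((y - x) ** 2 for x, y in zip(ref, test))
    den = sum((abs(y - mean_r) + abs(x - mean_r)) ** 2
              for x, y in zip(ref, test))
    d = 1.0 - num / den                    # Willmott's index of agreement
    abs_err = [abs(y - x) for x, y in zip(ref, test)]
    return r2, d, max(abs_err), sum(abs_err) / n

ref = [20.1, 22.4, 25.0, 27.3, 24.8, 21.5]    # reference psychrometer, degC
hobo = [20.6, 23.1, 26.2, 28.9, 25.5, 21.9]   # logger under test, degC
r2, d, max_abs_err, mean_abs_err = comparison_stats(ref, hobo)
```

With these example readings the logger tracks the reference closely (high r-squared and index of agreement) while carrying a small warm bias, which is the kind of pattern the shelter comparison is designed to expose.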
Alqubaisi, Mai; Tonna, Antonella; Strath, Alison; Stewart, Derek
2016-07-01
Effective and efficient medication reporting processes are essential in promoting patient safety. Few qualitative studies have explored reporting of medication errors by health professionals, and none have made reference to behavioural theories. The objective was to describe and understand the behavioural determinants of health professional reporting of medication errors in the United Arab Emirates (UAE). This was a qualitative study comprising face-to-face, semi-structured interviews within three major medical/surgical hospitals of Abu Dhabi, the UAE. Health professionals were sampled purposively in strata of profession and years of experience. The semi-structured interview schedule focused on behavioural determinants around medication error reporting, facilitators, barriers and experiences. The Theoretical Domains Framework (TDF; a framework of theories of behaviour change) was used as a coding framework. Ethical approval was obtained from a UK university and all participating hospital ethics committees. Data saturation was achieved after interviewing ten nurses, ten pharmacists and nine physicians. Whilst it appeared that patient safety and organisational improvement goals and intentions were behavioural determinants which facilitated reporting, there were key determinants which deterred reporting. These included beliefs about the consequences of reporting (lack of any feedback following reporting, and impact on professional reputation, relationships and career progression), emotions (fear and worry) and issues related to the environmental context (time taken to report). These key behavioural determinants which negatively impact error reporting can facilitate the development of an intervention, centring on organisational safety and reporting culture, to enhance reporting effectiveness and efficiency.
Bhattacharyya, Saugat; Konar, Amit; Tibarewala, D N
2014-12-01
The paper proposes a novel approach toward EEG-driven position control of a robot arm by utilizing motor imagery, P300 and error-related potentials (ErRP) to align the robot arm with the desired target position. In the proposed scheme, the users generate motor imagery signals to control the motion of the robot arm. The P300 waveforms are detected when the user intends to stop the motion of the robot on reaching the goal position. The error potentials are employed as a feedback response by the user. On detection of an error, the control system performs the necessary corrections on the robot arm. Here, an AdaBoost-Support Vector Machine (SVM) classifier is used to decode the 4-class motor imagery and an SVM is used to decode the presence of P300 and ErRP waveforms. The average steady-state error, peak overshoot and settling time obtained for our proposed approach are 0.045, 2.8% and 44 s, respectively, and the average rate of reaching the target is 95%. The results obtained for the proposed control scheme make it suitable for the design of prosthetics in rehabilitative applications.
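The three control metrics reported above (steady-state error, peak overshoot and settling time) can be computed from any sampled trajectory. A minimal sketch, not the authors' pipeline, using an invented response curve and an assumed 5% settling band:

```python
def control_metrics(t, y, target, band=0.05):
    """Steady-state error, peak overshoot (%) and settling time of a
    sampled response y(t) relative to a target position."""
    steady_state_error = abs(target - y[-1])
    peak_overshoot = max(0.0, (max(y) - target) / target * 100.0)
    settling_time = 0.0   # last instant the response lies outside +/-band
    for ti, yi in zip(t, y):
        if abs(yi - target) > band * abs(target):
            settling_time = ti
    return steady_state_error, peak_overshoot, settling_time

t = [0, 1, 2, 3, 4, 5, 6]                    # seconds (invented samples)
y = [0.0, 0.6, 1.08, 0.97, 1.0, 1.0, 1.0]    # normalised arm position
sse, overshoot_pct, settling = control_metrics(t, y, target=1.0)
```

For this toy trajectory the arm overshoots the target by 8% at the third sample and settles within the 5% band thereafter, so the settling time is the last out-of-band instant.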
Kobayashi, A; Yoneda, T; Yoshikawa, M; Ikuno, M; Takenaka, H; Fukuoka, A; Narita, N; Nezu, K
2000-01-01
To assess the factors determining maximum exercise performance in patients with chronic obstructive pulmonary disease (COPD), we examined nutritional status with special reference to body composition and pulmonary function in 50 stable COPD patients. Nutritional status was evaluated by body weight and body composition, including fat mass (FM) and fat-free mass (FFM) assessed by bioelectrical impedance analysis (BIA). Exercise performance was evaluated by maximum oxygen uptake (Vo(2max)) on a cycle ergometer. A total of 50 patients (FEV(1) = 0.98 L) was divided randomly into either a study group (group A, n = 25) or validation group (group B, n = 25). Stepwise regression analysis was performed in group A to determine the best predictors of Vo(2max) from measurements of pulmonary function and nutritional status. Stepwise regression analysis revealed that Vo(2max) was predicted best by the following equation in group A: Vo(2max) (mL/min) = 10.223 x FFM (kg) + 4.188 x MVV (L/min) + 9.952 x DL(co) (mL/min/mmHg) - 127.9 (r = 0.84, p < 0.001). This equation was then cross-validated in group B: Measured Vo(2max) (mL/min) = 1.554 x Predicted Vo(2max) (mL/min) - 324.0 (r = 0.87, p < 0.001). We conclude that FFM is an important factor in determining maximum exercise performance, along with pulmonary function parameters, in patients with COPD.
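The group A regression equation quoted above is straightforward to apply. A sketch follows; the coefficients come from the abstract, while the patient values are hypothetical and for illustration only:

```python
def predicted_vo2max(ffm_kg, mvv_l_min, dlco_ml_min_mmhg):
    """Group A regression reported in the abstract:
    Vo2max (mL/min) = 10.223*FFM + 4.188*MVV + 9.952*DLco - 127.9"""
    return (10.223 * ffm_kg + 4.188 * mvv_l_min
            + 9.952 * dlco_ml_min_mmhg - 127.9)

# Hypothetical patient (illustrative inputs, not study data):
v = predicted_vo2max(ffm_kg=45.0, mvv_l_min=40.0, dlco_ml_min_mmhg=12.0)
```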
Heat production and error probability relation in Landauer reset at effective temperature
Neri, Igor; López-Suárez, Miquel
2016-09-01
The erasure of a classical bit of information is a dissipative process. The minimum heat produced during this operation was theorized by Rolf Landauer in 1961 to equal kBT ln2 and is known as the Landauer limit, Landauer reset or Landauer principle. Despite its fundamental importance, the Landauer limit remained untested experimentally for more than fifty years, until recently, when it was tested using colloidal particles and magnetic dots. Experimental measurements on other devices, such as micro-mechanical systems or nano-electronic devices, are still missing. Here we show the results obtained in performing the Landauer reset operation in a micro-mechanical system, operated at an effective temperature. The measured heat exchange is in accordance with theory, reaching values close to the expected limit. The heat production data are then correlated with the probability of error in accomplishing the reset operation.
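The Landauer limit mentioned above is simple to evaluate numerically; a sketch using the exact SI value of the Boltzmann constant:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_limit(temperature_k):
    """Minimum heat (J) dissipated when erasing one bit: k_B * T * ln 2."""
    return K_B * temperature_k * math.log(2)

q_room = landauer_limit(300.0)  # about 2.87e-21 J at room temperature
```

Operating at an effective temperature, as in the abstract, simply means substituting that effective T into the same expression.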
Reducing Individual Variation for fMRI Studies in Children by Minimizing Template Related Errors.
Jian Weng
Spatial normalization is an essential process for group comparisons in functional MRI studies. In practice, there is a risk of normalization errors, particularly in studies involving children, seniors or diseased populations and in regions with high individual variation. One way to minimize normalization errors is to create a study-specific template based on a large sample size. However, studies with a large sample size are not always feasible, particularly for studies of children. The performance of templates with a small sample size has not been evaluated in fMRI studies in children. In the current study, this issue was encountered in a working memory task with 29 children in two groups. We compared the performance of three templates: a study-specific template created from the experimental population, a Chinese children template and the widely used adult MNI template. We observed distinct differences in the right orbitofrontal region among the three templates in between-group comparisons. The study-specific template and the Chinese children template were more sensitive for the detection of between-group differences in the orbitofrontal cortex than the MNI template. Proper templates could effectively reduce individual variation. Further analysis revealed a correlation between the BOLD contrast size and the norm index of the affine transformation matrix (the SFN), which characterizes the difference between a template and a native image and differs significantly across subjects. We therefore proposed and tested another method to reduce individual variation, which included the SFN as a covariate in group-wise statistics. This correction exhibited outstanding performance in enhancing detection power in group-level tests. A training effect of abacus-based mental calculation was also demonstrated, with significantly elevated activation in the right orbitofrontal region that correlated with behavioral response time across subjects in the trained group.
Meyer, Alexandria; Hajcak, Greg; Glenn, Catherine R; Kujawa, Autumn J; Klein, Daniel N
2017-04-01
Identifying biomarkers that characterize developmental trajectories leading to anxiety disorders will likely improve early intervention strategies as well as increase our understanding of the etiopathogenesis of these disorders. The error-related negativity (ERN), an event-related potential that occurs during error commission, is increased in anxious adults and children, and has been shown to predict the onset of anxiety disorders across childhood. The ERN has therefore been suggested as a biomarker of anxiety. However, it remains unclear what specific processes a potentiated ERN may reflect. We have recently proposed that the ERN may reflect trait-like differences in threat sensitivity; however, very few studies have examined the ERN in relation to other indices of this construct. In the current study, the authors measured the ERN, as well as affective modulation of the startle reflex, in a large sample (N = 155) of children. Children characterized by a large ERN also exhibited greater potentiation of the startle response in the context of unpleasant images, but not in the context of neutral or pleasant images. In addition, the ERN, but not the startle response, related to child anxiety disorder status. These results suggest a relationship between error-related brain activity and aversive potentiation of the startle reflex during picture viewing, consistent with the notion that both measures may reflect individual differences in threat sensitivity. However, the results suggest the ERN may be a superior biomarker of anxiety in children.
Abtahi, F.; Gyllensten, I. C.; Lindecrantz, K.; Seoane, F.
2012-12-01
During the last decades, Electrical Bioimpedance Spectroscopy (EBIS) has been applied in a range of different applications, mainly using the frequency-sweep technique. Traditionally the tissue under study is considered to be time-invariant, and dynamic changes of tissue activity are ignored and instead treated as a noise source. This assumption has not been adequately tested and could have a negative impact and limit the accuracy of impedance monitoring systems. In order to successfully use frequency-sweep EBIS for monitoring time-variant systems, it is paramount to study the effect of frequency-sweep delay on Cole model-based analysis. In this work, we present a software tool that can be used to simulate the influence of respiration activity in frequency-sweep EBIS measurements of the human thorax and to analyse the effects of the different error sources. Preliminary results indicate that the deviation in the EBIS measurement might be significant at any frequency, and especially in the impedance plane. Therefore the impact on Cole-model analysis might differ depending on the method applied for Cole parameter estimation.
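The Cole-model analysis referred to above fits the single-dispersion Cole impedance function to spectroscopy data. A sketch of that function follows, with illustrative (not measured) tissue parameters; at low frequency the impedance approaches R0 and at high frequency it approaches R_inf:

```python
import math

def cole_impedance(freq_hz, r0, r_inf, tau, alpha):
    """Single-dispersion Cole model:
    Z(w) = R_inf + (R0 - R_inf) / (1 + (j*w*tau)**alpha)"""
    jw_tau = complex(0.0, 2.0 * math.pi * freq_hz * tau) ** alpha
    return r_inf + (r0 - r_inf) / (1.0 + jw_tau)

# Illustrative parameter values (assumptions, not study data):
z_low = cole_impedance(1e3, r0=500.0, r_inf=200.0, tau=1e-6, alpha=0.8)
z_high = cole_impedance(1e6, r0=500.0, r_inf=200.0, tau=1e-6, alpha=0.8)
```

A frequency-sweep delay matters precisely because each frequency point of Z is sampled at a different instant; if the tissue changes during the sweep (e.g., with respiration), the fitted R0, R_inf, tau and alpha no longer describe a single state.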
The influence of relatives on the efficiency and error rate of familial searching.
Rori V Rohlfs
We investigate the consequences of adopting the criteria used by the state of California, as described by Myers et al. (2011), for conducting familial searches. We carried out a simulation study of randomly generated profiles of related and unrelated individuals with 13-locus CODIS genotypes and YFiler® Y-chromosome haplotypes, on which the Myers protocol for relative identification was carried out. For Y-chromosome sharing first-degree relatives, the Myers protocol has a high probability (80~99%) of identifying their relationship. For unrelated individuals, there is a low probability that an unrelated person in the database will be identified as a first-degree relative. For more distant Y-haplotype sharing relatives (half-siblings, first cousins, half-first cousins or second cousins), there is a substantial probability that the more distant relative will be incorrectly identified as a first-degree relative. For example, there is a 3~18% probability that a first cousin will be identified as a full sibling, with the probability depending on the population background. Although the California familial search policy is likely to identify a first-degree relative if his profile is in the database, and it poses little risk of falsely identifying an unrelated individual in a database as a first-degree relative, there is a substantial risk of falsely identifying a more distant Y-haplotype sharing relative in the database as a first-degree relative, with the consequence that their immediate family may become the target for further investigation. This risk falls disproportionately on those ethnic groups that are currently overrepresented in state and federal databases.
The Influence of Relatives on the Efficiency and Error Rate of Familial Searching
Rohlfs, Rori V.; Murphy, Erin; Song, Yun S.; Slatkin, Montgomery
2013-01-01
We investigate the consequences of adopting the criteria used by the state of California, as described by Myers et al. (2011), for conducting familial searches. We carried out a simulation study of randomly generated profiles of related and unrelated individuals with 13-locus CODIS genotypes and YFiler® Y-chromosome haplotypes, on which the Myers protocol for relative identification was carried out. For Y-chromosome sharing first-degree relatives, the Myers protocol has a high probability (80~99%) of identifying their relationship. For unrelated individuals, there is a low probability that an unrelated person in the database will be identified as a first-degree relative. For more distant Y-haplotype sharing relatives (half-siblings, first cousins, half-first cousins or second cousins) there is a substantial probability that the more distant relative will be incorrectly identified as a first-degree relative. For example, there is a 3~18% probability that a first cousin will be identified as a full sibling, with the probability depending on the population background. Although the California familial search policy is likely to identify a first-degree relative if his profile is in the database, and it poses little risk of falsely identifying an unrelated individual in a database as a first-degree relative, there is a substantial risk of falsely identifying a more distant Y-haplotype sharing relative in the database as a first-degree relative, with the consequence that their immediate family may become the target for further investigation. This risk falls disproportionately on those ethnic groups that are currently overrepresented in state and federal databases. PMID:23967076
Andrieux, A.; Vandanjon, P. O.; Lengelle, R.; Chabanon, C.
2010-12-01
Tyre-road estimation methods have been the objective of many research programmes throughout the world. Most of these methods aim at estimating the friction components, such as tyre longitudinal slip rate κ and friction coefficient μ, in the contact patch area. In order to estimate the maximum available friction coefficient μmax, these methods generally use a probabilistic relationship between the grip obtained for low tyre excitations (such as constant-speed driving) and the grip obtained for high tyre excitations (such as an emergency braking manoeuvre). Confirmation or invalidation of this relationship from experimental results is the purpose of this paper. Experiments have been carried out on a reference track including several test boards corresponding to a wide textural spectrum. The main advantage of these experiments lies in the use of a vehicle allowing us to accurately build a point-by-point relationship between κ and μ. This relationship has been determined for different tyres and pavement textures. Finally, the curves obtained are analysed to check the validity of the relationship between the current friction coefficient used by the car during normal driving conditions and μmax.
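The slip-friction relationship between κ and μ discussed above is often summarised by parametric models. A sketch using the Burckhardt form with textbook dry-asphalt coefficients (an assumption for illustration, not the paper's measured data) shows how μmax emerges as the peak of the μ(κ) curve:

```python
import math

def burckhardt_mu(kappa, c1=1.28, c2=23.99, c3=0.52):
    """Burckhardt slip-friction model: mu = c1*(1 - exp(-c2*kappa)) - c3*kappa.
    Default coefficients are commonly quoted dry-asphalt values."""
    return c1 * (1.0 - math.exp(-c2 * kappa)) - c3 * kappa

# Scan the slip range 0..1 to locate the maximum available friction
slips = [i / 1000.0 for i in range(1001)]
mu_max = max(burckhardt_mu(k) for k in slips)
kappa_at_max = max(slips, key=burckhardt_mu)
```

The peak occurs at a moderate slip rate; beyond it, μ decreases toward the sliding value, which is why low-excitation driving alone cannot observe μmax directly and a modelled or probabilistic link is needed.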
Ruiz, Maria Cristina; Ayala, Victoria; Portero-Otín, Manel; Requena, Jesús R; Barja, Gustavo; Pamplona, Reinald
2005-10-01
Aging affects all organisms and its basic mechanisms are expected to be conserved across species. Oxidation of proteins has been proposed to be one of the basic mechanisms linking oxygen radicals with the basic aging process. If oxidative damage to proteins is involved in aging, long-lived animals (which age slowly) should show lower levels of markers of this kind of damage than short-lived ones. However, this possibility has not been investigated yet. In this study, steady-state levels of markers of different kinds of protein damage--oxidation (glutamic and aminoadipic semialdehydes), mixed glyco- and lipoxidation (carboxymethyl- and carboxyethyllysine), lipoxidation (malondialdehydelysine) and amino acid composition--were measured in the heart of eight mammalian species ranging in maximum life span (MLSP) from 3.5 to 46 years. Oxidation markers were directly correlated with MLSP across species. Mixed glyco- and lipoxidation markers did not correlate with MLSP. However, the lipoxidation marker malondialdehydelysine was inversely correlated with MLSP (r2=0.85; P<0.001). The amino acid compositional analysis revealed that methionine is the only amino acid strongly correlated with MLSP and that such correlation is negative (r2=0.93; P<0.001). This trait may contribute to lower steady-state levels of oxidized methionine residues in cellular proteins. These results reinforce the notion that high longevity in homeothermic vertebrates is achieved in part by constitutively decreasing the sensitivity of both tissue proteins and lipids to oxidative damage. This is obtained by modifying the constituent structural components of proteins and lipids, selecting those less sensitive to oxidative modifications.
Mazur, Elizabeth; Wolchik, Sharlene
Building on prior literature on adults' and children's appraisals of stressors, this study investigated relations among negative and positive appraisal biases, negative divorce events, and children's post-divorce adjustment. Subjects were 79 custodial nonremarried mothers and their children ages 9 to 13 who had experienced parental divorce within…
Quantifying SST Errors from an OGCM in Relation to Atmospheric Forcing Variables
2009-03-03
Spatial and temporal variability of sea surface temperature (SST) is closely related to the substantial heat content of the ocean mixed layer, which...salinity from Polar Science Center (PSC) Hydrographic Climatology (PHC) (Steele et al., 2001). This relaxation is designed to keep the evaporation...advection, which were generally small in the 0.72° model. The accuracy of SST from each simulation was evaluated in comparison to a satellite-based clima
Cuicui Wang
2015-01-01
This investigation is among the first ones to analyze the neural basis of an investment process with money flow information of financial market, using a simplified task where volunteers had to choose to buy or not to buy stocks based on the display of positive or negative money flow information. After choosing “to buy” or “not to buy,” participants were presented with feedback. At the same time, event-related potentials (ERPs) were used to record investor’s brain activity and capture the event-related negativity (ERN) and feedback-related negativity (FRN) components. The results of ERN suggested that there might be a higher risk and more conflict when buying stocks with negative net money flow information than positive net money flow information, and the inverse was also true for the “not to buy” stocks option. The FRN component evoked by the bad outcome of a decision was more negative than that by the good outcome, which reflected the difference between the values of the actual and expected outcome. From the research, we could further understand how investors perceived money flow information of financial market and the neural cognitive effect in investment process.
Wang, Cuicui; Vieito, João Paulo; Ma, Qingguo
2015-01-01
This investigation is among the first ones to analyze the neural basis of an investment process with money flow information of financial market, using a simplified task where volunteers had to choose to buy or not to buy stocks based on the display of positive or negative money flow information. After choosing "to buy" or "not to buy," participants were presented with feedback. At the same time, event-related potentials (ERPs) were used to record investor's brain activity and capture the event-related negativity (ERN) and feedback-related negativity (FRN) components. The results of ERN suggested that there might be a higher risk and more conflict when buying stocks with negative net money flow information than positive net money flow information, and the inverse was also true for the "not to buy" stocks option. The FRN component evoked by the bad outcome of a decision was more negative than that by the good outcome, which reflected the difference between the values of the actual and expected outcome. From the research, we could further understand how investors perceived money flow information of financial market and the neural cognitive effect in investment process.
Operator- and software-related post-experimental variability and source of error in 2-DE analysis.
Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo
2012-05-01
In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods, which are less demanding from a technical standpoint, 2-DE is still compelling and has a lot of potential for improvement. The overall variability which affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which, so far, has been largely neglected. In this short review, we have focused on this topic and explained that post-experimental variability and sources of error can be further divided into those which are software-dependent and those which are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results, and summarizing the advantages and drawbacks of each approach.
Baxter, Lisa K; Wright, Rosalind J; Paciorek, Christopher J; Laden, Francine; Suh, Helen H; Levy, Jonathan I
2010-01-01
In large epidemiological studies, many researchers use surrogates of air pollution exposure such as geographic information system (GIS)-based characterizations of traffic or simple housing characteristics. It is important to evaluate quantitatively these surrogates against measured pollutant concentrations to determine how their use affects the interpretation of epidemiological study results. In this study, we quantified the implications of using exposure models derived from validation studies, and other alternative surrogate models with varying amounts of measurement error, on epidemiological study findings. We compared previously developed multiple regression models characterizing residential indoor nitrogen dioxide (NO₂), fine particulate matter (PM₂.₅), and elemental carbon (EC) concentrations to models with less explanatory power that may be applied in the absence of validation studies. We constructed a hypothetical epidemiological study, under a range of odds ratios, and determined the bias and uncertainty caused by the use of various exposure models predicting residential indoor exposure levels. Our simulations illustrated that exposure models with fairly modest R² (0.3 to 0.4 for the previously developed multiple regression models for PM₂.₅ and NO₂) yielded substantial improvements in epidemiological study performance, relative to the application of regression models created in the absence of validation studies or poorer-performing validation study models (e.g., EC). In many studies, models based on validation data may not be possible, so it may be necessary to use a surrogate model with more measurement error. This analysis provides a technique to quantify the implications of applying various exposure models with different degrees of measurement error in epidemiological research.
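The attenuation these simulations quantify can be sketched in a few lines: under classical measurement error, regressing an outcome on a surrogate whose R² against the true exposure is about 0.35 shrinks the estimated effect by roughly that factor. This is a minimal illustrative sketch with invented numbers, not the paper's simulation framework.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True exposure and an outcome with unit effect size
x = rng.normal(size=n)
y = 1.0 * x + rng.normal(size=n)

# Surrogate exposure with R^2 ~ 0.35 against the truth
noise_sd = np.sqrt(1 / 0.35 - 1)   # chosen so Var(x)/Var(w) = 0.35
w = x + noise_sd * rng.normal(size=n)

beta_true = np.polyfit(x, y, 1)[0]       # ~1.0
beta_surrogate = np.polyfit(w, y, 1)[0]  # attenuated toward ~0.35

# Classical measurement error attenuates the slope by ~R^2
```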
Lowther, Andrew D; Lydersen, Christian; Fedak, Mike A; Lovell, Phil; Kovacs, Kit M
2015-01-01
Understanding how an animal utilises its surroundings requires its movements through space to be described accurately. Satellite telemetry is the only means of acquiring movement data for many species; however, the data are prone to varying amounts of spatial error. The recent application of state-space models (SSMs) to the location estimation problem has provided a means to incorporate spatial errors when characterising animal movements. Service Argos, the predominant platform for collecting satellite telemetry data on free-ranging animals, recently provided an alternative Doppler location estimation algorithm that is purported to be more accurate and to generate a greater number of locations than its predecessor. We provide a comprehensive assessment of the performance of this new estimation process on data from free-ranging animals, relative to concurrently collected Fastloc GPS data. Additionally, we test the efficacy of three readily available SSMs in predicting the movement of two focal animals. Raw Argos location estimates generated by the new algorithm were greatly improved compared to the old system. Approximately twice as many Argos locations were derived compared to GPS on the devices used. Root Mean Square Errors (RMSE) for each optimal SSM were less than 4.25 km, with some producing RMSE of less than 2.50 km. Differences in the biological plausibility of the tracks between the two focal animals used to investigate the utility of SSMs highlight the importance of considering animal behaviour in movement studies. The ability to reprocess Argos data collected since 2008 with the new algorithm should permit questions of animal movement to be revisited at a finer resolution.
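The RMSE comparison of model-predicted and GPS positions can be sketched as follows; the coordinates and the haversine distance calculation are illustrative assumptions, not the authors' processing code.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between paired coordinate arrays."""
    r = 6371.0  # mean Earth radius, km
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * np.arcsin(np.sqrt(a))

def track_rmse_km(pred, obs):
    """RMSE (km) of model-predicted positions against matched GPS fixes.

    pred, obs: arrays of shape (n, 2) holding (lat, lon) in degrees.
    """
    d = haversine_km(pred[:, 0], pred[:, 1], obs[:, 0], obs[:, 1])
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical matched fixes: SSM prediction vs. Fastloc GPS
pred = np.array([[69.00, 18.00], [69.02, 18.05]])
obs  = np.array([[69.01, 18.00], [69.02, 18.02]])
rmse = track_rmse_km(pred, obs)  # order of 1 km for these offsets
```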
Rosana Huerta-Albarrán
2015-03-01
Objective: To compare the performance of children with attention deficit hyperactivity disorder-combined (ADHD-C) type with control children in the multi-source interference task (MSIT), evaluated by means of the error-related negativity (ERN). Method: We studied 12 children with ADHD-C type with a median age of 7 years; control children were age- and gender-matched. Children performed the MSIT with simultaneous recording of the ERN. Results: We found no differences in MSIT parameters between groups, and no differences in ERN variables between groups. We found a significant association of ERN amplitude with MSIT in children with ADHD-C type. Some correlations were in the positive direction (frequency of hits and MSIT amplitude) and others in the negative direction (frequency of errors and RT in the MSIT). Conclusion: Children with ADHD-C type exhibited a significant association between ERN amplitude and the MSIT. These results underline the participation of a cingulo-fronto-parietal network and could help in the comprehension of the pathophysiological mechanisms of ADHD.
José M. Cancela
2009-11-01
The purpose of this study is to provide a tool, based on knowledge of technical errors, that helps improve the teaching and learning of the Uki Goshi technique. To this end, we set out to determine the most frequent errors made by 44 students when performing this technique and how these mistakes relate to one another. An observational analysis was carried out using the OSJUDO-UKG instrument, and the data were registered using Match Vision Studio (Castellano, Perea, Alday and Hernández, 2008). The results, analyzed through descriptive statistics, show that the absence of a correct initial unbalancing movement (45.5%), the lack of a proper right-arm pull (56.8%), failure to block the faller's body (Uke) against the thrower's (Tori) hip (54.5%), and throwing the Uke over the Tori's side (72.7%) are the most common mistakes. Through sequential analysis of T-Patterns obtained with the THÈME program (Magnusson, 1996, 2000), we conclude that not blocking the body with the Tori's hip provokes the Uke's throw over the Tori's side during the final phase of the technique (95.8%), and that positioning the right arm on the dorsal region of the Uke's back during the Tsukuri entails the absence of a subsequent pull of the Uke's body (73.3%).
Stephan Hilgert
2014-02-01
Investigations in the context of greenhouse gas production measurements in sub-tropical reservoirs brought up the necessity to survey in situ pore-water gas and ion concentrations at many positions within a relatively short time. As several sediment cores were taken, the interest in analyzing the pore water at the same time and at the same positions forced us to develop a cost- and time-saving method for the placement of dialysis pore-water samplers (DPS). General prerequisites were the ability to place several DPS per day, within a flexible depth range of up to 40 m, and on a low budget. To meet these requirements, a DPS placing system (DPSPS) was developed that allows the precise placement of DPS in water depths of up to 40 m, while also assessing the biases of on-board measurements and possible methodological improvements. The DPSPS was transported to Brazil and tested in a 10-day measurement campaign. The measurements were carried out during two campaigns, in December 2012 and March 2013, in the Capivari Reservoir north-east of Curitiba in the State of Paraná. The system worked properly and several DPS could be placed from a 5 m class aluminum boat. The placement was performed with high accuracy with regard to positioning as well as the penetration depth of the DPS. After recovery of the DPS, possible biases during sampling were analyzed. Possible back-diffusion was investigated, taking oxygen concentration as one representative parameter for estimating sample behavior. Laboratory as well as field results showed that special care has to be taken to minimize the influence of diffusion processes during post-recovery sampling. The results also suggested that the membranes used are affected by clogging, which is likely to influence the diffusion times of various ions and gases. It can be stated that the DPSPS was developed successfully as regards the demands in terms of handling as well as monitoring efficiency and sample
Matsubara, Kazuo; Toyama, Akira; Satoh, Hiroshi; Suzuki, Hiroshi; Awaya, Toshio; Tasaki, Yoshikazu; Yasuoka, Toshiaki; Horiuchi, Ryuya
2011-04-01
It is obvious that pharmacists play a critical role as risk managers in the healthcare system, especially in medication treatment. Hitherto, no multicenter survey has described the effectiveness of clinical pharmacists in preventing medical errors in hospital wards in Japan. Thus, between October 1 and 31, 2009, we conducted a 1-month survey to elucidate the relationship between the number of errors and the working hours of pharmacists in the ward, and to verify whether the assignment of clinical pharmacists to the ward would prevent medical errors. Questionnaire items for the pharmacists at 42 national university hospitals and a medical institute included the total and the respective numbers of medication-related errors, beds, and working hours of pharmacists in 2 internal medicine and 2 surgical departments in each hospital. Regardless of severity, errors were consecutively reported to the Medical Security and Safety Management Section in each hospital. The analysis of errors revealed that longer working hours of pharmacists in the ward resulted in fewer medication-related errors; this was especially significant in the internal medicine wards (where a variety of drugs were used) compared with the surgical wards. However, the nurse assignment mode (nurse/inpatient ratio: 1 : 7-10) did not influence the error frequency. The results of this survey strongly indicate that the assignment of clinical pharmacists to the ward is critically essential in promoting medication safety and efficacy.
Tops, Mattie; Boksem, Maarten A S; Wester, Anne E; Lorist, Monicque M; Meijman, Theo F
2006-08-01
Previous results suggest that both cortisol mobilization and the error-related negativity (ERN/Ne) reflect goal engagement, i.e. the mobilization and allocation of attentional and physiological resources. Personality measures of negative affectivity have been associated both to high cortisol levels and large ERN/Ne amplitudes. However, measures of positive social adaptation and agreeableness have also been related to high cortisol levels and large ERN/Ne amplitudes. We hypothesized that, as long as they relate to concerns over social evaluation and mistakes, both personality measures reflecting positive affectivity (e.g. agreeableness) and those reflecting negative affectivity (e.g. behavioral shame proneness) would be associated with an increased likelihood of high task engagement, and hence to increased cortisol mobilization and ERN/Ne amplitudes. We had female subjects perform a flanker task while EEG was recorded. Additionally, the subjects filled out questionnaires measuring mood and personality, and salivary cortisol immediately before and after task performance was measured. The overall pattern of relationships between our measures supports the hypothesis that cortisol mobilization and ERN/Ne amplitude reflect task engagement, and both relate positively to each other and to the personality traits agreeableness and behavioral shame proneness. We discuss the potential importance of engagement-disengagement and of concerns over social evaluation for research on psychopathology, stress and the ERN/Ne.
Simpson, Matthew J. R.; Milne, Glenn A.; Huybrechts, Philippe; Long, Antony J.
2009-08-01
We constrain a three-dimensional thermomechanical model of Greenland ice sheet (GrIS) evolution from the Last Glacial Maximum (LGM, 21 ka BP) to the present-day using, primarily, observations of relative sea level (RSL) as well as field data on past ice extent. Our new model (Huy2) fits a majority of the observations and is characterised by a number of key features: (i) the ice sheet had an excess volume (relative to present) of 4.1 m ice-equivalent sea level at the LGM, which increased to reach a maximum value of 4.6 m at 16.5 ka BP; (ii) retreat from the continental shelf was not continuous around the entire margin, as there was a Younger Dryas readvance in some areas. The final episode of marine retreat was rapid and relatively late (c. 12 ka BP), leaving the ice sheet land based by 10 ka BP; (iii) in response to the Holocene Thermal Maximum (HTM) the ice margin retreated behind its present-day position by up to 80 km in the southwest, 20 km in the south and 80 km in a small area of the northeast. As a result of this retreat the modelled ice sheet reaches a minimum extent between 5 and 4 ka BP, which corresponds to a deficit volume (relative to present) of 0.17 m ice-equivalent sea level. Our results suggest that remaining discrepancies between the model and the observations are likely associated with non-Greenland ice load, differences between modelled and observed present-day ice elevation around the margin, lateral variations in Earth structure and/or the pattern of ice margin retreat.
[Survey in hospitals. Nursing errors, error culture and error management].
Habermann, Monika; Cramer, Henning
2010-09-01
Knowledge on errors is important to design safe nursing practice and its framework. This article presents results of a survey on this topic, including data of a representative sample of 724 nurses from 30 German hospitals. Participants predominantly remembered medication errors. Structural and organizational factors were rated as most important causes of errors. Reporting rates were considered low; this was explained by organizational barriers. Nurses in large part expressed having suffered from mental problems after error events. Nurses' perception focussing on medication errors seems to be influenced by current discussions which are mainly medication-related. This priority should be revised. Hospitals' risk management should concentrate on organizational deficits and positive error cultures. Decision makers are requested to tackle structural problems such as staff shortage.
Ngeow, Chow-Choong; Kanbur, Shashi M.; Bhardwaj, Anupam; Schrecengost, Zachariah; Singh, Harinder P.
2017-01-01
Investigation of period–color (PC) and amplitude–color (AC) relations at maximum and minimum light can be used to probe the interaction of the hydrogen ionization front (HIF) with the photosphere and the radiation hydrodynamics of the outer envelopes of Cepheids and RR Lyraes. For example, theoretical calculations indicated that such interactions would occur at minimum light for RR Lyrae and result in a flatter PC relation. In the past, the PC and AC relations have been investigated by using either the (V − R)_MACHO or (V − I) colors. In this work, we extend previous work to other bands by analyzing the RR Lyraes in the Sloan Digital Sky Survey Stripe 82 Region. Multi-epoch data are available for RR Lyraes located within the footprint of the Stripe 82 Region in five (ugriz) bands. We present the PC and AC relations at maximum and minimum light in four colors: (u − g)₀, (g − r)₀, (r − i)₀, and (i − z)₀, after they are corrected for extinction. We found that the PC and AC relations for this sample of RR Lyraes show a complex nature in the form of flat, linear or quadratic relations. Furthermore, the PC relations at minimum light for fundamental mode RR Lyrae stars are separated according to the Oosterhoff type, especially in the (g − r)₀ and (r − i)₀ colors. If only considering the results from linear regressions, our results are quantitatively consistent with the theory of HIF-photosphere interaction for both fundamental and first overtone RR Lyraes.
Carr, D.; Felce, J.
2008-01-01
Background: Children who have a combination of language and developmental disabilities with autism often experience major difficulties in learning relations between objects and their graphic representations. Therefore, they would benefit from teaching procedures that minimize their difficulties in acquiring these relations. This study compared two…
Saveljev, Vladimir; Kim, Sung-Kyu; Lee, Hyoung; Kim, Hyun-Woo; Lee, Byoungho
2016-02-08
The amplitude of the moiré patterns is estimated in relation to the opening ratio in line gratings and square grids. The theory is developed; the experimental measurements are performed. The minimum and the maximum of the amplitude are found. There is a good agreement between the theoretical and experimental data. This is additionally confirmed by the visual observation. The results can be applied to the image quality improvement in autostereoscopic 3D displays, to the measurements, and to the moiré displays.
Larson, Michael J; Clayson, Peter E; Keith, Cierra M; Hunt, Isaac J; Hedges, Dawson W; Nielsen, Brent L; Call, Vaughn R A
2016-03-01
Older adults display alterations in neural reflections of conflict-related processing. We examined response times (RTs), error rates, and event-related potential (ERP; N2 and P3 components) indices of conflict adaptation (i.e., congruency sequence effects), a cognitive control process wherein previous-trial congruency influences current-trial performance, along with post-error slowing, correct-related negativity (CRN), error-related negativity (ERN) and error positivity (Pe) amplitudes in 65 healthy older adults and 94 healthy younger adults. Older adults showed generalized slowing, had decreased post-error slowing, and committed more errors than younger adults. Both older and younger adults showed conflict adaptation effects; the magnitude of conflict adaptation did not differ by age. N2 amplitudes were similar between groups; younger, but not older, adults showed conflict adaptation effects for P3 component amplitudes. CRN and Pe, but not ERN, amplitudes differed between groups. Data support generalized declines in cognitive control processes in older adults without specific deficits in conflict adaptation.
Potts, Geoffrey F
2011-09-01
The error-related negativity (ERN) is thought to index a neural behavior monitoring system with its source in anterior cingulate cortex (ACC). While ACC is involved in a wide variety of cognitive and emotional tasks, there is debate as to what aspects of ACC function are indexed by the ERN. In one model the ERN indexes purely cognitive function, responding to mismatch between intended and executed actions. Another model posits that the ERN is more emotionally driven, elicited when an action is inconsistent with motivational goals. If the ERN indexes mismatch between intended and executed actions, then it should be insensitive to motivational valence, e.g. reward or punishment; in contrast if the ERN indexes the evaluation of responses relative to goals, then it might respond differentially under differing motivational valence. This study used a flanker task motivated by potential reward and potential punishment on different trials and also examined the N2 and P3 to the imperative stimulus, the response Pe, and the FRN and P3 to the outcome feedback to assess the impact of motivation valence on other stages of information processing in this choice reaction time task. Participants were slower on punishment motivated trials and both the N2 and ERN were larger on punishment motivated trials, indicating that loss aversion has an impact on multiple stages of information processing including behavior monitoring.
ZHANG De'er; Demaree Gaston
2004-01-01
In the context of historical climate records of China and early meteorological measurements of Beijing recently discovered in Europe, a study is undertaken of the 1743 summer, the hottest in north China over the last 700 years, covering Beijing, Tianjin, and the provinces of Hebei, Shanxi and Shandong, with the highest temperature reaching 44.4℃ in July 1743 in Beijing, exceeding the maximum record of the 20th century. Results show that the related weather/climate features of the 1743 heat wave, e.g., flood/drought distribution and Meiyu activity, and the external forcings, such as solar activity and the equatorial Pacific SST condition, are the same as those of the 1942 and 1999 heat events. It is noted that the 1743 extreme summer event occurred against a relatively warm climate background prior to the Industrial Revolution, with a lower level of CO₂ release.
Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J; Kujawa, Autumn J; Laptook, Rebecca S; Torpey, Dana C; Klein, Daniel N
2015-07-01
The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission--although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children's ERN approximately 3 years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately 3 years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children's error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this
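The mediation logic of the study (harsh parenting → larger ERN → child anxiety) can be illustrated with a toy product-of-coefficients computation on simulated data; all effect sizes below are invented for illustration and are not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Simulated mediation chain: parenting -> ERN magnitude -> anxiety
parenting = rng.normal(size=n)
ern = 0.5 * parenting + rng.normal(size=n)                   # a path
anxiety = 0.4 * ern + 0.1 * parenting + rng.normal(size=n)   # b and c' paths

# a path: regress mediator on predictor
a = np.polyfit(parenting, ern, 1)[0]

# b and c' paths: regress outcome on mediator and predictor jointly
X = np.column_stack([np.ones(n), ern, parenting])
b, c_prime = np.linalg.lstsq(X, anxiety, rcond=None)[0][1:3]

indirect = a * b  # product-of-coefficients indirect effect, ~0.2 here
```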
Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.
2011-01-01
Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.
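Such time-to-detect estimates typically rest on the standard trend-detection formula from this literature (in the style of Weatherhead and colleagues), which combines the noise level, trend magnitude, and autocorrelation of the residuals. The sketch below assumes that formula with invented numbers, not the paper's data.

```python
import math

def years_to_detect(trend_per_yr, noise_sd, phi):
    """Approximate years to detect a trend (95% confidence, ~90% power),
    given the standard deviation of monthly-mean residuals (noise_sd, same
    units/timescale as the trend) and their lag-1 autocorrelation phi."""
    return (3.3 * (noise_sd / abs(trend_per_yr))
            * math.sqrt((1 + phi) / (1 - phi))) ** (2.0 / 3.0)

# Illustrative numbers only: 1%/yr trend, 10% natural variability, phi = 0.3
n_star = years_to_detect(trend_per_yr=1.0, noise_sd=10.0, phi=0.3)
# High natural variability and autocorrelation both lengthen detection time
```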
Tong, Zhengrong
2014-07-01
1. The hard facts given in the text prove, the author claims, that relativity theory is a fallacy arising from mathematical errors and experimental perjuries. 2. The conclusions of the study show that a so-called "fundamental gravitino" (with a theoretical mass-energy value of mw = 3.636 × 10⁻⁴⁵ kg) is the material composition of dark matter in the universe, and also the material composition of all elementary particles. This is claimed to be the root cause of the universality of gravitation. In-depth research shows, according to the author, that the "fundamental gravitino" in all space is the material foundation of the electromagnetic interaction, of the propagation of light, and of other physical phenomena. Furthermore, it is argued that stable elementary particles are "droplets" under the strong gravitino pressure (calculated strengths are consistent with the strong interaction) in the entire universe, similar to droplets in a saturated gas. There are steady-state solutions in the mathematical models corresponding to the proton, the electron and the neutron. The theory is claimed to offer a perfect explanation and reasonable conclusions for topics such as dark matter, dark energy, and the Higgs particle... It is suggested that Chinese physicists have begun to keep up with the world's physical trends, starting a new physics era of the fundamental gravitino as the mass-energy source of the universe.
Cosmic Ray Spectral Deformation Caused by Energy Determination Errors
Carlson, Per J; Carlson, Per; Wannemark, Conny
2005-01-01
Using simulation methods, distortion effects on energy spectra caused by errors in the energy determination have been investigated. For cosmic ray proton spectra, falling steeply with kinetic energy E as E-2.7, significant effects appear. When magnetic spectrometers are used to determine the energy, the relative error increases linearly with the energy and distortions with a sinusoidal form appear starting at an energy that depends significantly on the error distribution but at an energy lower than that corresponding to the Maximum Detectable Rigidity of the spectrometer. The effect should be taken into consideration when comparing data from different experiments, often having different error distributions.
Anonymous
2001-01-01
This paper presents a method for non-linear correction of a broadband LFMCW signal utilizing its relative non-linear error. The derivation procedure and the results, simulated by computer and tested on a practical system, are also introduced. The method has two obvious advantages compared with previous methods: (1) the correction has no relation to the delay time td and sweep bandwidth B; (2) the inherent non-linear error of the VCO has no influence on the correction and its final results.
Chiara Volpato
2016-10-01
Dopamine systems mediate key aspects of reward learning. Parkinson's disease (PD) represents a valuable model for studying reward mechanisms because both the disease process and the anti-Parkinson medications influence dopamine neurotransmission. The aim of this pilot study was to investigate whether the level of levodopa differentially modulates learning from positive and negative feedback and its electrophysiological correlate, the error-related negativity (ERN), in PD. Ten PD patients and ten healthy participants performed a two-stage reinforcement learning task. In the Learning Phase they had to learn the correct stimulus within a stimulus pair on the basis of probabilistic positive or negative feedback. Three sets of stimulus pairs were used. In the Testing Phase the participants were tested with novel combinations of the previously experienced stimuli to evaluate whether they had learned more from positive or negative feedback. PD patients performed the task both ON- and OFF-levodopa in two separate sessions while remaining on stable therapy with dopamine agonists. The electroencephalogram was recorded during the task. PD patients were less accurate in negative than in positive learning both OFF- and ON-levodopa. In the OFF-levodopa state they were less accurate than controls in negative learning. PD patients had a smaller ERN amplitude OFF- than ON-levodopa only in negative learning; in the OFF-levodopa state they had a smaller ERN amplitude than controls in negative learning. We hypothesize that high tonic dopaminergic stimulation due to the dopamine agonist medication, combined with the low level of phasic dopamine in the OFF-levodopa state, could prevent the phasic dopamine dips indexed by the ERN that are needed for learning from negative feedback.
Lila-Krasniqi, Zana D.; Shala, Kujtim Sh.; Pustina-Krasniqi, Teuta; Bicaj, Teuta; Dula, Linda J.; Guguvčevski, Ljuben
2015-01-01
Objective: To compare subjects from a group with fixed dentures, a group presenting temporomandibular disorders (TMDs) and a control group with respect to centric relation (CR) and maximum intercuspation (MIC)/habitual occlusion (Hab. Occl.), and to analyze the related variables, compared and analyzed with the electronic T-Scan III system. Materials and Methods: A total of 54 subjects were divided into three groups (17 subjects with fixed dentures, 14 with TMD and 23 controls; selection based on anamnesis), who responded to a Fonseca questionnaire; clinical measurements were analyzed with the electronic T-Scan III system. Occlusal force, presented as a percentage (computed automatically by the T-Scan electronic system), was analyzed in CR and in MIC. Results: Data were presented as mean ± standard deviation; with P > 0.05, the differences were not significant in any of the three groups. Conclusion: In our study, it was concluded that there are no statistically significant differences between CR and MIC in the group of individuals without any symptom or sign of TMD, although in the groups with TMD and with fixed dentures a disharmonic relation between the arches was noticed, with overload of the occlusal force on one side. PMID:26929698
Santamaria, L; Ajith, P; Bruegmann, B; Dorband, N; Hannam, M; Husa, S; Moesta, P; Pollney, D; Reisswig, C; Seiler, J; Krishnan, B
2010-01-01
We present a new phenomenological gravitational waveform model for the inspiral and coalescence of non-precessing spinning black hole binaries. Our approach is based on a frequency-domain matching of post-Newtonian inspiral waveforms with numerical-relativity-based binary black hole coalescence waveforms. We quantify the various possible sources of systematic errors that arise in matching post-Newtonian and numerical relativity waveforms, and we use a matching criterion based on minimizing these errors; we find that the dominant source of error lies in the post-Newtonian waveforms near the merger. An analytical formula for the dominant mode of the gravitational radiation of non-precessing black hole binaries is presented that captures the phenomenology of the hybrid waveforms. Its implementation in current searches for gravitational waves should allow cross-checks of other inspiral-merger-ringdown waveform families and improve the reach of gravitational wave searches.
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s equation.
Girder deformation related phase errors on the undulators for the European X-Ray Free Electron Laser
Yuhui Li
2015-06-01
In long gap tunable undulators, strong magnetic forces always lead to some amount of gap-dependent girder deformation and resulting gap-dependent phase errors. For the undulators for the European XFEL, this problem has been investigated thoroughly and quantitatively. Using the different gap dependencies of suitable shims and pole height tuning, a method is presented which can be applied to reduce the overall gap dependence of the phase error if needed. It is exemplified by tuning one of the undulator segments for the European X-Ray Free Electron Laser back to specifications.
Dowdell, S; Tyler, M; McNamara, J; Sloan, K; Ceylan, A; Rinks, A
2016-11-15
Plane-parallel ionisation chambers are regularly used to conduct relative dosimetry measurements for therapeutic kilovoltage beams during commissioning and routine quality assurance. This paper presents the first quantification of the polarity effect in kilovoltage photon beams for two types of commercially available plane-parallel ionisation chambers used for such measurements. Measurements were performed at various depths along the central axis in a solid water phantom and for different field sizes at 2 cm depth to determine the polarity effect for PTW Advanced Markus and Roos ionisation chambers (PTW-Freiburg, Germany). Data were acquired for kilovoltage beams between 100 kVp (half-value layer (HVL) = 2.88 mm Al) and 250 kVp (HVL = 2.12 mm Cu) and field sizes of 3-15 cm diameter for 30 cm focus-source distance (FSD) and 4 × 4 cm(2)-20 × 20 cm(2) for 50 cm FSD. Substantial polarity effects, up to 9.6%, were observed for the Advanced Markus chamber compared to a maximum 0.5% for the Roos chamber. The magnitude of the polarity effect was observed to increase with field size and beam energy but was consistent with depth. The polarity effect is directly influenced by chamber design, with potentially large polarity effects for some plane-parallel ionisation chambers. Depending on the specific chamber used, polarity corrections may be required for output factor measurements of kilovoltage photon beams. Failure to account for polarity effects could lead to an incorrect dose being delivered to the patient.
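The polarity correction itself is not spelled out in the abstract; as a minimal sketch, the two-reading correction factor commonly used in dosimetry protocols such as IAEA TRS-398, k_pol = (|M+| + |M-|)/(2|M|), can be computed as follows. The function names and the example readings are hypothetical, chosen only to reproduce effects of the reported magnitude.

```python
def polarity_correction(m_pos, m_neg):
    """Polarity correction factor k_pol = (|M+| + |M-|) / (2 |M|), where M is
    the reading at the routinely used polarity (here taken to be M+)."""
    return (abs(m_pos) + abs(m_neg)) / (2.0 * abs(m_pos))

def polarity_effect_percent(m_pos, m_neg):
    """Polarity effect as the percentage difference between the two readings."""
    return 100.0 * abs(abs(m_pos) - abs(m_neg)) / abs(m_pos)

# Hypothetical readings illustrating a ~9.6% effect (Advanced Markus-like)
# versus a ~0.5% effect (Roos-like).
k_markus = polarity_correction(1.000, 0.904)
k_roos = polarity_correction(1.000, 0.995)
```

A chamber showing a 9.6% polarity effect would thus need a correction of roughly 5% applied to its reading, which is far from negligible for output factor measurements.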
Danielmeier, C.; Eichele, T.; Forstmann, B.U.; Tittgemeyer, M.; Ullsperger, M.
2011-01-01
As Seneca the Younger put it, "To err is human, but to persist is diabolical." To prevent repetition of errors, human performance monitoring often triggers adaptations such as general slowing and/or attentional focusing. The posterior medial frontal cortex (pMFC) is assumed to monitor performance pr…
Moosang Kim
2013-01-01
Purpose: To evaluate frequency and severity of segmentation errors of two spectral-domain optical coherence tomography (SD-OCT) devices and error effect on central macular thickness (CMT) measurements. Materials and Methods: Twenty-seven eyes of 25 patients with neovascular age-related macular degeneration, examined using the Cirrus HD-OCT and Spectralis HRA + OCT, were retrospectively reviewed. Macular cube 512 × 128 and 5-line raster scans were performed with the Cirrus and 512 × 25 volume scans with the Spectralis. Frequency and severity of segmentation errors were compared between scans. Results: Segmentation error frequency was 47.4% (baseline), 40.7% (1 month), 40.7% (2 months), and 48.1% (6 months) for the Cirrus, and 59.3%, 62.2%, 57.8%, and 63.7%, respectively, for the Spectralis, differing significantly between devices at all examinations (P < 0.05) except at baseline. Average error score was 1.21 ± 1.65 (baseline), 0.79 ± 1.18 (1 month), 0.74 ± 1.12 (2 months), and 0.96 ± 1.11 (6 months) for the Cirrus, and 1.73 ± 1.50, 1.54 ± 1.35, 1.38 ± 1.40, and 1.49 ± 1.30, respectively, for the Spectralis, differing significantly at 1 month and 2 months (P < 0.02). Automated and manual CMT measurements by the Spectralis were larger than those by the Cirrus. Conclusions: The Cirrus HD-OCT had a lower frequency and severity of segmentation error than the Spectralis HRA + OCT. SD-OCT error should be considered when evaluating retinal thickness.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
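The Levinson algorithm referred to above solves the Toeplitz normal equations for the prediction-error filter. As a minimal illustration (not the authors' code), the standard Levinson-Durbin recursion can be written as follows; the reflection coefficient k stays below 1 in magnitude for a valid autocorrelation sequence, which is the stability property the abstract mentions.

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for the prediction-error filter.

    r     -- autocorrelation sequence r[0..order]
    order -- filter order
    Returns (a, err): filter coefficients a[0..order] (with a[0] == 1) and
    the final prediction-error power.
    """
    a = [0.0] * (order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i]
        for j in range(1, i):
            acc += a[j] * r[i - j]
        k = -acc / err  # reflection coefficient; |k| < 1 keeps the filter stable
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)
    return a, err
```

For the autocorrelation of a first-order autoregressive process, r = [1, 0.5, 0.25], the recursion recovers the single nonzero coefficient -0.5 and a vanishing second-order coefficient.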
Spüler, Martin; Rosenstiel, Wolfgang; Bogdan, Martin
2012-01-01
The goal of a Brain-Computer Interface (BCI) is to control a computer by pure brain activity. Recently, BCIs based on code-modulated visual evoked potentials (c-VEPs) have shown great potential to establish high-performance communication. In this paper we present a c-VEP BCI that uses online adaptation of the classifier to reduce calibration time and increase performance. We compare two different approaches for online adaptation of the system: an unsupervised method and a method that uses the detection of error-related potentials. Both approaches were tested in an online study, in which an average accuracy of 96% was achieved with adaptation based on error-related potentials. This accuracy corresponds to an average information transfer rate of 144 bit/min, which is the highest bitrate reported so far for a non-invasive BCI. In a free-spelling mode, the subjects were able to write with an average of 21.3 error-free letters per minute, which shows the feasibility of the BCI system in a normal-use scenario. In addition we show that a calibration of the BCI system solely based on the detection of error-related potentials is possible, without knowing the true class labels.
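The 144 bit/min figure can be related to accuracy and selection rate through the commonly used Wolpaw definition of the information transfer rate; the sketch below assumes that definition, and since the paper's exact target count and timing are not restated here, the numbers in the example are illustrative only.

```python
import math

def wolpaw_bits_per_selection(n_targets, accuracy):
    """Wolpaw ITR model: B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    p = accuracy
    if p >= 1.0:
        return math.log2(n_targets)
    if p <= 0.0:
        return 0.0  # degenerate case, not meaningful for a working BCI
    return (math.log2(n_targets) + p * math.log2(p)
            + (1.0 - p) * math.log2((1.0 - p) / (n_targets - 1)))

def itr_bits_per_minute(n_targets, accuracy, selections_per_minute):
    """Information transfer rate: bits per selection times selections per minute."""
    return wolpaw_bits_per_selection(n_targets, accuracy) * selections_per_minute
```

For example, a 32-target speller at 96% accuracy conveys about 4.56 bits per selection, so the reported bitrate corresponds to roughly 30 selections per minute under this model.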
Rieger, Martina; Martinez, Fanny; Wenke, Dorit
2011-01-01
Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…
Chen, Jincan; Yan, Zijun; Wu, Liqing
1996-06-01
Considering a thermoelectric generator as a heat engine cycle, the general differential equations of the temperature field inside thermoelectric elements are established by means of nonequilibrium thermodynamics. These equations are used to study the influence of heat leak, Joule's heat, and Thomson heat on the performance of the thermoelectric generator. New expressions are derived for the power output and the efficiency of the thermoelectric generator. The maximum power output is calculated and the optimal matching condition of load is determined. The maximum efficiency is discussed by a representative numerical example. The aim of this research is to provide some novel conclusions and redress some errors existing in a related investigation.
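The paper's new expressions are not reproduced here, but the familiar textbook quantities they refine can be sketched: the maximum efficiency of a thermoelectric generator with figure of merit ZT, and the maximum power delivered into a matched load. Treat this as a minimal reference implementation of the standard formulas, not the paper's derivation.

```python
import math

def carnot_efficiency(t_hot, t_cold):
    """Carnot limit for reservoirs at absolute temperatures t_hot > t_cold."""
    return 1.0 - t_cold / t_hot

def max_efficiency(t_hot, t_cold, zt):
    """Textbook maximum thermoelectric efficiency:
    eta = eta_Carnot * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + Tc/Th)."""
    s = math.sqrt(1.0 + zt)
    return carnot_efficiency(t_hot, t_cold) * (s - 1.0) / (s + t_cold / t_hot)

def max_power(seebeck, delta_t, resistance):
    """Maximum electrical power at matched load (R_load = R_internal):
    P_max = (alpha * dT)^2 / (4 R)."""
    return (seebeck * delta_t) ** 2 / (4.0 * resistance)
```

With ZT -> infinity the efficiency approaches the Carnot limit; with ZT = 0 it vanishes, which is consistent with the loss mechanisms (heat leak, Joule heat, Thomson heat) the paper analyzes.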
Garrido, Nuno D; Silva, António J; Fernandes, Ricardo J; Barbosa, Tiago M; Costa, Aldo M; Marinho, Daniel A; Marques, Mário C
2012-06-01
The relationship between handgrip isometric strength and swimming performance was assessed in the four competitive swimming strokes in swimmers of different age groups and of both sexes. 78 national-level Portuguese swimmers (39 males, 39 females) were selected for this study. Grip strength, previously used as a marker of overall strength to predict future swimming performance, was measured using a hand dynamometer. The best competitive times at 100 and 200 m in all four swimming strokes were converted into 2010 FINA points. Non-parametric tests were used to evaluate differences between groups. Pearson product-moment correlations were computed to verify the association between variables. Handgrip maximum isometric strength was significantly correlated with swimming performance, particularly among female swimmers. Among female age group swimmers, the relationship between handgrip and 100-m freestyle was significant. Handgrip isometric strength seems to be related to swimming performance, especially to 100-m freestyle and in female swimmers. For all other distances and strokes, technique and training probably are more influential than semi-hereditary strength markers such as grip strength.
Oh, Seungkyung
2012-01-01
We perform the largest currently available set of direct N-body calculations of young star cluster models to study the dynamical influence, especially through the ejections of the most massive star in the cluster, on the current relation between the maximum-stellar-mass and the star-cluster-mass. We vary several initial parameters such as the initial half-mass radius of the cluster, the initial binary fraction, and the degree of initial mass segregation. Two different pairing methods are used to construct massive binaries for more realistic initial conditions of massive binaries. We find that for lower mass clusters (<= 1000 Msun), no most-massive star escapes the cluster within 3 Myr regardless of the initial conditions if clusters have initial half-mass radii, r_0.5, >= 0.8 pc. However, a few of the initially smaller sized clusters (r_0.5 = 0.3 pc), which have a higher density, eject their most massive star within 3 Myr. If clusters form with a compact size and their massive stars are born in a binary system with…
EFSA Panel on Contaminants in the Food Chain (CONTAM)
2013-12-01
The European Food Safety Authority (EFSA) was asked to deliver a scientific opinion on the risks for public health related to a possible increase of the maximum level (ML) of deoxynivalenol (DON) for certain semi-processed cereal products from 750 µg/kg to 1000 µg/kg. For this statement, EFSA relied on existing occurrence data on DON in food collected between 2007 and 2012 and reported by 21 European countries. Due to the lack of appropriate occurrence data from pre-market monitoring, the impact of increasing the ML was estimated using a simulation approach, resulting in an expected increase in mean levels of the respective food products by a factor of 1.14-1.16. Based on median chronic exposure in several age classes, the percentage of consumers exceeding the group provisional maximum tolerable daily intake (PMTDI) of 1 μg/kg body weight (b.w.) for the sum of DON and its 3- and 15-acetyl-derivatives, established by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) in 2010, is approximately 2-fold higher with the suggested increased ML than with the current ML. Several acute exposure scenarios resulted in exceedance of the group acute reference dose (ARfD) of 8 µg/kg b.w. established by JECFA, with up to 25.9 % of the consumption days above the group ARfD. The EFSA Scientific Panel on Contaminants in the Food Chain notes that the group health-based guidance values (HBGVs) include 3-Ac-DON and 15-Ac-DON. The exposure from the acetyl-derivatives has not been covered in this statement, since the acetyl-derivatives are not included in the current or suggested increased ML and because only few occurrence data are available. An increase of the DON ML can be expected to be associated with an increase of the levels of DON and Ac-DONs, and can therefore increase the exposure and consequently the exceedances of the group HBGVs.
Doran, C F; Gates, S J; Hübsch, T; Iga, K M; Landweber, G D
2008-01-01
Previous work has shown that the classification of indecomposable off-shell representations of N-supersymmetry, depicted as Adinkras, may be factored into specifying the topologies available to Adinkras, and then the height-assignments for each topological type. The latter problem being solved by a recursive mechanism that generates all height-assignments within a topology, it remains to classify the former. Herein we show that this problem is equivalent to classifying certain (1) graphs and (2) error-correcting codes.
An error assessment of the kriging based approximation model using a mean square error
Ju, Byeong Hyeon; Cho, Tae Min; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)
2006-08-15
A kriging model is a type of approximation model, used as a deterministic surrogate for a computationally expensive analysis or simulation. Although it has various advantages, it is difficult to assess the accuracy of the approximated model. It is generally known that the Mean Square Error (MSE) obtained from a kriging model cannot provide statistically exact error bounds, in contrast to a response surface method, so cross validation is mainly used. But cross validation also carries many uncertainties, and it cannot be used when a maximum error is required in the given region. To solve this problem, we first propose a modified mean square error which can account for relative errors. Using the modified mean square error, we develop a strategy of adding a new sample at the location where the MSE is maximal when the MSE is used to assess the kriging model. Finally, we offer guidelines for the use of the MSE obtained from the kriging model. Four test problems show that the proposed strategy is a proper method for assessing the accuracy of the kriging model. Based on the results of the four test problems, a convergence coefficient of 0.01 is recommended for an exact function approximation.
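The sampling strategy described, adding the next sample where the (relative) MSE from the kriging model is largest, can be sketched generically; the function names and the candidate-scoring interface are hypothetical, since the paper's own formulation of the modified MSE is not reproduced here.

```python
def next_sample_point(candidates, mse, prediction, relative=True):
    """Pick the candidate where the kriging MSE is largest.

    candidates -- iterable of candidate sample locations
    mse        -- callable returning the kriging MSE estimate at a location
    prediction -- callable returning the kriging prediction at a location
    relative   -- if True, score by MSE normalized by |prediction|, mimicking
                  a modified MSE that accounts for relative errors
    """
    def score(x):
        m = mse(x)
        if relative:
            return m / max(abs(prediction(x)), 1e-12)  # guard near-zero predictions
        return m

    return max(candidates, key=score)
```

Each call yields one infill point; refitting the kriging model after adding it, and repeating until the maximum (relative) MSE falls below a threshold such as the 0.01 convergence coefficient recommended above, completes the loop.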
王鼎; 潘苗; 吴瑛
2011-01-01
For the self-calibration of direction-dependent gain-phase errors under a deterministic signal model, a maximum likelihood method (MLM) for calibrating direction-dependent gain-phase errors with carry-on instrumental sensors is presented. In order to maximize the high-dimensional nonlinear cost function appearing in the MLM, an improved alternating projection iteration algorithm is proposed, which optimizes the azimuths and the direction-dependent gain-phase errors. Closed-form expressions of the Cramér-Rao bound (CRB) for the azimuths and gain-phase errors are derived. Simulation experiments show the effectiveness and advantages of the new method.
Kaufmann, Tobias; Kübler, Andrea
2014-10-01
Objective. The speed of brain-computer interfaces (BCI), based on event-related potentials (ERP), is inherently limited by the commonly used one-stimulus paradigm. In this paper, we introduce a novel paradigm that can increase the spelling speed by a factor of 2, thereby extending the one-stimulus paradigm to a two-stimulus paradigm. Two different stimuli (a face and a symbol) are presented at the same time, superimposed on different characters and ERPs are classified using a multi-class classifier. Here, we present the proof-of-principle that is achieved with healthy participants. Approach. Eight participants were confronted with the novel two-stimulus paradigm and, for comparison, with two one-stimulus paradigms that used either one of the stimuli. Classification accuracies (percentage of correctly predicted letters) and elicited ERPs from the three paradigms were compared in a comprehensive offline analysis. Main results. The accuracies slightly decreased with the novel system compared to the established one-stimulus face paradigm. However, the use of two stimuli allowed for spelling at twice the maximum speed of the one-stimulus paradigms, and participants still achieved an average accuracy of 81.25%. This study introduced an alternative way of increasing the spelling speed in ERP-BCIs and illustrated that ERP-BCIs may not yet have reached their speed limit. Future research is needed in order to improve the reliability of the novel approach, as some participants displayed reduced accuracies. Furthermore, a comparison to the most recent BCI systems with individually adjusted, rapid stimulus timing is needed to draw conclusions about the practical relevance of the proposed paradigm. Significance. We introduced a novel two-stimulus paradigm that might be of high value for users who have reached the speed limit with the current one-stimulus ERP-BCI systems.
Lee, Nam-Ju; Cho, Eunhee; Bakken, Suzanne
2010-03-01
The purposes of this study were to develop a taxonomy for detection of errors related to hypertension management and to apply the taxonomy to retrospectively analyze the documentation of nurses in Advanced Practice Nurse (APN) training. We developed the Hypertension Diagnosis and Management Error Taxonomy and applied it in a sample of adult patient encounters (N = 15,862) that were documented in a personal digital assistant-based clinical log by registered nurses in APN training. We used Structured Query Language (SQL) queries to retrieve hypertension-related data from the central database. The data were summarized using descriptive statistics. Blood pressure was documented in 77.5% (n = 12,297) of encounters; 21% had high blood pressure values. Missed diagnosis, incomplete diagnosis and misdiagnosis rates were 63.7%, 6.8% and 7.5% respectively. In terms of treatment, the omission rates were 17.9% for essential medications and 69.9% for essential patient teaching. Contraindicated anti-hypertensive medications were documented in 12% of encounters with co-occurring diagnoses of hypertension and asthma. The Hypertension Diagnosis and Management Error Taxonomy was useful for identifying errors based on documentation in a clinical log. The results provide an initial understanding of the nature of errors associated with hypertension diagnosis and management of nurses in APN training. The information gained from this study can contribute to educational interventions that promote APN competencies in identification and management of hypertension as well as overall patient safety and informatics competencies. Copyright © 2010 Korean Society of Nursing Science. All rights reserved.
Klopotowska, Joanna E; Kuiper, Rob; van Kan, Hendrikus J; de Pont, Anne-Cornelie; Dijkgraaf, Marcel G; Lie-A-Huen, Loraine; Vroom, Margreeth B; Smorenburg, Susanne M
2010-01-01
Patients admitted to an intensive care unit (ICU) are at high risk for prescribing errors and related adverse drug events (ADEs). An effective intervention to decrease this risk, based on studies conducted mainly in North America, is on-ward participation of a clinical pharmacist in an ICU team. As the Dutch Healthcare System is organized differently and the on-ward role of hospital pharmacists in Dutch ICU teams is not well established, we conducted an intervention study to investigate whether participation of a hospital pharmacist can also be an effective approach in reducing prescribing errors and related patient harm (preventable ADEs) in this specific setting. A prospective study compared a baseline period with an intervention period. During the intervention period, an ICU hospital pharmacist reviewed medication orders for patients admitted to the ICU, noted issues related to prescribing, formulated recommendations and discussed those during patient review meetings with the attending ICU physicians. Prescribing issues were scored as prescribing errors when consensus was reached between the ICU hospital pharmacist and ICU physicians. During the 8.5-month study period, medication orders for 1,173 patients were reviewed. The ICU hospital pharmacist made a total of 659 recommendations. During the intervention period, the rate of consensus between the ICU hospital pharmacist and ICU physicians was 74%. The incidence of prescribing errors during the intervention period was significantly lower than during the baseline period: 62.5 per 1,000 monitored patient-days versus 190.5 per 1,000 monitored patient-days, respectively (P < 0.001). Preventable ADEs (National Coordinating Council for Medication Error Reporting and Prevention severity categories E and F) were reduced from 4.0 per 1,000 monitored patient-days during the baseline period to 1.0 per 1,000 monitored patient-days during the intervention period (P = 0.25). Per monitored patient-day, the intervention itself cost €3, but might have saved €26 to €40 by preventing…
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Dobeš, Josef; Grábner, Martin; Puričer, Pavel; Vejražka, František; Míchal, Jan; Popp, Jakub
2017-05-01
Nowadays, there exist relatively precise pHEMT models available for computer-aided design, and they are frequently compared to each other. However, such comparisons are mostly based on absolute errors of drain-current equations and their derivatives. In the paper, a novel method is suggested based on relative root-mean-square errors of both the drain current and its derivatives up to the third order. Moreover, the relative errors are subsequently relativized to the best model in each category to further clarify the obtained accuracies of both the drain current and its derivatives. Furthermore, one of our older and two newly suggested models are also included in comparison with the traditionally precise Ahmed, TOM-2 and Materka ones. The assessment is performed using measured characteristics of a pHEMT operating up to 110 GHz. Finally, the usability of the proposed models including the higher-order derivatives is illustrated using s-parameter analysis and measurement at several operating points as well as computation and measurement of IP3 points of a low-noise amplifier of a multi-constellation satellite navigation receiver with an ATF-54143 pHEMT.
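As a minimal sketch of the error measure described (the paper's exact normalization is not restated here, so this particular definition is an assumption): a relative root-mean-square error normalized by the RMS of the measured data, followed by relativization to the best model in a category.

```python
import math

def relative_rms_error(measured, modeled):
    """Relative RMS error: sqrt(sum((y - yhat)^2)) / sqrt(sum(y^2))."""
    num = sum((y - yh) ** 2 for y, yh in zip(measured, modeled))
    den = sum(y ** 2 for y in measured)
    return math.sqrt(num / den)

def relativize_to_best(errors_by_model):
    """Express each model's error as a multiple of the best (smallest) error."""
    best = min(errors_by_model.values())
    return {name: e / best for name, e in errors_by_model.items()}
```

The relativized scores make the comparison scale-free: the best model in each category reads exactly 1, and every other model's value says how many times worse it is.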
Diagnostic errors in pediatric radiology
Taylor, George A.; Voss, Stephan D. [Children's Hospital Boston, Department of Radiology, Harvard Medical School, Boston, MA (United States); Melvin, Patrice R. [Children's Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Graham, Dionne A. [Children's Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Harvard Medical School, The Department of Pediatrics, Boston, MA (United States)]
2011-03-15
Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over…
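For the Mean Energy Model mentioned above, the maximum-entropy distribution under a mean-energy constraint is the Gibbs distribution p_i proportional to exp(-lambda * E_i), with lambda chosen so the constraint holds. A small numerical sketch follows; the bisection bounds are arbitrary assumptions.

```python
import math

def gibbs_distribution(energies, lam):
    """Gibbs form: p_i proportional to exp(-lam * E_i)."""
    weights = [math.exp(-lam * e) for e in energies]
    total = sum(weights)
    return [w / total for w in weights]

def maxent_mean_energy(energies, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution with sum_i p_i E_i = target_mean,
    found by bisection on lambda (mean energy decreases as lambda grows)."""
    def mean(lam):
        p = gibbs_distribution(energies, lam)
        return sum(pi * e for pi, e in zip(p, energies))

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean(mid) > target_mean:
            lo = mid  # mean too high: increase lambda
        else:
            hi = mid
    return gibbs_distribution(energies, 0.5 * (lo + hi))
```

With energies {0, 1} and target mean 0.5 the constraint forces lambda = 0, i.e. the uniform distribution, which is also the unconstrained entropy maximizer.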
Errors in Radiologic Reporting
Esmaeel Shokrollahi
2010-05-01
Given that the report is a professional document and bears the associated responsibilities, all of the radiologist's errors appear in it, either directly or indirectly. It is not easy to distinguish and classify the mistakes made when a report is prepared, because in most cases the errors are complex and attributable to more than one cause, and because many errors depend on the individual radiologist's professional, behavioral and psychological traits. In fact, anyone can make a mistake, but some radiologists make more mistakes, and some types of mistakes are predictable to some extent. Reporting errors can be categorized in different ways: universal vs. individual; human-related vs. system-related; perceptive vs. cognitive; and descriptive, interpretative or decision-related. Perceptive errors comprise false positives and false negatives (non-identification or erroneous identification); cognitive errors may be knowledge-based or psychological.
Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif
2012-04-01
Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.
Measuring Error Analysis of Relative Humidity with Dry and Wet Bulb Method
薛相美; 许文革
2011-01-01
Based on the principle of the dry and wet bulb method for measuring relative humidity, the influence of dry-bulb temperature, wet-bulb temperature and wind velocity on the measurement error of relative humidity was analyzed. The results demonstrate that the measurement error is higher at low temperature and high humidity than at high temperature and low humidity. The analysis offers practical guidance for selecting suitable instruments for relative humidity measurement.
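The dry-and-wet-bulb calculation itself can be sketched with the standard psychrometer equation combined with a Magnus-type saturation vapor pressure formula; the coefficient below is the usual value for a ventilated (Assmann-type) psychrometer and is an assumption, not taken from the article.

```python
import math

def saturation_vapor_pressure(t_c):
    """Magnus approximation for saturation vapor pressure over water, in hPa,
    for temperature in degrees Celsius."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(t_dry, t_wet, pressure_hpa=1013.25, a=6.62e-4):
    """Psychrometer equation: e = es(Tw) - A * P * (Td - Tw); RH = 100 e / es(Td).
    A (per degree Celsius) is the psychrometer coefficient for a ventilated
    instrument; P is the air pressure in hPa."""
    e = saturation_vapor_pressure(t_wet) - a * pressure_hpa * (t_dry - t_wet)
    return 100.0 * e / saturation_vapor_pressure(t_dry)
```

Because es(T) flattens out at low temperature, a fixed error in either thermometer reading translates into a larger relative-humidity error there, which is consistent with the finding above.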
... does the eye focus light? In order to see clearly, light rays from an object must focus onto the ... The refractive errors are: myopia, hyperopia and astigmatism [See figures 2 and 3]. What is hyperopia (farsightedness)? Hyperopia occurs when light rays focus behind the retina (because the eye ...
Hargrave, B K
2014-11-01
Speculation as to optical malfunction has led to dissatisfaction with the theory that the lens is the sole agent in accommodation and to the suggestion that other parts of the eye are also conjointly involved. Around half a century ago, Robert Brooks Simpkins suggested that the mechanical features of the human eye were precisely such as to allow for a lengthening of the globe when the eye accommodated. Simpkins was not an optical man, but his theory is both imaginative and comprehensive and deserves consideration. It is submitted here that accommodation is in fact a twofold process and that, although it involves the lens, it is achieved primarily by means of a give-and-take interplay between adducting and abducting external muscles, whereby an elongation of the eyeball is brought about by a stretching of the delicate elastic fibres immediately behind the cornea. The three muscles responsible for convergence (superior, internal and inferior recti) all pull from in front backwards, while of the three abductors (external rectus and the two obliques) the obliques pull from behind forwards, allowing for an easy elongation as the eye turns inwards and a return to its original length as the abducting muscles regain their former tension, returning the eye to distance vision. In refractive errors, the altered length of the eyeball disturbs the harmonious give-and-take relationship between adductors and abductors. Such stresses are likely to be perpetuated and the error exacerbated. Speculation is not directed towards a search for a possible cause of the muscular imbalance, since none is suspected. Muscles not used rapidly lose tone, as evidenced after removal of a limb from plaster. Early attention to the need for restorative exercise is essential, and results are usually impressive. If flexibility of the external muscles of the eyes is essential for continuing good sight, presbyopia can be avoided and with it the supposed necessity of glasses in middle life.
Kott Phillip S.
2014-09-01
This article describes a two-step calibration-weighting scheme for a stratified simple random sample of hospital emergency departments. The first step adjusts for unit nonresponse. The second increases the statistical efficiency of most estimators of interest. Both use a measure of emergency-department size and other useful auxiliary variables contained in the sampling frame. Although many survey variables are roughly a linear function of the measure of size, response is better modeled as a function of the log of that measure. Consequently, the log of size is a calibration variable in the nonresponse-adjustment step, while the measure of size itself is a calibration variable in the second calibration step. Nonlinear calibration procedures are employed in both steps. We show with 2010 DAWN data that estimating variances as if a one-step calibration-weighting routine had been used when there were in fact two steps can, after appropriately adjusting the finite-population correction, produce standard-error estimates that tend to be slightly conservative.
Kaiadi, Mehrzad; Tunestål, Per; Johansson, Bengt
2010-01-01
High EGR rates combined with turbocharging have been identified as a promising way to increase the maximum load and efficiency of heavy-duty spark-ignition natural gas engines. With stoichiometric conditions a three-way catalyst can be used, which means that regulated emissions can be kept at very low levels. Most heavy-duty NG engines are diesel engines converted for SI operation. These engines share components with the diesel engine, which puts limits on higher exh...
Radford, T
2004-01-01
"Ben Varcoe wants to find a relatively small mistake in Einstein's theory of special relativity. To do this, he will slow light down from 300,000 km per second to 10 metres per second - about the speed of Darren Campbell - and see how it behaves" (1 page)
Kecklund, Lena Jacobsson; Svenson, Ola
1997-04-01
The present study investigated the relationships between the operator's appraisal of his own work situation and the quality of his own work performance as well as self-reported errors in a nuclear power plant control room. In all, 98 control room operators from two nuclear power units filled out a questionnaire and several diaries during two operational conditions, annual outage and normal operation. As expected, the operators reported higher work demands in annual outage as compared to normal operation. In response to the increased demands, the operators reported that they used coping strategies such as increased effort, decreased aspiration level for work performance quality and increased use of delegation of tasks to others. This way of coping does not reflect less positive motivation for the work during the outage period. Instead, the operators maintain the same positive motivation for their work, and succeed in being more alert during morning and night shifts. However, the operators feel less satisfied with their work result. The operators also perceive the risk of making minor errors as increasing during outage. The decreased level of satisfaction with work result during outage is a fact despite the lowering of aspiration level for work performance quality during outage. In order to decrease relative frequencies of minor errors, special attention should be given to reducing work demands, such as time pressure and memory demands. In order to decrease misinterpretation errors, special attention should be given to organizational factors such as planning and shift turnovers in addition to training. In summary, the outage period seems to be a significantly more vulnerable window in the management of a nuclear power plant than the normal power production state. Thus, an increased focus on the outage period and human factors issues, addressing the synergetic effects of work demands, organizational factors and coping resources, is an important area for improvement.
Immediate error correction process following sleep deprivation
HSIEH, SHULAN; CHENG, I‐CHEN; TSAI, LING‐LING
2007-01-01
Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event-related potentials (ERPs...
Ichikawa, Naho; Siegle, Greg J; Dombrovski, Alexandre; Ohira, Hideki
2010-12-01
In this study, we examined whether the feedback-related negativity (FRN) is associated with both subjective and objective (model-estimated) reward prediction errors (RPE) per trial in a reinforcement learning task in healthy adults (n=25). The level of RPE was assessed (1) by subjective ratings per trial and (2) by a computational model of reinforcement learning. Model-estimated RPE was highly correlated with subjective RPE (r=.82), and the grand-averaged ERP waves based on trials with high and low model-estimated RPE differed significantly only in the time period of the FRN component.
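The reward prediction error at the heart of the reinforcement learning model mentioned above is, in its simplest (Rescorla-Wagner-style) form, the difference between the received reward and the current value estimate. This sketch is a generic illustration of that quantity, not the specific model fitted in the study; the learning rate of 0.5 is an arbitrary choice.

```python
def update_value(value, reward, alpha=0.5):
    """One Rescorla-Wagner step: the value estimate moves toward the
    observed reward by a fraction alpha of the prediction error."""
    rpe = reward - value          # reward prediction error (RPE)
    return value + alpha * rpe, rpe

value = 0.0
rpes = []
for reward in [1.0, 1.0, 1.0]:    # a repeatedly rewarded action
    value, rpe = update_value(value, reward)
    rpes.append(rpe)
# RPEs shrink as the reward becomes predicted: 1.0, 0.5, 0.25
```

Per-trial RPEs computed this way are the "model-estimated" quantities that can be correlated with subjective ratings or with ERP amplitudes.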
Refractive error sensing from wavefront slopes.
Navarro, Rafael
2010-01-01
The problem of measuring the objective refractive error with an aberrometer has proved more elusive than expected. Here, the formalism of differential geometry is applied to develop a theoretical framework of refractive error sensing. At each point of the pupil, the local refractive error is given by the wavefront curvature, which is a 2 × 2 symmetric matrix whose elements are directly related to sphere, cylinder, and axis. Aberrometers usually measure the local gradient of the wavefront. Refractive error sensing then consists of differentiating the gradient, instead of integrating as in wavefront sensing. A statistical approach is proposed to pass from the local to the global (clinically meaningful) refractive error, in which the best correction is assumed to be the maximum likelihood estimation. In the practical implementation, this corresponds to the mode of the joint histogram of the 3 different elements of the curvature matrix. Results obtained both in computer simulations and with real data provide a close agreement and consistency with the main optical image quality metrics such as the Strehl ratio.
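The relation between the 2 × 2 curvature (power) matrix and sphere/cylinder/axis can be sketched with the standard power-vector decomposition (M, J0, J45). The conversion below uses the common minus-cylinder convention; the sign and axis conventions are assumptions from general ophthalmic optics, not details given by this abstract, and the function name is hypothetical.

```python
import math

def curvature_to_refraction(f_xx, f_yy, f_xy):
    """Convert a local symmetric 2x2 wavefront-curvature (power) matrix
    [[f_xx, f_xy], [f_xy, f_yy]] to sphere, cylinder (minus convention),
    and axis in degrees, via power-vector components M, J0, J45."""
    m = (f_xx + f_yy) / 2.0            # spherical equivalent (mean power)
    j0 = (f_xx - f_yy) / 2.0           # Jackson cross-cylinder, 0/90 deg
    j45 = f_xy                         # Jackson cross-cylinder, 45 deg
    cyl = -2.0 * math.hypot(j0, j45)   # minus-cylinder magnitude
    sph = m - cyl / 2.0
    axis = math.degrees(0.5 * math.atan2(j45, j0)) % 180.0
    return sph, cyl, axis
```

Evaluating this at every pupil point gives the per-point refractions whose joint histogram the paper's maximum-likelihood (mode) step would summarize.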
Tightness of the recentered maximum of the two-dimensional discrete Gaussian Free Field
Bramson, Maury
2010-01-01
We consider the maximum of the discrete two-dimensional Gaussian free field (GFF) in a box, and prove that its maximum, centered at its mean, is tight, settling a long-standing conjecture. The proof combines a recent observation of Bolthausen, Deuschel and Zeitouni with elements from (Bramson 1978) and comparison theorems for Gaussian fields. An essential part of the argument is the precise evaluation, up to an error of order 1, of the expected value of the maximum of the GFF in a box. Related Gaussian fields, such as the GFF on a two-dimensional torus, are also discussed.
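For context, the "precise evaluation, up to an error of order 1" of the expected maximum takes the following form in the literature on this problem (stated here from the general Bramson-Zeitouni line of work, not quoted from this abstract); for the GFF on an $N \times N$ box with Dirichlet boundary conditions,

\[
\mathbb{E}\Big[\max_{v} X_v\Big] \;=\; 2\sqrt{g}\,\log N \;-\; \frac{3}{4}\sqrt{g}\,\log\log N \;+\; O(1),
\qquad g = \frac{2}{\pi},
\]

where the $\log\log N$ correction, with its characteristic factor $3/4$, is what distinguishes the GFF maximum from the maximum of independent Gaussians with the same variance.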
1985-01-01
More than 2,000 healthy Americans die each year during general anesthesia, and at least half of these deaths may be preventable. Anesthetists and equipment manufacturers have made considerable progress in improving anesthesia safety. However, much more needs to be done, especially in "human-factors" areas such as improved training, consistent use of preanesthesia checklists, and anesthetists' willingness to enhance their vigilance by using appropriate monitoring equipment. While defective equipment and supplies are the direct cause of relatively few deaths, inexpensive oxygen analyzers and disconnect alarms could, if available in more ORs, warn anesthetists in time to convert many deaths to near misses. Some anesthetists are using other monitoring technologies that are more costly, but can detect a wider range of problems. The anesthesia community could expand its anesthesia-safety leadership and guidance, by improving technology-related training and by developing practice standards for anesthetists and safety standards for equipment. The Joint Commission on Accreditation of Hospitals could impose specific safety requirements on hospitals; malpractice insurance carriers could require anesthetists and hospitals to use monitors and alarms during all procedures; and the Food and Drug Administration could actively stimulate and oversee these efforts and perhaps provide seed money for some of them. The necessary equipment costs would likely be offset by long-term savings in malpractice premiums, as anesthesia incidents are the most costly of all types of malpractice claims. Concerted efforts such as these could greatly reduce the number of avoidable anesthesia-related deaths.
1989-01-01
001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Vinay BC; Nikhitha MK; Patel Sunil B
2015-01-01
This review article explains the definition of medication errors, the scope of the medication error problem, the types and common causes of medication errors, and the monitoring, consequences, prevention and management of medication errors, supported by clear tables for easy understanding.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
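The regularizer's target quantity, mutual information between classifier responses and true labels, can be illustrated with a plug-in (entropy-estimation) computation over discrete label counts. This is a generic sketch of the information measure itself, not the paper's differentiable objective or optimization procedure.

```python
import math
from collections import Counter

def mutual_information(labels_true, labels_pred):
    """Plug-in mutual information (in nats) between true labels and
    classification responses, I(Y;R) = sum p(y,r) log[p(y,r)/(p(y)p(r))]."""
    n = len(labels_true)
    count_y = Counter(labels_true)
    count_r = Counter(labels_pred)
    count_joint = Counter(zip(labels_true, labels_pred))
    mi = 0.0
    for (y, r), c in count_joint.items():
        p_joint = c / n
        # p_joint / (p_y * p_r) == p_joint * n^2 / (count_y * count_r)
        mi += p_joint * math.log(p_joint * n * n / (count_y[y] * count_r[r]))
    return mi
```

A perfectly informative classifier attains I(Y;R) = H(Y), while responses independent of the labels give zero, which is exactly the gap the proposed regularizer pushes to close.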
Measurement Error Models in Astronomy
Kelly, Brandon C
2011-01-01
I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the methods of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.
Use of Maximum Entropy Modeling in Wildlife Research
Roger A. Baldwin
2009-11-01
Maximum entropy (Maxent) modeling has great potential for identifying distributions and habitat selection of wildlife given its reliance on only presence locations. Recent studies indicate Maxent is relatively insensitive to spatial errors associated with location data, requires few locations to construct useful models, and performs better than other presence-only modeling approaches. Further advances are needed to better define model thresholds, to test model significance, and to address model selection. Additionally, development of modeling approaches is needed when using repeated sampling of known individuals to assess habitat selection. These advancements would strengthen the utility of Maxent for wildlife research and management.
Liu Li-Li; Jiang Cheng-Bao
2011-01-01
The oxidation microstructure and maximum energy product (BH)max loss of a Sm(Co0.76,Fe0.7,Cu0.1,Zr0.04)7 magnet oxidized at 500 °C were systematically investigated. Three different oxidation regions were formed in the oxidized magnet: a continuous external oxide scale, an internal reaction layer, and a diffusion zone. Both room-temperature and high-temperature (BH)max losses exhibited the same parabolic increase with oxidation time. An oxygen diffusion model was proposed to simulate the dependence of (BH)max loss on oxidation time. It is found that the external oxide scale has little effect on the (BH)max loss, and both the internal reaction layer and the diffusion zone result in the (BH)max loss. Moreover, the diffusion zone leads to more (BH)max loss than the internal reaction layer. The values of the oxidation rate constant k for the internal reaction layer and the oxygen diffusion coefficient D for the diffusion zone were obtained; they are about 1.91×10^-10 cm^2/s and 6.54×10^-11 cm^2/s, respectively.
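The parabolic growth the abstract reports can be sketched numerically. The rate law below, x^2 = k t, is the textbook parabolic form; the abstract does not state the exact functional form of the paper's diffusion model, so this is an assumed illustration using the reported rate constant.

```python
import math

# Rate constant for the internal reaction layer, as reported: 1.91e-10 cm^2/s.
K_PARABOLIC = 1.91e-10  # cm^2/s

def layer_thickness(t_seconds, k=K_PARABOLIC):
    """Reaction-layer thickness (cm) after time t, assuming the
    parabolic rate law x**2 = k * t."""
    return math.sqrt(k * t_seconds)

x_10h = layer_thickness(10 * 3600)  # ~2.6e-3 cm (~26 um) after 10 h at 500 C
```

Doubling the oxidation time increases the thickness only by a factor of sqrt(2), which is why both (BH)max losses grow sub-linearly (parabolically) with time.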
Kimball, R.F.; Boling, M.E.; Perdue, S.W.
1977-01-01
Haemophilus influenzae Rd and its derivatives are mutated either not at all or to only a very small extent by ultraviolet (uv) radiation, x rays, methyl methanesulfonate, and nitrogen mustard, though they are readily mutated by such agents as N-methyl-N'-nitro-N-nitrosoguanidine, ethyl methanesulfonate, and nitrosocarbaryl (NC). In these respects H. influenzae Rd resembles the lexA mutants of Escherichia coli that lack the SOS or reclex uv-inducible error-prone repair system. This similarity is further brought out by the observation that chloramphenicol has little or no effect on post-replication repair after uv irradiation. In E. coli, chloramphenicol has been reported to considerably inhibit post-replication repair in the wild type but not in the lexA mutant. Earlier work has suggested that most or all the mutations induced in H. influenzae by NC result from error-prone repair. Combined treatment with NC and either x rays or uv shows that the NC error-prone repair system does not produce mutations from the lesions induced by these radiations even while it is producing them from its own lesions. It is concluded that the NC error-prone repair system or systems and the reclex error-prone system are different.
Daniel de Almeida
2014-05-01
Traditional GARCH models fail to explain at least two of the stylized facts found in financial series: the asymmetry of the distribution of errors and the leverage effect. The leverage effect stems from the fact that losses have a greater influence on future volatilities than do gains. Asymmetry means that the distribution of losses has a heavier tail than the distribution of gains. We test whether these features are present in some series related to the Brazilian market. To test for the presence of these features, the series were fitted by GARCH(1,1), TGARCH(1,1), EGARCH(1,1), and GJR-GARCH(1,1) models with standardized Student t distribution errors with and without asymmetry. Information criteria and statistical tests of the significance of the symmetry and leverage parameters are used to compare the models. The estimates of the VaR (value-at-risk) are also used in the comparison. The conclusion is that both stylized facts are present in some series, mostly simultaneously.
Parameter Estimation for an Electric Arc Furnace Model Using Maximum Likelihood
Jesser J. Marulanda-Durango
2012-12-01
In this paper, we present a methodology for estimating the parameters of a model for an electric arc furnace by using maximum likelihood estimation. Maximum likelihood estimation is one of the most employed methods for parameter estimation in practical settings. The model for the electric arc furnace that we consider takes into account the non-periodic and non-linear variations in the voltage-current characteristic. We use NETLAB, an open-source MATLAB® toolbox, for solving a set of non-linear algebraic equations that relate all the parameters to be estimated. Results obtained through simulation of the model in PSCAD™ are contrasted against real measurements taken during the furnace's most critical operating point. We show how the model for the electric arc furnace, with appropriate parameter tuning, captures in great detail the real voltage and current waveforms generated by the system. Results obtained show a maximum error of 5% in the current's root mean square value.
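As a minimal illustration of the maximum likelihood principle the paper relies on (not of the arc-furnace model itself, whose equations are not given in the abstract), the Gaussian case has closed-form ML estimates: the sample mean, and the variance with an n (not n-1) divisor.

```python
def gaussian_mle(samples):
    """Closed-form maximum likelihood estimates (mean, variance) for
    i.i.d. Gaussian data. Note the MLE variance divides by n, not n - 1,
    so it is slightly biased but maximizes the likelihood."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, var

mu, var = gaussian_mle([1.0, 2.0, 3.0, 4.0])  # mu = 2.5, var = 1.25
```

For models without closed-form solutions, like the nonlinear voltage-current characteristic here, the same principle is applied by numerically maximizing the likelihood (equivalently, solving the score equations), which is what the NETLAB toolbox is used for.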
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Clemens Maidhof
2013-07-01
To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Alqubaisi, Mai; Tonna, Antonella; Strath, Alison; Stewart, Derek
2016-11-01
The aims of this study were to quantify the behavioural determinants of health professional reporting of medication errors in the United Arab Emirates (UAE) and to explore any differences between respondents. A cross-sectional survey was conducted of patient-facing doctors, nurses and pharmacists within three major hospitals of Abu Dhabi, the UAE. An online questionnaire was developed based on the Theoretical Domains Framework (TDF, a framework of behaviour change theories). Principal component analysis (PCA) was used to identify components, and internal reliability was determined. Ethical approval was obtained from a UK university and all hospital ethics committees. Two hundred and ninety-four responses were received. Questionnaire items clustered into six components: knowledge and skills, feedback and support, action and impact, motivation, effort, and emotions. Respondents generally gave positive responses for the knowledge and skills, feedback and support, and action and impact components. Responses were more neutral for the motivation and effort components. In terms of emotions, the component with the most negative scores, there were significant differences in terms of years registered as a health professional (those registered longest most positive, p = 0.002) and age (older most positive). • This study used the Theoretical Domains Framework to quantify the behavioural determinants of health professional reporting of medication errors. • Questionnaire items relating to emotions surrounding reporting generated the most negative responses, with significant differences in terms of years registered as a health professional (those registered longest most positive) and age (older most positive), and no differences for gender and health profession. • Interventions based on behaviour change techniques mapped to emotions should be prioritised for development.
Nam-Ju Lee, DNSc, RN
2010-03-01
Conclusion: The Hypertension Diagnosis and Management Error Taxonomy was useful for identifying errors based on documentation in a clinical log. The results provide an initial understanding of the nature of errors associated with hypertension diagnosis and management of nurses in APN training. The information gained from this study can contribute to educational interventions that promote APN competencies in identification and management of hypertension as well as overall patient safety and informatics competencies.
Haberman, Shelby J.
2004-01-01
The usefulness of joint and conditional maximum likelihood is considered for the Rasch model under realistic testing conditions in which the number of examinees is very large and the number of items is relatively large. Conditions for consistency and asymptotic normality are explored, effects of model error are investigated, measures of prediction…
Aberg, Kristoffer Carl; Doell, Kimberly C; Schwartz, Sophie
2015-10-28
Some individuals are better at learning about rewarding situations, whereas others are inclined to avoid punishments (i.e., enhanced approach or avoidance learning, respectively). In reinforcement learning, action values are increased when outcomes are better than predicted (positive prediction errors [PEs]) and decreased for worse than predicted outcomes (negative PEs). Because actions with high and low values are approached and avoided, respectively, individual differences in the neural encoding of PEs may influence the balance between approach-avoidance learning. Recent correlational approaches also indicate that biases in approach-avoidance learning involve hemispheric asymmetries in dopamine function. However, the computational and neural mechanisms underpinning such learning biases remain unknown. Here we assessed hemispheric reward asymmetry in striatal activity in 34 human participants who performed a task involving rewards and punishments. We show that the relative difference in reward response between hemispheres relates to individual biases in approach-avoidance learning. Moreover, using a computational modeling approach, we demonstrate that better encoding of positive (vs negative) PEs in dopaminergic midbrain regions is associated with better approach (vs avoidance) learning, specifically in participants with larger reward responses in the left (vs right) ventral striatum. Thus, individual dispositions or traits may be determined by neural processes acting to constrain learning about specific aspects of the world.
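The approach-avoidance bias described above is commonly formalized with separate learning rates for positive and negative prediction errors; an individual with alpha_pos > alpha_neg learns more from rewards than from punishments. This is a generic sketch of that idea, not the specific model fitted in the study, and the learning-rate values are arbitrary.

```python
def asymmetric_update(value, reward, alpha_pos=0.4, alpha_neg=0.1):
    """Value update with separate learning rates for positive and
    negative prediction errors (PEs), biasing approach vs avoidance
    learning when the two rates differ."""
    pe = reward - value
    alpha = alpha_pos if pe > 0 else alpha_neg
    return value + alpha * pe

v = 0.5
v = asymmetric_update(v, 1.0)   # positive PE, large step: 0.5 -> 0.7
v = asymmetric_update(v, 0.0)   # negative PE, small step: 0.7 -> 0.63
```

Fitting alpha_pos and alpha_neg per participant, as in the computational modeling approach described, yields the individual learning biases that can then be related to hemispheric striatal responses.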
Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?
Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.
2007-01-01
This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
2013-01-01
Usability problems in health information technology can result in patients coming to harm. Often the root cause analysis of these adverse events can be traced back to Usability Errors in the Health Information Technology (HIT) or its interaction with users. Interoperability of the documentation of HIT-related Usability Errors in a consistent fashion can improve our ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems.
Allodji, Rodrigue S; Leuraud, Klervi; Thiébaut, Anne C M; Henry, Stéphane; Laurier, Dominique; Bénichou, Jacques
2012-05-01
Measurement error (ME) can lead to bias in the analysis of epidemiologic studies. Here a simulation study is described that is based on data from the French Uranium Miners' Cohort and that was conducted to assess the effect of ME on the estimated excess relative risk (ERR) of lung cancer death associated with radon exposure. Starting from a scenario without any ME, data were generated containing successively Berkson or classical ME depending on time periods, to reflect changes in the measurement of exposure to radon ((222)Rn) and its decay products over time in this cohort. Results indicate that ME attenuated the level of association with radon exposure, with a negative bias percentage on the order of 60% on the ERR estimate. Sensitivity analyses showed the consequences of specific ME characteristics (type, size, structure, and distribution) on the ERR estimates. In the future, it appears important to correct for ME upon analyzing cohorts such as this one to decrease bias in estimates of the ERR of adverse events associated with exposure to ionizing radiation.
Medication prescribing errors in a public teaching hospital in India: A prospective study.
Pote S
2007-03-01
Background: To prevent medication errors in prescribing, one needs to know their types and relative occurrence. Such errors are a great cause of concern as they have the potential to cause patient harm. The aim of this study was to determine the nature and types of medication prescribing errors in an Indian setting. Methods: The medication errors were analyzed in a prospective observational study conducted in 3 medical wards of a public teaching hospital in India. The medication errors were analyzed by means of the Micromedex Drug-Reax database. Results: Of 312 patients, 304 were included in the study. Of the 304 cases, 103 (34%) had at least one error. The total number of errors found was 157. Drug-drug interactions were the most frequently occurring type of error (68.2%), followed by incorrect dosing interval (12%) and dosing errors (9.5%). The medication classes involved most were antimicrobial agents (29.4%), cardiovascular agents (15.4%), GI agents (8.6%) and CNS agents (8.2%). Moderate errors contributed the most (61.8%) to the total, compared with major (25.5%) and minor (12.7%) errors. The results showed that the number of errors increases with age and with the number of medicines prescribed. Conclusion: The results point to the need to establish medication error reporting at each hospital and to share the data with other hospitals. The role of the clinical pharmacist in this situation appears to be a strong intervention; initially, the clinical pharmacist could confine themselves to identification of medication errors.
Predicting the solar maximum with the rising rate
Du, Z L
2011-01-01
The growth rate of solar activity in the early phase of a solar cycle has been known to be well correlated with the subsequent amplitude (solar maximum). It provides very useful information for a new solar cycle, as its variation reflects the temporal evolution of the dynamic process of solar magnetic activity from the initial phase to the peak phase of the cycle. The correlation coefficient between the solar maximum (Rmax) and the rising rate (βa) at Δm months after the solar minimum (Rmin) is studied and shown to increase as the cycle progresses, with an inflection point (r = 0.83) at about Δm = 20 months. The prediction error of Rmax based on βa is found to lie within estimation at the 90% level of confidence, and the relative prediction error will be less than 20% when Δm ≥ 20. From the above relationship, the current cycle (24) is preliminarily predicted to peak around October 2013 with a size of Rmax = 84 ± 33 at the 90% level of confidence.
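The precursor relation described above amounts to regressing the solar maximum on the rising rate observed early in the cycle. A minimal sketch of that idea (with made-up cycle data, not the paper's measurements) might look like:

```python
import numpy as np

# Hypothetical training data: rising rate beta_a measured Delta-m = 20 months
# after minimum, and the observed maximum Rmax, for several past cycles.
beta_a = np.array([4.2, 6.1, 8.0, 5.5, 7.3, 3.9])   # assumed values
rmax   = np.array([80., 120., 155., 105., 140., 75.])

# Least-squares linear fit Rmax = a * beta_a + b, as in the correlation approach.
a, b = np.polyfit(beta_a, rmax, 1)
r = np.corrcoef(beta_a, rmax)[0, 1]   # correlation coefficient of the precursor

def predict_rmax(beta):
    """Predict the solar maximum from an observed rising rate."""
    return a * beta + b
```

With real cycle data, the residual scatter of this fit is what sets the quoted prediction interval (e.g. ±33 at 90% confidence).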
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
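As a concrete illustration of the MCC idea (a sketch of ours, not the paper's exact algorithm), a linear predictor can be trained by gradient ascent on a regularized correntropy objective; the Gaussian kernel automatically down-weights samples whose labels are far off, so outliers barely influence the fit. All names and hyperparameters below are our own choices:

```python
import numpy as np

def mcc_fit(X, y, sigma=1.0, lam=0.01, lr=0.1, iters=500):
    """Linear predictor trained under a regularized Maximum Correntropy
    Criterion: maximize (1/n) sum_i exp(-(y_i - w.x_i)^2 / (2 sigma^2))
    - lam * |w|^2 by plain gradient ascent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        r = y - X @ w                            # residuals
        k = np.exp(-r**2 / (2 * sigma**2))       # correntropy kernel weights
        grad = (X.T @ (k * r)) / (sigma**2 * n) - 2 * lam * w
        w += lr * grad
    return w
```

A sample with a grossly wrong label has a large residual, hence a near-zero kernel weight k, so it contributes almost nothing to the gradient; a squared loss would instead let it dominate.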
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future, we suggest that PSHA modelers be explicit about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
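The extreme-value reasoning can be illustrated with a toy model (our choice of distributions and numbers, not the authors'): if events above a threshold magnitude arrive as a Poisson process with Gutenberg-Richter distributed magnitudes, the distribution of the maximum magnitude expected in a finite observation window follows directly, and it makes plain why statements about "infinite time" are untestable:

```python
import math

def p_max_below(m, rate=5.0, b=1.0, m0=5.0, years=30.0):
    """P(maximum observed magnitude <= m over `years`), assuming events
    above m0 arrive as a Poisson process at `rate` per year with
    (unbounded) Gutenberg-Richter magnitudes of b-value `b`.
    Illustrative parameter values only, not a calibrated hazard model."""
    if m < m0:
        return 0.0
    exceed_rate = rate * 10 ** (-b * (m - m0))   # rate of events above m
    return math.exp(-exceed_rate * years)        # Poisson: no exceedance
```

In a 30-year window the distribution discriminates between, say, M 7 and M 8, but as `years` grows the curve flattens and any finite-time test of a quoted maximum magnitude loses power.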
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
Anonymous
2002-01-01
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least squares methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least squares methods.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector that is used to mitigate intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
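For a concrete picture (a toy example of ours, not the paper's worked case), consider the classic die problem: MaxEnt on {1,…,6} under a mean constraint gives point probabilities through a one-dimensional Lagrange multiplier. Treating the constraint value as Gaussian-uncertain then turns the single MaxEnt solution into a sampled distribution over MaxEnt solutions:

```python
import numpy as np

VALUES = np.arange(1, 7)  # faces of a die

def maxent_probs(mean, tol=1e-10):
    """Classic MaxEnt distribution on {1..6} with a given mean:
    p_k proportional to exp(lam * k), with lam found by bisection so
    that the expectation matches `mean`."""
    def moment(lam):
        w = np.exp(lam * VALUES)
        return (VALUES * w).sum() / w.sum()
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if moment(mid) < mean:
            lo = mid
        else:
            hi = mid
    w = np.exp(lo * VALUES)
    return w / w.sum()

def generalized_maxent(mean, sd, n=1000, rng=None):
    """Generalized MaxEnt sketch: the constraint value is uncertain
    (Gaussian), so we obtain a sampled distribution over MaxEnt solutions."""
    if rng is None:
        rng = np.random.default_rng(0)
    return np.array([maxent_probs(m) for m in rng.normal(mean, sd, n)])
```

With mean 3.5 the classic solution is the uniform die; the generalized version spreads probability mass over nearby MaxEnt distributions in proportion to the constraint uncertainty.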
Zeyl, Timothy; Yin, Erwei; Keightley, Michelle; Chau, Tom
2016-04-01
Objective. Error-related potentials (ErrPs) have the potential to guide classifier adaptation in BCI spellers, for addressing non-stationary performance as well as for online optimization of system parameters, by providing imperfect or partial labels. However, the usefulness of ErrP-based labels for BCI adaptation has not been established in comparison to other partially supervised methods. Our objective is to make this comparison by retraining a two-step P300 speller on a subset of confident online trials using naïve labels taken from speller output, where confidence is determined either by (i) ErrP scores, (ii) posterior target scores derived from the P300 potential, or (iii) a hybrid of these scores. We further wish to evaluate the ability of partially supervised adaptation and retraining methods to adjust to a new stimulus-onset asynchrony (SOA), a necessary step towards online SOA optimization. Approach. Eleven consenting able-bodied adults attended three online spelling sessions on separate days with feedback in which SOAs were set at 160 ms (sessions 1 and 2) and 80 ms (session 3). A post hoc offline analysis and a simulated online analysis were performed on sessions two and three to compare multiple adaptation methods. Area under the curve (AUC) and symbols spelled per minute (SPM) were the primary outcome measures. Main results. Retraining using supervised labels confirmed improvements of 0.9 percentage points (session 2). A confidence measure not based on ErrPs resulted in the highest SPM of the partially supervised methods, indicating that ErrPs are not necessary to boost the performance of partially supervised adaptive classification. Partial supervision significantly improved SPM at a novel SOA, showing promise for eventual online SOA optimization.
Beam positioning error budget in ICF driver
Shi Zhi Quan; Su Jing Qin
2002-01-01
The author presents a linear weighted-sum method for the beam positioning error budget based on the ICF targeting requirements, together with an approach of equal or unequal probability for allocating errors to each optical element. Based on the relationship between the motion of the optical components and the beam position on target, the position error of each optical component was evaluated, referred to as the maximum range. A large number of ray traces were performed, and the position error budget was modified according to the normal distribution law. An overview of the position error budget of the components is provided.
Derks, E M; Zwinderman, A H; Gamazon, E R
2017-02-10
Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate the type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. The type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of FST between the ancestral European and African populations. The type-II error rate was investigated for a SNP characterized by a high value of FST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in the type-I error rate.
Kim, Jae Whan; Park, Jin Kyun; Jung, Won Dea
2008-02-15
This report provides the task types and error types involved in the unplanned reactor trip events that occurred during 1986-2006. The events caused by the secondary system of the nuclear power plants amount to 67%, and the remaining 33% were caused by the primary system. The contribution of the activities of the plant personnel was identified in the following order: corrective maintenance (25.7%), planned maintenance (22.8%), planned operation (19.8%), periodic preventive maintenance (14.9%), response to a transient (9.9%), and design/manufacturing/installation (9.9%). According to the analysis of error modes, the error modes of control failure (22.2%), wrong object (18.5%), omission (14.8%), wrong action (11.1%), and inadequate (8.3%) account for about 75% of all the unplanned trip events. The analysis of the cognitive functions involved showed that the planning function makes the highest contribution to the human actions leading to unplanned reactor trips, followed by the observation function (23.4%), the execution function (17.8%), and the interpretation function (10.3%). The results of this report are to be used as important bases for the development of error reduction measures or of an error mode prediction system for test and maintenance tasks in nuclear power plants.
Yuan-Hong Jiang
OBJECTIVES: The aim of this study was to investigate the predictive values of the total International Prostate Symptom Score (IPSS-T) and the voiding to storage subscore ratio (IPSS-V/S), in association with total prostate volume (TPV) and maximum urinary flow rate (Qmax), in the diagnosis of bladder outlet-related lower urinary tract dysfunction (LUTD) in men with lower urinary tract symptoms (LUTS). METHODS: A total of 298 men with LUTS were enrolled. Video-urodynamic studies were used to determine the causes of LUTS. Differences in IPSS-T, IPSS-V/S ratio, TPV and Qmax between patients with bladder outlet-related LUTD and bladder-related LUTD were analyzed. The positive and negative predictive values (PPV and NPV) for bladder outlet-related LUTD were calculated using these parameters. RESULTS: Of the 298 men, bladder outlet-related LUTD was diagnosed in 167 (56%). The IPSS-V/S ratio was significantly higher among patients with bladder outlet-related LUTD than among patients with bladder-related LUTD (2.28±2.25 vs. 0.90±0.88). When IPSS-V/S >1 or >2 was factored into the equation instead of IPSS-T, PPVs were 91.4% and 97.3%, respectively, and NPVs were 54.8% and 49.8%, respectively. CONCLUSIONS: Combining IPSS-T with TPV and Qmax increases the PPV for bladder outlet-related LUTD. Furthermore, including IPSS-V/S >1 or >2 in the equation results in a higher PPV than IPSS-T. IPSS-V/S >1 is a stronger predictor of bladder outlet-related LUTD than IPSS-T.
Minimum Length - Maximum Velocity
Panes, Boris
2011-01-01
We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out to be natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism, we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA on superluminal neutrinos.
Feng, Chi; Li, Dong; Gao, Shan; Daniel, Ketui
2016-11-01
This paper presents CFD (Computational Fluid Dynamics) simulations and experimental results for the reflected radiation error from turbine vanes when measuring a turbine blade's temperature using a pyrometer. An accurate reflection model based on discrete irregular surfaces is established, and a double contour integral method is used to calculate the view factor between the irregular surfaces. The calculated reflected radiation error was found to change with the relative position between blades and vanes as the temperature distribution of vanes and blades was simulated using CFD. Simulation results indicated that when the vane suction surface temperature ranged from 860 K to 1060 K and the blade pressure surface average temperature was 805 K, the pyrometer measurement error can reach up to 6.35%. Experimental results show that the maximum pyrometer absolute errors of three different targets on the blade decrease from 6.52%, 4.15% and 1.35% to 0.89%, 0.82% and 0.69%, respectively, after error correction.
Error in the description of foot kinematics due to violation of rigid body assumptions.
Nester, C J; Liu, A M; Ward, E; Howard, D; Cocheba, J; Derrick, T
2010-03-03
Kinematic data from rigid segment foot models inevitably include errors because the bones within each segment move relative to each other. This study sought to define the error in foot kinematic data due to violation of the rigid segment assumption. The research compared kinematic data from 17 different mid- and forefoot rigid segment models to kinematic data of the individual bones comprising these segments. Kinematic data from a previous dynamic cadaver model study were used to derive individual bone as well as foot segment kinematics. Mean and maximum errors due to violation of the rigid body assumption varied greatly between models; the model with the least error was the combination of navicular and cuboid. The choice of segment model should therefore take account of the kinematics research study being undertaken.
孙宇; 李纯莲; 钟经华
2016-01-01
Braille error tolerance comprises two aspects: the scheme error tolerance rate of the Braille scheme itself and the spelling error tolerance rate of its readers. In order to reasonably evaluate the spelling efficiency of the Chinese Braille scheme and further improve it, this paper presents the concept of scheme error tolerance rate and analyzes it statistically. The results show that the error tolerance rate is objectively necessary and controllable, and that a Braille scheme with a greater error tolerance rate will be easier to use and popularize. Finally, an optimization function for the scheme error tolerance rate is given, which is helpful for improving the current Braille scheme. The paper also discusses the influence of readers' psychological factors on Braille error tolerance during reading, and reveals the relations of mutual influence, mutual promotion and mutual compensation between the scheme error tolerance rate of the Braille scheme and the spelling error tolerance rate of Braille readers.
Ohteru, Shoko; Kishine, Keiji
The Burst ACK scheme enhances effective throughput by reducing ACK overhead when a transmitter sends multiple data frames sequentially to a destination. IEEE 802.11e is one such example. The size of the data frame body and the number of burst data frames are important burst transmission parameters that affect throughput. The larger the burst transmission parameters are, the better the throughput becomes under error-free conditions. However, large data frames can reduce throughput under error-prone conditions caused by signal-to-noise ratio (SNR) deterioration. If the throughput can be calculated from the burst transmission parameters and the error rate, the appropriate ranges of the burst transmission parameters can be narrowed down, and the buffer size necessary for temporarily storing transmitted or received data can be estimated. In this paper, we present a method featuring a simple algorithm for estimating the effective throughput from the burst transmission parameters and the error rate. The calculated throughput values agree well with those measured for actual wireless boards based on the IEEE 802.11-based original MAC protocol. We also calculate throughput values for larger values of the burst transmission parameters, outside the assignable values of the wireless boards, and find the appropriate values of the burst transmission parameters.
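The kind of estimate described above can be sketched with a deliberately simplified airtime model (our assumptions, not the paper's algorithm: independent bit errors, a frame survives only if every bit does, and one fixed per-burst overhead standing in for contention, headers and the Block ACK exchange):

```python
def burst_throughput(n_frames, frame_bytes, bit_error_rate,
                     phy_rate_bps=54e6, overhead_s=200e-6):
    """Rough effective-throughput model for burst transmission:
    delivered payload bits per second of airtime. Illustrative only."""
    bits = frame_bytes * 8
    p_ok = (1.0 - bit_error_rate) ** bits            # per-frame success prob.
    airtime = overhead_s + n_frames * bits / phy_rate_bps
    return n_frames * bits * p_ok / airtime          # effective bit/s
```

Even this crude model reproduces the trade-off in the abstract: with no errors, larger frames and longer bursts amortize the overhead and raise throughput, while at a high bit error rate large frames fail too often and smaller frames win.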
Van Malderen, Roeland; Allaart, Marc A. F.; De Backer, Hugo; Smit, Herman G. J.; De Muer, Dirk
2016-08-01
The ozonesonde stations at Uccle (Belgium) and De Bilt (the Netherlands) are separated by only 175 km but use different ozonesonde types (or different manufacturers for the same electrochemical concentration cell (ECC) type), operating procedures, and correction strategies. As such, these stations form a unique test bed for the Ozonesonde Data Quality Assessment (O3S-DQA) activity, which aims at providing a revised, homogeneous, consistent dataset with an altitude-dependent estimated uncertainty for each revised profile. For the ECC ozonesondes at Uccle, mean relative uncertainties in the 4-6 % range are obtained. To study the impact of the corrections on the ozone profiles and trends, we compared the Uccle and De Bilt average ozone profiles and vertical ozone trends, calculated from the operational corrections at both stations and the O3S-DQA corrected profiles. In the common ECC 1997-2014 period, the O3S-DQA corrections effectively reduce the differences between the Uccle and De Bilt ozone partial pressure values with respect to the operational corrections only for the stratospheric layers below the ozone maximum. The upper-stratospheric ozone measurements at both sites are substantially different, regardless of the correction methodology used. The origin of this difference is not clear. The discrepancies in the tropospheric ozone concentrations between both sites can be ascribed to the problematic background measurement and correction at De Bilt, especially in the period before November 1998. The Uccle operational correction method, applicable to both ozonesonde types used, diminishes the relative stratospheric ozone differences of the Brewer-Mast sondes in the 1993-1996 period with De Bilt to less than 5 %, and to less than 6 % in the free troposphere for the De Bilt operational corrections. Despite their large impact on the average ozone profiles, the different (sensible) correction strategies do not change the ozone trends significantly.
Error handling strategies in multiphase inverse modeling
Finsterle, S.; Zhang, Y.
2010-12-01
Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
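One standard way to mitigate the non-normal residual structures discussed above (a generic sketch of the robust-estimation idea, not iTOUGH2's actual implementation) is iteratively reweighted least squares with a robust weight function, which down-weights large residuals instead of letting them dominate the fit:

```python
import numpy as np

def irls_huber(X, y, delta=1.0, iters=50):
    """Robust linear fit via iteratively reweighted least squares with
    Huber weights: residuals larger than `delta` are down-weighted, so
    outliers and gross systematic errors do not dominate the estimate."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]          # start from plain OLS
    for _ in range(iters):
        r = y - X @ w
        weights = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))
        Xw = weights[:, None] * X                     # row-weighted design
        w = np.linalg.solve(X.T @ Xw, X.T @ (weights * y))
    return w
```

With a few grossly corrupted observations, ordinary least squares shifts the whole fit toward them, while the Huber-weighted estimate stays close to the parameters supported by the bulk of the data.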
Generalized Gaussian Error Calculus
Grabe, Michael
2010-01-01
For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence interval, and a contribution due to unknown systematic errors.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e., helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: a twofold resonantly enhanced, background-free circular dichroism measurement setup, and angle-independent helicity-filtering glasses.
Classification of Spreadsheet Errors
Rajalingham, Kamalasen; Chadwick, David R.; Knight, Brian
2008-01-01
This paper describes a framework for a systematic classification of spreadsheet errors. This classification, or taxonomy, of errors is aimed at facilitating analysis and comprehension of the different types of spreadsheet errors. The taxonomy is an outcome of an investigation of the widespread problem of spreadsheet errors and an analysis of specific types of these errors. This paper contains a description of the various elements and categories of the classification and is supported by appropriate examples.
Application of an Error Statistics Estimation Method to the PSAS Forecast Error Covariance Model
Anonymous
2006-01-01
In atmospheric data assimilation systems, the forecast error covariance model is an important component. However, the parameters required by a forecast error covariance model are difficult to obtain because the true state is unknown. This study applies an error statistics estimation method to the Physical-space Statistical Analysis System (PSAS) height-wind forecast error covariance model. The method consists of two components: the first computes the error statistics by using the National Meteorological Center (NMC) method, a lagged-forecast difference approach, within the framework of the PSAS height-wind forecast error covariance model; the second obtains a calibration formula to rescale the error standard deviations provided by the NMC method. The calibration is against the error statistics estimated by a maximum-likelihood estimation (MLE) with rawindsonde height observed-minus-forecast residuals. A complete set of formulas for estimating the error statistics and for the calibration is applied to a one-month-long dataset generated by a general circulation model of the Global Modeling and Assimilation Office (GMAO), NASA. There is a clear constant relationship between the error statistics estimates of the NMC method and the MLE. The final product provides a full set of 6-hour error statistics required by the PSAS height-wind forecast error covariance model over the globe. The features of these error statistics are examined and discussed.
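The first component, the NMC method, can be sketched in a few lines (a schematic of the idea, not GMAO's code): differences between forecasts of different lead times valid at the same time serve as a proxy for forecast error, and a scale factor, here a placeholder for the MLE-based calibration described above, rescales the resulting standard deviations:

```python
import numpy as np

def nmc_error_std(f24, f48, scale=1.0):
    """NMC-method estimate of forecast-error standard deviation from
    lagged-forecast differences. `f24` and `f48` hold 24-h and 48-h
    forecasts valid at the same times, shape (ntime, ngrid); `scale`
    stands in for the calibration factor obtained from MLE."""
    d = f48 - f24                  # lagged-forecast differences
    d = d - d.mean(axis=0)         # remove the mean difference (bias)
    return scale * d.std(axis=0)   # per-gridpoint error std estimate
```

On synthetic data where both forecasts are truth plus independent noise, the estimate recovers the combined noise amplitude, which is why a calibration against independent observations is needed to pin down the absolute scale.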
Human Error Mechanisms in Complex Work Environments
Rasmussen, Jens
1988-01-01
…will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations.
Zhang, Min-juan; Wang, Zhi-bin; Li, Xiao; Li, Jin-hua; Wang, Yan-chao
2015-05-01
In order to improve the accuracy and stability of the rebuilt spectra, it is necessary to analyze the stability of, and precisely measure, the maximum optical path difference of the interferograms in photo-elastic modulator Fourier transform spectrometers (PEM-FTS). The maximum optical path difference of the interferograms is an uncertain parameter related to the resonant state, the frequency-thermal drift characteristic, and the driving voltage of the PEM. Therefore, based on the principle of the photo-elastic modulator Fourier transform interferometer, a model of the frequency-thermal drift is built and the variation of the maximum optical path difference is analyzed. A measuring method for the maximum optical path difference is put forward: zero-crossing counting of the laser's interference signal, with the driving signal of the PEM as the standard. In this method, a dual-channel high-speed comparator and an FPGA are used to transform the sine wave to a square wave, realizing zero-crossing-triggered counting and error compensation. With a 670.8 nm laser as the source producing the reference interferograms in the PEM interferometer, a maximum optical path difference of 77.471 µm could be measured by zero-crossing counting; the measurement error is less than 0.167 nm, and the rebuilt spectral peak wavelength error for an infrared blackbody is less than 2 nm. The results meet the requirements of the PEM-FTS.
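The zero-crossing idea can be illustrated numerically (a sketch of the counting principle only; in the instrument a comparator and FPGA do this in hardware): each pair of zero crossings of the reference-laser fringe signal corresponds to one wavelength of optical path difference, so counting crossings over the scan gives the maximum OPD:

```python
import numpy as np

def opd_from_zero_crossings(signal, wavelength_nm=670.8):
    """Estimate the maximum optical path difference (in nm) from a sampled
    reference-laser interference signal: one fringe (two zero crossings)
    corresponds to one wavelength of OPD change."""
    s = signal - signal.mean()                       # remove DC offset
    crossings = np.count_nonzero(np.diff(np.sign(s)) != 0)
    return crossings * wavelength_nm / 2.0           # nm of OPD
```

Simulating a linear OPD sweep up to 77.471 µm and feeding the resulting cosine fringes through this counter recovers the sweep length to within half a reference wavelength, the intrinsic quantization of the counting method.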
Sensitivity of LIDAR Canopy Height Estimate to Geolocation Error
Tang, H.; Dubayah, R.
2010-12-01
Many factors affect the quality of canopy height structure data derived from space-based lidar such as DESDynI. Among these is geolocation accuracy. Inadequate geolocation information hinders subsequent analyses because a different portion of the canopy is observed relative to what is assumed. This is especially true in mountainous terrain where the effects of slope magnify geolocation errors. Mission engineering design must trade the expense of providing more accurate geolocation with the potential improvement in measurement accuracy. The objective of our work is to assess the effects of small errors in geolocation on subsequent retrievals of maximum canopy height for a varying set of canopy structures and terrains. Dense discrete lidar data from different forest sites (from La Selva Biological Station, Costa Rica, Sierra National Forest, California, and Hubbard Brook and Bartlett Experimental Forests in New Hampshire) are used to simulate DESDynI height retrievals using various geolocation accuracies. Results show that canopy height measurement errors generally increase as the geolocation error increases. Interestingly, most of the height errors are caused by variation of canopy height rather than topography (slope and aspect).
赵永翔; 王金诺; 高庆
2001-01-01
same probabilistic level, the curves are described by a general form of the mean and standard deviation curves of the logarithm of fatigue life, containing at most four material constants. The constants are estimated by a mathematical programming method consistent with the maximum likelihood principle. The availability of the approach is illustrated by an analysis of the S-N data of 45# carbon steel notched specimens (kt = 2.0) subjected to fully reversed axial loads. The analysis reveals that an appropriate relation should be determined by comparing the fit, the fitting error, and the safety in practice of the three relations. The fit is best for the three-parameter relation, slightly inferior for the Langer relation, and poor for the Basquin relation. Considering the fitting error and safety in practice, the Basquin relation is not appropriate for these data. In addition, classical maximum likelihood predictions may be non-conservatively affected by the local statistical characteristics of the test data at the reference load. An improved method that reduces these effects to a minimum is worth exploring.
Error Modelling and Experimental Validation for a Planar 3-PPR Parallel Manipulator
Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl
2011-01-01
In this paper, the positioning error of a 3-PPR planar parallel manipulator is studied with an error model and experimental validation. First, the displacement and workspace are analyzed. An error model considering both configuration errors and joint clearance errors is established. Using this model, the maximum positioning error was estimated for a U-shape PPR planar manipulator, and the results were compared with experimental measurements. It is found that the error distribution from the simulation approximates that of the measurements.
Error processing in Huntington's disease.
Christian Beste
BACKGROUND: Huntington's disease (HD) is a genetic disorder expressed by degeneration of the basal ganglia, accompanied inter alia by dopaminergic alterations. These dopaminergic alterations are related to genetic factors, i.e., CAG-repeat expansion. The error-related negativity (Ne/ERN), a cognitive event-related potential related to performance monitoring, is generated in the anterior cingulate cortex (ACC) and is thought to depend on the dopaminergic system. The Ne is reduced in Parkinson's disease (PD). Due to the dopaminergic deficit in HD, a reduction of the Ne is also likely. Furthermore, it is assumed that movement dysfunction emerges as a consequence of dysfunctional error-feedback processing. Since dopaminergic alterations are related to the CAG-repeat, a Ne reduction may also be related to the genetic disease load. METHODOLOGY/PRINCIPAL FINDINGS: We assessed the error negativity (Ne) in a speeded reaction task under consideration of the underlying genetic abnormalities. HD patients showed a specific reduction in the Ne, which suggests impaired error processing in these patients. Furthermore, the Ne was closely related to CAG-repeat expansion. CONCLUSIONS/SIGNIFICANCE: The reduction of the Ne is likely an effect of the dopaminergic pathology. The result resembles findings in Parkinson's disease. As such, the Ne might be a measure of the integrity of striatal dopaminergic output function. The relation to the CAG-repeat expansion indicates that the Ne could serve as a gene-associated "cognitive" biomarker in HD.
A New Method of Error Compensation for Numerical Control System
夏蔚军; 吴智铭; 李济顺; 张洛平
2003-01-01
This paper presents a method of rapid machine tool error modeling, separation, and compensation using a grating ruler. A robust modeling procedure for geometric errors is developed, and a fast data processing algorithm is designed using the error separation technique. After compensation with the new method, the maximum position error of the experimental workbench was reduced from 400 μm to 15 μm. The experimental results show the effectiveness and accuracy of this method.
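The compensation idea in this record (measure the axis error against a grating-ruler reference, model it, then pre-correct commanded positions) can be sketched minimally. All numbers and names below are hypothetical illustrations, not the paper's data or implementation:

```python
import numpy as np

# Hypothetical calibration data: commanded positions (mm) and the error
# measured against a grating ruler at each position (micrometres).
ref_pos = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
meas_err = np.array([0.0, 120.0, 260.0, 350.0, 400.0])  # grows with travel

def compensated_target(desired_mm):
    """Linearly interpolate the error model and pre-subtract it from the
    commanded position so the axis lands closer to the desired point."""
    err_um = np.interp(desired_mm, ref_pos, meas_err)
    return desired_mm - err_um * 1e-3  # convert um -> mm

# Command 250 mm: interpolated error is midway between 260 and 350 um.
print(compensated_target(250.0))  # 249.695
```

A real controller would use a denser error map (often per-axis and bidirectional), but the lookup-and-subtract structure is the same.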
Desmet, Charlotte; Deschrijver, Eliane; Brass, Marcel
2014-04-01
Recently, it has been shown that the medial prefrontal cortex (MPFC) is involved in error execution as well as error observation. Based on this finding, it has been argued that recognizing each other's mistakes might rely on motor simulation. In the current functional magnetic resonance imaging (fMRI) study, we directly tested this hypothesis by investigating whether medial prefrontal activity in error observation is restricted to situations that enable simulation. To this aim, we compared brain activity related to the observation of errors that can be simulated (human errors) with brain activity related to errors that cannot be simulated (machine errors). We show that medial prefrontal activity is not only restricted to the observation of human errors but also occurs when observing errors of a machine. In addition, our data indicate that the MPFC reflects a domain general mechanism of monitoring violations of expectancies.
Game Design Principles based on Human Error
Guilherme Zaffari
2016-03-01
This paper presents the results of the authors' research on incorporating Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, for core design, adaptations are needed, since challenge is an important factor for fun, and under the perspective of Human Error, challenge can be considered a flaw in the system. The research used Human Error classifications, data triangulation via predictive human error analysis, and expanded flow theory to design a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.
Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm
S. Radhika
2016-04-01
Maximum correntropy criterion (MCC) based adaptive filters are robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with variable step size to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the next. Simulation results in a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
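The following is a minimal sketch of an MCC (correntropy) LMS filter, in which the effective update mu·exp(-e²/2σ²)·e shrinks automatically when the error is impulsive. It is illustrative only: it does not reproduce the paper's MSD-optimal step-size rule, and the kernel width, step size, and system are all assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown system to identify (4-tap FIR), white input, impulsive noise.
w_true = np.array([0.5, -0.3, 0.2, 0.1])
N = 5000
x = rng.standard_normal(N)
noise = 0.01 * rng.standard_normal(N)
impulses = (rng.random(N) < 0.01) * rng.standard_normal(N) * 10  # rare spikes
d = np.convolve(x, w_true)[:N] + noise + impulses

sigma = 1.0  # correntropy kernel width (assumed)
mu = 0.05    # base step size (assumed)
w = np.zeros(4)
for n in range(3, N):
    xn = x[n-3:n+1][::-1]                  # input vector [x(n)..x(n-3)]
    e = d[n] - w @ xn
    gain = np.exp(-e**2 / (2 * sigma**2))  # near 0 for impulsive errors
    w += mu * gain * e * xn                # correntropy-weighted LMS update

print(np.round(w, 2))
```

Because the Gaussian kernel gates the update, a 10-sigma outlier contributes essentially nothing, which is the robustness property the abstract refers to.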
陈英; 杨智宽
2012-01-01
In emmetropic or mildly hyperopic eyes, those with relative hyperopic peripheral refractive error are more likely to develop myopia than those with similar central refractions but relative myopic peripheral refractive errors. Emmetropic eyes show slightly relative myopic peripheral refractive errors; uncorrected hyperopic eyes show somewhat larger relative myopic peripheral refractive errors; and uncorrected myopic eyes show slight relative hyperopic peripheral refractive errors, or smaller relative myopic shifts in the periphery than emmetropes. These two observations are widely accepted. Animal studies have shown that abnormal visual signals not only produce central refractive errors but can also alter the shape of the posterior globe and the type of relative peripheral refractive error, and that foveal ablation has little influence on emmetropization. Conversely, the peripheral retina can independently regulate emmetropization and respond to abnormal visual signals, leading to various refractive errors. Clinical studies indicate that spectacle lenses correcting relative peripheral hyperopic defocus can exert some control over myopia progression. Although it is not yet certain whether peripheral hyperopic defocus promotes myopia progression, current research tends to support a possible relationship between the two.
Nute, Christine
2014-11-25
Most nurses are involved in medicines management, which is integral to promoting patient safety. Medicines management is prone to errors which, depending on the error, can cause patient injury, increased hospital stay and significant legal expenses. This article describes a new approach to help minimise drug errors within healthcare settings where medications are prescribed, dispensed or administered. The acronym DRAINS, which considers all aspects of medicines management before administration, was devised to reduce medication errors on a cardiothoracic intensive care unit.
Comparison of Prediction-Error-Modelling Criteria
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest ... computational resources. The identification method is suitable for predictive control.
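The prediction-error/ML idea can be illustrated on a scalar discrete-time state-space model: run a Kalman filter for each candidate parameter, accumulate the log-likelihood of the innovations (one-step prediction errors), and minimize. This stand-in model, with known noise variances, is an assumption for illustration, not the paper's continuous-discrete SISO system:

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar model: x[k+1] = a x[k] + w[k], y[k] = x[k] + v[k]
a, q, r = 0.9, 0.1, 0.5  # true parameter and (assumed known) noise variances
N = 200
x = np.zeros(N)
for k in range(1, N):
    x[k] = a * x[k-1] + np.sqrt(q) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(N)

def neg_log_likelihood(a_hat):
    """ML prediction-error criterion: sum of innovation log-likelihoods
    from a Kalman filter run with candidate parameter a_hat."""
    x_pred, p_pred, nll = 0.0, 1.0, 0.0
    for k in range(N):
        s = p_pred + r                   # innovation variance
        e = y[k] - x_pred                # one-step prediction error
        nll += 0.5 * (np.log(2 * np.pi * s) + e**2 / s)
        gain = p_pred / s                # Kalman gain
        x_filt = x_pred + gain * e       # measurement update
        p_filt = (1 - gain) * p_pred
        x_pred = a_hat * x_filt          # time update
        p_pred = a_hat**2 * p_filt + q
    return nll

# Grid search: the ML estimate should land near the true a = 0.9.
grid = np.linspace(0.5, 0.99, 50)
a_ml = grid[np.argmin([neg_log_likelihood(g) for g in grid])]
print(a_ml)
```

The least-squares variant mentioned in the abstract would simply minimize the sum of squared innovations e² instead of the full Gaussian log-likelihood.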
Shape Error Analysis of Functional Surface Based on Isogeometrical Approach
YUAN, Pei; LIU, Zhenyu; TAN, Jianrong
2017-05-01
The construction of traditional finite element geometry (i.e., the meshing procedure) is time consuming and introduces geometric errors. These drawbacks can be overcome by Isogeometric Analysis (IGA), which integrates computer-aided design and structural analysis in a unified way. A new IGA beam element is developed by combining the displacement field of the element, approximated by the NURBS basis, with the internal work formula of Euler-Bernoulli beam theory under small-deformation and elastic assumptions. Two cases of strong coupling of IGA elements, "beam to beam" and "beam to shell", are also discussed. The maximum relative errors of the deformation in the three directions of the cantilever beam benchmark problem between analytical and IGA solutions are less than 0.1%, which illustrates the good performance of the developed IGA beam element. In addition, the application of the developed IGA beam element to the Root Mean Square (RMS) error analysis of a reflector antenna surface, a typical functional surface whose precision is closely related to the product's performance, indicates that no matter how coarse the discretization is, the IGA method achieves an accurate solution with fewer degrees of freedom than standard Finite Element Analysis (FEA). The proposed research provides an effective alternative to standard FEA for shape error analysis of functional surfaces.
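With all weights equal to one, the NURBS basis used in IGA reduces to B-splines, whose partition-of-unity property is what lets such elements represent displacement fields exactly. A minimal Cox-de Boor evaluation (illustrative only, not the paper's element formulation):

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: value of the i-th degree-p B-spline at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i+1] else 0.0
    left = right = 0.0
    den = knots[i+p] - knots[i]
    if den > 0:
        left = (u - knots[i]) / den * bspline_basis(i, p-1, u, knots)
    den = knots[i+p+1] - knots[i+1]
    if den > 0:
        right = (knots[i+p+1] - u) / den * bspline_basis(i+1, p-1, u, knots)
    return left + right

# Quadratic basis on an open knot vector: the 5 basis functions form a
# partition of unity, so constant (rigid) fields are reproduced exactly.
knots = [0, 0, 0, 1, 2, 3, 3, 3]
u = 1.5
total = sum(bspline_basis(i, 2, u, knots) for i in range(5))
print(total)  # partition of unity -> 1.0
```

An IGA beam element interpolates displacement as a sum of such basis functions times control-point values, using the same basis that defines the CAD geometry.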
Error processing network dynamics in schizophrenia.
Becerril, Karla E; Repovs, Grega; Barch, Deanna M
2011-01-15
Current theories of cognitive dysfunction in schizophrenia emphasize an impairment in the ability of individuals suffering from this disorder to monitor their own performance, and adjust their behavior to changing demands. Detecting an error in performance is a critical component of evaluative functions that allow the flexible adjustment of behavior to optimize outcomes. The dorsal anterior cingulate cortex (dACC) has been repeatedly implicated in error-detection and implementation of error-based behavioral adjustments. However, accurate error-detection and subsequent behavioral adjustments are unlikely to rely on a single brain region. Recent research demonstrates that regions in the anterior insula, inferior parietal lobule, anterior prefrontal cortex, thalamus, and cerebellum also show robust error-related activity, and integrate into a functional network. Despite the relevance of examining brain activity related to the processing of error information and supporting behavioral adjustments in terms of a distributed network, the contribution of regions outside the dACC to error processing remains poorly understood. To address this question, we used functional magnetic resonance imaging to examine error-related responses in 37 individuals with schizophrenia and 32 healthy controls in regions identified in the basic science literature as being involved in error processing, and determined whether their activity was related to behavioral adjustments. Our imaging results support previous findings showing that regions outside the dACC are sensitive to error commission, and demonstrated that abnormalities in brain responses to errors among individuals with schizophrenia extend beyond the dACC to almost all of the regions involved in error-related processing in controls. However, error related responses in the dACC were most predictive of behavioral adjustments in both groups. Moreover, the integration of this network of regions differed between groups, with the
Mackie, Peter; Nellthorp, John; Laird, James
2005-01-01
Demand forecasts form a key input to the economic appraisal. As such any errors present within the demand forecasts will undermine the reliability of the economic appraisal. The minimization of demand forecasting errors is therefore important in the delivery of a robust appraisal. This issue is addressed in this note by introducing the key issues, and error types present within demand fore...
Bruijn, E.R.A. de; Lange, F.P. de; Cramon, D.Y. von; Ullsperger, M.
2009-01-01
For social beings like humans, detecting one's own and others' errors is essential for efficient goal-directed behavior. Although one's own errors are always negative events, errors from other persons may be negative or positive depending on the social context. We used neuroimaging to disentangle br
Balthazar, Marcio Luiz Figueredo; Yasuda, Clarissa Lin; Pereira, Fabrício Ramos Silvestre; Bergo, Felipe Paulo Guazzi; Cendes, Fernando; Damasceno, Benito Pereira
2010-11-01
Naming difficulties are characteristic of Alzheimer's disease (AD) and, to a lesser extent, of amnestic mild cognitive impairment (aMCI) patients. The association of naming impairment with anterior temporal lobe (ATL) atrophy in Semantic Dementia (SD) could be a tip-of-the-iceberg effect, in which case the atrophy is a marker of more generalized temporal lobe pathology. Alternatively, it could reflect the existence of a functional gradient within the temporal lobes, wherein more anterior regions provide the basis for greater specificity of representation. We tested these two hypotheses in a study of 15 subjects with mild AD, 17 with aMCI, and 16 aged control subjects and showed that coordinate and circumlocutory semantic error production on the Boston Naming Test was weakly correlated with ATL gray matter density, as determined by voxel-based morphometry. Additionally, we investigated whether these errors benefited from phonemic cues; similarly to SD, our AD patients showed only small improvement. Because there is minimal gradient of temporal lobe atrophy in AD or MCI, and therefore no basis for a tip-of-the-iceberg effect, these findings support the theory of a modest functional gradient in the temporal lobes, with the ATLs being involved in the naming of more specific objects.
Kecklund, L.J. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden). Dept. of Man-Technology Organization]; Svenson, O. [Stockholm University (Sweden). Dept. of Psychology]
1997-12-01
The present study investigated the relationships between the operator's appraisal of his own work situation and the quality of his own work performance, as well as self-reported errors in a nuclear power plant control room. In all, 98 control room operators from two nuclear power units filled out a questionnaire and several diaries during two operational conditions, annual outage and normal operation. As expected, the operators reported higher work demands in annual outage as compared to normal operation. In response to the increased demands, the operators reported that they used coping strategies such as increased effort, decreased aspiration level for work performance quality, and increased use of delegation of tasks to others. This way of coping does not reflect less positive motivation for the work during the outage period. Instead, the operators maintain the same positive motivation for their work, and succeed in being more alert during morning and night shifts. However, the operators feel less satisfied with their work result. The operators also perceive the risk of making minor errors as increasing during outage. (Author).
On the Threshold of Maximum-Distance Separable Codes
Kindarji, Bruno; Chabanne, Hervé
2010-01-01
Starting from a practical use of Reed-Solomon codes in a cryptographic scheme published at Indocrypt'09, this paper deals with the threshold of linear $q$-ary error-correcting codes. The security of this scheme is based on the intractability of polynomial reconstruction when there is too much noise in the vector. Our approach switches from this paradigm to an information-theoretical point of view: is there a class of elements that are so far away from the code that the list size is always superpolynomial? Or, dually speaking, is Maximum-Likelihood decoding almost surely impossible? We relate this issue to the decoding threshold of a code, and show that when the minimal distance of the code is high enough, the threshold effect is very sharp. In a second part, we give explicit lower bounds on the threshold of Maximum-Distance Separable codes such as Reed-Solomon codes, and compute the threshold for the toy example that motivates this study.
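A Maximum-Distance Separable code attains the Singleton bound d = n - k + 1 with equality; Reed-Solomon codes are the canonical example. A brute-force check for a toy Reed-Solomon code over GF(7) (parameters chosen for illustration, not from the paper):

```python
import itertools

p = 7            # prime field GF(7)
n, k = 6, 3      # evaluate degree-<k message polynomials at 6 distinct points
points = list(range(1, n + 1))

def encode(msg):
    """Reed-Solomon codeword: evaluate the message polynomial at each point."""
    return tuple(sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p
                 for x in points)

codewords = [encode(m) for m in itertools.product(range(p), repeat=k)]

# For a linear code, minimum distance = minimum weight of a nonzero codeword.
min_dist = min(sum(c != 0 for c in cw) for cw in codewords if any(cw))
print(min_dist, n - k + 1)  # MDS: minimum distance meets the Singleton bound
```

Here every nonzero message polynomial of degree < 3 has at most 2 roots, so every nonzero codeword has at least 6 - 2 = 4 nonzero positions, matching d = n - k + 1 = 4.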
Analgesic medication errors in North Carolina nursing homes.
Desai, Rishi J; Williams, Charrlotte E; Greene, Sandra B; Pierson, Stephanie; Caprio, Anthony J; Hansen, Richard A
2013-06-01
The objective of this study was to characterize analgesic medication errors and to evaluate their association with patient harm. The authors conducted a cross-sectional analysis of individual medication error incidents reported by North Carolina nursing homes to the Medication Error Quality Initiative (MEQI) during fiscal years 2010-2011. Bivariate associations of analgesic medication errors with patient factors, error-related factors, and impact on patients were tested with chi-square tests. A multivariate logistic regression model explored the relationship between type of analgesic medication error and patient harm, controlling for patient- and error-related factors. A total of 32,176 individual medication error incidents were reported over the 2-year period, 12.3% (n = 3949) of which were analgesic medication errors. Of these, opioid and nonopioid analgesics were involved in 3105 and 844 errors, respectively. Opioid errors were more likely to be wrong drug errors, wrong dose errors, and administration errors than nonopioid errors (P ...), and opioid errors were found to have higher odds of patient harm than nonopioid errors (odds ratio [OR] = 3, 95% confidence interval [CI]: 1.1-7.8). The authors conclude that opioid analgesics represent the majority of analgesic error reports, and these reports reflect an increased likelihood of patient harm compared with nonopioid analgesics.
Glosup, J.G.; Axelrod, M.C.
1996-08-05
The American National Standards Institute (ANSI) defines systematic error as An error which remains constant over replicative measurements. It would seem from the ANSI definition that a systematic error is not really an error at all; it is merely a failure to calibrate the measurement system properly because if error is constant why not simply correct for it? Yet systematic errors undoubtedly exist, and they differ in some fundamental way from the kind of errors we call random. Early papers by Eisenhart and by Youden discussed systematic versus random error with regard to measurements in the physical sciences, but not in a fundamental way, and the distinction remains clouded by controversy. The lack of a general agreement on definitions has led to a plethora of different and often confusing methods on how to quantify the total uncertainty of a measurement that incorporates both its systematic and random errors. Some assert that systematic error should be treated by non- statistical methods. We disagree with this approach, and we provide basic definitions based on entropy concepts, and a statistical methodology for combining errors and making statements of total measurement of uncertainty. We illustrate our methods with radiometric assay data.
DEM construction based on HASM and related error analysis
陈传法; 岳天祥; 杜正平; 卢毅敏
2010-01-01
The terrain representation error (Etr) is introduced. Taking a standard analytical surface and the Dongzhiyuan area of Gansu Province as study objects, Etr is extracted by window analysis, and a regression equation relating Etr to grid resolution is obtained by statistical analysis; the DEM root mean square error is then computed according to the error propagation law. Numerical results show that this method computes the accuracy of DEMs generated by high accuracy surface modelling (HASM) more accurately; with the same number of samples, HASM generates DEMs of higher accuracy and resolution than traditional methods (IDW, Spline and Kriging). In areas where known data are difficult to obtain, HASM provides an efficient tool for generating relatively accurate DEMs.
Correlated measurement error hampers association network inference.
Kaduk, Mateusz; Hoefsloot, Huub C J; Vis, Daniel J; Reijmers, Theo; van der Greef, Jan; Smilde, Age K; Hendriks, Margriet M W B
2014-09-01
Modern chromatography-based metabolomics measurements generate large amounts of data in the form of abundances of metabolites. An increasingly popular way of representing and analyzing such data is by means of association networks. Ideally, such a network can be interpreted in terms of the underlying biology. A property of chromatography-based metabolomics data is that the measurement error structure is complex: apart from the usual (random) instrumental error there is also correlated measurement error. This is intrinsic to the way the samples are prepared and the analyses are performed and cannot be avoided. The impact of correlated measurement errors on (partial) correlation networks can be large and is not always predictable. The interplay between relative amounts of uncorrelated measurement error, correlated measurement error and biological variation defines this impact. Using chromatography-based time-resolved lipidomics data obtained from a human intervention study we show how partial correlation based association networks are influenced by correlated measurement error. We show how the effect of correlated measurement error on partial correlations is different for direct and indirect associations. For direct associations the correlated measurement error usually has no negative effect on the results, while for indirect associations, depending on the relative size of the correlated measurement error, results can become unreliable. The aim of this paper is to generate awareness of the existence of correlated measurement errors and their influence on association networks. Time series lipidomics data is used for this purpose, as it makes it possible to visually distinguish the correlated measurement error from a biological response. Underestimating the phenomenon of correlated measurement error will result in the suggestion of biologically meaningful results that in reality rest solely on complicated error structures. Using proper experimental designs that allow
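The core phenomenon, a shared sample-preparation error inducing spurious association between biologically independent metabolites, can be simulated directly. Variable names and noise levels below are assumptions for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Two metabolites with NO biological association...
bio_a = rng.standard_normal(n)
bio_b = rng.standard_normal(n)

# ...measured with a shared sample-preparation error component, which makes
# the measurement errors of the two channels correlated.
shared = 0.8 * rng.standard_normal(n)
meas_a = bio_a + shared + 0.1 * rng.standard_normal(n)
meas_b = bio_b + shared + 0.1 * rng.standard_normal(n)

r_true = np.corrcoef(bio_a, bio_b)[0, 1]   # near 0: no biology
r_obs = np.corrcoef(meas_a, meas_b)[0, 1]  # clearly positive: error artefact
print(round(r_true, 2), round(r_obs, 2))
```

An association network built from the observed correlations would draw an edge between these two metabolites that rests solely on the error structure, exactly the failure mode the abstract warns about.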
Allodji, Rodrigue S; Thiébaut, Anne C M; Leuraud, Klervi; Rage, Estelle; Henry, Stéphane; Laurier, Dominique; Bénichou, Jacques
2012-12-30
A broad variety of methods for measurement error (ME) correction have been developed, but these methods have rarely been applied possibly because their ability to correct ME is poorly understood. We carried out a simulation study to assess the performance of three error-correction methods: two variants of regression calibration (the substitution method and the estimation calibration method) and the simulation extrapolation (SIMEX) method. Features of the simulated cohorts were borrowed from the French Uranium Miners' Cohort in which exposure to radon had been documented from 1946 to 1999. In the absence of ME correction, we observed a severe attenuation of the true effect of radon exposure, with a negative relative bias of the order of 60% on the excess relative risk of lung cancer death. In the main scenario considered, that is, when ME characteristics previously determined as most plausible from the French Uranium Miners' Cohort were used both to generate exposure data and to correct for ME at the analysis stage, all three error-correction methods showed a noticeable but partial reduction of the attenuation bias, with a slight advantage for the SIMEX method. However, the performance of the three correction methods highly depended on the accurate determination of the characteristics of ME. In particular, we encountered severe overestimation in some scenarios with the SIMEX method, and we observed lack of correction with the three methods in some other scenarios. For illustration, we also applied and compared the proposed methods on the real data set from the French Uranium Miners' Cohort study.
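The SIMEX idea (deliberately add extra measurement error at increasing levels, watch the estimate attenuate further, then extrapolate the trend back to zero measurement error) can be sketched for a simple linear model with classical measurement error. The quadratic extrapolant and all parameter values are illustrative choices, not those of the cohort analysis:

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta, sigma_u = 2000, 1.0, 0.7

x_true = rng.standard_normal(n)
y = beta * x_true + 0.2 * rng.standard_normal(n)
w = x_true + sigma_u * rng.standard_normal(n)   # error-prone exposure

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

# Naive estimate is attenuated by roughly 1/(1 + sigma_u^2) ~ 0.67 here.
naive = slope(w, y)

# SIMEX: add extra error at levels lam, average over replicates, then
# extrapolate the trend back to lam = -1 (quadratic fit, a common choice).
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = []
for lam in lams:
    reps = [slope(w + np.sqrt(lam) * sigma_u * rng.standard_normal(n), y)
            for _ in range(50)]
    est.append(np.mean(reps))
coef = np.polyfit(lams, est, 2)
simex = np.polyval(coef, -1.0)
print(round(naive, 2), round(simex, 2))  # simex lands much closer to beta = 1
```

This also illustrates the abstract's caveat: SIMEX needs sigma_u (the ME characteristics) to be known accurately; misspecifying it shifts the extrapolation and can over- or under-correct.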
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
张磊; 李珊; 彭舰; 陈黎; 黎红友
2014-01-01
In recent years, feature-opinion pairs classification of Chinese product review is one of the most important research field in Web data mining technology. In this paper, five types of Chinese dependency relationships for product review have been concluded based on the traditional English dependency grammar. The maximum entropy model is used to predict the opinion-relevant product feature relations. To train the model, a set of feature symbol combinations have been designed by means of Chinese dependency. The experiment result shows that the recall and F-score of our approach could reach 78.68%and 75.36%respectively, which is clearly superior to Hu’s adjacent based method and Popesecu’s pattern based method.%中文产品评论特征词与关联的情感词的分类是观点挖掘的重要研究内容之一。该文改进了英文依存关系语法，总结出5种常用的中文产品评论依存关系；利用最大熵模型进行训练，设计了基于依存关系的复合特征模板。实验证明，应用该复合模板进行特征-情感对的提取，系统的查全率和F-score相比于传统方法，分别提高到78.68%和75.36%。
On the Combination Procedure of Correlated Errors
Erler, Jens
2015-01-01
When averages of different experimental determinations of the same quantity are computed, each with statistical and systematic error components, then frequently the statistical and systematic components of the combined error are quoted explicitly. These are important pieces of information since statistical errors scale differently and often more favorably with the sample size than most systematical or theoretical errors. In this communication we describe a transparent procedure by which the statistical and systematic error components of the combination uncertainty can be obtained. We develop a general method and derive a general formula for the case of Gaussian errors with or without correlations. The method can easily be applied to other error distributions, as well. For the case of two measurements, we also define disparity and misalignment angles, and discuss their relation to the combination weight factors.
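For two measurements with uncorrelated statistical errors and a fully correlated systematic error, the standard BLUE (minimum-variance weighted average) combination yields both the average and a statistical/systematic split of its uncertainty. A numerical sketch with made-up inputs (the paper derives the general formula; this only illustrates the two-measurement case):

```python
import numpy as np

# Two measurements of the same quantity: uncorrelated statistical errors,
# plus a fully correlated systematic error (illustrative numbers).
x = np.array([10.2, 9.8])
stat = np.array([0.3, 0.4])
syst = np.array([0.2, 0.2])

# Full covariance: statistical on the diagonal, systematic fully correlated.
cov = np.diag(stat**2) + np.outer(syst, syst)

# BLUE weights minimise the variance of the combination.
ones = np.ones(2)
w = np.linalg.solve(cov, ones)
w /= w.sum()

mean = w @ x
total = np.sqrt(w @ cov @ w)
stat_comb = np.sqrt(w @ np.diag(stat**2) @ w)      # statistical component
syst_comb = np.sqrt(w @ np.outer(syst, syst) @ w)  # systematic component
assert np.isclose(total**2, stat_comb**2 + syst_comb**2)
print(round(mean, 3), round(total, 3))
```

The decomposition matters because, as the abstract notes, the statistical component will shrink with more data while the systematic component will not.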
Jodi DeAraugo
2016-02-01
While the role of the horse in riding hazards is well recognised, little attention has been paid to the role of specific theoretical psychological processes in humans in contributing to and mitigating risk. The injury, mortality and compensation claim rates for participants in the horse-racing industry, veterinary medicine and equestrian disciplines provide compelling evidence of the need for improved risk mitigation models. There is a paucity of theoretical principles regarding the risk of injury and mortality associated with human-horse interactions. In this paper we introduce and apply four psychological principles (context, loss of focus, global cognitive style, and the application of self as the frame of reference) as a potential approach to assessing and managing human-horse risks. When these principles produce errors that are combined with a rigid self-referenced point, it becomes clear how rapidly risk emerges and how other people and animals may repeatedly be put at risk over time. Here, with a focus on the thoroughbred racing industry, veterinary practice and equestrian disciplines, we review the merits of contextually applied strategies, an evolving reappraisal of risk, flexibility, and focused specifics of situations that may serve to modify human behaviour and mitigate risk.
Nakada, Masao; Okuno, Jun'ichi; Yokoyama, Yusuke
2016-02-01
Inference of globally averaged eustatic sea level (ESL) rise since the Last Glacial Maximum (LGM) depends strongly on the interpretation of relative sea level (RSL) observations at Barbados and Bonaparte Gulf, Australia, which are sensitive to the viscosity structure of Earth's mantle. Here we examine the LGM RSL changes at Barbados and Bonaparte Gulf (RSL_L(Bar) and RSL_L(Bon)), the differential RSL between the two sites (ΔRSL_L(Bar, Bon)), and the rate of change of the degree-two harmonic of Earth's geopotential due to the glacial isostatic adjustment (GIA) process (GIA-induced J̇2), in order to infer the ESL component and the viscosity structure of Earth's mantle. The differential RSL, ΔRSL_L(Bar, Bon), and the GIA-induced J̇2 are dominantly sensitive to the lower-mantle viscosity and nearly insensitive to the upper-mantle rheological structure and to GIA ice models with an ESL component of about 120-130 m. The comparison between the predicted and observationally derived ΔRSL_L(Bar, Bon) indicates a lower-mantle viscosity higher than ~2 × 10^22 Pa s, and the observationally derived GIA-induced J̇2 of -(6.0-6.5) × 10^-11 yr^-1 indicates two permissible solutions for the lower mantle, ~10^22 and (5-10) × 10^22 Pa s. That is, the effective lower-mantle viscosity inferred from these two observational constraints is (5-10) × 10^22 Pa s. The LGM RSL changes at both sites, RSL_L(Bar) and RSL_L(Bon), are also sensitive to the ESL component and the upper-mantle viscosity as well as the lower-mantle viscosity. The permissible upper-mantle viscosity increases with decreasing ESL component, owing to the sensitivity of the LGM sea level at Bonaparte Gulf (RSL_L(Bon)) to the upper-mantle viscosity, and the inferred upper-mantle viscosity for adopted lithospheric thicknesses of 65 and 100 km is (1-3) × 10^20 Pa s for ESL ~130 m and (4-10) × 10^20 Pa s for ESL ~125 m. The former solution of (1-3) × 10^20
曾杰; 张永兴; 靳晓光
2011-01-01
Based on an analysis of rock burst prediction criteria at home and abroad, the mechanical, rock-integrity, energy-storage and brittleness conditions required for rock burst occurrence are selected as prediction indices. The concept of relative membership degree is introduced for rock burst prediction; the fuzzy matrix of relative membership degrees and the weights of the prediction indices are calculated, information entropy is used to describe and compare the uncertainty in rock burst evaluation, and a weighted generalized distance is defined to characterize the differences among rock bursts. A fuzzy optimization model for rock burst prediction is established on the basis of the maximum entropy principle and applied to several underground rock engineering cases; the predictions are essentially consistent with the results of other methods and with the actual outcomes. Finally, the model is applied to rock burst prediction for the Putaoshan tunnel, where the predictions agree well with the observed rock bursts.
Probabilistic quantum error correction
Fern, J; Fern, Jesse; Terilla, John
2002-01-01
There are well-known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and, as an application, study how the nine-qubit code, the seven-qubit code, and the five-qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.
Genetic algorithm-based wide-band deterministic maximum likelihood direction finding algorithm
Anonymous
2005-01-01
Wide-band direction finding is one of the hot and difficult problems in array signal processing. This paper generalizes the narrow-band deterministic maximum likelihood direction finding algorithm to the wide-band case and constructs the corresponding objective function, then applies a genetic algorithm for nonlinear global optimization. The direction of arrival is estimated without preprocessing of the array data, so the algorithm eliminates the effect of pre-estimates on the final estimation. The algorithm is applied to a uniform linear array, and extensive simulation results demonstrate its efficacy. The simulations also yield the relation between the estimation error and the parameters of the genetic algorithm.
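A minimal sketch of the genetic-algorithm step described above. The abstract does not reproduce the wide-band ML objective, so the code maximizes a hypothetical multimodal pseudo-likelihood over a single arrival angle; all function names and parameter values are illustrative, not the paper's.

```python
import numpy as np

def genetic_maximize(objective, bounds, pop_size=60, generations=120,
                     mutation_sigma=2.0, elite=2, seed=0):
    """Minimal real-coded genetic algorithm for global maximization:
    tournament selection, arithmetic crossover, Gaussian mutation, elitism."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=pop_size)
    for _ in range(generations):
        fitness = objective(pop)
        order = np.argsort(fitness)[::-1]          # sort best-first
        pop, fitness = pop[order], fitness[order]
        # Tournament selection: each parent is the fitter of a random pair.
        idx = rng.integers(0, pop_size, size=(2, pop_size - elite))
        parents = np.where(fitness[idx[0]] > fitness[idx[1]],
                           pop[idx[0]], pop[idx[1]])
        # Arithmetic crossover with a random blend factor, then mutation.
        mates = rng.permutation(parents)
        alpha = rng.uniform(size=parents.size)
        children = alpha * parents + (1 - alpha) * mates
        children += rng.normal(0.0, mutation_sigma, size=children.size)
        # Keep the elite unchanged, clip children back into the search box.
        pop = np.clip(np.concatenate([pop[:elite], children]), lo, hi)
    fitness = objective(pop)
    return pop[np.argmax(fitness)]

# Hypothetical multimodal "likelihood" with its global peak at 25 degrees.
def pseudo_likelihood(theta):
    return (np.exp(-((theta - 25.0) / 10.0) ** 2)
            + 0.5 * np.exp(-((theta + 40.0) / 5.0) ** 2))

best = genetic_maximize(pseudo_likelihood, bounds=(-90.0, 90.0))
```

Because elitism preserves the incumbent best solution, the GA avoids the local peak at -40 degrees and refines the global one, which is the behavior the paper relies on for its nonlinear ML objective.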
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
ROBOT'S MOTION ERROR AND ONLINE COMPENSATION BASED ON FORCE SENSOR
GAN Fangjian; LIU Zhengshi; REN Chuansheng; ZHANG Ping
2007-01-01
Robot dynamic motion error and its on-line compensation based on a multi-axis force sensor are dealt with. The causes of the error are identified and the relations governing it are derived. A motion equation of the robot's end effector including the error is established; an error matrix and an error compensation matrix for the motion equation are then defined. An on-line error compensation method is put forward to decrease the displacement error, which is on the order of a millimeter, as shown by simulation results for a PUMA562 robot.
Large errors and severe conditions
Smith, D L; Van Wormer, L A
2002-01-01
Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...
M. Buchwitz
2013-05-01
Carbon Monitoring Satellite (CarbonSat) is one of two candidate missions for ESA's Earth Explorer 8 (EE8) satellite, the selected one to be launched around the end of this decade. The objective of the CarbonSat mission is to improve our understanding of natural and anthropogenic sources and sinks of the two most important anthropogenic greenhouse gases (GHG), carbon dioxide (CO2) and methane (CH4). The unique feature of CarbonSat is its "GHG imaging capability", which is achieved via a combination of high spatial resolution (2 km × 2 km) and good spatial coverage (wide swath and gap-free across- and along-track ground sampling). This capability enables global imaging of localized strong emission sources such as cities, power plants, methane seeps, landfills and volcanoes, and better disentangling of natural and anthropogenic GHG sources and sinks. Source/sink information can be derived from the retrieved atmospheric column-averaged mole fractions of CO2 and CH4, i.e. XCO2 and XCH4, via inverse modeling. Using the most recent instrument and mission specification, an error analysis has been performed using the BESD/C retrieval algorithm. We focus on systematic errors due to aerosols and thin cirrus clouds, as this is the dominating error source, especially with respect to XCO2 systematic errors. To compute the errors for each single CarbonSat observation in a one-year time period, we have developed an error parameterization scheme based on six relevant input parameters: we consider solar zenith angle, surface albedo in two bands, aerosol and cirrus optical depth, and cirrus altitude variations, but neglect, for example, aerosol type variations. Using this method we have generated and analyzed one year of simulated CarbonSat observations. Using this data set we estimate that scattering-related systematic errors are mostly (approx. 85%) below 0.3 ppm for XCO2 and 7 ppb for XCH4 (1-sigma). The number of quality filtered observations over cloud and
Error effects in anterior cingulate cortex reverse when error likelihood is high
Jessup, Ryan K.; Busemeyer, Jerome R.; Brown, Joshua W.
2010-01-01
Strong error-related activity in medial prefrontal cortex (mPFC) has been shown repeatedly with neuroimaging and event-related potential studies for the last several decades. Multiple theories have been proposed to account for error effects, including comparator models and conflict detection models, but the neural mechanisms that generate error signals remain in dispute. Typical studies use relatively low error rates, confounding the expectedness and the desirability of an error. Here we show with a gambling task and fMRI that when losses are more frequent than wins, the mPFC error effect disappears, and moreover, exhibits the opposite pattern by responding more strongly to unexpected wins than losses. These findings provide perspective on recent ERP studies and suggest that mPFC error effects result from a comparison between actual and expected outcomes. PMID:20203206
Maximum Power Point Tracking of Photovoltaic System for Traffic Light Application
Riza Muhida
2013-07-01
A photovoltaic traffic light system is a significant application of a renewable energy source. The development of the system is an alternative effort by the local authority to reduce the expenditure of paying a power supplier whose power comes from a conventional energy source. Since photovoltaic (PV) modules still have relatively low conversion efficiency, a maximum power point tracking (MPPT) control method is applied to the traffic light system. MPPT is intended to capture the maximum power during daytime in order to charge the battery at the maximum rate, so that the battery can supply the load at night or on cloudy days. The MPPT stage is a DC-DC converter that can step the voltage up or down in order to achieve the maximum power using pulse width modulation (PWM) control. In experiments, the operating voltage obtained with MPPT was 16.454 V, an error of 2.6% compared with the maximum power point voltage of the PV module, which is 16.9 V. Based on this result, the MPPT control works successfully in delivering close to the maximum power from the PV module to the battery.
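The abstract describes MPPT as a PWM-controlled DC-DC converter but does not name the tracking algorithm, so the sketch below uses the common perturb-and-observe heuristic as an assumption. The power curve is a toy stand-in (not a real PV module model) with its maximum power point placed at 16.9 V to match the module voltage quoted in the abstract.

```python
def pv_power(v):
    """Toy PV power curve (watts) peaking at 16.9 V; illustrative only."""
    return max(0.0, 50.0 - 0.5 * (v - 16.9) ** 2)

def perturb_and_observe(v=12.0, step=0.05, iterations=500):
    """Climb the P-V curve: keep perturbing the operating voltage in the
    direction that last increased power; reverse when power drops."""
    p = pv_power(v)
    direction = 1.0
    for _ in range(iterations):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:            # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v

v_mpp = perturb_and_observe()
```

The tracker settles into a small oscillation around the maximum power point, which is the characteristic steady-state behavior of perturb-and-observe; in hardware the returned voltage would set the converter's PWM duty cycle.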
Application of Error-Related Negativity (ERN) in Psychopathological Research
张华; 刘春雷; 王一峰; 张庆林
2009-01-01
The error-related negativity (ERN) is an electroencephalographic component elicited by behavioral errors, peaking about 50 ms after an erroneous response, with its dipole source localized near the anterior cingulate cortex (ACC). The ERN observed in classic error-processing paradigms may reflect ACC functions such as error detection, conflict monitoring, reinforcement learning, and emotional/motivational processing. A large body of research indicates that hyperactive and hypoactive error-related brain activity may be associated with internalizing and externalizing disorders in psychopathology, respectively. Many questions remain to be explored regarding the ERN as an endophenotype of internalizing and externalizing disorders.
樊宇; 王宇楠; 王俊杰; 曹奇
2011-01-01
Reducing noise is an important step in point cloud data processing in reverse engineering, and it has a great impact on the precision of the final model. For point cloud data obtained by laser scanning with the light-sectioning method, this paper puts forward a triangle filter method based on geometric relations for reducing the noise in point cloud data. Experiments show that the triangle filter method performs denoising effectively.
Regression calibration with heteroscedastic error variance.
Spiegelman, Donna; Logan, Roger; Grove, Douglas
2011-01-01
The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses' Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice.
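As a rough illustration of the regression calibration idea underlying the paper, here is the simple univariate, homoscedastic case (the paper's estimator generalizes this to heteroscedastic error variance): the attenuated naive slope is divided by the calibration slope of E[X | W] estimated from validation data containing the gold standard. All data and parameter values are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_val = 5000, 500

# True covariate X, error-prone surrogate W = X + U, continuous outcome Y.
x = rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, 0.7, n)          # measurement error, var(U) = 0.49
y = 2.0 * x + rng.normal(0.0, 1.0, n)    # true slope beta = 2

# Naive regression of Y on the surrogate W is attenuated toward zero.
beta_naive = np.cov(w, y)[0, 1] / np.var(w)

# Validation data with the gold standard X observed alongside W.
xv = rng.normal(0.0, 1.0, n_val)
wv = xv + rng.normal(0.0, 0.7, n_val)

# Regression calibration: slope of E[X | W] from the validation data,
# then divide the naive coefficient by it.
lam = np.cov(wv, xv)[0, 1] / np.var(wv)
beta_rc = beta_naive / lam
```

With var(X) = 1 and var(U) = 0.49, the attenuation factor is 1/1.49 ≈ 0.67, so the naive slope sits near 1.34 and the calibrated slope is pulled back toward the true value of 2.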
Medical errors recovered by critical care nurses.
Dykes, Patricia C; Rothschild, Jeffrey M; Hurley, Ann C
2010-05-01
The frequency and types of medical errors are well documented, but less is known about potential errors that were intercepted by nurses. We studied the type, frequency, and potential harm of recovered medical errors reported by critical care registered nurses (CCRNs) during the previous year. Nurses are known to protect patients from harm. Several studies on medical errors found that there would have been more medical errors reaching the patient had not potential errors been caught earlier by nurses. The Recovered Medical Error Inventory, a 25-item empirically derived and internally consistent (alpha = .90) list of medical errors, was posted on the Internet. Participants were recruited via e-mail and healthcare-related listservs using a nonprobability snowball sampling technique. Investigators e-mailed contacts working in hospitals or who managed healthcare-related listservs and asked the contacts to pass the link on to others with contacts in acute care settings. During 1 year, 345 CCRNs reported that they recovered 18,578 medical errors, of which they rated 4,183 as potentially lethal. Surveillance, clinical judgment, and interventions by CCRNs to identify, interrupt, and correct medical errors protected seriously ill patients from harm.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding could help to decrease the impacts of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over one in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
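A toy instance of the kind of linear program described above, sketched with SciPy. It schedules a single flow on a 3-hop chain whose three unit-rate links form one conflict clique (so their active times share one unit of time), without network coding; the paper's MMF formulation with coding-modified conflict relations is considerably richer.

```python
from scipy.optimize import linprog

# Variables: [f, t1, t2, t3], where f is the end-to-end throughput and t_i
# is the fraction of time link i is active.
c = [-1.0, 0.0, 0.0, 0.0]        # maximize f  ->  minimize -f
A_ub = [
    [0.0, 1.0, 1.0, 1.0],        # t1 + t2 + t3 <= 1  (one conflict clique)
    [1.0, -1.0, 0.0, 0.0],       # f <= t1  (unit link rate)
    [1.0, 0.0, -1.0, 0.0],       # f <= t2
    [1.0, 0.0, 0.0, -1.0],       # f <= t3
]
b_ub = [1.0, 0.0, 0.0, 0.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
throughput = res.x[0]            # 1/3 for this toy chain
```

Each packet must cross all three mutually conflicting links, so the optimum is f = 1/3; relaxing the conflict constraints (as network coding effectively does in the paper's model) can only raise the LP optimum.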
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
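To make the principle concrete, here is Jaynes' classic dice illustration (an example of the principle, not taken from the article itself): the maximum entropy distribution over die faces 1-6 subject only to a mean constraint has the exponential form p_i ∝ exp(λi), with the multiplier λ fixed by the constraint.

```python
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7, dtype=float)
target_mean = 4.5                     # the only information we assume

def mean_gap(lam):
    """Mean of p_i ∝ exp(lam * i) minus the target mean."""
    w = np.exp(lam * faces)
    return (faces * w).sum() / w.sum() - target_mean

# Solve for the Lagrange multiplier, then normalize the weights.
lam = brentq(mean_gap, -10.0, 10.0)
p = np.exp(lam * faces)
p /= p.sum()
```

Because the constrained mean 4.5 exceeds the uniform mean 3.5, λ comes out positive and the distribution tilts toward the high faces, while remaining the least biased distribution consistent with the single constraint.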
A novel artificial fish swarm algorithm for recalibration of fiber optic gyroscope error parameters.
Gao, Yanbin; Guan, Lianwu; Wang, Tingjun; Sun, Yunlong
2015-05-05
The artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, widely utilized for optimization purposes. Fiber optic gyroscope (FOG) error parameters such as scale factors, biases and misalignment errors are relatively unstable, especially under environmental disturbances and with the aging of fiber coils. These uncalibrated error parameters are the main reason that the precision of a FOG-based strapdown inertial navigation system (SINS) degrades. This research is mainly on the application of a novel artificial fish swarm algorithm (NAFSA) to FOG error coefficient recalibration/identification. First, the NAFSA avoids the demerits of the standard AFSA during optimization (e.g., failure to use the artificial fishes' previous experiences, lack of balance between exploration and exploitation, and high computational cost). To address these weak points, the functional behaviors and overall procedure of AFSA have been improved, with some parameters eliminated and several supplementary parameters added. Second, a hybrid FOG error coefficient recalibration algorithm has been proposed based on NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for FOG error coefficient recalibration. The NAFSA is then verified with simulations and experiments, and its advantages are compared with those of the conventional calibration method and the optimal AFSA. Results demonstrate the high efficiency of the NAFSA for FOG error coefficient recalibration.
Textbook Errors, 136: The Reducing Action of Sodium Borohydride.
Todd, David
1979-01-01
This column generally relates errors which have been discovered in textbooks. The error discussed in this issue is the prevalence of erroneous ideas in organic chemistry textbooks, related to the chemistry of sodium borohydride. (Author/SA)
A fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation.
Li, Haisen S; Romeijn, H Edwin; Dempsey, James F
2006-09-01
orientation of the beam with respect to the dose grid was also investigated. The maximum acceptable dose grid size depends on the gradient of dose profile and in turn the range of proton beam. In the case that only the phantom scattering was considered and that the beam was aligned with the dose grid, grid sizes from 0.4 to 6.8 mm were required for proton beams with ranges from 2 to 30 cm for 2% error limit at the Bragg peak point. A near linear relation between the maximum acceptable grid size and beam range was observed. For this analysis model, the resolution requirement was not significantly related to the orientation of the beam with respect to the grid.
Correction for quadrature errors
Netterstrøm, A.; Christensen, Erik Lintz
1994-01-01
In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signal...
1998-01-01
To err is human. Since the 1960s, most second language teachers and language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.
Li, Duo-Fang; Cao, Tian-Guang; Geng, Jin-Peng; Gu, Jian-Zhong; An, Hai-Long; Zhan, Yong
2015-09-07
The stochastic Eigen model proposed by Feng et al. (2007) (Journal of Theoretical Biology, 246, 28) showed that the error threshold is no longer a phase transition point but a crossover region whose width depends on the strength of random fluctuations in the environment. The underlying cause of this phenomenon has not yet been well examined. In this article, we adopt a single-peak Gaussian-distributed fitness landscape instead of a constant one to investigate and analyze the change of the error threshold and the statistical properties of the quasi-species population. We find a roughly linear relation between the width of the error threshold and the fitness fluctuation strength. For a given quasi-species, the fluctuation of the relative concentration has a minimum, with a normal distribution of the relative concentration, at the maximum of the averaged relative concentration; it reaches its largest value, with a bimodal distribution of the relative concentration, near the error threshold. These results deepen our understanding of quasi-species and the error threshold and are heuristic for exploring practicable antiviral strategies.
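For orientation, the deterministic single-peak Eigen model (constant fitness, no back mutation) already locates the error threshold analytically; the fluctuating fitness landscape studied above smears this sharp threshold into a crossover region. A minimal sketch of the deterministic baseline, with all parameter values illustrative:

```python
import math

def master_concentration(u, a=10.0, L=100):
    """Stationary relative concentration of the master sequence in the
    single-peak Eigen model: fitness a > 1 for the master, 1 for mutants,
    per-site copying error rate u, sequence length L, no back mutation.
    x = (a*Q - 1) / (a - 1) with overall fidelity Q = (1 - u)**L,
    vanishing at the error threshold Q = 1/a, i.e. u_c ≈ ln(a)/L."""
    Q = (1.0 - u) ** L
    return max(0.0, (a * Q - 1.0) / (a - 1.0))

u_c = math.log(10.0) / 100      # threshold error rate for a=10, L=100
below = master_concentration(0.5 * u_c)   # well below threshold: master survives
above = master_concentration(1.5 * u_c)   # above threshold: master is lost
```

In this deterministic limit the master concentration drops to exactly zero at u_c; the paper's point is that random fitness fluctuations replace this sharp transition with a crossover whose width grows with the fluctuation strength.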
ERROR AND ERROR CORRECTION AT ELEMENTARY LEVEL
1994-01-01
Introduction Errors are unavoidable in language learning, however, to a great extent, teachers in most middle schools in China regard errors as undesirable, a sign of failure in language learning. Most middle schools are still using the grammar-translation method which aims at encouraging students to read scientific works and enjoy literary works. The other goals of this method are to gain a greater understanding of the first language and to improve the students’ ability to cope with difficult subjects and materials, i.e. to develop the students’ minds. The practical purpose of using this method is to help learners pass the annual entrance examination. "To achieve these goals, the students must first learn grammar and vocabulary,... Grammar is taught deductively by means of long and elaborate explanations... students learn the rules of the language rather than its use." (Tang Lixing, 1983:11-12)
Errors on errors - Estimating cosmological parameter covariance
Joachimi, Benjamin
2014-01-01
Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.
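One concrete instance of the finite-realisation bias discussed above can be sketched as follows: for Gaussian data, the inverse of a sample covariance estimated from n realisations overestimates precision by the factor (n - 1)/(n - p - 2), which the well-known Hartlap debiasing factor removes. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
p, n, trials = 5, 20, 2000          # p parameters, n realisations per estimate
true_cov = np.eye(p)

ratios = []
for _ in range(trials):
    data = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
    sample_cov = np.cov(data, rowvar=False)   # unbiased covariance estimate
    precision = np.linalg.inv(sample_cov)     # biased precision estimate
    ratios.append(np.trace(precision) / p)    # would average to 1 if unbiased

raw_bias = np.mean(ratios)                    # ≈ (n - 1)/(n - p - 2) ≈ 1.46 here
corrected = raw_bias * (n - p - 2) / (n - 1)  # Hartlap factor restores ≈ 1
```

Even though the sample covariance itself is unbiased, its inverse is not; in a likelihood analysis this inflates the apparent precision of cosmological parameters, which is exactly the effect the reviewed work quantifies.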
曾丽萍; 李桂芳; 杨薇薇
2012-01-01
Objective: To analyze the causes of medication errors during operations on cancer patients and to explore preventive measures. Methods: Thirty-four medication error events in the operating room were retrospectively analyzed. Results: Among the 34 cases, 58.8% occurred in third-grade operations and 35.3% in fourth-grade operations. By error type, administration at the wrong time accounted for 44.1%, followed by omitted administration at 17.6%. Antibiotics were involved in 73.5% of the errors; 91.2% of the medication errors did not harm the patients. Among human factors, understaffing (61.8%) and failure to follow operating procedures (17.6%) were the main causes. Nurses with fewer than 5 years of working experience accounted for 76.5% of those who made medication errors. Conclusion: Medication errors may be reduced by strengthening intraoperative medication management, staffing the operating room appropriately, and improving junior nurses' training in medication-related knowledge and skills, sense of responsibility, and professional ethics.
PV Maximum Power-Point Tracking by Using Artificial Neural Network
Farzad Sedaghati; Ali Nahavandi; Mohammad Ali Badamchizadeh; Sehraneh Ghaemi; Mehdi Abedinpour Fallah
2012-01-01
In this paper, the use of an artificial neural network (ANN) for tracking the maximum power point is discussed. The error back-propagation method is used to train the neural network. The neural network has the advantage of fast and precise tracking of the maximum power point. In this method, the neural network is used to specify the reference voltage of the maximum power point under different atmospheric conditions. By properly controlling a DC-DC boost converter, tracking of the maximum power point is feasible. To verify...
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Study of thin-film resistor resistance error
Spirin V. G.
2009-10-01
A relationship between a thin-film resistor resistance error and mask misalignment with a substrate conductive layer at the second photolithography stage for a thin-film resistor design in which the resistive element does not overlap conductor pads is studied. The error value is at a maximum when the resistor aspect ratio is equal to 1.0.
Uncorrected refractive errors.
Naidoo, Kovin S; Jaggernath, Jyoti
2012-01-01
Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerable among children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars for addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.
Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca
2015-09-01
Errors occur in approximately 4 % of radiologic interpretations in daily practice, and discrepancies appear in 2-20 % of reports. Fortunately, most of them are minor errors or, if serious, are found and corrected with sufficient promptness; obviously, diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be grouped into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. Misdiagnosis/misinterpretation rates rise in the emergency setting and in the early stages of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcification and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and finally neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly to the treatment team.
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
Kristensen, Dennis; Rahbek, Anders
In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where tests of […] symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory, with good size and power properties for reasonable sample sizes.
Sudden Possibilities: Porpoises, Eggcorns, and Error
Crovitz, Darren
2011-01-01
This article discusses how amusing mistakes can make for serious language instruction. The notion that close analysis of language errors can yield insight into how one thinks and learns seems fundamentally obvious. Yet until relatively recently, language errors were primarily treated as indicators of learner deficiency rather than opportunities to…
2012-12-01
Introduction: An emergency situation is one of the factors influencing human error. The aim of this research was to evaluate human error in an emergency situation of fire and explosion at an oil company warehouse in Hamadan city, applying the Human Error Probability Index (HEPI). Material and Method: First, the scenario of the fire-and-explosion emergency at the oil company warehouse was designed, and then a maneuver against it was performed. The scaled muster questionnaire for the maneuver was completed in the next stage. Collected data were analyzed to calculate the probability of success for the 18 actions required in an emergency situation, from the start of the muster to the final action of reaching a temporary safe shelter. Result: The results showed that the highest probability of error occurrence, 32.4 %, was related to making the workplace safe (evaluation phase), and the lowest probability of error occurrence, 1.8 %, was in detecting the alarm (awareness phase). The highest severity of error was in the evaluation phase and the lowest in the awareness and recovery phases. The maximum risk level was related to evaluating the exit routes, selecting one route, and choosing an alternative exit route, and the minimum risk level was related to the four evaluation phases. Conclusion: To reduce the risk of error in the exit phases of an emergency situation, the following actions are recommended, based on the findings of this study: periodic evaluation of the exit phases and modification of them if necessary, and conducting more maneuvers and analyzing their results with sufficient feedback to the employees.
王彩丽
2014-01-01
University students' syntax errors in English writing are mainly caused by English-Chinese thinking differences. Linguistic Relativity, also called the "Sapir-Whorf Hypothesis", is an important theory for exploring the relationship among language, thinking and culture. From the perspective of Linguistic Relativity, this paper analyzes university students' common syntax errors in English writing resulting from Chinese and Western thinking styles and their differences, and proposes corresponding countermeasures from the perspective of culture, hoping to provide some inspiration for the theoretical and practical research of college English teaching.
韩旭; 邵玉婷; 孙钦凤; 郭泾
2015-01-01
Objective: To investigate the effects of orthodontic intervention on maximum intercuspation (MIC)-centric relation (CR) condylar displacement in patients with or without temporomandibular disorders (TMD). Methods: A total of 31 orthodontic patients aged 16 to 45 years were selected and divided into the TMD group (n = 15) and non-TMD group (n = 16). Records of MIC and CR taken before and after orthodontic intervention were compared. Results: The two groups had different MIC-CR displacement before and after treatment. There were more changes in the TMD group, and the changes were mostly favorable. The MIC-CR condylar displacement was correlated with the symptom checklist (SCL) score. Conclusion: Orthodontic intervention affects the condylar position of patients, especially those with TMD. Orthodontists therefore need to understand and pay attention to the effect of malocclusion on TMD and to the limitations of measurement of condylar displacement (MCD) in diagnosis.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is also presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Measurement error in geometric morphometrics.
Fruciano, Carmelo
2016-06-01
Geometric morphometrics, a set of methods for the statistical analysis of shape once hailed as a revolutionary advancement in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e. variation due to bias is incorporated in the analysis and treated as biologically meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset.
Remizov, Ivan D
2009-01-01
In this note, we give a representation of the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given compact metric space. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
Inpatients’ medical prescription errors
Aline Melo Santos Silva
2009-09-01
Objective: To identify and quantify the most frequent errors in inpatients' medical prescriptions. Methods: A survey of prescription errors was performed on inpatients' medical prescriptions, from July 2008 to May 2009, for eight hours a day. Results: A total of 3,931 prescriptions was analyzed and 362 (9.2 %) prescription errors were found, which involved the healthcare team as a whole. Among the 16 types of errors detected, the most frequent occurrences were lack of information, such as dose (66 cases, 18.2 %) and administration route (26 cases, 7.2 %); 45 cases (12.4 %) of wrong transcriptions into the information system; 30 cases (8.3 %) of duplicate drugs; doses higher than recommended (24 events, 6.6 %); and 29 cases (8.0 %) of prescriptions indicating allergy without specifying it. Conclusion: Medication errors are a reality at hospitals. All healthcare professionals are responsible for the identification and prevention of these errors, each one in his/her own area. The pharmacist is an essential professional in the drug therapy process. All hospital organizations need a pharmacist team responsible for analyzing medical prescriptions before the preparation, dispensation and administration of drugs to inpatients. This study showed that the pharmacist improves inpatient safety and the success of the prescribed therapy.
Modified maximum likelihood registration based on information fusion
Yongqing Qi; Zhongliang Jing; Shiqiang Hu
2007-01-01
The bias estimation of passive sensors is considered based on information fusion in multi-platform multisensor tracking system. The unobservable problem of bearing-only tracking in blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of unobservable problem in the blind spot and can estimate the biases more rapidly and accurately than maximum likelihood method. It is statistically efficient since the standard deviation of bias estimation errors meets the theoretical lower bounds.
FUZZY ECCENTRICITY AND GROSS ERROR IDENTIFICATION
Anonymous
2006-01-01
The dominant and recessive effects produced by exceptional interference in a measurement system are analyzed on the basis of its response characteristics, and a gross error model of fuzzy clustering based on fuzzy relations and the fuzzy equivalence relation is built. The concept and calculation formula of fuzzy eccentricity are defined to deduce an evaluation rule and function for gross errors; on this basis, a fuzzy clustering method for separating and discriminating gross errors is developed. Applied in a dynamic circular division measurement system, the method can identify and eliminate gross errors in measured data and reduce the dispersion of the measured data. Experimental results indicate that use of the method and model improves the repetitive precision of the system by 80 % over the foregoing system, reaching 3.5 s, with an angle measurement error of less than 7 s.
Accurate and fast methods to estimate the population mutation rate from error prone sequences
Miyamoto Michael M
2009-08-01
Background: The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results: This strategy is implemented under an infinite-sites model that focuses on only the internal branches of the sample genealogy, where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent full maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design along with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion: In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum-likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
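The singleton-ignoring strategy can be sketched in a few lines. Under the infinite-sites model, the expected number of sites where a derived base appears i times is θ/i, so dropping the i = 1 (singleton) class leaves an expected count of shared polymorphisms of θ · Σ_{i=2}^{n-1} 1/i, which yields a Watterson-type estimator. The function below is a minimal illustration of that idea, not the authors' published implementation; the function name and input format are assumptions:

```python
def watterson_theta_no_singletons(seqs):
    """Watterson-type estimate of theta that ignores singleton sites.

    seqs: list of equal-length aligned sequences (strings).
    Only shared polymorphisms (every base present in >= 2 sequences)
    are counted, so random sequencing errors, which typically appear
    as singletons, do not inflate the estimate.
    """
    n = len(seqs)
    shared = 0
    for column in zip(*seqs):
        counts = {}
        for base in column:
            counts[base] = counts.get(base, 0) + 1
        # variable site where each alternative base occurs at least twice
        if len(counts) > 1 and min(counts.values()) >= 2:
            shared += 1
    # harmonic sum with the singleton (i = 1) term removed
    denom = sum(1.0 / i for i in range(2, n))
    return shared / denom
```

For four sequences with two shared polymorphic sites, the denominator is 1/2 + 1/3 = 5/6, giving an estimate of 2.4; a site polymorphic in only one sequence contributes nothing.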
DONG Sheng; CHI Kun; ZHANG Qiyi; ZHANG Xiangdong
2012-01-01
Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in an estuary area. The GMM combines grey system theory and Markov theory into a higher-precision model: it takes advantage of the grey system to predict the trend values and uses Markov theory to forecast the fluctuation values, thus giving forecast results that incorporate both kinds of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM(1,1) model based on the data series; 2) estimate the trend values; 3) establish a Markov model based on the relative error series; 4) modify the relative errors produced in step 2, obtaining second-order estimates; 5) compare the results with measured data and assess the accuracy. The historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China, are used to calibrate and verify the proposed model according to the above steps. Every 25 years' data are regarded as one hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to the 10 other hydrological stations in the same estuary, and the forecast results for all of them are good or acceptable. The feasibility and effectiveness of this new forecasting model are thus demonstrated.
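Step 1 of the procedure, the GM(1,1) model, can be sketched as follows: the raw series is accumulated, the grey differential relation x⁽⁰⁾(k) + a·z⁽¹⁾(k) = b is fitted by least squares using the mean background values z⁽¹⁾ of the accumulated series, and forecasts come from the exponential response, differenced back to the original scale. This is a generic GM(1,1) sketch under standard grey-system conventions, not the authors' code; the Markov correction of the residuals (steps 3-4) is omitted:

```python
import numpy as np

def gm11(x, n_forecast=1):
    """Fit a GM(1,1) grey model to the series x and forecast ahead.

    Returns (fitted, forecast): in-sample fitted values and
    n_forecast out-of-sample predictions.
    """
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                      # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])          # mean background values
    # least-squares solution of x(k) = -a * z1(k) + b
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    k = np.arange(len(x) + n_forecast)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a   # response of GM(1,1)
    x0_hat = np.concatenate([[x[0]], np.diff(x1_hat)])  # restore original scale
    return x0_hat[:len(x)], x0_hat[len(x):]
```

On a nearly exponential series of annual maxima the one-step forecast tracks the trend closely; the Markov step would then redistribute the relative errors among fluctuation states.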
Zhang, Ting; Ye, Wenhua; Liang, Ruijun; Lou, Peihuang; Yang, Xiaolan
2013-01-01
Machine tool thermal error is an important cause of poor machining accuracy, and thermal error compensation is a primary technology in accuracy control. To build a thermal error model, temperature variables need to be divided into several groups at an appropriate threshold. Currently, the group threshold value is mainly determined by the researcher's experience, and few studies focus on the threshold used in temperature variable grouping. Since the threshold is important in error compensation, this paper aims to find an optimal threshold to realize temperature variable optimization in thermal error modeling. Firstly, the correlation coefficient is used to express the membership grade of temperature variables, and the theory of the fuzzy transitive closure is applied to obtain the relational matrix of the temperature variables. The concepts of compactness degree and separability degree are introduced, and an evaluation model of temperature variable clustering is built. The optimal threshold and the best temperature variable clustering are obtained by maximizing the evaluation model. Finally, correlation coefficients between the temperature variables and the thermal error are calculated in order to select the optimum temperature variables for thermal error modeling. An experiment was conducted on a precise horizontal machining center, in which three displacement sensors measured the spindle thermal error and twenty-nine temperature sensors monitored the machining center temperature. The experimental results show that the new method successfully determined a best threshold value interval and chose seven temperature variables from the twenty-nine temperature measuring points. The model residual in the z direction is within 3 μm. The proposed variable optimization method has a simple computing process and good modeling accuracy, making it well suited for thermal error compensation.
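The fuzzy transitive closure step can be sketched as follows: starting from a similarity matrix built from correlation coefficients, repeated max-min self-composition yields the transitive closure, and cutting the closure at a threshold λ partitions the temperature variables into groups. This is a generic sketch of the standard fuzzy-clustering construction, not the paper's implementation; the compactness/separability evaluation model used to pick the optimal λ is omitted:

```python
import numpy as np

def fuzzy_transitive_closure(R):
    """Max-min transitive closure of a fuzzy similarity matrix R."""
    R = np.asarray(R, dtype=float)
    while True:
        # max-min composition: (R o R)[i, j] = max_k min(R[i, k], R[k, j])
        R2 = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        if np.allclose(R2, R):
            return R
        R = R2

def cluster_at_threshold(R, lam):
    """Cut the transitive closure at level lam; variables whose
    lambda-cut rows are identical fall into the same group."""
    cut = fuzzy_transitive_closure(R) >= lam
    groups = {}
    for i, row in enumerate(cut):
        groups.setdefault(tuple(row), []).append(i)
    return list(groups.values())
```

Sweeping λ over (0, 1] and scoring each resulting partition with an evaluation model then gives the optimal threshold and the best grouping.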
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
Hubbard, William J; Bland, Kirby I; Chaudry, Irshad H
2015-07-01
As with sharks and horseshoe crabs, some designs of nature need only minor evolutionary adjustments over the millennia to remain superbly adapted. Such is the case at the molecular level for the nuclear receptors (NRs), which seem to have originated concomitantly with the earliest metazoan lineage of animals. A wide array of NRs persists today throughout all animal phyla with many different functions, yet they share a highly conserved protein structure, a testament to their having evolved through numerous gene duplications. Of particular interest for this readership are the estrogen-related receptors (ERRs), which have significant supportive roles in energy creation and regulation, mitochondrial function and biogenesis, development, tissue repair, hypoxia, and cancer. Thus, placed at the nexus of energetics and homeostasis, ERR (in association with the coregulatory molecules peroxisome proliferator-activated receptor-γ coactivator-1α and -β) can facilitate repair from injury and adaptation to stressful environments. Whereas it is curious that ERRs and some other NRs exist as "orphans" by virtue of having no known cognate ligand, increasing interest in the estrogen-related receptor has led to the development of synthetic ligands and to screening for naturally occurring molecules capable of modulating ERR activity. Thus, what is needed now is a nomenclature update for the ERR to focus the mind on energetics and metabolism, the most compromised and crucial systems after trauma and shock.
PREDICTION OF MAXIMUM DRY DENSITY OF LOCAL GRANULAR ...
methods. A test on a soil of relatively high solid density revealed that the developed relation loses … where Pd max is the laboratory maximum dry … Addis-Jinima Road Rehabilitation … data sets that differ considerably in magnitude.
Effect of sampling variation on error of rainfall variables measured by optical disdrometer
Liu, X. C.; Gao, T. C.; Liu, L.
2012-12-01
During the sampling of precipitation particles by optical disdrometers, the randomness of particles and sampling variability has a great impact on the accuracy of precipitation variables. Based on a marked point model of raindrop size distribution, the effect of sampling variation on drop size distribution and velocity distribution measurements with optical disdrometers is analyzed by Monte Carlo simulation. The results show that the number of samples, rain rate, drop size distribution, and sampling size have different influences on the accuracy of the rainfall variables. The relative errors of the rainfall variables caused by sampling variation, in descending order, are: water concentration, mean diameter, mass-weighted mean diameter, mean volume diameter, radar reflectivity factor, and number density; these are essentially independent of the number of samples. The relative errors of the rain variables are positively correlated with the margin probability, which in turn is positively correlated with the rain rate and the mean diameter of the raindrops. The sampling size is one of the main factors that influence the margin probability: as the sampling area decreases, especially the short side of the sample area, the probability of margin raindrops grows, hence the errors of the rain variables grow, and the variables for median-size raindrops have the maximum error. To ensure that the relative error of rainfall variables measured by an optical disdrometer is less than 1 %, the width of the light beam should be at least 40 mm.
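The Monte Carlo logic can be illustrated with a toy version of the experiment: draw finite samples of drop diameters from an assumed drop size distribution and measure how the relative RMS error of an estimated rainfall variable shrinks as the sample grows. This is a deliberately simplified sketch (exponential DSD, mean diameter only, no margin effects), not the authors' marked point model:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_diameter_error(n_drops, n_trials=2000, lam=2.0):
    """Relative RMS error of the estimated mean drop diameter when only
    n_drops are sampled per trial from an exponential DSD n(D) ~ exp(-lam*D)."""
    true_mean = 1.0 / lam
    # each row is one simulated disdrometer sample of n_drops diameters
    d = rng.exponential(1.0 / lam, size=(n_trials, n_drops))
    estimates = d.mean(axis=1)
    return np.sqrt(np.mean((estimates - true_mean) ** 2)) / true_mean
```

For an exponential distribution the relative error falls off roughly as 1/sqrt(n_drops), mirroring the finding that a larger sampling area (and hence fewer margin drops per unit sample) reduces the error of the rain variables.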
Motion error compensation of multi-legged walking robots
Wang, Liangwen; Chen, Xuedong; Wang, Xinjie; Tang, Weigang; Sun, Yi; Pan, Chunmei
2012-07-01
Because of errors in the structure and kinematic parameters of multi-legged walking robots, the motion trajectory of a robot diverges from the ideal motion requirements during movement. Since existing error compensation is usually applied to the control compensation of manipulator arms, the error compensation of multi-legged robots has seldom been explored. In order to reduce the kinematic error of robots, a feedforward motion error compensation method for multi-legged mobile robots is proposed to improve the motion precision of a mobile robot. The locus error of the robot body is measured when the robot moves along a given track. The error of the driven joint variables is obtained from an error calculation model in terms of the locus error of the robot body. The error value is used to compensate the driven joint variables and modify the control model of the robot, which then drives the robot following the modified control model. A model of the relation between the robot's locus errors and the errors of the kinematic variables is set up to achieve the kinematic error compensation. On the basis of the inverse kinematics of a multi-legged walking robot, the relation between the error of the motion trajectory and the driven joint variables is discussed, and an equation set is obtained which expresses the relation among the errors of the driven joint variables, the structure parameters, and the error of the robot's locus. Taking MiniQuad as an example, motion error compensation is studied as the robot moves along a straight-line tread. The actual locus errors of the robot body were measured before and after compensation in the test. According to the test, the variations of the actual coordinate values of the robot centroid in the x-direction and z-direction are reduced by more than half. The kinematic errors of the robot body are reduced effectively by the use of the feedforward motion error compensation method.
Maximum-likelihood estimation prevents unphysical Mueller matrices
Aiello, A; Voigt, D; Woerdman, J P
2005-01-01
We show that the method of maximum-likelihood estimation, recently introduced in the context of quantum process tomography, can be applied to the determination of Mueller matrices characterizing the polarization properties of classical optical systems. Contrary to linear reconstruction algorithms, the proposed method yields physically acceptable Mueller matrices even in the presence of uncontrolled experimental errors. We illustrate the method on the case of an unphysical measured Mueller matrix taken from the literature.
Personality and error monitoring: an update
Sven eHoffmann
2012-06-01
People differ considerably with respect to their ability to initiate and maintain cognitive control. A core control function is the processing and evaluation of errors, from which we learn to prevent maladaptive behavior. People differ strongly in the degree of error processing and in how errors are interpreted and appraised. In the present study it was investigated whether a correlate of error monitoring, the error negativity (Ne or ERN), is related to personality factors. The EEG was therefore measured continuously during a task that provoked errors, and the Ne was tested with respect to its relation to personality traits. Our results indicate a substantial trait-like relation between error processing and personality factors: the Ne was more pronounced for subjects scoring low on the Openness, Impulsiveness, and Emotionality scales. Conversely, the Ne was less pronounced for subjects scoring low on the Social Orientation scale. The results imply that personality traits related to emotional valence and rigidity are reflected in the way people monitor and adapt to erroneous actions.
von Clarmann, T.
2014-09-01
The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
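For concreteness, the Kirchhoff index can be computed from the graph Laplacian via the standard identity $Kf(G) = n \sum_i 1/\mu_i$, where the sum runs over the nonzero Laplacian eigenvalues $\mu_i$; this is equivalent to summing the resistance distances over all vertex pairs. A small sketch, assuming the graph is given as an adjacency matrix:

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index of a connected graph from its adjacency matrix:
    Kf(G) = n * sum(1/mu) over the nonzero Laplacian eigenvalues mu."""
    A = np.asarray(adj, dtype=float)
    n = len(A)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    eigenvalues = np.linalg.eigvalsh(L)
    nonzero = eigenvalues[eigenvalues > 1e-9]
    return n * float(np.sum(1.0 / nonzero))
```

The triangle $C_3$ (the smallest cactus with one cycle) has Laplacian eigenvalues 0, 3, 3, so $Kf = 3(1/3 + 1/3) = 2$, matching the three pairwise resistance distances of $2/3$.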
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus […] on second-order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Maximum Likelihood Position Location with a Limited Number of References
D. Munoz-Rodriguez
2011-04-01
A Position Location (PL) scheme for mobile users on the outskirts of coverage areas is presented. The proposed methodology makes it possible to obtain location information with only two land-fixed references. We introduce a general formulation and show that maximum-likelihood estimation can provide adequate PL information in this scenario. The Root Mean Square (RMS) error and error-distribution characterization are obtained for different propagation scenarios. In addition, simulation results and comparisons to another method are provided, showing the accuracy and robustness of the proposed method. We study the accuracy limits of the proposed methodology for different propagation environments and show that, even in the case of mismatch in the error variances, good PL estimation is feasible.
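With independent Gaussian range errors, maximum-likelihood position estimation reduces to minimizing a weighted sum of squared range residuals; with only two references the likelihood has two mirror-image peaks about the baseline, and side information (such as the user being outside the coverage area) selects one of them. The grid search below is a minimal sketch of this idea, not the paper's formulation; the reference positions and noise model are made up for illustration:

```python
import numpy as np

def ml_position(refs, ranges, sigmas, xlim, ylim, step=0.05):
    """Grid-search ML position from noisy range measurements to fixed
    references, assuming independent Gaussian range errors.

    Restricting ylim to one side of the baseline resolves the
    two-reference mirror ambiguity."""
    xs = np.arange(xlim[0], xlim[1], step)
    ys = np.arange(ylim[0], ylim[1], step)
    X, Y = np.meshgrid(xs, ys)
    nll = np.zeros_like(X)
    for (rx, ry), r, s in zip(refs, ranges, sigmas):
        d = np.hypot(X - rx, Y - ry)
        nll += ((d - r) / s) ** 2   # negative log-likelihood up to constants
    i = np.unravel_index(np.argmin(nll), nll.shape)
    return float(X[i]), float(Y[i])
```

For two references on a baseline and noise-free ranges, the grid search recovers the true position up to the grid step; with noisy ranges the spread of the minimizer over repeated trials gives the RMS error for the scenario.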
Maximum likelihood identification of aircraft stability and control derivatives
Mehra, R. K.; Stepner, D. E.; Tyler, J. S.
1974-01-01
Application of a generalized identification method to flight test data analysis. The method is based on the maximum likelihood (ML) criterion and includes output error and equation error methods as special cases. Both the linear and nonlinear models with and without process noise are considered. The flight test data from lateral maneuvers of HL-10 and M2/F3 lifting bodies are processed to determine the lateral stability and control derivatives, instrumentation accuracies, and biases. A comparison is made between the results of the output error method and the ML method for M2/F3 data containing gusts. It is shown that better fits to time histories are obtained by using the ML method. The nonlinear model considered corresponds to the longitudinal equations of the X-22 VTOL aircraft. The data are obtained from a computer simulation and contain both process and measurement noise. The applicability of the ML method to nonlinear models with both process and measurement noise is demonstrated.
MA. Lendita Kryeziu
2015-06-01
“Errare humanum est” is a well-known and widespread Latin proverb stating that to err is human: people make mistakes all the time. What counts, however, is that people learn from their mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, learners of a new language make mistakes; it is therefore important to accept them, learn from them, discover why they are made, improve and move on. The significance of studying errors is described by Corder: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). The aim of this paper is thus to analyse errors in the process of second language acquisition and the ways in which we teachers can benefit from mistakes to help students improve, while giving proper feedback.
Dr. Grace Zhang
2000-01-01
Error correction is an important issue in foreign language acquisition. This paper investigates how students feel error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect correction, or direct correction only; the former choice indicates that students are happy with either, so long as the correction gets done. Most students did not mind peer correction provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correction, use a combination of correction strategies (direct only if suitable), and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in the Chinese language classroom and that it may also have wider implications for other languages.
Error Analysis of Robotic Assembly System Based on Screw Theory
韩卫军; 费燕琼; 赵锡芳
2003-01-01
Assembly errors have great influence on assembly quality in robotic assembly systems. Error analysis addresses the propagation and accumulation of various errors and their effect on assembly success. Using screw coordinates, assembly errors are represented as an "error twist", an extremely compact expression. According to the law of screw composition, the relative position and orientation errors of mating parts are computed and the necessary condition for assembly success is derived. A new, simple method for measuring assembly errors is also proposed, based on the transformation law of a screw. Because of the compact representation of error, the model presented for error analysis can be applied to various part-mating types and is especially useful for error analysis of complex assemblies.
Temporospatial dissociation of Pe subcomponents for perceived and unperceived errors
Tanja eEndrass
2012-06-01
Previous research on performance monitoring revealed that errors are followed by an initial fronto-central negative deflection (error-related negativity, ERN) and a subsequent centro-parietal positivity (error positivity, Pe). It has been shown that error awareness mainly influences the Pe, whereas the ERN seems unaffected by conscious awareness of an error. The aim of the present study was to investigate the relation of ERN and Pe to error awareness in a visual size discrimination task in which errors are elicited not by impulsive responding but by perceptual difficulty. Further, we applied a temporospatial principal component analysis (PCA) to examine whether the temporospatial subcomponents of the Pe relate differentially to error awareness. ERP results were in accordance with earlier studies: a significant error awareness effect was found for the Pe, but not for the ERN. Interestingly, a modulation by error perception on correct trials was found: correct responses considered incorrect had larger correct-related negativity (CRN) and larger Pe amplitudes than correct responses considered correct. The PCA yielded two relevant spatial factors accounting for the Pe (latency 300 ms). A temporospatial factor displaying a centro-parietal positivity varied significantly with error awareness. Of the two temporospatial factors corresponding to response-related negativities, a factor with central topography varied with response correctness and subjective error perception on correct responses. The PCA results indicate that the error awareness effect is specifically related to the centro-parietal subcomponent of the Pe. Since this component has also been shown to be related to the importance of an error, the present variation with error awareness indicates that this component is sensitive to the salience of an error and that salience secondarily triggers error awareness.
Conically scanning lidar error in complex terrain
Ferhat Bingöl
2009-05-01
Conically scanning lidars assume the flow to be homogeneous in order to deduce the horizontal wind speed. In mountainous or complex terrain, however, this assumption is not valid, implying a risk that the lidar will derive an erroneous wind speed. The magnitude of this error is measured by collocating a meteorological mast and a lidar at two Greek sites, one hilly and one mountainous. The maximum error for the sites investigated is of the order of 10%. In order to predict the error for various wind directions, the flows at both sites are simulated with the linearized flow model WAsP Engineering 2.0. The measurement data are compared with the model predictions, with good results for the hilly site but less success at the mountainous site. This is a deficiency of the flow model, but the methods presented in this paper can be used with any flow model.
Efficient Image Transmission Through Analog Error Correction
Liu, Yang; Li,; Xie, Kai
2011-01-01
This paper presents a new paradigm for image transmission through analog error correction codes. Conventional schemes rely on digitizing images through quantization (which inevitably causes significant bandwidth expansion) and transmitting binary bit-streams through digital error correction codes (which do not automatically differentiate the different levels of significance among the bits). To achieve better overall performance in terms of transmission efficiency and quality, we propose to use a single analog error correction code in lieu of digital quantization, digital coding and digital modulation. The key is to get analog coding right. We show that this can be achieved by cleverly exploiting an elegant "butterfly" property of chaotic systems. Specifically, we demonstrate a tail-biting triple-branch baker's map code and its maximum-likelihood decoding algorithm. Simulations show that the proposed analog code can actually outperform digital turbo code, one of the best codes known to date. The results and fin...
Delaporte F.
2008-09-01
The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was inseparably related to a series of errors, and how a major breakthrough can be the result of a series of false proposals; consequently, the history of truth often involves a history of error.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow, and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical-mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power-law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued: a commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages, via prices, the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Zhang, Liang; Duan, Hongxia; Qin, Shaozheng; Yuan, Yiran; Buchanan, Tony W; Zhang, Kan; Wu, Jianhui
2015-01-01
The cortisol awakening response (CAR), a rapid increase in cortisol levels following morning awakening, is an important aspect of hypothalamic-pituitary-adrenocortical axis activity. Alterations in the CAR have been linked to a variety of mental disorders and cognitive function. However, little is known regarding the relationship between the CAR and error processing, a phenomenon that is vital for cognitive control and behavioral adaptation. Using high-temporal resolution measures of event-related potentials (ERPs) combined with behavioral assessment of error processing, we investigated whether and how the CAR is associated with two key components of error processing: error detection and subsequent behavioral adjustment. Sixty university students performed a Go/No-go task while their ERPs were recorded. Saliva samples were collected at 0, 15, 30 and 60 min after awakening on the two consecutive days following ERP data collection. The results showed that a higher CAR was associated with slowed latency of the error-related negativity (ERN) and a higher post-error miss rate. The CAR was not associated with other behavioral measures such as the false alarm rate and the post-correct miss rate. These findings suggest that high CAR is a biological factor linked to impairments of multiple steps of error processing in healthy populations, specifically, the automatic detection of error and post-error behavioral adjustment. A common underlying neural mechanism of physiological and cognitive control may be crucial for engaging in both CAR and error processing.
Study of Errors among Nursing Students
Ella Koren
2007-09-01
The study of errors in the health system is today a topic of considerable interest, aimed at reducing errors through analysis of the phenomenon and the conclusions reached. Errors that occur frequently among health professionals have also been observed among nursing students. True, in most cases they are actually "near errors," but these could be a future indicator of therapeutic reality and of the effect of nurses' work environment on their personal performance. There are two different approaches to such errors: (a) the EPP (error-prone person) approach lays full responsibility at the door of the individual involved in the error, whether a student, nurse, doctor, or pharmacist; according to this approach, handling consists purely in identifying and penalizing the guilty party. (b) The EPE (error-prone environment) approach emphasizes the environment as a primary contributory factor to errors; the environment as an abstract concept includes components and processes of interpersonal communication, work relations, human engineering, workload, pressures, technical apparatus, and new technologies. The objective of the present study was to examine the role played by factors in and components of personal performance as compared to elements and features of the environment. The study was based on both of the aforementioned approaches, which, when combined, enable a comprehensive understanding of the phenomenon of errors among the student population as well as a comparison of factors contributing to human error and to error deriving from the environment. The theoretical basis of the study was a model that combined both approaches: one focusing on the individual and his or her personal performance and the other focusing on the work environment. The findings emphasize the work environment of health professionals as an EPE. However, errors could have been avoided by means of strict adherence to practical procedures. The authors examined error events in the
Antonio Boldrini
2013-06-01
Introduction: Danger and errors are inherent in human activities, and in medical practice errors can lead to adverse events for patients; the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that induce fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and retraining by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation centre that offers continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research
Influence of Ephemeris Error on GPS Single Point Positioning Accuracy
Lihua, Ma; Wang, Meng
2013-09-01
The Global Positioning System (GPS) user makes use of the navigation message transmitted from GPS satellites to compute its location. Because the receiver uses the satellite's location in position calculations, an ephemeris error, i.e. a difference between the expected and actual orbital position of a GPS satellite, reduces user accuracy. The extent of the influence depends on the precision of the broadcast ephemeris uploaded from the control station. Simulation analysis with the Yuma almanac shows that the maximum positioning error occurs when the ephemeris error lies along the line-of-sight (LOS) direction. Meanwhile, the error depends on the geometric relationship between the observer and the spatial constellation over the time period considered.
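The claim that the worst case occurs when the ephemeris error lies along the LOS follows from the fact that a pseudorange error is the projection of the satellite position error onto the LOS unit vector. A minimal numerical sketch (the receiver and satellite coordinates and the 5 m error magnitude are made-up, not from the study):

```python
import numpy as np

# Illustrative geometry: receiver at the origin, satellite ~27,000 km away.
receiver = np.array([0.0, 0.0, 0.0])
satellite = np.array([15.0e6, 10.0e6, 20.0e6])  # metres

los = satellite - receiver
los_unit = los / np.linalg.norm(los)

err_mag = 5.0  # metres: assumed magnitude of the orbital position error

# Error vector along the LOS vs. perpendicular to it.
err_along = err_mag * los_unit
perp = np.cross(los_unit, [0.0, 0.0, 1.0])
err_perp = err_mag * perp / np.linalg.norm(perp)

# Pseudorange error = projection of the position error onto the LOS.
range_err_along = abs(np.dot(err_along, los_unit))  # full 5 m felt by the user
range_err_perp = abs(np.dot(err_perp, los_unit))    # essentially zero
```

The along-LOS component maps one-for-one into the pseudorange, while a perpendicular ephemeris error of the same magnitude contributes almost nothing, which is why the maximum positioning error appears in the along-LOS case.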
1985-01-01
A mathematical theory for the development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free and which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES: a user can write in English and the system converts to computer languages. It is employed by several large corporations.
LIBERTARISMO & ERROR CATEGORIAL
Carlos G. Patarroyo G.
2009-01-01
This article offers a defence of libertarianism against two accusations that it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis for the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of incurring them.
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth, or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle, and we show with the help of simple examples of well-known chemical and physical systems that they are in error. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results, such as: (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Zhang, Min; Wang, Wen; Xiang, Kui; Lu, Keqing; Fan, Zongwei
2015-02-01
This paper describes a novel cylindrical capacitive sensor (CCS) to measure the five degree-of-freedom (DOF) motion errors of a spindle. The operating principle and mathematical models of the CCS are presented. Using Ansoft Maxwell software to calculate the capacitances in different configurations, the structural parameters of the end-face electrode are then investigated. Radial, axial and tilt motions are also simulated, comparing the given displacements with the simulated values. The proposed CCS has high accuracy for measuring radial motion error when the average eccentricity is about 15 μm. In addition, the maximum relative error of axial displacement is 1.3% when the axial motion is within [0.7, 1.3] mm, and the maximum relative error of tilt displacement is 1.6% as the rotor tilts around a single axis within [-0.6, 0.6]°. Finally, the feasibility of the CCS for measuring five DOF motion errors is verified through simulation and analysis.
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\\mathrm{T}}$ and OSE$_{\\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as the solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties superior to those of classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also show reduced bias in the estimates of the subjective value of time and consumer surplus.
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
Delay compensation - Its effect in reducing sampling errors in Fourier spectroscopy
Zachor, A. S.; Aaronson, S. M.
1979-01-01
An approximate formula is derived for the spectrum ghosts caused by periodic drive speed variations in a Michelson interferometer. The solution represents the case of fringe-controlled sampling and is applicable when the reference fringes are delayed to compensate for the delay introduced by the electrical filter in the signal channel. Numerical results are worked out for several common low-pass filters. It is shown that the maximum relative ghost amplitude over the range of frequencies corresponding to the lower half of the filter band is typically 20 times smaller than the relative zero-to-peak velocity error, when delayed sampling is used. In the lowest quarter of the filter band it is more than 100 times smaller than the relative velocity error. These values are ten and forty times smaller, respectively, than they would be without delay compensation if the filter is a 6-pole Butterworth.
Julian, Liam
2009-01-01
In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus differed from the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration reached the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max − X_0 = (0.59 ± 0.02)·Y_X/P·C.
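The closing equation can be applied directly: given a strain's biomass yield per unit lactate (Y_X/P) and the MIC of lactate (C), the predicted biomass increase over the inoculum is 0.59·Y_X/P·C. A small sketch; only the 0.59 coefficient comes from the abstract, and all input values are hypothetical:

```python
def predict_max_biomass(x0, yield_per_lactate, mic_lactate, k=0.59):
    """X_max = X_0 + k * Y_X/P * C, with k = 0.59 +/- 0.02 per the abstract."""
    return x0 + k * yield_per_lactate * mic_lactate

# Hypothetical strain: inoculum 0.1 g/L, Y_X/P = 0.05 g biomass per g lactate,
# MIC of lactate = 200 g/L (illustrative numbers, not measured values).
x_max = predict_max_biomass(0.1, 0.05, 200.0)
# 0.1 + 0.59 * 0.05 * 200 = 6.0 g/L predicted maximum biomass
```

The equation says growth stops once accumulated lactate reaches the MIC, so the attainable biomass is capped by how much biomass the strain can build per unit of lactate it tolerates.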
Molina, L; Elosua, R; Marrugat, J; Pons, S
1999-10-15
The relation between maximum systolic blood pressure (BP) during exercise and left ventricular (LV) mass is controversial. Physical activity also induces LV mass increase. The objective was to assess the relation between BP response to exercise and LV mass in normotensive men, taking into account physical activity practice. A cross-sectional study was performed. Three hundred eighteen healthy normotensive men, aged between 20 and 60 years, participated in this study. The Minnesota questionnaire was used to assess physical activity practice. An echocardiogram and a maximum exercise test were performed. LV mass was calculated and indexed to body surface area. LV hypertrophy was defined as a ventricular mass index > or =134 g/m2. BP was measured at the moment of maximum effort. Hypertensive response was considered when BP was > or =210 mm Hg. In the multiple linear regression model, maximum systolic BP was associated with LV mass index and correlation coefficient was 0.27 (SE 0.07). Physical activity practice and age were also associated with LV mass. An association between hypertensive response to exercise and LV hypertrophy was observed (odds ratio 3.16). Thus, BP response to exercise is associated with LV mass and men with systolic BP response > or =210 mm Hg present a 3-times higher risk of LV hypertrophy than those not reaching this limit. Physical activity practice is related to LV mass, but not to LV hypertrophy.
Maximum-entropy clustering algorithm and its global convergence analysis
Anonymous
2001-01-01
By constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations to other clustering algorithms are discussed.
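A common concrete form of such a soft generalization replaces hard C-means assignments with Boltzmann (maximum-entropy) memberships; as the inverse-temperature parameter grows, the algorithm approaches hard C-means. The sketch below illustrates this general idea and is not the paper's exact formulation:

```python
import numpy as np

def max_entropy_cluster(X, k, beta, iters=50):
    """Soft C-means with maximum-entropy (Boltzmann) memberships.

    beta is an inverse temperature: beta -> infinity recovers hard C-means.
    """
    # Deterministic init: k evenly spaced data points as starting centers.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # Squared distances from every point to every center: shape (n, k).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        # Entropy-maximizing memberships (row-wise min subtracted for stability).
        w = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))
        w /= w.sum(axis=1, keepdims=True)
        # Centers move to the membership-weighted means.
        centers = (w[:, :, None] * X[:, None, :]).sum(axis=0) / w.sum(axis=0)[:, None]
    return centers, w

# Two well-separated synthetic blobs around (0, 0) and (5, 5).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
centers, w = max_entropy_cluster(X, k=2, beta=10.0)
```

At large `beta` the memberships are nearly one-hot and the centers converge to the blob means; at small `beta` every point contributes to every center, which is the "soft" regime the abstract refers to.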
Maximum entropy method for solving operator equations of the first kind
金其年; 侯宗义
1997-01-01
The maximum entropy method for linear ill-posed problems with modeling error and noisy data is considered and the stability and convergence results are obtained. When the maximum entropy solution satisfies the "source condition", suitable rates of convergence can be derived. Considering the practical applications, an a posteriori choice for the regularization parameter is presented. As a byproduct, a characterization of the maximum entropy regularized solution is given.
LU Xiaoxu; ZHONG Liyun; ZHANG Yimo
2007-01-01
Phase-shifting measurement and its error estimation method were studied according to the holographic principle. A function for the synchronous superposition of the object complex amplitude reconstructed from N-step phase-shifting through one integral period (the N-step phase-shifting function, for short) is proposed. In N-step phase-shifting measurement, the interferograms are seen as a series of in-line holograms and the reference beam is an ideal parallel plane wave, so the N-step phase-shifting function can be obtained by multiplying the interferogram by the original reference wave. In ideal conditions, the proposed method is a kind of synchronous superposition algorithm in which the complex amplitude is separated, measured and superposed. When error exists in the measurement, the result of the N-step phase-shifting function is the optimal expected value of the least-squares fitting method. In this method, the (N+1)-step phase-shifting function can be obtained from the N-step phase-shifting function, which shows that the N-step phase-shifting function can be separated into two parts: the ideal N-step phase-shifting function and its errors. Under the understanding of the (N+1)-step phase-shifting function, the phase-shifting errors in N-step phase-shifting phase measurement can be treated the same as the relative errors of amplitude and intensity. The difficulties of error estimation in phase-shifting phase measurement are reduced by this error estimation method. Meanwhile, a maximum error estimation method for phase-shifting phase measurement and its formula are proposed.
Josikélem da Silva Sodré Pelliciotti
2010-12-01
This study identifies the prevalence of medication errors in ICUs reported by nursing professionals and compares the health-related quality of life (HRQoL) and health status changes of professionals involved and not involved with medication errors in ICUs. A total of 94 nursing professionals in three ICUs of a private hospital were studied: 39 (41.5%) nurses and 55 (58.5%) nursing technicians. HRQoL was assessed through the Portuguese version of the SF-36 instrument. Eighteen professionals (19.1%) reported medication errors during the month prior to data collection. The errors were reported in 61.1% of the cases, and the most frequent were those in the administration phase (67.8%). The professionals who reported medication errors displayed worse health conditions than those who did not report errors.
Pooyan Vahidi Pashsaki
2016-06-01
Accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the RTTTR configuration (tilting head B-axis and rotary table, A′-axis, on the workpiece side) was set up using rigid-body kinematics and homogeneous transformation matrices, and includes 43 error components, each of which can separately reduce the geometrical and dimensional accuracy of workpieces. Machining accuracy is governed by the position of the tool center point (TCP) relative to the workpiece; when the cutting tool deviates from its ideal position relative to the workpiece, machining error results. The compensation process consists of detecting the present tool path and analyzing the geometric errors of the RTTTR five-axis CNC machine tool, translating current component positions to compensated positions using the kinematic error model, converting the compensated positions to new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
Patient error: a preliminary taxonomy.
Buetow, S.; Kiata, L.; Liew, T.; Kenealy, T.; Dovey, S.; Elwyn, G.
2009-01-01
PURPOSE: Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary care.
Automatic Error Analysis Using Intervals
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
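The interval idea can be sketched in a few lines. The snippet below is a hypothetical Python stand-in for INTLAB-style arithmetic (the `Interval` class, operators and the voltage/current figures are illustrative assumptions, not from the article): each quantity carries worst-case bounds, and every operation propagates them automatically.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Sum bounds add component-wise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product bounds come from the four endpoint products.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def half_width(self):
        # Half-width plays the role of the propagated absolute error.
        return (self.hi - self.lo) / 2

# Example: power P = V * I with V = 5.0 +/- 0.1 V and I = 2.0 +/- 0.05 A.
V = Interval(4.9, 5.1)
I = Interval(1.95, 2.05)
P = V * I
print(P.lo, P.hi, P.half_width())
```

For this simple product the interval half-width (0.45) coincides with standard first-order propagation, |I|ΔV + |V|ΔI; for complicated formulas the interval version requires no manual differentiation.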
Error estimation in plant growth analysis
Andrzej Gregorczyk
2014-01-01
A scheme is presented for calculating the errors of dry matter values which occur during approximation of data with growth curves, determined both by the analytical method (logistic function) and by the numerical method (Richards function). Formulae are also given that describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oat and maize plants are given, together with a critical analysis of the estimates obtained. The usefulness of jointly applying statistical methods and error calculus in plant growth analysis is demonstrated.
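As an illustration of this error calculus applied to one of the characteristics, the sketch below propagates absolute dry-matter errors into RGR via the standard first-order formula for RGR = (ln W2 − ln W1)/(t2 − t1); the function names and the oat figures are hypothetical, not taken from the paper.

```python
import math

def rgr(w1, w2, t1, t2):
    """Relative growth rate from dry matter w1 at time t1 to w2 at t2."""
    return (math.log(w2) - math.log(w1)) / (t2 - t1)

def rgr_abs_error(w1, dw1, w2, dw2, t1, t2):
    """First-order absolute error of RGR from absolute errors dw1, dw2:
    |d(RGR)/dW1|*dw1 + |d(RGR)/dW2|*dw2 = (dw1/w1 + dw2/w2)/(t2 - t1)."""
    return (dw1 / w1 + dw2 / w2) / (t2 - t1)

# Hypothetical oat data: 2.0 +/- 0.1 g at day 10, 8.0 +/- 0.2 g at day 30.
r = rgr(2.0, 8.0, 10, 30)
dr = rgr_abs_error(2.0, 0.1, 8.0, 0.2, 10, 30)
print(r, dr)
```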
Error Analysis in Frequency Domain for Linear Multipass Algorithms
Anonymous
2001-01-01
Error analysis methods in the frequency domain are developed in this paper for determining the characteristic-root and transfer-function errors that arise when linear multipass algorithms are used to solve linear differential equations. The relation between the local truncation error in the time domain and the error in the frequency domain is established, which is the basis for developing the error estimation methods. Error estimation methods are also given for digital simulation models constructed using Runge-Kutta algorithms and linear multistep predictor-corrector algorithms.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower-mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher-mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has recently been improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs distribution \cite{V99}, to design a faster algorithm.
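A toy version of such a Markov chain on matchings can be sketched as follows. This is a plain add/remove chain in the spirit of Glauber dynamics (fugacity fixed at 1), not the paper's $O(m \log^2 n)$ algorithm; the function name and parameters are illustrative.

```python
import random

def mcmc_matching(edges, steps=20000, seed=1):
    """Toy Glauber-style chain on matchings: propose a uniformly random edge;
    add it if both endpoints are free, remove it if it is currently matched,
    otherwise do nothing. Returns the largest matching encountered."""
    rng = random.Random(seed)
    matched = {}          # vertex -> partner
    in_m = set()          # frozenset edges currently in the matching
    best = set()
    for _ in range(steps):
        u, v = edges[rng.randrange(len(edges))]
        e = frozenset((u, v))
        if e in in_m:                                   # remove move
            in_m.discard(e); del matched[u]; del matched[v]
        elif u not in matched and v not in matched:     # add move
            in_m.add(e); matched[u] = v; matched[v] = u
        if len(in_m) > len(best):
            best = set(in_m)
    return best

# 4-cycle: a maximum matching has 2 edges, which the chain finds quickly.
best = mcmc_matching([(0, 1), (1, 2), (2, 3), (3, 0)])
print(len(best))
```

The chain is symmetric and its stationary distribution is uniform over matchings; the serious algorithmic work in the paper lies in biasing and analyzing such a chain so that near-maximum matchings are reached fast.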
Error bars in experimental biology.
Cumming, Geoff; Fidler, Fiona; Vaux, David L
2007-04-09
Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
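The quantities an error bar may represent are simple to compute side by side; a minimal Python sketch (function name, toy sample, and the rule-of-thumb multiplier of 2 for a 95% interval are our own, not from the article):

```python
import math
import statistics

def error_bar_quantities(sample, t_crit=2.0):
    """Return (mean, sd, sem, ci95_halfwidth) for a sample.
    t_crit ~ 2 is a rough normal-approximation multiplier; for small n,
    use the appropriate t-distribution quantile instead."""
    n = len(sample)
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)      # sample standard deviation
    sem = sd / math.sqrt(n)            # standard error of the mean
    return mean, sd, sem, t_crit * sem

m, sd, sem, ci = error_bar_quantities([9.8, 10.1, 10.0, 9.9, 10.2])
print(m, sd, sem, ci)
```

Note how different the three bar half-lengths are for the same data (sd > ci > sem here), which is exactly why a figure legend must state which quantity is plotted.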
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Video Error Correction Using Steganography
Robie David L
2002-01-01
The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. Errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2-compliant codec uses data hiding to transmit error correction information, together with several error concealment techniques in the decoder. The decoder resynchronizes more quickly and with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
Analyzing temozolomide medication errors: potentially fatal.
Letarte, Nathalie; Gabay, Michael P; Bressler, Linda R; Long, Katie E; Stachnik, Joan M; Villano, J Lee
2014-10-01
The EORTC-NCIC regimen for glioblastoma requires different dosing of temozolomide (TMZ) during radiation and maintenance therapy. This complexity is exacerbated by the availability of multiple TMZ capsule strengths. TMZ is an alkylating agent and the major toxicity of this class is dose-related myelosuppression. Inadvertent overdose can be fatal. The websites of the Institute for Safe Medication Practices (ISMP), and the Food and Drug Administration (FDA) MedWatch database were reviewed. We searched the MedWatch database for adverse events associated with TMZ and obtained all reports including hematologic toxicity submitted from 1st November 1997 to 30th May 2012. The ISMP describes errors with TMZ resulting from the positioning of information on the label of the commercial product. The strength and quantity of capsules on the label were in close proximity to each other, and this has been changed by the manufacturer. MedWatch identified 45 medication errors. Patient errors were the most common, accounting for 21 or 47% of errors, followed by dispensing errors, which accounted for 13 or 29%. Seven reports or 16% were errors in the prescribing of TMZ. Reported outcomes ranged from reversible hematological adverse events (13%), to hospitalization for other adverse events (13%) or death (18%). Four error reports lacked detail and could not be categorized. Although the FDA issued a warning in 2003 regarding fatal medication errors and the product label warns of overdosing, errors in TMZ dosing occur for various reasons and involve both healthcare professionals and patients. Overdosing errors can be fatal.
A Characterization of Prediction Errors
Meek, Christopher
2016-01-01
Understanding prediction errors and determining how to fix them is critical to building effective predictive systems. In this paper, we delineate four types of prediction errors and demonstrate that these four types characterize all prediction errors. In addition, we describe potential remedies and tools that can be used to reduce the uncertainty when trying to determine the source of a prediction error and when trying to take action to remove it.
Error Analysis and Its Implication
崔蕾
2007-01-01
Error analysis is an important theory and approach for exploring the mental processes of language learners in SLA. Its major contribution is pointing out that intralingual errors are the main source of errors during language learning. Researchers' exploration and description of these errors will not only promote the bidirectional study of error analysis as both theory and approach, but also carry implications for second language learning.
Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems
Lutz, Robyn R.
1993-01-01
This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non-safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.
Floating-Point Numbers with Error Estimates (revised)
Masotti, Glauco
2012-01-01
The study addresses the problem of precision in floating-point (FP) computations. A method for estimating the errors which affect intermediate and final results is proposed and a summary of many software simulations is discussed. The basic idea consists of representing FP numbers by means of a data structure collecting value and estimated error information. Under certain constraints, the estimate of the absolute error is accurate and has a compact statistical distribution. By monitoring the estimated relative error during a computation (an ad-hoc definition of relative error has been used), the validity of results can be ensured. The error estimate enables the implementation of robust algorithms, and the detection of ill-conditioned problems. A dynamic extension of number precision, under the control of error estimates, is advocated, in order to compute results within given error bounds. A reduced time penalty could be achieved by a specialized FP processor. The realization of a hardwired processor incorporat...
Huang, Jingyi; Bishop, Thomas; Triantafilis, John
2016-04-01
The cation exchange capacity (CEC) of soil is widely used for agricultural assessment because it is a measure of fertility and an indicator of structural stability. However, measurement of CEC is time consuming, and whilst geostatistical methods have been used, they require a large number of samples. Pedometric methods, specifically coupling easy-to-measure ancillary data with CEC, have improved the efficiency of spatial prediction; the evaluation of mapping uncertainty has not been considered, however. In this study, we use an error budget procedure to quantify the relative contributions that model, input and covariate error make to the prediction error of a digital map of CEC built from gamma-ray spectrometry and apparent electrical conductivity (ECa) data. The error budget uses empirical best linear unbiased prediction (E-BLUP) and conditional simulation to produce numerous realizations of the data and their underlying errors. Linear mixed models (LMM) estimated by residual maximum likelihood (REML) are used to create the prediction models. Results show that the combined model error (5.07 cmol(+)/kg) and input error (12.88 cmol(+)/kg) is approximately 12.93 cmol(+)/kg, roughly twice the standard deviation of CEC (6.8 cmol(+)/kg). The individual covariate errors caused by the gamma-ray data (9.64 cmol(+)/kg) and the EM data (8.55 cmol(+)/kg) are also large. To overcome the former, pre-processing techniques to improve the quality of the gamma-ray data could be considered; the EM error could be reduced by using a smaller sampling interval, particularly near the edges of the study area and at pedoderm boundaries.
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
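The core computation, PCA of a correlation matrix estimated from an ensemble, can be sketched as below. This uses a plain sample correlation on synthetic coordinates rather than the paper's maximum-likelihood estimator; the ensemble, the injected correlated pair, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
ensemble = rng.normal(size=(50, 6))        # 50 "models" x 6 coordinates
# Inject one strongly correlated coordinate pair (columns 0 and 1).
ensemble[:, 1] = ensemble[:, 0] + 0.1 * rng.normal(size=50)

corr = np.corrcoef(ensemble, rowvar=False)  # 6 x 6 correlation matrix
evals, evecs = np.linalg.eigh(corr)         # PCA = eigendecomposition
order = np.argsort(evals)[::-1]             # sort modes by variance explained
evals, evecs = evals[order], evecs[:, order]

# The dominant mode loads heavily on the correlated coordinate pair.
print(evals[0], np.abs(evecs[:2, 0]))
```

In the paper's setting, the same eigendecomposition is applied to the ML correlation estimate, and the leading eigenvector loadings are what get color-coded onto the structure in a "PCA plot".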
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
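The background-only versus background-plus-source comparison can be illustrated with a toy Poisson likelihood ratio. The pixel counts, flat background, source profile and amplitude below are invented stand-ins, not CSC data, and Sherpa's actual model-fitting machinery is not reproduced.

```python
import math

def poisson_loglike(counts, expected):
    """Poisson log-likelihood of observed counts given expected rates
    (the constant log(n!) term is dropped; it cancels in ratios)."""
    return sum(n * math.log(mu) - mu for n, mu in zip(counts, expected))

# Toy candidate source region: observed photons per pixel.
counts = [3, 5, 12, 15, 4, 2]
bkg = [4.0] * 6                                   # flat background model
psf = [0.00, 0.05, 0.45, 0.40, 0.08, 0.02]        # normalized PSF profile
src_flux = 18.0                                   # source amplitude (photons)

ll_bkg = poisson_loglike(counts, bkg)
ll_src = poisson_loglike(counts,
                         [b + src_flux * p for b, p in zip(bkg, psf)])
# Likelihood-ratio statistic: large positive values favor a real source.
lr = 2.0 * (ll_src - ll_bkg)
print(lr)
```

The real tool additionally fits the background and source amplitudes, uses a per-observation PSF, and sums the statistic over all stacked observations, but the decision quantity has this likelihood-ratio form.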
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the definition of a phylogenetic footprint to be expanded to include variation in the distribution of any molecular evolutionary process. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genomes of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational…
Yanbin Gao
2015-01-01
Artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques and is widely utilized for optimization. Triaxial accelerometer error coefficients are relatively unstable under environmental disturbances and instrument aging, so identifying them accurately and at low cost is of great importance for improving the overall performance of a triaxial accelerometer-based strapdown inertial navigation system (SINS). In this study, a novel artificial fish swarm algorithm (NAFSA) is first introduced that eliminates the demerits of AFSA (failure to use the artificial fishes' previous experience, lack of balance between exploration and exploitation, and high computational cost); in NAFSA, the functional behaviors and overall procedure of AFSA have been improved through several parameter variations. Second, a hybrid accelerometer error coefficient identification algorithm is proposed based on NAFSA and Monte Carlo simulation (MCS), a combination that makes maximum use of both approaches for triaxial accelerometer error coefficient identification. The NAFSA-identified coefficients are then verified with a 24-position verification experiment and a triaxial accelerometer-based SINS navigation experiment, and the merits of MCS-NAFSA are compared with those of the conventional calibration method and optimal AFSA. Both experiments demonstrate the high efficiency of MCS-NAFSA for triaxial accelerometer error coefficient identification.
Minimum length-maximum velocity
Panes, Boris
2012-03-01
We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA on superluminal neutrinos.
Analysis of the "naming game" with learning errors in communications.
Lou, Yang; Chen, Guanrong
2015-07-16
The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. Through pairwise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pairwise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. Three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that (1) learning errors slightly affect the convergence speed but distinctly increase the memory required of each agent during lexicon propagation; (2) the maximum number of different words held by the population increases linearly as the error rate increases; and (3) without any strategy to eliminate learning errors, there is a threshold of learning error that impairs convergence. These findings may help to better understand the role of learning errors in the naming game, as well as in human language development, from a network science perspective.
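The error-free baseline dynamics are easy to simulate. The sketch below implements a minimal naming game on a complete graph (no learning errors, so it is the baseline the NGLE model perturbs, not the paper's model itself); population size and step count are arbitrary choices.

```python
import random

def naming_game(n=20, steps=50000, seed=3):
    """Minimal naming game on a complete graph: a speaker utters a random
    word from its lexicon (inventing a new one if empty); on success both
    agents collapse to that word, on failure the hearer learns it.
    Returns the number of distinct words remaining (1 = consensus)."""
    rng = random.Random(seed)
    lexicons = [set() for _ in range(n)]
    next_word = 0
    for _ in range(steps):
        s, h = rng.sample(range(n), 2)          # speaker, hearer
        if not lexicons[s]:
            lexicons[s].add(next_word)          # invent a new word
            next_word += 1
        word = rng.choice(sorted(lexicons[s]))
        if word in lexicons[h]:
            lexicons[s] = {word}                # success: both collapse
            lexicons[h] = {word}
        else:
            lexicons[h].add(word)               # failure: hearer learns
    return len(set().union(*lexicons))

print(naming_game())
```

A communication-error variant would, with some probability, deliver a corrupted word to the hearer in the failure branch, which is what inflates per-agent memory in the paper's simulations.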
Common errors in disease mapping
Ricardo Ocaña-Riola
2010-05-01
Many morbidity-mortality atlases and small-area studies have been carried out over the last decade. However, the methods used in such research, the interpretation of results and the published conclusions are often inaccurate. The proliferation of this practice has often led to inefficient decision-making, implementation of inappropriate health policies and a negative impact on the advancement of scientific knowledge. This paper reviews the most frequent errors in the design, analysis and interpretation of small-area epidemiological studies and proposes a diagnostic evaluation test that should enable the scientific quality of published papers to be ascertained. Nine common mistakes in disease mapping methods are discussed. From this framework, and following the theory of diagnostic evaluation, a standardised test to evaluate the scientific quality of a small-area epidemiology study has been developed. Optimal quality is achieved with the maximum score (16 points), average quality with a score between 8 and 15 points, and low quality with a score of 7 or below. A systematic evaluation of scientific papers, together with enhanced quality in future research, will contribute towards increased efficacy in epidemiological surveillance and in health planning based on the spatio-temporal analysis of ecological information.
Wiles, Andrew D; Likholyot, Alexander; Frantz, Donald D; Peters, Terry M
2008-03-01
Error models associated with point-based medical image registration problems were first introduced in the late 1990s. The concepts of fiducial localizer error, fiducial registration error, and target registration error are commonly used in the literature. The model for estimating the target registration error at a position r in a coordinate frame defined by a set of fiducial markers rigidly fixed relative to one another is ubiquitous in the medical imaging literature. The model has also been extended to simulate the target registration error at the point of interest in optically tracked tools. However, the model is limited to describing the error in situations where the fiducial localizer error is assumed to have an isotropic normal distribution in R^3. In this work, the model is generalized to include a fiducial localizer error that has an anisotropic normal distribution. As in the previous models, the root-mean-square statistic rms_TRE is provided, along with an extension that provides the covariance Sigma_TRE. The new model is verified using a Monte Carlo simulation and a set of statistical hypothesis tests. Finally, the differences between the two assumptions, isotropic and anisotropic, are discussed within the context of their use in (1) optical tool tracking simulation and (2) image registration.
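The Monte Carlo verification strategy can be illustrated as follows, under the simpler isotropic-FLE assumption (the paper's anisotropic closed-form model is not reproduced); the fiducial geometry, target position and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
fiducials = np.array([[0, 0, 0], [100, 0, 0],
                      [0, 100, 0], [0, 0, 100]], float)   # marker positions (mm)
target = np.array([50.0, 50.0, 50.0])                     # point of interest
fle_sigma = 0.5                                           # per-axis FLE (mm)

def rigid_fit(a, b):
    """Least-squares rotation R and translation t mapping points a onto b
    (the Kabsch/SVD solution)."""
    ca, cb = a.mean(0), b.mean(0)
    H = (a - ca).T @ (b - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

errs = []
for _ in range(2000):
    noisy = fiducials + rng.normal(0.0, fle_sigma, fiducials.shape)
    R, t = rigid_fit(fiducials, noisy)
    errs.append(np.linalg.norm(R @ target + t - target))  # TRE for this draw
rms_tre = float(np.sqrt(np.mean(np.square(errs))))
print(rms_tre)
```

The anisotropic case is obtained by drawing the noise from a full per-marker covariance instead of a single sigma, which is exactly the situation the generalized model covers in closed form.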
Visuomotor adaptation needs a validation of prediction error by feedback error
Valérie eGaveau
2014-11-01
The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the 'terminal feedback error' condition, subjects were allowed to view their hand only at movement end, simultaneously with viewing of the target. In the 'movement prediction error' condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. To prevent intentional correction of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the 'terminal feedback error' condition only. As long as subjects remained unaware of the optical deviation and self-assigned their pointing errors, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are…
A hardware error estimate for floating-point computations
Lang, Tomás; Bruguera, Javier D.
2008-08-01
We propose a hardware-computed estimate of the roundoff error in floating-point computations. The estimate is computed concurrently with the execution of the program and gives an indication of the accuracy of the result; the intention is to provide a qualitative warning when the accuracy is low. We aim for a simple implementation and a negligible effect on the execution of the program. Large roundoff errors occur in some computations, producing inaccurate results, but usually only for some values of the data, so that the result is accurate in most executions. Consequently, computing an error estimate during execution would allow the use of algorithms that produce accurate results most of the time. In contrast, if an error estimate is not available, the alternative is to perform an error analysis; such an analysis is complex or impossible in some cases, and it produces only a worst-case error bound. The proposed approach keeps with each value an estimate of its error, computed when the value is produced. This error is the sum of a propagated error, due to the errors of the operands, plus the generated error due to roundoff during the operation. Since roundoff errors are signed values (when rounding to nearest is used), computing the error allows for compensation when errors are of different sign. However, since the error estimate is of finite precision, it suffers from accuracy problems similar to those of any floating-point computation; moreover, it is not an error bound. Ideally, the estimate should be large when the error is large and small when the error is small. Since this cannot always be achieved with an inexact estimate, we aim to assure the first property always and the second most of the time; as a minimum, we aim to produce a qualitative indication of the error. To indicate the accuracy of the value, the most appropriate type of error is the relative error. However
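The propagated-plus-generated error bookkeeping described above can be sketched in software. The following is a minimal illustration of the idea, not the paper's hardware design: each value carries a signed error estimate, and the generated roundoff of each addition is recovered exactly with Knuth's two-sum trick (the class and field names are our own):

```python
class Tracked:
    """A float paired with a signed estimate of its accumulated roundoff error.

    The estimate is the propagated error of the operands plus the generated
    roundoff of the current operation; it is an estimate, not a bound.
    """

    def __init__(self, value, err=0.0):
        self.value = value
        self.err = err

    def __add__(self, other):
        s = self.value + other.value
        # Knuth's two-sum: recover t such that
        # self.value + other.value == s + t exactly.
        bp = s - self.value
        t = (self.value - (s - bp)) + (other.value - bp)
        # The computed s deviates from the exact sum by -t, so the
        # generated error is -t; add the operands' propagated errors.
        return Tracked(s, self.err + other.err - t)


# Summing 0.1 ten times: the running estimate tracks (and largely
# compensates) the accumulated roundoff.
total = Tracked(0.0)
for _ in range(10):
    total = total + Tracked(0.1)
corrected = total.value - total.err
```

Note that the representation error of the literal 0.1 itself is not tracked; the estimate covers only roundoff generated by the operations, which matches the paper's per-operation accounting.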
Areal measurement error with a dot planimeter: Some experimental estimates
Yuill, R. S.
1971-01-01
A shape analysis is presented which uses a computer to simulate a multiplicity of dot grids mathematically. Results indicate that the number of dots placed over an area to be measured accounts for essentially all of the variation in measurement accuracy, the shape indices being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.
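The dot-grid simulation idea can be illustrated with a short sketch (our own illustration with made-up parameters, not the paper's program): a square grid of dots with a uniformly random offset is laid over a unit circle, the area is estimated as (dots inside) × (cell area), and averaging over offsets shows the error shrinking as dot density grows.

```python
import math
import random

def dot_grid_area(radius, spacing, rng):
    """Estimate a circle's area by counting grid dots that fall inside it.

    The grid has the given spacing and a uniformly random offset, so each
    dot represents a cell of area spacing**2 and the estimate is unbiased.
    """
    ox = rng.uniform(0.0, spacing)
    oy = rng.uniform(0.0, spacing)
    count = 0
    x = -radius - spacing + ox
    while x <= radius + spacing:
        y = -radius - spacing + oy
        while y <= radius + spacing:
            if x * x + y * y <= radius * radius:
                count += 1
            y += spacing
        x += spacing
    return count * spacing * spacing

rng = random.Random(0)
coarse = [dot_grid_area(1.0, 0.2, rng) for _ in range(200)]
fine = [dot_grid_area(1.0, 0.05, rng) for _ in range(200)]
err_coarse = sum(abs(a - math.pi) for a in coarse) / len(coarse)
err_fine = sum(abs(a - math.pi) for a in fine) / len(fine)
```

Consistent with the paper's finding, accuracy here is driven by dot count; the shape of the measured region enters only through its boundary.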
Transient Error Data Analysis.
1979-05-01
[Abstract garbled in extraction; recoverable fragments follow.] Contents include: 3.2 Graphical Data Analysis; 3.3 General Statistics and Confidence Intervals; 3.4 Goodness of Fit Test; 4. Conclusions; Acknowledgements. Tabulated MTTF figures per system, technology, and error-detection mechanism include: CMUA PDP-10 (ECL, parity), 44 hrs; Cm* LSI-11 (NMOS, diagnostics), 800-1600 hrs. A log excerpt reports 6 bad-time errors among 18445 total entries for all input files, over a time span of 1542 hrs beginning 17-Feb-79.
Minimum Error Entropy Classification
Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A
2013-01-01
This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners will also find in the book a detailed presentation of practical data classifiers using MEE. These include multi‐layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE‐like concept is also presented. Examples, tests, evaluation experiments, and comparisons with similar machines using classic approaches complement the descriptions.
Errors of measurement by laser goniometer
Agapov, Mikhail Y.; Bournashev, Milhail N.
2000-11-01
The report is dedicated to research on systematic errors of angle measurement by a dynamic laser goniometer (DLG) based on a ring laser (RL), intended for certification of optical angle encoders (OE), and to the development of methods for separating errors of different types and compensating for them algorithmically. The OE was an absolute photoelectric angle encoder with an information capacity of 14 bits. Kinematic connection with a rotary platform was made through a mechanical connection unit (CU). The measurement and separation of the systematic error into components was carried out by cross-calibration, with mutual rotations of the OE relative to the DLG base and of the CU relative to the OE rotor, followed by Fourier analysis of the observed data. The dynamic errors of angle measurement were studied using the dependence, on the angular rate of rotation, of the measured angle between a reference direction assigned by an interference null-indicator (NI) with an 8-faced optical polygon (OP) and the direction defined by the OE. The obtained results allow algorithmic compensation of the systematic error and, overall, a considerable reduction of the total measurement error.
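The Fourier step above can be illustrated with a toy sketch (our own, with made-up harmonic amplitudes, not the report's data): an encoder's systematic angle error, sampled uniformly over a full turn, is decomposed into harmonics whose amplitudes identify periodic error components such as eccentricity (the first harmonic).

```python
import cmath
import math

def harmonic_amplitudes(errors, max_k=8):
    """DFT of a systematic error sampled uniformly over one full rotation.

    Returns the amplitude of each harmonic k = 1..max_k; harmonic 1 is
    typically associated with encoder or coupling eccentricity.
    """
    n = len(errors)
    amps = {}
    for k in range(1, max_k + 1):
        c = sum(e * cmath.exp(-2j * math.pi * k * i / n)
                for i, e in enumerate(errors)) / n
        amps[k] = 2.0 * abs(c)   # amplitude of the k-th sinusoidal component
    return amps

# Simulated systematic error (arcsec): a 3" eccentricity term plus a
# 0.5" fourth-harmonic term, e.g. from the connection unit.
n = 360
errors = [3.0 * math.sin(2 * math.pi * i / n + 0.3)
          + 0.5 * math.sin(2 * math.pi * 4 * i / n) for i in range(n)]
amps = harmonic_amplitudes(errors)
```

Once the harmonic amplitudes and phases are known, the systematic component can be subtracted from subsequent measurements, which is the algorithmic compensation the report describes.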
Rank Modulation for Translocation Error Correction
Farnoud, Farzad; Milenkovic, Olgica
2012-01-01
We consider rank modulation codes for flash memories that allow for handling arbitrary charge drop errors. Unlike classical rank modulation codes used for correcting errors that manifest themselves as swaps of two adjacently ranked elements, the proposed translocation rank codes account for more general forms of errors that arise in storage systems. Translocations represent a natural extension of the notion of adjacent transpositions and as such may be analyzed using related concepts in combinatorics and rank modulation coding. Our results include tight bounds on the capacity of translocation rank codes, construction techniques for asymptotically good codes, as well as simple decoding methods for one class of structured codes. As part of our exposition, we also highlight the close connections between the new code family and permutations with short common subsequences, deletion and insertion error-correcting codes for permutations, and permutation arrays.
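The connection to common subsequences can be made concrete: the translocation (Ulam) distance between two permutations of length n equals n minus the length of their longest common subsequence. A minimal sketch (helper names are our own):

```python
import bisect

def translocation_distance(p, q):
    """Ulam/translocation distance between permutations p and q: n - LCS(p, q).

    One translocation moves a single element to a new position, leaving a
    common subsequence of all remaining elements intact.
    """
    n = len(p)
    # Relabel q's elements by their positions in p; the LCS of p and q is
    # then the longest increasing subsequence of the relabeled sequence.
    pos = {v: i for i, v in enumerate(p)}
    seq = [pos[v] for v in q]
    # Patience sorting: tails[k] is the smallest possible tail of an
    # increasing subsequence of length k + 1.
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return n - len(tails)
```

For example, moving the element 1 of (1, 2, 3, 4, 5) to the fourth position gives (2, 3, 4, 1, 5), at translocation distance 1, while reversing the permutation gives the maximum distance n - 1.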
Allodji, Rodrigue S.; Leuraud, Klervi; Laurier, Dominique [Institut de Radioprotection et de Surete Nucleaire (IRSN), DRPH, SRBE, Laboratoire d'Epidemiologie, Fontenay-aux-Roses (France); Thiebaut, Anne C.M. [INSERM, U657, Paris (France); Institut Pasteur, Unite Pharmaco-Epidemiologie et Maladies Infectieuses, Paris (France); Univ. Versailles Saint-Quentin, Garches (France); Henry, Stephane [Medical Council Areva Group, Pierrelatte (France); Benichou, Jacques [INSERM, U657, Rouen (France); Centre Hospitalier Universitaire (CHU) de Rouen, Unite de Biostatistique, Rouen (France); Univ. Rouen, Rouen (France)
2012-05-15
Measurement error (ME) can lead to bias in the analysis of epidemiologic studies. Here a simulation study is described that is based on data from the French Uranium Miners' Cohort and that was conducted to assess the effect of ME on the estimated excess relative risk (ERR) of lung cancer death associated with radon exposure. Starting from a scenario without any ME, data were generated containing successively Berkson or classical ME depending on time periods, to reflect changes in the measurement of exposure to radon (²²²Rn) and its decay products over time in this cohort. Results indicate that ME attenuated the level of association with radon exposure, with a negative bias percentage on the order of 60% on the ERR estimate. Sensitivity analyses showed the consequences of specific ME characteristics (type, size, structure, and distribution) on the ERR estimates. In the future, it appears important to correct for ME upon analyzing cohorts such as this one to decrease bias in estimates of the ERR of adverse events associated with exposure to ionizing radiation. (orig.)
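The attenuating effect of classical ME, and the contrast with Berkson ME, can be sketched in a few lines (a generic linear-model illustration with made-up parameters, not the cohort's ERR model):

```python
import random

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

rng = random.Random(42)
beta, n = 1.0, 20000

# Classical error: we observe W = X + U.  Regressing Y on W attenuates
# the slope by var(X) / (var(X) + var(U)) = 0.5 with these variances.
x = [rng.gauss(0, 1) for _ in range(n)]
y = [beta * xi + rng.gauss(0, 0.5) for xi in x]
w = [xi + rng.gauss(0, 1) for xi in x]
slope_classical = slope(w, y)

# Berkson error: the true X = W + U scatters around the assigned value W
# (as with group-level exposure assignment); the slope of Y on W remains
# approximately unbiased, though its variance increases.
w2 = [rng.gauss(0, 1) for _ in range(n)]
x2 = [wi + rng.gauss(0, 1) for wi in w2]
y2 = [beta * xi + rng.gauss(0, 0.5) for xi in x2]
slope_berkson = slope(w2, y2)
```

The roughly halved classical-error slope mirrors, in a simplified setting, the attenuation of the ERR estimate reported above.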
Analytical maximum likelihood estimation of stellar magnetic fields
González, M J Martínez; Ramos, A Asensio; Belluzzi, L
2011-01-01
The polarised spectrum of stellar radiation encodes valuable information on the conditions of stellar atmospheres and the magnetic fields that permeate them. In this paper, we give explicit expressions to estimate the magnetic field vector and its associated error from the observed Stokes parameters. We study the solar case, where specific intensities are observed, and then the stellar case, where we receive the polarised flux. In this second case, we concentrate on the explicit expression for a slow rotator with a dipolar magnetic field geometry. We also give explicit formulae to retrieve the magnetic field vector from the LSD profiles without assuming mean values for the LSD artificial spectral line. The formulae have been obtained assuming that the spectral lines can be described in the weak-field regime and using a maximum likelihood approach. The errors are recovered by means of the Hessian matrix. The biases of the estimators are analysed in depth.
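In the weak-field regime, Stokes V is proportional to the line-of-sight field times the intensity derivative, V(λ) ≈ -C B∥ dI/dλ with C = 4.67×10⁻¹³ g_eff λ₀² (λ in Å, B in G), and the maximum likelihood estimate and its error then follow in closed form. A synthetic sketch under these standard assumptions (our own line parameters and sign convention, not the paper's exact expressions):

```python
import math
import random

# Weak-field constant: C = 4.67e-13 * g_eff * lambda0**2 (lambda in A, B in G)
G_EFF, LAMBDA0 = 1.5, 6301.5
C = 4.67e-13 * G_EFF * LAMBDA0 ** 2

def b_los_ml(stokes_v, didl, sigma):
    """Closed-form ML estimate of the longitudinal field and its 1-sigma error,
    assuming V_i = -C * B * (dI/dlambda)_i plus Gaussian noise of std sigma."""
    denom = C * sum(d * d for d in didl)
    b = -sum(v * d for v, d in zip(stokes_v, didl)) / denom
    err = sigma / math.sqrt(C * denom)
    return b, err

# Synthetic Gaussian absorption line I = 1 - 0.5*exp(-((w-l0)/0.1)^2)
# and noisy Stokes V for a true field of 800 G.
rng = random.Random(1)
b_true, sigma = 800.0, 1e-4
wav = [LAMBDA0 + 0.002 * (i - 200) for i in range(401)]
didl = [(w - LAMBDA0) / 0.01 * math.exp(-((w - LAMBDA0) / 0.1) ** 2)
        for w in wav]  # analytic dI/dlambda of the Gaussian line
v = [-C * b_true * d + rng.gauss(0, sigma) for d in didl]
b_hat, b_err = b_los_ml(v, didl, sigma)
```

Because the model is linear in B, the ML estimate coincides with weighted least squares and the error follows from the curvature (Hessian) of the likelihood, which is the structure the paper exploits.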
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge the thermodynamic hardness is proportional to T⁻¹(I − A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness defined here and the minimum softness principle, showing that both principles are related to conditions of minimum fractional charge and maximum stability.