WorldWideScience

Sample records for minimum error discrimination

  1. Efficient optimal minimum error discrimination of symmetric quantum states

    Science.gov (United States)

    Assalini, Antonio; Cariolaro, Gianfranco; Pierobon, Gianfranco

    2010-01-01

This article deals with optimal quantum discrimination among mixed quantum states enjoying geometrical uniform symmetry with respect to a reference density operator ρ0. It is well known that the minimal error probability is attained by a positive operator-valued measure obtained as the solution of a convex optimization problem, namely a set of operators satisfying the geometrical symmetry with respect to a reference operator Π0 and maximizing Tr(ρ0Π0). In this article, by solving the dual problem, we show that the same result is obtained by minimizing the trace of a positive semidefinite operator X that commutes with the symmetry operator and satisfies X⩾ρ0. The new formulation gives deeper insight into the optimization problem and yields closed-form analytical solutions, as shown by a simple but non-trivial explanatory example. Beyond its theoretical interest, the result leads to semidefinite programming solutions of reduced complexity, allowing the numerical performance evaluation to be extended to quantum communication systems modeled in Hilbert spaces of large dimension.

  2. Optimality of minimum-error discrimination by the no-signalling condition

    OpenAIRE

    Bae, Joonwoo; Lee, Jae-weon; Kim, Jaewan; Hwang, Won-Young

    2007-01-01

In this work we relate the well-known no-go theorem that two non-orthogonal (mixed) quantum states cannot be perfectly discriminated to a general principle of physics, the no-signalling condition. In fact, we derive the minimum error in discriminating between two quantum states using the no-signalling condition.
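
    The minimum error probability for two states that this record derives is the Helstrom bound. A minimal numerical sketch (numpy only; the two qubit states below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def helstrom_error(rho1, rho2, p1=0.5):
    """Minimum error probability for discriminating two density matrices
    rho1, rho2 with priors p1 and 1 - p1 (Helstrom bound):
        P_err = 1/2 * (1 - || p1*rho1 - (1-p1)*rho2 ||_1),
    where ||.||_1 is the trace norm (sum of absolute eigenvalues)."""
    delta = p1 * rho1 - (1 - p1) * rho2
    trace_norm = np.abs(np.linalg.eigvalsh(delta)).sum()
    return 0.5 * (1.0 - trace_norm)

# Two pure qubit states |0> and |+> with equal priors.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho1 = np.outer(ket0, ket0)
rho2 = np.outer(ketp, ketp)

p_err = helstrom_error(rho1, rho2)
# For two pure states with squared overlap c, the bound reduces to
# 1/2 * (1 - sqrt(1 - c)); here c = |<0|+>|^2 = 1/2.
expected = 0.5 * (1 - np.sqrt(1 - 0.5))
print(round(p_err, 4))  # 0.1464
```

    The trace norm is evaluated from the eigenvalues of the Hermitian difference operator, which is all the bound requires.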

  3. Minimum-error discrimination between three mirror-symmetric states

    CERN Document Server

    Andersson, E; Gilson, C R; Hunter, K; Andersson, Erika; Barnett, Stephen M.; Gilson, Claire R.; Hunter, Kieran

    2002-01-01

    We present the optimal measurement strategy for distinguishing between three quantum states exhibiting a mirror symmetry. The three states live in a two-dimensional Hilbert space, and are thus overcomplete. By mirror symmetry we understand that the transformation {|+> -> |+>, |-> -> -|->} leaves the set of states invariant. The obtained measurement strategy minimizes the error probability. An experimental realization for polarized photons, realizable with current technology, is suggested.

  4. Experimental Minimum-Error Quantum-State Discrimination in High Dimensions

    Science.gov (United States)

    Solís-Prosser, M. A.; Fernandes, M. F.; Jiménez, O.; Delgado, A.; Neves, L.

    2017-03-01

Quantum mechanics forbids perfect discrimination among nonorthogonal states through a single-shot measurement. To optimize this task, many strategies were devised that later became fundamental tools for quantum information processing. Here, we address the pioneering minimum-error (ME) measurement and give the first experimental demonstration of its application for discriminating nonorthogonal states in high dimensions. Our scheme is designed to distinguish symmetric pure states encoded in the transverse spatial modes of an optical field; the optimal measurement is performed by a projection onto the Fourier transform basis of these modes. For dimensions ranging from D =2 to D =21 and nearly 14 000 states tested, the deviations of the experimental results from the theoretical values range from 0.3% to 3.6% (getting below 2% for the vast majority), thus showing the excellent performance of our scheme. This ME measurement is a building block for high-dimensional implementations of many quantum communication protocols, including probabilistic state discrimination, dense coding with nonmaximal entanglement, and cryptographic schemes.
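
    For equiprobable symmetric pure states of the kind used in this experiment, the ME measurement is a projection onto the Fourier-transform basis. A small numerical sketch under simplifying assumptions (phase-shift-generated states with real positive fiducial amplitudes; the closed-form success probability for this family is a known result):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 5

# Equiprobable symmetric pure states psi_k[n] = c[n] * exp(2j*pi*k*n/D),
# k = 0..D-1, built from real positive fiducial amplitudes c (an assumption).
c = rng.random(D) + 0.1
c /= np.linalg.norm(c)
states = [c * np.exp(2j * np.pi * k * np.arange(D) / D) for k in range(D)]

# ME-optimal measurement for this family: project onto the discrete-
# Fourier-transform basis f_k[n] = exp(2j*pi*k*n/D) / sqrt(D), guess "k".
f = [np.exp(2j * np.pi * k * np.arange(D) / D) / np.sqrt(D) for k in range(D)]
p_success = np.mean([abs(np.vdot(f[k], states[k]))**2 for k in range(D)])

# Known closed form for this family: P_s = (sum_n c_n)^2 / D.
print(np.isclose(p_success, c.sum()**2 / D))  # True
```

    For equal amplitudes c_n = 1/sqrt(D) the states become the orthogonal Fourier basis and the success probability reaches 1, as expected.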

  5. The minimum-error discrimination via Helstrom family of ensembles and Convex Optimization

    CERN Document Server

    Jafarizadeh, M A; Aali, M

    2009-01-01

Using the convex optimization method and the Helstrom family of ensembles introduced in Ref. [1], we discuss optimal ambiguous discrimination in qubit systems. We analyze the problem of optimally discriminating N known quantum states and obtain the maximum success probability and the optimal measurement for N known quantum states with equiprobable prior probabilities, equidistant from the center of the Bloch ball, not all lying on one half of the Bloch ball, and with all conjugate states pure. An exact solution is also given for arbitrary three known quantum states. The worked examples include: 1. N diagonal mixed states; 2. N equiprobable states equidistant from the center of the Bloch ball whose Bloch vectors are inclined at equal angles from the z axis; 3. three mirror-symmetric states; 4. states prepared with equal prior probabilities on the vertices of a Platonic solid. Keywords: minimum-error discrimination, success prob...

  6. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners will also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments, and comparisons with similar machines using classic approaches complement the descriptions.

  7. The 95% confidence intervals of error rates and discriminant coefficients

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2015-02-01

Full Text Available Fisher proposed a linear discriminant function (Fisher’s LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop diagnostic logic for discriminating between normal and abnormal symptoms using Fisher’s LDF and a quadratic discriminant function (QDF). Our four-year research effort was inferior to the decision-tree logic developed by the medical doctor. After this experience, we discriminated many datasets and found four problems with discriminant analysis. A revised optimal LDF by integer programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM) criterion resolves three of these problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) of the error rate and discriminant coefficients. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (CIs) of error rates and discriminant coefficients.
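
    A minimal sketch of the k-fold cross-validation idea for attaching a 95% confidence interval to an error rate (synthetic data and a nearest-class-mean classifier are stand-ins for the ECG data and Fisher's LDF):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class data (a hypothetical stand-in for the ECG data).
n = 200
X = np.vstack([rng.normal(0.0, 1.0, (n, 2)), rng.normal(1.5, 1.0, (n, 2))])
y = np.repeat([0, 1], n)

def nearest_mean_error(Xtr, ytr, Xte, yte):
    """Error rate of a minimal nearest-class-mean classifier."""
    m0, m1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - m1, axis=1)
            < np.linalg.norm(Xte - m0, axis=1)).astype(int)
    return np.mean(pred != yte)

# k-fold cross-validation: one error estimate per fold, then a
# normal-approximation 95% confidence interval over the folds.
k = 10
idx = rng.permutation(len(y))
folds = np.array_split(idx, k)
errs = []
for i in range(k):
    te = folds[i]
    tr = np.concatenate([folds[j] for j in range(k) if j != i])
    errs.append(nearest_mean_error(X[tr], y[tr], X[te], y[te]))
errs = np.array(errs)
half = 1.96 * errs.std(ddof=1) / np.sqrt(k)
print(round(errs.mean(), 3), round(half, 3))
```

    The same per-fold bookkeeping applied to the fitted coefficients (instead of the error rate) yields the confidence intervals of the discriminant coefficients.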

  8. Tracking error with minimum guarantee constraints

    OpenAIRE

    Diana Barro; Elio Canestrelli

    2008-01-01

In recent years the popularity of indexing has greatly increased in financial markets and many different families of products have been introduced. Often these products also have a minimum guarantee in the form of a minimum rate of return at specified dates or a minimum level of wealth at the end of the horizon. Periods of declining stock market returns, together with low interest rates on Treasury bonds, make it more difficult to meet these liabilities. We formulate a dynamic asset alloca...

  9. Minimum Bayesian error probability-based gene subset selection.

    Science.gov (United States)

    Li, Jian; Yu, Tian; Wei, Jin-Mao

    2015-01-01

Sifting functional genes is crucial to the new strategies for drug discovery and prospective patient-tailored therapy. Generally, simply generating a gene subset by selecting the top k individually superior genes may yield an inferior gene combination, for some selected genes may be redundant with respect to others. In this paper, we propose to select the gene subset based on the criterion of minimum Bayesian error probability. The method dynamically evaluates all available genes and sifts only one gene at a time. A gene is selected if its combination with the other selected genes gains better classification information. Within the generated gene subset, each individual gene is the most discriminative one in comparison with those that classify cancers in the same way as it does, and the genes are more discriminative in combination than individually. The genes selected in this way are likely to be functional ones from the systems biology perspective, for genes tend to co-regulate rather than regulate individually. Experimental results show that the classifiers induced based on this method are capable of classifying cancers with high accuracy, while only a small number of genes are involved.
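
    A toy sketch of the greedy one-gene-at-a-time idea (synthetic features stand in for genes, and the resubstitution error of a nearest-mean rule stands in for the Bayesian error probability criterion):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "expression" matrix: 6 features, of which only features 0 and 3
# carry class signal (a hypothetical stand-in for gene-expression data).
n = 300
y = rng.integers(0, 2, n)
X = rng.normal(0.0, 1.0, (n, 6))
X[:, 0] += 2.0 * y
X[:, 3] += 1.5 * y

def error_estimate(Xs, y):
    """Resubstitution error of a nearest-class-mean rule: a simple
    stand-in for the Bayesian error probability criterion."""
    m0, m1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - m1, axis=1)
            < np.linalg.norm(Xs - m0, axis=1)).astype(int)
    return np.mean(pred != y)

# Greedy forward selection: add the one feature whose combination with
# the already-selected ones yields the lowest estimated error.
selected, remaining = [], list(range(X.shape[1]))
for _ in range(2):
    best = min(remaining, key=lambda j: error_estimate(X[:, selected + [j]], y))
    selected.append(best)
    remaining.remove(best)

print(selected)
```

    The first pick should be the strongest single feature, and the second pick is judged by how much it improves the *combination*, which is what penalizes redundant features.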

  10. Discriminant analysis with errors in variables

    CERN Document Server

    Loustau, Sébastien

    2012-01-01

The effect of measurement error in discriminant analysis is investigated. Given observations $Z=X+\\epsilon$, where $\\epsilon$ denotes a random noise, the goal is to predict the density of $X$ among two possible candidates $f$ and $g$. We suppose that we have at our disposal two learning samples. The aim is to approach the best possible decision rule $G^*$, defined as a minimizer of the Bayes risk. In the noise-free case $(\\epsilon=0)$, minimax fast rates of convergence are well known under the margin assumption in discriminant analysis (see \\cite{mammen}) or in the more general classification framework (see \\cite{tsybakov2004,AT}). In this paper we intend to establish similar results in the noisy case, i.e., when dealing with errors in variables. In particular, we discuss two possible complexity assumptions that can be set on the problem, which may alternatively concern the regularity of $f-g$ or the boundary of $G^*$. We prove minimax lower bounds for both problems and explain how these rates can be atta...

  11. Effect Of Oceanic Lithosphere Age Errors On Model Discrimination

    Science.gov (United States)

    DeLaughter, J. E.

    2016-12-01

The thermal structure of the oceanic lithosphere is the subject of a long-standing controversy. Because the thermal structure varies with age, it governs properties such as heat flow, density, and bathymetry, with important implications for plate tectonics. It is common to fit bathymetry, geoid, and heat flow data to an inverse model to determine details of lithospheric structure. Though inverse models usually include the effect of errors in bathymetry, heat flow, and geoid, they rarely examine the effects of errors in age. This may introduce subtle biases into inverse models of the oceanic lithosphere. Because the inverse problem for thermal structure is both ill-posed and ill-conditioned, these overlooked errors may have a greater effect than expected. The problem is further complicated by the non-uniform distribution of age and of errors in age estimates; for example, only 30% of the oceanic lithosphere is older than 80 MY and less than 3% is older than 150 MY. To determine the potential strength of such biases, I have used the age and error maps of Mueller et al. (2008) to forward model the bathymetry for half-space and GDH1 plate models. For ages less than 20 MY, both models give similar results. The errors induced by uncertainty in age are relatively large and suggest that, when possible, young lithosphere should be excluded when examining the lithospheric thermal model. As expected, GDH1 bathymetry converges asymptotically on the theoretical result for error-free data at older ages. The resulting uncertainty is nearly as large as that introduced by errors in the other parameters; in the absence of other errors, the models can only be distinguished for ages greater than 80 MY. These results suggest that the problem should be approached with the minimum possible number of variables. For example, examining the direct relationship of geoid to bathymetry or heat flow, instead of their relationship to age, should reduce uncertainties.

  12. Quantum state discrimination using the minimum average number of copies

    CERN Document Server

    Slussarenko, Sergei; Li, Jun-Gang; Campbell, Nicholas; Wiseman, Howard M; Pryde, Geoff J

    2016-01-01

    In the task of discriminating between nonorthogonal quantum states from multiple copies, the key parameters are the error probability and the resources (number of copies) used. Previous studies have considered the task of minimizing the average error probability for fixed resources. Here we consider minimizing the average resources for a fixed admissible error probability. We derive a detection scheme optimized for the latter task, and experimentally test it, along with schemes previously considered for the former task. We show that, for our new task, our new scheme outperforms all previously considered schemes.

  13. On the Smoothed Minimum Error Entropy Criterion 

    Directory of Open Access Journals (Sweden)

    Badong Chen

    2012-11-01

Full Text Available Recent studies suggest that the minimum error entropy (MEE) criterion can outperform the traditional mean square error criterion in supervised machine learning, especially in nonlinear and non-Gaussian situations. In practice, however, one has to estimate the error entropy from the samples, since in general the analytical evaluation of error entropy is not possible. By the Parzen windowing approach, the estimated error entropy converges asymptotically to the entropy of the error plus an independent random variable whose probability density function (PDF) corresponds to the kernel function in the Parzen method. This quantity of entropy is called the smoothed error entropy, and the corresponding optimality criterion is named the smoothed MEE (SMEE) criterion. In this paper, we study theoretically the SMEE criterion in supervised machine learning where the learning machine is assumed to be nonparametric and universal. Some basic properties are presented. In particular, we show that when the smoothing factor is very small, the smoothed error entropy equals approximately the true error entropy plus a scaled version of the Fisher information of the error. We also investigate how the smoothing factor affects the optimal solution. In some special situations, the optimal solution under the SMEE criterion does not change with increasing smoothing factor. In general cases, when the smoothing factor tends to infinity, minimizing the smoothed error entropy will be approximately equivalent to minimizing the error variance, regardless of the conditional PDF and the kernel.
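
    The smoothing effect described above can be checked numerically for the quadratic (Renyi-2) entropy, where both the Parzen estimator and the smoothed entropy have closed forms. A sketch under Gaussian-error assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def smoothed_renyi2_entropy(e, sigma):
    """Parzen estimate of the quadratic (Renyi-2) error entropy with a
    Gaussian kernel of width sigma: H2 = -log V, where V is the
    "information potential" (1/N^2) * sum_ij G(e_i - e_j) and the
    pairwise kernel G has variance 2*sigma^2."""
    d = e[:, None] - e[None, :]
    var = 2.0 * sigma**2
    V = np.mean(np.exp(-d**2 / (2 * var)) / np.sqrt(2 * np.pi * var))
    return -np.log(V)

# For Gaussian errors e ~ N(0, s^2), the smoothed entropy should match the
# Renyi-2 entropy of e plus independent kernel noise, i.e. of N(0, s^2+sigma^2):
#     H2 = log(2 * sqrt(pi * (s^2 + sigma^2))).
s, sigma = 1.0, 0.5
e = rng.normal(0.0, s, 2000)
h_hat = smoothed_renyi2_entropy(e, sigma)
h_theory = np.log(2.0 * np.sqrt(np.pi * (s**2 + sigma**2)))
print(round(h_hat, 3), round(h_theory, 3))
```

    Increasing sigma increases both quantities together, which is the "entropy of the error plus an independent kernel-distributed variable" effect the paper analyzes.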

  14. Adaptive Linear Filtering Design with Minimum Symbol Error Probability Criterion

    Institute of Scientific and Technical Information of China (English)

    Sheng Chen

    2006-01-01

Adaptive digital filtering has traditionally been developed based on the minimum mean square error (MMSE) criterion and has found ever-increasing applications in communications. This paper presents an alternative adaptive filtering design based on the minimum symbol error rate (MSER) criterion for communication applications. It is shown that MSER filtering is smarter, as it exploits the non-Gaussian distribution of the filter output effectively. Consequently, it provides a significant performance gain, in terms of a smaller symbol error rate, over the MMSE approach. Adopting Parzen window or kernel density estimation for the probability density function, a block-data gradient adaptive MSER algorithm is derived. A stochastic gradient adaptive MSER algorithm, referred to as the least symbol error rate algorithm, is further developed for sample-by-sample adaptive implementation of MSER filtering. Two applications, involving single-user channel equalization and a beamforming-assisted receiver, are included to demonstrate the effectiveness and generality of the proposed adaptive MSER filtering approach.

  15. Minimum Mean Square Error Estimation Under Gaussian Mixture Statistics

    CERN Document Server

    Flam, John T; Kansanen, Kimmo; Ekman, Torbjorn

    2011-01-01

This paper investigates the minimum mean square error (MMSE) estimation of x, given the observation y = Hx + n, when x and n are independent and Gaussian Mixture (GM) distributed. The introduction of GM distributions represents a generalization of the more familiar and simpler Gaussian-signal, Gaussian-noise instance. We present the necessary theoretical foundation and derive the MMSE estimator for x in closed form. Furthermore, we provide upper and lower bounds for its mean square error (MSE). These bounds are validated through Monte Carlo simulations.
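
    For the scalar case y = x + n with a Gaussian-mixture prior on x and Gaussian noise, the closed-form MMSE estimator is the posterior-weighted combination of per-component Wiener estimates. A minimal sketch (the parameters are illustrative assumptions):

```python
import numpy as np

def gm_mmse(y, w, m, v, r):
    """MMSE estimate of scalar x from y = x + n, where x is a Gaussian
    mixture (weights w, means m, variances v) and n ~ N(0, r) independent.
    The estimator is the posterior-weighted combination of per-component
    linear MMSE (Wiener) estimates; closed form, no iteration needed."""
    lik = np.exp(-(y - m)**2 / (2 * (v + r))) / np.sqrt(2 * np.pi * (v + r))
    post = w * lik
    post /= post.sum()                        # posterior component weights
    cond_mean = m + v / (v + r) * (y - m)     # per-component MMSE estimate
    return np.sum(post * cond_mean)

# Two-component prior centered at -2 and +2; an observation near one mode
# is dominated by that component's Wiener estimate.
w = np.array([0.5, 0.5])
m = np.array([-2.0, 2.0])
v = np.array([1.0, 1.0])
xhat = gm_mmse(2.0, w, m, v, r=1.0)
print(round(xhat, 3))  # 1.964
```

    With a single mixture component this collapses to the familiar Gaussian Wiener estimate m + v/(v+r)*(y-m).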

  16. Normalized Minimum Error Entropy Algorithm with Recursive Power Estimation

    Directory of Open Access Journals (Sweden)

    Namyong Kim

    2016-06-01

Full Text Available The minimum error entropy (MEE) algorithm is known to be superior in signal processing applications under impulsive noise. In this paper, based on an analysis of the behavior of the optimum weight and the properties of robustness against impulsive noise, a normalized version of the MEE algorithm is proposed. The step size of the MEE algorithm is normalized with the power of the input entropy, which is estimated recursively to reduce computational complexity. In equalization simulations, the proposed algorithm simultaneously yields a lower minimum MSE (mean squared error) and faster convergence than the original MEE algorithm. At the same convergence speed, its steady-state MSE improvement exceeds 3 dB.

  17. Minimum mean square error method for stripe nonuniformity correction

    Institute of Scientific and Technical Information of China (English)

    Weixian Qian; Qian Chen; Guohua Gu

    2011-01-01

Stripe nonuniformity is very typical in line infrared focal plane arrays (IRFPAs) and uncooled staring IRFPAs. We develop the minimum mean square error (MMSE) method for stripe nonuniformity correction (NUC). The goal of the MMSE method is to determine the optimal NUC parameters that make the corrected image closest to the ideal image. Moreover, this method can be achieved in one frame, making it more competitive than other scene-based NUC algorithms. We also demonstrate the calibration results of our algorithm using real and virtual infrared image sequences. The experiments verify the positive effect of our algorithm.
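
    A heavily simplified single-frame illustration of stripe NUC (per-column gain/offset chosen by moment matching, a crude proxy for the paper's MMSE parameter estimation; the simulated stripes are an assumption):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated scene plus column-wise (stripe) gain/offset nonuniformity.
H, W = 64, 64
scene = rng.normal(100.0, 10.0, (H, W))
gain = rng.normal(1.0, 0.05, W)
offset = rng.normal(0.0, 5.0, W)
observed = scene * gain + offset

# Single-frame stripe correction: choose per-column gain/offset so each
# corrected column matches the global frame statistics, a least-squares
# proxy for driving the corrected image toward the ideal one.
col_mean = observed.mean(axis=0)
col_std = observed.std(axis=0)
g = observed.std() / col_std
o = observed.mean() - g * col_mean
corrected = observed * g + o

# Column-mean spread (stripe strength) before vs after correction.
print(corrected.mean(axis=0).std() < observed.mean(axis=0).std())  # True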

  18. Proportionate Minimum Error Entropy Algorithm for Sparse System Identification

    Directory of Open Access Journals (Sweden)

    Zongze Wu

    2015-08-01

Full Text Available Sparse system identification has received a great deal of attention due to its broad applicability. The proportionate normalized least mean square (PNLMS) algorithm, as a popular tool, achieves excellent performance for sparse system identification. In previous studies, most of the cost functions used in proportionate-type sparse adaptive algorithms are based on the mean square error (MSE) criterion, which is optimal only when the measurement noise is Gaussian. However, this condition does not hold in most real-world environments. In this work, we use the minimum error entropy (MEE) criterion, an alternative to the conventional MSE criterion, to develop the proportionate minimum error entropy (PMEE) algorithm for sparse system identification, which may achieve much better performance than the MSE based methods especially in heavy-tailed non-Gaussian situations. Moreover, we analyze the convergence of the proposed algorithm and derive a sufficient condition that ensures the mean square convergence. Simulation results confirm the excellent performance of the new algorithm.

  19. Regularized Kernel Forms of Minimum Squared Error Method

    Institute of Scientific and Technical Information of China (English)

    XU Jian-hua; ZHANG Xue-gong; LI Yan-da

    2006-01-01

Minimum squared error (MSE) algorithm is one of the classical pattern recognition and regression analysis methods, whose objective is to minimize the squared error summation between the output of the linear function and the desired output. In this paper, the MSE algorithm is modified by using kernel functions satisfying the Mercer condition together with the regularization technique, and the nonlinear MSE algorithms based on kernels and a regularization term, that is, the regularized kernel forms of the MSE algorithm, are proposed. Their objective functions include the squared error summation between the output of the nonlinear kernel-based function and the desired output, plus a proper regularization term. The regularization technique can handle ill-posed problems, reduce the solution space, and control the generalization. Three squared regularization terms are utilized in this paper. In accordance with the probabilistic interpretation of regularization terms, the difference among the three regularization terms is given in detail. Synthetic and real data are used to analyze the algorithm performance.
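
    One concrete instance of a regularized kernel MSE algorithm is kernel ridge regression with a Gaussian (Mercer) kernel and an RKHS-norm penalty, one plausible choice among the squared regularization terms the paper discusses. A sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

def gaussian_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix, a standard Mercer kernel."""
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

# Regularized kernel MSE: minimize ||K a - y||^2 + lam * a^T K a,
# whose stationarity condition gives a = (K + lam*I)^{-1} y.
X = rng.uniform(-3, 3, (80, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 80)

K = gaussian_kernel(X, X)
lam = 1e-2
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

# Predict on a grid and compare against the clean target.
Xt = np.linspace(-3, 3, 50)[:, None]
pred = gaussian_kernel(Xt, X) @ alpha
rmse = np.sqrt(np.mean((pred - np.sin(Xt[:, 0]))**2))
print(round(rmse, 3))
```

    The regularization term shrinks the coefficient vector, which is exactly the ill-posedness control the abstract describes; lam = 0 would require inverting a possibly near-singular K.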

  20. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    Directory of Open Access Journals (Sweden)

    Zhongzhou Du

    2015-04-01

Full Text Available The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources as the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and the AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB and other system errors were not considered. The temperature error was also below 0.1 K in experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method.

  1. Triangle orientation discrimination: the alternative to minimum resolvable temperature difference and minimum resolvable contrast

    Science.gov (United States)

    Bijl, Piet; Valeton, J. Mathieu

    1998-07-01

The characterization of electro-optical system performance by means of the minimum resolvable temperature difference (MRTD) or the minimum resolvable contrast (MRC) has at least three serious disadvantages: (1) the bar pattern stimulus is theoretically and practically unsuitable for 1D or 2D spatially sampled systems such as pixel-array cameras, (2) spatial phase is not taken into account, and (3) the results depend on the observer's subjective decision criterion. We propose an adequate and easily applicable alternative: the triangle orientation discrimination (TOD) threshold. The TOD is based on an improved test pattern, a better defined observer task, and a solid psychophysical measurement procedure. The method has a large number of theoretical and practical advantages: it is suitable for pixel-array cameras, scanning systems, and other electro-optical and optical imaging systems in both the thermal and visual domains, it has a close relationship to real target acquisition, and the observer task is easy. The results are free from observer bias and allow statistical significance tests. The method lends itself very well to automatic measurements, and can be extended for future sensor systems that include advanced image processing. The TOD curve can be implemented easily in a target acquisition (TA) model such as ACQUIRE. An observer performance study with real targets shows that the TOD curve better predicts TA performance than the MRC does. The method has been implemented successfully in a thermal imager field test apparatus called the thermal imager performance indicator and may be implemented in current MRTD test equipment with little effort.

  2. Wage discrimination and partial compliance with the minimum wage law

    OpenAIRE

    Yang-Ming Chang; Bhavneet Walia

    2007-01-01

This paper presents a simple model to characterize the discriminatory behavior of a non-complying firm in a minimum-wage economy. In the analysis, the violating firm pays one “favored” group of workers the statutory minimum and the other “non-favored” group of workers a sub-minimum. We find conditions under which law enforcement is ineffective in improving the between-group wage differentials. We show that an increase in the minimum wage raises the sub-minimum wage and employment of wor...

  3. Unbiased bootstrap error estimation for linear discriminant analysis.

    Science.gov (United States)

    Vu, Thang; Sima, Chao; Braga-Neto, Ulisses M; Dougherty, Edward R

    2014-12-01

    Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weight adaptively. In this paper, we study the finite sample problem in the case of linear discriminant analysis under Gaussian populations. We derive exact expressions for the weight that guarantee unbiasedness of the convex bootstrap error estimator in the univariate and multivariate cases, without making asymptotic simplifications. Using exact computation in the univariate case and an accurate approximation in the multivariate case, we obtain the required weight and show that it can deviate significantly from the constant 0.632 weight, depending on the sample size and Bayes error for the problem. The methodology is illustrated by application on data from a well-known cancer classification study.
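
    The classical convex combination with the fixed 0.632 weight (which the paper replaces with an exactly unbiased, sample-size-dependent weight) can be sketched as follows; a nearest-class-mean classifier stands in for LDA:

```python
import numpy as np

rng = np.random.default_rng(6)

def nm_error(Xtr, ytr, Xte, yte):
    """Nearest-class-mean classifier error (a simple stand-in for LDA)."""
    m0, m1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - m1, axis=1)
            < np.linalg.norm(Xte - m0, axis=1)).astype(int)
    return np.mean(pred != yte)

# Small two-class Gaussian sample, the regime where estimator bias matters.
X = np.vstack([rng.normal(0.0, 1.0, (25, 2)), rng.normal(1.2, 1.0, (25, 2))])
y = np.repeat([0, 1], 25)

# Convex .632 bootstrap: (1 - w) * resubstitution + w * zero-bootstrap
# error, with the classical fixed weight w = 0.632.
resub = nm_error(X, y, X, y)
boot_errs = []
for _ in range(100):
    idx = rng.integers(0, len(y), len(y))           # bootstrap resample
    oob = np.setdiff1d(np.arange(len(y)), idx)      # left-out points
    if len(oob) and len(np.unique(y[idx])) == 2:
        boot_errs.append(nm_error(X[idx], y[idx], X[oob], y[oob]))
err_632 = 0.368 * resub + 0.632 * np.mean(boot_errs)
print(round(err_632, 3))
```

    Resubstitution is optimistically biased and the zero-bootstrap pessimistically biased; the convex weight trades the two off, and the paper's contribution is computing the weight that makes the combination exactly unbiased at a given sample size.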

  4. Minimum Error Thresholding Segmentation Algorithm Based on 3D Grayscale Histogram

    Directory of Open Access Journals (Sweden)

    Jin Liu

    2014-01-01

Full Text Available Threshold segmentation is a very important technique. The existing threshold algorithms do not work efficiently for noisy grayscale images. This paper proposes a novel algorithm called three-dimensional minimum error thresholding (3D-MET), which is used to solve the problem. The proposed approach is implemented by an optimal threshold discriminant based on relative entropy theory and the 3D histogram. The histogram is comprised of gray distribution information of pixels and relevant information of neighboring pixels in an image. Moreover, a fast recursive method is proposed to reduce the time complexity of 3D-MET from O(L^6) to O(L^3), where L stands for gray levels. Experimental results demonstrate that the proposed approach can provide superior segmentation performance compared to other methods for gray image segmentation.
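
    The 1D ancestor of 3D-MET is classic minimum-error (Kittler-Illingworth) thresholding, which fits a two-Gaussian model to the histogram; a sketch of that base method (the 3D extension adds neighborhood information and the recursive speed-up):

```python
import numpy as np

def min_error_threshold(hist):
    """Classic 1D minimum-error thresholding (Kittler-Illingworth): pick
    the threshold t minimizing the criterion
        J(t) = 1 + 2*(P1*ln(s1) + P2*ln(s2)) - 2*(P1*ln(P1) + P2*ln(P2)),
    where P, s are the weight and std of the two histogram halves."""
    hist = hist.astype(float) / hist.sum()
    g = np.arange(len(hist))
    best_t, best_j = None, np.inf
    for t in range(1, len(hist) - 1):
        p1, p2 = hist[:t].sum(), hist[t:].sum()
        if p1 <= 0 or p2 <= 0:
            continue
        m1 = (g[:t] * hist[:t]).sum() / p1
        m2 = (g[t:] * hist[t:]).sum() / p2
        v1 = ((g[:t] - m1)**2 * hist[:t]).sum() / p1
        v2 = ((g[t:] - m2)**2 * hist[t:]).sum() / p2
        if v1 <= 0 or v2 <= 0:
            continue
        j = (1 + 2 * (p1 * np.log(np.sqrt(v1)) + p2 * np.log(np.sqrt(v2)))
               - 2 * (p1 * np.log(p1) + p2 * np.log(p2)))
        if j < best_j:
            best_t, best_j = t, j
    return best_t

# Bimodal histogram: two Gaussian-ish modes around gray levels 60 and 180.
rng = np.random.default_rng(7)
pix = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
hist, _ = np.histogram(np.clip(pix, 0, 255), bins=256, range=(0, 256))
t = min_error_threshold(hist)
print(t)
```

    The O(L^6)-to-O(L^3) reduction in the paper comes from making the analogous 3D sums recursive rather than recomputing them for each candidate threshold, as the naive loop above does in 1D.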

  5. Augmented GNSS differential corrections minimum mean square error estimation sensitivity to spatial correlation modeling errors.

    Science.gov (United States)

    Kassabian, Nazelie; Lo Presti, Letizia; Rispoli, Francesco

    2014-06-11

Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs, which is a priority to sustain the investments for modernizing the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough correlation distance to Reference Stations (RSs) distance separation ratio values, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
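
    A minimal LMMSE sketch for spatially correlated differential corrections under the Gauss-Markov (exponential) correlation model described above (the station geometry and all parameters are illustrative assumptions):

```python
import numpy as np

# Reference-station positions along a line (km) and a user at x_u = 50 km.
x_rs = np.array([0.0, 20.0, 40.0, 80.0, 120.0])
x_u = np.array([50.0])
d0 = 100.0                 # correlation distance of the true DC field (km)
sigma_dc, sigma_n = 1.0, 0.3

def cov(a, b):
    """Gauss-Markov (exponential) spatial covariance of the true DCs."""
    return sigma_dc**2 * np.exp(-np.abs(a[:, None] - b[None, :]) / d0)

# Covariance of the noisy RS measurements and user/RS cross-covariance.
C_yy = cov(x_rs, x_rs) + sigma_n**2 * np.eye(len(x_rs))
C_xy = cov(x_u, x_rs)

# LMMSE weights and the posterior error variance of the DC at the user.
W = C_xy @ np.linalg.inv(C_yy)
post_var = sigma_dc**2 - (W @ C_xy.T)[0, 0]
print(round(post_var, 3))
```

    The sensitivity study in the paper amounts to computing W with a mismatched d0 (a wrong assumed correlation distance) and comparing the resulting error variance to this matched-model baseline.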

  6. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    Directory of Open Access Journals (Sweden)

    Nazelie Kassabian

    2014-06-01

Full Text Available Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs, which is a priority to sustain the investments for modernizing the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough correlation distance to Reference Stations (RSs) distance separation ratio values, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.

  7. MINIMUM DISCRIMINATION INFORMATION PROBLEMS VIA GENERALIZED GEOMETRIC PROGRAMMING

    Institute of Scientific and Technical Information of China (English)

    ZhuDetong

    2003-01-01

    In this paper, the quadratic program problem and the minimum discrimination information (MDI) problem with a set of quadratic inequality constraints and entropy constraints on the density are considered. Based on the properties of generalized geometric programming, the dual programs of these two problems are derived. Furthermore, the duality theorems and related Kuhn-Tucker conditions for the two pairs of primal-dual programs are also established by duality theory.

  8. Optimal state discrimination with an error margin of pure and mixed symmetric states: irreducible qudit and reducible qubit states

    Science.gov (United States)

    Jafarizadeh, M. A.; Mahmoudi, P.; Akhgar, D.; Faizi, E.

    2017-06-01

    Minimum error discrimination (MED) and unambiguous discrimination (UD) are two common strategies for quantum state discrimination; both can be modified by imposing a finite margin on the error probability, with error margins 0 and 1 recovering UD and MED, respectively. In this paper, for an arbitrary error margin m, the discrimination problem for equiprobable symmetric quantum states is solved analytically in four distinct cases. Generating sets of irreducible and reducible representations of a subgroup of a unitary group are considered, separately, as the unitary operators that produce each set of symmetric states. In the irreducible case, for N ≥ d mixed and pure qudit states, one critical value of m, which divides the parameter space into two domains, is obtained. In the reducible case, the number of critical values of m is two, for both N mixed and N pure qubit states; the reason for this difference in the number of critical values is explained. The optimal set of measurements and the corresponding maximum success probability are determined in fully analytical form for all values of the error margin. The relationship between the amount of error imposed on the error probability and the geometrical configuration of the states is determined through changes in the rank of the measurement element corresponding to the inconclusive result. The behavior of the measurement elements in each domain is explained geometrically in terms of decreasing the error probability. Furthermore, the problem of discrimination with an error margin between elements of two different sets of symmetric quantum states is studied; the number of critical values of m is the same as for a single set, in both the reducible and irreducible cases. In addition, the optimal measurements in each domain are obtained.

  9. Euclidean Geometry Codes, minimum weight words and decodable error-patterns using bit-flipping

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn; Jonsson, Bergtor

    2005-01-01

    We determine the number of minimum weight words in a class of Euclidean Geometry codes and link the performance of the bit-flipping decoding algorithm to the geometry of the error patterns.

  10. Least mean square error difference minimum criterion for adaptive chaotic noise canceller

    Institute of Scientific and Technical Information of China (English)

    Zhang Jia-Shu

    2007-01-01

    The least mean square error difference (LMS-ED) minimum criterion for an adaptive chaotic noise canceller is proposed in this paper. Different from the traditional least mean square error minimum criterion, in which the error is made uncorrelated with the input vector, the proposed LMS-ED minimum criterion tries to minimize the correlation between the error difference and the input vector difference. The novel adaptive LMS-ED algorithm is then derived to update the weights of the adaptive noise canceller. A comparison of the cancelling performances of the adaptive least mean square (LMS), normalized LMS (NLMS) and proposed LMS-ED algorithms is simulated using three kinds of chaotic noise. The simulation results clearly show that the proposed algorithm outperforms the LMS and NLMS algorithms in achieving small values of steady-state excess mean square error. Moreover, the computational complexity of the proposed LMS-ED algorithm is the same as that of the standard LMS algorithm.
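    The LMS-ED weight update described above can be sketched on a toy system identification task (the FIR system, step size, and signal model below are illustrative, not the paper's chaotic-noise setup):

    ```python
    import numpy as np

    # Instead of the LMS update mu * e[n] * x[n], LMS-ED minimizes
    # E[(e[n]-e[n-1])^2] and so correlates the error *difference* with the
    # input vector *difference*: w += mu * (e[n]-e[n-1]) * (x[n]-x[n-1]).
    rng = np.random.default_rng(1)

    w_true = np.array([0.6, -0.3, 0.1])     # unknown FIR system (illustrative)
    N, mu = 20000, 0.02
    x = rng.standard_normal(N + len(w_true))
    d = np.convolve(x, w_true, mode="valid") + 0.05 * rng.standard_normal(N + 1)

    w = np.zeros(3)
    e_prev, x_prev = 0.0, np.zeros(3)
    for n in range(N + 1):
        xn = x[n:n + 3][::-1]               # current input vector, newest sample first
        e = d[n] - w @ xn                   # a priori error
        w += mu * (e - e_prev) * (xn - x_prev)   # LMS-ED update
        e_prev, x_prev = e, xn

    print(np.round(w, 2))                   # converges close to w_true
    ```

    Note that per iteration this costs the same as standard LMS: one extra vector subtraction replaces nothing more expensive, consistent with the complexity claim in the abstract.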

  11. Minimum Error Entropy Filter for Fault Detection of Networked Control Systems

    OpenAIRE

    Guolian Hou; Mifeng Ren; Lilong Du; Jianhua Zhang

    2012-01-01

    In this paper, fault detection of networked control systems with random delays, packet dropout and noises is studied. The filter is designed using a minimum error entropy criterion. The residual generated by the filter is then evaluated to detect faults in networked control systems. An illustrative networked control system is used to verify the effectiveness of the proposed approach.

  12. Minimum Error Entropy Filter for Fault Detection of Networked Control Systems

    Directory of Open Access Journals (Sweden)

    Guolian Hou

    2012-03-01

    Full Text Available In this paper, fault detection of networked control systems with random delays, packet dropout and noises is studied. The filter is designed using a minimum error entropy criterion. The residual generated by the filter is then evaluated to detect faults in networked control systems. An illustrative networked control system is used to verify the effectiveness of the proposed approach.

  13. Minimum Symbol Error Probability MIMO Design under the Per-Antenna Power Constraint

    Directory of Open Access Journals (Sweden)

    Enoch Lu

    2012-01-01

    Full Text Available Approximate minimum symbol error probability transceiver design of single user MIMO systems under the practical per-antenna power constraint is considered. The upper bound of a lower bound on the minimum distance between the symbol hypotheses is established. Necessary conditions and structures of the transmit covariance matrix for reaching the upper bound are discussed. Three numerical approaches (rank zero, rank one, and permutation for obtaining the optimum precoder are proposed. When the upper bound is reached, the resulting design is optimum. When the upper bound is not reached, a numerical fix is used. The approach is very simple and can be of practical use.

  14. A minimum bit error-rate detector for amplify and forward relaying systems

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2012-05-01

    In this paper, a new detector is proposed for amplify-and-forward (AF) relaying systems communicating with the assistance of L relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector requires no channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1 dB at a BER of 10^-5 compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.

  15. Item Discrimination and Type I Error in the Detection of Differential Item Functioning

    Science.gov (United States)

    Li, Yanju; Brooks, Gordon P.; Johanson, George A.

    2012-01-01

    In 2009, DeMars stated that when impact exists there will be Type I error inflation, especially with larger sample sizes and larger discrimination parameters for items. One purpose of this study is to present the patterns of Type I error rates using Mantel-Haenszel (MH) and logistic regression (LR) procedures when the mean ability between the…

  16. Minimum Time Trajectory Optimization of CNC Machining with Tracking Error Constraints

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2014-01-01

    Full Text Available An off-line approach for optimizing high-precision minimum-time feedrates in CNC machining is proposed. Besides the commonly considered velocity, acceleration, and jerk constraints, the dynamic performance constraint of each servo drive is also considered in this optimization problem, to improve the tracking precision along the optimized feedrate trajectory. Tracking error is used to indicate the servo dynamic performance of each axis. By using variable substitution, the tracking-error-constrained minimum time trajectory planning problem is formulated as a nonlinear path-constrained optimal control problem. The bang-bang structure of the constraints along the optimal trajectory is proved in this paper; a novel constraint handling method is then proposed to enable a convex-optimization-based solution of the nonlinear constrained optimal control problem. A simple ellipse feedrate planning test is presented to demonstrate the effectiveness of the approach. The practicability and robustness of the trajectory generated by the proposed approach are then demonstrated by a butterfly contour machining example.

  17. Chromatic error correction of diffractive optical elements at minimum etch depths

    Science.gov (United States)

    Barth, Jochen; Gühne, Tobias

    2014-09-01

    The integration of diffractive optical elements (DOE) into an optical design opens up new possibilities for applications in sensing and illumination. If the resulting optics is used in a larger spectral range we must correct not only the chromatic error of the conventional, refractive, part of the design but also of the DOE. We present a simple but effective strategy to select substrates which allow the minimum etch depths for the DOEs. The selection depends on both the refractive index and the dispersion.

  18. Optimization of Machining Parameters for Minimization of Roundness Error in Deep Hole Drilling using Minimum Quantity Lubricant

    Directory of Open Access Journals (Sweden)

    Kamaruzaman Anis Farhan

    2016-01-01

    Full Text Available This paper presents an experimental investigation of deep hole drilling using a CNC milling machine. The experiment investigates the effect of the machining parameters, namely spindle speed, feed rate, and depth of hole, on the roundness error when using minimum quantity lubricant. The experiment was designed as a two-level full factorial with four center points. Finally, the machining parameters were optimized to obtain the minimum value of the roundness error. The minimum roundness error for deep hole drilling is 0.0266, obtained at a spindle speed of 800 rpm, a feed rate of 60 mm/min, a depth of hole of 70 mm, and a minimum quantity lubricant flow of 30 ml/hr.

  19. Signal window minimum average error algorithm for multi-phase level computer-generated holograms

    Science.gov (United States)

    El Bouz, Marwa; Heggarty, Kevin

    2000-06-01

    This paper extends the article "Signal window minimum average error algorithm for computer-generated holograms" (JOSA A 1998) to multi-phase level CGHs. We show that using the same rule for calculating the complex error diffusion weights, iterative-algorithm-like low-error signal windows can be obtained for any window shape or position (on- or off-axis) and any number of CGH phase levels. Important algorithm parameters such as amplitude normalisation level and phase freedom diffusers are described and investigated to optimize the algorithm. We show that, combined with a suitable diffuser, the algorithm makes feasible the calculation of high performance CGHs far larger than currently practical with iterative algorithms yet now realisable with modern fabrication techniques. Preliminary experimental optical reconstructions are presented.

  20. Minimum Symbol Error Rate Detection in Single-Input Multiple-Output Channels with Markov Noise

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.

    2005-01-01

    Minimum symbol error rate detection in Single-Input Multiple-Output (SIMO) channels with Markov noise is presented. The special case of zero-mean Gauss-Markov noise is examined more closely, as it only requires knowledge of the second-order moments. In this special case, it is shown that optimal detection can be achieved by a Multiple-Input Multiple-Output (MIMO) whitening filter followed by a traditional BCJR algorithm. The Gauss-Markov noise model provides a reasonable approximation for co-channel interference, making it an interesting single-user detector for many multiuser communication systems...

  1. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition

    Directory of Open Access Journals (Sweden)

    Rong Wang

    2015-01-01

    Full Text Available In real-world applications, face images vary with illumination, facial expression, and pose. More training samples are therefore able to reveal more of the possible appearances of a face. Though minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generate mirror faces from the original training samples and put both kinds of samples into a new training set. The face recognition experiments show that our method obtains high classification accuracy.

  2. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    Science.gov (United States)

    Wang, Rong

    2015-01-01

    In real-world applications, face images vary with illumination, facial expression, and pose. More training samples are therefore able to reveal more of the possible appearances of a face. Though minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generate mirror faces from the original training samples and put both kinds of samples into a new training set. The face recognition experiments show that our method obtains high classification accuracy.
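    The mirror-face augmentation idea can be sketched with a ridge-regularized MSEC on synthetic stand-in data (image sizes, the regularizer, and the random "faces" are all illustrative assumptions, not the paper's datasets):

    ```python
    import numpy as np

    # MSEC maps a vectorized image to class scores via W, with W fit by
    # ridge-regularized least squares against one-hot labels. Mirror
    # (left-right flipped) images are added as virtual training samples.
    rng = np.random.default_rng(2)

    h, w_img, n_per_class, n_classes = 8, 8, 5, 3
    X_imgs = rng.standard_normal((n_classes * n_per_class, h, w_img))  # stand-in "faces"
    labels = np.repeat(np.arange(n_classes), n_per_class)

    # Virtual training samples: horizontal mirror of every original image.
    X_aug = np.concatenate([X_imgs, X_imgs[:, :, ::-1]])
    y_aug = np.concatenate([labels, labels])

    X = X_aug.reshape(len(X_aug), -1)                 # vectorize images
    Y = np.eye(n_classes)[y_aug]                      # one-hot targets

    lam = 1e-2                                        # ridge regularizer (illustrative)
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

    # Classify by the largest score; here evaluated on the original samples.
    pred = np.argmax(X_imgs.reshape(len(X_imgs), -1) @ W, axis=1)
    print((pred == labels).mean())
    ```

    The augmentation doubles the effective training set at essentially no cost, which is the point of the method when samples are scarce.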

  3. An Extended Result on the Optimal Estimation Under the Minimum Error Entropy Criterion

    Directory of Open Access Journals (Sweden)

    Badong Chen

    2014-04-01

    Full Text Available The minimum error entropy (MEE) criterion has been successfully used in fields such as parameter estimation, system identification and supervised machine learning. There is in general no explicit expression for the optimal MEE estimate unless some constraints on the conditional distribution are imposed. A recent paper has proved that if the conditional density is conditionally symmetric and unimodal (CSUM), then the optimal MEE estimate (with Shannon entropy) equals the conditional median. In this study, we extend this result to generalized MEE estimation, where the optimality criterion is the Renyi entropy or, equivalently, the α-order information potential (IP).

  4. Mean-square convergence analysis of ADALINE training with minimum error entropy criterion.

    Science.gov (United States)

    Chen, Badong; Zhu, Yu; Hu, Jinchun

    2010-07-01

    Recently, the minimum error entropy (MEE) criterion has been used as an information theoretic alternative to traditional mean-square error criterion in supervised learning systems. MEE yields nonquadratic, nonconvex performance surface even for adaptive linear neuron (ADALINE) training, which complicates the theoretical analysis of the method. In this paper, we develop a unified approach for mean-square convergence analysis for ADALINE training under MEE criterion. The weight update equation is formulated in the form of block-data. Based on a block version of energy conservation relation, and under several assumptions, we carry out the mean-square convergence analysis of this class of adaptation algorithm, including mean-square stability, mean-square evolution (transient behavior) and the mean-square steady-state performance. Simulation experimental results agree with the theoretical predictions very well.
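    The block-data MEE adaptation analyzed above can be sketched using the quadratic information potential with a Gaussian kernel (the kernel width, step size, and block size are illustrative choices; the well-known mean-blindness of MEE is side-stepped here by omitting a bias term):

    ```python
    import numpy as np

    # Each step ascends the quadratic information potential of the block errors,
    #   V(e) = (1/N^2) * sum_ij G_sigma(e_i - e_j),
    # which is equivalent to descending Renyi's quadratic error entropy.
    rng = np.random.default_rng(3)

    w_true = np.array([1.0, -2.0])            # unknown ADALINE weights (illustrative)
    X = rng.standard_normal((4000, 2))
    d = X @ w_true + 0.1 * rng.standard_normal(4000)

    w = np.zeros(2)
    sigma, mu, block = 1.0, 0.5, 50
    for start in range(0, len(X), block):
        Xb, db = X[start:start + block], d[start:start + block]
        e = db - Xb @ w
        diff = e[:, None] - e[None, :]                 # pairwise e_i - e_j
        G = np.exp(-diff**2 / (2 * sigma**2))          # Gaussian kernel values
        # dV/dw = (1/(N^2 sigma^2)) * sum_ij G_ij (e_i - e_j)(x_i - x_j)
        grad = (G * diff)[:, :, None] * (Xb[:, None, :] - Xb[None, :, :])
        w += mu * grad.mean(axis=(0, 1)) / sigma**2    # gradient ascent on V
    print(np.round(w, 1))
    ```

    Near convergence the kernel values approach 1 and the update reduces to an LMS-like rule on centered data, which is why the mean-square analysis techniques discussed in the abstract apply.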

  5. NEW APPROACH FOR RELIABILITY-BASED DESIGN OPTIMIZATION: MINIMUM ERROR POINT

    Institute of Scientific and Technical Information of China (English)

    LIU Deshun; YUE Wenhui; ZHU Pingyu; DU Xiaoping

    2006-01-01

    Conventional reliability-based design optimization (RBDO) requires the most probable point (MPP) method for the probabilistic analysis of the reliability constraints. A new approach is presented, called the minimum error point (MEP) method or the MEP based method, for reliability-based design optimization, whose idea is to minimize the error produced by approximating the performance functions. The MEP based method uses the first order Taylor expansion at the MEP instead of the MPP. Examples demonstrate that MEP based design optimization can ensure product reliability at the required level, which is imperative for many important engineering systems. The MEP based reliability design optimization method is feasible and can be considered an alternative for solving reliability design optimization problems. The MEP based method is more robust than the commonly used MPP based method for some irregular performance functions.

  6. Data-aided efficient synchronization for UWB signals based on minimum average error probability

    Institute of Scientific and Technical Information of China (English)

    SUN Qiang; LÜ Tie-jun

    2008-01-01

    One of the biggest challenges in ultra-wideband (UWB) radio is accurate timing acquisition at the receiver. In this article, we develop a novel data-aided synchronization algorithm for pulse amplitude modulation (PAM) UWB systems. Pilot and information symbols are transmitted simultaneously by an orthogonal code division multiplexing (OCDM) scheme. In the receiver, an algorithm based on the minimum average error probability (MAEP) of the coherent detector is applied to estimate the timing offset. The multipath interference (MI) problem for timing offset estimation is considered. The mean-square-error (MSE) and bit-error-rate (BER) performances of our proposed scheme are simulated. The results show that our algorithm outperforms the algorithm based on the maximum correlator output (MCO) in multipath channels.

  7. Minimum-norm cortical source estimation in layered head models is robust against skull conductivity error.

    Science.gov (United States)

    Stenroos, Matti; Hauk, Olaf

    2013-11-01

    The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG+EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG+EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG+EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only.
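    The regularized minimum-norm estimator at the core of this study can be sketched as follows (the lead field here is random rather than a boundary-element forward model, and all sizes and the regularizer are illustrative):

    ```python
    import numpy as np

    # Given a lead field L mapping distributed source amplitudes to sensor
    # data y, the Tikhonov-regularized minimum-norm (MN) estimate is
    #   s_hat = L^T (L L^T + lambda I)^-1 y.
    rng = np.random.default_rng(4)

    n_sensors, n_sources, lam = 64, 200, 1e-2
    L = rng.standard_normal((n_sensors, n_sources)) / np.sqrt(n_sources)

    s = np.zeros(n_sources)
    s[17] = 1.0                                   # a single active source
    y = L @ s + 0.01 * rng.standard_normal(n_sensors)

    W = L.T @ np.linalg.inv(L @ L.T + lam * np.eye(n_sensors))   # MN spatial filter
    s_hat = W @ y

    print(int(np.argmax(np.abs(s_hat))))          # peak localizes the active source
    ```

    The robustness result in the abstract corresponds to perturbing `L` (as a skull-conductivity error would) and observing that the peak location of `s_hat` moves little even though its amplitude changes.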

  8. Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Zhigao Zeng

    2016-01-01

    Full Text Available This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design class feature matrices, after extracting image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroid classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate, with the added benefit of robustness to noise.

  9. On the Minimum Error Correction Problem for Haplotype Assembly in Diploid and Polyploid Genomes.

    Science.gov (United States)

    Bonizzoni, Paola; Dondi, Riccardo; Klau, Gunnar W; Pirola, Yuri; Pisanti, Nadia; Zaccaria, Simone

    2016-09-01

    In diploid genomes, haplotype assembly is the computational problem of reconstructing the two parental copies, called haplotypes, of each chromosome starting from sequencing reads, called fragments, possibly affected by sequencing errors. Minimum error correction (MEC) is a prominent computational problem for haplotype assembly and, given a set of fragments, aims at reconstructing the two haplotypes by applying the minimum number of base corrections. MEC is computationally hard to solve, but some approximation-based or fixed-parameter approaches have been proved capable of obtaining accurate results on real data. In this work, we expand the current characterization of the computational complexity of MEC from the approximation and the fixed-parameter tractability point of view. In particular, we show that MEC is not approximable within a constant factor, whereas it is approximable within a logarithmic factor in the size of the input. Furthermore, we answer open questions on the fixed-parameter tractability for parameters of classical or practical interest: the total number of corrections and the fragment length. In addition, we present a direct 2-approximation algorithm for a variant of the problem that has also been applied in the framework of clustering data. Finally, since polyploid genomes, such as those of plants and fishes, are composed of more than two copies of the chromosomes, we introduce a novel formulation of MEC, namely the k-ploid MEC problem, that extends the traditional problem to deal with polyploid genomes. We show that the novel formulation is still both computationally hard and hard to approximate. Nonetheless, from the parameterized point of view, we prove that the problem is tractable for parameters of practical interest such as the number of haplotypes and the coverage, or the number of haplotypes and the fragment length.
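    For tiny instances, the diploid MEC objective described above can be evaluated exhaustively; a sketch (the fragment instance is hypothetical, and this brute force is only feasible for a handful of fragments, unlike the approximation and fixed-parameter algorithms the paper studies):

    ```python
    from itertools import product

    # Fragments are rows over SNP positions with entries 0/1, or None where the
    # fragment does not cover the position. Splitting the fragments into two
    # haplotype groups, each column of a group costs min(#0s, #1s) corrections;
    # MEC asks for the split of minimum total cost.
    def mec_cost(fragments):
        n, m = len(fragments), len(fragments[0])
        best = float("inf")
        for tail in product([0, 1], repeat=n - 1):
            assign = (0,) + tail                # fix fragment 0's group (symmetry)
            cost = 0
            for g in (0, 1):
                for j in range(m):
                    col = [f[j] for f, a in zip(fragments, assign)
                           if a == g and f[j] is not None]
                    cost += min(col.count(0), col.count(1))
            best = min(best, cost)
        return best

    # Four fragments over three SNPs: two consistent haplotypes 010 / 101,
    # plus one sequencing error in the last fragment.
    frags = [(0, 1, 0), (0, 1, None), (1, 0, 1), (1, 0, 0)]
    print(mec_cost(frags))   # → 1
    ```

    The 2^(n-1) partitions enumerated here are exactly what makes MEC hard in general; the k-ploid formulation introduced in the paper enlarges this search space to k-way partitions.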

  10. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  11. On Kolmogorov Asymptotics of Estimators of the Misclassification Error Rate in Linear Discriminant Analysis.

    Science.gov (United States)

    Zollanvari, Amin; Genton, Marc G

    2013-08-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  12. Minimum probability of error recognition of three-dimensional laser-scanned targets

    Science.gov (United States)

    DeVore, Michael D.; Zhou, Xin

    2006-05-01

    Shape measurements form powerful features for recognizing objects, and many imaging modalities produce three-dimensional shape information. Stereo-photogrammetric techniques have been extensively developed, and many researchers have looked at related techniques such as shape from motion, shape from accommodation, and shape from shading. Recently, considerable attention has focused on laser radar systems for imaging distant objects, such as automobiles from an airborne platform, and on laser-based active stereo imaging for close-range objects, such as part scanners for automated inspection. Each use of these laser imagers generally results in a range image, an array of distance measurements as a function of direction. For multi-look data or data fused from multiple sensors, we may more generally treat the data as a 3D point-cloud, an unordered collection of 3D points measured from the surface of the scene. This paper presents a general approach to object recognition in the presence of significant clutter, that is suitable for application to a wide range of 3D imaging systems. The approach relies on a probabilistic framework relating 3D point-cloud data and the objects from which they are measured. Through this framework a minimum probability of error recognition algorithm is derived that accounts for both obscuring and nonobscuring clutter, and that accommodates arbitrary (range and cross-range) measurement errors. The algorithm is applied to a problem of target recognition from actual 3D point-cloud data measured in the laboratory from scale models of civilian automobiles. Noisy 3D measurements are used to train models of the automobiles, and these models are used to classify the automobiles when present in a scene containing natural and man-made clutter.

  13. Improved Estimation of Subsurface Magnetic Properties using Minimum Mean-Square Error Methods

    Energy Technology Data Exchange (ETDEWEB)

    Saether, Bjoern

    1997-12-31

    This thesis proposes an inversion method for the interpretation of complicated geological susceptibility models. The method is based on constrained Minimum Mean-Square Error (MMSE) estimation. The MMSE method allows the incorporation of available prior information, i.e., the geometries of the rock bodies and their susceptibilities. Uncertainties may be included into the estimation process. The computation exploits the subtle information inherent in magnetic data sets in an optimal way in order to tune the initial susceptibility model. The MMSE method includes a statistical framework that allows the computation not only of the estimated susceptibilities, given by the magnetic measurements, but also of the associated reliabilities of these estimations. This allows the evaluation of the reliabilities in the estimates before any measurements are made, an option, which can be useful for survey planning. The MMSE method has been tested on a synthetic data set in order to compare the effects of various prior information. When more information is given as input to the estimation, the estimated models come closer to the true model, and the reliabilities in their estimates are increased. In addition, the method was evaluated using a real geological model from a North Sea oil field, based on seismic data and well information, including susceptibilities. Given that the geometrical model is correct, the observed mismatch between the forward calculated magnetic anomalies and the measured anomalies causes changes in the susceptibility model, which may show features of interesting geological significance to the explorationists. Such magnetic anomalies may be due to small fractures and faults not detectable on seismic, or local geochemical changes due to the upward migration of water or hydrocarbons. 76 refs., 42 figs., 18 tabs.

  14. Discriminating between antihydrogen and mirror-trapped antiprotons in a minimum-B trap

    CERN Document Server

    Amole, C; Ashkezari, M D; Baquero-Ruiz, M; Bertsche, W; Butler, E; Cesar, C L; Chapman, S; Charlton, M; Deller, A; Eriksson, S; Fajans, J; Friesen, T; Fujiwara, M C; Gill, D R; Gutierrez, A; Hangst, J S; Hardy, W N; Hayden, M E; Humphries, A J; Hydomako, R; Kurchaninov, L; Jonsell, S; Madsen, N; Menary, S; Nolan, P; Olchanski, K; Olin, A; Povilus, A; Pusa, P; Robicheaux, F; Sarid, E; Silveira, D M; So, C; Storey, J W; Thompson, R I; van der Werf, D P; Wurtele, J S

    2012-01-01

    Recently, antihydrogen atoms were trapped at CERN in a magnetic minimum (minimum-B) trap formed by superconducting octupole and mirror magnet coils. The trapped antiatoms were detected by rapidly turning off these magnets, thereby eliminating the magnetic minimum and releasing any antiatoms contained in the trap. Once released, these antiatoms quickly hit the trap wall, whereupon the positrons and antiprotons in the antiatoms annihilated. The antiproton annihilations produce easily detected signals; we used these signals to prove that we trapped antihydrogen. However, our technique could be confounded by mirror-trapped antiprotons, which would produce seemingly-identical annihilation signals upon hitting the trap wall. In this paper, we discuss possible sources of mirror-trapped antiprotons and show that antihydrogen and antiprotons can be readily distinguished, often with the aid of applied electric fields, by analyzing the annihilation locations and times. We further discuss the general properties of antipr...

  15. Discriminating between antihydrogen and mirror-trapped antiprotons in a minimum-B trap

    Science.gov (United States)

    Amole, C.; Andresen, G. B.; Ashkezari, M. D.; Baquero-Ruiz, M.; Bertsche, W.; Butler, E.; Cesar, C. L.; Chapman, S.; Charlton, M.; Deller, A.; Eriksson, S.; Fajans, J.; Friesen, T.; Fujiwara, M. C.; Gill, D. R.; Gutierrez, A.; Hangst, J. S.; Hardy, W. N.; Hayden, M. E.; Humphries, A. J.; Hydomako, R.; Kurchaninov, L.; Jonsell, S.; Madsen, N.; Menary, S.; Nolan, P.; Olchanski, K.; Olin, A.; Povilus, A.; Pusa, P.; Robicheaux, F.; Sarid, E.; Silveira, D. M.; So, C.; Storey, J. W.; Thompson, R. I.; van der Werf, D. P.; Wurtele, J. S.

    2012-01-01

    Recently, antihydrogen atoms were trapped at CERN in a magnetic minimum (minimum-B) trap formed by superconducting octupole and mirror magnet coils. The trapped antiatoms were detected by rapidly turning off these magnets, thereby eliminating the magnetic minimum and releasing any antiatoms contained in the trap. Once released, these antiatoms quickly hit the trap wall, whereupon the positrons and antiprotons in the antiatoms annihilate. The antiproton annihilations produce easily detected signals; we used these signals to prove that we trapped antihydrogen. However, our technique could be confounded by mirror-trapped antiprotons, which would produce seemingly identical annihilation signals upon hitting the trap wall. In this paper, we discuss possible sources of mirror-trapped antiprotons and show that antihydrogen and antiprotons can be readily distinguished, often with the aid of applied electric fields, by analyzing the annihilation locations and times. We further discuss the general properties of antiproton and antihydrogen trajectories in this magnetic geometry, and reconstruct the antihydrogen energy distribution from the measured annihilation time history.

  16. Complex linear minimum mean-squared-error equalization of spatially quadrature-amplitude-modulated signals in holographic data storage

    Science.gov (United States)

    Sato, Takanori; Kanno, Kazutaka; Bunsen, Masatoshi

    2016-09-01

    We applied complex linear minimum mean-squared-error equalization to spatially quadrature-amplitude-modulated signals in holographic data storage (HDS). The equalization technique can reduce the dispersion in constellation outputs caused by intersymbol interference. We confirm the effectiveness of the technique in numerical simulations and basic optical experiments. Our numerical results show that intersymbol interference in a signal retrieved from an HDS system can be reduced by the equalization technique. In our experiments, the mean squared error (MSE), which quantifies the deviation from an ideal signal, was used to evaluate the dispersion of the equalized signals. Our equalization technique improved the MSE; however, symbols in the equalized signal remained inseparable. To further improve the MSE and make the symbols separable, reducing errors in repeated measurements is our future task.
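A minimal numpy sketch of a complex linear MMSE equalizer of the kind named in this record (illustrative only: a generic linear channel y = Hx + n with 4-QAM symbols is assumed, not the paper's holographic channel; the filter W = (H^H H + sigma^2 I)^{-1} H^H assumes unit-power symbols and white noise):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy complex channel: y = H x + n, with x drawn from a 4-QAM alphabet.
n_sym, n_obs = 4, 6
H = (rng.standard_normal((n_obs, n_sym))
     + 1j * rng.standard_normal((n_obs, n_sym))) / np.sqrt(2)
alphabet = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = rng.choice(alphabet, size=n_sym)
sigma2 = 0.01
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_obs)
                           + 1j * rng.standard_normal(n_obs))
y = H @ x + n

# Complex linear MMSE equalizer: W = (H^H H + sigma^2 I)^{-1} H^H.
W = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(n_sym), H.conj().T)
x_hat = W @ y

# Hard decision: nearest constellation point per symbol.
decisions = alphabet[np.argmin(np.abs(x_hat[:, None] - alphabet[None, :]), axis=1)]
print(np.mean(np.abs(x_hat - x) ** 2))
```

The residual MSE printed at the end plays the same role as the MSE figure of merit used in the record's experiments.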

  17. Estimation of the minimum mRNA splicing error rate in vertebrates.

    Science.gov (United States)

    Skandalis, A

    2016-01-01

    The majority of protein coding genes in vertebrates contain several introns that are removed by the mRNA splicing machinery. Errors during splicing can generate aberrant transcripts and degrade the transmission of genetic information thus contributing to genomic instability and disease. However, estimating the error rate of constitutive splicing is complicated by the process of alternative splicing which can generate multiple alternative transcripts per locus and is particularly active in humans. In order to estimate the error frequency of constitutive mRNA splicing and avoid bias by alternative splicing we have characterized the frequency of splice variants at three loci, HPRT, POLB, and TRPV1 in multiple tissues of six vertebrate species. Our analysis revealed that the frequency of splice variants varied widely among loci, tissues, and species. However, the lowest observed frequency is quite constant among loci and approximately 0.1% aberrant transcripts per intron. Arguably this reflects the "irreducible" error rate of splicing, which consists primarily of the combination of replication errors by RNA polymerase II in splice consensus sequences and spliceosome errors in correctly pairing exons.

  18. Optimal STBC Precoding with Channel Covariance Feedback for Minimum Error Probability

    Directory of Open Access Journals (Sweden)

    Zhao Yi

    2004-01-01

    Full Text Available This paper develops the optimal linear transformation (or precoding) of orthogonal space-time block codes (STBC) for minimizing the probability of decoding error, when the channel covariance matrix is available at the transmitter. We build on recent work that stated the performance criterion without solving for the transformation. In this paper, we provide a water-filling solution for multi-input single-output (MISO) systems, and present a numerical solution for multi-input multi-output (MIMO) systems. Our results confirm that, in terms of error probability, eigen-beamforming is optimal at low SNR or for highly correlated channels, and full diversity is optimal at high SNR or for weakly correlated channels. This conclusion is similar to one reached recently from the capacity-achieving viewpoint.
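The water-filling behavior described in this record (beamforming at low SNR, full diversity at high SNR) can be illustrated with the textbook water-filling rule p_i = max(0, mu - 1/g_i) over channel eigenvalues; this is a generic sketch, not the paper's exact STBC precoder:

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-12):
    """Allocate power p_i = max(0, mu - 1/g_i) with sum p_i = total_power.

    `gains` are effective per-mode channel eigenvalues; the water level
    mu is found by bisection.
    """
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + np.max(1.0 / gains)
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / gains).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)

# Strongly unequal eigenvalues: low power -> one active mode (beamforming);
# high power -> all modes active (full diversity).
p_low = water_filling([2.0, 0.2, 0.1], total_power=0.1)
p_high = water_filling([2.0, 0.2, 0.1], total_power=100.0)
print(np.count_nonzero(p_low > 1e-9), np.count_nonzero(p_high > 1e-9))  # 1 3
```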

  19. Linear adaptive noise-reduction filters for tomographic imaging: Optimizing for minimum mean square error

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Winston Y. [Univ. of California, Berkeley, CA (United States)

    1993-04-01

    This thesis solves the problem of finding the optimal linear noise-reduction filter for linear tomographic image reconstruction. The optimization is data dependent and results in minimizing the mean-square error of the reconstructed image. The error is defined as the difference between the result and the best possible reconstruction. Applications for the optimal filter include reconstructions of positron emission tomographic (PET), X-ray computed tomographic, single-photon emission tomographic, and nuclear magnetic resonance imaging. Using high resolution PET as an example, the optimal filter is derived and presented for the convolution backprojection, Moore-Penrose pseudoinverse, and the natural-pixel basis set reconstruction methods. Simulations and experimental results are presented for the convolution backprojection method.

  20. Minimum Mean-Squared Error Iterative Successive Parallel Arbitrated Decision Feedback Detectors for DS-CDMA Systems

    CERN Document Server

    de Lamare, Rodrigo C

    2012-01-01

    In this paper we propose minimum mean squared error (MMSE) iterative successive parallel arbitrated decision feedback (DF) receivers for direct sequence code division multiple access (DS-CDMA) systems. We describe the MMSE design criterion for DF multiuser detectors along with successive, parallel and iterative interference cancellation structures. A novel efficient DF structure that employs successive cancellation with parallel arbitrated branches and a near-optimal low complexity user ordering algorithm are presented. The proposed DF receiver structure and the ordering algorithm are then combined with iterative cascaded DF stages for mitigating the deleterious effects of error propagation for convolutionally encoded systems with both Viterbi and turbo decoding as well as for uncoded schemes. We mathematically study the relations between the MMSE achieved by the analyzed DF structures, including the novel scheme, with imperfect and perfect feedback. Simulation results for an uplink scenario assess the new it...

  1. Fast converging minimum probability of error neural network receivers for DS-CDMA communications.

    Science.gov (United States)

    Matyjas, John D; Psaromiligkos, Ioannis N; Batalama, Stella N; Medley, Michael J

    2004-03-01

    We consider a multilayer perceptron neural network (NN) receiver architecture for the recovery of the information bits of a direct-sequence code-division-multiple-access (DS-CDMA) user. We develop a fast converging adaptive training algorithm that minimizes the bit-error rate (BER) at the output of the receiver. The adaptive algorithm has three key features: i) it incorporates the BER, i.e., the ultimate performance evaluation measure, directly into the learning process, ii) it utilizes constraints that are derived from the properties of the optimum single-user decision boundary for additive white Gaussian noise (AWGN) multiple-access channels, and iii) it embeds importance sampling (IS) principles directly into the receiver optimization process. Simulation studies illustrate the BER performance of the proposed scheme.

  2. A minimum-error, energy-constrained neural code is an instantaneous-rate code.

    Science.gov (United States)

    Johnson, Erik C; Jones, Douglas L; Ratnam, Rama

    2016-04-01

    Sensory neurons code information about stimuli in their sequence of action potentials (spikes). Intuitively, the spikes should represent stimuli with high fidelity. However, generating and propagating spikes is a metabolically expensive process. It is therefore likely that neural codes have been selected to balance energy expenditure against encoding error. Our recently proposed optimal, energy-constrained neural coder (Jones et al. Frontiers in Computational Neuroscience, 9, 61 2015) postulates that neurons time spikes to minimize the trade-off between stimulus reconstruction error and expended energy by adjusting the spike threshold using a simple dynamic threshold. Here, we show that this proposed coding scheme is related to existing coding schemes, such as rate and temporal codes. We derive an instantaneous rate coder and show that the spike-rate depends on the signal and its derivative. In the limit of high spike rates the spike train maximizes fidelity given an energy constraint (average spike-rate), and the predicted interspike intervals are identical to those generated by our existing optimal coding neuron. The instantaneous rate coder is shown to closely match the spike-rates recorded from P-type primary afferents in weakly electric fish. In particular, the coder is a predictor of the peristimulus time histogram (PSTH). When tested against in vitro cortical pyramidal neuron recordings, the instantaneous spike-rate approximates DC step inputs, matching both the average spike-rate and the time-to-first-spike (a simple temporal code). Overall, the instantaneous rate coder relates optimal, energy-constrained encoding to the concepts of rate-coding and temporal-coding, suggesting a possible unifying principle of neural encoding of sensory signals.

  3. Photon-assisted entanglement creation by minimum-error generalized quantum measurements in the strong coupling regime

    CERN Document Server

    Bernád, J Z

    2012-01-01

    In generalization of the hybrid quantum repeater model of van Loock et al., we explore possibilities of entangling two distant material qubits with the help of a single-mode optical radiation field in the strong quantum electrodynamical coupling regime of almost resonant interaction. The optimal generalized field measurements are determined which are capable of preparing a two-qubit Bell state by postselection with minimum error. It is demonstrated that in the strong coupling regime some of the recently found limitations of the non-resonant weak coupling regime can be circumvented successfully due to characteristic quantum electrodynamical quantum interference effects. In particular, in the absence of photon loss it is possible to postselect two-qubit Bell states with fidelities close to unity by a proper choice of the relevant interaction time. Even in the presence of photon loss this strong coupling regime offers interesting perspectives for creating spatially well separated Bell pairs with...

  4. Discrimination

    National Research Council Canada - National Science Library

    Midtbøen, Arnfinn H; Rogstad, Jon

    2012-01-01

    ... of discrimination in the labour market as well as to the mechanisms involved in discriminatory hiring practices. The design has several advantages compared to 'single-method' approaches and provides a more substantial understanding of the processes leading to ethnic inequality in the labour market.

  5. Discriminative learning for speech recognition

    CERN Document Server

    He, Xiaodong

    2008-01-01

    In this book, we introduce the background and mainstream methods of probabilistic modeling and discriminative parameter optimization for speech recognition. The specific models treated in depth include the widely used exponential-family distributions and the hidden Markov model. A detailed study is presented on unifying the common objective functions for discriminative learning in speech recognition, namely maximum mutual information (MMI), minimum classification error, and minimum phone/word error. The unification is presented, with rigorous mathematical analysis, in a common rational-functio
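The minimum classification error (MCE) criterion named in this record is typically built from a smoothed misclassification measure; a schematic numpy sketch of the common textbook form (the eta-softmax competitor score and sigmoid smoothing below are generic illustrations with hypothetical class scores, not the book's full training procedure):

```python
import numpy as np

def mce_loss(scores, label, eta=2.0, gamma=1.0):
    """Smoothed minimum-classification-error loss for one sample.

    scores: one discriminant score per class (e.g. log-likelihoods).
    d > 0 roughly means 'misclassified'; the sigmoid smooths the 0/1 error
    so the loss is differentiable for gradient-based optimization.
    """
    scores = np.asarray(scores, dtype=float)
    correct = scores[label]
    others = np.delete(scores, label)
    # Soft-max of the competing scores (eta -> infinity recovers the hard max).
    competitor = np.log(np.mean(np.exp(eta * others))) / eta
    d = -correct + competitor                 # misclassification measure
    return 1.0 / (1.0 + np.exp(-gamma * d))   # smoothed error in (0, 1)

print(mce_loss([5.0, 1.0, 0.0], label=0))  # small: correct class wins
print(mce_loss([1.0, 5.0, 0.0], label=0))  # large: a competitor wins
```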

  6. Mining discriminative class codes for multi-class classification based on minimizing generalization errors

    Science.gov (United States)

    Eiadon, Mongkon; Pipanmaekaporn, Luepol; Kamonsantiroj, Suwatchai

    2016-07-01

    Error Correcting Output Code (ECOC) has emerged as one of the promising techniques for solving multi-class classification. In the ECOC framework, a multi-class problem is decomposed into several binary ones with a coding design scheme. Despite this, finding a suitable multi-class decomposition scheme is still an open research question in machine learning. In this work, we propose a novel multi-class coding design method to mine effective and compact class codes for multi-class classification. For a given n-class problem, this method decomposes the classes into subsets by embedding a binary-tree structure. We put forward a novel splitting criterion based on minimizing generalization errors across the classes. Then, a greedy search procedure is applied to explore the optimal tree structure for the problem domain. We run experiments on many multi-class UCI datasets. The experimental results show that our proposed method can achieve better classification performance than the common ECOC design methods.
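A minimal, self-contained illustration of the ECOC framework this record builds on: a fixed hand-written code matrix, a trivial nearest-centroid binary learner per column, and minimum-Hamming-distance decoding on synthetic Gaussian blobs (the paper's tree-based code mining is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 4-class data: Gaussian blobs around distinct centers.
centers = np.array([[0, 0], [4, 0], [0, 4], [4, 4]], dtype=float)
X = np.vstack([c + 0.3 * rng.standard_normal((50, 2)) for c in centers])
y = np.repeat(np.arange(4), 50)

# Code matrix: one +/-1 codeword per class over 4 binary dichotomies
# (two axis splits and two one-vs-rest columns).
code = np.array([
    [+1, +1, +1, -1],   # class 0
    [+1, -1, -1, -1],   # class 1
    [-1, +1, -1, -1],   # class 2
    [-1, -1, -1, +1],   # class 3
])

def fit_column(X, y, col):
    """Nearest-centroid rule between the +1 and -1 superclasses of a column."""
    pos = X[code[y, col] == +1].mean(axis=0)
    neg = X[code[y, col] == -1].mean(axis=0)
    return pos, neg

models = [fit_column(X, y, j) for j in range(code.shape[1])]

def predict(x):
    bits = np.array([+1 if np.linalg.norm(x - pos) < np.linalg.norm(x - neg)
                     else -1 for pos, neg in models])
    # Decode by minimum Hamming distance to the class codewords.
    return int(np.argmin((code != bits).sum(axis=1)))

acc = np.mean([predict(x) == label for x, label in zip(X, y)])
print(acc)
```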

  7. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    Science.gov (United States)

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools are discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It is shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate, or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experimental results, which the authors believe is critical to research progress in theory development and cumulative knowledge in the ergonomics field.
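The attenuation of correlations mentioned in this record follows the classical formula r_observed = r_true * sqrt(rel_x * rel_y), where rel_x and rel_y are the reliabilities of the two measures; a quick simulation sketch (synthetic data with assumed reliabilities):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# True scores with correlation ~0.6; measurement adds independent noise.
true_r = 0.6
x_true = rng.standard_normal(n)
y_true = true_r * x_true + np.sqrt(1 - true_r**2) * rng.standard_normal(n)

rel_x, rel_y = 0.7, 0.8          # reliabilities (true-score variance fractions)
x_obs = x_true + np.sqrt(1 / rel_x - 1) * rng.standard_normal(n)
y_obs = y_true + np.sqrt(1 / rel_y - 1) * rng.standard_normal(n)

r_obs = np.corrcoef(x_obs, y_obs)[0, 1]
# Classical attenuation formula: r_obs ~= r_true * sqrt(rel_x * rel_y).
print(r_obs, true_r * np.sqrt(rel_x * rel_y))
```

Both printed values land near 0.45, well below the true correlation of 0.6, illustrating how unreliable measures depress observed correlations.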

  8. Error discrimination of an operational hydrological forecasting system at a national scale

    Science.gov (United States)

    Jordan, F.; Brauchli, T.

    2010-09-01

    The use of operational hydrological forecasting systems is recommended for hydropower production as well as flood management. However, the forecast uncertainties can be large and lead to poor decisions such as false alarms and inappropriate reservoir management of hydropower plants. In order to improve forecasting systems, it is important to discriminate the different sources of uncertainty. To achieve this, reanalyses of past predictions can be performed to provide information about the structure of the global uncertainty. In order to discriminate between uncertainty due to the numerical weather model and uncertainty due to the rainfall-runoff model, simulations assuming a perfect weather forecast must be performed. This contribution presents a spatial analysis of the weather uncertainties and their influence on the river discharge predictions of several river basins where an operational forecasting system exists. The forecast is based on the RS 3.0 system [1], [2], which also runs the open Internet platform www.swissrivers.ch [3]. The uncertainty related to the hydrological model is compared to the uncertainty related to the weather prediction. A comparison between numerous weather prediction models [4] at different lead times is also presented. The results highlight significant potential for improving both forecasting components: the hydrological rainfall-runoff model and the numerical weather prediction models. The hydrological processes must be accurately represented during the model calibration procedure, while the weather prediction models suffer from a systematic spatial bias. REFERENCES [1] Garcia, J., Jordan, F., Dubois, J. & Boillat, J.-L. 2007. "Routing System II, Modélisation d'écoulements dans des systèmes hydrauliques", Communication LCH n° 32, Ed. Prof. A. Schleiss, Lausanne [2] Jordan, F. 2007. Modèle de prévision et de gestion des crues - optimisation des opérations des aménagements hydroélectriques à accumulation

  9. Techniques for avoiding discrimination errors in the dynamic sampling of condensable vapors

    Science.gov (United States)

    Lincoln, K. A.

    1983-01-01

    In the mass spectrometric sampling of dynamic systems, measurements of the relative concentrations of condensable and noncondensable vapors can be significantly distorted if some subtle, but important, instrumental factors are overlooked. Even with in situ measurements, the condensables are readily lost to the container walls, and the noncondensables can persist within the vacuum chamber and yield a disproportionately high output signal. Where single pulses of vapor are sampled, this source of error is avoided by gating either the mass spectrometer "on" or the data acquisition instrumentation "on" only during the very brief time window when the initial vapor cloud emanating directly from the vapor source passes through the ionizer. Instrumentation for these techniques is detailed and its effectiveness is demonstrated by comparing gated and nongated spectra obtained from the pulsed-laser vaporization of several materials.

  10. Errorless Establishment of a Match-to-Sample Form Discrimination in Preschool Children. I. A Modification of Animal Laboratory Procedures for Children, II. A Comparison of Errorless and Trial-and-Error Discrimination. Progress Report.

    Science.gov (United States)

    LeBlanc, Judith M.

    A sequence of studies compared two types of discrimination formation: errorless learning and trial-and-error procedures. The subjects were three boys and five girls from a university preschool. The children performed the experimental tasks at a typical match-to-sample apparatus with one sample window above and four match (response) windows below.…

  11. Estimation of the Coefficient of Variation with Minimum Risk: A Sequential Method for Minimizing Sampling Error and Study Cost.

    Science.gov (United States)

    Chattopadhyay, Bhargab; Kelley, Ken

    2016-01-01

    The coefficient of variation is an effect size measure with many potential uses in psychology and related disciplines. We propose a general theory for sequential estimation of the population coefficient of variation that considers both the sampling error and the study cost, importantly without specific distributional assumptions. Fixed sample size planning methods, commonly used in psychology and related fields, cannot simultaneously minimize both the sampling error and the study cost. The sequential procedure we develop is the first sequential sampling procedure developed for estimating the coefficient of variation. We first present a method of planning a pilot sample size after the research goals are specified by the researcher. Then, after collecting a sample as large as the estimated pilot sample size, a check is performed to assess whether the conditions necessary to stop the data collection have been satisfied. If not, an additional observation is collected and the check is performed again. This process continues, sequentially, until a stopping rule involving a risk function is satisfied. Our method ensures that the sampling error and the study cost are considered simultaneously so that the cost is not higher than necessary for the tolerable sampling error. We also demonstrate a variety of properties of the distribution of the final sample size for five different distributions under a variety of conditions with a Monte Carlo simulation study. In addition, we provide freely available functions via the MBESS package in R to implement the methods discussed.
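A schematic sketch of a sequential stopping rule of this general flavor: collect a pilot sample, then add one observation at a time until a precision target is met. The normal-theory standard-error approximation and the fixed precision target below are illustrative assumptions, not the authors' risk function:

```python
import numpy as np

def sequential_cv(draw, pilot_n=30, precision=0.02, max_n=100_000):
    """Collect observations one at a time until the estimated standard
    error of the sample coefficient of variation falls below `precision`.

    `draw(k)` returns k new observations. The large-sample standard error
    cv * sqrt(1/(2n) + cv^2/n) is a normal-theory approximation, used here
    purely for illustration.
    """
    data = list(draw(pilot_n))
    while True:
        x = np.asarray(data)
        cv = x.std(ddof=1) / x.mean()
        se = cv * np.sqrt(1 / (2 * len(x)) + cv**2 / len(x))
        if se < precision or len(x) >= max_n:
            return cv, len(x)
        data.extend(draw(1))   # one more observation, then re-check

rng = np.random.default_rng(7)
cv_hat, n_used = sequential_cv(lambda k: rng.normal(10.0, 2.0, size=k))
print(cv_hat, n_used)   # estimate near the true cv of 0.2
```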

  12. Demonstration of Near-Optimal Discrimination of Optical Coherent States

    DEFF Research Database (Denmark)

    Wittmann, Christoffer; Takeoka, Masahiro; Cassemiro, Katiuscia N

    2008-01-01

    The optimal discrimination of nonorthogonal quantum states with minimum error probability is a fundamental task in quantum measurement theory as well as an important primitive in optical communication. In this work, we propose and experimentally realize a new and simple quantum measurement strategy capable of discriminating two coherent states with smaller error probabilities than can be obtained using the standard measurement devices: the Kennedy receiver and the homodyne receiver.
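For context, the equal-prior error probabilities of the receivers compared in this record have standard closed forms: the Helstrom bound, the ideal (exact-nulling) Kennedy displace-and-count receiver, and shot-noise-limited homodyne detection. A small sketch with hypothetical amplitudes:

```python
import math

def helstrom(alpha):
    """Optimal minimum-error bound for |+alpha> vs |-alpha>, equal priors."""
    return 0.5 * (1.0 - math.sqrt(1.0 - math.exp(-4.0 * alpha**2)))

def kennedy(alpha):
    """Ideal displace-and-count receiver: errs only on a vacuum outcome."""
    return 0.5 * math.exp(-4.0 * alpha**2)

def homodyne(alpha):
    """Shot-noise-limited Gaussian quadrature measurement."""
    return 0.5 * math.erfc(math.sqrt(2.0) * alpha)

for a in (0.3, 0.5, 1.0):
    print(f"|alpha|={a}: Helstrom {helstrom(a):.2e}, "
          f"Kennedy {kennedy(a):.2e}, homodyne {homodyne(a):.2e}")
```

The Helstrom value is the smallest at every amplitude, which is the gap the record's receiver is designed to close relative to the two standard devices.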

  13. Zero-Forcing and Minimum Mean-Square Error Multiuser Detection in Generalized Multicarrier DS-CDMA Systems for Cognitive Radio

    Directory of Open Access Journals (Sweden)

    Wang Li-Chun

    2008-01-01

    Full Text Available In wireless communications, multicarrier direct-sequence code-division multiple access (MC DS-CDMA) constitutes one of the highly flexible multiple access schemes. MC DS-CDMA provides a high number of degrees of freedom, which is beneficial for design and reconfiguration in dynamic communications environments, such as cognitive radios. In this contribution, we consider multiuser detection (MUD) in MC DS-CDMA, with an emphasis on low complexity, high flexibility, and robustness, so that the MUD schemes are suitable for deployment in dynamic communications environments. Specifically, a range of low-complexity MUDs are derived based on the zero-forcing (ZF), minimum mean-square error (MMSE), and interference cancellation (IC) principles. The bit-error rate (BER) performance of the MC DS-CDMA aided by the proposed MUDs is investigated by simulation. Our study shows that, in addition to the advantages provided by a general ZF, MMSE, or IC-assisted MUD, the proposed MUD schemes can be implemented using modular structures, where most modules are independent of each other. Owing to this independent modular structure, one module of the proposed MUDs may be reconfigured without affecting the others. Therefore, MC DS-CDMA, in conjunction with the proposed MUDs, constitutes one of the promising multiple access schemes for communications in dynamic environments such as cognitive radios.

  14. Zero-Forcing and Minimum Mean-Square Error Multiuser Detection in Generalized Multicarrier DS-CDMA Systems for Cognitive Radio

    Directory of Open Access Journals (Sweden)

    Lie-Liang Yang

    2008-01-01

    Full Text Available In wireless communications, multicarrier direct-sequence code-division multiple access (MC DS-CDMA) constitutes one of the highly flexible multiple access schemes. MC DS-CDMA provides a high number of degrees of freedom, which is beneficial for design and reconfiguration in dynamic communications environments, such as cognitive radios. In this contribution, we consider multiuser detection (MUD) in MC DS-CDMA, with an emphasis on low complexity, high flexibility, and robustness, so that the MUD schemes are suitable for deployment in dynamic communications environments. Specifically, a range of low-complexity MUDs are derived based on the zero-forcing (ZF), minimum mean-square error (MMSE), and interference cancellation (IC) principles. The bit-error rate (BER) performance of the MC DS-CDMA aided by the proposed MUDs is investigated by simulation. Our study shows that, in addition to the advantages provided by a general ZF, MMSE, or IC-assisted MUD, the proposed MUD schemes can be implemented using modular structures, where most modules are independent of each other. Owing to this independent modular structure, one module of the proposed MUDs may be reconfigured without affecting the others. Therefore, MC DS-CDMA, in conjunction with the proposed MUDs, constitutes one of the promising multiple access schemes for communications in dynamic environments such as cognitive radios.
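In a basic synchronous CDMA model, the ZF and MMSE detection principles named in these records reduce to two matrix filters applied to the received chip vector; an illustrative numpy sketch (toy random signatures and BPSK bits, not the papers' generalized MC DS-CDMA setup):

```python
import numpy as np

rng = np.random.default_rng(3)

K, N = 4, 16                    # users, spreading-gain chips
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # signature matrix
b = rng.choice([-1.0, 1.0], size=K)                     # user bits (BPSK)
sigma2 = 0.01
r = S @ b + np.sqrt(sigma2) * rng.standard_normal(N)    # received chips

# Zero-forcing (decorrelating) detector: inverts the signatures,
# removing multiple-access interference but amplifying noise.
b_zf = np.sign(np.linalg.pinv(S) @ r)

# MMSE detector: trades off interference suppression and noise enhancement.
W = np.linalg.solve(S.T @ S + sigma2 * np.eye(K), S.T)
b_mmse = np.sign(W @ r)

print(np.array_equal(b_zf, b), np.array_equal(b_mmse, b))
```

At this low noise level both detectors recover the transmitted bits; the MMSE filter's advantage over ZF appears at lower SNR, where the sigma^2 term tempers noise amplification.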

  15. Analysis of Reduction in Area in MIMO Receivers Using SQRD Method and Unitary Transformation with Maximum Likelihood Estimation (MLE) and Minimum Mean Square Error Estimation (MMSE) Techniques

    Directory of Open Access Journals (Sweden)

    Sabitha Gauni

    2014-03-01

    Full Text Available In the field of wireless communication, there is always a demand for reliability, improved range, and speed. Many wireless networks such as OFDM, CDMA2000, WCDMA, etc., provide a solution to this problem when incorporated with multiple-input multiple-output (MIMO) technology. Due to the complexity of its signal processing, MIMO is highly expensive in terms of area consumption. In this paper, a method of MIMO receiver design is proposed to reduce the area consumed by the processing elements involved in complex signal processing. A solution for area reduction in the MIMO maximum likelihood estimation (MLE) receiver using sorted QR decomposition (SQRD) and the unitary transformation method is analyzed. It provides a unified approach, reduces ISI, and gives better performance at low cost. The receiver pre-processor architecture based on minimum mean square error (MMSE) estimation is compared while using iterative SQRD and the unitary transformation method for vectoring. Unitary transformations are transformations of matrices which maintain the Hermitian nature of the matrix, and the multiplication and addition relationships between the operators. This helps to reduce the computational complexity significantly. The dynamic range of all variables is tightly bounded and the algorithm is well suited for fixed-point arithmetic.

  16. MINIMUM DISCRIMINATION INFORMATION PROBLEMS VIA GENERALIZED GEOMETRIC PROGRAMMING AND THEIR DUALITY THEORY

    Institute of Scientific and Technical Information of China (English)

    朱德通

    2003-01-01

    In this paper, the quadratic programming problem and the minimum discrimination information (MDI) problem with a set of quadratic inequality constraints and entropy constraints on the density are considered. Based on the properties of generalized geometric programming, the dual programs of these two problems are derived. Furthermore, the duality theorems and related Kuhn-Tucker conditions for the two pairs of primal-dual programs are established by duality theory.

  17. Error-rate estimation in discriminant analysis of non-linear longitudinal data: A comparison of resampling methods.

    Science.gov (United States)

    de la Cruz, Rolando; Fuentes, Claudio; Meza, Cristian; Núñez-Antón, Vicente

    2016-07-08

    Consider longitudinal observations across different subjects such that the underlying distribution is determined by a non-linear mixed-effects model. In this context, we look at the misclassification error rate for allocating future subjects using cross-validation, bootstrap algorithms (parametric bootstrap, leave-one-out, .632 and [Formula: see text]), and bootstrap cross-validation (which combines the first two approaches), and conduct a numerical study to compare the performance of the different methods. The simulation and comparisons in this study are motivated by real observations from a pregnancy study in which one of the main objectives is to predict normal versus abnormal pregnancy outcomes based on information gathered at early stages. Since in this type of studies it is not uncommon to have insufficient data to simultaneously solve the classification problem and estimate the misclassification error rate, we put special attention to situations when only a small sample size is available. We discuss how the misclassification error rate estimates may be affected by the sample size in terms of variability and bias, and examine conditions under which the misclassification error rate estimates perform reasonably well.

  18. Optimization of the detection coil of high-Tc superconducting quantum interference device-based nuclear magnetic resonance for discriminating a minimum amount of liver tumor of rats in microtesla fields

    Science.gov (United States)

    Chen, Hsin-Hsien; Huang, Kai-Wen; Yang, Hong-Chang; Horng, Herng-Er; Liao, Shu-Hsien

    2013-08-01

    This study presents an optimization of the detection coil of high-Tc superconducting quantum interference device (SQUID)-based nuclear magnetic resonance (NMR) in microtesla fields for discriminating a minimum amount of liver tumor in rats by characterizing the longitudinal relaxation rate, T1-1, of tested samples. The detection coil, which was coupled to the SQUID through a flux transformer, was optimized by varying the winding turns and diameters of the copper wires. The measured NMR signals were found to agree with the simulated signals. When discriminating liver tumors in rats, the averaged longitudinal relaxation rate was observed to be T1-1 = 3.3 s-1 for cancerous liver tissue and T1-1 = 6.6 s-1 for normal liver tissue. The results suggest that the method can successfully discriminate cancerous liver tissue from normal liver tissue in rats. The minimum amounts of sample that can be detected are 0.2 g for liver tumor and 0.4 g for normal liver tissue in 100 μT fields. The specimen is not damaged and can be used for other pathological analyses. The proposed method provides more possibilities for examining undersized specimens.

  19. Optimal discrimination of single-qubit mixed states

    Science.gov (United States)

    Weir, Graeme; Barnett, Stephen M.; Croke, Sarah

    2017-08-01

    We consider the problem of minimum-error quantum state discrimination for single-qubit mixed states. We present a method which uses the Helstrom conditions constructively and analytically; this algebraic approach is complementary to existing geometric methods, and solves the problem for any number of arbitrary signal states with arbitrary prior probabilities. It has long been known that the minimum-error probability is given by the trace of the Lagrange operator Γ . The remarkable feature of our approach is the central role played not by Γ , but by its inverse.
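For the two-state special case, the minimum-error probability discussed in this record has the closed Helstrom form P_err = 1/2 (1 - ||p1*rho1 - p0*rho0||_1); a small numpy sketch (two states only, whereas the paper's method handles arbitrarily many signal states via the Lagrange operator Γ):

```python
import numpy as np

def helstrom_error(rho0, rho1, p0=0.5):
    """Minimum error probability for discriminating two density matrices
    with priors p0 and 1 - p0 (two-state Helstrom formula):
        P_err = 1/2 (1 - || p1*rho1 - p0*rho0 ||_1).
    """
    p1 = 1.0 - p0
    gamma = p1 * rho1 - p0 * rho0
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
    return 0.5 * (1.0 - trace_norm)

# Orthogonal pure qubit states are perfectly distinguishable ...
up = np.array([[1, 0], [0, 0]], dtype=complex)
down = np.array([[0, 0], [0, 1]], dtype=complex)
print(helstrom_error(up, down))   # 0.0

# ... while identical states force pure guessing.
print(helstrom_error(up, up))     # 0.5
```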

  20. An Optimized Curve-Fitting Method with Minimum Radius-of-Curvature Error

    Institute of Scientific and Technical Information of China (English)

    钟汉桥; 唐晓腾; 叶仲和

    2001-01-01

    The shortcomings of existing curve-fitting methods are analyzed, and an optimized double-circular-arc (biarc) curve-fitting method based on minimizing the radius-of-curvature error is proposed. The feasibility of the new method is verified through examples.

  1. Minimum mean squared error (MSE) adjustment and the optimal Tykhonov-Phillips regularization parameter via reproducing best invariant quadratic uniformly unbiased estimates (repro-BIQUUE)

    Science.gov (United States)

    Schaffrin, Burkhard

    2008-02-01

    In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.
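
    The MSE advantage of biased (Tykhonov/ridge-type) estimation over unbiased least squares on an ill-posed problem can be seen in a toy sketch; the design matrix, regularization weight alpha and noise level below are assumptions, with the matrix S effectively chosen as the identity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned design: two nearly collinear columns
n = 50
x = rng.normal(size=n)
X = np.column_stack([x, x + 1e-3 * rng.normal(size=n)])
beta_true = np.array([1.0, 1.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Unbiased least squares (BLUUE-like) vs alpha-weighted ridge (biased)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
alpha = 1e-2
beta_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(2), X.T @ y)

# Squared estimation error of each estimator against the known truth
mse_ols = np.sum((beta_ols - beta_true) ** 2)
mse_ridge = np.sum((beta_ridge - beta_true) ** 2)
```

The regularized estimate is biased, but on the near-singular direction its variance reduction far outweighs the bias, so its total MSE is orders of magnitude smaller.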

  2. Flash-Type Discrimination

    Science.gov (United States)

    Koshak, William J.

    2010-01-01

    This viewgraph presentation describes the significant progress made in the flash-type discrimination algorithm development. The contents include: 1) Highlights of Progress for GLM-R3 Flash-Type discrimination Algorithm Development; 2) Maximum Group Area (MGA) Data; 3) Retrieval Errors from Simulations; and 4) Preliminary Global-scale Retrieval.

  3. Correlation rotation precoding algorithm based on criterion of minimum mean square error

    Institute of Scientific and Technical Information of China (English)

    祁美娟; 吴玉成

    2012-01-01

    To address the noise amplification of the traditional correlation rotation (CR) precoding algorithm, a Lagrange function was used to minimize the error between the received and transmitted signals, and Bayesian theory together with channel statistics was used to estimate imperfect channel state information. On this basis, CR precoding schemes based on the minimum mean square error (MMSE) criterion were designed for both perfect and imperfect channel state information (CSI). Analysis and simulation results show that, compared with CR precoding based on the zero-forcing (ZF) criterion, the bit error rate performance of the proposed scheme improves by about 2-3 dB at the same signal-to-noise ratio (SNR) under perfect CSI, and is also significantly improved under imperfect CSI.
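
    The MMSE-versus-ZF gap discussed in this entry can be illustrated with a generic linear-receiver sketch (not the paper's CR precoder): on an ill-conditioned channel, the zero-forcing filter amplifies noise, while the MMSE filter trades a little bias for much lower error. The channel matrix and noise variance below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Deterministic, poorly conditioned 2x2 MIMO channel
H = np.array([[1.0, 0.9],
              [0.9, 1.0]])
sigma2 = 0.1                    # noise variance per receive antenna

# Zero-forcing vs. MMSE linear receivers (ZF is the sigma2 -> 0 limit)
W_zf = np.linalg.solve(H.T @ H, H.T)
W_mmse = np.linalg.solve(H.T @ H + sigma2 * np.eye(2), H.T)

# BPSK-like unit-power symbols through the channel plus Gaussian noise
s = rng.choice([-1.0, 1.0], size=(2, 5000))
noise = np.sqrt(sigma2) * rng.normal(size=s.shape)
y = H @ s + noise

mse_zf = np.mean((W_zf @ y - s) ** 2)
mse_mmse = np.mean((W_mmse @ y - s) ** 2)
```

On this channel the ZF filter inverts a near-singular matrix and its MSE is roughly an order of magnitude larger than the MMSE filter's.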

  4. Analysis of Minimum Error at River Source to Extract River Network Based on DEM

    Institute of Scientific and Technical Information of China (English)

    陈冬平; 陈莹; 陈兴伟

    2011-01-01

    With the development of hydrological models, extraction of river drainage networks has become a hot topic in hydrology research. Early on, drainage networks were extracted by digitizing topographic or drainage maps, but the result was limited by the resolution of the data source. At present there are two kinds of methods for extracting a river drainage network from a DEM. One overlays the DEM-extracted network on a digitized river map or vector river layer so that the extracted network better matches the actual one; its accuracy, however, still depends on the resolution of the reference map or layer. The other is based on an "inflection point", but this assumption suffers from the problem of choosing a scale-free interval. To solve these problems, the river source minimum error (RSME) method based on DEM is presented in this paper. First, the relationship between the grid size and the distance error between the actual river source and the extracted network source is established; second, the minimum distance error is adopted as the criterion that makes the extracted drainage network unique, and the network is then determined. Taking the Jinjiang River as an example and using a 30 m resolution DEM as the data source, the RSME method was applied to extract the Jinjiang drainage network on the ArcGIS 9.2 platform. The result shows that the distance error between the river source and the extracted network source is smallest when the number of grids reaches 5814 and the minimum river length is 42 m, with a corresponding fractal dimension of 1.389. This indicates that the proposed RSME method is a reasonable way to extract a watershed drainage network.

  5. A model for discriminating reinforcers in time and space.

    Science.gov (United States)

    Cowie, Sarah; Davison, Michael; Elliffe, Douglas

    2016-06-01

    Both the response-reinforcer and stimulus-reinforcer relation are important in discrimination learning; differential responding requires a minimum of two discriminably-different stimuli and two discriminably-different associated contingencies of reinforcement. When elapsed time is a discriminative stimulus for the likely availability of a reinforcer, choice over time may be modeled by an extension of the Davison and Nevin (1999) model that assumes that local choice strictly matches the effective local reinforcer ratio. The effective local reinforcer ratio may differ from the obtained local reinforcer ratio for two reasons: Because the animal inaccurately estimates times associated with obtained reinforcers, and thus incorrectly discriminates the stimulus-reinforcer relation across time; and because of error in discriminating the response-reinforcer relation. In choice-based timing tasks, the two responses are usually highly discriminable, and so the larger contributor to differences between the effective and obtained reinforcer ratio is error in discriminating the stimulus-reinforcer relation. Such error may be modeled either by redistributing the numbers of reinforcers obtained at each time across surrounding times, or by redistributing the ratio of reinforcers obtained at each time in the same way. We assessed the extent to which these two approaches to modeling discrimination of the stimulus-reinforcer relation could account for choice in a range of temporal-discrimination procedures. The version of the model that redistributed numbers of reinforcers accounted for more variance in the data. Further, this version provides an explanation for shifts in the point of subjective equality that occur as a result of changes in the local reinforcer rate. The inclusion of a parameter reflecting error in discriminating the response-reinforcer relation enhanced the ability of each version of the model to describe data. The ability of this class of model to account for a

  6. Robust Speech Recognition Method Based on Discriminative Environment Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    HAN Jiqing; GAO Wen

    2001-01-01

    It is an effective approach for robust speech recognition to learn the influence of environmental parameters, such as additive noise and channel distortions, from training data. Most previous methods are based on the maximum likelihood estimation criterion; however, these methods do not lead to a minimum error rate result. In this paper, a novel discriminative learning method for environmental parameters, based on the Minimum Classification Error (MCE) criterion, is proposed. In the method, a simple classifier and the Generalized Probabilistic Descent (GPD) algorithm are adopted to iteratively learn the environmental parameters. The clean speech features are then estimated from the noisy speech features using the estimated environmental parameters, and these estimates are fed to the back-end HMM classifier. Experiments on a task of 18 isolated confusable Korean words show a best relative error rate reduction of 32.1% compared with a conventional HMM system.

  7. Minimum Entropy Orientations

    CERN Document Server

    Cardinal, Jean; Joret, Gwenaël

    2008-01-01

    We study graph orientations that minimize the entropy of the in-degree sequence. The problem of finding such an orientation is an interesting special case of the minimum entropy set cover problem previously studied by Halperin and Karp [Theoret. Comput. Sci., 2005] and by the current authors [Algorithmica, to appear]. We prove that the minimum entropy orientation problem is NP-hard even if the graph is planar, and that there exists a simple linear-time algorithm that returns an approximate solution with an additive error guarantee of 1 bit. This improves on the only previously known algorithm which has an additive error guarantee of log_2 e bits (approx. 1.4427 bits).
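
    For intuition, the objective can be stated in a few lines of code. The brute-force search below is exponential and only suitable for tiny graphs; the paper's contribution is an NP-hardness proof and a linear-time algorithm with a 1-bit additive guarantee, which this sketch does not implement:

```python
import itertools
import math

def indegree_entropy(n_vertices, oriented_edges):
    """Entropy (in bits) of the in-degree sequence of an orientation.
    With m edges and in-degrees d_i, H = -sum_i (d_i/m) * log2(d_i/m)."""
    indeg = [0] * n_vertices
    for _, head in oriented_edges:
        indeg[head] += 1
    m = len(oriented_edges)
    return -sum(d / m * math.log2(d / m) for d in indeg if d > 0)

def min_entropy_orientation(n_vertices, edges):
    """Exhaustive search over all 2^m orientations (tiny graphs only)."""
    best_h, best_orient = math.inf, None
    for flips in itertools.product((False, True), repeat=len(edges)):
        oriented = [(v, u) if flip else (u, v)
                    for (u, v), flip in zip(edges, flips)]
        h = indegree_entropy(n_vertices, oriented)
        if h < best_h:
            best_h, best_orient = h, oriented
    return best_h, best_orient

# Triangle: concentrating in-degrees as (2,1,0) beats the cyclic (1,1,1)
h, orient = min_entropy_orientation(3, [(0, 1), (1, 2), (0, 2)])
```

For the triangle, the optimum in-degree distribution (2/3, 1/3) has entropy log2(3) - 2/3, about 0.918 bits, versus log2(3), about 1.585 bits, for the cyclic orientation.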

  8. A Two-Layer Classifier with Rejection Feature based on Discrimination Projection and Minimum L1-ball Covering Model

    Institute of Scientific and Technical Information of China (English)

    胡正平; 贾千文; 许成谦

    2011-01-01

    Classical classifiers assume that every test pattern belongs to one of the training classes, which can lead to erroneous judgments in applications such as network security, biometric identification and medical diagnosis, because the classifier cannot reject uncooperative or exceptional input patterns. A two-layer classifier with a rejection feature, based on discriminative projection and a minimum L1-ball covering model, is proposed to solve this problem. To address the fact that one-class classification ignores the discrimination between the given classes, a differential vector is defined to represent the detailed information of each class, forming a new differential feature space. Combined with PCA-L1, a new discriminative projection called differential-vector PCA-L1 is computed. Then a minimum L1-ball covering model is constructed as the decision boundary around each class, so that input patterns from non-object classes can be rejected by this first-layer boundary descriptor. Finally, if a pattern is accepted by the L1-ball covering model, the recognition result is determined by a nearest-neighbor classifier. Experiments on the UCI database, the MNIST database of handwritten digits and the CMU AMP face expression database show that the proposed method achieves good recognition and rejection performance and is applicable to many real pattern-recognition tasks.

  9. The Minimum Closed Sphere Model for Multi-Class Classifiers Using Error-Correcting Output Codes

    Institute of Scientific and Technical Information of China (English)

    李建武; 魏海周; 宋玉龙

    2011-01-01

    Multi-class classifiers using error-correcting output codes (ECOC) generally classify test examples by the shortest-Hamming-distance criterion, combining the outputs of binary classifiers built during the training phase. This paper analyzes standard binary ECOC classifiers from the geometrical viewpoint of a minimum closed sphere model, then extends the model to real-valued (float) codes by using support vector domain description (SVDD) to find the minimum closed sphere enclosing each class in the float encoding space. Based on the geometry of the minimum closed spheres, an ECOC classification strategy that yields posterior probability estimates is further explored. Finally, support vector machines (SVM) are used as the binary ECOC classifiers, and experimental results on the UCI data sets show that for short code lengths the proposed model clearly outperforms several traditional methods in classification accuracy.

  10. Threshold selection method for medical X-ray image filtering based on minimum mean-square error

    Institute of Scientific and Technical Information of China (English)

    刘光达; 赵立荣

    2001-01-01

    Disturbance noise in medical X-ray imaging systems consists of inherent noise and quantum noise, which obey Gaussian and Poisson distributions, respectively. This paper theoretically derives an optimal threshold selection method for a wavelet-based medical X-ray image filter. Through processing of real CT images, filtering of medical X-ray images based on the minimum mean-square error criterion has been accomplished.
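
    The threshold-selection idea, namely picking the threshold that minimizes the mean-square error of the filtered result, can be sketched on a 1-D toy signal with a one-level Haar transform. This is an assumption-laden stand-in: the paper works on wavelet-transformed X-ray images and derives the optimum analytically, whereas the sketch simply sweeps thresholds against a known clean reference:

```python
import numpy as np

def haar_1level(x):
    """One-level orthogonal Haar transform: approximation and detail."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def inv_haar_1level(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(d, t):
    """Soft thresholding of detail coefficients."""
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.3 * rng.normal(size=256)

# Sweep thresholds and keep the one with minimum MSE against the clean signal
a, d = haar_1level(noisy)
thresholds = np.linspace(0.0, 1.5, 61)
mses = [np.mean((inv_haar_1level(a, soft(d, t)) - clean) ** 2)
        for t in thresholds]
t_opt = thresholds[int(np.argmin(mses))]
```

At t = 0 the reconstruction equals the noisy signal (the Haar transform is orthogonal), so any MSE reduction at the chosen t_opt > 0 is genuine denoising of the detail band.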

  11. Linear Minimum Mean Square Error Estimation for Wet Delay Correction in SAR Interferogram

    Institute of Scientific and Technical Information of China (English)

    许才军; 王江林

    2007-01-01

    Linear minimum mean square error (LMMSE) estimation is introduced for atmospheric delay correction of SAR interferograms. Simulation experiments over flat terrain compare the interpolation performance of LMMSE with inverse distance weighted averaging (IDWA) and ordinary Kriging (KRG), both commonly used for atmospheric delay correction of SAR interferograms. For areas with large topographic relief, LMMSE and KRG with and without a height-difference term are compared, together with IDWA. The results show that, over flat terrain, when the known points are randomly distributed the advantage of LMMSE becomes more pronounced as the number of points decreases, at all accuracy levels; over terrain with large relief, LMMSE incorporating the height-difference factor gives the best interpolation results.

  12. The Best Model of the Swiss Banknote Data -Validation by the 95% CI of coefficients and t-test of discriminant scores

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2016-06-01

    Discriminant analysis is not inferential statistics, since there are no equations based on the normal distribution for the standard error (SE) of the error rate or of the discriminant coefficients. In this paper, we propose a "k-fold cross-validation for small samples" that yields the 95% confidence intervals (CIs) of error rates and discriminant coefficients. The method is a computer-intensive approach using statistical and mathematical programming (MP) software such as JMP and LINGO. With it, we can choose the best model as the one with the minimum mean error rate in the validation samples (the minimum M2 standard). In this research, we examine the sixteen linearly separable models of the Swiss banknote data with eight linear discriminant functions (LDFs). The M2 of the best model of Revised IP-OLDF is the smallest among all models. We find that all coefficients of six Revised IP-OLDF models among the sixteen are rejected by the 95% CIs of the discriminant coefficients (the discriminant-coefficient standard). Comparing t-values of the discriminant scores, the best model has the maximum t-value among the sixteen models (the maximum t-value standard). We can therefore conclude that all three standards support the best model of Revised IP-OLDF.
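
    The repeated k-fold idea, resampling, re-fitting, and taking percentiles of the validation error rates, can be sketched with a simple nearest-centroid classifier on synthetic data. JMP, LINGO and Revised IP-OLDF are not reproduced here; everything below is an assumed stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small two-class sample: 40 points per class from shifted Gaussians
n = 40
X = np.vstack([rng.normal(-1.0, 1.0, size=(n, 2)),
               rng.normal(+1.0, 1.0, size=(n, 2))])
y = np.array([0] * n + [1] * n)

def nearest_centroid_error(Xtr, ytr, Xte, yte):
    """Validation error rate of a nearest-centroid classifier (LDF stand-in)."""
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float(np.mean(pred != yte))

# Repeated k-fold: reshuffle on each repetition to build a distribution
# of validation error rates, then take percentiles as a 95% CI.
k, errors = 5, []
for rep in range(100):
    idx = rng.permutation(len(y))
    for fold in np.array_split(idx, k):
        mask = np.ones(len(y), dtype=bool)
        mask[fold] = False
        errors.append(nearest_centroid_error(X[mask], y[mask], X[fold], y[fold]))

lo, hi = np.percentile(errors, [2.5, 97.5])   # 95% CI of the error rate
```

The same resampling loop applied to the fitted coefficients, rather than the error rate, gives the coefficient CIs used by the discriminant-coefficient standard.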

  13. Genetic Discrimination

    Science.gov (United States)

    Many Americans fear that participating in research could expose them to genetic discrimination. Federal law addresses genetic discrimination in health insurance (Title I) and employment (Title II).

  14. Study on a Rapid Error-Connection Discriminating Method for the New Three-Phase Three-Wire Measuring Device

    Institute of Scientific and Technical Information of China (English)

    陈霄; 周玉; 范洁; 易永仙; 陈刚

    2014-01-01

    Because the field connections of high-voltage metering devices are complex and the discrimination algorithms are involved, the result of an error-connection check cannot be obtained accurately and rapidly. A phasor diagram of the metering elements can easily be constructed from the phase angles measured in the field. After rotating the phasor diagram as a whole, the wiring relations under inductive and capacitive loads can be determined quickly, so that error connections of a three-phase three-wire metering device can be discriminated rapidly and accurately.

  15. SAR image denoising via linear minimum mean-square error estimation

    Institute of Scientific and Technical Information of China (English)

    刘书君; 吴国庆; 张新征; 沈晓东; 李勇明

    2016-01-01

    To address the loss of detail and texture information during synthetic aperture radar (SAR) image denoising, a SAR image denoising approach is proposed that estimates transform-domain coefficients by linear minimum mean-square error (LMMSE), combining the statistical characteristics of SAR speckle noise. First, similar image blocks are clustered into disjoint sets with a Kmeans algorithm adapted to the SAR scene. Then, singular value decomposition (SVD) is performed on each set of similar blocks, yielding noisy singular-value coefficients that carry the row and column correlations of the set. To estimate the true image's singular-value coefficients more accurately from the noisy ones, the multiplicative noise is first converted to additive noise through an additive signal-dependent noise (ASDN) model, and the singular-value coefficients are then estimated under the LMMSE criterion; finally, the estimates are used to reconstruct the denoised set of image blocks. Experimental results show that the method fully exploits the sparsity of the singular-value coefficients of sets of similar blocks: estimating them by LMMSE both removes the noise component and avoids discarding the small coefficients corresponding to texture detail, so the method not only denoises effectively but also preserves image texture well, giving good visual quality.
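
    The SVD step can be illustrated with a generic singular-value shrinkage sketch on a synthetic low-rank patch matrix with additive Gaussian noise. The paper's ASDN conversion, Kmeans clustering and exact LMMSE weights are not reproduced; the Wiener-like gain and the (sqrt(m)+sqrt(n))*sigma noise edge below are common random-matrix heuristics, not the paper's formulas:

```python
import numpy as np

def svd_shrink_denoise(patches, sigma):
    """Wiener-like shrinkage of the singular values of a matrix of similar
    patches. Singular values below the noise edge (sqrt(m)+sqrt(n))*sigma
    are zeroed; larger ones are attenuated toward the signal level."""
    U, s, Vt = np.linalg.svd(patches, full_matrices=False)
    m, n = patches.shape
    tau2 = ((np.sqrt(m) + np.sqrt(n)) * sigma) ** 2
    gain = np.maximum(s ** 2 - tau2, 0.0) / np.maximum(s ** 2, 1e-12)
    return (U * (s * gain)) @ Vt

rng = np.random.default_rng(0)
# A rank-1 "set of similar patches" corrupted by additive Gaussian noise
u = rng.normal(size=(32, 1))
v = rng.normal(size=(1, 16))
clean = u @ v
noisy = clean + 0.3 * rng.normal(size=clean.shape)
denoised = svd_shrink_denoise(noisy, 0.3)
```

Because the clean patch matrix is low-rank, almost all of its energy sits in singular values far above the noise edge, so shrinkage removes most of the noise while barely touching the signal.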

  16. Detection of crack defect based on minimum error and pulse coupled neural networks

    Institute of Scientific and Technical Information of China (English)

    赵慧洁; 葛文谦; 李旭东

    2012-01-01

    Surface crack detection can effectively reveal structural dangers in concrete bridges, but the variety of crack shapes, image noise caused by surface blots, and uneven gray levels caused by asymmetric illumination make crack detection very difficult. To detect cracks against a complicated background, the characteristics of crack images are analyzed, and the pulse coupled neural network (PCNN) model is simplified by analyzing its running behavior and the state changes of its neurons. The crack image is segmented with the simplified PCNN model, whose iteration is stopped according to the minimum-error criterion, so that PCNN segmentation of crack images runs automatically. Region features computed from flatness and roundness are then used to remove the interference remaining after segmentation, achieving effective detection of surface cracks. The algorithm is evaluated by computing sensitivity and specificity to draw ROC (receiver operating characteristic) curves and comparing the curves of different detection methods. Experimental results on real images of bridge surfaces show that the proposed crack detection method is effective.
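
    The minimum-error stopping rule is in the spirit of Kittler-Illingworth minimum-error thresholding, which can be sketched on a synthetic bimodal histogram. The PCNN itself is not reproduced, and the mixture parameters below are assumptions:

```python
import numpy as np

def min_error_threshold(hist):
    """Kittler-Illingworth minimum-error threshold for a 256-bin histogram,
    assuming a two-Gaussian mixture; minimizes the criterion
    J(t) = 1 + w0*ln(v0) + w1*ln(v1) - 2*(w0*ln(w0) + w1*ln(w1))."""
    p = hist / hist.sum()
    bins = np.arange(len(p))
    best_t, best_j = 0, np.inf
    for t in range(1, len(p) - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 < 1e-9 or w1 < 1e-9:
            continue
        mu0 = (bins[:t] * p[:t]).sum() / w0
        mu1 = (bins[t:] * p[t:]).sum() / w1
        v0 = ((bins[:t] - mu0) ** 2 * p[:t]).sum() / w0
        v1 = ((bins[t:] - mu1) ** 2 * p[t:]).sum() / w1
        if v0 < 1e-9 or v1 < 1e-9:
            continue
        j = (1 + w0 * np.log(v0) + w1 * np.log(v1)
             - 2 * (w0 * np.log(w0) + w1 * np.log(w1)))
        if j < best_j:
            best_t, best_j = t, j
    return best_t

# Synthetic bimodal gray-level data: background near 60, bright "crack"
# pixels near 180 (assumed values, not from the paper)
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 500)])
hist, _ = np.histogram(np.clip(samples, 0, 255), bins=256, range=(0, 256))
t = min_error_threshold(hist)
```

The selected threshold lands near the Bayes boundary between the two Gaussian modes, well between the background and crack gray levels.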

  17. Tone model integration based on discriminative weight training for Putonghua speech recognition

    Institute of Scientific and Technical Information of China (English)

    HUANG Hao; ZHU Jie

    2008-01-01

    A discriminative framework for tone model integration in continuous speech recognition is proposed. The method uses model-dependent weights to scale the probabilities of hidden Markov models based on spectral features and of tone models based on tonal features. The weights are discriminatively trained under the minimum phone error criterion, and an update equation for the model weights based on the extended Baum-Welch algorithm is derived. Various schemes of model weight combination are evaluated, and a smoothing technique is introduced to make training robust to overfitting. The proposed method is evaluated on tonal-syllable-output and character-output speech recognition tasks. The experimental results show relative error reductions of 9.5% and 4.7% over a global weight on the two tasks, owing to a better interpolation of the given models. This proves the effectiveness of discriminatively trained model weights for tone model integration.

  18. The Risk Criteria for Discriminating Technology Innovation Investment Decision Errors of High-tech Enterprises

    Institute of Scientific and Technical Information of China (English)

    边云岗; 郭开仲

    2014-01-01

    Technological innovation in high-tech enterprises is a high-risk activity: high risk means the possibility of obtaining a high risk gain or suffering a large risk loss. Correct decisions on technological innovation investment therefore help maximize investment income. An analysis of the risk-benefit law of technological innovation investment shows that an optimal degree of risk and a critical degree of risk exist for technological innovation investment objectives. Rational decision makers should determine, based on the enterprise's own risk tolerance and business strategy, a risk-tolerance interval containing the optimal degree of risk, and use it as the risk criterion for discriminating technology-innovation investment decision errors; an error function based on error-eliminating theory should also be constructed to measure the degree of an investment decision error, so that appropriate remedial measures can be taken. Finally, an example of a wrong investment decision is analyzed to illustrate that the risk criterion is scientific and rational.

  19. Price Discrimination

    OpenAIRE

    Armstrong, Mark

    2008-01-01

    This paper surveys recent economic research on price discrimination, both in monopoly and oligopoly markets. Topics include static and dynamic forms of price discrimination, and both final and input markets are considered. Potential antitrust aspects of price discrimination are highlighted throughout the paper. The paper argues that the informational requirements to make accurate policy are very great, and with most forms of price discrimination a laissez-faire policy may be the best availabl...

  20. Structural Discrimination

    DEFF Research Database (Denmark)

    Thorsen, Mira Skadegård

    In this article, I discuss structural discrimination, an underrepresented area of study in Danish discrimination and intercultural research. It is defined here as discursive and constitutive, and presented as a central element of my analytical approach, employed to understand and identify aspects of power and asymmetry in communication and interactions. With this as a defining term, I address how exclusion and discrimination can exist, while remaining indiscernible, within widely accepted societal norms. I introduce the concepts of microdiscrimination and benevolent discrimination as two ways of articulating particular, opaque forms of racial discrimination that occur in everyday Danish (and other) contexts and have therefore become normalized. I present and discuss discrimination as it surfaces in data from my empirical studies of discrimination in Danish contexts.

  1. Multiclass Bayes error estimation by a feature space sampling technique

    Science.gov (United States)

    Mobasseri, B. G.; Mcgillem, C. D.

    1979-01-01

    A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class 4-feature discrimination problem with results previously reported and 4-class 4-feature multispectral scanner Landsat data classified by training and testing of the available data.
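
    Under the same Gaussian assumptions, a brute-force Monte Carlo estimate of the minimum (Bayes) probability of error is a useful cross-check for such algorithms. The two-class configuration below, whose analytical error is Phi(-1), approximately 0.159, is an assumed example:

```python
import numpy as np

def bayes_error_mc(means, covs, priors, n=200_000, seed=0):
    """Monte Carlo estimate of the minimum (Bayes) error for Gaussian
    classes: draw from the mixture, classify each sample by maximum
    prior-weighted log-likelihood, and count misclassifications."""
    rng = np.random.default_rng(seed)
    k = len(means)
    errors = 0
    counts = rng.multinomial(n, priors)       # samples drawn per class
    for c, m in enumerate(counts):
        x = rng.multivariate_normal(means[c], covs[c], size=m)
        scores = np.stack([
            np.log(priors[j])
            - 0.5 * np.log(np.linalg.det(covs[j]))
            - 0.5 * np.einsum('ni,ij,nj->n', x - means[j],
                              np.linalg.inv(covs[j]), x - means[j])
            for j in range(k)
        ])
        errors += np.sum(scores.argmax(axis=0) != c)
    return errors / n

# Two equal-prior classes with unit covariance, means at (-1,0) and (1,0):
means = [np.array([-1.0, 0.0]), np.array([1.0, 0.0])]
covs = [np.eye(2), np.eye(2)]
err = bayes_error_mc(means, covs, [0.5, 0.5])
```

With 200,000 draws the estimate agrees with the closed-form value Phi(-1) to within about a tenth of a percentage point; the same routine extends directly to M classes and N features.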

  3. Minimum quality standards and international trade

    DEFF Research Database (Denmark)

    Baltzer, Kenneth Thomas

    2011-01-01

    This paper investigates the impact of a non-discriminating minimum quality standard (MQS) on trade and welfare when the market is characterized by imperfect competition and asymmetric information. A simple partial equilibrium model of an international Cournot duopoly is presented in which the trading countries prefer different levels of regulation. As a result, international trade disputes are likely to arise even when regulation is non-discriminating.

  4. Discriminative tonal feature extraction method in mandarin speech recognition

    Institute of Scientific and Technical Information of China (English)

    HUANG Hao; ZHU Jie

    2007-01-01

    To utilize the supra-segmental nature of Mandarin tones, this article proposes a feature extraction method for hidden Markov model (HMM) based tone modeling. The method uses linear transforms to project the F0 (fundamental frequency) features of neighboring syllables as compensations, and adds them to the original F0 features of the current syllable. The transforms are discriminatively trained using an objective function termed "minimum tone error", a smooth approximation of tone recognition accuracy. Experiments show that the new tonal features achieve a 3.82% improvement in tone recognition rate over the baseline of maximum-likelihood-trained HMMs on the normal F0 features. Further experiments show that discriminative HMM training on the new features is 8.78% better than the baseline.

  5. How minimum detectable displacement in a GNSS Monitoring Network change?

    Science.gov (United States)

    Hilmi Erkoç, Muharrem; Doǧan, Uǧur; Aydın, Cüneyt

    2016-04-01

    The minimum detectable displacement in a geodetic monitoring network is the smallest displacement magnitude that can be discriminated with known error probabilities. This displacement, originally deduced from sensitivity analysis, depends on the network design, the observation accuracy, the datum of the network, the direction of the displacement and the power of the statistical test used for detecting displacements. One may investigate how different scenarios of network design and observation accuracy influence the minimum detectable displacements for a specified datum, a-priori forecasted directions and an assumed test power, and then decide which scenario is best or most nearly optimal. Since it is sometimes difficult to forecast the directions of the displacements, the minimum detectable displacements may instead be derived along the eigen-directions associated with the maximum eigenvalues at the network stations. This study investigates how the minimum detectable displacements in a GNSS monitoring network change with the accuracies of the network stations. For this, the CORS-TR network in Turkey with 15 stations (one station held fixed) is used. Data with 4 h, 6 h, 12 h and 24 h observing-session durations on three sequential days of 2011, 2012 and 2013 were analyzed with the Bernese 5.2 GNSS software. The repeatabilities of the daily solutions for each year were analyzed carefully to scale the Bernese cofactor matrices properly. Root mean square (RMS) values of the daily repeatability with respect to the combined three-day solution were computed; they are generally less than 2 mm in the horizontal directions (north and east) and less than 5 mm in the vertical direction for the 24 h observing session. With the cofactor matrices obtained for these observing sessions, the minimum detectable displacements along the (maximum) eigen-directions were compared with each other. According to these comparisons, longer observing sessions yield smaller minimum detectable displacements.

  6. Spatial discrimination and visual discrimination

    DEFF Research Database (Denmark)

    Haagensen, Annika M. J.; Grand, Nanna; Klastrup, Signe

    2013-01-01

    Two methods of investigating learning and memory in juvenile Göttingen minipigs were evaluated for potential use in preclinical toxicity testing. Twelve minipigs were tested using a spatial hole-board discrimination test including a learning phase and two memory phases; five minipigs were tested in a visual discrimination test. The juvenile minipigs were able to learn the spatial hole-board discrimination test and showed improved working and reference memory during the learning phase. Performance in the memory phases was affected by the retention intervals, but the minipigs were able to remember the concept of the test in both memory phases; working memory and reference memory were significantly improved in the last trials of the memory phases. In the visual discrimination test, the minipigs learned to discriminate between the three figures presented to them within 9-14 sessions. For the memory test ...

  7. Improvement Comparison of Different Lattice-based Discriminative Training Methods in Chinese-monolingual and Chinese-English-bilingual Speech Recognition

    Institute of Scientific and Technical Information of China (English)

    QIAN Yan-Min; SHAN Yu-Xiang; WANG Lin-Fang; LIU Jia

    2012-01-01

    Discriminative training approaches such as minimum phone error (MPE), feature minimum phone error (fMPE) and boosted maximum mutual information (BMMI) have brought remarkable improvements to the speech community in recent years; however, much work remains to be done. This paper investigates the performance of three lattice-based discriminative training methods in detail, and compares different I-smoothing methods for obtaining more robust models in the Chinese-monolingual setting. The complementary properties of the different discriminative training methods are exploited for system combination by recognizer output voting error reduction (ROVER). Although discriminative training is normally used in monolingual systems, this paper systematically investigates its use for bilingual speech recognition, including MPE, fMPE and BMMI. A new method is proposed to generate significantly better lattices for training the bilingual model, and complementary discriminatively trained models are explored to obtain the best ROVER performance in the bilingual setting. Experimental results show that all forms of discriminative training reduce the word error rate in both monolingual and bilingual systems, and that combining complementary discriminative training methods improves performance significantly.

  8. A high-performance, low-cost, leading edge discriminator

    Indian Academy of Sciences (India)

    S K Gupta; Y Hayashi; A Jain; S Karthikeyan; S Kawakami; K C Ravindran; S C Tonwar

    2005-08-01

    A high-performance, low-cost, leading edge discriminator has been designed with a timing performance comparable to state-of-the-art, commercially available discriminators. A timing error of 16 ps is achieved under ideal operating conditions. Under more realistic operating conditions the discriminator displays a timing error of 90 ps. It has an intrinsic double pulse resolution of 4 ns which is better than most commercial discriminators. A low-cost discriminator is an essential requirement of the GRAPES-3 experiment where a large number of discriminator channels are used.

  9. Discriminative Structured Dictionary Learning for Image Classification

    Institute of Scientific and Technical Information of China (English)

    王萍; 兰俊花; 臧玉卫; 宋占杰

    2016-01-01

    In this paper, a discriminative structured dictionary learning algorithm is presented. To enhance the dictionary’s discriminative power, the reconstruction error, classification error and inhomogeneous representation error are integrated into the objective function. The proposed approach learns a single structured dictionary and a linear classifier jointly. The learned dictionary encourages the samples from the same class to have similar sparse codes, and the samples from different classes to have dissimilar sparse codes. The solution to the objective function is achieved by employing a feature-sign search algorithm and Lagrange dual method. Experimental results on three public databases demonstrate that the proposed approach outperforms several recently proposed dictionary learning techniques for classification.
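The three-term objective described above can be sketched in a form similar to label-consistent dictionary learning. The notation below (Y for training data, D for the dictionary, X for sparse codes, Q for ideal "discriminative" codes with transform A, H for class labels, W for the linear classifier) is illustrative and not necessarily the paper's own:

```latex
\min_{D, A, W, X}\;
\underbrace{\|Y - DX\|_F^2}_{\text{reconstruction error}}
+ \alpha \underbrace{\|Q - AX\|_F^2}_{\text{representation error}}
+ \beta  \underbrace{\|H - WX\|_F^2}_{\text{classification error}}
\quad \text{s.t. } \|x_i\|_0 \le T \;\; \forall i
```

Minimizing the second term pushes same-class samples toward similar sparse codes, while the third term trains the classifier jointly with the dictionary.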

  10. Minimum Length - Maximum Velocity

    CERN Document Server

    Panes, Boris

    2011-01-01

    We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example we can predict the ratio between the minimum lengths in space and time using the results from OPERA about superluminal neutrinos.

  11. Minimum Mean-Square Error Single-Channel Signal Estimation

    DEFF Research Database (Denmark)

    Beierholm, Thomas

    2008-01-01

    are expressed and in the way the estimator is approximated. The starting point of the first method is prior probability density functions for both signal and noise and it is assumed that their Laplace transforms (moment generating functions) are available. The corresponding posterior mean integral that defines...... inference is performed by particle filtering. The speech model is a time-varying auto-regressive model reparameterized by formant frequencies and bandwidths. The noise is assumed non-stationary and white. Compared to the case of using the AR coefficients directly then it is found very beneficial to perform...... particle filtering using the reparameterized speech model because it is relative straightforward to exploit prior information about formant features. A modified MMSE estimator is introduced and performance of the particle filtering algorithm is compared to a state of the art hearing aid noise reduction...

  12. Nowcasting daily minimum air and grass temperature

    Science.gov (United States)

    Savage, M. J.

    2016-02-01

    Site-specific and accurate prediction of daily minimum air and grass temperatures, made available online several hours before their occurrence, would be of significant benefit to several economic sectors and for planning human activities. Site-specific and reasonably accurate nowcasts of the daily minimum temperature several hours before its occurrence, using measured sub-hourly temperatures from earlier in the morning as model inputs, were investigated. Various temperature models were tested for their ability to accurately nowcast daily minimum temperatures 2 or 4 h before sunrise. Temperature datasets used for the model nowcasts included sub-hourly grass and grass-surface (infrared) temperatures from one location in South Africa and air temperature from four subtropical sites varying in altitude (USA and South Africa) and from one site in central sub-Saharan Africa. The nowcast models employed either exponential or square-root functions to describe the rate of nighttime temperature decrease, inverted so as to determine the minimum temperature. The models were also applied in near real-time using an open web-based system to display the nowcasts. Extrapolation algorithms for the site-specific nowcasts were also implemented in a datalogger in an innovative and mathematically consistent manner. Comparison of model 1 (exponential) nowcasts against measured daily minimum air temperatures yielded root mean square errors (RMSEs) for grass minimum temperature and the 4-h nowcasts.
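As an illustration of the inversion idea, the sketch below fits a Brunt-type square-root cooling model T(t) = a - b·sqrt(t) to early-morning observations by linear least squares and extrapolates it to the expected time of the minimum near sunrise. The function name and the synthetic data are illustrative; this is the square-root variant, not the paper's exponential "model 1".

```python
import numpy as np

def nowcast_minimum(t_obs, T_obs, t_sunrise):
    """Fit T(t) = a - b*sqrt(t) to pre-dawn temperatures (t in hours
    since cooling onset) and extrapolate to the expected time of the
    minimum near sunrise. Illustrative sketch only."""
    # Linear regression of T against [1, -sqrt(t)]
    A = np.vstack([np.ones_like(t_obs), -np.sqrt(t_obs)]).T
    a, b = np.linalg.lstsq(A, T_obs, rcond=None)[0]
    return a - b * np.sqrt(t_sunrise)

# Synthetic example: true a = 12.0 degC, b = 1.5; minimum expected 9 h after onset
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
T = 12.0 - 1.5 * np.sqrt(t)
print(round(nowcast_minimum(t, T, 9.0), 2))  # 7.5
```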

  13. Minimum-cost quantum measurements for quantum information

    OpenAIRE

    Wallden, Petros; Dunjko, Vedran; Andersson, Erika

    2014-01-01

    Knowing about optimal quantum measurements is important for many applications in quantum information and quantum communication. However, deriving optimal quantum measurements is often difficult. We present a collection of results for minimum-cost quantum measurements, and give examples of how they can be used. Among other results, we show that a minimum-cost measurement for a set of given pure states is formally equivalent to a minimum-error measurement for certain mixed states of those same ...

  14. Refractive Errors

    Science.gov (United States)

    ... does the eye focus light? In order to see clearly, light rays from an object must focus onto the ... The refractive errors are: myopia, hyperopia and astigmatism [See figures 2 and 3]. What is hyperopia (farsightedness)? Hyperopia occurs when light rays focus behind the retina (because the eye ...

  15. Medication Errors

    Science.gov (United States)


  16. Fighting discrimination.

    Science.gov (United States)

    Wientjens, Wim; Cairns, Douglas

    2012-10-01

    In the fight against discrimination, the IDF launched the first ever International Charter of Rights and Responsibilities of People with Diabetes in 2011: a balance between rights and duties to optimize health and quality of life, to enable as normal a life as possible and to reduce/eliminate the barriers which deny realization of full potential as members of society. It is extremely frustrating to suffer blanket bans and many examples exist, including insurance, driving licenses, getting a job, keeping a job and family affairs. In this article, an example is given of how pilots with insulin treated diabetes are allowed to fly by taking the responsibility of using special blood glucose monitoring protocols. At this time the systems in the countries allowing flying for pilots with insulin treated diabetes are applauded, particularly the USA for private flying, and Canada for commercial flying. Encouraging developments may be underway in the UK for commercial flying and, if this materializes, could be used as an example for other aviation authorities to help adopt similar protocols. However, new restrictions implemented by the new European Aviation Authority take existing privileges away for National Private Pilot Licence holders with insulin treated diabetes in the UK.

  17. Discrimination and Anti-discrimination in Denmark

    DEFF Research Database (Denmark)

    Olsen, Tore Vincents

    The purpose of this report is to describe and analyse Danish anti-discrimination legislation and the debate about discrimination in Denmark in order to identify present and future legal challenges. The main focus is the implementation of the EU anti-discrimination directives in Danish law...

  19. Near-infrared spectroscopy is feasible to discriminate hazelnut cultivars

    Directory of Open Access Journals (Sweden)

    Elisabetta Stella

    2013-09-01

    Full Text Available The study demonstrated the feasibility of using near-infrared (NIR) spectroscopy for hazelnut-cultivar sorting. Hazelnut spectra were acquired from 600 fruit for each cultivar sample; two diffuse reflectance spectra were acquired from opposite sides of the same hazelnut. Spectral data were transformed into absorbance before the computations. A variety of spectral pretreatments were applied to extract features for the classification. An iterative Linear Discriminant Analysis (LDA) algorithm was used to select a relatively small set of variables to correctly classify samples. The optimal group of features selected for each test was analyzed using Partial Least Squares Discriminant Analysis (PLS-DA). The spectral region most frequently chosen was the 1980-2060 nm range, which gave the best differentiation performance, with a total minimum error rate lower than 1.00%. This wavelength range is generally associated with stretching and bending of the N-H functional group of amino acids and proteins. The feasibility of using NIR spectroscopy to distinguish different hazelnut cultivars was demonstrated.
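To illustrate the kind of discrimination step involved, here is a minimal two-class Fisher LDA on synthetic "absorbance" features. It is not the paper's iterative LDA feature selection plus PLS-DA pipeline, and all data below are made up.

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher linear discriminant: w = Sw^-1 (m1 - m0),
    with a midpoint threshold on the projections. Sketch only."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    # Small ridge term for numerical stability
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    thr = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, thr

rng = np.random.default_rng(0)
# Hypothetical spectral features for two well-separated cultivars
X0 = rng.normal(0.0, 0.1, size=(100, 5))
X1 = rng.normal(0.4, 0.1, size=(100, 5))
w, thr = fisher_lda(X0, X1)
# Training error: class-0 samples above threshold or class-1 samples below it
err = np.mean(np.r_[(X0 @ w) > thr, (X1 @ w) <= thr])
print(f"training error rate: {err:.2%}")
```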

  20. Classification of hand preshaping in persons with stroke using Linear Discriminant Analysis.

    Science.gov (United States)

    Puthenveettil, Saumya; Fluet, Gerard; Qiu, Qinyin; Adamovich, Sergei

    2012-01-01

    This study describes the analysis of hand preshaping using Linear Discriminant Analysis (LDA) to predict hand formation during reaching and grasping tasks of the hemiparetic hand, following a series of upper extremity motor intervention treatments. The purpose of this study is to use classification of hand posture as an additional tool for evaluating the effectiveness of therapies for upper extremity rehabilitation, such as virtual reality (VR) therapy and conventional physical therapy. Classification error for discriminating between two objects during hand preshaping is obtained for the hemiparetic and unimpaired hands pre and post training. Eight subjects post stroke participated in a two-week training session consisting of upper extremity motor training. Four subjects trained with interactive VR computer games and four subjects trained with clinical physical therapy procedures of similar intensity. Subjects' finger joint angles were measured during a kinematic reach-to-grasp test using a CyberGlove®, and arm joint angles were measured using the trackSTAR™ system, prior to and after training. The unimpaired hand of subjects preshaped for the target object with greater accuracy than the hemiparetic hand, as indicated by lower classification errors. The hemiparetic hand improved in preshaping accuracy and in time to reach minimum error. Classification of hand preshaping may provide insight into improvements in motor performance elicited by robotically facilitated, virtually simulated training sessions or conventional physical therapy.

  1. Discrimination of binary coherent states using a homodyne detector and a photon number resolving detector

    DEFF Research Database (Denmark)

    Wittmann, Christoffer; Andersen, Ulrik Lund; Takeoka, Masahiro;

    2010-01-01

    We investigate quantum measurement strategies capable of discriminating two coherent states probabilistically with significantly smaller error probabilities than can be obtained using nonprobabilistic state discrimination. We apply a postselection strategy to the measurement data of a homodyne de...

  2. Rising above the Minimum Wage.

    Science.gov (United States)

    Even, William; Macpherson, David

    An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population…

  3. DILEMATIKA PENETAPAN UPAH MINIMUM

    Directory of Open Access Journals (Sweden)

    . Pitaya

    2015-02-01

    Full Text Available In the effort of creating an appropriate wage for employees, it is necessary to determine wages by considering the reduction of poverty without ignoring the increase of productivity, the progressivity of companies and the growth of the economy. The new minimum wages at the provincial level and the regional/municipality level have been implemented per 1st January in Indonesia since 2001. The determination of the minimum wage at the provincial level should be done 30 days before 1st January, whereas the determination of the minimum wage at the regional/municipality level should be done 40 days before 1st January. Moreover, there is an article which governs that the minimum wage will be revised annually. By considering the time of determination and the time of revision above, it can be predicted that the periods before and after the determination date will be crucial, because controversy among the parties in industrial relationships will arise. The determination of the minimum wage will always be a dilemmatic step which has to be taken by the Government. Through this policy, on one side the government attempts to attract investors; on the other side, the government also has to protect employees so that they receive an appropriate wage in accordance with the standard of living.

  4. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains the definition of medication errors, the scope of the medication error problem, the types and common causes of medication errors, and the monitoring, consequences, prevention and management of medication errors, supported by tables that are easy to understand.

  6. The Badness of Discrimination

    DEFF Research Database (Denmark)

    Lippert-Rasmussen, Kasper

    2006-01-01

    The most blatant forms of discrimination are morally outrageous and very obviously so; but the nature and boundaries of discrimination are more controversial, and it is not clear whether all forms of discrimination are morally bad; nor is it clear why objectionable cases of discrimination are bad. In this paper I address these issues. First, I offer a taxonomy of discrimination. I then argue that discrimination is bad, when it is, because it harms people. Finally, I criticize a rival, disrespect-based account according to which discrimination is bad regardless of whether it causes harm.

  7. Minimum quality standards and exports

    OpenAIRE

    2015-01-01

    This paper studies the interaction of a minimum quality standard and exports in a vertical product differentiation model when firms sell global products. If ex ante quality of foreign firms is lower (higher) than the quality of exporting firms, a mild minimum quality standard in the home market hinders (supports) exports. The minimum quality standard increases quality in both markets. A welfare maximizing minimum quality standard is always lower under trade than under autarky. A minimum quali...

  8. Minimum Mean Square Error Non-linear Transceiver Design Based on Imperfect Channel State Information

    Institute of Scientific and Technical Information of China (English)

    耿烜; 何迪

    2012-01-01

    A design method for a nonlinear transceiver with a Tomlinson-Harashima precoding (THP) structure is proposed, based on the minimum mean square error (MSE) criterion, for multiple-input multiple-output (MIMO) systems in which the transceiver has only imperfect channel state information (CSI). The MSE is derived first and then expressed as a function of a single variable, the precoding matrix. By minimizing the lower bound of the MSE using optimization and matrix theory, the optimal precoding matrix and a closed form of the lower bound are obtained, from which the complete nonlinear transceiver matrices are solved. Simulation results show that the proposed method outperforms the existing linear transceiver and the classic THP transceiver.
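For context, the linear baseline that MMSE transceiver designs start from is the classic linear MMSE (Wiener) receive filter. The sketch below shows it for a toy real-valued MIMO channel with perfect CSI and unit-power symbols; the paper's contribution, by contrast, is a nonlinear THP design under imperfect CSI.

```python
import numpy as np

def mmse_equalizer(H, sigma2):
    """Linear MMSE receive filter W = H^H (H H^H + sigma^2 I)^-1 for
    y = H x + n with unit-power symbols and noise variance sigma2.
    Baseline sketch only, not the paper's THP transceiver."""
    n_rx = H.shape[0]
    return H.conj().T @ np.linalg.inv(H @ H.conj().T + sigma2 * np.eye(n_rx))

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4)) / np.sqrt(4)   # toy 4x4 real-valued MIMO channel
x = rng.choice([-1.0, 1.0], size=4)        # BPSK symbols
sigma2 = 0.01
y = H @ x + rng.normal(scale=np.sqrt(sigma2), size=4)
x_hat = np.sign(mmse_equalizer(H, sigma2) @ y)
print(x_hat.tolist())
```

As the noise variance goes to zero, the MMSE filter approaches the zero-forcing inverse of the channel.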

  9. Taylor's series expansion search algorithm for vibration source localization based on gross-error gray discrimination

    Institute of Scientific and Technical Information of China (English)

    冯立杰; 樊瑶

    2014-01-01

    When search algorithms are used for vibration target positioning, environmental complexity and frequent interference sources make the measured time-delay differences inconsistent and can even introduce gross errors, whose probability of occurrence rises sharply; these gross errors seriously degrade the precision and convergence speed of the search algorithm. To address this problem, this research proposes a localization search algorithm based on gray discrimination of gross time-delay errors and Taylor's series expansion. The algorithm first determines the credibility of each sensor according to the signal-to-noise ratio (SNR) of its signal and the gray absolute correlation degree, then selects the three sensors with the highest credibility to perform an initial localization and estimate the source coordinates. From this estimate, the time delay for each sensor is calculated and differenced with the measured time delay, and a gray discrimination rule for gross time-delay errors is used to eliminate them. Finally, a Taylor's series iterative search determines the source location. Experimental results show that this method effectively improves the search speed, positioning precision and anti-interference ability of vibration source localization.
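The Taylor-series localization step can be sketched as a Gauss-Newton iteration on range-difference residuals. The example below uses noise-free synthetic delays, an illustrative propagation speed, and omits the paper's gray-discrimination pre-filter for gross errors.

```python
import numpy as np

def taylor_tdoa(sensors, tdoa, x0, c=343.0, iters=20):
    """Gauss-Newton (Taylor-series) iteration for 2-D TDOA localization.

    sensors: (n, 2) positions; tdoa: delays of sensors 1..n-1 relative
    to sensor 0, in seconds; x0: initial source guess. The speed c and
    all data here are illustrative.
    """
    x = np.asarray(x0, dtype=float)
    d_meas = c * np.asarray(tdoa)                 # measured range differences
    for _ in range(iters):
        r = np.linalg.norm(sensors - x, axis=1)   # distances to current guess
        f = (r[1:] - r[0]) - d_meas               # residuals
        # Jacobian of (r_i - r_0) with respect to the source position
        J = (x - sensors[1:]) / r[1:, None] - (x - sensors[0]) / r[0]
        x = x - np.linalg.lstsq(J, f, rcond=None)[0]
    return x

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 4.0])
r_true = np.linalg.norm(sensors - source, axis=1)
tdoa = (r_true[1:] - r_true[0]) / 343.0           # noise-free synthetic delays
print(taylor_tdoa(sensors, tdoa, x0=[5.0, 5.0]).round(3).tolist())  # [3.0, 4.0]
```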

  10. Socially-Tolerable Discrimination

    OpenAIRE

    J. Atsu Amegashie

    2008-01-01

    History is replete with overt discrimination of various forms. However, these forms of discrimination are not equally tolerable. For example, discrimination based on immutable or prohibitively unalterable characteristics such as race or gender is much less acceptable. Why? I develop a simple model of conflict which is driven by either racial (gender) discrimination or generational discrimination (i.e., young versus old). I show that there exist parameters of the model where racial (gender) di...

  11. Do Minimum Wages Fight Poverty?

    OpenAIRE

    David Neumark; William Wascher

    1997-01-01

    The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of the employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...

  12. Minimum fuel mode evaluation

    Science.gov (United States)

    Orme, John S.; Nobbs, Steven G.

    1995-01-01

    The minimum fuel mode of the NASA F-15 research aircraft is designed to minimize fuel flow while maintaining constant net propulsive force (FNP), effectively reducing thrust specific fuel consumption (TSFC), during cruise flight conditions. The test maneuvers were flown at stabilized flight conditions. The aircraft test engine was allowed to stabilize at the cruise conditions before data collection was initiated; data were then recorded with performance seeking control (PSC) not engaged, and then with the PSC system engaged. The maneuvers were flown back-to-back to allow direct comparisons by minimizing the effects of variations in the test day conditions. The minimum fuel mode was evaluated at subsonic and supersonic Mach numbers and focused on three altitudes: 15,000; 30,000; and 45,000 feet. Flight data were collected for part, military, partial, and maximum afterburning power conditions. The TSFC savings at supersonic Mach numbers, ranging from approximately 4% to nearly 10%, are in general much larger than at subsonic Mach numbers because of PSC trims to the afterburner.

  13. Gender Discrimination in English

    Institute of Scientific and Technical Information of China (English)

    廖敏慧

    2014-01-01

    Gender discrimination in language is usually defined as discrimination based on sex, especially discrimination against women. With the rise of the women's liberation movement in the 1960s and 1970s, and the improvement of women's social status in recent years, gender discrimination in English has attracted more and more attention. Based on previous studies, this thesis first discusses the manifestations of gender discrimination in English vocabulary and address terms, then analyzes the factors behind gender discrimination in English from social and cultural perspectives, and finally puts forward some methods for avoiding or eliminating gender discrimination in English.

  14. Minimum wages, earnings, and migration

    National Research Council Canada - National Science Library

    Boffy-Ramirez, Ernest

    2013-01-01

    Does increasing a state’s minimum wage induce migration into the state? Previous literature has shown mobility in response to welfare benefit differentials across states, yet few have examined the minimum wage as a cause of mobility...

  15. Discriminant Incoherent Component Analysis.

    Science.gov (United States)

    Georgakis, Christos; Panagakis, Yannis; Pantic, Maja

    2016-05-01

    Face images convey rich information which can be perceived as a superposition of low-complexity components associated with attributes, such as facial identity, expressions, and activation of facial action units (AUs). For instance, low-rank components characterizing neutral facial images are associated with identity, while sparse components capturing non-rigid deformations occurring in certain face regions reveal expressions and AU activations. In this paper, the discriminant incoherent component analysis (DICA) is proposed in order to extract low-complexity components, corresponding to facial attributes, which are mutually incoherent among different classes (e.g., identity, expression, and AU activation) from training data, even in the presence of gross sparse errors. To this end, a suitable optimization problem, involving the minimization of the nuclear- and l1-norms, is solved. Having found an ensemble of class-specific incoherent components by the DICA, an unseen (test) image is expressed as a group-sparse linear combination of these components, where the non-zero coefficients reveal the class(es) of the respective facial attribute(s) that it belongs to. The performance of the DICA is experimentally assessed on both synthetic and real-world data. Emphasis is placed on face analysis tasks, namely, joint face and expression recognition, face recognition under varying percentages of training data corruption, subject-independent expression recognition, and AU detection, by conducting experiments on four data sets. The proposed method outperforms all compared methods across all tasks and experimental settings.

  16. Unsupervised Linear Discriminant Analysis

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    An algorithm for unsupervised linear discriminant analysis was presented. Optimal unsupervised discriminant vectors are obtained through maximizing covariance of all samples and minimizing covariance of local k-nearest neighbor samples. The experimental results show our algorithm is effective.

  17. Airline Price Discrimination

    OpenAIRE

    Stacey, Brian

    2015-01-01

    Price discrimination enjoys a long history in the airline industry. Borenstein (1989) discusses price discrimination through frequent flyer programs from 1985 as related to the Piedmont-US Air merger, price discrimination strategies have grown in size and scope since then. From Saturday stay over requirements to varying costs based on time of purchase, the airline industry is uniquely situated to enjoy the fruits of price discrimination.

  18. Minimum Reservoir Water Level in Hydropower Dams

    Science.gov (United States)

    Sarkardeh, Hamed

    2017-07-01

    Vortex formation over intakes is an undesirable phenomenon in the water-withdrawal process from a dam reservoir. Calculating the minimum operating water level of power intakes with empirical equations is not reliable and sometimes introduces errors. Therefore, the current method for calculating the critical submergence of a power intake is to construct a scaled physical model in parallel with a numerical model. In this research, several proposed empirical relations for predicting the submergence depth of power intakes were validated against experimental data from different physical and numerical models of power intakes. Results showed that equations which involve the geometry of the intake correspond better with the experimental and numerical data.

  19. EasiPLED: An Approach to Discriminate the Causes of Packet Losses and Errors for Wireless Sensor Networks Based on Supervised Learning Theory

    Institute of Scientific and Technical Information of China (English)

    黄庭培; 陈海明; 张招亮; 崔莉

    2013-01-01

    It is well known that two kinds of causes, namely channel errors and collisions, lead to a high probability of packet losses and errors in wireless networks. The ability to discriminate between these two causes provides many opportunities for implementing highly efficient networking protocols in wireless sensor networks (WSNs). However, the limited resources of sensor nodes and the highly complex communication environment pose great challenges. This paper focuses on how to improve the accuracy of discriminating the causes of packet losses and errors with low overhead and simple implementation on sensor nodes. Based on supervised learning theory, we propose a lightweight discriminator, named EasiPLED, to differentiate the root causes of packet losses and errors with high accuracy and timeliness. EasiPLED investigates the F-BER patterns of error packets and the statistical characteristics of received packets' RSSI and LQI in different environments through extensive indoor experimental studies of packet reception. EasiPLED extracts the input features for the supervised learning model from F-BER, RSSI and LQI, and implements a low-overhead F-BER estimation method by combining control-driven and data-driven mechanisms. To mitigate the effect of noise, hardware limitations and a highly dynamic communication environment on the estimation of feature values, the paper presents an adaptive feature estimator based on an error-based filter. We model and test EasiPLED with three widely used supervised learning methods. The testing results show that EasiPLED achieves at least 79.8% accuracy. Finally, we apply EasiPLED to a probabilistic polling protocol to evaluate its performance. Experimental results show that EasiPLED improves the probability of successful polling by up to 43.5% compared to a recent method.
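As a stand-in for the supervised model, the sketch below trains a minimal logistic regression on two synthetic features loosely playing the role of RSSI and LQI. The feature distributions and labels are invented for illustration and are not the paper's data or its chosen learning methods.

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Minimal logistic regression by batch gradient descent; a toy
    stand-in for a supervised loss-cause discriminator."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)      # gradient of log-loss
    return w

rng = np.random.default_rng(2)
# Synthetic intuition only: channel-error losses (label 0) at low RSSI,
# collision losses (label 1) at higher RSSI with different LQI.
X0 = rng.normal([-90.0, 60.0], [3.0, 5.0], size=(200, 2))
X1 = rng.normal([-70.0, 80.0], [3.0, 5.0], size=(200, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(200), np.ones(200)]
X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize features
w = train_logreg(X, y)
Xb = np.hstack([X, np.ones((len(X), 1))])
acc = np.mean((Xb @ w > 0) == (y == 1))
print(f"training accuracy: {acc:.2%}")
```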

  20. Learning from minimum entropy queries in a large committee machine

    CERN Document Server

    Sollich, P

    1996-01-01

    In supervised learning, the redundancy contained in random examples can be avoided by learning from queries. Using statistical mechanics, we study learning from minimum entropy queries in a large tree-committee machine. The generalization error decreases exponentially with the number of training examples, providing a significant improvement over the algebraic decay for random examples. The connection between entropy and generalization error in multi-layer networks is discussed, and a computationally cheap algorithm for constructing queries is suggested and analysed.

  1. Popularity at Minimum Cost

    CERN Document Server

    Kavitha, Telikepalli; Nimbhorkar, Prajakta

    2010-01-01

    We consider an extension of the popular matching problem in this paper. The input to the popular matching problem is a bipartite graph G = (A ∪ B, E), where A is a set of people, B is a set of items, and each person a ∈ A ranks a subset of items in an order of preference, with ties allowed. The popular matching problem seeks to compute a matching M* between people and items such that there is no matching M where more people are happier with M than with M*. Such a matching M* is called a popular matching. However, there are simple instances where no popular matching exists. Here we consider the following natural extension to the above problem: associated with each item b ∈ B is a non-negative price cost(b); that is, for any item b, new copies of b can be added to the input graph by paying an amount of cost(b) per copy. When G does not admit a popular matching, the problem is to "augment" G at minimum cost such that the new graph admits a popular matching. We show that this problem is...

  2. Discriminately Decreasing Discriminability with Learned Image Filters

    CERN Document Server

    Whitehill, Jacob

    2011-01-01

    In machine learning and computer vision, input images are often filtered to increase data discriminability. In some situations, however, one may wish to purposely decrease discriminability of one classification task (a "distractor" task), while simultaneously preserving information relevant to another (the task-of-interest): For example, it may be important to mask the identity of persons contained in face images before submitting them to a crowdsourcing site (e.g., Mechanical Turk) when labeling them for certain facial attributes. Another example is inter-dataset generalization: when training on a dataset with a particular covariance structure among multiple attributes, it may be useful to suppress one attribute while preserving another so that a trained classifier does not learn spurious correlations between attributes. In this paper we present an algorithm that finds optimal filters to give high discriminability to one task while simultaneously giving low discriminability to a distractor task. We present r...

  3. Social Security's special minimum benefit.

    Science.gov (United States)

    Olsen, K A; Hoffmeyer, D

    Social Security's special minimum primary insurance amount (PIA) provision was enacted in 1972 to increase the adequacy of benefits for regular long-term, low-earning covered workers and their dependents or survivors. At the time, Social Security also had a regular minimum benefit provision for persons with low lifetime average earnings and their families. Concerns were rising that the low lifetime average earnings of many regular minimum beneficiaries resulted from sporadic attachment to the covered workforce rather than from low wages. The special minimum benefit was seen as a way to reward regular, low-earning workers without providing the windfalls that would have resulted from raising the regular minimum benefit to a much higher level. The regular minimum benefit was subsequently eliminated for workers reaching age 62, becoming disabled, or dying after 1981. Under current law, the special minimum benefit will phase out over time, although it is not clear from the legislative history that this was Congress's explicit intent. The phaseout results from two factors: (1) special minimum benefits are paid only if they are higher than benefits payable under the regular PIA formula, and (2) the value of the regular PIA formula, which is indexed to wages before benefit eligibility, has increased faster than that of the special minimum PIA, which is indexed to inflation. Under the Social Security Trustees' 2000 intermediate assumptions, the special minimum benefit will cease to be payable to retired workers attaining eligibility in 2013 and later. Their benefits will always be larger under the regular benefit formula. As policymakers consider Social Security solvency initiatives--particularly proposals that would reduce benefits or introduce investment risk--interest may increase in restoring some type of special minimum benefit as a targeted protection for long-term low earners. Two of the three reform proposals offered by the President's Commission to Strengthen

  4. Measurement Error with Different Computer Vision Techniques

    Science.gov (United States)

    Icasio-Hernández, O.; Curiel-Razo, Y. I.; Almaraz-Cabral, C. C.; Rojas-Ramirez, S. R.; González-Barbosa, J. J.

    2017-09-01

    The goal of this work is to offer a comparison of measurement errors for different computer vision techniques for 3D reconstruction and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry, to find the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards. We compared the results for the techniques, including average errors, standard deviations, and uncertainties, obtaining a guide to identify the tolerances that each technique can achieve and to choose the best one.
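    The summary statistics compared here (average error, standard deviation, uncertainty) can be computed as in the following sketch (illustrative readings, not the paper's data; the Type A standard uncertainty of the mean is one common choice):

```python
# Hedged sketch: summarizing repeated measurements of a known gauge into
# average error, sample standard deviation, and standard uncertainty.
import math

def summarize(measured, nominal):
    errors = [m - nominal for m in measured]
    n = len(errors)
    mean_err = sum(errors) / n
    std = math.sqrt(sum((e - mean_err) ** 2 for e in errors) / (n - 1))
    u_mean = std / math.sqrt(n)          # Type A standard uncertainty of the mean
    return mean_err, std, u_mean

# e.g. five repeated measurements of a 50 mm gauge block (invented numbers)
readings = [50.02, 49.98, 50.05, 50.01, 49.99]
mean_err, std, u = summarize(readings, 50.0)
```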

  5. MEASUREMENT ERROR WITH DIFFERENT COMPUTER VISION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    O. Icasio-Hernández

    2017-09-01

    Full Text Available The goal of this work is to offer a comparison of measurement errors for different computer vision techniques for 3D reconstruction and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry, to find the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards. We compared the results for the techniques, including average errors, standard deviations, and uncertainties, obtaining a guide to identify the tolerances that each technique can achieve and to choose the best one.

  6. Conjugate descent formulation of backpropagation error in feedforward neural networks

    Directory of Open Access Journals (Sweden)

    NK Sharma

    2009-06-01

    Full Text Available The feedforward neural network architecture uses backpropagation learning to determine optimal weights between different interconnected layers. This learning procedure applies a gradient descent technique to a sum-of-squares error function for the given input-output patterns. It employs an iterative procedure to minimise the error function for a given set of patterns by adjusting the weights of the network. The first derivatives of the error with respect to the weights identify the local error surface in the descent direction. Hence the network exhibits a different local error surface for every pattern presented to it, and the weights are iteratively modified to minimise the current local error. An optimal weight vector can be determined only when the total minimum error (the mean of the minimum local errors over all patterns in the training set) is minimised. In this paper, we present a general mathematical formulation for the second derivative of the error function with respect to the weights (which represents a conjugate descent) for arbitrary feedforward neural network topologies, and we use this derivative information to obtain the optimal weight vector. The local error is backpropagated among the units of the hidden layers via the second-order derivative of the error with respect to the weights of the hidden and output layers, independently and also in combination. The new total minimum error point may be evaluated with the help of the current total minimum error and the current minimised local error. The weight modification process is performed twice: once with respect to the present local error and once more with respect to the current total (mean) error. We present some numerical evidence that our proposed method yields better network weights than those determined via a conventional gradient descent approach.
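    The first-order baseline that the paper's second-derivative (conjugate descent) method extends can be sketched as follows (a minimal single-unit illustration, not the paper's formulation; data and learning rate are invented):

```python
# Minimal sketch of gradient descent on a sum-of-squares error: one linear
# unit, weights adjusted iteratively per pattern using first derivatives.
def train(patterns, w, b, lr=0.1, epochs=500):
    for _ in range(epochs):
        for x, target in patterns:
            y = w * x + b          # forward pass
            err = y - target       # dE/dy for E = 0.5 * (y - target)**2
            w -= lr * err * x      # first derivative dE/dw = err * x
            b -= lr * err          # first derivative dE/db = err
    return w, b

# learn y = 2x + 1 from two input-output patterns
w, b = train([(0.0, 1.0), (1.0, 3.0)], w=0.0, b=0.0)
```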

  7. Appliance Efficiency Standards and Price Discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Spurlock, Cecily Anna [Univ. of California, Berkeley, CA (United States)

    2013-05-08

    I explore the effects of two simultaneous changes in minimum energy efficiency and ENERGY STAR standards for clothes washers. Adapting the Mussa and Rosen (1978) and Ronnen (1991) second-degree price discrimination model, I demonstrate that clothes washer prices and menus adjusted to the new standards in patterns consistent with a market in which firms had been price discriminating. In particular, I show evidence of discontinuous price drops at the time the standards were imposed, driven largely by mid-low efficiency segments of the market. The price discrimination model predicts this result. In a perfectly competitive market, on the other hand, prices should increase for these market segments. Additionally, new models proliferated in the highest efficiency market segment following the standard changes. Finally, I show that firms appeared to use different adaptation strategies at the two instances of the standards changing.

  8. Discrimination of optical coherent states using a photon number resolving detector

    DEFF Research Database (Denmark)

    Wittmann, C.; Andersen, Ulrik Lund; Leuchs, G.

    2010-01-01

    The discrimination of non-orthogonal quantum states with reduced or without errors is a fundamental task in quantum measurement theory. In this work, we investigate a quantum measurement strategy capable of discriminating two coherent states probabilistically with significantly smaller error...... probabilities than can be obtained using non-probabilistic state discrimination. We find that appropriate postselection of the measurement data of a photon number resolving detector can be used to discriminate two coherent states with small error probability. We compare our new receiver to an optimal...
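    For context (not from the paper itself): the benchmark for non-probabilistic discrimination of two equally likely coherent states |α⟩ and |−α⟩ is the Helstrom bound, P_e = (1 − √(1 − |⟨α|−α⟩|²))/2 with overlap |⟨α|−α⟩|² = exp(−4|α|²), which can be evaluated directly:

```python
# Helstrom minimum error probability for two equally likely coherent
# states |alpha> and |-alpha> (standard textbook formula, equal priors).
import math

def helstrom_error(alpha):
    overlap_sq = math.exp(-4.0 * abs(alpha) ** 2)
    return 0.5 * (1.0 - math.sqrt(1.0 - overlap_sq))

print(helstrom_error(1.0))  # ~0.0046 at a mean photon number of 1
```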

  9. FUZZY ECCENTRICITY AND GROSS ERROR IDENTIFICATION

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The dominant and recessive effects produced by an exceptional interferer in a measurement system are analyzed on the basis of its response characteristics, and a gross error model of fuzzy clustering based on fuzzy relations and fuzzy equipollence relations is built. The concept and calculation formula of fuzzy eccentricity are defined to deduce an evaluation rule and function for gross errors; on this basis, a fuzzy clustering method for separating and discriminating gross errors is developed. Applied in a dynamic circular division measurement system, the method can identify and eliminate gross errors in measured data and reduce the dispersity of the measured data. Experimental results indicate that the method and model improve the repetitive precision of the system by 80% over the foregoing system, reaching 3.5 s, with an angle measurement error of less than 7 s.

  10. Design of Linear - and Minimum-phase FIR-equalizers

    DEFF Research Database (Denmark)

    Bysted, Tommy Kristensen; Jensen, K.J.; Gaunholt, Hans

    1996-01-01

    an error function which is quadratic in the filtercoefficients. The advantage of the quadratic function is the ability to find the optimal coefficients solving a system of linear equations without iterations.The transformation to a minimum-phase equalizer is carried out by homomorphic deconvolution...
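    The quadratic-error idea can be sketched as follows (an illustrative least-squares equalizer design, not the paper's code; channel, tap count, and delay are invented): because the error is quadratic in the filter coefficients, the optimal taps solve the linear normal equations CᵀC w = Cᵀd, with no iterations.

```python
# Hedged sketch: least-squares FIR equalizer via the normal equations.
# C is the channel's convolution matrix, d is the desired (delta) response.

def conv_matrix(h, taps):
    rows = len(h) + taps - 1
    return [[h[i - j] if 0 <= i - j < len(h) else 0.0 for j in range(taps)]
            for i in range(rows)]

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    w = [0.0] * n
    for r in reversed(range(n)):
        w[r] = (M[r][n] - sum(M[r][j] * w[j] for j in range(r + 1, n))) / M[r][r]
    return w

def lsq_equalizer(h, taps, delay):
    C = conv_matrix(h, taps)
    d = [1.0 if i == delay else 0.0 for i in range(len(C))]
    CtC = [[sum(C[k][i] * C[k][j] for k in range(len(C))) for j in range(taps)]
           for i in range(taps)]
    Ctd = [sum(C[k][i] * d[k] for k in range(len(C))) for i in range(taps)]
    return solve(CtC, Ctd)

# equalize a simple minimum-phase channel 1 + 0.5 z^-1
w = lsq_equalizer([1.0, 0.5], taps=8, delay=0)
```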

  11. [Survey in hospitals. Nursing errors, error culture and error management].

    Science.gov (United States)

    Habermann, Monika; Cramer, Henning

    2010-09-01

    Knowledge on errors is important to design safe nursing practice and its framework. This article presents results of a survey on this topic, including data of a representative sample of 724 nurses from 30 German hospitals. Participants predominantly remembered medication errors. Structural and organizational factors were rated as most important causes of errors. Reporting rates were considered low; this was explained by organizational barriers. Nurses in large part expressed having suffered from mental problems after error events. Nurses' perception focussing on medication errors seems to be influenced by current discussions which are mainly medication-related. This priority should be revised. Hospitals' risk management should concentrate on organizational deficits and positive error cultures. Decision makers are requested to tackle structural problems such as staff shortage.

  12. Minimum signals in classical physics

    Institute of Scientific and Technical Information of China (English)

    邓文基; 许基桓; 刘平

    2003-01-01

    The bandwidth theorem for Fourier analysis on any time-dependent classical signal is shown using the operator approach to quantum mechanics. Following discussions about squeezed states in quantum optics, the problem of minimum signals presented by a single quantity and its squeezing is proposed. It is generally proved that all such minimum signals, squeezed or not, must be real Gaussian functions of time.

  13. Discrimination against Black Students

    Science.gov (United States)

    Aloud, Ashwaq; Alsulayyim, Maryam

    2016-01-01

    Discrimination is a structured way of abusing people based on racial differences, hence barring them from accessing wealth, political participation and engagement in many spheres of human life. Racism and discrimination are inherently rooted in the institutions of society; the problem has spread across many social segments of the society including…

  14. INTERSECTIONAL DISCRIMINATION AGAINST CHILDREN

    DEFF Research Database (Denmark)

    Ravnbøl, Camilla Ida

    This paper adds a perspective to existing research on child protection by engaging in a debate on intersectional discrimination and its relationship to child protection. The paper has a twofold objective, (1) to further establish intersectionality as a concept to address discrimination against ch...... children, and (2) to illustrate the importance of addressing intersectionality within rights-based programmes of child protection....

  15. INTERSECTIONAL DISCRIMINATION AGAINST CHILDREN

    DEFF Research Database (Denmark)

    Ravnbøl, Camilla Ida

    This paper adds a perspective to existing research on child protection by engaging in a debate on intersectional discrimination and its relationship to child protection. The paper has a twofold objective, (1) to further establish intersectionality as a concept to address discrimination against...

  16. Minimum length-maximum velocity

    Science.gov (United States)

    Panes, Boris

    2012-03-01

    We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA on superluminal neutrinos.

  17. Cylindricity Error Measuring and Evaluating for Engine Cylinder Bore in Manufacturing Procedure

    Directory of Open Access Journals (Sweden)

    Qiang Chen

    2016-01-01

    Full Text Available An on-line measuring device for cylindricity error is designed based on the two-point method error separation technique (EST), which can separate spindle rotation error from measuring error. According to the principle of the measuring device, the mathematical model of the minimum zone method for cylindricity error evaluation is established. The number of optimized parameters in the objective function decreases from six to four by assuming that c is equal to zero and h is equal to one. Initial values of the optimized parameters are obtained from the least squares method and final values are acquired by the genetic algorithm. The ideal axis of the cylinder is fitted in MATLAB. Compared to the error results of the least squares method, the minimum circumscribed cylinder method, and the maximum inscribed cylinder method, the error result of the minimum zone method conforms to the theory of error evaluation. The results indicate that the method can meet the requirements of measuring and evaluating the cylindricity error of an engine cylinder bore.
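    The minimum zone objective can be sketched as below (an illustrative toy with invented data; a crude random refinement stands in for the paper's genetic algorithm). The axis is parameterized by (x0, y0, a, b), passing through (x0, y0, 0) with direction (a, b, 1), and the cylindricity error is the spread max(r) − min(r) of the radial distances of the measured points from the axis.

```python
# Hedged sketch of minimum zone cylindricity evaluation.
import math, random

def radial_spread(points, x0, y0, a, b):
    dn = math.sqrt(a * a + b * b + 1.0)
    radii = []
    for x, y, z in points:
        px, py, pz = x - x0, y - y0, z      # vector from a point on the axis
        # distance from point to axis line = |p x d| / |d|, with d = (a, b, 1)
        cx = py * 1.0 - pz * b
        cy = pz * a - px * 1.0
        cz = px * b - py * a
        radii.append(math.sqrt(cx * cx + cy * cy + cz * cz) / dn)
    return max(radii) - min(radii)

def minimum_zone(points, iters=2000, seed=0):
    rng = random.Random(seed)
    best = [0.0, 0.0, 0.0, 0.0]             # least-squares-style initial axis
    best_err = radial_spread(points, *best)
    step = 0.1
    for _ in range(iters):
        cand = [p + rng.uniform(-step, step) for p in best]
        err = radial_spread(points, *cand)
        if err < best_err:
            best, best_err = cand, err
        step *= 0.999                        # slowly narrow the search
    return best_err

# perfect cylinder of radius 5 whose axis is offset from the z axis
pts = [(0.2 + 5 * math.cos(t), -0.1 + 5 * math.sin(t), z)
       for z in (0.0, 10.0, 20.0) for t in [k * math.pi / 6 for k in range(12)]]
err0 = radial_spread(pts, 0.0, 0.0, 0.0, 0.0)  # naive axis through the origin
err = minimum_zone(pts)                         # refined axis: smaller spread
```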

  18. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  19. Against a Minimum Voting Age

    OpenAIRE

    Cook, Philip

    2013-01-01

    A minimum voting age is defended as the most effective and least disrespectful means of ensuring all members of an electorate are sufficiently competent to vote. Whilst it may be reasonable to require competency from voters, a minimum voting age should be rejected because its view of competence is unreasonably controversial, it is incapable of defining a clear threshold of sufficiency and an alternative test is available which treats children more respectfully. This alternative is a procedura...

  20. FET frequency discriminator

    Science.gov (United States)

    Mawhinney, F. D.

    1982-03-01

    The FET Frequency Discriminator is an experimental microwave frequency discriminator developed for use in a specialized set-on VCO frequency memory system. Additional development and evaluation work has been done during this program to more fully determine the applicability of the FET frequency discriminator as a low-cost, expendable receiver front-end for both surveillance and ECM systems. Various methods for adjusting the frequency-to-voltage characteristic of the discriminator as well as the effects of detector characteristics and ambient temperature changes were evaluated. A number of discriminators for use in the 7- to 11-GHz and the 11- to 18-GHz bands were fabricated and tested. Interim breadboard and final packaged models were either delivered or installed in developmental frequency systems. The major limitations and deficiencies of the FET frequency discriminator that were reviewed during the program include the effects of temperature, input power level variations, nonlinearity, and component repeatability. Additional effort will be required to advance the developmental status of the FET frequency discriminator to the level necessary for inclusion in low-cost receiver systems, but the basic simplicity of the approach continues to show much promise.

  1. Classification of Spreadsheet Errors

    OpenAIRE

    Rajalingham, Kamalasen; Chadwick, David R.; Knight, Brian

    2008-01-01

    This paper describes a framework for a systematic classification of spreadsheet errors. This classification or taxonomy of errors is aimed at facilitating analysis and comprehension of the different types of spreadsheet errors. The taxonomy is an outcome of an investigation of the widespread problem of spreadsheet errors and an analysis of specific types of these errors. This paper contains a description of the various elements and categories of the classification and is supported by appropri...

  2. Robustification and Optimization in Repetitive Control For Minimum Phase and Non-Minimum Phase Systems

    Science.gov (United States)

    Prasitmeeboon, Pitcha

    repetitive control FIR compensator. The aim is to reduce the final error level by using real-time frequency response model updates to successively increase the cutoff frequency, each time creating the improved model needed to produce convergence to zero error up to the higher cutoff. Non-minimum phase systems present a difficult design challenge to the sister field of Iterative Learning Control. The third topic investigates to what extent the same challenges appear in RC. One challenge is that the intrinsic non-minimum phase zero mapped from continuous time is close to the pole of the repetitive controller at +1, creating behavior similar to pole-zero cancellation. The near pole-zero cancellation causes slow learning at DC and low frequencies. The Min-Max cost function over the learning rate is presented. The Min-Max can be reformulated as a Quadratically Constrained Linear Programming problem. This approach is shown to be an RC design approach that addresses the main challenge of non-minimum phase systems to have a reasonable learning rate at DC. Although it was illustrated that using the Min-Max objective improves learning at DC and low frequencies compared to other designs, the method requires model accuracy at high frequencies. In the real world, models usually have error at high frequencies. The fourth topic addresses how one can merge the quadratic penalty into the Min-Max cost function to increase robustness at high frequencies. The topic also considers limiting the Min-Max optimization to some frequency interval and applying an FIR zero-phase low-pass filter to cut off the learning for frequencies above that interval.

  3. Discrete Discriminant analysis based on tree-structured graphical models

    DEFF Research Database (Denmark)

    Perez de la Cruz, Gonzalo; Eslava, Guillermina

    The purpose of this paper is to illustrate the potential use of discriminant analysis based on tree-structured graphical models for discrete variables. This is done by comparing its empirical performance using estimated error rates for real and simulated data. The results show that discriminant...... analysis based on tree-structured graphical models is a simple nonlinear method competitive with, and sometimes superior to, other well-known linear methods like those assuming mutual independence between variables and linear logistic regression....

  4. Discrete Discriminant analysis based on tree-structured graphical models

    DEFF Research Database (Denmark)

    Perez de la Cruz, Gonzalo; Eslava, Guillermina

    The purpose of this paper is to illustrate the potential use of discriminant analysis based on tree-structured graphical models for discrete variables. This is done by comparing its empirical performance using estimated error rates for real and simulated data. The results show that discriminant a...... analysis based on tree-structured graphical models is a simple nonlinear method competitive with, and sometimes superior to, other well-known linear methods like those assuming mutual independence between variables and linear logistic regression....

  5. Legal consequences of the moral duty to report errors.

    Science.gov (United States)

    Hall, Jacqulyn Kay

    2003-09-01

    Increasingly, clinicians are under a moral duty to report errors to the patients who are injured by such errors. The sources of this duty are identified, and its probable impact on malpractice litigation and criminal law is discussed. The potential consequences of enforcing this new moral duty as a minimum in law are noted. One predicted consequence is that the trend will be accelerated toward government payment of compensation for errors. The effect of truth-telling on individuals is discussed.

  6. Error-resilient DNA computation

    Energy Technology Data Exchange (ETDEWEB)

    Karp, R.M.; Kenyon, C.; Waarts, O. [Univ. of California, Berkeley, CA (United States)

    1996-12-31

    The DNA model of computation, with test tubes of DNA molecules encoding bit sequences, is based on three primitives, Extract-A-Bit, which splits a test tube into two test tubes according to the value of a particular bit x, Merge-Two-Tubes and Detect-Emptiness. Perfect operations can test the satisfiability of any boolean formula in linear time. However, in reality the Extract operation is faulty; it misclassifies a certain proportion of the strands. We consider the following problem: given an algorithm based on perfect Extract, Merge and Detect operations, convert it to one that works correctly with high probability when the Extract operation is faulty. The fundamental problem in such a conversion is to construct a sequence of faulty Extracts and perfect Merges that simulates a highly reliable Extract operation. We first determine (up to a small constant factor) the minimum number of faulty Extract operations inherently required to simulate a highly reliable Extract operation. We then go on to derive a general method for converting any algorithm based on error-free operations to an error-resilient one, and give optimal error-resilient algorithms for realizing simple n-variable boolean functions such as Conjunction, Disjunction and Parity.
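    The core idea of boosting a faulty Extract into a reliable one can be illustrated with a toy simulation (invented parameters; the paper's actual constructions are more refined and count operations more carefully): re-extract each strand several times and route it by majority vote.

```python
# Hedged toy simulation: a faulty Extract misclassifies a strand with
# probability eps; majority voting over k independent extractions
# sharply reduces the effective error rate.
import random

def faulty_extract_bit(bit, eps, rng):
    """Return the observed value of `bit`, wrong with probability eps."""
    return bit if rng.random() >= eps else 1 - bit

def reliable_extract_bit(bit, eps, k, rng):
    """Majority vote over k independent faulty extractions."""
    ones = sum(faulty_extract_bit(bit, eps, rng) for _ in range(k))
    return 1 if ones * 2 > k else 0

rng = random.Random(42)
eps, k, n = 0.1, 5, 20000
faulty_errs = sum(faulty_extract_bit(1, eps, rng) == 0 for _ in range(n))
boosted_errs = sum(reliable_extract_bit(1, eps, k, rng) == 0 for _ in range(n))
# expected error rates: about 0.1 for a single extract vs about 0.009
# for the 5-fold majority (binomial tail), at the cost of more operations
```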

  7. Discriminative power of visual attributes in dermatology.

    Science.gov (United States)

    Giotis, Ioannis; Visser, Margaretha; Jonkman, Marcel; Petkov, Nicolai

    2013-02-01

    Visual characteristics such as color and shape of skin lesions play an important role in the diagnostic process. In this contribution, we quantify the discriminative power of such attributes using an information theoretical approach. We estimate the probability of occurrence of each attribute as a function of the skin diseases. We use the distribution of this probability across the studied diseases and its entropy to define the discriminative power of the attribute. The discriminative power has a maximum value for attributes that occur (or do not occur) for only one disease and a minimum value for those which are equally likely to be observed among all diseases. Verrucous surface, red and brown colors, and the presence of more than 10 lesions are among the most informative attributes. A ranking of attributes is also carried out and used together with a naive Bayesian classifier, yielding results that confirm the soundness of the proposed method. The proposed measure is proven to be a reliable way of assessing the discriminative power of dermatological attributes, and it also helps generate a condensed dermatological lexicon. Therefore, it can be of added value to the manual or computer-aided diagnostic process. © 2012 John Wiley & Sons A/S.
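    An entropy-based measure of this kind can be sketched as follows (illustrative numbers, not the paper's data): discriminative power is maximal when an attribute's probability mass concentrates on a single disease and zero when it is uniform across all diseases.

```python
# Hedged sketch: discriminative power as maximum entropy minus the entropy
# of the attribute's probability distribution across diseases.
import math

def discriminative_power(p):
    """log2(len(p)) minus the entropy of the normalized distribution p."""
    total = sum(p)
    probs = [x / total for x in p if x > 0]
    entropy = -sum(q * math.log2(q) for q in probs)
    return math.log2(len(p)) - entropy

# attribute occurring for only one of four diseases vs equally for all four
print(discriminative_power([1.0, 0.0, 0.0, 0.0]))      # -> 2.0 (maximal)
print(discriminative_power([0.25, 0.25, 0.25, 0.25]))  # -> 0.0 (uninformative)
```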

  8. Chiral discrimination in biomimetic systems: Phenylalanine

    Indian Academy of Sciences (India)

    K Thirumoorthy; K Soni; T Arun; N Nandi

    2007-09-01

    Chiral discrimination and recognition are important in peptide biosynthesis, amino acid synthesis and drug design. Detailed structural information is available about peptide synthesis in the ribosome. However, no detailed study is available about the discrimination in peptide synthesis. We study the conformational energy variation of the neutral methoxy phenylalanine molecule as a function of its different dihedral angles to locate the minimum energy conformation using quantum chemical theory. We compared the intermolecular energy surfaces of the phenylalanine molecule in its neutral and zwitterionic states using quantum chemical theory as a function of distance and mutual orientation. The energy surfaces are studied with rigid geometry by varying the distance and orientation. The potential energy surfaces of homochiral and heterochiral pairs are found to be dissimilar and reflect the underlying chirality of the homochiral pair and the racemic nature of the heterochiral pair. The intermolecular energy surface of the homochiral pair is more favourable than the corresponding energy surface of the heterochiral pair.

  9. Minimum Q Electrically Small Antennas

    DEFF Research Database (Denmark)

    Kim, O. S.

    2012-01-01

    Theoretically, the minimum radiation quality factor Q of an isolated resonance can be achieved in a spherical electrically small antenna by combining TM1m and TE1m spherical modes, provided that the stored energy in the antenna spherical volume is totally suppressed. Using closed-form expressions for the stored energies obtained through the vector spherical wave theory, it is shown that a magnetic-coated metal core reduces the internal stored energy of both TM1m and TE1m modes simultaneously, so that a self-resonant antenna with the Q approaching the fundamental minimum is created. Numerical results for a multiarm spherical helix antenna confirm the theoretical predictions. For example, a 4-arm spherical helix antenna with a magnetic-coated perfectly electrically conducting core (ka=0.254) exhibits the Q of 0.66 times the Chu lower bound, or 1.25 times the minimum Q.

  10. Reducing medication errors.

    Science.gov (United States)

    Nute, Christine

    2014-11-25

    Most nurses are involved in medicines management, which is integral to promoting patient safety. Medicines management is prone to errors, which depending on the error can cause patient injury, increased hospital stay and significant legal expenses. This article describes a new approach to help minimise drug errors within healthcare settings where medications are prescribed, dispensed or administered. The acronym DRAINS, which considers all aspects of medicines management before administration, was devised to reduce medication errors on a cardiothoracic intensive care unit.

  11. Kernel methods and minimum contrast estimators for empirical deconvolution

    CERN Document Server

    Delaigle, Aurore

    2010-01-01

    We survey classical kernel methods for providing nonparametric solutions to problems involving measurement error. In particular we outline kernel-based methodology in this setting, and discuss its basic properties. Then we point to close connections that exist between kernel methods and much newer approaches based on minimum contrast techniques. The connections are through use of the sinc kernel for kernel-based inference. This `infinite order' kernel is not often used explicitly for kernel-based deconvolution, although it has received attention in more conventional problems where measurement error is not an issue. We show that in a comparison between kernel methods for density deconvolution, and their counterparts based on minimum contrast, the two approaches give identical results on a grid which becomes increasingly fine as the bandwidth decreases. In consequence, the main numerical differences between these two techniques are arguably the result of different approaches to choosing smoothing parameters.

  12. Demand Forecasting Errors

    OpenAIRE

    Mackie, Peter; Nellthorp, John; Laird, James

    2005-01-01

    Demand forecasts form a key input to the economic appraisal. As such any errors present within the demand forecasts will undermine the reliability of the economic appraisal. The minimization of demand forecasting errors is therefore important in the delivery of a robust appraisal. This issue is addressed in this note by introducing the key issues, and error types present within demand fore...

  13. When errors are rewarding

    NARCIS (Netherlands)

    Bruijn, E.R.A. de; Lange, F.P. de; Cramon, D.Y. von; Ullsperger, M.

    2009-01-01

    For social beings like humans, detecting one's own and others' errors is essential for efficient goal-directed behavior. Although one's own errors are always negative events, errors from other persons may be negative or positive depending on the social context. We used neuroimaging to disentangle br

  14. Error estimation in the direct state tomography

    Science.gov (United States)

    Sainz, I.; Klimov, A. B.

    2016-10-01

    We show that reformulating the Direct State Tomography (DST) protocol in terms of projections into a set of non-orthogonal bases one can perform an accuracy analysis of DST in a similar way as in the standard projection-based reconstruction schemes, i.e., in terms of the Hilbert-Schmidt distance between estimated and true states. This allows us to determine the estimation error for any measurement strength, including the weak measurement case, and to obtain an explicit analytic form for the average minimum square errors.
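    The accuracy figure used here is the Hilbert-Schmidt distance between the estimated and true density matrices; its squared form, d² = Tr[(ρ_est − ρ_true)²], can be evaluated directly (toy 2×2 example with invented matrices):

```python
# Squared Hilbert-Schmidt distance between two density matrices,
# Tr[(rho_est - rho_true)^2], using plain nested lists for a 2x2 case.
def hs_distance_sq(rho_est, rho_true):
    n = len(rho_est)
    diff = [[rho_est[i][j] - rho_true[i][j] for j in range(n)] for i in range(n)]
    # Tr(D @ D) expanded as sum over matrix elements
    return sum(diff[i][k] * diff[k][i] for i in range(n) for k in range(n)).real

rho_true = [[1.0 + 0j, 0j], [0j, 0j]]                     # pure state |0><0|
rho_est  = [[0.9 + 0j, 0.1 + 0j], [0.1 + 0j, 0.1 + 0j]]   # noisy estimate
print(hs_distance_sq(rho_est, rho_true))  # -> ~0.04
```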

  15. On the assimilation-discrimination relationship in American English adults' French vowel learning.

    Science.gov (United States)

    Levy, Erika S

    2009-11-01

    A quantitative "cross-language assimilation overlap" method for testing predictions of the Perceptual Assimilation Model (PAM) was implemented to compare results of a discrimination experiment with the listeners' previously reported assimilation data. The experiment examined discrimination of Parisian French (PF) front rounded vowels /y/ and /oe/. Three groups of American English listeners differing in their French experience (no experience [NoExp], formal experience [ModExp], and extensive formal-plus-immersion experience [HiExp]) performed discrimination of PF /y-u/, /y-o/, /oe-o/, /oe-u/, /y-i/, /y-epsilon/, /oe-epsilon/, /oe-i/, /y-oe/, /u-i/, and /a-epsilon/. Vowels were in bilabial /rabVp/ and alveolar /radVt/ contexts. More errors were found for PF front vs back rounded vowel pairs (16%) than for PF front unrounded vs rounded pairs (2%). Overall, ModExp listeners did not perform more accurately (11% errors) than NoExp listeners (13% errors). Extensive immersion experience, however, was associated with fewer errors (3%) than formal experience alone, although discrimination of PF /y-u/ remained relatively poor (12% errors) for HiExp listeners. More errors occurred on pairs involving front vs back rounded vowels in alveolar context (20% errors) than in bilabial (11% errors). Significant correlations were revealed between listeners' assimilation overlap scores and their discrimination errors, suggesting that the PAM may be extended to second-language (L2) vowel learning.

  16. Minimum Thermal Conductivity of Superlattices

    Energy Technology Data Exchange (ETDEWEB)

    Simkin, M. V.; Mahan, G. D.

    2000-01-31

    The phonon thermal conductivity of a multilayer is calculated for transport perpendicular to the layers. There is a crossover between particle transport for thick layers to wave transport for thin layers. The calculations show that the conductivity has a minimum value for a layer thickness somewhat smaller than the mean free path of the phonons. (c) 2000 The American Physical Society.

  17. Minimum landing size for bream (Abramis brama)

    NARCIS (Netherlands)

    Hal, van R.; Miller, D.C.M.

    2016-01-01

    To support a decision on a minimum landing size for bream, primarily for the IJsselmeer and Markermeer, the Dutch Ministry of Economic Affairs asked IMARES to provide an overview of landing sizes for bream in other countries and, where possible, the motivation behind these

  18. Coupling between minimum scattering antennas

    DEFF Research Database (Denmark)

    Andersen, J.; Lessow, H; Schjær-Jacobsen, Hans

    1974-01-01

    Coupling between minimum scattering antennas (MSA's) is investigated by the coupling theory developed by Wasylkiwskyj and Kahn. Only rotationally symmetric power patterns are considered, and graphs of relative mutual impedance are presented as a function of distance and pattern parameters. Crossed...

  19. Learning discriminative dictionary for group sparse representation.

    Science.gov (United States)

    Sun, Yubao; Liu, Qingshan; Tang, Jinhui; Tao, Dacheng

    2014-09-01

    In recent years, sparse representation has been widely used in object recognition applications. How to learn the dictionary is a key issue in sparse representation. A popular method is to use the l1 norm as the sparsity measure on the representation coefficients for dictionary learning. However, the l1 norm treats each atom in the dictionary independently, so the learned dictionary cannot capture the multi-subspace structural information of the data well. In addition, the learned subdictionaries for different classes usually share some common atoms, which weakens the discriminative ability of each subdictionary's reconstruction error. This paper presents a new dictionary learning model to improve sparse representation for image classification, which aims to learn a class-specific subdictionary for each class and a common subdictionary shared by all classes. The model is composed of a discriminative fidelity term, a weighted group sparse constraint, and a subdictionary incoherence term. The discriminative fidelity term encourages each class-specific subdictionary to sparsely represent the samples in the corresponding class. The weighted group sparse constraint aims at capturing the structural information of the data. The subdictionary incoherence term makes all subdictionaries as independent as possible. Because the common subdictionary represents features shared by all classes, only the reconstruction error of each class-specific subdictionary is used for classification. Extensive experiments are conducted on several public image databases, and the experimental results demonstrate the power of the proposed method compared with the state of the art.
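
    The classification rule described above (assign a sample to the class whose class-specific subdictionary reconstructs it best) can be sketched as follows. Plain least squares stands in for the paper's weighted group sparse coding step, and the dictionaries here are random toys:

```python
import numpy as np

def classify_by_reconstruction_error(y, subdicts):
    """Assign y to the class whose subdictionary reconstructs it best.

    subdicts: list of (d, k_c) arrays, one per class. Coefficients are
    obtained here by ordinary least squares; the paper instead uses a
    weighted group sparse coding step (assumption: least squares stand-in).
    """
    errors = []
    for D in subdicts:
        x, *_ = np.linalg.lstsq(D, y, rcond=None)
        errors.append(np.linalg.norm(y - D @ x))
    return int(np.argmin(errors))

rng = np.random.default_rng(0)
D0 = rng.standard_normal((20, 5))   # class-0 subdictionary (toy)
D1 = rng.standard_normal((20, 5))   # class-1 subdictionary (toy)
y = D1 @ rng.standard_normal(5)     # sample lying in class 1's span
print(classify_by_reconstruction_error(y, [D0, D1]))  # → 1
```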

  20. Systematic error revisited

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C.

    1996-08-05

    The American National Standards Institute (ANSI) defines systematic error as "an error which remains constant over replicative measurements." It would seem from the ANSI definition that a systematic error is not really an error at all; it is merely a failure to calibrate the measurement system properly, because if the error is constant, why not simply correct for it? Yet systematic errors undoubtedly exist, and they differ in some fundamental way from the kind of errors we call random. Early papers by Eisenhart and by Youden discussed systematic versus random error with regard to measurements in the physical sciences, but not in a fundamental way, and the distinction remains clouded by controversy. The lack of general agreement on definitions has led to a plethora of different and often confusing methods for quantifying the total uncertainty of a measurement that incorporates both its systematic and random errors. Some assert that systematic error should be treated by non-statistical methods. We disagree with this approach; we provide basic definitions based on entropy concepts, and a statistical methodology for combining errors and making statements of total measurement uncertainty. We illustrate our methods with radiometric assay data.
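
    As a point of contrast for the entropy-based methodology the authors propose, the conventional statistical treatment combines independent random and systematic components in quadrature. A minimal sketch (the numbers are illustrative, and this is the standard GUM-style rule, not the record's method):

```python
import math

def combined_uncertainty(u_random, u_systematic):
    """Root-sum-of-squares combination of independent error components.

    This is the conventional GUM-style rule, shown only for contrast;
    the record's authors argue for an entropy-based statistical method."""
    return math.sqrt(u_random ** 2 + u_systematic ** 2)

print(round(combined_uncertainty(0.3, 0.4), 6))  # → 0.5
```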

  1. Cylindricity Error Measuring and Evaluating for Engine Cylinder Bore in Manufacturing Procedure

    OpenAIRE

    Qiang Chen; Xueheng Tao; Jinshi Lu; Xuejun Wang

    2016-01-01

    An on-line measuring device for cylindricity error is designed based on the two-point-method error separation technique (EST), which can separate the spindle rotation error from the measuring error. According to the principle of the measuring device, a mathematical model of the minimum zone method for cylindricity error evaluation is established. The number of optimized parameters in the objective function is reduced from six to four by assuming that c equals zero and h equals one. Initial values of optimized parameters...

  2. About accuracy of the discrimination parameter estimation for the dual high-energy method

    Science.gov (United States)

    Osipov, S. P.; Chakhlov, S. V.; Osipov, O. S.; Shtein, A. M.; Strugovtsev, D. V.

    2015-04-01

    A set of mathematical formulas is given for estimating the accuracy of the discrimination parameters for two implementations of the dual high-energy method: by effective atomic number and by level lines. The hardware parameters that influence the accuracy of the discrimination parameters are stated. Recommendations on how to form the structure of the high-energy X-ray radiation pulses are formulated. To prove the applicability of the proposed procedure, the statistical errors of the discrimination parameters were calculated for the cargo inspection system of Tomsk Polytechnic University based on the portable betatron MIB-9. A comparison of the experimental and theoretical estimates of the discrimination parameter errors was carried out; it proved the practical applicability of the algorithm for estimating the discrimination parameter errors of the dual high-energy method.

  3. A Computational Discriminability Analysis on Twin Fingerprints

    Science.gov (United States)

    Liu, Yu; Srihari, Sargur N.

    Sharing similar genetic traits makes the investigation of twins an important study in forensics and biometrics. Fingerprints are one of the most commonly found types of forensic evidence. The similarity between twins’ prints is critical to establishing the reliability of fingerprint identification. We present a quantitative analysis of the discriminability of twin fingerprints on a new data set (227 pairs of identical and fraternal twins) recently collected from a twin population, using both level 1 and level 2 features. Although the patterns of minutiae among twins are more similar than in the general population, the similarity of twins’ fingerprints is significantly different from that between genuine prints of the same finger. Twin fingerprints are discriminable, with a 1.5%-1.7% higher equal error rate (EER) than non-twins, and identical twins can be distinguished by fingerprint examination with a slightly higher error rate than fraternal twins.
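
    The equal error rate (EER) quoted above is the operating point where the false accept and false reject rates coincide. A minimal sketch of estimating it from match scores (the score distributions below are synthetic, not the study's data):

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """EER: threshold where false accept rate meets false reject rate.
    Scores are similarity scores (higher = more likely same finger)."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, best_eer = np.inf, 0.0
    for t in thresholds:
        far = np.mean(impostor_scores >= t)   # impostors wrongly accepted
        frr = np.mean(genuine_scores < t)     # genuine wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

rng = np.random.default_rng(1)
genuine  = rng.normal(0.7, 0.1, 1000)   # toy same-finger match scores
impostor = rng.normal(0.4, 0.1, 1000)   # toy different-finger match scores
print(equal_error_rate(genuine, impostor))
```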

  4. Discrimination of Wild Tea Germplasm Resources (Camellia sp.) Using RAPD Markers

    Institute of Scientific and Technical Information of China (English)

    CHEN Liang; WANG Ping-sheng; Yamaguchi Satoshi

    2002-01-01

    Discrimination of 24 wild tea germplasm resources (Camellia sp.) using RAPD markers was conducted. The results showed that RAPD markers are a very effective tool for wild tea germplasm discrimination. There were three independent ways to discriminate the tea germplasms: a) unique RAPD markers, b) specific band patterns, and c) a combination of the band patterns, or DNA fingerprints, provided by different primers. The presence of 16 unique RAPD markers and the absence of 3 unique markers, obtained from 12 primers, made it possible to discriminate 14 germplasms. The unique band patterns of primer OPO-13 could discriminate 10 tea germplasms. Using a minimum number of primers to obtain maximum discrimination capacity was of particular importance. All 24 wild tea germplasms could be discriminated easily and entirely by the combination of band patterns, or DNA fingerprinting, obtained from OPO-13, OPO-18, OPG-12 and OPA-13, including two wild tea trees with very similar morphological characteristics and chemical components.
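
    Picking a minimum set of primers whose combined band patterns separate every pair of germplasms is an instance of set cover. A greedy sketch (primer names follow the record, but the band-pattern codes and sample labels are hypothetical):

```python
import itertools

def discriminates(bands, primer, a, b):
    """A primer separates two germplasms if their band patterns differ."""
    return bands[primer][a] != bands[primer][b]

def greedy_minimum_primers(bands, samples):
    """Greedily pick primers until every pair of samples is separated
    by at least one chosen primer's band pattern (toy set-cover sketch)."""
    pairs = set(itertools.combinations(samples, 2))
    chosen = []
    while pairs:
        best = max(bands, key=lambda p: sum(discriminates(bands, p, a, b)
                                            for a, b in pairs))
        covered = {(a, b) for a, b in pairs if discriminates(bands, best, a, b)}
        if not covered:
            raise ValueError("remaining pairs cannot be discriminated")
        chosen.append(best)
        pairs -= covered
    return chosen

# Hypothetical band-pattern codes: primer -> sample -> pattern id
bands = {
    "OPO-13": {"A": 1, "B": 2, "C": 1, "D": 1},
    "OPO-18": {"A": 1, "B": 1, "C": 2, "D": 1},
    "OPG-12": {"A": 1, "B": 1, "C": 1, "D": 2},
}
print(greedy_minimum_primers(bands, ["A", "B", "C", "D"]))
```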

  5. A note on the stability and discriminability of graph-based features for classification problems in digital pathology

    Science.gov (United States)

    Cruz-Roa, Angel; Xu, Jun; Madabhushi, Anant

    2015-01-01

    Nuclear architecture, or the spatial arrangement of individual cancer nuclei on histopathology images, has been shown to be associated with different grades and differential risk for a number of solid tumors such as breast, prostate, and oropharyngeal. Graph-based representations of individual nuclei (nuclei representing the graph nodes) allow for mining of quantitative metrics to describe tumor morphology. These graph features can be broadly categorized into global and local depending on the type of graph construction method. While a number of local graph (e.g. Cell Cluster Graphs) and global graph (e.g. Voronoi, Delaunay Triangulation, Minimum Spanning Tree) features have been shown to be associated with cancer grade, risk, and outcome for different cancer types, the sensitivity of the preceding segmentation algorithms in identifying individual nuclei can have a significant bearing on the discriminability of the resultant features. This raises the question of which features, while being discriminative of cancer grade and aggressiveness, are also the most resilient to segmentation errors. These properties are particularly desirable in the context of digital pathology images, where the method of slide preparation, staining, and type of nuclear segmentation algorithm employed can all dramatically affect the quality of the nuclear graphs and corresponding features. In this paper we evaluated the trade-off between discriminability and stability of both global and local graph-based features in conjunction with a few different segmentation algorithms and in the context of two different histopathology image datasets of breast cancer from whole-slide images (WSI) and tissue microarrays (TMA). Specifically, in this paper we investigate a few different performance measures including stability, discriminability, and the stability vs discriminability trade-off, all of which are based on p-values from the Kruskal-Wallis one-way analysis of variance for local and global
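
    Among the global graph features named above, Minimum Spanning Tree statistics are the easiest to sketch: build the MST over nuclear centroids and summarize its edge lengths. A toy illustration using Prim's algorithm on synthetic centroid coordinates:

```python
import numpy as np

def mst_edge_lengths(points):
    """Prim's algorithm on the complete Euclidean graph of centroids.
    Returns the n-1 MST edge lengths, from which global features
    (e.g. mean and std of edge length) are commonly derived."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    dist = d[0].copy()              # cheapest connection of each node to tree
    edges = []
    for _ in range(n - 1):
        j = np.argmin(np.where(in_tree, np.inf, dist))
        edges.append(dist[j])
        in_tree[j] = True
        dist = np.minimum(dist, d[j])
    return np.array(edges)

# Synthetic "nuclei": a tight cluster of three plus one distant outlier
pts = np.array([[0., 0.], [1., 0.], [1., 1.], [5., 5.]])
lengths = mst_edge_lengths(pts)
print(sorted(lengths))  # two unit edges in the cluster, one long edge to the outlier
```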

  6. Understanding the Minimum Wage: Issues and Answers.

    Science.gov (United States)

    Employment Policies Inst. Foundation, Washington, DC.

    This booklet, which is designed to clarify facts regarding the minimum wage's impact on marketplace economics, contains a total of 31 questions and answers pertaining to the following topics: relationship between minimum wages and poverty; impacts of changes in the minimum wage on welfare reform; and possible effects of changes in the minimum wage…

  7. 5 CFR 551.301 - Minimum wage.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum wage. 551.301 Section 551.301... FAIR LABOR STANDARDS ACT Minimum Wage Provisions Basic Provision § 551.301 Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...

  8. Quantum mechanics the theoretical minimum

    CERN Document Server

    Susskind, Leonard

    2014-01-01

    From the bestselling author of The Theoretical Minimum, an accessible introduction to the math and science of quantum mechanics. Quantum Mechanics is a (second) book for anyone who wants to learn how to think like a physicist. In this follow-up to the bestselling The Theoretical Minimum, physicist Leonard Susskind and data engineer Art Friedman offer a first course in the theory and associated mathematics of the strange world of quantum mechanics. Quantum Mechanics presents Susskind and Friedman’s crystal-clear explanations of the principles of quantum states, uncertainty and time dependence, entanglement, and particle and wave states, among other topics. An accessible but rigorous introduction to a famously difficult topic, Quantum Mechanics provides a tool kit for amateur scientists to learn physics at their own pace.

  9. Minimum Bias Trigger in ATLAS

    CERN Document Server

    Kwee, R E; The ATLAS collaboration

    2010-01-01

    Since the restart of the LHC in November 2009, ATLAS has collected inelastic pp-collisions to perform first measurements of charged particle densities. These measurements will help to constrain various models that phenomenologically describe soft parton interactions. Understanding the trigger efficiencies for different event types is therefore crucial to minimize any possible bias in the event selection. ATLAS uses two main minimum bias triggers, featuring complementary detector components and trigger levels. While a hardware-based first trigger level situated in the forward regions with 2.09 < |eta| < 3.8 has proven to select pp-collisions very efficiently, the Inner Detector-based minimum bias trigger uses a random seed on filled bunches and the central tracking detectors for the event selection. Both triggers were essential for the analysis of kinematic spectra of charged particles. Their performance and trigger efficiency measurements as well as studies on possible bias sources will be presen...

  10. Aptitude Tests and Discrimination

    Science.gov (United States)

    Coupland, D. E.

    1970-01-01

    Explains why in the United States the feeling is increasing that much of the aptitude testing now being done discriminates against minority group members seeking employment. Skeptical of eliminating the discriminatory aspects of testing, the article raises the question of eliminating testing itself. (DM)

  11. A Lesson in Discrimination.

    Science.gov (United States)

    Chotiner, Barbara; Hameroff-Cohen, Wendy

    1994-01-01

    Public high school students with deafness vividly learned about the realities of discrimination when they were informed of "new rules for deaf students," which required that they wear "deaf badges" in school, follow a strict dress code, and so on. After the "new rules" hoax was revealed, students' feelings and reactions to the situation were…

  12. Color measurement and discrimination

    Science.gov (United States)

    Wandell, B. A.

    1985-01-01

    Theories of color measurement attempt to provide a quantitative means for predicting whether two lights will be discriminable to an average observer. All color measurement theories can be characterized as follows: suppose lights a and b evoke responses from three color channels characterized as vectors, v(a) and v(b); the vector difference v(a) - v(b) corresponds to a set of channel responses that would be generated by some real light, call it *. According to theory, a and b will be discriminable when * is detectable. A detailed development and test of the classic color measurement approach are reported. In the absence of a luminance component in the test stimuli a and b, the theory holds well. In the presence of a luminance component, the theory is clearly false. When a luminance component is present, discrimination judgements depend largely on whether the lights being discriminated fall in separate, categorical regions of color space. The results suggest that sensory estimation of surface color uses different methods, and the choice of method depends upon properties of the image. When there is significant luminance variation a categorical method is used, while in the absence of significant luminance variation judgments are continuous and consistent with the measurement approach.
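
    The measurement-theory prediction described above reduces to a simple test: form the channel-response difference v(a) - v(b) and ask whether that hypothetical light * exceeds the detection threshold. A sketch with illustrative channel responses and threshold (both are assumptions, not values from the study):

```python
import numpy as np

def predict_discriminable(v_a, v_b, threshold):
    """Measurement-theory rule: lights a and b are predicted discriminable
    when the channel-response difference v(a) - v(b), treated as the
    response to a hypothetical light *, is detectable (here: its norm
    exceeds an illustrative detection threshold)."""
    star = np.asarray(v_a) - np.asarray(v_b)
    return np.linalg.norm(star) > threshold

# Identical lights: difference is zero, so never discriminable
print(predict_discriminable([1.0, 0.5, 0.2], [1.0, 0.5, 0.2], 0.05))
# Slightly different channel responses: * exceeds threshold
print(predict_discriminable([1.0, 0.5, 0.2], [0.9, 0.6, 0.2], 0.05))
```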

  13. Sex Discrimination in Coaching.

    Science.gov (United States)

    Dessem, Lawrence

    1980-01-01

    Even in situations in which the underpayment of girls' coaches is due to the sex of the students coached rather than to the sex of the coaches, the coaches and the girls coached are victims of unlawful discrimination. Available from Harvard Women's Law Journal, Harvard Law School, Cambridge, MA 02138. (Author/IRT)

  14. Education and Gender Discrimination

    Science.gov (United States)

    Sumi, V. S.

    2012-01-01

    This paper discusses the status of women education in present education system and some measures to overcome the lags existing. Discrimination against girls and women in the developing world is a devastating reality. It results in millions of individual tragedies, which add up to lost potential for entire countries. Gender bias in education is an…

  15. Discrimination. Opposing Viewpoints Series.

    Science.gov (United States)

    Williams, Mary E., Ed.

    Books in the Opposing Viewpoints series challenge readers to question their own opinions and assumptions. By reading carefully balanced views, readers confront new ideas on the topic of interest. The Civil Rights Act of 1964, which prohibited job discrimination based on age, race, religion, gender, or national origin, provided the groundwork for…

  16. Reversing Discrimination: A Perspective

    Science.gov (United States)

    Pati, Gopal; Reilly, Charles W.

    1977-01-01

    Examines the debate over affirmative action and reverse discrimination, and discusses how and why the present dilemma has developed. Suggests that organizations can best address the problem through an honest, in-depth analysis of their organizational structure and management practices. (JG)

  17. Immunological self, nonself discrimination

    DEFF Research Database (Denmark)

    Guillet, J G; Lai, M Z; Briner, T J

    1987-01-01

    The ability of immunodominant peptides derived from several antigen systems to compete with each other for T cell activation was studied. Only peptides restricted by a given transplantation antigen are mutually competitive. There is a correlation between haplotype restriction, ability to bind to ...... that provides a basis for explaining self, nonself discrimination as well as alloreactivity....

  18. Analytic boosted boson discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Larkoski, Andrew J.; Moult, Ian; Neill, Duff [Center for Theoretical Physics, Massachusetts Institute of Technology,Cambridge, MA 02139 (United States)

    2016-05-20

    Observables which discriminate boosted topologies from massive QCD jets are of great importance for the success of the jet substructure program at the Large Hadron Collider. Such observables, while both widely and successfully used, have been studied almost exclusively with Monte Carlo simulations. In this paper we present the first all-orders factorization theorem for a two-prong discriminant based on a jet shape variable, D{sub 2}, valid for both signal and background jets. Our factorization theorem simultaneously describes the production of both collinear and soft subjets, and we introduce a novel zero-bin procedure to correctly describe the transition region between these limits. By proving an all orders factorization theorem, we enable a systematically improvable description, and allow for precision comparisons between data, Monte Carlo, and first principles QCD calculations for jet substructure observables. Using our factorization theorem, we present numerical results for the discrimination of a boosted Z boson from massive QCD background jets. We compare our results with Monte Carlo predictions which allows for a detailed understanding of the extent to which these generators accurately describe the formation of two-prong QCD jets, and informs their usage in substructure analyses. Our calculation also provides considerable insight into the discrimination power and calculability of jet substructure observables in general.
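
    As a concrete illustration of a two-prong discriminant, D2 can be built from energy correlation functions of the jet constituents. The sketch below uses the common definition D2 = e3/(e2)^3 with angular exponent beta; the toy constituents and the simplified angular distance (ignoring phi wrap-around) are assumptions for illustration:

```python
import itertools
import math

def dR(p, q):
    """Angular distance in the (eta, phi) plane (toy: no phi wrap-around)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def D2(constituents, beta=2.0):
    """D2 = e3 / (e2)^3 from energy correlation functions, where each
    constituent is (z, (eta, phi)) with z its momentum fraction."""
    e2 = sum(zi * zj * dR(pi, pj) ** beta
             for (zi, pi), (zj, pj) in itertools.combinations(constituents, 2))
    e3 = sum(zi * zj * zk * (dR(pi, pj) * dR(pi, pk) * dR(pj, pk)) ** beta
             for (zi, pi), (zj, pj), (zk, pk)
             in itertools.combinations(constituents, 3))
    return e3 / e2 ** 3

# Two hard prongs plus soft radiation (boosted-boson-like) vs one hard core
two_prong = [(0.48, (0.0, 0.0)), (0.48, (0.4, 0.0)), (0.04, (0.2, 0.1))]
one_prong = [(0.90, (0.0, 0.0)), (0.05, (0.3, 0.2)), (0.05, (-0.2, 0.3))]
print(D2(two_prong), D2(one_prong))  # small for two-prong, large for one-prong
```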

  19. Discrimination Learning in Children

    Science.gov (United States)

    Ochocki, Thomas E.; And Others

    1975-01-01

    Examined the learning performance of 192 fourth-, fifth-, and sixth-grade children on either a two or four choice simultaneous color discrimination task. Compared the use of verbal reinforcement and/or punishment, under conditions of either complete or incomplete instructions. (Author/SDH)

  20. Discrimination and its Effects.

    Science.gov (United States)

    Thomas, Clarence

    1983-01-01

    Reviews challenges facing Black professionals committed to further promoting civil rights. Focuses on the Federal government role, particularly regarding racial discrimination in employment. Warns against the acceptance of orthodoxies, and calls for new action and the exercising of intellectual freedom. (KH)

  1. Minimum thickness anterior porcelain restorations.

    Science.gov (United States)

    Radz, Gary M

    2011-04-01

    Porcelain laminate veneers (PLVs) provide the dentist and the patient with an opportunity to enhance the patient's smile in a minimally to virtually noninvasive manner. Today's PLV demonstrates excellent clinical performance and as materials and techniques have evolved, the PLV has become one of the most predictable, most esthetic, and least invasive modalities of treatment. This article explores the latest porcelain materials and their use in minimum thickness restoration.

  2. Fingerprinting with Minimum Distance Decoding

    CERN Document Server

    Lin, Shih-Chun; Gamal, Hesham El

    2007-01-01

    This work adopts an information theoretic framework for the design of collusion-resistant coding/decoding schemes for digital fingerprinting. More specifically, the minimum distance decision rule is used to identify 1 out of t pirates. Achievable rates, under this detection rule, are characterized in two distinct scenarios. First, we consider the averaging attack, where a random coding argument is used to show that the rate 1/2 is achievable with t=2 pirates. Our study is then extended to the general case of arbitrary t, highlighting the underlying complexity-performance tradeoff. Overall, these results establish the significant performance gains offered by minimum distance decoding as compared to other approaches based on orthogonal codes and correlation detectors. In the second scenario, we characterize the achievable rates, with minimum distance decoding, under any collusion attack that satisfies the marking assumption. For t=2 pirates, we show that the rate 1 - H(0.25) ≈ 0.188 is achievable using an ...
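
    The quoted rate comes from the binary entropy function H(p) = -p log2(p) - (1-p) log2(1-p); evaluating 1 - H(0.25) reproduces the ≈ 0.188 figure:

```python
import math

def binary_entropy(p):
    """Binary entropy H(p) in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

rate = 1 - binary_entropy(0.25)
print(rate)  # ≈ 0.1887, matching the 0.188 quoted in the abstract
```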

  3. Minimum feature size preserving decompositions

    CERN Document Server

    Aloupis, Greg; Demaine, Martin L; Dujmovic, Vida; Iacono, John

    2009-01-01

    The minimum feature size of a crossing-free straight line drawing is the minimum distance between a vertex and a non-incident edge. This quantity measures the resolution needed to display a figure or the tool size needed to mill the figure. The spread is the ratio of the diameter to the minimum feature size. While many algorithms (particularly in meshing) depend on the spread of the input, none explicitly consider finding a mesh whose spread is similar to the input. When a polygon is partitioned into smaller regions, such as triangles or quadrangles, the degradation is the ratio of original to final spread (the final spread is always greater). Here we present an algorithm to quadrangulate a simple n-gon, while achieving constant degradation. Note that although all faces have a quadrangular shape, the number of edges bounding each face may be larger. This method uses Theta(n) Steiner points and produces Theta(n) quadrangles. In fact to obtain constant degradation, Omega(n) Steiner points are required by any al...
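
    The minimum feature size defined above (the smallest distance between a vertex and a non-incident edge) can be computed directly by brute force. A sketch on a toy polygon with a thin notch, whose feature size is small relative to its diameter, giving a large spread:

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))             # clamp to the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def minimum_feature_size(polygon):
    """Minimum distance between any vertex and any non-incident edge
    of a simple polygon given as a list of vertices."""
    n = len(polygon)
    best = math.inf
    for i in range(n):                     # vertex i
        for j in range(n):                 # edge (j, j+1)
            if i in (j, (j + 1) % n):
                continue                   # skip the two incident edges
            d = point_segment_distance(polygon[i],
                                       polygon[j], polygon[(j + 1) % n])
            best = min(best, d)
    return best

# Rectangle with a thin notch of width 0.2 cut into its top edge
poly = [(0, 0), (4, 0), (4, 3), (2.1, 3), (2.1, 1), (1.9, 1), (1.9, 3), (0, 3)]
print(minimum_feature_size(poly))  # the notch width dominates
```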

  4. On the Minimum Distance of Non Binary LDPC Codes

    CERN Document Server

    Pulikkoonattu, Rethnakaran

    2009-01-01

    Minimum distance is an important parameter of a linear error correcting code. For improved performance of binary Low Density Parity Check (LDPC) codes, we need the minimum distance to grow fast with n, the codelength. However, the best we can hope for is a linear growth of dmin with n. For binary LDPC codes, the necessary and sufficient conditions on the LDPC ensemble parameters to ensure linear growth of the minimum distance are well established. In the case of non-binary LDPC codes, the structure of logarithmic weight codewords is different from that of binary codes. We have carried out a preliminary study of the logarithmic bound on the minimum distance of non-binary LDPC code ensembles. In particular, we have investigated certain configurations which would lead to low weight codewords. A set of simulations was performed to identify some of these configurations. Finally, we have provided a bound on the logarithmic minimum distance of non-binary codes, using a strategy similar to the girth bound for bin...

  5. Bayesian ensemble approach to error estimation of interatomic potentials

    DEFF Research Database (Denmark)

    Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.;

    2004-01-01

    Using a Bayesian approach a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates of the actual errors for the potentials.
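
    The ensemble idea can be sketched on a toy fitting problem: sample model parameters with a probability density set by the minimum cost, then read the error bar on a prediction off the spread of the ensemble. Here a Metropolis walk at an effective temperature tied to the best-fit cost stands in (a common choice, assumed here); the linear model and data are illustrative, not the interatomic-potential application:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: a noisy line; the "model" is a linear fit, cost = sum sq. errors
x = np.linspace(0, 1, 20)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)

def cost(theta):
    a, b = theta
    return np.sum((y - (a * x + b)) ** 2)

theta_best = np.polyfit(x, y, 1)          # minimum-cost parameters
T = 2 * cost(theta_best) / (x.size - 2)   # effective temperature from min cost

# Metropolis sampling of the ensemble with density ∝ exp(-cost/T)
samples, theta = [], theta_best.copy()
for _ in range(5000):
    prop = theta + rng.normal(0, 0.05, 2)
    if rng.random() < np.exp((cost(theta) - cost(prop)) / T):
        theta = prop
    samples.append(theta.copy())
samples = np.array(samples[1000:])        # discard burn-in

# Error bar on the prediction at x = 0.5: mean and spread over the ensemble
preds = samples[:, 0] * 0.5 + samples[:, 1]
print(preds.mean(), preds.std())
```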

  6. Temporospatial dissociation of Pe subcomponents for perceived and unperceived errors

    Directory of Open Access Journals (Sweden)

    Tanja Endrass

    2012-06-01

    Previous research on performance monitoring has revealed that errors are followed by an initial fronto-central negative deflection (error-related negativity, ERN) and subsequently by a centro-parietal positivity (error positivity, Pe). It has been shown that error awareness mainly influences the Pe, whereas the ERN seems unaffected by conscious awareness of an error. The aim of the present study was to investigate the relation of the ERN and Pe to error awareness in a visual size discrimination task in which errors are elicited not by impulsive responding but by perceptual difficulty. Further, we applied a temporospatial principal component analysis (PCA) to examine whether the temporospatial subcomponents of the Pe would differentially relate to error awareness. ERP results were in accordance with earlier studies: a significant error awareness effect was found for the Pe, but not for the ERN. Interestingly, a modulation with error perception was found on correct trials: correct responses considered incorrect had larger correct-related negativity (CRN) and larger Pe amplitudes than correct responses considered correct. The PCA yielded two relevant spatial factors accounting for the Pe (latency 300 ms). A temporospatial factor displaying a centro-parietal positivity varied significantly with error awareness. Of the two temporospatial factors corresponding to response-related negativities, a factor with central topography varied with response correctness and subjective error perception on correct responses. The PCA results indicate that the error awareness effect is specifically related to the centro-parietal subcomponent of the Pe. Since this component has also been shown to be related to the importance of an error, the present variation with error awareness indicates that this component is sensitive to the salience of an error and that salience secondarily triggers error awareness.

  7. Probabilistic quantum error correction

    CERN Document Server

    Fern, J; Fern, Jesse; Terilla, John

    2002-01-01

    There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.

  8. Examining Workplace Discrimination in a Discrimination-Free Environment

    OpenAIRE

    Braxton, Shawn Lamont

    2010-01-01

    The purpose of this study is to explore how racial and gender discrimination is reproduced in concrete workplace settings even when anti-discrimination policies are present, and to understand the various reactions utilized by those who commonly experience it. I have selected a particular medical center, henceforth referred to by the pseudonym "The Bliley Medical Center", as my case ...

  9. The relative merits of discriminating and non-discriminating dosemeters

    DEFF Research Database (Denmark)

    Marshal, T. O.; Christensen, Palle; Julius, H. W.;

    1986-01-01

    The need for discriminating and non-discriminating personal dosemeters in the field of radiological protection is examined. The ability of various types of dosemeter to meet these needs is also discussed. It is concluded that there is a need for discriminating dosemeters but in the majority of ca...

  10. Employment Age Discrimination on Women

    Institute of Scientific and Technical Information of China (English)

    黄捧

    2015-01-01

    Employment age discrimination against women is not an unusual phenomenon in China. By describing the present situation and negative effects of this phenomenon, this paper claims that laws are a very important weapon to eliminate age discrimination against women.

  11. Correction for quadrature errors

    DEFF Research Database (Denmark)

    Netterstrøm, A.; Christensen, Erik Lintz

    1994-01-01

    In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signal...

  12. ERRORS AND CORRECTION

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    To err is human. Since the 1960s, most second language teachers and language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.

  13. Entanglement as a resource for discrimination of classical environments

    Energy Technology Data Exchange (ETDEWEB)

    Trapani, Jacopo, E-mail: jacopo.trapani@unimi.it [Quantum Technology Lab, Dipartimento di Fisica, Università degli Studi di Milano, I-20133 Milano (Italy); Paris, Matteo G.A., E-mail: matteo.paris@fisica.unimi.it [Quantum Technology Lab, Dipartimento di Fisica, Università degli Studi di Milano, I-20133 Milano (Italy); INFN, Sezione di Milano, I-20133 Milano (Italy)

    2017-01-30

    We address extended systems interacting with classical fluctuating environments and analyze the use of quantum probes to discriminate local noise, described by independent fluctuating fields, from common noise, corresponding to the interaction with a common one. In particular, we consider a bipartite system made of two non-interacting harmonic oscillators and assess discrimination strategies based on homodyne detection, comparing their performances with the ultimate bounds on the error probabilities of quantum-limited measurements. We analyze in detail the use of Gaussian probes, with emphasis on experimentally friendly signals. Our results show that a joint measurement of the position-quadrature on the two oscillators outperforms any other homodyne-based scheme for any input Gaussian state. - Highlights: • Strategies to discriminate local or common noise are proposed for CV systems. • Homodyne detection outperforms QC bound for experimentally friendly signals. • Entanglement may be exploited as a resource for discrimination of classical fields.

  14. ERROR AND ERROR CORRECTION AT ELEMENTARY LEVEL

    Institute of Scientific and Technical Information of China (English)

    1994-01-01

    Introduction Errors are unavoidable in language learning, however, to a great extent, teachers in most middle schools in China regard errors as undesirable, a sign of failure in language learning. Most middle schools are still using the grammar-translation method which aims at encouraging students to read scientific works and enjoy literary works. The other goals of this method are to gain a greater understanding of the first language and to improve the students’ ability to cope with difficult subjects and materials, i.e. to develop the students’ minds. The practical purpose of using this method is to help learners pass the annual entrance examination. "To achieve these goals, the students must first learn grammar and vocabulary,... Grammar is taught deductively by means of long and elaborate explanations... students learn the rules of the language rather than its use." (Tang Lixing, 1983:11-12)

  15. Errors on errors - Estimating cosmological parameter covariance

    CERN Document Server

    Joachimi, Benjamin

    2014-01-01

    Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.

  16. Ceramic veneers with minimum preparation.

    Science.gov (United States)

    da Cunha, Leonardo Fernandes; Reis, Rachelle; Santana, Lino; Romanini, Jose Carlos; Carvalho, Ricardo Marins; Furuse, Adilson Yoshio

    2013-10-01

    The aim of this article is to describe the possibility of improving dental esthetics with low-thickness glass ceramics without major tooth preparation for patients with small to moderate anterior dental wear and little discoloration. For this purpose, carefully defined treatment planning and good communication between the clinician and the dental technician helped to maximize enamel preservation and offered a good treatment option. Moreover, besides restoring esthetics, the restorative treatment also improved the function of the anterior guidance. It can be concluded that the conservative use of minimum-thickness ceramic laminate veneers may provide satisfactory esthetic outcomes while preserving the dental structure.

  17. Proofreading for word errors.

    Science.gov (United States)

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  18. Price Discrimination: A Classroom Experiment

    Science.gov (United States)

    Aguiló, Paula; Sard, Maria; Tugores, Maria

    2016-01-01

    In this article, the authors describe a classroom experiment aimed at familiarizing students with different types of price discrimination (first-, second-, and third-degree price discrimination). During the experiment, the students were asked to decide what tariffs to set as monopolists for each of the price discrimination scenarios under…

  19. Price Discrimination: Lessons for Consumers.

    Science.gov (United States)

    Maynes, E. Scott

    1990-01-01

    Explains price and product discrimination, showing how intelligent consumers can achieve increased purchasing power of their income and discusses how consumer educators can explain this discrimination. Evaluates the pros and cons of price/product discrimination from the social viewpoint. (Author/JOW)

  20. Transgender Discrimination and the Law

    Science.gov (United States)

    Trotter, Richard

    2010-01-01

    An emerging area of law is developing regarding sex/gender identity discrimination, also referred to as transgender discrimination, as distinguished from discrimination based on sexual orientation. A transgendered individual is defined as "a person who has a gender-identity disorder which is a persistent discomfort about one's assigned sex or…

  1. Racial Discrimination and Competition

    OpenAIRE

    Ross Levine; Alexey Levkov; Yona Rubinstein

    2008-01-01

    This paper assesses the impact of competition on racial discrimination. The dismantling of inter- and intrastate bank restrictions by U.S. states from the mid-1970s to the mid-1990s reduced financial market imperfections, lowered entry barriers facing nonfinancial firms, and boosted the rate of new firm formation. We use bank deregulation to identify an exogenous intensification of competition in the nonfinancial sector, and evaluate its impact on the racial wage gap, which is that component ...

  2. Optimal time discrimination

    OpenAIRE

    Coşkun, Filiz; Sayalı, Zeynep Ceyda; Gürbüz, Emine; Balcı, Fuat

    2015-01-01

    Optimal Time Discrimination. Journal: Quarterly Journal of Experimental Psychology. Manuscript ID: QJE-STD 14-039.R1. Manuscript Type: Standard Article. Date Submitted by the Author: n/a. Complete List of Authors: Çoskun, Filiz; Sayalı Ungerer, Zeynep; Gürbüz, Emine; Balcı, Fuat (all Koç University, Psychology). Keywords: Decision making, Interval Timing, Optimality, Response Times, Temporal ...

  3. Discrimination in lexical decision

    Science.gov (United States)

    Feldman, Laurie Beth; Ramscar, Michael; Hendrix, Peter; Baayen, R. Harald

    2017-01-01

    In this study we present a novel set of discrimination-based indicators of language processing derived from Naive Discriminative Learning (ndl) theory. We compare the effectiveness of these new measures with classical lexical-distributional measures—in particular, frequency counts and form similarity measures—to predict lexical decision latencies when a complete morphological segmentation of masked primes is or is not possible. Data derive from a re-analysis of a large subset of decision latencies from the English Lexicon Project, as well as from the results of two new masked priming studies. Results demonstrate the superiority of discrimination-based predictors over lexical-distributional predictors alone, across both the simple and primed lexical decision tasks. Comparable priming after masked corner and cornea type primes, across two experiments, fails to support early obligatory segmentation into morphemes as predicted by the morpho-orthographic account of reading. Results fit well with ndl theory, which, in conformity with Word and Paradigm theory, rejects the morpheme as a relevant unit of analysis. Furthermore, results indicate that readers with greater spelling proficiency and larger vocabularies make better use of orthographic priors and handle lexical competition more efficiently. PMID:28235015

  4. Workplace discrimination and cancer.

    Science.gov (United States)

    McKenna, Maureen A; Fabian, Ellen; Hurley, Jessica E; McMahon, Brian T; West, Steven L

    2007-01-01

    Data from the Equal Employment Opportunity Commission (EEOC) Integrated Mission System database were analyzed with specific reference to allegations of workplace discrimination filed by individuals with cancer under ADA Title One. These 6,832 allegations, filed between July 27, 1992 and September 30, 2003, were compared to 167,798 allegations from a general disability population on the following dimensions: type of workplace discrimination; demographic characteristics of the charging parties (CPs); the industry designation, location, and size of employers; and the outcome or resolution of EEOC investigations. Results showed allegations derived from CPs with cancer were more likely than those in the general disability population to include issues involving discharge, terms and conditions of employment, lay-off, wages, and demotion. Compared to the general disability group, CPs with cancer were more likely to be female, older, and White. Allegations derived from CPs with cancer were also more likely to be filed against smaller employers (15-100 workers) or those in service industries. Finally, the resolutions of allegations filed by CPs with cancer were more likely to be meritorious than those filed from the general disability population; that is, actual discrimination is more likely to have occurred.

  5. [Comment on] Statistical discrimination

    Science.gov (United States)

    Chinn, Douglas

    In the December 8, 1981, issue of Eos, a news item reported the conclusion of a National Research Council study that sexual discrimination against women with Ph.D.'s exists in the field of geophysics. Basically, the item reported that even when allowances are made for motherhood the percentage of female Ph.D.'s holding high university and corporate positions is significantly lower than the percentage of male Ph.D.'s holding the same types of positions. The sexual discrimination conclusion, based only on these statistics, assumes that there are no basic psychological differences between men and women that might cause different populations in the employment group studied. Therefore, the reasoning goes, after taking into account possible effects from differences related to anatomy, such as women stopping their careers in order to bear and raise children, the statistical distributions of positions held by male and female Ph.D.'s ought to be very similar to one another. Any significant differences between the distributions must be caused primarily by sexual discrimination.

  6. Optimal and Minimum Energy Optimal Tracking Problems

    Institute of Scientific and Technical Information of China (English)

    刘轩黄

    2005-01-01

    Based on the theory of generalized inverses and Bellman's dynamic programming approach, two forms of solutions to the optimal and minimum energy optimal tracking problems are presented for discrete linear time-varying systems. In each case, simple expressions for the minimum tracking error and minimum control energy are derived.
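    The minimum-energy idea can be made concrete on the simplest possible system. The sketch below is an illustration only, not the paper's generalized-inverse construction: for the scalar integrator x[k+1] = x[k] + u[k], minimizing the control energy subject to reaching a target is a minimum-norm problem whose solution spreads the control evenly over the horizon.

```python
def min_energy_control(target, n_steps):
    """Minimum-energy input driving the scalar integrator
    x[k+1] = x[k] + u[k] from 0 to `target` in `n_steps` steps.
    Minimizing sum(u[k]**2) subject to sum(u[k]) == target is a
    minimum-norm problem; the optimum spreads the control evenly."""
    u = [target / n_steps] * n_steps
    energy = sum(uk * uk for uk in u)   # minimum control energy
    x = 0.0
    for uk in u:                        # simulate the system forward
        x += uk
    return u, x, energy

u, x_final, energy = min_energy_control(6.0, 3)
# u = [2.0, 2.0, 2.0]: the target 6.0 is reached with energy 12.0
```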

  7. Handwriting Error Patterns of Children with Mild Motor Difficulties.

    Science.gov (United States)

    Malloy-Miller, Theresa; And Others

    1995-01-01

    A test of handwriting legibility and 6 perceptual-motor tests were completed by 66 children ages 7-12. Among handwriting error patterns, execution was associated with visual-motor skill and sensory discrimination, aiming with visual-motor and fine-motor skills. The visual-spatial factor had no significant association with perceptual-motor…

  8. An Improved CO2-Crude Oil Minimum Miscibility Pressure Correlation

    Directory of Open Access Journals (Sweden)

    Hao Zhang

    2015-01-01

    Minimum miscibility pressure (MMP), which plays an important role in miscible flooding, is a key parameter in determining whether crude oil and gas are completely miscible. On the basis of 210 groups of CO2-crude oil system minimum miscibility pressure data, an improved CO2-crude oil system minimum miscibility pressure correlation was built by a modified conjugate gradient method and a global optimizing method. The new correlation is a uniform empirical correlation for calculating the MMP of both thin oil and heavy oil and is expressed as a function of reservoir temperature, the C7+ molecular weight of the crude oil, and the mole fractions of volatile components (CH4 and N2) and intermediate components (CO2, H2S, and C2~C6) of the crude oil. Compared against the eleven most popular and relatively high-accuracy CO2-oil system MMP correlations in the previous literature on another nine groups of CO2-oil MMP experimental data, which were not used to develop the new correlation, the new empirical correlation provides the best reproduction of the nine groups of CO2-oil MMP experimental data, with a percentage average absolute relative error (%AARE) of 8% and a percentage maximum absolute relative error (%MARE) of 21%.
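    The two error metrics used to rank the correlations can be computed directly. A small sketch, assuming the usual definitions of %AARE and %MARE (mean and maximum of |predicted - measured| / measured, in percent); the data values below are hypothetical, not from the paper.

```python
def aare_mare(measured, predicted):
    """Percentage average (%AARE) and maximum (%MARE) absolute
    relative error; relative error assumed |pred - meas| / meas."""
    rel = [abs(p - m) / m for m, p in zip(measured, predicted)]
    return 100.0 * sum(rel) / len(rel), 100.0 * max(rel)

# hypothetical MMP values (MPa), for illustration only
measured  = [20.0, 25.0, 30.0]
predicted = [22.0, 24.0, 33.0]
aare, mare = aare_mare(measured, predicted)
# aare ≈ 8.0 (%), mare ≈ 10.0 (%)
```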

  9. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  10. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  11. Errors in Radiologic Reporting

    Directory of Open Access Journals (Sweden)

    Esmaeel Shokrollahi

    2010-05-01

    Given that the report is a professional document and bears the associated responsibilities, all of the radiologist's errors appear in it, either directly or indirectly. It is not easy to distinguish and classify the mistakes made when a report is prepared, because in most cases the errors are complex and attributable to more than one cause, and because many errors depend on the individual radiologist's professional, behavioral and psychological traits. In fact, anyone can make a mistake, but some radiologists make more mistakes, and some types of mistakes are predictable to some extent. Reporting errors can be categorized in different ways: universal vs. individual; human-related vs. system-related; perceptive vs. cognitive; or (1) descriptive, (2) interpretative, and (3) decision-related. Perceptive errors comprise (1) false positives and (2) false negatives (nonidentification or erroneous identification). Cognitive errors may be knowledge-based or psychological.

  12. Errors in neuroradiology.

    Science.gov (United States)

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4 % of radiologic interpretations in daily practice contain errors, and discrepancies occur in 2-20 % of reports. Fortunately, most of them are minor errors or, if serious, are found and corrected with sufficient promptness; obviously, diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The misdiagnosis/misinterpretation percentage rises in the emergency setting and in the first stages of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcification and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and finally neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly with the treatment team.

  13. A hardware error estimate for floating-point computations

    Science.gov (United States)

    Lang, Tomás; Bruguera, Javier D.

    2008-08-01

    We propose a hardware-computed estimate of the roundoff error in floating-point computations. The estimate is computed concurrently with the execution of the program and gives an estimation of the accuracy of the result. The intention is to have a qualitative indication when the accuracy of the result is low. We aim for a simple implementation and a negligible effect on the execution of the program. Large errors due to roundoff occur in some computations, producing inaccurate results. However, usually these large errors occur only for some values of the data, so that the result is accurate in most executions. As a consequence, the computation of an estimate of the error during execution would allow the use of algorithms that produce accurate results most of the time. In contrast, if an error estimate is not available, the solution is to perform an error analysis. However, this analysis is complex or impossible in some cases, and it produces a worst-case error bound. The proposed approach is to keep with each value an estimate of its error, which is computed when the value is produced. This error is the sum of a propagated error, due to the errors of the operands, plus the generated error due to roundoff during the operation. Since roundoff errors are signed values (when rounding to nearest is used), the computation of the error allows for compensation when errors are of different sign. However, since the error estimate is of finite precision, it suffers from similar accuracy problems as any floating-point computation. Moreover, it is not an error bound. Ideally, the estimate should be large when the error is large and small when the error is small. Since this cannot be achieved always with an inexact estimate, we aim at assuring the first property always, and the second most of the time. As a minimum, we aim to produce a qualitative indication of the error. To indicate the accuracy of the value, the most appropriate type of error is the relative error. However

  14. New Gear Transmission Error Measurement System Designed

    Science.gov (United States)

    Oswald, Fred B.

    2001-01-01

    The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/mm, giving a resolution in the time domain of better than 0.1 mm, and discrimination in the frequency domain of better than 0.01 mm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.

  15. Sequential discrimination of qudits by multiple observers

    Science.gov (United States)

    Hillery, Mark; Mimih, Jihane

    2017-10-01

    We discuss a scheme in which sequential state-discrimination measurements are performed on qudits to determine the quantum state in which they were initially prepared. The qudits belong to a set of nonorthogonal quantum states and hence cannot be distinguished with certainty. Unambiguous state discrimination allows error-free measurements at the expense of occasionally failing to give a conclusive answer about the state of the qudit. Qudits have the potential to carry more information per transmission than qubits. We considered the situation in which Alice sends one of N qudits, where the dimension of the qudits is also N. We looked at two cases, one in which the states all have the same overlap and one in which the qudits are divided into two sets, with qudits in different sets having different overlaps. We also studied the robustness of our scheme against a simple eavesdropping attack and found that by using qudits rather than qubits, there is a greater probability that an eavesdropper will introduce errors and be detected.
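    For the simplest case of two equally likely pure states, the optimal failure probability of unambiguous discrimination is the standard Ivanovic-Dieks-Peres result, P_fail = |⟨ψ1|ψ2⟩|. A minimal sketch of that textbook bound, not the multi-observer scheme of the paper:

```python
import math

def idp_failure_probability(psi1, psi2):
    """Ivanovic-Dieks-Peres limit for two equally likely pure states:
    unambiguous discrimination yields the inconclusive outcome with
    probability |<psi1|psi2>|.  States are lists of (possibly complex)
    amplitudes, assumed normalized."""
    overlap = sum(a.conjugate() * b for a, b in zip(psi1, psi2))
    return abs(overlap)

# two qutrit-like states with overlap cos(pi/4), for illustration
theta = math.pi / 4
psi1 = [1.0, 0.0, 0.0]
psi2 = [math.cos(theta), math.sin(theta), 0.0]
p_fail = idp_failure_probability(psi1, psi2)
p_success = 1.0 - p_fail   # conclusive-outcome probability
```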

  16. MR PROSTATE SEGMENTATION VIA DISTRIBUTED DISCRIMINATIVE DICTIONARY (DDD) LEARNING.

    Science.gov (United States)

    Guo, Yanrong; Zhan, Yiqiang; Gao, Yaozong; Jiang, Jianguo; Shen, Dinggang

    2013-01-01

    Segmenting prostate from MR images is important yet challenging. Due to non-Gaussian distribution of prostate appearances in MR images, the popular active appearance model (AAM) has its limited performance. Although the newly developed sparse dictionary learning method[1, 2] can model the image appearance in a non-parametric fashion, the learned dictionaries still lack the discriminative power between prostate and non-prostate tissues, which is critical for accurate prostate segmentation. In this paper, we propose to integrate deformable model with a novel learning scheme, namely the Distributed Discriminative Dictionary (DDD) learning, which can capture image appearance in a non-parametric and discriminative fashion. In particular, three strategies are designed to boost the tissue discriminative power of DDD. First, minimum Redundancy Maximum Relevance (mRMR) feature selection is performed to constrain the dictionary learning in a discriminative feature space. Second, linear discriminant analysis (LDA) is employed to assemble residuals from different dictionaries for optimal separation between prostate and non-prostate tissues. Third, instead of learning the global dictionaries, we learn a set of local dictionaries for the local regions (each with small appearance variations) along prostate boundary, thus achieving better tissue differentiation locally. In the application stage, DDDs will provide the appearance cues to robustly drive the deformable model onto the prostate boundary. Experiments on 50 MR prostate images show that our method can yield a Dice Ratio of 88% compared to the manual segmentations, and have 7% improvement over the conventional AAM.

  17. Asymmetric k-Center with Minimum Coverage

    DEFF Research Database (Denmark)

    Gørtz, Inge Li

    2008-01-01

    In this paper we give approximation algorithms and inapproximability results for various asymmetric k-center with minimum coverage problems. In the k-center with minimum coverage problem, each center is required to serve a minimum number of clients. These problems have been studied by Lim et al. [A. Lim, B. Rodrigues, F. Wang, Z. Xu, k-center problems with minimum coverage, Theoret. Comput. Sci. 332 (1–3) (2005) 1–17] in the symmetric setting.

  18. Estimation of Minimum DNBR Using Cascaded Fuzzy Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong Yeong; Yoo, Kwae Hwan; Na, Man Gyun [Chosun University, Gwangju (Korea, Republic of)

    2015-05-15

    This phenomenon of boiling crisis is called a departure from nucleate boiling (DNB). The DNB phenomenon can influence the fuel cladding and fuel pellets. The DNB ratio (DNBR) is defined as the ratio of the expected DNB heat flux to the actual fuel rod heat flux. Since it is very important to monitor and predict the minimum DNBR in a reactor core to prevent the boiling crisis and clad melting, a number of studies have been conducted to predict DNBR values. The aim of this study is to estimate the minimum DNBR in a reactor core using the measured signals of the reactor coolant system (RCS) by applying cascaded fuzzy neural networks (CFNN) according to operating conditions. Reactor core monitoring and protection systems require minimum DNBR prediction. The CFNN can optimize the minimum DNBR value through the process of repeatedly adding fuzzy neural networks (FNN). The proposed algorithm is trained using the data set prepared for training (development data) and verified using another data set independent of the development data. The developed CFNN models were applied to the first fuel cycle of OPR1000. The RMS errors are 0.23% and 0.12% for the positive and negative ASI, respectively.
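    The quoted RMS-error figures of merit can be reproduced with a one-line metric. A sketch assuming RMS error here means the root-mean-square of relative errors in percent; the DNBR values below are hypothetical, not from the OPR1000 study.

```python
def rms_percent_error(targets, estimates):
    """Root-mean-square of relative errors, in percent (assumed
    definition of the RMS-error figure of merit quoted above)."""
    sq = sum(((e - t) / t) ** 2 for t, e in zip(targets, estimates))
    return 100.0 * (sq / len(targets)) ** 0.5

# hypothetical minimum-DNBR targets and model estimates
targets   = [2.00, 2.10, 1.90]
estimates = [2.01, 2.09, 1.90]
err = rms_percent_error(targets, estimates)
# err ≈ 0.4 (%)
```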

  19. Improvement of CPU time of Linear Discriminant Function based on MNM criterion by IP

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2014-05-01

    Revised IP-OLDF (optimal linear discriminant function by integer programming) is a linear discriminant function that minimizes the number of misclassifications (NM) of training samples by integer programming (IP). However, IP requires large computation (CPU) time. In this paper, it is proposed how to reduce CPU time by using linear programming (LP). In the first phase, Revised LP-OLDF is applied to all cases, and all cases are categorized into two groups: those that are classified correctly and those that are not classified by support vectors (SVs). In the second phase, Revised IP-OLDF is applied to the cases misclassified by SVs. This method is called Revised IPLP-OLDF. In this research, it is evaluated whether the NM of Revised IPLP-OLDF is a good estimate of the minimum number of misclassifications (MNM) by Revised IP-OLDF. Four kinds of real data (Iris data, Swiss bank note data, student data, and CPD data) are used as training samples. Four kinds of 20,000 re-sampling cases generated from these data are used as the evaluation samples. There are a total of 149 models of all combinations of independent variables from these data. The NMs and CPU times of the 149 models are compared between Revised IPLP-OLDF and Revised IP-OLDF. The following results are obtained: (1) Revised IPLP-OLDF significantly improves CPU time. (2) In the case of training samples, all 149 NMs of Revised IPLP-OLDF are equal to the MNM of Revised IP-OLDF. (3) In the case of evaluation samples, most NMs of Revised IPLP-OLDF are equal to the NM of Revised IP-OLDF. (4) The generalization abilities of both discriminant functions are concluded to be high, because the differences between the error rates of the training and evaluation samples are almost within 2%. Therefore, Revised IPLP-OLDF is recommended for the analysis of big data instead of Revised IP-OLDF. Next, Revised IPLP-OLDF is compared with LDF and logistic regression by 100-fold cross validation using 100 re-sampling samples. Means of error rates of
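    The objective these methods minimize, the number of misclassifications of a linear discriminant on the training samples, is easy to state in code. A sketch with hypothetical toy data, not the integer-programming solver itself:

```python
def misclassification_count(weights, bias, samples, labels):
    """Number of misclassifications (NM) of the linear discriminant
    f(x) = w.x + b on labeled samples (labels are +1 / -1); this is
    the quantity Revised IP-OLDF minimizes exactly."""
    nm = 0
    for x, y in zip(samples, labels):
        score = sum(w * xi for w, xi in zip(weights, x)) + bias
        if y * score <= 0:   # wrong side of (or on) the hyperplane
            nm += 1
    return nm

# toy 2-D data, hypothetical
samples = [(1.0, 2.0), (2.0, 1.0), (0.5, -0.5), (-1.0, -1.0), (-2.0, 0.5)]
labels  = [+1, +1, +1, -1, -1]
nm = misclassification_count([1.0, 1.0], -0.5, samples, labels)
# nm = 1: only (0.5, -0.5) falls on the wrong side of x1 + x2 = 0.5
```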

  20. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong

    2017-01-08

    We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to the background motion at some unknown time in the video, and the goal is to detect and segment the object as soon as it moves, in an online manner. Due to the unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., the fewest frames after the object moves, subject to constraints on false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.
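    Minimum delay subject to a false-alarm constraint is the classic quickest-detection trade-off. The sketch below is a generic one-dimensional CUSUM detector under assumed Gaussian statistics, not the paper's video method; raising the threshold lowers the false-alarm rate at the cost of a longer detection delay.

```python
def cusum_detect(samples, mu0, mu1, sigma, threshold):
    """Flag the first index at which the CUSUM statistic (cumulative
    log-likelihood ratio of N(mu1, sigma) vs N(mu0, sigma), clipped
    at zero) exceeds `threshold`; returns None if it never does."""
    s = 0.0
    for t, x in enumerate(samples):
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        s = max(0.0, s + llr)
        if s > threshold:
            return t          # detection time (frame index)
    return None               # no detection

# mean shifts from 0 to 2 at index 5; noise-free for illustration
samples = [0.0] * 5 + [2.0] * 5
t = cusum_detect(samples, mu0=0.0, mu1=2.0, sigma=1.0, threshold=3.0)
# t = 6: detected after two post-change samples (indices 5 and 6)
```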

  1. Minimum Competency Testing and the Handicapped.

    Science.gov (United States)

    Wildemuth, Barbara M.

    This brief overview of minimum competency testing and disabled high school students discusses: the inclusion or exclusion of handicapped students in minimum competency testing programs; approaches to accommodating the individual needs of handicapped students; and legal issues. Surveys of states that have mandated minimum competency tests indicate…

  2. Do Some Workers Have Minimum Wage Careers?

    Science.gov (United States)

    Carrington, William J.; Fallick, Bruce C.

    2001-01-01

    Most workers who begin their careers in minimum-wage jobs eventually gain more experience and move on to higher paying jobs. However, more than 8% of workers spend at least half of their first 10 working years in minimum wage jobs. Those more likely to have minimum wage careers are less educated, minorities, women with young children, and those…

  3. Does the Minimum Wage Affect Welfare Caseloads?

    Science.gov (United States)

    Page, Marianne E.; Spetz, Joanne; Millar, Jane

    2005-01-01

    Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…

  4. Minimum income protection in the Netherlands

    NARCIS (Netherlands)

    van Peijpe, T.

    2009-01-01

    This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its effect

  5. New time-domain three-point error separation methods for measurement roundness and spindle error motion

    Science.gov (United States)

    Liu, Wenwen; Tao, Tingting; Zeng, Hao

    2016-10-01

    Error separation is a key technology for online measurement of spindle radial error motion or artifact form error, such as roundness and cylindricity. Three time-domain three-point error separation methods are proposed, based on solving the minimum norm solution of the linear equations. Three laser displacement sensors are used to collect a set of discrete measurements, from which a group of linear measurement equations is derived according to the criterion of prior separation of form (PSF), prior separation of spindle error motion (PSM), or synchronous separation of both form and spindle error motion (SSFM). The work discusses the correlations between the angles of the three sensors in the measuring system, the rank of the coefficient matrix in the measurement equations, and harmonic distortions in the separation results; it reveals the regularities of the first-order harmonic distortion and recommends the applicable situation of each method. Theoretical research and large simulations show that SSFM is the more precise method because of its lower distortion.
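
The minimum norm solution that these separation methods rely on is, for an underdetermined linear system, exactly what the Moore-Penrose pseudoinverse returns. A generic NumPy sketch with arbitrary matrix sizes, not the paper's specific PSF/PSM/SSFM equations:

```python
import numpy as np

# Underdetermined system A m = y (more unknowns than equations), as arises
# when form error and spindle error motion must be separated from a smaller
# number of sensor readings. np.linalg.pinv returns the minimum-norm
# least-squares solution among the infinitely many exact solutions.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))   # 4 equations, 6 unknowns (sizes are arbitrary)
y = rng.standard_normal(4)

m_min = np.linalg.pinv(A) @ y     # minimum Euclidean norm solution of A m = y

# Any other solution differs from m_min by a null-space vector and is longer.
null_vec = np.linalg.svd(A)[2][-1]   # a unit vector with A @ null_vec ~ 0
other = m_min + 0.5 * null_vec
print(np.linalg.norm(m_min), np.linalg.norm(other))
```

Because m_min is orthogonal to the null space of A, adding any null-space component can only increase the norm, which is the defining property the separation criteria exploit.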

  6. Inpatients’ medical prescription errors

    Directory of Open Access Journals (Sweden)

    Aline Melo Santos Silva

    2009-09-01

    Full Text Available Objective: To identify and quantify the most frequent errors in inpatients’ medical prescriptions. Methods: A survey of prescription errors was performed on inpatients’ medical prescriptions, from July 2008 to May 2009, for eight hours a day. Results: A total of 3,931 prescriptions was analyzed and 362 (9.2%) prescription errors were found, which involved the healthcare team as a whole. Among the 16 types of errors detected, the most frequent occurrences were lack of information, such as dose (66 cases, 18.2%) and administration route (26 cases, 7.2%); 45 cases (12.4%) of wrong transcriptions to the information system; 30 cases (8.3%) of duplicate drugs; doses higher than recommended (24 events, 6.6%); and 29 cases (8.0%) of prescriptions with indication but not specifying allergy. Conclusion: Medication errors are a reality at hospitals. All healthcare professionals are responsible for the identification and prevention of these errors, each one in his/her own area. The pharmacist is an essential professional in the drug therapy process. All hospital organizations need a pharmacist team responsible for medical prescription analysis before preparation, dispensation and administration of drugs to inpatients. This study showed that the pharmacist improves the inpatient’s safety and the success of prescribed therapy.

  7. Discriminative Shape Alignment

    DEFF Research Database (Denmark)

    Loog, M.; de Bruijne, M.

    2009-01-01

    The alignment of shape data to a common mean before subsequent processing is a ubiquitous step within the area of shape analysis. Current approaches to shape analysis or, as more specifically considered in this work, shape classification perform the alignment in a fully unsupervised way, not taking into account that eventually the shapes are to be assigned to two or more different classes. This work introduces a discriminative variation to the well-known Procrustes alignment and demonstrates its benefit over this classical method in shape classification tasks. The focus is on two-dimensional shapes from a two-class recognition problem.

  8. Decoding Cyclic Codes up to a New Bound on the Minimum Distance

    CERN Document Server

    Zeh, Alexander; Bezzateev, Sergey

    2011-01-01

    A new lower bound on the minimum distance of q-ary cyclic codes is proposed. This bound improves upon the Bose-Chaudhuri-Hocquenghem (BCH) bound and, for some codes, upon the Hartmann-Tzeng (HT) bound. Several Boston bounds are special cases of our bound. For some classes of codes the bound on the minimum distance is refined. Furthermore, a quadratic-time decoding algorithm up to this new bound is developed. The determination of the error locations is based on the Euclidean Algorithm and a modified Chien search. The error evaluation is done by solving a generalization of Forney's formula.
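
The BCH bound that this record's new bound improves upon can be computed directly from a cyclic code's defining set: if the set contains delta - 1 consecutive exponents modulo n, the minimum distance is at least delta. A small sketch (the helper name is ours, and this computes only the classical BCH bound, not the paper's refinement):

```python
def bch_bound(defining_set, n):
    """BCH bound for a length-n cyclic code: if the defining set contains
    delta - 1 consecutive exponents (mod n), then the minimum distance
    satisfies d >= delta. Returns the best such delta."""
    ds = set(x % n for x in defining_set)
    best = 0
    for start in range(n):
        run = 0
        # count consecutive exponents (mod n) starting at `start`
        while (start + run) % n in ds and run < n:
            run += 1
        best = max(best, run)
    return best + 1

# The [15,7] binary BCH code: defining set is the union of the cyclotomic
# cosets of 1 and 3 modulo 15, i.e. {1,2,4,8} U {3,6,9,12}.
ds = {1, 2, 4, 8, 3, 6, 9, 12}
print(bch_bound(ds, 15))   # consecutive run 1,2,3,4 -> d >= 5
```

For this code the bound is tight (the true minimum distance is 5); the HT bound and the record's new bound exist precisely for cases where this consecutive-run argument is too weak.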

  9. Dynamic preconditioning of the September sea-ice extent minimum

    Science.gov (United States)

    Williams, James; Tremblay, Bruno; Newton, Robert; Allard, Richard

    2016-04-01

    There has been increased interest in seasonal forecasting of the sea-ice extent in recent years, in particular the minimum sea-ice extent. We propose a dynamical mechanism, based on winter preconditioning through first-year ice formation, that explains a significant fraction of the variance in the anomaly of the September sea-ice extent from the long-term linear trend. To this end, we use a Lagrangian trajectory model to backtrack the September sea-ice edge to any time during the previous winter and quantify the amount of sea-ice divergence along the Eurasian and Alaskan coastlines as well as the Fram Strait sea-ice export. We find that coastal divergence that occurs later in the winter (March, April and May) is highly correlated with the following September sea-ice extent minimum (r = -0.73). This is because the newly formed first-year ice will melt earlier, allowing other feedbacks (e.g. the ice-albedo feedback) to start amplifying the signal early in the melt season, when the solar input is large. We find that the winter mean Fram Strait sea-ice export anomaly is also correlated with the minimum sea-ice extent the following summer. Next, we backtrack a synthetic ice edge initialized at the beginning of the melt season (June 1st) in order to develop hindcast models of the September sea-ice extent that do not rely on a priori knowledge of the minimum sea-ice extent. We find that using a multi-variate regression model of the September sea-ice extent anomaly based on coastal divergence and Fram Strait ice export as predictors reduces the error by 41%. A hindcast model based on the mean DJFMA Arctic Oscillation index alone reduces the error by 24%.
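
The hindcast described in the last sentences is a standard multi-variate linear regression of the September extent anomaly on the two winter predictors. A sketch with synthetic stand-in series (the coefficients, noise level, and record length are invented, not the study's values):

```python
import numpy as np

# Hypothetical predictors: late-winter coastal divergence and winter-mean
# Fram Strait ice export (synthetic stand-ins; the real study computes them
# from Lagrangian back-trajectories of the ice edge).
rng = np.random.default_rng(1)
n_years = 30
coastal_div = rng.standard_normal(n_years)
fram_export = rng.standard_normal(n_years)
extent_anom = -0.8 * coastal_div - 0.5 * fram_export + 0.3 * rng.standard_normal(n_years)

# Multi-variate regression hindcast: X beta ~ y, with an intercept column.
X = np.column_stack([np.ones(n_years), coastal_div, fram_export])
beta, *_ = np.linalg.lstsq(X, extent_anom, rcond=None)
hindcast = X @ beta

rmse_clim = np.sqrt(np.mean(extent_anom ** 2))            # climatology baseline
rmse_fit = np.sqrt(np.mean((extent_anom - hindcast) ** 2))
print(f"error reduction: {100 * (1 - rmse_fit / rmse_clim):.0f}%")
```

The "error reduction" quoted in the abstract is this kind of comparison between the regression residual and a baseline without the dynamical predictors.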

  10. The influence of gender and bruxism on human minimum interdental threshold ability

    Directory of Open Access Journals (Sweden)

    Patrícia dos Santos Calderon

    2009-06-01

    Full Text Available OBJECTIVE: To evaluate the influence of gender and bruxism on the ability to discriminate minimum interdental threshold. MATERIAL AND METHODS: One hundred and fifteen individuals, representing both genders, bruxers and non-bruxers, with a mean age of 23.64 years, were selected for this study. For group allocation, every individual was subjected to a specific physical examination to detect bruxism (performed by three different examiners). Evaluation of the ability to discriminate minimum interdental threshold was performed using industrialized 0.010 mm-, 0.024 mm-, 0.030 mm-, 0.050 mm-, 0.080 mm- and 0.094 mm-thick aluminum foils that were placed between upper and lower premolars. Data were analyzed statistically by multiple linear regression analysis at a 5% significance level. RESULTS: Neither gender nor bruxism influenced the ability to discriminate minimum interdental threshold (p>0.05). CONCLUSIONS: Gender and the presence of bruxism do not play a role in the minimum interdental threshold.

  11. THE INFLUENCE OF GENDER AND BRUXISM ON HUMAN MINIMUM INTERDENTAL THRESHOLD ABILITY

    Science.gov (United States)

    Calderon, Patrícia dos Santos; Kogawa, Evelyn Mikaela; Corpas, Lívia dos Santos; Lauris, José Roberto Pereira; Conti, Paulo César Rodrigues

    2009-01-01

    Objective: To evaluate the influence of gender and bruxism on the ability to discriminate minimum interdental threshold. Material and methods: One hundred and fifteen individuals, representing both genders, bruxers and non-bruxers, with a mean age of 23.64 years, were selected for this study. For group allocation, every individual was subjected to a specific physical examination to detect bruxism (performed by three different examiners). Evaluation of the ability to discriminate minimum interdental threshold was performed using industrialized 0.010 mm-, 0.024 mm-, 0.030 mm-, 0.050 mm-, 0.080 mm- and 0.094 mm-thick aluminum foils that were placed between upper and lower premolars. Data were analyzed statistically by multiple linear regression analysis at 5% significance level. Results: Neither gender nor bruxism influenced the ability to discriminate minimum interdental threshold (p>0.05). Conclusion: Gender and the presence of bruxism do not play a role in the minimum interdental threshold. PMID:19466256

  12. Discriminative sensing techniques

    Science.gov (United States)

    Lewis, Keith

    2008-10-01

    The typical human vision system is able to discriminate between a million or so different colours, yet is able to do this with a chromatic sensor array that is fundamentally based on three different receptors, sensitive to light in the blue, green and red portions of the visible spectrum. Some biological organisms have extended capabilities, providing vision in the ultra-violet, whilst others, such as some species of mantis shrimp reportedly have sixteen different types of photo-receptors. In general the biological imaging sensor takes a minimalist approach to sensing its environment, whereas current optical engineering approaches follow a 'brute' force solution where the challenge of hyperspectral imaging is addressed by various schemes for spatial and spectral dispersion of radiation across existing detector arrays. This results in a problem for others to solve in the processing and communication of the generated hypercube of data. This paper explores the parallels between some of those biological systems and the various design concepts being developed for discriminative imaging, drawing on activity supported by the UK Electro-Magnetic Remote Sensing Defence Technology Centre (EMRS DTC).

  13. Error monitoring in musicians

    Directory of Open Access Journals (Sweden)

    Clemens eMaidhof

    2013-07-01

    Full Text Available To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances such as the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.

  14. Runge-Kutta methods with minimum storage implementations

    KAUST Repository

    Ketcheson, David I.

    2010-03-01

    Solution of partial differential equations by the method of lines requires the integration of large numbers of ordinary differential equations (ODEs). In such computations, storage requirements are typically one of the main considerations, especially if a high order ODE solver is required. We investigate Runge-Kutta methods that require only two storage locations per ODE. Existing methods of this type require additional memory if an error estimate or the ability to restart a step is required. We present a new, more general class of methods that provide error estimates and/or the ability to restart a step while still employing the minimum possible number of memory registers. Examples of such methods are found to have good properties. © 2009 Elsevier Inc. All rights reserved.
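
A minimal sketch of the 2N-storage idea the abstract describes, using Williamson's classic third-order low-storage scheme rather than the paper's new methods: each stage overwrites the same two registers, the solution y and one accumulator dy.

```python
import math

# Williamson-style 2N-storage Runge-Kutta coefficients (classic 3rd-order
# scheme). The paper's new methods additionally provide error estimates
# and restart capability within the same two registers.
A = [0.0, -5.0 / 9.0, -153.0 / 128.0]
B = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]
C = [0.0, 1.0 / 3.0, 3.0 / 4.0]

def low_storage_rk3(f, y0, t0, t1, n_steps):
    """Integrate y' = f(t, y) keeping only two registers per ODE."""
    y = float(y0)
    dy = 0.0
    h = (t1 - t0) / n_steps
    for k in range(n_steps):
        t = t0 + k * h
        for a, b, c in zip(A, B, C):
            dy = a * dy + h * f(t + c * h, y)   # accumulator register
            y = y + b * dy                      # solution register
    return y

# Test problem y' = -y, y(0) = 1, so y(1) = exp(-1).
approx = low_storage_rk3(lambda t, y: -y, 1.0, 0.0, 1.0, 100)
print(abs(approx - math.exp(-1)))
```

On the linear test problem one step of this scheme reproduces the Taylor expansion 1 + z + z^2/2 + z^3/6 of exp(z) exactly, confirming third-order accuracy while never storing more than y and dy.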

  15. Smoothing error pitfalls

    Science.gov (United States)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
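
For reference, the conventional smoothing error covariance being critiqued is S_s = (A - I) S_a (A - I)^T, where A is the averaging kernel matrix and S_a the a priori covariance. A toy numerical illustration (the matrix entries are invented, not from any retrieval):

```python
import numpy as np

# Conventional smoothing error covariance S_s = (A - I) S_a (A - I)^T.
# A is the averaging kernel matrix; S_a the assumed a priori covariance.
n = 5
# A simple smoothing kernel: diagonal response with off-diagonal leakage.
A = 0.6 * np.eye(n) + 0.2 * np.eye(n, k=1) + 0.2 * np.eye(n, k=-1)
S_a = np.eye(n)                     # unit a priori variances (assumed)

S_s = (A - np.eye(n)) @ S_a @ (A - np.eye(n)).T
print(np.diag(S_s))                 # per-level smoothing error variances
```

The paper's point is that this quantity describes the deviation from the state as sampled on a chosen grid, not from the true continuous atmosphere, so its interpretation depends on S_a and on the grid being adequate.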

  16. Reference respiratory waveforms by minimum jerk model analysis

    Energy Technology Data Exchange (ETDEWEB)

    Anetai, Yusuke, E-mail: anetai@radonc.med.osaka-u.ac.jp; Sumida, Iori; Takahashi, Yutaka; Yagi, Masashi; Mizuno, Hirokazu; Ogawa, Kazuhiko [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Yamadaoka 2-2, Suita-shi, Osaka 565-0871 (Japan); Ota, Seiichi [Department of Medical Technology, Osaka University Hospital, Yamadaoka 2-15, Suita-shi, Osaka 565-0871 (Japan)

    2015-09-15

    Purpose: The CyberKnife® robotic surgery system has the ability to deliver radiation to a tumor subject to respiratory movements using Synchrony® mode with less than 2 mm tracking accuracy. However, rapid and rough motion tracking causes mechanical tracking errors and puts mechanical stress on the robotic joint, leading to unexpected radiation delivery errors. During clinical treatment, patient respiratory motions are much more complicated, suggesting the need for patient-specific modeling of respiratory motion. The purpose of this study was to propose a novel method that provides a reference respiratory wave to enable smooth tracking for each patient. Methods: The minimum jerk model, which mathematically derives smoothness by means of jerk (the third derivative of position with respect to time, i.e., the derivative of acceleration, which is proportional to the rate of change of force), was introduced to model a patient-specific respiratory motion wave providing smooth motion tracking with CyberKnife®. To verify that patient-specific minimum jerk respiratory waves were tracked smoothly in Synchrony® mode, a tracking laser projection from CyberKnife® was optically analyzed every 0.1 s using a webcam and a calibrated grid on a motion phantom whose motion followed three pattern waves (cosine, typical free-breathing, and minimum jerk theoretical wave models) for the clinically relevant superior–inferior direction from six volunteers, assessed on the same node of the same isocentric plan. Results: Tracking discrepancy from the center of the grid to the beam projection was evaluated. The minimum jerk theoretical wave reduced the maximum peak amplitude of radial tracking discrepancy compared with the waveforms modeled by the cosine and typical free-breathing models by 22% and 35%, respectively, and provided smooth tracking in the radial direction. Motion tracking constancy, as indicated by radial tracking discrepancy
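
The minimum jerk profile between two rest positions is the unique quintic with zero velocity and acceleration at both endpoints, x(tau) = x0 + (xf - x0)(10 tau^3 - 15 tau^4 + 6 tau^5). A sketch of such a reference wave (the excursion and timing values below are illustrative, not taken from the study):

```python
import numpy as np

# Minimum jerk position profile between rest states: the quintic that
# minimizes the integral of squared jerk with zero velocity and zero
# acceleration at both ends. Used here as a smooth reference wave for one
# breathing stroke; the paper's patient-specific fitting is not reproduced.
def minimum_jerk(x0, xf, T, t):
    tau = np.clip(t / T, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

t = np.linspace(0.0, 2.0, 201)        # one 2 s breathing half-cycle (assumed)
x = minimum_jerk(0.0, 10.0, 2.0, t)   # 10 mm superior-inferior excursion (assumed)
v = np.gradient(x, t)                 # numerical velocity
print(x[0], x[-1], v[0], v[-1])
```

The gentle start and stop (near-zero endpoint velocity and acceleration) is exactly what reduces mechanical stress on a tracking robot compared with a cosine wave, whose acceleration is maximal at the turning points.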

  17. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reasons why they make them, improve and move on. The significance of studying errors is described by Corder: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.

  18. Error Correction in Classroom

    Institute of Scientific and Technical Information of China (English)

    Dr. Grace Zhang

    2000-01-01

    Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect correction, or direct correction only. The former choice indicates that students would be happy to take either, so long as the correction gets done. Most students did not mind peer correction provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correction, use a combination of correction strategies (direct only if suitable) and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in the Chinese language classroom, and it may also have wider implications for other languages.

  19. Discriminant Analysis on Land Grading

    Institute of Scientific and Technical Information of China (English)

    LIU Yaolin; HOU Yajuan

    2004-01-01

    This paper proposes discriminant analysis for land grading after analyzing the common methods and discussing Fisher's discriminant in detail. The method reduces the dimensionality from many to one: it maps each n-dimensional feature vector to a scalar and uses this scalar to classify samples. The paper illustrates the result with an example of residential land grading by discriminant analysis.
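
Fisher's discriminant, as described above, maps each n-dimensional feature vector to a scalar along the direction w = Sw^{-1}(m1 - m2). A sketch on invented two-grade "land" data (feature values and class means are fabricated for illustration):

```python
import numpy as np

# Fisher's linear discriminant: project n-dimensional features onto the
# single direction w = Sw^{-1} (m1 - m2), reducing each sample to one
# scalar that is then thresholded for grading/classification.
rng = np.random.default_rng(2)
grade_a = rng.standard_normal((50, 3)) + np.array([2.5, 2.5, 0.0])
grade_b = rng.standard_normal((50, 3))

m1, m2 = grade_a.mean(axis=0), grade_b.mean(axis=0)
# Within-class scatter matrix = sum of per-class scatter matrices.
Sw = np.cov(grade_a.T) * (len(grade_a) - 1) + np.cov(grade_b.T) * (len(grade_b) - 1)
w = np.linalg.solve(Sw, m1 - m2)       # Fisher direction
threshold = 0.5 * (m1 + m2) @ w        # midpoint decision rule

scores_a = grade_a @ w                 # each sample is now a single scalar
scores_b = grade_b @ w
acc = (np.mean(scores_a > threshold) + np.mean(scores_b < threshold)) / 2
print(f"training accuracy: {acc:.2f}")
```

The projection step is the "multi to single" dimension reduction the abstract refers to; with more than two grades, one-versus-rest projections or multiple discriminant directions are used instead.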

  20. Women Status and their Discrimination

    OpenAIRE

    PEŠKOVÁ, Pavlína

    2008-01-01

    My work deals with women's status and their discrimination. Chapter one covers women's status in different historical periods and the development of their status towards greater equality with men; it also discusses present feminist trends. Chapter two concerns discrimination against women: job discrimination, job segregation by gender, and inequality in pay. It also covers women's status at home and the unequal division of household duties among family members. Chapter three is ab...

  1. Linear Minimum variance estimation fusion

    Institute of Scientific and Technical Information of China (English)

    ZHU Yunmin; LI Xianrong; ZHAO Juan

    2004-01-01

    This paper shows that a general multisensor unbiased linearly weighted estimation fusion is essentially the linear minimum variance (LMV) estimation with a linear equality constraint, and the general estimation fusion formula is developed by extending the Gauss-Markov estimation to the random parameter setting of distributed estimation fusion in the LMV framework. In this setting, the fused estimator is a weighted sum of local estimates, obtained from a matrix quadratic optimization problem subject to a convex linear equality constraint. Second, we present a unique solution to the above optimization problem, which depends only on the covariance matrix Ck. Third, if a priori information, the expectation and covariance, of the estimated quantity is unknown, a necessary and sufficient condition is presented for the above LMV fusion to become the best unbiased LMV estimation with known prior information as above. We also discuss the generality and usefulness of the LMV fusion formulas developed. Finally, we provide an off-line recursion of Ck for a class of multisensor linear systems with coupled measurement noises.
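
In the special case of uncorrelated local estimates, the LMV fusion described above reduces to inverse-variance weighting under the constraint that the weights sum to one; the paper's general formula additionally handles coupled measurement noises through the joint covariance Ck. A scalar sketch of the special case (sensor values are invented):

```python
import numpy as np

# Scalar LMV fusion of unbiased local estimates x_i with variances s_i,
# under the unbiasedness constraint sum(w_i) = 1. With uncorrelated local
# errors the optimal weights are inverse-variance weights, and the fused
# variance is below every local variance.
def lmv_fuse(estimates, variances):
    w = 1.0 / np.asarray(variances)
    w = w / w.sum()                        # enforce the equality constraint
    fused = w @ np.asarray(estimates)
    fused_var = 1.0 / np.sum(1.0 / np.asarray(variances))
    return fused, fused_var

x_local = [10.2, 9.8, 10.5]                # hypothetical local sensor estimates
s_local = [1.0, 0.5, 2.0]                  # their error variances
fused, fused_var = lmv_fuse(x_local, s_local)
print(fused, fused_var)
```

With correlated local errors the weights instead come from the full joint covariance (the Ck of the abstract), which is why the paper's matrix formulation is needed in general.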

  2. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong

    2017-05-14

    This thesis presents a general framework and method for detection of an object in a video based on apparent motion. The object moves, at some unknown time, differently than the “background” motion, which can be induced by camera motion. The goal of the proposed method is to detect and segment the object as soon as it moves, in an online manner. Since motion estimation can be unreliable between frames, more than two frames are needed to reliably detect the object. Observing more frames before declaring a detection may lead to a more accurate detection and segmentation, since more motion may be observed, leading to a stronger motion cue. However, this leads to greater delay. The proposed method is designed to detect the object(s) with minimum delay, i.e., frames after the object moves, constraining the false alarms, defined as declarations of detection before the object moves or incorrect or inaccurate segmentation at the detection time. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.

  3. Dose variation during solar minimum

    Energy Technology Data Exchange (ETDEWEB)

    Gussenhoven, M.S.; Mullen, E.G.; Brautigam, D.H. (Phillips Lab., Geophysics Directorate, Hanscom Air Force Base, MA (US)); Holeman, E. (Boston Univ., MA (United States). Dept. of Physics)

    1991-12-01

    In this paper, the authors use direct measurement of dose to show the variation in inner and outer radiation belt populations at low altitude from 1984 to 1987. This period includes the recent solar minimum that occurred in September 1986. The dose is measured behind four thicknesses of aluminum shielding and for two thresholds of energy deposition, designated HILET and LOLET. The authors calculate an average dose per day for each month of satellite operation. The authors find that the average proton (HILET) dose per day (obtained primarily in the inner belt) increased systematically from 1984 to 1987, and has a high anticorrelation with sunspot number when offset by 13 months. The average LOLET dose per day behind the thinnest shielding is produced almost entirely by outer zone electrons and varies greatly over the period of interest. If any trend can be discerned over the 4 year period it is a decreasing one. For shielding of 1.55 g/cm² (227 mil) Al or more, the LOLET dose is complicated by contributions from >100 MeV protons and bremsstrahlung.
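
The 13-month-offset anticorrelation reported above is a lagged correlation analysis. A sketch on synthetic monthly series (both series and the lag structure are fabricated stand-ins, not the satellite data):

```python
import numpy as np

# Synthetic monthly series: a smooth "sunspot" proxy and a "dose" series
# built to lag it by 13 months with opposite sign (both invented).
rng = np.random.default_rng(3)
months = np.arange(48)
sunspots = np.sin(2 * np.pi * months / 24) + 0.1 * rng.standard_normal(48)
dose = -np.roll(sunspots, 13) + 0.1 * rng.standard_normal(48)

def lagged_corr(a, b, lag):
    """Correlate a[t] with b[t - lag] on the overlapping span."""
    if lag > 0:
        a, b = a[lag:], b[:-lag]
    return np.corrcoef(a, b)[0, 1]

# Scan lags for the strongest anticorrelation.
best_lag = min(range(25), key=lambda L: lagged_corr(dose, sunspots, L))
print(best_lag, round(lagged_corr(dose, sunspots, best_lag), 2))
```

Scanning the lag that minimizes the correlation is how an offset such as the paper's 13 months would be identified from the two time series.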

  4. The role of teamworking in error reduction during vascular procedures.

    Science.gov (United States)

    Soane, Emma; Bicknell, Colin; Mason, Sarah; Godard, Kathleen; Cheshire, Nick

    2014-07-01

    To examine the associations between teamworking processes and error rates during vascular surgical procedures, and to make informed recommendations for future studies and practice in this area. This is a single-center observational pilot study. Twelve procedures were observed over a 3-week period by a trained observer. Errors were categorized using a standardized error capture tool. Leadership and teamworking processes were categorized based on the Malakis et al. (2010) framework. Data are expressed as frequencies, means, standard deviations and percentages. Error rates (per hour) were likely to be reduced when there were effective prebriefing measures to ensure that members were aware of their roles and responsibilities (4.50 vs. 5.39 errors/hr), when communications were kept to a practical and effective minimum (4.64 vs. 5.56 errors/hr), when the progress of surgery was communicated throughout (3.14 vs. 8.33 errors/hr), and when team roles changed during the procedure (3.17 vs. 5.97 errors/hr). Reduction of error rates is a critical goal for surgical teams. The present study of teamworking processes in this environment shows that there is variation that should be further examined. More effective teamworking could prevent or mitigate a range of errors. The development of vascular surgical team members should incorporate principles of teamworking and appropriate communication. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Finger dexterity and visual discrimination following two yoga breathing practices.

    Science.gov (United States)

    Telles, Shirley; Singh, Nilkamal; Balkrishna, Acharya

    2012-01-01

    Practicing yoga has been shown to improve motor functions and attention. Though attention is required for fine motor and discrimination tasks, the effect of yoga breathing techniques on fine motor skills and visual discrimination has not been assessed. The aim was to study the effect of yoga breathing techniques on finger dexterity and visual discrimination. The present study consisted of one hundred and forty subjects who had enrolled for stress management. They were randomly divided into two groups: one group practiced high frequency yoga breathing while the other group practiced breath awareness. High frequency yoga breathing (kapalabhati, breath rate 1.0 Hz) and breath awareness are two yoga practices which improve attention. The immediate effects of high frequency yoga breathing and breath awareness were assessed (i) on performance on the O'Connor finger dexterity task and (ii) on a shape and size discrimination task. There was a significant improvement in the finger dexterity task, by 19% after kapalabhati and 9% after breath awareness (P<0.001 in both cases, repeated measures ANOVA and post-hoc analyses). A significant reduction (P<0.001) in errors (41% after kapalabhati and 21% after breath awareness), as well as in the time taken to complete the shape and size discrimination test (15% after kapalabhati and 15% after breath awareness; P<0.001), was also observed. Both kapalabhati and breath awareness can improve fine motor skills and visual discrimination, with a greater magnitude of change after kapalabhati.

  6. Finger dexterity and visual discrimination following two yoga breathing practices

    Directory of Open Access Journals (Sweden)

    Shirley Telles

    2012-01-01

    Full Text Available Background: Practicing yoga has been shown to improve motor functions and attention. Though attention is required for fine motor and discrimination tasks, the effect of yoga breathing techniques on fine motor skills and visual discrimination has not been assessed. Aim: To study the effect of yoga breathing techniques on finger dexterity and visual discrimination. Materials and Methods: The present study consisted of one hundred and forty subjects who had enrolled for stress management. They were randomly divided into two groups: one group practiced high frequency yoga breathing while the other group practiced breath awareness. High frequency yoga breathing (kapalabhati, breath rate 1.0 Hz) and breath awareness are two yoga practices which improve attention. The immediate effects of high frequency yoga breathing and breath awareness were assessed (i) on performance on the O'Connor finger dexterity task and (ii) on a shape and size discrimination task. Results: There was a significant improvement in the finger dexterity task, by 19% after kapalabhati and 9% after breath awareness (P<0.001 in both cases, repeated measures ANOVA and post-hoc analyses). A significant reduction (P<0.001) in errors (41% after kapalabhati and 21% after breath awareness), as well as in the time taken to complete the shape and size discrimination test (15% after kapalabhati and 15% after breath awareness; P<0.001), was also observed. Conclusion: Both kapalabhati and breath awareness can improve fine motor skills and visual discrimination, with a greater magnitude of change after kapalabhati.

  7. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients, and mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that promote fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation centre that offers the possibility of continuous re-training in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  8. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  9. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To that end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis for the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of incurring in them.

  10. Weight discrimination and bullying.

    Science.gov (United States)

    Puhl, Rebecca M; King, Kelly M

    2013-04-01

    Despite significant attention to the medical impacts of obesity, often ignored are the negative outcomes that obese children and adults experience as a result of stigma, bias, and discrimination. Obese individuals are frequently stigmatized because of their weight in many domains of daily life. Research spanning several decades has documented consistent weight bias and stigmatization in employment, health care, schools, the media, and interpersonal relationships. For overweight and obese youth, weight stigmatization translates into pervasive victimization, teasing, and bullying. Multiple adverse outcomes are associated with exposure to weight stigmatization, including depression, anxiety, low self-esteem, body dissatisfaction, suicidal ideation, poor academic performance, lower physical activity, maladaptive eating behaviors, and avoidance of health care. This review summarizes the nature and extent of weight stigmatization against overweight and obese individuals, as well as the resulting consequences that these experiences create for social, psychological, and physical health for children and adults who are targeted. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Wepman Test of Auditory Discrimination: What Does it Discriminate?

    Science.gov (United States)

    Ross, Helen Warren

    1979-01-01

    This study investigated auditory discrimination as a function of ethnic group membership within the same socioeconomic status (SES). Subjects were 126 six-year-old students attending schools in a lower SES community. Contrary to previous findings, there were no differences between the groups on the Wepman Test of Auditory Discrimination. (Author)

  12. Perceived discrimination: why applicants and employees expect and perceive discrimination

    NARCIS (Netherlands)

    Abu Ghazaleh, N.

    2012-01-01

    In this dissertation we have investigated perceptions of discrimination. We have shown that discrimination exists in the eyes of applicants and employees, especially when they belong to an ethnic minority group. There are psychological variables that influence these perceptions differently for minority and

  13. Causal Link between Solar Variability and Climate Anomalies in East Asia during the Maunder Minimum

    Science.gov (United States)

    Sakashita, W.; Yokoyama, Y.; Miyahara, H.; Yonenobu, H.; Ohyama, M.; Hoshino, Y.; Nakatsuka, T.

    2011-12-01

    There has been discussion that past climate changes have a causal connection with solar variations. However, it is very difficult to discriminate solar-related variability from internally caused, similar variations in climate records. Our previous studies have shown that solar activity was unique during the Maunder Minimum (A.D. 1645-1715), the prolonged sunspot absence that may have contributed to the Little Ice Age (LIA). It has been revealed, based on tree-ring Δ14C and ice-core 10Be, that the Sun had cycles a few years longer (14 and 28 years) than those of today (11 and 22 years), and that GCRs had significant 28 year variations associated with the magnetic polarity reversals. Those periodic variations are very useful for distinguishing solar-related variability from other internally caused, similar variations, and especially for identifying the effect of GCRs during the LIA. For this purpose, tree-ring isotopes (Δ14C, δ18O) are useful, as GCR variability can be directly compared with climate variations without any dating error. In our previous study, annual δ18O variations in tree-ring cellulose from central Japan were investigated for the Maunder Minimum and compared with the tree-ring Δ14C record. The tree-ring δ18O record shows distinct negative δ18O spikes (wetter rainy seasons) coinciding with rapid cooling in Greenland and with decreases in Northern Hemisphere mean temperature. These climate signals have shown strong correlation with the Δ14C positive anomaly and with the changes in the polarity of the solar dipole magnetic field, suggesting a causal link to GCRs. We have also investigated the annual δ18O variability in tree-ring cellulose from Taiwan [24.3°N, 121.3°E] and from Mie, Japan [34.3°N, 136.4°E] for the same period to understand the spatial distributions of climate variations associated with GCR anomalies. In this paper, we report the preliminary results of our measurements.

  14. Planar straightness error evaluation based on particle swarm optimization

    Science.gov (United States)

    Mao, Jian; Zheng, Huawen; Cao, Yanlong; Yang, Jiangxin

    2006-11-01

    The straightness error generally refers to the deviation between an actual line and an ideal line. According to the characteristics of planar straightness error evaluation, a novel method to evaluate planar straightness errors based on particle swarm optimization (PSO) is proposed. The planar straightness error evaluation problem is formulated as a nonlinear optimization problem. According to the minimum zone condition, the mathematical model of planar straightness, together with the optimal objective function and fitness function, is developed. Compared with the genetic algorithm (GA), the PSO algorithm has some advantages: it is implemented without crossover and mutation, it has a fast convergence speed, and fewer parameters need to be set up. The results show that the PSO method is very suitable for nonlinear optimization problems and provides a promising new method for straightness error evaluation. It can be applied to deal with the measured data of planar straightness obtained by three-coordinate measuring machines.
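    A minimal sketch of the approach described above (an assumed re-implementation, not the authors' code): a candidate reference line y = a·x + b is scored by the minimum-zone band width enclosing all measured points, and a plain PSO searches for the best (a, b). All parameter values (swarm size, inertia, acceleration coefficients) are illustrative.

```python
import math
import random

def straightness_error(params, pts):
    # Normalized signed distance of each point from the line y = a*x + b;
    # the minimum-zone straightness error is the width of the enclosing band.
    a, b = params
    d = [(y - a * x - b) / math.hypot(a, 1.0) for x, y in pts]
    return max(d) - min(d)

def pso(objective, pts, n_particles=30, iters=200, seed=0):
    rng = random.Random(seed)
    dim = 2
    pos = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p, pts) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][k] = (w * vel[i][k]
                             + c1 * r1 * (pbest[i][k] - pos[i][k])
                             + c2 * r2 * (gbest[k] - pos[i][k]))
                pos[i][k] += vel[i][k]
            val = objective(pos[i], pts)
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

    For nearly collinear measured points the swarm converges to a band of near-zero width; no crossover or mutation operators are needed, which is the advantage over the GA noted above.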

  15. Error Control of Iterative Linear Solvers for Integrated Groundwater Models

    CERN Document Server

    Dixon, Matthew; Brush, Charles; Chung, Francis; Dogrul, Emin; Kadir, Tariq

    2010-01-01

    An open problem that arises when using modern iterative linear solvers, such as the preconditioned conjugate gradient (PCG) method or the Generalized Minimum RESidual (GMRES) method, is how to choose the residual tolerance in the linear solver so that it is consistent with the tolerance on the solution error. This problem is especially acute for integrated groundwater models, which are implicitly coupled to another model, such as a surface water model, and resolve both multiple scales of flow and temporal interaction terms, giving rise to linear systems with variable scaling. This article uses the theory of 'forward error bound estimation' to show how rescaling the linear system affects the correspondence between the residual error in the preconditioned linear system and the solution error. Using examples of linear systems from models developed using the USGS GSFLOW package and the California State Department of Water Resources' Integrated Water Flow Model (IWFM), we observe that this error bound guides the choice of a prac...
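    A hedged, self-contained illustration of the standard forward error bound this abstract builds on, ‖x − x̃‖/‖x‖ ≤ κ(A)·‖r‖/‖b‖, using a small synthetic SPD system rather than the GSFLOW/IWFM matrices themselves:

```python
import numpy as np

# Forward error bound: ||x - x_approx|| / ||x|| <= cond(A) * ||r|| / ||b||.
# A small synthetic symmetric positive definite system stands in for a
# groundwater-model matrix.
rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD, moderately conditioned
x_true = rng.standard_normal(n)
b = A @ x_true

x_approx = x_true + 1e-6 * rng.standard_normal(n)  # a hypothetical iterate
r = b - A @ x_approx                               # its residual

rel_residual = np.linalg.norm(r) / np.linalg.norm(b)
bound = np.linalg.cond(A) * rel_residual
actual = np.linalg.norm(x_true - x_approx) / np.linalg.norm(x_true)
# 'actual' never exceeds 'bound': a residual tolerance only guarantees a
# solution-error tolerance after accounting for cond(A).
```

    Rescaling the system changes cond(A), and with it how tightly a residual tolerance controls the solution error, which is the correspondence the article analyzes.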

  16. Perceived weight discrimination and obesity.

    Directory of Open Access Journals (Sweden)

    Angelina R Sutin

    Full Text Available Weight discrimination is prevalent in American society. Although associated consistently with psychological and economic outcomes, less is known about whether weight discrimination is associated with longitudinal changes in obesity. The objectives of this research are (1) to test whether weight discrimination is associated with risk of becoming obese (Body Mass Index≥30; BMI) by follow-up among those not obese at baseline, and (2) to test whether weight discrimination is associated with risk of remaining obese at follow-up among those already obese at baseline. Participants were drawn from the Health and Retirement Study, a nationally representative longitudinal survey of community-dwelling US residents. A total of 6,157 participants (58.6% female) completed the discrimination measure and had weight and height available from the 2006 and 2010 assessments. Participants who experienced weight discrimination were approximately 2.5 times more likely to become obese by follow-up (OR = 2.54, 95% CI = 1.58-4.08) and participants who were obese at baseline were three times more likely to remain obese at follow up (OR = 3.20, 95% CI = 2.06-4.97) than those who had not experienced such discrimination. These effects held when controlling for demographic factors (age, sex, ethnicity, education) and when baseline BMI was included as a covariate. These effects were also specific to weight discrimination; other forms of discrimination (e.g., sex, race) were unrelated to risk of obesity at follow-up. The present research demonstrates that, in addition to poorer mental health outcomes, weight discrimination has implications for obesity. Rather than motivating individuals to lose weight, weight discrimination increases risk for obesity.

  17. Perceived weight discrimination and obesity.

    Science.gov (United States)

    Sutin, Angelina R; Terracciano, Antonio

    2013-01-01

    Weight discrimination is prevalent in American society. Although associated consistently with psychological and economic outcomes, less is known about whether weight discrimination is associated with longitudinal changes in obesity. The objectives of this research are (1) to test whether weight discrimination is associated with risk of becoming obese (Body Mass Index≥30; BMI) by follow-up among those not obese at baseline, and (2) to test whether weight discrimination is associated with risk of remaining obese at follow-up among those already obese at baseline. Participants were drawn from the Health and Retirement Study, a nationally representative longitudinal survey of community-dwelling US residents. A total of 6,157 participants (58.6% female) completed the discrimination measure and had weight and height available from the 2006 and 2010 assessments. Participants who experienced weight discrimination were approximately 2.5 times more likely to become obese by follow-up (OR = 2.54, 95% CI = 1.58-4.08) and participants who were obese at baseline were three times more likely to remain obese at follow up (OR = 3.20, 95% CI = 2.06-4.97) than those who had not experienced such discrimination. These effects held when controlling for demographic factors (age, sex, ethnicity, education) and when baseline BMI was included as a covariate. These effects were also specific to weight discrimination; other forms of discrimination (e.g., sex, race) were unrelated to risk of obesity at follow-up. The present research demonstrates that, in addition to poorer mental health outcomes, weight discrimination has implications for obesity. Rather than motivating individuals to lose weight, weight discrimination increases risk for obesity.

  18. Minimum complexity echo state network.

    Science.gov (United States)

    Rodan, Ali; Tino, Peter

    2011-01-01

    Reservoir computing (RC) refers to a new class of state-space models with a fixed state transition structure (the reservoir) and an adaptable readout from the state space. The reservoir is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be exploited by the reservoir-to-output readout mapping. The field of RC has been growing rapidly with many successful applications. However, RC has been criticized for not being principled enough. Reservoir construction is largely driven by a series of randomized model-building stages, with both researchers and practitioners having to rely on trial and error. To initialize a systematic study of the field, we concentrate on one of the most popular classes of RC methods, namely the echo state network, and ask: What is the minimal complexity of reservoir construction for obtaining competitive models, and what is the memory capacity (MC) of such simplified reservoirs? On a number of widely used time series benchmarks of different origin and characteristics, as well as by conducting a theoretical analysis, we show that a simple deterministically constructed cycle reservoir is comparable to the standard echo state network methodology. The (short-term) MC of linear cyclic reservoirs can be made arbitrarily close to the proved optimal value.
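    A minimal sketch (not the authors' code) of such a deterministically constructed cycle reservoir with a ridge-regression readout, used for one-step-ahead prediction; reservoir size, cycle weight and input-weight magnitude are illustrative assumptions:

```python
import numpy as np

def scr_esn(u, washout=50, n_res=50, r=0.9, v=0.5, ridge=1e-6, seed=0):
    """Simple cycle reservoir ESN: predict u[t+1] from the state at time t."""
    rng = np.random.default_rng(seed)
    # Reservoir: a single cycle, every weight equal to r (deterministic).
    W = np.zeros((n_res, n_res))
    for i in range(n_res):
        W[(i + 1) % n_res, i] = r
    # Input weights: identical magnitude v, pseudo-random signs.
    w_in = v * np.where(rng.random(n_res) < 0.5, -1.0, 1.0)
    # Drive the reservoir and collect states.
    x = np.zeros(n_res)
    states = []
    for t in range(len(u) - 1):
        x = np.tanh(W @ x + w_in * u[t])
        states.append(x.copy())
    X = np.array(states[washout:])      # discard transient states
    y = u[washout + 1:]                 # one-step-ahead targets
    # Ridge-regression readout (the only trained part of the model).
    w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    return X @ w_out, y
```

    The point of the construction is that no randomized reservoir search is needed: the cycle topology and a single weight value already yield a competitive model.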

  19. Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates

    DEFF Research Database (Denmark)

    Gál, Anna; Hansen, Kristoffer Arnsfelt; Koucký, Michal;

    2011-01-01

    We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^n → {0,1}^{O(n)} with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: (1) If d = 2 then w = Θ(n (log n / log log n)^2). (2) If d = 3 then w = Θ(n lg lg n). (3...

  20. Orwell's Instructive Errors

    Science.gov (United States)

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  1. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  2. How Do Alternative Minimum Wage Variables Compare?

    OpenAIRE

    Sara Lemos

    2005-01-01

    Several minimum wage variables have been suggested in the literature. Such a variety of variables makes it difficult to compare the associated estimates across studies. One problem is that these estimates are not always calibrated to represent the effect of a 10% increase in the minimum wage. Another problem is that these estimates measure the effect of the minimum wage on the employment of different groups of workers. In this paper we critically compare employment effect estimates using five...

  3. Minimum wages, globalization and poverty in Honduras

    OpenAIRE

    Gindling, T. H.; Terrell, Katherine

    2008-01-01

    To be competitive in the global economy, some argue that Latin American countries need to reduce or eliminate labour market regulations such as minimum wage legislation because they constrain job creation and hence increase poverty. On the other hand, minimum wage increases can have a direct positive impact on family income and may therefore help to reduce poverty. We take advantage of a complex minimum wage system in a poor country that has been exposed to the forces of globalization to test...

  4. Variable Selection in Discriminant Analysis.

    Science.gov (United States)

    Huberty, Carl J.; Mourad, Salah A.

    Methods for ordering and selecting variables for discriminant analysis in multiple group comparison or group prediction studies include: univariate Fs, stepwise analysis, learning discriminant function (LDF) variable correlations, communalities, LDF standardized coefficients, and weighted standardized coefficients. Five indices based on distance,…

  5. Discrimination against Muslim American Adolescents

    Science.gov (United States)

    Aroian, Karen J.

    2012-01-01

    Although there is ample evidence of discrimination toward Muslim Americans in general, there is limited information specific to Muslim American adolescents. The few existing studies specific to this age group suggest that Muslim American adolescents encounter much discrimination from teachers, school administrators, and classmates. This…

  6. Price Discrimination in Academic Journals.

    Science.gov (United States)

    Joyce, Patrick; Merz, Thomas E.

    1985-01-01

    Analysis of price discrimination (charging different prices to different customers for same product) for 89 academic journals in 6 disciplines reveals: incidence of price discrimination rose between 1974 and 1984, increase in mean institutional (library) subscription price exceeded increase in mean individual subscription price. Journal list…

  7. Meaning discrimination in bilingual dictionaries.

    Science.gov (United States)

    IANNUCCI, JAMES E.

    Semantic discrimination of polysemous entry words in bilingual dictionaries was discussed in the paper. Handicaps of present bilingual dictionaries and barriers to their full utilization were enumerated. The author concluded that (1) a bilingual dictionary should have a discrimination for every translation of an entry word which has several…

  8. Children's Perceptions of Gender Discrimination

    Science.gov (United States)

    Brown, Christia Spears; Bigler, Rebecca S.

    2004-01-01

    Children (N = 76; ages 5-10 years) participated in a study designed to examine perceptions of gender discrimination. Children were read scenarios in which a teacher determined outcomes for 2 students (1 boy and 1 girl). Contextual information (i.e., teacher's past behavior), the gender of the target of discrimination (i.e., student), and the…

  10. Addressing Discrimination in School Matters!

    Science.gov (United States)

    Sullivan, Amanda L.

    2009-01-01

    Every student has the right to an education free from discrimination that provides high-quality, equitable opportunities to learn. Unfortunately, sometimes individuals or systems may act in ways that violate this right. Discrimination occurs when people are treated unequally or less favorably than others because of some real or perceived…

  11. Invidious Discrimination: Second Generation Issues

    Science.gov (United States)

    Simpson, Robert J.; Dee, Paul

    1976-01-01

    Discusses school law issues dealing with various forms of invidious discrimination. Considers discrimination based on forms of involuntary association (ethnicity, economic status, primary language, and maturity) and forms of voluntary association (sexual proclivity, marital status, pregnancy and parenthood, self-expression and appearance, religion…

  12. Perceptions of Discrimination during Downsizing.

    Science.gov (United States)

    Larkey, Linda Kathryn

    1993-01-01

    Demonstrates that perceptions of ethnic discrimination during layoffs are moderately correlated with perceptions of selection fairness and information access during the layoff process. Shows that, in the company studied, both minority and majority ethnic group members felt equally discriminated against. (SR)

  13. Vibrotactile Discrimination of Musical Timbre

    Science.gov (United States)

    Russo, Frank A.; Ammirante, Paolo; Fels, Deborah I.

    2012-01-01

    Five experiments investigated the ability to discriminate between musical timbres based on vibrotactile stimulation alone. Participants made same/different judgments on pairs of complex waveforms presented sequentially to the back through voice coils embedded in a conforming chair. Discrimination between cello, piano, and trombone tones matched…

  14. Perceived discrimination in the Netherlands

    NARCIS (Netherlands)

    Iris Andriessen; Henk Fernee; Karin Wittebrood

    2014-01-01

    Only available in electronic version There is no systematic structure in the Netherlands for mapping out the discrimination experiences of different groups in different areas of society. As in many other countries, discrimination studies in the Netherlands mostly focus on the experiences

  16. Effect of Pressure on Minimum Fluidization Velocity

    Institute of Scientific and Technical Information of China (English)

    Zhu Zhiping; Na Yongjie; Lu Qinggang

    2007-01-01

    Minimum fluidization velocities of quartz sand and glass beads under pressures of 0.5, 1.0, 1.5 and 2.0 MPa were investigated. The minimum fluidization velocity decreases with increasing pressure, and the influence of pressure on the minimum fluidization velocity is stronger for larger particles than for smaller ones. Based on the test results and the Ergun equation, an empirical equation for the minimum fluidization velocity is proposed, and its calculation results are comparable to other researchers' results.
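    The paper's own empirical equation is not reproduced here. As a generic illustration of the same calculation, the widely used Wen and Yu simplification of the Ergun equation estimates the minimum fluidization velocity from the Archimedes number; the fluid properties below are illustrative values for air around ambient and elevated pressure:

```python
import math

def u_mf(d_p, rho_p, rho_g, mu_g, g=9.81):
    """Minimum fluidization velocity (m/s), Wen & Yu correlation:
    Re_mf = sqrt(33.7**2 + 0.0408 * Ar) - 33.7."""
    ar = rho_g * (rho_p - rho_g) * g * d_p ** 3 / mu_g ** 2   # Archimedes number
    re_mf = math.sqrt(33.7 ** 2 + 0.0408 * ar) - 33.7
    return re_mf * mu_g / (rho_g * d_p)

# Raising the pressure raises the gas density, so u_mf for a 1 mm quartz
# particle in air drops between roughly 1 bar and 20 bar, consistent with
# the decreasing trend reported above.
u_1bar = u_mf(1e-3, 2650.0, 1.2, 1.8e-5)
u_20bar = u_mf(1e-3, 2650.0, 24.0, 1.8e-5)
```

    For large particles the inertial term dominates (Re_mf ∝ √Ar), which is why the pressure effect is stronger for larger particles, as the abstract notes.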

  17. 7 CFR 35.11 - Minimum requirements.

    Science.gov (United States)

    2010-01-01

    ..., Denmark, East Germany, England, Finland, France, Greece, Hungary, Iceland, Ireland, Italy, Liechtenstein..., Switzerland, Wales, West Germany, Yugoslavia), or Greenland shall meet each applicable minimum requirement...

  18. On the assimilation-discrimination relationship in American English adults’ French vowel learning

    Science.gov (United States)

    Levy, Erika S.

    2009-01-01

    A quantitative “cross-language assimilation overlap” method for testing predictions of the Perceptual Assimilation Model (PAM) was implemented to compare results of a discrimination experiment with the listeners’ previously reported assimilation data. The experiment examined discrimination of Parisian French (PF) front rounded vowels /y/ and /œ/. Three groups of American English listeners differing in their French experience (no experience [NoExp], formal experience [ModExp], and extensive formal-plus-immersion experience [HiExp]) performed discrimination of PF /y-u/, /y-o/, /œ-o/, /œ-u/, /y-i/, /y-ɛ/, /œ-ɛ/, /œ-i/, /y-œ/, /u-i/, and /a-ɛ/. Vowels were in bilabial /rabVp/ and alveolar /radVt/ contexts. More errors were found for PF front vs back rounded vowel pairs (16%) than for PF front unrounded vs rounded pairs (2%). Overall, ModExp listeners did not perform more accurately (11% errors) than NoExp listeners (13% errors). Extensive immersion experience, however, was associated with fewer errors (3%) than formal experience alone, although discrimination of PF /y-u/ remained relatively poor (12% errors) for HiExp listeners. More errors occurred on pairs involving front vs back rounded vowels in alveolar context (20% errors) than in bilabial (11% errors). Significant correlations were revealed between listeners’ assimilation overlap scores and their discrimination errors, suggesting that the PAM may be extended to second-language (L2) vowel learning. PMID:19894844

  19. Children's perceptions of gender discrimination.

    Science.gov (United States)

    Spears Brown, Christia; Bigler, Rebecca S

    2004-09-01

    Children (N = 76; ages 5-10 years) participated in a study designed to examine perceptions of gender discrimination. Children were read scenarios in which a teacher determined outcomes for 2 students (1 boy and 1 girl). Contextual information (i.e., teacher's past behavior), the gender of the target of discrimination (i.e., student), and the gender of the perpetrator (i.e., teacher) were manipulated. Results indicated that older children were more likely than younger children to make attributions to discrimination when contextual information suggested that it was likely. Girls (but not boys) were more likely to view girls than boys as victims of discrimination, and children with egalitarian gender attitudes were more likely to perceive discrimination than were their peers.

  20. Long-Term Capital Goods Importation and Minimum Wage Relationship in Turkey: Bounds Testing Approach

    Directory of Open Access Journals (Sweden)

    Tastan Serkan

    2015-04-01

    Full Text Available In order to examine the long-term relationship between capital goods importation and the minimum wage, the autoregressive distributed lag (ARDL) bounds testing approach to cointegration is used in the study. According to the bounds test results, a cointegration relation exists between capital goods importation and the minimum wage. Therefore an ARDL(4,0) model is estimated in order to determine the long and short term relations between the variables. According to the empirical analysis, there is a positive and significant relationship between capital goods importation and the minimum wage in Turkey in the long term. A 1% increase in the minimum wage leads to a 0.8% increase in capital goods importation in the long term. The result is similar for the short term coefficients: the relationship observed in the long term is preserved in the short term, though at a lower level. In terms of the error correction model, it can be concluded that the error correction mechanism works, as the error correction term is negative and significant. Short term deviations are resolved through the error correction mechanism in the long term. Accordingly, approximately 75% of any deviation from equilibrium arising in the previous six-month period is resolved in the current six-month period. This means that the return to long term equilibrium progresses rapidly.
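    A toy numerical illustration (not the paper's estimated model) of what a significant error-correction coefficient of about −0.75 per six-month period implies for how quickly deviations from the long-run equilibrium die out:

```python
# Error-correction dynamics with coefficient -0.75: each six-month period
# removes 75% of the remaining deviation from the long-run equilibrium.
dev = 1.0          # initial deviation from equilibrium (normalized)
path = []
for _ in range(4):
    dev += -0.75 * dev    # error-correction adjustment for one period
    path.append(dev)
# path -> [0.25, 0.0625, 0.015625, 0.00390625]: after one period 25% of the
# deviation remains, after two periods about 6%, and so on.
```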

  1. A Short Introduction to Model Selection, Kolmogorov Complexity and Minimum Description Length (MDL)

    NARCIS (Netherlands)

    Nannen, Volker

    2010-01-01

    The concept of overfitting in model selection is explained and demonstrated. After providing some background information on information theory and Kolmogorov complexity, we provide a short explanation of Minimum Description Length and error minimization. We conclude with a discussion of the typical
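    The fit-versus-complexity trade-off can be sketched with a crude two-part code (essentially the BIC approximation to MDL); the synthetic data, noise level and degree range below are illustrative assumptions, not from the cited work:

```python
import numpy as np

# Two-part description length of a polynomial model of the data:
# cost of the residuals (data given model) plus cost of the coefficients.
rng = np.random.default_rng(1)
n = 200
x = np.linspace(-1.0, 1.0, n)
y = 1.0 - 2.0 * x + 0.5 * x**2 + 0.1 * rng.standard_normal(n)  # true degree: 2

def description_length(degree):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 1                                   # number of parameters
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

best_degree = min(range(9), key=description_length)
# Low degrees underfit (large residual cost); high degrees overfit, since the
# tiny residual gain no longer pays for the extra parameters.
```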

  2. JUSTIFICATION FOR INDIRECT DISCRIMINATION IN EU

    OpenAIRE

    Catalina-Adriana IVANUS

    2014-01-01

    The right to non-discrimination is very important for a civilized society. EU legislation establishes direct and indirect discrimination, harassment, sexual harassment, instruction to discriminate and any less favourable treatment of a woman related to pregnancy or maternity leave as forms of discrimination. The law and the Court of Justice permit the justification of indirect discrimination.

  3. Studies in genetic discrimination. Final progress report

    Energy Technology Data Exchange (ETDEWEB)

    1994-06-01

    We have screened 1006 respondents in a study of genetic discrimination. Analysis of these responses has produced evidence of the range of institutions engaged in genetic discrimination and demonstrates the impact of this discrimination on the respondents to the study. We have found that both ignorance and policy underlie genetic discrimination and that anti-discrimination laws are being violated.

  4. Patient error: a preliminary taxonomy.

    NARCIS (Netherlands)

    Buetow, S.; Kiata, L.; Liew, T.; Kenealy, T.; Dovey, S.; Elwyn, G.

    2009-01-01

    PURPOSE: Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary ca

  5. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
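    The core idea can be sketched with a toy interval type; note that a rigorous system such as INTLAB also rounds endpoints outward at every floating-point operation, which this illustration omits:

```python
class Interval:
    """Toy interval arithmetic: an enclosure [lo, hi] of an uncertain value.
    Each operation returns an interval containing every possible result."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Endpoint products cover all sign combinations.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def width(self):
        return self.hi - self.lo

# Usage: propagate a measurement uncertainty through a formula.
x = Interval(2.0, 2.1)        # a quantity known only to within 0.1
y = Interval(-1.0, -0.9)
z = x * y                     # encloses every product of values in x and y
```

    The width of the resulting interval is the automatic error estimate; for complicated formulas this replaces the manual bookkeeping of standard error-propagation rules.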

  6. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  7. Aftershock Characteristics as a Means of Discriminating Explosions from Earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Ford, S R; Walter, W R

    2009-05-20

The behavior of aftershock sequences around the Nevada Test Site in the southern Great Basin is characterized as a potential discriminant between explosions and earthquakes. The aftershock model designed by Reasenberg and Jones (1989, 1994) allows for a probabilistic statement of earthquake-like aftershock behavior at any time after the mainshock. We use this model to define two types of aftershock discriminants. The first defines M{sub X}, the minimum magnitude of an aftershock expected within a given duration after the mainshock with probability X. Of the 67 earthquakes with M > 4 in the study region, 63 produce an aftershock greater than M{sub 99} within the first seven days after the mainshock. This contrasts with only six of 93 explosions with M > 4 that produce an aftershock greater than M{sub 99} in the same period. If the aftershock magnitude threshold is lowered and the M{sub 90} criterion is used, then no explosions produce an aftershock greater than M{sub 90} for durations that end more than 17 days after the mainshock. The other discriminant defines N{sub X}, the minimum cumulative number of aftershocks expected for a given time after the mainshock with probability X. Similar to the aftershock magnitude discriminant, five earthquakes do not produce more aftershocks than N{sub 99} within 7 days after the mainshock, whereas within the same period all but one explosion produce fewer aftershocks than N{sub 99}. One more explosion is added if the duration is shortened to two days after the mainshock. The cumulative-number aftershock discriminant is more reliable, especially at short durations, but requires a low magnitude of completeness for the given earthquake catalog. These results at NTS are quite promising and should be evaluated at other nuclear test sites to understand the effects of differences in geologic setting and nuclear testing practices on performance.
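The probabilistic construction behind M{sub X} and N{sub X} can be illustrated with a short calculation. This is a sketch only: the Reasenberg-Jones parameter values below are generic defaults, not the NTS-specific fit used in the record above.

```python
import math

def expected_aftershocks(m_main, m_min, t1, t2,
                         a=-1.67, b=0.91, c=0.05, p=1.08):
    """Expected number of aftershocks with magnitude >= m_min in the window
    [t1, t2] days after a mainshock of magnitude m_main, using the
    Reasenberg-Jones rate 10**(a + b*(m_main - m_min)) * (t + c)**(-p).
    Parameter values are generic illustrative defaults."""
    rate = 10.0 ** (a + b * (m_main - m_min))
    # closed-form integral of (t + c)**(-p) over [t1, t2], valid for p != 1
    integral = ((t2 + c) ** (1.0 - p) - (t1 + c) ** (1.0 - p)) / (1.0 - p)
    return rate * integral

def prob_at_least_one(m_main, m_min, t1, t2):
    """Probability of at least one aftershock >= m_min in the window,
    assuming Poisson occurrence."""
    return 1.0 - math.exp(-expected_aftershocks(m_main, m_min, t1, t2))
```

A threshold such as M{sub 99} is then the m_min at which this probability crosses 0.99 for the chosen observation window.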

  8. Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2015-01-01

… -) MFCC’s, autoregressive-moving-average (ARMA)-filtered CMSMFCC’s, velocity, and acceleration coefficients. In addition, the method is easily modified to take into account other compressive non-linearities than the logarithm traditionally used for MFCC computation. In terms of MFCC estimation performance … state-of-the-art MFCC feature enhancement algorithms within this class of algorithms, while theoretically suboptimal or based on theoretically inconsistent assumptions, perform close to optimally in the MMSE sense.

  9. Minimum mean square error estimation and approximation of the Bayesian update

    KAUST Repository

    Litvinenko, Alexander

    2015-01-07

Given: a physical system modeled by a PDE or ODE with uncertain coefficient q(w), and a measurement operator Y(u(q); q), where u(q; w) is the uncertain solution. Aim: to identify q(w). The mapping from parameters to observations is usually not invertible, hence this inverse identification problem is generally ill-posed. To identify q(w) we derived a non-linear Bayesian update from the variational problem associated with conditional expectation. To reduce the cost of the Bayesian update we offer a functional approximation, e.g. polynomial chaos expansion (PCE). New: we derive linear, quadratic, etc., approximations of the full Bayesian update.
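In the Gaussian case with a linear observation, the linear approximation of this Bayesian update reduces to the familiar Kalman-type formula. The sketch below assumes a linear measurement y = Hq + noise with known covariances; the names and setup are illustrative, not the record's general nonlinear formulation.

```python
import numpy as np

def linear_mmse_update(mu_q, C_q, H, C_noise, y):
    """Linear MMSE (Kalman-type) update of a Gaussian prior N(mu_q, C_q)
    on the parameter q, given a linear measurement y = H q + noise with
    noise covariance C_noise."""
    S = H @ C_q @ H.T + C_noise        # innovation covariance
    K = C_q @ H.T @ np.linalg.inv(S)   # MMSE gain
    mu_post = mu_q + K @ (y - H @ mu_q)
    C_post = C_q - K @ H @ C_q         # posterior covariance
    return mu_post, C_post
```

For a scalar prior N(0, 1), unit observation operator and unit noise variance, a measurement y = 2 yields the textbook posterior N(1, 0.5).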

  10. INTERFERENCE REJECTION OF SIGNALS BY ADAPTIVE MINIMUM MEAN SQUARE ERROR CRITERION OVER RAYLEIGH FADING CHANNELS

    Directory of Open Access Journals (Sweden)

    AMITA SONI

    2009-03-01

Full Text Available Channel time-variation (fading) is a major source of impairment in digital wireless communications, arising from mobility of the user or of objects in the propagation environment. Limited spectral bandwidth necessitates resource-sharing schemes among multiple users, and sharing the transmission medium leads to interference, notably multiple-access interference. This paper studies methods to mitigate such interference over Rayleigh fading channels. CDMA is under active research as a viable alternative to TDMA and FDMA, but its performance is limited by narrowband and multiple-access interference. Among the various mitigation methods, the linear MMSE detector is considered here: the MMSE technique rejects interference, and its adaptive form, applied to Rayleigh fading channels that are reflective and nondispersive, yields better results than conventional detection.
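A common way to approach the MMSE solution adaptively is the LMS algorithm. The sketch below is illustrative only: it uses a system-identification scenario with a known training signal and hypothetical channel taps, not the paper's CDMA setup.

```python
import numpy as np

def lms(x, d, n_taps, mu):
    """Least-mean-squares adaptation: drive the filter weights w toward the
    MMSE solution by stochastic gradient descent on the error d[n] - w @ u."""
    w = np.zeros(n_taps)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # newest sample first
        e = d[n] - w @ u                   # instantaneous error
        w += mu * e * u                    # stochastic-gradient step
    return w

# Illustrative use: identify a hypothetical 3-tap channel from training data.
rng = np.random.default_rng(0)
h = np.array([1.0, -0.5, 0.25])           # "unknown" channel (assumed)
x = rng.standard_normal(20000)            # transmitted training sequence
d = np.convolve(x, h)[:len(x)]            # received (desired) signal
w = lms(x, d, n_taps=3, mu=0.01)          # w converges toward h
```

With white training input and a small step size, the weights converge to the Wiener (MMSE) solution, here the channel taps themselves.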

  11. Error bars in experimental biology.

    Science.gov (United States)

    Cumming, Geoff; Fidler, Fiona; Vaux, David L

    2007-04-09

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
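The three quantities most often drawn as error bars can be computed directly. A minimal sketch, with the caveat that the default t critical value (2.262, the two-sided 95% value for 9 degrees of freedom) is only valid for samples of size 10:

```python
import numpy as np

def error_bar_quantities(samples, t_crit=2.262):
    """Return (SD, SEM, 95% CI half-width) for a sample.
    t_crit = 2.262 assumes n = 10; substitute the t value for your n."""
    sd = np.std(samples, ddof=1)         # sample standard deviation
    sem = sd / np.sqrt(len(samples))     # standard error of the mean
    return sd, sem, t_crit * sem         # CI half-width = t * SEM
```

The three bars answer different questions (spread of the data, precision of the mean, plausible range for the true mean), which is why the figure legend must say which one is plotted.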

  12. Conclusive discrimination among N equidistant pure states

    Energy Technology Data Exchange (ETDEWEB)

    Roa, Luis; Hermann-Avigliano, Carla; Salazar, R. [Departamento de Fisica, Universidad de Concepcion, Barrio Universitario, Casilla 160-C, Concepcion (Chile); Klimov, A. B. [Departamento de Fisica, Universidad de Guadalajara, Revolucion 1500, 44420 Guadalajara, Jalisco (Mexico)

    2011-07-15

    We find the allowed complex overlaps for N equidistant pure quantum states. The accessible overlaps define a petal-shaped area on the Argand plane. Each point inside the petal represents a set of N linearly independent pure states and each point on its contour represents a set of N linearly dependent pure states. We find the optimal probabilities of success of discriminating unambiguously in which of the N equidistant states the system is. We show that the phase of the involved overlap plays an important role in the probability of success. For a fixed overlap modulus, the success probability is highest for the set of states with an overlap with phase equal to zero. In this case, if the process fails, then the information about the prepared state is lost. For states with a phase different from zero, the information could be obtained with an error-minimizing measurement protocol.
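For the two-state special case, the optimal unambiguous-discrimination success probability has a simple closed form, the Ivanovic-Dieks-Peres bound, which serves as a sanity check for N-state results like the one above:

```python
import numpy as np

def idp_success(psi1, psi2):
    """Optimal success probability for unambiguously discriminating two
    equiprobable pure states: P = 1 - |<psi1|psi2>| (IDP bound)."""
    return 1.0 - abs(np.vdot(psi1, psi2))
```

Orthogonal states are discriminated with certainty, identical states never, and the probability degrades linearly in the overlap modulus in between.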

  13. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. Errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. The MPEG-2-compliant codec described here uses data hiding to transmit error correction information, together with several error concealment techniques in the decoder. The decoder resynchronizes more quickly and with fewer errors than traditional resynchronization techniques, and it allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  14. Nursing students' medication errors and their opinions on the reasons of errors: A cross-sectional survey.

    Science.gov (United States)

    Cebeci, Fatma; Karazeybek, Ebru; Sucu, Gulten; Kahveci, Rabia

    2015-05-01

To determine the number and type of medication administration errors made by nursing students, and to explore the rate of reporting, emotions after the errors and the causes of errors. The cross-sectional study was conducted at the two schools of nursing, Akdeniz University, Antalya, Turkey, in February 2009, and comprised students having worked in hospital settings for a minimum of one semester and who had been involved in administering medications. SPSS 13 was used for statistical analysis. Of the 324 subjects in the study, 124 (38.3%) had made an error in clinical/field applications. Overall, 402 medication administration errors had been reported, of which 155 (38.6%) were detected and corrected by academic nurses. The most common error reported was deviation from aseptic technique, in 96 (23.8%) cases. The most common emotions resulting from errors were fear in 45 (28.8%) and anxiety in 37 (23.5%). The most common cause was performance deficit, in 141 (43.4%) cases, and the most common contributing factor was workload, declared by 179 (55.2%). The error rate among nursing students was high, whereas reporting of errors was low.

  15. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  16. A Characterization of Prediction Errors

    OpenAIRE

    Meek, Christopher

    2016-01-01

Understanding prediction errors and determining how to fix them is critical to building effective predictive systems. In this paper, we delineate four types of prediction errors and demonstrate that these four types characterize all prediction errors. In addition, we describe potential remedies and tools that can be used to reduce the uncertainty when trying to determine the source of a prediction error and when trying to take action to remove a prediction error.

  17. Error Analysis and Its Implication

    Institute of Scientific and Technical Information of China (English)

    崔蕾

    2007-01-01

Error analysis is an important theory and approach for exploring the mental processes of language learners in SLA. Its major contribution is pointing out that intralingual errors are the main source of errors during language learning. Researchers' exploration and description of these errors will not only promote the bidirectional study of Error Analysis as both theory and approach, but also provide implications for second language learning.

  18. Error bars in experimental biology

    OpenAIRE

    2007-01-01

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what er...

  19. Stochastic variational approach to minimum uncertainty states

    Energy Technology Data Exchange (ETDEWEB)

    Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)

    1995-05-21

    We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)

  20. Minimum Wage Effects in the Longer Run

    Science.gov (United States)

    Neumark, David; Nizalova, Olena

    2007-01-01

    Exposure to minimum wages at young ages could lead to adverse longer-run effects via decreased labor market experience and tenure, and diminished education and training, while beneficial longer-run effects could arise if minimum wages increase skill acquisition. Evidence suggests that as individuals reach their late 20s, they earn less the longer…

  1. New Minimum Wage Research: A Symposium.

    Science.gov (United States)

    Ehrenberg, Ronald G.; And Others

    1992-01-01

    Includes "Introduction" (Ehrenberg); "Effect of the Minimum Wage [MW] on the Fast-Food Industry" (Katz, Krueger); "Using Regional Variation in Wages to Measure Effects of the Federal MW" (Card); "Do MWs Reduce Employment?" (Card); "Employment Effects of Minimum and Subminimum Wages" (Neumark,…

  2. 5 CFR 630.206 - Minimum charge.

    Science.gov (United States)

    2010-01-01

... 5 Administrative Personnel, OFFICE OF PERSONNEL MANAGEMENT, CIVIL SERVICE REGULATIONS, ABSENCE AND LEAVE, Definitions and General Provisions for Annual and Sick Leave, § 630.206 Minimum charge. (a) Unless an agency...

  3. Stochastic variational approach to minimum uncertainty states

    CERN Document Server

    Illuminati, F; Illuminati, F; Viola, L

    1995-01-01

    We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schr\\"{o}dinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials.

  4. Monotonic Stable Solutions for Minimum Coloring Games

    NARCIS (Netherlands)

    Hamers, H.J.M.; Miquel, S.; Norde, H.W.

    2011-01-01

    For the class of minimum coloring games (introduced by Deng et al. (1999)) we investigate the existence of population monotonic allocation schemes (introduced by Sprumont (1990)). We show that a minimum coloring game on a graph G has a population monotonic allocation scheme if and only if G is (P4,

  5. Stimulus-dependent adjustment of reward prediction error in the midbrain.

    Directory of Open Access Journals (Sweden)

    Hiromasa Takemura

Full Text Available Previous reports have described that neural activities in midbrain dopamine areas are sensitive to unexpected reward delivery and omission. These activities are correlated with reward prediction error in reinforcement learning models, the difference between predicted reward values and the obtained reward outcome. These findings suggest that the reward prediction error signal in the brain updates reward prediction through stimulus-reward experiences. It remains unknown, however, how sensory processing of reward-predicting stimuli contributes to the computation of reward prediction error. To elucidate this issue, we examined the relation between stimulus discriminability of the reward-predicting stimuli and the reward prediction error signal in the brain using functional magnetic resonance imaging (fMRI). Before the main experiments, subjects learned an association between the orientation of a perceptually salient (high-contrast) Gabor patch and a juice reward. The subjects were then presented with lower-contrast Gabor patch stimuli to predict a reward. We calculated the correlation between fMRI signals and reward prediction error in two reinforcement learning models: a model including the modulation of reward prediction by stimulus discriminability and a model excluding this modulation. Results showed that fMRI signals in the midbrain are more highly correlated with reward prediction error in the model that includes stimulus discriminability than in the model that excludes it. No regions showed higher correlation with the model that excludes stimulus discriminability. Moreover, the difference in correlation between the two models was significant from the first session of the experiment, suggesting that the reward computation in the midbrain was modulated based on stimulus discriminability before learning a new contingency between perceptually ambiguous stimuli and a reward. These results suggest that the human…
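The model comparison above rests on a standard reward prediction error. A minimal sketch of one trial follows, in which a hypothetical belief vector over candidate stimuli stands in for stimulus discriminability; this is a simplifying illustration, not the paper's fitted model.

```python
def rpe_trial(values, belief, reward, alpha=0.1):
    """One trial of a Rescorla-Wagner-style update where the prediction is
    a belief-weighted mixture over candidate stimuli. The belief vector is
    a hypothetical stand-in for stimulus discriminability."""
    predicted = sum(b * v for b, v in zip(belief, values))
    delta = reward - predicted                       # reward prediction error
    values = [v + alpha * b * delta for v, b in zip(values, belief)]
    return values, delta
```

With a sharp belief (high discriminability) the update concentrates on the perceived stimulus; with a flat belief it is spread across candidates, which is one way discriminability can modulate the prediction error.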

  6. Lesbians still face job discrimination.

    Science.gov (United States)

    Ryniker, Margaret R

    2008-01-01

    This article examines continued discrimination against lesbians in the workplace. A number of cases from various jurisdictions in the United States are highlighted. The paper studies two common forms of discrimination: denial of employment benefits to same sex partners, and sexual harassment. On the first front, the case law suggests that health insurance coverage for one's partner is becoming the norm. On the question of sexual harassment in the workplace, the case law did not provide protection for lesbians. Finally, U.S. employment policies related to sexual orientation are contrasted with those in Israel, which provides much greater protection from discrimination.

  7. Joint entropy for space and spatial frequency domains estimated from psychometric functions of achromatic discrimination.

    Science.gov (United States)

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in grating's spatial frequency or envelope's standard deviation. We tested 21 different envelope's standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating's spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least square method, and the spatial extent and spatial frequency entropies were estimated from the standard deviation of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of space extent entropy times the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies once there was a substantial decrease in joint
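The comparison against the 1-D Gábor bound can be written down directly. In this sketch the discrimination entropies are represented simply by the standard deviations of the fitted psychometric Gaussians and combined as a plain product (a simplification of the square-root weighting described in the abstract):

```python
import math

GABOR_MIN = 1.0 / (4.0 * math.pi)   # ~0.0796, 1-D Gabor uncertainty bound

def joint_entropy(sigma_space, sigma_freq):
    """Joint entropy as the product of the space and spatial-frequency
    spreads estimated from fitted psychometric Gaussians (simplified)."""
    return sigma_space * sigma_freq

def below_bound(sigma_space, sigma_freq):
    """True when the product falls below the 1-D Gabor minimum, the
    signature of non-linear interaction discussed in the abstract."""
    return joint_entropy(sigma_space, sigma_freq) < GABOR_MIN
```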

  8. ALGORITHM FOR SPHERICITY ERROR AND THE NUMBER OF MEASURED POINTS

    Institute of Scientific and Technical Information of China (English)

    HE Gaiyun; WANG Taiyong; ZHAO Jian; YU Baoqin; LI Guoqin

    2006-01-01

The data processing technique and the method determining the optimal number of measured points are studied aiming at the sphericity error measured on a coordinate measurement machine (CMM). The consummate criterion for the minimum zone of spherical surface is analyzed first, and then an approximation technique searching for the minimum sphericity error from the form data is studied. In order to obtain the minimum zone of spherical surface, the radial separation is reduced gradually by moving the center of the concentric spheres along certain directions with certain steps. Therefore the algorithm is precise and efficient. After the appropriate mathematical model for the approximation technique is created, a data processing program is developed accordingly. By processing the metrical data with the developed program, the spherical errors are evaluated when different numbers of measured points are taken from the same sample, and then the corresponding scatter diagram and fit curve for the sample are graphically represented. The optimal number of measured points is determined through regression analysis. Experiment shows that both the data processing technique and the method for determining the optimal number of measured points are effective. On average, the obtained sphericity error is 5.78 μm smaller than the least square solution, whose accuracy is increased by 8.63%. The obtained optimal number of measured points is half of the number usually measured.
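The shrinking-step center search described above can be sketched as follows. The axis-aligned move set and step schedule are illustrative choices, not the paper's exact search directions:

```python
import numpy as np

def radial_separation(points, center):
    """Width of the minimum zone for a trial center: max radius - min radius."""
    r = np.linalg.norm(points - center, axis=1)
    return r.max() - r.min()

def min_zone_sphericity(points, step=1.0, shrink=0.5, tol=1e-6):
    """Greedy minimum-zone search: move the trial center along +/- axis
    directions, halving the step whenever no move reduces the radial
    separation (a simplified sketch of the approximation technique)."""
    center = points.mean(axis=0)
    best = radial_separation(points, center)
    dirs = [d for s in (1, -1) for d in s * np.eye(3)]
    while step > tol:
        improved = False
        for d in dirs:
            trial = center + step * d
            val = radial_separation(points, trial)
            if val < best:
                center, best, improved = trial, val, True
        if not improved:
            step *= shrink
    return best, center
```

Starting from the centroid, the search monotonically reduces the radial separation; on ideal sphere data it drives the sphericity error toward zero.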

  9. Diagnostic errors in pediatric radiology

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, George A.; Voss, Stephan D. [Children' s Hospital Boston, Department of Radiology, Harvard Medical School, Boston, MA (United States); Melvin, Patrice R. [Children' s Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Graham, Dionne A. [Children' s Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Harvard Medical School, The Department of Pediatrics, Boston, MA (United States)

    2011-03-15

Little is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean: 1.7 errors/case); of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases; of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean: 1.2 errors/case), all of which were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  10. Minimum probe length for unique identification of all open reading frames in a microbial genome

    Energy Technology Data Exchange (ETDEWEB)

    Sokhansanj, B A; Ng, J; Fitch, J P

    2000-03-05

In this paper, we determine the minimum hybridization probe length needed to uniquely identify at least 95% of the open reading frames (ORFs) in an organism. We analyze the whole-genome sequences of 17 species: 11 bacteria, 4 archaea, and 2 eukaryotes. We also present a mathematical model for minimum probe length based on the assumption that all ORFs are random, of constant length, and contain an equal distribution of bases. The model accurately predicts the minimum probe length for all species, but it incorrectly predicts that all ORFs may be uniquely identified. However, a probe length of just 9 bases is adequate to identify over 95% of the ORFs for all 15 prokaryotic species we studied. Using a minimum probe length, while accepting that some ORFs may not be identified and that data will be lost due to hybridization error, may result in significant savings in microarray and oligonucleotide probe design.
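A random-sequence model of the kind described (random ORFs of constant length, equal base frequencies) can be sketched as follows; it estimates the fraction of ORFs containing at least one probe-length substring absent from every other ORF. This is an illustrative reconstruction under those stated assumptions, not the paper's exact model.

```python
def fraction_identifiable(n_orfs, orf_len, probe_len):
    """Random-sequence estimate of the fraction of ORFs that contain at
    least one probe-length substring absent from all other ORFs, assuming
    independent bases with equal frequencies (illustrative model)."""
    p_match = 4.0 ** (-probe_len)                      # two L-mers agree
    other_sites = (n_orfs - 1) * (orf_len - probe_len + 1)
    p_unique_site = (1.0 - p_match) ** other_sites     # L-mer absent elsewhere
    sites = orf_len - probe_len + 1
    return 1.0 - (1.0 - p_unique_site) ** sites        # >= 1 unique site
```

The estimate rises steeply with probe length, which is the qualitative behavior the minimum-probe-length criterion exploits.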

  11. Medical ultrasound imaging method combining minimum variance beamforming and general coherence factor

    Institute of Scientific and Technical Information of China (English)

    WU Wentao; PU Jie; LU Yi

    2012-01-01

In the medical ultrasound imaging field, in order to obtain high resolution and correct the phase errors induced by the velocity inhomogeneity of tissue, a high-resolution imaging method combining minimum variance beamforming and a general coherence factor is presented. First, the data from the elements are delayed for focusing; then the multi-channel data are used for minimum variance beamforming; at the same time, the data are transformed from array space to beam space to calculate the general coherence factor; finally, the general coherence factor is used to weight the results of minimum variance beamforming. Medical images are obtained with the imaging system, and experiments based on a point object and an anechoic cyst object are used to verify the proposed method. The results show that the proposed method is better than minimum variance beamforming and conventional beamforming in resolution, contrast and robustness.
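A compact sketch of the combination (Capon weights computed from pre-delayed channel data, then scaled by a coherence factor) might look like the following. The diagonal loading, the all-ones steering vector after delay alignment, and the element-space coherence factor are standard simplifying assumptions, not the paper's beam-space formulation:

```python
import numpy as np

def mv_cf_output(X, diag_load=1e-2):
    """Minimum-variance (Capon) beamforming of pre-delayed channel data X
    (channels x samples), weighted by a coherence factor. Diagonal loading
    keeps the sample covariance invertible (illustrative parameterization)."""
    M, N = X.shape
    R = X @ X.conj().T / N                              # sample covariance
    R += diag_load * np.trace(R).real / M * np.eye(M)   # diagonal loading
    a = np.ones(M)                     # steering vector after delay alignment
    Ri_a = np.linalg.solve(R, a)
    w = Ri_a / (a.conj() @ Ri_a)       # Capon weights: min power, unit gain
    y = w.conj() @ X                   # minimum-variance output
    cf = np.abs(X.sum(axis=0)) ** 2 / (M * (np.abs(X) ** 2).sum(axis=0) + 1e-12)
    return cf * y                      # coherence-factor-weighted output
```

A fully coherent wavefront passes with unit gain and coherence factor near 1, while incoherent clutter is attenuated by both stages.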

  12. Discriminative likelihood score weighting based on acoustic-phonetic classification for speaker identification

    Science.gov (United States)

    Suh, Youngjoo; Kim, Hoirin

    2014-12-01

    In this paper, a new discriminative likelihood score weighting technique is proposed for speaker identification. The proposed method employs a discriminative weighting of frame-level log-likelihood scores with acoustic-phonetic classification in the Gaussian mixture model (GMM)-based speaker identification. Experiments performed on the Aurora noise-corrupted TIMIT database showed that the proposed approach provides meaningful performance improvement with an overall relative error reduction of 15.8% over the maximum likelihood-based baseline GMM approach.
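The scoring idea reduces to replacing the plain sum of frame-level log-likelihoods with a weighted sum. In this sketch the weight vector is a hypothetical placeholder for the acoustic-phonetic-class weights the paper derives:

```python
import numpy as np

def weighted_score(frame_loglik, weights):
    """Weighted accumulation of frame-level log-likelihoods; the weights
    here are hypothetical stand-ins for class-derived frame weights."""
    return float(np.dot(weights, frame_loglik))

def identify(loglik_per_speaker, weights):
    """Pick the speaker with the highest weighted total score."""
    scores = [weighted_score(ll, weights) for ll in loglik_per_speaker]
    return int(np.argmax(scores))
```

Upweighting frames that discriminate well between speakers can flip a decision that uniform summation would get wrong.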

  13. Medication Errors In Relation To Education & Years of Nursing Experience

    Directory of Open Access Journals (Sweden)

    Shweta D Singh

    2012-06-01

Full Text Available Medication error is defined as any preventable event that might cause or lead to inappropriate use or harming of the patient. The purpose of this study was to determine the relationship between the level of education and medication errors, and between years of work experience and medication errors. With a better understanding of these relationships, nursing professionals can learn what characteristics tend to make a nurse prone to medication errors and can develop methods and procedures to reduce incidence. The survey was conducted in 6 hospitals in Anand city; approval had been obtained from the hospitals where the study was to be conducted. The survey form was divided into 5 sections, each comprising a minimum of 3 questions relating to the respondents' basic information and their perceptions of medication error. The results suggested a direct relationship between education/experience and medication errors. The study showed that medication errors occur due to a lack of qualified nursing staff, and that errors were reported due to increased workload on nurses caused by the shortage of nurses in hospitals.

  14. The minimum work requirement for distillation processes

    Energy Technology Data Exchange (ETDEWEB)

Cerci, Yunus; Cengel, Yunus A.; Wood, Byard [Nevada Univ., Las Vegas, NV (United States). Dept. of Mechanical Engineering]

    2000-07-01

A typical ideal distillation process is proposed and analyzed using the first and second laws of thermodynamics, with particular attention to the minimum work requirement for the individual processes. The distillation process consists of an evaporator, a condenser, a heat exchanger, and a number of heaters and coolers. Several Carnot engines are employed to perform the heat interactions of the distillation process with the surroundings and to determine the minimum work requirement for the processes. The Carnot engines give the maximum possible work output or the minimum work input associated with the processes, and the net result of these inputs and outputs therefore leads to the minimum work requirement for the entire distillation process. It is shown that the minimum work relation for the distillation process is the same as the minimum work input relation found by Cerci et al [1] for an incomplete separation of incoming saline water, and depends only on the properties of the incoming saline water and the outgoing pure water and brine. Certain aspects of the minimum work relation are also discussed briefly. (authors)
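For an ideal mixture separated completely at ambient temperature, the minimum work reduces to the reverse of the Gibbs free energy of mixing. The sketch below gives this textbook limiting case per mole of mixture; it is not the paper's more general incomplete-separation relation.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def min_separation_work(x_salt, T0=298.15):
    """Minimum work (J per mol of mixture) to fully separate an ideal
    binary mixture at ambient temperature T0, from the Gibbs free energy
    of mixing: w_min = -R*T0*(x ln x + (1-x) ln(1-x))."""
    x_w = 1.0 - x_salt
    return -R * T0 * (x_salt * math.log(x_salt) + x_w * math.log(x_w))
```

The work is largest for an equimolar mixture (R*T0*ln 2 per mole) and vanishes as either mole fraction approaches zero, consistent with the result depending only on the inlet and outlet stream compositions.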

  15. EXPERIMENTAL STUDY OF MINIMUM IGNITION TEMPERATURE

    Directory of Open Access Journals (Sweden)

    Igor WACHTER

    2015-12-01

Full Text Available The aim of this scientific paper is an analysis of the minimum ignition temperature of a dust layer and the minimum ignition temperature of a dust cloud. These values can be used to identify threats in industrial production and civil engineering wherever a layer of combustible dust could occur. Research was performed on spent coffee grounds. Tests were performed according to EN 50281-2-1:2002, Methods for determining the minimum ignition temperatures of dust (Method A). The objective of Method A is to determine the minimum temperature at which ignition or decomposition of dust occurs under thermal stress on a hot plate at a constant temperature. The highest minimum smouldering and carbonating temperature of spent coffee grounds for a 5 mm layer was determined to lie in the interval from 280 °C to 310 °C over 600 seconds. Method B is used to determine the minimum ignition temperature of a dust cloud; the minimum ignition temperature of the studied dust was determined to be 470 °C (air pressure 50 kPa, sample weight 0.3 g).

  16. Fast discriminative latent Dirichlet allocation

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the code for fast discriminative latent Dirichlet allocation, which is an algorithm for topic modeling and text classification. The related paper is at...

  17. Face adaptation improves gender discrimination.

    Science.gov (United States)

    Yang, Hua; Shen, Jianhong; Chen, Juan; Fang, Fang

    2011-01-01

Adaptation to a visual pattern can alter the sensitivities of the neuronal populations encoding that pattern. However, the functional roles of adaptation, especially in high-level vision, are still equivocal. In the present study, we performed three experiments to investigate whether face gender adaptation could affect gender discrimination. Experiments 1 and 2 revealed that adapting to a male/female face could selectively enhance discrimination of male/female faces. Experiment 3 showed that the discrimination enhancement induced by face adaptation could transfer across a substantial change in three-dimensional face viewpoint. These results provide further evidence that, as in low-level vision, adaptation in high-level vision can calibrate the visual system to current inputs of complex shapes (i.e., faces) and improve discrimination at the adapted characteristic.

  18. Transient Error Data Analysis.

    Science.gov (United States)

    1979-05-01


  19. EU Law and Multiple Discrimination

    DEFF Research Database (Denmark)

    Nielsen, Ruth

    2006-01-01

    In EU law, nationality and gender were the only equality issues on the legal agenda from the outset in 1958 and for about 40 years. Multiple discrimination was not addressed until the 1990's. The intersectionality approach which has been widely discussed outside Europe has mainly been used...... with a view to gender-mainstreaming the fight against other kinds of discrimination (on grounds of ethnic origin, age, etc.)....

  20. Quantity discrimination in female mosquitofish.

    Science.gov (United States)

    Agrillo, Christian; Dadda, Marco; Bisazza, Angelo

    2007-01-01

    The ability in animals to count and represent different numbers of objects has received a great deal of attention in the past few decades. Cumulative evidence from comparative studies on number discriminations report obvious analogies among human babies, non-human primates and birds and are consistent with the hypothesis of two distinct and widespread mechanisms, one for counting small numbers (verbal creatures studied; results are in agreement with the hypothesis of the existence of two distinct systems for quantity discrimination in vertebrates.

  1. Does the Minimum Wage Cause Inefficient Rationing?

    Institute of Scientific and Technical Information of China (English)

    何满辉; 梁明秋

    2008-01-01

    By not allowing wages to clear the labor market, the minimum wage could cause workers with low reservation wages to be rationed out while equally skilled workers with higher reservation wages are employed. I find that proxies for the reservation wages of unskilled workers in high-impact states did not rise relative to reservation wages in other states, suggesting that the increase in the minimum wage did not cause jobs to be allocated less efficiently. However, even if rationing is efficient, the minimum wage can still entail other efficiency costs.

  2. Minimum emittance in TBA and MBA lattices

    Science.gov (United States)

    Xu, Gang; Peng, Yue-Mei

    2015-03-01

    For reaching a small emittance in a modern light source, triple bend achromats (TBA), the theoretical minimum emittance (TME) lattice and even multiple bend achromats (MBA) have been considered. This paper derives theoretically the necessary condition for achieving minimum emittance in TBA and MBA lattices, in which the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. We also calculate the conditions attaining the minimum emittance of a TBA as related to the phase advance in some special cases, using a purely mathematical method. These results may give some directions on lattice design.
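    The condition stated in the abstract can be summarized in two lines of standard accelerator-physics notation; the proportionality in the second line is the usual TME scaling with bending angle and is included only for context (exact prefactors depend on the lattice):

    ```latex
    % Necessary condition from the abstract: inner dipoles bend more than outer ones
    \theta_{\mathrm{inner}} = 3^{1/3}\,\theta_{\mathrm{outer}}
    % Standard scaling of the theoretical minimum emittance with bending angle
    \varepsilon_{\mathrm{TME}} \propto \gamma^{2}\,\theta^{3}
    ```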

  3. Unambiguous discrimination among oracle operators

    Science.gov (United States)

    Chefles, Anthony; Kitagawa, Akira; Takeoka, Masahiro; Sasaki, Masahide; Twamley, Jason

    2007-08-01

    We address the problem of unambiguous discrimination among oracle operators. The general theory of unambiguous discrimination among unitary operators is extended with this application in mind. We prove that entanglement with an ancilla cannot assist any discrimination strategy for commuting unitary operators. We also obtain a simple, practical test for the unambiguous distinguishability of an arbitrary set of unitary operators on a given system. Using this result, we prove that the unambiguous distinguishability criterion is the same for both standard and minimal oracle operators. We then show that, except in certain trivial cases, unambiguous discrimination among all standard oracle operators corresponding to integer functions with fixed domain and range is impossible. However, we find that it is possible to unambiguously discriminate among the Grover oracle operators corresponding to an arbitrarily large unsorted database. The unambiguous distinguishability of standard oracle operators corresponding to totally indistinguishable functions, which possess a strong form of classical indistinguishability, is analysed. We prove that these operators are not unambiguously distinguishable for any finite set of totally indistinguishable functions on a Boolean domain and with arbitrary fixed range. Sets of such functions on a larger domain can have unambiguously distinguishable standard oracle operators, and we provide a complete analysis of the simplest case, that of four functions. We also examine the possibility of unambiguous oracle operator discrimination with multiple parallel calls and investigate an intriguing unitary superoperator transformation between standard and entanglement-assisted minimal oracle operators.

  4. Errors in CT colonography.

    Science.gov (United States)

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  5. Can post-error dynamics explain sequential reaction time patterns?

    Directory of Open Access Journals (Sweden)

    Stephanie eGoldfarb

    2012-07-01

    Full Text Available We investigate human error dynamics in sequential two-alternative choice tasks. When subjects repeatedly discriminate between two stimuli, their error rates and mean reaction times (RTs systematically depend on prior sequences of stimuli. We analyze these sequential effects on RTs, separating error and correct responses, and identify a sequential RT tradeoff: a sequence of stimuli which yields a relatively fast RT on error trials will produce a relatively slow RT on correct trials and vice versa. We reanalyze previous data and acquire and analyze new data in a choice task with stimulus sequences generated by a first-order Markov process having unequal probabilities of repetitions and alternations. We then show that relationships among these stimulus sequences and the corresponding RTs for correct trials, error trials, and averaged over all trials are significantly influenced by the probability of alternations; these relationships have not been captured by previous models. Finally, we show that simple, sequential updates to the initial condition and thresholds of a pure drift diffusion model can account for the trends in RT for correct and error trials. Our results suggest that error-based parameter adjustments are critical to modeling sequential effects.
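    The final claim, that sequential updates to the initial condition of a pure drift diffusion model can account for the RT trends, can be sketched in a few lines. The parameter values and the specific form of the update below are illustrative assumptions, not the authors' fitted model:

    ```python
    import random

    def ddm_trial(drift, threshold, start, dt=0.001, noise=1.0):
        """Simulate one pure drift-diffusion trial; return (choice, reaction_time)."""
        x, t = start, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
            t += dt
        return (1 if x > 0 else -1), t

    random.seed(0)
    # Sequential-update sketch: bias the next starting point toward the previous stimulus.
    start, results = 0.0, []
    for trial in range(200):
        stimulus = random.choice([1, -1])
        choice, rt = ddm_trial(drift=1.5 * stimulus, threshold=1.0, start=start)
        results.append((choice == stimulus, rt))
        start = 0.2 * stimulus  # illustrative post-trial adjustment of the initial condition

    accuracy = sum(c for c, _ in results) / len(results)
    mean_rt = sum(rt for _, rt in results) / len(results)
    print(f"accuracy={accuracy:.2f}, mean RT={mean_rt:.3f}s")
    ```

    Biasing the starting point toward the previous stimulus makes repetitions faster and alternations slower, which is the kind of sequential RT effect the study models.
    
    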

  6. Variations on a theme: Songbirds, variability, and sensorimotor error correction.

    Science.gov (United States)

    Kuebrich, B D; Sober, S J

    2015-06-18

    Songbirds provide a powerful animal model for investigating how the brain uses sensory feedback to correct behavioral errors. Here, we review a recent study in which we used online manipulations of auditory feedback to quantify the relationship between sensory error size, motor variability, and vocal plasticity. We found that although inducing small auditory errors evoked relatively large compensatory changes in behavior, as error size increased the magnitude of error correction declined. Furthermore, when we induced large errors such that auditory signals no longer overlapped with the baseline distribution of feedback, the magnitude of error correction approached zero. This pattern suggests a simple and robust strategy for the brain to maintain the accuracy of learned behaviors by evaluating sensory signals relative to the previously experienced distribution of feedback. Drawing from recent studies of auditory neurophysiology and song discrimination, we then speculate as to the mechanistic underpinnings of the results obtained in our behavioral experiments. Finally, we review how our own and other studies exploit the strengths of the songbird system, both in the specific context of vocal systems and more generally as a model of the neural control of complex behavior.

  7. Error Analysis in Mathematics Education.

    Science.gov (United States)

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  8. Payment Error Rate Measurement (PERM)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  9. Long Term Care Minimum Data Set (MDS)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Long-Term Care Minimum Data Set (MDS) is a standardized, primary screening and assessment tool of health status that forms the foundation of the comprehensive...

  10. Quantitative Research on the Minimum Wage

    Science.gov (United States)

    Goldfarb, Robert S.

    1975-01-01

    The article reviews recent research examining the impact of minimum wage requirements on the size and distribution of teenage employment and earnings. The studies measure income distribution, employment levels and effect on unemployment. (MW)

  11. Impact of the Minimum Wage on Compression.

    Science.gov (United States)

    Wolfe, Michael N.; Candland, Charles W.

    1979-01-01

    Assesses the impact of increases in the minimum wage on salary schedules, provides guidelines for creating a philosophy to deal with the impact, and outlines options and presents recommendations. (IRT)

  13. Minimum wages and employment in China

    National Research Council Canada - National Science Library

    Fang, Tony; Lin, Carl

    2015-01-01

    ... that minimum wage changes led to significant adverse effects on employment in the Eastern and Central regions of China, and resulted in disemployment for females, young adults, and low-skilled workers...

  14. Minimum Wage Policy and Country's Technical Efficiency

    National Research Council Canada - National Science Library

    Mohd Zaini Abd Karim; Sok-Gee Chan; Sallahuddin Hassan

    2016-01-01

    .... However, some quarters argued against the idea of a nationwide minimum wage asserting that it will lead to an increase in the cost of doing business and thus will hurt Malaysian competitiveness...

  15. Graph theory for FPGA minimum configurations

    Institute of Scientific and Technical Information of China (English)

    Ruan Aiwu; Li Wenchang; Xiang Chuanyin; Song Jiangmin; Kang Shi; Liao Yongbo

    2011-01-01

    A traditional bottom-up modeling method for minimum configuration numbers is adopted for the study of FPGA minimum configurations. This method is limited if a large number of LUTs and multiplexers are present. Since graph theory has been extensively applied to circuit analysis and test, this paper focuses on modeling FPGA configurations. In our study, an internal logic block and the interconnections of an FPGA are considered as a vertex and an edge connecting two vertices in the graph, respectively. A top-down modeling method is proposed in the paper to achieve minimum configuration numbers for the CLB and IOB. Based on the proposed modeling approach and exhaustive analysis, the minimum configuration numbers for the CLB and IOB are five and three, respectively.
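    The modeling convention described here (logic blocks as vertices, interconnections as undirected edges) can be sketched with a simple adjacency structure; the block names below are hypothetical:

    ```python
    from collections import defaultdict

    class FPGAGraph:
        """Sketch of the paper's convention: logic blocks are vertices,
        interconnections are undirected edges."""

        def __init__(self):
            self.adj = defaultdict(set)

        def add_interconnect(self, block_a, block_b):
            """An interconnection between two logic blocks becomes an edge."""
            self.adj[block_a].add(block_b)
            self.adj[block_b].add(block_a)

        def degree(self, block):
            """Number of distinct blocks wired to this one."""
            return len(self.adj[block])

    g = FPGAGraph()
    g.add_interconnect("CLB0", "CLB1")
    g.add_interconnect("CLB1", "IOB0")
    g.add_interconnect("CLB0", "IOB0")
    print(g.degree("CLB1"))  # prints 2: CLB1 connects to CLB0 and IOB0
    ```
    
    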

  16. Discrimination Report: ESTCP UXO Discrimination Study, ESTCPProject #MM-0437

    Energy Technology Data Exchange (ETDEWEB)

    Gasperikova, Erika; Smith, J. Torquil; Morrison, H. Frank; Becker, Alex

    2007-12-21

    The FY06 Defense Appropriation contains funding for the 'Development of Advanced, Sophisticated, Discrimination Technologies for UXO Cleanup' in the Environmental Security Technology Certification Program. In 2003, the Defense Science Board observed: 'The problem is that instruments that can detect the buried UXOs also detect numerous scrap metal objects and other artifacts, which leads to an enormous amount of expensive digging. Typically 100 holes may be dug before a real UXO is unearthed! The Task Force assessment is that much of this wasteful digging can be eliminated by the use of more advanced technology instruments that exploit modern digital processing and advanced multi-mode sensors to achieve an improved level of discrimination of scrap from UXOs.' Significant progress has been made in discrimination technology. To date, testing of these approaches has been primarily limited to test sites, with only limited application at live sites. Acceptance of discrimination technologies requires demonstration of system capabilities at real UXO sites under real-world conditions. Any attempt to declare detected anomalies harmless and requiring no further investigation demands demonstrating to regulators not only the individual technologies, but an entire decision-making process. This discrimination study was the first phase in what is expected to be a continuing effort spanning several years.

  17. Price pass-through and minimum wages

    OpenAIRE

    Daniel Aaronson

    1997-01-01

    A textbook consequence of competitive markets is that an industry-wide increase in the price of inputs will be passed on to consumers through an increase in prices. This fundamental implication has been explored by researchers interested in who bears the burden of taxation and exchange rate fluctuations. However, little attention has focused on the price implications of minimum wage hikes. From a policy perspective, this is an oversight. Welfare analysis of minimum wage laws should not ignore...

  18. The minimum wage and restaurant prices

    OpenAIRE

    Daniel Aaronson; Eric French; MacDonald, James M.

    2004-01-01

    Using both store-level and aggregated price data from the food away from home component of the Consumer Price Index survey, we show that restaurant prices rise in response to an increase in the minimum wage. These results hold up when using several different sources of variation in the data. We interpret these findings within a model of employment determination. The model implies that minimum wage hikes cause employment to fall and prices to rise if labor markets are competitive but potential...

  19. Minimum Dominating Tree Problem for Graphs

    Institute of Scientific and Technical Information of China (English)

    LIN Hao; LIN Lan

    2014-01-01

    A dominating tree T of a graph G is a subtree of G which contains at least one neighbor of each vertex of G. The minimum dominating tree problem is to find a dominating tree of G with minimum number of vertices, which is an NP-hard problem. This paper studies some polynomially solvable cases, including interval graphs, Halin graphs, special outer-planar graphs and others.
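    The defining property is easy to check directly. A minimal sketch (it verifies only the domination condition from the definition, not that the chosen vertices additionally induce a subtree):

    ```python
    # A vertex set T "dominates" G (per the definition above) if every vertex
    # of G has at least one neighbor in T.
    def is_dominating(graph, tree_vertices):
        t = set(tree_vertices)
        return all(any(nb in t for nb in nbrs) for nbrs in graph.values())

    # A path graph 1-2-3-4-5 as adjacency lists; {2, 3, 4} should dominate it.
    path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
    print(is_dominating(path, [2, 3, 4]))  # True: every vertex has a neighbor in T
    print(is_dominating(path, [3]))        # False: vertex 1's only neighbor, 2, is not in T
    ```
    
    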

  20. Comparison of linear discriminant analysis methods for the classification of cancer based on gene expression data

    Directory of Open Access Journals (Sweden)

    He Miao

    2009-12-01

    Full Text Available Abstract Background Many studies based on gene expression data have been reported in great detail; however, one major challenge for methodologists is the choice of classification method. The main purpose of this research was to compare the performance of linear discriminant analysis (LDA) and its modification methods for the classification of cancer based on gene expression data. Methods The classification performance of linear discriminant analysis (LDA) and its modification methods was evaluated by applying these methods to six public cancer gene expression datasets. These methods included linear discriminant analysis (LDA), prediction analysis for microarrays (PAM), shrinkage centroid regularized discriminant analysis (SCRDA), shrinkage linear discriminant analysis (SLDA) and shrinkage diagonal discriminant analysis (SDDA). The procedures were performed with the software R 2.80. Results PAM picked out fewer feature genes than the other methods from most datasets, except from the Brain dataset. Of the two shrinkage discriminant analysis methods, SLDA selected more genes than SDDA from most datasets, except from the 2-class lung cancer dataset. When comparing SLDA with SCRDA, SLDA selected more genes than SCRDA from the 2-class lung cancer, SRBCT and Brain datasets; for the remaining datasets the result was the opposite. The average test error of the LDA modification methods was lower than that of the LDA method. Conclusions The classification performance of the LDA modification methods was superior to that of traditional LDA with respect to the average error, and there was no significant difference between these modification methods.
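    Of the methods compared, diagonal discriminant analysis is the simplest to sketch from scratch (the study used R; this numpy version only illustrates the idea behind SDDA-style classifiers: replacing the full pooled covariance with its diagonal, which keeps the classifier usable when genes far outnumber samples). The data below are synthetic:

    ```python
    import numpy as np

    def fit_diagonal_lda(X, y):
        """Diagonal LDA sketch: class means plus a pooled per-feature variance."""
        classes = np.unique(y)
        means = np.array([X[y == c].mean(axis=0) for c in classes])
        resid = np.concatenate([X[y == c] - X[y == c].mean(axis=0) for c in classes])
        var = resid.var(axis=0) + 1e-8  # small ridge for numerical stability
        return classes, means, var

    def predict_diagonal_lda(model, X):
        """Assign each sample to the class with the smallest diagonal
        Mahalanobis distance to its mean."""
        classes, means, var = model
        d2 = ((X[:, None, :] - means[None, :, :]) ** 2 / var).sum(axis=2)
        return classes[np.argmin(d2, axis=1)]

    rng = np.random.default_rng(0)
    # Toy "expression" data: 40 samples, 50 features, 2 classes separated on 5 genes.
    X0 = rng.normal(0.0, 1.0, (20, 50))
    X1 = rng.normal(0.0, 1.0, (20, 50))
    X1[:, :5] += 2.5
    X, y = np.vstack([X0, X1]), np.array([0] * 20 + [1] * 20)
    model = fit_diagonal_lda(X, y)
    acc = (predict_diagonal_lda(model, y=None, X=X) if False else (predict_diagonal_lda(model, X) == y).mean())
    print(f"training accuracy: {acc:.2f}")
    ```
    
    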

  1. Error bounds for set inclusions

    Institute of Scientific and Technical Information of China (English)

    ZHENG; Xiyin(郑喜印)

    2003-01-01

    A variant of the Robinson-Ursescu Theorem is given in normed spaces. Several error bound theorems for convex inclusions are proved; in particular, a positive answer to Li and Singer's conjecture is given under a weaker assumption than that required in their conjecture. Perturbation error bounds are also studied. As applications, we study error bounds for convex inequality systems.

  2. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  3. Feature Referenced Error Correction Apparatus.

    Science.gov (United States)

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  4. Comparison of discriminant analysis methods: Application to occupational exposure to particulate matter

    Science.gov (United States)

    Ramos, M. Rosário; Carolino, E.; Viegas, Carla; Viegas, Sandra

    2016-06-01

    Health effects associated with occupational exposure to particulate matter have been studied by several authors. In this study, six industries from five different areas were selected: Cork company 1, Cork company 2, poultry, slaughterhouse for cattle, riding arena and production of animal feed. The measurement tool was a portable device for direct reading. This tool provides the particle number concentration for six different diameters, namely 0.3 µm, 0.5 µm, 1 µm, 2.5 µm, 5 µm and 10 µm. The focus on these features is because they might be more closely related to adverse health effects. The aim is to identify the particle sizes that best discriminate the industries, with the ultimate goal of classifying industries regarding potential negative effects on workers' health. Several methods of discriminant analysis were applied to the data on occupational exposure to particulate matter and compared with respect to classification accuracy. The selected methods were linear discriminant analysis (LDA); quadratic discriminant analysis (QDA); robust linear discriminant analysis with selected estimators (MLE (Maximum Likelihood Estimators), MVE (Minimum Volume Ellipsoid), "t", MCD (Minimum Covariance Determinant), MCD-A, MCD-B); multinomial logistic regression; and artificial neural networks (ANN). The predictive accuracy of the methods was assessed through a simulation study. ANN yielded the highest rate of classification accuracy on the data set under study. Results indicate that the particle number concentration of diameter size 0.5 µm is the parameter that best discriminates the industries.
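    Among the compared classifiers, multinomial logistic regression is easy to sketch from first principles. The toy two-feature data below stand in for particle counts from three hypothetical "industries"; nothing here reproduces the authors' data or software:

    ```python
    import numpy as np

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)  # stabilize the exponentials
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def fit_multinomial(X, y, n_classes, lr=0.1, epochs=500):
        """Multinomial logistic regression via batch gradient descent."""
        X1 = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
        W = np.zeros((X1.shape[1], n_classes))
        Y = np.eye(n_classes)[y]                   # one-hot targets
        for _ in range(epochs):
            P = softmax(X1 @ W)
            W -= lr * X1.T @ (P - Y) / len(X)      # cross-entropy gradient step
        return W

    def predict_multinomial(W, X):
        X1 = np.hstack([X, np.ones((len(X), 1))])
        return np.argmax(X1 @ W, axis=1)

    rng = np.random.default_rng(1)
    # Toy stand-in for particle-count features from three industries.
    centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
    X = np.vstack([rng.normal(c, 0.7, (30, 2)) for c in centers])
    y = np.repeat([0, 1, 2], 30)
    W = fit_multinomial(X, y, n_classes=3)
    acc = (predict_multinomial(W, X) == y).mean()
    print(f"training accuracy: {acc:.2f}")
    ```
    
    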

  5. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
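    The schema's central contrast can be illustrated numerically: random error (here, sampling variability) shrinks as the sample grows, while systematic error (here, a fixed measurement bias) does not. The numbers below are arbitrary toy values:

    ```python
    import random

    random.seed(42)
    TRUE_MEAN, BIAS = 10.0, 1.5  # illustrative values, not from the paper

    def biased_study(n):
        """Mean of n measurements, each carrying a fixed bias plus random noise."""
        return sum(TRUE_MEAN + BIAS + random.gauss(0, 2) for _ in range(n)) / n

    small = [biased_study(20) for _ in range(200)]    # 200 small studies
    large = [biased_study(2000) for _ in range(200)]  # 200 large studies

    def spread(xs):
        m = sum(xs) / len(xs)
        return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

    # Spread (random error) collapses with n; the +1.5 offset (systematic error) remains.
    print(f"small n: mean error {sum(small)/len(small)-TRUE_MEAN:+.2f}, spread {spread(small):.2f}")
    print(f"large n: mean error {sum(large)/len(large)-TRUE_MEAN:+.2f}, spread {spread(large):.2f}")
    ```
    
    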

  6. Firewall Configuration Errors Revisited

    CERN Document Server

    Wool, Avishai

    2009-01-01

    The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point FireWall-1 rule-sets. In general that survey indicated that corporate firewalls were often enforcing poorly written rule-sets, containing many mistakes. The goal of this work is to revisit the first survey. The current study is much larger. Moreover, for the first time, the study includes configurations from two major vendors. The study also introduces a novel "Firewall Complexity" (FC) measure that applies to both types of firewalls. The findings of the current study indeed validate the 2004 study's main observations: firewalls are (still) poorly configured, and a rule-set's complexity is (still) positively correlated with the number of detected risk items. Thus we can conclude that, for well-configured firewalls, ``small is (still) beautiful''. However, unlike the 2004 study, we see no significant indication that later software versions have fewer errors (for both vendors).

  7. Beta systems error analysis

    Science.gov (United States)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties and other errors in the results of the two methods are examined.

  8. Catalytic quantum error correction

    CERN Document Server

    Brun, T; Hsieh, M H; Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-01-01

    We develop the theory of entanglement-assisted quantum error correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to pre-shared entanglement. Conventional stabilizer codes are equivalent to dual-containing symplectic codes. In contrast, EAQEC codes do not require the dual-containing condition, which greatly simplifies their construction. We show how any quaternary classical code can be made into an EAQEC code. In particular, efficient modern codes, like LDPC codes, which attain the Shannon capacity, can be made into EAQEC codes attaining the hashing bound. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes which maintain a region of inherited noiseless qubits. We also give an alternative construction of EAQEC codes by making classical entanglement-assisted codes coherent.

  9. 20 CFR 405.30 - Discrimination complaints.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Discrimination complaints. 405.30 Section 405... INITIAL DISABILITY CLAIMS Introduction, General Description, and Definitions § 405.30 Discrimination... that an adjudicator has improperly discriminated against you, you may file a discrimination complaint...

  10. 45 CFR 1624.4 - Discrimination prohibited.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Discrimination prohibited. 1624.4 Section 1624.4... AGAINST DISCRIMINATION ON THE BASIS OF DISABILITY § 1624.4 Discrimination prohibited. (a) No qualified... the benefits of, or otherwise be subjected to discrimination by any legal services program, directly...

  11. Sensory Discrimination as Related to General Intelligence.

    Science.gov (United States)

    Acton, G. Scott; Schroeder, David H.

    2001-01-01

    Attempted to replicate the pitch discrimination findings of previous research and expand them to the modality of color discrimination in a sample of 899 teenagers and adults by correlating 2 sensory discrimination measures with the general factor from a battery of 13 cognitive ability tests. Results suggest that sensory discrimination is…

  12. 14 CFR 399.36 - Unreasonable discrimination.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Unreasonable discrimination. 399.36 Section... Unreasonable discrimination. (a) As used in this section: (1) Unreasonable discrimination means unjust discrimination or unreasonable preference or prejudice; and (2) Rate means rate, fare, or charge. (b) Except in...

  13. Broadband Minimum Variance Beamforming for Ultrasound Imaging

    DEFF Research Database (Denmark)

    Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt

    2009-01-01

    to the ultrasound data. As the error increases, it is seen that the MV beamformer is not as robust compared with the DS beamformer with boxcar and Hanning weights. Nevertheless, it is noted that the DS does not outperform the MV beamformer. For errors of 2% and 4% of the correct value, the FWHM are {0.81, 1.25, 0...

  14. Investigation of Sound Speed Errors in Adaptive Beamforming

    DEFF Research Database (Denmark)

    Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt

    2008-01-01

    Previous studies have shown that adaptive beamformers provide a significant increase of resolution and contrast when the propagation speed is known precisely. This paper demonstrates the influence of sound speed errors on two adaptive beamformers; the minimum variance (MV) beamformer and the am...... drop is proposed; diagonal loading (DL) and forward-backward (FB) averaging of the covariance matrix. The investigations show that DL provides a slightly decreased resolution and amplitude compared to FB. It is noted that APES provides more robust estimates than MV at the mere expense of a slight...
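    The diagonal loading mentioned in this abstract can be sketched for a generic narrowband minimum variance (Capon) beamformer. The array size, snapshot data, and loading level below are illustrative and unrelated to the authors' ultrasound setup:

    ```python
    import numpy as np

    def mv_weights(R, a, loading=0.0):
        """Capon weights w = R_dl^{-1} a / (a^H R_dl^{-1} a),
        with R_dl = R + loading * (tr(R)/M) * I (diagonal loading)."""
        M = len(R)
        R_dl = R + loading * np.trace(R) / M * np.eye(M)
        Ri_a = np.linalg.solve(R_dl, a)
        return Ri_a / (a.conj() @ Ri_a)

    M = 8                                       # sensors
    rng = np.random.default_rng(0)
    snapshots = rng.normal(size=(M, 40)) + 1j * rng.normal(size=(M, 40))
    R = snapshots @ snapshots.conj().T / 40     # sample covariance matrix
    a = np.ones(M, dtype=complex)               # steering vector, broadside look
    w = mv_weights(R, a, loading=0.1)
    # The distortionless constraint holds by construction: unit gain toward a.
    print(abs(w.conj() @ a))
    ```

    Loading inflates the diagonal of the sample covariance, which makes the inversion well-conditioned when few snapshots are available, at the cost of a slightly less adaptive response.
    
    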

  15. Experimental repetitive quantum error correction.

    Science.gov (United States)

    Schindler, Philipp; Barreiro, Julio T; Monz, Thomas; Nebendahl, Volckmar; Nigg, Daniel; Chwalla, Michael; Hennrich, Markus; Blatt, Rainer

    2011-05-27

    The computational potential of a quantum processor can only be unleashed if errors during a quantum computation can be controlled and corrected for. Quantum error correction works if imperfections of quantum gate operations and measurements are below a certain threshold and corrections can be applied repeatedly. We implement multiple quantum error correction cycles for phase-flip errors on qubits encoded with trapped ions. Errors are corrected by a quantum-feedback algorithm using high-fidelity gate operations and a reset technique for the auxiliary qubits. Up to three consecutive correction cycles are realized, and the behavior of the algorithm for different noise environments is analyzed.
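    A classical caricature of these repeated correction cycles is easy to simulate: the phase-flip code is a three-qubit repetition code in the conjugate basis, so majority-vote correction on a classical repetition code shows the same qualitative benefit of repeated cycles. The error probability and cycle count below are illustrative, not the experiment's:

    ```python
    import random

    random.seed(7)
    P_FLIP = 0.05  # illustrative per-bit, per-cycle flip probability

    def run(cycles, corrected=True):
        """Return True if the logical 0 survives the given number of cycles."""
        bits = [0, 0, 0]
        for _ in range(cycles):
            bits = [b ^ (random.random() < P_FLIP) for b in bits]
            if corrected:
                majority = int(sum(bits) >= 2)
                bits = [majority] * 3   # feedback: reset all bits to the majority
        return int(sum(bits) >= 2) == 0

    trials = 5000
    with_qec = sum(run(3, corrected=True) for _ in range(trials)) / trials
    without = sum(run(3, corrected=False) for _ in range(trials)) / trials
    print(f"survival with correction: {with_qec:.3f}, without: {without:.3f}")
    ```

    Correcting after every cycle keeps the per-cycle logical error at order p^2, so the corrected runs survive more often than the uncorrected ones.
    
    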

  16. Register file soft error recovery

    Science.gov (United States)

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  17. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts both in the pharmacy and in the wards. Results: Uncorrected unidosis carts show a 0.9% of medication errors (264 versus 0.6% (154 which appeared in unidosis carts previously revised. In carts not revised, the error is 70.83% and mainly caused when setting up unidosis carts. The rest are due to a lack of stock or unavailability (21.6%, errors in the transcription of medical orders (6.81% or that the boxes had not been emptied previously (0.76%. The errors found in the units correspond to errors in the transcription of the treatment (3.46%, non-receipt of the unidosis copy (23.14%, the patient did not take the medication (14.36%or was discharged without medication (12.77%, was not provided by nurses (14.09%, was withdrawn from the stocks of the unit (14.62%, and errors of the pharmacy service (17.56% . Conclusions: It is concluded the need to redress unidosis carts and a computerized prescription system to avoid errors in transcription.Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are overlooked before sent to hospitalization units, the error diminishes to 0.3%.

  18. Prediction of discretization error using the error transport equation

    Science.gov (United States)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
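    For reference, the Richardson-extrapolation estimate that the ETE predictions are benchmarked against fits in a few lines (a sketch of the benchmark, not of the authors' ETE solver):

```python
import math

def richardson_error(f_h, f_2h, order):
    """Estimate the discretization error of the fine-grid value f_h from a
    coarse-grid value f_2h (grid spacing doubled) and the scheme's formal
    order of accuracy p:  E ≈ (f_2h - f_h) / (2**p - 1)."""
    return (f_2h - f_h) / (2 ** order - 1)

# example: 2nd-order central difference for d/dx sin(x) at x = 0 (true value 1)
deriv = lambda h: (math.sin(h) - math.sin(-h)) / (2 * h)
est = richardson_error(deriv(0.1), deriv(0.2), order=2)
true_err = deriv(0.1) - 1.0
```

    Note the contrast with the ETE approach: Richardson extrapolation needs solutions on two grids, whereas the method above aims to predict the error from a single grid.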

  19. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    2011-01-01

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors, including an index of error types for each stage in the medication process, was developed from existing terminology and through a modified Delphi process in 2008. The Delphi panel consisted of 25 interdisciplinary […]

  20. Error-correcting codes and phase transitions

    CERN Document Server

    Manin, Yuri I

    2009-01-01

    The theory of error-correcting codes is concerned with constructing codes that optimize simultaneously transmission rate and relative minimum distance. These conflicting requirements determine an asymptotic bound, which is a continuous curve in the space of parameters. The main goal of this paper is to relate the asymptotic bound to phase diagrams of quantum statistical mechanical systems. We first identify the code parameters with Hausdorff and von Neumann dimensions, by considering fractals consisting of infinite sequences of code words. We then construct operator algebras associated to individual codes. These are Toeplitz algebras with a time evolution for which the KMS state at critical temperature gives the Hausdorff measure on the corresponding fractal. We extend this construction to algebras associated to limit points of codes, with non-uniform multi-fractal measures, and to tensor products over varying parameters.
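    The two parameters the asymptotic bound trades off, the transmission rate R = k/n and the relative minimum distance δ = d/n, can be computed by brute force for any small code; a sketch for the standard [7,4] Hamming code:

```python
from itertools import product
from fractions import Fraction

# generator matrix of the [7,4] Hamming code, in systematic form [I | A]
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

def codewords(G):
    # enumerate all 2**k codewords msg . G over GF(2)
    for msg in product([0, 1], repeat=len(G)):
        yield tuple(sum(m * g for m, g in zip(msg, col)) % 2
                    for col in zip(*G))

k, n = len(G), len(G[0])
# for a linear code, minimum distance = minimum weight of a nonzero codeword
d = min(sum(c) for c in codewords(G) if any(c))
R, delta = Fraction(k, n), Fraction(d, n)
```

    The point (δ, R) = (3/7, 4/7) is one sample of the code-parameter space whose accumulation structure the asymptotic bound describes.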

  1. Calculating error bars for neutrino mixing parameters

    CERN Document Server

    Burroughs, H R; Escamilla-Roa, J; Latimer, D C; Ernst, D J

    2012-01-01

    One goal of contemporary particle physics is to determine the mixing angles and mass-squared differences that constitute the phenomenological constants that describe neutrino oscillations. Of great interest are not only the best fit values of these constants but also their errors. Some of the neutrino oscillation data is statistically poor and cannot be treated by normal (Gaussian) statistics. To extract confidence intervals when the statistics are not normal, one should not utilize the value for chi-square versus confidence level taken from normal statistics. Instead, we propose that one should use the normalized likelihood function as a probability distribution; the relationship between the correct chi-square and a given confidence level can be computed by integrating over the likelihood function. This allows for a definition of confidence level independent of the functional form of the χ² function; it is particularly useful for cases in which the minimum of the χ² function is near a boundary. We present two ...
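    The prescription can be made concrete on a 1-D grid: treat L ∝ exp(−χ²/2), normalised over the physical region, as a probability density, accumulate probability in order of decreasing likelihood until the desired confidence level is reached, and read off the Δχ² at the cut. A sketch under these assumptions (not the authors' code):

```python
import numpy as np

def delta_chi2(chi2, theta, cl):
    """Δχ² for confidence level `cl`, from the normalised likelihood
    evaluated on a uniform grid `theta`."""
    dx = theta[1] - theta[0]
    L = np.exp(-0.5 * (chi2 - chi2.min()))
    L /= L.sum() * dx                      # normalise as a density
    order = np.argsort(L)[::-1]            # highest likelihood first
    cum = np.cumsum(L[order]) * dx         # accumulated probability
    cut = order[np.searchsorted(cum, cl)]
    return chi2[cut] - chi2.min()

# Gaussian sanity check: χ² = θ² with an interior minimum should
# reproduce the textbook Δχ² = 1 at 68.27%
theta = np.linspace(-5, 5, 20001)
dc = delta_chi2(theta**2, theta, 0.6827)
```

    For a boundary case one would simply restrict `theta` to the physical region; the normalisation then shifts the Δχ² away from its Gaussian value, which is exactly the effect the abstract describes.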

  2. Increased taxon sampling greatly reduces phylogenetic error.

    Science.gov (United States)

    Zwickl, Derrick J; Hillis, David M

    2002-08-01

    Several authors have argued recently that extensive taxon sampling has a positive and important effect on the accuracy of phylogenetic estimates. However, other authors have argued that there is little benefit of extensive taxon sampling, and so phylogenetic problems can or should be reduced to a few exemplar taxa as a means of reducing the computational complexity of the phylogenetic analysis. In this paper we examined five aspects of study design that may have led to these different perspectives. First, we considered the measurement of phylogenetic error across a wide range of taxon sample sizes, and conclude that the expected error based on randomly selecting trees (which varies by taxon sample size) must be considered in evaluating error in studies of the effects of taxon sampling. Second, we addressed the scope of the phylogenetic problems defined by different samples of taxa, and argue that phylogenetic scope needs to be considered in evaluating the importance of taxon-sampling strategies. Third, we examined the claim that fast and simple tree searches are as effective as more thorough searches at finding near-optimal trees that minimize error. We show that a more complete search of tree space reduces phylogenetic error, especially as the taxon sample size increases. Fourth, we examined the effects of simple versus complex simulation models on taxonomic sampling studies. Although benefits of taxon sampling are apparent for all models, data generated under more complex models of evolution produce higher overall levels of error and show greater positive effects of increased taxon sampling. Fifth, we asked if different phylogenetic optimality criteria show different effects of taxon sampling. Although we found strong differences in effectiveness of different optimality criteria as a function of taxon sample size, increased taxon sampling improved the results from all the common optimality criteria. Nonetheless, the method that showed the lowest overall

  3. Application of genetic algorithm in the evaluation of the profile error of archimedes helicoid surface

    Science.gov (United States)

    Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao

    2011-05-01

    According to the minimum zone condition, a method for evaluating the profile error of an Archimedes helicoid surface based on a Genetic Algorithm (GA) is proposed. The mathematical model of the surface is provided and the unknown parameters in the equation of the surface are acquired through the least squares method. The principle of the GA is explained. Then, the profile error of the Archimedes helicoid surface is obtained through the GA optimization method. To validate the proposed method, the profile error of an Archimedes helicoid surface, an Archimedes cylindrical worm (ZA worm) surface, is evaluated. The results show that the proposed method is capable of correctly evaluating the profile error of an Archimedes helicoid surface and satisfies the evaluation standard of the Minimum Zone Method. It can be applied to deal with the measured profile-error data of complex surfaces obtained by coordinate measuring machines (CMMs).
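    The minimum-zone idea can be illustrated in 2-D with a toy GA: evolve line parameters (a, b) to minimise the zone width, i.e. the spread of residuals around y = ax + b. All GA parameters below are illustrative, not the paper's algorithm.

```python
import random

def zone_width(points, a, b):
    # spread of residuals around the line y = a*x + b; the minimum-zone
    # error is the smallest achievable spread over all (a, b)
    d = [y - (a * x + b) for x, y in points]
    return max(d) - min(d)

def ga_min_zone(points, pop=40, gens=200, seed=1):
    rng = random.Random(seed)
    P = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda ab: zone_width(points, *ab))
        elite = P[:pop // 4]               # keep the best quarter
        children = []
        while len(elite) + len(children) < pop:
            (a1, b1), (a2, b2) = rng.choice(elite), rng.choice(elite)
            # crossover by averaging, mutation by small Gaussian noise
            children.append(((a1 + a2) / 2 + rng.gauss(0, 0.02),
                             (b1 + b2) / 2 + rng.gauss(0, 0.02)))
        P = elite + children
    best = min(P, key=lambda ab: zone_width(points, *ab))
    return best, zone_width(points, *best)
```

    On points lying on y = 0.5x + 1 with alternating deviations of ±0.01, the GA drives the zone width toward its theoretical minimum of 0.02, whereas a least-squares fit minimises a different (sum-of-squares) objective.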

  4. Strong Converse Exponents for a Quantum Channel Discrimination Problem and Quantum-Feedback-Assisted Communication

    Science.gov (United States)

    Cooney, Tom; Mosonyi, Milán; Wilde, Mark M.

    2016-06-01

    This paper studies the difficulty of discriminating between an arbitrary quantum channel and a "replacer" channel that discards its input and replaces it with a fixed state. The results obtained here generalize those known in the theory of quantum hypothesis testing for binary state discrimination. We show that, in this particular setting, the most general adaptive discrimination strategies provide no asymptotic advantage over non-adaptive tensor-power strategies. This conclusion follows by proving a quantum Stein's lemma for this channel discrimination setting, showing that a constant bound on the Type I error leads to the Type II error decreasing to zero exponentially quickly at a rate determined by the maximum relative entropy registered between the channels. The strong converse part of the lemma states that any attempt to make the Type II error decay to zero at a rate faster than the channel relative entropy implies that the Type I error necessarily converges to one. We then refine this latter result by identifying the optimal strong converse exponent for this task. As a consequence of these results, we can establish a strong converse theorem for the quantum-feedback-assisted capacity of a channel, sharpening a result due to Bowen. Furthermore, our channel discrimination result demonstrates the asymptotic optimality of a non-adaptive tensor-power strategy in the setting of quantum illumination, as was used in prior work on the topic. The sandwiched Rényi relative entropy is a key tool in our analysis. Finally, by combining our results with recent results of Hayashi and Tomamichel, we find a novel operational interpretation of the mutual information of a quantum channel N as the optimal Type II error exponent when discriminating between a large number of independent instances of N and an arbitrary "worst-case" replacer channel chosen from the set of all replacer channels.
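    Two of the quantities driving these error exponents, the relative entropy and the max-relative entropy, are straightforward to compute for finite-dimensional states. A numpy sketch for full-rank density matrices (the state quantities only; the channel versions additionally require an optimisation over inputs):

```python
import numpy as np

def _log2m(M):
    # matrix logarithm (base 2) of a positive-definite Hermitian matrix
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.log2(w)) @ V.conj().T

def relative_entropy(rho, sigma):
    # D(rho||sigma) = Tr[rho (log rho - log sigma)], in bits
    return float(np.real(np.trace(rho @ (_log2m(rho) - _log2m(sigma)))))

def max_relative_entropy(rho, sigma):
    # D_max(rho||sigma) = log2 of the largest eigenvalue of
    # sigma^{-1/2} rho sigma^{-1/2}
    w, V = np.linalg.eigh(sigma)
    sih = V @ np.diag(w ** -0.5) @ V.conj().T
    return float(np.log2(np.linalg.eigvalsh(sih @ rho @ sih).max()))
```

    For commuting (diagonal) states these reduce to their classical counterparts, and D_max ≥ D always holds, matching the roles the two quantities play in the Stein-lemma bounds above.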

  5. Measurement uncertainty evaluation of conicity error inspected on CMM

    Science.gov (United States)

    Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang

    2016-01-01

    The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence its assembly accuracy and working performance. According to the new-generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated by using quasi-random sequences and two kinds of affinities are calculated. Then, each antibody clone is generated and they are self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the expression of uncertainty in measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone part was machined on lathe CK6140 and measured on a Miracle NC 454 coordinate measuring machine (CMM). The experimental results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% less than those computed by the NC454 CMM software and the evaluation accuracy improves significantly.
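    The adaptive Monte Carlo idea (in the spirit of GUM Supplement 1, JCGM 101) can be sketched independently of the conicity model: keep drawing blocks of trials through the measurement model until the standard-uncertainty estimate stabilises. The model, samplers and tolerances below are illustrative only.

```python
import random
import statistics

def adaptive_mcm(model, samplers, tol=1e-4, block=10_000, max_blocks=20, seed=0):
    """Draw blocks of Monte Carlo trials until the standard uncertainty
    changes by less than `tol` between blocks.  `samplers` are callables
    that draw one value of each input quantity from its distribution."""
    rng = random.Random(seed)
    samples, prev_u = [], None
    for _ in range(max_blocks):
        samples.extend(model(*(draw(rng) for draw in samplers))
                       for _ in range(block))
        u = statistics.stdev(samples)      # standard uncertainty estimate
        if prev_u is not None and abs(u - prev_u) < tol:
            break                          # converged: stop adding trials
        prev_u = u
    return statistics.mean(samples), u

# toy model: difference of two Gaussian input quantities
mean, u = adaptive_mcm(lambda a, b: a - b,
                       [lambda r: r.gauss(10.0, 0.01),
                        lambda r: r.gauss(5.0, 0.01)])
```

    For this linear model the result can be checked against the GUM propagation law, u = √(0.01² + 0.01²) ≈ 0.0141; the MCM approach carries over unchanged to nonlinear models with dependent inputs, where the analytic law breaks down.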

  6. Improved Error Thresholds for Measurement-Free Error Correction

    Science.gov (United States)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates on the order of 10⁻³ to 10⁻⁴, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
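    What a "threshold" means can be illustrated with the simplest possible case, the 3-qubit bit-flip repetition code under independent flips: encoding helps whenever the logical rate p_L = 3p²(1−p) + p³ falls below the physical rate p. A small sketch (classical combinatorics only, not a simulation of the coherent scheme above):

```python
def logical_rate(p):
    # 3-qubit repetition code with majority vote: fails iff >= 2 qubits flip
    return 3 * p**2 * (1 - p) + p**3

def pseudo_threshold(lo=1e-6, hi=1 - 1e-6, iters=60):
    # bisect for the crossing point p_L(p) = p, above which encoding hurts
    f = lambda p: logical_rate(p) - p
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

    For this idealised code the crossing sits at p = 1/2; realistic thresholds such as the 10⁻³ to 10⁻⁴ figures above are far lower because the syndrome extraction and correction circuitry are themselves noisy.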

  7. Discriminative Multi-view Interactive Image Re-ranking.

    Science.gov (United States)

    Li, Jun; Xu, Chang; Yang, Wankou; Sun, Changyin; Tao, Dacheng

    2017-01-10

    Given unreliable visual patterns and insufficient query information, content-based image retrieval (CBIR) is often suboptimal and requires image re-ranking using auxiliary information. In this paper, we propose Discriminative Multi-view INTeractive Image Re-ranking (DMINTIR), which integrates User Relevance Feedback (URF) capturing users' intentions and multiple features that sufficiently describe the images. In DMINTIR, heterogeneous property features are incorporated in the multi-view learning scheme to exploit their complementarities. In addition, a discriminatively learned weight vector is obtained to reassign updated scores and target images for re-ranking. Compared to other multi-view learning techniques, our scheme not only generates a compact representation in the latent space from the redundant multi-view features but also maximally preserves the discriminative information in feature encoding by the large-margin principle. Furthermore, the generalization error bound of the proposed algorithm is theoretically analyzed and shown to be improved by the interactions between the latent space and discriminant function learning. Experimental results on two benchmark datasets demonstrate that our approach boosts baseline retrieval quality and is competitive with other state-of-the-art re-ranking strategies.

  8. ODVBA: optimally-discriminative voxel-based analysis.

    Science.gov (United States)

    Zhang, Tianhao; Davatzikos, Christos

    2011-08-01

    Gaussian smoothing of images prior to applying voxel-based statistics is an important step in voxel-based analysis and statistical parametric mapping (VBA-SPM) and is used to account for registration errors, to Gaussianize the data and to integrate imaging signals from a region around each voxel. However, it has also become a limitation of VBA-SPM based methods, since it is often chosen empirically and lacks spatial adaptivity to the shape and spatial extent of the region of interest, such as a region of atrophy or functional activity. In this paper, we propose a new framework, named optimally-discriminative voxel-based analysis (ODVBA), for determining the optimal spatially adaptive smoothing of images, followed by applying voxel-based group analysis. In ODVBA, nonnegative discriminative projection is applied regionally to get the direction that best discriminates between two groups, e.g., patients and controls; this direction is equivalent to local filtering by an optimal kernel whose coefficients define the optimally discriminative direction. By considering all the neighborhoods that contain a given voxel, we then compose this information to produce the statistic for each voxel. Finally, permutation tests are used to obtain a statistical parametric map of group differences. ODVBA has been evaluated using simulated data in which the ground truth is known and with data from an Alzheimer's disease (AD) study. The experimental results have shown that the proposed ODVBA can precisely describe the shape and location of structural abnormality.

  9. The development of spatial frequency discrimination.

    Science.gov (United States)

    Patel, Ashna; Maurer, Daphne; Lewis, Terri L

    2010-12-31

    We compared thresholds for discriminating spatial frequency for children aged 5, 7, and 9 years, and adults at two baseline spatial frequencies (1 and 3 cpd). In Experiment 1, the minimum change from baseline necessary to detect a change in spatial frequency from either baseline decreased with age from 34% in 5-year-olds to 11% in 7-year-olds, 8% in 9-year-olds, and 6% in adults. The data were best fit by an exponential function reflecting the rapid improvement in thresholds between 5 and 7 years of age and more gradual improvement thereafter (r² = 0.50, p […]) […] spatial frequencies side by side for an unlimited time. The pattern of development for sensitivity to spatial frequency (this study) resembles those for the development of sensitivity to orientation (T. L. Lewis, S. E. Chong, & D. Maurer, 2009) and contrast (D. Ellemberg, T. L. Lewis, C. H. Lui, & D. Maurer, 1999). The similar patterns are consistent with theories of common underlying mechanisms in primary visual cortex (A. Vincent & D. Regan, 1995; W. Zhu, M. Shelley, & R. Shapley, 2008) and suggest that those mechanisms continue to develop throughout childhood.

  10. PREVENTABLE ERRORS: NEVER EVENTS

    Directory of Open Access Journals (Sweden)

    Narra Gopal

    2014-07-01

    Full Text Available An operation or any invasive procedure is a stressful event involving risks and complications. We should be able to offer a guarantee that the right procedure will be done on the right person in the right place on their body. "Never events" are definable: they are avoidable and preventable events. Among people affected by the consequences of surgical mistakes, 60% suffered temporary injury, 33% permanent injury and 7% death. The World Health Organization (WHO) [1] has earlier said that over seven million people across the globe suffer from preventable surgical injuries every year, a million of them even dying during or immediately after surgery. The UN body put the number of surgeries taking place globally every year at 234 million. It said surgeries had become common, with one in every 25 people undergoing one at any given time. 50% of never events are preventable. Evidence suggests up to one in ten hospital admissions results in an adverse incident, an incident rate that would not be acceptable in other industries. In order to move towards a more acceptable level of safety, we need to understand how and why things go wrong and build a reliable system of working. With such a system, complete prevention may not be possible, but we can reduce the error percentage [2]. To change the present concept of the patient, we first have to replace the word patient with medical customer. Then our outlook also changes: we will be more careful towards our customers.

  11. Comparison of analytical error and sampling error for contaminated soil.

    Science.gov (United States)

    Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders

    2006-11-16

    Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated and discussed, to a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitudes between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error get small and the analytical error gets large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample contribute to the extent of uncertainty.

  12. A comparison of minimum norm and MUSIC for a combined MEG/EEG sensor array

    Science.gov (United States)

    Ahrens, H.; Argin, F.; Klinkenbusch, L.

    2012-09-01

    Many different algorithms for imaging neuronal activity with magnetoencephalography (MEG) or electroencephalography (EEG) have been developed so far. We validate the result of other authors that a combined MEG/EEG sensor array provides smaller source localisation errors than a single MEG or single EEG sensor array for the same total number of sensors. We show that Multiple Signal Classification (MUSIC) provides smaller localisation errors than an unweighted minimum norm method for activity located in the cortical sulcus regions. This is important for many medical applications, e.g. the localisation of the origin of epileptic seizures (focal epilepsy) that can be located very deep in the cortical sulcus.
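    The MUSIC scan itself is compact: project the lead-field vector of each candidate source location onto the noise subspace of the data covariance and pick the location where the projection nearly vanishes. A toy numpy sketch with a random, purely hypothetical lead field (not an MEG/EEG forward model):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy setup: 8 sensors, 50 candidate source locations, 200 time samples
n_sensors, n_cand, T = 8, 50, 200
L = rng.standard_normal((n_sensors, n_cand))   # hypothetical lead field
true_idx = 17

# sensor data: one active source at true_idx plus sensor noise
s = rng.standard_normal(T)
X = np.outer(L[:, true_idx], s) + 0.05 * rng.standard_normal((n_sensors, T))

# MUSIC: the noise subspace of the data covariance is (nearly) orthogonal
# to the lead-field vector of the truly active source
C = X @ X.T / T
w, V = np.linalg.eigh(C)                        # eigenvalues ascending
En = V[:, :-1]                                  # noise subspace (1 source assumed)
pseudo = [1.0 / np.linalg.norm(En.T @ (L[:, i] / np.linalg.norm(L[:, i]))) ** 2
          for i in range(n_cand)]
est = int(np.argmax(pseudo))
```

    An unweighted minimum-norm estimate would instead distribute the measured field over all candidates at once, which is one intuition for why MUSIC can localise deep sulcal sources more sharply.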

  13. Discrimination and production of English vowels by bilingual speakers of Spanish and English.

    Science.gov (United States)

    Levey, Sandra

    2004-10-01

    The goal of this study was to examine whether listeners bilingual in Spanish and English would have difficulty in the discrimination of English vowel contrasts. An additional goal was to estimate the correlation between their discrimination and production of these vowels. Participants (40 bilingual Spanish- and English-speaking and 40 native monolingual English-speaking college students, 23-36 years of age) participated (M age = 25.3 yr., Mdn = 25.0). The discrimination and production of English vowels in real and novel words by adult participants bilingual in Spanish and English were examined and their discrimination was compared with that of 40 native monolingual English-speaking participants. Stimuli were presented within triads in an ABX paradigm. Novel words were chosen to represent new words when learning a new language and to provide a more valid test of discrimination. Bilingual participants' productions of vowels were judged by two independent listeners to estimate the correlation between discrimination and production. Discrimination accuracy was significantly greater for native English-speaking participants than for bilingual participants for vowel contrasts and novel words. Significant errors also appeared in the bilingual participants' productions of certain vowels. Earlier age of acquisition, absence of communication problems, and greater percentage of time devoted to communication contributed to greater accuracy in discrimination and production.

  14. Homodyne laser interferometer involving minimal quadrature phase error to obtain subnanometer nonlinearity.

    Science.gov (United States)

    Cui, Junning; He, Zhangqiang; Jiu, Yuanwei; Tan, Jiubin; Sun, Tao

    2016-09-01

    The demand for minimal cyclic nonlinearity error in laser interferometry is increasing as a result of advanced scientific research projects. Research shows that the quadrature phase error is the main effect that introduces cyclic nonlinearity error, and polarization-mixing cross talk during beam splitting is the main error source that causes the quadrature phase error. In this paper, a new homodyne quadrature laser interferometer configuration based on nonpolarization beam splitting and balanced interference between two circularly polarized laser beams is proposed. Theoretical modeling indicates that the polarization-mixing cross talk is elaborately avoided through nonpolarizing and Wollaston beam splitting, with a minimum number of quadrature phase error sources involved. Experimental results show that the cyclic nonlinearity error of the interferometer is up to 0.6 nm (peak-to-valley value) without any correction and can be further suppressed to 0.2 nm with a simple gain and offset correction method.
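    The "simple gain and offset correction" mentioned above can be sketched as follows: estimate the DC offsets and gains of the two quadrature signals from their extremes, rescale, and only then take the phase. The signal parameters below are illustrative; a practical correction would typically be fitted, e.g. by Heydemann-style ellipse fitting.

```python
import numpy as np

def gain_offset_correct(I, Q):
    # remove the DC offsets and equalise the gains of the two quadrature
    # signals, then recover the (unwrapped) interferometric phase
    I0 = (I - (I.max() + I.min()) / 2) / ((I.max() - I.min()) / 2)
    Q0 = (Q - (Q.max() + Q.min()) / 2) / ((Q.max() - Q.min()) / 2)
    return np.unwrap(np.arctan2(Q0, I0))

# quadrature signals with unequal gains and DC offsets (illustrative values)
phi = np.linspace(0, 4 * np.pi, 1000)
I = 1.2 * np.cos(phi) + 0.10
Q = 0.9 * np.sin(phi) - 0.05
raw = np.unwrap(np.arctan2(Q, I))           # phase without any correction
corrected = gain_offset_correct(I, Q)
```

    The uncorrected phase carries a cyclic error of order 0.1 rad from the gain imbalance and offsets, while the corrected phase tracks the true phase; this mimics the suppression of cyclic nonlinearity described above, though the residual 0.2 nm figure also reflects error sources this toy model omits.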

  15. Emotion-discrimination deficits in mild Alzheimer disease.

    Science.gov (United States)

    Kohler, Christian G; Anselmo-Gallagher, Gerri; Bilker, Warren; Karlawish, Jason; Gur, Raquel E; Clark, Christopher M

    2005-11-01

    Mild Alzheimer disease (AD) preferentially affects temporal lobe regions, which represent important structures in memory and emotional processes. This study investigated emotion discrimination in people with mild AD versus caretakers. Twenty AD subjects and 22 caretakers underwent computerized testing of emotion recognition and differentiation. Performance was compared between groups, controlling for possible effects of age and cognitive abilities. AD subjects showed diminished recognition of happy, sad, fearful, and neutral expressions. They also exhibited decreased differentiation between happy and sad expressions. Controlling for effects of cognitive dysfunction, AD subjects differed on recognition of happy and sad expressions, differentiation of sad facial expressions, and error patterns for fearful and neutral faces. Diminished abilities for emotion discrimination are present in persons with mild AD. In persons with mild AD, who frequently reside in their own home or with close family, this diminished ability may adversely affect social functioning and quality of life.

  16. Discriminating dysplasia: Optical tomographic texture analysis of colorectal polyps.

    Science.gov (United States)

    Li, Wenqi; Coats, Maria; Zhang, Jianguo; McKenna, Stephen J

    2015-12-01

    Optical projection tomography enables 3-D imaging of colorectal polyps at resolutions of 5-10 µm. This paper investigates the ability of image analysis based on 3-D texture features to discriminate diagnostic levels of dysplastic change from such images, specifically, low-grade dysplasia, high-grade dysplasia and invasive cancer. We build a patch-based recognition system and evaluate both multi-class classification and ordinal regression formulations on a 90 polyp dataset. 3-D texture representations computed with a hand-crafted feature extractor, random projection, and unsupervised image filter learning are compared using a bag-of-words framework. We measure performance in terms of error rates, F-measures, and ROC surfaces. Results demonstrate that randomly projected features are effective. Discrimination was improved by carefully manipulating various important aspects of the system, including class balancing, output calibration and approximation of non-linear kernels.

  17. Multi spectral imaging analysis for meat spoilage discrimination

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Carstensen, Jens Michael; Papadopoulou, Olga

    […] with corresponding sensory data would be of great interest. The purpose of this research was to produce a method capable of quantifying and/or predicting the spoilage status (e.g. expressed in TVC counts as well as in sensory evaluation) using a multi-spectral image of a meat sample, and thereby avoid any time[…] classification methods: Naive Bayes Classifier as a reference model, Canonical Discriminant Analysis (CDA) and Support Vector Classification (SVC). As the final step, generalization of the models was performed using k-fold validation (k=10). Results showed that image analysis provided good discrimination of meat samples. In the case where all data were taken together, the misclassification error amounted to 16%. When spoilage status was based on visual sensory data, the model produced a MER of 22% for the combined dataset. These results suggest that it is feasible to employ a multi-spectral image […]

  18. Discriminating the structure of rotated three-dimensional figures.

    Science.gov (United States)

    Barfield, W; Salvendy, G

    1987-10-01

    Visualizing the structure of transformed (by rotation) three-dimensional (3-D) figures is an important aspect of information processing for computer-graphics tasks. However, little research exists to establish the speed and accuracy with which subjects perform discrimination tasks for transformed images, or the effects of rotation variables on perceiving transformed images. This research tests the effects of figural complexity and of angles and axes of rotation on the speed and accuracy with which subjects discriminate the structure of rotated 3-D wireframe images. Results show that response times are affected more by angles than by axes of rotation, that the specific form of the image affects error rates, and that the number of 90-degree bends which determine the structure of an image may be an inadequate measure of form complexity for the task described here.

  19. Multi spectral imaging analysis for meat spoilage discrimination

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Carstensen, Jens Michael; Papadopoulou, Olga

    […] was performed in parallel with videometer image snapshots and sensory analysis. Odour and colour characteristics of meat were determined by a test panel and attributed to three pre-characterized quality classes, namely Fresh, Semi-Fresh and Spoiled, during the days of its shelf life. So far, different […] classification methods: Naive Bayes Classifier as a reference model, Canonical Discriminant Analysis (CDA) and Support Vector Classification (SVC). As the final step, generalization of the models was performed using k-fold validation (k=10). Results showed that image analysis provided good discrimination of meat samples regarding the spoilage process as evaluated from sensory as well as from microbiological data. The support vector classification (SVC) model outperformed the other models. Specifically, the misclassification error rate (MER), derived from odour characteristics, was 18% for both aerobic and MAP meat […]

  20. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data, and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed...... as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard...... linear contrast in a generalized linear model using the probit link function. All methods developed in the paper are implemented in our free R-package sensR (http://www.cran.r-project.org/package=sensR/). This includes the basic power and sample size calculations for these four discrimination tests...
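For the 2-AFC protocol the Thurstonian link has the closed form P(correct) = Φ(δ/√2), so the d' estimate is a probit back-transform of the observed proportion correct. A minimal sketch of that relation (an illustration only, not the sensR implementation, which also covers the triangle, duo-trio and 3-AFC rules):

```python
from scipy.stats import norm

def dprime_2afc(n_correct, n_total):
    """Thurstonian estimate of the sensory difference (d') for the 2-AFC
    protocol.  The decision rule gives P(correct) = Phi(d' / sqrt(2)),
    so inverting the probit link maps the observed proportion correct
    back onto the d' scale."""
    p_correct = n_correct / n_total
    return 2 ** 0.5 * norm.ppf(p_correct)
```

For example, 75 correct out of 100 trials corresponds to d' of roughly 0.95, while chance performance (50/100) maps to d' = 0.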

  1. Neural discriminability in rat lateral extrastriate cortex and deep but not superficial primary visual cortex correlates with shape discriminability.

    Science.gov (United States)

    Vermaercke, Ben; Van den Bergh, Gert; Gerich, Florian; Op de Beeck, Hans

    2015-01-01

    Recent studies have revealed a surprising degree of functional specialization in rodent visual cortex. It is unknown to what degree this functional organization is related to the well-known hierarchical organization of the visual system in primates. We designed a study in rats that targets one of the hallmarks of the hierarchical object vision pathway in primates: selectivity for behaviorally relevant dimensions. We compared behavioral performance in a visual water maze with neural discriminability in five visual cortical areas. We tested behavioral discrimination in two independent batches of six rats using six pairs of shapes used previously to probe shape selectivity in monkey cortex (Lehky and Sereno, 2007). The relative difficulty (error rate) of shape pairs was strongly correlated between the two batches, indicating that some shape pairs were more difficult to discriminate than others. Then, we recorded in naive rats from five visual areas from primary visual cortex (V1) over areas LM, LI, LL, up to lateral occipito-temporal cortex (TO). Shape selectivity in the upper layers of V1, where the information enters cortex, correlated mostly with physical stimulus dissimilarity and not with behavioral performance. In contrast, neural discriminability in lower layers of all areas was strongly correlated with behavioral performance. These findings, in combination with the results from Vermaercke et al. (2014b), suggest that the functional specialization in rodent lateral visual cortex reflects a processing hierarchy resulting in the emergence of complex selectivity that is related to behaviorally relevant stimulus differences.

  2. Deep solar minimum and global climate changes

    Directory of Open Access Journals (Sweden)

    Ahmed A. Hady

    2013-05-01

    Full Text Available This paper examines the deep minimum of solar cycle 23 and its potential impact on climate change. In addition, a source region of the solar winds at solar activity minimum has been studied, especially for solar cycle 23, the deepest during the last 500 years. Solar activities have had a notable effect on palaeoclimatic changes. Contemporary solar activity is so weak that it would be expected to cause global cooling; however, the prevalent global warming, caused by the build-up of greenhouse gases in the troposphere, seems to exceed this solar effect. This paper discusses this issue.

  3. A minimum achievable PV electrical generating cost

    Energy Technology Data Exchange (ETDEWEB)

    Sabisky, E.S. [11 Carnation Place, Lawrenceville, NJ 08648 (United States)

    1996-03-22

    The role and share of photovoltaic (PV) generated electricity in our nation's future energy arsenal is primarily dependent on its future production cost. This paper provides a framework for obtaining a minimum achievable electrical generating cost (a lower bound) for fixed, flat-plate photovoltaic systems. A cost of 2.8 ¢/kWh (1990 $) was derived for a plant located in Southwestern USA sunshine using a cost of money of 8%. In addition, a value of 22 ¢/Wp (1990 $) was estimated as a minimum module manufacturing cost/price.

  4. Weight-Constrained Minimum Spanning Tree Problem

    OpenAIRE

    Henn, Sebastian Tobias

    2007-01-01

    In an undirected graph G we associate costs and weights to each edge. The weight-constrained minimum spanning tree problem is to find a spanning tree of total edge weight at most a given value W and of minimum total cost under this restriction. In this thesis a literature overview on this NP-hard problem, theoretical properties concerning the convex hull and the Lagrangian relaxation are given. We also present some in- and exclusion-tests for this problem. We apply a ranking algorithm and the me...
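The Lagrangian relaxation mentioned above can be sketched by folding the weight constraint into the edge costs and sweeping the multiplier, keeping the cheapest feasible tree found. This is a heuristic illustration under assumed names and a simple multiplier grid; the thesis's exact scheme, including its ranking algorithm, is not reproduced:

```python
def kruskal(n, edges, key):
    """Minimum spanning tree by Kruskal's algorithm under cost function `key`.
    Edges are tuples (u, v, cost, weight)."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    tree = []
    for e in sorted(edges, key=key):
        ra, rb = find(e[0]), find(e[1])
        if ra != rb:
            parent[ra] = rb
            tree.append(e)
    return tree

def lagrangian_wmst(n, edges, max_weight, lams=None):
    """Lagrangian-relaxation heuristic for the weight-constrained MST:
    minimize cost + lam * weight for a sweep of multipliers lam, and keep
    the cheapest tree whose total weight satisfies the constraint."""
    lams = lams if lams is not None else [i / 10 for i in range(51)]
    best = None
    for lam in lams:
        tree = kruskal(n, edges, key=lambda e: e[2] + lam * e[3])
        cost = sum(e[2] for e in tree)
        weight = sum(e[3] for e in tree)
        if weight <= max_weight and (best is None or cost < best[0]):
            best = (cost, weight, tree)
    return best
```

On a small graph the sweep recovers the constrained optimum: as the multiplier grows, heavy edges are progressively penalized until a feasible tree appears.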

  5. The Usability-Error Ontology

    DEFF Research Database (Denmark)

    2013-01-01

    ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO...... in patients coming to harm. Often the root cause analysis of these adverse events can be traced back to Usability Errors in the Health Information Technology (HIT) or its interaction with users. Interoperability of the documentation of HIT related Usability Errors in a consistent fashion can improve our...... will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems....

  6. Nested Quantum Error Correction Codes

    CERN Document Server

    Wang, Zhuo; Fan, Hen; Vedral, Vlatko

    2009-01-01

    The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available in constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching several short length quantum codes with certain properties. Our method works for all length and all distance codes, and is quite efficient to construct optimal or near optimal codes. Two main known methods in constructing new codes from old codes in quantum error-correction theory, the concatenating and pasting, can be understood in the framework of nested quantum error correction codes.

  7. Wind adaptive modeling of transmission lines using minimum description length

    Science.gov (United States)

    Jaw, Yoonseok; Sohn, Gunho

    2017-03-01

    The transmission lines are moving objects, whose positions are dynamically affected by wind-induced conductor motion while they are acquired by airborne laser scanners. This wind effect results in a noisy distribution of laser points, which often hinders accurate representation of transmission lines and thus leads to various types of modeling errors. This paper presents a new method for complete 3D transmission line model reconstruction in the framework of inner- and across-span analysis. The highlighted fact is that the proposed method is capable of indirectly estimating the noise scales that corrupt the quality of laser observations affected by different wind speeds, through a linear regression analysis. In the inner-span analysis, individual transmission line models of each span are evaluated based on the Minimum Description Length theory, and erroneous transmission line segments are subsequently replaced by precise transmission line models with a wind-adaptive noise scale estimated. In the subsequent step of across-span analysis, detecting the precise start and end positions of the transmission line models, known as the Points of Attachment, is the key issue for correcting partial modeling errors as well as refining transmission line models. Finally, geometric and topological completion of the transmission line models is achieved over the entire network. A performance evaluation was conducted over 138.5 km of corridor data. In a modest wind condition, the results demonstrate that the proposed method can improve the accuracy of non-wind-adaptive initial models from an average success rate of 48% to complete transmission line models in the range between 85% and 99.5%, with a root-mean-square positional accuracy of 9.55 cm for transmission line models and 28 cm for Points of Attachment.
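The Minimum Description Length evaluation used above can be illustrated with the common two-part approximation: (k/2)·log2 n bits for k model parameters plus (n/2)·log2(RSS/n) bits for Gaussian residuals, with the shortest total description winning. The polynomial example and function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def mdl_bits(n_params, residuals):
    """Two-part MDL approximation (in bits): parameter cost (k/2)*log2(n)
    plus data cost (n/2)*log2(RSS/n) under a Gaussian residual model."""
    n = len(residuals)
    rss = float(np.sum(np.square(residuals)))
    return 0.5 * n_params * np.log2(n) + 0.5 * n * np.log2(rss / n)

def select_degree(x, y, max_degree=6):
    """Pick the polynomial degree whose least-squares fit minimizes the
    two-part MDL criterion (model complexity vs. residual fit)."""
    best_deg, best_bits = None, np.inf
    for deg in range(max_degree + 1):
        coeffs = np.polyfit(x, y, deg)
        bits = mdl_bits(deg + 1, y - np.polyval(coeffs, x))
        if bits < best_bits:
            best_deg, best_bits = deg, bits
    return best_deg
```

The same trade-off drives the paper's span evaluation: a noisier point cloud (larger residual cost) justifies a different model than a calm-wind one.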

  8. Application of Phase Congruency for Discriminating Some Lung Diseases Using Chest Radiograph

    Directory of Open Access Journals (Sweden)

    Omar Mohd Rijal

    2015-01-01

    Full Text Available A novel procedure using phase congruency is proposed for discriminating some lung diseases using chest radiographs. Phase congruency provides information about transitions between adjacent pixels. Abrupt changes of phase congruency values between pixels may suggest a possible boundary or another feature that may be used for discrimination. This property of phase congruency may have potential for deciding between disease present and disease absent where the regions of infection on the images have no obvious shape, size, or configuration. Five texture measures calculated from phase congruency and Gabor were shown to be normally distributed. This gave good indicators of discrimination errors in the form of the probability of Type I Error (δ) and the probability of Type II Error (β). However, since 1 − δ is the true positive fraction (TPF) and β is the false positive fraction (FPF), an ROC analysis was used to decide on the choice of texture measures. Given that features are normally distributed, for the discrimination between disease present and disease absent, energy, contrast, and homogeneity from phase congruency gave better results compared to those using Gabor. Similarly, for the more difficult problem of discriminating lobar pneumonia and lung cancer, entropy and homogeneity from phase congruency gave better results relative to Gabor.

  9. Application of phase congruency for discriminating some lung diseases using chest radiograph.

    Science.gov (United States)

    Rijal, Omar Mohd; Ebrahimian, Hossein; Noor, Norliza Mohd; Hussin, Amran; Yunus, Ashari; Mahayiddin, Aziah Ahmad

    2015-01-01

    A novel procedure using phase congruency is proposed for discriminating some lung diseases using chest radiographs. Phase congruency provides information about transitions between adjacent pixels. Abrupt changes of phase congruency values between pixels may suggest a possible boundary or another feature that may be used for discrimination. This property of phase congruency may have potential for deciding between disease present and disease absent where the regions of infection on the images have no obvious shape, size, or configuration. Five texture measures calculated from phase congruency and Gabor were shown to be normally distributed. This gave good indicators of discrimination errors in the form of the probability of Type I Error (δ) and the probability of Type II Error (β). However, since 1 − δ is the true positive fraction (TPF) and β is the false positive fraction (FPF), an ROC analysis was used to decide on the choice of texture measures. Given that features are normally distributed, for the discrimination between disease present and disease absent, energy, contrast, and homogeneity from phase congruency gave better results compared to those using Gabor. Similarly, for the more difficult problem of discriminating lobar pneumonia and lung cancer, entropy and homogeneity from phase congruency gave better results relative to Gabor.
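Because the texture features were shown to be normally distributed, the operating points and the area under the ROC curve follow in closed form from the two class-conditional Gaussians. A minimal sketch of that binormal ROC construction (an illustration of the analysis idea; names and the thresholding rule are assumptions, not the authors' code):

```python
from math import sqrt
from scipy.stats import norm

def binormal_tpf_fpf(t, mu_absent, sd_absent, mu_present, sd_present):
    """Operating point of a 'disease present if feature > t' rule when the
    feature is Gaussian in each class: TPF = P(feature > t | present),
    FPF = P(feature > t | absent)."""
    tpf = norm.sf(t, loc=mu_present, scale=sd_present)
    fpf = norm.sf(t, loc=mu_absent, scale=sd_absent)
    return tpf, fpf

def binormal_auc(mu_absent, sd_absent, mu_present, sd_present):
    """Closed-form area under the binormal ROC curve."""
    return norm.cdf((mu_present - mu_absent) / sqrt(sd_absent**2 + sd_present**2))
```

Sweeping the threshold t traces the full ROC curve, and comparing AUCs is one way to choose among candidate texture measures, as the abstract describes.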

  10. Processor register error correction management

    Science.gov (United States)

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  11. Haptic Visual Discrimination and Intelligence.

    Science.gov (United States)

    McCarron, Lawrence; Horn, Paul W.

    1979-01-01

    The Haptic Visual Discrimination Test of tactual-visual information processing was administered to 39 first-graders, along with standard intelligence, academic potential, and spatial integration tests. Results revealed consistently significant associations between the importance of parieto-occipital areas for organizing sensory data as well as for…

  12. Structural Discrimination and Autonomous Vehicles

    DEFF Research Database (Denmark)

    Liu, Hin-Yan

    2016-01-01

    discrimination looms with the possibility of crash optimisation impulses in which a protective shield is cast over those individuals in which society may have a vested interest in prioritising or safeguarding. A stark dystopian scenario is introduced to sketch the contours whereby personal beacons signal...

  13. Don't demotivate, discriminate

    NARCIS (Netherlands)

    J.J.A. Kamphorst (Jurjen); O.H. Swank (Otto)

    2013-01-01

    This paper offers a new theory of discrimination in the workplace. We consider a manager who has to assign two tasks to two employees. The manager has superior information about the employees' abilities. We show that besides an equilibrium where the manager does not dis

  14. Structural Discrimination and Autonomous Vehicles

    DEFF Research Database (Denmark)

    Liu, Hin-Yan

    2016-01-01

    discrimination looms with the possibility of crash optimisation impulses in which a protective shield is cast over those individuals in which society may have a vested interest in prioritising or safeguarding. A stark dystopian scenario is introduced to sketch the contours whereby personal beacons signal...

  15. Experiencing discrimination increases risk taking.

    Science.gov (United States)

    Jamieson, Jeremy P; Koslov, Katrina; Nock, Matthew K; Mendes, Wendy Berry

    2013-02-01

    Prior research has revealed racial disparities in health outcomes and health-compromising behaviors, such as smoking and drug abuse. It has been suggested that discrimination contributes to such disparities, but the mechanisms through which this might occur are not well understood. In the research reported here, we examined whether the experience of discrimination affects acute physiological stress responses and increases risk-taking behavior. Black and White participants each received rejecting feedback from partners who were either of their own race (in-group rejection) or of a different race (out-group rejection, which could be interpreted as discrimination). Physiological (cardiovascular and neuroendocrine) changes, cognition (memory and attentional bias), affect, and risk-taking behavior were assessed. Significant participant race × partner race interactions were observed. Cross-race rejection, compared with same-race rejection, was associated with lower levels of cortisol, increased cardiac output, decreased vascular resistance, greater anger, increased attentional bias, and more risk-taking behavior. These data suggest that perceived discrimination is associated with distinct profiles of physiological reactivity, affect, cognitive processing, and risk taking, implicating direct and indirect pathways to health disparities.

  16. The Usability-Error Ontology

    DEFF Research Database (Denmark)

    2013-01-01

    ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO...... will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems....

  17. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    Science.gov (United States)

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  18. Measurement Error and Equating Error in Power Analysis

    Science.gov (United States)

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  19. Evaluating habitat selection with radio-telemetry triangulation error

    Science.gov (United States)

    Samuel, M.D.; Kenow, K.P.

    1992-01-01

    Radio-telemetry triangulation errors result in the mislocation of animals and misclassification of habitat use. We present analytical methods that provide improved estimates of habitat use when misclassification probabilities can be determined. When misclassification probabilities cannot be determined, we use random subsamples from the error distribution of an estimated animal location to improve habitat use estimates. We conducted Monte Carlo simulations to evaluate the effects of this subsampling method, triangulation error, number of animal locations, habitat availability, and habitat complexity on bias and variation in habitat use estimates. Results for the subsampling method are illustrated using habitat selection by redhead ducks (Aythya americana ). We recommend the subsampling method with a minimum of 50 random points to reduce problems associated with habitat misclassification.
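The subsampling estimator described above can be sketched as follows, assuming a bivariate-normal triangulation error model and a hypothetical `classify(x, y)` habitat map (both assumptions for illustration; the paper's error distribution comes from the triangulation geometry):

```python
import numpy as np

def habitat_use_fractions(est_location, error_sd_xy, classify, n_points=50, rng=None):
    """Instead of assigning a single habitat to the (error-prone)
    triangulated location, draw random points from the location's error
    distribution, classify each point, and average the classifications
    into habitat-use fractions."""
    rng = np.random.default_rng(rng)
    pts = rng.normal(loc=est_location, scale=error_sd_xy, size=(n_points, 2))
    labels = [classify(x, y) for x, y in pts]
    values, counts = np.unique(labels, return_counts=True)
    return dict(zip(values, counts / n_points))
```

With at least 50 points per location, as the abstract recommends, the averaged fractions replace a single, possibly misclassified, habitat assignment.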

  20. Minimum training requirement in ultrasound imaging of peripheral arterial disease

    DEFF Research Database (Denmark)

    Eiberg, J P; Hansen, M A; Grønvall Rasmussen, J B

    2008-01-01

    To demonstrate the minimum training requirement when performing ultrasound of peripheral arterial disease.

  1. PENERAPAN METODE LEAST MEDIAN SQUARE-MINIMUM COVARIANCE DETERMINANT (LMS-MCD DALAM REGRESI KOMPONEN UTAMA

    Directory of Open Access Journals (Sweden)

    I PUTU EKA IRAWAN

    2014-01-01

    Full Text Available Principal Component Regression is a method to overcome multicollinearity by combining principal component analysis with regression analysis. The calculation of classical principal component analysis is based on the regular covariance matrix. The covariance matrix is optimal if the data originate from a multivariate normal distribution, but it is very sensitive to the presence of outliers. The method of Least Median Square-Minimum Covariance Determinant (LMS-MCD) is used as an alternative to overcome this problem. The purpose of this research is to compare Principal Component Regression (RKU) and the Least Median Square-Minimum Covariance Determinant (LMS-MCD) method in dealing with outliers. In this study, the LMS-MCD method has smaller bias and mean square error (MSE) of the parameter estimators than RKU. Based on the difference of parameter estimators, a test nevertheless shows this difference to be greater for the LMS-MCD method than for the RKU method.
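The robust construction the abstract describes, building the principal components from an outlier-resistant covariance rather than the classical one, can be sketched with scikit-learn's `MinCovDet` for the MCD step. This is a sketch only: the paper's LMS regression step is simplified here to ordinary least squares on the robust scores, and the function name is an assumption:

```python
import numpy as np
from sklearn.covariance import MinCovDet

def robust_pcr(X, y, n_components):
    """Principal component regression built on the MCD robust covariance:
    eigenvectors of the robust covariance give outlier-resistant
    components, and the response is regressed on the projected scores."""
    mcd = MinCovDet(random_state=0).fit(X)
    eigvals, eigvecs = np.linalg.eigh(mcd.covariance_)
    order = np.argsort(eigvals)[::-1][:n_components]   # largest variance first
    components = eigvecs[:, order]
    scores = (X - mcd.location_) @ components          # robust centering
    coef, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
    return components, coef
```

Because the MCD covariance and location are estimated from the least-scattered subset of rows, a handful of contaminated observations no longer rotates the components the regression is built on.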

  2. Completeness properties of the minimum uncertainty states

    Science.gov (United States)

    Trifonov, D. A.

    1993-01-01

    The completeness properties of the Schrodinger minimum uncertainty states (SMUS) and of some of their subsets are considered. The invariant measures and the resolution unity measures for the set of SMUS are constructed and the representation of squeezing and correlating operators and SMUS as superpositions of Glauber coherent states on the real line is elucidated.

  3. Minimum Wage Effects throughout the Wage Distribution

    Science.gov (United States)

    Neumark, David; Schweitzer, Mark; Wascher, William

    2004-01-01

    This paper provides evidence on a wide set of margins along which labor markets can adjust in response to increases in the minimum wage, including wages, hours, employment, and ultimately labor income. Not surprisingly, the evidence indicates that low-wage workers are most strongly affected, while higher-wage workers are little affected. Workers…

  4. A Minimum Relative Entropy Principle for AGI

    NARCIS (Netherlands)

    Ven, Antoine van de; Schouten, Ben

    2010-01-01

    In this paper the principle of minimum relative entropy (PMRE) is proposed as a fundamental principle and idea that can be used in the field of AGI. It is shown to have a very strong mathematical foundation, that it is even more fundamental than Bayes' rule or MaxEnt alone, and that it can be related

  5. What's Happening in Minimum Competency Testing.

    Science.gov (United States)

    Frahm, Robert; Covington, Jimmie

    An examination of the current status of minimum competency testing is presented in a series of short essays, which discuss case studies of individual school systems and state approaches. Sections are also included on the viewpoints of critics and supporters, teachers and teacher organizations, principals and students, and the federal government.…

  6. Statistical inference of Minimum Rank Factor Analysis

    NARCIS (Netherlands)

    Shapiro, A; Ten Berge, JMF

    2002-01-01

    For any given number of factors, Minimum Rank Factor Analysis yields optimal communalities for an observed covariance matrix in the sense that the unexplained common variance with that number of factors is minimized, subject to the constraint that both the diagonal matrix of unique variances and the

  7. Minimum Bias and Underlying Event at CMS

    CERN Document Server

    Fano, Livio

    2006-01-01

    The prospects of measuring minimum bias collisions (MB) and studying the underlying event (UE) at CMS are discussed. Two methods are described. The first is based on the measurement of charged tracks in the transverse region with respect to a charged-particle jet. The second technique relies on the selection of muon-pair events from the Drell-Yan process.

  8. 44 CFR 62.6 - Minimum commissions.

    Science.gov (United States)

    2010-10-01

    ... ADJUSTMENT OF CLAIMS Issuance of Policies § 62.6 Minimum commissions. (a) The earned commission which shall be paid to any property or casualty insurance agent or broker duly licensed by a state insurance regulatory authority, with respect to each policy or renewal the agent duly procures on behalf of the...

  9. Context quantization by minimum adaptive code length

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, Xiaolin

    2007-01-01

    Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols...
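The design criterion can be illustrated with the Krichevsky-Trofimov sequential estimator for binary symbols (an illustrative stand-in for the adaptive coder; the authors' quantizer design algorithm is not reproduced here): two conditioning contexts should be quantized into one exactly when the pooled data code in fewer adaptive bits than the two contexts coded separately.

```python
from math import log2

def kt_code_length(bits):
    """Adaptive (Krichevsky-Trofimov) code length, in bits, of a binary
    sequence coded sequentially: the next symbol b costs
    -log2((count[b] + 1/2) / (total + 1))."""
    counts = [0, 0]
    length = 0.0
    for b in bits:
        length -= log2((counts[b] + 0.5) / (counts[0] + counts[1] + 1))
        counts[b] += 1
    return length

def merge_cost(context_a, context_b):
    """Change in total adaptive code length if the two conditioning
    contexts are quantized into one; negative means merging is preferred."""
    return (kt_code_length(context_a + context_b)
            - kt_code_length(context_a) - kt_code_length(context_b))
```

Merging two contexts with similar symbol statistics shortens the code (it fights context dilution), while merging dissimilar contexts lengthens it; the KT code length depends only on the symbol counts, so the pooling order does not matter.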

  10. A Minimum Relative Entropy Principle for AGI

    NARCIS (Netherlands)

    B.A.M. Ben Schouten; Antoine van de van de Ven

    2010-01-01

    In this paper the principle of minimum relative entropy (PMRE) is proposed as a fundamental principle and idea that can be used in the field of AGI. It is shown to have a very strong mathematical foundation, that it is even more fundamental than Bayes' rule or MaxEnt alone, and that it can be related

  11. Statistical inference of Minimum Rank Factor Analysis

    NARCIS (Netherlands)

    Shapiro, A; Ten Berge, JMF

    For any given number of factors, Minimum Rank Factor Analysis yields optimal communalities for an observed covariance matrix in the sense that the unexplained common variance with that number of factors is minimized, subject to the constraint that both the diagonal matrix of unique variances and the

  12. Time Crystals from Minimum Time Uncertainty

    CERN Document Server

    Faizal, Mir; Das, Saurya

    2016-01-01

    Motivated by the Generalized Uncertainty Principle, covariance, and a minimum measurable time, we propose a deformation of the Heisenberg algebra, and show that this leads to corrections to all quantum mechanical systems. We also demonstrate that such a deformation implies a discrete spectrum for time. In other words, time behaves like a crystal.

  13. Minimum impact house prototype for sustainable building

    NARCIS (Netherlands)

    Drexler, H.; Jauslin, D.

    2010-01-01

    The Minihouse is a prototype for a sustainable townhouse. On a site of only 29 sqm it offers 154 sqm of urban life. The project 'Minimum Impact House' addresses two important questions: How do we provide living space in the cities without destroying the landscape? How to sustainably improve the ecolo

  14. ASSESSMENT OF ANNUAL MINIMUM TEMPERATURE IN SOME ...

    African Journals Online (AJOL)

    USER

    2016-04-11

    Apr 11, 2016 ... This work attempts to investigate the pattern of minimum temperature from 19 1 to 2006; an attempt was also .... Similarly, the heavy cloud cover acts as a blanket for terrestrial ... within a General Circulation Model (GCM) can be ...

  15. Minimum Competency Testing--Grading or Evaluation?

    Science.gov (United States)

    Prakash, Madhu Suri

    The consequences of the minimum competency testing movement may bring into question the basic assumptions, goals, and expectations of our school system. The intended use of these tests is the assessment of students; the unintended consequence may be the assessment of the school system. There are two ways in which schools may fail in the context of…

  16. Minimum intervention dentistry: periodontics and implant dentistry.

    Science.gov (United States)

    Darby, I B; Ngo, L

    2013-06-01

    This article will look at the role of minimum intervention dentistry in the management of periodontal disease. It will discuss the role of appropriate assessment, treatment and risk factors/indicators. In addition, the role of the patient and early intervention in the continuing care of dental implants will be discussed as well as the management of peri-implant disease.

  17. Minimum output entropy of Gaussian channels

    CERN Document Server

    Lloyd, S; Maccone, L; Pirandola, S; Garcia-Patron, R

    2009-01-01

    We show that the minimum output entropy for all single-mode Gaussian channels is additive and is attained for Gaussian inputs. This allows the derivation of the channel capacity for a number of Gaussian channels, including that of the channel with linear loss, thermal noise, and linear amplification.
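The entropies such capacity results are built from can be evaluated directly: the von Neumann entropy (in bits) of a single-mode thermal state with mean photon number N is the standard expression g(N) = (N+1)·log2(N+1) − N·log2 N. The helper below is a background-formula sketch, not code from the paper:

```python
from math import log2

def g(n_bar):
    """Von Neumann entropy, in bits, of a single-mode thermal state with
    mean photon number n_bar: g(N) = (N+1)*log2(N+1) - N*log2(N).
    The N = 0 limit (the vacuum, a pure state) has zero entropy."""
    if n_bar == 0:
        return 0.0
    return (n_bar + 1) * log2(n_bar + 1) - n_bar * log2(n_bar)
```

For a channel that adds mean thermal photon number N to a Gaussian input, the minimum output entropy quoted by such results is g(N); it grows monotonically with N, from g(0) = 0 for the noiseless case.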

  18. 7 CFR 35.13 - Minimum quantity.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Minimum quantity. 35.13 Section 35.13 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS EXPORT...

  19. 7 CFR 33.10 - Minimum requirements.

    Science.gov (United States)

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Regulations § 33.10 Minimum requirements. No person shall... Early: Provided, That apples for export to Pacific ports of Russia shall grade at least U.S. Utility...

  20. The periodicity of Grand Solar Minimum

    Science.gov (United States)

    Velasco Herrera, Victor Manuel

    2016-07-01

    The sunspot number is the most used index to quantify solar activity. Nevertheless, the sunspot number is a synthetic index and not a physical index. Therefore, we should be careful when using the sunspot number to quantify low (high) solar activity. One of the major problems of using sunspots to quantify solar activity is that its minimum value is zero. This zero value hinders the reconstruction of the solar cycle during the Maunder minimum. All solar indexes can be used as analog signals, which can be easily converted into digital signals. In contrast, the conversion of a digital signal into an analog signal is not in general a simple task. The sunspot number during the Maunder minimum can be studied as a digital signal of the solar activity. In 1894, Maunder published a discovery that has kept Solar Physics at an impasse. In his famous work on "A Prolonged Sunspot Minimum" Maunder wrote: "The sequence of maximum and minimum has, in fact, been unfailing during the present century [..] and yet there [..], the ordinary solar cycle was once interrupted, and one long period of almost unbroken quiescence prevailed". The search for new historical Grand solar minima has been one of the most important questions in Solar Physics. However, the possibility of estimating a new Grand solar minimum is even more valuable. Since solar activity is the result of electromagnetic processes, we propose to employ the power to quantify solar activity: this is a fundamental physics concept in electrodynamics. Total Solar Irradiance is the primary energy source of the Earth's climate system and therefore its variations can contribute to natural climate change. In this work, we propose to consider the fluctuations in the power of the Total Solar Irradiance as a physical measure of the energy released by the solar dynamo, which contributes to understanding the nature of "profound solar magnetic field in calm".
Using a new reconstruction of the Total Solar Irradiance we found the

  1. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are ameliorated.

  2. Spatial frequency domain error budget

    Energy Technology Data Exchange (ETDEWEB)

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process, and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst case or RMS error on the work piece. This procedure has limited ability to differentiate between low spatial frequency form errors versus high frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. 
If the machine
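The abstract's idea of a spatial-frequency-resolved error budget can be illustrated with a short sketch (the profile, units, and the 0.1 cycles/mm cutoff are invented for illustration; this is not the paper's procedure): decompose a simulated surface-error profile into a low-frequency "form" band and a high-frequency "finish" band, and report the RMS error of each band instead of a single net number.

```python
import numpy as np

# Hypothetical sketch of a spatial-frequency error budget: split a
# simulated surface-error profile into a low-frequency "form" band and a
# high-frequency "finish" band and report the RMS error of each band.
# Profile, units (um over mm), and the 0.1 cycles/mm cutoff are invented.

def band_rms(profile, dx, cutoff):
    """RMS of the profile's content below and above `cutoff` (cycles/mm)."""
    n = len(profile)
    spec = np.fft.rfft(profile)
    freqs = np.fft.rfftfreq(n, d=dx)            # spatial frequency axis
    low = np.where(freqs <= cutoff, spec, 0)    # form-error band
    high = np.where(freqs > cutoff, spec, 0)    # finish-error band
    form = np.fft.irfft(low, n)
    finish = np.fft.irfft(high, n)
    return np.sqrt(np.mean(form ** 2)), np.sqrt(np.mean(finish ** 2))

x = np.linspace(0.0, 100.0, 2000, endpoint=False)     # 100 mm part, dx = 0.05 mm
profile = 0.5 * np.sin(2 * np.pi * 0.01 * x) \
        + 0.05 * np.sin(2 * np.pi * 2.0 * x)          # form + finish terms (um)
form_rms, finish_rms = band_rms(profile, dx=0.05, cutoff=0.1)
```

A single worst-case or RMS number would lump both bands together; the band-resolved RMS values are what let one distinguish a large form error from a small surface-finish error.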

  3. Reducing errors in emergency surgery.

    Science.gov (United States)

    Watters, David A K; Truskett, Philip G

    2013-06-01

    Errors are to be expected in health care. Adverse events occur in around 10% of surgical patients and may be even more common in emergency surgery. There is little formal teaching on surgical error in surgical education and training programmes despite their frequency. This paper reviews surgical error and provides a classification system, to facilitate learning. The approach and language used to enable teaching about surgical error was developed through a review of key literature and consensus by the founding faculty of the Management of Surgical Emergencies course, currently delivered by General Surgeons Australia. Errors may be classified as being the result of commission, omission or inition. An error of inition is a failure of effort or will and is a failure of professionalism. The risk of error can be minimized by good situational awareness, matching perception to reality, and, during treatment, reassessing the patient, team and plan. It is important to recognize and acknowledge an error when it occurs and then to respond appropriately. The response will involve rectifying the error where possible but also disclosing, reporting and reviewing at a system level all the root causes. This should be done without shaming or blaming. However, the individual surgeon still needs to reflect on their own contribution and performance. A classification of surgical error has been developed that promotes understanding of how the error was generated, and utilizes a language that encourages reflection, reporting and response by surgeons and their teams. © 2013 The Authors. ANZ Journal of Surgery © 2013 Royal Australasian College of Surgeons.

  4. An Optimization Approach of Deriving Bounds between Entropy and Error from Joint Distribution: Case Study for Binary Classifications

    Directory of Open Access Journals (Sweden)

    Bao-Gang Hu

    2016-02-01

    Full Text Available In this work, we propose a new approach to deriving the bounds between entropy and error from a joint distribution through an optimization approach. The specific case study is given on binary classifications. Two basic types of classification errors are investigated, namely, the Bayesian and non-Bayesian errors. The consideration of non-Bayesian errors is due to the fact that most classifiers result in non-Bayesian solutions. For both types of errors, we derive the closed-form relations between each bound and error components. When Fano’s lower bound in a diagram of “Error Probability vs. Conditional Entropy” is realized based on the approach, its interpretations are enlarged by including non-Bayesian errors and the two situations along with independent properties of the variables. A new upper bound for the Bayesian error is derived with respect to the minimum prior probability, which is generally tighter than Kovalevskij’s upper bound.
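For the binary case discussed above, Fano's inequality reduces to H(Y|X) <= h(Pe), with h the binary entropy function and Pe the Bayes error. A small numerical check (the joint distribution below is made up for illustration, not taken from the paper):

```python
import numpy as np

# Hedged sketch (not the paper's derivation): for a joint distribution
# p(x, y) over a finite X and binary Y, compute the Bayes error Pe and
# the conditional entropy H(Y|X), then check Fano's inequality, which
# for |Y| = 2 reduces to H(Y|X) <= h(Pe).

def h(p):
    """Binary entropy in bits, with h(0) = h(1) = 0."""
    p = float(np.clip(p, 1e-12, 1.0 - 1e-12))
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

joint = np.array([[0.35, 0.05],    # p(x=0, y=0), p(x=0, y=1)
                  [0.20, 0.40]])   # p(x=1, y=0), p(x=1, y=1)
px = joint.sum(axis=1)

# Bayes error: for each x, guess the more probable y; the residual mass is error
pe = float(sum(px[i] - joint[i].max() for i in range(len(px))))

# Conditional entropy H(Y|X) in bits
hyx = float(sum(px[i] * h(joint[i, 1] / px[i]) for i in range(len(px))))
```

Here the inequality is strict because the two conditional distributions p(y|x) have different error rates; it becomes an equality when they coincide.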

  5. Space discriminative function for microphone array robust speech recognition

    Institute of Scientific and Technical Information of China (English)

    Zhao Xianyu; Ou Zhijian; Wang Zuoying

    2005-01-01

    Based on W-disjoint orthogonality of speech mixtures, a space discriminative function was proposed to enumerate and localize competing speakers in the surrounding environments. Then, a Wiener-like post-filterer was developed to adaptively suppress interferences. Experimental results with a hands-free speech recognizer under various SNR and competing speakers settings show that nearly 69% error reduction can be obtained with a two-channel small aperture microphone array against the conventional single microphone baseline system. Comparisons were made against traditional delay-and-sum and Griffiths-Jim adaptive beamforming techniques to further assess the effectiveness of this method.

  6. Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates

    DEFF Research Database (Denmark)

    Gal, A.; Hansen, Kristoffer Arnsfelt; Koucky, Michal

    2013-01-01

    We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C:{0,1}Ω(n)→{0,1}n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: 1) if d=2, then w=Θ(n (lgn/lglgn)2); 2) if d=3, then w...

  7. Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates

    DEFF Research Database (Denmark)

    Gál, Anna; Hansen, Kristoffer Arnsfelt; Koucký, Michal;

    2012-01-01

    We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C:{0,1}Ω(n) -> {0,1}n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: (1) If d=2 then w = Θ(n ({log n/ log log n})2). (2) If d...

  8. Error Analysis in English Language Learning

    Institute of Scientific and Technical Information of China (English)

    杜文婷

    2009-01-01

    Errors in English language learning are usually classified into interlingual errors and intralingual errors. A clear knowledge of the causes of these errors will help students learn English better.

  9. Error Analysis And Second Language Acquisition

    Institute of Scientific and Technical Information of China (English)

    王惠丽

    2016-01-01

    Based on the theories of error and error analysis, this article explores the effect of errors and error analysis on SLA, and thus offers some advice to language teachers and learners.

  10. Frequency discrimination in the common marmoset (Callithrix jacchus).

    Science.gov (United States)

    Osmanski, Michael S; Song, Xindong; Guo, Yueqi; Wang, Xiaoqin

    2016-11-01

    The common marmoset (Callithrix jacchus) is a highly vocal New World primate species that has emerged in recent years as a promising model system for studies of auditory and vocal processing. Our recent studies have examined perceptual mechanisms related to the pitch of harmonic complex tones in this species. However, no previous psychoacoustic work has measured marmosets' frequency discrimination abilities for pure tones across a broad frequency range. Here we systematically examined frequency difference limens (FDLs), which measure the minimum discriminable frequency difference between two pure tones, in marmosets across most of their hearing range. Results show that marmosets' FDLs are comparable to other New World primates, with lowest values in the frequency range of ∼3.5-14 kHz. This region of lowest FDLs corresponds with the region of lowest hearing thresholds in this species measured in our previous study and also with the greatest concentration of spectral energy in the major types of marmoset vocalizations. These data suggest that frequency discrimination in the common marmoset may have evolved to match the hearing sensitivity and spectral characteristics of this species' vocalizations.

  11. Quantifying error distributions in crowding.

    Science.gov (United States)

    Hanus, Deborah; Vul, Edward

    2013-03-22

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogeneous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.

  12. Discretization error of Stochastic Integrals

    CERN Document Server

    Fukasawa, Masaaki

    2010-01-01

    Asymptotic error distribution for approximation of a stochastic integral with respect to continuous semimartingale by Riemann sum with general stochastic partition is studied. Effective discretization schemes of which asymptotic conditional mean-squared error attains a lower bound are constructed. Two applications are given; efficient delta hedging strategies with transaction costs and effective discretization schemes for the Euler-Maruyama approximation are constructed.

  13. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  14. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences, in Hamadan, Iran, were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher among men (71.4%), in the 40-50 year age group (67.6%), among less-experienced personnel (58.7%), at the MSc educational level (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  15. Disentangling timing and amplitude errors in streamflow simulations

    Science.gov (United States)

    Seibert, Simon Paul; Ehret, Uwe; Zehe, Erwin

    2016-09-01

    This article introduces an improvement in the Series Distance (SD) approach for improved discrimination and visualization of timing and magnitude uncertainties in streamflow simulations. SD emulates visual hydrograph comparison by distinguishing periods of low flow and periods of rise and recession in hydrological events. Within these periods, it determines the distance of two hydrographs not between points of equal time but between points that are hydrologically similar. The improvement comprises an automated procedure to emulate visual pattern matching, i.e. the determination of an optimal level of generalization when comparing two hydrographs, a scaled error model which is better applicable across large discharge ranges than its non-scaled counterpart, and "error dressing", a concept to construct uncertainty ranges around deterministic simulations or forecasts. Error dressing includes an approach to sample empirical error distributions by increasing variance contribution, which can be extended from standard one-dimensional distributions to the two-dimensional distributions of combined time and magnitude errors provided by SD. In a case study we apply both the SD concept and a benchmark model (BM) based on standard magnitude errors to a 6-year time series of observations and simulations from a small alpine catchment. Time-magnitude error characteristics for low flow and rising and falling limbs of events were substantially different. Their separate treatment within SD therefore preserves useful information which can be used for differentiated model diagnostics, and which is not contained in standard criteria like the Nash-Sutcliffe efficiency. Construction of uncertainty ranges based on the magnitude of errors of the BM approach and the combined time and magnitude errors of the SD approach revealed that the BM-derived ranges were visually narrower and statistically superior to the SD ranges.
This suggests that the combined use of time and magnitude errors to

  16. Error-driven learning in statistical summary perception.

    Science.gov (United States)

    Fan, Judith E; Turk-Browne, Nicholas B; Taylor, Jordan A

    2016-02-01

    We often interact with multiple objects at once, such as when balancing food and beverages on a dining tray. The success of these interactions relies upon representing not only individual objects, but also statistical summary features of the group (e.g., center-of-mass). Although previous research has established that humans can readily and accurately extract such statistical summary features, how this ability is acquired and refined through experience currently remains unaddressed. Here we ask if training and task feedback can improve summary perception. During training, participants practiced estimating the centroid (i.e., average location) of an array of objects on a touchscreen display. Before and after training, they completed a transfer test requiring perceptual discrimination of the centroid. Across 4 experiments, we manipulated the information in task feedback and how participants interacted with the objects during training. We found that vector error feedback, which conveys error both in terms of distance and direction, was the only form of feedback that improved perceptual discrimination of the centroid on the transfer test. Moreover, this form of feedback was effective only when coupled with reaching movements toward the visual objects. Taken together, these findings suggest that sensory-prediction error, signaling the mismatch between expected and actual consequences of an action, may play a previously unrecognized role in tuning perceptual representations.
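The "vector error feedback" idea, conveying both the distance and the direction of a centroid estimate's error, can be sketched in a few lines (object positions and the touch location are invented for illustration; this is not the study's stimulus code):

```python
import numpy as np

# Sketch of vector vs. scalar error feedback: vector feedback conveys
# the full displacement from the touched point to the true centroid
# (direction and distance), whereas scalar feedback keeps only the
# distance. All coordinates below are made up for the illustration.

objects = np.array([[1.0, 2.0],
                    [3.0, 6.0],
                    [5.0, 4.0]])          # on-screen object positions

centroid = objects.mean(axis=0)           # true average location

touch = np.array([2.0, 3.0])              # participant's centroid estimate
vector_error = centroid - touch           # what vector-error feedback shows
distance_error = np.linalg.norm(vector_error)   # what scalar feedback shows
```

The study's finding is that only the richer signal (`vector_error`) improved later perceptual discrimination, and only when paired with reaching movements.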

  17. Detecting categorical perception in continuous discrimination data

    NARCIS (Netherlands)

    Boersma, P.; Chládková, K.

    2010-01-01

    We present a method for assessing categorical perception from continuous discrimination data. Until recently, categorical perception of speech has exclusively been measured by discrimination and identification experiments with a small number of repeatedly presented stimuli. Experiments by Rogers and

  18. Racial Discrimination in the British Labor Market.

    Science.gov (United States)

    Firth, Michael

    1981-01-01

    Contains results of a study of racial discrimination in the British job market for accountants and financial executives. Results show that considerable discrimination remains several years after the adoption of the Race Relations Act of 1968. (CT)

  19. Onorbit IMU alignment error budget

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
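Error budgets of this kind typically roll independent 1-sigma contributors into a net per-axis estimate by root-sum-square (RSS). A minimal sketch, with invented source names and arc-second magnitudes rather than the actual STS-1 budget:

```python
import math

# Illustrative only: combining independent 1-sigma error sources by
# root-sum-square (RSS) into a net per-axis alignment estimate. The
# source names and magnitudes are invented for the sketch; they are not
# the actual STS-1 IMU alignment budget.

sources_arcsec = {
    "star tracker noise": 40.0,
    "mounting stability": 35.0,
    "IMU readout":        25.0,
    "navigation base":    10.0,
}

net_arcsec = math.sqrt(sum(v ** 2 for v in sources_arcsec.values()))
```

RSS assumes the contributors are statistically independent; correlated terms would instead be summed linearly or handled through a covariance matrix.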

  20. Measurement Error Models in Astronomy

    CERN Document Server

    Kelly, Brandon C

    2011-01-01

    I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the methods of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.

  1. Binary Error Correcting Network Codes

    CERN Document Server

    Wang, Qiwen; Li, Shuo-Yen Robert

    2011-01-01

    We consider network coding for networks experiencing worst-case bit-flip errors, and argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. We propose a new metric for errors under this model. Using this metric, we prove a new Hamming-type upper bound on the network capacity. We also show a commensurate lower bound based on GV-type codes that can be used for error-correction. The codes used to attain the lower bound are non-coherent (do not require prior knowledge of network topology). The end-to-end nature of our design enables our codes to be overlaid on classical distributed random linear network codes. Further, we free internal nodes from having to implement potentially computationally intensive link-by-link error-correction.

  2. Error Propagation in the Hypercycle

    CERN Document Server

    Campos, P R A; Stadler, P F

    1999-01-01

    We study analytically the steady-state regime of a network of n error-prone self-replicating templates forming an asymmetric hypercycle and its error tail. We show that the existence of a master template with a higher non-catalyzed self-replicative productivity, a, than the error tail ensures the stability of chains in which m < n-1 templates coexist with the master species. The stability of these chains against the error tail is guaranteed for catalytic coupling strengths (K) of order of a. We find that the hypercycle becomes more stable than the chains only for K of order of a^2. Furthermore, we show that the minimal replication accuracy per template needed to maintain the hypercycle, the so-called error threshold, vanishes like sqrt(n/K) for large K and n<=4.

  3. FPU-Supported Running Error Analysis

    OpenAIRE

    T. Zahradnický; R. Lórencz

    2010-01-01

    A-posteriori forward rounding error analyses tend to give sharper error estimates than a-priori ones, as they use actual data quantities. One such a-posteriori analysis – running error analysis – uses expressions consisting of two parts; one generates the error and the other propagates input errors to the output. This paper suggests replacing the error generating term with an FPU-extracted rounding error estimate, which produces a sharper error bound.
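A minimal sketch of running error analysis for recursive summation (the classic textbook style, not this paper's FPU-extraction variant): alongside each computed partial sum we accumulate the magnitude term that generates that step's rounding error, which yields an a-posteriori bound of u times the accumulated magnitude.

```python
# Minimal sketch of a-posteriori running error analysis for recursive
# summation (classic style, not this paper's FPU-extraction variant).
# After each computed partial sum s = fl(s + x) we accumulate |s| into
# mu; to first order, |computed - exact| <= U * mu, where U is the unit
# roundoff of IEEE binary64 arithmetic.

U = 2.0 ** -53                      # unit roundoff for double precision

def running_sum(xs):
    s, mu = 0.0, 0.0
    for x in xs:
        s = s + x                   # rounded partial sum (error generated here)
        mu = mu + abs(s)            # running accumulation of error generators
    return s, U * mu                # computed value and running error bound

s, bound = running_sum([0.1] * 10)  # famous case: repeated 0.1 does not give 1.0
```

Because the bound uses the actual partial sums rather than worst-case magnitudes, it is typically much sharper than an a-priori n*U-style estimate.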

  4. The legal understanding of intentional medical error

    Directory of Open Access Journals (Sweden)

    Totić Mirza

    2017-01-01

    Full Text Available The paper is devoted to the doctor, a professional and humanist who has dedicated himself to medicine and is committed to lifelong learning, ethics and assistance to victims, even against their express consent. The theme is focused on the problem of intentional medical error, in order to negate it in the sense that conscientious doctors should be protected from tort and freed of moral burden. This paper seeks to answer the question: if the error represents a doctor's failure to the detriment of the user (the patient), how should we treat his attempt, made professionally and with the best intentions, regardless of a fatal outcome? In addition, medical-legal theory and practice mention, besides the intentional medical error, also the unintentional one, whose occurrence does not entail any kind of responsibility, because the doctor's behavior in that case was not inconsistent with medical ethics, standards and rules. In this regard, the author's research was based on the following questions: is there a deliberate medical error; who is ready to knowingly endanger the patient by performing medical procedures contrary to the rules (neglect, avoidance of assistance, misdiagnosis, improper treatment, indifference, discrimination); who is competent to qualify the action taken as an error (intentional or unintentional); and what evidence is required for such a brutal attack on the integrity of top experts, who will be charged and prosecuted? Literature abounds with assertions that medical errors are as old as medicine, which is not true. Also, it is incorrect to say that they appeared for the first time in the middle of the nineteenth century. That would be a crude dismissal of ancient medicine, bearing in mind that even before the mentioned period there had been a very successful medicine with high-quality doctors and their brilliant achievements, but also with illnesses and dead persons. As far as the data on the exact occurrence of medical errors are concerned, the

  5. Linguistic Discrimination in Writing Assessment: How Raters React to African American "Errors," ESL Errors, and Standard English Errors on a State-Mandated Writing Exam

    Science.gov (United States)

    Johnson, David; VanBrackle, Lewis

    2012-01-01

    Raters of Georgia's (USA) state-mandated college-level writing exam, which is intended to ensure a minimal university-level writing competency, are trained to grade holistically when assessing these exams. A guiding principle in holistic grading is to not focus exclusively on any one aspect of writing but rather to give equal weight to style,…

  6. Kernel Model Applied in Kernel Direct Discriminant Analysis for the Recognition of Face with Nonlinear Variations

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A kernel-based discriminant analysis method called kernel direct discriminant analysis is employed, which combines the merit of direct linear discriminant analysis with that of the kernel trick. In order to demonstrate its better robustness to the complex and nonlinear variations of real face images, such as illumination, facial expression, scale and pose variations, experiments are carried out on the Olivetti Research Laboratory, Yale and self-built face databases. The results indicate that in contrast to kernel principal component analysis and kernel linear discriminant analysis, the method can achieve a lower error rate (7%) using only a very small set of features. Furthermore, a new corrected kernel model is proposed to improve the recognition performance. Experimental results confirm its superiority (1% in terms of recognition rate) over other polynomial kernel models.

  7. Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

    Full Text Available Minimum energy (ME) coding combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) with respect to the number of users (receivers). Minimum energy coding exploits redundant bits to save power, utilizing an RF link with On-Off Keying modulation. The relations are presented and discussed for several levels of errors expected in the employed channel, in terms of bit error rates and SNR for various numbers of users (receivers).
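The energy-saving mechanism of ME coding under On-Off Keying, where a transmitted '1' costs energy while a '0' (radio off) is free, can be sketched as follows (the 4-symbol source and its probabilities are invented for the illustration): mapping the most probable symbols to the lowest-Hamming-weight codewords, at the price of redundant code length, lowers the average number of energy-costing bits per symbol.

```python
from itertools import product

# Hedged sketch of minimum-energy (ME) coding for an On-Off-Keying link:
# sending a '1' costs transmit energy, a '0' is free, so the most
# probable symbols get the codewords with the fewest 1-bits, at the
# price of redundant code length. The 4-symbol source is invented.

probs = [0.5, 0.3, 0.15, 0.05]          # symbol probabilities, sorted

def avg_ones(codebook, probs):
    """Average number of energy-costing '1' bits per transmitted symbol."""
    return sum(p * cw.count("1") for p, cw in zip(probs, codebook))

natural = ["00", "01", "10", "11"]      # plain 2-bit binary indexing

# ME codebook: the four lowest-Hamming-weight 3-bit words
words3 = sorted(("".join(b) for b in product("01", repeat=3)),
                key=lambda w: (w.count("1"), w))
me = words3[:4]                          # ['000', '001', '010', '100']

e_natural = avg_ones(natural, probs)     # average on-bits, plain binary
e_me = avg_ones(me, probs)               # average on-bits, ME coding
```

Here the ME codebook spends one redundant bit per codeword but transmits fewer '1's on average, which is exactly the trade the abstract describes.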

  8. Fast Erasure and Error decoding of Algebraic Geometry Codes up to the Feng-Rao Bound

    DEFF Research Database (Denmark)

    Jensen, Helge Elbrønd; Sakata, S.; Leonard, D.

    1996-01-01

    This paper gives an errata (that is, erasure-and-error) decoding algorithm for one-point algebraic geometry codes up to the Feng-Rao designed minimum distance, using Sakata's multidimensional generalization of the Berlekamp-Massey algorithm and the voting procedure of Feng and Rao.

  9. Illustrations of Price Discrimination in Baseball

    OpenAIRE

    Daniel, Rascher; Andrew, Schwarz

    2010-01-01

    Price discrimination of this nature, focused on differing degrees of quality, bundled goods, volume discounts, and other forms of second-degree price discrimination, is commonplace in MLB. Indeed, it is safe to say that every single MLB ticket is sold under some form of price discrimination. As teams grow increasingly sophisticated in their pricing strategies, price discrimination is becoming more precise, more wide-spread, and more profitable, while at the same time providing for more oppo...

  10. Reasons for Supporting the Minimum Wage: Asking Signatories of the "Raise the Minimum Wage" Statement

    OpenAIRE

    2007-01-01

    In October 2006, the Economic Policy Institute released a “Raise the Minimum Wage” statement signed by more than 650 individuals. Using an open-ended, non-anonymous questionnaire, we asked the signatories to explain their thinking on the issue. The questionnaire asked about the specific mechanisms at work, possible downsides, and whether the minimum wage violates liberty. Ninety-five participated. This article reports the responses. It also summarizes findings from minimum-wage surveys sin...

  11. Type I Error Inflation in DIF Identification with Mantel-Haenszel: An Explanation and a Solution

    Science.gov (United States)

    Magis, David; De Boeck, Paul

    2014-01-01

    It is known that sum score-based methods for the identification of differential item functioning (DIF), such as the Mantel-Haenszel (MH) approach, can be affected by Type I error inflation in the absence of any DIF effect. This may happen when the items differ in discrimination and when there is item impact. On the other hand, outlier DIF methods…

  12. Automatic detection of frequent pronunciation errors made by L2-learners

    NARCIS (Netherlands)

    Truong, K.P.; Neri, A.; Wet, F. de; Cucchiarini, C.; Strik, H.

    2005-01-01

    In this paper, we present an acoustic-phonetic approach to automatic pronunciation error detection. Classifiers using techniques such as Linear Discriminant Analysis and Decision Trees were developed for three sounds that are frequently pronounced incorrectly by L2-learners of Dutch: /a/, /y/ and /x

  13. Automatic detection of frequent pronunciation errors made by L2-learners

    NARCIS (Netherlands)

    Truong, K.P.; Neri, A.; Wet, F. de; Cucchiarini, C.; Strik, H.

    2005-01-01

    In this paper, we present an acoustic-phonetic approach to automatic pronunciation error detection. Classifiers using techniques such as Linear Discriminant Analysis and Decision Trees were developed for three sounds that are frequently pronounced incorrectly by L2-learners of Dutch: /a/, /y/ and

  14. 18 CFR 1302.4 - Discrimination prohibited.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Discrimination... § 1302.4 Discrimination prohibited. (a) General. No person in the United States shall, on the ground of... otherwise subjected to discrimination under any program or activity receiving Federal financial assistance...

  15. 28 CFR 42.510 - Discrimination prohibited.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Discrimination prohibited. 42.510 Section...-Implementation of Section 504 of the Rehabilitation Act of 1973 Employment § 42.510 Discrimination prohibited. (a) General. (1) No qualified handicapped person shall on the basis of handicap be subjected to discrimination...

  16. 22 CFR 142.11 - Discrimination prohibited.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Discrimination prohibited. 142.11 Section 142... PROGRAMS OR ACTIVITIES RECEIVING FEDERAL FINANCIAL ASSISTANCE Employment Practices § 142.11 Discrimination... discrimination in employment under any program or activity receiving Federal financial assistance. (2) A...

  17. 5 CFR 900.704 - Discrimination prohibited.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Discrimination prohibited. 900.704... Federally Assisted Programs of the Office of Personnel Management § 900.704 Discrimination prohibited. (a..., be denied the benefits of, or otherwise be subjected to discrimination under any program or activity...

  18. 45 CFR 1110.3 - Discrimination prohibited.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Discrimination prohibited. 1110.3 Section 1110.3... HUMANITIES GENERAL NONDISCRIMINATION IN FEDERALLY ASSISTED PROGRAMS § 1110.3 Discrimination prohibited. (a... from participation in, be denied the benefits of, or be otherwise subjected, to discrimination under...

  19. 38 CFR 18.411 - Discrimination prohibited.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Discrimination prohibited... Practices § 18.411 Discrimination prohibited. (a) General. (1) No qualified handicapped person shall, on the basis of handicap, be subjected to discrimination in employment under any program or activity to which...

  20. 22 CFR 217.11 - Discrimination prohibited.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Discrimination prohibited. 217.11 Section 217... Discrimination prohibited. (a) General. (1) No qualified handicapped person shall, on the basis of handicap, be subjected to discrimination in employment under any program or activity to which this part applies. (2) A...

  1. 28 CFR 42.203 - Discrimination prohibited.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Discrimination prohibited. 42.203 Section...) of the Justice System Improvement Act of 1979 § 42.203 Discrimination prohibited. (a) No person in... participation in, be denied the benefits of, be subjected to discrimination under, or denied employment in...

  2. 45 CFR 605.11 - Discrimination prohibited.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Discrimination prohibited. 605.11 Section 605.11... Employment Practices § 605.11 Discrimination prohibited. (a) General. (1) No qualified handicapped person shall, on the basis of handicap, be subjected to discrimination in employment under any program or...

  3. 45 CFR 605.21 - Discrimination prohibited.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Discrimination prohibited. 605.21 Section 605.21... Accessibility § 605.21 Discrimination prohibited. No qualified handicapped person shall, because a recipient's... from participation in, or otherwise be subjected to discrimination under any program or activity to...

  4. 5 CFR 900.404 - Discrimination prohibited.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Discrimination prohibited. 900.404... § 900.404 Discrimination prohibited. (a) General. A person in the United States shall not, on the ground... be otherwise subjected to discrimination under, a program to which this subpart applies. (b) Specific...

  5. 43 CFR 17.203 - Discrimination prohibited.

    Science.gov (United States)

    2010-10-01

    ... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Discrimination prohibited. 17.203 Section... Discrimination prohibited. (a) General. No qualified handicapped person shall, on the basis of handicap, be excluded from participation in, be denied the benefits of, or otherwise be subjected to discrimination...

  6. 38 CFR 18.404 - Discrimination prohibited.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Discrimination prohibited... Provisions § 18.404 Discrimination prohibited. (a) General. No qualified handicapped person shall, on the... subjected to discrimination under any program or activity which receives Federal financial assistance. (b...

  7. 38 CFR 18.421 - Discrimination prohibited.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Discrimination prohibited... Accessibility § 18.421 Discrimination prohibited. No qualified handicapped person shall, because a recipient's... from participation in, or otherwise be subjected to discrimination under any program or activity to...

  8. 28 CFR 35.149 - Discrimination prohibited.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Discrimination prohibited. 35.149 Section... STATE AND LOCAL GOVERNMENT SERVICES Program Accessibility § 35.149 Discrimination prohibited. Except as... subjected to discrimination by any public entity. ...

  9. 45 CFR 1203.4 - Discrimination prohibited.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Discrimination prohibited. 1203.4 Section 1203.4... OF 1964 § 1203.4 Discrimination prohibited. (a) General. A person in the United States shall not, on... benefits of, or be otherwise subjected to discrimination under, a program to which this part applies. (b...

  10. 22 CFR 217.21 - Discrimination prohibited.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Discrimination prohibited. 217.21 Section 217... Discrimination prohibited. No qualified handicapped person shall, because a recipient's facilities within the... excluded from participation in, or otherwise be subjected to discrimination under any program or activity...

  11. 28 CFR 42.520 - Discrimination prohibited.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Discrimination prohibited. 42.520 Section...-Implementation of Section 504 of the Rehabilitation Act of 1973 Accessibility § 42.520 Discrimination prohibited... participation in, or otherwise subjected to discrimination under any program or activity receiving Federal...

  12. 22 CFR 142.15 - Discrimination prohibited.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Discrimination prohibited. 142.15 Section 142... PROGRAMS OR ACTIVITIES RECEIVING FEDERAL FINANCIAL ASSISTANCE Accessibility § 142.15 Discrimination... be subjected to discrimination under any program or activity to which the part applies. ...

  13. 34 CFR 104.21 - Discrimination prohibited.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Discrimination prohibited. 104.21 Section 104.21... ASSISTANCE Accessibility § 104.21 Discrimination prohibited. No qualified handicapped person shall, because a... excluded from participation in, or otherwise be subjected to discrimination under any program or activity...

  14. 28 CFR 42.104 - Discrimination prohibited.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Discrimination prohibited. 42.104 Section... Civil Rights Act of 1964 1 § 42.104 Discrimination prohibited. (a) General. No person in the United... denied the benefits of, or be otherwise subjected to discrimination under any program to which this...

  15. 18 CFR 1307.4 - Discrimination prohibited.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Discrimination... NONDISCRIMINATION WITH RESPECT TO HANDICAP § 1307.4 Discrimination prohibited. (a) General. No qualified handicapped... otherwise be subjected to discrimination under any program or activity to which this part applies. (b...

  16. 28 CFR 42.503 - Discrimination prohibited.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Discrimination prohibited. 42.503 Section...-Implementation of Section 504 of the Rehabilitation Act of 1973 General Provisions § 42.503 Discrimination... from participation in, be denied the benefits of, or otherwise be subjected to discrimination under any...

  17. 29 CFR 1630.4 - Discrimination prohibited.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false Discrimination prohibited. 1630.4 Section 1630.4 Labor... EQUAL EMPLOYMENT PROVISIONS OF THE AMERICANS WITH DISABILITIES ACT § 1630.4 Discrimination prohibited..., or privilege of employment. The term discrimination includes, but is not limited to, the acts...

  18. 34 CFR 104.11 - Discrimination prohibited.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Discrimination prohibited. 104.11 Section 104.11... ASSISTANCE Employment Practices § 104.11 Discrimination prohibited. (a) General. (1) No qualified handicapped person shall, on the basis of handicap, be subjected to discrimination in employment under any program or...

  19. Experienced discrimination amongst European old citizens

    NARCIS (Netherlands)

    van den Heuvel, Wim J. A.; van Santvoort, Marc M.

    2011-01-01

    This study analyses the experienced age discrimination of old European citizens and the factors related to this discrimination. Differences in experienced discrimination between old citizens of different European countries are explored. Data from the 2008 ESS survey are used. Old age is defined as b

  20. 34 CFR 100.3 - Discrimination prohibited.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Discrimination prohibited. 100.3 Section 100.3... EFFECTUATION OF TITLE VI OF THE CIVIL RIGHTS ACT OF 1964 § 100.3 Discrimination prohibited. (a) General. No... participation in, be denied the benefits of, or be otherwise subjected to discrimination under any program...