WorldWideScience

Sample records for minimum distance classifier

  1. The Minimum Distance of Graph Codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn

    2011-01-01

    We study codes constructed from graphs where the code symbols are associated with the edges and the symbols connected to a given vertex are restricted to be codewords in a component code. In particular we treat such codes from bipartite expander graphs coming from Euclidean planes and other geometries. We give results on the minimum distances of the codes.

  2. Minimum Distance Estimation on Time Series Analysis With Little Data

    National Research Council Canada - National Science Library

    Tekin, Hakan

    2001-01-01

    ... Minimum distance estimation has been demonstrated to outperform standard approaches, including maximum likelihood estimators and least squares, in estimating statistical distribution parameters with very small data sets...

  3. MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER

    Science.gov (United States)

    Barton, R. S.

    1994-01-01

    The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the

  4. Decoding Reed-Solomon Codes beyond half the minimum distance

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

    We describe an efficient implementation of M. Sudan's algorithm for decoding Reed-Solomon codes beyond half the minimum distance. Furthermore we calculate an upper bound on the probability of getting more than one codeword as output...

  5. Construction of Protograph LDPC Codes with Linear Minimum Distance

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  6. 47 CFR 73.807 - Minimum distance separation between stations.

    Science.gov (United States)

    2010-10-01

    ... and the right-hand column lists (for informational purposes only) the minimum distance necessary for ... Within 320 km of the Mexican border, LP100 stations must meet the following separations with respect to any Mexican stations, tabulated by Mexican station class: co-channel (km), first-adjacent channel (km), second/third-adjacent channel ...

  7. LDPC Codes with Minimum Distance Proportional to Block Size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low

  8. Lower bounds for the minimum distance of algebraic geometry codes

    DEFF Research Database (Denmark)

    Beelen, Peter

    ...description of these codes in terms of order domains has been found. In my talk I will indicate how one can use the ideas behind the order bound to obtain a lower bound for the minimum distance of any AG-code. After this I will compare this generalized order bound with other known lower bounds, such as the Goppa bound, the Feng-Rao bound and the Kirfel-Pellikaan bound. I will finish my talk by giving several examples. Especially for two-point codes, the generalized order bound is fairly easy to compute. As an illustration, I will indicate how a lower bound can be obtained for the minimum distance of some...

  9. Rate-Compatible LDPC Codes with Linear Minimum Distance

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

    A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first mentioned submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-blocksize submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation

  10. Toward the minimum inner edge distance of the habitable zone

    Energy Technology Data Exchange (ETDEWEB)

    Zsom, Andras; Seager, Sara; De Wit, Julien; Stamenković, Vlada, E-mail: zsom@mit.edu [Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2013-12-01

    We explore the minimum distance from a host star where an exoplanet could potentially be habitable in order not to discard close-in rocky exoplanets for follow-up observations. We find that the inner edge of the Habitable Zone for hot desert worlds can be as close as 0.38 AU around a solar-like star, if the greenhouse effect is reduced (∼1% relative humidity) and the surface albedo is increased. We consider a wide range of atmospheric and planetary parameters such as the mixing ratios of greenhouse gases (water vapor and CO₂), surface albedo, pressure, and gravity. Intermediate surface pressure (∼1-10 bars) is necessary to limit water loss and to simultaneously sustain an active water cycle. We additionally find that the water loss timescale is influenced by the atmospheric CO₂ level, because it indirectly influences the stratospheric water mixing ratio. If the CO₂ mixing ratio of dry planets at the inner edge is smaller than 10⁻⁴, the water loss timescale is ∼1 billion years, which is considered here too short for life to evolve. We also show that the expected transmission spectra of hot desert worlds are similar to an Earth-like planet. Therefore, an instrument designed to identify biosignature gases in an Earth-like atmosphere can also identify similarly abundant gases in the atmospheres of dry planets. Our inner edge limit is closer to the host star than previous estimates. As a consequence, the occurrence rate of potentially habitable planets is larger than previously thought.

  11. Toward the minimum inner edge distance of the habitable zone

    International Nuclear Information System (INIS)

    Zsom, Andras; Seager, Sara; De Wit, Julien; Stamenković, Vlada

    2013-01-01

    We explore the minimum distance from a host star where an exoplanet could potentially be habitable in order not to discard close-in rocky exoplanets for follow-up observations. We find that the inner edge of the Habitable Zone for hot desert worlds can be as close as 0.38 AU around a solar-like star, if the greenhouse effect is reduced (∼1% relative humidity) and the surface albedo is increased. We consider a wide range of atmospheric and planetary parameters such as the mixing ratios of greenhouse gases (water vapor and CO₂), surface albedo, pressure, and gravity. Intermediate surface pressure (∼1-10 bars) is necessary to limit water loss and to simultaneously sustain an active water cycle. We additionally find that the water loss timescale is influenced by the atmospheric CO₂ level, because it indirectly influences the stratospheric water mixing ratio. If the CO₂ mixing ratio of dry planets at the inner edge is smaller than 10⁻⁴, the water loss timescale is ∼1 billion years, which is considered here too short for life to evolve. We also show that the expected transmission spectra of hot desert worlds are similar to an Earth-like planet. Therefore, an instrument designed to identify biosignature gases in an Earth-like atmosphere can also identify similarly abundant gases in the atmospheres of dry planets. Our inner edge limit is closer to the host star than previous estimates. As a consequence, the occurrence rate of potentially habitable planets is larger than previously thought.

  12. MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging

    Science.gov (United States)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
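    As a rough, hypothetical illustration of the "minimum-sufficient ensemble" idea described above (not the authors' MSEBAG implementation), the sketch below greedily adds base classifiers while in-sample majority-vote accuracy keeps improving; the fitness function, the prediction-matrix layout and the stopping rule are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import mode

def ensemble_accuracy(pred_matrix, y):
    """Majority-vote accuracy of a set of base-classifier predictions.
    pred_matrix: (n_classifiers, n_samples) array of integer class labels."""
    votes = mode(pred_matrix, axis=0, keepdims=False).mode
    return float(np.mean(votes == y))

def minimum_sufficient_ensemble(pred_matrix, y):
    """Greedy forward search: a small set of base classifiers (by row index)
    whose majority vote maximises in-sample accuracy."""
    remaining = list(range(pred_matrix.shape[0]))
    selected, best_fit = [], -1.0
    while remaining:
        fit, j = max((ensemble_accuracy(pred_matrix[selected + [j]], y), j)
                     for j in remaining)
        if fit <= best_fit:          # no further improvement: stop
            break
        selected.append(j)
        remaining.remove(j)
        best_fit = fit
    return selected, best_fit
```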

  13. Combination of minimum enclosing balls classifier with SVM in coal-rock recognition

    Science.gov (United States)

    Song, QingJun; Jiang, HaiYan; Song, Qinghui; Zhao, XieGuang; Wu, Xiaoxuan

    2017-01-01

    Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed and a caving dataset with 10 feature variables and three classes is obtained. The best combination of feature variables can then be decided automatically by using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mapping in this real-world optimization problem, an effective minimum enclosing ball (MEB) algorithm plus support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct an MEB-SVM classifier for coal-rock recognition data that exhibit inherently complex distributions. The proposed method is examined on UCI data sets and the caving dataset, and compared with several recent SVM classifiers. We compare multiple classifiers on the UCI data sets using accuracy and the Friedman test. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The experiments on the caving dataset show better performance, which makes the approach promising for feature selection and multi-class recognition in coal-rock recognition. PMID:28937987

  14. Combination of minimum enclosing balls classifier with SVM in coal-rock recognition.

    Science.gov (United States)

    Song, QingJun; Jiang, HaiYan; Song, Qinghui; Zhao, XieGuang; Wu, Xiaoxuan

    2017-01-01

    Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed and a caving dataset with 10 feature variables and three classes is obtained. The best combination of feature variables can then be decided automatically by using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mapping in this real-world optimization problem, an effective minimum enclosing ball (MEB) algorithm plus support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct an MEB-SVM classifier for coal-rock recognition data that exhibit inherently complex distributions. The proposed method is examined on UCI data sets and the caving dataset, and compared with several recent SVM classifiers. We compare multiple classifiers on the UCI data sets using accuracy and the Friedman test. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The experiments on the caving dataset show better performance, which makes the approach promising for feature selection and multi-class recognition in coal-rock recognition.

  15. Combination of minimum enclosing balls classifier with SVM in coal-rock recognition.

    Directory of Open Access Journals (Sweden)

    QingJun Song

    Full Text Available Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed and a caving dataset with 10 feature variables and three classes is obtained. The best combination of feature variables can then be decided automatically by using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mapping in this real-world optimization problem, an effective minimum enclosing ball (MEB) algorithm plus support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct an MEB-SVM classifier for coal-rock recognition data that exhibit inherently complex distributions. The proposed method is examined on UCI data sets and the caving dataset, and compared with several recent SVM classifiers. We compare multiple classifiers on the UCI data sets using accuracy and the Friedman test. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The experiments on the caving dataset show better performance, which makes the approach promising for feature selection and multi-class recognition in coal-rock recognition.
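    For orientation, the following is a minimal sketch of an approximate minimum enclosing ball in the spirit of the Badoiu-Clarkson core-set iteration; it only illustrates the generic MEB idea named above and is not the paper's MEB-SVM procedure. The point set and iteration count are illustrative assumptions.

```python
import numpy as np

def approx_meb(points, n_iter=200):
    """Return (center, radius) of an approximate minimum enclosing ball
    of the rows of `points` (Badoiu-Clarkson style iteration)."""
    c = points.mean(axis=0)                      # start from the centroid
    for t in range(1, n_iter + 1):
        d = np.linalg.norm(points - c, axis=1)
        far = points[np.argmax(d)]               # farthest point from current center
        c = c + (far - c) / (t + 1)              # shrinking step toward the farthest point
    radius = np.linalg.norm(points - c, axis=1).max()
    return c, radius

# Example: ball around random 2-D points
pts = np.random.default_rng(0).normal(size=(100, 2))
center, r = approx_meb(pts)
print(center, r)
```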

  16. Minimum distance determination between consecutive carriers in the gamma irradiator IR-200 K trajectory

    International Nuclear Information System (INIS)

    Achmad Suntoro

    2014-01-01

    A design to determine the minimum distance between consecutive carriers on the trajectory of the gamma irradiator IR-200K is implemented. The equilibrium between the centrifugal force of a carrier moving on a circular trajectory and its gravity force, as well as the carrier dimensions, are used as parameters in determining this minimum distance. The minimum distance between consecutive carriers in the design is set at 1.2 meters. This distance is 11.5% greater than the theoretically calculated minimum distance of 1.076 meters. Error tolerances in the construction/installation of the trajectory and other unexpected events during the irradiator's operation are part of the reason for enlarging the minimum distance beyond its theoretical value. The distance between consecutive carriers does not affect throughput or the efficiency of radiation use, because the straight trajectory segments do not need to maintain this minimum distance between carriers, and the trajectory segments around the irradiation sources are straight. (author)

  17. A linear time algorithm for minimum fill-in and treewidth for distance heredity graphs

    NARCIS (Netherlands)

    Broersma, Haitze J.; Dahlhaus, E.; Kloks, A.J.J.; Kloks, T.

    2000-01-01

    A graph is distance hereditary if it preserves distances in all its connected induced subgraphs. The MINIMUM FILL-IN problem is the problem of finding a chordal supergraph with the smallest possible number of edges. The TREEWIDTH problem is the problem of finding a chordal embedding of the graph with the smallest possible clique size.

  18. Decoding and finding the minimum distance with Gröbner bases : history and new insights

    NARCIS (Netherlands)

    Bulygin, S.; Pellikaan, G.R.; Woungang, I.; Misra, S.; Misra, S.C.

    2010-01-01

    In this chapter, we discuss decoding techniques and finding the minimum distance of linear codes with the use of Gröbner bases. First, we give a historical overview of decoding cyclic codes via solving systems of polynomial equations over finite fields. In particular, we mention the papers of Cooper, ...

  19. On the sizes of expander graphs and minimum distances of graph codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn

    2014-01-01

    We give lower bounds for the minimum distances of graph codes based on expander graphs. The bounds depend only on the second eigenvalue of the graph and the parameters of the component codes. We also give an upper bound on the size of a degree-regular graph with given second eigenvalue.

  20. A unique concept for automatically controlling the braking action of wheeled vehicles during minimum distance stops

    Science.gov (United States)

    Barthlome, D. E.

    1975-01-01

    Test results of a unique automatic brake control system are outlined and a comparison is made of its mode of operation to that of an existing skid control system. The purpose of the test system is to provide automatic control of braking action such that hydraulic brake pressure is maintained at a near constant, optimum value during minimum distance stops.

  1. 30 CFR 77.807-2 - Booms and masts; minimum distance from high-voltage lines.

    Science.gov (United States)

    2010-07-01

    § 77.807-2 Booms and masts; minimum distance from high-voltage lines (30 CFR, Mineral Resources, Mine Safety and Health Administration, Work Areas of Underground Coal Mines, Surface High-Voltage Distribution). The booms and masts of equipment operated on the surface of any...

  2. 30 CFR 77.807-3 - Movement of equipment; minimum distance from high-voltage lines.

    Science.gov (United States)

    2010-07-01

    § 77.807-3 Movement of equipment; minimum distance from high-voltage lines (30 CFR, Mineral Resources, Mine Safety and Health Administration, Work Areas of Underground Coal Mines, Surface High-Voltage Distribution). When any part of any equipment operated on the surface of any...

  3. Improved target detection algorithm using Fukunaga-Koontz transform and distance classifier correlation filter

    Science.gov (United States)

    Bal, A.; Alam, M. S.; Aslan, M. S.

    2006-05-01

    Often sensor ego-motion or fast target movement causes the target to temporarily go out of the field of view, leading to the reappearing-target detection problem in target tracking applications. Since the target goes out of the current frame and reenters at a later frame, the reentering location and variations in rotation, scale, and other 3D orientations of the target are not known, which complicates detection. A detection algorithm has therefore been developed using the Fukunaga-Koontz Transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then introduced into the second algorithm, the DCCF, called the clutter rejection module, to confirm the target; once the target coordinates are detected, the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward looking infrared (FLIR) video sequences.

  4. Protograph based LDPC codes with minimum distance linearly growing with block size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have linearly increasing minimum distance in block size, outperform those of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.

  5. Improved training for target detection using Fukunaga-Koontz transform and distance classifier correlation filter

    Science.gov (United States)

    Elbakary, M. I.; Alam, M. S.; Aslan, M. S.

    2008-03-01

    In a FLIR image sequence, a target may disappear permanently or may reappear after some frames, and crucial information such as the direction, position and size of the target is lost. If the target reappears at a later frame, it may not be tracked again because the 3D orientation, size and location of the target might have changed. To obtain information about the target before it disappears and to detect the target after it reappears, the distance classifier correlation filter (DCCF) is traditionally trained manually by selecting a number of chips at random. This paper introduces a novel idea to eliminate the manual intervention in the training phase of the DCCF. Instead of selecting the training chips manually and choosing their number at random, we adopt the K-means algorithm to cluster the training frames and, based on the number of clusters, select the training chips such that there is one training chip per cluster. To detect and track the target after it reappears in the field of view, TBF and DCCF are employed. The conducted experiments using real FLIR sequences show results similar to the traditional algorithm, but eliminating the manual intervention is the advantage of the proposed algorithm.
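    A hedged sketch of the K-means-based chip selection described above: cluster feature vectors of the training frames and keep, for each cluster, the frame closest to its centroid. The feature matrix `frame_features` and the cluster count are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_training_chips(frame_features, n_clusters=5, seed=0):
    """frame_features: (n_frames, n_features) array of per-frame features."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(frame_features)
    chips = []
    for k in range(n_clusters):
        members = np.flatnonzero(labels == k)
        # pick the member frame nearest to the cluster centre
        d = np.linalg.norm(frame_features[members] - km.cluster_centers_[k], axis=1)
        chips.append(int(members[np.argmin(d)]))
    return chips  # indices of one representative training chip per cluster
```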

  6. Distance and Density Similarity Based Enhanced k-NN Classifier for Improving Fault Diagnosis Performance of Bearings

    Directory of Open Access Journals (Sweden)

    Sharif Uddin

    2016-01-01

    Full Text Available An enhanced k-nearest neighbor (k-NN) classification algorithm is presented, which uses a density based similarity measure in addition to a distance based similarity measure to improve the diagnostic performance in bearing fault diagnosis. Due to its use of distance based similarity measure alone, the classification accuracy of traditional k-NN deteriorates in case of overlapping samples and outliers and is highly susceptible to the neighborhood size, k. This study addresses these limitations by proposing the use of both distance and density based measures of similarity between training and test samples. The proposed k-NN classifier is used to enhance the diagnostic performance of a bearing fault diagnosis scheme, which classifies different fault conditions based upon hybrid feature vectors extracted from acoustic emission (AE) signals. Experimental results demonstrate that the proposed scheme, which uses the enhanced k-NN classifier, yields better diagnostic performance and is more robust to variations in the neighborhood size, k.
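    The sketch below illustrates the general idea of weighting neighbours by both distance and local density; it is a simplified stand-in under assumed inputs, not the exact similarity measure proposed in the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_distance_density(X_train, y_train, X_test, k=5):
    """X_train, X_test: (n, d) arrays; y_train: integer labels (numpy array)."""
    # local density of each training sample: inverse mean distance to its
    # k nearest training neighbours (higher value = denser region)
    d_tt = cdist(X_train, X_train)
    np.fill_diagonal(d_tt, np.inf)
    density = 1.0 / np.sort(d_tt, axis=1)[:, :k].mean(axis=1)

    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nn = np.argsort(d)[:k]
        # combined weight: distance-based term times density-based term
        w = (1.0 / (d[nn] + 1e-12)) * density[nn]
        classes = np.unique(y_train[nn])
        scores = [w[y_train[nn] == c].sum() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```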

  7. Principle of minimum distance in space of states as new principle in quantum physics

    International Nuclear Information System (INIS)

    Ion, D. B.; Ion, M. L. D.

    2007-01-01

    The mathematician Leonhard Euler (1707-1783) appears to have been a philosophical optimist, having written: 'Since the fabric of the universe is the most perfect and is the work of the most wise Creator, nothing whatsoever takes place in this universe in which some relation of maximum or minimum does not appear. Wherefore, there is absolutely no doubt that every effect in the universe can be explained as satisfactorily from final causes, by the aid of the method of Maxima and Minima, as it can from the effective causes themselves.' Having in mind this kind of optimism, in the papers mentioned in this work we introduced and investigated the possibility to construct a predictive analytic theory of the elementary particle interaction based on the principle of minimum distance in the space of quantum states (PMD-SQS). So, choosing the partial transition amplitudes as the system variational variables and the distance in the space of the quantum states as a measure of the system effectiveness, we obtained the results presented in this paper. These results proved that the principle of minimum distance in space of quantum states (PMD-SQS) can be chosen as a variational principle by which we can find the analytic expressions of the partial transition amplitudes. In this paper we present a description of hadron-hadron scattering via the principle of minimum distance PMD-SQS when the distance in space of states is minimized with two directional constraints: dσ/dΩ(±1) = fixed. Then, by using the available experimental (pion-nucleon and kaon-nucleon) phase shifts, we obtained not only consistent experimental tests of the PMD-SQS optimality, but also strong experimental evidence for new principles in hadronic physics such as: the Principle of nonextensivity conjugation via the Riesz-Thorin relation (1/2p + 1/2q = 1) and a new Principle of limited uncertainty in nonextensive quantum physics. The strong experimental evidence obtained here for the nonextensive statistical behavior of the [J,

  8. Analysis of the minimum swerving distance for the development of a motorcycle autonomous braking system.

    Science.gov (United States)

    Giovannini, Federico; Savino, Giovanni; Pierini, Marco; Baldanzini, Niccolò

    2013-10-01

    In the recent years the autonomous emergency brake (AEB) was introduced in the automotive field to mitigate the injury severity in case of unavoidable collisions. A crucial element for the activation of the AEB is to establish when the obstacle is no longer avoidable by lateral evasive maneuvers (swerving). In the present paper a model to compute the minimum swerving distance needed by a powered two-wheeler (PTW) to avoid the collision against a fixed obstacle, named last-second swerving model (Lsw), is proposed. The effectiveness of the model was investigated by an experimental campaign involving 12 volunteers riding a scooter equipped with a prototype autonomous emergency braking, named motorcycle autonomous emergency braking system (MAEB). The tests showed the performance of the model in evasive trajectory computation for different riding styles and fixed obstacles. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Rate-compatible protograph LDPC code families with linear minimum distance

    Science.gov (United States)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.

  10. Protein-protein interaction site predictions with minimum covariance determinant and Mahalanobis distance.

    Science.gov (United States)

    Qiu, Zhijun; Zhou, Bo; Yuan, Jiangfeng

    2017-11-21

    Protein-protein interaction site (PPIS) prediction must deal with the diversity of interaction sites that limits their prediction accuracy. Use of proteins with unknown or unidentified interactions can also lead to missing interfaces. Such data errors are often brought into the training dataset. In response to these two problems, we used the minimum covariance determinant (MCD) method to refine the training data to build a predictor with better performance, utilizing its ability to remove outliers. In order to predict test data in practice, a method based on the Mahalanobis distance was devised to select proper test data as input for the predictor. With leave-one-out validation and an independent test, after the Mahalanobis distance screening, our method achieved higher performance according to the Matthews correlation coefficient (MCC), although only a part of the test data could be predicted. These results indicate that data refinement is an efficient approach to improve protein-protein interaction site prediction. By further optimizing our method, we hope to develop predictors of better performance and a wide range of application. Copyright © 2017 Elsevier Ltd. All rights reserved.
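    A minimal sketch of the two steps named above using scikit-learn's Minimum Covariance Determinant estimator: robustly fit the training features, drop outlying training samples, and screen test samples by their Mahalanobis distance. The chi-square cutoff is a common heuristic and an assumption here, not the paper's threshold.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def refine_and_screen(X_train, X_test, alpha=0.975):
    """Return boolean masks: training samples to keep, test samples to accept."""
    mcd = MinCovDet(random_state=0).fit(X_train)
    cutoff = chi2.ppf(alpha, df=X_train.shape[1])    # cutoff on squared distance
    keep_train = mcd.mahalanobis(X_train) <= cutoff  # refined training subset
    accept_test = mcd.mahalanobis(X_test) <= cutoff  # test samples to predict
    return keep_train, accept_test
```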

  11. A latent class distance association model for cross-classified data with a categorical response variable.

    Science.gov (United States)

    Vera, José Fernando; de Rooij, Mark; Heiser, Willem J

    2014-11-01

    In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low-dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters and the adjusted Bayesian information criterion statistic is employed to test the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented. © 2014 The British Psychological Society.

  12. Site-to-Source Finite Fault Distance Probability Distribution in Probabilistic Seismic Hazard and the Relationship Between Minimum Distances

    Science.gov (United States)

    Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.

    2017-12-01

    We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while, in the case of areas, the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r in an effective area using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metric. This distance is the distance where the maximum stress release occurs within the fault plane and generates a peak ground motion. Later, we can apply the appropriate ground motion prediction equations (GMPE) for PSHA. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model of a constant distribution of the centroid at the geometrical mean is discussed; in this model hazard is reduced at the edges because the effective size is reduced. Nowadays there is a trend of using extended source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models, separating geometrical and propagation effects.

  13. Decoding linear error-correcting codes up to half the minimum distance with Gröbner bases

    NARCIS (Netherlands)

    Bulygin, S.; Pellikaan, G.R.; Sala, M.; Mora, T.; Perret, L.; Sakata, S.; Traverso, C.

    2009-01-01

    In this short note we show how one can decode linear error-correcting codes up to half the minimum distance via solving a system of polynomial equations over a finite field. We also explicitly present the reduced Gröbner basis for the system considered.

  14. FastTree: Computing Large Minimum Evolution Trees with Profiles instead of a Distance Matrix

    OpenAIRE

    Price, Morgan N.; Dehal, Paramvir S.; Arkin, Adam P.

    2009-01-01

    Gene families are growing rapidly, but standard methods for inferring phylogenies do not scale to alignments with over 10,000 sequences. We present FastTree, a method for constructing large phylogenies and for estimating their reliability. Instead of storing a distance matrix, FastTree stores sequence profiles of internal nodes in the tree. FastTree uses these profiles to implement Neighbor-Joining and uses heuristics to quickly identify candidate joins. FastTree then uses nearest neighbor in...

  15. Fast Tree: Computing Large Minimum-Evolution Trees with Profiles instead of a Distance Matrix

    Energy Technology Data Exchange (ETDEWEB)

    N. Price, Morgan; S. Dehal, Paramvir; P. Arkin, Adam

    2009-07-31

    Gene families are growing rapidly, but standard methods for inferring phylogenies do not scale to alignments with over 10,000 sequences. We present FastTree, a method for constructing large phylogenies and for estimating their reliability. Instead of storing a distance matrix, FastTree stores sequence profiles of internal nodes in the tree. FastTree uses these profiles to implement neighbor-joining and uses heuristics to quickly identify candidate joins. FastTree then uses nearest-neighbor interchanges to reduce the length of the tree. For an alignment with N sequences, L sites, and a different characters, a distance matrix requires O(N^2) space and O(N^2 L) time, but FastTree requires just O( NLa + N sqrt(N) ) memory and O( N sqrt(N) log(N) L a ) time. To estimate the tree's reliability, FastTree uses local bootstrapping, which gives another 100-fold speedup over a distance matrix. For example, FastTree computed a tree and support values for 158,022 distinct 16S ribosomal RNAs in 17 hours and 2.4 gigabytes of memory. Just computing pairwise Jukes-Cantor distances and storing them, without inferring a tree or bootstrapping, would require 17 hours and 50 gigabytes of memory. In simulations, FastTree was slightly more accurate than neighbor joining, BIONJ, or FastME; on genuine alignments, FastTree's topologies had higher likelihoods. FastTree is available at http://microbesonline.org/fasttree.

  16. Tracking Amendments to Legislation and Other Political Texts with a Novel Minimum-Edit-Distance Algorithm: DocuToads

    DEFF Research Database (Denmark)

    Hermansson, Henrik Alf Jonas; Cross, James

    2015-01-01

    Political scientists often find themselves tracking amendments to political texts. As different actors weigh in, texts change as they are drafted and redrafted, reflecting political preferences and power. This study provides a novel solution to the problem of detecting amendments to political texts ... and the substantive amount of amendments made between versions of texts. To illustrate the usefulness and efficiency of the approach we replicate two existing studies from the field of legislative studies. Our results demonstrate that minimum edit distance methods can produce superior measures of text amendments to hand...
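    For reference, a textbook word-level minimum edit distance (the quantity such amendment-tracking methods build on); this generic dynamic program is an illustration only, not the DocuToads implementation.

```python
def edit_distance(a_tokens, b_tokens):
    """Minimum number of insertions, deletions and substitutions turning one
    token sequence into the other (Levenshtein distance over words)."""
    m, n = len(a_tokens), len(b_tokens)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # deletions
    for j in range(n + 1):
        dp[0][j] = j                      # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a_tokens[i - 1] == b_tokens[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete
                           dp[i][j - 1] + 1,        # insert
                           dp[i - 1][j - 1] + cost) # substitute / match
    return dp[m][n]

print(edit_distance("the draft text".split(), "the amended draft text".split()))  # -> 1
```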

  17. Fast Tree: Computing Large Minimum-Evolution Trees with Profiles instead of a Distance Matrix

    OpenAIRE

    N. Price, Morgan

    2009-01-01

    Gene families are growing rapidly, but standard methods for inferring phylogenies do not scale to alignments with over 10,000 sequences. We present FastTree, a method for constructing large phylogenies and for estimating their reliability. Instead of storing a distance matrix, FastTree stores sequence profiles of internal nodes in the tree. FastTree uses these profiles to implement neighbor-joining and uses heuristics to quickly identify candidate joins. FastTree then uses nearest-neighbor i...

  18. Drift correction for single-molecule imaging by molecular constraint field, a distance minimum metric

    International Nuclear Information System (INIS)

    Han, Renmin; Wang, Liansan; Xu, Fan; Zhang, Yongdeng; Zhang, Mingshu; Liu, Zhiyong; Ren, Fei; Zhang, Fa

    2015-01-01

    The recent developments of far-field optical microscopy (single-molecule imaging techniques) have overcome the diffraction barrier of light and improved image resolution by a factor of ten compared with conventional light microscopy. These techniques utilize the stochastic switching of probe molecules to overcome the diffraction limit and determine the precise localizations of molecules, which often requires a long image acquisition time. However, long acquisition times increase the risk of sample drift. In the case of high-resolution microscopy, sample drift would decrease the image resolution. In this paper, we propose a novel metric based on the distance between molecules to solve the drift-correction problem. The proposed metric directly uses the position information of molecules to estimate the frame drift. We also designed an algorithm to implement the metric for the general application of drift correction. There are two advantages of our method: first, because our method does not require spatial binning of the positions of molecules but directly operates on the positions, it is more natural for single-molecule imaging techniques. Second, our method can estimate drift with a small number of positions in each temporal bin, which may extend its potential applications. The effectiveness of our method has been demonstrated by both simulated data and experiments on single-molecule images.
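    As a simplified illustration of a distance-minimum drift estimate (not the paper's metric or optimizer), the sketch below shifts the localizations of a later temporal bin over a grid of candidate drifts and keeps the shift that minimizes the mean nearest-neighbour distance to a reference bin; the search range and step are arbitrary assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_drift(ref_xy, bin_xy, search=50.0, step=2.0):
    """ref_xy, bin_xy: (N, 2) arrays of localization coordinates (e.g. in nm)."""
    tree = cKDTree(ref_xy)
    shifts = np.arange(-search, search + step, step)
    best_cost, best_shift = np.inf, (0.0, 0.0)
    for dx in shifts:
        for dy in shifts:
            d, _ = tree.query(bin_xy + np.array([dx, dy]))
            cost = d.mean()              # mean nearest-neighbour distance
            if cost < best_cost:
                best_cost, best_shift = cost, (dx, dy)
    return best_shift                    # estimated (dx, dy) drift of this bin
```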

  19. New neural network classifier of fall-risk based on the Mahalanobis distance and kinematic parameters assessed by a wearable device

    International Nuclear Information System (INIS)

    Giansanti, Daniele; Macellari, Velio; Maccioni, Giovanni

    2008-01-01

    Fall prevention lacks easy, quantitative and wearable methods for the classification of fall-risk (FR). Efforts must be thus devoted to the choice of an ad hoc classifier both to reduce the size of the sample used to train the classifier and to improve performances. A new methodology that uses a neural network (NN) and a wearable device are hereby proposed for this purpose. The NN uses kinematic parameters assessed by a wearable device with accelerometers and rate gyroscopes during a posturography protocol. The training of the NN was based on the Mahalanobis distance and was carried out on two groups of 30 elderly subjects with varying fall-risk Tinetti scores. The validation was done on two groups of 100 subjects with different fall-risk Tinetti scores and showed that, both in terms of specificity and sensitivity, the NN performed better than other classifiers (naive Bayes, Bayes net, multilayer perceptron, support vector machines, statistical classifiers). In particular, (i) the proposed NN methodology improved the specificity and sensitivity by a mean of 3% when compared to the statistical classifier based on the Mahalanobis distance (SCMD) described in Giansanti (2006 Physiol. Meas. 27 1081–90); (ii) the assessed specificity was 97%, the assessed sensitivity was 98% and the area under receiver operator characteristics was 0.965. (note)

  20. A database of linear codes over F_13 with minimum distance bounds and new quasi-twisted codes from a heuristic search algorithm

    Directory of Open Access Journals (Sweden)

    Eric Z. Chen

    2015-01-01

    Full Text Available Error control codes have been widely used in data communications and storage systems. One central problem in coding theory is to optimize the parameters of a linear code and construct codes with the best possible parameters. There are tables of best-known linear codes over finite fields of sizes up to 9. Recently, there has been a growing interest in codes over $\mathbb{F}_{13}$ and other fields of size greater than 9. The main purpose of this work is to present a database of best-known linear codes over the field $\mathbb{F}_{13}$ together with upper bounds on the minimum distances. To find good linear codes to establish lower bounds on minimum distances, an iterative heuristic computer search algorithm is employed to construct quasi-twisted (QT) codes over the field $\mathbb{F}_{13}$ with high minimum distances. A large number of new linear codes have been found, improving previously best-known results. Tables of $[pm, m]$ QT codes over $\mathbb{F}_{13}$ with best-known minimum distances as well as a table of lower and upper bounds on the minimum distances for linear codes of length up to 150 and dimension up to 6 are presented.
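    To make the central quantity concrete, here is a brute-force computation of the minimum distance of a tiny linear code over F_13 from a generator matrix; this toy check is feasible only for very small dimensions and bears no relation to the heuristic quasi-twisted search used in the paper. The example generator matrix is hypothetical.

```python
import itertools
import numpy as np

Q = 13  # field size

def minimum_distance(G):
    """G: (k, n) generator matrix over F_Q as a numpy integer array.
    Enumerates all nonzero messages and returns the minimum Hamming weight."""
    k, n = G.shape
    d = n
    for msg in itertools.product(range(Q), repeat=k):
        if not any(msg):
            continue                      # skip the zero codeword
        codeword = np.mod(np.dot(msg, G), Q)
        d = min(d, int(np.count_nonzero(codeword)))
    return d

# Example: a [4, 2] code over F_13 (hypothetical generator matrix)
G = np.array([[1, 0, 1, 2],
              [0, 1, 3, 5]])
print(minimum_distance(G))   # prints 3 for this toy example
```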

  1. Machine-Learning Classifier for Patients with Major Depressive Disorder: Multifeature Approach Based on a High-Order Minimum Spanning Tree Functional Brain Network.

    Science.gov (United States)

    Guo, Hao; Qin, Mengna; Chen, Junjie; Xu, Yong; Xiang, Jie

    2017-01-01

    High-order functional connectivity networks are rich in time information that can reflect dynamic changes in functional connectivity between brain regions. Accordingly, such networks are widely used to classify brain diseases. However, traditional methods for processing high-order functional connectivity networks generally include the clustering method, which reduces data dimensionality. As a result, such networks cannot be effectively interpreted in the context of neurology. Additionally, due to the large scale of high-order functional connectivity networks, it can be computationally very expensive to use complex network or graph theory to calculate certain topological properties. Here, we propose a novel method of generating a high-order minimum spanning tree functional connectivity network. This method increases the neurological significance of the high-order functional connectivity network, reduces network computing consumption, and produces a network scale that is conducive to subsequent network analysis. To ensure the quality of the topological information in the network structure, we used frequent subgraph mining technology to capture the discriminative subnetworks as features and combined this with quantifiable local network features. Then we applied a multikernel learning technique to the corresponding selected features to obtain the final classification results. We evaluated our proposed method using a data set containing 38 patients with major depressive disorder and 28 healthy controls. The experimental results showed a classification accuracy of up to 97.54%.
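    A small sketch of the minimum-spanning-tree step implied above, using SciPy on a stand-in connectivity matrix (random data, an assumed 90 regions); the paper's high-order network construction, frequent-subgraph mining and multikernel learning are not shown.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
n_regions = 90
# stand-in functional connectivity: absolute correlations of random time series
corr = np.abs(np.corrcoef(rng.normal(size=(n_regions, 200))))

# strong correlation = short "distance"; the MST keeps a backbone of the
# most strongly connected region pairs (n_regions - 1 edges)
dist = 1.0 - corr
np.fill_diagonal(dist, 0.0)
mst = minimum_spanning_tree(dist)   # sparse matrix holding the tree edges
print(mst.nnz)                      # -> 89 edges for 90 regions
```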

  2. Foraging range, habitat use and minimum flight distances of East Atlantic Light-bellied Brent Geese Branta bernicla hrota in their spring staging areas

    DEFF Research Database (Denmark)

    Clausen, Kevin Kuhlmann; Clausen, Preben; Hounisen, Jens Peder

    2013-01-01

    Global Positioning System (GPS) satellite telemetry was used to determine the foraging range, habitat use and minimum flight distances for individual East Atlantic Light-bellied Brent Geese Branta bernicla hrota at two spring staging areas in Denmark. Foraging ranges (mean ± s.d. = 53.0 ± 23.4 km...

  3. A new reliability measure based on specified minimum distances before the locations of random variables in a finite interval

    International Nuclear Information System (INIS)

    Todinov, M.T.

    2004-01-01

    A new reliability measure is proposed and equations are derived which determine the probability of existence of a specified set of minimum gaps between random variables following a homogeneous Poisson process in a finite interval. Using the derived equations, a method is proposed for specifying the upper bound of the random variables' number density which guarantees that the probability of clustering of two or more random variables in a finite interval remains below a maximum acceptable level. It is demonstrated that even for moderate number densities the probability of clustering is substantial and should not be neglected in reliability calculations. In the important special case where the random variables are failure times, models have been proposed for determining the upper bound of the hazard rate which guarantees a set of minimum failure-free operating intervals before the random failures, with a specified probability. A model has also been proposed for determining the upper bound of the hazard rate which guarantees a minimum availability target. Using the models proposed, a new strategy, models and reliability tools have been developed for setting quantitative reliability requirements which consist of determining the intersection of the hazard rate envelopes (hazard rate upper bounds) which deliver a minimum failure-free operating period before random failures, a risk of premature failure below a maximum acceptable level and a minimum required availability. It is demonstrated that setting reliability requirements solely based on an availability target does not necessarily mean a low risk of premature failure. Even at a high availability level, the probability of premature failure can be substantial. For industries characterised by a high cost of failure, the reliability requirements should involve a hazard rate envelope limiting the risk of failure below a maximum acceptable level
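    A Monte Carlo sketch of the basic quantity discussed above: the probability that all gaps between consecutive events of a homogeneous Poisson process on a finite interval exceed a specified minimum gap. The rate, interval length and gap used below are arbitrary illustrative values, not figures from the paper.

```python
import numpy as np

def prob_min_gaps(rate, length, s, n_sim=200_000, seed=1):
    """Estimate P(all consecutive gaps >= s) for a homogeneous Poisson
    process with the given rate on the interval [0, length]."""
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(n_sim):
        n = rng.poisson(rate * length)            # number of events in the interval
        if n < 2:
            ok += 1                               # fewer than two events: no gap to violate
            continue
        t = np.sort(rng.uniform(0.0, length, n))  # event locations
        ok += np.all(np.diff(t) >= s)
    return ok / n_sim

print(prob_min_gaps(rate=0.05, length=100.0, s=5.0))
```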

  4. System and method employing a minimum distance and a load feature database to identify electric load types of different electric loads

    Science.gov (United States)

    Lu, Bin; Yang, Yi; Sharma, Santosh K; Zambare, Prachi; Madane, Mayura A

    2014-12-23

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a load feature database of a plurality of different electric load types, each of the different electric load types including a first load feature vector having at least four different load features; sensing a voltage signal and a current signal for each of the different electric loads; determining a second load feature vector comprising at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the different electric loads; and identifying by a processor one of the different electric load types by determining a minimum distance of the second load feature vector to the first load feature vector of the different electric load types of the load feature database.
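    A generic minimum-distance classification sketch mirroring the idea above: each load type is represented by a stored feature vector, and an unknown load is assigned to the type whose vector is closest. The feature names and numbers are made-up placeholders, not the patent's load feature database.

```python
import numpy as np

# hypothetical load feature database: >= 4 features per load type
load_feature_db = {
    "resistive_heater":  np.array([1.00, 0.02, 0.01, 0.00]),
    "motor":             np.array([0.85, 0.40, 0.10, 0.05]),
    "switched_mode_psu": np.array([0.60, 0.05, 0.55, 0.30]),
}

def classify_load(feature_vector):
    """Assign the load type whose stored feature vector is nearest (Euclidean)."""
    names = list(load_feature_db)
    dists = [np.linalg.norm(feature_vector - load_feature_db[n]) for n in names]
    return names[int(np.argmin(dists))]   # minimum-distance decision

print(classify_load(np.array([0.83, 0.38, 0.12, 0.04])))   # -> "motor"
```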

  5. Classifying Microorganisms

    DEFF Research Database (Denmark)

    Sommerlund, Julie

    2006-01-01

    This paper describes the coexistence of two systems for classifying organisms and species: a dominant genetic system and an older naturalist system. The former classifies species and traces their evolution on the basis of genetic characteristics, while the latter employs physiological characteristics. The coexistence of the classification systems does not lead to a conflict between them. Rather, the systems seem to co-exist in different configurations, through which they are complementary, contradictory and inclusive in different situations, sometimes simultaneously. The systems come...

  6. Carbon classified?

    DEFF Research Database (Denmark)

    Lippert, Ingmar

    2012-01-01

    Using an actor-network theory (ANT) framework, the aim is to investigate the actors who bring together the elements needed to classify their carbon emission sources and unpack the heterogeneous relations drawn on. Based on an ethnographic study of corporate agents of ecological modernisation over a period of 13 months, this paper provides an exploration of three cases of enacting classification. Drawing on ANT, we problematise the silencing of a range of possible modalities of consumption facts and point to the ontological ethics involved in such performances. In a context of global warming...

  7. MOnthly TEmperature DAtabase of Spain 1951-2010: MOTEDAS (2): The Correlation Decay Distance (CDD) and the spatial variability of maximum and minimum monthly temperature in Spain during (1981-2010).

    Science.gov (United States)

    Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos

    2014-05-01

    One of the key points in the development of the MOTEDAS dataset (see Poster 1 MOTEDAS) in the framework of the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the set of reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP), using the Correlation Decay Distance function (CDD), with the aim of evaluating, at sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a correlation matrix at monthly scale over 1981-2010 among monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the AEMET archives (National Spanish Meteorological Agency). Monthly anomalies (differences between the data and the 1981-2010 mean) were used to prevent the dominant effect of the annual cycle in the annual CDD estimation. For each station and time scale, the common variance r² (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r² and distance was modelled according to the following equation (1): log(r²_ij) = b · d_ij (1), where log(r²_ij) is the common variance between the target (i) and neighbouring series (j), d_ij the distance between them, and b the slope of the ordinary least-squares linear regression model, applied taking into account only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required. Finally, monthly, seasonal and annual CDD values were interpolated using Ordinary Kriging with a
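    A hedged sketch of the per-station fit in equation (1): regress log(r²) on distance (through the origin) for neighbouring stations within 50 km, requiring at least 5 of them, and report the distance at which the fitted r² decays below a chosen threshold. The input arrays and the 0.5 threshold are illustrative assumptions, not AEMET data or the poster's exact criterion.

```python
import numpy as np

def station_cdd(r2_neighbours, dist_km, r2_threshold=0.5,
                radius_km=50.0, min_stations=5):
    """r2_neighbours, dist_km: common variance with, and distance to, each neighbour."""
    mask = dist_km <= radius_km
    if mask.sum() < min_stations:
        return np.nan                     # not enough neighbours for a fit
    # equation (1): log(r2_ij) = b * d_ij  (regression through the origin)
    d = dist_km[mask]
    y = np.log(r2_neighbours[mask])
    b = np.sum(d * y) / np.sum(d * d)     # OLS slope with zero intercept
    # distance at which the fitted r2 drops to the threshold
    return np.log(r2_threshold) / b

# Example with made-up neighbour correlations
d = np.array([5.0, 12.0, 20.0, 31.0, 44.0])
r2 = np.array([0.95, 0.90, 0.82, 0.74, 0.60])
print(station_cdd(r2, d))
```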

  8. Navigation of Chang'E-2 asteroid exploration mission and the minimum distance estimation during its fly-by of Toutatis

    Science.gov (United States)

    Cao, Jianfeng; Liu, Yong; Hu, Songjie; Liu, Lei; Tang, Geshi; Huang, Yong; Li, Peijia

    2015-01-01

    China's space probe Chang'E-2 began its asteroid exploration mission on April 15, 2012 and had been in space for 243 days before its encounter with Toutatis. With no onboard navigation equipment available, the navigation of CE-2 during its fly-by of the asteroid relied totally on ground-based Unified S-Band (USB) and Very Long Baseline Interferometry (VLBI) tracking data. The orbit determination of Toutatis was achieved by using a combination of optical measurements and radar ranging. On November 30, 2012, CE-2 was targeted at a destination that was 15 km away from the asteroid as it performed its third trajectory correction maneuver. Later orbit determination analysis showed that a correction residual was still present, which necessitated another maneuver on December 12. During the two maneuvers, ground-based navigation faced a challenge in terms of the orbit determination accuracy. With the optimization of our strategy, an accuracy of better than 15 km was finally achieved for the post-maneuver orbit solution. On December 13, CE-2 successfully passed by Toutatis and conducted continuous photographing of Toutatis during the entire process. An analysis of the images taken from the solar panel monitoring camera and the satellite attitude information demonstrates that the closest distance obtained between CE-2 and Toutatis (Toutatis's surface) was 1.9 km, considerably better than the 30 km fly-by distance originally hoped for, given the accuracies obtainable for the satellite's and Toutatis' orbits.

  9. Representing distance, consuming distance

    DEFF Research Database (Denmark)

    Larsen, Gunvor Riber

    Distance is a condition for corporeal and virtual mobilities, for desired and actual travel, but yet it has received relatively little attention as a theoretical entity in its own right. Understandings of and assumptions about distance...... are being consumed in the contemporary society, in the same way as places, media, cultures and status are being consumed (Urry 1995, Featherstone 2007). An exploration of distance and its representations through contemporary consumption theory could expose what role distance plays in forming...

  10. Analytic processing of distance.

    Science.gov (United States)

    Dopkins, Stephen; Galyer, Darin

    2018-01-01

    How does a human observer extract from the distance between two frontal points the component corresponding to an axis of a rectangular reference frame? To find out we had participants classify pairs of small circles, varying on the horizontal and vertical axes of a computer screen, in terms of the horizontal distance between them. A response signal controlled response time. The error rate depended on the irrelevant vertical as well as the relevant horizontal distance between the test circles with the relevant distance effect being larger than the irrelevant distance effect. The results implied that the horizontal distance between the test circles was imperfectly extracted from the overall distance between them. The results supported an account, derived from the Exemplar Based Random Walk model (Nosofsky & Palmieri, 1997), under which distance classification is based on the overall distance between the test circles, with relevant distance being extracted from overall distance to the extent that the relevant and irrelevant axes are differentially weighted so as to reduce the contribution of irrelevant distance to overall distance. The results did not support an account, derived from the General Recognition Theory (Ashby & Maddox, 1994), under which distance classification is based on the relevant distance between the test circles, with the irrelevant distance effect arising because a test circle's perceived location on the relevant axis depends on its location on the irrelevant axis, and with relevant distance being extracted from overall distance to the extent that this dependency is absent. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi‐layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE‐like concept is also presented. Examples, tests, evaluation experiments and comparison with similar machines using classic approaches, complement the descriptions.

  12. Classifying features in CT imagery: accuracy for some single- and multiple-species classifiers

    Science.gov (United States)

    Daniel L. Schmoldt; Jing He; A. Lynn Abbott

    1998-01-01

    Our current approach to automatically label features in CT images of hardwood logs classifies each pixel of an image individually. These feature classifiers use a back-propagation artificial neural network (ANN) and feature vectors that include a small, local neighborhood of pixels and the distance of the target pixel to the center of the log. Initially, this type of...

  13. Distance Learning

    National Research Council Canada - National Science Library

    Braddock, Joseph

    1997-01-01

    A study reviewing the existing Army Distance Learning Plan (ADLP) and current Distance Learning practices, with a focus on the Army's training and educational challenges and the benefits of applying Distance Learning techniques...

  14. Coupling between minimum scattering antennas

    DEFF Research Database (Denmark)

    Andersen, J.; Lessow, H; Schjær-Jacobsen, Hans

    1974-01-01

    Coupling between minimum scattering antennas (MSA's) is investigated by the coupling theory developed by Wasylkiwskyj and Kahn. Only rotationally symmetric power patterns are considered, and graphs of relative mutual impedance are presented as a function of distance and pattern parameters. Crossed...

  15. Steiner Distance in Graphs--A Survey

    OpenAIRE

    Mao, Yaping

    2017-01-01

    For a connected graph G of order at least 2 and S ⊆ V(G), the Steiner distance d_G(S) among the vertices of S is the minimum size among all connected subgraphs whose vertex sets contain S. In this paper, we summarize the known results on the Steiner distance parameters, including Steiner distance, Steiner diameter, Steiner center, Steiner median, Steiner interval, Steiner distance hereditary graph, Steiner distance stable graph, average Steiner distance, and Steiner ...

  16. Classifying Returns as Extreme

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2014-01-01

    I consider extreme returns for the stock and bond markets of 14 EU countries using two classification schemes: One, the univariate classification scheme from the previous literature that classifies extreme returns for each market separately, and two, a novel multivariate classification scheme tha...

  17. LCC: Light Curves Classifier

    Science.gov (United States)

    Vo, Martin

    2017-08-01

    Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering for visualizing the natural separation of the sample. The package can also be used for automatic tuning of the parameters of the methods used (for example, number of hidden neurons or binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curve Classifier can also be used for simple downloading of light curves and all available information of queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and command line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases and filtering outputs by trained filters. Preimplemented descriptors, classifiers and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.

  18. Intelligent Garbage Classifier

    Directory of Open Access Journals (Sweden)

    Ignacio Rodríguez Novelle

    2008-12-01

    Full Text Available IGC (Intelligent Garbage Classifier) is a system for visual classification and separation of solid waste products. Currently, an important part of the separation effort is based on manual work, from household separation to industrial waste management. Taking advantage of the technologies currently available, a system has been built that can analyze images from a camera and control a robot arm and conveyor belt to automatically separate different kinds of waste.

  19. Classifying Linear Canonical Relations

    OpenAIRE

    Lorand, Jonathan

    2015-01-01

    In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.

  20. Data characteristics that determine classifier performance

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2006-11-01

    Full Text Available available at [11]. The kNN uses a LinearNN nearest neighbour search algorithm with a Euclidean distance metric [8]. The optimal k value is determined by performing 10-fold cross-validation. An optimal k value between 1 and 10 is used for Experiments 1... classifiers. 10-fold cross-validation is used to evaluate and compare the performance of the classifiers on the different data sets. 3.1. Artificial data generation Multivariate Gaussian distributions are used to generate artificial data sets. We use d...
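    A minimal sketch of the model-selection step described above (Euclidean k-NN with k chosen from 1 to 10 by 10-fold cross-validation), written with scikit-learn rather than the toolkit cited in the record; the toy Gaussian data is only for illustration:

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        def select_k(X, y, k_values=range(1, 11), folds=10):
            """Return the k in 1..10 with the best 10-fold cross-validated accuracy."""
            scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k, metric="euclidean"),
                                         X, y, cv=folds).mean()
                      for k in k_values}
            return max(scores, key=scores.get)

        # artificial data drawn from two multivariate Gaussian distributions
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0.0, 1.0, (100, 5)), rng.normal(1.0, 1.0, (100, 5))])
        y = np.array([0] * 100 + [1] * 100)
        print("optimal k:", select_k(X, y))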

  1. modelling distances

    Directory of Open Access Journals (Sweden)

    Robert F. Love

    2001-01-01

    Full Text Available Distance predicting functions may be used in a variety of applications for estimating travel distances between points. To evaluate the accuracy of a distance predicting function and to determine its parameters, a goodness-of-fit criterion is employed. AD (Absolute Deviations), SD (Squared Deviations) and NAD (Normalized Absolute Deviations) are the three criteria that are most commonly employed in practice. In the literature some assumptions have been made about the properties of each criterion. In this paper, we present statistical analyses performed to compare the three criteria from different perspectives. For this purpose, we employ the ℓkpθ-norm as the distance predicting function, and statistically compare the three criteria by using normalized absolute prediction error distributions in seventeen geographical regions. We find that there exist no significant differences between the criteria. However, since the criterion SD has desirable properties in terms of distance modelling procedures, we suggest its use in practice.
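    A minimal sketch of the goodness-of-fit criteria named above, computed for a weighted ℓp distance-predicting function; the weighted ℓp form stands in for the ℓkpθ-norm of the paper (its rotation parameter is omitted), and the criterion formulas are the commonly used ones, not necessarily the paper's exact definitions:

        import numpy as np

        def predicted_distance(a, b, k=1.2, p=1.7):
            """Weighted l_p distance predictor: k * (sum_i |a_i - b_i|^p)^(1/p)."""
            diff = np.abs(np.asarray(a, float) - np.asarray(b, float))
            return k * np.sum(diff ** p) ** (1.0 / p)

        def goodness_of_fit(actual, predicted):
            """AD, SD and NAD criteria for a set of actual vs. predicted travel distances."""
            actual = np.asarray(actual, float)
            err = actual - np.asarray(predicted, float)
            return {
                "AD":  np.sum(np.abs(err)),           # absolute deviations
                "SD":  np.sum(err ** 2),              # squared deviations
                "NAD": np.sum(np.abs(err) / actual),  # normalized absolute deviations
            }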

  2. Stack filter classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory; Hush, Don [Los Alamos National Laboratory

    2009-01-01

    Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification, their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers and it shows that the approach is interesting from both a theoretical and a practical perspective.

  3. A Linguistic Image of Nature: The Burmese Numerative Classifier System

    Science.gov (United States)

    Becker, Alton L.

    1975-01-01

    The Burmese classifier system is coherent because it is based upon a single elementary semantic dimension: deixis. On that dimension, four distances are distinguished, distances which metaphorically substitute for other conceptual relations between people and other living beings, people and things, and people and concepts. (Author/RM)

  4. Ultrametric Distance in Syntax

    Directory of Open Access Journals (Sweden)

    Roberts Mark D.

    2015-04-01

    Full Text Available Phrase structure trees have a hierarchical structure. In many subjects, most notably in taxonomy, such tree structures have been studied using ultrametrics. Here syntactical hierarchical phrase trees are subject to a similar analysis, which is much simpler as the branching structure is more readily discernible and switched. The ambiguity of which branching height to choose is resolved by postulating that branching occurs at the lowest height available. An ultrametric produces a measure of the complexity of sentences: presumably the complexity of sentences increases as a language is acquired, so that this can be tested. All ultrametric triangles are equilateral or isosceles. Here it is shown that X̅ structure implies that there are no equilateral triangles. Restricting attention to simple syntax, a minimum ultrametric distance between lexical categories is calculated. A matrix constructed from this ultrametric distance is shown to be different from the matrix obtained from features. It is shown that the definition of C-COMMAND can be replaced by an equivalent ultrametric definition. The new definition invokes a minimum distance between nodes and this is more aesthetically satisfying than previous varieties of definitions. From the new definition of C-COMMAND follows a new definition of the central notion in syntax, namely GOVERNMENT.

  5. Distance learning

    Directory of Open Access Journals (Sweden)

    Katarina Pucelj

    2006-12-01

    Full Text Available I would like to underline the role and importance of knowledge, which is acquired by individuals as a result of a learning process and experience. I have established that a form of learning such as distance learning definitely contributes to a higher learning quality and leads to an innovative, dynamic and knowledge-based society. Knowledge and skills enable individuals to cope with and manage changes, solve problems and also create new knowledge. Traditional learning practices face new circumstances; new and modern technologies appear, which enable quick and quality-oriented knowledge implementation. The central aim of the distance learning process is to increase the quality of life of citizens, their competitiveness on the labour market and ensure higher economic growth. Intellectual capital represents the biggest capital of each society, and knowledge is the key factor for the success of everybody who is fully aware of this. Flexibility, openness and willingness of people to follow new IT solutions form a suitable environment for developing and deciding to take up distance learning.

  6. 47 CFR 73.207 - Minimum distance separation between stations.

    Science.gov (United States)

    2010-10-01

    ... kW ERP and 100 meters antenna HAAT (or equivalent lower ERP and higher antenna HAAT based on a class... which have been notified internationally as Class A are limited to a maximum of 3.0 kW ERP at 100 meters... internationally as Class AA are limited to a maximum of 6.0 kW ERP at 100 meters HAAT, or the equivalent; (iii) U...

  7. Decoding Reed-Muller Codes beyond Half the Minimum Distance

    DEFF Research Database (Denmark)

    Heydtmann, Agnes Eileen; Jakobsen, Thomas

    1999-01-01

    vanishing when evaluated at points in $\mathbb{F}_2^m$ joint with the corresponding received bits. To obtain a list of codewords closest to the received word we need to factor $Q$ considered as an element of the quotient ring of boolean polynomials which is not a unique factorization domain. Therefore we introduce...

  8. Minimum Wages and Poverty

    OpenAIRE

    Fields, Gary S.; Kanbur, Ravi

    2005-01-01

    Textbook analysis tells us that in a competitive labor market, the introduction of a minimum wage above the competitive equilibrium wage will cause unemployment. This paper makes two contributions to the basic theory of the minimum wage. First, we analyze the effects of a higher minimum wage in terms of poverty rather than in terms of unemployment. Second, we extend the standard textbook model to allow for income-sharing between the employed and the unemployed. We find that there are situation...

  9. Real-time stop sign detection and distance estimation using a single camera

    Science.gov (United States)

    Wang, Wenpeng; Su, Yuxuan; Cheng, Ming

    2018-04-01

    In the modern world, the rapid development of driver assistance systems has made driving much easier than before. In order to increase safety onboard, a method was proposed to detect STOP signs and estimate their distance using a single camera. For STOP sign detection, an LBP-cascade classifier was applied to identify the sign in the image, and distance estimation was based on the principle of pinhole imaging. A road test was conducted using a detection system built with a CMOS camera and software developed in Python with the OpenCV library. Results show that the proposed system reaches a detection accuracy of at most 97.6% at 10 m and at least 95.00% at 20 m, with a maximum error of 5% in distance estimation. The results indicate that the system is effective and has the potential to be used in both autonomous driving and advanced driver assistance systems.
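    A minimal sketch of the two steps described above (cascade detection, then pinhole-model ranging), assuming a pre-trained LBP cascade file and rough calibration constants; the cascade file name, focal length and sign width are placeholders, not values from the paper:

        import cv2

        FOCAL_LENGTH_PX = 700.0    # assumed focal length in pixels (from camera calibration)
        SIGN_WIDTH_M = 0.75        # assumed physical width of a STOP sign in metres

        # hypothetical path to a trained LBP cascade for STOP signs (not shipped with OpenCV)
        detector = cv2.CascadeClassifier("stop_sign_lbp_cascade.xml")

        def detect_and_range(frame):
            """Return (x, y, w, h, distance_m) for each detected STOP sign in a BGR frame."""
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            detections = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            results = []
            for (x, y, w, h) in detections:
                # pinhole model: distance = focal_length * real_width / width_in_pixels
                results.append((x, y, w, h, FOCAL_LENGTH_PX * SIGN_WIDTH_M / float(w)))
            return results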

  10. Fingerprint prediction using classifier ensembles

    CSIR Research Space (South Africa)

    Molale, P

    2011-11-01

    Full Text Available ); logistic discrimination (LgD), k-nearest neighbour (k-NN), artificial neural network (ANN), association rules (AR) decision tree (DT), naive Bayes classifier (NBC) and the support vector machine (SVM). The performance of several multiple classifier systems...

  11. 18 CFR 367.18 - Criteria for classifying leases.

    Science.gov (United States)

    2010-04-01

    ... the lessee) must not give rise to a new classification of a lease for accounting purposes. ... classifying the lease. (4) The present value at the beginning of the lease term of the minimum lease payments... taxes to be paid by the lessor, including any related profit, equals or exceeds 90 percent of the excess...

  12. The Minimum Wage and the Employment of Teenagers. Recent Research.

    Science.gov (United States)

    Fallick, Bruce; Currie, Janet

    A study used individual-level data from the National Longitudinal Study of Youth to examine the effects of changes in the federal minimum wage on teenage employment. Individuals in the sample were classified as either likely or unlikely to be affected by these increases in the federal minimum wage on the basis of their wage rates and industry of…

  13. Classified

    CERN Multimedia

    Computer Security Team

    2011-01-01

    In the last issue of the Bulletin, we have discussed recent implications for privacy on the Internet. But privacy of personal data is just one facet of data protection. Confidentiality is another one. However, confidentiality and data protection are often perceived as not relevant in the academic environment of CERN.   But think twice! At CERN, your personal data, e-mails, medical records, financial and contractual documents, MARS forms, group meeting minutes (and of course your password!) are all considered to be sensitive, restricted or even confidential. And this is not all. Physics results, in particular when being preliminary and pending scrutiny, are sensitive, too. Just recently, an ATLAS collaborator copy/pasted the abstract of an ATLAS note onto an external public blog, despite the fact that this document was clearly marked as an "Internal Note". Such an act was not only embarrassing to the ATLAS collaboration, and had negative impact on CERN’s reputation --- i...

  14. Edit Distance to Monotonicity in Sliding Windows

    DEFF Research Database (Denmark)

    Chan, Ho-Leung; Lam, Tak-Wah; Lee, Lap Kei

    2011-01-01

    Given a stream of items each associated with a numerical value, its edit distance to monotonicity is the minimum number of items to remove so that the remaining items are non-decreasing with respect to the numerical value. The space complexity of estimating the edit distance to monotonicity of a ...
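    The quantity defined above can be computed offline as the sequence length minus the length of its longest non-decreasing subsequence; a minimal O(n log n) sketch (this is not the paper's space-efficient sliding-window estimator):

        import bisect

        def edit_distance_to_monotonicity(values):
            """Minimum number of items to remove so the remainder is non-decreasing."""
            tails = []  # tails[k] = smallest possible tail of a non-decreasing subsequence of length k+1
            for v in values:
                i = bisect.bisect_right(tails, v)  # bisect_right keeps equal values allowed
                if i == len(tails):
                    tails.append(v)
                else:
                    tails[i] = v
            return len(values) - len(tails)

        print(edit_distance_to_monotonicity([3, 1, 2, 5, 4]))  # 2: removing 3 and 5 leaves 1, 2, 4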

  15. Classifying Sluice Occurrences in Dialogue

    DEFF Research Database (Denmark)

    Baird, Austin; Hamza, Anissa; Hardt, Daniel

    2018-01-01

    perform manual annotation with acceptable inter-coder agreement. We build classifier models with Decision Trees and Naive Bayes, with an accuracy of 67%. We deploy a classifier to automatically classify sluice occurrences in OpenSubtitles, resulting in a corpus with 1.7 million occurrences. This will support....... Despite this, the corpus can be of great use in research on sluicing and development of systems, and we are making the corpus freely available on request. Furthermore, we are in the process of improving the accuracy of sluice identification and annotation for the purpose of creating a subsequent version...

  16. Minimum critical mass systems

    International Nuclear Information System (INIS)

    Dam, H. van; Leege, P.F.A. de

    1987-01-01

    An analysis is presented of thermal systems with minimum critical mass, based on the use of materials with optimum neutron moderating and reflecting properties. The optimum fissile material distributions in the systems are obtained by calculations with standard computer codes, extended with a routine for flat fuel importance search. It is shown that in the minimum critical mass configuration a considerable part of the fuel is positioned in the reflector region. For 239 Pu a minimum critical mass of 87 g is found, which is the lowest value reported hitherto. (author)

  17. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    International Nuclear Information System (INIS)

    Blanco, A; Rodriguez, R; Martinez-Maranon, I

    2014-01-01

    Mackerel is an undervalued fish captured by European fishing vessels. One manner of adding value to this species is to try to classify it according to its sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and defrozen) to obtain differences between the sexes. Several linear and non-linear classifiers such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA) can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity

  18. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    Science.gov (United States)

    Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.

    2014-03-01

    Mackerel is an undervalued fish captured by European fishing vessels. One manner of adding value to this species is to try to classify it according to its sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and defrozen) to obtain differences between the sexes. Several linear and non-linear classifiers such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA) can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.

  19. Quantum ensembles of quantum classifiers.

    Science.gov (United States)

    Schuld, Maria; Petruccione, Francesco

    2018-02-09

    Quantum machine learning witnesses an increasing amount of quantum algorithms for data-driven decision making, a problem with potential applications ranging from automated image recognition to medical diagnosis. Many of those algorithms are implementations of quantum classifiers, or models for the classification of data inputs with a quantum computer. Following the success of collective decision making with ensembles in classical machine learning, this paper introduces the concept of quantum ensembles of quantum classifiers. Creating the ensemble corresponds to a state preparation routine, after which the quantum classifiers are evaluated in parallel and their combined decision is accessed by a single-qubit measurement. This framework naturally allows for exponentially large ensembles in which - similar to Bayesian learning - the individual classifiers do not have to be trained. As an example, we analyse an exponentially large quantum ensemble in which each classifier is weighed according to its performance in classifying the training data, leading to new results for quantum as well as classical machine learning.

  20. IAEA safeguards and classified materials

    International Nuclear Information System (INIS)

    Pilat, J.F.; Eccleston, G.W.; Fearey, B.L.; Nicholas, N.J.; Tape, J.W.; Kratzer, M.

    1997-01-01

    The international community in the post-Cold War period has suggested that the International Atomic Energy Agency (IAEA) utilize its expertise in support of the arms control and disarmament process in unprecedented ways. The pledges of the US and Russian presidents to place excess defense materials, some of which are classified, under some type of international inspections raises the prospect of using IAEA safeguards approaches for monitoring classified materials. A traditional safeguards approach, based on nuclear material accountancy, would seem unavoidably to reveal classified information. However, further analysis of the IAEA's safeguards approaches is warranted in order to understand fully the scope and nature of any problems. The issues are complex and difficult, and it is expected that common technical understandings will be essential for their resolution. Accordingly, this paper examines and compares traditional safeguards item accounting of fuel at a nuclear power station (especially spent fuel) with the challenges presented by inspections of classified materials. This analysis is intended to delineate more clearly the problems as well as reveal possible approaches, techniques, and technologies that could allow the adaptation of safeguards to the unprecedented task of inspecting classified materials. It is also hoped that a discussion of these issues can advance ongoing political-technical debates on international inspections of excess classified materials

  1. New Maximal Two-distance Sets

    DEFF Research Database (Denmark)

    Lisonek, Petr

    1996-01-01

    A two-distance set in E^d is a point set X in the d-dimensional Euclidean space such that the distances between distinct points in X assume only two different non-zero values. Based on results from classical distance geometry, we develop an algorithm to classify, for a given dimension, all maximal...... (largest possible) two-distance sets in E^d. Using this algorithm we have completed the full classification for all dimensions less than or equal to 7, and we have found one set in E^8 whose maximality follows from Blokhuis' upper bound on sizes of s-distance sets. While in the dimensions less than or equal to 6...

  2. Minimum entropy production principle

    Czech Academy of Sciences Publication Activity Database

    Maes, C.; Netočný, Karel

    2013-01-01

    Roč. 8, č. 7 (2013), s. 9664-9677 ISSN 1941-6016 Institutional support: RVO:68378271 Keywords : MINEP Subject RIV: BE - Theoretical Physics http://www.scholarpedia.org/article/Minimum_entropy_production_principle

  3. Hybrid classifiers methods of data, knowledge, and classifier combination

    CERN Document Server

    Wozniak, Michal

    2014-01-01

    This book delivers definite and compact knowledge on how hybridization can help improve the quality of computer classification systems. In order to make readers clearly realize what hybridization involves, this book primarily focuses on introducing the different levels of hybridization and illuminating what problems we will face when dealing with such projects. In the first instance the data and knowledge incorporated in hybridization were the action points, and then a still-growing area of classifier systems known as combined classifiers was considered. This book comprises the aforementioned state-of-the-art topics and the latest research results of the author and his team from the Department of Systems and Computer Networks, Wroclaw University of Technology, including a classifier based on feature space splitting, one-class classification, imbalanced data, and data stream classification.

  4. Effect of Weight Transfer on a Vehicle's Stopping Distance.

    Science.gov (United States)

    Whitmire, Daniel P.; Alleman, Timothy J.

    1979-01-01

    An analysis of the minimum stopping distance problem is presented taking into account the effect of weight transfer on nonskidding vehicles and front- or rear-wheels-skidding vehicles. Expressions for the minimum stopping distances are given in terms of vehicle geometry and the coefficients of friction. (Author/BB)

  5. Classifying smoking urges via machine learning.

    Science.gov (United States)

    Dumortier, Antoine; Beckjord, Ellen; Shiffman, Saul; Sejdić, Ervin

    2016-12-01

    Smoking is the largest preventable cause of death and diseases in the developed world, and advances in modern electronics and machine learning can help us deliver real-time intervention to smokers in novel ways. In this paper, we examine different machine learning approaches to use situational features associated with having or not having urges to smoke during a quit attempt in order to accurately classify high-urge states. To test our machine learning approaches, specifically, Bayes, discriminant analysis and decision tree learning methods, we used a dataset collected from over 300 participants who had initiated a quit attempt. The three classification approaches are evaluated observing sensitivity, specificity, accuracy and precision. The outcome of the analysis showed that algorithms based on feature selection make it possible to obtain high classification rates with only a few features selected from the entire dataset. The classification tree method outperformed the naive Bayes and discriminant analysis methods, with an accuracy of the classifications up to 86%. These numbers suggest that machine learning may be a suitable approach to deal with smoking cessation matters, and to predict smoking urges, outlining a potential use for mobile health applications. In conclusion, machine learning classifiers can help identify smoking situations, and the search for the best features and classifier parameters significantly improves the algorithms' performance. In addition, this study also supports the usefulness of new technologies in improving the effect of smoking cessation interventions, the management of time and patients by therapists, and thus the optimization of available health care resources. Future studies should focus on providing more adaptive and personalized support to people who really need it, in a minimum amount of time by developing novel expert systems capable of delivering real-time interventions. Copyright © 2016 Elsevier Ireland Ltd. All rights

  6. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.

  7. Minimum-link paths among obstacles in the plane

    NARCIS (Netherlands)

    Mitchell, J.S.B.; Rote, G.; Woeginger, G.J.

    1992-01-01

    Given a set of nonintersecting polygonal obstacles in the plane, the link distance between two points s and t is the minimum number of edges required to form a polygonal path connecting s to t that avoids all obstacles. We present an algorithm that computes the link distance (and a corresponding

  8. Knowledge Uncertainty and Composed Classifier

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana; Ocelíková, E.

    2007-01-01

    Roč. 1, č. 2 (2007), s. 101-105 ISSN 1998-0140 Institutional research plan: CEZ:AV0Z10750506 Keywords : Boosting architecture * contextual modelling * composed classifier * knowledge management, * knowledge * uncertainty Subject RIV: IN - Informatics, Computer Science

  9. Correlation Dimension-Based Classifier

    Czech Academy of Sciences Publication Activity Database

    Jiřina, Marcel; Jiřina jr., M.

    2014-01-01

    Roč. 44, č. 12 (2014), s. 2253-2263 ISSN 2168-2267 R&D Projects: GA MŠk(CZ) LG12020 Institutional support: RVO:67985807 Keywords : classifier * multidimensional data * correlation dimension * scaling exponent * polynomial expansion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.469, year: 2014

  10. Classifying Coding DNA with Nucleotide Statistics

    Directory of Open Access Journals (Sweden)

    Nicolas Carels

    2009-10-01

    Full Text Available In this report, we compared the success rate of classification of coding sequences (CDS) vs. introns by Codon Structure Factor (CSF) and by a method that we called Universal Feature Method (UFM). UFM is based on the scoring of purine bias (Rrr) and stop codon frequency. We show that the success rate of CDS/intron classification by UFM is higher than by CSF. UFM classifies ORFs as coding or non-coding through a score based on (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine (C), Guanine (G), and Adenine (A) probabilities in the 1st, 2nd, and 3rd positions of triplets, respectively, (iv) the probabilities of G in the 1st and 2nd positions of triplets and (v) the distance of their GC3 vs. GC2 levels to the regression line of the universal correlation. More than 80% of CDSs (true positives) of Homo sapiens (>250 bp), Drosophila melanogaster (>250 bp) and Arabidopsis thaliana (>200 bp) are successfully classified with a false positive rate lower than or equal to 5%. The method releases coding sequences in their coding strand and coding frame, which allows their automatic translation into protein sequences with 95% confidence. The method is a natural consequence of the compositional bias of nucleotides in coding sequences.
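    A minimal sketch of two of the ingredients listed above (the stop codon frequency and the per-position purine frequencies of an ORF in a fixed reading frame); this is illustrative only and is not the published UFM scoring function:

        STOP_CODONS = {"TAA", "TAG", "TGA"}
        PURINES = {"A", "G"}

        def stop_and_purine_features(orf):
            """Return (stop codon frequency, purine frequency at codon positions 1-3)."""
            codons = [orf[i:i + 3] for i in range(0, len(orf) - 2, 3)]
            n = len(codons)
            stop_freq = sum(c in STOP_CODONS for c in codons) / n
            purine_by_pos = [sum(c[pos] in PURINES for c in codons) / n for pos in range(3)]
            return stop_freq, purine_by_pos

        print(stop_and_purine_features("ATGGCGAAATAGGCA"))  # 5 codons, one of them a stop codon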

  11. Classified facilities for environmental protection

    International Nuclear Information System (INIS)

    Anon.

    1993-02-01

    The legislation of the classified facilities governs most of the dangerous or polluting industries or fixed activities. It rests on the law of 9 July 1976 concerning facilities classified for environmental protection and its application decree of 21 September 1977. This legislation, the general texts of which appear in this volume 1, aims to prevent all the risks and the harmful effects coming from an installation (air, water or soil pollutions, wastes, even aesthetic breaches). The polluting or dangerous activities are defined in a list called nomenclature which subjects the facilities to a declaration or an authorization procedure. The authorization is delivered by the prefect at the end of an open and contradictory procedure after a public survey. In addition, the facilities can be subjected to technical regulations fixed by the Environment Minister (volume 2) or by the prefect for facilities subjected to declaration (volume 3). (A.B.)

  12. Energy-Efficient Neuromorphic Classifiers.

    Science.gov (United States)

    Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano

    2016-10-01

    Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumptions promised by neuromorphic engineering are extremely low, comparable to those of the nervous system. Until now, however, the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, thereby obfuscating a direct comparison of their energy consumption to that used by conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. Specifically, we provide a set of general prescriptions to enable the practical implementation of neural architectures that compete with state-of-the-art classifiers. We also show that the energy consumption of these architectures, realized on the IBM chip, is typically two or more orders of magnitude lower than that of conventional digital machines implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and has significant advantages over conventional digital devices when energy consumption is considered.

  13. 76 FR 34761 - Classified National Security Information

    Science.gov (United States)

    2011-06-14

    ... MARINE MAMMAL COMMISSION Classified National Security Information [Directive 11-01] AGENCY: Marine... Commission's (MMC) policy on classified information, as directed by Information Security Oversight Office... of Executive Order 13526, ``Classified National Security Information,'' and 32 CFR part 2001...

  14. Rising above the Minimum Wage.

    Science.gov (United States)

    Even, William; Macpherson, David

    An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population…

  15. Design for minimum energy in interstellar communication

    Science.gov (United States)

    Messerschmitt, David G.

    2015-02-01

    Microwave digital communication at interstellar distances is the foundation of extraterrestrial civilization (SETI and METI) communication of information-bearing signals. Large distances demand large transmitted power and/or large antennas, while the propagation is transparent over a wide bandwidth. Recognizing a fundamental tradeoff, reduced energy delivered to the receiver at the expense of wide bandwidth (the opposite of terrestrial objectives) is advantageous. Wide bandwidth also results in simpler design and implementation, allowing circumvention of dispersion and scattering arising in the interstellar medium and motion effects and obviating any related processing. The minimum energy delivered to the receiver per bit of information is determined by the cosmic microwave background alone. By mapping a single bit onto a carrier burst, the Morse code invented for the telegraph in 1836 comes closer to this minimum energy than approaches used in modern terrestrial radio. Whereas the terrestrial approach of adding phases and amplitudes increases information capacity while minimizing bandwidth, adding multiple time-frequency locations for carrier bursts increases capacity while minimizing energy per information bit. The resulting location code is simple and yet can approach the minimum energy as bandwidth is expanded. It is consistent with easy discovery, since carrier bursts are energetic and straightforward modifications to post-detection pattern recognition can identify burst patterns. Time and frequency coherence constraints leading to simple signal discovery are addressed, and observations of the interstellar medium by transmitter and receiver constrain the burst parameters and limit the search scope.

  16. Do Minimum Wages Fight Poverty?

    OpenAIRE

    David Neumark; William Wascher

    1997-01-01

    The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...

  17. Waste classifying and separation device

    International Nuclear Information System (INIS)

    Kakiuchi, Hiroki.

    1997-01-01

    Flexible plastic bags containing solid wastes of indefinite shape are broken open and the wastes are classified. The bag-cutting portion of the device has an ultrasonic-type or a heater-type cutting means, and the cutting means moves in parallel with the transfer direction of the plastic bags. A classification portion separates the plastic bag from the contents, discriminates between them and conducts classification while rotating a classification table. Accordingly, a plastic bag containing solids of indefinite shape can be broken open and classification can be conducted efficiently and reliably. The device of the present invention has a simple structure which requires little installation space and enables easy maintenance. (T.M.)

  18. Defining and Classifying Interest Groups

    DEFF Research Database (Denmark)

    Baroni, Laura; Carroll, Brendan; Chalmers, Adam

    2014-01-01

    The interest group concept is defined in many different ways in the existing literature and a range of different classification schemes are employed. This complicates comparisons between different studies and their findings. One of the important tasks faced by interest group scholars engaged...... in large-N studies is therefore to define the concept of an interest group and to determine which classification scheme to use for different group types. After reviewing the existing literature, this article sets out to compare different approaches to defining and classifying interest groups with a sample...... in the organizational attributes of specific interest group types. As expected, our comparison of coding schemes reveals a closer link between group attributes and group type in narrower classification schemes based on group organizational characteristics than those based on a behavioral definition of lobbying....

  19. Classification With Truncated Distance Kernel.

    Science.gov (United States)

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves performance similar to or better than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
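    A minimal sketch of a truncated distance kernel of the form K(x, z) = max(rho - ||x - z||_1, 0), plugged into scikit-learn's SVC as a custom kernel; the exact kernel form and the value of rho are assumptions for illustration rather than details taken from the brief:

        import numpy as np
        from sklearn.svm import SVC

        def tl1_kernel(X, Z, rho=2.5):
            """Truncated l1 kernel: K(x, z) = max(rho - ||x - z||_1, 0); rho is a pre-given parameter."""
            d1 = np.abs(X[:, None, :] - Z[None, :, :]).sum(axis=2)  # pairwise l1 distances
            return np.maximum(rho - d1, 0.0)

        # XOR-like toy problem: different nonlinearities in different regions of the plane
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 2))
        y = (X[:, 0] * X[:, 1] > 0).astype(int)
        clf = SVC(kernel=tl1_kernel).fit(X, y)   # SVC builds the Gram matrix from the callable
        print("training accuracy:", clf.score(X, y))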

  20. Encyclopedia of distances

    CERN Document Server

    Deza, Michel Marie

    2016-01-01

    This 4th edition of the leading reference volume on distance metrics is characterized by updated and rewritten sections on some items suggested by experts and readers, as well a general streamlining of content and the addition of essential new topics. Though the structure remains unchanged, the new edition also explores recent advances in the use of distances and metrics for e.g. generalized distances, probability theory, graph theory, coding theory, data analysis. New topics in the purely mathematical sections include e.g. the Vitanyi multiset-metric, algebraic point-conic distance, triangular ratio metric, Rossi-Hamming metric, Taneja distance, spectral semimetric between graphs, channel metrization, and Maryland bridge distance. The multidisciplinary sections have also been supplemented with new topics, including: dynamic time warping distance, memory distance, allometry, atmospheric depth, elliptic orbit distance, VLBI distance measurements, the astronomical system of units, and walkability distance. Lea...

  1. A Minimum Spanning Tree Representation of Anime Similarities

    OpenAIRE

    Wibowo, Canggih Puspo

    2016-01-01

    In this work, a new way to represent Japanese animation (anime) is presented. We applied a minimum spanning tree to show the relations between anime. The distance between anime is calculated through three similarity measurements, namely crew, score histogram, and topic similarities. Finally, the centralities are also computed to reveal the most significant anime. The result shows that the minimum spanning tree can be used to determine similarity between anime. Furthermore, by using centralities c...

  2. Employment effects of minimum wages

    OpenAIRE

    Neumark, David

    2014-01-01

    The potential benefits of higher minimum wages come from the higher wages for affected workers, some of whom are in low-income families. The potential downside is that a higher minimum wage may discourage employers from using the low-wage, low-skill workers that minimum wages are intended to help. Research findings are not unanimous, but evidence from many countries suggests that minimum wages reduce the jobs available to low-skill workers.

  3. Minimum wakefield achievable by waveguide damped cavity

    International Nuclear Information System (INIS)

    Lin, X.E.; Kroll, N.M.

    1995-01-01

    The authors use an equivalent circuit to model a waveguide damped cavity. Both exponentially damped and persistent (decaying as t^(-3/2)) components of the wakefield are derived from this model. The result shows that for a cavity with resonant frequency a fixed interval above waveguide cutoff, the persistent wakefield amplitude is inversely proportional to the external Q value of the damped mode. The competition of the two terms results in an optimal Q value, which gives a minimum wakefield as a function of the distance behind the source particle. The minimum wakefield increases when the resonant frequency approaches the waveguide cutoff. The results agree very well with computer simulation on a real cavity-waveguide system

  4. 75 FR 6151 - Minimum Capital

    Science.gov (United States)

    2010-02-08

    ... capital and reserve requirements to be issued by order or regulation with respect to a product or activity... minimum capital requirements. Section 1362(a) establishes a minimum capital level for the Enterprises... entities required under this section.\\6\\ \\3\\ The Bank Act's current minimum capital requirements apply to...

  5. Experimental study on multi-sub-classifier for land cover classification: a case study in Shangri-La, China

    Science.gov (United States)

    Wang, Yan-ying; Wang, Jin-liang; Wang, Ping; Hu, Wen-yin; Su, Shao-hua

    2015-12-01

    High-accuracy classification of remotely sensed images is a long-standing goal of remote sensing applications. In order to evaluate the accuracy of single classification algorithms, a Landsat TM image was taken as the data source and Northwest Yunnan as the study area, and seven land cover classification methods, such as Maximum Likelihood Classification, were tested. The results show that: (1) the overall classification accuracy of Maximum Likelihood Classification (MLC), Artificial Neural Network Classification (ANN) and Minimum Distance Classification (MinDC) is higher, at 82.81%, 82.26% and 66.41%, respectively; the overall classification accuracy of Parallel Hexahedron Classification (Para), Spectral Information Divergence Classification (SID) and Spectral Angle Classification (SAM) is low, at 37.29%, 38.37% and 53.73%, respectively. (2) In terms of per-category classification accuracy: although the overall accuracy of Para is the lowest, it is much higher on grasslands, wetlands, forests and airport land, at 89.59%, 94.14% and 89.04%, respectively; SAM and SID are good at forest classification, with higher classification accuracies of 89.8% and 87.98%, respectively. Although the overall classification accuracy of ANN is very high, the classification accuracy of roads, rural residential land and airport land is very low, at 10.59%, 11% and 11.59%, respectively. Other classification methods have their own advantages and disadvantages. These results show that, under the same conditions, different classifiers applied to the same image achieve high accuracy on different features; therefore, we may select multi-sub-classifier integration to improve the classification accuracy.
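    Since Minimum Distance Classification is one of the methods benchmarked above, here is a minimal sketch of a minimum-distance-to-class-means classifier over pixel spectra; the band values and class names are made up for illustration:

        import numpy as np

        class MinimumDistanceClassifier:
            """Assign each pixel to the class whose mean spectral vector is nearest (Euclidean)."""

            def fit(self, X, y):
                self.classes_ = np.unique(y)
                self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
                return self

            def predict(self, X):
                # distances of every sample to every class mean: shape (n_samples, n_classes)
                d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
                return self.classes_[np.argmin(d, axis=1)]

        # toy example: 3-band "pixels" from two invented land cover classes
        rng = np.random.default_rng(42)
        forest = rng.normal([40, 60, 30], 5, (50, 3))
        water = rng.normal([10, 20, 80], 5, (50, 3))
        X = np.vstack([forest, water])
        y = np.array(["forest"] * 50 + ["water"] * 50)
        clf = MinimumDistanceClassifier().fit(X, y)
        print(clf.predict(np.array([[38.0, 58.0, 33.0], [12.0, 22.0, 76.0]])))  # forest, water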

  6. Training for Distance Teaching through Distance Learning.

    Science.gov (United States)

    Cadorath, Jill; Harris, Simon; Encinas, Fatima

    2002-01-01

    Describes a mixed-mode bachelor degree course in English language teaching at the Universidad Autonoma de Puebla (Mexico) that was designed to help practicing teachers write appropriate distance education materials by giving them the experience of being distance students. Includes a course outline and results of a course evaluation. (Author/LRW)

  7. The Distance Standard Deviation

    OpenAIRE

    Edelmann, Dominic; Richards, Donald; Vogel, Daniel

    2017-01-01

    The distance standard deviation, which arises in distance correlation analysis of multivariate data, is studied as a measure of spread. New representations for the distance standard deviation are obtained in terms of Gini's mean difference and in terms of the moments of spacings of order statistics. Inequalities for the distance variance are derived, proving that the distance standard deviation is bounded above by the classical standard deviation and by Gini's mean difference. Further, it is ...

  8. SpectraClassifier 1.0: a user friendly, automated MRS-based classifier-development system

    Directory of Open Access Journals (Sweden)

    Julià-Sapé Margarida

    2010-02-01

    Full Text Available Abstract Background SpectraClassifier (SC) is a Java solution for designing and implementing Magnetic Resonance Spectroscopy (MRS)-based classifiers. The main goal of SC is to allow users with minimum background knowledge of multivariate statistics to perform a fully automated pattern recognition analysis. SC incorporates feature selection (greedy stepwise approach, either forward or backward) and feature extraction (PCA). Fisher Linear Discriminant Analysis is the method of choice for classification. Classifier evaluation is performed through various methods: display of the confusion matrix of the training and testing datasets; K-fold cross-validation, leave-one-out and bootstrapping as well as Receiver Operating Characteristic (ROC) curves. Results SC is composed of the following modules: Classifier design, Data exploration, Data visualisation, Classifier evaluation, Reports, and Classifier history. It is able to read low resolution in-vivo MRS (single-voxel and multi-voxel) and high resolution tissue MRS (HRMAS), processed with existing tools (jMRUI, INTERPRET, 3DiCSI or TopSpin). In addition, to facilitate exchanging data between applications, a standard format capable of storing all the information needed for a dataset was developed. Each functionality of SC has been specifically validated with real data with the purpose of bug-testing and methods validation. Data from the INTERPRET project was used. Conclusions SC is a user-friendly software designed to fulfil the needs of potential users in the MRS community. It accepts all kinds of pre-processed MRS data types and classifies them semi-automatically, allowing spectroscopists to concentrate on interpretation of results with the use of its visualisation tools.
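    SpectraClassifier itself is a Java application; the following is an equivalent-in-spirit scikit-learn sketch of the workflow described above (PCA feature extraction, Fisher linear discriminant classification, K-fold cross-validation) on made-up spectra:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        # X: one row per spectrum (intensity over frequency points), y: class labels (e.g. tumour types)
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0.0, 1.0, (40, 200)), rng.normal(0.5, 1.0, (40, 200))])
        y = np.array([0] * 40 + [1] * 40)

        model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
        scores = cross_val_score(model, X, y, cv=10)  # K-fold cross-validation (K = 10)
        print("mean cross-validated accuracy: %.2f" % scores.mean())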

  9. Composite Classifiers for Automatic Target Recognition

    National Research Council Canada - National Science Library

    Wang, Lin-Cheng

    1998-01-01

    ...) using forward-looking infrared (FLIR) imagery. Two existing classifiers, one based on learning vector quantization and the other on modular neural networks, are used as the building blocks for our composite classifiers...

  10. Aggregation Operator Based Fuzzy Pattern Classifier Design

    DEFF Research Database (Denmark)

    Mönks, Uwe; Larsen, Henrik Legind; Lohweg, Volker

    2009-01-01

    This paper presents a novel modular fuzzy pattern classifier design framework for intelligent automation systems, developed on the basis of the established Modified Fuzzy Pattern Classifier (MFPC), and allows designing novel classifier models which are hardware-efficiently implementable....... The performances of novel classifiers using substitutes of MFPC's geometric mean aggregator are benchmarked in the scope of an image processing application against the MFPC to reveal classification improvement potentials for obtaining higher classification rates.

  11. Encyclopedia of distances

    CERN Document Server

    Deza, Michel Marie

    2014-01-01

    This updated and revised third edition of the leading reference volume on distance metrics includes new items from very active research areas in the use of distances and metrics such as geometry, graph theory, probability theory and analysis. Among the new topics included are, for example, polyhedral metric space, nearness matrix problems, distances between belief assignments, distance-related animal settings, diamond-cutting distances, natural units of length, Heidegger’s de-severance distance, and brain distances. The publication of this volume coincides with intensifying research efforts into metric spaces and especially distance design for applications. Accurate metrics have become a crucial goal in computational biology, image analysis, speech recognition and information retrieval. Leaving aside the practical questions that arise during the selection of a ‘good’ distance function, this work focuses on providing the research community with an invaluable comprehensive listing of the main available di...

  12. 15 CFR 4.8 - Classified Information.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information..., the information shall be reviewed to determine whether it should remain classified. Ordinarily the...

  13. A Bayesian Classifier for X-Ray Pulsars Recognition

    Directory of Open Access Journals (Sweden)

    Hao Liang

    2016-01-01

    Full Text Available Recognition of X-ray pulsars is important for the problem of spacecraft attitude determination by X-ray Pulsar Navigation (XPNAV). By using the nonhomogeneous Poisson model of the received photons and the minimum recognition error criterion, a classifier based on the Bayesian theorem is proposed. For X-ray pulsar recognition with unknown Doppler frequency and initial phase, the features of every X-ray pulsar are extracted and the unknown parameters are estimated using the Maximum Likelihood (ML) method. In addition, a method to recognize unknown X-ray pulsars or X-ray disturbances is proposed. Simulation results confirm the validity of the proposed Bayesian classifier.
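
    A minimal sketch, in Python with invented rate templates and photon counts, of the minimum-error Bayes decision under a Poisson photon model (with equal priors this reduces to picking the maximum likelihood; the ML estimation of Doppler frequency and initial phase described above is omitted):

        import numpy as np
        from scipy.stats import poisson

        templates = {                            # expected photon counts per phase bin (assumed)
            "PSR_A": np.array([2.0, 8.0, 3.0, 1.0]),
            "PSR_B": np.array([5.0, 5.0, 5.0, 5.0]),
        }
        observed = np.array([3, 9, 2, 1])        # hypothetical binned photon counts

        # Equal priors: the minimum recognition error decision is the maximum likelihood
        loglik = {name: poisson.logpmf(observed, mu).sum() for name, mu in templates.items()}
        print(max(loglik, key=loglik.get))       # prints "PSR_A" for these numbers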

  14. Timed Fast Exact Euclidean Distance (tFEED) maps

    NARCIS (Netherlands)

    Kehtarnavaz, Nasser; Schouten, Theo E.; Laplante, Philip A.; Kuppens, Harco; van den Broek, Egon

    2005-01-01

    In image and video analysis, distance maps are frequently used. They provide the (Euclidean) distance (ED) of background pixels to the nearest object pixel. In a naive implementation, each object pixel feeds its (exact) ED to each background pixel; then the minimum of these values denotes the ED to
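
    The naive all-pairs scheme described above is what FEED-style algorithms improve on; for comparison, the same exact Euclidean distance map can be obtained with SciPy's distance transform (a standard routine, not the tFEED implementation):

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        image = np.zeros((5, 7), dtype=bool)
        image[2, 3] = True                       # a single object pixel
        # distance_transform_edt returns the distance to the nearest zero element,
        # so the image is inverted: background pixels get their ED to the object.
        ed_map = distance_transform_edt(~image)
        print(ed_map.round(2))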

  15. Brownian distance covariance

    OpenAIRE

    Székely, Gábor J.; Rizzo, Maria L.

    2010-01-01

    Distance correlation is a new class of multivariate dependence coefficients applicable to random vectors of arbitrary and not necessarily equal dimension. Distance covariance and distance correlation are analogous to product-moment covariance and correlation, but generalize and extend these classical bivariate measures of dependence. Distance correlation characterizes independence: it is zero if and only if the random vectors are independent. The notion of covariance with...

  16. Distance-regular graphs

    NARCIS (Netherlands)

    van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime

    2016-01-01

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,

  17. 76 FR 63811 - Structural Reforms To Improve the Security of Classified Networks and the Responsible Sharing and...

    Science.gov (United States)

    2011-10-13

    ... implementation of policies and minimum standards regarding information security, personnel security, and systems security; address both internal and external security threats and vulnerabilities; and provide policies and... policies and minimum standards will address all agencies that operate or access classified computer...

  18. Comparison of Classifier Architectures for Online Neural Spike Sorting.

    Science.gov (United States)

    Saeed, Maryam; Khan, Amir Ali; Kamboh, Awais Mehmood

    2017-04-01

    High-density, intracranial recordings from micro-electrode arrays need to undergo Spike Sorting in order to associate the recorded neuronal spikes to particular neurons. This involves spike detection, feature extraction, and classification. To reduce the data transmission and power requirements, on-chip real-time processing is becoming very popular. However, high computational resources are required for classifiers in on-chip spike-sorters, making scalability a great challenge. In this review paper, we analyze several popular classifiers to propose five new hardware architectures using the off-chip training with on-chip classification approach. These include support vector classification, fuzzy C-means classification, self-organizing maps classification, moving-centroid K-means classification, and Cosine distance classification. The performance of these architectures is analyzed in terms of accuracy and resource requirement. We establish that the neural networks based Self-Organizing Maps classifier offers the most viable solution. A spike sorter based on the Self-Organizing Maps classifier, requires only 7.83% of computational resources of the best-reported spike sorter, hierarchical adaptive means, while offering a 3% better accuracy at 7 dB SNR.
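
    A minimal sketch, in Python with invented spike templates, of the cosine-distance classification stage mentioned above: the templates come from off-chip training and each detected spike is assigned to the template with the smallest cosine distance (largest cosine similarity):

        import numpy as np

        def cosine_classify(spike, templates):
            # templates: (n_units, n_samples) array of template waveforms
            sims = templates @ spike / (np.linalg.norm(templates, axis=1) * np.linalg.norm(spike))
            return int(np.argmax(sims))          # max similarity == min cosine distance

        templates = np.array([[0.0, 1.0, 3.0, 1.0, 0.0],       # hypothetical unit 1
                              [0.0, -1.0, -2.0, -1.0, 0.0]])   # hypothetical unit 2
        spike = np.array([0.1, 0.9, 2.8, 1.2, 0.0])
        print(cosine_classify(spike, templates))                # assigns the spike to unit 0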

  19. Haptic Discrimination of Distance

    Science.gov (United States)

    van Beek, Femke E.; Bergmann Tiest, Wouter M.; Kappers, Astrid M. L.

    2014-01-01

    While quite some research has focussed on the accuracy of haptic perception of distance, information on the precision of haptic perception of distance is still scarce, particularly regarding distances perceived by making arm movements. In this study, eight conditions were measured to answer four main questions, which are: what is the influence of reference distance, movement axis, perceptual mode (active or passive) and stimulus type on the precision of this kind of distance perception? A discrimination experiment was performed with twelve participants. The participants were presented with two distances, using either a haptic device or a real stimulus. Participants compared the distances by moving their hand from a start to an end position. They were then asked to judge which of the distances was the longer, from which the discrimination threshold was determined for each participant and condition. The precision was influenced by reference distance. No effect of movement axis was found. The precision was higher for active than for passive movements and it was a bit lower for real stimuli than for rendered stimuli, but it was not affected by adding cutaneous information. Overall, the Weber fraction for the active perception of a distance of 25 or 35 cm was about 11% for all cardinal axes. The recorded position data suggest that participants, in order to be able to judge which distance was the longer, tried to produce similar speed profiles in both movements. This knowledge could be useful in the design of haptic devices. PMID:25116638

  20. Haptic discrimination of distance.

    Directory of Open Access Journals (Sweden)

    Femke E van Beek

    Full Text Available While quite some research has focussed on the accuracy of haptic perception of distance, information on the precision of haptic perception of distance is still scarce, particularly regarding distances perceived by making arm movements. In this study, eight conditions were measured to answer four main questions, which are: what is the influence of reference distance, movement axis, perceptual mode (active or passive and stimulus type on the precision of this kind of distance perception? A discrimination experiment was performed with twelve participants. The participants were presented with two distances, using either a haptic device or a real stimulus. Participants compared the distances by moving their hand from a start to an end position. They were then asked to judge which of the distances was the longer, from which the discrimination threshold was determined for each participant and condition. The precision was influenced by reference distance. No effect of movement axis was found. The precision was higher for active than for passive movements and it was a bit lower for real stimuli than for rendered stimuli, but it was not affected by adding cutaneous information. Overall, the Weber fraction for the active perception of a distance of 25 or 35 cm was about 11% for all cardinal axes. The recorded position data suggest that participants, in order to be able to judge which distance was the longer, tried to produce similar speed profiles in both movements. This knowledge could be useful in the design of haptic devices.

  1. Interface Simulation Distances

    Directory of Open Access Journals (Sweden)

    Pavol Černý

    2012-10-01

    Full Text Available The classical (boolean) notion of refinement for behavioral interfaces of system components is the alternating refinement preorder. In this paper, we define a distance for interfaces, called interface simulation distance. It makes the alternating refinement preorder quantitative by, intuitively, tolerating errors (while counting them) in the alternating simulation game. We show that the interface simulation distance satisfies the triangle inequality, that the distance between two interfaces does not increase under parallel composition with a third interface, and that the distance between two interfaces can be bounded from above and below by distances between abstractions of the two interfaces. We illustrate the framework, and the properties of the distances under composition of interfaces, with two case studies.

  2. Tourists consuming distance

    DEFF Research Database (Denmark)

    Larsen, Gunvor Riber

    The environmental impact of tourism mobility is linked to the distances travelled in order to reach a holiday destination, and with tourists travelling more and further than previously, an understanding of how the tourists view the distance they travel across becomes relevant. Based on interviews...... contribute to an understanding of how it is possible to change tourism travel behaviour towards becoming more sustainable. How tourists 'consume distance' is discussed, from the practical level of actually driving the car or sitting in the air plane, to the symbolic consumption of distance that occurs when...... travelling on holiday becomes part of a lifestyle and a social positioning game. Further, different types of tourist distance consumers are identified, ranging from the reluctant to the deliberate and nonchalant distance consumers, who display very differing attitudes towards the distance they all travel...

  3. Error minimizing algorithms for nearest neighbor classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory]; Hush, Don [Los Alamos National Laboratory]; Zimmer, G. Beate [Texas A&M]

    2011-01-03

    Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical error based loss functions. We use the framework to investigate a new cost sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.

  4. Safety distance between underground natural gas and water pipeline facilities

    International Nuclear Information System (INIS)

    Mohsin, R.; Majid, Z.A.; Yusof, M.Z.

    2014-01-01

    A leaking water pipe bursting high pressure water jet in the soil will create slurry erosion which will eventually erode the adjacent natural gas pipe, thus causing its failure. The standard 300 mm safety distance used to place natural gas pipe away from water pipeline facilities needs to be reviewed to consider accidental damage and provide safety cushion to the natural gas pipe. This paper presents a study on underground natural gas pipeline safety distance via experimental and numerical approaches. The pressure–distance characteristic curve obtained from this experimental study showed that the pressure was inversely proportional to the square of the separation distance. Experimental testing using water-to-water pipeline system environment was used to represent the worst case environment, and could be used as a guide to estimate appropriate safety distance. Dynamic pressures obtained from the experimental measurement and simulation prediction mutually agreed along the high-pressure water jetting path. From the experimental and simulation exercises, zero effect distance for water-to-water medium was obtained at an estimated horizontal distance at a minimum of 1500 mm, while for the water-to-sand medium, the distance was estimated at a minimum of 1200 mm. - Highlights: • Safe separation distance of underground natural gas pipes was determined. • Pressure curve is inversely proportional to separation distance. • Water-to-water system represents the worst case environment. • Measured dynamic pressures mutually agreed with simulation results. • Safe separation distance of more than 1200 mm should be applied

  5. Traversing psychological distance.

    Science.gov (United States)

    Liberman, Nira; Trope, Yaacov

    2014-07-01

    Traversing psychological distance involves going beyond direct experience, and includes planning, perspective taking, and contemplating counterfactuals. Consistent with this view, temporal, spatial, and social distances as well as hypotheticality are associated, affect each other, and are inferred from one another. Moreover, traversing all distances involves the use of abstraction, which we define as forming a belief about the substitutability for a specific purpose of subjectively distinct objects. Indeed, across many instances of both abstraction and psychological distancing, more abstract constructs are used for more distal objects. Here, we describe the implications of this relation for prediction, choice, communication, negotiation, and self-control. We ask whether traversing distance is a general mental ability and whether distance should replace expectancy in expected-utility theories. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Cyclic labellings with constraints at two distances

    OpenAIRE

    Leese, R; Noble, S D

    2004-01-01

    Motivated by problems in radio channel assignment, we consider the vertex-labelling of graphs with non-negative integers. The objective is to minimise the span of the labelling, subject to constraints imposed at graph distances one and two. We show that the minimum span is (up to rounding) a piecewise linear function of the constraints, and give a complete specification, together with associated optimal assignments, for trees and cycles.

  7. Hierarchical mixtures of naive Bayes classifiers

    NARCIS (Netherlands)

    Wiering, M.A.

    2002-01-01

    Naive Bayes classifiers tend to perform very well on a large number of problem domains, although their representation power is quite limited compared to more sophisticated machine learning algorithms. In this paper we study combining multiple naive Bayes classifiers by using the hierarchical

  8. Comparing classifiers for pronunciation error detection

    NARCIS (Netherlands)

    Strik, H.; Truong, K.; Wet, F. de; Cucchiarini, C.

    2007-01-01

    Providing feedback on pronunciation errors in computer assisted language learning systems requires that pronunciation errors be detected automatically. In the present study we compare four types of classifiers that can be used for this purpose: two acoustic-phonetic classifiers (one of which employs

  9. Feature extraction for dynamic integration of classifiers

    NARCIS (Netherlands)

    Pechenizkiy, M.; Tsymbal, A.; Puuronen, S.; Patterson, D.W.

    2007-01-01

    Recent research has shown the integration of multiple classifiers to be one of the most important directions in machine learning and data mining. In this paper, we present an algorithm for the dynamic integration of classifiers in the space of extracted features (FEDIC). It is based on the technique

  10. Numerical distance protection

    CERN Document Server

    Ziegler, Gerhard

    2011-01-01

    Distance protection provides the basis for network protection in transmission systems and meshed distribution systems. This book covers the fundamentals of distance protection and the special features of numerical technology. The emphasis is placed on the application of numerical distance relays in distribution and transmission systems.This book is aimed at students and engineers who wish to familiarise themselves with the subject of power system protection, as well as the experienced user, entering the area of numerical distance protection. Furthermore it serves as a reference guide for s

  11. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  12. Deconvolution When Classifying Noisy Data Involving Transformations.

    Science.gov (United States)

    Carroll, Raymond; Delaigle, Aurore; Hall, Peter

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  13. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond; Delaigle, Aurore; Hall, Peter

    2012-01-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  14. On Normalized Compression Distance and Large Malware

    OpenAIRE

    Borbely, Rebecca Schuller

    2015-01-01

    Normalized Compression Distance (NCD) is a popular tool that uses compression algorithms to cluster and classify data in a wide range of applications. Existing discussions of NCD's theoretical merit rely on certain theoretical properties of compression algorithms. However, we demonstrate that many popular compression algorithms don't seem to satisfy these theoretical properties. We explore the relationship between some of these properties and file size, demonstrating that this theoretical pro...
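
    A minimal sketch, in Python with zlib as the (real-world, non-ideal) compressor, of how NCD is typically computed; the byte strings are invented:

        import zlib

        def ncd(x: bytes, y: bytes) -> float:
            # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
            cx, cy, cxy = (len(zlib.compress(s)) for s in (x, y, x + y))
            return (cxy - min(cx, cy)) / max(cx, cy)

        s1 = b"the quick brown fox jumps over the lazy dog" * 20
        s2 = b"the quick brown fox jumps over the lazy dog" * 19 + b"!" * 40
        s3 = bytes(range(256)) * 5
        print(round(ncd(s1, s2), 3), round(ncd(s1, s3), 3))   # the similar pair scores lower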

  15. Mahalanobis Distance-Based Classifiers are Able to Recognize EEG Patterns by Using Few EEG Electrodes

    Science.gov (United States)

    2001-10-25

    Authors (partial list): Mouriño, Angela Cattini, Serenella Salinari, Maria Grazia Marciani and Febo Cincotti. Performing organization: Dip. Fisiologia umana e Farmacologia, Università "La Sapienza", Rome, Italy.

  16. ORDERED WEIGHTED DISTANCE MEASURE

    Institute of Scientific and Technical Information of China (English)

    Zeshui XU; Jian CHEN

    2008-01-01

    The aim of this paper is to develop an ordered weighted distance (OWD) measure, which is the generalization of some widely used distance measures, including the normalized Hamming distance, the normalized Euclidean distance, the normalized geometric distance, the max distance, the median distance and the min distance, etc. Moreover, the ordered weighted averaging operator, the generalized ordered weighted aggregation operator, the ordered weighted geometric operator, the averaging operator, the geometric mean operator, the ordered weighted square root operator, the square root operator, the max operator, the median operator and the min operator are also special cases of the OWD measure. Some methods depending on the input arguments are given to determine the weights associated with the OWD measure. The prominent characteristic of the OWD measure is that it can relieve (or intensify) the influence of unduly large or unduly small deviations on the aggregation results by assigning them low (or high) weights. This desirable characteristic makes the OWD measure very suitable to be used in many actual fields, including group decision making, medical diagnosis, data mining, and pattern recognition, etc. Finally, based on the OWD measure, we develop a group decision making approach, and illustrate it with a numerical example.
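
    A minimal sketch, in Python, of one common parameterization of such an ordered weighted distance: absolute deviations are sorted in decreasing order before weighting, so unduly large (or small) deviations can be damped (or emphasized); the weights, vectors and power parameter lam are assumptions, not taken from the paper:

        import numpy as np

        def owd(a, b, weights, lam=2.0):
            # Sort deviations in decreasing order, then aggregate with ordered weights
            devs = np.sort(np.abs(np.asarray(a) - np.asarray(b)))[::-1]
            return float((weights @ devs**lam) ** (1.0 / lam))

        a, b = [0.2, 0.9, 0.4], [0.6, 0.1, 0.5]
        equal_w   = np.array([1/3, 1/3, 1/3])    # behaves like a normalized Euclidean distance (lam=2)
        damping_w = np.array([0.1, 0.3, 0.6])    # low weight on the largest deviation
        print(owd(a, b, equal_w), owd(a, b, damping_w))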

  17. Distance-transitive graphs

    NARCIS (Netherlands)

    Cohen, A.M.; Beineke, L.W.; Wilson, R.J.; Cameron, P.J.

    2004-01-01

    In this chapter we investigate the classification of distance-transitive graphs: these are graphs whose automorphism groups are transitive on each of the sets of pairs of vertices at distance i, for i = 0, 1,.... We provide an introduction into the field. By use of the classification of finite

  18. Distance Education in Entwicklungslandern.

    Science.gov (United States)

    German Foundation for International Development, Bonn (West Germany).

    Seminar and conference reports and working papers on distance education of adults, which reflect the experiences of many countries, are presented. Contents include the draft report of the 1979 International Seminar on Distance Education held in Addis Ababa, Ethiopia, which was jointly sponsored by the United Nations Economic Commission for Africa…

  19. Encyclopedia of distances

    CERN Document Server

    Deza, Michel Marie

    2009-01-01

    Distance metrics and distances have become an essential tool in many areas of pure and applied Mathematics. This title offers both independent introductions and definitions, while at the same time making cross-referencing easy through hyperlink-like boldfaced references to original definitions.

  20. Distance Education in Turkey

    Directory of Open Access Journals (Sweden)

    Dr. Nursel Selver RUZGAR,

    2004-04-01

    Full Text Available Many countries of the world use distance education in various ways: by internet, by post and by TV. In this work, the development of distance education in Turkey is presented from its beginning. After discussing the types and applications of distance education at different levels in Turkey, distance education is also considered from a cultural point of view. Then, in order to capture the tendencies and views of graduates of Higher Education Institutions and Distance Education Institutions about competing in job markets, the sufficiency of their education level, the advantages for the education system, and continuing education in different institutions, a face-to-face survey was administered to 1284 graduates, 958 from Higher Education Institutions and 326 from Distance Education Institutions. The results were evaluated and discussed. In the last part of this work, suggestions are made for making distance education more widespread in the country and improving it.

  1. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network is introduced as an efficient classifier among others. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of the squared error. Minimizing this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function used in the generalized classifier neural network, the proposed logarithmic approach and its derivative take continuous values. This makes it possible to exploit the fast convergence of the logarithmic cost function in the proposed learning method. Owing to this fast convergence, training time is reduced by as much as 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution for the time requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can therefore be considered an efficient way of reducing the time requirement of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. UPGMA and the normalized equidistant minimum evolution problem

    OpenAIRE

    Moulton, Vincent; Spillner, Andreas; Wu, Taoyang

    2017-01-01

    UPGMA (Unweighted Pair Group Method with Arithmetic Mean) is a widely used clustering method. Here we show that UPGMA is a greedy heuristic for the normalized equidistant minimum evolution (NEME) problem, that is, finding a rooted tree that minimizes the minimum evolution score relative to the dissimilarity matrix among all rooted trees with the same leaf-set in which all leaves have the same distance to the root. We prove that the NEME problem is NP-hard. In addition, we present some heurist...
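
    Since UPGMA is the same procedure as average-linkage hierarchical clustering, a quick way to reproduce its greedy merges in Python is SciPy's linkage on a condensed dissimilarity matrix (the matrix below is invented):

        import numpy as np
        from scipy.spatial.distance import squareform
        from scipy.cluster.hierarchy import linkage

        D = np.array([[0.0,  2.0,  6.0, 10.0],
                      [2.0,  0.0,  6.0, 10.0],
                      [6.0,  6.0,  0.0, 10.0],
                      [10.0, 10.0, 10.0, 0.0]])          # hypothetical dissimilarity matrix
        tree = linkage(squareform(D), method="average")  # UPGMA merge order and heights
        print(tree)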

  3. A CLASSIFIER SYSTEM USING SMOOTH GRAPH COLORING

    Directory of Open Access Journals (Sweden)

    JORGE FLORES CRUZ

    2017-01-01

    Full Text Available Unsupervised classifiers allow clustering with little or no human intervention. It is therefore desirable to group the set of items with as little data processing as possible. This paper proposes an unsupervised classifier system using the model of soft graph coloring. The method was tested on some classic instances from the literature and the results were compared with classifications made with human intervention, yielding results as good as or better than supervised classifiers and sometimes providing alternative classifications that consider additional information that humans did not consider.

  4. High dimensional classifiers in the imbalanced case

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Jensen, Jens Ledet

    We consider the binary classification problem in the imbalanced case where the number of samples from the two groups differ. The classification problem is considered in the high dimensional case where the number of variables is much larger than the number of samples, and where the imbalance leads...... to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some...

  5. Feedback brake distribution control for minimum pitch

    Science.gov (United States)

    Tavernini, Davide; Velenis, Efstathios; Longo, Stefano

    2017-06-01

    The distribution of brake forces between front and rear axles of a vehicle is typically specified such that the same level of brake force coefficient is imposed at both front and rear wheels. This condition is known as 'ideal' distribution and it is required to deliver the maximum vehicle deceleration and minimum braking distance. For subcritical braking conditions, the deceleration demand may be delivered by different distributions between front and rear braking forces. In this research we show how to obtain the optimal distribution which minimises the pitch angle of a vehicle and hence enhances driver subjective feel during braking. A vehicle model including suspension geometry features is adopted. The problem of the minimum pitch brake distribution for a varying deceleration level demand is solved by means of a model predictive control (MPC) technique. To address the problem of the undesirable pitch rebound caused by a full-stop of the vehicle, a second controller is designed and implemented independently from the braking distribution in use. An extended Kalman filter is designed for state estimation and implemented in a high fidelity environment together with the MPC strategy. The proposed solution is compared with the reference 'ideal' distribution as well as another previous feed-forward solution.

  6. Arabic Handwriting Recognition Using Neural Network Classifier

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... an OCR using Neural Network classifier preceded by a set of preprocessing .... Artificial Neural Networks (ANNs), which we adopt in this research, consist of ... advantage and disadvantages of each technique. In [9], Khemiri ...

  7. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha

    2013-11-25

    Based on dynamic programming approach we design algorithms for sequential optimization of exact and approximate decision rules relative to the length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification-exact or approximate; and (ii) which order of optimization gives better results of classifier work: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than the ordinary optimization (length or coverage).

  8. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    Based on dynamic programming approach we design algorithms for sequential optimization of exact and approximate decision rules relative to the length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification-exact or approximate; and (ii) which order of optimization gives better results of classifier work: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than the ordinary optimization (length or coverage).

  9. Combining multiple classifiers for age classification

    CSIR Research Space (South Africa)

    Van Heerden, C

    2009-11-01

    Full Text Available The authors compare several different classifier combination methods on a single task, namely speaker age classification. This task is well suited to combination strategies, since significantly different feature classes are employed. Support vector...

  10. Neural Network Classifiers for Local Wind Prediction.

    Science.gov (United States)

    Kretzschmar, Ralf; Eckert, Pierre; Cattani, Daniel; Eggimann, Fritz

    2004-05-01

    This paper evaluates the quality of neural network classifiers for wind speed and wind gust prediction with prediction lead times between +1 and +24 h. The predictions were realized based on local time series and model data. The selection of appropriate input features was initiated by time series analysis and completed by empirical comparison of neural network classifiers trained on several choices of input features. The selected input features involved day time, yearday, features from a single wind observation device at the site of interest, and features derived from model data. The quality of the resulting classifiers was benchmarked against persistence for two different sites in Switzerland. The neural network classifiers exhibited superior quality when compared with persistence judged on a specific performance measure, hit and false-alarm rates.

  11. Permutation-invariant distance between atomic configurations

    Science.gov (United States)

    Ferré, Grégoire; Maillet, Jean-Bernard; Stoltz, Gabriel

    2015-09-01

    We present a permutation-invariant distance between atomic configurations, defined through a functional representation of atomic positions. This distance enables us to directly compare different atomic environments with an arbitrary number of particles, without going through a space of reduced dimensionality (i.e., fingerprints) as an intermediate step. Moreover, this distance is naturally invariant through permutations of atoms, avoiding the time-consuming minimization over permutations required by other common criteria (like the root mean square distance). Finally, the invariance through global rotations is accounted for by a minimization procedure in the space of rotations solved by Monte Carlo simulated annealing. A formal framework is also introduced, showing that the distance we propose satisfies the properties of a metric on the space of atomic configurations. Two examples of applications are proposed. The first one consists in evaluating the faithfulness of some fingerprints (or descriptors), i.e., their capacity to represent the structural information of a configuration. The second application concerns structural analysis, where our distance proves to be efficient in discriminating different local structures and even classifying their degree of similarity.
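
    A crude Python sketch of the idea of comparing configurations through a functional (density) representation rather than through ordered coordinates; here each 1-D configuration is smeared into a sum of Gaussians on a grid and configurations are compared by an L2 difference, so atom ordering drops out automatically (the grid, width sigma and coordinates are assumptions, and the rotation-alignment step used in the paper is omitted):

        import numpy as np

        def density(positions, grid, sigma=0.5):
            # positions: 1-D array of atomic coordinates (toy one-dimensional system)
            diffs = grid[None, :] - positions[:, None]
            return np.exp(-diffs**2 / (2.0 * sigma**2)).sum(axis=0)

        def l2_distance(p, q, grid):
            diff = density(p, grid) - density(q, grid)
            return float(np.sqrt(np.sum(diff**2) * (grid[1] - grid[0])))

        grid = np.linspace(-5.0, 5.0, 201)
        conf_a = np.array([-1.0, 0.0, 2.0])
        conf_b = np.array([2.0, -1.0, 0.0])      # same atoms, permuted order
        conf_c = np.array([-1.0, 0.5, 2.0])      # a genuinely different structure
        print(l2_distance(conf_a, conf_b, grid), l2_distance(conf_a, conf_c, grid))   # ~0.0 vs. > 0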

  12. Permutation-invariant distance between atomic configurations

    International Nuclear Information System (INIS)

    Ferré, Grégoire; Maillet, Jean-Bernard; Stoltz, Gabriel

    2015-01-01

    We present a permutation-invariant distance between atomic configurations, defined through a functional representation of atomic positions. This distance enables us to directly compare different atomic environments with an arbitrary number of particles, without going through a space of reduced dimensionality (i.e., fingerprints) as an intermediate step. Moreover, this distance is naturally invariant through permutations of atoms, avoiding the time-consuming minimization over permutations required by other common criteria (like the root mean square distance). Finally, the invariance through global rotations is accounted for by a minimization procedure in the space of rotations solved by Monte Carlo simulated annealing. A formal framework is also introduced, showing that the distance we propose satisfies the properties of a metric on the space of atomic configurations. Two examples of applications are proposed. The first one consists in evaluating the faithfulness of some fingerprints (or descriptors), i.e., their capacity to represent the structural information of a configuration. The second application concerns structural analysis, where our distance proves to be efficient in discriminating different local structures and even classifying their degree of similarity.

  13. Ethnic and social distance towards Roma population

    Directory of Open Access Journals (Sweden)

    Miladinović Slobodan

    2008-01-01

    Full Text Available The Roma people are one of the socially marginalised ethnic groups that can easily be classified as an underclass or ethno-class. This work presents the results of an analysis of survey data on ethnic and social distance towards the Roma population (2007). It concludes that the Roma are among the ethnic groups facing the highest social and ethnic distances in all observed social relations, and that generational poverty and the way of life it produces are the main causes of this high distance. Society faces the very important task of overcoming the circumstances that perpetuate their social situation. The paper outlines some possible ways of addressing the overall social situation of the Roma and of furthering their integration into the rest of society.

  14. Consistency Analysis of Nearest Subspace Classifier

    OpenAIRE

    Wang, Yi

    2015-01-01

    The Nearest subspace classifier (NSS) finds an estimation of the underlying subspace within each class and assigns data points to the class that corresponds to its nearest subspace. This paper mainly studies how well NSS can be generalized to new samples. It is proved that NSS is strongly consistent under certain assumptions. For completeness, NSS is evaluated through experiments on various simulated and real data sets, in comparison with some other linear model based classifiers. It is also ...

  15. Motivation in Distance Learning

    Directory of Open Access Journals (Sweden)

    Daniela Brečko

    1996-12-01

    Full Text Available It is estimated that motivation is one of the most important psychological functions, making it possible for people to learn even in conditions that do not meet their needs. In distance learning, a form of autonomous learning, motivation is of utmost importance. When adopting this method of learning, an individual has to motivate himself or herself and take learning decisions on his or her own. These specific characteristics of distance learning should be taken into account, and thus all the different factors maintaining the motivation of participants in distance learning are to be included. Moreover, motivation in distance learning can be stimulated with specific learning materials, clear instructions and guidelines, efficient feedback, personal contact between tutors and participants, stimulating learning letters, telephone calls, encouraging letters and through maintaining a positive relationship between tutor and participant.

  16. Minimum Q Electrically Small Antennas

    DEFF Research Database (Denmark)

    Kim, O. S.

    2012-01-01

    Theoretically, the minimum radiation quality factor Q of an isolated resonance can be achieved in a spherical electrically small antenna by combining TM1m and TE1m spherical modes, provided that the stored energy in the antenna spherical volume is totally suppressed. Using closed-form expressions...... for a multiarm spherical helix antenna confirm the theoretical predictions. For example, a 4-arm spherical helix antenna with a magnetic-coated perfectly electrically conducting core (ka=0.254) exhibits the Q of 0.66 times the Chu lower bound, or 1.25 times the minimum Q....

  17. Einstein at a distance

    Energy Technology Data Exchange (ETDEWEB)

    Lambourne, Robert [Department of Physics and Astronomy, Open University, Milton Keynes (United Kingdom)

    2005-11-01

    This paper examines the challenges and rewards that can arise when the teaching of Einsteinian physics has to be accomplished by means of distance education. The discussion is mainly based on experiences gathered over the past 35 years at the UK Open University, where special and general relativity, relativistic cosmology and other aspects of Einsteinian physics, have been taught at a variety of levels, and using a range of techniques, to students studying at a distance.

  18. Long distance quantum teleportation

    Science.gov (United States)

    Xia, Xiu-Xiu; Sun, Qi-Chao; Zhang, Qiang; Pan, Jian-Wei

    2018-01-01

    Quantum teleportation is a core protocol in quantum information science. Besides revealing the fascinating feature of quantum entanglement, quantum teleportation provides an ultimate way to distribute quantum state over extremely long distance, which is crucial for global quantum communication and future quantum networks. In this review, we focus on the long distance quantum teleportation experiments, especially those employing photonic qubits. From the viewpoint of real-world application, both the technical advantages and disadvantages of these experiments are discussed.

  19. The Minimum Binding Energy and Size of Doubly Muonic D3 Molecule

    Science.gov (United States)

    Eskandari, M. R.; Faghihi, F.; Mahdavi, M.

    The minimum energy and size of the doubly muonic D3 molecule, in which two of the electrons are replaced by the much heavier muons, are calculated by the well-known variational method. The calculations show that the system possesses two minimum positions, one at a typical muonic distance and the second at the atomic distance. It is shown that at the muonic distance the effective charge zeff is 2.9. We assume a symmetric planar vibrational model between the two minima, and an oscillation potential energy is approximated in this region.

  20. Enhancing the Performance of LibSVM Classifier by Kernel F-Score Feature Selection

    Science.gov (United States)

    Sarojini, Balakrishnan; Ramaraj, Narayanasamy; Nickolas, Savarimuthu

    Medical Data mining is the search for relationships and patterns within the medical datasets that could provide useful knowledge for effective clinical decisions. The inclusion of irrelevant, redundant and noisy features in the process model results in poor predictive accuracy. Much research work in data mining has gone into improving the predictive accuracy of the classifiers by applying the techniques of feature selection. Feature selection in medical data mining is appreciable as the diagnosis of the disease could be done in this patient-care activity with minimum number of significant features. The objective of this work is to show that selecting the more significant features would improve the performance of the classifier. We empirically evaluate the classification effectiveness of LibSVM classifier on the reduced feature subset of diabetes dataset. The evaluations suggest that the feature subset selected improves the predictive accuracy of the classifier and reduce false negatives and false positives.
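
    A minimal sketch, in Python with synthetic data, of F-score feature ranking followed by an SVM (scikit-learn's SVC wraps LIBSVM); the data, the number of selected features and the F-score variant (the common two-class Fisher-style score) are assumptions:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def f_scores(X, y):
            # Two-class F-score: between-class separation over within-class variance, per feature
            pos, neg = X[y == 1], X[y == 0]
            num = (pos.mean(0) - X.mean(0))**2 + (neg.mean(0) - X.mean(0))**2
            den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
            return num / den

        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 30))                       # synthetic stand-in for a medical dataset
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=100) > 0).astype(int)

        top = np.argsort(f_scores(X, y))[::-1][:5]           # keep the 5 highest-scoring features
        print("selected features:", top)
        print("CV accuracy:", cross_val_score(SVC(), X[:, top], y, cv=5).mean())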

  1. Fermat and the Minimum Principle

    Indian Academy of Sciences (India)

    Arguably, least action and minimum principles were offered or applied much earlier. This (or these) principle(s) is/are among the fundamental, basic, unifying or organizing ones used to describe a variety of natural phenomena. It considers the amount of energy expended in performing a given action to be the least required ...

  2. Mahalanobis distance and variable selection to optimize dose response

    International Nuclear Information System (INIS)

    Moore, D.H. II; Bennett, D.E.; Wyrobek, A.J.; Kranzler, D.

    1979-01-01

    A battery of statistical techniques is combined to improve detection of low-level dose response. First, Mahalanobis distances are used to classify objects as normal or abnormal. Then the proportion classified abnormal is regressed on dose. Finally, a subset of regressor variables is selected which maximizes the slope of the dose-response line. Use of the techniques is illustrated by application to mouse sperm damaged by low doses of x-rays.
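
    A minimal Python sketch of the first stage described above: an object is classified as abnormal when its squared Mahalanobis distance from the "normal" group exceeds a chi-square cutoff (the data, cutoff and dimensionality are invented, not the sperm-morphology variables of the study):

        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(2)
        normal = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=200)

        mu = normal.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

        def mahalanobis_sq(x):
            d = x - mu
            return float(d @ cov_inv @ d)

        cutoff = chi2.ppf(0.99, df=2)            # ~1% false positives under normality
        obj = np.array([3.0, -2.5])
        print("abnormal" if mahalanobis_sq(obj) > cutoff else "normal")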

  3. Reinforcement Learning Based Artificial Immune Classifier

    Directory of Open Access Journals (Sweden)

    Mehmet Karakose

    2013-01-01

    Full Text Available Artificial immune systems are among the widely used methods for classification, which is a decision-making process. Artificial immune systems, based on the natural immune system, can be successfully applied to classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement learning based artificial immune classifier is proposed as a new approach. The approach uses reinforcement learning to find better antibodies with immune operators. Compared with other methods in the literature, the proposed approach offers several contributions, such as effectiveness, fewer memory cells, high accuracy, speed, and data adaptability. The performance of the proposed approach is demonstrated by simulation and experimental results using real data in Matlab and on an FPGA. Some benchmark data and remote image data are used for the experiments. Comparative results with a supervised/unsupervised artificial immune system, a negative selection classifier, and a resource limited artificial immune classifier are given to demonstrate the effectiveness of the proposed method.

  4. Classifier Fusion With Contextual Reliability Evaluation.

    Science.gov (United States)

    Liu, Zhunga; Pan, Quan; Dezert, Jean; Han, Jun-Wei; He, You

    2018-05-01

    Classifier fusion is an efficient strategy to improve classification performance for complex pattern recognition problems. In practice, the multiple classifiers to combine can have different reliabilities, and proper reliability evaluation plays an important role in the fusion process for getting the best classification performance. We propose a new method for classifier fusion with contextual reliability evaluation (CF-CRE) based on inner reliability and relative reliability concepts. The inner reliability, represented by a matrix, characterizes the probability of the object belonging to one class when it is classified to another class. The elements of this matrix are estimated from the K-nearest neighbors of the object. A cautious discounting rule is developed under the belief functions framework to revise the classification result according to the inner reliability. The relative reliability is evaluated based on a new incompatibility measure which allows the level of conflict between the classifiers to be reduced by applying the classical evidence discounting rule to each classifier before their combination. The inner reliability and relative reliability capture different aspects of the classification reliability. The discounted classification results are combined with Dempster-Shafer's rule for the final class decision-making support. The performance of CF-CRE has been evaluated and compared with that of the main classical fusion methods using real data sets. The experimental results show that CF-CRE can produce substantially higher accuracy than other fusion methods in general. Moreover, CF-CRE is robust to changes in the number of nearest neighbors chosen for estimating the reliability matrix, which is appealing for applications.
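
    CF-CRE itself rests on belief functions and cautious discounting; the toy Python sketch below only illustrates the underlying intuition that less reliable classifiers should contribute less to the fused decision (the class probabilities and reliabilities are invented, e.g. hypothetical validation accuracies):

        import numpy as np

        probs = {                                 # per-classifier class-probability estimates (assumed)
            "clf_A": np.array([0.70, 0.30]),
            "clf_B": np.array([0.40, 0.60]),
            "clf_C": np.array([0.55, 0.45]),
        }
        reliability = {"clf_A": 0.92, "clf_B": 0.60, "clf_C": 0.75}

        w = np.array([reliability[name] for name in probs])
        w = w / w.sum()                           # normalize reliabilities into fusion weights
        fused = sum(wi * p for wi, p in zip(w, probs.values()))
        print(fused, "-> class", int(np.argmax(fused)))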

  5. On the short distance behavior of string theories

    International Nuclear Information System (INIS)

    Guida, R.; Konishi, K.; Provero, P.

    1991-01-01

    Short distance behavior of string theories is investigated by the use of the discretized path-integral formulation. In particular, the minimum physical length and the generalized uncertainty relation are re-derived from a set of Ward-Takahashi identities. In this paper several issues related to the form of the generalized uncertainty relation and to its implications are discussed. A consistent qualitative picture of short distance behavior of string theory seems to emerge from such a study

  6. Classifying sows' activity types from acceleration patterns

    DEFF Research Database (Denmark)

    Cornou, Cecile; Lundbye-Christensen, Søren

    2008-01-01

    An automated method of classifying sow activity using acceleration measurements would allow the individual sow's behavior to be monitored throughout the reproductive cycle; applications for detecting behaviors characteristic of estrus and farrowing or to monitor illness and welfare can be foreseen....... This article suggests a method of classifying five types of activity exhibited by group-housed sows. The method involves the measurement of acceleration in three dimensions. The five activities are: feeding, walking, rooting, lying laterally and lying sternally. Four time series of acceleration (the three...

  7. A Customizable Text Classifier for Text Mining

    Directory of Open Access Journals (Sweden)

    Yun-liang Zhang

    2007-12-01

    Full Text Available Text mining deals with complex and unstructured texts. Usually a particular collection of texts that is specified to one or more domains is necessary. We have developed a customizable text classifier for users to mine the collection automatically. It derives from the sentence category of the HNC theory and corresponding techniques. It can start with a few texts, and it can adjust automatically or be adjusted by user. The user can also control the number of domains chosen and decide the standard with which to choose the texts based on demand and abundance of materials. The performance of the classifier varies with the user's choice.

  8. A survey of decision tree classifier methodology

    Science.gov (United States)

    Safavian, S. R.; Landgrebe, David

    1991-01-01

    Decision tree classifiers (DTCs) are used successfully in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition. Perhaps the most important feature of DTCs is their capability to break down a complex decision-making process into a collection of simpler decisions, thus providing a solution which is often easier to interpret. A survey of current methods is presented for DTC designs and the various existing issues. After considering potential advantages of DTCs over single-state classifiers, subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed.

  9. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    Science.gov (United States)

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

    Face recognition being the fastest growing biometric technology has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithm is a complete solution because it may work very well on one set of images with say illumination changes but may not work properly on another set of image variations like expression variations. This study is motivated by the fact that any single classifier cannot claim to show generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively also incorporating the question of suitability of any classifier for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers which performs better on one task or the other. These classifiers are then combined together onto an ensemble classifier by two different strategies of weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging and facial expressions.

  10. Distance Teaching on Bornholm

    DEFF Research Database (Denmark)

    Hansen, Finn J. S.; Clausen, Christian

    2001-01-01

    The case study represents an example of a top-down introduction of distance teaching as part of Danish trials with the introduction of multimedia in education. The study is concerned with the background, aim and context of the trial as well as the role and working of the technology and the organi...

  11. Classifier utility modeling and analysis of hypersonic inlet start/unstart considering training data costs

    Science.gov (United States)

    Chang, Juntao; Hu, Qinghua; Yu, Daren; Bao, Wen

    2011-11-01

    Start/unstart detection is one of the most important issues of hypersonic inlets and is also the foundation of protection control of scramjet. The inlet start/unstart detection can be attributed to a standard pattern classification problem, and the training sample costs have to be considered for the classifier modeling as the CFD numerical simulations and wind tunnel experiments of hypersonic inlets both cost time and money. To solve this problem, the CFD simulation of inlet is studied at first step, and the simulation results could provide the training data for pattern classification of hypersonic inlet start/unstart. Then the classifier modeling technology and maximum classifier utility theories are introduced to analyze the effect of training data cost on classifier utility. In conclusion, it is useful to introduce support vector machine algorithms to acquire the classifier model of hypersonic inlet start/unstart, and the minimum total cost of hypersonic inlet start/unstart classifier can be obtained by the maximum classifier utility theories.

  12. Quantum mechanics the theoretical minimum

    CERN Document Server

    Susskind, Leonard

    2014-01-01

    From the bestselling author of The Theoretical Minimum, an accessible introduction to the math and science of quantum mechanicsQuantum Mechanics is a (second) book for anyone who wants to learn how to think like a physicist. In this follow-up to the bestselling The Theoretical Minimum, physicist Leonard Susskind and data engineer Art Friedman offer a first course in the theory and associated mathematics of the strange world of quantum mechanics. Quantum Mechanics presents Susskind and Friedman’s crystal-clear explanations of the principles of quantum states, uncertainty and time dependence, entanglement, and particle and wave states, among other topics. An accessible but rigorous introduction to a famously difficult topic, Quantum Mechanics provides a tool kit for amateur scientists to learn physics at their own pace.

  13. Minimum resolvable power contrast model

    Science.gov (United States)

    Qian, Shuai; Wang, Xia; Zhou, Jingjing

    2018-01-01

    Signal-to-noise ratio and MTF are important indices for evaluating the performance of optical systems. However, neither used alone nor in joint assessment can they intuitively describe the overall performance of the system. Therefore, an index is proposed to reflect comprehensive system performance: the Minimum Resolvable Radiation Performance Contrast (MRP) model. MRP is an evaluation model that does not involve the human eye. It starts from the radiance of the target and the background, transforms the target and background into equivalent strips, and accounts for attenuation by the atmosphere, the optical imaging system, and the detector. Combined with the signal-to-noise ratio and the MTF, the Minimum Resolvable Radiation Performance Contrast is obtained. Finally, the detection probability model of MRP is given.

  14. 75 FR 37253 - Classified National Security Information

    Science.gov (United States)

    2010-06-28

    ... ``Secret.'' (3) Each interior page of a classified document shall be marked at the top and bottom either... ``(TS)'' for Top Secret, ``(S)'' for Secret, and ``(C)'' for Confidential will be used. (2) Portions... from the informational text. (1) Conspicuously place the overall classification at the top and bottom...

  15. 75 FR 707 - Classified National Security Information

    Science.gov (United States)

    2010-01-05

    ... classified at one of the following three levels: (1) ``Top Secret'' shall be applied to information, the... exercise this authority. (2) ``Top Secret'' original classification authority may be delegated only by the... official has been delegated ``Top Secret'' original classification authority by the agency head. (4) Each...

  16. Neural Network Classifier Based on Growing Hyperspheres

    Czech Academy of Sciences Publication Activity Database

    Jiřina Jr., Marcel; Jiřina, Marcel

    2000-01-01

    Roč. 10, č. 3 (2000), s. 417-428 ISSN 1210-0552. [Neural Network World 2000. Prague, 09.07.2000-12.07.2000] Grant - others:MŠMT ČR(CZ) VS96047; MPO(CZ) RP-4210 Institutional research plan: AV0Z1030915 Keywords : neural network * classifier * hyperspheres * big-dimensional data Subject RIV: BA - General Mathematics

  17. Histogram deconvolution - An aid to automated classifiers

    Science.gov (United States)

    Lorre, J. J.

    1983-01-01

    It is shown that N-dimensional histograms are convolved by the addition of noise in the picture domain. Three methods are described which provide the ability to deconvolve such noise-affected histograms. The purpose of the deconvolution is to provide automated classifiers with a higher quality N-dimensional histogram from which to obtain classification statistics.

  18. Classifying web pages with visual features

    NARCIS (Netherlands)

    de Boer, V.; van Someren, M.; Lupascu, T.; Filipe, J.; Cordeiro, J.

    2010-01-01

    To automatically classify and process web pages, current systems use the textual content of those pages, including both the displayed content and the underlying (HTML) code. However, a very important feature of a web page is its visual appearance. In this paper, we show that using generic visual

  19. Theoretical Principles of Distance Education.

    Science.gov (United States)

    Keegan, Desmond, Ed.

    This book contains the following papers examining the didactic, academic, analytic, philosophical, and technological underpinnings of distance education: "Introduction"; "Quality and Access in Distance Education: Theoretical Considerations" (D. Randy Garrison); "Theory of Transactional Distance" (Michael G. Moore);…

  20. Understanding the Minimum Wage: Issues and Answers.

    Science.gov (United States)

    Employment Policies Inst. Foundation, Washington, DC.

    This booklet, which is designed to clarify facts regarding the minimum wage's impact on marketplace economics, contains a total of 31 questions and answers pertaining to the following topics: relationship between minimum wages and poverty; impacts of changes in the minimum wage on welfare reform; and possible effects of changes in the minimum wage…

  1. 5 CFR 551.301 - Minimum wage.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum wage. 551.301 Section 551.301... FAIR LABOR STANDARDS ACT Minimum Wage Provisions Basic Provision § 551.301 Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...

  2. Fast Computing for Distance Covariance

    OpenAIRE

    Huo, Xiaoming; Szekely, Gabor J.

    2014-01-01

    Distance covariance and distance correlation have been widely adopted for measuring the dependence of a pair of random variables or random vectors. If the computation of distance covariance and distance correlation is implemented directly according to the definition, its computational complexity is O($n^2$), which is a disadvantage compared to other, faster methods. In this paper we show that the computation of distance covariance and distance correlation of real valued random variables can be...
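
    For reference, a direct O(n^2) computation of sample distance covariance and distance correlation, following the standard definition (this is the slow baseline, not the faster algorithm the paper develops).

```python
# Direct O(n^2) sample distance covariance/correlation for real-valued samples.
import numpy as np

def distance_covariance(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])                  # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()    # double centring
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return np.sqrt(np.mean(A * B))

def distance_correlation(x, y):
    dcov = distance_covariance(x, y)
    denom = np.sqrt(distance_covariance(x, x) * distance_covariance(y, y))
    return 0.0 if denom == 0 else dcov / denom

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print(distance_correlation(x, x**2))   # detects the nonlinear dependence
```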

  3. Planning with Reachable Distances

    KAUST Repository

    Tang, Xinyu; Thomas, Shawna; Amato, Nancy M.

    2009-01-01

    reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the robot's number of degrees of freedom. In addition

  4. De-severing distance

    DEFF Research Database (Denmark)

    Jensen, Hanne Louise; de Neergaard, Maja

    2016-01-01

    De-severing Distance This paper draws on the growing body of mobility literature that shows how mobility can be viewed as meaningful everyday practices (Freudendal –Pedersen 2007, Cresswell 2006) this paper examines how Heidegger’s term de-severing can help us understand the everyday coping with ...

  5. The Euclidean distance degree

    NARCIS (Netherlands)

    Draisma, J.; Horobet, E.; Ottaviani, G.; Sturmfels, B.; Thomas, R.R.; Zhi, L.; Watt, M.

    2014-01-01

    The nearest point map of a real algebraic variety with respect to Euclidean distance is an algebraic function. For instance, for varieties of low rank matrices, the Eckart-Young Theorem states that this map is given by the singular value decomposition. This article develops a theory of such nearest
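
    The Eckart-Young statement quoted above can be illustrated directly: the nearest rank-k matrix in Frobenius (Euclidean) distance is obtained by truncating the singular value decomposition. The snippet below is a small numerical illustration, not part of the paper.

```python
# Nearest rank-k matrix via truncated SVD (Eckart-Young).
import numpy as np

def nearest_rank_k(M, k):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(1)
M = rng.normal(size=(6, 4))
M2 = nearest_rank_k(M, 2)
print(np.linalg.matrix_rank(M2), np.linalg.norm(M - M2))  # rank 2 and the residual
```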

  6. Electromagnetic distance measurement

    CERN Document Server

    1967-01-01

    This book brings together the work of forty-eight geodesists from twenty-five countries. They discuss various new electromagnetic distance measurement (EDM) instruments - among them the Tellurometer, Geodimeter, and air- and satellite-borne systems - and investigate the complex sources of error.

  7. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  8. Prospect of Distance Learning

    Science.gov (United States)

    Rahman, Monsurur; Karim, Reza; Byramjee, Framarz

    2015-01-01

    Many educational institutions in the United States are currently offering programs through distance learning, and that trend is rising. In almost all spheres of education a developing country like Bangladesh needs to make available the expertise of the most qualified faculty to her distant people. But the fundamental question remains as to whether…

  9. 80537 based distance relay

    DEFF Research Database (Denmark)

    Pedersen, Knud Ole Helgesen

    1999-01-01

    A method for implementing a digital distance relay in the power system is described. Instructions are given on how to program this relay on an 80537 based microcomputer system. The problem is used as a practical case study in the course 53113: Microcomputer applications in the power system. The relay...

  10. Classification of Pulse Waveforms Using Edit Distance with Real Penalty

    Directory of Open Access Journals (Sweden)

    Zhang Dongyu

    2010-01-01

    Full Text Available Advances in sensor and signal processing techniques have provided effective tools for quantitative research in traditional Chinese pulse diagnosis (TCPD). Because of the inevitable intraclass variation of pulse patterns, the automatic classification of pulse waveforms has remained a difficult problem. In this paper, by referring to the edit distance with real penalty (ERP) and the recent progress in k-nearest neighbors (KNN) classifiers, we propose two novel ERP-based KNN classifiers. Taking advantage of the metric property of ERP, we first develop an ERP-induced inner product and a Gaussian ERP kernel, then embed them into difference-weighted KNN classifiers, and finally develop two novel classifiers for pulse waveform classification. The experimental results show that the proposed classifiers are effective for accurate classification of pulse waveforms.
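
    A sketch of the edit distance with real penalty (ERP) between two 1-D sequences, together with a plain k-nearest-neighbour vote using it. The ERP recursion is standard; the gap element g and the unweighted vote are simplifying assumptions and do not reproduce the difference-weighted classifiers proposed in the paper.

```python
# ERP distance via dynamic programming, plus a plain kNN vote using it.
import numpy as np

def erp(x, y, g=0.0):
    n, m = len(x), len(y)
    dp = np.zeros((n + 1, m + 1))
    dp[1:, 0] = np.cumsum([abs(v - g) for v in x])   # delete all of x against gaps
    dp[0, 1:] = np.cumsum([abs(v - g) for v in y])   # delete all of y against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i, j] = min(dp[i - 1, j - 1] + abs(x[i - 1] - y[j - 1]),   # match
                           dp[i - 1, j] + abs(x[i - 1] - g),              # gap in y
                           dp[i, j - 1] + abs(y[j - 1] - g))              # gap in x
    return dp[n, m]

def knn_erp(train_seqs, train_labels, query, k=3):
    order = sorted(range(len(train_seqs)), key=lambda i: erp(train_seqs[i], query))
    votes = [train_labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)
```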

  11. Metrics for measuring distances in configuration spaces

    International Nuclear Information System (INIS)

    Sadeghi, Ali; Ghasemi, S. Alireza; Schaefer, Bastian; Mohr, Stephan; Goedecker, Stefan; Lill, Markus A.

    2013-01-01

    In order to characterize molecular structures we introduce configurational fingerprint vectors which are counterparts of quantities used experimentally to identify structures. The Euclidean distance between the configurational fingerprint vectors satisfies the properties of a metric and can therefore safely be used to measure dissimilarities between configurations in the high dimensional configuration space. In particular we show that these metrics are a perfect and computationally cheap replacement for the root-mean-square distance (RMSD) when one has to decide whether two noise contaminated configurations are identical or not. We introduce a Monte Carlo approach to obtain the global minimum of the RMSD between configurations, which is obtained from a global minimization over all translations, rotations, and permutations of atomic indices

  12. Towards an intelligent environment for distance learning

    Directory of Open Access Journals (Sweden)

    Rafael Morales

    2009-12-01

    Full Text Available Mainstream distance learning nowadays is heavily influenced by traditional educational approaches that produce homogenised learning scenarios for all learners through learning management systems. Any differentiation between learners and personalisation of their learning scenarios is left to the teacher, who gets minimum support from the system in this respect. This way, the truly digital native, the computer, is left out of the move, unable to better support the teaching-learning processes because it is not provided with the means to transform into knowledge all the information that it stores and manages. I believe learning management systems should care for supporting adaptation and personalisation of both individual learning and the formation of communities of learning. Open learner modelling and intelligent collaborative learning environments are proposed as a means to care. The proposal is complemented with a general architecture for an intelligent environment for distance learning and an educational model based on the principles of self-management, creativity, significance and participation.

  13. Disassembly and Sanitization of Classified Matter

    International Nuclear Information System (INIS)

    Stockham, Dwight J.; Saad, Max P.

    2008-01-01

    The Disassembly Sanitization Operation (DSO) process was implemented to support weapon disassembly and disposition by using recycling and waste minimization measures. This process was initiated by treaty agreements and reconfigurations within both the DOD and DOE Complexes. The DOE is faced with disassembling and disposing of a huge inventory of retired weapons, components, training equipment, spare parts, weapon maintenance equipment, and associated material. In addition, regulations have caused a dramatic increase in the need for information required to support the handling and disposition of these parts and materials. In the past, huge inventories of classified weapon components were required to have long-term storage at Sandia and at many other locations throughout the DOE Complex. These materials are placed in onsite storage units due to classification issues, and they may also contain radiological and/or hazardous components. Since no disposal options exist for this material, the only choice was long-term storage. Long-term storage is costly and somewhat problematic, requiring a secured storage area, monitoring, auditing, and presenting the potential for loss or theft of the material. Overall recycling rates for materials sent through the DSO process have enabled 70 to 80% of these components to be recycled. These components are made of high quality materials and once this material has been sanitized, the demand for the component metals for recycling efforts is very high. The DSO process for NGPF classified components established the credibility of this technique for addressing the long-term storage requirements of the classified weapons component inventory. The success of this application has generated interest from other Sandia organizations and other locations throughout the complex. Other organizations are requesting the help of the DSO team and the DSO is responding to these requests by expanding its scope to include Work-for-Other projects. For example

  14. Comparing cosmic web classifiers using information theory

    International Nuclear Information System (INIS)

    Leclercq, Florent; Lavaux, Guilhem; Wandelt, Benjamin; Jasche, Jens

    2016-01-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  15. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization parameters. We suggest adapting the outlier probability and regularisation parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential...

  16. Comparing cosmic web classifiers using information theory

    Energy Technology Data Exchange (ETDEWEB)

    Leclercq, Florent [Institute of Cosmology and Gravitation (ICG), University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth PO1 3FX (United Kingdom); Lavaux, Guilhem; Wandelt, Benjamin [Institut d' Astrophysique de Paris (IAP), UMR 7095, CNRS – UPMC Université Paris 6, Sorbonne Universités, 98bis boulevard Arago, F-75014 Paris (France); Jasche, Jens, E-mail: florent.leclercq@polytechnique.org, E-mail: lavaux@iap.fr, E-mail: j.jasche@tum.de, E-mail: wandelt@iap.fr [Excellence Cluster Universe, Technische Universität München, Boltzmannstrasse 2, D-85748 Garching (Germany)

    2016-08-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  17. Detection of Fundus Lesions Using Classifier Selection

    Science.gov (United States)

    Nagayoshi, Hiroto; Hiramatsu, Yoshitaka; Sako, Hiroshi; Himaga, Mitsutoshi; Kato, Satoshi

    A system for detecting fundus lesions caused by diabetic retinopathy from fundus images is being developed. The system can screen the images in advance in order to reduce the inspection workload on doctors. One of the difficulties that must be addressed in completing this system is how to remove false positives (which tend to arise near blood vessels) without decreasing the detection rate of lesions in other areas. To overcome this difficulty, we developed classifier selection according to the position of a candidate lesion, and we introduced new features that can distinguish true lesions from false positives. A system incorporating classifier selection and these new features was tested in experiments using 55 fundus images with some lesions and 223 images without lesions. The results of the experiments confirm the effectiveness of the proposed system, namely, degrees of sensitivity and specificity of 98% and 81%, respectively.

  18. Classifying objects in LWIR imagery via CNNs

    Science.gov (United States)

    Rodger, Iain; Connor, Barry; Robertson, Neil M.

    2016-10-01

    The aim of the presented work is to demonstrate enhanced target recognition and improved false alarm rates for a mid to long range detection system, utilising a Long Wave Infrared (LWIR) sensor. By exploiting high quality thermal image data and recent techniques in machine learning, the system can provide automatic target recognition capabilities. A Convolutional Neural Network (CNN) is trained and the classifier achieves an overall accuracy of > 95% for 6 object classes related to land defence. While the highly accurate CNN struggles to recognise long range target classes, due to low signal quality, robust target discrimination is achieved for challenging candidates. The overall performance of the methodology presented is assessed using human ground truth information, generating classifier evaluation metrics for thermal image sequences.
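
    A minimal CNN sketch for 6-class classification of single-channel thermal image chips; the 6-class output follows the abstract, but the architecture, chip size and layer sizes below are illustrative assumptions, not the network used in the paper.

```python
# Small convolutional classifier for single-channel (e.g. LWIR) image chips.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # for 64x64 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
dummy = torch.randn(8, 1, 64, 64)   # a batch of 8 hypothetical thermal chips
print(model(dummy).shape)           # torch.Size([8, 6])
```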

  19. Learning for VMM + WTA Embedded Classifiers

    Science.gov (United States)

    2016-03-31

    [Fragmentary record] Learning for VMM + WTA Embedded Classifiers; Jennifer Hasler and Sahil Shah, Electrical and Computer Engineering, Georgia Institute of Technology. The recovered fragments describe correct classification of each novel acoustic signal (generator, idle car, and idle truck), results measured on a SoC FPAA IC, and a test input composed of signals from an urban environment for 3 objects (generator, idle car, and idle truck).

  20. Bayes classifiers for imbalanced traffic accidents datasets.

    Science.gov (United States)

    Mujalli, Randa Oqab; López, Griselda; Garach, Laura

    2016-03-01

    Traffic accidents data sets are usually imbalanced, where the number of instances classified under the killed or severe injuries class (minority) is much lower than those classified under the slight injuries class (majority). This, however, supposes a challenging problem for classification algorithms and may cause obtaining a model that well cover the slight injuries instances whereas the killed or severe injuries instances are misclassified frequently. Based on traffic accidents data collected on urban and suburban roads in Jordan for three years (2009-2011); three different data balancing techniques were used: under-sampling which removes some instances of the majority class, oversampling which creates new instances of the minority class and a mix technique that combines both. In addition, different Bayes classifiers were compared for the different imbalanced and balanced data sets: Averaged One-Dependence Estimators, Weightily Average One-Dependence Estimators, and Bayesian networks in order to identify factors that affect the severity of an accident. The results indicated that using the balanced data sets, especially those created using oversampling techniques, with Bayesian networks improved classifying a traffic accident according to its severity and reduced the misclassification of killed and severe injuries instances. On the other hand, the following variables were found to contribute to the occurrence of a killed causality or a severe injury in a traffic accident: number of vehicles involved, accident pattern, number of directions, accident type, lighting, surface condition, and speed limit. This work, to the knowledge of the authors, is the first that aims at analyzing historical data records for traffic accidents occurring in Jordan and the first to apply balancing techniques to analyze injury severity of traffic accidents. Copyright © 2015 Elsevier Ltd. All rights reserved.
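
    A simple sketch of the oversampling idea on a synthetic imbalanced dataset, paired with a Gaussian naive Bayes model from scikit-learn; the paper itself uses AODE/WAODE and Bayesian networks on Jordanian accident records, which are not reproduced here.

```python
# Random oversampling of the minority class followed by a naive Bayes model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=3000, weights=[0.95, 0.05], random_state=0)
X_tr, y_tr, X_te, y_te = X[:2000], y[:2000], X[2000:], y[2000:]

# Resample minority-class rows (with replacement) until the classes are balanced.
rng = np.random.default_rng(0)
minority = np.flatnonzero(y_tr == 1)
extra = rng.choice(minority, size=np.sum(y_tr == 0) - minority.size, replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

clf = GaussianNB().fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```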

  1. A Bayesian classifier for symbol recognition

    OpenAIRE

    Barrat , Sabine; Tabbone , Salvatore; Nourrissier , Patrick

    2007-01-01

    URL : http://www.buyans.com/POL/UploadedFile/134_9977.pdf; International audience; We present in this paper an original adaptation of Bayesian networks to the symbol recognition problem. More precisely, a descriptor combination method is presented which significantly improves the recognition rate compared to the rates obtained by each descriptor alone. In this perspective, we use a simple Bayesian classifier, called naive Bayes. In fact, probabilistic graphical models, more spec...

  2. A Survey of Binary Similarity and Distance Measures

    Directory of Open Access Journals (Sweden)

    Seung-Seok Choi

    2010-02-01

    Full Text Available The binary feature vector is one of the most common representations of patterns and measuring similarity and distance measures play a critical role in many problems such as clustering, classification, etc. Ever since Jaccard proposed a similarity measure to classify ecological species in 1901, numerous binary similarity and distance measures have been proposed in various fields. Applying appropriate measures results in more accurate data analysis. Notwithstanding, few comprehensive surveys on binary measures have been conducted. Hence we collected 76 binary similarity and distance measures used over the last century and reveal their correlations through the hierarchical clustering technique.
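
    A few of the classical binary similarity and distance measures of the kind collected in the survey, written in terms of the usual 2x2 contingency counts a, b, c, d; the particular selection below is illustrative.

```python
# Classical binary similarity/distance measures from 2x2 contingency counts.
import numpy as np

def contingency(x, y):
    x, y = np.asarray(x, bool), np.asarray(y, bool)
    a = np.sum(x & y)        # 1-1 matches
    b = np.sum(x & ~y)       # 1-0 mismatches
    c = np.sum(~x & y)       # 0-1 mismatches
    d = np.sum(~x & ~y)      # 0-0 matches
    return a, b, c, d

def jaccard(x, y):
    a, b, c, _ = contingency(x, y)
    return a / (a + b + c)

def simple_matching(x, y):
    a, b, c, d = contingency(x, y)
    return (a + d) / (a + b + c + d)

def hamming_distance(x, y):
    _, b, c, _ = contingency(x, y)
    return b + c

x = [1, 0, 1, 1, 0, 0, 1]
y = [1, 1, 1, 0, 0, 0, 1]
print(jaccard(x, y), simple_matching(x, y), hamming_distance(x, y))
```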

  3. Optimization of short amino acid sequences classifier

    Science.gov (United States)

    Barcz, Aleksy; Szymański, Zbigniew

    This article describes processing methods used for short amino acid sequences classification. The data processed are 9-symbols string representations of amino acid sequences, divided into 49 data sets - each one containing samples labeled as reacting or not with given enzyme. The goal of the classification is to determine for a single enzyme, whether an amino acid sequence would react with it or not. Each data set is processed separately. Feature selection is performed to reduce the number of dimensions for each data set. The method used for feature selection consists of two phases. During the first phase, significant positions are selected using Classification and Regression Trees. Afterwards, symbols appearing at the selected positions are substituted with numeric values of amino acid properties taken from the AAindex database. In the second phase the new set of features is reduced using a correlation-based ranking formula and Gram-Schmidt orthogonalization. Finally, the preprocessed data is used for training LS-SVM classifiers. SPDE, an evolutionary algorithm, is used to obtain optimal hyperparameters for the LS-SVM classifier, such as error penalty parameter C and kernel-specific hyperparameters. A simple score penalty is used to adapt the SPDE algorithm to the task of selecting classifiers with best performance measures values.

  4. SVM classifier on chip for melanoma detection.

    Science.gov (United States)

    Afifi, Shereen; GholamHosseini, Hamid; Sinha, Roopak

    2017-07-01

    Support Vector Machine (SVM) is a common classifier used for efficient classification with high accuracy. SVM shows high accuracy for classifying melanoma (skin cancer) clinical images within computer-aided diagnosis systems used by skin cancer specialists to detect melanoma early and save lives. We aim to develop a medical low-cost handheld device that runs a real-time embedded SVM-based diagnosis system for use in primary care for early detection of melanoma. In this paper, an optimized SVM classifier is implemented onto a recent FPGA platform using the latest design methodology to be embedded into the proposed device for realizing online efficient melanoma detection on a single system on chip/device. The hardware implementation results demonstrate a high classification accuracy of 97.9% and a significant acceleration factor of 26 from equivalent software implementation on an embedded processor, with 34% of resources utilization and 2 watts for power consumption. Consequently, the implemented system meets crucial embedded systems constraints of high performance and low cost, resources utilization and power consumption, while achieving high classification accuracy.

  5. Distance between images

    Science.gov (United States)

    Gualtieri, J. A.; Le Moigne, J.; Packer, C. V.

    1992-01-01

    Comparing two binary images and assigning a quantitative measure to this comparison finds its purpose in such tasks as image recognition, image compression, and image browsing. This quantitative measurement may be computed by utilizing the Hausdorff distance of the images represented as two-dimensional point sets. In this paper, we review two algorithms that have been proposed to compute this distance, and we present a parallel implementation of one of them on the MasPar parallel processor. We study their complexity and the results obtained by these algorithms for two different types of images: a set of displaced pairs of images of Gaussian densities, and a comparison of a Canny edge image with several edge images from a hierarchical region growing code.
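
    The Hausdorff distance between two 2-D point sets, computed directly from its definition with numpy; this is a plain sequential illustration, not the parallel MasPar implementation discussed in the paper.

```python
# Hausdorff distance between two point sets, straight from the definition.
import numpy as np

def hausdorff(A, B):
    A, B = np.asarray(A, float), np.asarray(B, float)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # all pairwise distances
    h_ab = D.min(axis=1).max()     # directed distance A -> B
    h_ba = D.min(axis=0).max()     # directed distance B -> A
    return max(h_ab, h_ba)

A = [(0, 0), (1, 0), (0, 1)]
B = [(0, 0), (2, 0)]
print(hausdorff(A, B))   # 1.0
```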

  6. THE EXTRAGALACTIC DISTANCE DATABASE

    International Nuclear Information System (INIS)

    Tully, R. Brent; Courtois, Helene M.; Jacobs, Bradley A.; Rizzi, Luca; Shaya, Edward J.; Makarov, Dmitry I.

    2009-01-01

    A database can be accessed on the Web at http://edd.ifa.hawaii.edu that was developed to promote access to information related to galaxy distances. The database has three functional components. First, tables from many literature sources have been gathered and enhanced with links through a distinct galaxy naming convention. Second, comparisons of results both at the levels of parameters and of techniques have begun and are continuing, leading to increasing homogeneity and consistency of distance measurements. Third, new material is presented arising from ongoing observational programs at the University of Hawaii 2.2 m telescope, radio telescopes at Green Bank, Arecibo, and Parkes and with the Hubble Space Telescope. This new observational material is made available in tandem with related material drawn from archives and passed through common analysis pipelines.

  7. Distance to Cure

    OpenAIRE

    Capachi, Casey

    2013-01-01

    Distance to Cure A three-part television series by Casey Capachi www.distancetocure.com   Abstract   How far would you go for health care? This three-part television series, featuring two introductory segments between each piece, focuses on the physical, cultural, and political obstacles facing rural Native American patients and the potential of health technology to break down those barriers to care.   Part one,Telemedici...

  8. Complex networks in the Euclidean space of communicability distances

    Science.gov (United States)

    Estrada, Ernesto

    2012-06-01

    We study the properties of complex networks embedded in a Euclidean space of communicability distances. The communicability distance between two nodes is defined as the difference between the weighted sum of walks self-returning to the nodes and the weighted sum of walks going from one node to the other. We give some indications that the communicability distance identifies the least crowded routes in networks where simultaneous submission of packages is taking place. We define an index Q based on communicability and shortest path distances, which allows reinterpreting the “small-world” phenomenon as the region of minimum Q in the Watts-Strogatz model. It also allows the classification and analysis of networks with different efficiency of spatial uses. Consequently, the communicability distance displays unique features for the analysis of complex networks in different scenarios.
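
    A small sketch of the communicability distance on a toy graph: with G = expm(A) counting weighted walks, G_pp + G_qq - 2*G_pq is the difference described above, and taking its square root (one common convention) yields a Euclidean-type distance. The example graph is an assumption.

```python
# Communicability distance on a toy graph via the matrix exponential.
import numpy as np
import networkx as nx
from scipy.linalg import expm

graph = nx.path_graph(5)              # toy graph: 0-1-2-3-4
A = nx.to_numpy_array(graph)
G = expm(A)                           # communicability matrix (weighted walk counts)

def communicability_distance(p, q):
    # Self-returning walks at p and q minus walks between p and q.
    return np.sqrt(G[p, p] + G[q, q] - 2.0 * G[p, q])

print(communicability_distance(0, 1), communicability_distance(0, 4))
```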

  9. The minimum yield in channeling

    International Nuclear Information System (INIS)

    Uguzzoni, A.; Gaertner, K.; Lulli, G.; Andersen, J.U.

    2000-01-01

    A first estimate of the minimum yield was obtained from Lindhard's theory, with the assumption of a statistical equilibrium in the transverse phase-space of channeled particles guided by a continuum axial potential. However, computer simulations have shown that this estimate should be corrected by a fairly large factor, C (approximately equal to 2.5), called the Barrett factor. We have shown earlier that the concept of a statistical equilibrium can be applied to understand this result, with the introduction of a constraint in phase-space due to planar channeling of axially channeled particles. Here we present an extended test of these ideas on the basis of computer simulation of the trajectories of 2 MeV α particles in Si. In particular, the gradual trend towards a full statistical equilibrium is studied. We also discuss the introduction of this modification of standard channeling theory into descriptions of the multiple scattering of channeled particles (dechanneling) by a master equation and show that the calculated minimum yields are in very good agreement with the results of a full computer simulation

  10. Minimum Bias Trigger in ATLAS

    International Nuclear Information System (INIS)

    Kwee, Regina

    2010-01-01

    Since the restart of the LHC in November 2009, ATLAS has collected inelastic pp collisions to perform first measurements on charged particle densities. These measurements will help to constrain various models describing phenomenologically soft parton interactions. Understanding the trigger efficiencies for different event types are therefore crucial to minimize any possible bias in the event selection. ATLAS uses two main minimum bias triggers, featuring complementary detector components and trigger levels. While a hardware based first trigger level situated in the forward regions with 2.2 < |η| < 3.8 has been proven to select pp-collisions very efficiently, the Inner Detector based minimum bias trigger uses a random seed on filled bunches and central tracking detectors for the event selection. Both triggers were essential for the analysis of kinematic spectra of charged particles. Their performance and trigger efficiency measurements as well as studies on possible bias sources will be presented. We also highlight the advantage of these triggers for particle correlation analyses. (author)

  11. Minimum triplet covers of binary phylogenetic X-trees.

    Science.gov (United States)

    Huber, K T; Moulton, V; Steel, M

    2017-12-01

    Trees with labelled leaves and with all other vertices of degree three play an important role in systematic biology and other areas of classification. A classical combinatorial result ensures that such trees can be uniquely reconstructed from the distances between the leaves (when the edges are given any strictly positive lengths). Moreover, a linear number of these pairwise distance values suffices to determine both the tree and its edge lengths. A natural set of pairs of leaves is provided by any 'triplet cover' of the tree (based on the fact that each non-leaf vertex is the median vertex of three leaves). In this paper we describe a number of new results concerning triplet covers of minimum size. In particular, we characterize such covers in terms of an associated graph being a 2-tree. Also, we show that minimum triplet covers are 'shellable' and thereby provide a set of pairs for which the inter-leaf distance values will uniquely determine the underlying tree and its associated branch lengths.

  12. Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level

    Directory of Open Access Journals (Sweden)

    Shehzad Khalid

    2014-01-01

    Full Text Available We have presented a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of various classes whilst identifying and filtering noisy training data. This noise-free data is further used to learn models for other classifiers such as GMM and SVM. A weight learning method is then introduced to learn weights on each class for different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as Adaboost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method as compared to its competitors, especially in the presence of class label noise and imbalanced classes.

  13. The Protection of Classified Information: The Legal Framework

    National Research Council Canada - National Science Library

    Elsea, Jennifer K

    2006-01-01

    Recent incidents involving leaks of classified information have heightened interest in the legal framework that governs security classification, access to classified information, and penalties for improper disclosure...

  14. Classifying spaces of degenerating polarized Hodge structures

    CERN Document Server

    Kato, Kazuya

    2009-01-01

    In 1970, Phillip Griffiths envisioned that points at infinity could be added to the classifying space D of polarized Hodge structures. In this book, Kazuya Kato and Sampei Usui realize this dream by creating a logarithmic Hodge theory. They use the logarithmic structures begun by Fontaine-Illusie to revive nilpotent orbits as a logarithmic Hodge structure. The book focuses on two principal topics. First, Kato and Usui construct the fine moduli space of polarized logarithmic Hodge structures with additional structures. Even for a Hermitian symmetric domain D, the present theory is a refinem

  15. Gearbox Condition Monitoring Using Advanced Classifiers

    Directory of Open Access Journals (Sweden)

    P. Večeř

    2010-01-01

    Full Text Available New efficient and reliable methods for gearbox diagnostics are needed in the automotive industry because of the growing demand for production quality. This paper presents the application of two different classifiers for gearbox diagnostics – Kohonen Neural Networks and the Adaptive-Network-based Fuzzy Inference System (ANFIS). Two different practical applications are presented. In the first application, the tested gearboxes are separated into two classes according to their condition indicators. In the second example, ANFIS is applied to label the tested gearboxes with a Quality Index according to the condition indicators. In both applications, the condition indicators were computed from the vibration of the gearbox housing.

  16. Cubical sets as a classifying topos

    DEFF Research Database (Denmark)

    Spitters, Bas

    Coquand’s cubical set model for homotopy type theory provides the basis for a computational interpretation of the univalence axiom and some higher inductive types, as implemented in the cubical proof assistant. We show that the underlying cube category is the opposite of the Lawvere theory of De Morgan algebras. The topos of cubical sets itself classifies the theory of ‘free De Morgan algebras’. This provides us with a topos with an internal ‘interval’. Using this interval we construct a model of type theory following van den Berg and Garner. We are currently investigating the precise relation...

  17. Double Ramp Loss Based Reject Option Classifier

    Science.gov (United States)

    2015-05-22

    [Fragmentary record] The proposed double ramp loss L_DR is written as a difference of convex (DC) functions and minimized with a DC programming approach [1]. Among the listed advantages, L_DR places no restriction on ρ in order to remain an upper bound of the L_(0-d-1) loss, and a risk formulation using L_DR is given over a training sample S = {(x_n, ...)}. The reported experiments show a reject option classifier learnt using the L_DR-based approach (C = 100, μ = 1, d = 0.2), with filled circles and triangles in the accompanying figure marking the support vectors.

  18. Relativistic distances, sizes, lengths

    International Nuclear Information System (INIS)

    Strel'tsov, V.N.

    1992-01-01

    Such notions as the light or retarded distance, field size, formation way, visible size of a body, relativistic or radar length, and the wavelength of light from a moving atom are considered. The relation between these notions is clarified and their classification is given. It is stressed that the formation way is defined by the field size of a moving particle. In the case of the electromagnetic field, longitudinal sizes increase in proportion to γ² with growing charge velocity (γ is the Lorentz factor). 18 refs.

  19. Distance Metric Tracking

    Science.gov (United States)

    2016-03-02

    [Fragmentary record] The tracking update involves a Bregman divergence B_ψ and a learning-rate parameter η_t; Theorem 1 of (Hall & Willett, 2015) is quoted, with G_ℓ = max_{θ∈Θ, ℓ∈L} ‖∇f(θ)‖ and φ_max = 1. The method uses the Kullback-Leibler divergence between an initial guess of the matrix that parameterizes the Mahalanobis distance and a solution that satisfies a set of ... ; M̂_0 and µ̂_0 are initialized to some initial value, and [18] gives a closed-form algorithm for solving ...

  20. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong

    2017-11-09

    We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to background motion at some unknown time in the video, and the goal is to detect and segment the object as soon as it moves, in an online manner. Due to the unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., frames after the object moves, constraining the false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.

  1. Approximating the minimum cycle mean

    Directory of Open Access Journals (Sweden)

    Krishnendu Chatterjee

    2013-07-01

    Full Text Available We consider directed graphs where each edge is labeled with an integer weight and study the fundamental algorithmic question of computing the value of a cycle with minimum mean weight. Our contributions are twofold: (1) First we show that the algorithmic question is reducible in O(n^2) time to the problem of a logarithmic number of min-plus matrix multiplications of n-by-n matrices, where n is the number of vertices of the graph. (2) Second, when the weights are nonnegative, we present the first (1 + ε)-approximation algorithm for the problem, and the running time of our algorithm is Õ(n^ω log^3(nW/ε)/ε), where O(n^ω) is the time required for the classic n-by-n matrix multiplication and W is the maximum value of the weights.
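
    As a point of comparison, the classical O(nm) algorithm of Karp computes the exact minimum cycle mean; the sketch below implements it for a strongly connected directed graph and is not the min-plus or approximation algorithm contributed by the paper.

```python
# Karp's classical minimum cycle mean algorithm (assumes a strongly connected graph).
import math

def min_cycle_mean(n, edges):
    """n vertices 0..n-1, edges = list of (u, v, w)."""
    INF = math.inf
    # d[k][v] = minimum weight of a walk with exactly k edges from vertex 0 to v
    d = [[INF] * n for _ in range(n + 1)]
    d[0][0] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = INF
    for v in range(n):
        if d[n][v] == INF:
            continue
        worst = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        best = min(best, worst)
    return best

# Triangle 0->1->2->0 with weights 1,2,3 (mean 2) plus a heavier 2-cycle.
edges = [(0, 1, 1), (1, 2, 2), (2, 0, 3), (0, 2, 5), (2, 0, 5)]
print(min_cycle_mean(3, edges))   # 2.0
```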

  2. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong

    2017-01-08

    We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to background motion at some unknown time in the video, and the goal is to detect and segment the object as soon as it moves, in an online manner. Due to the unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., frames after the object moves, constraining the false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.

  3. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong; Sundaramoorthi, Ganesh

    2017-01-01

    We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to background motion at some unknown time in the video, and the goal is to detect and segment the object as soon as it moves, in an online manner. Due to the unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., frames after the object moves, constraining the false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.

  4. Youth minimum wages and youth employment

    NARCIS (Netherlands)

    Marimpi, Maria; Koning, Pierre

    2018-01-01

    This paper performs a cross-country level analysis on the impact of the level of specific youth minimum wages on the labor market performance of young individuals. We use information on the use and level of youth minimum wages, as compared to the level of adult minimum wages as well as to the median

  5. Do Some Workers Have Minimum Wage Careers?

    Science.gov (United States)

    Carrington, William J.; Fallick, Bruce C.

    2001-01-01

    Most workers who begin their careers in minimum-wage jobs eventually gain more experience and move on to higher paying jobs. However, more than 8% of workers spend at least half of their first 10 working years in minimum wage jobs. Those more likely to have minimum wage careers are less educated, minorities, women with young children, and those…

  6. Does the Minimum Wage Affect Welfare Caseloads?

    Science.gov (United States)

    Page, Marianne E.; Spetz, Joanne; Millar, Jane

    2005-01-01

    Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…

  7. Minimum income protection in the Netherlands

    NARCIS (Netherlands)

    van Peijpe, T.

    2009-01-01

    This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its

  8. COMPARISON OF EUCLIDEAN DISTANCE AND CANBERRA DISTANCE IN FACE RECOGNITION

    Directory of Open Access Journals (Sweden)

    Sendhy Rachmat Wurdianarto

    2014-08-01

    Full Text Available Computer science is developing very rapidly. One sign of this is that computer science has entered the world of biometrics. Biometrics refers to human characteristics that can be used to distinguish one person from another. One use of such characteristics or body organs for identification (recognition) is the face. Against this background, this work explores a Matlab application for face recognition using the Euclidean Distance and Canberra Distance methods. The application development model used is the waterfall model, which comprises a sequence of process activities presented as requirements analysis, design using UML (Unified Modeling Language), and processing of input images using Euclidean Distance and Canberra Distance. The conclusion is that a face recognition application using the Euclidean Distance and Canberra Distance methods has advantages and disadvantages for each metric. In the future the application can be extended to use video or other objects as input.   Keywords: Euclidean Distance, Face Recognition, Biometrics, Canberra Distance
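
    The two distance measures compared in this record, applied to a pair of hypothetical face feature vectors.

```python
# Euclidean and Canberra distances between two feature vectors.
import numpy as np

def euclidean(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.sqrt(np.sum((u - v) ** 2))

def canberra(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    denom = np.abs(u) + np.abs(v)
    safe = np.where(denom > 0, denom, 1.0)                 # avoid division by zero
    return np.where(denom > 0, np.abs(u - v) / safe, 0.0).sum()

u = [0.12, 0.55, 0.33, 0.91]   # hypothetical feature vectors
v = [0.10, 0.60, 0.30, 0.85]
print(euclidean(u, v), canberra(u, v))
```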

  9. A systematic comparison of supervised classifiers.

    Directory of Open Access Journals (Sweden)

    Diego Raphael Amancio

    Full Text Available Pattern recognition has been employed in a myriad of industrial, commercial and academic applications. Many techniques have been devised to tackle such a diversity of applications. Despite the long tradition of pattern recognition research, there is no technique that yields the best classification in all scenarios. Therefore, as many techniques as possible should be considered in high accuracy applications. Typical related works either focus on the performance of a given algorithm or compare various classification methods. In many occasions, however, researchers who are not experts in the field of machine learning have to deal with practical classification tasks without an in-depth knowledge about the underlying parameters. Actually, the adequate choice of classifiers and parameters in such practical circumstances constitutes a long-standing problem and is one of the subjects of the current paper. We carried out a performance study of nine well-known classifiers implemented in the Weka framework and compared the influence of the parameter configurations on the accuracy. The default configuration of parameters in Weka was found to provide near optimal performance for most cases, not including methods such as the support vector machine (SVM. In addition, the k-nearest neighbor method frequently allowed the best accuracy. In certain conditions, it was possible to improve the quality of SVM by more than 20% with respect to their default parameter configuration.
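
    An analogous comparison can be run in scikit-learn (the study itself uses Weka): several classifiers with their default parameter configurations evaluated by cross-validation on one dataset.

```python
# Compare a few classifiers at their default parameter settings.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
for clf in (KNeighborsClassifier(), SVC(),
            DecisionTreeClassifier(random_state=0), GaussianNB()):
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{clf.__class__.__name__:24s} {score:.3f}")
```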

  10. STATISTICAL TOOLS FOR CLASSIFYING GALAXY GROUP DYNAMICS

    International Nuclear Information System (INIS)

    Hou, Annie; Parker, Laura C.; Harris, William E.; Wilman, David J.

    2009-01-01

    The dynamical state of galaxy groups at intermediate redshifts can provide information about the growth of structure in the universe. We examine three goodness-of-fit tests, the Anderson-Darling (A-D), Kolmogorov, and χ² tests, in order to determine which statistical tool is best able to distinguish between groups that are relaxed and those that are dynamically complex. We perform Monte Carlo simulations of these three tests and show that the χ² test is profoundly unreliable for groups with fewer than 30 members. Power studies of the Kolmogorov and A-D tests are conducted to test their robustness for various sample sizes. We then apply these tests to a sample of the second Canadian Network for Observational Cosmology Redshift Survey (CNOC2) galaxy groups and find that the A-D test is far more reliable and powerful at detecting real departures from an underlying Gaussian distribution than the more commonly used χ² and Kolmogorov tests. We use this statistic to classify a sample of the CNOC2 groups and find that 34 of 106 groups are inconsistent with an underlying Gaussian velocity distribution, and thus do not appear relaxed. In addition, we compute velocity dispersion profiles (VDPs) for all groups with more than 20 members and compare the overall features of the Gaussian and non-Gaussian groups, finding that the VDPs of the non-Gaussian groups are distinct from those classified as Gaussian.
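
    The Anderson-Darling normality test favoured by the authors is available in SciPy; the sketch below applies it to two synthetic "velocity" samples, one Gaussian and one bimodal, and is not the CNOC2 analysis.

```python
# Anderson-Darling normality test on two synthetic velocity samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gaussian_group = rng.normal(loc=0.0, scale=300.0, size=25)            # km/s offsets
bimodal_group = np.concatenate([rng.normal(-400, 100, 13),
                                rng.normal(400, 100, 12)])            # merging system

for name, v in [("gaussian", gaussian_group), ("bimodal", bimodal_group)]:
    res = stats.anderson(v, dist="norm")
    # Compare the A-D statistic with the 5% critical value.
    crit5 = res.critical_values[res.significance_level.tolist().index(5.0)]
    print(name, round(res.statistic, 3), "reject normality:", res.statistic > crit5)
```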

  11. Mercury⊕: An evidential reasoning image classifier

    Science.gov (United States)

    Peddle, Derek R.

    1995-12-01

    MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package is described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating system. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
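
    The core of evidential reasoning is Dempster's rule of combination; the toy example below combines two basic probability assignments over a two-class frame of discernment and is not the MERCURY⊕ implementation (the class names and masses are made up).

```python
# Dempster's rule of combination for two basic probability assignments.
from itertools import product

def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to mass; returns combined masses."""
    combined, conflict = {}, 0.0
    for (A, wa), (B, wb) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                 # mass assigned to the empty set
    return {A: w / (1.0 - conflict) for A, w in combined.items()}

water, forest = frozenset({"water"}), frozenset({"forest"})
theta = water | forest                          # total ignorance
m_spectral = {water: 0.6, theta: 0.4}           # evidence from one data source
m_texture  = {water: 0.3, forest: 0.3, theta: 0.4}
print(dempster_combine(m_spectral, m_texture))
```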

  12. Distance collaborations with industry

    Energy Technology Data Exchange (ETDEWEB)

    Peskin, A.; Swyler, K.

    1998-06-01

    The college industry relationship has been identified as a key policy issue in Engineering Education. Collaborations between academic institutions and the industrial sector have a long history and a bright future. For Engineering and Engineering Technology programs in particular, industry has played a crucial role in many areas including advisement, financial support, and practical training of both faculty and students. Among the most important and intimate interactions are collaborative projects and formal cooperative education arrangements. Most recently, such collaborations have taken on a new dimension, as advances in technology have made possible meaningful technical collaboration at a distance. There are several obvious technology areas that have contributed significantly to this trend. Foremost is the ubiquitous presence of the Internet. Perhaps almost as important are advances in computer based imaging. Because visual images offer a compelling user experience, it affords greater knowledge transfer efficiency than other modes of delivery. Furthermore, the quality of the image appears to have a strongly correlated effect on insight. A good visualization facility offers both a means for communication and a shared information space for the subjects, which are among the essential features of both peer collaboration and distance learning.

  13. Ensemble Clustering Classification Applied to Competing SVM and One-Class Classifiers Exemplified by Plant MicroRNAs Data

    Directory of Open Access Journals (Sweden)

    Yousef Malik

    2016-12-01

    Full Text Available The performance of many learning and data mining algorithms depends critically on suitable metrics to assess efficiency over the input space. Learning a suitable metric from examples may, therefore, be the key to successful application of these algorithms. We have demonstrated that the k-nearest neighbor (kNN classification can be significantly improved by learning a distance metric from labeled examples. The clustering ensemble is used to define the distance between points in respect to how they co-cluster. This distance is then used within the framework of the kNN algorithm to define a classifier named ensemble clustering kNN classifier (EC-kNN. In many instances in our experiments we achieved highest accuracy while SVM failed to perform as well. In this study, we compare the performance of a two-class classifier using EC-kNN with different one-class and two-class classifiers. The comparison was applied to seven different plant microRNA species considering eight feature selection methods. In this study, the averaged results show that EC-kNN outperforms all other methods employed here and previously published results for the same data. In conclusion, this study shows that the chosen classifier shows high performance when the distance metric is carefully chosen.

  14. 36 CFR 1256.46 - National security-classified information.

    Science.gov (United States)

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false National security-classified... Restrictions § 1256.46 National security-classified information. In accordance with 5 U.S.C. 552(b)(1), NARA... properly classified under the provisions of the pertinent Executive Order on Classified National Security...

  15. Minimum wage development in the Russian Federation

    OpenAIRE

    Bolsheva, Anna

    2012-01-01

    The aim of this paper is to analyze the effectiveness of the minimum wage policy at the national level in Russia and its impact on living standards in the country. The analysis showed that the national minimum wage in Russia does not serve its original purpose of protecting the lowest wage earners and has no substantial effect on poverty reduction. The national subsistence minimum is too low and cannot be considered an adequate criterion for the setting of the minimum wage. The minimum wage d...

  16. A note on the minimum Lee distance of certain self-dual modular codes

    NARCIS (Netherlands)

    Asch, van A.G.; Martens, F.J.L.

    2012-01-01

    In a former paper we investigated the connection between p -ary linear codes, p prime, and theta functions. Corresponding to a given code a suitable lattice and its associated theta function were defined. Using results from the theory of modular forms we got an algorithm to determine an upper bound

  17. Evaluating the concept specialization distance from an end-user perspective: The case of AGROVOC

    NARCIS (Netherlands)

    Martín-Moncunill, David; Sicilia-Urban, Miguel Angel; García-Barriocanal, Elena; Stracke, Christian M.

    2017-01-01

    Purpose – The common understanding of generalization/specialization relations assumes the relation to be equally strong between a classifier and any of its related classifiers and also at every level of the hierarchy. Assigning a grade of relative distance to represent the level of similarity

  18. Distance-Based Image Classification: Generalizing to New Classes at Near Zero Cost

    NARCIS (Netherlands)

    Mensink, T.; Verbeek, J.; Perronnin, F.; Csurka, G.

    2013-01-01

    We study large-scale image classification methods that can incorporate new classes and training images continuously over time at negligible cost. To this end, we consider two distance-based classifiers, the k-nearest neighbor (k-NN) and nearest class mean (NCM) classifiers, and introduce a new
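
    To make the nearest class mean (NCM) idea concrete, here is a minimal sketch showing why new classes can be added at near-zero cost: only the new class mean has to be computed. A plain Euclidean metric is used for simplicity, whereas the paper works with a learned metric; the class and method names are illustrative.

```python
import numpy as np

class NearestClassMean:
    """Assign a sample to the class whose mean is closest (Euclidean here)."""
    def __init__(self):
        self.means = {}          # class label -> mean vector

    def fit(self, X, y):
        for c in np.unique(y):
            self.means[c] = X[y == c].mean(axis=0)
        return self

    def add_class(self, X_new, label):
        # Adding a class only requires computing its mean: near-zero cost.
        self.means[label] = X_new.mean(axis=0)

    def predict(self, X):
        labels = list(self.means)
        M = np.stack([self.means[c] for c in labels])        # (C, d)
        d = ((X[:, None, :] - M[None, :, :]) ** 2).sum(-1)   # (n, C) squared distances
        return np.array(labels)[d.argmin(axis=1)]
```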

  19. The structure of water around the compressibility minimum

    Energy Technology Data Exchange (ETDEWEB)

    Skinner, L. B. [X-ray Science Division, Advanced Photon Source, Argonne National Laboratory, Argonne, Illinois 60439 (United States); Mineral Physics Institute, Stony Brook University, Stony Brook, New York, New York 11794-2100 (United States); Benmore, C. J., E-mail: benmore@aps.anl.gov [X-ray Science Division, Advanced Photon Source, Argonne National Laboratory, Argonne, Illinois 60439 (United States); Neuefeind, J. C. [Spallation Neutron Source, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37922 (United States); Parise, J. B. [Mineral Physics Institute, Stony Brook University, Stony Brook, New York, New York 11794-2100 (United States); Department of Geosciences, Stony Brook University, Stony Brook, New York, New York 11794-2100 (United States); Photon Sciences Division, Brookhaven National Laboratory, Upton, New York 11973 (United States)

    2014-12-07

    Here we present diffraction data that yield the oxygen-oxygen pair distribution function, g_OO(r) over the range 254.2–365.9 K. The running O-O coordination number, which represents the integral of the pair distribution function as a function of radial distance, is found to exhibit an isosbestic point at 3.30(5) Å. The probability of finding an oxygen atom surrounding another oxygen at this distance is therefore shown to be independent of temperature and corresponds to an O-O coordination number of 4.3(2). Moreover, the experimental data also show a continuous transition associated with the second peak position in g_OO(r) concomitant with the compressibility minimum at 319 K.

  20. Two channel EEG thought pattern classifier.

    Science.gov (United States)

    Craig, D A; Nguyen, H T; Burchey, H A

    2006-01-01

    This paper presents a real-time electro-encephalogram (EEG) identification system with the goal of achieving hands-free control. With two EEG electrodes placed on the scalp of the user, EEG signals are amplified and digitised directly using a ProComp+ encoder and transferred to the host computer through the RS232 interface. Using a real-time multilayer neural network, the actual classification for the control of a powered wheelchair has a very fast response. It can detect changes in the user's thought pattern in 1 second. Using only two EEG electrodes at positions O1 and C4, the system can classify three mental commands (forward, left and right) with an accuracy of more than 79%.

  1. Classifying Drivers' Cognitive Load Using EEG Signals.

    Science.gov (United States)

    Barua, Shaibal; Ahmed, Mobyen Uddin; Begum, Shahina

    2017-01-01

    A growing traffic safety issue is the effect of cognitively loading activities on traffic safety and driving performance. To monitor drivers' mental state, understanding cognitive load is important since, while driving, performing cognitively loading secondary tasks, for example talking on the phone, can affect performance in the primary task, i.e. driving. Electroencephalography (EEG) is one of the reliable measures of cognitive load that can detect changes in instantaneous load and the effect of a cognitively loading secondary task. In this driving simulator study, a 1-back task is carried out while the driver performs three different simulated driving scenarios. This paper presents an EEG-based approach to classifying a driver's level of cognitive load using Case-Based Reasoning (CBR). The results show that, for each individual scenario as well as for data combined from the different scenarios, the CBR-based system achieved classification accuracy of approximately 70%.

  2. Classifying prion and prion-like phenomena.

    Science.gov (United States)

    Harbi, Djamel; Harrison, Paul M

    2014-01-01

    The universe of prion and prion-like phenomena has expanded significantly in the past several years. Here, we overview the challenges in classifying this data informatically, given that terms such as "prion-like", "prion-related" or "prion-forming" do not have a stable meaning in the scientific literature. We examine the spectrum of proteins that have been described in the literature as forming prions, and discuss how "prion" can have a range of meaning, with a strict definition being for demonstration of infection with in vitro-derived recombinant prions. We suggest that although prion/prion-like phenomena can largely be apportioned into a small number of broad groups dependent on the type of transmissibility evidence for them, as new phenomena are discovered in the coming years, a detailed ontological approach might be necessary that allows for subtle definition of different "flavors" of prion / prion-like phenomena.

  3. Hybrid Neuro-Fuzzy Classifier Based On Nefclass Model

    Directory of Open Access Journals (Sweden)

    Bogdan Gliwa

    2011-01-01

    Full Text Available The paper presents a hybrid neuro-fuzzy classifier, based on the NEFCLASS model, which was modified. The presented classifier was compared to popular classifiers – neural networks and k-nearest neighbours. Efficiency of the modifications in the classifier was compared with the learning methods used in the original NEFCLASS model. Accuracy of the classifier was tested using 3 datasets from the UCI Machine Learning Repository: iris, wine and breast cancer wisconsin. Moreover, the influence of ensemble classification methods on classification accuracy was presented.

  4. Feature and score fusion based multiple classifier selection for iris recognition.

    Science.gov (United States)

    Islam, Md Rabiul

    2014-01-01

    The aim of this work is to propose a new feature- and score-fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique has been applied. The outputs of four Discrete Hidden Markov Model classifiers – a left-iris based unimodal system, a right-iris based unimodal system, a left-right iris feature-fusion based multimodal system, and a left-right iris likelihood-ratio score-fusion based multimodal system – are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single-level feature fusion approach proposed by Hollingsworth et al.

  5. Classifying Transition Behaviour in Postural Activity Monitoring

    Directory of Open Access Journals (Sweden)

    James BRUSEY

    2009-10-01

    Full Text Available A few accelerometers positioned on different parts of the body can be used to accurately classify steady state behaviour, such as walking, running, or sitting. Such systems are usually built using supervised learning approaches. Transitions between postures are, however, difficult to deal with using posture classification systems proposed to date, since there is no label set for intermediary postures and also the exact point at which the transition occurs can sometimes be hard to pinpoint. The usual bypass when using supervised learning to train such systems is to discard a section of the dataset around each transition. This leads to poorer classification performance when the systems are deployed out of the laboratory and used on-line, particularly if the regimes monitored involve fast paced activity changes. Time-based filtering that takes advantage of sequential patterns is a potential mechanism to improve posture classification accuracy in such real-life applications. Also, such filtering should reduce the number of event messages needed to be sent across a wireless network to track posture remotely, hence extending the system’s life. To support time-based filtering, understanding transitions, which are the major event generators in a classification system, is a key. This work examines three approaches to post-process the output of a posture classifier using time-based filtering: a naïve voting scheme, an exponentially weighted voting scheme, and a Bayes filter. Best performance is obtained from the exponentially weighted voting scheme although it is suspected that a more sophisticated treatment of the Bayes filter might yield better results.
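
    As an illustration of the exponentially weighted voting scheme mentioned above, the following sketch smooths a stream of per-frame posture labels; the smoothing factor alpha and the example label stream are illustrative assumptions, not values from the paper.

```python
import numpy as np

def exp_weighted_filter(raw_labels, n_classes, alpha=0.3):
    """Smooth a stream of per-frame posture labels with an exponentially
    weighted vote; alpha is an illustrative smoothing factor."""
    scores = np.zeros(n_classes)
    smoothed = []
    for lab in raw_labels:
        vote = np.zeros(n_classes)
        vote[lab] = 1.0
        scores = alpha * vote + (1.0 - alpha) * scores   # old votes decay exponentially
        smoothed.append(int(scores.argmax()))
    return smoothed

# Example: a one-frame misclassification (label 2) during a transition is filtered out,
# while the genuine change to label 1 is still picked up after a short delay.
print(exp_weighted_filter([0, 0, 0, 2, 0, 1, 1, 1], n_classes=3))
```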

  6. Just-in-time adaptive classifiers-part II: designing the classifier.

    Science.gov (United States)

    Alippi, Cesare; Roveri, Manuel

    2008-12-01

    Aging effects, environmental changes, thermal drifts, and soft and hard faults affect physical systems by changing their nature and behavior over time. To cope with a process evolution adaptive solutions must be envisaged to track its dynamics; in this direction, adaptive classifiers are generally designed by assuming the stationary hypothesis for the process generating the data with very few results addressing nonstationary environments. This paper proposes a methodology based on k-nearest neighbor (NN) classifiers for designing adaptive classification systems able to react to changing conditions just-in-time (JIT), i.e., exactly when it is needed. k-NN classifiers have been selected for their computational-free training phase, the possibility to easily estimate the model complexity k and keep under control the computational complexity of the classifier through suitable data reduction mechanisms. A JIT classifier requires a temporal detection of a (possible) process deviation (aspect tackled in a companion paper) followed by an adaptive management of the knowledge base (KB) of the classifier to cope with the process change. The novelty of the proposed approach resides in the general framework supporting the real-time update of the KB of the classification system in response to novel information coming from the process both in stationary conditions (accuracy improvement) and in nonstationary ones (process tracking) and in providing a suitable estimate of k. It is shown that the classification system grants consistency once the change targets the process generating the data in a new stationary state, as it is the case in many real applications.
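
    A rough sketch of the just-in-time idea described above: a k-NN classifier whose knowledge base is extended with new supervised samples in stationary conditions and purged when a change is detected. The change signal is assumed to come from the companion detection method and is represented here by a boolean flag; k and the knowledge-base bound are illustrative choices, not the paper's estimates.

```python
import numpy as np
from collections import deque

class JITKNNClassifier:
    """k-NN classifier whose knowledge base (KB) is managed just-in-time:
    new supervised samples are appended in stationary conditions (accuracy
    improvement) and the obsolete KB is discarded when a change is detected
    (process tracking). Sketch only."""
    def __init__(self, k=5, max_kb=2000):
        self.k, self.kb = k, deque(maxlen=max_kb)

    def update(self, X, y, change_detected=False):
        if change_detected:
            self.kb.clear()           # drop knowledge referring to the old process state
        self.kb.extend(zip(X, y))     # integrate the new supervised samples

    def predict(self, x):
        X_kb = np.array([p for p, _ in self.kb])
        y_kb = np.array([l for _, l in self.kb])
        nn = np.argsort(((X_kb - x) ** 2).sum(axis=1))[:self.k]
        vals, counts = np.unique(y_kb[nn], return_counts=True)
        return vals[np.argmax(counts)]
```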

  7. Interactive Distance Learning in Connecticut.

    Science.gov (United States)

    Pietras, Jesse John; Murphy, Robert J.

    This paper provides an overview of distance learning activities in Connecticut and addresses the feasibility of such activities. Distance education programs have evolved from the one dimensional electronic mail systems to the use of sophisticated digital fiber networks. The Middlesex Distance Learning Consortium has developed a long-range plan to…

  8. Distance covariance for stochastic processes

    DEFF Research Database (Denmark)

    Matsui, Muneya; Mikosch, Thomas Valentin; Samorodnitsky, Gennady

    2017-01-01

    The distance covariance of two random vectors is a measure of their dependence. The empirical distance covariance and correlation can be used as statistical tools for testing whether two random vectors are independent. We propose an analog of the distance covariance for two stochastic processes...
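
    For readers unfamiliar with the statistic, the following is a short numerical sketch of the empirical (squared) distance covariance for paired samples of random vectors; this is the standard sample estimator for vectors, not the extension to stochastic processes studied in the paper.

```python
import numpy as np

def dcov2(X, Y):
    """Empirical squared distance covariance of paired samples X, Y (rows = observations)."""
    def centered(D):
        # double-centering: subtract row and column means, add back the grand mean
        return D - D.mean(0) - D.mean(1)[:, None] + D.mean()
    A = centered(np.linalg.norm(X[:, None] - X[None, :], axis=-1))
    B = centered(np.linalg.norm(Y[:, None] - Y[None, :], axis=-1))
    return (A * B).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))
print(dcov2(x, x**2))                        # dependent (though uncorrelated): clearly positive
print(dcov2(x, rng.normal(size=(500, 1))))   # independent: close to zero
```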

  9. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong

    2017-05-14

    This thesis presents a general framework and method for detection of an object in a video based on apparent motion. The object moves, at some unknown time, differently than the “background” motion, which can be induced from camera motion. The goal of the proposed method is to detect and segment the object as soon as it moves, in an online manner. Since motion estimation can be unreliable between frames, more than two frames are needed to reliably detect the object. Observing more frames before declaring a detection may lead to a more accurate detection and segmentation, since more motion may be observed, leading to a stronger motion cue. However, this leads to greater delay. The proposed method is designed to detect the object(s) with minimum delay, i.e., the number of frames after the object moves, while constraining the false alarms, defined as declarations of detection before the object moves or as incorrect or inaccurate segmentation at the detection time. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than the existing state of the art.

  10. DISTANCES TO DARK CLOUDS: COMPARING EXTINCTION DISTANCES TO MASER PARALLAX DISTANCES

    International Nuclear Information System (INIS)

    Foster, Jonathan B.; Jackson, James M.; Stead, Joseph J.; Hoare, Melvin G.; Benjamin, Robert A.

    2012-01-01

    We test two different methods of using near-infrared extinction to estimate distances to dark clouds in the first quadrant of the Galaxy using large near-infrared (Two Micron All Sky Survey and UKIRT Infrared Deep Sky Survey) surveys. Very long baseline interferometry parallax measurements of masers around massive young stars provide the most direct and bias-free measurement of the distance to these dark clouds. We compare the extinction distance estimates to these maser parallax distances. We also compare these distances to kinematic distances, including recent re-calibrations of the Galactic rotation curve. The extinction distance methods agree with the maser parallax distances (within the errors) between 66% and 100% of the time (depending on method and input survey) and between 85% and 100% of the time outside of the crowded Galactic center. Although the sample size is small, extinction distance methods reproduce maser parallax distances better than kinematic distances; furthermore, extinction distance methods do not suffer from the kinematic distance ambiguity. This validation gives us confidence that these extinction methods may be extended to additional dark clouds where maser parallaxes are not available.

  11. Social Web Content Enhancement in a Distance Learning Environment: Intelligent Metadata Generation for Resources

    Science.gov (United States)

    García-Floriano, Andrés; Ferreira-Santiago, Angel; Yáñez-Márquez, Cornelio; Camacho-Nieto, Oscar; Aldape-Pérez, Mario; Villuendas-Rey, Yenny

    2017-01-01

    Social networking potentially offers improved distance learning environments by enabling the exchange of resources between learners. The existence of properly classified content results in an enhanced distance learning experience in which appropriate materials can be retrieved efficiently; however, for this to happen, metadata needs to be present.…

  12. Molecular Characteristics in MRI-classified Group 1 Glioblastoma Multiforme

    Directory of Open Access Journals (Sweden)

    William E Haskins

    2013-07-01

    Full Text Available Glioblastoma multiforme (GBM) is a clinically and pathologically heterogeneous brain tumor. A previous study of MRI-classified GBM revealed a spatial relationship between Group 1 GBM (GBM1) and the subventricular zone (SVZ). The SVZ is an adult neural stem cell niche and is also suspected to be the origin of a subtype of brain tumor. The intimate contact between GBM1 and the SVZ raises the possibility that tumor cells in GBM1 may be most closely related to SVZ cells. In support of this notion, we found that neural stem cell and neuroblast markers are highly expressed in GBM1. Additionally, we identified molecular characteristics in this type of GBM that include up-regulation of metabolic enzymes, ribosomal proteins, heat shock proteins, and the c-Myc oncoprotein. As GBM1 often recurs at great distances from the initial lesion, the rewiring of metabolism and ribosomal biogenesis may facilitate cancer cells' growth and survival during tumor migration. Taken together, our findings combined with MRI-based classification of GBM1 would offer better prediction and treatment for this multifocal GBM.

  13. Minimum Additive Waste Stabilization (MAWS)

    International Nuclear Information System (INIS)

    1994-02-01

    In the Minimum Additive Waste Stabilization (MAWS) concept, actual waste streams are utilized as additive resources for vitrification, which may contain the basic components (glass formers and fluxes) for making a suitable glass or glassy slag. If too much glass former is present, then the melt viscosity or temperature will be too high for processing; while if there is too much flux, then the durability may suffer. Therefore, there are optimum combinations of these two important classes of constituents depending on the criteria required. The challenge is to combine these resources in such a way that minimizes the use of non-waste additives yet yields a processable and durable final waste form for disposal. The benefit of this approach is that the volume of the final waste form is minimized (waste loading maximized) since little or no additives are used and vitrification itself results in volume reduction through evaporation of water, combustion of organics, and compaction of the solids into a non-porous glass. This implies a significant reduction in disposal costs due to volume reduction alone, and minimizes future risks/costs due to the long-term durability and leach resistance of glass. This is accomplished by using integrated systems that are both cost-effective and produce an environmentally sound waste form for disposal. Individual component technologies may include: vitrification; thermal destruction; soil washing; gas scrubbing/filtration; and ion-exchange wastewater treatment. The particular combination of technologies will depend on the waste streams to be treated. At the heart of MAWS is vitrification technology, which incorporates all primary and secondary waste streams into a final, long-term, stabilized glass waste form. The integrated technology approach, and view of waste streams as resources, is innovative yet practical to cost-effectively treat a broad range of DOE mixed and low-level wastes.

  14. Effect of Image Linearization on Normalized Compression Distance

    Science.gov (United States)

    Mortensen, Jonathan; Wu, Jia Jie; Furst, Jacob; Rogers, John; Raicu, Daniela

    Normalized Information Distance, based on Kolmogorov complexity, is an emerging metric for image similarity. It is approximated by the Normalized Compression Distance (NCD) which generates the relative distance between two strings by using standard compression algorithms to compare linear strings of information. This relative distance quantifies the degree of similarity between the two objects. NCD has been shown to measure similarity effectively on information which is already a string: genomic string comparisons have created accurate phylogeny trees and NCD has also been used to classify music. Currently, to find a similarity measure using NCD for images, the images must first be linearized into a string, and then compared. To understand how linearization of a 2D image affects the similarity measure, we perform four types of linearization on a subset of the Corel image database and compare each for a variety of image transformations. Our experiment shows that different linearization techniques produce statistically significant differences in NCD for identical spatial transformations.
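
    A minimal sketch of the NCD computation on already-linearized (byte-string) images using zlib as the standard compressor; the linearization step itself, which is the subject of the record above, is not shown, and the example strings are purely illustrative.

```python
import os
import zlib

def ncd(x: bytes, y: bytes, level: int = 9) -> float:
    """Normalized Compression Distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(zlib.compress(x, level))
    cy = len(zlib.compress(y, level))
    cxy = len(zlib.compress(x + y, level))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Identical linearized images give an NCD near 0; unrelated (incompressible)
# data gives a value close to 1.
a = bytes(range(256)) * 64
b = bytes(range(256)) * 64
c = os.urandom(len(a))
print(ncd(a, b), ncd(a, c))
```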

  15. Planning with Reachable Distances

    KAUST Repository

    Tang, Xinyu

    2009-01-01

    Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the robot's number of degrees of freedom. In addition to supporting efficient sampling, we show that the RD-space formulation naturally supports planning, and in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1000 links in time comparable to open chain sampling, and we can generate samples for 1000-link multi-loop systems of varying topology in less than a second. © 2009 Springer-Verlag.

  16. Classifying Adverse Events in the Dental Office.

    Science.gov (United States)

    Kalenderian, Elsbeth; Obadan-Udoh, Enihomo; Maramaldi, Peter; Etolue, Jini; Yansane, Alfa; Stewart, Denice; White, Joel; Vaderhobli, Ram; Kent, Karla; Hebballi, Nutan B; Delattre, Veronique; Kahn, Maria; Tokede, Oluwabunmi; Ramoni, Rachel B; Walji, Muhammad F

    2017-06-30

    Dentists strive to provide safe and effective oral healthcare. However, some patients may encounter an adverse event (AE) defined as "unnecessary harm due to dental treatment." In this research, we propose and evaluate two systems for categorizing the type and severity of AEs encountered at the dental office. Several existing medical AE type and severity classification systems were reviewed and adapted for dentistry. Using data collected in previous work, two initial dental AE type and severity classification systems were developed. Eight independent reviewers performed focused chart reviews, and AEs identified were used to evaluate and modify these newly developed classifications. A total of 958 charts were independently reviewed. Among the reviewed charts, 118 prospective AEs were found and 101 (85.6%) were verified as AEs through a consensus process. At the end of the study, a final AE type classification comprising 12 categories, and an AE severity classification comprising 7 categories emerged. Pain and infection were the most common AE types representing 73% of the cases reviewed (56% and 17%, respectively) and 88% were found to cause temporary, moderate to severe harm to the patient. Adverse events found during the chart review process were successfully classified using the novel dental AE type and severity classifications. Understanding the type of AEs and their severity are important steps if we are to learn from and prevent patient harm in the dental office.

  17. Is it important to classify ischaemic stroke?

    LENUS (Irish Health Repository)

    Iqbal, M

    2012-02-01

    Thirty-five percent of all ischemic events remain classified as cryptogenic. This study was conducted to ascertain the accuracy of diagnosis of ischaemic stroke based on information given in the medical notes. It was tested by applying the clinical information to the TOAST criteria. One hundred and five patients presented with acute stroke between Jan-Jun 2007. Data was collected on 90 patients. The male to female ratio was 39:51, with an age range of 47-93 years. Sixty (67%) patients had total/partial anterior circulation stroke; 5 (5.6%) had a lacunar stroke and in 25 (28%) the mechanism of stroke could not be identified. Four (4.4%) patients with small vessel disease were anticoagulated; 5 (5.6%) with atrial fibrillation received antiplatelet therapy and 2 (2.2%) patients with atrial fibrillation underwent CEA. This study revealed deficiencies in the clinical assessment of patients and treatment was not tailored to the mechanism of stroke in some patients.

  18. Stress fracture development classified by bone scintigraphy

    International Nuclear Information System (INIS)

    Zwas, S.T.; Elkanovich, R.; Frank, G.; Aharonson, Z.

    1985-01-01

    There is no consensus on classifying stress fractures (SF) appearing on bone scans. The authors present a system of classification based on grading the severity and development of bone lesions by visual inspection, according to three main scintigraphic criteria: focality and size, intensity of uptake compared to adjacent bone, and local medullary extension. Four grades of development (I-IV) were ranked, ranging from ill-defined, slightly increased cortical uptake to well-defined regions with markedly increased uptake extending transversely bicortically. 310 male subjects aged 19-2, suffering for several weeks from leg pains occurring during intensive physical training, underwent bone scans of the pelvis and lower extremities using Tc-99m-MDP. 76% of the scans were positive with 354 lesions, of which 88% were in the mild (I-II) grades and 12% in the moderate (III) and severe (IV) grades. Post-treatment scans were obtained in 65 cases having 78 lesions during 1- to 6-month intervals. Complete resolution was found after 1-2 months in 36% of the mild lesions but in only 12% of the moderate and severe ones, and after 3-6 months in 55% of the mild lesions and 15% of the severe ones. 75% of the moderate and severe lesions showed residual uptake in various stages throughout the follow-up period. Early recognition and treatment of mild SF lesions in this study prevented protracted disability and progression of the lesions and facilitated complete healing.

  19. Minimum emittance of three-bend achromats

    International Nuclear Information System (INIS)

    Li Xiaoyu; Xu Gang

    2012-01-01

    The minimum emittance of three-bend achromats (TBAs) can be calculated with mathematical software while ignoring the actual magnet lattice in the matching condition of the dispersion function in phase space. The minimum scaling factors of two kinds of widely used TBA lattices are obtained. The relationship between the lengths and the radii of the three dipoles in a TBA is then obtained, as is the minimum scaling factor, when the TBA lattice achieves its minimum emittance. The procedure of analysis and the results can be widely used for achromat lattices, because the calculation is not restricted by the actual lattice. (authors)

  20. A Pareto-Improving Minimum Wage

    OpenAIRE

    Eliav Danziger; Leif Danziger

    2014-01-01

    This paper shows that a graduated minimum wage, in contrast to a constant minimum wage, can provide a strict Pareto improvement over what can be achieved with an optimal income tax. The reason is that a graduated minimum wage requires high-productivity workers to work more to earn the same income as low-productivity workers, which makes it more difficult for the former to mimic the latter. In effect, a graduated minimum wage allows the low-productivity workers to benefit from second-degree pr...

  1. The minimum wage in the Czech enterprises

    OpenAIRE

    Eva Lajtkepová

    2010-01-01

    Although the statutory minimum wage is not a new category, in the Czech Republic we encounter the definition and regulation of a minimum wage for the first time in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were subsequently determined by government regulation in February 1991. Since that time, the value of the minimum wage has been adjusted fifteen times (the last increase was in January 2007). ...

  2. Improved initial guess for minimum energy path calculations

    International Nuclear Information System (INIS)

    Smidstrup, Søren; Pedersen, Andreas; Stokbro, Kurt; Jónsson, Hannes

    2014-01-01

    A method is presented for generating a good initial guess of a transition path between given initial and final states of a system without evaluation of the energy. An objective function surface is constructed using an interpolation of pairwise distances at each discretization point along the path and the nudged elastic band method then used to find an optimal path on this image dependent pair potential (IDPP) surface. This provides an initial path for the more computationally intensive calculations of a minimum energy path on an energy surface obtained, for example, by ab initio or density functional theory. The optimal path on the IDPP surface is significantly closer to a minimum energy path than a linear interpolation of the Cartesian coordinates and, therefore, reduces the number of iterations needed to reach convergence and averts divergence in the electronic structure calculations when atoms are brought too close to each other in the initial path. The method is illustrated with three examples: (1) rotation of a methyl group in an ethane molecule, (2) an exchange of atoms in an island on a crystal surface, and (3) an exchange of two Si-atoms in amorphous silicon. In all three cases, the computational effort in finding the minimum energy path with DFT was reduced by a factor ranging from 50% to an order of magnitude by using an IDPP path as the initial path. The time required for parallel computations was reduced even more because of load imbalance when linear interpolation of Cartesian coordinates was used
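
    The following sketch illustrates the image dependent pair potential idea as described above: pairwise distances are interpolated linearly between the two endpoint geometries and each intermediate image is scored against its interpolated target distances. The inverse-power weight and the function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def pair_distances(R):
    """All interatomic distances for coordinates R with shape (n_atoms, 3)."""
    d = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
    return d[np.triu_indices(len(R), k=1)]

def idpp_objective(R_k, d_init, d_final, k, n_images, power=4):
    """Objective for image k: weighted squared deviation from linearly
    interpolated pair distances (the d**-power weight is an illustrative choice
    that emphasizes short, chemically relevant distances)."""
    t = k / (n_images - 1)
    d_target = (1 - t) * d_init + t * d_final
    d_k = pair_distances(R_k)
    w = d_target ** (-power)
    return np.sum(w * (d_target - d_k) ** 2)

# d_init = pair_distances(R_initial); d_final = pair_distances(R_final)
# Minimizing idpp_objective for every intermediate image (e.g., with the nudged
# elastic band machinery) yields the initial path used to start the DFT calculation.
```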

  3. Romanian Libray Science Distance Education. Current Context and Possible Solutions

    Directory of Open Access Journals (Sweden)

    Silvia-Adriana Tomescu

    2012-01-01

    We thought it would be very useful to propose a model of teaching, learning and assessment for distance higher librarianship, tested on www.oll.ro, the Open Learning Library platform, to analyze the impact on students and especially to test the effectiveness of teaching and assessing knowledge at a distance. We set out a rigorous approach that reflects the problems facing the Romanian LIS education system and emphasizes the optimal strategies that need to be implemented. The benefits of such an approach can be classified as: innovation in education, communicative facilities, and effective strategies for teaching library science.

  4. Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Kaya, Savaş, E-mail: savaskaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Kaya, Cemal, E-mail: kaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Islam, Nazmul, E-mail: nazmul.islam786@gmail.com [Theoretical and Computational Chemistry Research Laboratory, Department of Basic Science and Humanities/Chemistry Techno Global-Balurghat, Balurghat, D. Dinajpur 733103 (India)

    2016-03-15

    The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds with their electronegativities, chemical hardnesses and electrophilicities. Lattice energy, electronegativity, chemical hardness and electrophilicity values of ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For 4 simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all chemical reactions but minimum polarizability principles and minimum electrophilicity principle are not valid for all reactions. We also proposed simple methods to compute the percentage of ionic characters and inter nuclear distances of ionic compounds. Comparative studies with experimental sets of data reveal that the proposed methods of computation of the percentage of ionic characters and inter nuclear distances of ionic compounds are valid.

  5. Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds

    International Nuclear Information System (INIS)

    Kaya, Savaş; Kaya, Cemal; Islam, Nazmul

    2016-01-01

    The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds with their electronegativities, chemical hardnesses and electrophilicities. Lattice energy, electronegativity, chemical hardness and electrophilicity values of ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For 4 simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all chemical reactions but minimum polarizability principles and minimum electrophilicity principle are not valid for all reactions. We also proposed simple methods to compute the percentage of ionic characters and inter nuclear distances of ionic compounds. Comparative studies with experimental sets of data reveal that the proposed methods of computation of the percentage of ionic characters and inter nuclear distances of ionic compounds are valid.

  6. 41 CFR 105-62.102 - Authority to originally classify.

    Science.gov (United States)

    2010-07-01

    ... originally classify. (a) Top secret, secret, and confidential. The authority to originally classify information as Top Secret, Secret, or Confidential may be exercised only by the Administrator and is delegable...

  7. Naive Bayesian classifiers for multinomial features: a theoretical analysis

    CSIR Research Space (South Africa)

    Van Dyk, E

    2007-11-01

    Full Text Available The authors investigate the use of naive Bayesian classifiers for multinomial feature spaces and derive error estimates for these classifiers. The error analysis is done by developing a mathematical model to estimate the probability density...
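
    For context, a minimal multinomial naive Bayes decision rule of the kind analyzed in the record above; this is the textbook classifier with Laplace smoothing, not the authors' error model.

```python
import numpy as np

def fit_multinomial_nb(X, y, alpha=1.0):
    """X: (n, d) count features; returns class labels, log priors and
    per-class log feature probabilities (alpha = Laplace smoothing)."""
    classes = np.unique(y)
    log_prior = np.log(np.array([(y == c).mean() for c in classes]))
    counts = np.stack([X[y == c].sum(axis=0) + alpha for c in classes])
    log_prob = np.log(counts / counts.sum(axis=1, keepdims=True))
    return classes, log_prior, log_prob

def predict_multinomial_nb(X, classes, log_prior, log_prob):
    # argmax_c [ log P(c) + sum_j x_j * log P(feature j | c) ]
    return classes[(X @ log_prob.T + log_prior).argmax(axis=1)]
```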

  8. Ensemble of classifiers based network intrusion detection system performance bound

    CSIR Research Space (South Africa)

    Mkuzangwe, Nenekazi NP

    2017-11-01

    Full Text Available This paper provides a performance bound of a network intrusion detection system (NIDS) that uses an ensemble of classifiers. Currently researchers rely on implementing the ensemble of classifiers based NIDS before they can determine the performance...

  9. Are contemporary tourists consuming distance?

    DEFF Research Database (Denmark)

    Larsen, Gunvor Riber

    2012

    Background: The background for this research, which explores how tourists represent distance and whether or not distance can be said to be consumed by contemporary tourists, is the increasing leisure mobility of people. Travelling for the purpose of visiting friends and relatives is increasing...... of understanding mobility at a conceptual level, and distance matters to people's manifest mobility: how they travel and how far they travel are central elements of their movements. Therefore leisure mobility (indeed all mobility) is the activity of relating across distance, either through actual corporeal...... metric representation. These representations are the focus for this research. Research Aim and Questions: The aim of this research is thus to explore how distance is being represented within the context of leisure mobility. Further, the aim is to explore how or whether distance is being consumed...

  10. Distance : between deixis and perspectivity

    OpenAIRE

    Meermann, Anastasia; Sonnenhauser, Barbara

    2015-01-01

    Discussing exemplary applications of the notion of distance in linguistic analysis, this paper shows that very different phenomena are described in terms of this concept. It is argued that in order to overcome the problems arising from this mixup, deixis, distance and perspectivity have to be distinguished and their interrelations need to be described. Thereby, distance emerges as part of a recursive process mediating between situation-bound deixis and discourse-level perspectivity. This is i...

  11. Fast Most Similar Neighbor (MSN) classifiers for Mixed Data

    OpenAIRE

    Hernández Rodríguez, Selene

    2010-01-01

    The k nearest neighbor (k-NN) classifier has been extensively used in Pattern Recognition because of its simplicity and its good performance. However, in large datasets applications, the exhaustive k-NN classifier becomes impractical. Therefore, many fast k-NN classifiers have been developed; most of them rely on metric properties (usually the triangle inequality) to reduce the number of prototype comparisons. Hence, the existing fast k-NN classifiers are applicable only when the comparison f...

  12. Three data partitioning strategies for building local classifiers (Chapter 14)

    NARCIS (Netherlands)

    Zliobaite, I.; Okun, O.; Valentini, G.; Re, M.

    2011-01-01

    The divide-and-conquer approach has been recognized in multiple classifier systems aiming to utilize the local expertise of individual classifiers. In this study we experimentally investigate three strategies for building local classifiers that are based on different routines of sampling data for training.

  13. Recognition of pornographic web pages by classifying texts and images.

    Science.gov (United States)

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.

  14. 32 CFR 2400.28 - Dissemination of classified information.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Dissemination of classified information. 2400.28... SECURITY PROGRAM Safeguarding § 2400.28 Dissemination of classified information. Heads of OSTP offices... originating official may prescribe specific restrictions on dissemination of classified information when...

  15. Stochastic variational approach to minimum uncertainty states

    Energy Technology Data Exchange (ETDEWEB)

    Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)

    1995-05-21

    We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)

  16. Zero forcing parameters and minimum rank problems

    NARCIS (Netherlands)

    Barioli, F.; Barrett, W.; Fallat, S.M.; Hall, H.T.; Hogben, L.; Shader, B.L.; Driessche, van den P.; Holst, van der H.

    2010-01-01

    The zero forcing number Z(G), which is the minimum number of vertices in a zero forcing set of a graph G, is used to study the maximum nullity/minimum rank of the family of symmetric matrices described by G. It is shown that for a connected graph of order at least two, no vertex is in every zero

  17. 30 CFR 281.30 - Minimum royalty.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false Minimum royalty. 281.30 Section 281.30 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR OFFSHORE LEASING OF MINERALS OTHER THAN OIL, GAS, AND SULPHUR IN THE OUTER CONTINENTAL SHELF Financial Considerations § 281.30 Minimum royalty...

  18. New Minimum Wage Research: A Symposium.

    Science.gov (United States)

    Ehrenberg, Ronald G.; And Others

    1992-01-01

    Includes "Introduction" (Ehrenberg); "Effect of the Minimum Wage [MW] on the Fast-Food Industry" (Katz, Krueger); "Using Regional Variation in Wages to Measure Effects of the Federal MW" (Card); "Do MWs Reduce Employment?" (Card); "Employment Effects of Minimum and Subminimum Wages" (Neumark,…

  19. Minimum Wage Effects in the Longer Run

    Science.gov (United States)

    Neumark, David; Nizalova, Olena

    2007-01-01

    Exposure to minimum wages at young ages could lead to adverse longer-run effects via decreased labor market experience and tenure, and diminished education and training, while beneficial longer-run effects could arise if minimum wages increase skill acquisition. Evidence suggests that as individuals reach their late 20s, they earn less the longer…

  20. Support vector machine as a binary classifier for automated object detection in remotely sensed data

    International Nuclear Information System (INIS)

    Wardaya, P D

    2014-01-01

    In the present paper, the author proposes the application of the Support Vector Machine (SVM) to the analysis of satellite imagery. One of the advantages of SVM is that, with limited training data, it may generate comparable or even better results than other methods. The SVM algorithm is used for automated object detection and characterization. Specifically, the SVM is applied in its basic form as a binary classifier, where it classifies two classes, namely object and background. The algorithm aims at effectively detecting an object from its background with minimum training data. A synthetic image containing noise is used for algorithm testing. Furthermore, it is implemented to perform remote sensing image analysis such as identification of island vegetation, water bodies, and oil spills from the satellite imagery. It is indicated that SVM provides fast and accurate analysis with acceptable results.

  1. Support vector machine as a binary classifier for automated object detection in remotely sensed data

    Science.gov (United States)

    Wardaya, P. D.

    2014-02-01

    In the present paper, the author proposes the application of the Support Vector Machine (SVM) to the analysis of satellite imagery. One of the advantages of SVM is that, with limited training data, it may generate comparable or even better results than other methods. The SVM algorithm is used for automated object detection and characterization. Specifically, the SVM is applied in its basic form as a binary classifier, where it classifies two classes, namely object and background. The algorithm aims at effectively detecting an object from its background with minimum training data. A synthetic image containing noise is used for algorithm testing. Furthermore, it is implemented to perform remote sensing image analysis such as identification of island vegetation, water bodies, and oil spills from the satellite imagery. It is indicated that SVM provides fast and accurate analysis with acceptable results.
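
    A minimal sketch of using an SVM as a two-class (object versus background) pixel classifier with scikit-learn, in the spirit of the two records above; the synthetic image, the hand-picked training pixels and the kernel settings are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic multi-band image: (rows, cols, bands); a brighter patch plays the "object".
rng = np.random.default_rng(0)
image = rng.normal(size=(100, 100, 4))
image[40:60, 40:60] += 2.0

# A handful of labelled pixels stand in for the "minimum training data" idea.
train_rows = [50, 55, 45, 10, 90, 5]
train_cols = [50, 45, 55, 10, 90, 80]
labels     = [1, 1, 1, 0, 0, 0]          # 1 = object, 0 = background

X_train = image[train_rows, train_cols]  # per-pixel band values as features
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, labels)

# Classify every pixel to obtain an object mask.
mask = clf.predict(image.reshape(-1, 4)).reshape(100, 100)
```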

  2. Building gene expression profile classifiers with a simple and efficient rejection option in R.

    Science.gov (United States)

    Benso, Alfredo; Di Carlo, Stefano; Politano, Gianfranco; Savino, Alessandro; Hafeezurrehman, Hafeez

    2011-01-01

    The collection of gene expression profiles from DNA microarrays and their analysis with pattern recognition algorithms is a powerful technology applied to several biological problems. Common pattern recognition systems classify samples by assigning them to a set of known classes. However, in a clinical diagnostics setup, novel and unknown classes (new pathologies) may appear and one must be able to reject those samples that do not fit the trained model. The problem of implementing a rejection option in a multi-class classifier has not been widely addressed in the statistical literature. Gene expression profiles represent a critical case study since they suffer from the curse of dimensionality problem that negatively reflects on the reliability of both traditional rejection models and also more recent approaches such as one-class classifiers. This paper presents a set of empirical decision rules that can be used to implement a rejection option in a set of multi-class classifiers widely used for the analysis of gene expression profiles. In particular, we focus on the classifiers implemented in the R Language and Environment for Statistical Computing (R for short in the remainder of this paper). The main contribution of the proposed rules is their simplicity, which enables an easy integration with available data analysis environments. Since in the definition of a rejection model tuning of the involved parameters is often a complex and delicate task, in this paper we exploit an evolutionary strategy to automate this process. This allows the final user to maximize the rejection accuracy with minimum manual intervention. This paper shows how simple decision rules can help in applying complex machine learning algorithms in real experimental setups. The proposed approach is almost completely automated and therefore a good candidate for integration into data analysis flows in labs where the machine learning expertise required to tune traditional
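
    One simple empirical rejection rule of the kind discussed above is to reject a sample whenever the classifier's highest posterior probability falls below a threshold. The sketch below is in Python with scikit-learn conventions rather than R, and the threshold is the sort of parameter the paper tunes automatically; both are illustrative assumptions.

```python
import numpy as np

def predict_with_reject(clf, X, threshold=0.7, reject_label=-1):
    """Return the predicted class, or reject_label when the maximum posterior
    probability is below the threshold (illustrative rule).
    `clf` is any fitted classifier exposing predict_proba, e.g. a scikit-learn
    RandomForestClassifier; reject_label=-1 assumes integer class labels."""
    proba = clf.predict_proba(X)
    pred = clf.classes_[proba.argmax(axis=1)]
    return np.where(proba.max(axis=1) >= threshold, pred, reject_label)

# Usage sketch:
# clf = RandomForestClassifier().fit(X_train, y_train)
# y_hat = predict_with_reject(clf, X_test, threshold=0.7)
```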

  3. Precision Near-Field Reconstruction in the Time Domain via Minimum Entropy for Ultra-High Resolution Radar Imaging

    Directory of Open Access Journals (Sweden)

    Jiwoong Yu

    2017-05-01

    Full Text Available Ultra-high resolution (UHR) radar imaging is used to analyze the internal structure of objects and to identify and classify their shapes based on ultra-wideband (UWB) signals using a vector network analyzer (VNA). However, radar-based imaging is limited by microwave propagation effects, wave scattering, and transmit power, thus the received signals are inevitably weak and noisy. To overcome this problem, the radar may be operated in the near-field. The focusing of UHR radar signals over a close distance requires precise geometry in order to accommodate the spherical waves. In this paper, a geometric estimation and compensation method that is based on the minimum entropy of radar images with sub-centimeter resolution is proposed and implemented. Inverse synthetic aperture radar (ISAR) imaging is used because it is applicable to several fields, including medical- and security-related applications, and high quality images of various targets have been produced to verify the proposed method. For ISAR in the near-field, the compensation for the time delay depends on the distance from the center of rotation and the internal RF circuits and cables. Required parameters for the delay compensation algorithm that can be used to minimize the entropy of the radar images are determined so that acceptable results can be achieved. The processing speed can be enhanced by performing the calculations in the time domain without the phase values, which are removed after upsampling. For comparison, the parameters are also estimated by performing random sampling in the data set. Although the reduced data set contained only 5% of the observed angles, the parameter optimization method is shown to operate correctly.
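
    To make the entropy criterion concrete, the following sketch shows an image-entropy focus metric and a brute-force search over candidate compensation parameters; the actual near-field delay-compensation model used by the authors is not reproduced here, and `reconstruct` is a placeholder for it.

```python
import numpy as np

def image_entropy(img):
    """Entropy of the normalized image intensity; sharper (better focused)
    radar images generally have lower entropy."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def best_parameter(reconstruct, candidates):
    """Pick the compensation parameter whose reconstruction has minimum entropy.
    `reconstruct` is a user-supplied function: parameter -> complex image."""
    entropies = [image_entropy(reconstruct(c)) for c in candidates]
    return candidates[int(np.argmin(entropies))]
```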

  4. Peat classified as slowly renewable biomass fuel

    International Nuclear Information System (INIS)

    2001-01-01

    thousands of years. The report also states that peat should be classified as a biomass fuel rather than with biofuels, such as wood, or fossil fuels, such as coal. According to the report, peat is a renewable biomass fuel like biofuels, but due to its slow accumulation it should be considered a slowly renewable fuel. The report estimates that the bonding of carbon in both virgin and forest-drained peatlands is so high that it can compensate for the emissions formed in the combustion of energy peat.

  5. Max–min distance nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Nonnegative Matrix Factorization (NMF) has been a popular representation method for pattern classification problems. It tries to decompose a nonnegative matrix of data samples as the product of a nonnegative basis matrix and a nonnegative coefficient matrix. The columns of the coefficient matrix can be used as new representations of these data samples. However, traditional NMF methods ignore class labels of the data samples. In this paper, we propose a novel supervised NMF algorithm to improve the discriminative ability of the new representation by using the class labels. Using the class labels, we separate all the data sample pairs into within-class pairs and between-class pairs. To improve the discriminative ability of the new NMF representations, we propose to minimize the maximum distance of the within-class pairs in the new NMF space, and meanwhile to maximize the minimum distance of the between-class pairs. With this criterion, we construct an objective function and optimize it with regard to basis and coefficient matrices, and slack variables alternatively, resulting in an iterative algorithm. The proposed algorithm is evaluated on three pattern classification problems and experiment results show that it outperforms the state-of-the-art supervised NMF methods.

  6. Max–min distance nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-10-26

    Nonnegative Matrix Factorization (NMF) has been a popular representation method for pattern classification problems. It tries to decompose a nonnegative matrix of data samples as the product of a nonnegative basis matrix and a nonnegative coefficient matrix. The columns of the coefficient matrix can be used as new representations of these data samples. However, traditional NMF methods ignore class labels of the data samples. In this paper, we propose a novel supervised NMF algorithm to improve the discriminative ability of the new representation by using the class labels. Using the class labels, we separate all the data sample pairs into within-class pairs and between-class pairs. To improve the discriminative ability of the new NMF representations, we propose to minimize the maximum distance of the within-class pairs in the new NMF space, and meanwhile to maximize the minimum distance of the between-class pairs. With this criterion, we construct an objective function and optimize it with regard to basis and coefficient matrices, and slack variables alternatively, resulting in an iterative algorithm. The proposed algorithm is evaluated on three pattern classification problems and experiment results show that it outperforms the state-of-the-art supervised NMF methods.
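
    To illustrate the criterion itself (not the full NMF optimization with slack variables), the sketch below evaluates the maximum within-class and minimum between-class pair distances of a coefficient-matrix representation; the variable names are illustrative.

```python
import numpy as np
from itertools import combinations

def max_min_criterion(H, y):
    """H: (n_samples, r) new representations (NMF coefficient columns as rows);
    y: class labels. Returns (max within-class distance, min between-class distance).
    The supervised NMF described above seeks to shrink the former and grow the
    latter. Assumes at least two classes and two samples per class."""
    d_within, d_between = [], []
    for i, j in combinations(range(len(y)), 2):
        d = np.linalg.norm(H[i] - H[j])
        (d_within if y[i] == y[j] else d_between).append(d)
    return max(d_within), min(d_between)
```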

  7. THE DISTANCE TO M104

    Energy Technology Data Exchange (ETDEWEB)

    McQuinn, Kristen B. W. [University of Texas at Austin, McDonald Observatory, 2515 Speedway, Stop C1400 Austin, TX 78712 (United States); Skillman, Evan D. [Minnesota Institute for Astrophysics, School of Physics and Astronomy, 116 Church Street, SE, University of Minnesota, Minneapolis, MN 55455 (United States); Dolphin, Andrew E. [Raytheon Company, 1151 E. Hermans Road, Tucson, AZ 85756 (United States); Berg, Danielle [Center for Gravitation, Cosmology and Astrophysics, Department of Physics, University of Wisconsin Milwaukee, 1900 East Kenwood Boulevard, Milwaukee, WI 53211 (United States); Kennicutt, Robert, E-mail: kmcquinn@astro.as.utexas.edu [Institute for Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom)

    2016-11-01

    M104 (NGC 4594; the Sombrero galaxy) is a nearby, well-studied elliptical galaxy included in scores of surveys focused on understanding the details of galaxy evolution. Despite the importance of observations of M104, a consensus distance has not yet been established. Here, we use newly obtained Hubble Space Telescope optical imaging to measure the distance to M104 based on the tip of the red giant branch (TRGB) method. Our measurement yields the distance to M104 to be 9.55 ± 0.13 ± 0.31 Mpc, equivalent to a distance modulus of 29.90 ± 0.03 ± 0.07 mag. Our distance is an improvement over previous results as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian maximum likelihood technique that reduces measurement uncertainties. The most discrepant previous results are due to Tully–Fisher method distances, which are likely inappropriate for M104 given its peculiar morphology and structure. Our results are part of a larger program to measure accurate distances to a sample of well-known spiral galaxies (including M51, M74, and M63) using the TRGB method.

  8. THE DISTANCE TO M51

    Energy Technology Data Exchange (ETDEWEB)

    McQuinn, Kristen B. W. [University of Texas at Austin, McDonald Observatory, 2515 Speedway, Stop C1400 Austin, TX 78712 (United States); Skillman, Evan D. [Minnesota Institute for Astrophysics, School of Physics and Astronomy, 116 Church Street, S.E., University of Minnesota, Minneapolis, MN 55455 (United States); Dolphin, Andrew E. [Raytheon Company, 1151 E. Hermans Road, Tucson, AZ 85756 (United States); Berg, Danielle [Center for Gravitation, Cosmology and Astrophysics, Department of Physics, University of Wisconsin Milwaukee, 1900 East Kenwood Boulevard, Milwaukee, WI 53211 (United States); Kennicutt, Robert, E-mail: kmcquinn@astro.as.utexas.edu [Institute for Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom)

    2016-07-20

    Great investments of observing time have been dedicated to the study of nearby spiral galaxies with diverse goals ranging from understanding the star formation process to characterizing their dark matter distributions. Accurate distances are fundamental to interpreting observations of these galaxies, yet many of the best studied nearby galaxies have distances based on methods with relatively large uncertainties. We have started a program to derive accurate distances to these galaxies. Here we measure the distance to M51—the Whirlpool galaxy—from newly obtained Hubble Space Telescope optical imaging using the tip of the red giant branch method. We measure the distance to be 8.58 ± 0.10 Mpc (statistical), corresponding to a distance modulus of 29.67 ± 0.02 mag. Our distance is an improvement over previous results as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian Maximum Likelihood technique that reduces measurement uncertainties.

  9. The Distance to M51

    Science.gov (United States)

    McQuinn, Kristen. B. W.; Skillman, Evan D.; Dolphin, Andrew E.; Berg, Danielle; Kennicutt, Robert

    2016-07-01

    Great investments of observing time have been dedicated to the study of nearby spiral galaxies with diverse goals ranging from understanding the star formation process to characterizing their dark matter distributions. Accurate distances are fundamental to interpreting observations of these galaxies, yet many of the best studied nearby galaxies have distances based on methods with relatively large uncertainties. We have started a program to derive accurate distances to these galaxies. Here we measure the distance to M51—the Whirlpool galaxy—from newly obtained Hubble Space Telescope optical imaging using the tip of the red giant branch method. We measure the distance to be 8.58 ± 0.10 Mpc (statistical), corresponding to a distance modulus of 29.67 ± 0.02 mag. Our distance is an improvement over previous results as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian Maximum Likelihood technique that reduces measurement uncertainties. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

  10. Distance criterion for hydrogen bond

    Indian Academy of Sciences (India)

    In a D-H...A contact, the D...A distance must be less than the sum of the van der Waals radii of the D and A atoms for it to be a hydrogen bond.
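    A minimal sketch of this criterion is given below; the van der Waals radii are commonly quoted Bondi values and serve only as example inputs, not as values prescribed by the source:

      # Commonly quoted Bondi van der Waals radii in angstroms -- example
      # values only, not prescribed by the source record.
      VDW_RADII = {"N": 1.55, "O": 1.52, "F": 1.47}

      def satisfies_distance_criterion(donor, acceptor, d_da):
          """True if the D...A separation d_da (angstroms) is shorter than
          the sum of the donor and acceptor van der Waals radii."""
          return d_da < VDW_RADII[donor] + VDW_RADII[acceptor]

      # Example: an O-H...N contact with a 2.9 angstrom O...N separation
      print(satisfies_distance_criterion("O", "N", 2.9))  # True, since 2.9 < 3.07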

  11. Social Distance and Intergenerational Relations

    Science.gov (United States)

    Kidwell, I. Jane; Booth, Alan

    1977-01-01

    Questionnaires were administered to a sample of adults to assess the extent of social distance between people of different ages. The findings suggest that the greater the age difference (younger or older) between people, the greater the social distance they feel. (Author)

  12. Quality Content in Distance Education

    Science.gov (United States)

    Yildiz, Ezgi Pelin; Isman, Aytekin

    2016-01-01

    In parallel with technological advances in today's world, educational activities can be conducted without the constraints of time and space. One of the most important of these activities is distance education. The success of distance education depends on content quality. The proliferation of e-learning environments has brought a need for…

  13. Virtual Bioinformatics Distance Learning Suite

    Science.gov (United States)

    Tolvanen, Martti; Vihinen, Mauno

    2004-01-01

    Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…

  14. The Psychology of Psychic Distance

    DEFF Research Database (Denmark)

    Håkanson, Lars; Ambos, Björn; Schuster, Anja

    2016-01-01

    and their theoretical underpinnings assume psychic distances to be symmetric. Building on insights from psychology and sociology, this paper demonstrates how national factors and cognitive processes interact in the formation of asymmetric distance perceptions. The results suggest that exposure to other countries...

  15. Cognitive Styles and Distance Education.

    Science.gov (United States)

    Liu, Yuliang; Ginther, Dean

    1999-01-01

    Considers how to adapt the design of distance education to students' cognitive styles. Discusses cognitive styles, including field dependence versus independence, holistic-analytic, sensory preference, hemispheric preferences, and Kolb's Learning Style Model; and the characteristics of distance education, including technology. (Contains 92…

  16. Distance Learning: Practice and Dilemmas

    Science.gov (United States)

    Tatkovic, Nevenka; Sehanovic, Jusuf; Ruzic, Maja

    2006-01-01

    In accordance with the European processes of integrated and homogeneous education, the paper presents the essential viewpoints and questions covering the establishment and development of "distance learning" (DL) in the Republic of Croatia. It starts from the advantages of distance learning versus traditional education, taking into account…

  17. Hierarchical traits distances explain grassland Fabaceae species' ecological niches distances

    Science.gov (United States)

    Fort, Florian; Jouany, Claire; Cruz, Pablo

    2015-01-01

    Fabaceae species play a key role in ecosystem functioning through their capacity to fix atmospheric nitrogen via their symbiosis with Rhizobium bacteria. To increase benefits of using Fabaceae in agricultural systems, it is necessary to find ways to evaluate species or genotypes having potential adaptations to sub-optimal growth conditions. We evaluated the relevance of phylogenetic distance, absolute trait distance and hierarchical trait distance for comparing the adaptation of 13 grassland Fabaceae species to different habitats, i.e., ecological niches. We measured a wide range of functional traits (root traits, leaf traits, and whole plant traits) in these species. Species phylogenetic and ecological distances were assessed from a species-level phylogenetic tree and species' ecological indicator values, respectively. We demonstrated that differences in ecological niches between grassland Fabaceae species were related more to their hierarchical trait distances than to their phylogenetic distances. We showed that grassland Fabaceae functional traits tend to converge among species with the same ecological requirements. Species with acquisitive root strategies (thin roots, shallow root systems) are competitive species adapted to non-stressful meadows, while conservative ones (coarse roots, deep root systems) are able to tolerate stressful continental climates. In contrast, acquisitive species appeared to be able to tolerate low soil-P availability, while conservative ones need high P availability. Finally we highlight that traits converge along the ecological gradient, providing the assumption that species with similar root-trait values are better able to coexist, regardless of their phylogenetic distance. PMID:25741353
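    One plausible reading of the two trait-based distances compared above is sketched below: the absolute trait distance discards the sign of each trait difference, while the hierarchical trait distance keeps it. The function and the standardized trait values are illustrative assumptions, not the paper's exact formulas:

      import numpy as np

      def trait_distances(t_i, t_j):
          """Two trait-based distances between species i and j, given arrays
          of standardized trait values. The absolute distance ignores the
          direction of each trait difference; the hierarchical distance keeps
          the sign, recording which species has the larger values overall."""
          diff = np.asarray(t_i, dtype=float) - np.asarray(t_j, dtype=float)
          return np.mean(np.abs(diff)), np.mean(diff)

      # Example with three standardized traits (e.g. root diameter, rooting
      # depth, specific leaf area) -- invented values.
      print(trait_distances([0.8, 1.1, -0.2], [-0.3, 0.2, 0.1]))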

  18. Hierarchical traits distances explain grassland Fabaceae species’ ecological niches distances

    Directory of Open Access Journals (Sweden)

    Florian eFort

    2015-02-01

    Full Text Available Fabaceae species play a key role in ecosystem functioning through their capacity to fix atmospheric nitrogen via their symbiosis with Rhizobium bacteria. To increase benefits of using Fabaceae in agricultural systems, it is necessary to find ways to evaluate species or genotypes having potential adaptations to sub-optimal growth conditions. We evaluated the relevance of phylogenetic distance, absolute trait distance and hierarchical trait distance for comparing the adaptation of 13 grassland Fabaceae species to different habitats, i.e. ecological niches. We measured a wide range of functional traits (root traits, leaf traits and whole plant traits) in these species. Species phylogenetic and ecological distances were assessed from a species-level phylogenetic tree and species’ ecological indicator values, respectively. We demonstrated that differences in ecological niches between grassland Fabaceae species were related more to their hierarchical trait distances than to their phylogenetic distances. We showed that grassland Fabaceae functional traits tend to converge among species with the same ecological requirements. Species with acquisitive root strategies (thin roots, shallow root systems) are competitive species adapted to non-stressful meadows, while conservative ones (coarse roots, deep root systems) are able to tolerate stressful continental climates. In contrast, acquisitive species appeared to be able to tolerate low soil-P availability, while conservative ones need high P availability. Finally we highlight that traits converge along the ecological gradient, providing the assumption that species with similar root-trait values are better able to coexist, regardless of their phylogenetic distance.

  19. Distance Determination Method for Normally Distributed Obstacle Avoidance of Mobile Robots in Stochastic Environments

    Directory of Open Access Journals (Sweden)

    Jinhong Noh

    2016-04-01

    Full Text Available Obstacle avoidance methods require knowledge of the distance between a mobile robot and obstacles in the environment. However, in stochastic environments, distance determination is difficult because objects have position uncertainty. The purpose of this paper is to determine the distance between a robot and obstacles represented by probability distributions. Distance determination for obstacle avoidance should consider position uncertainty, computational cost and collision probability. The proposed method considers all of these conditions, unlike conventional methods. It determines the obstacle region using the collision probability density threshold. Furthermore, it defines a minimum distance function to the boundary of the obstacle region with a Lagrange multiplier method. Finally, it computes the distance numerically. Simulations were executed in order to compare the performance of the distance determination methods. Our method demonstrated a faster and more accurate performance than conventional methods. It may help overcome position uncertainty issues pertaining to obstacle avoidance, such as low accuracy sensors, environments with poor visibility or unpredictable obstacle motion.
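    A rough sketch of the idea, under the assumption of a single Gaussian obstacle, is shown below: the obstacle region is where the density exceeds a chosen threshold, and the minimum distance to its boundary is found by constrained optimization (a generic SLSQP solver standing in for the Lagrange-multiplier step described above). All names and parameter values are illustrative:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import multivariate_normal

      def min_distance_to_obstacle(robot, mean, cov, density_threshold):
          """Minimum Euclidean distance from `robot` to the boundary of the
          region where a Gaussian obstacle's density exceeds the threshold.
          A generic constrained optimizer stands in for the paper's
          Lagrange-multiplier formulation."""
          rv = multivariate_normal(mean=mean, cov=cov)
          on_boundary = {"type": "eq",
                         "fun": lambda x: rv.pdf(x) - density_threshold}
          res = minimize(lambda x: np.linalg.norm(x - robot),
                         x0=np.asarray(mean, dtype=float),
                         constraints=[on_boundary], method="SLSQP")
          return res.fun, res.x  # distance and closest boundary point

      d, p = min_distance_to_obstacle(robot=np.array([4.0, 0.0]),
                                      mean=[0.0, 0.0],
                                      cov=[[1.0, 0.0], [0.0, 1.0]],
                                      density_threshold=0.05)
      print(d, p)  # roughly 2.5 units to the 0.05-density contour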

  20. Minimum emittance in TBA and MBA lattices

    Science.gov (United States)

    Xu, Gang; Peng, Yue-Mei

    2015-03-01

    For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) and even multiple bend achromats (MBA) have been considered. This paper derived the necessary condition for achieving minimum emittance in TBA and MBA lattices theoretically, where the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. Here, we also calculated the conditions for attaining the minimum emittance of a TBA related to phase advance in some special cases with a purely mathematical method. These results may give some directions on lattice design.
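    As a purely arithmetic illustration of the stated condition, the sketch below splits a TBA cell's total bending angle so that the inner dipole bends 3^(1/3) times more than each outer dipole; the function and the 15-degree example are ours, not the authors':

      def tba_dipole_angles(total_angle_deg):
          """Split a TBA cell's total bending angle so the inner dipole bends
          3**(1/3) times more than each of the two outer dipoles."""
          ratio = 3.0 ** (1.0 / 3.0)               # inner / outer bending angle
          outer = total_angle_deg / (2.0 + ratio)
          return outer, ratio * outer, outer       # (outer, inner, outer)

      print(tba_dipole_angles(15.0))  # e.g. a 15-degree cell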

  1. Minimum emittance in TBA and MBA lattices

    International Nuclear Information System (INIS)

    Xu Gang; Peng Yuemei

    2015-01-01

    For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) and even multiple bend achromats (MBA) have been considered. This paper derived the necessary condition for achieving minimum emittance in TBA and MBA lattices theoretically, where the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. Here, we also calculated the conditions for attaining the minimum emittance of a TBA related to phase advance in some special cases with a purely mathematical method. These results may give some directions on lattice design. (authors)

  2. Who Benefits from a Minimum Wage Increase?

    OpenAIRE

    John W. Lopresti; Kevin J. Mumford

    2015-01-01

    This paper addresses the question of how a minimum wage increase affects the wages of low-wage workers. Most studies assume that there is a simple mechanical increase in the wage for workers earning a wage between the old and the new minimum wage, with some studies allowing for spillovers to workers with wages just above this range. Rather than assume that the wages of these workers would have remained constant, this paper estimates how a minimum wage increase impacts a low-wage worker's wage...

  3. Wage inequality, minimum wage effects and spillovers

    OpenAIRE

    Stewart, Mark B.

    2011-01-01

    This paper investigates possible spillover effects of the UK minimum wage. The halt in the growth in inequality in the lower half of the wage distribution (as measured by the 50:10 percentile ratio) since the mid-1990s, in contrast to the continued inequality growth in the upper half of the distribution, suggests the possibility of a minimum wage effect and spillover effects on wages above the minimum. This paper analyses individual wage changes, using both a difference-in-differences estimat...

  4. Tracking frequency laser distance gauge

    International Nuclear Information System (INIS)

    Phillips, J.D.; Reasenberg, R.D.

    2005-01-01

    Advanced astronomical missions with greatly enhanced resolution and physics missions of unprecedented accuracy will require laser distance gauges of substantially improved performance. We describe a laser gauge, based on Pound-Drever-Hall locking, in which the optical frequency is adjusted to maintain an interferometer's null condition. This technique has been demonstrated with pm performance. Automatic fringe hopping allows it to track arbitrary distance changes. The instrument is intrinsically free of the nm-scale cyclic bias present in traditional (heterodyne) high-precision laser gauges. The output is a radio frequency, readily measured to sufficient accuracy. The laser gauge has operated in a resonant cavity, which improves precision, can suppress the effects of misalignments, and makes possible precise automatic alignment. The measurement of absolute distance requires little or no additional hardware, and has also been demonstrated. The proof-of-concept version, based on a stabilized HeNe laser and operating on a 0.5 m path, has achieved 10 pm precision with 0.1 s integration time, and 0.1 mm absolute distance accuracy. This version has also followed substantial distance changes as fast as 16 mm/s. We show that, if the precision in optical frequency is a fixed fraction of the linewidth, both incremental and absolute distance precision are independent of the distance measured. We discuss systematic error sources, and present plans for a new version of the gauge based on semiconductor lasers and fiber-coupled components
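    A much-simplified model of the frequency-to-distance mapping is sketched below: if the gauge holds a fixed interference order N, then f = N·c/(2·L) and hence ΔL/L = −Δf/f. This is only an illustration of the scaling, not the instrument's actual signal chain, and the HeNe frequency used is approximate:

      def length_change_from_frequency_shift(length_m, f_hz, df_hz):
          """Null-tracking toy model: holding a fixed interference order N
          means f = N * c / (2 * L), so dL / L = -df / f and the speed of
          light cancels out of the ratio."""
          return -length_m * df_hz / f_hz

      f_hene = 473.6e12  # approximate HeNe optical frequency, Hz
      # A 10 kHz frequency shift on a 0.5 m path corresponds to about -10 pm:
      print(length_change_from_frequency_shift(0.5, f_hene, 10.0e3))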

  5. An adaptive distance measure for use with nonparametric models

    International Nuclear Information System (INIS)

    Garvey, D. R.; Hines, J. W.

    2006-01-01

    Distance measures perform a critical task in nonparametric, locally weighted regression. Locally weighted regression (LWR) models are a form of 'lazy learning' that constructs a local model 'on the fly' by comparing a query vector to historical, exemplar vectors according to a three-step process. First, the distance of the query vector to each of the exemplar vectors is calculated. Next, these distances are passed to a kernel function, which converts the distances to similarities or weights. Finally, the model output or response is calculated by performing locally weighted polynomial regression. To date, traditional distance measures, such as the Euclidean, weighted Euclidean, and L1-norm, have been used as the first step in the prediction process. Since these measures do not take into consideration sensor failures and drift, they are inherently ill-suited for application to 'real world' systems. This paper describes one such LWR model, namely auto-associative kernel regression (AAKR), and describes a new, Adaptive Euclidean distance measure that can be used to dynamically compensate for faulty sensor inputs. In this new distance measure, the query observations that lie outside of the training range (i.e. outside the minimum and maximum input exemplars) are dropped from the distance calculation. This allows the distance calculation to be robust to sensor drifts and failures, in addition to providing a method for managing inputs that exceed the training range. In this paper, AAKR models using the standard and Adaptive Euclidean distance are developed and compared for the pressure system of an operating nuclear power plant. It is shown that when the standard Euclidean distance is used for data with failed inputs, significant errors in the AAKR predictions can result. By using the Adaptive Euclidean distance it is shown that high-fidelity predictions are possible, in spite of the input failure. In fact, it is shown that with the Adaptive Euclidean distance prediction
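    A minimal sketch of the adaptive distance described above, assuming the training range is simply the column-wise minimum and maximum of the exemplars, might look like this (names and data are illustrative):

      import numpy as np

      def adaptive_euclidean(query, exemplars):
          """Query components outside the training range (column-wise min/max
          of the exemplars) are dropped before the distances are computed."""
          exemplars = np.asarray(exemplars, dtype=float)
          query = np.asarray(query, dtype=float)
          lo, hi = exemplars.min(axis=0), exemplars.max(axis=0)
          keep = (query >= lo) & (query <= hi)        # in-range inputs only
          diffs = exemplars[:, keep] - query[keep]
          return np.sqrt((diffs ** 2).sum(axis=1))    # one distance per exemplar

      X = [[1.0, 10.0, 0.20], [1.2, 11.0, 0.30], [0.9, 9.5, 0.25]]
      q = [1.1, 55.0, 0.26]   # the second sensor has drifted far out of range
      print(adaptive_euclidean(q, X))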

  6. Reducing the distance in distance-caregiving by technology innovation

    Directory of Open Access Journals (Sweden)

    Lazelle E Benefield

    2007-07-01

    Full Text Available Lazelle E Benefield(1), Cornelia Beck(2); (1)College of Nursing, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA; (2)Pat & Willard Walker Family Memory Research Center, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA. Abstract: Family caregivers are responsible for the home care of over 34 million older adults in the United States. For many, the elder family member lives more than an hour’s distance away. Distance caregiving is a growing alternative to more familiar models where: (1) the elder and the family caregiver(s) may reside in the same household; or (2) the family caregiver may live nearby but not in the same household as the elder. The distance caregiving model involves elders and their family caregivers who live at some distance, defined as more than a 60-minute commute, from one another. Evidence suggests that distance caregiving is a distinct phenomenon, differs substantially from on-site family caregiving, and requires additional assistance to support the physical, social, and contextual dimensions of the caregiving process. Technology-based assists could virtually connect the caregiver and elder and provide strong support that addresses the elder’s physical, social, cognitive, and/or sensory impairments. Therefore, in today’s era of high technology, it is surprising that so few affordable innovations are being marketed for distance caregiving. This article addresses distance caregiving, proposes the use of technology innovation to support caregiving, and suggests a research agenda to better inform policy decisions related to the unique needs of this situation. Keywords: caregiving, family, distance, technology, elders

  7. Equivalence of massive propagator distance and mathematical distance on graphs

    International Nuclear Information System (INIS)

    Filk, T.

    1992-01-01

    It is shown in this paper that the assignment of distance according to the massive propagator method and according to the mathematical definition (length of minimal path) on arbitrary graphs with a bound on the degree leads to equivalent large-scale properties of the graph. In particular, the internal scaling dimension is the same for both definitions. This result holds for any fixed, non-vanishing mass, so that a genuinely inequivalent definition of distance requires the limit m → 0

  8. Language distance and tree reconstruction

    International Nuclear Information System (INIS)

    Petroni, Filippo; Serva, Maurizio

    2008-01-01

    Languages evolve over time according to a process in which reproduction, mutation and extinction are all possible. This is very similar to haploid evolution for asexual organisms and for the mitochondrial DNA of complex ones. Exploiting this similarity, it is possible, in principle, to verify hypotheses concerning the relationship among languages and to reconstruct their family tree. The key point is the definition of the distances among pairs of languages in analogy with the genetic distances among pairs of organisms. Distances can be evaluated by comparing grammar and/or vocabulary, but while it is difficult, if not impossible, to quantify grammar distance, it is possible to measure a distance from vocabulary differences. The method used by glottochronology computes distances from the percentage of shared 'cognates', which are words with a common historical origin. The weak point of this method is that subjective judgment plays a significant role. Here we define the distance of two languages by considering a renormalized edit distance among words with the same meaning and averaging over the two hundred words contained in a Swadesh list. In our approach the vocabulary of a language is the analogue of DNA for organisms. The advantage is that we avoid subjectivity and, furthermore, reproducibility of results is guaranteed. We apply our method to the Indo-European and the Austronesian groups, considering, in both cases, fifty different languages. The two trees obtained are, in many respects, similar to those found by glottochronologists, with some important differences as regards the positions of a few languages. In order to support these different results we separately analyze the structure of the distances of these languages with respect to all the others
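    The recipe described above can be sketched as follows: compute the Levenshtein edit distance for each pair of same-meaning words, renormalize by the longer word's length, and average over the list. This is a sketch of the general idea, with invented toy words, not the authors' exact pipeline or word list:

      def levenshtein(a, b):
          """Classic dynamic-programming edit distance (insertions, deletions
          and substitutions all cost 1)."""
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1,                  # deletion
                                 cur[j - 1] + 1,               # insertion
                                 prev[j - 1] + (ca != cb)))    # substitution
              prev = cur
          return prev[-1]

      def language_distance(words_a, words_b):
          """Average, over aligned same-meaning word pairs, of the edit
          distance renormalized by the longer word's length."""
          scores = [levenshtein(a, b) / max(len(a), len(b))
                    for a, b in zip(words_a, words_b)]
          return sum(scores) / len(scores)

      # Toy example with three aligned meanings ("water", "mother", "night")
      print(language_distance(["acqua", "madre", "notte"],    # Italian
                              ["agua", "madre", "noche"]))    # Spanish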

  9. Language distance and tree reconstruction

    Science.gov (United States)

    Petroni, Filippo; Serva, Maurizio

    2008-08-01

    Languages evolve over time according to a process in which reproduction, mutation and extinction are all possible. This is very similar to haploid evolution for asexual organisms and for the mitochondrial DNA of complex ones. Exploiting this similarity, it is possible, in principle, to verify hypotheses concerning the relationship among languages and to reconstruct their family tree. The key point is the definition of the distances among pairs of languages in analogy with the genetic distances among pairs of organisms. Distances can be evaluated by comparing grammar and/or vocabulary, but while it is difficult, if not impossible, to quantify grammar distance, it is possible to measure a distance from vocabulary differences. The method used by glottochronology computes distances from the percentage of shared 'cognates', which are words with a common historical origin. The weak point of this method is that subjective judgment plays a significant role. Here we define the distance of two languages by considering a renormalized edit distance among words with the same meaning and averaging over the two hundred words contained in a Swadesh list. In our approach the vocabulary of a language is the analogue of DNA for organisms. The advantage is that we avoid subjectivity and, furthermore, reproducibility of results is guaranteed. We apply our method to the Indo-European and the Austronesian groups, considering, in both cases, fifty different languages. The two trees obtained are, in many respects, similar to those found by glottochronologists, with some important differences as regards the positions of a few languages. In order to support these different results we separately analyze the structure of the distances of these languages with respect to all the others.

  10. Finger vein identification using fuzzy-based k-nearest centroid neighbor classifier

    Science.gov (United States)

    Rosdi, Bakhtiar Affendi; Jaafar, Haryati; Ramli, Dzati Athiar

    2015-02-01

    In this paper, a new approach for personal identification using finger vein images is presented. Finger vein is an emerging type of biometrics that attracts the attention of researchers in the biometrics area. As compared to other biometric traits such as face, fingerprint and iris, the finger vein is more secure and harder to counterfeit since the features are inside the human body. So far, most researchers have focused on how to extract robust features from the captured vein images; not much research has been conducted on the classification of the extracted features. In this paper, a new classifier called fuzzy-based k-nearest centroid neighbor (FkNCN) is applied to classify the finger vein image. The proposed FkNCN employs a surrounding rule to obtain the k-nearest centroid neighbors based on the spatial distributions of the training images and their distance to the test image. Then, the fuzzy membership function is utilized to assign the test image to the class which is most frequently represented by the k-nearest centroid neighbors. Experimental evaluation using our own database, which was collected from 492 fingers, shows that the proposed FkNCN has better performance than the k-nearest neighbor, k-nearest centroid neighbor and fuzzy-based k-nearest neighbor classifiers. This shows that the proposed classifier is able to identify the finger vein image effectively.
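    The neighbor-selection stage (the "surrounding rule") can be sketched greedily as below; the fuzzy membership weighting of FkNCN is omitted, and the data are toy values:

      import numpy as np

      def k_nearest_centroid_neighbors(query, X, k):
          """Greedy surrounding rule: at each step pick the unused training
          point whose addition brings the centroid of the selected set
          closest to the query."""
          X = np.asarray(X, dtype=float)
          query = np.asarray(query, dtype=float)
          chosen, remaining = [], list(range(len(X)))
          for _ in range(k):
              best, best_d = None, np.inf
              for idx in remaining:
                  centroid = X[chosen + [idx]].mean(axis=0)
                  d = np.linalg.norm(centroid - query)
                  if d < best_d:
                      best, best_d = idx, d
              chosen.append(best)
              remaining.remove(best)
          return chosen   # indices of the k nearest centroid neighbors

      X = [[0.0, 0.0], [2.0, 0.1], [0.1, 2.0], [1.0, 1.0], [3.0, 3.0]]
      print(k_nearest_centroid_neighbors([0.9, 0.9], X, k=3))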

  11. How unprecedented a solar minimum was it?

    Science.gov (United States)

    Russell, C T; Jian, L K; Luhmann, J G

    2013-05-01

    The end of the last solar cycle was at least 3 years late, and to date, the new solar cycle has seen mainly weaker activity since the onset of the rising phase toward the new solar maximum. The newspapers now even report when auroras are seen in Norway. This paper is an update of our review paper written during the deepest part of the last solar minimum [1]. We update the records of solar activity and its consequent effects on the interplanetary fields and solar wind density. The arrival of solar minimum allows us to use two techniques that predict sunspot maximum from readings obtained at solar minimum. It is clear that the Sun is still behaving strangely compared to the last few solar minima even though we are well beyond the minimum phase of the cycle 23-24 transition.

  12. Impact of the Minimum Wage on Compression.

    Science.gov (United States)

    Wolfe, Michael N.; Candland, Charles W.

    1979-01-01

    Assesses the impact of increases in the minimum wage on salary schedules, provides guidelines for creating a philosophy to deal with the impact, and outlines options and presents recommendations. (IRT)

  13. Quantitative Research on the Minimum Wage

    Science.gov (United States)

    Goldfarb, Robert S.

    1975-01-01

    The article reviews recent research examining the impact of minimum wage requirements on the size and distribution of teenage employment and earnings. The studies measure income distribution, employment levels and effect on unemployment. (MW)

  14. Determining minimum lubrication film for machine parts

    Science.gov (United States)

    Hamrock, B. J.; Dowson, D.

    1978-01-01

    Formula predicts minimum film thickness required for fully-flooded ball bearings, gears, and cams. Formula is result of study to determine complete theoretical solution of isothermal elasto-hydrodynamic lubrication of fully-flooded elliptical contacts.

  15. Long Term Care Minimum Data Set (MDS)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Long-Term Care Minimum Data Set (MDS) is a standardized, primary screening and assessment tool of health status that forms the foundation of the comprehensive...

  16. The SME gauge sector with minimum length

    Energy Technology Data Exchange (ETDEWEB)

    Belich, H.; Louzada, H.L.C. [Universidade Federal do Espirito Santo, Departamento de Fisica e Quimica, Vitoria, ES (Brazil)

    2017-12-15

    We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory. (orig.)

  17. The SME gauge sector with minimum length

    Science.gov (United States)

    Belich, H.; Louzada, H. L. C.

    2017-12-01

    We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory.

  18. Solar wind and coronal structure near sunspot minimum: Pioneer and SMM observations from 1985-1987

    International Nuclear Information System (INIS)

    Mihalov, J.D.; Barnes, A.; Hundhausen, A.J.; Smith, E.J.

    1990-01-01

    The solar wind speeds observed in the outer heliosphere (20 to 40 AU heliocentric distance, approximately) by Pioneers 10 and 11, and at a heliocentric distance of 0.7 AU by the Pioneer Venus spacecraft, reveal a complex set of changes in the years near the recent sunspot minimum, 1985-1987. The pattern of recurrent solar wind streams, the long-term average speed, and the sector polarity of the interplanetary magnetic field all changed in a manner suggesting both a temporal variation, and a changing dependence on heliographic latitude. Coronal observations made from the Solar Maximum Mission spacecraft during the same epoch show a systematic variation in coronal structure and (by implication) the magnetic structure imposed on the expanding solar wind. These observations suggest interpretation of the solar wind speed variations in terms of the familiar model where the speed increases with distance from a nearly flat interplanetary current sheet (or with heliomagnetic latitude), and where this current sheet becomes aligned with the solar equatorial plane as sunspot minimum approaches, but deviates rapidly from that orientation after minimum. The authors confirm here that this basic organization of the solar wind speed persists in the outer heliosphere with an orientation of the neutral sheet consistent with that inferred at a heliocentric distance of a few solar radii, from the coronal observations.

  19. An Improvement To The k-Nearest Neighbor Classifier For ECG Database

    Science.gov (United States)

    Jaafar, Haryati; Hidayah Ramli, Nur; Nasir, Aimi Salihah Abdul

    2018-03-01

    The k-nearest neighbor (kNN) is a non-parametric classifier and has been widely used for pattern classification. However, in practice, the performance of kNN often tends to fail due to the lack of information on how the samples are distributed. Moreover, kNN is no longer optimal when the training samples are limited. Another problem observed in kNN concerns the weighting issues in assigning the class label before classification. Thus, to address these limitations, a new classifier called Mahalanobis fuzzy k-nearest centroid neighbor (MFkNCN) is proposed in this study. Here, a Mahalanobis distance is applied to avoid the imbalance of the sample distribution. Then, a surrounding rule is employed to obtain the nearest centroid neighbors based on the distributions of the training samples and their distance to the query point. Consequently, the fuzzy membership function is employed to assign the query point to the class label which is most frequently represented by the nearest centroid neighbors. Experimental studies on electrocardiogram (ECG) signals are used in this study. The classification performances are evaluated in two experimental steps, i.e., different values of k and different sizes of feature dimensions. Subsequently, a comparative study of the kNN, kNCN, FkNN and MFkNCN classifiers is conducted to evaluate the performance of the proposed classifier. The results show that the performance of MFkNCN consistently exceeds that of kNN, kNCN and FkNN, with a best classification rate of 96.5%.
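    For reference, the Mahalanobis distance used above rescales the Euclidean distance by the inverse covariance of the training data; a minimal sketch, with toy data, is:

      import numpy as np

      def mahalanobis(x, y, cov):
          """sqrt((x - y)^T cov^{-1} (x - y)); reduces to the Euclidean
          distance when cov is the identity matrix."""
          diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
          return float(np.sqrt(diff @ np.linalg.inv(np.asarray(cov)) @ diff))

      # Toy training set: estimate the covariance, then measure a query's
      # distance to the first training sample.
      X = np.array([[1.0, 2.0], [2.0, 3.5], [3.0, 3.0], [4.0, 5.0]])
      print(mahalanobis([2.5, 5.0], X[0], np.cov(X, rowvar=False)))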

  20. Minimum Data Set RUGs by Assessment Report

    Data.gov (United States)

    U.S. Department of Health & Human Services — This table displays national frequencies and percentages for the RUG III categories from MDS Medicare assessment records. The RUG groups are classified using the...

  1. Academy Distance Learning Tools (IRIS) -

    Data.gov (United States)

    Department of Transportation — IRIS is a suite of front-end web applications utilizing a centralized back-end Oracle database. The system fully supports the FAA Academy's Distance Learning Program...

  2. Distance labeling schemes for trees

    DEFF Research Database (Denmark)

    Alstrup, Stephen; Gørtz, Inge Li; Bistrup Halvorsen, Esben

    2016-01-01

    We consider distance labeling schemes for trees: given a tree with n nodes, label the nodes with binary strings such that, given the labels of any two nodes, one can determine, by looking only at the labels, the distance in the tree between the two nodes. A lower bound by Gavoille et al. [Gavoille...... variants such as, for example, small distances in trees [Alstrup et al., SODA, 2003]. We improve the known upper and lower bounds of exact distance labeling by showing that 1/4 log²(n) bits are needed and that 1/2 log²(n) bits are sufficient. We also give (1 + ε)-stretch labeling schemes using Theta...

  3. Distance Education in Technological Age

    Directory of Open Access Journals (Sweden)

    R .C. SHARMA

    2005-04-01

    Full Text Available Distance Education in Technological Age, Romesh Verma (Editor), New Delhi: Anmol Publications, 2005, ISBN 81-261-2210-2, pp. 419. Reviewed by R. C. Sharma, Regional Director, Indira Gandhi National Open University, India. The advancements in information and communication technologies have brought significant changes in the way open and distance learning is provided to the learners. The impact of such changes is quite visible in both developed and developing countries. Switching over to online mode, joining hands with private initiatives and making a presence in foreign waters are some of the hallmarks of the open and distance education (ODE) institutions in developing countries. The compilation of twenty-six essays on themes applicable to ODE has resulted in the book, "Distance Education in Technological Age". These essays follow a progressive style of narration, starting from describing the conceptual framework of distance education and how distance education emerged on the global scene and in India, and then go on to discuss the emergence of online distance education and research aspects in ODE. The initial four chapters provide a detailed account of the historical development and growth of distance education in India and the State Open University and National Open University models in India. Student support services are pivotal to any distance education and much of its success depends on how well the support services are provided. These are discussed from national and international perspectives. The issues of collaborative learning, learning on demand, lifelong learning, the learning-unlearning and re-learning model and strategic alliances have also been given due space by the authors. An assortment of technologies like communication technology, domestic technology, information technology, mass media and entertainment technology, media technology and educational technology gives an idea of how these technologies are being adopted in the open universities. The study

  4. Distance Education in Technological Age

    OpenAIRE

    R .C. SHARMA

    2005-01-01

    Distance Education in Technological Age, Romesh Verma (Editor), New Delhi: Anmol Publications, 2005, ISBN 81-261-2210-2, pp. 419. Reviewed by R. C. Sharma, Regional Director, Indira Gandhi National Open University, India. The advancements in information and communication technologies have brought significant changes in the way open and distance learning is provided to the learners. The impact of such changes is quite visible in both developed and developing countries. Switching over to online mode...

  5. A Supervised Multiclass Classifier for an Autocoding System

    Directory of Open Access Journals (Sweden)

    Yukako Toko

    2017-11-01

    Full Text Available Classification is often required in various contexts, including in the field of official statistics. In the previous study, we have developed a multiclass classifier that can classify short text descriptions with high accuracy. The algorithm borrows the concept of the naïve Bayes classifier and is so simple that its structure is easily understandable. The proposed classifier has the following two advantages. First, the processing times for both learning and classifying are extremely practical. Second, the proposed classifier yields high-accuracy results for a large portion of a dataset. We have previously developed an autocoding system for the Family Income and Expenditure Survey in Japan that has a better performing classifier. While the original system was developed in Perl in order to improve the efficiency of the coding process of short Japanese texts, the proposed system is implemented in the R programming language in order to explore versatility and is modified to make the system easily applicable to English text descriptions, in consideration of the increasing number of R users in the field of official statistics. We are planning to publish the proposed classifier as an R-package. The proposed classifier would be generally applicable to other classification tasks including coding activities in the field of official statistics, and it would contribute greatly to improving their efficiency.

  6. Scaling of Natal Dispersal Distances in Terrestrial Birds and Mammals

    Directory of Open Access Journals (Sweden)

    Glenn D. Sutherland

    2000-07-01

    Full Text Available Natal dispersal is a process that is critical in the spatial dynamics of populations, including population spread, recolonization, and gene flow. It is a central focus of conservation issues for many vertebrate species. Using data for 77 bird and 68 mammal species, we tested whether median and maximum natal dispersal distances were correlated with body mass, diet type, social system, taxonomic family, and migratory status. Body mass and diet type were found to predict both median and maximum natal dispersal distances in mammals: large species dispersed farther than small ones, and carnivorous species dispersed farther than herbivores and omnivores. Similar relationships occurred for carnivorous bird species, but not for herbivorous or omnivorous ones. Natal dispersal distances in birds or mammals were not significantly related to broad categories of social systems. Only in birds were factors such as taxonomic relatedness and migratory status correlated with natal dispersal, and then only for maximum distances. Summary properties of dispersal processes appeared to be derived from interactions among behavioral and morphological characteristics of species and from their linkages to the dynamics of resource availability in landscapes. In all the species we examined, most dispersers moved relatively short distances, and long-distance dispersal was uncommon. On the basis of these findings, we fit an empirical model based on the negative exponential distribution for calculating minimum probabilities that animals disperse particular distances from their natal areas. This model, coupled with knowledge of a species' body mass and diet type, can be used to conservatively predict dispersal distances for different species and examine possible consequences of large-scale habitat alterations on connectedness between populations. Taken together, our results can provide managers with the means to identify species vulnerable to landscape-level habitat changes

  7. The Edit Distance as a Measure of Perceived Rhythmic Similarity

    Directory of Open Access Journals (Sweden)

    Olaf Post

    2012-07-01

    Full Text Available The ‘edit distance’ (or ‘Levenshtein distance’ measure of distance between two data sets is defined as the minimum number of editing operations – insertions, deletions, and substitutions – that are required to transform one data set to the other (Orpen and Huron, 1992. This measure of distance has been applied frequently and successfully in music information retrieval, but rarely in predicting human perception of distance. In this study, we investigate the effectiveness of the edit distance as a predictor of perceived rhythmic dissimilarity under simple rhythmic alterations. Approaching rhythms as a set of pulses that are either onsets or silences, we study two types of alterations. The first experiment is designed to test the model’s accuracy for rhythms that are relatively similar; whether rhythmic variations with the same edit distance to a source rhythm are also perceived as relatively similar by human subjects. In addition, we observe whether the salience of an edit operation is affected by its metric placement in the rhythm. Instead of using a rhythm that regularly subdivides a 4/4 meter, our source rhythm is a syncopated 16-pulse rhythm, the son. Results show a high correlation between the predictions by the edit distance model and human similarity judgments (r = 0.87; a higher correlation than for the well-known generative theory of tonal music (r = 0.64. In the second experiment, we seek to assess the accuracy of the edit distance model in predicting relatively dissimilar rhythms. The stimuli used are random permutations of the son’s inter-onset intervals: 3-3-4-2-4. The results again indicate that the edit distance correlates well with the perceived rhythmic dissimilarity judgments of the subjects (r = 0.76. To gain insight in the relationships between the individual rhythms, the results are also presented by means of graphic phylogenetic trees.

  8. Phylogenetic Applications of the Minimum Contradiction Approach on Continuous Characters

    Directory of Open Access Journals (Sweden)

    Marc Thuillard

    2009-01-01

    Full Text Available We describe the conditions under which a set of continuous variables or characters can be described as an X-tree or a split network. A distance matrix corresponds exactly to a split network or a valued X-tree if, after ordering of the taxa, the variables values can be embedded into a function with at most a local maximum and a local minimum, and crossing any horizontal line at most twice. In real applications, the order of the taxa best satisfying the above conditions can be obtained using the Minimum Contradiction method. This approach is applied to 2 sets of continuous characters. The first set corresponds to craniofacial landmarks in Hominids. The contradiction matrix is used to identify possible tree structures and some alternatives when they exist. We explain how to discover the main structuring characters in a tree. The second set consists of a sample of 100 galaxies. In that second example one shows how to discretize the continuous variables describing physical properties of the galaxies without disrupting the underlying tree structure.

  9. A proposal of comparative Maunder minimum cosmogenic isotope measurements

    International Nuclear Information System (INIS)

    Attolini, M.R.; Nanni, T.; Galli, M.; Povinec, P.

    1989-01-01

    There are at present contradictory conclusions about solar activity and cosmogenic isotope production variation during the Maunder Minimum. The interaction of the solar wind with galactic cosmic rays, and the dynamic behaviour of the Sun either as a system having an internal clock and/or as a forced non-linear system, are important aspects that can shed new light on solar physics, the Earth-Sun relationship and climatic variation. Essential progress in the matter might be made by clarifying the cosmogenic isotope production during the mentioned interval. As it seems that during the Maunder Minimum the Be10 production oscillated by about a factor of two, the authors also expect short-scale enhanced variations in tree-ring radiocarbon concentrations for the same interval. It is therefore highly desirable that, for the same interval, which the authors would identify with 1640-1720 AD, detailed concentration measurements both of Be10 (in dated polar ice, in addition to those of Beer et al.) and of tree-ring radiocarbon be made with cross-checking, in samples from different latitudes and longitudes and at both short and large distances from the sea. The samples could be taken, for example, from the central Mediterranean region, the Baltic region and other sites in central Europe and Asia.

  10. Algorithms for Image Analysis and Combination of Pattern Classifiers with Application to Medical Diagnosis

    Science.gov (United States)

    Georgiou, Harris

    2009-10-01

    Medical Informatics and the application of modern signal processing in the assistance of the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. At a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to advanced pattern classifiers and machine learning models. Finally, these approaches are extended in multi-classifier models under the scope of Game Theory and optimum collective decision, in order to produce efficient solutions for combining classifiers with minimum computational costs for advanced diagnostic systems. The material covered in this thesis is related to a total of 18 published papers, 6 in scientific journals and 12 in international conferences.

  11. The minimum wage in the Czech enterprises

    Directory of Open Access Journals (Sweden)

    Eva Lajtkepová

    2010-01-01

    Full Text Available Although the statutory minimum wage is not a new category, in the Czech Republic we encounter the definition and regulation of a minimum wage for the first time in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were then subsequently determined by government regulation in February 1991. Since that time, the value of the minimum wage has been adjusted fifteen times (the last increase was in January 2007). The aim of this article is to present selected results of two surveys on the acceptance of the statutory minimum wage by Czech enterprises. The first survey makes use of the data collected by questionnaire research in 83 small and medium-sized enterprises in the South Moravia Region in 2005, the second one of the data of 116 enterprises in the entire Czech Republic (in 2007). The data have been processed by means of the standard methods of descriptive statistics and of the appropriate methods of statistical analysis (Spearman rank correlation coefficient, Kendall coefficient, χ2 independence test, Kruskal-Wallis test, and others).

  12. 18 CFR 3a.12 - Authority to classify official information.

    Science.gov (United States)

    2010-04-01

    ... efficient administration. (b) The authority to classify information or material originally as Top Secret is... classify information or material originally as Secret is exercised only by: (1) Officials who have Top... information or material originally as Confidential is exercised by officials who have Top Secret or Secret...

  13. Using Neural Networks to Classify Digitized Images of Galaxies

    Science.gov (United States)

    Goderya, S. N.; McGuire, P. C.

    2000-12-01

    Automated classification of galaxies into Hubble types is of paramount importance for studying the large-scale structure of the Universe, particularly as survey projects like the Sloan Digital Sky Survey complete their data acquisition of one million galaxies. At present it is not possible to find robust and efficient artificial-intelligence-based galaxy classifiers. In this study we will summarize progress made in the development of automated galaxy classifiers using neural networks as machine learning tools. We explore the Bayesian linear algorithm, the higher order probabilistic network, the multilayer perceptron neural network and the Support Vector Machine classifier. The performance of any machine classifier is dependent on the quality of the parameters that characterize the different groups of galaxies. Our effort is to develop geometric and invariant-moment-based parameters as input to the machine classifiers instead of the raw pixel data. Such an approach reduces the dimensionality of the classifier considerably, removes the effects of scaling and rotation, and makes it easier to solve for the unknown parameters in the galaxy classifier. To judge the quality of training and classification we develop the concept of Mathews coefficients for the galaxy classification community. Mathews coefficients are single numbers that quantify classifier performance even with unequal prior probabilities of the classes.
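    Assuming the "Mathews coefficients" above refer to the Matthews correlation coefficient (our assumption, not confirmed by the record), the binary version can be computed from confusion-matrix counts as follows; the example counts are invented:

      import math

      def matthews_corrcoef(tp, tn, fp, fn):
          """Binary Matthews correlation coefficient: +1 is perfect agreement,
          0 is chance level, -1 is total disagreement."""
          denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
          return (tp * tn - fp * fn) / denom if denom else 0.0

      # Invented confusion-matrix counts for a two-class galaxy classifier
      print(matthews_corrcoef(tp=85, tn=90, fp=10, fn=15))  # ~0.75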

  14. Fisher classifier and its probability of error estimation

    Science.gov (United States)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
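    A minimal sketch of the two-class Fisher classifier is given below: the projection direction is w = Sw^(-1) (m1 - m2), and samples are classified by thresholding their projection. For simplicity the threshold here is the midpoint of the projected class means, whereas the cited paper derives an optimal threshold; all data are toy values:

      import numpy as np

      def fisher_direction(X1, X2):
          """Two-class Fisher discriminant: w = Sw^(-1) (m1 - m2), with Sw the
          pooled within-class scatter; threshold set at the midpoint of the
          projected class means (a simple, non-optimal choice)."""
          X1, X2 = np.asarray(X1, float), np.asarray(X2, float)
          m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
          Sw = (np.cov(X1, rowvar=False) * (len(X1) - 1)
                + np.cov(X2, rowvar=False) * (len(X2) - 1))
          w = np.linalg.solve(Sw, m1 - m2)
          return w, 0.5 * (w @ m1 + w @ m2)

      def classify(x, w, threshold):
          return 1 if np.asarray(x, float) @ w > threshold else 2

      X1 = [[2.0, 2.1], [2.5, 2.4], [3.0, 2.9]]   # class 1 samples (toy data)
      X2 = [[5.0, 5.2], [5.5, 5.9], [6.0, 6.4]]   # class 2 samples (toy data)
      w, t = fisher_direction(X1, X2)
      print(classify([2.8, 2.6], w, t), classify([5.7, 6.0], w, t))  # 1 2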

  15. Performance of classification confidence measures in dynamic classifier systems

    Czech Academy of Sciences Publication Activity Database

    Štefka, D.; Holeňa, Martin

    2013-01-01

    Vol. 23, No. 4 (2013), pp. 299-319. ISSN 1210-0552. R&D Projects: GA ČR GA13-17187S. Institutional support: RVO:67985807. Keywords: classifier combining * dynamic classifier systems * classification confidence. Subject RIV: IN - Informatics, Computer Science. Impact factor: 0.412, year: 2013

  16. 32 CFR 2400.30 - Reproduction of classified information.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Reproduction of classified information. 2400.30... SECURITY PROGRAM Safeguarding § 2400.30 Reproduction of classified information. Documents or portions of... the originator or higher authority. Any stated prohibition against reproduction shall be strictly...

  17. Classifying spaces with virtually cyclic stabilizers for linear groups

    DEFF Research Database (Denmark)

    Degrijse, Dieter Dries; Köhl, Ralf; Petrosyan, Nansen

    2015-01-01

    We show that every discrete subgroup of GL(n, ℝ) admits a finite-dimensional classifying space with virtually cyclic stabilizers. Applying our methods to SL(3, ℤ), we obtain a four-dimensional classifying space with virtually cyclic stabilizers and a decomposition of the algebraic K-theory of its...

  18. Dynamic integration of classifiers in the space of principal components

    NARCIS (Netherlands)

    Tsymbal, A.; Pechenizkiy, M.; Puuronen, S.; Patterson, D.W.; Kalinichenko, L.A.; Manthey, R.; Thalheim, B.; Wloka, U.

    2003-01-01

    Recent research has shown the integration of multiple classifiers to be one of the most important directions in machine learning and data mining. It was shown that, for an ensemble to be successful, it should consist of accurate and diverse base classifiers. However, it is also important that the

  19. Physical chemistry of WC-12 %Co coatings deposited by thermal spraying at different standoff distances

    Energy Technology Data Exchange (ETDEWEB)

    Afzal, Muhammad; Ahmed, Furqan; Anwar, Muhammad Yousaf; Ali, Liaqat; Ajmal, Muhammad [Univ. of Engineering and Technology, Metallurgical and Materials Engineering, Lahore (Pakistan); Khan, Aamer Nusair [Institute of Industrial and Control System, Rawalpindi (Pakistan)

    2015-09-15

    In the present research, WC-12 %Co cermet coatings were deposited on AISI-321 stainless steel substrate using air plasma spraying. During the deposition process, the standoff distance was varied from 80 to 130 mm with 10 mm increments. Other parameters such as current, voltage, time, carrier gas flow rate and powder feed rate etc. were kept constant. The objective was to study the effects of spraying distance on the microstructure of as-sprayed coatings. The microscopic analyses revealed that the band of spraying distance ranging from 90 to 100 mm was the threshold distance for optimum results, provided that all the other spraying parameters were kept constant. In this range of threshold distance, minimum percentages of porosity and defects were observed. Further, the formation of different phases, at six spraying distances, was studied using X-ray diffraction, and the phase analysis was correlated with hardness results.

  20. Just-in-time classifiers for recurrent concepts.

    Science.gov (United States)

    Alippi, Cesare; Boracchi, Giacomo; Roveri, Manuel

    2013-04-01

    Just-in-time (JIT) classifiers operate in evolving environments by classifying instances and reacting to concept drift. In stationary conditions, a JIT classifier improves its accuracy over time by exploiting additional supervised information coming from the field. In nonstationary conditions, however, the classifier reacts as soon as concept drift is detected; the current classification setup is discarded and a suitable one activated to keep the accuracy high. We present a novel generation of JIT classifiers able to deal with recurrent concept drift by means of a practical formalization of the concept representation and the definition of a set of operators working on such representations. The concept-drift detection activity, which is crucial in promptly reacting to changes exactly when needed, is advanced by considering change-detection tests monitoring both inputs and classes distributions.

  1. Measuring distances between complex networks

    International Nuclear Information System (INIS)

    Andrade, Roberto F.S.; Miranda, Jose G.V.; Pinho, Suani T.R.; Lobao, Thierry Petit

    2008-01-01

    A previously introduced concept of higher order neighborhoods in complex networks [R.F.S. Andrade, J.G.V. Miranda, T.P. Lobao, Phys. Rev. E 73 (2006) 046101] is used to define a distance between networks with the same number of nodes. With such a measure, expressed in terms of the matrix elements of the neighborhood matrices of each network, it is possible to compare, in a quantitative way, how far apart in the space of neighborhood matrices two networks are. The distance between these matrices depends on both the network topologies and the adopted node numberings. While the numbering of one network is fixed, a Monte Carlo algorithm is used to find the best numbering of the other network, in the sense that it minimizes the distance between the matrices. The minimal value found for the distance reflects differences in the neighborhood structures of the two networks that arise only from distinct topologies. This procedure ends up by providing a projection of the first network on the pattern of the second one. Examples are worked out allowing for a quantitative comparison of distances among distinct networks, as well as among distinct realizations of random networks.
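    As a loose illustration of the idea (not the paper's definition, which is based on higher-order neighborhood matrices), the sketch below compares plain hop-count matrices entrywise and minimizes the difference over node numberings of the second network with a random-swap hill climb; the graphs, step count and seed are arbitrary choices:

      import random
      import numpy as np

      def hop_matrix(adj):
          """All-pairs hop-count matrix by repeated breadth-first search (a
          stand-in for the paper's higher-order neighborhood matrices)."""
          n = len(adj)
          M = np.full((n, n), float(n))        # unreachable pairs get n hops
          for s in range(n):
              M[s, s], frontier, seen, d = 0.0, [s], {s}, 0
              while frontier:
                  d += 1
                  nxt = [v for u in frontier for v in range(n)
                         if adj[u][v] and v not in seen]
                  for v in nxt:
                      seen.add(v)
                      M[s, v] = d
                  frontier = nxt
          return M

      def network_distance(adj_a, adj_b, steps=2000, seed=0):
          """Entrywise L1 difference of hop matrices, minimized over node
          numberings of the second network by a random-swap hill climb."""
          rng = random.Random(seed)
          A, B = hop_matrix(adj_a), hop_matrix(adj_b)
          perm = list(range(len(B)))
          best = np.abs(A - B[np.ix_(perm, perm)]).sum()
          for _ in range(steps):
              i, j = rng.sample(range(len(perm)), 2)
              perm[i], perm[j] = perm[j], perm[i]
              cand = np.abs(A - B[np.ix_(perm, perm)]).sum()
              if cand <= best:
                  best = cand
              else:
                  perm[i], perm[j] = perm[j], perm[i]   # undo the swap
          return best

      ring = [[1 if abs(i - j) in (1, 4) else 0 for j in range(5)] for i in range(5)]
      path = [[1 if abs(i - j) == 1 else 0 for j in range(5)] for i in range(5)]
      print(network_distance(ring, ring), network_distance(ring, path))  # 0.0 and > 0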

  2. Computing Distances between Probabilistic Automata

    Directory of Open Access Journals (Sweden)

    Mathieu Tracol

    2011-07-01

    Full Text Available We present relaxed notions of simulation and bisimulation on Probabilistic Automata (PA, that allow some error epsilon. When epsilon is zero we retrieve the usual notions of bisimulation and simulation on PAs. We give logical characterisations of these notions by choosing suitable logics which differ from the elementary ones, L with negation and L without negation, by the modal operator. Using flow networks, we show how to compute the relations in PTIME. This allows the definition of an efficiently computable non-discounted distance between the states of a PA. A natural modification of this distance is introduced, to obtain a discounted distance, which weakens the influence of long term transitions. We compare our notions of distance to others previously defined and illustrate our approach on various examples. We also show that our distance is not expansive with respect to process algebra operators. Although L without negation is a suitable logic to characterise epsilon-(bisimulation on deterministic PAs, it is not for general PAs; interestingly, we prove that it does characterise weaker notions, called a priori epsilon-(bisimulation, which we prove to be NP-difficult to decide.

  3. Distance sampling methods and applications

    CERN Document Server

    Buckland, S T; Marques, T A; Oedekoven, C S

    2015-01-01

    In this book, the authors cover the basic methods and advances within distance sampling that are most valuable to practitioners and in ecology more broadly. This is the fourth book dedicated to distance sampling. In the decade since the last book published, there have been a number of new developments. The intervening years have also shown which advances are of most use. This self-contained book covers topics from the previous publications, while also including recent developments in method, software and application. Distance sampling refers to a suite of methods, including line and point transect sampling, in which animal density or abundance is estimated from a sample of distances to detected individuals. The book illustrates these methods through case studies; data sets and computer code are supplied to readers through the book’s accompanying website.  Some of the case studies use the software Distance, while others use R code. The book is in three parts.  The first part addresses basic methods, the ...

  4. Long-distance travellers stopover for longer: a case study with spoonbills staying in North Iberia

    OpenAIRE

    Navedo , Juan G.; Orizaola , Germán; Masero , José A.; Overdijk , Otto; Sánchez-Guzmán , Juan M.

    2010-01-01

    Abstract Long-distance migration is widespread among birds, connecting breeding and wintering areas through a set of stopover localities where individuals refuel and/or rest. The extent of the stopover is critical in determining the migratory strategy of a bird. Here, we examined the relationship between minimum length of stay of PVC-ringed birds in a major stopover site and the remaining flight distance to the overwintering area in the Eurasian spoonbill (Platalea l. leucorodia) d...

  5. Safety distance for preventing hot particle ignition of building insulation materials

    OpenAIRE

    Jiayun Song; Supan Wang; Haixiang Chen

    2014-01-01

    Trajectories of flying hot particles were predicted in this work, and the temperatures during the movement were also calculated. Once the particle temperature decreased to the critical temperature for a hot particle to ignite building insulation materials, which was predicted by hot-spot ignition theory, the distance particle traveled was determined as the minimum safety distance for preventing the ignition of building insulation materials by hot particles. The results showed that for sphere ...

  6. Risk control and the minimum significant risk

    International Nuclear Information System (INIS)

    Seiler, F.A.; Alvarez, J.L.

    1996-01-01

    Risk management implies that the risk manager can, by his actions, exercise at least a modicum of control over the risk in question. In the terminology of control theory, a management action is a control signal imposed as feedback on the system to bring about a desired change in the state of the system. In the terminology of risk management, an action is taken to bring a predicted risk to lower values. Even if it is assumed that the management action taken is 100% effective and that the projected risk reduction is infinitely well known, there is a lower limit to the desired effects that can be achieved. It is based on the fact that all risks, such as the incidence of cancer, exhibit a degree of variability due to a number of extraneous factors such as age at exposure, sex, location, and some lifestyle parameters such as smoking or the consumption of alcohol. If the control signal is much smaller than the variability of the risk, the signal is lost in the noise and control is lost. This defines a minimum controllable risk based on the variability of the risk over the population considered. This quantity is the counterpart of the minimum significant risk which is defined by the uncertainties of the risk model. Both the minimum controllable risk and the minimum significant risk are evaluated for radiation carcinogenesis and are shown to be of the same order of magnitude. For a realistic management action, the assumptions of perfectly effective action and perfect model prediction made above have to be dropped, resulting in an effective minimum controllable risk which is determined by both risk limits. Any action below that effective limit is futile, but it is also unethical due to the ethical requirement of doing more good than harm. Finally, some implications of the effective minimum controllable risk on the use of the ALARA principle and on the evaluation of remedial action goals are presented

  7. Minimum qualifications for nuclear criticality safety professionals

    International Nuclear Information System (INIS)

    Ketzlach, N.

    1990-01-01

    A Nuclear Criticality Technology and Safety Training Committee has been established within the U.S. Department of Energy (DOE) Nuclear Criticality Safety and Technology Project to review and, if necessary, develop standards for the training of personnel involved in nuclear criticality safety (NCS). The committee is exploring the need for developing a standard or other mechanism for establishing minimum qualifications for NCS professionals. The development of standards and regulatory guides for nuclear power plant personnel may serve as a guide in developing the minimum qualifications for NCS professionals

  8. A minimum achievable PV electrical generating cost

    International Nuclear Information System (INIS)

    Sabisky, E.S.

    1996-01-01

    The role and share of photovoltaic (PV) generated electricity in our nation's future energy arsenal is primarily dependent on its future production cost. This paper provides a framework for obtaining a minimum achievable electrical generating cost (a lower bound) for fixed, flat-plate photovoltaic systems. A cost of 2.8 ¢/kWh (1990$) was derived for a plant located in Southwestern USA sunshine using a cost of money of 8%. In addition, a value of 22 ¢/Wp (1990$) was estimated as a minimum module manufacturing cost/price.

  9. Euclidean distance geometry an introduction

    CERN Document Server

    Liberti, Leo

    2017-01-01

    This textbook, the first of its kind, presents the fundamentals of distance geometry:  theory, useful methodologies for obtaining solutions, and real world applications. Concise proofs are given and step-by-step algorithms for solving fundamental problems efficiently and precisely are presented in Mathematica®, enabling the reader to experiment with concepts and methods as they are introduced. Descriptive graphics, examples, and problems, accompany the real gems of the text, namely the applications in visualization of graphs, localization of sensor networks, protein conformation from distance data, clock synchronization protocols, robotics, and control of unmanned underwater vehicles, to name several.  Aimed at intermediate undergraduates, beginning graduate students, researchers, and practitioners, the reader with a basic knowledge of linear algebra will gain an understanding of the basic theories of distance geometry and why they work in real life.

  10. Geodesic distance in planar graphs

    International Nuclear Information System (INIS)

    Bouttier, J.; Di Francesco, P.; Guitter, E.

    2003-01-01

    We derive the exact generating function for planar maps (genus zero fatgraphs) with vertices of arbitrary even valence and with two marked points at a fixed geodesic distance. This is done in a purely combinatorial way based on a bijection with decorated trees, leading to a recursion relation on the geodesic distance. The latter is solved exactly in terms of discrete soliton-like expressions, suggesting an underlying integrable structure. We extract from this solution the fractal dimensions at the various (multi)-critical points, as well as the precise scaling forms of the continuum two-point functions and the probability distributions for the geodesic distance in (multi)-critical random surfaces. The two-point functions are shown to obey differential equations involving the residues of the KdV hierarchy

  11. Adaptive Distance Protection for Microgrids

    DEFF Research Database (Denmark)

    Lin, Hengwei; Guerrero, Josep M.; Quintero, Juan Carlos Vasquez

    2015-01-01

    Due to the increasing penetration of distributed generation resources, more and more microgrids can be found in distribution systems. This paper proposes a phasor measurement unit based distance protection strategy for microgrids in distribution systems. At the same time, a transfer tripping scheme is adopted to accelerate the tripping speed of the relays on the weak lines. The protection methodology is tested on a mid-voltage microgrid network in Aalborg, Denmark. The results show that the adaptive distance protection methodology has good selectivity and sensitivity. What is more, this system also has...

  12. Implementation of a microcomputer based distance relay for parallel transmission lines

    International Nuclear Information System (INIS)

    Phadke, A.G.; Jihuang, L.

    1986-01-01

    Distance relaying for parallel transmission lines is a difficult application problem with conventional phase and ground distance relays. It is known that for cross-country faults involving dissimilar phases and ground, three phase tripping may result. This paper summarizes a newly developed microcomputer based relay which is capable of classifying the cross-country fault correctly. The paper describes the principle of operation and results of laboratory tests of this relay

  13. Discretization of space and time: determining the values of minimum length and minimum time

    OpenAIRE

    Roatta , Luca

    2017-01-01

    Assuming that space and time can only have discrete values, we obtain the expression of the minimum length and the minimum time interval. These values are found to be exactly coincident with the Planck length and the Planck time, except for the presence of h instead of ħ.

  14. Fast Exact Euclidean Distance (FEED) Transformation

    NARCIS (Netherlands)

    Schouten, Theo; Kittler, J.; van den Broek, Egon; Petrou, M.; Nixon, M.

    2004-01-01

    Fast Exact Euclidean Distance (FEED) transformation is introduced, starting from the inverse of the distance transformation. The prohibitive computational cost of a naive implementation of traditional Euclidean Distance Transformation is tackled by three operations: restriction of both the number

  15. Class-specific Error Bounds for Ensemble Classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Prenger, R; Lemmond, T; Varshney, K; Chen, B; Hanley, W

    2009-10-06

    The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.

  16. Frog sound identification using extended k-nearest neighbor classifier

    Science.gov (United States)

    Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati

    2017-09-01

    Frog sound identification based on the vocalization becomes important for biological research and environmental monitoring. As a result, different types of feature extractions and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest neighbors and mutual sharing of neighborhood concepts, with the aim of improving the classification performance. It makes a prediction based on which training samples are the nearest neighbors of the testing sample and which training samples consider the testing sample as their nearest neighbor. In order to evaluate the classification performance in frog sound identification, the EKNN classifier is compared with competing classifiers, k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN) and Mutual k-Nearest Neighbor (MKNN), on the recorded sounds of 15 frog species obtained in Malaysian forests. The recorded sounds have been segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features are extracted by Mel Frequency Cepstrum Coefficient (MFCC). The experimental results have shown that the EKNN classifier exhibits the best performance in terms of accuracy compared to the competing classifiers, KNN, FKNN, KGNN and MKNN, for all cases.
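    As a rough illustration of the mutual-neighborhood idea behind EKNN, the sketch below combines the votes of the test sample's k nearest training points with the votes of training points that would count the test sample among their own k nearest neighbors. The equal weighting of the two neighbor sets is an assumption for illustration, not the published EKNN decision rule, and the MFCC feature extraction is assumed to have been done already.

```python
import numpy as np
from collections import Counter

def eknn_predict(X_train, y_train, x_test, k=5):
    """Vote over forward neighbours and mutual (reverse) neighbours.

    X_train: (n, d) feature matrix, y_train: length-n label array,
    x_test: length-d feature vector of the sample to classify.
    """
    d_test = np.linalg.norm(X_train - x_test, axis=1)
    nearest = np.argsort(d_test)[:k]               # k nearest training samples

    # training samples that would rank x_test among their own k nearest points
    reverse = []
    for i in range(len(X_train)):
        d_i = np.linalg.norm(X_train - X_train[i], axis=1)
        d_i[i] = np.inf                            # exclude the self-distance
        kth_nearest = np.partition(d_i, k - 1)[k - 1]
        if d_test[i] <= kth_nearest:
            reverse.append(i)

    votes = Counter(y_train[nearest]) + Counter(y_train[reverse])
    return votes.most_common(1)[0][0]
```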

  17. Diagnostics of synchronous motor based on analysis of acoustic signals with application of MFCC and Nearest Mean classifier

    OpenAIRE

    Adam Głowacz; Witold Głowacz; Andrzej Głowacz

    2010-01-01

    The paper presents a method for diagnostics of imminent failure conditions of a synchronous motor. This method is based on a study of acoustic signals generated by the synchronous motor. The sound recognition system is based on algorithms of data processing, such as MFCC and the Nearest Mean classifier with cosine distance. Software to recognize the sounds of the synchronous motor was implemented. The studies were carried out for four imminent failure conditions of the synchronous motor. The results confirm that the sys...
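    The decision rule itself is simple enough to sketch. Assuming each recording has already been reduced to a single MFCC-based feature vector (that aggregation step is glossed over here), a Nearest Mean classifier with cosine distance amounts to the following minimum-distance rule:

```python
import numpy as np

def train_nearest_mean(features, labels):
    """Compute one mean feature vector (e.g. of MFCC features) per class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def classify(sample, class_means):
    """Assign the class whose mean is closest in cosine distance."""
    return min(class_means, key=lambda c: cosine_distance(sample, class_means[c]))
```

    At prediction time the recording is assigned to whichever imminent failure condition has the closest class mean, which is exactly the minimum distance classifier idea.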

  18. Dimensionality reduction based on distance preservation to local mean for symmetric positive definite matrices and its application in brain-computer interfaces

    Science.gov (United States)

    Davoudi, Alireza; Shiry Ghidary, Saeed; Sadatnejad, Khadijeh

    2017-06-01

    Objective. In this paper, we propose a nonlinear dimensionality reduction algorithm for the manifold of symmetric positive definite (SPD) matrices that considers the geometry of SPD matrices and provides a low-dimensional representation of the manifold with high class discrimination in a supervised or unsupervised manner. Approach. The proposed algorithm tries to preserve the local structure of the data by preserving distances to local means (DPLM) and also provides an implicit projection matrix. DPLM is linear in terms of the number of training samples. Main results. We performed several experiments on the multi-class dataset IIa from BCI competition IV and two other datasets from BCI competition III including datasets IIIa and IVa. The results show that our approach, as a dimensionality reduction technique, leads to superior results in comparison with other competitors in the related literature because of its robustness against outliers and the way it preserves the local geometry of the data. Significance. The experiments confirm that the combination of DPLM with filter geodesic minimum distance to mean as the classifier leads to superior performance compared with the state of the art on brain-computer interface competition IV dataset IIa. Also the statistical analysis shows that our dimensionality reduction method performs significantly better than its competitors.

  19. Partial distance correlation with methods for dissimilarities

    OpenAIRE

    Székely, Gábor J.; Rizzo, Maria L.

    2014-01-01

    Distance covariance and distance correlation are scalar coefficients that characterize independence of random vectors in arbitrary dimension. Properties, extensions, and applications of distance correlation have been discussed in the recent literature, but the problem of defining the partial distance correlation has remained an open question of considerable interest. The problem of partial distance correlation is more complex than partial correlation partly because the squared distance covari...

  20. MINIMUM AREAS FOR ELEMENTARY SCHOOL BUILDING FACILITIES.

    Science.gov (United States)

    Pennsylvania State Dept. of Public Instruction, Harrisburg.

    MINIMUM AREA SPACE REQUIREMENTS IN SQUARE FOOTAGE FOR ELEMENTARY SCHOOL BUILDING FACILITIES ARE PRESENTED, INCLUDING FACILITIES FOR INSTRUCTIONAL USE, GENERAL USE, AND SERVICE USE. LIBRARY, CAFETERIA, KITCHEN, STORAGE, AND MULTIPURPOSE ROOMS SHOULD BE SIZED FOR THE PROJECTED ENROLLMENT OF THE BUILDING IN ACCORDANCE WITH THE PROJECTION UNDER THE…

  1. Dirac's minimum degree condition restricted to claws

    NARCIS (Netherlands)

    Broersma, Haitze J.; Ryjacek, Z.; Schiermeyer, I.

    1997-01-01

    Let G be a graph on n ≥ 3 vertices. Dirac's minimum degree condition is the condition that all vertices of G have degree at least n/2. This is a well-known sufficient condition for the existence of a Hamilton cycle in G. We give related sufficiency conditions for the existence of a Hamilton cycle or a

  2. 7 CFR 33.10 - Minimum requirements.

    Science.gov (United States)

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Regulations § 33.10 Minimum requirements. No person shall... shipment of apples to any foreign destination unless: (a) Apples grade at least U.S. No. 1 or U.S. No. 1...

  3. Minimum Risk Pesticide: Definition and Product Confirmation

    Science.gov (United States)

    Minimum risk pesticides pose little to no risk to human health or the environment and therefore are not subject to regulation under FIFRA. EPA does not do any pre-market review for such products or labels, but violative products are subject to enforcement.

  4. Minimum maintenance solar pump | Assefa | Zede Journal

    African Journals Online (AJOL)

    A minimum maintenance solar pump (MMSP), Fig 1, has been simulated for Addis Ababa, taking solar meteorological data of global radiation, diffuse radiation and ambient air temperature as input to a computer program that has been developed. To increase the performance of the solar pump, by trapping the long-wave ...

  5. Context quantization by minimum adaptive code length

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, Xiaolin

    2007-01-01

    Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols....

  6. 7 CFR 35.13 - Minimum quantity.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Minimum quantity. 35.13 Section 35.13 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... part, transport or receive for transportation to any foreign destination, a shipment of 25 packages or...

  7. Minimum impact house prototype for sustainable building

    NARCIS (Netherlands)

    Götz, E.; Klenner, K.; Lantelme, M.; Mohn, A.; Sauter, S.; Thöne, J.; Zellmann, E.; Drexler, H.; Jauslin, D.

    2010-01-01

    The Minihouse is a prototype for a sustainable townhouse. On a site of only 29 sqm it offers 154 sqm of urban life. The project 'Minimum Impact House' addresses two important questions: How do we provide living space in the cities without destroying the landscape? How to improve sustainably the

  8. 49 CFR 639.27 - Minimum criteria.

    Science.gov (United States)

    2010-10-01

    ... dollar value to any non-financial factors that are considered by using performance-based specifications..., DEPARTMENT OF TRANSPORTATION CAPITAL LEASES Cost-Effectiveness § 639.27 Minimum criteria. In making the... used where possible and appropriate: (a) Operation costs; (b) Reliability of service; (c) Maintenance...

  9. Computing nonsimple polygons of minimum perimeter

    NARCIS (Netherlands)

    Fekete, S.P.; Haas, A.; Hemmer, M.; Hoffmann, M.; Kostitsyna, I.; Krupke, D.; Maurer, F.; Mitchell, J.S.B.; Schmidt, A.; Schmidt, C.; Troegel, J.

    2018-01-01

    We consider the Minimum Perimeter Polygon Problem (MP3): for a given set V of points in the plane, find a polygon P with holes that has vertex set V , such that the total boundary length is smallest possible. The MP3 can be considered a natural geometric generalization of the Traveling Salesman

  10. Minimum-B mirrors plus EBT principles

    International Nuclear Information System (INIS)

    Yoshikawa, S.

    1983-01-01

    Electrons are heated at the minimum B location(s) created by the multipole field and the toroidal field. Resulting hot electrons can assist plasma confinement by (1) providing mirror, (2) creating azimuthally symmetric toroidal confinement, or (3) creating modified bumpy torus

  11. Completeness properties of the minimum uncertainty states

    Science.gov (United States)

    Trifonov, D. A.

    1993-01-01

    The completeness properties of the Schrodinger minimum uncertainty states (SMUS) and of some of their subsets are considered. The invariant measures and the resolution of unity measures for the set of SMUS are constructed and the representation of squeezing and correlating operators and SMUS as superpositions of Glauber coherent states on the real line is elucidated.

  12. Minimum Description Length Shape and Appearance Models

    DEFF Research Database (Denmark)

    Thodberg, Hans Henrik

    2003-01-01

    The Minimum Description Length (MDL) approach to shape modelling is reviewed. It solves the point correspondence problem of selecting points on shapes defined as curves so that the points correspond across a data set. An efficient numerical implementation is presented and made available as open s...

  13. Faster Fully-Dynamic minimum spanning forest

    DEFF Research Database (Denmark)

    Holm, Jacob; Rotenberg, Eva; Wulff-Nilsen, Christian

    2015-01-01

    We give a new data structure for the fully-dynamic minimum spanning forest problem in simple graphs. Edge updates are supported in O(log^4 n / log log n) expected amortized time per operation, improving the O(log^4 n) amortized bound of Holm et al. (STOC’98, JACM’01). We also provide a deterministic data...

  14. Minimum Wage Effects throughout the Wage Distribution

    Science.gov (United States)

    Neumark, David; Schweitzer, Mark; Wascher, William

    2004-01-01

    This paper provides evidence on a wide set of margins along which labor markets can adjust in response to increases in the minimum wage, including wages, hours, employment, and ultimately labor income. Not surprisingly, the evidence indicates that low-wage workers are most strongly affected, while higher-wage workers are little affected. Workers…

  15. Asymptotics for the minimum covariance determinant estimator

    NARCIS (Netherlands)

    Butler, R.W.; Davies, P.L.; Jhun, M.

    1993-01-01

    Consistency is shown for the minimum covariance determinant (MCD) estimators of multivariate location and scale and asymptotic normality is shown for the former. The proofs are made possible by showing a separating ellipsoid property for the MCD subset of observations. An analogous property is shown

  16. Ship localization in Santa Barbara Channel using machine learning classifiers.

    Science.gov (United States)

    Niu, Haiqiang; Ozanich, Emma; Gerstoft, Peter

    2017-11-01

    Machine learning classifiers are shown to outperform conventional matched field processing for a deep water (600 m depth) ocean acoustic-based ship range estimation problem in the Santa Barbara Channel Experiment when limited environmental information is known. Recordings of three different ships of opportunity on a vertical array were used as training and test data for the feed-forward neural network and support vector machine classifiers, demonstrating the feasibility of machine learning methods to locate unseen sources. The classifiers perform well up to 10 km range whereas the conventional matched field processing fails at about 4 km range without accurate environmental information.

  17. Classifying dysmorphic syndromes by using artificial neural network based hierarchical decision tree.

    Science.gov (United States)

    Özdemir, Merve Erkınay; Telatar, Ziya; Eroğul, Osman; Tunca, Yusuf

    2018-05-01

    Dysmorphic syndromes have different facial malformations. These malformations are significant for an early diagnosis of dysmorphic syndromes and contain distinctive information for face recognition. In this study we define the certain features of each syndrome by considering facial malformations and classify Fragile X, Hurler, Prader Willi, Down, Wolf Hirschhorn syndromes and healthy groups automatically. The reference points are marked on the face images and ratios between the points' distances are taken into consideration as features. We suggest a neural network based hierarchical decision tree structure in order to classify the syndrome types. We also implement k-nearest neighbor (k-NN) and artificial neural network (ANN) classifiers to compare classification accuracy with our hierarchical decision tree. The classification accuracy is 50%, 73% and 86.7% with the k-NN, ANN and hierarchical decision tree methods, respectively. Then, the same images are shown to a clinical expert, who achieved a recognition rate of 46.7%. We develop an efficient system to recognize different syndrome types automatically from simple, non-invasive imaging data, which is independent of the patient's age, sex and race, at high accuracy. The promising results indicate that our method can be used for pre-diagnosis of the dysmorphic syndromes by clinical experts.

  18. Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2014-01-01

    Full Text Available The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique has been applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al.

  19. Gesture Interaction at a Distance

    NARCIS (Netherlands)

    Fikkert, F.W.

    2010-01-01

    The aim of this work is to explore, from a perspective of human behavior, which gestures are suited to control large display surfaces from a short distance away; why that is so; and, equally important, how such an interface can be made a reality. A well-known example of the type of interface that is

  20. Communication Barriers in Distance Education

    Science.gov (United States)

    Isman, Aytekin; Dabaj, Fahme; Altinay, Fahriye; Altinay, Zehra

    2003-01-01

    Communication is a key concept as being the major tool for people in order to satisfy their needs. It is an activity which refers as process and effective communication requires qualified communication with the elimination of communication barriers. As it is known, distance education is a new trend by following contemporary facilities and tools…

  1. Distance Education Technologies in Asia

    International Development Research Centre (IDRC) Digital Library (Canada)

    Covers topics including mobile technology in non-formal distance education, the design and application of e-learning strategies, the need to standardise, and a digital library providing access to over 20,000 journals and thesis databases.

  2. Video surveillance using distance maps

    Science.gov (United States)

    Schouten, Theo E.; Kuppens, Harco C.; van den Broek, Egon L.

    2006-02-01

    Human vigilance is limited; hence, automatic motion and distance detection is one of the central issues in video surveillance. Many aspects are of importance; this paper specifically addresses efficiency (achieving real-time performance), accuracy, and robustness against various noise factors. To obtain fully controlled test environments, an artificial development center for robot navigation is introduced in which several parameters can be set (e.g., number of objects, trajectories and type and amount of noise). In the videos, for each following frame, movement of stationary objects is detected and pixels of moving objects are located from which moving objects are identified in a robust way. An Exact Euclidean Distance Map (E2DM) is utilized to determine accurately the distances between moving and stationary objects. Together with the determined distances between moving objects and the detected movement of stationary objects, this provides the input for detecting unwanted situations in the scene. Further, each intelligent object (e.g., a robot) is provided with its E2DM, allowing the object to plan its course of action. Timing results are specified for each program block of the processing chain for 20 different setups. So, the current paper presents extensive, experimentally controlled research on real-time, accurate, and robust motion detection for video surveillance, using E2DMs, which makes it a unique approach.

  3. Interaction in Distance Nursing Education

    Science.gov (United States)

    Boz Yuksekdag, Belgin

    2012-01-01

    The purpose of this study is to determine psychiatry nurses' attitudes toward the interactions in distance nursing education, and also to scrutinize their attitudes based on demographics and computer/Internet usage. The comparative relational scanning model is the method of this study. The research data were collected through "The Scale of Attitudes of…

  4. Student Monitoring in Distance Education.

    Science.gov (United States)

    Holt, Peter; And Others

    1987-01-01

    Reviews a computerized monitoring system for distance education students at Athabasca University designed to solve the problems of tracking student performance. A pilot project for tutors is described which includes an electronic conferencing system and electronic mail, and an evaluation currently in progress is briefly discussed. (LRW)

  5. Planetary tides during the Maunder sunspot minimum

    International Nuclear Information System (INIS)

    Smythe, C.M.; Eddy, J.A.

    1977-01-01

    Sun-centered planetary conjunctions and tidal potentials are here constructed for the AD1645 to 1715 period of sunspot absence, referred to as the 'Maunder Minimum'. These are found to be effectively indistinguishable from patterns of conjunctions and power spectra of tidal potential in the present era of a well established 11 year sunspot cycle. This places a new and difficult restraint on any tidal theory of sunspot formation. Problems arise in any direct gravitational theory due to the apparently insufficient forces and tidal heights involved. Proponents of the tidal hypothesis usually revert to trigger mechanisms, which are difficult to criticise or test by observation. Any tidal theory rests on the evidence of continued sunspot periodicity and the substantiation of a prolonged period of solar anomaly in the historical past. The 'Maunder Minimum' was the most drastic change in the behaviour of solar activity in the last 300 years; sunspots virtually disappeared for a 70 year period and the 11 year cycle was probably absent. During that time, however, the nine planets were all in their orbits, and planetary conjunctions and tidal potentials were indistinguishable from those of the present era, in which the 11 year cycle is well established. This provides good evidence against the tidal theory. The pattern of planetary tidal forces during the Maunder Minimum was reconstructed to investigate the possibility that the multiple planet forces somehow fortuitously cancelled at the time, that is that the positions of the slower moving planets in the 17th and early 18th centuries were such that conjunctions and tidal potentials were at the time reduced in number and force. There was no striking dissimilarity between the time of the Maunder Minimum and any period investigated. The failure of planetary conjunction patterns to reflect the drastic drop in sunspots during the Maunder Minimum casts doubt on the tidal theory of solar activity, but a more quantitative test

  6. A minimum spanning forest based classification method for dedicated breast CT images

    International Nuclear Information System (INIS)

    Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei

    2015-01-01

    Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors’ classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT image with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging

  7. Space-Efficient Approximation Scheme for Circular Earth Mover Distance

    DEFF Research Database (Denmark)

    Brody, Joshua Eric; Liang, Hongyu; Sun, Xiaoming

    2012-01-01

    The Earth Mover Distance (EMD) between point sets A and B is the minimum cost of a bipartite matching between A and B. EMD is an important measure for estimating similarities between objects with quantifiable features and has important applications in several areas including computer vision...... to computer vision [13] and can be seen as a special case of computing EMD on a discretized grid. We achieve a (1 ±ε) approximation for EMD in $\\tilde O(\\varepsilon^{-3})$ space, for every 0 ... that matches the space bound asked in [9]....

  8. A Streaming Distance Transform Algorithm for Neighborhood-Sequence Distances

    Directory of Open Access Journals (Sweden)

    Nicolas Normand

    2014-09-01

    Full Text Available We describe an algorithm that computes a “translated” 2D Neighborhood-Sequence Distance Transform (DT) using a look-up table approach. It requires a single raster scan of the input image and produces one line of output for every line of input. The neighborhood sequence is specified either by providing one period of some integer periodic sequence or by providing the rate of appearance of neighborhoods. The full algorithm optionally derives the regular (centered) DT from the “translated” DT, providing the result image on-the-fly, with a minimal delay, before the input image is fully processed. Its efficiency can benefit all applications that use neighborhood-sequence distances, particularly when pipelined processing architectures are involved, or when the size of objects in the source image is limited.
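    For readers unfamiliar with distance transforms, the classic two-pass city-block DT below shows what is being computed. It is not the paper's single-scan, look-up-table, neighborhood-sequence algorithm, only the textbook baseline that such algorithms improve on.

```python
import numpy as np

def city_block_dt(binary):
    """Two-pass city-block (d4) distance transform of a 0/1 image.

    Each object pixel (value 1) receives its d4 distance to the nearest
    background pixel (value 0).
    """
    h, w = binary.shape
    inf = h + w
    dist = np.where(binary > 0, inf, 0).astype(int)
    # forward raster scan: propagate from the top and left neighbours
    for y in range(h):
        for x in range(w):
            if y > 0:
                dist[y, x] = min(dist[y, x], dist[y - 1, x] + 1)
            if x > 0:
                dist[y, x] = min(dist[y, x], dist[y, x - 1] + 1)
    # backward raster scan: propagate from the bottom and right neighbours
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                dist[y, x] = min(dist[y, x], dist[y + 1, x] + 1)
            if x < w - 1:
                dist[y, x] = min(dist[y, x], dist[y, x + 1] + 1)
    return dist
```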

  9. Particle swarm optimization for determining shortest distance to voltage collapse

    Energy Technology Data Exchange (ETDEWEB)

    Arya, L.D.; Choube, S.C. [Electrical Engineering Department, S.G.S.I.T.S. Indore, MP 452 003 (India); Shrivastava, M. [Electrical Engineering Department, Government Engineering College Ujjain, MP 456 010 (India); Kothari, D.P. [Centre for Energy Studies, Indian Institute of Technology, Delhi (India)

    2007-12-15

    This paper describes an algorithm for computing the shortest distance to voltage collapse, i.e., determination of the closest saddle node bifurcation point (CSNBP), using the PSO technique. A direction along the CSNBP gives conservative results from a voltage security viewpoint. This information is useful to the operator to steer the system away from this point by taking corrective actions. The distance to a closest bifurcation is a minimum of the loadability given a slack bus or participation factors for increasing generation as the load increases. CSNBP determination has been formulated as an optimization problem to be solved with the PSO technique. PSO is a new evolutionary algorithm (EA) which is population based and inspired by the social behavior of animals such as fish schooling and bird flocking. It can handle optimization problems of any complexity since its mechanization is simple, with few parameters to be tuned. The developed algorithm has been implemented on two standard test systems. (author)
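    Since the method rests on PSO, a bare-bones particle swarm minimizer is sketched below. The inertia and acceleration constants are common textbook values, and the objective is left abstract: in the paper it would encode the distance to the closest saddle-node bifurcation under power-flow constraints, which is not reproduced here.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize a black-box function f(x) over box constraints with plain PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))      # particle positions
    v = np.zeros_like(x)                                   # particle velocities
    pbest = x.copy()                                       # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()                   # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())
```

    For example, pso_minimize(lambda z: float(np.sum(z ** 2)), [(-5.0, 5.0)] * 3) drives the swarm toward the origin.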

  10. Classifying hot water chemistry: Application of MULTIVARIATE STATISTICS

    OpenAIRE

    Sumintadireja, Prihadi; Irawan, Dasapta Erwin; Rezky, Yuanno; Gio, Prana Ugiana; Agustin, Anggita

    2016-01-01

    This file is the dataset for the following paper "Classifying hot water chemistry: Application of MULTIVARIATE STATISTICS". Authors: Prihadi Sumintadireja, Dasapta Erwin Irawan, Yuanno Rezky, Prana Ugiana Gio, Anggita Agustin

  11. Robust Combining of Disparate Classifiers Through Order Statistics

    Science.gov (United States)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

    Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic, are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real world data and standard public domain data sets corroborate these findings.
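    A hedged sketch of the simplest of these combiners: given one posterior vector per base classifier for a single sample, take an order statistic across the classifiers for each class and pick the arg-max. The array shape conventions are assumptions for illustration.

```python
import numpy as np

def order_statistic_combine(probs, statistic="median"):
    """Combine per-class posteriors from several classifiers for one sample.

    probs: array of shape (n_classifiers, n_classes).
    statistic: "median", "max", or an integer i for the i-th smallest output
    (0-based) per class.  Returns the index of the winning class.
    """
    if statistic == "median":
        combined = np.median(probs, axis=0)
    elif statistic == "max":
        combined = np.max(probs, axis=0)
    else:
        combined = np.sort(probs, axis=0)[int(statistic)]   # i-th order statistic
    return int(np.argmax(combined))
```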

  12. Using Statistical Process Control Methods to Classify Pilot Mental Workloads

    National Research Council Canada - National Science Library

    Kudo, Terence

    2001-01-01

    .... These include cardiac, ocular, respiratory, and brain activity measures. The focus of this effort is to apply statistical process control methodology on different psychophysiological features in an attempt to classify pilot mental workload...

  13. An ensemble classifier to predict track geometry degradation

    International Nuclear Information System (INIS)

    Cárdenas-Gallo, Iván; Sarmiento, Carlos A.; Morales, Gilberto A.; Bolivar, Manuel A.; Akhavan-Tabatabaei, Raha

    2017-01-01

    Railway operations are inherently complex and source of several problems. In particular, track geometry defects are one of the leading causes of train accidents in the United States. This paper presents a solution approach which entails the construction of an ensemble classifier to forecast the degradation of track geometry. Our classifier is constructed by solving the problem from three different perspectives: deterioration, regression and classification. We considered a different model from each perspective and our results show that using an ensemble method improves the predictive performance. - Highlights: • We present an ensemble classifier to forecast the degradation of track geometry. • Our classifier considers three perspectives: deterioration, regression and classification. • We construct and test three models and our results show that using an ensemble method improves the predictive performance.

  14. A novel statistical method for classifying habitat generalists and specialists

    DEFF Research Database (Denmark)

    Chazdon, Robin L; Chao, Anne; Colwell, Robert K

    2011-01-01

    …: (1) generalist; (2) habitat A specialist; (3) habitat B specialist; and (4) too rare to classify with confidence. We illustrate our multinomial classification method using two contrasting data sets: (1) bird abundance in woodland and heath habitats in southeastern Australia and (2) tree abundance in second-growth (SG) and old-growth (OG) rain forests in the Caribbean lowlands of northeastern Costa Rica. We evaluate the multinomial model in detail for the tree data set. Our results for birds were highly concordant with a previous nonstatistical classification, but our method classified a higher fraction (57.7%) of bird species with statistical confidence. Based on a conservative specialization threshold and adjustment for multiple comparisons, 64.4% of tree species in the full sample were too rare to classify with confidence. Among the species classified, OG specialists constituted the largest...

  15. 6 CFR 7.23 - Emergency release of classified information.

    Science.gov (United States)

    2010-01-01

    ... Classified Information Non-disclosure Form. In emergency situations requiring immediate verbal release of... information through approved communication channels by the most secure and expeditious method possible, or by...

  16. Vision-Based Detection and Distance Estimation of Micro Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Fatih Gökçe

    2015-09-01

    Full Text Available Detection and distance estimation of micro unmanned aerial vehicles (mUAVs) is crucial for (i) the detection of intruder mUAVs in protected environments; (ii) sense and avoid purposes on mUAVs or on other aerial vehicles; and (iii) multi-mUAV control scenarios, such as environmental monitoring, surveillance and exploration. In this article, we evaluate vision algorithms as alternatives for detection and distance estimation of mUAVs, since other sensing modalities entail certain limitations on the environment or on the distance. For this purpose, we test Haar-like features, histogram of gradients (HOG) and local binary patterns (LBP) using cascades of boosted classifiers. Cascaded boosted classifiers allow fast processing by performing detection tests at multiple stages, where only candidates passing earlier simple stages are processed at the subsequent more complex stages. We also integrate a distance estimation method with our system utilizing geometric cues with support vector regressors. We evaluated each method on indoor and outdoor videos that are collected in a systematic way and also on videos having motion blur. Our experiments show that, using boosted cascaded classifiers with LBP, near real-time detection and distance estimation of mUAVs are possible in about 60 ms indoors (1032 × 778 resolution) and 150 ms outdoors (1280 × 720 resolution) per frame, with a detection rate of 0.96 F-score. However, the cascaded classifiers using Haar-like features lead to better distance estimation since they can position the bounding boxes on mUAVs more accurately. On the other hand, our time analysis yields that the cascaded classifiers using HOG train and run faster than the other algorithms.
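    As a sketch of how such a boosted-cascade detector is typically driven, the snippet below uses OpenCV. The cascade file name, focal length and assumed mUAV width are hypothetical, and the pinhole-ratio range estimate merely stands in for the paper's support vector regressors on geometric cues.

```python
import cv2

# Hypothetical cascade trained offline on mUAV images (OpenCV only ships
# face/eye cascades, so this file name is an assumption).
cascade = cv2.CascadeClassifier("muav_lbp_cascade.xml")

def detect_and_range(frame_bgr, focal_px=1000.0, muav_width_m=0.5):
    """Detect candidate mUAV boxes and attach a crude pinhole range estimate."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    results = []
    for (x, y, w, h) in boxes:
        distance_m = focal_px * muav_width_m / max(w, 1)   # similar-triangles guess
        results.append(((x, y, w, h), distance_m))
    return results
```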

  17. Further results on binary convolutional codes with an optimum distance profile

    DEFF Research Database (Denmark)

    Johannesson, Rolf; Paaske, Erik

    1978-01-01

    Fixed binary convolutional codes are considered which are simultaneously optimal or near-optimal according to three criteria: namely, distance profile d, free distance d_∞, and minimum number of weight-d_∞ paths. It is shown how the optimum distance profile criterion can be used to limit... codes. As a counterpart to quick-look-in (QLI) codes which are not "transparent," we introduce rate R = 1/2 easy-look-in-transparent (ELIT) codes with a feedforward inverse (1 + D, D). In general, ELIT codes have d_∞ superior to that of QLI codes.

  18. DECISION TREE CLASSIFIERS FOR STAR/GALAXY SEPARATION

    International Nuclear Information System (INIS)

    Vasconcellos, E. C.; Ruiz, R. S. R.; De Carvalho, R. R.; Capelato, H. V.; Gal, R. R.; LaBarbera, F. L.; Frago Campos Velho, H.; Trevisan, M.

    2011-01-01

    We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 ≤ r ≤ 21 (85.2%) and r ≥ 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range 15 ≤ r ≤ 21, with much lower contamination than all but the Ball et al. classifier. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (>80%) while simultaneously achieving low contamination (∼2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 ≤ r ≤ 21.
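    The workflow the abstract describes, training a tree on photometric features of spectroscopically labelled objects and then applying it to the full photometric sample, can be sketched as below. The best-performing Functional Tree (FT) algorithm lives in Weka, so the generic scikit-learn CART tree here is only a stand-in, and the feature columns are assumptions.

```python
from sklearn.tree import DecisionTreeClassifier

def train_star_galaxy_tree(features, labels, max_depth=12):
    """Fit a decision tree for star/galaxy separation.

    features: per-object photometric quantities (e.g. psfMag - modelMag and
    colours); labels: 1 = star, 0 = galaxy from the spectroscopic training set.
    """
    clf = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    return clf.fit(features, labels)
```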

  19. Local-global classifier fusion for screening chest radiographs

    Science.gov (United States)

    Ding, Meng; Antani, Sameer; Jaeger, Stefan; Xue, Zhiyun; Candemir, Sema; Kohli, Marc; Thoma, George

    2017-03-01

    Tuberculosis (TB) is a severe comorbidity of HIV and chest x-ray (CXR) analysis is a necessary step in screening for the infective disease. Automatic analysis of digital CXR images for detecting pulmonary abnormalities is critical for population screening, especially in medical resource constrained developing regions. In this article, we describe steps that improve previously reported performance of NLM's CXR screening algorithms and help advance the state of the art in the field. We propose a local-global classifier fusion method where two complementary classification systems are combined. The local classifier focuses on subtle and partial presentation of the disease leveraging information in radiology reports that roughly indicates locations of the abnormalities. In addition, the global classifier models the dominant spatial structure in the gestalt image using GIST descriptor for the semantic differentiation. Finally, the two complementary classifiers are combined using linear fusion, where the weight of each decision is calculated by the confidence probabilities from the two classifiers. We evaluated our method on three datasets in terms of the area under the Receiver Operating Characteristic (ROC) curve, sensitivity, specificity and accuracy. The evaluation demonstrates the superiority of our proposed local-global fusion method over any single classifier.
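    A minimal sketch of the linear fusion step, assuming each subsystem outputs an abnormality probability together with a confidence-derived weight (how those weights are computed is described in the paper, not reproduced here):

```python
import numpy as np

def fuse_scores(p_local, p_global, w_local, w_global):
    """Weighted linear fusion of the local and global classifier scores."""
    w = np.array([w_local, w_global], dtype=float)
    w /= w.sum()                                  # normalise weights to sum to one
    return float(w[0] * p_local + w[1] * p_global)
```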

  20. Verification of classified fissile material using unclassified attributes

    International Nuclear Information System (INIS)

    Nicholas, N.J.; Fearey, B.L.; Puckett, J.M.; Tape, J.W.

    1998-01-01

    This paper reports on the most recent efforts of US technical experts to explore verification by IAEA of unclassified attributes of classified excess fissile material. Two propositions are discussed: (1) that multiple unclassified attributes could be declared by the host nation and then verified (and reverified) by the IAEA in order to provide confidence in that declaration of a classified (or unclassified) inventory while protecting classified or sensitive information; and (2) that attributes could be measured, remeasured, or monitored to provide continuity of knowledge in a nonintrusive and unclassified manner. They believe attributes should relate to characteristics of excess weapons materials and should be verifiable and authenticatable with methods usable by IAEA inspectors. Further, attributes (along with the methods to measure them) must not reveal any classified information. The approach that the authors have taken is as follows: (1) assume certain attributes of classified excess material, (2) identify passive signatures, (3) determine range of applicable measurement physics, (4) develop a set of criteria to assess and select measurement technologies, (5) select existing instrumentation for proof-of-principle measurements and demonstration, and (6) develop and design information barriers to protect classified information. While the attribute verification concepts and measurements discussed in this paper appear promising, neither the attribute verification approach nor the measurement technologies have been fully developed, tested, and evaluated

  1. A cardiorespiratory classifier of voluntary and involuntary electrodermal activity

    Directory of Open Access Journals (Sweden)

    Sejdic Ervin

    2010-02-01

    Full Text Available Abstract Background Electrodermal reactions (EDRs) can be attributed to many origins, including spontaneous fluctuations of electrodermal activity (EDA) and stimuli such as deep inspirations, voluntary mental activity and startling events. In fields that use EDA as a measure of psychophysiological state, the fact that EDRs may be elicited from many different stimuli is often ignored. This study attempts to classify observed EDRs as voluntary (i.e., generated from intentional respiratory or mental activity) or involuntary (i.e., generated from startling events or spontaneous electrodermal fluctuations). Methods Eight able-bodied participants were subjected to conditions that would cause a change in EDA: music imagery, startling noises, and deep inspirations. A user-centered cardiorespiratory classifier consisting of (1) an EDR detector, (2) a respiratory filter and (3) a cardiorespiratory filter was developed to automatically detect a participant's EDRs and to classify the origin of their stimulation as voluntary or involuntary. Results Detected EDRs were classified with a positive predictive value of 78%, a negative predictive value of 81% and an overall accuracy of 78%. Without the classifier, EDRs could only be correctly attributed as voluntary or involuntary with an accuracy of 50%. Conclusions The proposed classifier may enable investigators to form more accurate interpretations of electrodermal activity as a measure of an individual's psychophysiological state.

  2. Handwritten Digit Recognition using Edit Distance-Based KNN

    OpenAIRE

    Bernard , Marc; Fromont , Elisa; Habrard , Amaury; Sebban , Marc

    2012-01-01

    We discuss the student project given for the last 5 years to the 1st year Master Students who follow the Machine Learning lecture at the University Jean Monnet in Saint Etienne, France. The goal of this project is to develop a GUI that can recognize digits and/or letters drawn manually. The system is based on a string representation of the digits using Freeman codes and on the use of an edit-distance-based K-Nearest Neighbors classifier. In addition to the machine learning knowledge about...

  3. Action Recognition Using Motion Primitives and Probabilistic Edit Distance

    DEFF Research Database (Denmark)

    Fihl, Preben; Holte, Michael Boelstoft; Moeslund, Thomas B.

    2006-01-01

    In this paper we describe a recognition approach based on the notion of primitives. As opposed to recognizing actions based on temporal trajectories or temporal volumes, primitive-based recognition is based on representing a temporal sequence containing an action by only a few characteristic time... into a string containing a sequence of symbols, each representing a primitive. After pruning the string, a probabilistic Edit Distance classifier is applied to identify which action best describes the pruned string. The approach is evaluated on five one-arm gestures and the recognition rate is 91...

  4. Nowcasting daily minimum air and grass temperature

    Science.gov (United States)

    Savage, M. J.

    2016-02-01

    Site-specific and accurate prediction of daily minimum air and grass temperatures, made available online several hours before their occurrence, would be of significant benefit to several economic sectors and for planning human activities. Site-specific and reasonably accurate nowcasts of daily minimum temperature several hours before its occurrence, using measured sub-hourly temperatures hours earlier in the morning as model inputs, were investigated. Various temperature models were tested for their ability to accurately nowcast daily minimum temperatures 2 or 4 h before sunrise. Temperature datasets used for the model nowcasts included sub-hourly grass and grass-surface (infrared) temperatures from one location in South Africa and air temperature from four subtropical sites varying in altitude (USA and South Africa) and from one site in central sub-Saharan Africa. The nowcast models employed either exponential or square root functions to describe the rate of nighttime temperature decrease, but inverted so as to determine the minimum temperature. The models were also applied in near real-time using an open web-based system to display the nowcasts. Extrapolation algorithms for the site-specific nowcasts were also implemented in a datalogger in an innovative and mathematically consistent manner. Comparison of model 1 (exponential) nowcasts vs measured daily minimum air temperatures yielded root mean square errors (RMSEs) <1 °C for the 2-h ahead nowcasts. Model 2 (also exponential), for which a constant model coefficient (b = 2.2) was used, was usually slightly less accurate but still with RMSEs <1 °C. Use of model 3 (square root) yielded increased RMSEs for the 2-h ahead comparisons between nowcasted and measured daily minimum air temperature, increasing to 1.4 °C for some sites. For all sites and all models, the comparisons for the 4-h ahead air temperature nowcasts generally yielded increased RMSEs, <2.1 °C. Comparisons for all model nowcasts of the daily grass
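    To make the extrapolation idea concrete, the sketch below fits a decaying-exponential night-cooling curve to the sub-hourly temperatures already measured and evaluates it at sunrise. The functional form, its parameterisation and the initial guesses are assumptions for illustration only; they are not the exact published models 1-3.

```python
import numpy as np
from scipy.optimize import curve_fit

def nowcast_minimum(times_h, temps_c, sunrise_h, b_guess=0.2):
    """Extrapolate pre-dawn cooling to sunrise with an assumed exponential model.

    times_h: measurement times (hours), temps_c: temperatures (deg C),
    sunrise_h: time of sunrise (hours).  Returns the nowcast daily minimum.
    """
    t0 = times_h[0]

    def model(t, t_min, t_start, b):
        # cooling from t_start toward the asymptotic minimum t_min
        return t_min + (t_start - t_min) * np.exp(-b * (t - t0))

    p0 = (temps_c[-1] - 1.0, temps_c[0], b_guess)
    params, _ = curve_fit(model, np.asarray(times_h), np.asarray(temps_c),
                          p0=p0, maxfev=10000)
    return float(model(sunrise_h, *params))
```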

  5. New method for distance-based close following safety indicator.

    Science.gov (United States)

    Sharizli, A A; Rahizar, R; Karim, M R; Saifizul, A A

    2015-01-01

    The increase in the number of fatalities caused by road accidents involving heavy vehicles every year has raised the level of concern and awareness on road safety in developing countries like Malaysia. Changes in the vehicle dynamic characteristics such as gross vehicle weight, travel speed, and vehicle classification will affect a heavy vehicle's braking performance and its ability to stop safely in emergency situations. As such, the aim of this study is to establish a more realistic new distance-based safety indicator called the minimum safe distance gap (MSDG), which incorporates vehicle classification (VC), speed, and gross vehicle weight (GVW). Commercial multibody dynamics simulation software was used to generate braking distance data for various heavy vehicle classes under various loads and speeds. By applying nonlinear regression analysis to the simulation results, a mathematical expression of MSDG has been established. The results show that MSDG is dynamically changed according to GVW, VC, and speed. It is envisaged that this new distance-based safety indicator would provide a more realistic depiction of the real traffic situation for safety analysis.

  6. A Novel Parallel Algorithm for Edit Distance Computation

    Directory of Open Access Journals (Sweden)

    Muhammad Murtaza Yousaf

    2018-01-01

    Full Text Available The edit distance between two sequences is the minimum number of weighted transformation-operations that are required to transform one string into the other. The weighted transformation-operations are insert, remove, and substitute. A dynamic programming solution to find the edit distance exists, but it becomes computationally intensive when the lengths of the strings become very large. This work presents a novel parallel algorithm to solve the edit distance problem of string matching. The algorithm is based on resolving dependencies in the dynamic programming solution of the problem and it is able to compute each row of the edit distance table in parallel. In this way, it becomes possible to compute the complete table in min(m, n) iterations for strings of size m and n, whereas the state-of-the-art parallel algorithm solves the problem in max(m, n) iterations. The proposed algorithm also increases the amount of parallelism in each of its iterations. The algorithm is also capable of exploiting spatial locality in its implementation. Additionally, the algorithm works in a load-balanced way that further improves its performance. The algorithm is implemented for multicore systems having shared memory. An implementation of the algorithm in OpenMP shows linear speedup and better execution time as compared to the state-of-the-art parallel approach. The efficiency of the algorithm is also shown to be better than that of its competitor.
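
    The article's row-parallel formulation is not reproduced here; as a reference point, the sketch below shows the standard dynamic-programming recurrence that such parallel algorithms reorganise, with unit weights for insert, remove, and substitute.

        # Hedged sketch: classic dynamic-programming edit distance with unit-weight
        # insert, remove, and substitute operations. A parallel formulation has to
        # resolve the cell dependencies noted in the inner loop.
        def edit_distance(a: str, b: str) -> int:
            m, n = len(a), len(b)
            prev = list(range(n + 1))               # row for the empty prefix of a
            for i in range(1, m + 1):
                curr = [i] + [0] * n
                for j in range(1, n + 1):
                    cost = 0 if a[i - 1] == b[j - 1] else 1
                    curr[j] = min(prev[j] + 1,          # remove a[i-1]
                                  curr[j - 1] + 1,      # insert b[j-1]
                                  prev[j - 1] + cost)   # substitute / match
                    # curr[j] depends on prev[j], prev[j-1] and curr[j-1]; these are
                    # the dependencies a parallel version must resolve.
                prev = curr
            return prev[n]

        if __name__ == "__main__":
            print(edit_distance("kitten", "sitting"))   # expected 3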

  7. Moral distance in dictator games

    Directory of Open Access Journals (Sweden)

    Fernando Aguiar

    2008-04-01

    Full Text Available We perform an experimental investigation using a dictator game in which individuals must make a moral decision --- to give or not to give an amount of money to poor people in the Third World. A questionnaire in which the subjects are asked about the reasons for their decision shows that, at least in this case, moral motivations carry a heavy weight in the decision: the majority of dictators give the money for reasons of a consequentialist nature. Based on the results presented here and those of other analogous experiments, we conclude that dictator behavior can be understood in terms of moral distance rather than social distance and that it systematically deviates from the egoism assumption in economic models and game theory. JEL: A13, C72, C91

  8. Managerial Distance and Virtual Ownership

    DEFF Research Database (Denmark)

    Hansmann, Henry; Thomsen, Steen

    Industrial foundations are autonomous nonprofit entities that own and control one or more conventional business firms. These foundations are common in Northern Europe, where they own a number of internationally prominent companies. Previous studies have indicated, surprisingly, that companies con...... on differences among the industrial foundations themselves. We work with a rich data set comprising 113 foundation-owned Danish companies over the period 2003-2008. We focus in particular on a composite structural factor that we term “managerial distance.” We propose this as a measure of the extent to which......-seeking outside owners of the company. Consistent with this hypothesis, our empirical analysis shows a positive, significant, and robust association between managerial distance and the economic performance of foundation-owned companies. The findings appear to illuminate not just foundation governance...... but corporate governance and fiduciary behavior more generally.

  9. Determining distances using asteroseismic methods

    DEFF Research Database (Denmark)

    Aguirre, Victor Silva; Casagrande, L.; Basu, Sarbina

    2013-01-01

    Asteroseismology has been extremely successful in determining the properties of stars in different evolutionary stages with a remarkable level of precision. However, to fully exploit its potential, robust methods for estimating stellar parameters are required and independent verification...... fluxes, and thus distances for field stars in a self-consistent manner. Applying our method to a sample of solar-like oscillators in the {\it Kepler} field that have accurate {\it Hipparcos} parallaxes, we find agreement in our distance determinations to better than 5%. Comparison with measurements...

  10. Measurement of Minimum Bias Observables with ATLAS

    CERN Document Server

    Kvita, Jiri; The ATLAS collaboration

    2017-01-01

    The modelling of Minimum Bias (MB) is a crucial ingredient to learn about the description of soft QCD processes. It also has significant relevance for the simulation of the environment at the LHC with many concurrent pp interactions (“pileup”). The ATLAS collaboration has provided new measurements of the inclusive charged particle multiplicity and its dependence on transverse momentum and pseudorapidity in special data sets with low LHC beam currents, recorded at center-of-mass energies of 8 TeV and 13 TeV. The measurements cover a wide spectrum using charged particle selections with minimum transverse momentum of both 100 MeV and 500 MeV and in various phase space regions of low and high charged particle multiplicities.

  11. Comments on the 'minimum flux corona' concept

    International Nuclear Information System (INIS)

    Antiochos, S.K.; Underwood, J.H.

    1978-01-01

    Hearn's (1975) models of the energy balance and mass loss of stellar coronae, based on a 'minimum flux corona' concept, are critically examined. First, it is shown that the neglect of the relevant length scales for coronal temperature variation leads to an inconsistent computation of the total energy flux F. The stability arguments upon which the minimum flux concept is based are shown to be fallacious. Errors in the computation of the stellar wind contribution to the energy budget are identified. Finally, we criticize Hearn's (1977) suggestion that the model, with a value of the thermal conductivity modified by the magnetic field, can explain the difference between solar coronal holes and quiet coronal regions. (orig.)

  12. Protocol for the verification of minimum criteria

    International Nuclear Information System (INIS)

    Gaggiano, M.; Spiccia, P.; Gaetano Arnetta, P.

    2014-01-01

    This Protocol has been prepared with reference to the provisions of article 8 of the Legislative Decree of May 26, 2000, No. 187. Quality controls of radiological equipment fit within the larger 'quality assurance program' and are intended to ensure the correct operation of the equipment and the maintenance of that state. The pursuit of this objective guarantees that the radiological equipment subjected to those controls also meets the minimum acceptability criteria set out in Annex V of the aforementioned legislative decree, which establishes the conditions necessary for each piece of radiological equipment to perform the functions for which it was designed, built and used. The Protocol is established for the quality control of radiological equipment of the Cone Beam Computed Tomography type and serves as a reference document, in the sense that compliance with the stated tolerances also ensures that the minimum acceptability requirements are met, where applicable.

  13. Low Streamflow Forcasting using Minimum Relative Entropy

    Science.gov (United States)

    Cui, H.; Singh, V. P.

    2013-12-01

    Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation in such a way that the relative entropy of the underlying process is minimized, so that the time series can be forecasted. Different prior estimates, such as uniform, exponential and Gaussian assumptions, are used to estimate the spectral density depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecasted using the proposed method. Minimum relative entropy determines the spectral density of low streamflow series with higher resolution than conventional methods. The forecasted streamflow is compared to predictions using Burg's maximum entropy spectral analysis (MESA) and configurational entropy. The advantages and disadvantages of each method in forecasting low streamflow are discussed.

  14. Minimum Wage Laws and the Distribution of Employment.

    Science.gov (United States)

    Lang, Kevin

    The desirability of raising the minimum wage has long revolved around just one question: the effect of higher minimum wages on the overall level of employment. An even more critical effect of the minimum wage rests on the composition of employment--who gets the minimum wage job. An examination of employment in eating and drinking establishments…

  15. On the center of distances

    Czech Academy of Sciences Publication Activity Database

    Bielas, Wojciech; Plewik, S.; Walczyńska, Marta

    2018-01-01

    Vol. 4, No. 2 (2018), pp. 687-698 ISSN 2199-675X R&D Projects: GA ČR GF16-34860L Institutional support: RVO:67985840 Keywords: Cantorval * center of distances * von Neumann's theorem * set of subsums Subject RIV: BA - General Mathematics OECD field: Pure mathematics https://link.springer.com/article/10.1007%2Fs40879-017-0199-4

  16. Distance probes of dark energy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, A. G.; Padmanabhan, N.; Aldering, G.; Allen, S. W.; Baltay, C.; Cahn, R. N.; D’Andrea, C. B.; Dalal, N.; Dawson, K. S.; Denney, K. D.; Eisenstein, D. J.; Finley, D. A.; Freedman, W. L.; Ho, S.; Holz, D. E.; Kasen, D.; Kent, S. M.; Kessler, R.; Kuhlmann, S.; Linder, E. V.; Martini, P.; Nugent, P. E.; Perlmutter, S.; Peterson, B. M.; Riess, A. G.; Rubin, D.; Sako, M.; Suntzeff, N. V.; Suzuki, N.; Thomas, R. C.; Wood-Vasey, W. M.; Woosley, S. E.

    2015-03-01

    This document presents the results from the Distances subgroup of the Cosmic Frontier Community Planning Study (Snowmass 2013). We summarize the current state of the field as well as future prospects and challenges. In addition to the established probes using Type Ia supernovae and baryon acoustic oscillations, we also consider prospective methods based on clusters, active galactic nuclei, gravitational wave sirens and strong lensing time delays.

  17. Minimum intervention dentistry: periodontics and implant dentistry.

    Science.gov (United States)

    Darby, I B; Ngo, L

    2013-06-01

    This article will look at the role of minimum intervention dentistry in the management of periodontal disease. It will discuss the role of appropriate assessment, treatment and risk factors/indicators. In addition, the role of the patient and early intervention in the continuing care of dental implants will be discussed as well as the management of peri-implant disease. © 2013 Australian Dental Association.

  18. Minimum quality standards and international trade

    DEFF Research Database (Denmark)

    Baltzer, Kenneth Thomas

    2011-01-01

    This paper investigates the impact of a non-discriminating minimum quality standard (MQS) on trade and welfare when the market is characterized by imperfect competition and asymmetric information. A simple partial equilibrium model of an international Cournot duopoly is presented in which a domes...... prefer different levels of regulation. As a result, international trade disputes are likely to arise even when regulation is non-discriminating....

  19. "Reduced" magnetohydrodynamics and minimum dissipation rates

    International Nuclear Information System (INIS)

    Montgomery, D.

    1992-01-01

    It is demonstrated that all solutions of the equations of "reduced" magnetohydrodynamics approach a uniform-current, zero-flow state for long times, given a constant wall electric field, uniform scalar viscosity and resistivity, and uniform mass density. This state is the state of minimum energy dissipation rate for these boundary conditions. No steady-state turbulence is possible. The result contrasts sharply with results for full three-dimensional magnetohydrodynamics before the reduction occurs

  20. Minimum K_2,3-saturated Graphs

    OpenAIRE

    Chen, Ya-Chen

    2010-01-01

    A graph is K_{2,3}-saturated if it has no subgraph isomorphic to K_{2,3}, but does contain a K_{2,3} after the addition of any new edge. We prove that the minimum number of edges in a K_{2,3}-saturated graph on n >= 5 vertices is sat(n, K_{2,3}) = 2n - 3.

  1. Minimum degree and density of binary sequences

    DEFF Research Database (Denmark)

    Brandt, Stephan; Müttel, J.; Rautenbach, D.

    2010-01-01

    For d,k∈N with k ≤ 2d, let g(d,k) denote the infimum density of binary sequences (x_i)_{i∈Z} ∈ {0,1}^Z which satisfy the minimum degree condition σ(x,i) ≥ k for all i∈Z with x_i=1. We reduce the problem of computing g(d,k) to a combinatorial problem related to the generalized k-girth of a graph G which...

  2. Support Services for Distance Education

    Directory of Open Access Journals (Sweden)

    Sandra Frieden

    1999-01-01

    Full Text Available The creation and operation of a distance education support infrastructure requires the collaboration of virtually all administrative departments whose activities deal with students and faculty, and all participating academic departments. Implementation can build on where the institution is and design service-oriented strategies that strengthen institutional support and commitment. Issues to address include planning, faculty issues and concerns, policies and guidelines, approval processes, scheduling, training, publicity, information-line operations, informational materials, orientation and registration processes, class coordination and support, testing, evaluations, receive site management, partnerships, budgets, staffing, library and e-mail support, and different delivery modes (microwave, compressed video, radio, satellite, public television/cable, video tape and online. The process is ongoing and increasingly participative as various groups on campus begin to get involved with distance education activities. The distance education unit must continuously examine and revise its processes and procedures to maintain the academic integrity and service excellence of its programs. It’s a daunting prospect to revise the way things have been done for many years, but each department has an opportunity to respond to new ways of serving and reaching students.

  3. Teaching Chemistry via Distance Education

    Science.gov (United States)

    Boschmann, Erwin

    2003-06-01

    This paper describes a chemistry course taught at Indiana University Purdue University, Indianapolis via television, with a Web version added later. The television format is a delivery technology; the Web is an engagement technology and is preferred since it requires student participation. The distance-laboratory component presented the greatest challenge since laboratories via distance education are not a part of the U.S. academic culture. Appropriate experiments have been developed with the consultation of experts from The Open University in the United Kingdom, Athabasca University in Canada, and Monash University in Australia. The criteria used in the development of experiments are: (1) they must be credible academic experiences equal to or better than those used on campus, (2) they must be easy to perform without supervision, (3) they must be safe, and (4) they must meet all legal requirements. An evaluation of the program using three different approaches is described. The paper concludes that technology-mediated distance education students do as well as on-campus students, but drop out at a higher rate. It is very important to communicate with students frequently, and technology tools ought to be used only if good pedagogy is enhanced by their use.

  4. Balanced sensitivity functions for tuning multi-dimensional Bayesian network classifiers

    NARCIS (Netherlands)

    Bolt, J.H.; van der Gaag, L.C.

    Multi-dimensional Bayesian network classifiers are Bayesian networks of restricted topological structure, which are tailored to classifying data instances into multiple dimensions. Like more traditional classifiers, multi-dimensional classifiers are typically learned from data and may include

  5. Extremal values on Zagreb indices of trees with given distance k-domination number.

    Science.gov (United States)

    Pei, Lidan; Pan, Xiangfeng

    2018-01-01

    Let [Formula: see text] be a graph. A set [Formula: see text] is a distance k-dominating set of G if for every vertex [Formula: see text], [Formula: see text] for some vertex [Formula: see text], where k is a positive integer. The distance k-domination number [Formula: see text] of G is the minimum cardinality among all distance k-dominating sets of G. The first Zagreb index of G is defined as [Formula: see text] and the second Zagreb index of G is [Formula: see text]. In this paper, we obtain the upper bounds for the Zagreb indices of n-vertex trees with given distance k-domination number and characterize the extremal trees, which generalize the results of Borovićanin and Furtula (Appl. Math. Comput. 276:208-218, 2016). What is worth mentioning, for an n-vertex tree T, is that a sharp upper bound on the distance k-domination number [Formula: see text] is determined.

  6. Investigation of Reliabilities of Bolt Distances for Bolted Structural Steel Connections by Monte Carlo Simulation Method

    Directory of Open Access Journals (Sweden)

    Ertekin Öztekin Öztekin

    2015-12-01

    Full Text Available The distances of bolts to each other and the distances of bolts to the edge of connection plates are designed on the basis of minimum and maximum boundary values proposed by structural codes. In this study, the reliabilities of those distances were investigated. For this purpose, loading types, bolt types and plate thicknesses were taken as variable parameters. The Monte Carlo Simulation (MCS) method was used in the reliability computations performed for all combinations of those parameters. At the end of the study, all reliability index values for those distances are presented in graphics and tables. The results obtained from this study were compared with the values proposed by some structural codes, and some evaluations of those comparisons were made. Finally, it was emphasized that using the same bolt distances in both traditional designs and designs with higher reliability levels would be incorrect.
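
    As a rough illustration of the MCS approach described above (not the authors' bolt-distance model), the sketch below estimates a failure probability and the corresponding reliability index for a hypothetical limit-state function comparing resistance with the applied load; the distributions and all parameters are assumptions.

        # Hedged sketch: Monte Carlo estimate of a reliability index for an assumed
        # limit state g = resistance - load; distributions and parameters are
        # illustrative placeholders, not the study's actual model.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(42)
        n = 1_000_000

        resistance = rng.lognormal(mean=np.log(250.0), sigma=0.10, size=n)  # kN
        load = rng.normal(loc=150.0, scale=25.0, size=n)                    # kN

        g = resistance - load                 # limit-state function
        p_fail = np.mean(g < 0.0)             # probability of failure
        beta = -norm.ppf(p_fail)              # reliability index

        print(f"P_f  = {p_fail:.2e}")
        print(f"beta = {beta:.2f}")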

  7. Qualitative Research in Distance Education: An Analysis of Journal Literature 2005-2012

    Science.gov (United States)

    Hauser, Laura

    2013-01-01

    This review study examines the current research literature in distance education for the years 2005 to 2012. The author found 382 research articles published during that time in four prominent peer-reviewed research journals. The articles were classified and coded as quantitative, qualitative, or mixed methods. Further analysis found another…

  8. Nonparametric Coupled Bayesian Dictionary and Classifier Learning for Hyperspectral Classification.

    Science.gov (United States)

    Akhtar, Naveed; Mian, Ajmal

    2017-10-03

    We present a principled approach to learn a discriminative dictionary along a linear classifier for hyperspectral classification. Our approach places Gaussian Process priors over the dictionary to account for the relative smoothness of the natural spectra, whereas the classifier parameters are sampled from multivariate Gaussians. We employ two Beta-Bernoulli processes to jointly infer the dictionary and the classifier. These processes are coupled under the same sets of Bernoulli distributions. In our approach, these distributions signify the frequency of the dictionary atom usage in representing class-specific training spectra, which also makes the dictionary discriminative. Due to the coupling between the dictionary and the classifier, the popularity of the atoms for representing different classes gets encoded into the classifier. This helps in predicting the class labels of test spectra that are first represented over the dictionary by solving a simultaneous sparse optimization problem. The labels of the spectra are predicted by feeding the resulting representations to the classifier. Our approach exploits the nonparametric Bayesian framework to automatically infer the dictionary size--the key parameter in discriminative dictionary learning. Moreover, it also has the desirable property of adaptively learning the association between the dictionary atoms and the class labels by itself. We use Gibbs sampling to infer the posterior probability distributions over the dictionary and the classifier under the proposed model, for which, we derive analytical expressions. To establish the effectiveness of our approach, we test it on benchmark hyperspectral images. The classification performance is compared with the state-of-the-art dictionary learning-based classification methods.

  9. Classifying a smoker scale in adult daily and nondaily smokers.

    Science.gov (United States)

    Pulvers, Kim; Scheuermann, Taneisha S; Romero, Devan R; Basora, Brittany; Luo, Xianghua; Ahluwalia, Jasjit S

    2014-05-01

    Smoker identity, or the strength of beliefs about oneself as a smoker, is a robust marker of smoking behavior. However, many nondaily smokers do not identify as smokers, underestimating their risk for tobacco-related disease and resulting in missed intervention opportunities. Assessing underlying beliefs about characteristics used to classify smokers may help explain the discrepancy between smoking behavior and smoker identity. This study examines the factor structure, reliability, and validity of the Classifying a Smoker scale among a racially diverse sample of adult smokers. A cross-sectional survey was administered through an online panel survey service to 2,376 current smokers who were at least 25 years of age. The sample was stratified to obtain equal numbers of 3 racial/ethnic groups (African American, Latino, and White) across smoking level (nondaily and daily smoking). The Classifying a Smoker scale displayed a single factor structure and excellent internal consistency (α = .91). Classifying a Smoker scores significantly increased at each level of smoking, F(3,2375) = 23.68, p < .001. Participants with higher scores reported a stronger smoker identity, stronger dependence on cigarettes, greater health risk perceptions, and more smoking friends, and were more likely to carry cigarettes. Classifying a Smoker scores explained unique variance in smoking variables above and beyond that explained by smoker identity. The present study supports the use of the Classifying a Smoker scale among diverse, experienced smokers. Stronger endorsement of characteristics used to classify a smoker (i.e., stricter criteria) was positively associated with heavier smoking and related characteristics. Prospective studies are needed to inform prevention and treatment efforts.

  10. Representative Vector Machines: A Unified Framework for Classical Classifiers.

    Science.gov (United States)

    Gui, Jie; Liu, Tongliang; Tao, Dacheng; Sun, Zhenan; Tan, Tieniu

    2016-08-01

    Classifier design is a fundamental problem in pattern recognition. A variety of pattern classification methods such as the nearest neighbor (NN) classifier, support vector machine (SVM), and sparse representation-based classification (SRC) have been proposed in the literature. These typical and widely used classifiers were originally developed from different theory or application motivations and they are conventionally treated as independent and specific solutions for pattern classification. This paper proposes a novel pattern classification framework, namely, representative vector machines (or RVMs for short). The basic idea of RVMs is to assign the class label of a test example according to its nearest representative vector. The contributions of RVMs are twofold. On one hand, the proposed RVMs establish a unified framework of classical classifiers because NN, SVM, and SRC can be interpreted as the special cases of RVMs with different definitions of representative vectors. Thus, the underlying relationship among a number of classical classifiers is revealed for better understanding of pattern classification. On the other hand, novel and advanced classifiers are inspired in the framework of RVMs. For example, a robust pattern classification method called discriminant vector machine (DVM) is motivated from RVMs. Given a test example, DVM first finds its k -NNs and then performs classification based on the robust M-estimator and manifold regularization. Extensive experimental evaluations on a variety of visual recognition tasks such as face recognition (Yale and face recognition grand challenge databases), object categorization (Caltech-101 dataset), and action recognition (Action Similarity LAbeliNg) demonstrate the advantages of DVM over other classifiers.
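
    A minimal sketch of the 'nearest representative vector' idea described above, instantiated with class means as the representative vectors (one simple choice; the paper's NN-, SVM- and SRC-derived representatives and the DVM variant are not reproduced here).

        # Hedged sketch: classify a test example by its nearest representative
        # vector, here instantiated with class means; other choices of
        # representative recover other classical classifiers.
        import numpy as np

        def fit_representatives(X, y):
            """One representative vector per class (here: the class mean)."""
            classes = np.unique(y)
            return classes, np.vstack([X[y == c].mean(axis=0) for c in classes])

        def predict(X, classes, reps):
            d = np.linalg.norm(X[:, None, :] - reps[None, :, :], axis=2)
            return classes[np.argmin(d, axis=1)]

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            X0 = rng.normal([0, 0], 0.5, size=(50, 2))
            X1 = rng.normal([2, 2], 0.5, size=(50, 2))
            X = np.vstack([X0, X1]); y = np.array([0] * 50 + [1] * 50)
            classes, reps = fit_representatives(X, y)
            print(predict(np.array([[0.2, 0.1], [1.9, 2.2]]), classes, reps))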

  11. Nonlinear dimension reduction and clustering by Minimum Curvilinearity unfold neuropathic pain and tissue embryological classes

    KAUST Repository

    Cannistraci, Carlo

    2010-09-01

    Motivation: Nonlinear small datasets, which are characterized by low numbers of samples and very high numbers of measures, occur frequently in computational biology, and pose problems in their investigation. Unsupervised hybrid-two-phase (H2P) procedures, specifically dimension reduction (DR) coupled with clustering, provide valuable assistance, not only for unsupervised data classification, but also for visualization of the patterns hidden in high-dimensional feature space. Methods: 'Minimum Curvilinearity' (MC) is a principle that, for small datasets, suggests the approximation of curvilinear sample distances in the feature space by pair-wise distances over their minimum spanning tree (MST), and thus avoids the introduction of any tuning parameter. MC is used to design two novel forms of nonlinear machine learning (NML): Minimum Curvilinear embedding (MCE) for DR, and Minimum Curvilinear affinity propagation (MCAP) for clustering. Results: Compared with several other unsupervised and supervised algorithms, MCE and MCAP, whether individually or combined in H2P, overcome the limits of classical approaches. High performance was attained in the visualization and classification of: (i) pain patients (proteomic measurements) in peripheral neuropathy; (ii) human organ tissues (genomic transcription factor measurements) on the basis of their embryological origin. Conclusion: MC provides a valuable framework to estimate nonlinear distances in small datasets. Its extension to large datasets is prefigured for novel NMLs. Classification of neuropathic pain by proteomic profiles offers new insights for future molecular and systems biology characterization of pain. Improvements in tissue embryological classification refine results obtained in an earlier study, and suggest a possible reinterpretation of skin attribution as mesodermal. © The Author(s) 2010. Published by Oxford University Press.
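
    A minimal sketch of the MC principle as described above: approximate curvilinear distances by pairwise path lengths over the minimum spanning tree of the samples, then feed them to any distance-based method. The MDS embedding step is an illustrative stand-in, not the authors' MCE implementation.

        # Hedged sketch: Minimum Curvilinearity-style distances, i.e. pairwise
        # distances measured along the minimum spanning tree of the samples,
        # followed by a generic distance-based embedding (MDS as a stand-in).
        import numpy as np
        from scipy.spatial.distance import squareform, pdist
        from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path
        from sklearn.manifold import MDS

        def minimum_curvilinear_distances(X):
            d = squareform(pdist(X))                      # Euclidean distances
            mst = minimum_spanning_tree(d)                # sparse MST
            # Path length between every pair of samples, restricted to MST edges.
            return shortest_path(mst, directed=False)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            t = np.sort(rng.uniform(0, 3 * np.pi, 60))
            X = np.c_[t * np.cos(t), t * np.sin(t)]       # spiral-shaped toy data
            D = minimum_curvilinear_distances(X)
            emb = MDS(n_components=2, dissimilarity="precomputed",
                      random_state=0).fit_transform(D)
            print(emb.shape)                              # (60, 2)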

  12. Nonlinear dimension reduction and clustering by Minimum Curvilinearity unfold neuropathic pain and tissue embryological classes.

    Science.gov (United States)

    Cannistraci, Carlo Vittorio; Ravasi, Timothy; Montevecchi, Franco Maria; Ideker, Trey; Alessio, Massimo

    2010-09-15

    Nonlinear small datasets, which are characterized by low numbers of samples and very high numbers of measures, occur frequently in computational biology, and pose problems in their investigation. Unsupervised hybrid-two-phase (H2P) procedures-specifically dimension reduction (DR), coupled with clustering-provide valuable assistance, not only for unsupervised data classification, but also for visualization of the patterns hidden in high-dimensional feature space. 'Minimum Curvilinearity' (MC) is a principle that-for small datasets-suggests the approximation of curvilinear sample distances in the feature space by pair-wise distances over their minimum spanning tree (MST), and thus avoids the introduction of any tuning parameter. MC is used to design two novel forms of nonlinear machine learning (NML): Minimum Curvilinear embedding (MCE) for DR, and Minimum Curvilinear affinity propagation (MCAP) for clustering. Compared with several other unsupervised and supervised algorithms, MCE and MCAP, whether individually or combined in H2P, overcome the limits of classical approaches. High performance was attained in the visualization and classification of: (i) pain patients (proteomic measurements) in peripheral neuropathy; (ii) human organ tissues (genomic transcription factor measurements) on the basis of their embryological origin. MC provides a valuable framework to estimate nonlinear distances in small datasets. Its extension to large datasets is prefigured for novel NMLs. Classification of neuropathic pain by proteomic profiles offers new insights for future molecular and systems biology characterization of pain. Improvements in tissue embryological classification refine results obtained in an earlier study, and suggest a possible reinterpretation of skin attribution as mesodermal. https://sites.google.com/site/carlovittoriocannistraci/home.

  13. Current Directional Protection of Series Compensated Line Using Intelligent Classifier

    Directory of Open Access Journals (Sweden)

    M. Mollanezhad Heydarabadi

    2016-12-01

    Full Text Available The current inversion condition leads to incorrect operation of current-based directional relays in power systems with series compensation devices. The application of an intelligent system for fault direction classification is suggested in this paper. A new current directional protection scheme based on an intelligent classifier is proposed for the series compensated line. The proposed scheme feeds the classifier with only half a cycle of pre-fault and post-fault current samples at the relay location. A large number of forward and backward fault simulations under different system conditions on a transmission line with a fixed series capacitor are carried out using PSCAD/EMTDC software. The applicability of decision tree (DT), probabilistic neural network (PNN) and support vector machine (SVM) classifiers is investigated using simulated data under different system conditions. The performance comparison of the classifiers indicates that the SVM is the most suitable classifier for fault direction discrimination. Backward faults can be accurately distinguished from forward faults even under current inversion, without requiring detection of the current inversion condition.
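
    A minimal sketch of the classification step only, assuming each example is a feature vector built from half a cycle of current samples and labelled forward/backward; the synthetic waveforms and SVM settings are illustrative stand-ins for the study's PSCAD/EMTDC dataset.

        # Hedged sketch: train an SVM to discriminate forward from backward faults
        # using feature vectors of half-cycle current samples. Data are synthetic;
        # the phase shift used as the class signature is an assumption.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        n, n_samples_per_half_cycle = 400, 32
        t = np.linspace(0, np.pi, n_samples_per_half_cycle)   # half a cycle

        labels = rng.integers(0, 2, n)                   # 0: forward, 1: backward
        phase = np.where(labels == 0, 0.0, np.pi / 3)    # assumed class signature
        X = np.sin(t[None, :] + phase[:, None]) * rng.uniform(0.8, 1.2, (n, 1))
        X += rng.normal(0, 0.05, X.shape)

        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
        clf.fit(X_tr, y_tr)
        print("test accuracy:", clf.score(X_te, y_te))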

  14. Neural network classifier of attacks in IP telephony

    Science.gov (United States)

    Safarik, Jakub; Voznak, Miroslav; Mehic, Miralem; Partila, Pavol; Mikulec, Martin

    2014-05-01

    Various types of monitoring mechanisms allow us to detect and monitor the behavior of attackers in VoIP networks. Analysis of detected malicious traffic is crucial for further investigation and for hardening the network. This analysis is typically based on statistical methods, and this article presents a solution based on a neural network. The proposed algorithm is used as a classifier of attacks in a distributed monitoring network of independent honeypot probes. Information about attacks on these honeypots is collected on a centralized server and then classified. This classification is based on different mechanisms, one of which is the multilayer perceptron neural network. The article describes the inner structure of the neural network used and provides information about its implementation. The learning set for this neural network is based on real attack data collected from the IP telephony honeypot Dionaea. We prepare the learning set from real attack data after collecting, cleaning and aggregating this information. After proper training, the neural network is capable of classifying 6 of the most commonly used types of VoIP attack. Using a neural network classifier brings more accurate attack classification in a distributed system of honeypots. With this approach it is possible to detect malicious behavior in different parts of networks that are logically or geographically divided, and to use the information from one network to harden security in other networks. The centralized server for the distributed set of nodes serves not only as a collector and classifier of attack data, but also as a mechanism for generating precautionary steps against attacks.

  15. Maximum margin classifier working in a set of strings.

    Science.gov (United States)

    Koyano, Hitoshi; Hayashida, Morihiro; Akutsu, Tatsuya

    2016-03-01

    Numbers and numerical vectors account for a large portion of data. However, recently, the amount of string data generated has increased dramatically. Consequently, classifying string data is a common problem in many fields. The most widely used approach to this problem is to convert strings into numerical vectors using string kernels and subsequently apply a support vector machine that works in a numerical vector space. However, this non-one-to-one conversion involves a loss of information and makes it impossible to evaluate, using probability theory, the generalization error of a learning machine, considering that the given data to train and test the machine are strings generated according to probability laws. In this study, we approach this classification problem by constructing a classifier that works in a set of strings. To evaluate the generalization error of such a classifier theoretically, probability theory for strings is required. Therefore, we first extend a limit theorem for a consensus sequence of strings demonstrated by one of the authors and co-workers in a previous study. Using the obtained result, we then demonstrate that our learning machine classifies strings in an asymptotically optimal manner. Furthermore, we demonstrate the usefulness of our machine in practical data analysis by applying it to predicting protein-protein interactions using amino acid sequences and classifying RNAs by the secondary structure using nucleotide sequences.
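
    The authors' classifier works directly in string space and is not reproduced here; the sketch below only illustrates the baseline they contrast against, a simple k-mer ('spectrum') string kernel fed to an SVM with a precomputed Gram matrix, on invented toy sequences.

        # Hedged sketch of the baseline approach described above: map strings to
        # k-mer count vectors (a simple spectrum kernel) and train an SVM on the
        # precomputed Gram matrix. Toy data only.
        import numpy as np
        from collections import Counter
        from sklearn.svm import SVC

        def kmer_counts(s, k=3):
            return Counter(s[i:i + k] for i in range(len(s) - k + 1))

        def spectrum_kernel(A, B, k=3):
            """Gram matrix of inner products between k-mer count vectors."""
            ca = [kmer_counts(s, k) for s in A]
            cb = [kmer_counts(s, k) for s in B]
            return np.array([[sum(x[m] * y[m] for m in x) for y in cb] for x in ca],
                            dtype=float)

        train = ["ACGTACGTAC", "ACGTTCGTAC", "GGGTTTGGGT", "GGTTTTGGGG"]
        y = [0, 0, 1, 1]
        test = ["ACGTACGTTC", "GGGTTTGGTT"]

        clf = SVC(kernel="precomputed").fit(spectrum_kernel(train, train), y)
        print(clf.predict(spectrum_kernel(test, train)))   # expected [0 1]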

  16. Use of information barriers to protect classified information

    International Nuclear Information System (INIS)

    MacArthur, D.; Johnson, M.W.; Nicholas, N.J.; Whiteson, R.

    1998-01-01

    This paper discusses the detailed requirements for an information barrier (IB) for use with verification systems that employ intrusive measurement technologies. The IB would protect classified information in a bilateral or multilateral inspection of classified fissile material. Such a barrier must strike a balance between providing the inspecting party the confidence necessary to accept the measurement while protecting the inspected party's classified information. The authors discuss the structure required of an IB as well as the implications of the IB on detector system maintenance. A defense-in-depth approach is proposed which would provide assurance to the inspected party that all sensitive information is protected and to the inspecting party that the measurements are being performed as expected. The barrier could include elements of physical protection (such as locks, surveillance systems, and tamper indicators), hardening of key hardware components, assurance of capabilities and limitations of hardware and software systems, administrative controls, validation and verification of the systems, and error detection and resolution. Finally, an unclassified interface could be used to display and, possibly, record measurement results. The introduction of an IB into an analysis system may result in many otherwise innocuous components (detectors, analyzers, etc.) becoming classified and unavailable for routine maintenance by uncleared personnel. System maintenance and updating will be significantly simplified if the classification status of as many components as possible can be made reversible (i.e. the component can become unclassified following the removal of classified objects)

  17. Detection of microaneurysms in retinal images using an ensemble classifier

    Directory of Open Access Journals (Sweden)

    M.M. Habib

    2017-01-01

    Full Text Available This paper introduces, and reports on the performance of, a novel combination of algorithms for automated microaneurysm (MA) detection in retinal images. The presence of MAs in retinal images is a pathognomonic sign of Diabetic Retinopathy (DR), which is one of the leading causes of blindness amongst the working-age population. An extensive survey of the literature is presented and current techniques in the field are summarised. The proposed technique first detects an initial set of candidates using a Gaussian Matched Filter and then classifies this set to reduce the number of false positives. A Tree Ensemble classifier is used with a set of 70 features (the most common features in the literature). A new set of 32 MA groundtruth images (with a total of 256 labelled MAs), based on images from the MESSIDOR dataset, is introduced as a public dataset for benchmarking MA detection algorithms. We evaluate our algorithm on this dataset as well as another public dataset (DIARETDB1 v2.1) and compare it against the best available alternative. Results show that the proposed classifier is superior in terms of eliminating false positive MA detection from the initial set of candidates. The proposed method achieves an ROC score of 0.415 compared to 0.2636 achieved by the best available technique. Furthermore, results show that the classifier model maintains consistent performance across datasets, illustrating the generalisability of the classifier and that overfitting does not occur.

  18. Generalization in the XCSF classifier system: analysis, improvement, and extension.

    Science.gov (United States)

    Lanzi, Pier Luca; Loiacono, Daniele; Wilson, Stewart W; Goldberg, David E

    2007-01-01

    We analyze generalization in XCSF and introduce three improvements. We begin by showing that the types of generalizations evolved by XCSF can be influenced by the input range. To explain these results we present a theoretical analysis of the convergence of classifier weights in XCSF which highlights a broader issue. In XCSF, because of the mathematical properties of the Widrow-Hoff update, the convergence of classifier weights in a given subspace can be slow when the spread of the eigenvalues of the autocorrelation matrix associated with each classifier is large. As a major consequence, the system's accuracy pressure may act before classifier weights are adequately updated, so that XCSF may evolve piecewise constant approximations, instead of the intended, and more efficient, piecewise linear ones. We propose three different ways to update classifier weights in XCSF so as to increase the generalization capabilities of XCSF: one based on a condition-based normalization of the inputs, one based on linear least squares, and one based on the recursive version of linear least squares. Through a series of experiments we show that while all three approaches significantly improve XCSF, least squares approaches appear to be best performing and most robust. Finally we show how XCSF can be extended to include polynomial approximations.
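
    A minimal sketch contrasting the Widrow-Hoff (LMS) update discussed above with a recursive least squares update for a single linear approximator; the learning rate, forgetting behaviour and data are illustrative and do not reproduce XCSF's classifier structure.

        # Hedged sketch: Widrow-Hoff (LMS) versus recursive least squares (RLS)
        # updates for the weights of a linear approximator, the two update styles
        # discussed for XCSF classifier weights. Settings are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        w_true = np.array([2.0, -1.0, 0.5])

        w_lms = np.zeros(3)
        w_rls = np.zeros(3)
        P = np.eye(3) * 1000.0          # RLS inverse-correlation estimate
        eta = 0.05                      # LMS learning rate

        for _ in range(500):
            x = np.r_[1.0, rng.uniform(-1, 1, 2)]       # input with bias term
            y = w_true @ x + rng.normal(0, 0.01)

            # Widrow-Hoff / LMS step.
            w_lms += eta * (y - w_lms @ x) * x

            # Recursive least squares step.
            k = P @ x / (1.0 + x @ P @ x)
            w_rls += k * (y - w_rls @ x)
            P -= np.outer(k, x @ P)

        print("LMS:", np.round(w_lms, 3))
        print("RLS:", np.round(w_rls, 3))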

  19. Dynamic cluster generation for a fuzzy classifier with ellipsoidal regions.

    Science.gov (United States)

    Abe, S

    1998-01-01

    In this paper, we discuss a fuzzy classifier with ellipsoidal regions that dynamically generates clusters. First, for the data belonging to a class we define a fuzzy rule with an ellipsoidal region. Namely, using the training data for each class, we calculate the center and the covariance matrix of the ellipsoidal region for the class. Then we tune the fuzzy rules, i.e., the slopes of the membership functions, successively until there is no improvement in the recognition rate of the training data. Then if the number of the data belonging to a class that are misclassified into another class exceeds a prescribed number, we define a new cluster to which those data belong and the associated fuzzy rule. Then we tune the newly defined fuzzy rules in the similar way as stated above, fixing the already obtained fuzzy rules. We iterate generation of clusters and tuning of the newly generated fuzzy rules until the number of the data belonging to a class that are misclassified into another class does not exceed the prescribed number. We evaluate our method using thyroid data, Japanese Hiragana data of vehicle license plates, and blood cell data. By dynamic cluster generation, the generalization ability of the classifier is improved and the recognition rate of the fuzzy classifier for the test data is the best among the neural network classifiers and other fuzzy classifiers if there are no discrete input variables.
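
    A minimal sketch of the ellipsoidal-region idea: one center and covariance matrix per class, with a test point assigned to the class of smallest Mahalanobis distance. The membership-slope tuning and dynamic cluster generation described above are omitted; data are synthetic.

        # Hedged sketch: ellipsoidal regions defined by a per-class center and
        # covariance matrix; classification by smallest Mahalanobis distance.
        import numpy as np

        def fit_ellipsoids(X, y):
            models = {}
            for c in np.unique(y):
                Xc = X[y == c]
                cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
                models[c] = (Xc.mean(axis=0), np.linalg.inv(cov))
            return models

        def predict(X, models):
            labels, dists = [], []
            for c, (mu, inv_cov) in models.items():
                diff = X - mu
                labels.append(c)
                dists.append(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))
            return np.array(labels)[np.argmin(np.vstack(dists), axis=0)]

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            X0 = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], 100)
            X1 = rng.multivariate_normal([3, 1], [[0.5, 0.0], [0.0, 2.0]], 100)
            X = np.vstack([X0, X1]); y = np.array([0] * 100 + [1] * 100)
            models = fit_ellipsoids(X, y)
            print(predict(np.array([[0.1, 0.2], [2.8, 1.5]]), models))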

  20. Block-classified bidirectional motion compensation scheme for wavelet-decomposed digital video

    Energy Technology Data Exchange (ETDEWEB)

    Zafar, S. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.; Zhang, Y.Q. [David Sarnoff Research Center, Princeton, NJ (United States); Jabbari, B. [George Mason Univ., Fairfax, VA (United States)

    1997-08-01

    In this paper the authors introduce a block-classified bidirectional motion compensation scheme for the previously developed wavelet-based video codec, where multiresolution motion estimation is performed in the wavelet domain. The frame classification structure described in this paper is similar to that used in the MPEG standard. Specifically, the I-frames are intraframe coded, the P-frames are interpolated from a previous I- or a P-frame, and the B-frames are bidirectional interpolated frames. They apply this frame classification structure to the wavelet domain with variable block sizes and multiresolution representation. They use a symmetric bidirectional scheme for the B-frames and classify the motion blocks as intraframe, compensated either from the preceding or the following frame, or bidirectional (i.e., compensated based on which type yields the minimum energy). They also introduce the concept of F-frames, which are analogous to P-frames but are predicted from the following frame only. This improves the overall quality of the reconstruction in a group of pictures (GOP) but at the expense of extra buffering. They also study the effect of quantization of the I-frames on the reconstruction of a GOP, and they provide intuitive explanation for the results. In addition, the authors study a variety of wavelet filter-banks to be used in a multiresolution motion-compensated hierarchical video codec.

  1. A History of Classified Activities at Oak Ridge National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    2001-01-30

    The facilities that became Oak Ridge National Laboratory (ORNL) were created in 1943 during the United States' super-secret World War II project to construct an atomic bomb (the Manhattan Project). During World War II and for several years thereafter, essentially all ORNL activities were classified. Now, in 2000, essentially all ORNL activities are unclassified. The major purpose of this report is to provide a brief history of ORNL's major classified activities from 1943 until the present (September 2000). This report is expected to be useful to the ORNL Classification Officer and to ORNL's Authorized Derivative Classifiers and Authorized Derivative Declassifiers in their classification review of ORNL documents, especially those documents that date from the 1940s and 1950s.

  2. COMPARISON OF SVM AND FUZZY CLASSIFIER FOR AN INDIAN SCRIPT

    Directory of Open Access Journals (Sweden)

    M. J. Baheti

    2012-01-01

    Full Text Available With the advent of the technological era, the conversion of scanned documents (handwritten or printed) into machine-editable format has attracted many researchers. This paper deals with the problem of recognition of Gujarati handwritten numerals. Gujarati numeral recognition requires performing some specific steps as a part of preprocessing. For preprocessing, digitization, segmentation, normalization and thinning are done, assuming that the image has almost no noise. Further, an affine invariant moments based model is used for feature extraction and finally Support Vector Machine (SVM) and Fuzzy classifiers are used for numeral classification. A comparison of the SVM and Fuzzy classifiers is made, and it can be seen that the SVM procured better results as compared to the Fuzzy classifier.

  3. Optimal threshold estimation for binary classifiers using game theory.

    Science.gov (United States)

    Sanchez, Ignacio Enrique

    2016-01-01

    Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of "specificity equals sensitivity" maximizes robustness against uncertainties in the abundance of positives in nature and classification costs.
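
    A minimal sketch of the proposed operating point: pick the threshold where the ROC curve meets the descending diagonal, i.e. where sensitivity equals specificity. Scores and labels are synthetic.

        # Hedged sketch: choose the classifier threshold where sensitivity equals
        # specificity, i.e. where the ROC curve crosses the descending diagonal
        # TPR = 1 - FPR, as proposed above. Toy scores only.
        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(0)
        y = np.r_[np.zeros(500), np.ones(500)].astype(int)
        scores = np.r_[rng.normal(0.0, 1.0, 500), rng.normal(1.5, 1.0, 500)]

        fpr, tpr, thresholds = roc_curve(y, scores)
        # Index where the ROC curve is closest to the descending diagonal.
        i = np.argmin(np.abs(tpr - (1.0 - fpr)))

        print("threshold  :", round(float(thresholds[i]), 3))
        print("sensitivity:", round(float(tpr[i]), 3))
        print("specificity:", round(float(1.0 - fpr[i]), 3))
        print("minimax accuracy (1 - FPR):", round(float(1.0 - fpr[i]), 3))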

  4. Statistical text classifier to detect specific type of medical incidents.

    Science.gov (United States)

    Wong, Zoie Shui-Yee; Akiyama, Masanori

    2013-01-01

    WHO Patient Safety has focused on increasing the coherence and expressiveness of patient safety classification through the foundation of the International Classification for Patient Safety (ICPS). Text classification and statistical approaches have been shown to be successful in identifying safety problems in the aviation industry using incident text information. It has been challenging to comprehend the taxonomy of medical incidents in a structured manner. Independent reporting mechanisms for patient safety incidents have been established in the UK, Canada, Australia, Japan, Hong Kong, etc. This research demonstrates the potential to construct statistical text classifiers to detect specific types of medical incidents using incident text data. An illustrative example for classifying look-alike sound-alike (LASA) medication incidents using structured text from 227 advisories related to medication errors from Global Patient Safety Alerts (GPSA) is shown in this poster presentation. The classifier was built using a logistic regression model. The ROC curve and the AUC value indicated that this is a satisfactorily good model.
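
    A minimal sketch of a statistical text classifier of the kind described above, using TF-IDF features and logistic regression; the incident texts below are invented toy examples, not GPSA advisories.

        # Hedged sketch: logistic-regression text classifier over TF-IDF features,
        # illustrating the kind of statistical classifier described above.
        # The incident texts are invented placeholders, not GPSA data.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        texts = [
            "dispensed hydroxyzine instead of hydralazine due to similar name",
            "look alike packaging led to wrong vial being selected",
            "patient fall from bed during night shift",
            "incorrect surgical site marked before procedure",
        ]
        labels = [1, 1, 0, 0]   # 1: LASA medication incident, 0: other incident

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                            LogisticRegression(max_iter=1000))
        clf.fit(texts, labels)

        print(clf.predict(["sound alike drug names confused at the pharmacy"]))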

  5. A Topic Model Approach to Representing and Classifying Football Plays

    KAUST Repository

    Varadarajan, Jagannadan

    2013-09-09

    We address the problem of modeling and classifying American Football offense teams’ plays in video, a challenging example of group activity analysis. Automatic play classification will allow coaches to infer patterns and tendencies of opponents more efficiently, resulting in better strategy planning in a game. We define a football play as a unique combination of player trajectories. To this end, we develop a framework that uses player trajectories as inputs to MedLDA, a supervised topic model. The joint maximization of both likelihood and inter-class margins of MedLDA in learning the topics allows us to learn semantically meaningful play type templates, as well as classify different play types with 70% average accuracy. Furthermore, this method is extended to analyze individual player roles in classifying each play type. We validate our method on a large dataset comprising 271 play clips from real-world football games, which will be made publicly available for future comparisons.

  6. Defending Malicious Script Attacks Using Machine Learning Classifiers

    Directory of Open Access Journals (Sweden)

    Nayeem Khan

    2017-01-01

    Full Text Available Web applications have become a primary target for cyber criminals, who inject malware, especially JavaScript, to perform malicious activities such as impersonation. Thus, it becomes imperative to detect such malicious code in real time, before any malicious activity is performed. This study proposes an efficient method of detecting previously unknown malicious JavaScript using an interceptor at the client side by classifying the key features of the malicious code. A feature subset was obtained by using a wrapper method for dimensionality reduction. Supervised machine learning classifiers were used on the dataset to achieve high accuracy. Experimental results show that our method can efficiently classify malicious code from benign code with promising results.

  7. A systems biology-based classifier for hepatocellular carcinoma diagnosis.

    Directory of Open Access Journals (Sweden)

    Yanqiong Zhang

    Full Text Available AIM: The diagnosis of hepatocellular carcinoma (HCC) in the early stage is crucial to the application of curative treatments which are the only hope for increasing the life expectancy of patients. Recently, several large-scale studies have shed light on this problem through analysis of gene expression profiles to identify markers correlated with HCC progression. However, those marker sets shared few genes in common and were poorly validated using independent data. Therefore, we developed a systems biology-based classifier by combining the differential gene expression with topological features of human protein interaction networks to enhance the ability of HCC diagnosis. METHODS AND RESULTS: In the Oncomine platform, genes differentially expressed in HCC tissues relative to their corresponding normal tissues were filtered by a corrected Q value cut-off and Concept filters. The identified genes that are common to different microarray datasets were chosen as the candidate markers. Then, their networks were analyzed by GeneGO Meta-Core software and the hub genes were chosen. After that, an HCC diagnostic classifier was constructed by Partial Least Squares modeling based on the microarray gene expression data of the hub genes. Validations of diagnostic performance showed that this classifier had high predictive accuracy (85.88∼92.71%) and area under the ROC curve (approximating 1.0), and that the network topological features integrated into this classifier contribute greatly to improving the predictive performance. Furthermore, it has been demonstrated that this modeling strategy is not only applicable to HCC, but also to other cancers. CONCLUSION: Our analysis suggests that the systems biology-based classifier that combines the differential gene expression and topological features of human protein interaction network may enhance the diagnostic performance of the HCC classifier.
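
    A minimal sketch of a PLS-based diagnostic classifier in the spirit described above: Partial Least Squares regression fitted on synthetic expression values standing in for the hub genes, with the continuous PLS output thresholded to assign class labels. The data, number of components and 0.5 threshold are assumptions.

        # Hedged sketch: a Partial Least Squares classifier in the spirit of the
        # approach above; PLS regression on synthetic hub-gene expression values,
        # with the continuous output thresholded at 0.5 to give class labels.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_samples, n_genes = 200, 30
        X = rng.normal(size=(n_samples, n_genes))
        w = rng.normal(size=n_genes)
        y = (X @ w + rng.normal(0, 1.0, n_samples) > 0).astype(float)  # 1: tumor-like

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        pls = PLSRegression(n_components=5).fit(X_tr, y_tr)

        y_pred = (pls.predict(X_te).ravel() > 0.5).astype(float)
        print("accuracy:", np.mean(y_pred == y_te))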

  8. Quality Practices: An Open Distance Learning Perspective

    Directory of Open Access Journals (Sweden)

    Kemlall RAMDASS

    2018-01-01

    Full Text Available Global transformation in higher education over the past two decades has led to the implementation of national policies in order to measure the performance of institutions in South Africa. The Higher Education Quality Council (HEQC) adopted the quality assurance (QA) model for the purposes of accountability and governance in South African Higher Education. The first Council on Higher Education (CHE) audit encouraged a ‘tick box’ compliance mentality, thereby encouraging compliance with minimum standards. Thus, quality assurance audits became a ‘feared’ phenomenon in all higher education institutions in South Africa. The complete lack of stewardship in addressing the culture of quality and its implications for continuous improvement has led to inefficiencies in the entire higher education landscape. In this paper the ‘fuzzy’ and perhaps ‘slippery’ nature of quality is addressed through a critical analysis of the concepts of development, enhancement and assurance in relation to the quality of teaching and learning in higher education, through a case study methodology using qualitative analysis in an open distance learning (ODL) institution. The key argument is that although quality is important for improvement, practices at the institution are not changing in the way they should because of a quality culture that is determined by the Department of Higher Education and Training. Hence the research question is to determine the status of quality with a view to recommending total quality management as a strategy that would enhance the practice of quality in the organization. Therefore, this paper explores the current quality practices with the intent to improve the delivery of teaching and learning in an ODL environment.

  9. Distance Measurement Solves Astrophysical Mysteries

    Science.gov (United States)

    2003-08-01

    Location, location, and location. The old real-estate adage about what's really important proved applicable to astrophysics as astronomers used the sharp radio "vision" of the National Science Foundation's Very Long Baseline Array (VLBA) to pinpoint the distance to a pulsar. Their accurate distance measurement then resolved a dispute over the pulsar's birthplace, allowed the astronomers to determine the size of its neutron star and possibly solve a mystery about cosmic rays. "Getting an accurate distance to this pulsar gave us a real bonanza," said Walter Brisken, of the National Radio Astronomy Observatory (NRAO) in Socorro, NM. [Image: The Monogem Ring, X-ray image by the ROSAT satellite. Credit: Max-Planck Institute, American Astronomical Society.] The pulsar, called PSR B0656+14, is in the constellation Gemini, and appears to be near the center of a circular supernova remnant that straddles Gemini and its neighboring constellation, Monoceros, and is thus called the Monogem Ring. Since pulsars are superdense, spinning neutron stars left over when a massive star explodes as a supernova, it was logical to assume that the Monogem Ring, the shell of debris from a supernova explosion, was the remnant of the blast that created the pulsar. However, astronomers using indirect methods of determining the distance to the pulsar had concluded that it was nearly 2500 light-years from Earth. On the other hand, the supernova remnant was determined to be only about 1000 light-years from Earth. It seemed unlikely that the two were related, but instead appeared nearby in the sky purely by a chance juxtaposition. Brisken and his colleagues used the VLBA to make precise measurements of the sky position of PSR B0656+14 from 2000 to 2002. They were able to detect the slight offset in the object's apparent position when viewed from opposite sides of Earth's orbit around the Sun. This effect, called parallax, provides a direct measurement of

  10. Implications of physical symmetries in adaptive image classifiers

    DEFF Research Database (Denmark)

    Sams, Thomas; Hansen, Jonas Lundbek

    2000-01-01

    It is demonstrated that rotational invariance and reflection symmetry of image classifiers lead to a reduction in the number of free parameters in the classifier. When used in adaptive detectors, e.g. neural networks, this may be used to decrease the number of training samples necessary to learn...... a given classification task, or to improve generalization of the neural network. Notably, the symmetrization of the detector does not compromise the ability to distinguish objects that break the symmetry. (C) 2000 Elsevier Science Ltd. All rights reserved....

  11. Silicon nanowire arrays as learning chemical vapour classifiers

    International Nuclear Information System (INIS)

    Niskanen, A O; Colli, A; White, R; Li, H W; Spigone, E; Kivioja, J M

    2011-01-01

    Nanowire field-effect transistors are a promising class of devices for various sensing applications. Apart from detecting individual chemical or biological analytes, it is especially interesting to use multiple selective sensors to look at their collective response in order to perform classification into predetermined categories. We show that non-functionalised silicon nanowire arrays can be used to robustly classify different chemical vapours using simple statistical machine learning methods. We were able to distinguish between acetone, ethanol and water with 100% accuracy while methanol, ethanol and 2-propanol were classified with 96% accuracy in ambient conditions.

  12. Contaminant classification using cosine distances based on multiple conventional sensors.

    Science.gov (United States)

    Liu, Shuming; Che, Han; Smith, Kate; Chang, Tian

    2015-02-01

    Emergent contamination events have a significant impact on water systems. After contamination detection, it is important to classify the type of contaminant quickly to provide support for remediation attempts. Conventional methods generally either rely on laboratory-based analysis, which requires a long analysis time, or on multivariable-based geometry analysis and sequence analysis, which is prone to being affected by the contaminant concentration. This paper proposes a new contaminant classification method, which discriminates contaminants in a real time manner independent of the contaminant concentration. The proposed method quantifies the similarities or dissimilarities between sensors' responses to different types of contaminants. The performance of the proposed method was evaluated using data from contaminant injection experiments in a laboratory and compared with a Euclidean distance-based method. The robustness of the proposed method was evaluated using an uncertainty analysis. The results show that the proposed method performed better in identifying the type of contaminant than the Euclidean distance based method and that it could classify the type of contaminant in minutes without significantly compromising the correct classification rate (CCR).
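
    A minimal sketch of the cosine-distance idea described above, assuming hypothetical sensor responses and contaminant reference patterns (the values below are invented): because cosine distance depends only on the direction of the multi-sensor response vector, the classification is insensitive to the contaminant concentration, which scales the whole vector.

      import numpy as np

      # Hypothetical normalised response directions per contaminant (made-up values).
      REFERENCE_PATTERNS = {
          "nitrate":   np.array([0.9, 0.1, 0.3, 0.2]),
          "pesticide": np.array([0.2, 0.8, 0.4, 0.1]),
          "E. coli":   np.array([0.1, 0.3, 0.2, 0.9]),
      }

      def cosine_distance(a, b):
          return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

      def classify(response):
          # Pick the contaminant whose reference pattern is closest in cosine distance.
          return min(REFERENCE_PATTERNS,
                     key=lambda k: cosine_distance(response, REFERENCE_PATTERNS[k]))

      # A scaled "nitrate"-like response is classified the same at any concentration.
      print(classify(np.array([4.5, 0.6, 1.4, 1.1])))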

  13. Decentralized Pricing in Minimum Cost Spanning Trees

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Moulin, Hervé; Østerdal, Lars Peter

    In the minimum cost spanning tree model we consider decentralized pricing rules, i.e. rules that cover at least the efficient cost while the price charged to each user only depends upon his own connection costs. We define a canonical pricing rule and provide two axiomatic characterizations. First, the canonical pricing rule is the smallest among those that improve upon the Stand Alone bound, and are either superadditive or piece-wise linear in connection costs. Our second, direct characterization relies on two simple properties highlighting the special role of the source cost.

  14. The Risk Management of Minimum Return Guarantees

    Directory of Open Access Journals (Sweden)

    Antje Mahayni

    2008-05-01

    Full Text Available Contracts paying a guaranteed minimum rate of return and a fraction of a positive excess rate, which is specified relative to a benchmark portfolio, are closely related to unit-linked life-insurance products and can be considered as alternatives to direct investment in the underlying benchmark. They contain an embedded power option, and the key issue is the tractable and realistic hedging of this option, in order to rigorously justify valuation by arbitrage arguments and prevent the guarantees from becoming uncontrollable liabilities to the issuer. We show how to determine the contract parameters conservatively and implement robust risk-management strategies.

  15. Iterative Regularization with Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2007-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES: their success as regularization methods is highly problem dependent.

  16. Iterative regularization with minimum-residual methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2006-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES: their success as regularization methods is highly problem dependent.

  17. Quantum chromodynamics at large distances

    International Nuclear Information System (INIS)

    Arbuzov, B.A.

    1987-01-01

    Properties of QCD at large distances are considered in the framework of traditional quantum field theory. An investigation of the asymptotic behaviour of lower Green functions in QCD is the starting point of the approach. Recent works are reviewed which confirm the singular infrared behaviour of the gluon propagator, M^2/(k^2)^2, at least under some gauge conditions. A special covariant gauge turns out to be the most suitable for describing the infrared region, owing to the absence of ghost contributions to the infrared asymptotics of the Green functions. Solutions of the Schwinger-Dyson equation for the quark propagator are obtained in this special gauge and are shown to possess the desired properties: spontaneous breaking of chiral invariance and a nonperturbative character. The infrared asymptotics of the lower Green functions are used to calculate vacuum expectation values of gluon and quark fields. These vacuum expectation values agree well with the corresponding phenomenological values needed in the QCD sum-rule method, which confirms the adequacy of the infrared-region description. The consideration of the behaviour of QCD at large distances leads to the conclusion that, at the present stage of the theory's development, two possibilities may be considered. The first is the well-known confinement hypothesis; the second, called incomplete confinement, allows open color to be observable. Possible manifestations of incomplete confinement are discussed

  18. Determination of minimum sample size for fault diagnosis of automobile hydraulic brake system using power analysis

    Directory of Open Access Journals (Sweden)

    V. Indira

    2015-03-01

    Full Text Available The hydraulic brake is considered one of the important components in automobile engineering. Condition monitoring and fault diagnosis of such a component are essential for the safety of passengers and vehicles and to minimize unexpected maintenance time. Vibration-based machine learning approaches for condition monitoring of hydraulic brake systems are gaining momentum. Training and testing the classifier are two important activities in the process of feature classification. This study proposes a systematic statistical method called power analysis to find the minimum number of samples required to train the classifier with statistical stability so as to obtain good classification accuracy. Descriptive statistical features have been used, and the most contributing features have been selected using the C4.5 decision tree algorithm. The results of the power analysis have also been verified using a decision tree algorithm, namely C4.5.
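
    As an illustration of the kind of calculation involved, the sketch below uses the two-sample t-test power analysis from statsmodels to solve for a minimum per-class sample size; the effect size, significance level and power are assumed values, and the paper's exact procedure may differ.

      from statsmodels.stats.power import TTestIndPower

      effect_size = 0.8   # assumed standardized difference between feature means (Cohen's d)
      alpha = 0.05        # significance level
      power = 0.90        # desired statistical power

      n_per_class = TTestIndPower().solve_power(effect_size=effect_size,
                                                alpha=alpha, power=power)
      print(f"Minimum samples per class: {int(round(n_per_class))}")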

  19. Psychological distance of pedestrian at the bus terminal area

    Science.gov (United States)

    Firdaus Mohamad Ali, Mohd; Salleh Abustan, Muhamad; Hidayah Abu Talib, Siti; Abustan, Ismail; Rahman, Noorhazlinda Abd; Gotoh, Hitoshi

    2018-03-01

    Walking is a mode of transportation that is effective for pedestrians on either short or long trips. Everyone can be classified as a pedestrian because people walk every day, and a higher number of people walking leads to crowded conditions, which is why it is important to study pedestrian behaviour, specifically psychological distance, both indoors and outdoors. Nowadays, the number of studies of crowd dynamics among pedestrians has increased due to concern about safety issues, primarily related to emergencies such as fires, earthquakes, festivals, etc. An observation of pedestrians was conducted at one of the main bus terminals in Kuala Lumpur with the main objective of obtaining pedestrian psychological distance. It took place over 45 minutes using a camcorder set up on a tripod on the floor above the observation area at the main lobby, and the captured area was approximately 100 m2. The analysis focused on obtaining the gap between pedestrians in two categories: (a) pedestrians with a relationship, and (b) pedestrians without a relationship. In total, 1,766 data points were obtained during the analysis, of which 561 were for "pedestrian with relationship" and 1,205 for "pedestrian without relationship". Based on the results, "pedestrian without relationship" showed a slightly higher average psychological distance than "pedestrian with relationship", with values of 1.6360 m and 1.5909 m respectively. In the gender analysis, "pedestrian without relationship" also had a higher mean psychological distance in all three categories. Therefore, it can be concluded that pedestrians without a relationship tend to keep a longer distance when walking in crowds.

  20. Fuzziness-based active learning framework to enhance hyperspectral image classification performance for discriminative and generative classifiers.

    Directory of Open Access Journals (Sweden)

    Muhammad Ahmad

    Full Text Available Hyperspectral image classification with a limited number of training samples without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome this problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publicly available datasets, we show that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with a small amount of training data, as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of the proposed method, which compares favorably with state-of-the-art methods.
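
    The sketch below illustrates the general fuzziness-driven selection idea with a simple proxy (normalized prediction entropy) rather than the paper's exact FALF criterion; the dataset is synthetic and the query size is arbitrary.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=600, n_features=20, n_classes=3,
                                 n_informative=6, random_state=0)
      labeled = np.arange(30)                    # small initial labeled set
      pool = np.arange(30, 600)                  # unlabeled pool

      clf = SVC(probability=True, random_state=0).fit(X[labeled], y[labeled])

      # Fuzziness proxy: normalized entropy of the predicted class-membership vector.
      proba = clf.predict_proba(X[pool])
      fuzziness = -(proba * np.log(proba + 1e-12)).sum(axis=1) / np.log(proba.shape[1])

      query = pool[np.argsort(fuzziness)[-20:]]  # 20 fuzziest (most boundary-like) samples
      labeled = np.concatenate([labeled, query]) # an oracle would label these in practice
      clf = SVC(probability=True, random_state=0).fit(X[labeled], y[labeled])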

  1. Discrimination-Aware Classifiers for Student Performance Prediction

    Science.gov (United States)

    Luo, Ling; Koprinska, Irena; Liu, Wei

    2015-01-01

    In this paper we consider discrimination-aware classification of educational data. Mining and using rules that distinguish groups of students based on sensitive attributes such as gender and nationality may lead to discrimination. It is desirable to keep the sensitive attributes during the training of a classifier to avoid information loss but…

  2. 29 CFR 1910.307 - Hazardous (classified) locations.

    Science.gov (United States)

    2010-07-01

    ... equipment at the location. (c) Electrical installations. Equipment, wiring methods, and installations of... covers the requirements for electric equipment and wiring in locations that are classified depending on... provisions of this section. (4) Division and zone classification. In Class I locations, an installation must...

  3. 29 CFR 1926.407 - Hazardous (classified) locations.

    Science.gov (United States)

    2010-07-01

    ...) locations, unless modified by provisions of this section. (b) Electrical installations. Equipment, wiring..., DEPARTMENT OF LABOR (CONTINUED) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Electrical Installation Safety... electric equipment and wiring in locations which are classified depending on the properties of the...

  4. 18 CFR 3a.71 - Accountability for classified material.

    Science.gov (United States)

    2010-04-01

    ... numbers assigned to top secret material will be separate from the sequence for other classified material... central control registry in calendar year 1969. TS 1006—Sixth Top Secret document controlled by the... control registry when the document is transferred. (e) For Top Secret documents only, an access register...

  5. Classifier fusion for VoIP attacks classification

    Science.gov (United States)

    Safarik, Jakub; Rezac, Filip

    2017-05-01

    SIP is one of the most successful protocols in the field of IP telephony communication; it establishes and manages VoIP calls. As the number of SIP implementations rises, we can expect a higher number of attacks on these communication systems in the near future. This work aims at classifying malicious SIP traffic. Various machine learning algorithms have been developed for attack classification. The paper presents a comparison with current research and the use of a classifier fusion method leading to a potential decrease in classification error rate. Combining classifiers yields a more robust solution without the difficulties that may affect single algorithms. Different voting schemes, combination rules, and classifiers are discussed to improve the overall performance. All classifiers have been trained on real malicious traffic. The traffic monitoring concept depends on a network of honeypot nodes. These honeypots run in several networks spread across different locations; separating the honeypots allows us to gain independent and trustworthy attack information.
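
    A minimal sketch of classifier fusion by voting (one of several combination rules the paper compares); the feature matrix here is synthetic and merely stands in for the SIP traffic features extracted from the honeypot captures.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import VotingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.naive_bayes import GaussianNB
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=500, n_features=12, n_classes=3,
                                 n_informative=6, random_state=1)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

      fusion = VotingClassifier(
          estimators=[("lr", LogisticRegression(max_iter=1000)),
                      ("nb", GaussianNB()),
                      ("dt", DecisionTreeClassifier(random_state=1))],
          voting="soft")             # average predicted probabilities; "hard" = majority vote
      fusion.fit(X_tr, y_tr)
      print("fused accuracy:", fusion.score(X_te, y_te))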

  6. Bayesian Classifier for Medical Data from Doppler Unit

    Directory of Open Access Journals (Sweden)

    J. Málek

    2006-01-01

    Full Text Available Nowadays, hand-held ultrasonic Doppler units (probes) are often used for noninvasive screening of atherosclerosis in the arteries of the lower limbs. The mean velocity of blood flow over time and blood pressures are measured at several positions on each lower limb. By listening to the acoustic signal generated by the device or by reading the signal displayed on screen, a specialist can detect peripheral arterial disease (PAD). This project aims to design software able to analyze data from such a device and classify it into several diagnostic classes. At the Department of Functional Diagnostics at the Regional Hospital in Liberec, a database of several hundred signals was collected. In cooperation with the specialist, the signals were manually classified into four classes. For each class, selected signal features were extracted and then used for training a Bayesian classifier. Another set of signals was used for evaluating and optimizing the parameters of the classifier. A rate of slightly above 84% correctly recognized diagnostic states was recently achieved on the test data.

  7. An Investigation to Improve Classifier Accuracy for Myo Collected Data

    Science.gov (United States)

    2017-02-01

    [Abstract not available in this record; the extracted excerpt contains only table-of-contents and figure-list fragments referencing classification accuracy for Naïve Bayes (NB), Logistic Model Tree (LMT), and K-Nearest Neighbor classifiers, and figure captions for "come" gesture pitch-feature samples exhibiting reversed movement.]

  8. Diagnosis of Broiler Livers by Classifying Image Patches

    DEFF Research Database (Denmark)

    Jørgensen, Anders; Fagertun, Jens; Moeslund, Thomas B.

    2017-01-01

    Manual health inspection is becoming the bottleneck at poultry processing plants. We present a computer vision method for automatic diagnosis of broiler livers. The non-rigid livers, of varying shapes and sizes, are classified in patches by a convolutional neural network, outputting maps

  9. Support vector machines classifiers of physical activities in preschoolers

    Science.gov (United States)

    The goal of this study is to develop, test, and compare multinomial logistic regression (MLR) and support vector machines (SVM) in classifying preschool-aged children's physical activity data acquired from an accelerometer. In this study, 69 children aged 3-5 years were asked to participate in a s...

  10. Data Stream Classification Based on the Gamma Classifier

    Directory of Open Access Journals (Sweden)

    Abril Valeria Uriarte-Arcia

    2015-01-01

    Full Text Available The ever-increasing data generation confronts us with the problem of handling massive amounts of information online. One of the biggest challenges is how to extract valuable information from these massive continuous data streams during a single scan. In a data stream context, data arrive continuously at high speed; therefore the algorithms developed to address this context must be efficient regarding memory and time management and capable of detecting changes over time in the underlying distribution that generated the data. This work describes a novel method for the task of pattern classification over a continuous data stream based on an associative model. The proposed method is based on the Gamma classifier, which is inspired by the Alpha-Beta associative memories; both are supervised pattern recognition models. The proposed method is capable of handling the space and time constraints inherent to data stream scenarios. The Data Streaming Gamma classifier (DS-Gamma classifier) implements a sliding window approach to provide concept drift detection and a forgetting mechanism. In order to test the classifier, several experiments were performed using different data stream scenarios with real and synthetic data streams. The experimental results show that the method exhibits competitive performance when compared to other state-of-the-art algorithms.
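
    The sliding-window and forgetting mechanism can be sketched as below; a k-nearest-neighbours model stands in for the Gamma classifier, for which no off-the-shelf implementation is assumed, and the stream is synthetic with an artificial drift.

      from collections import deque
      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      WINDOW = 200                                   # forgetting mechanism: keep only recent samples
      window_X, window_y = deque(maxlen=WINDOW), deque(maxlen=WINDOW)

      def process(x, true_label=None):
          """Predict the label of one incoming sample, then use its true label,
          if available, to update the window (prequential evaluation style)."""
          pred = None
          if len(window_X) >= 3 and len(set(window_y)) > 1:
              model = KNeighborsClassifier(n_neighbors=3)
              model.fit(np.array(window_X), np.array(window_y))
              pred = model.predict(x.reshape(1, -1))[0]
          if true_label is not None:
              window_X.append(x)
              window_y.append(true_label)
          return pred

      rng = np.random.default_rng(0)
      for t in range(1000):                          # synthetic stream with a drift at t = 500
          label = int(rng.integers(0, 2))
          x = rng.normal(loc=label + (2 if t >= 500 else 0), scale=0.5, size=3)
          process(x, label)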

  11. Building an automated SOAP classifier for emergency department reports.

    Science.gov (United States)

    Mowery, Danielle; Wiebe, Janyce; Visweswaran, Shyam; Harkema, Henk; Chapman, Wendy W

    2012-02-01

    Information extraction applications that extract structured event and entity information from unstructured text can leverage knowledge of clinical report structure to improve performance. The Subjective, Objective, Assessment, Plan (SOAP) framework, used to structure progress notes to facilitate problem-specific, clinical decision making by physicians, is one example of a well-known, canonical structure in the medical domain. Although its applicability to structuring data is understood, its contribution to information extraction tasks has not yet been determined. The first step to evaluating the SOAP framework's usefulness for clinical information extraction is to apply the model to clinical narratives and develop an automated SOAP classifier that classifies sentences from clinical reports. In this quantitative study, we applied the SOAP framework to sentences from emergency department reports, and trained and evaluated SOAP classifiers built with various linguistic features. We found the SOAP framework can be applied manually to emergency department reports with high agreement (Cohen's kappa coefficients over 0.70). Using a variety of features, we found classifiers for each SOAP class can be created with moderate to outstanding performance with F(1) scores of 93.9 (subjective), 94.5 (objective), 75.7 (assessment), and 77.0 (plan). We look forward to expanding the framework and applying the SOAP classification to clinical information extraction tasks. Copyright © 2011. Published by Elsevier Inc.
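
    One plausible way to prototype such a sentence-level classifier is with simple lexical features, as sketched below (TF-IDF plus logistic regression); the study evaluates richer linguistic features, and the example sentences here are invented.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      sentences = [
          "Patient reports chest pain since this morning.",        # Subjective
          "Blood pressure 150/90, heart rate 88.",                 # Objective
          "Likely musculoskeletal pain, rule out cardiac cause.",  # Assessment
          "Start ibuprofen and schedule a stress test.",           # Plan
      ]
      labels = ["S", "O", "A", "P"]

      soap_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                               LogisticRegression(max_iter=1000))
      soap_clf.fit(sentences, labels)
      print(soap_clf.predict(["Order a chest x-ray and follow up in one week."]))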

  12. Learning to classify wakes from local sensory information

    Science.gov (United States)

    Alsalman, Mohamad; Colvert, Brendan; Kanso, Eva; Kanso Team

    2017-11-01

    Aquatic organisms exhibit remarkable abilities to sense local flow signals contained in their fluid environment and to surmise the origins of these flows. For example, fish can discern the information contained in various flow structures and utilize this information for obstacle avoidance and prey tracking. Flow structures created by flapping and swimming bodies are well characterized in the fluid dynamics literature; however, such characterization relies on classical methods that use an external observer to reconstruct global flow fields. The reconstructed flows, or wakes, are then classified according to the unsteady vortex patterns. Here, we propose a new approach for wake identification: we classify the wakes resulting from a flapping airfoil by applying machine learning algorithms to local flow information. In particular, we simulate the wakes of an oscillating airfoil in an incoming flow, extract the downstream vorticity information, and train a classifier to learn the different flow structures and classify new ones. This data-driven approach provides a promising framework for underwater navigation and detection in application to autonomous bio-inspired vehicles.

  13. The Closing of the Classified Catalog at Boston University

    Science.gov (United States)

    Hazen, Margaret Hindle

    1974-01-01

    Although the classified catalog at Boston University libraries has been a useful research tool, it has proven too expensive to keep current. The library has converted to a traditional alphabetic subject catalog and will receive catalog cards from the Ohio College Library Center through the New England Library Network. (Author/LS)

  14. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    Directory of Open Access Journals (Sweden)

    M. Al-Rousan

    2005-08-01

    Full Text Available Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of Arabic sign language (ArSL alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
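
    The non-iterative training property mentioned above comes from the fact that a polynomial classifier can be fitted by a single linear least-squares solve over polynomially expanded features, as in the sketch below (synthetic data, one-hot targets).

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.preprocessing import PolynomialFeatures

      X, y = make_classification(n_samples=300, n_features=10, n_classes=4,
                                 n_informative=6, random_state=2)

      P = PolynomialFeatures(degree=2, include_bias=True).fit_transform(X)
      T = np.eye(4)[y]                            # one-hot targets, one column per class

      W, *_ = np.linalg.lstsq(P, T, rcond=None)   # closed-form fit, no iterative training
      pred = np.argmax(P @ W, axis=1)
      print("training accuracy:", (pred == y).mean())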

  15. Reconfigurable support vector machine classifier with approximate computing

    NARCIS (Netherlands)

    van Leussen, M.J.; Huisken, J.; Wang, L.; Jiao, H.; De Gyvez, J.P.

    2017-01-01

    Support Vector Machine (SVM) is one of the most popular machine learning algorithms. An energy-efficient SVM classifier is proposed in this paper, where approximate computing is utilized to reduce energy consumption and silicon area. A hardware architecture with reconfigurable kernels and

  16. Classifying regularized sensor covariance matrices: An alternative to CSP

    NARCIS (Netherlands)

    Roijendijk, L.M.M.; Gielen, C.C.A.M.; Farquhar, J.D.R.

    2016-01-01

    Common spatial patterns ( CSP) is a commonly used technique for classifying imagined movement type brain-computer interface ( BCI) datasets. It has been very successful with many extensions and improvements on the basic technique. However, a drawback of CSP is that the signal processing pipeline

  17. Classifying regularised sensor covariance matrices: An alternative to CSP

    NARCIS (Netherlands)

    Roijendijk, L.M.M.; Gielen, C.C.A.M.; Farquhar, J.D.R.

    2016-01-01

    Common spatial patterns (CSP) is a commonly used technique for classifying imagined movement type brain computer interface (BCI) datasets. It has been very successful with many extensions and improvements on the basic technique. However, a drawback of CSP is that the signal processing pipeline

  18. Two-categorical bundles and their classifying spaces

    DEFF Research Database (Denmark)

    Baas, Nils A.; Bökstedt, M.; Kro, T.A.

    2012-01-01

    -category is a classifying space for the associated principal 2-bundles. In the process of proving this we develop a lot of powerful machinery which may be useful in further studies of 2-categorical topology. As a corollary we get a new proof of the classification of principal bundles. A calculation based...

  19. 3 CFR - Classified Information and Controlled Unclassified Information

    Science.gov (United States)

    2010-01-01

    ... on Transparency and Open Government and on the Freedom of Information Act, my Administration is... memoranda of January 21, 2009, on Transparency and Open Government and on the Freedom of Information Act; (B... 3 The President 1 2010-01-01 2010-01-01 false Classified Information and Controlled Unclassified...

  20. A Gene Expression Classifier of Node-Positive Colorectal Cancer

    Directory of Open Access Journals (Sweden)

    Paul F. Meeh

    2009-10-01

    Full Text Available We used digital long serial analysis of gene expression to discover gene expression differences between node-negative and node-positive colorectal tumors and developed a multigene classifier able to discriminate between these two tumor types. We prepared and sequenced long serial analysis of gene expression libraries from one node-negative and one node-positive colorectal tumor, sequenced to a depth of 26,060 unique tags, and identified 262 tags significantly differentially expressed between these two tumors (P < 2 x 10^-6). We confirmed the tag-to-gene assignments and differential expression of 31 genes by quantitative real-time polymerase chain reaction, 12 of which were elevated in the node-positive tumor. We analyzed the expression levels of these 12 upregulated genes in a validation panel of 23 additional tumors and developed an optimized seven-gene logistic regression classifier. The classifier discriminated between node-negative and node-positive tumors with 86% sensitivity and 80% specificity. Receiver operating characteristic analysis of the classifier revealed an area under the curve of 0.86. Experimental manipulation of the function of one classification gene, Fibronectin, caused profound effects on invasion and migration of colorectal cancer cells in vitro. These results suggest that the development of node-positive colorectal cancer occurs in part through elevated epithelial FN1 expression and suggest novel strategies for the diagnosis and treatment of advanced disease.

  1. Cascaded lexicalised classifiers for second-person reference resolution

    NARCIS (Netherlands)

    Purver, M.; Fernández, R.; Frampton, M.; Peters, S.; Healey, P.; Pieraccini, R.; Byron, D.; Young, S.; Purver, M.

    2009-01-01

    This paper examines the resolution of the second person English pronoun you in multi-party dialogue. Following previous work, we attempt to classify instances as generic or referential, and in the latter case identify the singular or plural addressee. We show that accuracy and robustness can be

  2. Human Activity Recognition by Combining a Small Number of Classifiers.

    Science.gov (United States)

    Nazabal, Alfredo; Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Ghahramani, Zoubin

    2016-09-01

    We consider the problem of daily human activity recognition (HAR) using multiple wireless inertial sensors, and specifically, HAR systems with a very low number of sensors, each one providing an estimation of the performed activities. We propose new Bayesian models to combine the output of the sensors. The models are based on a soft-outputs combination of individual classifiers to deal with the small number of sensors. We also incorporate the dynamic nature of human activities as a first-order homogeneous Markov chain. We develop both inductive and transductive inference methods for each model to be employed in supervised and semisupervised situations, respectively. Using different real HAR databases, we compare our classifier combination models against a single classifier that employs all the signals from the sensors. Our models consistently exhibit a reduction of the error rate and an increase of robustness against sensor failures. Our models also outperform other classifier combination models that do not consider soft outputs and a Markovian structure of the human activities.

  3. Evaluation of three classifiers in mapping forest stand types using ...

    African Journals Online (AJOL)

    EJIRO

    applied for classification of the image. Supervised classification technique using maximum likelihood algorithm is the most commonly and widely used method for land cover classification (Jia and Richards, 2006). In Australia, the maximum likelihood classifier was effectively used to map different forest stand types with high.

  4. Classifying patients' complaints for regulatory purposes : A Pilot Study

    NARCIS (Netherlands)

    Bouwman, R.J.R.; Bomhoff, Manja; Robben, Paul; Friele, R.D.

    2018-01-01

    Objectives: It is assumed that classifying and aggregated reporting of patients' complaints by regulators helps to identify problem areas, to respond better to patients and increase public accountability. This pilot study addresses what a classification of complaints in a regulatory setting

  5. Localizing genes to cerebellar layers by classifying ISH images.

    Directory of Open Access Journals (Sweden)

    Lior Kirsch

    Full Text Available Gene expression controls how the brain develops and functions. Understanding control processes in the brain is particularly hard since they involve numerous types of neurons and glia, and very little is known about which genes are expressed in which cells and brain layers. Here we describe an approach to detect genes whose expression is primarily localized to a specific brain layer and apply it to the mouse cerebellum. We learn typical spatial patterns of expression from a few markers that are known to be localized to specific layers, and use these patterns to predict localization for new genes. We analyze images of in-situ hybridization (ISH experiments, which we represent using histograms of local binary patterns (LBP and train image classifiers and gene classifiers for four layers of the cerebellum: the Purkinje, granular, molecular and white matter layer. On held-out data, the layer classifiers achieve accuracy above 94% (AUC by representing each image at multiple scales and by combining multiple image scores into a single gene-level decision. When applied to the full mouse genome, the classifiers predict specific layer localization for hundreds of new genes in the Purkinje and granular layers. Many genes localized to the Purkinje layer are likely to be expressed in astrocytes, and many others are involved in lipid metabolism, possibly due to the unusual size of Purkinje cells.
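
    The local-binary-pattern histogram representation used above can be sketched with scikit-image as follows; the random image stands in for an ISH expression patch, and the downstream layer classifier is omitted.

      import numpy as np
      from skimage.feature import local_binary_pattern

      P, R = 8, 1                                          # 8 neighbours at radius 1
      image = np.random.default_rng(0).random((128, 128))  # placeholder for an ISH image patch

      lbp = local_binary_pattern(image, P, R, method="uniform")
      # "uniform" LBP yields codes 0 .. P+1, so the histogram has P+2 bins.
      hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)
      print(hist)                                          # feature vector fed to a layer classifier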

  6. An ensemble self-training protein interaction article classifier.

    Science.gov (United States)

    Chen, Yifei; Hou, Ping; Manderick, Bernard

    2014-01-01

    Protein-protein interaction (PPI) is essential to understand the fundamental processes governing cell biology. The mining and curation of PPI knowledge are critical for analyzing proteomics data. Hence it is desirable to classify articles as PPI-related or not automatically. In order to build interaction article classification systems, an annotated corpus is needed. However, it is usually the case that only a small number of labeled articles can be obtained manually, while a large number of unlabeled articles are available. By combining ensemble learning and semi-supervised self-training, an ensemble self-training interaction classifier called EST_IACer is designed to classify PPI-related articles based on a small number of labeled articles and a large number of unlabeled articles. A biological-background-based feature weighting strategy is extended using the category information from both labeled and unlabeled data. Moreover, a heuristic constraint is put forward to select optimal instances from unlabeled data to improve the performance further. Experimental results show that the EST_IACer can classify the PPI-related articles effectively and efficiently.
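
    The self-training part of the approach can be illustrated with scikit-learn's SelfTrainingClassifier, which wraps the same loop of pseudo-labelling confident unlabeled examples and retraining; this is a generic sketch on synthetic data, not the EST_IACer system itself.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.semi_supervised import SelfTrainingClassifier

      X, y = make_classification(n_samples=1000, n_features=50, random_state=3)
      y_semi = y.copy()
      y_semi[100:] = -1               # only the first 100 "articles" are labeled; -1 marks unlabeled

      model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.9)
      model.fit(X, y_semi)
      print("accuracy on the unlabeled part:", (model.predict(X[100:]) == y[100:]).mean())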

  7. Classifying Your Food as Acid, Low-Acid, or Acidified

    OpenAIRE

    Bacon, Karleigh

    2012-01-01

    As a food entrepreneur, you should be aware of how the ingredients in your product make the food look, feel, and taste, as well as how the ingredients create environments in which microorganisms like bacteria, yeasts, and molds can survive and grow. This guide will help you classify your food as acid, low-acid, or acidified.

  8. Gene-expression Classifier in Papillary Thyroid Carcinoma

    DEFF Research Database (Denmark)

    Londero, Stefano Christian; Jespersen, Marie Louise; Krogdahl, Annelise

    2016-01-01

    BACKGROUND: No reliable biomarker for metastatic potential in the risk stratification of papillary thyroid carcinoma exists. We aimed to develop a gene-expression classifier for metastatic potential. MATERIALS AND METHODS: Genome-wide expression analyses were used. Development cohort: freshly...

  9. Abbreviations: Their Effects on Comprehension of Classified Advertisements.

    Science.gov (United States)

    Sokol, Kirstin R.

    Two experimental designs were used to test the hypothesis that abbreviations in classified advertisements decrease the reader's comprehension of such ads. In the first experimental design, 73 high school students read four ads (for employment, used cars, apartments for rent, and articles for sale) either with abbreviations or with all…

  10. Multi-image acquisition-based distance sensor using agile laser spot beam.

    Science.gov (United States)

    Riza, Nabeel A; Amin, M Junaid

    2014-09-01

    We present a novel laser-based distance measurement technique that uses multiple-image-based spatial processing to enable distance measurements. Compared with the first-generation distance sensor using spatial processing, the modified sensor is no longer hindered by the classic Rayleigh axial resolution limit for the propagating laser beam at its minimum beam waist location. The proposed high-resolution distance sensor design uses an electronically controlled variable focus lens (ECVFL) in combination with an optical imaging device, such as a charge-coupled device (CCD), to produce and capture different laser spot size images on a target with these beam spot sizes different from the minimal spot size possible at this target distance. By exploiting the unique relationship of the target located spot sizes with the varying ECVFL focal length for each target distance, the proposed distance sensor can compute the target distance with a distance measurement resolution better than the axial resolution via the Rayleigh resolution criterion. Using a 30 mW 633 nm He-Ne laser coupled with an electromagnetically actuated liquid ECVFL, along with a 20 cm focal length bias lens, and using five spot images captured per target position by a CCD-based Nikon camera, a proof-of-concept proposed distance sensor is successfully implemented in the laboratory over target ranges from 10 to 100 cm with a demonstrated sub-cm axial resolution, which is better than the axial Rayleigh resolution limit at these target distances. Applications for the proposed potentially cost-effective distance sensor are diverse and include industrial inspection and measurement and 3D object shape mapping and imaging.

  11. Open and Distance Learning Today. Routledge Studies in Distance Education Series.

    Science.gov (United States)

    Lockwood, Fred, Ed.

    This book contains the following papers on open and distance learning today: "Preface" (Daniel); "Big Bang Theory in Distance Education" (Hawkridge); "Practical Agenda for Theorists of Distance Education" (Perraton); "Trends, Directions and Needs: A View from Developing Countries" (Koul); "American…

  12. Improved Collaborative Representation Classifier Based on l2-Regularized for Human Action Recognition

    Directory of Open Access Journals (Sweden)

    Shirui Huo

    2017-01-01

    Full Text Available Human action recognition is an important and challenging task. Projecting depth images onto three depth motion maps (DMMs) and extracting deep convolutional neural network (DCNN) features yields discriminative descriptors that characterize the spatiotemporal information of a specific action from a sequence of depth images. In this paper, a unified improved collaborative representation framework is proposed in which the probability that a test sample belongs to the collaborative subspace of all classes can be well defined and calculated. The improved collaborative representation classifier (ICRC) based on l2-regularization for human action recognition is presented to maximize the likelihood that a test sample belongs to each class; a theoretical investigation into ICRC shows that it obtains a final classification by computing the likelihood for each class. Coupled with the DMM and DCNN features, experiments on depth-image-based action recognition, including the MSRAction3D and MSRGesture3D datasets, demonstrate that the proposed approach, using a distance-based representation classifier, achieves superior performance over state-of-the-art methods, including SRC, CRC, and SVM.
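
    A minimal sketch of the l2-regularized collaborative representation step (plain CRC; the paper's ICRC adds a likelihood-based refinement not reproduced here): a test sample is coded over all training samples by ridge regression and assigned to the class whose columns give the smallest reconstruction residual. The toy features below stand in for DMM/DCNN descriptors.

      import numpy as np

      def crc_predict(X_train, y_train, x, lam=0.1):
          """X_train: (n_samples, n_features); x: (n_features,). Returns the predicted class."""
          D = X_train.T                                   # dictionary: columns are training samples
          alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
          residuals = {}
          for c in np.unique(y_train):
              mask = (y_train == c)
              residuals[c] = np.linalg.norm(x - D[:, mask] @ alpha[mask])
          return min(residuals, key=residuals.get)

      rng = np.random.default_rng(4)                      # toy data standing in for DMM/DCNN features
      X_train = np.vstack([rng.normal(c, 1.0, (20, 30)) for c in range(3)])
      y_train = np.repeat(np.arange(3), 20)
      print(crc_predict(X_train, y_train, rng.normal(1, 1.0, 30)))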

  13. Evaluating a k-nearest neighbours-based classifier for locating faulty areas in power systems

    Directory of Open Access Journals (Sweden)

    Juan José Mora Flórez

    2008-09-01

    Full Text Available This paper reports a strategy for identifying and locating faults in a power distribution system. The strategy was based on the K-nearest neighbours technique. This technique estimates a distance from the features used for describing a particular fault being classified to the faults presented during the training stage. If new data is presented to the proposed fault locator, it is classified according to the nearest example recovered. A characterisation of the voltage and current measurements obtained at one single line end is also presented in this document for assigning the area in the case of a fault in a power system. The proposed strategy was tested in a real power distribution system, and average confidence indexes of 93% were obtained, which is a good indicator of the proposal's high performance. The results showed how a fault can be located by using features obtained from voltage and current, improving utility response and thereby improving system continuity indexes in power distribution systems.
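
    The K-nearest-neighbours step itself is straightforward, as the sketch below shows; the synthetic features merely stand in for the descriptors extracted from single-end voltage and current measurements, and the zone labels are hypothetical.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(5)
      n_zones = 4
      X = np.vstack([rng.normal(z, 0.7, (50, 6)) for z in range(n_zones)])  # e.g. voltage/current sag features
      zones = np.repeat(np.arange(n_zones), 50)                             # faulted-area labels

      locator = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
      locator.fit(X, zones)
      print("predicted fault area:", locator.predict(rng.normal(2, 0.7, (1, 6)))[0])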

  14. VIRTUAL LABORATORY IN DISTANCE LEARNING SYSTEM

    Directory of Open Access Journals (Sweden)

    Е. Kozlovsky

    2011-11-01

    Full Text Available Questions of the design and the choice of technologies for creating a virtual laboratory for a distance learning system are considered. The distance learning system «Kherson Virtual University» is used as an illustration.

  15. Distance Learning Plan Development: Initiating Organizational Structures

    National Research Council Canada - National Science Library

    Poole, Clifton

    1998-01-01

    .... Army distance learning plan managers to examine the DLPs they were directing. The analysis showed that neither army nor civilian distance learning plan managers used formalized requirements for organizational structure development (OSD...

  16. When Do Distance Effects Become Empirically Observable?

    DEFF Research Database (Denmark)

    Beugelsdijk, Sjoerd; Nell, Phillip C.; Ambos, Björn

    2017-01-01

    Integrating distance research with the behavioral strategy literature on MNC headquarters-subsidiary relations, this paper explores how the distance between headquarters and subsidiaries relates to value added by the headquarters. We show for 124 manufacturing subsidiaries in Europe that...

  17. Institutional Distance and the Internationalization Process

    DEFF Research Database (Denmark)

    Pogrebnyakov, Nicolai; Maitland, Carleen

    2011-01-01

    This paper applies the institutional lens to the internationalization process model. It updates the concept of psychic distance in the model with a recently developed, theoretically grounded construct of institutional distance. Institutions are considered simultaneously at the national and industry...

  18. Efficient DNA barcode regions for classifying Piper species (Piperaceae

    Directory of Open Access Journals (Sweden)

    Arunrat Chaveerach

    2016-09-01

    Full Text Available Piper species are used for spices, in traditional and processed forms of medicines, in cosmetic compounds, in cultural activities and insecticides. Here barcode analysis was performed for identification of plant parts, young plants and modified forms of plants. Thirty-six Piper species were collected and the three barcode regions, matK, rbcL and psbA-trnH spacer, were amplified, sequenced and aligned to determine their genetic distances. For intraspecific genetic distances, the most effective values for the species identification ranged from no difference to very low distance values. However, P. betle had the highest values at 0.386 for the matK region. This finding may be due to P. betle being an economic and cultivated species, and thus is supported with growth factors, which may have affected its genetic distance. The interspecific genetic distances that were most effective for identification of different species were from the matK region and ranged from a low of 0.002 in 27 paired species to a high of 0.486. Eight species pairs, P. kraense and P. dominantinervium, P. magnibaccum and P. kraense, P. phuwuaense and P. dominantinervium, P. phuwuaense and P. kraense, P. pilobracteatum and P. dominantinervium, P. pilobracteatum and P. kraense, P. pilobracteatum and P. phuwuaense and P. sylvestre and P. polysyphonum, that presented a genetic distance of 0.000 and were identified by independently using each of the other two regions. Concisely, these three barcode regions are powerful for further efficient identification of the 36 Piper species.
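
    The pairwise genetic distances reported above are, at their simplest, proportions of differing aligned sites; the sketch below computes an uncorrected p-distance between two short made-up fragments (the study may well use a model-corrected distance such as Kimura 2-parameter).

      def p_distance(seq1, seq2):
          """Proportion of differing sites between two aligned, equal-length sequences,
          ignoring alignment gaps."""
          pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
          diffs = sum(1 for a, b in pairs if a != b)
          return diffs / len(pairs)

      # Toy matK-like fragments (hypothetical, not real Piper sequences):
      print(p_distance("ATGGCTTACCGT", "ATGGCATACCGT"))   # 1 difference over 12 sites ≈ 0.083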

  19. Efficient DNA barcode regions for classifying Piper species (Piperaceae).

    Science.gov (United States)

    Chaveerach, Arunrat; Tanee, Tawatchai; Sanubol, Arisa; Monkheang, Pansa; Sudmoon, Runglawan

    2016-01-01

    Piper species are used for spices, in traditional and processed forms of medicines, in cosmetic compounds, in cultural activities and insecticides. Here barcode analysis was performed for identification of plant parts, young plants and modified forms of plants. Thirty-six Piper species were collected and the three barcode regions, matK , rbcL and psbA - trnH spacer, were amplified, sequenced and aligned to determine their genetic distances. For intraspecific genetic distances, the most effective values for the species identification ranged from no difference to very low distance values. However, Piper betle had the highest values at 0.386 for the matK region. This finding may be due to Piper betle being an economic and cultivated species, and thus is supported with growth factors, which may have affected its genetic distance. The interspecific genetic distances that were most effective for identification of different species were from the matK region and ranged from a low of 0.002 in 27 paired species to a high of 0.486. Eight species pairs, Piper kraense and Piper dominantinervium , Piper magnibaccum and Piper kraense , Piper phuwuaense and Piper dominantinervium , Piper phuwuaense and Piper kraense , Piper pilobracteatum and Piper dominantinervium , Piper pilobracteatum and Piper kraense , Piper pilobracteatum and Piper phuwuaense and Piper sylvestre and Piper polysyphonum , that presented a genetic distance of 0.000 and were identified by independently using each of the other two regions. Concisely, these three barcode regions are powerful for further efficient identification of the 36 Piper species.

  20. Efficient DNA barcode regions for classifying Piper species (Piperaceae)

    Science.gov (United States)

    Chaveerach, Arunrat; Tanee, Tawatchai; Sanubol, Arisa; Monkheang, Pansa; Sudmoon, Runglawan

    2016-01-01

    Abstract Piper species are used for spices, in traditional and processed forms of medicines, in cosmetic compounds, in cultural activities and insecticides. Here barcode analysis was performed for identification of plant parts, young plants and modified forms of plants. Thirty-six Piper species were collected and the three barcode regions, matK, rbcL and psbA-trnH spacer, were amplified, sequenced and aligned to determine their genetic distances. For intraspecific genetic distances, the most effective values for the species identification ranged from no difference to very low distance values. However, Piper betle had the highest values at 0.386 for the matK region. This finding may be due to Piper betle being an economic and cultivated species, and thus is supported with growth factors, which may have affected its genetic distance. The interspecific genetic distances that were most effective for identification of different species were from the matK region and ranged from a low of 0.002 in 27 paired species to a high of 0.486. Eight species pairs, Piper kraense and Piper dominantinervium, Piper magnibaccum and Piper kraense, Piper phuwuaense and Piper dominantinervium, Piper phuwuaense and Piper kraense, Piper pilobracteatum and Piper dominantinervium, Piper pilobracteatum and Piper kraense, Piper pilobracteatum and Piper phuwuaense and Piper sylvestre and Piper polysyphonum, that presented a genetic distance of 0.000 and were identified by independently using each of the other two regions. Concisely, these three barcode regions are powerful for further efficient identification of the 36 Piper species. PMID:27829794