Construction of Protograph LDPC Codes with Linear Minimum Distance
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve a low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower-rate codes are obtained by splitting check nodes and connecting them with degree-2 variable nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very-high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance increasing linearly in block size and with capacity-approaching decoding thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
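The check-splitting step described above can be sketched on a protograph base matrix (rows are check nodes, columns are variable nodes, entries are edge counts). The helper below is a hypothetical illustration of the idea — the function name, the example matrix, and the chosen edge partition are assumptions, not the paper's construction:

```python
def split_check(base, row, part_a):
    """Split check `row` of a protograph base matrix into two checks.

    `part_a` is the set of variable-node columns kept by the first half.
    The two new checks are then reconnected through a new degree-2
    variable node, lowering the code rate (illustrative helper only).
    """
    old = base[row]
    a = [c if j in part_a else 0 for j, c in enumerate(old)]
    b = [0 if j in part_a else c for j, c in enumerate(old)]
    new = [r[:] for r in base]
    new[row] = a
    new.insert(row + 1, b)
    # connect the two halves with a new degree-2 variable node
    for i, r in enumerate(new):
        r.append(1 if i in (row, row + 1) else 0)
    return new

# a small protograph with all variable-node degrees >= 3
base = [[1, 2, 1], [2, 1, 2]]
lower = split_check(base, 0, {0})
assert len(lower) == 3 and len(lower[0]) == 4   # one extra check, one extra variable
```

Each split adds one check node and one degree-2 variable node, which is how the construction trades rate for the guaranteed distance property.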
International Nuclear Information System (INIS)
Achmad Suntoro
2014-01-01
A design determining the minimum distance between consecutive carriers on the trajectory of the IR-200K gamma irradiator is presented. The equilibrium between the centrifugal force of a carrier moving on the circular trajectory and its gravitational force, together with the carrier dimensions, is used as the parameter set for determining this minimum distance. The minimum distance between consecutive carriers in the design is set at 1.2 meters, which is 11.5% greater than the theoretically calculated minimum of 1.076 meters. Tolerance for errors in construction and installation of the trajectory, and for other unexpected conditions during the irradiator's operation, motivated enlarging the minimum distance beyond its theoretical value. The spacing between consecutive carriers does not affect throughput or the efficiency of radiation use, because the straight trajectory segments need not maintain this minimum distance, and the trajectory segments around the radiation sources are straight. (author)
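The 11.5% design allowance quoted above can be checked directly from the two distances given in the abstract:

```python
# Safety margin between the designed and the theoretical minimum carrier
# spacing for the IR-200K irradiator trajectory (figures from the abstract).
theoretical = 1.076  # m, from the centrifugal/gravity equilibrium analysis
designed = 1.2       # m, value adopted in the design

margin = (designed - theoretical) / theoretical
assert abs(margin - 0.115) < 0.001  # ~11.5% allowance for installation error
```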
LDPC Codes with Minimum Distance Proportional to Block Size
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy
2009-01-01
Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors.
The Minimum Distance of Graph Codes
DEFF Research Database (Denmark)
Høholdt, Tom; Justesen, Jørn
2011-01-01
We study codes constructed from graphs where the code symbols are associated with the edges and the symbols connected to a given vertex are restricted to be codewords in a component code. In particular we treat such codes from bipartite expander graphs coming from Euclidean planes and other geometries. We give results on the minimum distances of the codes.
Lower bounds for the minimum distance of algebraic geometry codes
DEFF Research Database (Denmark)
Beelen, Peter
... description of these codes in terms of order domains has been found. In my talk I will indicate how one can use the ideas behind the order bound to obtain a lower bound for the minimum distance of any AG-code. After this I will compare this generalized order bound with other known lower bounds, such as the Goppa bound, the Feng-Rao bound and the Kirfel-Pellikaan bound. I will finish my talk by giving several examples. Especially for two-point codes, the generalized order bound is fairly easy to compute. As an illustration, I will indicate how a lower bound can be obtained for the minimum distance of some ...
Decoding Reed-Solomon Codes beyond half the minimum distance
DEFF Research Database (Denmark)
Høholdt, Tom; Nielsen, Rasmus Refslund
1999-01-01
We describe an efficient implementation of M. Sudan's algorithm for decoding Reed-Solomon codes beyond half the minimum distance. Furthermore, we calculate an upper bound on the probability of getting more than one codeword as output.
Minimum Distance Estimation on Time Series Analysis With Little Data
National Research Council Canada - National Science Library
Tekin, Hakan
2001-01-01
.... Minimum distance estimation has been demonstrated to perform better than standard approaches, including maximum likelihood estimators and least squares, in estimating statistical distribution parameters with very small data sets...
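As a generic illustration of minimum distance estimation (not the report's estimator), the sketch below fits an exponential rate parameter by minimizing a Cramér-von Mises-style squared distance between the model CDF and the empirical CDF over a parameter grid; the sample and grid are made up:

```python
import math

def mde_exponential_rate(sample, grid):
    """Minimum distance estimate of an exponential rate parameter.

    Chooses the rate whose model CDF is closest (sum of squared gaps at
    the order statistics) to the empirical CDF. A generic illustration
    of the idea, suited to very small samples.
    """
    xs = sorted(sample)
    n = len(xs)
    def dist(lam):
        return sum((1 - math.exp(-lam * x) - (i + 0.5) / n) ** 2
                   for i, x in enumerate(xs))
    return min(grid, key=dist)

sample = [0.2, 0.5, 0.9, 1.4, 2.1]          # tiny data set
grid = [0.1 * k for k in range(1, 31)]       # candidate rates 0.1 .. 3.0
lam_hat = mde_exponential_rate(sample, grid)
assert 0.5 <= lam_hat <= 2.0                 # near 1/mean of the sample
```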
47 CFR 73.807 - Minimum distance separation between stations.
2010-10-01
... and the right-hand column lists (for informational purposes only) the minimum distance necessary for ... Within 320 km of the Mexican border, LP100 stations must meet the following separations with respect to any Mexican stations: Mexican station class; co-channel (km); first-adjacent channel (km); second- and third-adjacent channel (km) ...
MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging
Chen, Lei; Kamel, Mohamed S.
2016-01-01
In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
Protograph based LDPC codes with minimum distance linearly growing with block size
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly with block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
Directory of Open Access Journals (Sweden)
Sharif Uddin
2016-01-01
An enhanced k-nearest neighbor (k-NN) classification algorithm is presented, which uses a density-based similarity measure in addition to a distance-based similarity measure to improve the diagnostic performance in bearing fault diagnosis. Due to its use of a distance-based similarity measure alone, the classification accuracy of traditional k-NN deteriorates in case of overlapping samples and outliers and is highly susceptible to the neighborhood size, k. This study addresses these limitations by proposing the use of both distance- and density-based measures of similarity between training and test samples. The proposed k-NN classifier is used to enhance the diagnostic performance of a bearing fault diagnosis scheme, which classifies different fault conditions based upon hybrid feature vectors extracted from acoustic emission (AE) signals. Experimental results demonstrate that the proposed scheme, which uses the enhanced k-NN classifier, yields better diagnostic performance and is more robust to variations in the neighborhood size, k.
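A minimal sketch of combining distance- and density-based similarity in a k-NN vote. The particular similarity formula here (density as inverse mean distance to a point's own nearest training neighbours) is an assumption for illustration, not the paper's exact measure:

```python
import math

def knn_density_predict(train, labels, x, k=3):
    """k-NN vote weighted by a density-based similarity (illustrative).

    Each training point gets a density score: the inverse of its mean
    distance to its k nearest training neighbours. A neighbour's vote is
    its density divided by (1 + distance to the query), so dense, close
    samples dominate and isolated outliers are down-weighted.
    """
    d = math.dist
    def density(p):
        near = sorted(d(p, q) for q in train if q is not p)[:k]
        return 1.0 / (1e-9 + sum(near) / len(near))
    neigh = sorted(range(len(train)), key=lambda i: d(train[i], x))[:k]
    votes = {}
    for i in neigh:
        w = density(train[i]) / (1.0 + d(train[i], x))
        votes[labels[i]] = votes.get(labels[i], 0.0) + w
    return max(votes, key=votes.get)

train = [(0, 0), (0.2, 0.1), (0.1, 0.3), (5, 5), (5.1, 4.9), (4.9, 5.2)]
labels = ["normal", "normal", "normal", "fault", "fault", "fault"]
assert knn_density_predict(train, labels, (0.1, 0.1)) == "normal"
assert knn_density_predict(train, labels, (5.0, 5.0)) == "fault"
```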
A linear time algorithm for minimum fill-in and treewidth for distance heredity graphs
Broersma, Haitze J.; Dahlhaus, E.; Kloks, A.J.J.; Kloks, T.
2000-01-01
A graph is distance hereditary if it preserves distances in all its connected induced subgraphs. The MINIMUM FILL-IN problem is the problem of finding a chordal supergraph with the smallest possible number of edges. The TREEWIDTH problem is the problem of finding a chordal embedding of the graph
On the sizes of expander graphs and minimum distances of graph codes
DEFF Research Database (Denmark)
Høholdt, Tom; Justesen, Jørn
2014-01-01
We give lower bounds for the minimum distances of graph codes based on expander graphs. The bounds depend only on the second eigenvalue of the graph and the parameters of the component codes. We also give an upper bound on the size of a degree-regular graph with a given second eigenvalue.
Principle of minimum distance in space of states as new principle in quantum physics
International Nuclear Information System (INIS)
Ion, D. B.; Ion, M. L. D.
2007-01-01
The mathematician Leonhard Euler (1707-1783) appears to have been a philosophical optimist, having written: 'Since the fabric of the universe is most perfect and is the work of a most wise Creator, nothing whatsoever takes place in this universe in which some relation of maximum or minimum does not appear. Wherefore, there is absolutely no doubt that every effect in the universe can be explained as satisfactorily from final causes, by the aid of the method of maxima and minima, as it can from the effective causes themselves.' With this kind of optimism in mind, in the papers mentioned in this work we introduced and investigated the possibility of constructing a predictive analytic theory of elementary particle interactions based on the principle of minimum distance in the space of quantum states (PMD-SQS). So, choosing the partial transition amplitudes as the system variational variables and the distance in the space of quantum states as a measure of the system effectiveness, we obtained the results presented in this paper. These results proved that the principle of minimum distance in the space of quantum states (PMD-SQS) can be chosen as a variational principle by which the analytic expressions of the partial transition amplitudes can be found. In this paper we present a description of hadron-hadron scattering via the PMD-SQS, when the distance in the space of states is minimized with two directional constraints: dσ/dΩ(±1) = fixed. Then, by using the available experimental (pion-nucleon and kaon-nucleon) phase shifts, we obtained not only consistent experimental tests of the PMD-SQS optimality, but also strong experimental evidence for new principles in hadronic physics, such as a principle of nonextensivity conjugation via the Riesz-Thorin relation (1/2p + 1/2q = 1) and a new principle of limited uncertainty in nonextensive quantum physics. The strong experimental evidence obtained here for the nonextensive statistical behavior of the [J,
Combination of minimum enclosing balls classifier with SVM in coal-rock recognition
Song, QingJun; Jiang, HaiYan; Song, Qinghui; Zhao, XieGuang; Wu, Xiaoxuan
2017-01-01
Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed, and a caving dataset with 10 feature variables and three classes is obtained. The best combination of feature variables is decided automatically using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mappings that arise in this real-world problem, an effective minimum enclosing ball (MEB) algorithm plus a support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct the MEB-SVM classifier for coal-rock recognition, whose data exhibit an inherently complex distribution. The proposed method is examined on UCI data sets and on the caving dataset, and compared with several recent SVM classifiers. We conduct experiments using accuracy and the Friedman test to compare multiple classifiers on the UCI data sets. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The results of experiments on the caving dataset show better performance, pointing to promising feature selection and multi-class recognition in coal-rock applications. PMID:28937987
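The MEB component can be illustrated with the simple Badoiu-Clarkson approximation scheme, which repeatedly moves the centre a shrinking step toward the farthest point. This is a generic MEB sketch, not necessarily the algorithm the paper uses:

```python
import math

def minimum_enclosing_ball(points, iters=200):
    """Approximate minimum enclosing ball via the Badoiu-Clarkson scheme.

    At step t, move the current centre 1/(t+1) of the way toward the
    farthest point; the centre converges to the true MEB centre.
    """
    c = list(points[0])
    for t in range(1, iters + 1):
        far = max(points, key=lambda p: math.dist(c, p))
        step = 1.0 / (t + 1)
        c = [ci + step * (pi - ci) for ci, pi in zip(c, far)]
    r = max(math.dist(c, p) for p in points)
    return c, r

pts = [(0, 0), (2, 0), (1, 1), (1, -1)]
centre, radius = minimum_enclosing_ball(pts)
# exact answer for this symmetric set: centre (1, 0), radius 1
assert math.dist(centre, (1, 0)) < 0.1 and abs(radius - 1) < 0.1
```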
Directory of Open Access Journals (Sweden)
Eric Z. Chen
2015-01-01
Error control codes have been widely used in data communications and storage systems. One central problem in coding theory is to optimize the parameters of a linear code and construct codes with best possible parameters. There are tables of best-known linear codes over finite fields of sizes up to 9. Recently, there has been a growing interest in codes over $\mathbb{F}_{13}$ and other fields of size greater than 9. The main purpose of this work is to present a database of best-known linear codes over the field $\mathbb{F}_{13}$ together with upper bounds on the minimum distances. To find good linear codes to establish lower bounds on minimum distances, an iterative heuristic computer search algorithm is employed to construct quasi-twisted (QT) codes over the field $\mathbb{F}_{13}$ with high minimum distances. A large number of new linear codes have been found, improving previously best-known results. Tables of $[pm, m]$ QT codes over $\mathbb{F}_{13}$ with best-known minimum distances as well as a table of lower and upper bounds on the minimum distances for linear codes of length up to 150 and dimension up to 6 are presented.
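To make "minimum distance of a linear code over $\mathbb{F}_{13}$" concrete, here is a brute-force check for a tiny code. (The paper's QT search is an iterative heuristic; this exhaustive scan is only feasible for very small dimensions and is shown purely for illustration.)

```python
from itertools import product

def min_distance(G, p=13):
    """Brute-force minimum Hamming distance of the code generated by G
    over GF(p): enumerate all nonzero messages and take the smallest
    codeword weight.
    """
    k = len(G)
    best = len(G[0])
    for msg in product(range(p), repeat=k):
        if all(m == 0 for m in msg):
            continue
        cw = [sum(m * g for m, g in zip(msg, col)) % p for col in zip(*G)]
        best = min(best, sum(c != 0 for c in cw))
    return best

# a [4, 2] code over GF(13); every nonzero codeword is checked
G = [[1, 0, 1, 2],
     [0, 1, 3, 1]]
assert min_distance(G) == 3   # meets the Singleton bound n - k + 1
```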
30 CFR 77.807-2 - Booms and masts; minimum distance from high-voltage lines.
2010-07-01
...-voltage lines. 77.807-2 Section 77.807-2 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION... WORK AREAS OF UNDERGROUND COAL MINES Surface High-Voltage Distribution § 77.807-2 Booms and masts; minimum distance from high-voltage lines. The booms and masts of equipment operated on the surface of any...
30 CFR 77.807-3 - Movement of equipment; minimum distance from high-voltage lines.
2010-07-01
... high-voltage lines. 77.807-3 Section 77.807-3 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION... WORK AREAS OF UNDERGROUND COAL MINES Surface High-Voltage Distribution § 77.807-3 Movement of equipment; minimum distance from high-voltage lines. When any part of any equipment operated on the surface of any...
Rate-Compatible LDPC Codes with Linear Minimum Distance
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel
2009-01-01
A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division multiplexing.
Decoding and finding the minimum distance with Gröbner bases : history and new insights
Bulygin, S.; Pellikaan, G.R.; Woungang, I.; Misra, S.; Misra, S.C.
2010-01-01
In this chapter, we discuss decoding techniques and finding the minimum distance of linear codes with the use of Gröbner bases. First, we give a historical overview of decoding cyclic codes via solving systems of polynomial equations over finite fields. In particular, we mention the papers of Cooper, ...
Decoding linear error-correcting codes up to half the minimum distance with Gröbner bases
Bulygin, S.; Pellikaan, G.R.; Sala, M.; Mora, T.; Perret, L.; Sakata, S.; Traverso, C.
2009-01-01
In this short note we show how one can decode linear error-correcting codes up to half the minimum distance via solving a system of polynomial equations over a finite field. We also explicitly present the reduced Gröbner basis for the system considered.
Rate-compatible protograph LDPC code families with linear minimum distance
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.
International Nuclear Information System (INIS)
Giansanti, Daniele; Macellari, Velio; Maccioni, Giovanni
2008-01-01
Fall prevention lacks easy, quantitative and wearable methods for the classification of fall-risk (FR). Efforts must thus be devoted to choosing an ad hoc classifier, both to reduce the size of the sample used to train the classifier and to improve performance. A new methodology that uses a neural network (NN) and a wearable device is proposed here for this purpose. The NN uses kinematic parameters assessed by a wearable device with accelerometers and rate gyroscopes during a posturography protocol. The training of the NN was based on the Mahalanobis distance and was carried out on two groups of 30 elderly subjects with varying fall-risk Tinetti scores. The validation was done on two groups of 100 subjects with different fall-risk Tinetti scores and showed that, in terms of both specificity and sensitivity, the NN performed better than other classifiers (naive Bayes, Bayes net, multilayer perceptron, support vector machines, statistical classifiers). In particular, (i) the proposed NN methodology improved specificity and sensitivity by a mean of 3% when compared to the statistical classifier based on the Mahalanobis distance (SCMD) described in Giansanti (2006 Physiol. Meas. 27 1081-90); (ii) the assessed specificity was 97%, the assessed sensitivity was 98%, and the area under the receiver operating characteristic curve was 0.965.
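The Mahalanobis distance that the NN training is based on can be sketched for 2-D features as follows; the two class statistics below are invented for illustration, not the study's fall-risk data:

```python
def mahalanobis2(x, mean, cov):
    """Squared Mahalanobis distance for 2-D data (2x2 covariance)."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    v = [x[0] - mean[0], x[1] - mean[1]]
    w = [inv[0][0] * v[0] + inv[0][1] * v[1],
         inv[1][0] * v[0] + inv[1][1] * v[1]]
    return v[0] * w[0] + v[1] * w[1]

# Toy stand-in for two risk classes: assign a subject's feature vector to
# the class with the smaller Mahalanobis distance (class statistics made up).
low_risk = {"mean": (0.0, 0.0), "cov": [[1.0, 0.0], [0.0, 1.0]]}
high_risk = {"mean": (3.0, 3.0), "cov": [[1.0, 0.5], [0.5, 2.0]]}

x = (2.5, 2.0)
d_low = mahalanobis2(x, **low_risk)
d_high = mahalanobis2(x, **high_risk)
assert d_high < d_low  # closer to the high-risk class statistics
```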
Qiu, Zhijun; Zhou, Bo; Yuan, Jiangfeng
2017-11-21
Protein-protein interaction site (PPIS) prediction must deal with the diversity of interaction sites, which limits prediction accuracy. Use of proteins with unknown or unidentified interactions can also lead to missing interfaces; such data errors are often brought into the training dataset. In response to these two problems, we used the minimum covariance determinant (MCD) method, with its ability to remove outliers, to refine the training data and build a predictor with better performance. To predict test data in practice, a method based on the Mahalanobis distance was devised to select proper test data as input for the predictor. With leave-one-out validation and an independent test, after the Mahalanobis distance screening our method achieved higher performance according to the Matthews correlation coefficient (MCC), although only a part of the test data could be predicted. These results indicate that data refinement is an efficient approach to improving protein-protein interaction site prediction. With further optimization, the method may yield predictors with better performance and a wider range of application. Copyright © 2017 Elsevier Ltd. All rights reserved.
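A crude sketch of the data-refinement idea: keep the training samples closest to a robust centre and discard likely outliers. The real MCD estimator instead searches for the subset minimizing the covariance determinant; the median-trimming below is only a simplified stand-in:

```python
def refine_training_set(samples, keep=0.8):
    """Keep the fraction of samples closest to the coordinate-wise median,
    discarding likely outliers (simplified stand-in for MCD refinement).
    """
    dim = len(samples[0])
    med = []
    for j in range(dim):
        col = sorted(s[j] for s in samples)
        med.append(col[len(col) // 2])
    def dist2(s):
        return sum((s[j] - med[j]) ** 2 for j in range(dim))
    return sorted(samples, key=dist2)[:max(1, int(keep * len(samples)))]

data = [(0.1, 0.0), (0.0, 0.2), (-0.1, 0.1), (0.2, -0.1), (9.0, 9.0)]
clean = refine_training_set(data, keep=0.8)
assert (9.0, 9.0) not in clean and len(clean) == 4   # outlier removed
```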
Barthlome, D. E.
1975-01-01
Test results of a unique automatic brake control system are outlined and a comparison is made of its mode of operation to that of an existing skid control system. The purpose of the test system is to provide automatic control of braking action such that hydraulic brake pressure is maintained at a near constant, optimum value during minimum distance stops.
Classifying features in CT imagery: accuracy for some single- and multiple-species classifiers
Daniel L. Schmoldt; Jing He; A. Lynn Abbott
1998-01-01
Our current approach to automatically label features in CT images of hardwood logs classifies each pixel of an image individually. These feature classifiers use a back-propagation artificial neural network (ANN) and feature vectors that include a small, local neighborhood of pixels and the distance of the target pixel to the center of the log. Initially, this type of...
Bal, A.; Alam, M. S.; Aslan, M. S.
2006-05-01
Often sensor ego-motion or fast target movement causes the target to temporarily leave the field of view, leading to the reappearing-target detection problem in target tracking applications. Since the target leaves the current frame and reenters in a later frame, the reentry location and the variations in rotation, scale, and other 3D orientations of the target are not known, which complicates detection. A detection algorithm has been developed using the Fukunaga-Koontz transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then introduced into the second algorithm, DCCF, called the clutter rejection module; once the target coordinates are determined, the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward-looking infrared (FLIR) video sequences.
DEFF Research Database (Denmark)
Hermansson, Henrik Alf Jonas; Cross, James
2015-01-01
Political scientists often find themselves tracking amendments to political texts. As different actors weigh in, texts change as they are drafted and redrafted, reflecting political preferences and power. This study provides a novel solution to the problem of detecting amendments to political texts, measuring the formal and substantive amount of amendments made between versions of texts. To illustrate the usefulness and efficiency of the approach we replicate two existing studies from the field of legislative studies. Our results demonstrate that minimum edit distance methods can produce superior measures of text amendments to hand-coded approaches.
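The minimum edit distance underlying such amendment measures is the classic Levenshtein dynamic program, sketched here over word tokens (a textbook version, not the authors' full pipeline; the example sentences are invented):

```python
def edit_distance(a, b):
    """Minimum edit distance (Levenshtein) between two token sequences,
    computed row by row with O(len(b)) memory.
    """
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # delete x
                           cur[j - 1] + 1,            # insert y
                           prev[j - 1] + (x != y)))   # substitute
        prev = cur
    return prev[-1]

draft = "the committee shall review the proposal".split()
amended = "the committee may review the revised proposal".split()
assert edit_distance(draft, amended) == 2  # one substitution, one insertion
```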
Real-time stop sign detection and distance estimation using a single camera
Wang, Wenpeng; Su, Yuxuan; Cheng, Ming
2018-04-01
In the modern world, the rapid development of driver assistance systems has made driving a lot easier than before. In order to increase safety onboard, a method is proposed to detect STOP signs and estimate their distance using a single camera. For STOP sign detection, an LBP-cascade classifier is applied to identify the sign in the image, and distance estimation is based on the principle of pinhole imaging. Road tests were conducted using a detection system built with a CMOS camera and software developed in Python with the OpenCV library. Results show that the proposed system reaches a maximum detection accuracy of 97.6% at 10 m and a minimum of 95.0% at 20 m, with at most 5% error in distance estimation. The results indicate that the system is effective and has the potential to be used in both autonomous driving and advanced driver assistance systems.
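The pinhole-imaging distance estimate reduces to one line: distance = focal length × real size / image size. The focal length (in pixels) and sign height below are assumed values for illustration, not the paper's calibration:

```python
def pinhole_distance(real_height_m, pixel_height, focal_px):
    """Distance to an object of known size from a single camera, via the
    pinhole model. `focal_px` is the focal length expressed in pixels.
    """
    return focal_px * real_height_m / pixel_height

sign_height = 0.75   # m, assumed STOP sign face height
focal = 1000.0       # px, assumed calibrated focal length
assert pinhole_distance(sign_height, 75.0, focal) == 10.0   # m
assert pinhole_distance(sign_height, 37.5, focal) == 20.0   # m
```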
Toward the minimum inner edge distance of the habitable zone
Energy Technology Data Exchange (ETDEWEB)
Zsom, Andras; Seager, Sara; De Wit, Julien; Stamenković, Vlada, E-mail: zsom@mit.edu [Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)
2013-12-01
We explore the minimum distance from a host star where an exoplanet could potentially be habitable in order not to discard close-in rocky exoplanets for follow-up observations. We find that the inner edge of the Habitable Zone for hot desert worlds can be as close as 0.38 AU around a solar-like star, if the greenhouse effect is reduced (∼1% relative humidity) and the surface albedo is increased. We consider a wide range of atmospheric and planetary parameters such as the mixing ratios of greenhouse gases (water vapor and CO2), surface albedo, pressure, and gravity. Intermediate surface pressure (∼1-10 bars) is necessary to limit water loss and to simultaneously sustain an active water cycle. We additionally find that the water loss timescale is influenced by the atmospheric CO2 level, because it indirectly influences the stratospheric water mixing ratio. If the CO2 mixing ratio of dry planets at the inner edge is smaller than 10^-4, the water loss timescale is ∼1 billion years, which is considered here too short for life to evolve. We also show that the expected transmission spectra of hot desert worlds are similar to an Earth-like planet. Therefore, an instrument designed to identify biosignature gases in an Earth-like atmosphere can also identify similarly abundant gases in the atmospheres of dry planets. Our inner edge limit is closer to the host star than previous estimates. As a consequence, the occurrence rate of potentially habitable planets is larger than previously thought.
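The 0.38 AU solar-case inner edge can be scaled to other stars by holding the stellar flux at the planet constant, a standard flux-scaling argument (the paper itself derives the solar value from a full atmosphere model, so this is only a back-of-the-envelope companion):

```python
import math

def inner_edge_au(luminosity_solar, d_sun_au=0.38):
    """Scale the 0.38 AU solar-case inner edge to a star of luminosity
    L (in solar units) at constant flux: d = d_sun * sqrt(L / L_sun).
    """
    return d_sun_au * math.sqrt(luminosity_solar)

assert inner_edge_au(1.0) == 0.38
# a star with a quarter of the Sun's luminosity: the edge moves in by half
assert abs(inner_edge_au(0.25) - 0.19) < 1e-12
```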
Toward the minimum inner edge distance of the habitable zone
International Nuclear Information System (INIS)
Zsom, Andras; Seager, Sara; De Wit, Julien; Stamenković, Vlada
2013-01-01
We explore the minimum distance from a host star where an exoplanet could potentially be habitable in order not to discard close-in rocky exoplanets for follow-up observations. We find that the inner edge of the Habitable Zone for hot desert worlds can be as close as 0.38 AU around a solar-like star, if the greenhouse effect is reduced (∼1% relative humidity) and the surface albedo is increased. We consider a wide range of atmospheric and planetary parameters such as the mixing ratios of greenhouse gases (water vapor and CO₂), surface albedo, pressure, and gravity. Intermediate surface pressure (∼1-10 bars) is necessary to limit water loss and to simultaneously sustain an active water cycle. We additionally find that the water loss timescale is influenced by the atmospheric CO₂ level, because it indirectly influences the stratospheric water mixing ratio. If the CO₂ mixing ratio of dry planets at the inner edge is smaller than 10⁻⁴, the water loss timescale is ∼1 billion years, which is considered here too short for life to evolve. We also show that the expected transmission spectra of hot desert worlds are similar to an Earth-like planet. Therefore, an instrument designed to identify biosignature gases in an Earth-like atmosphere can also identify similarly abundant gases in the atmospheres of dry planets. Our inner edge limit is closer to the host star than previous estimates. As a consequence, the occurrence rate of potentially habitable planets is larger than previously thought.
Giovannini, Federico; Savino, Giovanni; Pierini, Marco; Baldanzini, Niccolò
2013-10-01
In recent years the autonomous emergency brake (AEB) was introduced in the automotive field to mitigate injury severity in the case of unavoidable collisions. A crucial element for the activation of the AEB is establishing when the obstacle is no longer avoidable by lateral evasive maneuvers (swerving). In the present paper a model to compute the minimum swerving distance needed by a powered two-wheeler (PTW) to avoid collision with a fixed obstacle, named the last-second swerving model (Lsw), is proposed. The effectiveness of the model was investigated in an experimental campaign involving 12 volunteers riding a scooter equipped with a prototype autonomous emergency braking system, the motorcycle autonomous emergency braking (MAEB) system. The tests showed the performance of the model in evasive trajectory computation for different riding styles and fixed obstacles. Copyright © 2013 Elsevier Ltd. All rights reserved.
Elbakary, M. I.; Alam, M. S.; Aslan, M. S.
2008-03-01
In a FLIR image sequence, a target may disappear permanently or may reappear after some frames, and crucial information such as the direction, position and size of the target is lost. If the target reappears in a later frame, it may not be tracked again because the 3D orientation, size and location of the target might have changed. To obtain information about the target before it disappears and to detect the target after it reappears, the distance classifier correlation filter (DCCF) is traditionally trained manually by selecting a number of chips at random. This paper introduces a novel idea to eliminate the manual intervention in the training phase of DCCF. Instead of selecting the training chips manually and choosing their number at random, we adopt the K-means algorithm to cluster the training frames and, based on the number of clusters, select the training chips such that there is one training chip for each cluster. To detect and track the target after it reappears in the field of view, TBF and DCCF are employed. The conducted experiments using real FLIR sequences show results similar to those of the traditional algorithm, but eliminating the manual intervention is the advantage of the proposed algorithm.
MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER
Barton, R. S.
1994-01-01
The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the
DEFF Research Database (Denmark)
Clausen, Kevin Kuhlmann; Clausen, Preben; Hounisen, Jens Peder
2013-01-01
Global Positioning System (GPS) satellite telemetry was used to determine the foraging range, habitat use and minimum flight distances for individual East Atlantic Light-bellied Brent Geese Branta bernicla hrota at two spring staging areas in Denmark. Foraging ranges (mean ± s.d. = 53.0 ± 23.4 km...
SpectraClassifier 1.0: a user friendly, automated MRS-based classifier-development system
Directory of Open Access Journals (Sweden)
Julià-Sapé Margarida
2010-02-01
Background SpectraClassifier (SC) is a Java solution for designing and implementing Magnetic Resonance Spectroscopy (MRS)-based classifiers. The main goal of SC is to allow users with minimum background knowledge of multivariate statistics to perform a fully automated pattern recognition analysis. SC incorporates feature selection (greedy stepwise approach, either forward or backward) and feature extraction (PCA). Fisher Linear Discriminant Analysis is the method of choice for classification. Classifier evaluation is performed through various methods: display of the confusion matrix of the training and testing datasets; K-fold cross-validation, leave-one-out and bootstrapping, as well as Receiver Operating Characteristic (ROC) curves. Results SC is composed of the following modules: Classifier design, Data exploration, Data visualisation, Classifier evaluation, Reports, and Classifier history. It is able to read low resolution in-vivo MRS (single-voxel and multi-voxel) and high resolution tissue MRS (HRMAS), processed with existing tools (jMRUI, INTERPRET, 3DiCSI or TopSpin). In addition, to facilitate exchanging data between applications, a standard format capable of storing all the information needed for a dataset was developed. Each functionality of SC has been specifically validated with real data for the purpose of bug-testing and methods validation. Data from the INTERPRET project was used. Conclusions SC is a user-friendly software application designed to fulfil the needs of potential users in the MRS community. It accepts all kinds of pre-processed MRS data types and classifies them semi-automatically, allowing spectroscopists to concentrate on interpretation of results with the use of its visualisation tools.
Effect of Weight Transfer on a Vehicle's Stopping Distance.
Whitmire, Daniel P.; Alleman, Timothy J.
1979-01-01
An analysis of the minimum stopping distance problem is presented taking into account the effect of weight transfer on nonskidding vehicles and front- or rear-wheels-skidding vehicles. Expressions for the minimum stopping distances are given in terms of vehicle geometry and the coefficients of friction. (Author/BB)
Steiner Distance in Graphs--A Survey
Mao, Yaping
2017-01-01
For a connected graph $G$ of order at least $2$ and $S\\subseteq V(G)$, the \\emph{Steiner distance} $d_G(S)$ among the vertices of $S$ is the minimum size among all connected subgraphs whose vertex sets contain $S$. In this paper, we summarize the known results on the Steiner distance parameters, including Steiner distance, Steiner diameter, Steiner center, Steiner median, Steiner interval, Steiner distance hereditary graph, Steiner distance stable graph, average Steiner distance, and Steiner ...
Data characteristics that determine classifier performance
CSIR Research Space (South Africa)
Van der Walt, Christiaan M
2006-11-01
available at [11]. The kNN uses a LinearNN nearest-neighbour search algorithm with a Euclidean distance metric [8]. The optimal k value is determined by performing 10-fold cross-validation. An optimal k value between 1 and 10 is used for Experiments 1... classifiers. 10-fold cross-validation is used to evaluate and compare the performance of the classifiers on the different data sets. 3.1. Artificial data generation Multivariate Gaussian distributions are used to generate artificial data sets. We use d...
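The k-selection procedure the excerpt describes (10-fold cross-validation over k = 1..10 with a Euclidean metric) can be sketched in plain Python. This is an illustrative re-implementation under stated assumptions, not the LinearNN code the excerpt references.

```python
import random
from collections import Counter

def knn_predict(train, query, k):
    """Majority vote among the k nearest training points (Euclidean metric).
    train: list of (feature_tuple, label) pairs."""
    neighbours = sorted(
        train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

def choose_k(data, folds=10, k_values=range(1, 11)):
    """Pick k between 1 and 10 by 10-fold cross-validation."""
    data = list(data)
    random.Random(0).shuffle(data)          # fixed seed for reproducibility
    chunks = [data[i::folds] for i in range(folds)]

    def accuracy(k):
        correct = 0
        for i, test in enumerate(chunks):
            train = [p for j, c in enumerate(chunks) if j != i for p in c]
            correct += sum(knn_predict(train, x, k) == y for x, y in test)
        return correct / len(data)

    return max(k_values, key=accuracy)
```

A library implementation (e.g. scikit-learn's `GridSearchCV` with `KNeighborsClassifier`) would normally replace this in practice.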
Coupling between minimum scattering antennas
DEFF Research Database (Denmark)
Andersen, J.; Lessow, H.; Schjær-Jacobsen, Hans
1974-01-01
Coupling between minimum scattering antennas (MSA's) is investigated by the coupling theory developed by Wasylkiwskyj and Kahn. Only rotationally symmetric power patterns are considered, and graphs of relative mutual impedance are presented as a function of distance and pattern parameters. Crossed...
Minimum-link paths among obstacles in the plane
Mitchell, J.S.B.; Rote, G.; Woeginger, G.J.
1992-01-01
Given a set of nonintersecting polygonal obstacles in the plane, the link distance between two points s and t is the minimum number of edges required to form a polygonal path connecting s to t that avoids all obstacles. We present an algorithm that computes the link distance (and a corresponding
A Linguistic Image of Nature: The Burmese Numerative Classifier System
Becker, Alton L.
1975-01-01
The Burmese classifier system is coherent because it is based upon a single elementary semantic dimension: deixis. On that dimension, four distances are distinguished, distances which metaphorically substitute for other conceptual relations between people and other living beings, people and things, and people and concepts. (Author/RM)
Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.
2017-12-01
We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r in an effective area using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metric. This is the distance at which the maximum stress release occurs within the fault plane and generates a peak ground motion. Later, we can apply the appropriate ground motion prediction equations (GMPE) for PSHA. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model with a constant distribution of the centroid at the geometrical mean is discussed; in this model hazard is reduced at the edges because the effective size is reduced. Nowadays there is a trend of using extended source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models separating geometrical and propagation effects.
Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos
2014-05-01
One of the key points in the development of the MOTEDAS dataset (see Poster 1 MOTEDAS) in the framework of the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP), using the Correlation Decay Distance function (CDD), with the aim of evaluating, at sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a correlation matrix at monthly scale between 1981-2010 among monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from AEMET archives (National Spanish Meteorological Agency). Monthly anomalies (differences between data and the 1981-2010 mean) were used to prevent the dominant effect of the annual cycle in the annual CDD estimation. For each station and time scale, the common variance r² (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r² and distance was modelled according to the following equation (1): log(r²ij) = b · dij (1), where log(r²ij) is the common variance between the target (i) and neighbouring series (j), dij is the distance between them, and b is the slope of the ordinary least-squares linear regression model, fitted taking into account only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required. Finally, monthly, seasonal and annual CDD values were interpolated using the Ordinary Kriging with a
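The zero-intercept regression in equation (1) can be sketched as follows. This is an illustrative sketch only; the function names, the decay constant and the synthetic station pairs are assumptions, not MOTEDAS data.

```python
import math

def cdd_slope(pairs):
    """Least-squares slope b of log(r2) on distance, forced through the
    origin, i.e. fitting log(r2_ij) = b * d_ij.
    pairs: iterable of (distance, common variance r2) for station pairs."""
    num = sum(d * math.log(r2) for d, r2 in pairs)
    den = sum(d * d for d, _ in pairs)
    return num / den

def cdd_distance(b, threshold=0.5):
    """Distance at which the modelled common variance decays to `threshold`
    (an assumed example criterion for a correlation decay distance)."""
    return math.log(threshold) / b
```

With synthetic pairs generated from an exact decay `r2 = exp(-0.01 d)`, the fit recovers b = -0.01 and a half-variance distance of about 69.3 km.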
The Minimum Binding Energy and Size of Doubly Muonic D3 Molecule
Eskandari, M. R.; Faghihi, F.; Mahdavi, M.
The minimum energy and size of the doubly muonic D3 molecule, in which two of the electrons are replaced by the much heavier muons, are calculated by the well-known variational method. The calculations show that the system possesses two minimum positions, one at a typical muonic distance and the second at the atomic distance. It is shown that at the muonic distance the effective charge zeff is 2.9. We assumed a symmetric planar vibrational model between the two minima, and an oscillation potential energy is approximated in this region.
DEFF Research Database (Denmark)
Lisonek, Petr
1996-01-01
A two-distance set in E^d is a point set X in the d-dimensional Euclidean space such that the distances between distinct points in X assume only two different non-zero values. Based on results from classical distance geometry, we develop an algorithm to classify, for a given dimension, all maximal (largest possible) two-distance sets in E^d. Using this algorithm we have completed the full classification for all dimensions less than or equal to 7, and we have found one set in E^8 whose maximality follows from Blokhuis' upper bound on sizes of s-distance sets. While in the dimensions less than or equal to 6...
The Minimum Wage and the Employment of Teenagers. Recent Research.
Fallick, Bruce; Currie, Janet
A study used individual-level data from the National Longitudinal Study of Youth to examine the effects of changes in the federal minimum wage on teenage employment. Individuals in the sample were classified as either likely or unlikely to be affected by these increases in the federal minimum wage on the basis of their wage rates and industry of…
18 CFR 367.18 - Criteria for classifying leases.
2010-04-01
... the lessee) must not give rise to a new classification of a lease for accounting purposes. ... classifying the lease. (4) The present value at the beginning of the lease term of the minimum lease payments... taxes to be paid by the lessor, including any related profit, equals or exceeds 90 percent of the excess...
Edit Distance to Monotonicity in Sliding Windows
DEFF Research Database (Denmark)
Chan, Ho-Leung; Lam, Tak-Wah; Lee, Lap Kei
2011-01-01
Given a stream of items each associated with a numerical value, its edit distance to monotonicity is the minimum number of items to remove so that the remaining items are non-decreasing with respect to the numerical value. The space complexity of estimating the edit distance to monotonicity of a ...
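The quantity defined above equals n minus the length of the longest non-decreasing subsequence, which an offline patience-sorting sketch computes in O(n log n). This illustration does not address the sliding-window streaming setting the paper studies.

```python
from bisect import bisect_right

def edit_distance_to_monotonicity(values):
    """Minimum number of removals so the remaining items are non-decreasing:
    n minus the length of the longest non-decreasing subsequence."""
    tails = []  # tails[i]: smallest possible tail of a subsequence of length i+1
    for v in values:
        i = bisect_right(tails, v)  # bisect_right allows equal values (non-decreasing)
        if i == len(tails):
            tails.append(v)
        else:
            tails[i] = v
    return len(values) - len(tails)

edit_distance_to_monotonicity([1, 3, 2, 4, 1, 5])  # -> 2 (remove the 3 and the second 1)
```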
A Bayesian Classifier for X-Ray Pulsars Recognition
Directory of Open Access Journals (Sweden)
Hao Liang
2016-01-01
Recognition of X-ray pulsars is important for the problem of spacecraft attitude determination by X-ray Pulsar Navigation (XPNAV). By using the nonhomogeneous Poisson model of the received photons and the minimum recognition error criterion, a classifier based on Bayes' theorem is proposed. For X-ray pulsar recognition with unknown Doppler frequency and initial phase, the features of every X-ray pulsar are extracted and the unknown parameters are estimated using the Maximum Likelihood (ML) method. In addition, a method to recognize unknown X-ray pulsars or X-ray disturbances is proposed. Simulation results confirm the validity of the proposed Bayesian classifier.
Vision-Based Detection and Distance Estimation of Micro Unmanned Aerial Vehicles
Directory of Open Access Journals (Sweden)
Fatih Gökçe
2015-09-01
Detection and distance estimation of micro unmanned aerial vehicles (mUAVs) is crucial for (i) the detection of intruder mUAVs in protected environments; (ii) sense-and-avoid purposes on mUAVs or on other aerial vehicles; and (iii) multi-mUAV control scenarios, such as environmental monitoring, surveillance and exploration. In this article, we evaluate vision algorithms as alternatives for detection and distance estimation of mUAVs, since other sensing modalities entail certain limitations on the environment or on the distance. For this purpose, we test Haar-like features, histogram of gradients (HOG) and local binary patterns (LBP) using cascades of boosted classifiers. Cascaded boosted classifiers allow fast processing by performing detection tests at multiple stages, where only candidates passing earlier simple stages are processed at the subsequent more complex stages. We also integrate a distance estimation method into our system, utilizing geometric cues with support vector regressors. We evaluated each method on indoor and outdoor videos that were collected in a systematic way, and also on videos having motion blur. Our experiments show that, using boosted cascaded classifiers with LBP, near real-time detection and distance estimation of mUAVs are possible in about 60 ms indoors (1032 × 778 resolution) and 150 ms outdoors (1280 × 720 resolution) per frame, with a detection rate of 0.96 F-score. However, the cascaded classifiers using Haar-like features lead to better distance estimation, since they can position the bounding boxes on mUAVs more accurately. On the other hand, our time analysis yields that the cascaded classifiers using HOG train and run faster than the other algorithms.
Directory of Open Access Journals (Sweden)
Yousef Malik
2016-12-01
The performance of many learning and data mining algorithms depends critically on suitable metrics to assess efficiency over the input space. Learning a suitable metric from examples may, therefore, be the key to successful application of these algorithms. We have demonstrated that k-nearest neighbor (kNN) classification can be significantly improved by learning a distance metric from labeled examples. A clustering ensemble is used to define the distance between points with respect to how they co-cluster. This distance is then used within the framework of the kNN algorithm to define a classifier named the ensemble clustering kNN classifier (EC-kNN). In many instances in our experiments we achieved the highest accuracy while SVM failed to perform as well. In this study, we compare the performance of a two-class classifier using EC-kNN with different one-class and two-class classifiers. The comparison was applied to seven different plant microRNA species considering eight feature selection methods. The averaged results show that EC-kNN outperforms all other methods employed here as well as previously published results for the same data. In conclusion, this study shows that the chosen classifier achieves high performance when the distance metric is carefully chosen.
Ultrametric Distance in Syntax
Directory of Open Access Journals (Sweden)
Roberts Mark D.
2015-04-01
Phrase structure trees have a hierarchical structure. In many subjects, most notably in taxonomy, such tree structures have been studied using ultrametrics. Here syntactical hierarchical phrase trees are subjected to a similar analysis, which is much simpler as the branching structure is more readily discernible and switched. The ambiguity of which branching height to choose is resolved by postulating that branching occurs at the lowest height available. An ultrametric produces a measure of the complexity of sentences: presumably the complexity of sentences increases as a language is acquired, so that this can be tested. All ultrametric triangles are equilateral or isosceles. Here it is shown that X̅ structure implies that there are no equilateral triangles. Restricting attention to simple syntax, a minimum ultrametric distance between lexical categories is calculated. A matrix constructed from this ultrametric distance is shown to be different from the matrix obtained from features. It is shown that the definition of C-COMMAND can be replaced by an equivalent ultrametric definition. The new definition invokes a minimum distance between nodes, and this is more aesthetically satisfying than previous varieties of definitions. From the new definition of C-COMMAND follows a new definition of the central notion in syntax, namely GOVERNMENT.
Timed Fast Exact Euclidean Distance (tFEED) maps
Kehtarnavaz, Nasser; Schouten, Theo E.; Laplante, Philip A.; Kuppens, Harco; van den Broek, Egon
2005-01-01
In image and video analysis, distance maps are frequently used. They provide the (Euclidean) distance (ED) of background pixels to the nearest object pixel. In a naive implementation, each object pixel feeds its (exact) ED to each background pixel; then the minimum of these values denotes the ED to
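The naive implementation described above can be sketched directly. This is an illustrative sketch only; FEED itself is a far more efficient propagation scheme, and the function name here is an assumption.

```python
import math

def distance_map(grid):
    """Naive Euclidean distance map: each background cell (0) receives the
    distance to its nearest object cell (1); object cells receive 0.
    Cost is O(#objects x #background), which FEED-style methods avoid."""
    objects = [(r, c) for r, row in enumerate(grid)
               for c, v in enumerate(row) if v]
    return [[0.0 if v else min(math.hypot(r - orow, c - ocol)
                               for orow, ocol in objects)
             for c, v in enumerate(row)]
            for r, row in enumerate(grid)]
```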
A Minimum Spanning Tree Representation of Anime Similarities
Wibowo, Canggih Puspo
2016-01-01
In this work, a new way to represent Japanese animation (anime) is presented. We applied a minimum spanning tree to show the relations between anime. The distance between anime is calculated through three similarity measurements, namely crew, score histogram, and topic similarities. Finally, the centralities are also computed to reveal the most significant anime. The result shows that the minimum spanning tree can be used to determine similarity between anime. Furthermore, by using centralities c...
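A minimum spanning tree over a precomputed distance matrix, as used above, can be sketched with Prim's algorithm. This is an illustrative sketch; the paper's three similarity measures are not reproduced here.

```python
def minimum_spanning_tree(dist):
    """Prim's algorithm over a symmetric distance matrix; returns the list
    of MST edges as (inside_node, newly_added_node) pairs."""
    n = len(dist)
    in_tree = {0}          # grow the tree from node 0
    edges = []
    while len(in_tree) < n:
        # cheapest edge crossing the cut between tree and non-tree nodes
        u, v = min(((i, j) for i in in_tree
                    for j in range(n) if j not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        edges.append((u, v))
        in_tree.add(v)
    return edges
```

This O(n³) formulation is fine for small graphs; a heap-based Prim or Kruskal with union-find would scale better.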
Minimum triplet covers of binary phylogenetic X-trees.
Huber, K T; Moulton, V; Steel, M
2017-12-01
Trees with labelled leaves and with all other vertices of degree three play an important role in systematic biology and other areas of classification. A classical combinatorial result ensures that such trees can be uniquely reconstructed from the distances between the leaves (when the edges are given any strictly positive lengths). Moreover, a linear number of these pairwise distance values suffices to determine both the tree and its edge lengths. A natural set of pairs of leaves is provided by any 'triplet cover' of the tree (based on the fact that each non-leaf vertex is the median vertex of three leaves). In this paper we describe a number of new results concerning triplet covers of minimum size. In particular, we characterize such covers in terms of an associated graph being a 2-tree. Also, we show that minimum triplet covers are 'shellable' and thereby provide a set of pairs for which the inter-leaf distance values will uniquely determine the underlying tree and its associated branch lengths.
2011-10-13
... implementation of policies and minimum standards regarding information security, personnel security, and systems security; address both internal and external security threats and vulnerabilities; and provide policies and... policies and minimum standards will address all agencies that operate or access classified computer...
Analytic processing of distance.
Dopkins, Stephen; Galyer, Darin
2018-01-01
How does a human observer extract from the distance between two frontal points the component corresponding to an axis of a rectangular reference frame? To find out we had participants classify pairs of small circles, varying on the horizontal and vertical axes of a computer screen, in terms of the horizontal distance between them. A response signal controlled response time. The error rate depended on the irrelevant vertical as well as the relevant horizontal distance between the test circles with the relevant distance effect being larger than the irrelevant distance effect. The results implied that the horizontal distance between the test circles was imperfectly extracted from the overall distance between them. The results supported an account, derived from the Exemplar Based Random Walk model (Nosofsky & Palmieri, 1997), under which distance classification is based on the overall distance between the test circles, with relevant distance being extracted from overall distance to the extent that the relevant and irrelevant axes are differentially weighted so as to reduce the contribution of irrelevant distance to overall distance. The results did not support an account, derived from the General Recognition Theory (Ashby & Maddox, 1994), under which distance classification is based on the relevant distance between the test circles, with the irrelevant distance effect arising because a test circle's perceived location on the relevant axis depends on its location on the irrelevant axis, and with relevant distance being extracted from overall distance to the extent that this dependency is absent. Copyright © 2017 Elsevier B.V. All rights reserved.
An ensemble of dissimilarity based classifiers for Mackerel gender determination
Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.
2014-03-01
Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and defrozen) to find differences between the sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k-Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.
An ensemble of dissimilarity based classifiers for Mackerel gender determination
International Nuclear Information System (INIS)
Blanco, A; Rodriguez, R; Martinez-Maranon, I
2014-01-01
Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and defrozen) to find differences between the sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k-Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.
UPGMA and the normalized equidistant minimum evolution problem
Moulton, Vincent; Spillner, Andreas; Wu, Taoyang
2017-01-01
UPGMA (Unweighted Pair Group Method with Arithmetic Mean) is a widely used clustering method. Here we show that UPGMA is a greedy heuristic for the normalized equidistant minimum evolution (NEME) problem, that is, finding a rooted tree that minimizes the minimum evolution score relative to the dissimilarity matrix among all rooted trees with the same leaf-set in which all leaves have the same distance to the root. We prove that the NEME problem is NP-hard. In addition, we present some heurist...
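The greedy merge step UPGMA performs can be sketched as follows: repeatedly merge the two closest clusters and update distances as size-weighted averages. This is an illustrative size-weighted average-linkage implementation with Newick-like string output, not the authors' code.

```python
def upgma(dist, names):
    """Greedy UPGMA over a symmetric distance matrix. Returns a nested
    '(left,right)' string describing the rooted merge tree."""
    clusters = {i: (names[i], 1) for i in range(len(names))}  # id -> (label, size)
    d = {(i, j): dist[i][j] for i in range(len(names))
         for j in range(len(names)) if i < j}
    nxt = len(names)
    while len(clusters) > 1:
        a, b = min(d, key=d.get)                    # closest pair of clusters
        (na, sa), (nb, sb) = clusters.pop(a), clusters.pop(b)
        for c in list(clusters):                    # distances to the merged cluster
            d[(c, nxt)] = (sa * d.pop((min(a, c), max(a, c)))
                           + sb * d.pop((min(b, c), max(b, c)))) / (sa + sb)
        del d[(a, b)]
        clusters[nxt] = ("(%s,%s)" % (na, nb), sa + sb)
        nxt += 1
    return next(iter(clusters.values()))[0]
```

The size-weighted update is what makes this the *unweighted* pair group method in the classical terminology (every leaf contributes equally to the average).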
Wang, Yan-ying; Wang, Jin-liang; Wang, Ping; Hu, Wen-yin; Su, Shao-hua
2015-12-01
High-accuracy remote sensing image classification is a long-term goal of remote sensing applications. In order to evaluate the accuracy of single classification algorithms, using a Landsat TM image as the data source and Northwest Yunnan as the study area, seven land cover classification algorithms, including Maximum Likelihood Classification, were tested. The results show that: (1) the overall classification accuracy of Maximum Likelihood Classification (MLC), Artificial Neural Network Classification (ANN) and Minimum Distance Classification (MinDC) is higher, at 82.81%, 82.26% and 66.41%, respectively; the overall classification accuracy of Parallel Hexahedron Classification (Para), Spectral Information Divergence Classification (SID) and Spectral Angle Classification (SAM) is low, at 37.29%, 38.37% and 53.73%, respectively. (2) In terms of per-category accuracy: although the overall accuracy of Para is the lowest, it is much higher on grasslands, wetlands, forests and airport land, at 89.59%, 94.14% and 89.04%, respectively; SAM and SID are good at forest classification, with higher accuracies of 89.8% and 87.98%, respectively. Although the overall classification accuracy of ANN is very high, its classification accuracy on roads, rural residential land and airport land is very low, at 10.59%, 11% and 11.59%, respectively. The other classification methods have their own advantages and disadvantages. These results show that, under the same conditions, when the same image is classified with different methods, one classifier will achieve higher accuracy on some features and another classifier on other objects; therefore, we may select multiple sub-classifier integration to improve the classification accuracy.
Minimum Error Entropy Classification
Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A
2013-01-01
This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners will also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments, and comparisons with similar machines using classic approaches complement the descriptions.
Lu, Bin; Yang, Yi; Sharma, Santosh K; Zambare, Prachi; Madane, Mayura A
2014-12-23
A method identifies electric load types of a plurality of different electric loads. The method includes providing a load feature database of a plurality of different electric load types, each of the different electric load types including a first load feature vector having at least four different load features; sensing a voltage signal and a current signal for each of the different electric loads; determining a second load feature vector comprising at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the different electric loads; and identifying by a processor one of the different electric load types by determining a minimum distance of the second load feature vector to the first load feature vector of the different electric load types of the load feature database.
Safety distance between underground natural gas and water pipeline facilities
International Nuclear Information System (INIS)
Mohsin, R.; Majid, Z.A.; Yusof, M.Z.
2014-01-01
A leaking water pipe bursting a high-pressure water jet into the soil will create slurry erosion, which will eventually erode the adjacent natural gas pipe and cause its failure. The standard 300 mm safety distance used to place natural gas pipes away from water pipeline facilities needs to be reviewed to account for accidental damage and provide a safety cushion for the natural gas pipe. This paper presents a study on underground natural gas pipeline safety distance via experimental and numerical approaches. The pressure–distance characteristic curve obtained from this experimental study showed that the pressure was inversely proportional to the square of the separation distance. Experimental testing using a water-to-water pipeline system environment was used to represent the worst-case environment and can be used as a guide to estimate an appropriate safety distance. Dynamic pressures obtained from the experimental measurements and the simulation predictions mutually agreed along the high-pressure water jetting path. From the experimental and simulation exercises, the zero-effect distance for the water-to-water medium was estimated at a minimum horizontal distance of 1500 mm, while for the water-to-sand medium the distance was estimated at a minimum of 1200 mm. - Highlights: • Safe separation distance of underground natural gas pipes was determined. • Pressure is inversely proportional to the square of the separation distance. • Water-to-water system represents the worst-case environment. • Measured dynamic pressures mutually agreed with simulation results. • Safe separation distance of more than 1200 mm should be applied.
Chang, Juntao; Hu, Qinghua; Yu, Daren; Bao, Wen
2011-11-01
Start/unstart detection is one of the most important issues for hypersonic inlets and is also the foundation of protection control for scramjets. Inlet start/unstart detection can be treated as a standard pattern classification problem, and the training sample costs have to be considered in classifier modeling, as both CFD numerical simulations and wind tunnel experiments on hypersonic inlets cost time and money. To address this, CFD simulation of the inlet is studied as a first step, and the simulation results provide the training data for pattern classification of hypersonic inlet start/unstart. Then classifier modeling technology and maximum classifier utility theory are introduced to analyze the effect of training data cost on classifier utility. In conclusion, it is useful to introduce support vector machine algorithms to acquire the classifier model of hypersonic inlet start/unstart, and the minimum total cost of the hypersonic inlet start/unstart classifier can be obtained by maximum classifier utility theory.
Computing the discrete Fréchet distance with imprecise input
Ahn, Heekap
2012-02-01
We consider the problem of computing the discrete Fréchet distance between two polygonal curves when their vertices are imprecise. An imprecise point is given by a region, and the point could lie anywhere within this region. By modelling imprecise points as balls in dimension d, we present an algorithm for this problem that returns, in time 2^{O(d²)} m²n² log²(mn), the minimum Fréchet distance between two imprecise polygonal curves with n and m vertices, respectively. We give an improved algorithm for the planar case with running time O(mn log³(mn) + (m² + n²) log(mn)). In the d-dimensional orthogonal case, where points are modelled as axis-parallel boxes and we use the L∞ distance, we give an O(dmn log(dmn))-time algorithm. We also give efficient O(dmn)-time algorithms to approximate the maximum Fréchet distance, as well as the minimum and maximum Fréchet distance under translation. These algorithms achieve constant-factor approximation ratios in "realistic" settings (such as when the radii of the balls modelling the imprecise points are roughly of the same size). © 2012 World Scientific Publishing Company.
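The exact discrete Fréchet distance for precise points, the baseline this paper generalizes to imprecise input, follows the classic Eiter–Mannila dynamic program:

```python
import math
from functools import lru_cache

# Discrete Fréchet distance between two polygonal curves given as point lists,
# via the O(mn) dynamic program: the cost of a coupling is the largest pairwise
# distance it uses, minimized over all monotone couplings.

def discrete_frechet(P, Q):
    @lru_cache(maxsize=None)
    def c(i, j):
        d = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        # advance on P, on Q, or on both; keep the best predecessor
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)
```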
Comparison of Classifier Architectures for Online Neural Spike Sorting.
Saeed, Maryam; Khan, Amir Ali; Kamboh, Awais Mehmood
2017-04-01
High-density, intracranial recordings from micro-electrode arrays need to undergo Spike Sorting in order to associate the recorded neuronal spikes to particular neurons. This involves spike detection, feature extraction, and classification. To reduce the data transmission and power requirements, on-chip real-time processing is becoming very popular. However, high computational resources are required for classifiers in on-chip spike-sorters, making scalability a great challenge. In this review paper, we analyze several popular classifiers to propose five new hardware architectures using the off-chip training with on-chip classification approach. These include support vector classification, fuzzy C-means classification, self-organizing maps classification, moving-centroid K-means classification, and Cosine distance classification. The performance of these architectures is analyzed in terms of accuracy and resource requirement. We establish that the neural networks based Self-Organizing Maps classifier offers the most viable solution. A spike sorter based on the Self-Organizing Maps classifier, requires only 7.83% of computational resources of the best-reported spike sorter, hierarchical adaptive means, while offering a 3% better accuracy at 7 dB SNR.
Design for minimum energy in interstellar communication
Messerschmitt, David G.
2015-02-01
Microwave digital communication at interstellar distances is the foundation of extraterrestrial civilization (SETI and METI) communication of information-bearing signals. Large distances demand large transmitted power and/or large antennas, while the propagation is transparent over a wide bandwidth. Recognizing a fundamental tradeoff, reducing the energy delivered to the receiver at the expense of wide bandwidth (the opposite of terrestrial objectives) is advantageous. Wide bandwidth also results in simpler design and implementation, allowing circumvention of dispersion and scattering arising in the interstellar medium and of motion effects, and obviating any related processing. The minimum energy delivered to the receiver per bit of information is determined by the cosmic microwave background alone. By mapping a single bit onto a carrier burst, the Morse code invented for the telegraph in 1836 comes closer to this minimum energy than approaches used in modern terrestrial radio. Rather than the terrestrial approach of adding phases and amplitudes, which increases information capacity while minimizing bandwidth, adding multiple time-frequency locations for carrier bursts increases capacity while minimizing energy per information bit. The resulting location code is simple and yet can approach the minimum energy as bandwidth is expanded. It is consistent with easy discovery, since carrier bursts are energetic, and straightforward modifications to post-detection pattern recognition can identify burst patterns. Time and frequency coherence constraints leading to simple signal discovery are addressed, and observations of the interstellar medium by transmitter and receiver constrain the burst parameters and limit the search scope.
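As a hedged numerical aside: if the CMB-limited minimum referred to above is the classical per-bit limit E_b ≥ kT ln 2 evaluated at the CMB temperature (an assumption here; the paper's exact bound is not restated in the abstract), the number is strikingly small:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T_cmb = 2.725        # assumed CMB temperature, kelvin

# Minimum received energy per bit when the noise floor is CMB-limited:
# E_b >= k * T * ln(2)
E_b_min = k_B * T_cmb * math.log(2)
print(E_b_min)  # on the order of 2.6e-23 joules per bit
```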
An Improvement To The k-Nearest Neighbor Classifier For ECG Database
Jaafar, Haryati; Hidayah Ramli, Nur; Nasir, Aimi Salihah Abdul
2018-03-01
The k-nearest neighbor (kNN) classifier is non-parametric and has been widely used for pattern classification. However, in practice, the performance of kNN often degrades due to the lack of information on how the samples are distributed. Moreover, kNN is no longer optimal when the training samples are limited. Another problem observed in kNN concerns the weighting issues in assigning the class label before classification. Thus, to address these limitations, a new classifier called Mahalanobis fuzzy k-nearest centroid neighbor (MFkNCN) is proposed in this study. Here, a Mahalanobis distance is applied to avoid imbalance in the sample distribution. Then, a surrounding rule is employed to obtain the nearest centroid neighbors based on the distribution of the training samples and their distance to the query point. Consequently, a fuzzy membership function is employed to assign the query point to the class label most frequently represented among its nearest centroid neighbors. Experimental studies on electrocardiogram (ECG) signals are presented. The classification performance is evaluated in two experimental steps, i.e., different values of k and different sizes of feature dimensions. Subsequently, a comparative study of the kNN, kNCN, FkNN, and MFkNCN classifiers is conducted. The results show that the performance of MFkNCN consistently exceeds that of kNN, kNCN, and FkNN, with a best classification rate of 96.5%.
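The plain kNN baseline that MFkNCN improves upon can be sketched in a few lines. The proposed classifier replaces Euclidean distance with Mahalanobis distance, selects centroid neighbors, and weights votes with fuzzy membership; this sketch shows only the majority-vote baseline:

```python
import math
from collections import Counter

# Plain k-nearest-neighbor classification: majority vote over the k training
# samples closest to the query point in Euclidean distance.

def knn_classify(x, train, k=3):
    # train: list of (feature_vector, label) pairs
    nearest = sorted(train, key=lambda s: math.dist(x, s[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```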
Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds
International Nuclear Information System (INIS)
Kaya, Savaş; Kaya, Cemal; Islam, Nazmul
2016-01-01
The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds and their electronegativities, chemical hardnesses, and electrophilicities. Lattice energy, electronegativity, chemical hardness, and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For 4 simple reactions, the changes of the hardness (Δη), polarizability (Δα), and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all chemical reactions, but the minimum polarizability principle and the minimum electrophilicity principle are not valid for all reactions. We also propose simple methods to compute the percentage ionic character and internuclear distances of ionic compounds. Comparative studies with experimental sets of data reveal that the proposed methods of computing the percentage ionic character and internuclear distances of ionic compounds are valid.
Minimum wakefield achievable by waveguide damped cavity
International Nuclear Information System (INIS)
Lin, X.E.; Kroll, N.M.
1995-01-01
The authors use an equivalent circuit to model a waveguide damped cavity. Both exponentially damped and persistent (decaying as t^(-3/2)) components of the wakefield are derived from this model. The result shows that for a cavity with resonant frequency a fixed interval above the waveguide cutoff, the persistent wakefield amplitude is inversely proportional to the external Q value of the damped mode. The competition between the two terms results in an optimal Q value, which gives a minimum wakefield as a function of the distance behind the source particle. The minimum wakefield increases when the resonant frequency approaches the waveguide cutoff. The results agree very well with computer simulations of a real cavity-waveguide system.
Permutation-invariant distance between atomic configurations
Ferré, Grégoire; Maillet, Jean-Bernard; Stoltz, Gabriel
2015-09-01
We present a permutation-invariant distance between atomic configurations, defined through a functional representation of atomic positions. This distance enables us to directly compare different atomic environments with an arbitrary number of particles, without going through a space of reduced dimensionality (i.e., fingerprints) as an intermediate step. Moreover, this distance is naturally invariant under permutations of atoms, avoiding the time-consuming minimization associated with other common criteria (such as the root-mean-square distance). Finally, invariance under global rotations is accounted for by a minimization procedure in the space of rotations solved by Monte Carlo simulated annealing. A formal framework is also introduced, showing that the distance we propose satisfies the properties of a metric on the space of atomic configurations. Two example applications are proposed. The first consists in evaluating the faithfulness of some fingerprints (or descriptors), i.e., their capacity to represent the structural information of a configuration. The second application concerns structural analysis, where our distance proves to be efficient in discriminating different local structures and even classifying their degree of similarity.
A minimum spanning forest based classification method for dedicated breast CT images
International Nuclear Information System (INIS)
Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei
2015-01-01
Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors' classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT images with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging.
On the short distance behavior of string theories
International Nuclear Information System (INIS)
Guida, R.; Konishi, K.; Provero, P.
1991-01-01
Short distance behavior of string theories is investigated by the use of the discretized path-integral formulation. In particular, the minimum physical length and the generalized uncertainty relation are re-derived from a set of Ward-Takahashi identities. In this paper several issues related to the form of the generalized uncertainty relation and to its implications are discussed. A consistent qualitative picture of short distance behavior of string theory seems to emerge from such a study
A Survey of Binary Similarity and Distance Measures
Directory of Open Access Journals (Sweden)
Seung-Seok Choi
2010-02-01
The binary feature vector is one of the most common representations of patterns, and similarity and distance measures play a critical role in many problems such as clustering and classification. Ever since Jaccard proposed a similarity measure to classify ecological species in 1901, numerous binary similarity and distance measures have been proposed in various fields. Applying appropriate measures results in more accurate data analysis. Notwithstanding, few comprehensive surveys on binary measures have been conducted. Hence we collected 76 binary similarity and distance measures used over the last century and reveal their correlations through a hierarchical clustering technique.
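Two of the 76 surveyed measures, Jaccard's coefficient and the simple matching coefficient, illustrate the 2x2 contingency-table form that most binary measures share:

```python
# Binary similarity from the standard contingency counts:
# a = both 1, b = x only, c = y only, d = both 0.

def contingency(x, y):
    a = sum(1 for u, v in zip(x, y) if u == 1 and v == 1)
    b = sum(1 for u, v in zip(x, y) if u == 1 and v == 0)
    c = sum(1 for u, v in zip(x, y) if u == 0 and v == 1)
    d = sum(1 for u, v in zip(x, y) if u == 0 and v == 0)
    return a, b, c, d

def jaccard(x, y):
    # Jaccard (1901): ignores co-absences
    a, b, c, _ = contingency(x, y)
    return a / (a + b + c)

def simple_matching(x, y):
    # Sokal-Michener simple matching: counts co-absences as agreement
    a, b, c, d = contingency(x, y)
    return (a + d) / (a + b + c + d)
```

The difference between the two, whether shared zeros count as evidence of similarity, is one of the axes along which the survey's hierarchical clustering groups the measures.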
Mahalanobis distance and variable selection to optimize dose response
International Nuclear Information System (INIS)
Moore, D.H. II; Bennett, D.E.; Wyrobek, A.J.; Kranzler, D.
1979-01-01
A battery of statistical techniques is combined to improve the detection of low-level dose response. First, Mahalanobis distances are used to classify objects as normal or abnormal. Then the proportion classified abnormal is regressed on dose. Finally, a subset of regressor variables is selected which maximizes the slope of the dose-response line. Use of the techniques is illustrated by application to mouse sperm damaged by low doses of x-rays.
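The Mahalanobis distance used in the first step can be sketched in two dimensions, with the 2x2 covariance inverted by hand to keep the example dependency-free. The abnormality cutoff an analysis would apply on top of this is an assumption, not specified in the abstract:

```python
import math

# Mahalanobis distance of a point x from a distribution with the given mean
# and 2x2 covariance matrix: sqrt((x - mu)^T S^{-1} (x - mu)).
# Objects farther than a chosen cutoff would be classified abnormal.

def mahalanobis_2d(x, mean, cov):
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    # explicit 2x2 inverse of the covariance matrix
    ia, ib, ic, id_ = d / det, -b / det, -c / det, a / det
    return math.sqrt(dx * (ia * dx + ib * dy) + dy * (ic * dx + id_ * dy))
```

With an identity covariance the Mahalanobis distance reduces to the Euclidean distance, which makes a convenient sanity check.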
Distance-Based Image Classification: Generalizing to New Classes at Near Zero Cost
Mensink, T.; Verbeek, J.; Perronnin, F.; Csurka, G.
2013-01-01
We study large-scale image classification methods that can incorporate new classes and training images continuously over time at negligible cost. To this end, we consider two distance-based classifiers, the k-nearest neighbor (k-NN) and nearest class mean (NCM) classifiers, and introduce a new
FastTree: Computing Large Minimum-Evolution Trees with Profiles instead of a Distance Matrix
Energy Technology Data Exchange (ETDEWEB)
N. Price, Morgan; S. Dehal, Paramvir; P. Arkin, Adam
2009-07-31
Gene families are growing rapidly, but standard methods for inferring phylogenies do not scale to alignments with over 10,000 sequences. We present FastTree, a method for constructing large phylogenies and for estimating their reliability. Instead of storing a distance matrix, FastTree stores sequence profiles of internal nodes in the tree. FastTree uses these profiles to implement neighbor-joining and uses heuristics to quickly identify candidate joins. FastTree then uses nearest-neighbor interchanges to reduce the length of the tree. For an alignment with N sequences, L sites, and a different characters, a distance matrix requires O(N^2) space and O(N^2 L) time, but FastTree requires just O( NLa + N sqrt(N) ) memory and O( N sqrt(N) log(N) L a ) time. To estimate the tree's reliability, FastTree uses local bootstrapping, which gives another 100-fold speedup over a distance matrix. For example, FastTree computed a tree and support values for 158,022 distinct 16S ribosomal RNAs in 17 hours and 2.4 gigabytes of memory. Just computing pairwise Jukes-Cantor distances and storing them, without inferring a tree or bootstrapping, would require 17 hours and 50 gigabytes of memory. In simulations, FastTree was slightly more accurate than neighbor joining, BIONJ, or FastME; on genuine alignments, FastTree's topologies had higher likelihoods. FastTree is available at http://microbesonline.org/fasttree.
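The pairwise Jukes-Cantor distance mentioned at the end of the abstract can be computed directly; a minimal sketch, assuming equal-length, gap-free nucleotide sequences:

```python
import math

# Jukes-Cantor distance: correct the observed fraction p of mismatched sites
# for multiple substitutions, d = -(3/4) * ln(1 - 4p/3).

def jukes_cantor(seq1, seq2):
    mismatches = sum(1 for a, b in zip(seq1, seq2) if a != b)
    p = mismatches / len(seq1)
    return -0.75 * math.log(1 - 4 * p / 3)
```

Storing all N(N-1)/2 such distances is exactly the O(N²) cost that FastTree's profile representation avoids.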
Complex networks in the Euclidean space of communicability distances
Estrada, Ernesto
2012-06-01
We study the properties of complex networks embedded in a Euclidean space of communicability distances. The communicability distance between two nodes is defined as the difference between the weighted sum of walks self-returning to the nodes and the weighted sum of walks going from one node to the other. We give some indications that the communicability distance identifies the least crowded routes in networks where simultaneous submission of packages is taking place. We define an index Q based on communicability and shortest path distances, which allows reinterpreting the “small-world” phenomenon as the region of minimum Q in the Watts-Strogatz model. It also allows the classification and analysis of networks with different efficiency of spatial uses. Consequently, the communicability distance displays unique features for the analysis of complex networks in different scenarios.
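A sketch of the communicability distance with G = exp(A) the communicability matrix of the adjacency matrix A. The quantity computed here is xi(p, q) = G_pp + G_qq - 2 G_pq; whether the paper reports this squared-Euclidean form or its square root is not restated here, and the matrix exponential is approximated by a truncated Taylor series to stay dependency-free:

```python
# Communicability distance between nodes of a graph, using a truncated
# Taylor series for the matrix exponential (adequate for small adjacency
# matrices with modest spectral radius).

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=25):
    n = len(A)
    G = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    P = [row[:] for row in G]
    fact = 1.0
    for k in range(1, terms):
        P = matmul(P, A)
        fact *= k
        G = [[G[i][j] + P[i][j] / fact for j in range(n)] for i in range(n)]
    return G

def communicability_distance(A, p, q):
    G = expm(A)
    return G[p][p] + G[q][q] - 2 * G[p][q]
```

On the 3-node path graph 0-1-2, the distance between the endpoints exceeds the distance between adjacent nodes, as expected.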
Enhancing the Performance of LibSVM Classifier by Kernel F-Score Feature Selection
Sarojini, Balakrishnan; Ramaraj, Narayanasamy; Nickolas, Savarimuthu
Medical data mining is the search for relationships and patterns within medical datasets that could provide useful knowledge for effective clinical decisions. The inclusion of irrelevant, redundant, and noisy features in the process model results in poor predictive accuracy. Much research work in data mining has gone into improving the predictive accuracy of classifiers by applying feature selection techniques. Feature selection is valuable in medical data mining because diagnosis can then be carried out in this patient-care activity with a minimum number of significant features. The objective of this work is to show that selecting the more significant features improves the performance of the classifier. We empirically evaluate the classification effectiveness of the LibSVM classifier on a reduced feature subset of a diabetes dataset. The evaluations suggest that the selected feature subset improves the predictive accuracy of the classifier and reduces false negatives and false positives.
International Nuclear Information System (INIS)
Masurowski, Frank; Drechsler, Martin; Frank, Karin
2016-01-01
Setting a minimum distance between wind turbines and settlements is an important policy to mitigate the conflict between renewable energy production and the well-being of residents. We present a novel approach to assess the impact of varying minimum distances on the wind energy potential of a region, state or country. We show that this impact can be predicted from the spatial structure of the settlements. Applying this approach to Germany, we identify those regions where the energy potential very sensitively reacts to a change in the minimum distance. In relative terms the reduction of the energy potential is maximal in the north-west and the south-east of Germany. In absolute terms it is maximal in the north. This information helps deciding in which regions the minimum distance may be increased without large losses in the energy potential. - Highlights: • Distance between wind turbines and settlements is an important policy criterion. • We predict the impact of varying the distance on the regional energy potential. • The impact can be explained from the settlement structure. • The impact varies by region and German Federal state.
Solar wind and coronal structure near sunspot minimum: Pioneer and SMM observations from 1985-1987
International Nuclear Information System (INIS)
Mihalov, J.D.; Barnes, A.; Hundhausen, A.J.; Smith, E.J.
1990-01-01
The solar wind speeds observed in the outer heliosphere (20 to 40 AU heliocentric distance, approximately) by Pioneers 10 and 11, and at a heliocentric distance of 0.7 AU by the Pioneer Venus spacecraft, reveal a complex set of changes in the years near the recent sunspot minimum, 1985-1987. The pattern of recurrent solar wind streams, the long-term average speed, and the sector polarity of the interplanetary magnetic field all changed in a manner suggesting both a temporal variation and a changing dependence on heliographic latitude. Coronal observations made from the Solar Maximum Mission spacecraft during the same epoch show a systematic variation in coronal structure and (by implication) the magnetic structure imposed on the expanding solar wind. These observations suggest interpreting the solar wind speed variations in terms of the familiar model where the speed increases with distance from a nearly flat interplanetary current sheet (or with heliomagnetic latitude), and where this current sheet becomes aligned with the solar equatorial plane as sunspot minimum approaches but deviates rapidly from that orientation after minimum. The authors confirm here that this basic organization of the solar wind speed persists in the outer heliosphere, with an orientation of the neutral sheet consistent with that inferred at a heliocentric distance of a few solar radii from the coronal observations.
Guo, Hao; Qin, Mengna; Chen, Junjie; Xu, Yong; Xiang, Jie
2017-01-01
High-order functional connectivity networks are rich in time information that can reflect dynamic changes in functional connectivity between brain regions. Accordingly, such networks are widely used to classify brain diseases. However, traditional methods for processing high-order functional connectivity networks generally include the clustering method, which reduces data dimensionality. As a result, such networks cannot be effectively interpreted in the context of neurology. Additionally, due to the large scale of high-order functional connectivity networks, it can be computationally very expensive to use complex network or graph theory to calculate certain topological properties. Here, we propose a novel method of generating a high-order minimum spanning tree functional connectivity network. This method increases the neurological significance of the high-order functional connectivity network, reduces network computing consumption, and produces a network scale that is conducive to subsequent network analysis. To ensure the quality of the topological information in the network structure, we used frequent subgraph mining technology to capture the discriminative subnetworks as features and combined this with quantifiable local network features. Then we applied a multikernel learning technique to the corresponding selected features to obtain the final classification results. We evaluated our proposed method using a data set containing 38 patients with major depressive disorder and 28 healthy controls. The experimental results showed a classification accuracy of up to 97.54%.
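The minimum-spanning-tree backbone extraction underlying the method above can be sketched with Prim's algorithm on a dense cost matrix. Functional connectivity MSTs typically maximize connectivity weight; whether the authors minimize 1 - weight or another cost is an assumption not restated here:

```python
# Prim's algorithm: grow a minimum spanning tree from node 0 by repeatedly
# attaching the cheapest edge from the tree to a node outside it.

def prim_mst(w):
    # w: symmetric matrix of edge costs; returns the list of tree edges (i, j)
    n = len(w)
    in_tree = [False] * n
    in_tree[0] = True
    edges = []
    for _ in range(n - 1):
        _, i, j = min((w[i][j], i, j)
                      for i in range(n) if in_tree[i]
                      for j in range(n) if not in_tree[j])
        in_tree[j] = True
        edges.append((i, j))
    return edges
```

The resulting tree has exactly n - 1 edges, which is what keeps the network scale small enough for the subsequent subgraph mining and feature selection steps.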
Ethnic and social distance towards Roma population
Directory of Open Access Journals (Sweden)
Miladinović Slobodan
2008-01-01
The Roma people are one of the socially marginalised ethnic groups that can easily be classified as an underclass or ethno-class. This work presents the results of a survey data analysis of ethnic and social distance towards the Roma population (2007). It concludes that the Roma are among the ethnic groups facing the highest social and ethnic distances in all observed social relations, and that their generational poverty, and the style of life this poverty produces, are the main causes of the high distance. Society faces the very important task of overcoming the conditions that perpetuate their social situation. Some possible solutions for addressing the overall social situation of the Roma and their integration into the rest of society are presented.
Classification of Pulse Waveforms Using Edit Distance with Real Penalty
Directory of Open Access Journals (Sweden)
Zhang Dongyu
2010-01-01
Advances in sensor and signal processing techniques have provided effective tools for quantitative research in traditional Chinese pulse diagnosis (TCPD). Because of the inevitable intraclass variation of pulse patterns, the automatic classification of pulse waveforms has remained a difficult problem. In this paper, by referring to the edit distance with real penalty (ERP) and recent progress in k-nearest neighbor (KNN) classifiers, we propose two novel ERP-based KNN classifiers. Taking advantage of the metric property of ERP, we first develop an ERP-induced inner product and a Gaussian ERP kernel, then embed them into difference-weighted KNN classifiers, and finally develop two novel classifiers for pulse waveform classification. The experimental results show that the proposed classifiers are effective for accurate classification of pulse waveforms.
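The ERP distance itself, the metric the classifiers build on, is computed by a standard dynamic program in which gaps are penalized by the distance to a fixed reference value g (commonly 0); this is what makes ERP satisfy the triangle inequality, the property exploited above:

```python
# Edit distance with real penalty (ERP) between two real-valued sequences.
# Unlike DTW, gap elements are compared against a constant g rather than
# repeated, which makes ERP a genuine metric.

def erp(x, y, g=0.0):
    m, n = len(x), len(y)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = D[i - 1][0] + abs(x[i - 1] - g)
    for j in range(1, n + 1):
        D[0][j] = D[0][j - 1] + abs(y[j - 1] - g)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = min(D[i - 1][j - 1] + abs(x[i - 1] - y[j - 1]),  # match
                          D[i - 1][j] + abs(x[i - 1] - g),             # gap in y
                          D[i][j - 1] + abs(y[j - 1] - g))             # gap in x
    return D[m][n]
```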
Evaluating the concept specialization distance from an end-user perspective: The case of AGROVOC
Martín-Moncunill, David; Sicilia-Urban, Miguel Angel; García-Barriocanal, Elena; Stracke, Christian M.
2017-01-01
Purpose – The common understanding of generalization/specialization relations assumes the relation to be equally strong between a classifier and any of its related classifiers and also at every level of the hierarchy. Assigning a grade of relative distance to represent the level of similarity
Further results on binary convolutional codes with an optimum distance profile
DEFF Research Database (Denmark)
Johannesson, Rolf; Paaske, Erik
1978-01-01
Fixed binary convolutional codes are considered which are simultaneously optimal or near-optimal according to three criteria: namely, the distance profile d, the free distance d_∞, and the minimum number of weight-d_∞ paths. It is shown how the optimum distance profile criterion can be used to limit...... codes. As a counterpart to quick-look-in (QLI) codes, which are not "transparent," we introduce rate R = 1/2 easy-look-in-transparent (ELIT) codes with a feedforward inverse (1 + D, D). In general, ELIT codes have d_∞ superior to that of QLI codes....
The Sight Distance Issues with Retrofitted Single-Lane HOV Facilities
Directory of Open Access Journals (Sweden)
Zhongren Wang
2013-06-01
Full Text Available It is well known that an obstruction inside a highway horizontal curve will lead to impaired sight distance. Highway alignment design standards, in terms of the minimum horizontal curve radius, are specified to allow for adequate stopping sight distance at given design speeds. For a single-lane HOV facility, inside-curve obstruction may occur whether the facility curves to the left (per travel direction) or to the right. A unique situation that calls for special attention is that the adjacent mixed-flow lane traffic, once queued, may itself become a sight obstruction. Calculations indicated that such obstruction may govern the minimum curve radius design as long as the left shoulder is not less than 0.92 m, when the HOV lane is contiguous to the mixed-flow lanes. Such governance may necessitate design speed reduction, horizontal and cross-section design adjustment, or both.
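The governing geometry can be illustrated with the standard middle-ordinate relation m = R(1 - cos(28.65 S / R)) linking stopping sight distance S, curve radius R, and lateral clearance m to the obstruction (all in metres). The numeric inputs below are illustrative, not values from the paper.

```python
import math

# Standard horizontal sight distance relation: required lateral clearance
# (middle ordinate) for stopping sight distance S on a curve of radius R.
def middle_ordinate(R, S):
    return R * (1.0 - math.cos(math.radians(28.65 * S / R)))

def min_radius(S, clearance, lo=50.0, hi=10000.0):
    """Smallest radius whose required clearance fits the available offset.

    Bisection works because the middle ordinate decreases as R grows
    (assuming R is large enough that the deflection angle stays moderate).
    """
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if middle_ordinate(mid, S) > clearance:
            lo = mid
        else:
            hi = mid
    return hi

# Example: queued traffic 3.0 m left of the travelled path, S = 120 m.
R = min_radius(S=120.0, clearance=3.0)
print(round(R, 1))
```

With these toy numbers the minimum radius comes out near R ≈ S²/(8m) = 600 m, matching the small-angle approximation of the same relation.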
The structure of water around the compressibility minimum
Energy Technology Data Exchange (ETDEWEB)
Skinner, L. B. [X-ray Science Division, Advanced Photon Source, Argonne National Laboratory, Argonne, Illinois 60439 (United States); Mineral Physics Institute, Stony Brook University, Stony Brook, New York, New York 11794-2100 (United States); Benmore, C. J., E-mail: benmore@aps.anl.gov [X-ray Science Division, Advanced Photon Source, Argonne National Laboratory, Argonne, Illinois 60439 (United States); Neuefeind, J. C. [Spallation Neutron Source, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37922 (United States); Parise, J. B. [Mineral Physics Institute, Stony Brook University, Stony Brook, New York, New York 11794-2100 (United States); Department of Geosciences, Stony Brook University, Stony Brook, New York, New York 11794-2100 (United States); Photon Sciences Division, Brookhaven National Laboratory, Upton, New York 11973 (United States)
2014-12-07
Here we present diffraction data that yield the oxygen-oxygen pair distribution function, g_OO(r), over the range 254.2-365.9 K. The running O-O coordination number, which represents the integral of the pair distribution function as a function of radial distance, is found to exhibit an isosbestic point at 3.30(5) Å. The probability of finding an oxygen atom surrounding another oxygen at this distance is therefore shown to be independent of temperature and corresponds to an O-O coordination number of 4.3(2). Moreover, the experimental data also show a continuous transition associated with the second peak position in g_OO(r) concomitant with the compressibility minimum at 319 K.
A unified classifier for robust face recognition based on combining multiple subspace algorithms
Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad
2012-10-01
Face recognition, the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution, because each may work very well on one set of images, with, say, illumination changes, but may not work properly on another set of image variations, such as expression variations. This study is motivated by the fact that no single classifier can claim generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also incorporating the question of the suitability of any given classifier for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or another. These classifiers are then combined into an ensemble classifier by two different strategies, weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging and facial expressions.
Feature and score fusion based multiple classifier selection for iris recognition.
Islam, Md Rabiul
2014-01-01
The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al.
Cannistraci, Carlo Vittorio; Ravasi, Timothy; Montevecchi, Franco Maria; Ideker, Trey; Alessio, Massimo
2010-09-15
Nonlinear small datasets, which are characterized by low numbers of samples and very high numbers of measures, occur frequently in computational biology, and pose problems in their investigation. Unsupervised hybrid-two-phase (H2P) procedures-specifically dimension reduction (DR), coupled with clustering-provide valuable assistance, not only for unsupervised data classification, but also for visualization of the patterns hidden in high-dimensional feature space. 'Minimum Curvilinearity' (MC) is a principle that-for small datasets-suggests the approximation of curvilinear sample distances in the feature space by pair-wise distances over their minimum spanning tree (MST), and thus avoids the introduction of any tuning parameter. MC is used to design two novel forms of nonlinear machine learning (NML): Minimum Curvilinear embedding (MCE) for DR, and Minimum Curvilinear affinity propagation (MCAP) for clustering. Compared with several other unsupervised and supervised algorithms, MCE and MCAP, whether individually or combined in H2P, overcome the limits of classical approaches. High performance was attained in the visualization and classification of: (i) pain patients (proteomic measurements) in peripheral neuropathy; (ii) human organ tissues (genomic transcription factor measurements) on the basis of their embryological origin. MC provides a valuable framework to estimate nonlinear distances in small datasets. Its extension to large datasets is prefigured for novel NMLs. Classification of neuropathic pain by proteomic profiles offers new insights for future molecular and systems biology characterization of pain. Improvements in tissue embryological classification refine results obtained in an earlier study, and suggest a possible reinterpretation of skin attribution as mesodermal. https://sites.google.com/site/carlovittoriocannistraci/home.
García-Floriano, Andrés; Ferreira-Santiago, Angel; Yáñez-Márquez, Cornelio; Camacho-Nieto, Oscar; Aldape-Pérez, Mario; Villuendas-Rey, Yenny
2017-01-01
Social networking potentially offers improved distance learning environments by enabling the exchange of resources between learners. The existence of properly classified content results in an enhanced distance learning experience in which appropriate materials can be retrieved efficiently; however, for this to happen, metadata needs to be present.…
Implementation of a microcomputer based distance relay for parallel transmission lines
International Nuclear Information System (INIS)
Phadke, A.G.; Jihuang, L.
1986-01-01
Distance relaying for parallel transmission lines is a difficult application problem with conventional phase and ground distance relays. It is known that for cross-country faults involving dissimilar phases and ground, three phase tripping may result. This paper summarizes a newly developed microcomputer based relay which is capable of classifying the cross-country fault correctly. The paper describes the principle of operation and results of laboratory tests of this relay
Safety distance for preventing hot particle ignition of building insulation materials
Jiayun Song; Supan Wang; Haixiang Chen
2014-01-01
Trajectories of flying hot particles were predicted in this work, and the particle temperatures during the movement were also calculated. Once the particle temperature decreased to the critical temperature for a hot particle to ignite building insulation materials, which was predicted by hot-spot ignition theory, the distance the particle had traveled was taken as the minimum safety distance for preventing the ignition of building insulation materials by hot particles. The results showed that for sphere ...
Vera, José Fernando; de Rooij, Mark; Heiser, Willem J
2014-11-01
In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low-dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters and the adjusted Bayesian information criterion statistic is employed to test the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented. © 2014 The British Psychological Society.
Relationship between source-surface distance and patient dose in fluoroscopic X-ray examinations
International Nuclear Information System (INIS)
Suzuki, Shoichi; Asada, Yasuki; Nishi, Kazuta; Mizuno, Emiko; Hara, Natsue; Orito, Takeo; Kamei, Tetsuya; Koga, Sukehiko
2000-01-01
The International Electrotechnical Commission (IEC) standard IEC 60601-1-3 (1994) prohibits, during radioscopic irradiation, focal-spot-to-skin distances of less than 20 cm if the X-ray equipment is specified for radioscopy during surgery, or less than 30 cm for other specified applications. This standard was reflected in the Japanese Industrial Standard JIS Z 4701-1997, which set the minimum focal-spot-to-skin distance at 30 cm for a fluoroscopic and radiographic table (under-table type). However, JIS had formerly set this minimum at 40 cm, as the current Medical Treatment Law still does. The draft revision of the Medical Treatment Law now under discussion would adopt the 30 cm value in accordance with the current JIS. Our research investigated the impact on the entrance surface dose of changing the focal-spot-to-skin distance from 40 cm to 30 cm. The result was a 20-30% increase in the entrance surface dose at a focal-spot-to-skin distance of 30 cm. Taking patient exposure dose into account, this result calls for further and more thorough discussion before the value is adopted into the Medical Treatment Law. (author)
Classifying smoking urges via machine learning.
Dumortier, Antoine; Beckjord, Ellen; Shiffman, Saul; Sejdić, Ervin
2016-12-01
Smoking is the largest preventable cause of death and disease in the developed world, and advances in modern electronics and machine learning can help us deliver real-time interventions to smokers in novel ways. In this paper, we examine different machine learning approaches that use situational features associated with having or not having urges to smoke during a quit attempt in order to accurately classify high-urge states. To test our machine learning approaches, specifically naive Bayes, discriminant analysis and decision tree learning methods, we used a dataset collected from over 300 participants who had initiated a quit attempt. The three classification approaches are evaluated in terms of sensitivity, specificity, accuracy and precision. The outcome of the analysis showed that algorithms based on feature selection make it possible to obtain high classification rates with only a few features selected from the entire dataset. The classification tree method outperformed the naive Bayes and discriminant analysis methods, with a classification accuracy of up to 86%. These numbers suggest that machine learning may be a suitable approach to deal with smoking cessation matters, and to predict smoking urges, outlining a potential use for mobile health applications. In conclusion, machine learning classifiers can help identify smoking situations, and the search for the best features and classifier parameters significantly improves the algorithms' performance. In addition, this study also supports the usefulness of new technologies in improving the effect of smoking cessation interventions, the management of time and patients by therapists, and thus the optimization of available health care resources. Future studies should focus on providing more adaptive and personalized support to people who really need it, in a minimum amount of time, by developing novel expert systems capable of delivering real-time interventions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Metrics for measuring distances in configuration spaces
International Nuclear Information System (INIS)
Sadeghi, Ali; Ghasemi, S. Alireza; Schaefer, Bastian; Mohr, Stephan; Goedecker, Stefan; Lill, Markus A.
2013-01-01
In order to characterize molecular structures we introduce configurational fingerprint vectors which are counterparts of quantities used experimentally to identify structures. The Euclidean distance between the configurational fingerprint vectors satisfies the properties of a metric and can therefore safely be used to measure dissimilarities between configurations in the high dimensional configuration space. In particular we show that these metrics are a perfect and computationally cheap replacement for the root-mean-square distance (RMSD) when one has to decide whether two noise contaminated configurations are identical or not. We introduce a Monte Carlo approach to obtain the global minimum of the RMSD between configurations, which is obtained from a global minimization over all translations, rotations, and permutations of atomic indices
DEFF Research Database (Denmark)
Sommerlund, Julie
2006-01-01
This paper describes the coexistence of two systems for classifying organisms and species: a dominant genetic system and an older naturalist system. The former classifies species and traces their evolution on the basis of genetic characteristics, while the latter employs physiological characteristics. The coexistence of the classification systems does not lead to a conflict between them. Rather, the systems seem to co-exist in different configurations, through which they are complementary, contradictory and inclusive in different situations, sometimes simultaneously. The systems come...
Effect of Image Linearization on Normalized Compression Distance
Mortensen, Jonathan; Wu, Jia Jie; Furst, Jacob; Rogers, John; Raicu, Daniela
Normalized Information Distance, based on Kolmogorov complexity, is an emerging metric for image similarity. It is approximated by the Normalized Compression Distance (NCD) which generates the relative distance between two strings by using standard compression algorithms to compare linear strings of information. This relative distance quantifies the degree of similarity between the two objects. NCD has been shown to measure similarity effectively on information which is already a string: genomic string comparisons have created accurate phylogeny trees and NCD has also been used to classify music. Currently, to find a similarity measure using NCD for images, the images must first be linearized into a string, and then compared. To understand how linearization of a 2D image affects the similarity measure, we perform four types of linearization on a subset of the Corel image database and compare each for a variety of image transformations. Our experiment shows that different linearization techniques produce statistically significant differences in NCD for identical spatial transformations.
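The NCD computation itself is compact enough to sketch; here zlib stands in for the standard compressors the paper refers to, and the byte strings are toy data.

```python
import zlib

# Normalized Compression Distance: how much extra compressed size the
# concatenation x + y costs beyond the better-compressing input alone.
def ncd(x: bytes, y: bytes) -> float:
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"abababababababababababab"
b_ = b"abababababababababababab"
c = b"q8z!kP0@vN#rT5&wL9$mX2^"
print(ncd(a, b_) < ncd(a, c))  # True: identical strings compress together better
```

Since NCD operates on strings, a 2-D image must first be linearized, which is exactly the step whose effect the paper's experiment measures.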
Classification With Truncated Distance Kernel.
Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas
2018-05-01
This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
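A minimal sketch of the kernel evaluation, assuming the form K(x, y) = max(rho - ||x - y||_1, 0): beyond radius rho the kernel is exactly zero, which produces the locally supported, piecewise-linear behaviour the brief exploits. The vectors and rho below are toy values.

```python
# Truncated l1-distance (TL1) kernel: linear in the l1 distance up to
# radius rho, identically zero beyond it.
def tl1_kernel(x, y, rho):
    d1 = sum(abs(a - b) for a, b in zip(x, y))
    return max(rho - d1, 0.0)

print(tl1_kernel([0.0, 0.0], [1.0, 1.0], rho=3.0))  # 3 - 2 = 1.0
print(tl1_kernel([0.0, 0.0], [4.0, 1.0], rho=3.0))  # truncated to 0.0
```

Because the kernel is evaluated pointwise, it can replace the kernel function in any toolbox that accepts a custom Gram-matrix callback, as the brief notes.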
Cyclic labellings with constraints at two distances
Leese, R; Noble, S D
2004-01-01
Motivated by problems in radio channel assignment, we consider the vertex-labelling of graphs with non-negative integers. The objective is to minimise the span of the labelling, subject to constraints imposed at graph distances one and two. We show that the minimum span is (up to rounding) a piecewise linear function of the constraints, and give a complete specification, together with associated optimal assignments, for trees and cycles.
Cannistraci, Carlo
2010-09-01
Motivation: Nonlinear small datasets, which are characterized by low numbers of samples and very high numbers of measures, occur frequently in computational biology, and pose problems in their investigation. Unsupervised hybrid-two-phase (H2P) procedures-specifically dimension reduction (DR), coupled with clustering-provide valuable assistance, not only for unsupervised data classification, but also for visualization of the patterns hidden in high-dimensional feature space. Methods: 'Minimum Curvilinearity' (MC) is a principle that-for small datasets-suggests the approximation of curvilinear sample distances in the feature space by pair-wise distances over their minimum spanning tree (MST), and thus avoids the introduction of any tuning parameter. MC is used to design two novel forms of nonlinear machine learning (NML): Minimum Curvilinear embedding (MCE) for DR, and Minimum Curvilinear affinity propagation (MCAP) for clustering. Results: Compared with several other unsupervised and supervised algorithms, MCE and MCAP, whether individually or combined in H2P, overcome the limits of classical approaches. High performance was attained in the visualization and classification of: (i) pain patients (proteomic measurements) in peripheral neuropathy; (ii) human organ tissues (genomic transcription factor measurements) on the basis of their embryological origin. Conclusion: MC provides a valuable framework to estimate nonlinear distances in small datasets. Its extension to large datasets is prefigured for novel NMLs. Classification of neuropathic pain by proteomic profiles offers new insights for future molecular and systems biology characterization of pain. Improvements in tissue embryological classification refine results obtained in an earlier study, and suggest a possible reinterpretation of skin attribution as mesodermal. © The Author(s) 2010. Published by Oxford University Press.
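The core Minimum Curvilinearity idea, approximating the curvilinear distance between two samples by the length of the path joining them over the MST of the pairwise-distance graph, can be sketched as follows; the 3-point distance matrix is toy data standing in for samples lying along a curve.

```python
import heapq
from collections import defaultdict

def mst(dist):
    """Prim's algorithm; returns an adjacency map of the spanning tree."""
    n = len(dist)
    adj = defaultdict(list)
    seen = {0}
    heap = [(dist[0][v], 0, v) for v in range(1, n)]
    heapq.heapify(heap)
    while len(seen) < n:
        w, u, v = heapq.heappop(heap)
        if v in seen:
            continue
        seen.add(v)
        adj[u].append((v, w))
        adj[v].append((u, w))
        for t in range(n):
            if t not in seen:
                heapq.heappush(heap, (dist[v][t], v, t))
    return adj

def tree_distance(adj, src, dst):
    """Sum of edge weights along the unique tree path src -> dst."""
    stack = [(src, None, 0.0)]
    while stack:
        node, parent, acc = stack.pop()
        if node == dst:
            return acc
        for nxt, w in adj[node]:
            if nxt != parent:
                stack.append((nxt, node, acc + w))

# Points 0 and 2 sit at the ends of a curve through point 1: the tree
# path (1.0 + 1.0) approximates the curvilinear distance, not the 1.8
# straight-line shortcut.
dist = [[0.0, 1.0, 1.8],
        [1.0, 0.0, 1.0],
        [1.8, 1.0, 0.0]]
adj = mst(dist)
print(tree_distance(adj, 0, 2))  # 2.0
```

No tuning parameter appears anywhere above, which is precisely the property the MC principle is designed around.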
On Normalized Compression Distance and Large Malware
Borbely, Rebecca Schuller
2015-01-01
Normalized Compression Distance (NCD) is a popular tool that uses compression algorithms to cluster and classify data in a wide range of applications. Existing discussions of NCD's theoretical merit rely on certain theoretical properties of compression algorithms. However, we demonstrate that many popular compression algorithms don't seem to satisfy these theoretical properties. We explore the relationship between some of these properties and file size, demonstrating that this theoretical pro...
Classifying Coding DNA with Nucleotide Statistics
Directory of Open Access Journals (Sweden)
Nicolas Carels
2009-10-01
Full Text Available In this report, we compared the success rate of classification of coding sequences (CDS) vs. introns by the Codon Structure Factor (CSF) and by a method that we called the Universal Feature Method (UFM). UFM is based on the scoring of purine bias (Rrr) and stop codon frequency. We show that the success rate of CDS/intron classification by UFM is higher than by CSF. UFM classifies ORFs as coding or non-coding through a score based on (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine (C), Guanine (G), and Adenine (A) probabilities in the 1st, 2nd, and 3rd positions of triplets, respectively, (iv) the probabilities of G in the 1st and 2nd positions of triplets, and (v) the distance of their GC3 vs. GC2 levels to the regression line of the universal correlation. More than 80% of CDSs (true positives) of Homo sapiens (>250 bp), Drosophila melanogaster (>250 bp) and Arabidopsis thaliana (>200 bp) are successfully classified with a false positive rate lower than or equal to 5%. The method releases coding sequences in their coding strand and coding frame, which allows their automatic translation into protein sequences with 95% confidence. The method is a natural consequence of the compositional bias of nucleotides in coding sequences.
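Two of the ingredients of the score, in-frame stop codon frequency and the purine (A/G) bias per codon position, are simple enough to sketch; the toy ORF below and the bare statistics are illustrative only, not the full UFM scoring formula.

```python
# Toy sketch of two UFM score ingredients: in-frame stop codon frequency
# and the purine (A/G) fraction at each of the three codon positions.
STOPS = {"TAA", "TAG", "TGA"}

def codon_stats(seq):
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    stop_freq = sum(c in STOPS for c in codons) / len(codons)
    purine = [sum(c[p] in "AG" for c in codons) / len(codons)
              for p in range(3)]
    return stop_freq, purine

# Coding-like toy ORF: no internal stop codons, purine-rich 1st positions.
stop_freq, purine = codon_stats("ATGGCTGAAGGTAAAGCTGGT")
print(stop_freq, [round(p, 2) for p in purine])
```

A real CDS read in frame tends to show exactly this signature, low stop frequency and a strong purine excess at the first codon position, which is what makes the two quantities discriminative.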
Romanian Library Science Distance Education. Current Context and Possible Solutions
Directory of Open Access Journals (Sweden)
Silvia-Adriana Tomescu
2012-01-01
We propose a model of teaching, learning and assessment for distance higher education in librarianship, tested on www.oll.ro, the Open Learning Library platform, in order to analyze its impact on students and especially to test the effectiveness of teaching and assessing knowledge at a distance. We set out a rigorous approach that reflects the problems facing the Romanian LIS education system and emphasizes the optimal strategies that need to be implemented. The benefits of such an approach can be classified as: innovation in education, communicative facilities, and effective strategies for teaching library science.
Steiner tree heuristic in the Euclidean d-space using bottleneck distances
DEFF Research Database (Denmark)
Lorenzen, Stephan Sloth; Winter, Pawel
2016-01-01
Some of the most efficient heuristics for the Euclidean Steiner minimal tree problem in the d-dimensional space, d ≥ 2, use Delaunay tessellations and minimum spanning trees to determine small subsets of geometrically close terminals. Their low-cost Steiner trees are determined and concatenated in a greedy fashion to obtain a low-cost tree spanning all terminals. The weakness of this approach is that the obtained solutions are topologically related to minimum spanning trees. To avoid this and to obtain even better solutions, bottleneck distances are utilized to determine good subsets of terminals...
Directory of Open Access Journals (Sweden)
Muhammad Ahmad
Full Text Available Hyperspectral image classification with a limited number of training samples without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome this problem, we propose a fuzziness-based active learning framework (FALF, in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN. The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publicly available datasets, we showed that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with a small amount of training data, as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of our proposed method, which compares favorably with state-of-the-art methods.
Finger vein identification using fuzzy-based k-nearest centroid neighbor classifier
Rosdi, Bakhtiar Affendi; Jaafar, Haryati; Ramli, Dzati Athiar
2015-02-01
In this paper, a new approach for personal identification using finger vein image is presented. Finger vein is an emerging type of biometrics that attracts attention of researchers in biometrics area. As compared to other biometric traits such as face, fingerprint and iris, finger vein is more secured and hard to counterfeit since the features are inside the human body. So far, most of the researchers focus on how to extract robust features from the captured vein images. Not much research was conducted on the classification of the extracted features. In this paper, a new classifier called fuzzy-based k-nearest centroid neighbor (FkNCN) is applied to classify the finger vein image. The proposed FkNCN employs a surrounding rule to obtain the k-nearest centroid neighbors based on the spatial distributions of the training images and their distance to the test image. Then, the fuzzy membership function is utilized to assign the test image to the class which is frequently represented by the k-nearest centroid neighbors. Experimental evaluation using our own database which was collected from 492 fingers shows that the proposed FkNCN has better performance than the k-nearest neighbor, k-nearest-centroid neighbor and fuzzy-based-k-nearest neighbor classifiers. This shows that the proposed classifier is able to identify the finger vein image effectively.
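The surrounding-rule selection step that FkNCN builds on can be sketched as follows: each new neighbour is chosen so that the centroid of all neighbours picked so far stays closest to the query, which spreads the neighbours spatially around it. The points and query are toy values, and the fuzzy-membership weighting of the full FkNCN is omitted.

```python
import math

# k-nearest centroid neighbour (kNCN) selection: greedy choice of
# neighbours whose running centroid stays closest to the query point.
def kncn(query, points, k):
    chosen = []
    remaining = list(range(len(points)))
    while len(chosen) < k:
        def centroid_dist(i):
            pts = [points[j] for j in chosen] + [points[i]]
            cen = [sum(c) / len(pts) for c in zip(*pts)]
            return math.dist(query, cen)
        best = min(remaining, key=centroid_dist)
        chosen.append(best)
        remaining.remove(best)
    return chosen

points = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (5.0, 5.0)]
print(kncn((1.0, 1.0), points, k=2))
```

In the full classifier, a fuzzy membership function then weights the classes represented by these neighbours before the query is assigned.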
Representing distance, consuming distance
DEFF Research Database (Denmark)
Larsen, Gunvor Riber
Distance is a condition for corporeal and virtual mobilities, for desired and actual travel, but it has received relatively little attention as a theoretical entity in its own right. Understandings of and assumptions about distance... are being consumed in contemporary society, in the same way as places, media, cultures and status are being consumed (Urry 1995, Featherstone 2007). An exploration of distance and its representations through contemporary consumption theory could expose what role distance plays in forming...
International Nuclear Information System (INIS)
Todinov, M.T.
2004-01-01
A new reliability measure is proposed and equations are derived which determine the probability of existence of a specified set of minimum gaps between random variables following a homogeneous Poisson process in a finite interval. Using the derived equations, a method is proposed for specifying the upper bound of the random variables' number density which guarantees that the probability of clustering of two or more random variables in a finite interval remains below a maximum acceptable level. It is demonstrated that even for moderate number densities the probability of clustering is substantial and should not be neglected in reliability calculations. In the important special case where the random variables are failure times, models have been proposed for determining the upper bound of the hazard rate which guarantees a set of minimum failure-free operating intervals before the random failures, with a specified probability. A model has also been proposed for determining the upper bound of the hazard rate which guarantees a minimum availability target. Using the models proposed, a new strategy, models and reliability tools have been developed for setting quantitative reliability requirements which consist of determining the intersection of the hazard rate envelopes (hazard rate upper bounds) which deliver a minimum failure-free operating period before random failures, a risk of premature failure below a maximum acceptable level and a minimum required availability. It is demonstrated that setting reliability requirements solely based on an availability target does not necessarily mean a low risk of premature failure. Even at a high availability level, the probability of premature failure can be substantial. For industries characterised by a high cost of failure, the reliability requirements should involve a hazard rate envelope limiting the risk of failure below a maximum acceptable level
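The kind of quantity derived in closed form here can be checked by simulation. The sketch below handles the conditional (fixed-count) case: the probability that n uniform points in [0, L], i.e. a homogeneous Poisson process conditioned on its count, all keep a minimum gap s, compared against the classical uniform-spacings identity ((L - (n - 1)s)/L)^n. This is a generic illustration, not the paper's own equations.

```python
import random

# Monte Carlo check: probability that all consecutive gaps among n
# uniform points in [0, L] are at least s, versus the closed form.
def gap_probability(n, L, s, trials=20000, seed=1):
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        pts = sorted(rng.uniform(0, L) for _ in range(n))
        if all(b - a >= s for a, b in zip(pts, pts[1:])):
            ok += 1
    return ok / trials

n, L, s = 3, 10.0, 1.0
exact = ((L - (n - 1) * s) / L) ** n  # = 0.512 for these values
print(abs(gap_probability(n, L, s) - exact) < 0.02)  # True
```

Even this small example shows the paper's point: with only 3 points in 10 length units, clustering (some gap below s) occurs with probability near 0.49, far from negligible.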
Lesser, Mark R; Jackson, Stephen T
2013-03-01
Long-distance dispersal is an integral part of plant species migration and population development. We aged and genotyped 1125 individuals in four disjunct populations of Pinus ponderosa that were initially established by long-distance dispersal in the 16th and 17th centuries. Parentage analysis was used to determine if individuals were the product of local reproductive events (two parents present), long-distance pollen dispersal (one parent present) or long-distance seed dispersal (no parents present). All individuals established in the first century at each site were the result of long-distance dispersal. Individuals reproduced at younger ages with increasing age of the overall population. These results suggest Allee effects, where populations were initially unable to expand on their own, and were dependent on long-distance dispersal to overcome a minimum-size threshold. Our results demonstrate that long-distance dispersal was not only necessary for initial colonisation but also to sustain subsequent population growth during early phases of expansion. © 2012 Blackwell Publishing Ltd/CNRS.
Davoudi, Alireza; Shiry Ghidary, Saeed; Sadatnejad, Khadijeh
2017-06-01
Objective. In this paper, we propose a nonlinear dimensionality reduction algorithm for the manifold of symmetric positive definite (SPD) matrices that considers the geometry of SPD matrices and provides a low-dimensional representation of the manifold with high class discrimination in a supervised or unsupervised manner. Approach. The proposed algorithm tries to preserve the local structure of the data by preserving distances to local means (DPLM) and also provides an implicit projection matrix. DPLM is linear in terms of the number of training samples. Main results. We performed several experiments on the multi-class dataset IIa from BCI competition IV and two other datasets from BCI competition III, namely datasets IIIa and IVa. The results show that our approach, as a dimensionality reduction technique, leads to superior results in comparison with other competitors in the related literature because of its robustness against outliers and the way it preserves the local geometry of the data. Significance. The experiments confirm that the combination of DPLM with the filter geodesic minimum distance to mean classifier leads to superior performance compared with the state of the art on the brain-computer interface competition IV dataset IIa. Also, the statistical analysis shows that our dimensionality reduction method performs significantly better than its competitors.
Wang, Jeen-Shing; Lin, Che-Wei; Yang, Ya-Ting C; Ho, Yu-Jen
2012-10-01
This paper presents a walking pattern classification and a walking distance estimation algorithm using gait phase information. A gait phase information retrieval algorithm was developed to analyze the duration of the phases in a gait cycle (i.e., stance, push-off, swing, and heel-strike phases). Based on the gait phase information, a decision tree based on the relations between gait phases was constructed for classifying three different walking patterns (level walking, walking upstairs, and walking downstairs). Gait phase information was also used for developing a walking distance estimation algorithm. The walking distance estimation algorithm consists of the processes of step count and step length estimation. The proposed walking pattern classification and walking distance estimation algorithm have been validated by a series of experiments. The accuracy of the proposed walking pattern classification was 98.87%, 95.45%, and 95.00% for level walking, walking upstairs, and walking downstairs, respectively. The accuracy of the proposed walking distance estimation algorithm was 96.42% over a walking distance.
Contaminant classification using cosine distances based on multiple conventional sensors.
Liu, Shuming; Che, Han; Smith, Kate; Chang, Tian
2015-02-01
Emergent contamination events have a significant impact on water systems. After contamination detection, it is important to classify the type of contaminant quickly to provide support for remediation attempts. Conventional methods generally either rely on laboratory-based analysis, which requires a long analysis time, or on multivariable-based geometry analysis and sequence analysis, which is prone to being affected by the contaminant concentration. This paper proposes a new contaminant classification method, which discriminates contaminants in real time, independent of the contaminant concentration. The proposed method quantifies the similarities or dissimilarities between sensors' responses to different types of contaminants. The performance of the proposed method was evaluated using data from contaminant injection experiments in a laboratory and compared with a Euclidean distance-based method. The robustness of the proposed method was evaluated using an uncertainty analysis. The results show that the proposed method performed better in identifying the type of contaminant than the Euclidean distance-based method and that it could classify the type of contaminant in minutes without significantly compromising the correct classification rate (CCR).
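The key property exploited here is that cosine distance depends only on the direction of the multi-sensor response vector, not its magnitude, so a higher contaminant concentration (roughly a scaled response) maps to the same class. A minimal sketch of the idea follows; the sensor channels and library patterns are invented for illustration, not taken from the paper.

```python
import math

def cosine(u, v):
    # Cosine similarity between two response vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical library of multi-sensor response patterns (e.g. deltas in
# free chlorine, TOC, conductivity, pH), one pattern per contaminant.
LIBRARY = {
    "contaminant_A": [-0.8, 0.5, 0.2, -0.1],
    "contaminant_B": [0.1, 0.9, -0.3, 0.2],
}

def classify(response):
    """Pick the library pattern with the highest cosine similarity;
    scaling the response (higher concentration) does not change it."""
    return max(LIBRARY, key=lambda name: cosine(LIBRARY[name], response))
```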
Farley, Carlton; Kassu, Aschalew; Bose, Nayana; Jackson-Davis, Armitra; Boateng, Judith; Ruffin, Paul; Sharma, Anup
2017-06-01
A short-distance standoff Raman technique is demonstrated for detecting economically motivated adulteration (EMA) in extra virgin olive oil (EVOO). Using a portable Raman spectrometer operating with a 785 nm laser and a 2-in. refracting telescope, adulteration of olive oil with grapeseed oil and canola oil is detected between 1% and 100%, at a minimum concentration of 2.5% from a distance of 15 cm and at a minimum concentration of 5% from a distance of 1 m. The technique involves correlating the intensity ratios of prominent Raman bands of edible oils at 1254, 1657, and 1441 cm-1 to the degree of adulteration. As a novel variation in the data analysis technique, integrated intensities over a spectral range of 100 cm-1 around the Raman line were used, making it possible to increase the sensitivity of the technique. The technique is demonstrated by detecting adulteration of EVOO with grapeseed and canola oils at 0-100%. Due to the potential of this technique for making measurements from a convenient distance, the short-distance standoff Raman technique has the promise to be used for routine applications in the food industry, such as identifying food items and monitoring EMA at various checkpoints in the food supply chain and storage facilities.
New method for distance-based close following safety indicator.
Sharizli, A A; Rahizar, R; Karim, M R; Saifizul, A A
2015-01-01
The increase in the number of fatalities caused by road accidents involving heavy vehicles every year has raised the level of concern and awareness on road safety in developing countries like Malaysia. Changes in the vehicle dynamic characteristics such as gross vehicle weight, travel speed, and vehicle classification will affect a heavy vehicle's braking performance and its ability to stop safely in emergency situations. As such, the aim of this study is to establish a more realistic new distance-based safety indicator called the minimum safe distance gap (MSDG), which incorporates vehicle classification (VC), speed, and gross vehicle weight (GVW). Commercial multibody dynamics simulation software was used to generate braking distance data for various heavy vehicle classes under various loads and speeds. By applying nonlinear regression analysis to the simulation results, a mathematical expression of MSDG has been established. The results show that MSDG is dynamically changed according to GVW, VC, and speed. It is envisaged that this new distance-based safety indicator would provide a more realistic depiction of the real traffic situation for safety analysis.
Just-in-time adaptive classifiers-part II: designing the classifier.
Alippi, Cesare; Roveri, Manuel
2008-12-01
Aging effects, environmental changes, thermal drifts, and soft and hard faults affect physical systems by changing their nature and behavior over time. To cope with process evolution, adaptive solutions must be envisaged to track its dynamics; in this direction, adaptive classifiers are generally designed by assuming the stationary hypothesis for the process generating the data, with very few results addressing nonstationary environments. This paper proposes a methodology based on k-nearest neighbor (NN) classifiers for designing adaptive classification systems able to react to changing conditions just-in-time (JIT), i.e., exactly when it is needed. k-NN classifiers have been selected for their computation-free training phase, the possibility to easily estimate the model complexity k, and the ability to keep the computational complexity of the classifier under control through suitable data reduction mechanisms. A JIT classifier requires a temporal detection of a (possible) process deviation (an aspect tackled in a companion paper) followed by an adaptive management of the knowledge base (KB) of the classifier to cope with the process change. The novelty of the proposed approach resides in the general framework supporting the real-time update of the KB of the classification system in response to novel information coming from the process, both in stationary conditions (accuracy improvement) and in nonstationary ones (process tracking), and in providing a suitable estimate of k. It is shown that the classification system grants consistency once the change moves the process generating the data into a new stationary state, as is the case in many real applications.
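The JIT idea (keep feeding the k-NN knowledge base while the process is stationary, discard obsolete samples when a change is detected) can be sketched in a few lines. This is a minimal illustration, not the paper's framework; change detection itself is assumed to happen elsewhere, as in the companion paper.

```python
from collections import Counter

class JITKNN:
    """Minimal sketch of a just-in-time k-NN classifier: the knowledge
    base (KB) grows with fresh samples in stationary conditions and is
    rebuilt from post-change samples when a deviation is detected."""

    def __init__(self, k=3):
        self.k = k
        self.kb = []  # list of (feature_vector, label) pairs

    def add(self, x, y):
        # Stationary case: new supervised samples improve accuracy.
        self.kb.append((x, y))

    def on_change_detected(self, recent):
        # Nonstationary case: drop obsolete knowledge and keep only
        # samples observed after the estimated change point.
        self.kb = list(recent)

    def predict(self, x):
        dist = lambda a: sum((ai - xi) ** 2 for ai, xi in zip(a, x))
        nearest = sorted(self.kb, key=lambda s: dist(s[0]))[: self.k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]
```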
Hybrid classifiers: methods of data, knowledge, and classifier combination
Wozniak, Michal
2014-01-01
This book delivers definite and compact knowledge on how hybridization can help improve the quality of computer classification systems. To help readers clearly grasp hybridization, the book primarily focuses on introducing its different levels and illuminating the problems faced when undertaking such projects. The data and knowledge incorporated in hybridization are treated first, followed by the still-growing area of classifier systems known as combined classifiers. The book comprises these state-of-the-art topics and the latest research results of the author and his team from the Department of Systems and Computer Networks, Wroclaw University of Technology, including classifiers based on feature space splitting, one-class classification, imbalanced data, and data stream classification.
FastTree: Computing Large Minimum-Evolution Trees with Profiles instead of a Distance Matrix
Price, Morgan N.
2009-01-01
Gene families are growing rapidly, but standard methods for inferring phylogenies do not scale to alignments with over 10,000 sequences. We present FastTree, a method for constructing large phylogenies and for estimating their reliability. Instead of storing a distance matrix, FastTree stores sequence profiles of internal nodes in the tree. FastTree uses these profiles to implement neighbor-joining and uses heuristics to quickly identify candidate joins. FastTree then uses nearest-neighbor i...
Directory of Open Access Journals (Sweden)
Jinhong Noh
2016-04-01
Obstacle avoidance methods require knowledge of the distance between a mobile robot and obstacles in the environment. However, in stochastic environments, distance determination is difficult because objects have position uncertainty. The purpose of this paper is to determine the distance between a robot and obstacles represented by probability distributions. Distance determination for obstacle avoidance should consider position uncertainty, computational cost and collision probability. The proposed method considers all of these conditions, unlike conventional methods. It determines the obstacle region using a collision probability density threshold. Furthermore, it defines a minimum distance function to the boundary of the obstacle region with a Lagrange multiplier method. Finally, it computes the distance numerically. Simulations were executed in order to compare the performance of the distance determination methods. Our method demonstrated faster and more accurate performance than conventional methods. It may help overcome position uncertainty issues pertaining to obstacle avoidance, such as low-accuracy sensors, environments with poor visibility or unpredictable obstacle motion.
Classification of resistance to passive motion using minimum probability of error criterion.
Chan, H C; Manry, M T; Kondraske, G V
1987-01-01
Neurologists diagnose many muscular and nerve disorders by classifying the resistance to passive motion of patients' limbs. Over the past several years, a computer-based instrument has been developed for automated measurement and parameterization of this resistance. In the device, a voluntarily relaxed lower extremity is moved at constant velocity by a motorized driver. The torque exerted on the extremity by the machine is sampled, along with the angle of the extremity. In this paper a computerized technique is described for classifying a patient's condition as 'Normal' or 'Parkinson disease' (rigidity) from the torque versus angle curve for the knee joint. A Legendre polynomial, fit to the curve, is used to calculate a set of eight normally distributed features of the curve. The minimum probability of error approach is used to classify the curve as being from a normal or Parkinson disease patient. Data collected from 44 different subjects was processed and the results were compared with an independent physician's subjective assessment of rigidity. There is agreement in better than 95% of the cases when all of the features are used.
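For normally distributed features with known class statistics, the minimum probability of error decision is the maximum-posterior rule. The sketch below illustrates that rule with a diagonal-covariance Gaussian model per class; the feature means, variances and priors are invented for the example and are not the paper's eight-feature model.

```python
import math

def gaussian_loglik(x, mean, var):
    # Log-likelihood of feature vector x under an independent
    # (diagonal-covariance) Gaussian class model.
    return sum(
        -0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
        for xi, m, v in zip(x, mean, var)
    )

def classify_min_error(x, models, priors):
    """Minimum probability of error = maximum posterior: pick the class
    maximising log prior + log likelihood."""
    return max(
        models,
        key=lambda c: math.log(priors[c]) + gaussian_loglik(x, *models[c]),
    )

# Hypothetical per-class (mean, variance) of two curve features.
MODELS = {
    "normal":    ([1.0, 0.2], [0.10, 0.05]),
    "parkinson": ([2.5, 0.9], [0.20, 0.10]),
}
PRIORS = {"normal": 0.5, "parkinson": 0.5}
```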
DISTANCES TO DARK CLOUDS: COMPARING EXTINCTION DISTANCES TO MASER PARALLAX DISTANCES
International Nuclear Information System (INIS)
Foster, Jonathan B.; Jackson, James M.; Stead, Joseph J.; Hoare, Melvin G.; Benjamin, Robert A.
2012-01-01
We test two different methods of using near-infrared extinction to estimate distances to dark clouds in the first quadrant of the Galaxy using large near-infrared (Two Micron All Sky Survey and UKIRT Infrared Deep Sky Survey) surveys. Very long baseline interferometry parallax measurements of masers around massive young stars provide the most direct and bias-free measurement of the distance to these dark clouds. We compare the extinction distance estimates to these maser parallax distances. We also compare these distances to kinematic distances, including recent re-calibrations of the Galactic rotation curve. The extinction distance methods agree with the maser parallax distances (within the errors) between 66% and 100% of the time (depending on method and input survey) and between 85% and 100% of the time outside of the crowded Galactic center. Although the sample size is small, extinction distance methods reproduce maser parallax distances better than kinematic distances; furthermore, extinction distance methods do not suffer from the kinematic distance ambiguity. This validation gives us confidence that these extinction methods may be extended to additional dark clouds where maser parallaxes are not available.
Safety distance for preventing hot particle ignition of building insulation materials
Directory of Open Access Journals (Sweden)
Jiayun Song
2014-01-01
Trajectories of flying hot particles were predicted in this work, and the temperatures during the movement were also calculated. Once the particle temperature decreased to the critical temperature for a hot particle to ignite building insulation materials, which was predicted by hot-spot ignition theory, the distance the particle traveled was determined as the minimum safety distance for preventing the ignition of building insulation materials by hot particles. The results showed that for spherical aluminum particles with the same initial velocities and diameters, the horizontal and vertical distances traveled by particles with higher initial temperatures were greater. Smaller particles traveled farther when other conditions were the same. The critical temperature for an aluminum particle to ignite rigid polyurethane foam increased rapidly with the decrease of particle diameter. The horizontal and vertical safety distances were closely related to the initial temperature, diameter and initial velocity of the particles. These results could help update the safety provisions for firework displays.
Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm
Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.
2014-11-01
A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network consisting of all its nodes such that the sum of the weights of all its edges is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool using the well-known Prim's algorithm to construct the minimum spanning tree of a connected, undirected and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve complex network MST problems easily, efficiently and effectively. The selection of an appropriate algorithm is essential; otherwise it will be very hard to obtain an optimal result. In the case of a road transportation network, it is essential to find optimal results by considering all the necessary points based on a cost factor (time or distance). This paper solves the minimum spanning tree problem of a road network by finding its minimum span while considering all the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems and location-allocation problems. Therefore, in this study we have developed a customized GIS tool using a Python script in ArcGIS software for the solution of the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. The tool has a number of advantages: users do not need deep knowledge of the subject, as the tool is user-friendly and allows access to varied information adapted to the needs of the users. This GIS tool for MST can be applied to a nationwide plan in India called Pradhan Mantri Gram Sadak Yojana to provide optimal all-weather road connectivity to unconnected villages (points). The tool is also useful for constructing highways or railways spanning several
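Since the tool is driven by Prim's algorithm on a weight (adjacency) matrix, here is a minimal, self-contained sketch of that core step (the GIS layer, ArcGIS integration and real road data are outside this illustration). Absent edges are marked with infinity, and the weight can be either distance or travel time, as in the paper.

```python
import math

def prim_mst(weights):
    """Prim's algorithm on a symmetric weight (adjacency) matrix.
    Returns the MST edge list and the total cost (the impedance
    factor, e.g. distance or time)."""
    n = len(weights)
    in_tree = [False] * n
    best = [math.inf] * n      # cheapest known edge weight into the tree
    parent = [-1] * n
    best[0] = 0.0
    edges, total = [], 0.0
    for _ in range(n):
        # Pick the cheapest node not yet in the tree.
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        total += best[u]
        if parent[u] != -1:
            edges.append((parent[u], u))
        # Relax edges from the newly added node.
        for v in range(n):
            if not in_tree[v] and weights[u][v] < best[v]:
                best[v] = weights[u][v]
                parent[v] = u
    return edges, total
```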
Qualitative Research in Distance Education: An Analysis of Journal Literature 2005-2012
Hauser, Laura
2013-01-01
This review study examines the current research literature in distance education for the years 2005 to 2012. The author found 382 research articles published during that time in four prominent peer-reviewed research journals. The articles were classified and coded as quantitative, qualitative, or mixed methods. Further analysis found another…
Extremal values on Zagreb indices of trees with given distance k-domination number.
Pei, Lidan; Pan, Xiangfeng
2018-01-01
Let G be a graph. A set D ⊆ V(G) is a distance k-dominating set of G if for every vertex u ∈ V(G)\D, d(u, v) ≤ k for some vertex v ∈ D, where k is a positive integer. The distance k-domination number γ_k(G) of G is the minimum cardinality among all distance k-dominating sets of G. The first Zagreb index of G is defined as M_1(G) = Σ_{v ∈ V(G)} d(v)^2 and the second Zagreb index is M_2(G) = Σ_{uv ∈ E(G)} d(u)d(v). In this paper, we obtain upper bounds for the Zagreb indices of n-vertex trees with given distance k-domination number and characterize the extremal trees, which generalizes the results of Borovićanin and Furtula (Appl. Math. Comput. 276:208-218, 2016). What is worth mentioning is that, for an n-vertex tree T, a sharp upper bound on the distance k-domination number γ_k(T) is determined.
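The two Zagreb indices are straightforward to compute from vertex degrees; a small sketch (the example graphs below are the star K_{1,3} and the path P_4, chosen only to illustrate the definitions):

```python
def zagreb_indices(n, edges):
    """First and second Zagreb indices of a graph on vertices 0..n-1:
    M1 = sum of squared degrees, M2 = sum over edges of d(u)*d(v)."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    m1 = sum(d * d for d in deg)
    m2 = sum(deg[u] * deg[v] for u, v in edges)
    return m1, m2
```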
Long-distance travellers stopover for longer: a case study with spoonbills staying in North Iberia
Navedo , Juan G.; Orizaola , Germán; Masero , José A.; Overdijk , Otto; Sánchez-Guzmán , Juan M.
2010-01-01
Long-distance migration is widespread among birds, connecting breeding and wintering areas through a set of stopover localities where individuals refuel and/or rest. The extent of the stopover is critical in determining the migratory strategy of a bird. Here, we examined the relationship between minimum length of stay of PVC-ringed birds in a major stopover site and the remaining flight distance to the overwintering area in the Eurasian spoonbill (Platalea l. leucorodia) d...
Energy Technology Data Exchange (ETDEWEB)
Afzal, Muhammad; Ahmed, Furqan; Anwar, Muhammad Yousaf; Ali, Liaqat; Ajmal, Muhammad [Univ. of Engineering and Technology, Metallurgical and Materials Engineering, Lahore (Pakistan); Khan, Aamer Nusair [Institute of Industrial and Control System, Rawalpindi (Pakistan)
2015-09-15
In the present research, WC-12 %Co cermet coatings were deposited on AISI-321 stainless steel substrate using air plasma spraying. During the deposition process, the standoff distance was varied from 80 to 130 mm with 10 mm increments. Other parameters such as current, voltage, time, carrier gas flow rate and powder feed rate etc. were kept constant. The objective was to study the effects of spraying distance on the microstructure of as-sprayed coatings. The microscopic analyses revealed that the band of spraying distance ranging from 90 to 100 mm was the threshold distance for optimum results, provided that all the other spraying parameters were kept constant. In this range of threshold distance, minimum percentages of porosity and defects were observed. Further, the formation of different phases, at six spraying distances, was studied using X-ray diffraction, and the phase analysis was correlated with hardness results.
Support vector machine as a binary classifier for automated object detection in remotely sensed data
International Nuclear Information System (INIS)
Wardaya, P D
2014-01-01
In the present paper, the author proposes the application of the Support Vector Machine (SVM) for the analysis of satellite imagery. One of the advantages of SVM is that, with limited training data, it may generate comparable or even better results than other methods. The SVM algorithm is used for automated object detection and characterization. Specifically, the SVM is applied in its basic form as a binary classifier that separates two classes, namely object and background. The algorithm aims at effectively detecting an object from its background with minimum training data. A synthetic image containing noise is used for algorithm testing. Furthermore, it is implemented to perform remote sensing image analysis such as identification of island vegetation, water bodies, and oil spills from satellite imagery. It is indicated that SVM provides fast and accurate analysis with acceptable results.
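As a simplified stand-in for the binary object/background SVM described above, the sketch below trains a linear SVM by Pegasos-style sub-gradient descent on the hinge loss. It is a pure-Python illustration under the assumption of linearly separable features, not the paper's implementation (which may use kernels and real imagery).

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style training of a linear SVM. Labels y must be +1/-1
    for the two classes (e.g. object vs. background)."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    t = 0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            w = [(1 - eta * lam) * wj for wj in w]  # regularisation shrink
            if margin < 1:  # hinge loss active: push towards the sample
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```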
Towards an intelligent environment for distance learning
Directory of Open Access Journals (Sweden)
Rafael Morales
2009-12-01
Mainstream distance learning nowadays is heavily influenced by traditional educational approaches that produce homogenised learning scenarios for all learners through learning management systems. Any differentiation between learners and personalisation of their learning scenarios is left to the teacher, who gets minimum support from the system in this respect. This way, the truly digital native, the computer, is left out of the move, unable to better support the teaching-learning processes because it is not provided with the means to transform into knowledge all the information that it stores and manages. I believe learning management systems should care for supporting adaptation and personalisation of both individual learning and the formation of communities of learning. Open learner modelling and intelligent collaborative learning environments are proposed as a means to care. The proposal is complemented with a general architecture for an intelligent environment for distance learning and an educational model based on the principles of self-management, creativity, significance and participation.
Directory of Open Access Journals (Sweden)
Jiwoong Yu
2017-05-01
Ultra-high resolution (UHR) radar imaging is used to analyze the internal structure of objects and to identify and classify their shapes based on ultra-wideband (UWB) signals using a vector network analyzer (VNA). However, radar-based imaging is limited by microwave propagation effects, wave scattering, and transmit power, thus the received signals are inevitably weak and noisy. To overcome this problem, the radar may be operated in the near-field. The focusing of UHR radar signals over a close distance requires precise geometry in order to accommodate the spherical waves. In this paper, a geometric estimation and compensation method based on the minimum entropy of radar images with sub-centimeter resolution is proposed and implemented. Inverse synthetic aperture radar (ISAR) imaging is used because it is applicable to several fields, including medical- and security-related applications, and high-quality images of various targets have been produced to verify the proposed method. For ISAR in the near-field, the compensation for the time delay depends on the distance from the center of rotation and on the internal RF circuits and cables. The required parameters for the delay compensation algorithm are determined by minimizing the entropy of the radar images, so that acceptable results can be achieved. The processing speed can be enhanced by performing the calculations in the time domain without the phase values, which are removed after upsampling. For comparison, the parameters are also estimated by performing random sampling on the data set. Although the reduced data set contained only 5% of the observed angles, the parameter optimization method is shown to operate correctly.
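The minimum-entropy criterion used here is a common autofocus idea: a well-focused image concentrates its energy, so its intensity histogram has lower Shannon entropy. A toy sketch of that criterion follows; the histogram binning and the grid search are illustrative choices, not the paper's exact procedure, and `reconstruct` stands in for the (hypothetical) imaging step that maps a delay-compensation parameter to image pixels.

```python
import math

def image_entropy(pixels, bins=32):
    """Shannon entropy of the normalised intensity histogram; a sharper
    (better-focused) image concentrates energy and scores lower."""
    lo, hi = min(pixels), max(pixels)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for p in pixels:
        hist[min(int((p - lo) / width), bins - 1)] += 1
    n = len(pixels)
    return -sum(c / n * math.log(c / n) for c in hist if c)

def best_parameter(candidates, reconstruct):
    """Grid-search the compensation parameter: keep the value whose
    reconstructed image has minimum entropy."""
    return min(candidates, key=lambda c: image_entropy(reconstruct(c)))
```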
Multi-image acquisition-based distance sensor using agile laser spot beam.
Riza, Nabeel A; Amin, M Junaid
2014-09-01
We present a novel laser-based distance measurement technique that uses multiple-image-based spatial processing to enable distance measurements. Compared with the first-generation distance sensor using spatial processing, the modified sensor is no longer hindered by the classic Rayleigh axial resolution limit for the propagating laser beam at its minimum beam waist location. The proposed high-resolution distance sensor design uses an electronically controlled variable focus lens (ECVFL) in combination with an optical imaging device, such as a charge-coupled device (CCD), to produce and capture different laser spot size images on a target, with these beam spot sizes different from the minimal spot size possible at this target distance. By exploiting the unique relationship of the target-located spot sizes with the varying ECVFL focal length for each target distance, the proposed distance sensor can compute the target distance with a distance measurement resolution better than the axial resolution via the Rayleigh resolution criterion. Using a 30 mW 633 nm He-Ne laser coupled with an electromagnetically actuated liquid ECVFL, along with a 20 cm focal length bias lens, and using five spot images captured per target position by a CCD-based Nikon camera, a proof-of-concept distance sensor is successfully implemented in the laboratory over target ranges from 10 to 100 cm with a demonstrated sub-cm axial resolution, which is better than the axial Rayleigh resolution limit at these target distances. Applications for the proposed potentially cost-effective distance sensor are diverse and include industrial inspection and measurement and 3D object shape mapping and imaging.
Özdemir, Merve Erkınay; Telatar, Ziya; Eroğul, Osman; Tunca, Yusuf
2018-05-01
Dysmorphic syndromes present different facial malformations. These malformations are significant for an early diagnosis of dysmorphic syndromes and contain distinctive information for face recognition. In this study we define the characteristic features of each syndrome by considering facial malformations, and classify Fragile X, Hurler, Prader-Willi, Down and Wolf-Hirschhorn syndromes and healthy groups automatically. Reference points are marked on the face images and ratios between the points' distances are taken as features. We suggest a neural-network-based hierarchical decision tree structure in order to classify the syndrome types. We also implement k-nearest neighbor (k-NN) and artificial neural network (ANN) classifiers to compare classification accuracy with our hierarchical decision tree. The classification accuracy is 50%, 73% and 86.7% with the k-NN, ANN and hierarchical decision tree methods, respectively. The same images were then shown to a clinical expert, who achieved a recognition rate of 46.7%. We develop an efficient system to recognize different syndrome types automatically from simple, non-invasive imaging data, independent of the patient's age, sex and race, at high accuracy. The promising results indicate that our method can be used for pre-diagnosis of the dysmorphic syndromes by clinical experts.
Feedback brake distribution control for minimum pitch
Tavernini, Davide; Velenis, Efstathios; Longo, Stefano
2017-06-01
The distribution of brake forces between front and rear axles of a vehicle is typically specified such that the same level of brake force coefficient is imposed at both front and rear wheels. This condition is known as 'ideal' distribution and it is required to deliver the maximum vehicle deceleration and minimum braking distance. For subcritical braking conditions, the deceleration demand may be delivered by different distributions between front and rear braking forces. In this research we show how to obtain the optimal distribution which minimises the pitch angle of a vehicle and hence enhances driver subjective feel during braking. A vehicle model including suspension geometry features is adopted. The problem of the minimum pitch brake distribution for a varying deceleration level demand is solved by means of a model predictive control (MPC) technique. To address the problem of the undesirable pitch rebound caused by a full-stop of the vehicle, a second controller is designed and implemented independently from the braking distribution in use. An extended Kalman filter is designed for state estimation and implemented in a high fidelity environment together with the MPC strategy. The proposed solution is compared with the reference 'ideal' distribution as well as another previous feed-forward solution.
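The 'ideal' distribution mentioned above follows from static axle loads plus longitudinal load transfer: imposing the same brake force coefficient at both axles yields the front/rear split below. This is the standard textbook relation, sketched for illustration; the vehicle parameters used in testing are invented, not from the paper, and the MPC pitch-minimising controller itself is beyond this sketch.

```python
G = 9.81  # gravitational acceleration, m/s^2

def ideal_brake_distribution(mass, wheelbase, a_cg_front, h_cg, decel):
    """'Ideal' front/rear brake force split: equal brake force
    coefficient at both axles, accounting for longitudinal load
    transfer. a_cg_front is the distance from the front axle to the
    centre of gravity; decel is the demanded deceleration in m/s^2."""
    b_cg_rear = wheelbase - a_cg_front
    # Axle normal loads during the deceleration (load transfers forward).
    n_front = mass * (G * b_cg_rear + decel * h_cg) / wheelbase
    n_rear = mass * (G * a_cg_front - decel * h_cg) / wheelbase
    mu = decel / G  # required brake force coefficient at both axles
    return mu * n_front, mu * n_rear
```

By construction the two forces sum to mass x deceleration, and the front share grows with deceleration, which is why a fixed front/rear split cannot be 'ideal' at every braking level.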
A Novel Parallel Algorithm for Edit Distance Computation
Directory of Open Access Journals (Sweden)
Muhammad Murtaza Yousaf
2018-01-01
Full Text Available The edit distance between two sequences is the minimum number of weighted transformation operations required to transform one string into the other; the operations are insert, remove, and substitute. A dynamic programming solution for the edit distance exists, but it becomes computationally intensive when the strings are very long. This work presents a novel parallel algorithm for the edit distance problem of string matching. The algorithm is based on resolving dependencies in the dynamic programming solution of the problem and is able to compute each row of the edit distance table in parallel. In this way, the complete table can be computed in min(m,n) iterations for strings of size m and n, whereas the state-of-the-art parallel algorithm solves the problem in max(m,n) iterations. The proposed algorithm also increases the amount of parallelism in each of its iterations and is able to exploit spatial locality in its implementation. Additionally, the algorithm works in a load-balanced way that further improves its performance. The algorithm is implemented for multicore systems having shared memory. An implementation of the algorithm in OpenMP shows linear speedup and better execution time as compared to the state-of-the-art parallel approach. The efficiency of the algorithm is also proven better in comparison to its competitor.
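The row-by-row dynamic programming recurrence that the parallel algorithm builds on can be sketched as follows (a generic unit-cost implementation for illustration, not the authors' parallel code):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance with unit-weight
    insert, remove, and substitute operations."""
    n = len(b)
    # prev holds the previous row of the DP table; entry j gives the
    # edit distance between the processed prefix of a and b[:j].
    prev = list(range(n + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,         # remove ca
                            curr[j - 1] + 1,     # insert cb
                            prev[j - 1] + cost)) # substitute (or match)
        prev = curr
    return prev[n]
```

Each row depends only on the previous one, which is exactly the dependency structure the parallel algorithm exploits.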
Saw, Kim Guan
2017-01-01
This article revisits the cognitive load theory to explore the use of worked examples to teach a selected topic in a higher level undergraduate physics course for distance learners at the School of Distance Education, Universiti Sains Malaysia. With a break of several years from receiving formal education and having only minimum science…
Edit distance for marked point processes revisited: An implementation by binary integer programming
Energy Technology Data Exchange (ETDEWEB)
Hirata, Yoshito; Aihara, Kazuyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan)
2015-12-15
We implement the edit distance for marked point processes [Suzuki et al., Int. J. Bifurcation Chaos 20, 3699–3708 (2010)] as a binary integer program. Compared with the previous implementation using minimum cost perfect matching, the proposed implementation has two advantages: first, it allows us to apply a wide variety of software and hardware, even spin glasses and coherent Ising machines, to calculate the edit distance for marked point processes; second, it runs faster than the previous implementation when the difference between the numbers of events in the two time windows of a marked point process is large.
International Nuclear Information System (INIS)
Guillorn, Michael A.; Carr, Dustin W.; Tiberio, Richard C.; Greenbaum, Elias; Simpson, Michael L.
2000-01-01
We report a versatile process for the fabrication of dissimilar metal electrodes with a minimum interelectrode distance of less than 6 nm using electron beam lithography and liftoff pattern transfer. This technique provides a controllable and reproducible method for creating structures suited for the electrical characterization of asymmetric molecules for molecular electronics applications. Electrode structures employing pairs of Au electrodes and non-Au electrodes were fabricated in three different patterns. Parallel electrode structures 300 μm long with interelectrode distances as low as 10 nm, 75 nm wide electrode pairs with interelectrode distances less than 6 nm, and a multiterminal electrode structure with reproducible interelectrode distances of 8 nm were realized using this technique. The processing issues associated with the fabrication of these structures are discussed along with the intended application of these devices. (c) 2000 American Vacuum Society
An adaptive distance measure for use with nonparametric models
International Nuclear Information System (INIS)
Garvey, D. R.; Hines, J. W.
2006-01-01
Distance measures perform a critical task in nonparametric, locally weighted regression. Locally weighted regression (LWR) models are a form of 'lazy learning' which construct a local model 'on the fly' by comparing a query vector to historical, exemplar vectors according to a three-step process. First, the distance of the query vector to each of the exemplar vectors is calculated. Next, these distances are passed to a kernel function, which converts the distances to similarities or weights. Finally, the model output or response is calculated by performing locally weighted polynomial regression. To date, traditional distance measures, such as the Euclidean, weighted Euclidean, and L1-norm, have been used as the first step in the prediction process. Since these measures do not take sensor failures and drift into consideration, they are inherently ill-suited for application to 'real world' systems. This paper describes one such LWR model, namely auto-associative kernel regression (AAKR), and describes a new, Adaptive Euclidean distance measure that can be used to dynamically compensate for faulty sensor inputs. In this new distance measure, the query observations that lie outside of the training range (i.e. outside the minimum and maximum input exemplars) are dropped from the distance calculation. This makes the distance calculation robust to sensor drifts and failures, and also provides a method for managing inputs that exceed the training range. In this paper, AAKR models using the standard and Adaptive Euclidean distances are developed and compared for the pressure system of an operating nuclear power plant. It is shown that when the standard Euclidean distance is used on data with failed inputs, significant errors in the AAKR predictions can result. By using the Adaptive Euclidean distance it is shown that high-fidelity predictions are possible in spite of the input failure. In fact, it is shown that with the Adaptive Euclidean distance prediction...
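A minimal sketch of the adaptive measure described above: query components that fall outside the training range are dropped before the Euclidean distance is computed. Function and variable names are illustrative, not from the paper:

```python
import math

def adaptive_euclidean(query, exemplar, train_min, train_max):
    """Euclidean distance that ignores query components lying outside
    the per-input training range [train_min, train_max], so a failed or
    drifted sensor does not corrupt the distance calculation."""
    total = 0.0
    for q, x, lo, hi in zip(query, exemplar, train_min, train_max):
        if lo <= q <= hi:              # keep only in-range inputs
            total += (q - x) ** 2
    return math.sqrt(total)
```

With a second input stuck far outside its training range, the distance reduces to the contribution of the healthy inputs only.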
Quantum ensembles of quantum classifiers.
Schuld, Maria; Petruccione, Francesco
2018-02-09
Quantum machine learning witnesses an increasing number of quantum algorithms for data-driven decision making, a problem with potential applications ranging from automated image recognition to medical diagnosis. Many of those algorithms are implementations of quantum classifiers, or models for the classification of data inputs with a quantum computer. Following the success of collective decision making with ensembles in classical machine learning, this paper introduces the concept of quantum ensembles of quantum classifiers. Creating the ensemble corresponds to a state preparation routine, after which the quantum classifiers are evaluated in parallel and their combined decision is accessed by a single-qubit measurement. This framework naturally allows for exponentially large ensembles in which - similar to Bayesian learning - the individual classifiers do not have to be trained. As an example, we analyse an exponentially large quantum ensemble in which each classifier is weighted according to its performance in classifying the training data, leading to new results for quantum as well as classical machine learning.
Handwritten Digit Recognition using Edit Distance-Based KNN
Bernard, Marc; Fromont, Elisa; Habrard, Amaury; Sebban, Marc
2012-01-01
We discuss the student project given over the last 5 years to the first-year Master's students who follow the Machine Learning lecture at the University Jean Monnet in Saint Etienne, France. The goal of this project is to develop a GUI that can recognize digits and/or letters drawn manually. The system is based on a string representation of the digits using Freeman codes and on the use of an edit-distance-based K-Nearest Neighbors classifier. In addition to the machine learning knowledge about...
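The classifier at the core of such a project can be sketched as edit-distance k-NN over Freeman-code strings (a generic illustration under assumed data layout, not the students' GUI code):

```python
def levenshtein(a: str, b: str) -> int:
    """Unit-cost edit distance between two code strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def knn_classify(query: str, training, k: int = 1):
    """training: list of (freeman_code_string, label) pairs.
    Returns the majority label among the k nearest neighbors
    under the edit distance."""
    neighbors = sorted(training, key=lambda s: levenshtein(query, s[0]))[:k]
    labels = [lbl for _, lbl in neighbors]
    return max(set(labels), key=labels.count)
```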
Improved initial guess for minimum energy path calculations
International Nuclear Information System (INIS)
Smidstrup, Søren; Pedersen, Andreas; Stokbro, Kurt; Jónsson, Hannes
2014-01-01
A method is presented for generating a good initial guess of a transition path between given initial and final states of a system without evaluation of the energy. An objective function surface is constructed using an interpolation of pairwise distances at each discretization point along the path, and the nudged elastic band method is then used to find an optimal path on this image dependent pair potential (IDPP) surface. This provides an initial path for the more computationally intensive calculations of a minimum energy path on an energy surface obtained, for example, by ab initio or density functional theory. The optimal path on the IDPP surface is significantly closer to a minimum energy path than a linear interpolation of the Cartesian coordinates and, therefore, reduces the number of iterations needed to reach convergence and averts divergence in the electronic structure calculations when atoms are brought too close to each other in the initial path. The method is illustrated with three examples: (1) rotation of a methyl group in an ethane molecule, (2) an exchange of atoms in an island on a crystal surface, and (3) an exchange of two Si-atoms in amorphous silicon. In all three cases, the computational effort in finding the minimum energy path with DFT was reduced by a factor ranging from 50% to an order of magnitude by using an IDPP path as the initial path. The time required for parallel computations was reduced even more because of load imbalance when linear interpolation of Cartesian coordinates was used.
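The linear-interpolation baseline that the IDPP path improves on is simply a straight line in Cartesian coordinates between the two endpoint geometries. A minimal sketch (coordinates flattened into one list per image; names are illustrative):

```python
def linear_interpolation_path(initial, final, n_images):
    """Baseline initial guess for a transition path: linearly interpolate
    the Cartesian coordinates between the initial and final states,
    producing n_images geometries including both endpoints (n_images >= 2)."""
    path = []
    for k in range(n_images):
        t = k / (n_images - 1)
        image = [(1 - t) * xi + t * xf for xi, xf in zip(initial, final)]
        path.append(image)
    return path
```

The IDPP method replaces this straight line with a path optimized on a surface built from interpolated pairwise distances, which keeps atoms from passing unphysically close to each other.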
Max–min distance nonnegative matrix factorization
Wang, Jim Jing-Yan; Gao, Xin
2014-01-01
Nonnegative Matrix Factorization (NMF) has been a popular representation method for pattern classification problems. It tries to decompose a nonnegative matrix of data samples as the product of a nonnegative basis matrix and a nonnegative coefficient matrix. The columns of the coefficient matrix can be used as new representations of these data samples. However, traditional NMF methods ignore class labels of the data samples. In this paper, we propose a novel supervised NMF algorithm to improve the discriminative ability of the new representation by using the class labels. Using the class labels, we separate all the data sample pairs into within-class pairs and between-class pairs. To improve the discriminative ability of the new NMF representations, we propose to minimize the maximum distance of the within-class pairs in the new NMF space, and meanwhile to maximize the minimum distance of the between-class pairs. With this criterion, we construct an objective function and optimize it with regard to basis and coefficient matrices, and slack variables alternatively, resulting in an iterative algorithm. The proposed algorithm is evaluated on three pattern classification problems and experiment results show that it outperforms the state-of-the-art supervised NMF methods.
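The two quantities that the max-min criterion above trades off can be computed directly from a labeled sample set. A small sketch of the criterion itself (not the NMF optimization; function names are illustrative):

```python
from itertools import combinations
import math

def euclid(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def maxmin_criterion(samples, labels):
    """Return (max within-class pair distance, min between-class pair
    distance): the supervised objective pushes the first down and the
    second up in the learned NMF representation space."""
    within, between = [], []
    for (xi, li), (xj, lj) in combinations(list(zip(samples, labels)), 2):
        (within if li == lj else between).append(euclid(xi, xj))
    return max(within), min(between)
```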
Towards the use of similarity distances to music genre classification: A comparative study.
Goienetxea, Izaro; Martínez-Otzeta, José María; Sierra, Basilio; Mendialdua, Iñigo
2018-01-01
Music genre classification is a challenging research problem, for which open questions remain regarding the classification approach, the representation of music pieces, distances between and within genres, and so on. In this paper the classification of generated music pieces is investigated, based on the following idea: if closely related known pieces are grouped into different sets -or clusters- and a new song is then generated automatically, "inspired" by each set, the new song should be more likely to be classified as belonging to the set that inspired it, using the same distance that separates the clusters. Different representations of music pieces and distances among pieces are used; the results obtained are promising and indicate the appropriateness of the approach, even in such a subjective area as music genre classification.
Distance education as a 'we–ness': a case for uBuntu as a theoretical framework
Du Toit-Brits, C.; Potgieter, F.J.; Hongwane, V.
2012-01-01
‘Education-for-me’ could, epistemologically speaking, probably be classified as a typically Western concept, in which the individual within distance education focuses on an individualised ‘education-for-me’, thus making the individual the most important ordering principle. African – specifically Batswana – ACE (Advanced Certificate in Education) students do not regard distance education as a solitary activity or individualised ‘education-for-me’. It appears from the qualitative data...
Learning Global-Local Distance Metrics for Signature-Based Biometric Cryptosystems
Directory of Open Access Journals (Sweden)
George S. Eskander Ekladious
2017-11-01
Full Text Available Biometric traits, such as fingerprints, faces and signatures have been employed in bio-cryptosystems to secure cryptographic keys within digital security schemes. Reliable implementations of these systems employ error correction codes formulated as simple distance thresholds, although they may not effectively model the complex variability of behavioral biometrics like signatures. In this paper, a Global-Local Distance Metric (GLDM) framework is proposed to learn cost-effective distance metrics, which reduce within-class variability and augment between-class variability, so that simple error correction thresholds of bio-cryptosystems provide high classification accuracy. First, a large number of samples from a development dataset are used to train a global distance metric that differentiates within-class from between-class samples of the population. Then, once user-specific samples are available for enrollment, the global metric is tuned to a local user-specific one. Proof-of-concept experiments on two reference offline signature databases confirm the viability of the proposed approach. Distance metrics are produced based on concise signature representations consisting of about 20 features and a single prototype. A signature-based bio-cryptosystem is designed using the produced metrics and has shown average classification error rates of about 7% and 17% for the PUCPR and the GPDS-300 databases, respectively. This level of performance is comparable to that obtained with complex state-of-the-art classifiers.
A novel three-stage distance-based consensus ranking method
Aghayi, Nazila; Tavana, Madjid
2018-05-01
In this study, we propose a three-stage weighted sum method for identifying the group ranks of alternatives. In the first stage, a rank matrix, similar to the cross-efficiency matrix, is obtained by computing the individual rank position of each alternative based on importance weights. In the second stage, a secondary goal is defined to limit the vector of weights since the vector of weights obtained in the first stage is not unique. Finally, in the third stage, the group rank position of alternatives is obtained based on a distance of individual rank positions. The third stage determines a consensus solution for the group so that the ranks obtained have a minimum distance from the ranks acquired by each alternative in the previous stage. A numerical example is presented to demonstrate the applicability and exhibit the efficacy of the proposed method and algorithms.
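The distance-minimising idea in the third stage can be illustrated with the simplest consensus rule: order alternatives by the sum of their individual rank positions, which minimises the total squared deviation from the individual ranks. This is a stand-in sketch, not the paper's more general distance model, and the names are illustrative:

```python
def consensus_ranks(rank_matrix):
    """rank_matrix[i][j]: rank position of alternative j under weighting
    scheme i. Returns the group rank of each alternative, ordered by
    total (equivalently, mean) individual rank."""
    n_alt = len(rank_matrix[0])
    totals = [sum(row[j] for row in rank_matrix) for j in range(n_alt)]
    order = sorted(range(n_alt), key=lambda j: totals[j])
    group_rank = [0] * n_alt
    for pos, j in enumerate(order, 1):
        group_rank[j] = pos
    return group_rank
```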
Space-Efficient Approximation Scheme for Circular Earth Mover Distance
DEFF Research Database (Denmark)
Brody, Joshua Eric; Liang, Hongyu; Sun, Xiaoming
2012-01-01
The Earth Mover Distance (EMD) between point sets A and B is the minimum cost of a bipartite matching between A and B. EMD is an important measure for estimating similarities between objects with quantifiable features and has important applications in several areas including computer vision...... to computer vision [13] and can be seen as a special case of computing EMD on a discretized grid. We achieve a (1 ±ε) approximation for EMD in $\\tilde O(\\varepsilon^{-3})$ space, for every 0 ... that matches the space bound asked in [9]....
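For equal-size point sets with unit weights, the EMD definition above reduces to a minimum-cost perfect matching. A brute-force sketch for tiny inputs (real implementations use assignment algorithms or the streaming approximations studied in the paper):

```python
from itertools import permutations

def emd_bruteforce(A, B):
    """Earth Mover Distance between equal-size point sets: the minimum
    total Euclidean cost over all perfect matchings of A to B.
    Exponential in |B|, so suitable only for very small sets."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return min(sum(dist(a, b) for a, b in zip(A, perm))
               for perm in permutations(B))
```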
Phylogenetic Applications of the Minimum Contradiction Approach on Continuous Characters
Directory of Open Access Journals (Sweden)
Marc Thuillard
2009-01-01
Full Text Available We describe the conditions under which a set of continuous variables or characters can be described as an X-tree or a split network. A distance matrix corresponds exactly to a split network or a valued X-tree if, after ordering of the taxa, the variables values can be embedded into a function with at most a local maximum and a local minimum, and crossing any horizontal line at most twice. In real applications, the order of the taxa best satisfying the above conditions can be obtained using the Minimum Contradiction method. This approach is applied to two sets of continuous characters. The first set corresponds to craniofacial landmarks in Hominids. The contradiction matrix is used to identify possible tree structures and some alternatives when they exist. We explain how to discover the main structuring characters in a tree. The second set consists of a sample of 100 galaxies. In the second example we show how to discretize the continuous variables describing physical properties of the galaxies without disrupting the underlying tree structure.
Maximum And Minimum Temperature Trends In Mexico For The Last 31 Years
Romero-Centeno, R.; Zavala-Hidalgo, J.; Allende Arandia, M. E.; Carrasco-Mijarez, N.; Calderon-Bustamante, O.
2013-05-01
Based on high-resolution (1') daily maps of the maximum and minimum temperatures in Mexico, an analysis of the last 31-year trends is performed. The maps were generated using all the available information from more than 5,000 stations of the Mexican Weather Service (Servicio Meteorológico Nacional, SMN) for the period 1979-2009, along with data from the North American Regional Reanalysis (NARR). The data processing procedure includes a quality control step to eliminate erroneous daily data, and makes use of a high-resolution digital elevation model (from GEBCO), the relationship between air temperature and elevation by means of the average environmental lapse rate, and interpolation algorithms (linear and inverse-distance weighting). Based on the monthly gridded maps for the mentioned period, the maximum and minimum temperature trends calculated by least-squares linear regression and their statistical significance are obtained and discussed.
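The inverse-distance weighting step mentioned above can be sketched as follows (a generic IDW interpolator over 2-D station locations, not the authors' processing code):

```python
def idw(points, values, query, power=2):
    """Inverse-distance-weighted interpolation of station values at a
    query location: each station contributes with weight 1/d^power."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return v                    # query coincides with a station
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den
```

At the midpoint of two equally distant stations the result is simply the mean of their values.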
The Edit Distance as a Measure of Perceived Rhythmic Similarity
Directory of Open Access Journals (Sweden)
Olaf Post
2012-07-01
Full Text Available The ‘edit distance’ (or ‘Levenshtein distance’) measure of distance between two data sets is defined as the minimum number of editing operations – insertions, deletions, and substitutions – that are required to transform one data set to the other (Orpen and Huron, 1992). This measure of distance has been applied frequently and successfully in music information retrieval, but rarely in predicting human perception of distance. In this study, we investigate the effectiveness of the edit distance as a predictor of perceived rhythmic dissimilarity under simple rhythmic alterations. Approaching rhythms as a set of pulses that are either onsets or silences, we study two types of alterations. The first experiment is designed to test the model’s accuracy for rhythms that are relatively similar; whether rhythmic variations with the same edit distance to a source rhythm are also perceived as relatively similar by human subjects. In addition, we observe whether the salience of an edit operation is affected by its metric placement in the rhythm. Instead of using a rhythm that regularly subdivides a 4/4 meter, our source rhythm is a syncopated 16-pulse rhythm, the son. Results show a high correlation between the predictions by the edit distance model and human similarity judgments (r = 0.87); a higher correlation than for the well-known generative theory of tonal music (r = 0.64). In the second experiment, we seek to assess the accuracy of the edit distance model in predicting relatively dissimilar rhythms. The stimuli used are random permutations of the son’s inter-onset intervals: 3-3-4-2-4. The results again indicate that the edit distance correlates well with the perceived rhythmic dissimilarity judgments of the subjects (r = 0.76). To gain insight into the relationships between the individual rhythms, the results are also presented by means of graphic phylogenetic trees.
Building gene expression profile classifiers with a simple and efficient rejection option in R.
Benso, Alfredo; Di Carlo, Stefano; Politano, Gianfranco; Savino, Alessandro; Hafeezurrehman, Hafeez
2011-01-01
The collection of gene expression profiles from DNA microarrays and their analysis with pattern recognition algorithms is a powerful technology applied to several biological problems. Common pattern recognition systems classify samples by assigning them to a set of known classes. However, in a clinical diagnostics setup, novel and unknown classes (new pathologies) may appear, and one must be able to reject those samples that do not fit the trained model. The problem of implementing a rejection option in a multi-class classifier has not been widely addressed in the statistical literature. Gene expression profiles represent a critical case study since they suffer from the curse of dimensionality, which negatively affects the reliability of both traditional rejection models and more recent approaches such as one-class classifiers. This paper presents a set of empirical decision rules that can be used to implement a rejection option in a set of multi-class classifiers widely used for the analysis of gene expression profiles. In particular, we focus on the classifiers implemented in the R Language and Environment for Statistical Computing (R for short in the remainder of this paper). The main contribution of the proposed rules is their simplicity, which enables an easy integration with available data analysis environments. Since tuning the parameters involved in the definition of a rejection model is often a complex and delicate task, in this paper we exploit an evolutionary strategy to automate this process. This allows the final user to maximize the rejection accuracy with minimum manual intervention. This paper shows how simple decision rules can help the use of complex machine learning algorithms in real experimental setups. The proposed approach is almost completely automated and is therefore a good candidate for integration in data analysis flows in labs where the machine learning expertise required to tune traditional...
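A rejection option of the kind described above can be reduced to its simplest form: reject a sample when the top class score is not confident enough. This is a minimal illustration of the idea, not the paper's empirical rules; the threshold value is a placeholder that would in practice be tuned (in the paper, by an evolutionary strategy):

```python
def classify_with_rejection(probs, threshold=0.7):
    """probs: dict mapping class label -> membership score from any
    multi-class classifier. Returns the top label, or "REJECT" when
    the top score falls below the confidence threshold."""
    best = max(probs, key=probs.get)
    return best if probs[best] >= threshold else "REJECT"
```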
Do Minimum Wages Fight Poverty?
David Neumark; William Wascher
1997-01-01
The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of the employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...
Georgiou, Harris
2009-10-01
Medical Informatics and the application of modern signal processing in the assistance of the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. In a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to advanced pattern classifiers and machine learning models. Finally, these approaches are extended in multi-classifier models under the scope of Game Theory and optimum collective decision, in order to produce efficient solutions for combining classifiers with minimum computational costs for advanced diagnostic systems. The material covered in this thesis is related to a total of 18 published papers, 6 in scientific journals and 12 in international conferences.
IAEA safeguards and classified materials
International Nuclear Information System (INIS)
Pilat, J.F.; Eccleston, G.W.; Fearey, B.L.; Nicholas, N.J.; Tape, J.W.; Kratzer, M.
1997-01-01
The international community in the post-Cold War period has suggested that the International Atomic Energy Agency (IAEA) utilize its expertise in support of the arms control and disarmament process in unprecedented ways. The pledges of the US and Russian presidents to place excess defense materials, some of which are classified, under some type of international inspections raises the prospect of using IAEA safeguards approaches for monitoring classified materials. A traditional safeguards approach, based on nuclear material accountancy, would seem unavoidably to reveal classified information. However, further analysis of the IAEA's safeguards approaches is warranted in order to understand fully the scope and nature of any problems. The issues are complex and difficult, and it is expected that common technical understandings will be essential for their resolution. Accordingly, this paper examines and compares traditional safeguards item accounting of fuel at a nuclear power station (especially spent fuel) with the challenges presented by inspections of classified materials. This analysis is intended to delineate more clearly the problems as well as reveal possible approaches, techniques, and technologies that could allow the adaptation of safeguards to the unprecedented task of inspecting classified materials. It is also hoped that a discussion of these issues can advance ongoing political-technical debates on international inspections of excess classified materials
Action Recognition Using Motion Primitives and Probabilistic Edit Distance
DEFF Research Database (Denmark)
Fihl, Preben; Holte, Michael Boelstoft; Moeslund, Thomas B.
2006-01-01
In this paper we describe a recognition approach based on the notion of primitives. As opposed to recognizing actions based on temporal trajectories or temporal volumes, primitive-based recognition is based on representing a temporal sequence containing an action by only a few characteristic time...... into a string containing a sequence of symbols, each representing a primitive. After pruning the string, a probabilistic Edit Distance classifier is applied to identify which action best describes the pruned string. The approach is evaluated on five one-arm gestures and the recognition rate is 91...
International Nuclear Information System (INIS)
Czubek, J.A.; Lenda, A.
1979-01-01
The minimum dimensions have been calculated assuring 91, 96 and 98% of the probe response with respect to the infinite medium. The models are of cylindrical form, the probe (source-to-detector distance equal to 60 or 90 cm) being placed on the model axis, symmetrically with respect to the two end-faces. All the models are ''embedded'' in various media, such as: air, sand of 40% porosity completely saturated with water, sand of 30% porosity and moisture content equal to 10%, and water. The models are of three types of material: sandstone, limestone and dolomite, with various porosities ranging from 0 to 100%. The probe response is due to gamma rays arising from the radiative capture of thermal neutrons. The calculations were carried out for the highest-energy line of gamma rays arising in the given lithology. The gamma-ray flux from neutron radiative capture has been calculated versus rock porosity and model dimensions, and radiation migration lengths were determined for the given lithologies. The minimum dimensions of cylindrical models are given as functions of: porosity, probe length (source-to-detector distance), lithology of the model, and type of medium surrounding the model. (author)
Directory of Open Access Journals (Sweden)
Ertekin Öztekin Öztekin
2015-12-01
Full Text Available The spacing of bolts from each other and the distance of bolts from the edges of connection plates are designed according to minimum and maximum boundary values proposed by structural codes. In this study, the reliabilities of those distances were investigated. For this purpose, loading types, bolt types, and plate thicknesses were taken as variable parameters. The Monte Carlo Simulation (MCS) method was used in the reliability computations performed for all combinations of those parameters. All resulting reliability index values for those distances are presented in graphics and tables. The results obtained from this study were compared with the values proposed by some structural codes, and evaluations of those comparisons were made. Finally, it is emphasized that using the same bolt distances in both traditional designs and higher-reliability designs would be incorrect.
Tableau Calculus for the Logic of Comparative Similarity over Arbitrary Distance Spaces
Alenda, Régis; Olivetti, Nicola
The logic CSL (first introduced by Sheremet, Tishkovsky, Wolter and Zakharyaschev in 2005) allows one to reason about distance comparison and similarity comparison within a modal language. The logic can express assertions of the kind "A is closer/more similar to B than to C" and has a natural application to spatial reasoning, as well as to reasoning about concept similarity in ontologies. The semantics of CSL is defined in terms of models based on different classes of distance spaces and generalizes the logic S4u of topological spaces. In this paper we consider CSL defined over arbitrary distance spaces. The logic comprises a binary modality to represent comparative similarity and a unary modality to express the existence of the minimum of a set of distances. We first show that the semantics of CSL can be equivalently defined in terms of preferential models. As a consequence we obtain the finite model property of the logic with respect to its preferential semantics, a property that does not hold with respect to the original distance-space semantics. Next we present an analytic tableau calculus based on the preferential semantics. The calculus provides a decision procedure for the logic; its termination is obtained by imposing suitable blocking restrictions.
Kaplan, Mehmet; Ozcan, Onder; Bilgic, Ethem; Kaplan, Elif Tugce; Kaplan, Tugba; Kaplan, Fatma Cigdem
2017-11-01
The Limberg flap (LF) procedure is widely performed for the treatment of sacrococcygeal pilonidal sinus (SPS); however, recurrences continue to be observed. The aim of this study was to assess the relationship between LF designs and the risk of SPS recurrence. Sixty-one cases with recurrent disease (study group) and 194 controls, with a minimum of 5 recurrence-free years following surgery (control group), were included in the study. LF reconstructions performed in each group were classified as off-midline closure (OMC) and non-OMC types. Subsequently, the 2 groups were analyzed. After adjustment for all variables, non-OMC types showed the most prominent correlation with recurrence, followed by interrupted suturing type, family history of SPS, smoking, prolonged healing time, and younger age. The best cut-off value for the critical distance from the midline was found to be 11 mm (with 72% sensitivity and 95% specificity for recurrence). We recommend OMC modifications, with the flap tailored to create a safe margin of at least 2 cm between the flap borders and the midline. Copyright © 2017 Elsevier Inc. All rights reserved.
Vo, Martin
2017-08-01
Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering to visualize the natural separation of the sample. The package can also be used for automatic tuning of method parameters (for example, the number of hidden neurons or the binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curves Classifier can also be used for simple downloading of light curves and all available information about queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and the command line UI, the program can be used through a web interface. Users can create jobs for training methods on given objects, querying databases and filtering outputs by trained filters. Preimplemented descriptors, classifiers and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.
Directory of Open Access Journals (Sweden)
Shirui Huo
2017-01-01
Full Text Available Human action recognition is an important and challenging task. Projecting depth images onto three depth motion maps (DMMs) and extracting deep convolutional neural network (DCNN) features yields discriminant descriptors that characterize the spatiotemporal information of a specific action from a sequence of depth images. In this paper, a unified improved collaborative representation framework is proposed in which the probability that a test sample belongs to the collaborative subspace of all classes can be well defined and calculated. The improved collaborative representation classifier (ICRC), based on l2-regularization, is presented to maximize the likelihood that a test sample belongs to each class; theoretical investigation shows that ICRC obtains a final classification by computing the likelihood for each class. Coupled with the DMM and DCNN features, experiments on depth image-based action recognition, including the MSRAction3D and MSRGesture3D datasets, demonstrate that the proposed approach, using a distance-based representation classifier, achieves superior performance over state-of-the-art methods, including SRC, CRC, and SVM.
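The collaborative-representation step described above can be sketched as follows: a test sample is coded over all training samples with a ridge (l2-regularized) solution, and the class with the smallest reconstruction residual wins. This is a generic CRC sketch, not the paper's improved ICRC; the regularization value and the toy data in the usage example are illustrative assumptions.

```python
import numpy as np

def crc_predict(X, labels, y, lam=0.01):
    """l2-regularized collaborative representation: code y over ALL
    training samples (columns of X), then assign the class whose
    columns yield the smallest reconstruction residual."""
    # ridge solution for the collaborative coding vector
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    best, best_r = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        r = np.linalg.norm(y - X[:, mask] @ alpha[mask])  # class-wise residual
        if r < best_r:
            best, best_r = c, r
    return best
```

With two tight clusters of training columns, a test vector near one cluster is reconstructed almost entirely by that class's columns, so its residual decides the label.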
Block-classified bidirectional motion compensation scheme for wavelet-decomposed digital video
Energy Technology Data Exchange (ETDEWEB)
Zafar, S. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.; Zhang, Y.Q. [David Sarnoff Research Center, Princeton, NJ (United States); Jabbari, B. [George Mason Univ., Fairfax, VA (United States)
1997-08-01
In this paper the authors introduce a block-classified bidirectional motion compensation scheme for the previously developed wavelet-based video codec, where multiresolution motion estimation is performed in the wavelet domain. The frame classification structure described in this paper is similar to that used in the MPEG standard. Specifically, the I-frames are intraframe coded, the P-frames are interpolated from a previous I- or a P-frame, and the B-frames are bidirectional interpolated frames. They apply this frame classification structure to the wavelet domain with variable block sizes and multiresolution representation. They use a symmetric bidirectional scheme for the B-frames and classify the motion blocks as intraframe, compensated either from the preceding or the following frame, or bidirectional (i.e., compensated based on which type yields the minimum energy). They also introduce the concept of F-frames, which are analogous to P-frames but are predicted from the following frame only. This improves the overall quality of the reconstruction in a group of pictures (GOP) but at the expense of extra buffering. They also study the effect of quantization of the I-frames on the reconstruction of a GOP, and they provide intuitive explanation for the results. In addition, the authors study a variety of wavelet filter-banks to be used in a multiresolution motion-compensated hierarchical video codec.
15 CFR 4.8 - Classified Information.
2010-01-01
... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information..., the information shall be reviewed to determine whether it should remain classified. Ordinarily the...
DEFF Research Database (Denmark)
Steinhauer, Jeremy; Delcambre, Lois M.L.; Lykke, Marianne
2014-01-01
We seek to improve information retrieval in a domain-specific collection by clustering user sessions from a click log and then classifying later user sessions in real time. As a preliminary step, we explore the main assumption of this approach: whether user sessions in such a site are related to the question that they are answering. Since a large class of machine learning algorithms use a distance measure at the core, we evaluate the suitability of common machine learning distance measures to distinguish sessions of users searching for the answer to same or different questions. We found that two…
Particle swarm optimization for determining shortest distance to voltage collapse
Energy Technology Data Exchange (ETDEWEB)
Arya, L.D.; Choube, S.C. [Electrical Engineering Department, S.G.S.I.T.S. Indore, MP 452 003 (India); Shrivastava, M. [Electrical Engineering Department, Government Engineering College Ujjain, MP 456 010 (India); Kothari, D.P. [Centre for Energy Studies, Indian Institute of Technology, Delhi (India)
2007-12-15
This paper describes an algorithm for computing the shortest distance to voltage collapse, i.e. determination of the closest saddle node bifurcation point (CSNBP), using the PSO technique. A direction along the CSNBP gives conservative results from a voltage security viewpoint. This information is useful to the operator to steer the system away from this point by taking corrective actions. The distance to a closest bifurcation is a minimum of the loadability, given a slack bus or participation factors for increasing generation as the load increases. CSNBP determination has been formulated as an optimization problem to be solved by the PSO technique. PSO is a population-based evolutionary algorithm (EA) inspired by the social behavior of animals such as fish schooling and birds flocking. It can handle optimization problems of any complexity since its mechanization is simple, with few parameters to be tuned. The developed algorithm has been implemented on two standard test systems. (author)
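The PSO mechanics the abstract refers to (simple mechanization, few tunable parameters) can be sketched as below. The objective here is a toy sphere function standing in for the CSNBP distance objective, which in practice requires a power-flow model; the inertia and acceleration coefficients are common defaults, not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Bare-bones particle swarm: each velocity is pulled toward the
    particle's personal best and the swarm's global best."""
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# toy stand-in objective: the sphere function, minimized near the origin
sol, val = pso_minimize(lambda p: float(np.sum(p**2)), dim=2)
```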
Vertices Contained In All Or In No Minimum Semitotal Dominating Set Of A Tree
Directory of Open Access Journals (Sweden)
Henning Michael A.
2016-02-01
Full Text Available Let G be a graph with no isolated vertex. In this paper, we study a parameter that is squeezed between arguably the two most important domination parameters, namely the domination number, γ(G), and the total domination number, γt(G). A set S of vertices in a graph G is a semitotal dominating set of G if it is a dominating set of G and every vertex in S is within distance 2 of another vertex of S. The semitotal domination number, γt2(G), is the minimum cardinality of a semitotal dominating set of G. We observe that γ(G) ≤ γt2(G) ≤ γt(G). We characterize the set of vertices that are contained in all, or in no, minimum semitotal dominating set of a tree.
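The two defining conditions (S dominates every vertex, and each vertex of S lies within distance 2 of another vertex of S) are easy to check directly. A brute-force sketch for small graphs follows; the adjacency-dict-of-sets encoding is an assumption for illustration.

```python
from itertools import combinations

def is_semitotal_dominating(adj, S):
    """adj: dict mapping each vertex to its set of neighbours."""
    S = set(S)
    # dominating: every vertex is in S or has a neighbour in S
    if any(v not in S and not (adj[v] & S) for v in adj):
        return False
    # semitotal: each u in S has another member of S at distance <= 2
    for u in S:
        near = adj[u] | {x for n in adj[u] for x in adj[n]}
        if not (near & (S - {u})):
            return False
    return True

def gamma_t2(adj):
    """Minimum cardinality by exhaustive search (fine for small trees)."""
    verts = list(adj)
    for k in range(1, len(verts) + 1):
        for S in combinations(verts, k):
            if is_semitotal_dominating(adj, S):
                return k
```

On the path P4 (0-1-2-3), the set {1, 2} is semitotal dominating and no single vertex is, so γt2(P4) = 2, consistent with γ(P4) = γt(P4) = 2 and the chain γ ≤ γt2 ≤ γt.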
Fingerprint prediction using classifier ensembles
CSIR Research Space (South Africa)
Molale, P
2011-11-01
Full Text Available ); logistic discrimination (LgD), k-nearest neighbour (k-NN), artificial neural network (ANN), association rules (AR) decision tree (DT), naive Bayes classifier (NBC) and the support vector machine (SVM). The performance of several multiple classifier systems...
Characteristics of low-latitude ionospheric depletions and enhancements during solar minimum
Haaser, R. A.; Earle, G. D.; Heelis, R. A.; Klenzing, J.; Stoneback, R.; Coley, W. R.; Burrell, A. G.
2012-10-01
Under the waning solar minimum conditions during 2009 and 2010, the Ion Velocity Meter, part of the Coupled Ion Neutral Dynamics Investigation aboard the Communication/Navigation Outage Forecasting System satellite, is used to measure in situ nighttime ion densities and drifts at altitudes between 400 and 550 km during the hours 21:00-03:00 solar local time. A new approach to detecting and classifying well-formed ionospheric plasma depletions and enhancements (bubbles and blobs) with scale sizes between 50 and 500 km is used to develop geophysical statistics for the summer, winter, and equinox seasons during the quiet solar conditions. Some diurnal and seasonal geomagnetic distribution characteristics confirm previous work on equatorial irregularities and scintillations, while other elements reveal new behaviors that will require further investigation before they may be fully understood. Events identified in the study reveal very different and often opposite behaviors of bubbles and blobs during solar minimum. In particular, more bubbles demonstrating deeper density fluctuations and faster perturbation plasma drifts typically occur earlier near the magnetic equator, while blobs of similar magnitude occur more often far away from the geomagnetic equator closer to midnight.
Rising above the Minimum Wage.
Even, William; Macpherson, David
An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population…
A proposal of comparative Maunder minimum cosmogenic isotope measurements
International Nuclear Information System (INIS)
Attolini, M.R.; Nanni, T.; Galli, M.; Povinec, P.
1989-01-01
There are at present contradictory conclusions about solar activity and cosmogenic isotope production variation during the Maunder Minimum. The interaction of the solar wind with galactic cosmic rays, and the dynamic behaviour of the Sun either as a system having an internal clock and/or as a forced non-linear system, are important aspects that can shed new light on solar physics, the Earth-Sun relationship and climatic variation. Essential progress in the matter might be made by clarifying the cosmogenic isotope production during the mentioned interval. Since the Be10 production appears to oscillate by about a factor of two during the Maunder Minimum, short-scale enhanced variations in tree-ring radiocarbon concentrations should also be expected for the same interval. It is therefore highly desirable that, for this interval, which the authors would identify with 1640-1720 AD, detailed concentration measurements both of Be10 (in dated polar ice, in addition to those of Beer et al.) and of tree-ring radiocarbon be made with cross-checking, in samples from different latitudes and longitudes and at both short and large distances from the sea. The samples could be taken, for example, from the central Mediterranean region, the Baltic region and other sites in central Europe and Asia.
Automatic Samples Selection Using Histogram of Oriented Gradients (HOG Feature Distance
Directory of Open Access Journals (Sweden)
Inzar Salfikar
2018-01-01
Full Text Available Finding victims at a disaster site is the primary goal of Search-and-Rescue (SAR) operations, and many technologies for searching for disaster victims through aerial imaging have emerged from research. Most of them, however, have difficulty detecting victims at tsunami disaster sites, where victims and background look similar. This research collects post-tsunami aerial imaging data from the internet to build a dataset and model for detecting tsunami disaster victims. The dataset is built from the distances between the features of samples, computed with the Histogram-of-Oriented-Gradients (HOG) method: samples are collected from each photo by measuring the HOG feature distances among all samples, the samples with the longest distances are taken as candidates for the dataset, and the victim (positive) and non-victim (negative) samples are then classified manually. The dataset of tsunami disaster victims was then analyzed using Leave-One-Out (LOO) cross-validation with the Support-Vector-Machine (SVM) method. The experimental results on two test photos show 61.70% precision, 77.60% accuracy, 74.36% recall and an f-measure of 67.44% in distinguishing victim (positive) from non-victim (negative) samples.
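The longest-distance sampling step can be illustrated with greedy farthest-point selection over feature vectors. This is a sketch of the idea only: it assumes the HOG descriptors have already been extracted, and the tiny vectors in the usage example are made up.

```python
import numpy as np

def select_diverse_samples(feats, k):
    """Greedy farthest-point selection: repeatedly add the sample whose
    minimum distance to the already-selected set is largest, so the
    candidates cover the feature space as widely as possible."""
    feats = np.asarray(feats, dtype=float)
    chosen = [0]  # start from an arbitrary sample
    while len(chosen) < k:
        # pairwise distances from every sample to each chosen sample
        d = np.linalg.norm(feats[:, None, :] - feats[None, chosen, :], axis=2)
        chosen.append(int(d.min(axis=1).argmax()))
    return chosen
```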
An Algorithm Based on the Self-Organized Maps for the Classification of Facial Features
Directory of Open Access Journals (Sweden)
Gheorghe Gîlcă
2015-12-01
Full Text Available This paper deals with an algorithm based on Self-Organized Map (SOM) networks which classifies facial features. The proposed algorithm can categorize the facial features defined by the input variables (eyebrow, mouth, eyelids) into a map of their groupings. The map of groups is based on calculating the distance between each input vector and each neuron of the output layer, the neuron with the minimum distance being declared the winner neuron. The network structure consists of two levels: the first level contains three input vectors, each having forty-one values, while the second level contains the SOM competitive network, which consists of 100 neurons. The proposed system can classify facial features quickly and easily using the proposed SOM-based algorithm.
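The winner-neuron rule (minimum distance between the input vector and each neuron's weights) is the core of the algorithm. A minimal 1-D SOM sketch follows; the unit count, iteration budget and learning schedule are arbitrary choices for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, n_units=100, iters=200, lr=0.5):
    """1-D SOM sketch: the winner is the unit with minimum Euclidean
    distance to the input; the winner and its neighbours move toward it."""
    w = rng.uniform(data.min(), data.max(), (n_units, data.shape[1]))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        win = np.linalg.norm(w - x, axis=1).argmin()           # winner neuron
        radius = max(1, int(n_units * (1 - t / iters) * 0.1))  # shrinking hood
        lo, hi = max(0, win - radius), min(n_units, win + radius + 1)
        w[lo:hi] += lr * (1 - t / iters) * (x - w[lo:hi])
    return w

def classify(w, x):
    """Declare the minimum-distance unit the winner for input x."""
    return int(np.linalg.norm(w - x, axis=1).argmin())
```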
Scaling of Natal Dispersal Distances in Terrestrial Birds and Mammals
Directory of Open Access Journals (Sweden)
Glenn D. Sutherland
2000-07-01
Full Text Available Natal dispersal is a process that is critical in the spatial dynamics of populations, including population spread, recolonization, and gene flow. It is a central focus of conservation issues for many vertebrate species. Using data for 77 bird and 68 mammal species, we tested whether median and maximum natal dispersal distances were correlated with body mass, diet type, social system, taxonomic family, and migratory status. Body mass and diet type were found to predict both median and maximum natal dispersal distances in mammals: large species dispersed farther than small ones, and carnivorous species dispersed farther than herbivores and omnivores. Similar relationships occurred for carnivorous bird species, but not for herbivorous or omnivorous ones. Natal dispersal distances in birds or mammals were not significantly related to broad categories of social systems. Only in birds were factors such as taxonomic relatedness and migratory status correlated with natal dispersal, and then only for maximum distances. Summary properties of dispersal processes appeared to be derived from interactions among behavioral and morphological characteristics of species and from their linkages to the dynamics of resource availability in landscapes. In all the species we examined, most dispersers moved relatively short distances, and long-distance dispersal was uncommon. On the basis of these findings, we fit an empirical model based on the negative exponential distribution for calculating minimum probabilities that animals disperse particular distances from their natal areas. This model, coupled with knowledge of a species' body mass and diet type, can be used to conservatively predict dispersal distances for different species and examine possible consequences of large-scale habitat alterations on connectedness between populations. Taken together, our results can provide managers with the means to identify species vulnerable to landscape-level habitat changes.
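The negative exponential model mentioned at the end has a simple closed form for the probability of dispersing beyond a given distance. The parameterization by the mean dispersal distance below is an assumption for illustration; the paper fits its own empirical coefficients from body mass and diet type.

```python
import math

def p_disperse_beyond(d, mean_distance):
    """Tail probability of a negative exponential dispersal kernel:
    P(D > d) = exp(-d / mean).  Mean-distance parameterization is an
    assumption, not the paper's fitted model."""
    return math.exp(-d / mean_distance)
```

At d = 0 the probability is 1, at the mean distance it has fallen to e^-1 (about 0.37), and it keeps decaying, matching the observation that long-distance dispersal is uncommon.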
Classifying Sluice Occurrences in Dialogue
DEFF Research Database (Denmark)
Baird, Austin; Hamza, Anissa; Hardt, Daniel
2018-01-01
…perform manual annotation with acceptable inter-coder agreement. We build classifier models with Decision Trees and Naive Bayes, with an accuracy of 67%. We deploy a classifier to automatically classify sluice occurrences in OpenSubtitles, resulting in a corpus with 1.7 million occurrences. This will support… Despite this, the corpus can be of great use in research on sluicing and development of systems, and we are making the corpus freely available on request. Furthermore, we are in the process of improving the accuracy of sluice identification and annotation for the purpose of creating a subsequent version…
Adam Głowacz; Witold Głowacz; Andrzej Głowacz
2010-01-01
The paper presents a method for diagnostics of imminent failure conditions of a synchronous motor. This method is based on a study of acoustic signals generated by the synchronous motor. The sound recognition system is based on data processing algorithms, such as MFCC and the Nearest Mean classifier with cosine distance. Software to recognize the sounds of the synchronous motor was implemented. The studies were carried out for four imminent failure conditions of the synchronous motor. The results confirm that the sys…
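The Nearest Mean classifier with cosine distance described above reduces to a few lines: average the training feature vectors per class, then assign a test vector to the class whose mean is closest in cosine distance. The toy vectors in the usage example stand in for MFCC features.

```python
import numpy as np

def fit_means(X, y):
    """Per-class mean of (e.g. MFCC) feature vectors."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def predict(means, x):
    """Nearest-mean rule: the class whose mean has the smallest
    cosine distance to the test vector."""
    return min(means, key=lambda c: cosine_distance(means[c], x))
```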
Directory of Open Access Journals (Sweden)
V. Indira
2015-03-01
Full Text Available The hydraulic brake is considered one of the most important components in automobile engineering. Condition monitoring and fault diagnosis of such a component is essential for the safety of passengers and vehicles and for minimizing unexpected maintenance time. Vibration-based machine learning approaches for condition monitoring of hydraulic brake systems are gaining momentum. Training and testing the classifier are two important activities in the process of feature classification. This study proposes a systematic statistical method called power analysis to find the minimum number of samples required to train the classifier with statistical stability so as to get good classification accuracy. Descriptive statistical features have been used, and the most contributing features have been selected using the C4.5 decision tree algorithm. The results of the power analysis have also been verified using a decision tree algorithm, namely C4.5.
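As a rough illustration of why a minimum training-sample count matters, a normal-approximation bound on the samples needed to estimate a classification accuracy within a given margin can be computed as below. This is a generic textbook formula, not the power-analysis procedure the study actually uses.

```python
import math

def min_samples(p=0.5, margin=0.05, z=1.96):
    """Normal-approximation sample size for estimating an accuracy p
    to within +/-margin at ~95% confidence (z = 1.96).  p = 0.5 is the
    conservative worst case; a stand-in for the study's power analysis."""
    return math.ceil(z * z * p * (1 - p) / (margin * margin))
```

With the worst-case p = 0.5 and a 5% margin this gives the familiar 385 samples; loosening the margin to 10% drops the requirement to 97.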
Employment effects of minimum wages
Neumark, David
2014-01-01
The potential benefits of higher minimum wages come from the higher wages for affected workers, some of whom are in low-income families. The potential downside is that a higher minimum wage may discourage employers from using the low-wage, low-skill workers that minimum wages are intended to help. Research findings are not unanimous, but evidence from many countries suggests that minimum wages reduce the jobs available to low-skill workers.
Classifiers based on optimal decision rules
Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2013-11-25
Based on dynamic programming approach we design algorithms for sequential optimization of exact and approximate decision rules relative to the length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification-exact or approximate; and (ii) which order of optimization gives better results of classifier work: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than the ordinary optimization (length or coverage).
Directory of Open Access Journals (Sweden)
Victor Hugo C. de Albuquerque; Cleisson V. Barbosa; Cleiton C. Silva; Elineudo P. Moura; Pedro P. Rebouças Filho; João P. Papa; João Manuel R. S. Tavares
2015-05-01
Full Text Available Secondary phases, such as laves and carbides, are formed during the final solidification stages of nickel-based superalloy coatings deposited during the gas tungsten arc welding cold wire process. However, when aged at high temperatures, other phases can precipitate in the microstructure, like the γ'' and δ phases. This work presents an evaluation of the powerful optimum path forest (OPF) classifier configured with six distance functions to classify background echo and backscattered ultrasonic signals from samples of the Inconel 625 superalloy thermally aged at 650 and 950 °C for 10, 100 and 200 h. The background echo and backscattered ultrasonic signals were acquired using transducers with frequencies of 4 and 5 MHz. The potentiality of ultrasonic sensor signals combined with the OPF to characterize the microstructures of an Inconel 625 thermally aged and in the as-welded condition was confirmed by the results. The experimental results revealed that the OPF classifier is sufficiently fast (classification total time of 0.316 ms) and accurate (accuracy of 88.75% and harmonic mean of 89.52) for the proposed application.
Directory of Open Access Journals (Sweden)
Shehzad Khalid
2014-01-01
Full Text Available We present a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of the various classes whilst identifying and filtering noisy training data. This noise-free data is further used to learn models for other classifiers such as GMM and SVM. A weight learning method is then introduced to learn weights on each class for different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as Adaboost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method compared to its competitors, especially in the presence of class label noise and imbalanced classes.
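The weight-learning idea, searching for classifier weights that maximize the accuracy of a weighted vote, can be sketched with a toy genetic search. Everything below (per-classifier rather than per-class weights, population size, mutation scale, the two-classifier usage example) is a simplifying assumption, not the paper's m-Mediods pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

def weighted_vote(preds, w):
    """preds: (n_classifiers, n_samples) hard labels; each sample goes
    to the class whose supporting classifiers carry the most weight."""
    classes = np.unique(preds)
    scores = np.array([[w[preds[:, i] == c].sum() for c in classes]
                       for i in range(preds.shape[1])])
    return classes[scores.argmax(axis=1)]

def evolve_weights(preds, y, pop=30, gens=40, sigma=0.1):
    """Toy genetic search: keep the fitter half of the weight-vector
    population, refill with mutated copies, repeat."""
    P = rng.random((pop, preds.shape[0]))
    for _ in range(gens):
        fit = np.array([(weighted_vote(preds, w) == y).mean() for w in P])
        elite = P[fit.argsort()[::-1][: pop // 2]]
        mutated = np.clip(elite + rng.normal(0, sigma, elite.shape), 0, None)
        P = np.vstack([elite, mutated])
    fit = np.array([(weighted_vote(preds, w) == y).mean() for w in P])
    return P[fit.argmax()]
```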
Evaluating a k-nearest neighbours-based classifier for locating faulty areas in power systems
Directory of Open Access Journals (Sweden)
Juan José Mora Flórez
2008-09-01
Full Text Available This paper reports a strategy for identifying and locating faults in a power distribution system. The strategy is based on the k-nearest neighbours technique, which simply estimates a distance from the features describing a particular fault being classified to the faults presented during the training stage. If new data is presented to the proposed fault locator, it is classified according to the nearest example recovered. A characterisation of the voltage and current measurements obtained at one single line end is also presented for assigning the faulted area in a power system. The proposed strategy was tested in a real power distribution system, obtaining average confidence indexes of 93%, a good indicator of the proposal's high performance. The results showed how a fault can be located using features obtained from voltage and current, improving utility response and thereby improving system continuity indexes in power distribution systems.
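The k-nearest-neighbours rule the strategy relies on is shown below in minimal form; the feature vectors and zone labels in the usage example are invented placeholders for the voltage/current descriptors in the paper.

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=1):
    """Assign the majority label among the k training examples nearest
    to x (Euclidean distance over the feature vectors)."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[counts.argmax()]
```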
Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C
2016-08-01
Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the symbolic data analysis field. One such data type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method which is capable of performing both automatic selection of features and pruning of unused prototypes and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes and structures. When compared to other prototype-based methods, the proposed method achieves lower error rates in both synthetic and real interval datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
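A common form of squared Euclidean distance for interval data compares lower and upper bounds feature-by-feature, with optional per-feature weights. The paper's generalized weighted distance may differ in detail; this is a minimal sketch of the idea.

```python
import numpy as np

def interval_distance(a, b, w=None):
    """Weighted squared Euclidean distance between interval vectors,
    each given as an (n_features, 2) array of [lower, upper] bounds.
    The bound-wise form here is an assumption about the generalization."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    if w is None:
        w = np.ones(len(a))
    # squared differences of lower and upper bounds, summed per feature
    return float((w * ((a - b) ** 2).sum(axis=1)).sum())
```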
COMPARISON OF EUCLIDEAN DISTANCE AND CANBERRA DISTANCE IN FACE RECOGNITION
Directory of Open Access Journals (Sweden)
Sendhy Rachmat Wurdianarto
2014-08-01
Full Text Available Computer science is developing very rapidly. One sign of this is that computer science has entered the field of biometrics. Biometrics refers to human characteristics that can be used to distinguish one person from another. One use of such a characteristic or body part of each person for identification (recognition) is the face. Against this background, this work explores a Matlab application for face recognition using the Euclidean Distance and Canberra Distance methods. The application was developed with the waterfall model, which comprises a sequence of process activities: requirements analysis, design using UML (Unified Modeling Language), and processing of the input images using Euclidean Distance and Canberra Distance. The conclusion is that each of the two methods in the face recognition application has its own strengths and weaknesses. In the future, the application could be extended to handle video or other kinds of objects. Keywords: Euclidean Distance, Face Recognition, Biometrics, Canberra Distance
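The two distances being compared have simple definitions; a minimal sketch on plain feature vectors follows (the actual application works on face images in Matlab, so the vectors here are illustrative only).

```python
import math

def euclidean(a, b):
    """Straight-line distance between feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def canberra(a, b):
    """Canberra distance normalizes each coordinate difference by the
    coordinate magnitudes, making it more sensitive near zero."""
    return sum(abs(x - y) / (abs(x) + abs(y))
               for x, y in zip(a, b) if abs(x) + abs(y) > 0)
```

Because of the normalization, Canberra bounds each coordinate's contribution by 1, whereas Euclidean contributions grow with the raw difference; this is one source of the differing strengths and weaknesses the abstract mentions.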
International Nuclear Information System (INIS)
Svoren, J.
1982-01-01
The present statistical analysis is based on a sample of long-period comets selected according to two criteria: (1) availability of photometric observations made at large distances from the Sun and covering an orbital arc long enough for a reliable determination of the photometric parameters, and (2) availability of a well determined orbit making it possible to classify the comet as new or old in Oort's (1950) sense. The selection was confined to comets with nearly parabolic orbits. 67 objects were found to satisfy the selection criteria. Photometric data referring to heliocentric distances of r > 2.5 AU were only used, yielding a total of 2,842 individual estimates and measurements. (Auth.)
Fields, Gary S.; Kanbur, Ravi
2005-01-01
Textbook analysis tells us that in a competitive labor market, the introduction of a minimum wage above the competitive equilibrium wage will cause unemployment. This paper makes two contributions to the basic theory of the minimum wage. First, we analyze the effects of a higher minimum wage in terms of poverty rather than in terms of unemployment. Second, we extend the standard textbook model to allow for income sharing between the employed and the unemployed. We find that there are situation...
Heat stroke risk for open-water swimmers during long-distance events.
Macaluso, Filippo; Barone, Rosario; Isaacs, Ashwin W; Farina, Felicia; Morici, Giuseppe; Di Felice, Valentina
2013-12-01
Open-water swimming is a rapidly growing sport discipline worldwide, and clinical problems associated with long-distance swimming are now better recognized and managed more effectively. The most prevalent medical risk associated with an open-water swimming event is hypothermia; therefore, the Federation Internationale De Natation (FINA) has instituted 2 rules to reduce this occurrence related to the minimum water temperature and the time taken to complete the race. Another medical risk that is relevant to open-water swimmers is heat stroke, a condition that can easily go unnoticed. The purpose of this review is to shed light on this physiological phenomenon by examining the physiological response of swimmers during long-distance events, to define a maximum water temperature limit for competitions. We conclude that competing in water temperatures exceeding 33°C should be avoided. Copyright © 2013 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.
76 FR 34761 - Classified National Security Information
2011-06-14
... MARINE MAMMAL COMMISSION Classified National Security Information [Directive 11-01] AGENCY: Marine... Commission's (MMC) policy on classified information, as directed by Information Security Oversight Office... of Executive Order 13526, ``Classified National Security Information,'' and 32 CFR part 2001...
Error minimizing algorithms for nearest neighbor classifiers
Energy Technology Data Exchange (ETDEWEB)
Porter, Reid B. [Los Alamos National Laboratory]; Hush, Don [Los Alamos National Laboratory]; Zimmer, G. Beate [Texas A&M]
2011-01-03
Stack Filters define a large class of discrete nonlinear filters, first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical error based loss functions. We use the framework to investigate a new cost sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.
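As a hedged illustration of training nearest-neighbor-type classifiers for low false alarm rate applications (not the OHM construction itself), a cost-sensitive k-NN vote can raise the evidence required for a positive call; the `fa_cost` parameter and the vote-share threshold are assumptions of this sketch:

```python
def knn_cost_sensitive(train, query, k=3, fa_cost=2.0):
    """k-NN vote in which predicting the positive class (1) requires the
    positive vote share to exceed fa_cost / (fa_cost + 1), so a higher
    false-alarm cost makes positive calls more conservative.
    train is a list of (vector, label) pairs with labels 0 or 1."""
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    neighbors = sorted(train, key=lambda p: dist(p[0], query))[:k]
    pos_share = sum(label for _, label in neighbors) / k
    return 1 if pos_share > fa_cost / (fa_cost + 1.0) else 0

train = [([0, 0], 0), ([0, 1], 0), ([1, 0], 1), ([1, 1], 1)]
print(knn_cost_sensitive(train, [0.9, 0.9], fa_cost=1.0))  # 1
print(knn_cost_sensitive(train, [0.9, 0.9], fa_cost=2.0))  # 0
```

With equal costs a 2-of-3 positive vote suffices; doubling the false-alarm cost demands a unanimous vote, so the same query is rejected.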
2010-02-08
... capital and reserve requirements to be issued by order or regulation with respect to a product or activity... minimum capital requirements. Section 1362(a) establishes a minimum capital level for the Enterprises... entities required under this section.\\6\\ \\3\\ The Bank Act's current minimum capital requirements apply to...
Aggregation Operator Based Fuzzy Pattern Classifier Design
DEFF Research Database (Denmark)
Mönks, Uwe; Larsen, Henrik Legind; Lohweg, Volker
2009-01-01
This paper presents a novel modular fuzzy pattern classifier design framework for intelligent automation systems, developed on the basis of the established Modified Fuzzy Pattern Classifier (MFPC), which allows the design of novel classifier models that are hardware-efficiently implementable. ... The performances of novel classifiers using substitutes for the MFPC's geometric mean aggregator are benchmarked in the scope of an image processing application against the MFPC to reveal classification improvement potentials for obtaining higher classification rates. ...
A Pareto-Improving Minimum Wage
Eliav Danziger; Leif Danziger
2014-01-01
This paper shows that a graduated minimum wage, in contrast to a constant minimum wage, can provide a strict Pareto improvement over what can be achieved with an optimal income tax. The reason is that a graduated minimum wage requires high-productivity workers to work more to earn the same income as low-productivity workers, which makes it more difficult for the former to mimic the latter. In effect, a graduated minimum wage allows the low-productivity workers to benefit from second-degree pr...
DEFF Research Database (Denmark)
Govindan, Kannan; Jafarian, Ahmad; Nourbakhsh, Vahid
2015-01-01
simultaneously considering the sustainable OAP in the sustainable SCND as a strategic decision. The proposed supply chain network is composed of five echelons including suppliers classified in different classes, plants, distribution centers that dispatch products via two different ways, direct shipment......, a novel multi-objective hybrid approach called MOHEV with two strategies for its best particle selection procedure (BPSP), minimum distance, and crowding distance is proposed. MOHEV is constructed through hybridization of two multi-objective algorithms, namely the adapted multi-objective electromagnetism...
Psychological distance of pedestrian at the bus terminal area
Firdaus Mohamad Ali, Mohd; Salleh Abustan, Muhamad; Hidayah Abu Talib, Siti; Abustan, Ismail; Rahman, Noorhazlinda Abd; Gotoh, Hitoshi
2018-03-01
Walking is a mode of transportation that is effective for pedestrians on both short and long trips. Everyone is classified as a pedestrian because people walk every day, and a higher number of people walking leads to crowded conditions, which is why it is important to study pedestrian behaviour, specifically psychological distance, both indoors and outdoors. The number of studies of crowd dynamics among pedestrians has increased due to concern about safety issues, primarily in emergency cases such as fires, earthquakes, festivals, etc. An observation of pedestrians was conducted at one of the main bus terminals in Kuala Lumpur with the main objective of obtaining pedestrian psychological distance. It lasted 45 minutes, using a camcorder set up on a tripod on the floor above the observation area at the main lobby, and the captured area was approximately 100 m². The analysis focused on obtaining the gap between pedestrians in two categories: (a) pedestrians with a relationship, and (b) pedestrians without a relationship. In total, 1,766 data points were obtained: 561 for "pedestrians with a relationship" and 1,205 for "pedestrians without a relationship". Based on the results, "pedestrians without a relationship" showed a slightly higher average psychological distance than "pedestrians with a relationship": 1.6360 m versus 1.5909 m, respectively. Broken down by gender, "pedestrians without a relationship" also had a higher mean psychological distance in all three categories. It can therefore be concluded that pedestrians without a relationship tend to keep a longer distance when walking in crowds.
Deza, Michel Marie
2016-01-01
This 4th edition of the leading reference volume on distance metrics is characterized by updated and rewritten sections on some items suggested by experts and readers, as well as a general streamlining of content and the addition of essential new topics. Though the structure remains unchanged, the new edition also explores recent advances in the use of distances and metrics for e.g. generalized distances, probability theory, graph theory, coding theory, data analysis. New topics in the purely mathematical sections include e.g. the Vitanyi multiset-metric, algebraic point-conic distance, triangular ratio metric, Rossi-Hamming metric, Taneja distance, spectral semimetric between graphs, channel metrization, and Maryland bridge distance. The multidisciplinary sections have also been supplemented with new topics, including: dynamic time warping distance, memory distance, allometry, atmospheric depth, elliptic orbit distance, VLBI distance measurements, the astronomical system of units, and walkability distance. Lea...
International Nuclear Information System (INIS)
Dam, H. van; Leege, P.F.A. de
1987-01-01
An analysis is presented of thermal systems with minimum critical mass, based on the use of materials with optimum neutron moderating and reflecting properties. The optimum fissile material distributions in the systems are obtained by calculations with standard computer codes, extended with a routine for flat fuel importance search. It is shown that in the minimum critical mass configuration a considerable part of the fuel is positioned in the reflector region. For ²³⁹Pu a minimum critical mass of 87 g is found, which is the lowest value reported hitherto. (author)
National Research Council Canada - National Science Library
Braddock, Joseph
1997-01-01
A study reviewing the existing Army Distance Learning Plan (ADLP) and current Distance Learning practices, with a focus on the Army's training and educational challenges and the benefits of applying Distance Learning techniques...
Composite Classifiers for Automatic Target Recognition
National Research Council Canada - National Science Library
Wang, Lin-Cheng
1998-01-01
...) using forward-looking infrared (FLIR) imagery. Two existing classifiers, one based on learning vector quantization and the other on modular neural networks, are used as the building blocks for our composite classifiers...
Hybrid Neuro-Fuzzy Classifier Based On Nefclass Model
Directory of Open Access Journals (Sweden)
Bogdan Gliwa
2011-01-01
Full Text Available The paper presents a hybrid neuro-fuzzy classifier, based on a modified NEFCLASS model. The presented classifier was compared to popular classifiers: neural networks and k-nearest neighbours. The efficiency of the modifications in the classifier was compared with the learning methods used in the original NEFCLASS model. The accuracy of the classifier was tested using 3 datasets from the UCI Machine Learning Repository: iris, wine, and breast cancer Wisconsin. Moreover, the influence of ensemble classification methods on classification accuracy was presented.
REPRESENTATIONS OF DISTANCE: DIFFERENCES IN UNDERSTANDING DISTANCE ACCORDING TO TRAVEL METHOD
Directory of Open Access Journals (Sweden)
Gunvor Riber Larsen
2017-12-01
Full Text Available This paper explores how Danish tourists represent distance in relation to their holiday mobility and how these representations of distance are a result of being aero-mobile as opposed to being land-mobile. Based on interviews with Danish tourists, whose holiday mobility ranges from the European continent to global destinations, the first part of this qualitative study identifies three categories of representations of distance that show how distance is being ‘translated’ by the tourists into non-geometric forms: distance as resources, distance as accessibility, and distance as knowledge. The representations of distance articulated by the Danish tourists show that distance is often not viewed in ‘just’ kilometres. Rather, it is understood in forms that express how transcending the physical distance through holiday mobility is dependent on individual social and economic contexts, and on whether the journey was undertaken by air or land. The analysis also shows that being aero-mobile is the holiday transportation mode that removes the tourists the furthest away from physical distance, resulting in the distance travelled by air being represented in ways that have the least correlation, in the tourists’ minds, with physical distance measured in kilometres.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum wage. 551.301 Section 551.301... FAIR LABOR STANDARDS ACT Minimum Wage Provisions Basic Provision § 551.301 Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...
International Nuclear Information System (INIS)
Stenin, V.Ya.; Stepanov, P.V.
2015-01-01
A hardened DICE cell layout design is based on two spaced transistor clusters of the DICE cell, each consisting of four transistors. The larger the distance between these two CMOS transistor clusters, the more robust the hardened DICE SRAM is to single event upsets. Some versions of the 28-nm and 65-nm DICE CMOS SRAM block composition have been suggested with minimum cluster distances of 2.27-2.32 µm. The area of hardened 28-nm DICE CMOS cells is larger than the area of 28-nm 6T CMOS cells by a factor of 2.1.
Directory of Open Access Journals (Sweden)
Keiichi Kubota
2007-03-01
Full Text Available We implemented a synchronous distance course entitled Introductory Finance, designed for undergraduate students and held between two Japanese universities. Stable Internet connections with minimal delay and minimal interruption of the audio-video streams were used. Students were equipped with their own PCs pre-loaded with learning materials and Microsoft Excel exercises. These accompanying course and exercise materials helped students comprehend the mathematical equations and statistical numerical exercises that are indispensable to learning introductory finance effectively. However, the general tendency of students in Japan not to raise questions during class hours proved to be a major obstacle. Motivational devices are therefore needed and should ideally be combined to promote interaction within the e-classrooms.
A test for the minimum scale of grooving on the Amatrice and Norcia earthquakes
Okamoto, K.; Brodsky, E. E.; Billi, A.
2017-12-01
As stress builds up along a fault, elastic strain energy accumulates until it can no longer be accommodated by small-scale ductile deformation, and the fault then fails in a brittle manner. This brittle failure is associated with the grooving process that produces slickensides along fault planes; therefore, the scale at which slickensides disappear could be geological evidence of earthquake nucleation. Past studies found a minimum scale of grooving; however, the fault surfaces studied were not exposed by recent earthquakes, so those measurements could have been a product of chemical or mechanical weathering. On August 24th and October 30th of 2016, MW 6.0 and 6.5 earthquakes shook central Italy. The earthquakes caused decimeter- to meter-scale fault scarps along the Mt. Vettoretto Fault. Here, we analyze samples of a scarp using white light interferometry in order to determine whether a minimum scale of grooving is present. Results suggest that grooving begins around 100 μm for these samples, which is consistent with previous findings on faults without any direct evidence of earthquakes. The measurement is also consistent with typical values of the frictional weakening distance D_c, which is likewise associated with a transition between ductile and brittle behavior. The measurements show that the minimum scale of grooving is a useful measure of fault behavior.
36 CFR 1256.46 - National security-classified information.
2010-07-01
... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false National security-classified... Restrictions § 1256.46 National security-classified information. In accordance with 5 U.S.C. 552(b)(1), NARA... properly classified under the provisions of the pertinent Executive Order on Classified National Security...
Directory of Open Access Journals (Sweden)
Kim Guan SAW
2017-10-01
Full Text Available This article revisits the cognitive load theory to explore the use of worked examples to teach a selected topic in a higher level undergraduate physics course for distance learners at the School of Distance Education, Universiti Sains Malaysia. After a break of several years from formal education, and having only a minimal science background, distance learners need an appropriate instructional strategy for courses that require complex conceptualization and mathematical manipulations. As the working memory is limited, distance learners need to acquire domain specific knowledge in stages to lessen cognitive load. This article charts a learning task with a lower cognitive load to teach the Fermi-Dirac distribution and demonstrates the use of sequential worked examples. Content taught in stages using worked examples can be presented as a form of didactic conversation to reduce transactional distance. This instructional strategy can be applied to similar challenging topics in other well-structured domains in a distance learning environment.
Solar wind and coronal structure near sunspot minimum - Pioneer and SMM observations from 1985-1987
Mihalov, J. D.; Barnes, A.; Hundhausen, A. J.; Smith, E. J.
1990-01-01
Changes in solar wind speed and magnetic polarity observed at the Pioneer spacecraft are discussed here in terms of the changing magnetic geometry implied by SMM coronagraph observations over the period 1985-1987. The pattern of recurrent solar wind streams, the long-term average speed, and the sector polarity of the interplanetary magnetic field all changed in a manner suggesting both a temporal variation, and a changing dependence on heliographic latitude. Coronal observations during this epoch show a systematic variation in coronal structure and the magnetic structure imposed on the expanding solar wind. These observations suggest interpretation of the solar wind speed variations in terms of the familiar model where the speed increases with distance from a nearly flat interplanetary current sheet, and where this current sheet becomes aligned with the solar equatorial plane as sunspot minimum approaches, but deviates rapidly from that orientation after minimum.
International Nuclear Information System (INIS)
Cheng Shaoyong; Xiu Shixin; Wang Jimei; Shen Zhengchao
2006-01-01
The greenhouse effect of SF₆ is a great concern today, so the development of high-voltage vacuum circuit breakers becomes more important; the vacuum circuit breaker causes minimal pollution to the environment. The vacuum interrupter is the key part of a vacuum circuit breaker. The interrupting characteristics in vacuum and the arc-controlling technique at longer gap distances are the main problems to be solved in developing high-voltage vacuum interrupters. To understand vacuum arc characteristics and provide an effective technique for controlling the vacuum arc over a long gap distance, the arc mode transition of a cup-type axial magnetic field electrode was observed by a high-speed charge-coupled device (CCD) video camera at different gap distances while the arc voltage and arc current were recorded. The controlling ability of the axial magnetic field on the vacuum arc decreases markedly when the gap distance exceeds 40 mm, and the noise components and mean value of the arc voltage increase significantly. Based on the test results, an effective method for controlling vacuum arc characteristics at long gap distances is provided. The results can be used as a reference for developing high-voltage, large-capacity vacuum interrupters.
Class-specific Error Bounds for Ensemble Classifiers
Energy Technology Data Exchange (ETDEWEB)
Prenger, R; Lemmond, T; Varshney, K; Chen, B; Hanley, W
2009-10-06
The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.
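The ROC curve underlying these class-specific bounds can be computed empirically by sweeping a decision threshold over classifier scores; a minimal sketch with toy scores (not the paper's ensembles or bounds):

```python
def roc_points(scores, labels):
    """Empirical ROC: sweep a threshold over the distinct scores and record
    (false-alarm rate, detection rate) pairs; the positive label is 1."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

scores = [0.9, 0.8, 0.3, 0.1]   # e.g. ensemble vote fractions
labels = [1, 0, 1, 0]
print(roc_points(scores, labels))  # [(0.0, 0.5), (0.5, 0.5), (0.5, 1.0), (1.0, 1.0)]
```

Each point trades false alarms against missed detections, which is the curve the class-specific strength and correlation of the base classifiers bound from below.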
Deconvolution When Classifying Noisy Data Involving Transformations
Carroll, Raymond
2012-09-01
In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.
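A minimal sketch of the baseline invert-then-classify idea discussed here, using exact Fourier-domain deconvolution with a known circular kernel; note the article argues that the properly chosen inverse is not the one that recovers the original signal, so this naive pipeline is only the starting point, and the signals below are invented for illustration:

```python
import cmath

def dft(x, sign=-1):
    # Naive discrete Fourier transform (O(N^2)); sign=+1 gives the inverse
    # up to a 1/N factor.
    n = len(x)
    return [sum(x[m] * cmath.exp(sign * 2j * cmath.pi * k * m / n)
                for m in range(n)) for k in range(n)]

def deconvolve(y, h):
    # Invert a circular convolution y = x * h by pointwise division in the
    # Fourier domain (assumes the kernel spectrum has no zeros and no noise).
    n = len(y)
    xhat = [yk / hk for yk, hk in zip(dft(y), dft(h))]
    return [v.real / n for v in dft(xhat, sign=+1)]

# Known blur kernel and a blurred signal; exact inversion recovers the
# input, which could then be fed to any classifier.
x = [1.0, 2.0, 3.0, 4.0]
h = [1.0, 0.5, 0.0, 0.0]
y = [sum(x[m] * h[(n - m) % 4] for m in range(4)) for n in range(4)]
print([round(v, 6) for v in deconvolve(y, h)])  # [1.0, 2.0, 3.0, 4.0]
```

With additive noise, raw spectral division amplifies high frequencies, which is why the article's data-driven, cross-validated inversion deliberately does not target exact recovery.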
Deconvolution When Classifying Noisy Data Involving Transformations.
Carroll, Raymond; Delaigle, Aurore; Hall, Peter
2012-09-01
In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.
Just-in-time classifiers for recurrent concepts.
Alippi, Cesare; Boracchi, Giacomo; Roveri, Manuel
2013-04-01
Just-in-time (JIT) classifiers operate in evolving environments by classifying instances and reacting to concept drift. In stationary conditions, a JIT classifier improves its accuracy over time by exploiting additional supervised information coming from the field. In nonstationary conditions, however, the classifier reacts as soon as concept drift is detected; the current classification setup is discarded and a suitable one activated to keep the accuracy high. We present a novel generation of JIT classifiers able to deal with recurrent concept drift by means of a practical formalization of the concept representation and the definition of a set of operators working on such representations. The concept-drift detection activity, which is crucial in promptly reacting to changes exactly when needed, is advanced by considering change-detection tests monitoring both inputs and classes distributions.
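A hedged sketch of the drift-reaction trigger such a classifier needs: monitor the sliding-window error rate of the deployed classifier and flag when it exceeds the stationary baseline. The window size, baseline, and margin below are illustrative assumptions, not the paper's change-detection tests:

```python
from collections import deque

def drift_monitor(errors, window=30, baseline=0.10, margin=0.15):
    """Return the first index at which the sliding-window error rate
    exceeds baseline + margin, or None if no drift is flagged.
    errors is a 0/1 stream (1 = the classifier misclassified)."""
    recent = deque(maxlen=window)
    for i, e in enumerate(errors):
        recent.append(e)
        if len(recent) == window and sum(recent) / window > baseline + margin:
            return i
    return None

# Stationary phase, then an abrupt concept drift at sample 50.
errors = [0] * 50 + [1] * 20
print(drift_monitor(errors))  # 57
```

On such a trigger, a JIT classifier would discard the current classification setup and activate one suited to the new (or recurring) concept.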
On the minimum core mass for giant planet formation at wide separations
International Nuclear Information System (INIS)
Piso, Ana-Maria A.; Youdin, Andrew N.
2014-01-01
In the core accretion hypothesis, giant planets form by gas accretion onto solid protoplanetary cores. The minimum (or critical) core mass to form a gas giant is typically quoted as 10 M⊕. The actual value depends on several factors: the location in the protoplanetary disk, atmospheric opacity, and the accretion rate of solids. Motivated by ongoing direct imaging searches for giant planets, this study investigates core mass requirements in the outer disk. To determine the fastest allowed rates of gas accretion, we consider solid cores that no longer accrete planetesimals, as this would heat the gaseous envelope. Our spherical, two-layer atmospheric cooling model includes an inner convective region and an outer radiative zone that matches onto the disk. We determine the minimum core mass for a giant planet to form within a typical disk lifetime of 3 Myr. The minimum core mass declines with disk radius, from ∼8.5 M⊕ at 5 AU to ∼3.5 M⊕ at 100 AU, with standard interstellar grain opacities. Lower temperatures in the outer disk explain this trend, while variations in disk density are less influential. At all distances, a lower dust opacity or higher mean molecular weight reduces the critical core mass. Our non-self-gravitating, analytic cooling model reveals that self-gravity significantly affects early atmospheric evolution, starting when the atmosphere is only ∼10% as massive as the core.
Distribution of the minimum path on percolation clusters: A renormalization group calculation
International Nuclear Information System (INIS)
Hipsh, Lior.
1993-06-01
This thesis uses the renormalization group to study the chemical distance, or minimal path, on percolation clusters on a two-dimensional square lattice. Our aims are to calculate analytically (by iteration) the fractal dimension of the minimal path, d_min, and the distributions of the minimum path length l_min for different lattice sizes and different starting densities (including the threshold value p_c), and to seek an analytic form that describes these distributions. The probability of obtaining a given minimum path for each linear size L is calculated by iterating the distribution of l_min for the basic 2*2 cell up to successively larger scales, using the H-cell renormalization group. For the threshold value of p and for values near p_c, we confirm a scaling of the form P(l, L) = (1/l) f(l/L^d_min), where L is the linear size and l the minimum path length. The distribution can also be represented in Fourier space, so we attempt to solve the renormalization group equations in this space. A numerical fit is produced and compared to existing numerical results. In order to improve the agreement between the renormalization group and the numerical simulations, we also present attempts to generalize the renormalization group by adding more parameters, e.g. correlations between bonds in different directions or finite densities for occupation of bonds and sites. (author) 17 refs
Deza, Michel Marie
2014-01-01
This updated and revised third edition of the leading reference volume on distance metrics includes new items from very active research areas in the use of distances and metrics such as geometry, graph theory, probability theory and analysis. Among the new topics included are, for example, polyhedral metric space, nearness matrix problems, distances between belief assignments, distance-related animal settings, diamond-cutting distances, natural units of length, Heidegger’s de-severance distance, and brain distances. The publication of this volume coincides with intensifying research efforts into metric spaces and especially distance design for applications. Accurate metrics have become a crucial goal in computational biology, image analysis, speech recognition and information retrieval. Leaving aside the practical questions that arise during the selection of a ‘good’ distance function, this work focuses on providing the research community with an invaluable comprehensive listing of the main available di...
Comparing Phylogenetic Trees by Matching Nodes Using the Transfer Distance Between Partitions.
Bogdanowicz, Damian; Giaro, Krzysztof
2017-05-01
Ability to quantify the dissimilarity of different phylogenetic trees describing the relationship between the same group of taxa is required in various types of phylogenetic studies. For example, such metrics are used to assess the quality of phylogeny construction methods, to define optimization criteria in supertree building algorithms, or to find horizontal gene transfer (HGT) events. Among the set of metrics described so far in the literature, the most commonly used seems to be the Robinson-Foulds distance. In this article, we define a new metric for rooted trees: the Matching Pair (MP) distance. The MP metric uses the concept of the minimum-weight perfect matching in a complete bipartite graph constructed from partitions of all pairs of leaves of the compared phylogenetic trees. We analyze the properties of the MP metric and present computational experiments showing its potential applicability in tasks related to finding the HGT events.
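The minimum-weight perfect matching at the heart of such tree metrics can be illustrated with a brute-force solver over a toy cost matrix; real implementations use the Hungarian algorithm, and the costs below are invented for illustration, not derived from actual leaf-pair partitions:

```python
from itertools import permutations

def min_weight_matching(cost):
    """Brute-force minimum-weight perfect matching on an n-by-n cost matrix
    (rows: elements of tree 1, columns: elements of tree 2). Feasible only
    for small n; column j = p[i] is matched to row i."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return sum(cost[i][best[i]] for i in range(n)), list(best)

# Toy pairwise costs between partitions of two 3-element trees.
cost = [[0, 2, 3],
        [2, 0, 3],
        [3, 3, 1]]
total, match = min_weight_matching(cost)
print(total, match)  # 1 [0, 1, 2]
```

The resulting total weight plays the role of the tree-to-tree distance; for larger inputs one would swap in an O(n³) Hungarian-method solver.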
International Nuclear Information System (INIS)
Peng, W.H.
1977-01-01
A specialized moments-method computer code was constructed for the calculation of the even spatial moments of the scalar flux, φ_2n, through 2n = 80. Neutron slowing-down and transport in a medium with constant cross sections was examined and the effect of a superimposed square-well cross section minimum on the penetrating flux was studied. In the constant cross section case, for nuclei that are not too light, the scalar flux is essentially independent of the nuclide mass. The numerical results obtained were used to test the validity of existing analytic approximations to the flux at both small and large lethargies relative to the source energy. As a result it was possible to define the regions in the lethargy-distance plane where these analytic solutions apply with reasonable accuracy. A parametric study was made of the effect of a square-well cross section minimum on neutron fluxes at energies below the minimum. It was shown that the flux at energies well below the minimum is essentially independent of the position of the minimum in lethargy. The results can be described by a convolution-of-sources model involving only the lethargy separation between detector and source, the width and the relative depth of the minimum. On the basis of the computations and the corresponding model, it is possible to predict, e.g., the conditions under which transport in the region of the minimum completely determines the penetrating flux. At the other extreme, the model describes when the transport in the minimum can be treated in the same manner as in any comparable lethargy interval. With the aid of these criteria it is possible to understand the apparent paradoxical effects of certain minima in neutron penetration through such media as iron and sodium.
Comparing classifiers for pronunciation error detection
Strik, H.; Truong, K.; Wet, F. de; Cucchiarini, C.
2007-01-01
Providing feedback on pronunciation errors in computer assisted language learning systems requires that pronunciation errors be detected automatically. In the present study we compare four types of classifiers that can be used for this purpose: two acoustic-phonetic classifiers (one of which employs
Interplay between strong correlation and adsorption distances: Co on Cu(001)
Bahlke, Marc Philipp; Karolak, Michael; Herrmann, Carmen
2018-01-01
Adsorbed transition metal atoms can have partially filled d or f shells due to strong on-site Coulomb interaction. Capturing all effects originating from electron correlation in such strongly correlated systems is a challenge for electronic structure methods. It requires a sufficiently accurate description of the atomistic structure (in particular bond distances and angles), which is usually obtained from first-principles Kohn-Sham density functional theory (DFT), which due to the approximate nature of the exchange-correlation functional may provide an unreliable description of strongly correlated systems. To elucidate the consequences of this popular procedure, we apply a combination of DFT with the Anderson impurity model (AIM), as well as DFT+U, for a calculation of the potential energy surface along the Co/Cu(001) adsorption coordinate, and compare the results with those obtained from DFT. The adsorption minimum is shifted towards larger distances by applying DFT+AIM, or the much cheaper DFT+U method, compared to the corresponding spin-polarized DFT results, by a magnitude comparable to variations between different approximate exchange-correlation functionals (0.08 to 0.12 Å). This shift originates from an increasing correlation energy at larger adsorption distances, which can be traced back to the Co 3d_xy and 3d_z² orbitals being more correlated as the adsorption distance is increased. We can show that such considerations are important, as they may strongly affect electronic properties such as the Kondo temperature.
Superoutburst of a New Sub-Period-Minimum Dwarf Nova CSS130418 in Hercules
Directory of Open Access Journals (Sweden)
D. Chochol
2015-02-01
Multicolour photometry of the new dwarf nova CSS130418 in Hercules, which underwent a superoutburst on April 18, 2013, allows us to classify it as a WZ Sge-type dwarf nova. The phase light curves for different stages of the superoutburst are presented. The early superhumps were used to determine the orbital period P_orb = 64.84(1) minutes, which is shorter than the period minimum of ~78 minutes for normal hydrogen-rich cataclysmic variables. We found the mean period of ordinary superhumps to be P_sh = 65.559(1) minutes. The quiescent spectrum is rich in helium, showing double-peaked emission lines of H I and He I from the accretion disk, so the dwarf nova is in a late stage of stellar evolution.
Komponen Kebutuhan Hidup Dalam Regulasi Upah Minimum Perspektif Maqasid Al-Shariah
Directory of Open Access Journals (Sweden)
Adin Fadilah
2016-03-01
Abstract: The provisions of minimum wage in Indonesia have been changed four times in the last few decades, in line with changes in the components of life needs referred to. In fact, many life needs that were previously considered trivial have now become important and should be referred to in setting the minimum wage. In Islam there are five sectors of human needs, as established in the discourse of maqasid al-sharī'ah. Each sector is ranked into three levels, namely ḍarūrīyah, ḥajīyah, and taḥsīnīyah. This study examines how the components of life needs referred to by minimum wage regulation appear from the viewpoint of maqasid al-sharī'ah. It comes to the conclusion that the development of the life needs used as guidelines in determining minimum wage levels has met the demands of life needs as intended by maqasid al-sharī'ah. Most of the components of decent living (KHL) occupy the levels of ḍarūrīyah and ḥajīyah, and very few are classified as taḥsīnīyah. The increase in the quantity and quality of the components shows that the levels of life needs were attended to in sequence: first the ḍarūrīyah level, then ḥajīyah, then taḥsīnīyah. These changes indicate a change of law in accordance with the demands of the circumstances. Abstrak (translated): The components of life needs used as a reference in setting the minimum wage in Indonesia have changed four times. These changes occurred to accommodate developing needs that were once considered trivial but have now become important. In Islam there are five principal elements of human needs that must be fulfilled, known as maqasid al-sharī'ah. These five elements are divided into three categories, namely ḍarūrīyah, ḥajīyah, and taḥsīnīyah. This study examines the components of life needs in minimum wage regulation from the perspective of maqasid al-sharī'ah. This study provides
Classifier Fusion With Contextual Reliability Evaluation.
Liu, Zhunga; Pan, Quan; Dezert, Jean; Han, Jun-Wei; He, You
2018-05-01
Classifier fusion is an efficient strategy to improve classification performance for complex pattern recognition problems. In practice, the multiple classifiers to combine can have different reliabilities, and proper reliability evaluation plays an important role in the fusion process for getting the best classification performance. We propose a new method for classifier fusion with contextual reliability evaluation (CF-CRE) based on inner reliability and relative reliability concepts. The inner reliability, represented by a matrix, characterizes the probability of the object belonging to one class when it is classified to another class. The elements of this matrix are estimated from the k-nearest neighbors of the object. A cautious discounting rule is developed under the belief functions framework to revise the classification result according to the inner reliability. The relative reliability is evaluated based on a new incompatibility measure which makes it possible to reduce the level of conflict between the classifiers by applying the classical evidence discounting rule to each classifier before their combination. The inner reliability and relative reliability capture different aspects of the classification reliability. The discounted classification results are combined with Dempster-Shafer's rule for the final class decision-making support. The performance of CF-CRE has been evaluated and compared with those of the main classical fusion methods using real data sets. The experimental results show that CF-CRE can produce substantially higher accuracy than other fusion methods in general. Moreover, CF-CRE is robust to changes in the number of nearest neighbors chosen for estimating the reliability matrix, which is appealing for applications.
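The discounting-and-combination machinery this abstract relies on can be illustrated with a minimal Dempster-Shafer sketch. This is a generic toy, not the CF-CRE method itself: the two-class frame, the mass values, and the reliability factors below are invented for illustration.

```python
from itertools import product

def discount(m, alpha):
    """Shafer discounting: scale masses by reliability alpha and
    move the remaining mass to the full frame (ignorance)."""
    out = {A: alpha * v for A, v in m.items()}
    frame = frozenset().union(*m.keys())
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

def dempster(m1, m2):
    """Dempster's rule: conjunctive combination with conflict renormalisation."""
    raw, conflict = {}, 0.0
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            raw[C] = raw.get(C, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    k = 1.0 - conflict
    return {C: v / k for C, v in raw.items()}

# Two classifiers over classes {a, b}, with reliabilities 0.9 and 0.6.
a, b = frozenset("a"), frozenset("b")
m1 = discount({a: 0.8, b: 0.2}, 0.9)
m2 = discount({a: 0.3, b: 0.7}, 0.6)
fused = dempster(m1, m2)
print(fused)
```

Discounting shifts part of each classifier's belief to ignorance before combination, so the less reliable classifier's vote for class b cannot overturn the reliable classifier's vote for class a.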
Near-IR TRGB Distance to Nearby Dwarf Irregular Galaxy NGC 6822
Directory of Open Access Journals (Sweden)
Y.-J. Sohn
2008-09-01
We report the distance modulus of the nearby dwarf irregular galaxy NGC 6822 estimated from the so-called Tip of the Red Giant Branch (TRGB) method. To detect the apparent magnitudes of the TRGB we use the color-magnitude diagrams (CMDs) and luminosity functions (LFs) in the near-infrared JHK bands. Foreground stars, main-sequence stars, and supergiant stars have been classified on the (g - K, g) plane and removed from the near-infrared CMDs, so that only RGB and AGB stars remain in the CMDs and LFs. By applying the Savitzky-Golay filter to the obtained LFs and detecting the peak in the second derivative of the observed LFs, we determined the apparent magnitudes of the TRGB. Theoretical absolute magnitudes of the TRGB are estimated from Yonsei-Yale isochrones with an age of 12 Gyr and a metallicity range of -2.0 < [Fe/H] < -0.5. The derived values of distance modulus to NGC 6822 are (m - M
Hierarchical mixtures of naive Bayes classifiers
Wiering, M.A.
2002-01-01
Naive Bayes classifiers tend to perform very well on a large number of problem domains, although their representation power is quite limited compared to more sophisticated machine learning algorithms. In this paper we study combining multiple naive Bayes classifiers by using the hierarchical
Molecular Characteristics in MRI-classified Group 1 Glioblastoma Multiforme
Directory of Open Access Journals (Sweden)
William E Haskins
2013-07-01
Glioblastoma multiforme (GBM) is a clinically and pathologically heterogeneous brain tumor. A previous study of MRI-classified GBM revealed a spatial relationship between Group 1 GBM (GBM1) and the subventricular zone (SVZ). The SVZ is an adult neural stem cell niche and is also suspected to be the origin of a subtype of brain tumor. The intimate contact between GBM1 and the SVZ raises the possibility that tumor cells in GBM1 may be most related to SVZ cells. In support of this notion, we found that neural stem cell and neuroblast markers are highly expressed in GBM1. Additionally, we identified molecular characteristics in this type of GBM that include up-regulation of metabolic enzymes, ribosomal proteins, heat shock proteins, and the c-Myc oncoprotein. As GBM1 often recurs at great distances from the initial lesion, the rewiring of metabolism and ribosomal biogenesis may facilitate cancer cells' growth and survival during tumor migration. Taken together, combining our findings with MRI-based classification of GBM1 would offer better prediction and treatment for this multifocal GBM.
A Supervised Multiclass Classifier for an Autocoding System
Directory of Open Access Journals (Sweden)
Yukako Toko
2017-11-01
Classification is often required in various contexts, including in the field of official statistics. In a previous study, we developed a multiclass classifier that can classify short text descriptions with high accuracy. The algorithm borrows the concept of the naïve Bayes classifier and is so simple that its structure is easily understandable. The proposed classifier has the following two advantages. First, the processing times for both learning and classifying are extremely practical. Second, the proposed classifier yields high-accuracy results for a large portion of a dataset. We have previously developed an autocoding system with this better-performing classifier for the Family Income and Expenditure Survey in Japan. While the original system was developed in Perl in order to improve the efficiency of the coding process for short Japanese texts, the proposed system is implemented in the R programming language in order to explore versatility, and it is modified to make the system easily applicable to English text descriptions, in consideration of the increasing number of R users in the field of official statistics. We are planning to publish the proposed classifier as an R package. The proposed classifier would be generally applicable to other classification tasks, including coding activities in the field of official statistics, and it would contribute greatly to improving their efficiency.
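Since the classifier borrows the concept of the naïve Bayes classifier, a minimal multinomial naïve Bayes with Laplace smoothing for short text descriptions may help fix ideas. The categories and training texts below are invented; the actual system's features and smoothing choices are not specified here.

```python
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (text, label). Returns token counts per label,
    label frequencies, and the vocabulary."""
    counts = defaultdict(Counter)
    labels = Counter()
    for text, label in samples:
        labels[label] += 1
        counts[label].update(text.lower().split())
    vocab = {w for c in counts.values() for w in c}
    return counts, labels, vocab

def classify(model, text):
    counts, labels, vocab = model
    total = sum(labels.values())
    def score(label):
        # Log prior plus Laplace-smoothed log likelihoods.
        s = math.log(labels[label] / total)
        n = sum(counts[label].values())
        for w in text.lower().split():
            s += math.log((counts[label][w] + 1) / (n + len(vocab)))
        return s
    return max(labels, key=score)

samples = [
    ("fresh apples and bananas", "food"),
    ("rice bread and milk", "food"),
    ("bus fare to work", "transport"),
    ("train ticket and taxi fare", "transport"),
]
model = train(samples)
print(classify(model, "milk and bread"))       # "food"
print(classify(model, "taxi to the station"))  # "transport"
```

The structure stays transparent, which is the property the abstract emphasises: each prediction is just a sum of log counts that can be inspected term by term.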
Interface Simulation Distances
Directory of Open Access Journals (Sweden)
Pavol Černý
2012-10-01
The classical (boolean) notion of refinement for behavioral interfaces of system components is the alternating refinement preorder. In this paper, we define a distance for interfaces, called the interface simulation distance. It makes the alternating refinement preorder quantitative by, intuitively, tolerating errors (while counting them) in the alternating simulation game. We show that the interface simulation distance satisfies the triangle inequality, that the distance between two interfaces does not increase under parallel composition with a third interface, and that the distance between two interfaces can be bounded from above and below by distances between abstractions of the two interfaces. We illustrate the framework, and the properties of the distances under composition of interfaces, with two case studies.
International Nuclear Information System (INIS)
Han, Renmin; Wang, Liansan; Xu, Fan; Zhang, Yongdeng; Zhang, Mingshu; Liu, Zhiyong; Ren, Fei; Zhang, Fa
2015-01-01
The recent developments of far-field optical microscopy (single molecule imaging techniques) have overcome the diffraction barrier of light and improved image resolution by a factor of ten compared with conventional light microscopy. These techniques utilize the stochastic switching of probe molecules to overcome the diffraction limit and determine the precise localizations of molecules, which often requires a long image acquisition time. However, long acquisition times increase the risk of sample drift. In the case of high-resolution microscopy, sample drift decreases the image resolution. In this paper, we propose a novel metric based on the distance between molecules to solve the drift-correction problem. The proposed metric directly uses the position information of molecules to estimate the frame drift. We also designed an algorithm to implement the metric for the general application of drift correction. There are two advantages of our method. First, because our method does not require spatial binning of the positions of molecules but operates directly on the positions, it is more natural for single molecule imaging techniques. Second, our method can estimate drift with a small number of positions in each temporal bin, which may extend its potential application. The effectiveness of our method has been demonstrated by both simulated data and experiments on single-molecule images.
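A crude illustration of the general idea of estimating drift directly from molecule positions (not the authors' algorithm): grid-search the shift that minimizes the mean nearest-neighbour distance between the localizations of two temporal bins. The positions and search parameters below are synthetic.

```python
import math

def mean_nn_dist(ref, moved, shift):
    """Mean distance from each shifted-back position to its nearest
    neighbour in the reference bin."""
    dx, dy = shift
    total = 0.0
    for (x, y) in moved:
        total += min(math.hypot(x - dx - rx, y - dy - ry) for (rx, ry) in ref)
    return total / len(moved)

def estimate_drift(ref, moved, span=3.0, step=0.5):
    """Exhaustive grid search over candidate (dx, dy) shifts."""
    n = int(span / step)
    shifts = [(i * step, j * step)
              for i in range(-n, n + 1) for j in range(-n, n + 1)]
    return min(shifts, key=lambda s: mean_nn_dist(ref, moved, s))

# Synthetic check: the second "frame" is the first shifted by (1.0, -0.5).
ref = [(0.0, 0.0), (4.0, 1.0), (2.0, 5.0), (7.0, 3.0)]
true_drift = (1.0, -0.5)
moved = [(x + true_drift[0], y + true_drift[1]) for (x, y) in ref]
print(estimate_drift(ref, moved))  # recovers (1.0, -0.5)
```

Note that this operates on the positions themselves, with no spatial binning of the localizations into an image, which is the property the abstract highlights.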
Minimum income protection in the Netherlands
van Peijpe, T.
2009-01-01
This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its
Logarithmic learning for generalized classifier neural network.
Ozyildirim, Buse Melis; Avci, Mutlu
2014-12-01
The generalized classifier neural network is introduced as an efficient classifier among the others. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of squared error. Minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function used by the generalized classifier neural network, the proposed logarithmic approach and its derivative take continuous values. This makes it possible to exploit the advantage of fast logarithmic convergence with the proposed learning method. Due to the fast convergence of the logarithmic cost function, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution to the time requirement problem of the generalized classifier neural network, it may also improve classification accuracy. The proposed method can be considered an efficient way of reducing the time requirement problem of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
The impact of distance to the farm compound on the options for use of the cereal plot
Directory of Open Access Journals (Sweden)
K. TAMM
2008-12-01
In increasingly competitive conditions, the dominant trend of enlarging the production area of farms is causing a growth in transportation costs, making the profitability of cultivating distant plots questionable. The aim of this study was to provide a method to evaluate the rationality of using a plot depending on its distance, area and cultivation technology. An algorithm and a mathematical model were composed to calculate the total costs depending on the distance to the plot. The transportation costs of machines and materials, the cost of organisational travel, and timeliness costs are taken into account in the model to enable determination of the maximum distance or the minimum area of the plot necessary for profitable cultivation. Simulations allow us to conclude that growth in the yield and selling price of the production allows an increase in the limit value of driving costs and, thus, in the profitable distance of the plot; on the other hand, it also means an increase in timeliness costs, which limits extending the distance. Exploitation of more distant plots can be uneconomical in coming years because of increasing fuel costs.
DECISION TREE CLASSIFIERS FOR STAR/GALAXY SEPARATION
International Nuclear Information System (INIS)
Vasconcellos, E. C.; Ruiz, R. S. R.; De Carvalho, R. R.; Capelato, H. V.; Gal, R. R.; LaBarbera, F. L.; Frago Campos Velho, H.; Trevisan, M.
2011-01-01
We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 ≤ r ≤ 21 (85.2%) and r ≥ 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range 15 ≤ r ≤ 21, with much lower contamination than all but the Ball et al. classifier. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (>80%) while simultaneously achieving low contamination (∼2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 ≤ r ≤ 21.
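The completeness and contamination figures quoted above follow the standard definitions; a small helper, with made-up labels, shows the computation.

```python
def completeness_contamination(true_labels, pred_labels, cls="galaxy"):
    """Completeness: fraction of true `cls` objects recovered.
    Contamination: fraction of objects classified as `cls` that are not."""
    tp = sum(1 for t, p in zip(true_labels, pred_labels) if t == cls and p == cls)
    fn = sum(1 for t, p in zip(true_labels, pred_labels) if t == cls and p != cls)
    fp = sum(1 for t, p in zip(true_labels, pred_labels) if t != cls and p == cls)
    completeness = tp / (tp + fn) if tp + fn else 0.0
    contamination = fp / (tp + fp) if tp + fp else 0.0
    return completeness, contamination

# Hypothetical spectroscopic truth vs. tree predictions.
truth = ["galaxy", "galaxy", "galaxy", "star", "star"]
pred  = ["galaxy", "galaxy", "star",   "galaxy", "star"]
print(completeness_contamination(truth, pred))
```

In the paper these quantities are evaluated per magnitude bin, which is how the completeness function over 14 ≤ r ≤ 21 is built up.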
Impact of Distance on Mode of Active Commuting in Chilean Children and Adolescents
Directory of Open Access Journals (Sweden)
Fernando Rodríguez-Rodríguez
2017-11-01
Active commuting could contribute to increasing physical activity. The objective of this study was to characterise patterns of active commuting to and from school in children and adolescents in Chile. A total of 453 Chilean children and adolescents aged between 10 and 18 years were included in this study. Data regarding modes of commuting and commuting distance were collected using a validated questionnaire. Commuting mode was classified as active commuting (walking and/or cycling) or non-active commuting (car, motorcycle and/or bus). Commuting distance expressed in kilometres was categorised into six subgroups (0 to 0.5, 0.6 to 1, 1.1 to 2, 2.1 to 3, 3.1 to 5 and >5 km). Car commuting was the main mode for children (to school 64.9%; from school 51.2%) and adolescents (to school 50.2%; from school 24.7%), whereas the public bus was the main transport used by adolescents to return from school. Only 11.0% and 24.8% of children and adolescents, respectively, walk to school. The proportion of children and adolescents who engage in active commuting was lower among those covering longer distances than among those covering short distances. Adolescents walked to and from school more frequently than children. These findings show that non-active commuting was the most common mode of transport and that journey distances may influence commuting modes in children and adolescents.
The graph-theoretic minimum energy path problem for ionic conduction
Directory of Open Access Journals (Sweden)
Ippei Kishida
2015-10-01
A new computational method was developed to analyze the ionic conduction mechanism in crystals through graph theory. The graph was organized into nodes, which represent the crystal structures modeled by ionic site occupation, and edges, which represent structure transitions via ionic jumps. We propose a minimum energy path problem, which is similar to the shortest path problem, and establish an effective algorithm to solve it. Since our method does not use randomized algorithms or time parameters, the computational cost of analyzing conduction paths and migration energy is very low. The power of the method was verified by applying it to α-AgI, and the ionic conduction mechanism in α-AgI was revealed. The analysis using single point calculations found the minimum energy path for long-distance ionic conduction, which consists of 12 steps of ionic jumps in a unit cell. From these results, the detailed theoretical migration energy was calculated as 0.11 eV by geometry optimization and the nudged elastic band method. Our method can refine candidates for possible jumps in crystals, and it can be adapted to other computational methods, such as the nudged elastic band method. We expect that our method will be a powerful tool for analyzing ionic conduction mechanisms, even for large complex crystals.
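The minimum energy path problem is described as similar to the shortest path problem. One standard graph-theoretic relative is the minimax (bottleneck) path, where the cost of a path is its worst edge rather than its sum, solvable by a slightly modified Dijkstra. The sketch below is that generic variant on a toy energy graph, not the authors' algorithm.

```python
import heapq

def minimax_path(graph, src, dst):
    """Modified Dijkstra: minimise the *maximum* edge energy along a path.
    graph: {node: [(neighbour, energy), ...]}"""
    best = {src: float("-inf")}
    heap = [(float("-inf"), src, [src])]
    while heap:
        barrier, node, path = heapq.heappop(heap)
        if node == dst:
            return barrier, path
        if barrier > best.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, e in graph.get(node, []):
            b = max(barrier, e)
            if b < best.get(nxt, float("inf")):
                best[nxt] = b
                heapq.heappush(heap, (b, nxt, path + [nxt]))
    return None

# Toy energy landscape: two routes from A to D, one crossing a 0.9 eV
# barrier, one whose worst jump is only 0.3 eV.
graph = {
    "A": [("B", 0.1), ("C", 0.2)],
    "B": [("D", 0.9)],
    "C": [("D", 0.3)],
}
print(minimax_path(graph, "A", "D"))  # (0.3, ['A', 'C', 'D'])
```

Like the method in the abstract, such a search is deterministic and needs no time parameters, which is why its cost is low compared with trajectory-based sampling.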
Reducing the distance in distance-caregiving by technology innovation
Directory of Open Access Journals (Sweden)
Lazelle E Benefield
2007-07-01
Lazelle E Benefield (College of Nursing, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA); Cornelia Beck (Pat & Willard Walker Family Memory Research Center, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA). Abstract: Family caregivers are responsible for the home care of over 34 million older adults in the United States. For many, the elder family member lives more than an hour's distance away. Distance caregiving is a growing alternative to more familiar models where: (1) the elder and the family caregiver(s) may reside in the same household; or (2) the family caregiver may live nearby but not in the same household as the elder. The distance caregiving model involves elders and their family caregivers who live at some distance, defined as more than a 60-minute commute, from one another. Evidence suggests that distance caregiving is a distinct phenomenon, differs substantially from on-site family caregiving, and requires additional assistance to support the physical, social, and contextual dimensions of the caregiving process. Technology-based assists could virtually connect the caregiver and elder and provide strong support that addresses the elder's physical, social, cognitive, and/or sensory impairments. Therefore, in today's era of high technology, it is surprising that so few affordable innovations are being marketed for distance caregiving. This article addresses distance caregiving, proposes the use of technology innovation to support caregiving, and suggests a research agenda to better inform policy decisions related to the unique needs of this situation. Keywords: caregiving, family, distance, technology, elders
32 CFR 2400.28 - Dissemination of classified information.
2010-07-01
... 32 National Defense 6 2010-07-01 false Dissemination of classified information. 2400.28... SECURITY PROGRAM Safeguarding § 2400.28 Dissemination of classified information. Heads of OSTP offices... originating official may prescribe specific restrictions on dissemination of classified information when...
Censoring distances based on labeled cortical distance maps in cortical morphometry.
Ceyhan, Elvan; Nishino, Tomoyuki; Alexopolous, Dimitrios; Todd, Richard D; Botteron, Kelly N; Miller, Michael I; Ratnanather, J Tilak
2013-01-01
It has been demonstrated that shape differences in cortical structures may be manifested in neuropsychiatric disorders. Such morphometric differences can be measured by labeled cortical distance mapping (LCDM), which characterizes the morphometry of the laminar cortical mantle of cortical structures. LCDM data consist of signed/labeled distances of gray matter (GM) voxels with respect to the GM/white matter (WM) surface. Volumes and other summary measures for each subject and the pooled distances can help determine the morphometric differences between diagnostic groups; however, they do not reveal all the morphometric information contained in LCDM distances. To extract more information from LCDM data, censoring of the pooled distances is introduced for each diagnostic group, where the range of LCDM distances is partitioned at a fixed increment size; at each censoring step, the distances not exceeding the censoring distance are kept. Censored LCDM distances inherit the advantages of the pooled distances but also provide information about the location of morphometric differences which cannot be obtained from the pooled distances. However, at each step, the censored distances aggregate, which might confound the results. The influence of data aggregation is investigated with an extensive Monte Carlo simulation analysis, and it is demonstrated that this influence is negligible. As an illustrative example, the GM of ventral medial prefrontal cortices (VMPFCs) of subjects with major depressive disorder (MDD), subjects at high risk (HR) of MDD, and healthy control (Ctrl) subjects is used. A significant reduction in laminar thickness of the VMPFC in MDD and HR subjects is observed compared to Ctrl subjects. Moreover, the GM LCDM distances (i.e., locations with respect to the GM/WM surface) at which these differences start to occur are determined. The methodology is also applicable to LCDM-based morphometric measures of other cortical structures affected by disease.
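The censoring procedure itself is simple to state in code: partition the range of the pooled distances at a fixed increment and, at each step, keep only the distances not exceeding the cut. The increment and distance values below are arbitrary.

```python
def censor_distances(pooled, increment):
    """Return a list of (censoring distance, kept distances) pairs,
    sweeping cuts from just above the minimum to past the maximum."""
    lo, hi = min(pooled), max(pooled)
    steps = []
    cut = lo + increment
    while cut < hi + increment:
        steps.append((round(cut, 10), sorted(d for d in pooled if d <= cut)))
        cut += increment
    return steps

pooled = [0.2, 0.5, 1.1, 1.4, 2.3]
for cut, kept in censor_distances(pooled, 1.0):
    print(cut, kept)
```

Each successive step subsumes the previous one, which is the aggregation the abstract warns might confound the results; its Monte Carlo analysis shows the effect is negligible.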
Censoring Distances Based on Labeled Cortical Distance Maps in Cortical Morphometry
Directory of Open Access Journals (Sweden)
Elvan eCeyhan
2013-10-01
It has been demonstrated that shape differences are manifested in cortical structures due to neuropsychiatric disorders. Such morphometric differences can be measured by labeled cortical distance mapping (LCDM), which characterizes the morphometry of the laminar cortical mantle of cortical structures. LCDM data consist of signed/labeled distances of gray matter (GM) voxels with respect to the GM/white matter (WM) surface. Volumes and other summary measures for each subject and the pooled distances can help determine the morphometric differences between diagnostic groups; however, they do not reveal all the morphometric information contained in LCDM distances. To extract more information from LCDM data, censoring of the pooled distances is introduced for each diagnostic group, where the range of LCDM distances is partitioned at a fixed increment size; at each censoring step, the distances not exceeding the censoring distance are kept. Censored LCDM distances inherit the advantages of the pooled distances but also provide information about the location of morphometric differences which cannot be obtained from the pooled distances. However, at each step, the censored distances aggregate, which might confound the results. The influence of data aggregation is investigated with an extensive Monte Carlo simulation analysis, and it is demonstrated that this influence is negligible. As an illustrative example, the GM of ventral medial prefrontal cortices (VMPFCs) of subjects with major depressive disorder (MDD), subjects at high risk (HR) of MDD, and healthy control (Ctrl) subjects is used. A significant reduction in laminar thickness of the VMPFC in MDD and HR subjects is observed compared to Ctrl subjects. Moreover, the GM LCDM distances (i.e., locations with respect to the GM/WM surface) at which these differences start to occur are determined. The methodology is also applicable to LCDM-based morphometric measures of other cortical structures affected by disease.
Training for Distance Teaching through Distance Learning.
Cadorath, Jill; Harris, Simon; Encinas, Fatima
2002-01-01
Describes a mixed-mode bachelor degree course in English language teaching at the Universidad Autonoma de Puebla (Mexico) that was designed to help practicing teachers write appropriate distance education materials by giving them the experience of being distance students. Includes a course outline and results of a course evaluation. (Author/LRW)
The minimum test battery to screen for binocular vision anomalies: report 3 of the BAND study.
Hussaindeen, Jameel Rizwana; Rakshit, Archayeeta; Singh, Neeraj Kumar; Swaminathan, Meenakshi; George, Ronnie; Kapur, Suman; Scheiman, Mitchell; Ramani, Krishna Kumar
2018-03-01
This study aims to report the minimum test battery needed to screen for non-strabismic binocular vision anomalies (NSBVAs) in a community set-up. When large numbers are to be screened, we aim to identify the most useful test battery when there is no opportunity for a more comprehensive and time-consuming clinical examination. The prevalence estimates and normative data for binocular vision parameters were taken from the Binocular Vision Anomalies and Normative Data (BAND) study, following which cut-off estimates and receiver operating characteristic curves to identify the minimum test battery were plotted. In the receiver operating characteristic phase of the study, children between nine and 17 years of age were screened in two schools in the rural arm using the minimum test battery, and the prevalence estimates with the minimum test battery were found. Receiver operating characteristic analyses revealed that near point of convergence with penlight and red filter (> 7.5 cm), monocular accommodative facility ( 1.25 prism dioptres) were significant factors with cut-off values for best sensitivity and specificity. This minimum test battery was applied to a cohort of 305 children. The mean (standard deviation) age of the subjects was 12.7 (two) years, with 121 males and 184 females. Using the minimum battery of tests obtained through the receiver operating characteristic analyses, the prevalence of NSBVAs was found to be 26 per cent. Near point of convergence with penlight and red filter > 10 cm was found to have the highest sensitivity (80 per cent) and specificity (73 per cent) for the diagnosis of convergence insufficiency. For the diagnosis of accommodative infacility, monocular accommodative facility with a cut-off of less than seven cycles per minute was the best predictor for screening (92 per cent sensitivity and 90 per cent specificity). The minimum test battery of near point of convergence with penlight and red filter, difference between distance and near
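Choosing a cut-off for best sensitivity and specificity, as in the receiver operating characteristic analyses above, is commonly done by maximising Youden's J = sensitivity + specificity - 1. The sketch below sweeps candidate cut-offs over invented near-point-of-convergence values; it illustrates the generic procedure, not the BAND study's data or exact analysis.

```python
def best_cutoff(values, has_condition, direction=">"):
    """Sweep candidate cut-offs and return (J, cut, sensitivity, specificity)
    for the cut maximising Youden's J."""
    best = None
    for cut in sorted(set(values)):
        if direction == ">":
            pred = [v > cut for v in values]
        else:
            pred = [v < cut for v in values]
        tp = sum(p and c for p, c in zip(pred, has_condition))
        fn = sum((not p) and c for p, c in zip(pred, has_condition))
        tn = sum((not p) and (not c) for p, c in zip(pred, has_condition))
        fp = sum(p and (not c) for p, c in zip(pred, has_condition))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, cut, sens, spec)
    return best

# Hypothetical near point of convergence values (cm) and condition labels.
npc = [5, 6, 7, 8, 9, 11, 12, 14]
ci  = [False, False, False, False, True, True, True, True]
print(best_cutoff(npc, ci, ">"))  # (1.0, 8, 1.0, 1.0)
```

With real, overlapping distributions the optimal J is below 1 and the chosen cut trades sensitivity against specificity, as in the 80/73 per cent figures reported above.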
DEFF Research Database (Denmark)
Larsen, Gunvor Riber
The environmental impact of tourism mobility is linked to the distances travelled in order to reach a holiday destination, and with tourists travelling more and further than previously, an understanding of how the tourists view the distance they travel across becomes relevant. Based on interviews...... contribute to an understanding of how it is possible to change tourism travel behaviour towards becoming more sustainable. How tourists 'consume distance' is discussed, from the practical level of actually driving the car or sitting in the air plane, to the symbolic consumption of distance that occurs when...... travelling on holiday becomes part of a lifestyle and a social positioning game. Further, different types of tourist distance consumers are identified, ranging from the reluctant to the deliberate and nonchalant distance consumers, who display very differing attitudes towards the distance they all travel...
Christian, Josef; Kröll, Josef; Schwameder, Hermann
2017-06-01
Common summary measures of gait quality such as the Gait Profile Score (GPS) are based on the principle of measuring a distance from the mean pattern of a healthy reference group in a gait pattern vector space. The recently introduced Classifier Oriented Gait Score (COGS) is a pathology specific score that measures this distance in a unique direction, which is indicated by a linear classifier. This approach has potentially improved the discriminatory power to detect subtle changes in gait patterns but does not incorporate a profile of interpretable sub-scores like the GPS. The main aims of this study were to extend the COGS by decomposing it into interpretable sub-scores as realized in the GPS and to compare the discriminative power of the GPS and COGS. Two types of gait impairments were imitated to enable a high level of control of the gait patterns. Imitated impairments were realized by restricting knee extension and inducing leg length discrepancy. The results showed increased discriminatory power of the COGS for differentiating diverse levels of impairment. Comparison of the GPS and COGS sub-scores and their ability to indicate changes in specific variables supports the validity of both scores. The COGS is an overall measure of gait quality with increased power to detect subtle changes in gait patterns and might be well suited for tracing the effect of a therapeutic treatment over time. The newly introduced sub-scores improved the interpretability of the COGS, which is helpful for practical applications. Copyright © 2017 Elsevier B.V. All rights reserved.
Distancing, not embracing, the Distancing-Embracing model of art reception.
Davies, Stephen
2017-01-01
Despite denials in the target article, the Distancing-Embracing model appeals to compensatory ideas in explaining the appeal of artworks that elicit negative affect. The model also appeals to the deflationary effects of psychological distancing. Having pointed to the famous rejection in the 1960s of the view that aesthetic experience involves psychological distancing, I suggest that "distance" functions here as a weak metaphor that cannot sustain the explanatory burden the theory demands of it.
Foundations of Distance Education. Third Edition. Routledge Studies in Distance Education.
Keegan, Desmond
This text gives an overview of distance education for students, administrators, and practitioners in distance education. Chapter 1 discusses the study of distance education. Chapter 2 analyzes forms of nonconventional education (open, nontraditional) that may have similarities to distance education but are not to be identified with it. Chapter 3…
Directory of Open Access Journals (Sweden)
Robert F. Love
2001-01-01
Full Text Available Distance predicting functions may be used in a variety of applications for estimating travel distances between points. To evaluate the accuracy of a distance predicting function and to determine its parameters, a goodness-of-fit criterion is employed. AD (Absolute Deviations), SD (Squared Deviations) and NAD (Normalized Absolute Deviations) are the three criteria most commonly employed in practice. In the literature some assumptions have been made about the properties of each criterion. In this paper, we present statistical analyses performed to compare the three criteria from different perspectives. For this purpose, we employ the ℓkpθ-norm as the distance predicting function, and statistically compare the three criteria by using normalized absolute prediction error distributions in seventeen geographical regions. We find that there exist no significant differences between the criteria. However, since the criterion SD has desirable properties in terms of distance modelling procedures, we suggest its use in practice.
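The three criteria can be written down compactly. This is a sketch, assuming a simple weighted ℓp distance predicting function with illustrative parameters k and p, not the fitted ℓkpθ-norm from the paper.

```python
# AD, SD and NAD goodness-of-fit criteria for a distance predicting function.
# The predicting function and its parameters (k, p) are illustrative.

def predicted(a, b, k=1.2, p=1.5):
    # weighted l_p norm: k * (|dx|^p + |dy|^p)^(1/p)
    return k * (abs(a[0] - b[0])**p + abs(a[1] - b[1])**p) ** (1.0 / p)

def criteria(pairs, actual, k=1.2, p=1.5):
    pred = [predicted(a, b, k, p) for a, b in pairs]
    ad  = sum(abs(pr - ac) for pr, ac in zip(pred, actual))          # absolute deviations
    sd  = sum((pr - ac)**2 for pr, ac in zip(pred, actual))          # squared deviations
    nad = sum(abs(pr - ac) / ac for pr, ac in zip(pred, actual))     # normalized absolute deviations
    return ad, sd, nad
```

Fitting the function means choosing k and p (and, in the paper, θ) to minimise one of these three sums over a sample of point pairs with known travel distances.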
Understanding the Minimum Wage: Issues and Answers.
Employment Policies Inst. Foundation, Washington, DC.
This booklet, which is designed to clarify facts regarding the minimum wage's impact on marketplace economics, contains a total of 31 questions and answers pertaining to the following topics: relationship between minimum wages and poverty; impacts of changes in the minimum wage on welfare reform; and possible effects of changes in the minimum wage…
Youth minimum wages and youth employment
Marimpi, Maria; Koning, Pierre
2018-01-01
This paper performs a cross-country level analysis on the impact of the level of specific youth minimum wages on the labor market performance of young individuals. We use information on the use and level of youth minimum wages, as compared to the level of adult minimum wages as well as to the median
Discretization of space and time: determining the values of minimum length and minimum time
Roatta , Luca
2017-01-01
Assuming that space and time can only have discrete values, we obtain expressions for the minimum length and the minimum time interval. These values are found to be exactly coincident with the Planck length and the Planck time, but for the presence of h instead of ħ.
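The quantities involved are easy to evaluate numerically. A minimal sketch, using CODATA constant values; the h-instead-of-ħ variant mentioned in the abstract differs from the standard Planck length by a factor of √(2π).

```python
import math

# Planck length and time from fundamental constants (CODATA values),
# plus the variant using h instead of hbar, as discussed in the abstract.
G    = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8     # speed of light, m/s
h    = 6.62607015e-34   # Planck constant, J s
hbar = h / (2 * math.pi)

l_planck = math.sqrt(hbar * G / c**3)   # ~1.616e-35 m
t_planck = l_planck / c                 # ~5.391e-44 s
l_h = math.sqrt(h * G / c**3)           # sqrt(2*pi) times l_planck
```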
Minimum wage development in the Russian Federation
Bolsheva, Anna
2012-01-01
The aim of this paper is to analyze the effectiveness of the minimum wage policy at the national level in Russia and its impact on living standards in the country. The analysis showed that the national minimum wage in Russia does not serve its original purpose of protecting the lowest wage earners and has no substantial effect on poverty reduction. The national subsistence minimum is too low and cannot be considered an adequate criterion for the setting of the minimum wage. The minimum wage d...
Pairing call-response surveys and distance sampling for a mammalian carnivore
Hansen, Sara J. K.; Frair, Jacqueline L.; Underwood, Harold B.; Gibbs, James P.
2015-01-01
Density estimates accounting for differential animal detectability are difficult to acquire for wide-ranging and elusive species such as mammalian carnivores. Pairing distance sampling with call-response surveys may provide an efficient means of tracking changes in populations of coyotes (Canis latrans), a species of particular interest in the eastern United States. Blind field trials in rural New York State indicated 119-m linear error for triangulated coyote calls, and a 1.8-km distance threshold for call detectability, which was sufficient to estimate a detection function with precision using distance sampling. We conducted statewide road-based surveys with sampling locations spaced ≥6 km apart from June to August 2010. Each detected call (whether a single animal or a group) counted as a single object, representing 1 territorial pair, because of uncertainty in the number of vocalizing animals. From 524 survey points and 75 detections, we estimated the probability of detecting a calling coyote to be 0.17 ± 0.02 SE, yielding a detection-corrected index of 0.75 pairs/10 km² (95% CI: 0.52–1.1, 18.5% CV) for a minimum of 8,133 pairs across rural New York State. Importantly, we consider this an index rather than a true estimate of abundance given the unknown probability of coyote availability for detection during our surveys. Even so, pairing distance sampling with call-response surveys provided a novel, efficient, and noninvasive means of monitoring populations of wide-ranging and elusive, albeit reliably vocal, mammalian carnivores. Our approach offers an effective new means of tracking species like coyotes, one that is readily extendable to other species and geographic extents, provided key assumptions of distance sampling are met.
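The arithmetic behind a detection-corrected index of this kind can be sketched with the standard point-survey formula: detections divided by the effectively surveyed area, corrected by the estimated detection probability. This is a back-of-envelope check, not the authors' exact estimator; the truncation radius and probability are taken from the abstract, and survey-specific corrections are omitted.

```python
import math

# Back-of-envelope point-survey density index, NOT the exact estimator used
# in the study: n detections over k points, truncation distance w (km), and
# an estimated detection probability p_hat.
def density_index(n, k, w_km, p_hat):
    covered = k * math.pi * w_km**2       # km^2 nominally surveyed
    return n / (covered * p_hat)          # pairs per km^2

d = density_index(n=75, k=524, w_km=1.8, p_hat=0.17)
# on the order of the reported 0.75 pairs/10 km^2
```

The rough value lands in the reported confidence interval, which is what one would hope for from a sanity check of this kind.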
Knick, Steven T.; Rotenberry, J.T.
1998-01-01
We tested the potential of a GIS mapping technique, using a resource selection model developed for black-tailed jackrabbits (Lepus californicus) and based on the Mahalanobis distance statistic, to track changes in shrubsteppe habitats in southwestern Idaho. If successful, the technique could be used to predict animal use areas, or those undergoing change, in different regions from the same selection function and variables without additional sampling. We determined the multivariate mean vector of 7 GIS variables that described habitats used by jackrabbits. We then ranked the similarity of all cells in the GIS coverage by their Mahalanobis distance to the mean habitat vector. The resulting map accurately depicted areas where we sighted jackrabbits on verification surveys. We then simulated an increase in shrublands (which are important habitats). Contrary to expectation, the new configurations were classified as lower similarity relative to the original mean habitat vector. Because the selection function is based on a unimodal mean, any deviation, even if biologically positive, creates larger Mahalanobis distances and lower similarity values. We recommend the Mahalanobis distance technique for mapping animal use areas when animals are distributed optimally, the landscape is well-sampled to determine the mean habitat vector, and distributions of the habitat variables do not change.
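The mapping idea above can be sketched in a few lines: estimate the mean vector and covariance of habitat variables at used locations, then rank every map cell by its Mahalanobis distance to that mean. The two variables and all values here are illustrative, not the study's 7 GIS layers.

```python
import numpy as np

# Sketch of Mahalanobis-distance habitat mapping: rank cells by distance
# to the mean "used habitat" vector. Data are synthetic and illustrative.
rng = np.random.default_rng(0)
used = rng.normal(loc=[0.6, 0.3], scale=0.1, size=(200, 2))  # habitat at sightings
mu = used.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(used, rowvar=False))

def mahalanobis_sq(x):
    d = x - mu
    return float(d @ cov_inv @ d)

# Cells closest to the mean vector get the highest similarity rank.
cells = np.array([[0.6, 0.3], [0.9, 0.3], [0.0, 0.0]])
d2 = [mahalanobis_sq(c) for c in cells]
ranked = np.argsort(d2)   # first entry = index of the most similar cell
```

Note the unimodality caveat the authors raise: a cell with "more shrubland than the mean" still gets a larger distance, so biologically favourable deviations are penalised just like unfavourable ones.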
Directory of Open Access Journals (Sweden)
Milinkovitch Michel C
2007-11-01
Full Text Available Abstract Background: Distance matrix methods constitute a major family of phylogenetic estimation methods, and the minimum evolution (ME) principle (aiming at recovering the phylogeny with shortest length) is one of the most commonly used optimality criteria for estimating phylogenetic trees. The major difficulty for its application is that the number of possible phylogenies grows exponentially with the number of taxa analyzed, and the minimum evolution principle is known to belong to the NP-hard class of problems. Results: In this paper, we introduce an Ant Colony Optimization (ACO) algorithm to estimate phylogenies under the minimum evolution principle. ACO is an optimization technique inspired by the foraging behavior of real ant colonies. This behavior is exploited in artificial ant colonies for the search of approximate solutions to discrete optimization problems. Conclusion: We show that the ACO algorithm is potentially competitive in comparison with state-of-the-art algorithms for the minimum evolution principle. This is the first application of an ACO algorithm to the phylogenetic estimation problem.
Neural Network Classifiers for Local Wind Prediction.
Kretzschmar, Ralf; Eckert, Pierre; Cattani, Daniel; Eggimann, Fritz
2004-05-01
This paper evaluates the quality of neural network classifiers for wind speed and wind gust prediction with prediction lead times between +1 and +24 h. The predictions were realized based on local time series and model data. The selection of appropriate input features was initiated by time series analysis and completed by empirical comparison of neural network classifiers trained on several choices of input features. The selected input features involved day time, yearday, features from a single wind observation device at the site of interest, and features derived from model data. The quality of the resulting classifiers was benchmarked against persistence for two different sites in Switzerland. The neural network classifiers exhibited superior quality when compared with persistence judged on a specific performance measure, hit and false-alarm rates.
3D Bayesian contextual classifiers
DEFF Research Database (Denmark)
Larsen, Rasmus
2000-01-01
We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.
Minimum emittance of three-bend achromats
International Nuclear Information System (INIS)
Li Xiaoyu; Xu Gang
2012-01-01
The minimum emittance of three-bend achromats (TBAs) can be calculated with mathematical software in a way that ignores the actual magnet lattice in the matching condition of the dispersion function in phase space. The minimum scaling factors of two kinds of widely used TBA lattices are obtained. Then the relationship between the lengths and the radii of the three dipoles in the TBA is obtained, and so is the minimum scaling factor when the TBA lattice achieves its minimum emittance. The procedure of analysis and the results can be widely used in achromat lattices, because the calculation is not restricted by the actual lattice. (authors)
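For context, the benchmark against which such scaling factors are usually quoted is the theoretical minimum emittance (TME) of a single dipole of bending angle θ. A minimal statement of that standard scaling, under the usual small-angle assumption (this formula is the textbook TME result, not taken from the paper itself):

```latex
\varepsilon_{\mathrm{TME}} \;=\; \frac{C_q\,\gamma^{2}\,\theta^{3}}{12\sqrt{15}\,J_x},
\qquad C_q \approx 3.83\times 10^{-13}\,\mathrm{m},
```

where γ is the Lorentz factor and J_x the horizontal damping partition number. A realizable TBA lattice reaches only a lattice-dependent multiple of this value; that multiple is the scaling factor the paper computes.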
Székely, Gábor J.; Rizzo, Maria L.
2010-01-01
Distance correlation is a new class of multivariate dependence coefficients applicable to random vectors of arbitrary and not necessarily equal dimension. Distance covariance and distance correlation are analogous to product-moment covariance and correlation, but generalize and extend these classical bivariate measures of dependence. Distance correlation characterizes independence: it is zero if and only if the random vectors are independent. The notion of covariance with...
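The construction behind distance correlation is short enough to sketch: build pairwise distance matrices for each sample, double-center them, and normalise the resulting distance covariance by the distance variances. A minimal 1-D version follows (the general definition handles arbitrary dimensions; this sketch does not reproduce the authors' full treatment).

```python
import numpy as np

# Minimal distance-correlation computation for 1-D samples, in the style of
# Szekely & Rizzo: double-centered distance matrices, then a normalised
# distance covariance. Sketch only; not the full arbitrary-dimension version.
def dcor(x, y):
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[:, None]
    a = np.abs(x - x.T)                                  # pairwise distances
    b = np.abs(y - y.T)
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()    # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    dvar_x = (A * A).mean()
    dvar_y = (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

x = np.linspace(0, 1, 50)
r_dep = dcor(x, 3 * x + 1)    # perfectly (linearly) dependent samples
```

Unlike the product-moment correlation, this quantity is zero only under independence, which is the characterising property highlighted in the abstract.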
A CLASSIFIER SYSTEM USING SMOOTH GRAPH COLORING
Directory of Open Access Journals (Sweden)
JORGE FLORES CRUZ
2017-01-01
Full Text Available Unsupervised classifiers allow clustering methods with little or no human intervention. Therefore it is desirable to group the set of items with minimal data processing. This paper proposes an unsupervised classifier system using the model of soft graph coloring. This method was tested with some classic instances in the literature and the results obtained were compared with classifications made with human intervention, yielding results as good as or better than supervised classifiers, sometimes providing alternative classifications that consider additional information that humans did not consider.
Motion Primitives and Probabilistic Edit Distance for Action Recognition
DEFF Research Database (Denmark)
Fihl, Preben; Holte, Michael Boelstoft; Moeslund, Thomas B.
2009-01-01
The number of potential applications has made automatic recognition of human actions a very active research area. Different approaches have been followed based on trajectories through some state space. In this paper we also model an action as a trajectory through a state space, but we represent the actions as a sequence of temporal isolated instances, denoted primitives. These primitives are each defined by four features extracted from motion images. The primitives are recognized in each frame based on a trained classifier, resulting in a sequence of primitives. From this sequence we recognize different temporal actions using a probabilistic Edit Distance method. The method is tested on different actions with and without noise and the results show recognition rates of 88.7% and 85.5%, respectively.
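The sequence-matching step rests on edit distance. A minimal sketch with the classic (unit-cost) Levenshtein recurrence follows; the paper's probabilistic variant weights the edit costs by classifier confidences, which this sketch omits.

```python
# Classic edit (Levenshtein) distance between two recognised primitive
# sequences, using a rolling one-row dynamic-programming table.
# The paper's probabilistic variant uses confidence-weighted costs instead.
def edit_distance(s, t):
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (a != b)))    # substitution (0 if equal)
        prev = cur
    return prev[-1]

# Hypothetical primitive sequences: an observed noisy sequence vs a template.
observed = ["lift", "reach", "hold", "lower"]
template = ["lift", "hold", "lower"]
d = edit_distance(observed, template)   # one deletion separates them
```

An observed sequence is then assigned to the action template with the smallest distance, which is what tolerates insertions and deletions caused by noisy per-frame classification.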
Redetermination of the luminosity, distance, and reddening of Beta Lyrae
International Nuclear Information System (INIS)
Dobias, J.J.; Plavec, M.J.
1985-01-01
We have redetermined the distance to β Lyrae and found that it probably lies between 345 and 400 pc, with the most likely value being 370 pc. With the corresponding true distance modulus of 7.8 mag, we find that the eclipsing system of β Lyrae has a maximum absolute visual magnitude of -4.7 mag. Using Wilson's model, we conclude that the average absolute visual magnitude of the primary component is -4.1 mag, so that the star is best classified as B8.5 or B9 II-Ib. The visual absolute magnitude of the secondary component is -3.3 mag, but this figure cannot be used to derive its luminosity, since that object has a nonstellar energy distribution. The color excess is small, E(B-V) = 0.04 mag. These data are based on our optical and IUE scans of the brightest visual companion to β Lyrae, HD 174664, and on an analysis of the hydrogen line profiles in its spectrum. We find that the star is mildly evolved within the main-sequence band. Its effective temperature (14 250 K) and surface gravity (log g ≈ 4.0) correspond most closely to those of stars classified as B6 V. This conclusion creates a certain evolutionary dilemma, since the age of HD 174664 should not exceed 20–30 million years if the basic model of β Lyrae as an Algol-type binary is correct, while our result is 48 × 10⁶ yr. We address this problem at the end of the paper and conclude that the discrepancy may well be due to uncertainties in observational data and theoretical models and in the various calibrations involved. Nevertheless, attention should be paid to this potential age dilemma for β Lyrae
Traversing psychological distance.
Liberman, Nira; Trope, Yaacov
2014-07-01
Traversing psychological distance involves going beyond direct experience, and includes planning, perspective taking, and contemplating counterfactuals. Consistent with this view, temporal, spatial, and social distances as well as hypotheticality are associated, affect each other, and are inferred from one another. Moreover, traversing all distances involves the use of abstraction, which we define as forming a belief about the substitutability for a specific purpose of subjectively distinct objects. Indeed, across many instances of both abstraction and psychological distancing, more abstract constructs are used for more distal objects. Here, we describe the implications of this relation for prediction, choice, communication, negotiation, and self-control. We ask whether traversing distance is a general mental ability and whether distance should replace expectancy in expected-utility theories. Copyright © 2014 Elsevier Ltd. All rights reserved.
30 CFR 57.19021 - Minimum rope strength.
2010-07-01
... feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0. (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0. (c) Tail...
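The regulation's piecewise formulas are straightforward to encode. A minimal sketch of the drum-rope and friction-drum-rope cases quoted above (the tail-rope case is truncated in the excerpt and is therefore omitted); the static load and lengths below are illustrative inputs.

```python
# Minimum-value formulas for hoist ropes as quoted from 30 CFR 57.19021
# (a)/(b); static load and rope lengths are illustrative, with the result
# in the same units as the static load. Tail ropes (c) are not covered here.
def min_rope_value(static_load, length_ft, friction_drum=False):
    if friction_drum:
        if length_ft < 4000:
            return static_load * (7.0 - 0.0005 * length_ft)
        return static_load * 5.0
    if length_ft < 3000:
        return static_load * (7.0 - 0.001 * length_ft)
    return static_load * 4.0

v = min_rope_value(static_load=10_000, length_ft=2_000)   # 10000 * (7 - 2)
```

Note that both branches decrease linearly with length and then clamp to a constant multiplier at the crossover length, so the safety factor never drops below 4.0 (drum) or 5.0 (friction drum).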
30 CFR 56.19021 - Minimum rope strength.
2010-07-01
... feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0 (c) Tail ropes...
Du, Peng; Ouahsine, Abdellatif; Sergent, Philippe
2018-05-01
Ship maneuvering in the confined inland waterway is investigated using the system-based method, where a nonlinear transient hydrodynamic model is adopted and confinement models are implemented to account for the influence of the channel bank and bottom. The maneuvering model is validated using the turning circle test, and the confinement model is validated using the experimental data. The separation distance, ship speed, and channel width are then varied to investigate their influences on ship maneuverability. With smaller separation distances and higher speeds near the bank, the ship's trajectory deviates more from the original course and the bow is repelled with a larger yaw angle, which increases the difficulty of maneuvering. Smaller channel widths induce higher advancing resistances on the ship. The minimum distance to the bank is extracted and studied. It is suggested to navigate the ship in the middle of the channel and at a reasonable speed in the restricted waterway.
DEFF Research Database (Denmark)
Lippert, Ingmar
2012-01-01
… Using an actor-network theory (ANT) framework, the aim is to investigate the actors who bring together the elements needed to classify their carbon emission sources and unpack the heterogeneous relations drawn on. Based on an ethnographic study of corporate agents of ecological modernisation over a period of 13 months, this paper provides an exploration of three cases of enacting classification. Drawing on ANT, we problematise the silencing of a range of possible modalities of consumption facts and point to the ontological ethics involved in such performances. In a context of global warming…
30 CFR 77.1431 - Minimum rope strength.
2010-07-01
... feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0 (c) Tail ropes...
Directory of Open Access Journals (Sweden)
Haryati Jaafar
2015-01-01
Full Text Available Mobile implementation is a current trend in biometric design. This paper proposes a new approach to palm print recognition, in which smart phones are used to capture palm print images at a distance. A touchless system was developed because of public demand for privacy and sanitation. Robust hand tracking, image enhancement, and fast computation processing algorithms are required for effective touchless and mobile-based recognition. In this project, hand tracking and the region of interest (ROI extraction method were discussed. A sliding neighborhood operation with local histogram equalization, followed by a local adaptive thresholding or LHEAT approach, was proposed in the image enhancement stage to manage low-quality palm print images. To accelerate the recognition process, a new classifier, improved fuzzy-based k nearest centroid neighbor (IFkNCN, was implemented. By removing outliers and reducing the amount of training data, this classifier exhibited faster computation. Our experimental results demonstrate that a touchless palm print system using LHEAT and IFkNCN achieves a promising recognition rate of 98.64%.
A Phosphate Minimum in the Oxygen Minimum Zone (OMZ) off Peru
Paulmier, A.; Giraud, M.; Sudre, J.; Jonca, J.; Leon, V.; Moron, O.; Dewitte, B.; Lavik, G.; Grasse, P.; Frank, M.; Stramma, L.; Garcon, V.
2016-02-01
The Oxygen Minimum Zone (OMZ) off Peru is known to be associated with the advection of Equatorial SubSurface Waters (ESSW), rich in nutrients and poor in oxygen, through the Peru-Chile UnderCurrent (PCUC), but this circulation remains to be refined within the OMZ. During the Pelágico cruise in November-December 2010, measurements of phosphate revealed the presence of a phosphate minimum (Pmin) in various hydrographic stations, which could not be explained so far and could be associated with a specific water mass. This Pmin, localized at a relatively constant layer ( 20minimum with a mean vertical phosphate decrease of 0.6 µM but highly variable between 0.1 and 2.2 µM. On average, these Pmin are associated with a predominant mixing of SubTropical Under- and Surface Waters (STUW and STSW: 20 and 40%, respectively) within ESSW ( 25%), complemented evenly by overlying (ESW, TSW: 8%) and underlying waters (AAIW, SPDW: 7%). The hypotheses and mechanisms leading to the Pmin formation in the OMZ are further explored and discussed, considering the physical regional contribution associated with various circulation pathways ventilating the OMZ and the local biogeochemical contribution including the potential diazotrophic activity.
Selecting the minimum risk route in the transportation of hazardous materials
Directory of Open Access Journals (Sweden)
Marijan Žura
1992-12-01
Full Text Available The transportation of hazardous materials is a broad and complex topic. The percentage and weight of accidents involving vehicles carrying dangerous goods are growing fast. A modern computer-based information system for dangerous materials management is becoming more and more important. In this paper I present an interactive software system for minimum risk route selection based on PC ARC/INFO. The model computes an optimal path based on accident probability, which is computed from traffic accident rates, highway operational speed, traffic volume and technical characteristics of the road: width, radius and slope. Dangerous goods are classified into nine classes according to their impact on different sensitive environmental elements. Those sensitive elements are drinking water resources, natural heritage, forestry, agricultural areas, cultural heritage, urban areas and tourist resorts. Some results of system implementation on the Slovenian road network are presented.
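Minimum-risk routing of this kind reduces to a shortest-path problem once each road segment carries a risk weight (accident probability times an environmental-consequence factor). A sketch with Dijkstra's algorithm follows; the tiny graph and its weights are illustrative, not the Slovenian network data or the paper's exact risk model.

```python
import heapq

# Sketch of minimum-risk routing: edge weight = segment risk (e.g. accident
# probability x consequence weight), minimised with Dijkstra's algorithm.
# Graph and weights are illustrative.
def min_risk_path(graph, src, dst):
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, risk in graph[u]:
            nd = d + risk
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

net = {"A": [("B", 0.3), ("C", 0.1)],
       "B": [("D", 0.1)],
       "C": [("D", 0.25)],
       "D": []}
route, risk = min_risk_path(net, "A", "D")
```

In a GIS setting, each of the nine dangerous-goods classes would produce a different set of edge weights over the same network, and hence potentially a different optimal route.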
In-situ position and vibration measurement of rough surfaces using laser Doppler distance sensors
Czarske, J.; Pfister, T.; Günther, P.; Büttner, L.
2009-06-01
In-situ measurement of distances and shapes as well as dynamic deformations and vibrations of fast moving and especially rotating objects, such as gear shafts and turbine blades, is an important task at process control. We recently developed a laser Doppler distance frequency sensor, employing two superposed fan-shaped interference fringe systems with contrary fringe spacing gradients. Via two Doppler frequency evaluations the non-incremental position (i.e. distance) and the tangential velocity of rotating bodies are determined simultaneously. The distance uncertainty is in contrast to e.g. triangulation in principle independent of the object velocity. This unique feature allows micrometer resolutions of fast moved rough surfaces. The novel sensor was applied at turbo machines in order to control the tip clearance. The measurements at a transonic centrifugal compressor were performed during operation at up to 50,000 rpm, i.e. 586 m/s velocity of the blade tips. Due to the operational conditions such as temperatures of up to 300 °C, a flexible and robust measurement system with a passive fiber-coupled sensor, using diffractive optics, has been realized. Since the tip clearance of individual blades could be temporally resolved an analysis of blade vibrations was possible. A Fourier transformation of the blade distances results in an average period of 3 revolutions corresponding to a frequency of 1/3 of the rotary frequency. Additionally, a laser Doppler distance sensor using two tilted fringe systems and phase evaluation will be presented. This phase sensor exhibits a minimum position resolution of σz = 140 nm. It allows precise in-situ shape measurements at grinding and turning processes.
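The measurement principle of the two-fringe-system sensor can be sketched numerically: with opposite fringe-spacing gradients, the velocity cancels in the ratio of the two Doppler frequencies, leaving a calibration curve from ratio to position. All parameter values below (fringe spacing, gradient, the linear calibration model itself) are hypothetical illustrations, not the sensor's actual calibration.

```python
# Sketch of the laser Doppler distance sensor principle: two fan-shaped
# fringe systems with contrary fringe-spacing gradients d1(z), d2(z).
# A surface at position z moving with velocity v gives f1 = v/d1(z) and
# f2 = v/d2(z); the ratio q = f1/f2 = d2(z)/d1(z) is independent of v.
# All parameters are hypothetical.
D0 = 5e-6      # fringe spacing at z = 0 (m)
S  = 0.02      # relative fringe-spacing gradient (1/mm)

def d1(z_mm): return D0 * (1 + S * z_mm)
def d2(z_mm): return D0 * (1 - S * z_mm)

def position_from_ratio(q):
    # invert q = (1 - S z) / (1 + S z)  ->  z = (1 - q) / (S (1 + q))
    return (1 - q) / (S * (1 + q))

v, z_true = 586.0, 3.0                  # m/s (blade-tip speed), mm
f1, f2 = v / d1(z_true), v / d2(z_true)
z_est = position_from_ratio(f1 / f2)    # recovers z_true, independent of v
```

This velocity independence is the "unique feature" the abstract mentions: the same ratio is measured whether the blade tip passes at 5 m/s or 586 m/s.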
Foot strike patterns of recreational and sub-elite runners in a long-distance road race.
Larson, Peter; Higgins, Erin; Kaminski, Justin; Decker, Tamara; Preble, Janine; Lyons, Daniela; McIntyre, Kevin; Normile, Adam
2011-12-01
Although the biomechanical properties of the various types of running foot strike (rearfoot, midfoot, and forefoot) have been studied extensively in the laboratory, only a few studies have attempted to quantify the frequency of running foot strike variants among runners in competitive road races. We classified the left and right foot strike patterns of 936 distance runners, most of whom would be considered of recreational or sub-elite ability, at the 10 km point of a half-marathon/marathon road race. We classified 88.9% of runners at the 10 km point as rearfoot strikers, 3.4% as midfoot strikers, 1.8% as forefoot strikers, and 5.9% of runners exhibited discrete foot strike asymmetry. Rearfoot striking was more common among our sample of mostly recreational distance runners than has been previously reported for samples of faster runners. We also compared foot strike patterns of 286 individual marathon runners between the 10 km and 32 km race locations and observed increased frequency of rearfoot striking at 32 km. A large percentage of runners switched from midfoot and forefoot foot strikes at 10 km to rearfoot strikes at 32 km. The frequency of discrete foot strike asymmetry declined from the 10 km to the 32 km location. Among marathon runners, we found no significant relationship between foot strike patterns and race times.
Local-global classifier fusion for screening chest radiographs
Ding, Meng; Antani, Sameer; Jaeger, Stefan; Xue, Zhiyun; Candemir, Sema; Kohli, Marc; Thoma, George
2017-03-01
Tuberculosis (TB) is a severe comorbidity of HIV and chest x-ray (CXR) analysis is a necessary step in screening for the infective disease. Automatic analysis of digital CXR images for detecting pulmonary abnormalities is critical for population screening, especially in medical resource constrained developing regions. In this article, we describe steps that improve previously reported performance of NLM's CXR screening algorithms and help advance the state of the art in the field. We propose a local-global classifier fusion method where two complementary classification systems are combined. The local classifier focuses on subtle and partial presentation of the disease leveraging information in radiology reports that roughly indicates locations of the abnormalities. In addition, the global classifier models the dominant spatial structure in the gestalt image using GIST descriptor for the semantic differentiation. Finally, the two complementary classifiers are combined using linear fusion, where the weight of each decision is calculated by the confidence probabilities from the two classifiers. We evaluated our method on three datasets in terms of the area under the Receiver Operating Characteristic (ROC) curve, sensitivity, specificity and accuracy. The evaluation demonstrates the superiority of our proposed local-global fusion method over any single classifier.
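The fusion step described above can be sketched as a confidence-weighted linear combination of the two classifiers' posterior probabilities. The weighting rule below (distance from a coin flip) is an illustrative stand-in for the paper's confidence probabilities, and all values are toy inputs.

```python
# Sketch of confidence-weighted linear fusion of a local and a global
# classifier's probabilities for the "abnormal" class. The weighting rule
# and the numbers are illustrative, not the paper's trained values.
def fuse(p_local, p_global, w_local):
    return w_local * p_local + (1 - w_local) * p_global

def confidence_weight(p_local, p_global):
    # weight each decision by how far its probability is from 0.5
    c_l, c_g = abs(p_local - 0.5), abs(p_global - 0.5)
    return c_l / (c_l + c_g) if (c_l + c_g) else 0.5

p_l, p_g = 0.9, 0.6          # local is confident, global is lukewarm
p = fuse(p_l, p_g, confidence_weight(p_l, p_g))
```

The effect is that a confident local detection of a subtle abnormality is not washed out by an uncertain global decision, which matches the complementary roles the two classifiers play.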
Open and Distance Learning Today. Routledge Studies in Distance Education Series.
Lockwood, Fred, Ed.
This book contains the following papers on open and distance learning today: "Preface" (Daniel); "Big Bang Theory in Distance Education" (Hawkridge); "Practical Agenda for Theorists of Distance Education" (Perraton); "Trends, Directions and Needs: A View from Developing Countries" (Koul); "American…
High dimensional classifiers in the imbalanced case
DEFF Research Database (Denmark)
Bak, Britta Anker; Jensen, Jens Ledet
We consider the binary classification problem in the imbalanced case, where the numbers of samples from the two groups differ. The classification problem is considered in the high dimensional case, where the number of variables is much larger than the number of samples, and where the imbalance leads to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias, and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some…
Ho, Jeff C; Russel, Kory C; Davis, Jennifer
2014-03-01
Support is growing for the incorporation of fetching time and/or distance considerations in the definition of access to improved water supply used for global monitoring. Current efforts typically rely on self-reported distance and/or travel time data that have been shown to be unreliable. To date, however, there has been no head-to-head comparison of such indicators with other possible distance/time metrics. This study provides such a comparison. We examine the association between both straight-line distance and self-reported one-way travel time with measured route distances to water sources for 1,103 households in Nampula province, Mozambique. We find straight-line, or Euclidean, distance to be a good proxy for route distance (R² = 0.98), while self-reported travel time is a poor proxy (R² = 0.12). We also apply a variety of time- and distance-based indicators proposed in the literature to our sample data, finding that the share of households classified as having versus lacking access would differ by more than 70 percentage points depending on the particular indicator employed. This work highlights the importance of the ongoing debate regarding valid, reliable, and feasible strategies for monitoring progress in the provision of improved water supply services.
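The proxy comparison reduces to computing an R² between each candidate indicator and the measured route distance. A minimal sketch follows; the five-point datasets are synthetic and only illustrate the qualitative pattern (straight-line tracks route distance closely, self-reports do not), not the study's actual values.

```python
# Sketch comparing candidate proxies against measured route distance via R^2
# (squared Pearson correlation). All data are synthetic illustrations.
def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

route    = [120, 250, 400, 610, 900]   # measured path length (m)
straight = [100, 210, 350, 520, 760]   # straight-line distance (m)
reported = [300, 120, 900, 200, 400]   # noisy self-reported proxy
r2_straight = r_squared(straight, route)   # high: good proxy
r2_reported = r_squared(reported, route)   # low: poor proxy
```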
Balanced sensitivity functions for tuning multi-dimensional Bayesian network classifiers
Bolt, J.H.; van der Gaag, L.C.
Multi-dimensional Bayesian network classifiers are Bayesian networks of restricted topological structure, which are tailored to classifying data instances into multiple dimensions. Like more traditional classifiers, multi-dimensional classifiers are typically learned from data and may include
Recognition of pornographic web pages by classifying texts and images.
Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve
2007-06-01
With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.
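The discrete-text stage rests on the naive Bayes rule. A hedged sketch of that idea with a toy vocabulary (the tokens, labels, and Laplace smoothing choice below are invented for illustration, not taken from the paper):

```python
import math
from collections import Counter

# Minimal naive Bayes sketch for a "discrete text" stage: estimate
# P(class) and P(word | class) with Laplace smoothing, then score a page
# by summed log-probabilities. Toy data only.

def train(docs):  # docs: list of (tokens, label)
    prior, counts, vocab = Counter(), {}, set()
    for tokens, label in docs:
        prior[label] += 1
        counts.setdefault(label, Counter()).update(tokens)
        vocab.update(tokens)
    return prior, counts, vocab

def classify(tokens, prior, counts, vocab):
    total = sum(prior.values())
    def score(label):
        c = counts[label]
        denom = sum(c.values()) + len(vocab)   # Laplace-smoothed denominator
        s = math.log(prior[label] / total)
        for t in tokens:
            s += math.log((c[t] + 1) / denom)  # add-one smoothing
        return s
    return max(prior, key=score)

docs = [("buy adult pics".split(), "porn"),
        ("adult content free pics".split(), "porn"),
        ("football match results".split(), "benign"),
        ("weather results today".split(), "benign")]
model = train(docs)
print(classify("free adult pics".split(), *model))  # porn
```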
Using Neural Networks to Classify Digitized Images of Galaxies
Goderya, S. N.; McGuire, P. C.
2000-12-01
Automated classification of galaxies into Hubble types is of paramount importance for studying the large-scale structure of the Universe, particularly as survey projects like the Sloan Digital Sky Survey complete their data acquisition of one million galaxies. At present it is not possible to find robust and efficient artificial intelligence based galaxy classifiers. In this study we summarize progress made in the development of automated galaxy classifiers using neural networks as machine learning tools. We explore the Bayesian linear algorithm, the higher order probabilistic network, the multilayer perceptron neural network, and the Support Vector Machine classifier. The performance of any machine classifier is dependent on the quality of the parameters that characterize the different groups of galaxies. Our effort is to develop geometric and invariant moment based parameters as input to the machine classifiers instead of the raw pixel data. Such an approach reduces the dimensionality of the classifier considerably, removes the effects of scaling and rotation, and makes it easier to solve for the unknown parameters in the galaxy classifier. To judge the quality of training and classification we develop the concept of Matthews coefficients for the galaxy classification community. Matthews coefficients are single numbers that quantify classifier performance even with unequal prior probabilities of the classes.
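In standard terminology, the single-number score described above is the Matthews correlation coefficient (MCC), computed from the confusion matrix and meaningful even under unequal class priors. A minimal sketch:

```python
import math

# Matthews correlation coefficient from confusion-matrix counts:
# +1 is perfect prediction, 0 is no better than chance, -1 is total
# disagreement. Unlike accuracy, it stays informative for skewed classes.

def mcc(tp, tn, fp, fn):
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

print(mcc(50, 40, 0, 0))    # 1.0  (perfect classifier)
print(mcc(25, 20, 20, 25))  # 0.0  (chance-level classifier)
```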
Classifier fusion for VoIP attacks classification
Safarik, Jakub; Rezac, Filip
2017-05-01
SIP is one of the most successful protocols in the field of IP telephony communication. It establishes and manages VoIP calls. As the number of SIP implementation rises, we can expect a higher number of attacks on the communication system in the near future. This work aims at malicious SIP traffic classification. A number of various machine learning algorithms have been developed for attack classification. The paper presents a comparison of current research and the use of classifier fusion method leading to a potential decrease in classification error rate. Use of classifier combination makes a more robust solution without difficulties that may affect single algorithms. Different voting schemes, combination rules, and classifiers are discussed to improve the overall performance. All classifiers have been trained on real malicious traffic. The concept of traffic monitoring depends on the network of honeypot nodes. These honeypots run in several networks spread in different locations. Separation of honeypots allows us to gain an independent and trustworthy attack information.
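The fusion step can be illustrated with simple combination rules. A sketch assuming stand-in label votes (the votes and weights below are illustrative placeholders, not the paper's trained classifiers):

```python
from collections import Counter

# Two common combination rules for classifier fusion: plain majority
# voting and weighted voting (weights might reflect each base
# classifier's validation accuracy). Fusion can lower the error rate
# when the base classifiers' mistakes are not perfectly correlated.

def majority_vote(predictions):
    # predictions: one label per base classifier
    return Counter(predictions).most_common(1)[0][0]

def weighted_vote(predictions, weights):
    scores = Counter()
    for label, w in zip(predictions, weights):
        scores[label] += w
    return max(scores, key=scores.get)

votes = ["attack", "attack", "benign"]
print(majority_vote(votes))                    # attack
print(weighted_vote(votes, [0.4, 0.3, 0.9]))   # benign (trusted classifier wins)
```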
Fast Most Similar Neighbor (MSN) classifiers for Mixed Data
Hernández Rodríguez, Selene
2010-01-01
The k nearest neighbor (k-NN) classifier has been extensively used in Pattern Recognition because of its simplicity and its good performance. However, in large-dataset applications, the exhaustive k-NN classifier becomes impractical. Therefore, many fast k-NN classifiers have been developed; most of them rely on metric properties (usually the triangle inequality) to reduce the number of prototype comparisons. Hence, the existing fast k-NN classifiers are applicable only when the comparison f...
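The metric-property idea can be sketched concretely: with one pivot's distances precomputed, the triangle inequality gives a lower bound that lets many exact distance computations be skipped. This is a minimal single-pivot sketch, not any particular published method:

```python
# If d(q, pivot) and d(pivot, x) are known, the triangle inequality gives
# d(q, x) >= |d(q, pivot) - d(pivot, x)|. When that lower bound already
# exceeds the best distance found so far, x can be pruned without
# computing d(q, x).

def nn_with_pruning(query, prototypes, dist):
    pivot = prototypes[0]
    d_q_pivot = dist(query, pivot)
    # Pivot-to-prototype distances; in practice precomputed offline.
    d_pivot = [dist(pivot, p) for p in prototypes]
    best, best_p, evaluated = float("inf"), None, 0
    for p, dp in zip(prototypes, d_pivot):
        if abs(d_q_pivot - dp) > best:  # lower bound beats best: prune
            continue
        evaluated += 1
        d = dist(query, p)
        if d < best:
            best, best_p = d, p
    return best_p, evaluated

dist = lambda a, b: abs(a - b)          # any metric works
protos = [0, 1, 2, 50, 51, 52, 100]
p, n = nn_with_pruning(49.6, protos, dist)
print(p, n)  # nearest prototype, and how few exact distances were needed
```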
From a distance: implications of spontaneous self-distancing for adaptive self-reflection.
Ayduk, Ozlem; Kross, Ethan
2010-05-01
Although recent experimental work indicates that self-distancing facilitates adaptive self-reflection, it remains unclear (a) whether spontaneous self-distancing leads to similar adaptive outcomes, (b) how spontaneous self-distancing relates to avoidance, and (c) how this strategy impacts interpersonal behavior. Three studies examined these issues demonstrating that the more participants spontaneously self-distanced while reflecting on negative memories, the less emotional (Studies 1-3) and cardiovascular (Study 2) reactivity they displayed in the short term. Spontaneous self-distancing was also associated with lower emotional reactivity and intrusive ideation over time (Study 1). The negative association between spontaneous self-distancing and emotional reactivity was mediated by how participants construed their experience (i.e., less recounting relative to reconstruing) rather than avoidance (Studies 1-2). In addition, spontaneous self-distancing was associated with more problem-solving behavior and less reciprocation of negativity during conflicts among couples in ongoing relationships (Study 3). Although spontaneous self-distancing was empirically related to trait rumination, it explained unique variance in predicting key outcomes. 2010 APA, all rights reserved
van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime
2016-01-01
This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,
Feature extraction for dynamic integration of classifiers
Pechenizkiy, M.; Tsymbal, A.; Puuronen, S.; Patterson, D.W.
2007-01-01
Recent research has shown the integration of multiple classifiers to be one of the most important directions in machine learning and data mining. In this paper, we present an algorithm for the dynamic integration of classifiers in the space of extracted features (FEDIC). It is based on the technique
Hierarchical traits distances explain grassland Fabaceae species' ecological niches distances
Fort, Florian; Jouany, Claire; Cruz, Pablo
2015-01-01
Fabaceae species play a key role in ecosystem functioning through their capacity to fix atmospheric nitrogen via their symbiosis with Rhizobium bacteria. To increase benefits of using Fabaceae in agricultural systems, it is necessary to find ways to evaluate species or genotypes having potential adaptations to sub-optimal growth conditions. We evaluated the relevance of phylogenetic distance, absolute trait distance and hierarchical trait distance for comparing the adaptation of 13 grassland Fabaceae species to different habitats, i.e., ecological niches. We measured a wide range of functional traits (root traits, leaf traits, and whole plant traits) in these species. Species phylogenetic and ecological distances were assessed from a species-level phylogenetic tree and species' ecological indicator values, respectively. We demonstrated that differences in ecological niches between grassland Fabaceae species were related more to their hierarchical trait distances than to their phylogenetic distances. We showed that grassland Fabaceae functional traits tend to converge among species with the same ecological requirements. Species with acquisitive root strategies (thin roots, shallow root systems) are competitive species adapted to non-stressful meadows, while conservative ones (coarse roots, deep root systems) are able to tolerate stressful continental climates. In contrast, acquisitive species appeared to be able to tolerate low soil-P availability, while conservative ones need high P availability. Finally we highlight that traits converge along the ecological gradient, providing the assumption that species with similar root-trait values are better able to coexist, regardless of their phylogenetic distance. PMID:25741353
Classifying Returns as Extreme
DEFF Research Database (Denmark)
Christiansen, Charlotte
2014-01-01
I consider extreme returns for the stock and bond markets of 14 EU countries using two classification schemes: One, the univariate classification scheme from the previous literature that classifies extreme returns for each market separately, and two, a novel multivariate classification scheme tha...
Ziegler, Gerhard
2011-01-01
Distance protection provides the basis for network protection in transmission systems and meshed distribution systems. This book covers the fundamentals of distance protection and the special features of numerical technology. The emphasis is placed on the application of numerical distance relays in distribution and transmission systems. This book is aimed at students and engineers who wish to familiarise themselves with the subject of power system protection, as well as at experienced users entering the area of numerical distance protection. Furthermore, it serves as a reference guide for s
The Protection of Classified Information: The Legal Framework
National Research Council Canada - National Science Library
Elsea, Jennifer K
2006-01-01
Recent incidents involving leaks of classified information have heightened interest in the legal framework that governs security classification, access to classified information, and penalties for improper disclosure...
12 CFR 564.4 - Minimum appraisal standards.
2010-01-01
Banks and Banking; Office of Thrift Supervision, Department of the Treasury; Appraisals. § 564.4 Minimum appraisal standards. For federally related transactions, all appraisals shall, at a minimum: (a...
The minimum wage in the Czech enterprises
Eva Lajtkepová
2010-01-01
Although the statutory minimum wage is not a new category, in the Czech Republic we encounter the definition and regulation of a minimum wage for the first time in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were subsequently determined by government regulation in February 1991. Since that time, the value of the minimum wage has been adjusted fifteen times (the last increase was in January 2007). ...
Consistency Analysis of Nearest Subspace Classifier
Wang, Yi
2015-01-01
The Nearest subspace classifier (NSS) finds an estimation of the underlying subspace within each class and assigns data points to the class that corresponds to its nearest subspace. This paper mainly studies how well NSS can be generalized to new samples. It is proved that NSS is strongly consistent under certain assumptions. For completeness, NSS is evaluated through experiments on various simulated and real data sets, in comparison with some other linear model based classifiers. It is also ...
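The assignment rule itself is easy to sketch. Assuming each class subspace is already known (in practice it would be estimated from training data, e.g. via SVD), here is a 2-D toy version with one-dimensional subspaces:

```python
import math

# Nearest subspace rule in 2-D: each class is a line through the origin
# given by a unit direction; a point goes to the class whose subspace
# leaves the smallest orthogonal residual. Directions here are assumed
# known, which stands in for the paper's estimation step.

def residual(x, u):
    # distance from point x to span{u}, with u a unit vector
    proj = x[0] * u[0] + x[1] * u[1]
    rx, ry = x[0] - proj * u[0], x[1] - proj * u[1]
    return math.hypot(rx, ry)

def nss_classify(x, subspaces):
    # subspaces: {label: unit direction}
    return min(subspaces, key=lambda c: residual(x, subspaces[c]))

subspaces = {"A": (1.0, 0.0),                           # the x-axis
             "B": (1 / math.sqrt(2), 1 / math.sqrt(2))}  # the diagonal
print(nss_classify((3.0, 0.2), subspaces))  # A
print(nss_classify((2.0, 1.9), subspaces))  # B
```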
MORIKAWA Masayuki
2013-01-01
This paper, using prefecture level panel data, empirically analyzes 1) the recent evolution of price-adjusted regional minimum wages and 2) the effects of minimum wages on firm profitability. As a result of rapid increases in minimum wages in the metropolitan areas since 2007, the regional disparity of nominal minimum wages has been widening. However, the disparity of price-adjusted minimum wages has been shrinking. According to the analysis of the effects of minimum wages on profitability us...
Energy-Efficient Neuromorphic Classifiers.
Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano
2016-10-01
Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumptions promised by neuromorphic engineering are extremely low, comparable to those of the nervous system. Until now, however, the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, thereby obfuscating a direct comparison of their energy consumption to that used by conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. Specifically, we provide a set of general prescriptions to enable the practical implementation of neural architectures that compete with state-of-the-art classifiers. We also show that the energy consumption of these architectures, realized on the IBM chip, is typically two or more orders of magnitude lower than that of conventional digital machines implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and has significant advantages over conventional digital devices when energy consumption is considered.
41 CFR 50-201.1101 - Minimum wages.
2010-07-01
Public Contracts and Property Management; Public Contracts, Department of Labor; General Regulations. § 50-201.1101 Minimum wages. Determinations of prevailing minimum wages or changes therein will be published in the Federal Register by the...
The Distance Standard Deviation
Edelmann, Dominic; Richards, Donald; Vogel, Daniel
2017-01-01
The distance standard deviation, which arises in distance correlation analysis of multivariate data, is studied as a measure of spread. New representations for the distance standard deviation are obtained in terms of Gini's mean difference and in terms of the moments of spacings of order statistics. Inequalities for the distance variance are derived, proving that the distance standard deviation is bounded above by the classical standard deviation and by Gini's mean difference. Further, it is ...
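A sample version of the distance standard deviation can be sketched by double-centering the pairwise distance matrix, in the style of the V-statistics used in distance correlation analysis (the normalization below is one common convention and may differ from the paper's exact definition):

```python
import math

# Sample distance variance: double-center the pairwise distance matrix
# a_ij = |x_i - x_j| and average the squared centered entries; the
# distance standard deviation is its square root.

def distance_std(xs):
    n = len(xs)
    a = [[abs(xi - xj) for xj in xs] for xi in xs]
    row = [sum(r) / n for r in a]          # row means (= column means here)
    grand = sum(row) / n                   # grand mean
    dvar = sum((a[i][j] - row[i] - row[j] + grand) ** 2
               for i in range(n) for j in range(n)) / (n * n)
    return math.sqrt(dvar)

print(distance_std([1.0, 1.0, 1.0]))       # 0.0 for constant data
print(distance_std([0.0, 1.0, 2.0]) > 0)   # True for spread-out data
```

Consistent with the abstract's inequalities, for a given sample this quantity is expected not to exceed the classical standard deviation or Gini's mean difference.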
Directory of Open Access Journals (Sweden)
Eduardo Mendes Nascimento
2014-03-01
Based on Scriven's User-Focused Evaluation Theory, the general objective in this study was to identify and analyze the degree of importance Brazilian students attribute to the variables that influence them when choosing distance education lato sensu graduate business programs. The research is classified as descriptive, and an electronic questionnaire was used to survey the data, involving 354 students from distance education lato sensu graduate business programs distributed across different Brazilian locations. The questionnaire included 16 variables, which the students were asked to score from 0 to 10. The results indicated that four variables obtained a mean score above 9, and that flexibility was the main factor the respondents considered in the choice of a distance education program. This shows that the possibility to structure the program according to their available time is fundamental for the students. Nevertheless, having a trained teaching staff (the second most influential variable) and a curriculum appropriate to their pedagogical needs (the fourth) are also essential characteristics. Finally, the respondents indicated cost as the third most important variable; some authors even consider it decisive in the students' choice, as distance education programs are frequently cheaper than in-class programs. It was also verified that women score the investigated internal variables higher than men, and that the location of the support hub is a determinant variable in the choice of the program.
Directory of Open Access Journals (Sweden)
Dr. Nursel Selver RUZGAR,
2004-04-01
Distance Education in Turkey. Many countries of the world use distance education in various ways: via the Internet, by post, and by TV. In this work, the development of distance education in Turkey is presented from the beginning. After discussing the types and applications of distance education at different levels in Turkey, distance education is considered from a cultural point of view. Then, in order to assess the tendencies and views of graduates of Higher Education Institutions and Distance Education Institutions regarding competing in job markets, the sufficiency of their education level, advantages for the education system, and continuing education in different institutions, a face-to-face survey was administered to 1284 graduates, 958 from Higher Education Institutions and 326 from Distance Education Institutions. The results were evaluated and discussed. In the last part of this work, suggestions are made for spreading and improving distance education in the country.
Minimum Wage Laws and the Distribution of Employment.
Lang, Kevin
The desirability of raising the minimum wage has long revolved around just one question: the effect of higher minimum wages on the overall level of employment. An even more critical effect of the minimum wage rests on the composition of employment--who gets the minimum wage job. An examination of employment in eating and drinking establishments…
29 CFR 505.3 - Prevailing minimum compensation.
2010-07-01
Labor; ... Humanities. § 505.3 Prevailing minimum compensation. (a)(1) In the absence of an alternative determination...)(2) of this section, the prevailing minimum compensation required to be paid under the Act to the...
Verification of classified fissile material using unclassified attributes
International Nuclear Information System (INIS)
Nicholas, N.J.; Fearey, B.L.; Puckett, J.M.; Tape, J.W.
1998-01-01
This paper reports on the most recent efforts of US technical experts to explore verification by the IAEA of unclassified attributes of classified excess fissile material. Two propositions are discussed: (1) that multiple unclassified attributes could be declared by the host nation and then verified (and reverified) by the IAEA in order to provide confidence in that declaration of a classified (or unclassified) inventory while protecting classified or sensitive information; and (2) that attributes could be measured, remeasured, or monitored to provide continuity of knowledge in a nonintrusive and unclassified manner. The authors believe attributes should relate to characteristics of excess weapons materials and should be verifiable and authenticatable with methods usable by IAEA inspectors. Further, attributes (along with the methods to measure them) must not reveal any classified information. The approach that the authors have taken is as follows: (1) assume certain attributes of classified excess material, (2) identify passive signatures, (3) determine the range of applicable measurement physics, (4) develop a set of criteria to assess and select measurement technologies, (5) select existing instrumentation for proof-of-principle measurements and demonstration, and (6) develop and design information barriers to protect classified information. While the attribute verification concepts and measurements discussed in this paper appear promising, neither the attribute verification approach nor the measurement technologies have been fully developed, tested, and evaluated.
Reinforcement Learning Based Artificial Immune Classifier
Directory of Open Access Journals (Sweden)
Mehmet Karakose
2013-01-01
One of the widely used methods for classification, a decision-making process, is artificial immune systems. Artificial immune systems, based on the natural immune system, can be successfully applied to classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement learning based artificial immune classifier is proposed as a new approach. This approach uses reinforcement learning to find better antibodies with immune operators. The proposed approach offers several advantages over other methods in the literature, such as effectiveness, fewer memory cells, high accuracy, speed, and data adaptability. The performance of the proposed approach is demonstrated by simulation and experimental results using real data in Matlab and on an FPGA. Some benchmark data and remote image data are used for the experimental results. Comparative results with a supervised/unsupervised artificial immune system, a negative selection classifier, and a resource-limited artificial immune classifier are given to demonstrate the effectiveness of the proposed method.
Haptic Discrimination of Distance
van Beek, Femke E.; Bergmann Tiest, Wouter M.; Kappers, Astrid M. L.
2014-01-01
While quite some research has focussed on the accuracy of haptic perception of distance, information on the precision of haptic perception of distance is still scarce, particularly regarding distances perceived by making arm movements. In this study, eight conditions were measured to answer four main questions, which are: what is the influence of reference distance, movement axis, perceptual mode (active or passive) and stimulus type on the precision of this kind of distance perception? A discrimination experiment was performed with twelve participants. The participants were presented with two distances, using either a haptic device or a real stimulus. Participants compared the distances by moving their hand from a start to an end position. They were then asked to judge which of the distances was the longer, from which the discrimination threshold was determined for each participant and condition. The precision was influenced by reference distance. No effect of movement axis was found. The precision was higher for active than for passive movements and it was a bit lower for real stimuli than for rendered stimuli, but it was not affected by adding cutaneous information. Overall, the Weber fraction for the active perception of a distance of 25 or 35 cm was about 11% for all cardinal axes. The recorded position data suggest that participants, in order to be able to judge which distance was the longer, tried to produce similar speed profiles in both movements. This knowledge could be useful in the design of haptic devices. PMID:25116638
Do Some Workers Have Minimum Wage Careers?
Carrington, William J.; Fallick, Bruce C.
2001-01-01
Most workers who begin their careers in minimum-wage jobs eventually gain more experience and move on to higher paying jobs. However, more than 8% of workers spend at least half of their first 10 working years in minimum wage jobs. Those more likely to have minimum wage careers are less educated, minorities, women with young children, and those…
Does the Minimum Wage Affect Welfare Caseloads?
Page, Marianne E.; Spetz, Joanne; Millar, Jane
2005-01-01
Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…
29 CFR 4.159 - General minimum wage.
2010-07-01
Labor, Office of... § 4.159 General minimum wage. The Act, in section 2(b)(1), provides generally that no contractor or subcontractor... a contract less than the minimum wage specified under section 6(a)(1) of the Fair Labor Standards...
Intelligent Garbage Classifier
Directory of Open Access Journals (Sweden)
Ignacio Rodríguez Novelle
2008-12-01
IGC (Intelligent Garbage Classifier) is a system for visual classification and separation of solid waste products. Currently, an important part of the separation effort is based on manual work, from household separation to industrial waste management. Taking advantage of the technologies currently available, a system has been built that can analyze images from a camera and control a robot arm and conveyor belt to automatically separate different kinds of waste.
Correlation Dimension-Based Classifier
Czech Academy of Sciences Publication Activity Database
Jiřina, Marcel; Jiřina jr., M.
2014-01-01
Roč. 44, č. 12 (2014), s. 2253-2263 ISSN 2168-2267 R&D Projects: GA MŠk(CZ) LG12020 Institutional support: RVO:67985807 Keywords: classifier * multidimensional data * correlation dimension * scaling exponent * polynomial expansion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.469, year: 2014
An ensemble classifier to predict track geometry degradation
International Nuclear Information System (INIS)
Cárdenas-Gallo, Iván; Sarmiento, Carlos A.; Morales, Gilberto A.; Bolivar, Manuel A.; Akhavan-Tabatabaei, Raha
2017-01-01
Railway operations are inherently complex and a source of several problems. In particular, track geometry defects are one of the leading causes of train accidents in the United States. This paper presents a solution approach which entails the construction of an ensemble classifier to forecast the degradation of track geometry. Our classifier is constructed by solving the problem from three different perspectives: deterioration, regression, and classification. We considered a different model from each perspective, and our results show that using an ensemble method improves the predictive performance. - Highlights: • We present an ensemble classifier to forecast the degradation of track geometry. • Our classifier considers three perspectives: deterioration, regression and classification. • We construct and test three models and our results show that using an ensemble method improves the predictive performance.
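The combination idea can be sketched with three stand-in scorers, one per perspective. The models and the averaging rule below are illustrative placeholders, not the paper's fitted models or its actual combination rule:

```python
# Three placeholder models, one per perspective (deterioration,
# regression, classification), each emit a probability that a track
# segment degrades; averaging is one simple way to combine them.

def deterioration_model(features):
    return min(1.0, features["age_years"] / 30)        # older track degrades

def regression_model(features):
    return min(1.0, 0.1 + 0.02 * features["traffic_mgt"])  # traffic load (MGT)

def classification_model(features):
    return 0.9 if features["prior_defects"] > 2 else 0.2   # defect history

def ensemble_predict(features, threshold=0.5):
    models = (deterioration_model, regression_model, classification_model)
    score = sum(m(features) for m in models) / len(models)
    return score, score >= threshold   # (ensemble score, degrade flag)

segment = {"age_years": 24, "traffic_mgt": 30, "prior_defects": 4}
print(ensemble_predict(segment))
```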
Pixel Classification of SAR ice images using ANFIS-PSO Classifier
Directory of Open Access Journals (Sweden)
G. Vasumathi
2016-12-01
Synthetic Aperture Radar (SAR) plays a vital role in producing extremely high-resolution radar images. It is widely used to monitor ice-covered ocean regions. Sea monitoring is important for various purposes, including global climate studies and ship navigation. Classification of the ice-infested area yields important features which are further useful for various monitoring processes around the ice regions. The main objective of this paper is to classify SAR ice images to help identify the regions around ice-infested areas. Three stages are considered in the classification of SAR ice images. It starts with preprocessing, in which the speckled SAR ice images are denoised using various speckle-removal filters; a comparison is made across these filters to find the best filter for speckle removal. The second stage is segmentation, in which different regions are segmented using K-means and watershed segmentation algorithms; a comparison is made between these two algorithms to find the better one for segmenting SAR ice images. The last stage is pixel-based classification, which identifies and classifies the segmented regions using various supervised learning classifiers. The algorithms include back-propagation neural networks (BPN), a fuzzy classifier, an Adaptive Neuro-Fuzzy Inference System (ANFIS) classifier, and the proposed ANFIS with Particle Swarm Optimization (PSO) classifier; a comparison is made across all these classifiers to propose which classifier is best suited for classifying SAR ice images. Various evaluation metrics are computed separately at each of these three stages.
Schmitt, Oliver; Steinmann, Paul
2017-09-01
We introduce a manufacturing constraint for controlling the minimum member size in structural shape optimization problems, which is of interest, for example, for components fabricated in a molding process. In a parameter-free approach, whereby the coordinates of the FE boundary nodes are used as design variables, the challenging task is to find a generally valid definition for the thickness of non-parametric geometries in terms of their boundary nodes. We therefore use the medial axis, which is the union of all points with at least two closest points on the boundary of the domain. Since the effort for the exact computation of the medial axis of geometries given by their FE discretization increases sharply with the number of surface elements, we instead use the distance function to approximate the medial axis by a cloud of points. The approximation is demonstrated on three 2D examples. Moreover, the formulation of a minimum thickness constraint is applied to a sensitivity-based shape optimization problem of one 2D and one 3D model.
Directory of Open Access Journals (Sweden)
Matheswaran Saravanan
2014-01-01
Full Text Available Wireless sensor networks (WSNs) consist of sensor nodes that need energy-efficient routing techniques, as they have limited battery power, computing, and storage resources. WSN routing protocols should enable reliable multihop communication under energy constraints. Clustering is an effective way to reduce overheads, and when it is aided by effective resource allocation, it results in reduced energy consumption. In this work, a novel hybrid evolutionary algorithm called Bee Algorithm-Simulated Annealing Weighted Minimal Spanning Tree (BASA-WMST) routing is proposed, in which randomly deployed sensor nodes are split into the best possible number of independent clusters, each with a cluster head and an optimal route. The cluster head gathers data from the sensors belonging to its cluster and forwards them to the sink. The shortest intra-cluster path is selected using a Weighted Minimum Spanning Tree (WMST). The proposed algorithm computes the distance-based Minimum Spanning Tree (MST) of the weighted graph for the multihop network. The weights are dynamically changed based on the energy level of each sensor during route selection and optimized using the proposed bee algorithm-simulated annealing approach.
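The MST construction at the heart of this routing scheme can be sketched with Prim's algorithm over a matrix of pairwise link weights. This is only a minimal illustration of the distance-based MST step under assumed weights, not the BASA-WMST algorithm itself (the energy-aware weight updates and the bee/simulated-annealing optimization are omitted):

```python
import heapq

def prim_mst(weights):
    """Minimum spanning tree of a dense weighted graph given as an
    adjacency matrix. Returns a list of (parent, child) edges.

    In the routing context, weights[u][v] would be the inter-node
    distance, optionally rescaled by residual node energy.
    """
    n = len(weights)
    visited = [False] * n
    edges = []
    heap = [(0.0, 0, -1)]  # (edge weight, node, parent)
    while heap:
        w, u, parent = heapq.heappop(heap)
        if visited[u]:
            continue
        visited[u] = True
        if parent >= 0:
            edges.append((parent, u))
        for v in range(n):
            if not visited[v] and v != u:
                heapq.heappush(heap, (weights[u][v], v, u))
    return edges
```

For a three-node network with weights `[[0, 1, 4], [1, 0, 2], [4, 2, 0]]`, the tree connects node 0 to 1 and node 1 to 2, skipping the costly direct 0-2 link.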
Energy Technology Data Exchange (ETDEWEB)
Addai, Emmanuel Kwasi, E-mail: emmanueladdai41@yahoo.com; Gabel, Dieter; Krause, Ulrich
2016-04-15
Highlights: • Ignition sensitivity of a highly flammable dust decreases upon addition of an inert dust. • The minimum ignition temperature of a highly flammable dust increases as the inert concentration increases. • The minimum ignition energy of a highly flammable dust increases as the inert concentration increases. • The permissible range for the inert mixture to minimize the ignition risk lies between 60 and 80%. - Abstract: The risks associated with dust explosions still exist in industries that either process or handle combustible dust. This explosion risk can be prevented or mitigated by applying the principle of inherent safety (moderation). This is achieved by adding an inert material to a highly combustible material in order to decrease the ignition sensitivity of the combustible dust. The present paper deals with the experimental investigation of the influence of adding an inert dust on the minimum ignition energy and the minimum ignition temperature of combustible/inert dust mixtures. The experimental investigation was done in two laboratory-scale apparatuses: the Hartmann apparatus and the Godbert-Greenwald furnace for the minimum ignition energy and the minimum ignition temperature tests, respectively. Various amounts of three inert materials (magnesium oxide, ammonium sulphate and sand) were mixed with six combustible dusts (brown coal, lycopodium, toner, niacin, corn starch and high-density polyethylene). Generally, increasing the inert material concentration increases the minimum ignition energy as well as the minimum ignition temperature until a threshold is reached where no ignition is obtained. The permissible range for the inert mixture to minimize the ignition risk lies between 60 and 80%.
Disassembly and Sanitization of Classified Matter
International Nuclear Information System (INIS)
Stockham, Dwight J.; Saad, Max P.
2008-01-01
The Disassembly Sanitization Operation (DSO) process was implemented to support weapon disassembly and disposition by using recycling and waste minimization measures. This process was initiated by treaty agreements and reconfigurations within both the DOD and DOE Complexes. The DOE is faced with disassembling and disposing of a huge inventory of retired weapons, components, training equipment, spare parts, weapon maintenance equipment, and associated material. In addition, regulations have caused a dramatic increase in the need for information required to support the handling and disposition of these parts and materials. In the past, huge inventories of classified weapon components were required to be held in long-term storage at Sandia and at many other locations throughout the DOE Complex. These materials are placed in onsite storage units due to classification issues, and they may also contain radiological and/or hazardous components. Since no disposal options existed for this material, the only choice was long-term storage. Long-term storage is costly and somewhat problematic, requiring a secured storage area, monitoring, and auditing, and presenting the potential for loss or theft of the material. Overall recycling rates for materials sent through the DSO process have enabled 70 to 80% of these components to be recycled. These components are made of high-quality materials, and once the material has been sanitized, the demand for the component metals for recycling efforts is very high. The DSO process for NGPF classified components established the credibility of this technique for addressing the long-term storage requirements of the classified weapons component inventory. The success of this application has generated interest from other Sandia organizations and other locations throughout the complex. Other organizations are requesting the help of the DSO team, and the DSO is responding to these requests by expanding its scope to include Work-for-Other projects. For example
Zourmand, Alireza; Ting, Hua-Nong; Mirhassani, Seyed Mostafa
2013-03-01
Speech is one of the most prevalent communication media for humans. Identifying the gender of a child speaker based on his/her speech is crucial in telecommunication and speech therapy. This article investigates the use of fundamental and formant frequencies from sustained vowel phonation to distinguish the gender of Malay children aged between 7 and 12 years. The Euclidean minimum distance and multilayer perceptron were used to classify the gender of 360 Malay children based on different combinations of fundamental and formant frequencies (F0, F1, F2, and F3). The Euclidean minimum distance with normalized frequency data achieved a classification accuracy of 79.44%, which was higher than that of the non-normalized frequency data. Age-dependent modeling was used to improve the accuracy of gender classification; with it, the Euclidean distance method obtained an optimal classification accuracy of 84.17% across all age groups. The accuracy was further increased to 99.81% using a multilayer perceptron based on mel-frequency cepstral coefficients. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
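As background, a Euclidean minimum-distance classifier of the kind used here assigns a sample to the class whose training centroid is nearest. A minimal sketch, with made-up frequency values standing in for the (F0, F1, ...) features:

```python
import math

def train_centroids(samples, labels):
    """Mean feature vector (centroid) per class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def classify(x, centroids):
    """Assign x to the class with the nearest centroid (Euclidean)."""
    return min(centroids, key=lambda y: math.dist(x, centroids[y]))
```

Normalizing the features first, as the study does, prevents the larger formant frequencies from dominating the distance.
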
New Minimum Wage Research: A Symposium.
Ehrenberg, Ronald G.; And Others
1992-01-01
Includes "Introduction" (Ehrenberg); "Effect of the Minimum Wage [MW] on the Fast-Food Industry" (Katz, Krueger); "Using Regional Variation in Wages to Measure Effects of the Federal MW" (Card); "Do MWs Reduce Employment?" (Card); "Employment Effects of Minimum and Subminimum Wages" (Neumark,…
Cao, Jianfeng; Liu, Yong; Hu, Songjie; Liu, Lei; Tang, Geshi; Huang, Yong; Li, Peijia
2015-01-01
China's space probe Chang'E-2 began its asteroid exploration mission on April 15, 2012 and had been in space for 243 days before its encounter with Toutatis. With no onboard navigation equipment available, the navigation of CE-2 during its fly-by of the asteroid relied entirely on ground-based Unified S-Band (USB) and Very Long Baseline Interferometry (VLBI) tracking data. The orbit determination of Toutatis was achieved by using a combination of optical measurements and radar ranging. On November 30, 2012, CE-2 was targeted at a destination 15 km away from the asteroid as it performed its third trajectory correction maneuver. Later orbit determination analysis showed that a correction residual was still present, which necessitated another maneuver on December 12. During the two maneuvers, ground-based navigation faced a challenge in terms of orbit determination accuracy. By optimizing our strategy, an accuracy of better than 15 km was finally achieved for the post-maneuver orbit solution. On December 13, CE-2 successfully passed by Toutatis and photographed it continuously during the entire process. An analysis of the images taken by the solar-panel monitoring camera, together with the satellite attitude information, demonstrates that the closest distance between CE-2 and Toutatis's surface was 1.9 km, which is considerably better than the 30 km fly-by distance we had originally hoped for based on the accuracies obtainable for the satellite's and Toutatis's orbits.
Teaching the Minimum Wage in Econ 101 in Light of the New Economics of the Minimum Wage.
Krueger, Alan B.
2001-01-01
Argues that the recent controversy over the effect of the minimum wage on employment offers an opportunity for teaching introductory economics. Examines eight textbooks to determine topic coverage but finds little consensus. Describes how minimum wage effects should be taught. (RLH)
International Nuclear Information System (INIS)
Daling, P.M.; Graham, T.M.
1999-01-01
The US Department of Energy has undertaken a project to reduce energy expenditures and improve energy system reliability in the 300 Area of the Hanford Site near Richland, Washington. This project replaced the centralized heating system with heating units for individual buildings or groups of buildings, constructed a new natural-gas distribution system to provide a fuel source for many of these units, and constructed a central control building to operate and maintain the system. The individual heating units include steam boilers that are housed in individual annex buildings located in the vicinity of a number of nuclear facilities operated by the Pacific Northwest National Laboratory (PNNL). The described analysis develops the basis for siting the package boilers and natural-gas distribution system used to supply steam to PNNL's 300 Area nuclear facilities. Minimum separation distances that would eliminate or reduce the risks of accidental dispersal of radioactive and hazardous materials in nearby nuclear facilities were calculated based on the effects of four potential fire and explosion (detonation) scenarios involving the boiler and natural-gas distribution system. These minimum separation distances were used to support siting decisions for the boilers and natural-gas pipelines.
Directory of Open Access Journals (Sweden)
Antonio Colmenar-Santos
2014-02-01
Full Text Available The goal of this research is to assess the minimum fast-charging infrastructure required to allow country-wide interurban electric vehicle (EV) mobility. Charging times comparable to fueling times of conventional internal combustion vehicles are nowadays feasible, given the current availability of fast-charging technologies. The main contribution of this paper is the analysis of the planning method and the investment requirements for the necessary infrastructure, including the definition of the Maximum Distance between Fast Charge (MDFC) and the Basic Highway Charging Infrastructure (BHCI) concepts. According to the calculations, the distance between stations will be region-dependent, influenced primarily by weather conditions. The study considers that the initial investment should be sufficient to promote EV adoption, proposing an initial state-financed public infrastructure; once the EV adoption rate increases, additional infrastructure will likely be developed through private investment. The Spanish network of state highways is used as a case study to demonstrate the methodology and calculate the required investment. Further, the results are discussed and quantitatively compared to other incentives and policies supporting EV technology adoption in the light-vehicle sector.
Frog sound identification using extended k-nearest neighbor classifier
Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati
2017-09-01
Frog sound identification based on vocalization is important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest-neighbor and mutual neighborhood-sharing concepts, with the aim of improving classification performance. It makes a prediction based on which training samples are the nearest neighbors of the testing sample and which consider the testing sample to be among their own nearest neighbors. To evaluate classification performance in frog sound identification, the EKNN classifier is compared with competing classifiers, k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN) and Mutual k-Nearest Neighbor (MKNN), on the recorded sounds of 15 frog species obtained in Malaysian forests. The recorded sounds were segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features were extracted by Mel Frequency Cepstrum Coefficients (MFCC). The experimental results show that the EKNN classifier exhibits the best accuracy compared to the competing classifiers KNN, FKNN, KGNN and MKNN in all cases.
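The mutual-neighborhood idea can be sketched as follows: vote only over the neighbors of the test sample that would also count the test sample among their own k nearest points. This is a simplified illustration of the concept, not the EKNN algorithm from the paper:

```python
import math
from collections import Counter

def mutual_knn_classify(train_x, train_y, x, k):
    """Mutual k-NN sketch: vote only among training samples that are both
    among x's k nearest neighbours AND would count x among their own k
    nearest neighbours; fall back to plain k-NN if no neighbour is mutual."""
    dist = [math.dist(x, p) for p in train_x]
    neighbours = sorted(range(len(train_x)), key=lambda i: dist[i])[:k]
    mutual = []
    for i in neighbours:
        # distances from training point i to every other training point
        d_others = sorted(math.dist(train_x[i], p)
                          for j, p in enumerate(train_x) if j != i)
        # x is a mutual neighbour of i if it is at least as close as
        # i's current k-th nearest training point
        if len(d_others) < k or dist[i] <= d_others[k - 1]:
            mutual.append(i)
    votes = mutual if mutual else neighbours  # plain k-NN fallback
    return Counter(train_y[i] for i in votes).most_common(1)[0][0]
```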
30 CFR 75.1431 - Minimum rope strength.
2010-07-01
..., including rotation resistant). For rope lengths less than 3,000 feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet...
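The quoted formulas translate directly into a small calculator. Note that the factor for friction drum ropes of 4,000 feet or more is truncated in the excerpt above, so it is deliberately left unimplemented rather than guessed:

```python
def minimum_rope_value(static_load, length_ft, rope_type="drum"):
    """Minimum rope strength per the formulas quoted from 30 CFR 75.1431.

    "drum":     factor = 7.0 - 0.001*L for L < 3,000 ft, else 4.0
    "friction": factor = 7.0 - 0.0005*L for L < 4,000 ft
    (the friction-rope factor at L >= 4,000 ft is cut off in the excerpt,
    so it is not implemented here).
    """
    if rope_type == "drum":
        factor = 7.0 - 0.001 * length_ft if length_ft < 3000 else 4.0
    elif rope_type == "friction" and length_ft < 4000:
        factor = 7.0 - 0.0005 * length_ft
    else:
        raise ValueError("factor not given in the quoted excerpt")
    return static_load * factor
```

For example, a 2,000 ft drum rope carrying a 1,000 lb static load requires a minimum value of 1,000 × (7.0 − 2.0) = 5,000 lb.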
Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis
Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert
2005-12-01
A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
Directory of Open Access Journals (Sweden)
Saleh LAshkari
2016-06-01
Full Text Available Selecting optimal features based on the nature of the phenomenon and with high discriminant ability is very important in data classification problems. Since Recurrence Quantification Analysis (RQA) does not require any assumptions about stationarity or about the size of the signal and the noise, it may be useful for epileptic seizure detection. In this study, RQA was used to discriminate ictal EEG from normal EEG, with optimal features selected by a combination of a genetic algorithm and a Bayesian classifier. Recurrence plots of a hundred samples in each of the two categories were obtained with five distance norms: Euclidean, Maximum, Minimum, Normalized and Fixed norm. To choose an optimal threshold for each norm, ten thresholds of ε were generated, and the best feature space was then selected by the genetic algorithm in combination with the Bayesian classifier. The results show that the proposed method is capable of discriminating ictal EEG from normal EEG; for the Minimum norm and 0.1 < ε < 1, accuracy was 100%. In addition, the sensitivity of the proposed framework to the ε and distance-norm parameters was low. The optimal feature presented in this study is Trans, which was selected in most feature spaces with high accuracy.
Liberman, Nira; Förster, Jens
2009-08-01
In 4 studies, the authors examined the prediction derived from construal level theory (CLT) that higher level of perceptual construal would enhance estimated egocentric psychological distance. The authors primed participants with global perception, local perception, or both (the control condition). Relative to the control condition, global processing made participants estimate larger psychological distances in time (Study 1), space (Study 2), social distance (Study 3), and hypotheticality (Study 4). Local processing had the opposite effect. Consistent with CLT, all studies show that the effect of global-versus-local processing did emerge when participants estimated egocentric distances, which are distances from the experienced self in the here and now, but did not emerge with temporal distances not from now (Study 1), spatial distances not from here (Study 2), social distances not from the self (Study 3), or hypothetical events that did not involve altering an experienced reality (Study 4).
Fast Computing for Distance Covariance
Huo, Xiaoming; Szekely, Gabor J.
2014-01-01
Distance covariance and distance correlation have been widely adopted for measuring the dependence of a pair of random variables or random vectors. If the computation of distance covariance and distance correlation is implemented directly according to its definition, its computational complexity is O($n^2$), which is a disadvantage compared to other, faster methods. In this paper we show that the computation of distance covariance and distance correlation of real-valued random variables can be...
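For reference, the direct O(n²) computation that faster methods improve upon looks like this for real-valued samples: build the pairwise distance matrices, double-center them, and average the elementwise products:

```python
def distance_covariance(x, y):
    """Naive O(n^2) sample distance covariance of two real-valued samples,
    following the definition: double-centred pairwise distance matrices,
    then the square root of the mean of their elementwise products."""
    n = len(x)
    a = [[abs(x[i] - x[j]) for j in range(n)] for i in range(n)]
    b = [[abs(y[i] - y[j]) for j in range(n)] for i in range(n)]

    def center(m):
        row = [sum(r) / n for r in m]       # row means (matrix is symmetric)
        grand = sum(row) / n                # grand mean
        return [[m[i][j] - row[i] - row[j] + grand for j in range(n)]
                for i in range(n)]

    A, B = center(a), center(b)
    total = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n))
    return (total / (n * n)) ** 0.5
```

A constant sample gives zero distance covariance, while linearly dependent samples give a strictly positive value.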
ORDERED WEIGHTED DISTANCE MEASURE
Institute of Scientific and Technical Information of China (English)
Zeshui XU; Jian CHEN
2008-01-01
The aim of this paper is to develop an ordered weighted distance (OWD) measure, which is a generalization of some widely used distance measures, including the normalized Hamming distance, the normalized Euclidean distance, the normalized geometric distance, the max distance, the median distance and the min distance. Moreover, the ordered weighted averaging operator, the generalized ordered weighted aggregation operator, the ordered weighted geometric operator, the averaging operator, the geometric mean operator, the ordered weighted square root operator, the square root operator, the max operator, the median operator and the min operator are also special cases of the OWD measure. Some methods depending on the input arguments are given to determine the weights associated with the OWD measure. The prominent characteristic of the OWD measure is that it can relieve (or intensify) the influence of unduly large or unduly small deviations on the aggregation results by assigning them low (or high) weights. This desirable characteristic makes the OWD measure very suitable for use in many practical fields, including group decision making, medical diagnosis, data mining, and pattern recognition. Finally, based on the OWD measure, we develop a group decision making approach and illustrate it with a numerical example.
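A minimal sketch of the OWD idea as described: the individual deviations are sorted in descending order and aggregated with position weights (plus an optional power parameter), so that, for instance, the weight vector (1, 0, ..., 0) recovers the max distance and uniform weights recover the normalized Hamming distance. The exact parameterization below is an illustrative reading of the abstract, not the paper's formal definition:

```python
def owd(a, b, weights, lam=1.0):
    """Ordered weighted distance sketch: deviations |a_i - b_i| are sorted
    in descending order, then aggregated with position weights, so unduly
    large (or small) deviations can be de-emphasised via low weights."""
    if not (len(a) == len(b) == len(weights)):
        raise ValueError("length mismatch")
    devs = sorted((abs(u - v) for u, v in zip(a, b)), reverse=True)
    return sum(w * d ** lam for w, d in zip(weights, devs)) ** (1.0 / lam)
```

With `weights = [1, 0, 0]` this returns the max distance; with uniform weights `[1/n]*n` and `lam=1` on binary vectors it returns the normalized Hamming distance; `lam=2` with uniform weights gives the normalized Euclidean distance.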
30 CFR 281.30 - Minimum royalty.
2010-07-01
... 30 Mineral Resources 2 2010-07-01 2010-07-01 false Minimum royalty. 281.30 Section 281.30 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR OFFSHORE LEASING OF MINERALS OTHER THAN OIL, GAS, AND SULPHUR IN THE OUTER CONTINENTAL SHELF Financial Considerations § 281.30 Minimum royalty...
32 CFR 2400.30 - Reproduction of classified information.
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Reproduction of classified information. 2400.30... SECURITY PROGRAM Safeguarding § 2400.30 Reproduction of classified information. Documents or portions of... the originator or higher authority. Any stated prohibition against reproduction shall be strictly...
State cigarette minimum price laws - United States, 2009.
2010-04-09
Cigarette price increases reduce the demand for cigarettes and thereby reduce smoking prevalence, cigarette consumption, and youth initiation of smoking. Excise tax increases are the most effective government intervention to increase the price of cigarettes, but cigarette manufacturers use trade discounts, coupons, and other promotions to counteract the effects of these tax increases and appeal to price-sensitive smokers. State cigarette minimum price laws, initiated by states in the 1940s and 1950s to protect tobacco retailers from predatory business practices, typically require a minimum percentage markup to be added to the wholesale and/or retail price. If a statute prohibits trade discounts from the minimum price calculation, these laws have the potential to counteract discounting by cigarette manufacturers. To assess the status of cigarette minimum price laws in the United States, CDC surveyed state statutes and identified those states with minimum price laws in effect as of December 31, 2009. This report summarizes the results of that survey, which determined that 25 states had minimum price laws for cigarettes (median wholesale markup: 4.00%; median retail markup: 8.00%), and seven of those states also expressly prohibited the use of trade discounts in the minimum retail price calculation. Minimum price laws can help prevent trade discounting from eroding the positive effects of state excise tax increases and higher cigarette prices on public health.
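The markup mechanics described above can be illustrated with a toy calculation; the percentages and the discount handling below are hypothetical placeholders, not taken from any particular state statute:

```python
def minimum_retail_price(manufacturer_price, wholesale_markup, retail_markup,
                         trade_discount=0.0, discount_allowed=True):
    """Illustrative minimum-price computation: percentage markups applied at
    the wholesale and retail tiers. If the statute prohibits trade discounts
    from the calculation (discount_allowed=False), the discount does not
    lower the statutory minimum price."""
    base = manufacturer_price - (trade_discount if discount_allowed else 0.0)
    wholesale = base * (1 + wholesale_markup)
    return wholesale * (1 + retail_markup)
```

With the median markups from the survey (4% wholesale, 8% retail), a $100 manufacturer price yields a $112.32 minimum retail price; prohibiting a trade discount from the calculation keeps the minimum from eroding.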
Chen, Xiaodian; Deng, Licai; de Grijs, Richard; Wang, Shu; Feng, Yuting
2018-06-01
W Ursae Majoris (W UMa)-type contact binary systems (CBs) are useful statistical distance indicators because of their large numbers. Here, we establish (orbital) period–luminosity relations (PLRs) in 12 optical to mid-infrared bands (G, B, V, R, I, J, H, K_s, W1, W2, W3, W4) based on 183 nearby W UMa-type CBs with accurate Tycho–Gaia parallaxes. The 1σ dispersion of the PLRs decreases from optical to near- and mid-infrared wavelengths. The minimum scatter, 0.16 mag, implies that W UMa-type CBs can be used to recover distances to 7% precision. Applying our newly determined PLRs to 19 open clusters containing W UMa-type CBs demonstrates that the PLR and open-cluster CB distance scales are mutually consistent to within 1%. Adopting our PLRs as secondary distance indicators, we compiled a catalog of 55,603 CB candidates, of which 80% have distance estimates based on a combination of optical, near-infrared, and mid-infrared photometry. Using Fourier decomposition, 27,318 high-probability W UMa-type CBs were selected. The resulting 8% distance accuracy implies that our sample encompasses the largest number of objects with accurate distances within a local volume with a radius of 3 kpc available to date. The distribution of W UMa-type CBs in the Galaxy suggests that the CB luminosity function may differ between environments: larger numbers of brighter (longer-period) W UMa-type CBs are found in younger environments.
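The way a PLR yields a distance can be sketched via the distance modulus, m − M = 5 log10(d) − 5 with d in parsecs. The slope and intercept below are made-up placeholders, not the fitted relations of this paper:

```python
import math

def plr_distance_pc(period_days, apparent_mag, slope, intercept):
    """Distance from a period-luminosity relation: the PLR gives the
    absolute magnitude M = slope*log10(P) + intercept, and the distance
    modulus m - M = 5*log10(d) - 5 then yields d in parsecs."""
    abs_mag = slope * math.log10(period_days) + intercept
    return 10 ** ((apparent_mag - abs_mag + 5) / 5)
```

As a sanity check, a star whose apparent magnitude equals its absolute magnitude sits at exactly 10 pc by definition of the distance modulus.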
Three data partitioning strategies for building local classifiers (Chapter 14)
Zliobaite, I.; Okun, O.; Valentini, G.; Re, M.
2011-01-01
Divide-and-conquer approach has been recognized in multiple classifier systems aiming to utilize local expertise of individual classifiers. In this study we experimentally investigate three strategies for building local classifiers that are based on different routines of sampling data for training.
SLOPE STABILITY EVALUATION AND EQUIPMENT SETBACK DISTANCES FOR BURIAL GROUND EXCAVATIONS
Energy Technology Data Exchange (ETDEWEB)
MCSHANE DS
2010-03-25
After 1970 Transuranic (TRU) and suspect TRU waste was buried in the ground with the intention that at some later date the waste would be retrieved and processed into a configuration for long term storage. To retrieve this waste the soil must be removed (excavated). Sloping the bank of the excavation is the method used to keep the excavation from collapsing and to provide protection for workers retrieving the waste. The purpose of this paper is to document the minimum distance (setback) that equipment must stay from the edge of the excavation to maintain a stable slope. This evaluation examines the equipment setback distance by dividing the equipment into two categories, (1) equipment used for excavation and (2) equipment used for retrieval. The section on excavation equipment will also discuss techniques used for excavation including the process of benching. Calculations 122633-C-004, 'Slope Stability Analysis' (Attachment A), and 300013-C-001, 'Crane Stability Analysis' (Attachment B), have been prepared to support this evaluation. As shown in the calculations the soil has the following properties: Unit weight 110 pounds per cubic foot; and Friction Angle (natural angle of repose) 38° or 1.28 horizontal to 1 vertical. Setback distances are measured from the top edge of the slope to the wheels/tracks of the vehicles and heavy equipment being utilized. The computer program utilized in the calculation uses the center of the wheel or track load for the analysis and this difference is accounted for in this evaluation.
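The 1.28 horizontal to 1 vertical repose ratio quoted above follows directly from the 38° friction angle. A small sketch of that slope geometry (this is only the geometry, not the surcharge-dependent setback analysis in the referenced calculations):

```python
import math

def slope_run(depth_ft, friction_angle_deg=38.0):
    """Horizontal run needed to lay an excavation face back at the natural
    angle of repose. At 38 degrees this is about 1.28 ft of horizontal run
    per 1 ft of depth, matching the soil properties quoted above."""
    return depth_ft / math.tan(math.radians(friction_angle_deg))
```

So a 10 ft deep excavation face at the angle of repose extends roughly 12.8 ft horizontally from toe to crest; equipment setbacks are then measured from that top edge.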
Araucaria Project: Pulsating stars in binary systems and as distance indicators
Directory of Open Access Journals (Sweden)
Pilecki Bogumił
2017-01-01
Type II Cepheids have recently become more important as distance indicators and astrophysical laboratories, although our knowledge of these stars is quite limited. Their evolutionary status is also not well understood, and observational constraints are needed to confirm current theories. We present here the first results of our spectroscopic analysis of 4 of these systems. The masses of type II Cepheids seem consistent with the expected 0.5 − 0.6 M⊙. We also present the first results for a fully modeled pulsator originally classified as a peculiar W Vir star. The mass of this star is 1.51 ± 0.09 M⊙ and the p-factor is 1.3 ± 0.03. It was eventually found not to belong to any typical Cepheid group.
9 CFR 147.51 - Authorized laboratory minimum requirements.
2010-01-01
... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Authorized laboratory minimum requirements. 147.51 Section 147.51 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE... Authorized Laboratories and Approved Tests § 147.51 Authorized laboratory minimum requirements. These minimum...
Robust Combining of Disparate Classifiers Through Order Statistics
Tumer, Kagan; Ghosh, Joydeep
2001-01-01
Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in the performance of individual classifiers. Based on a mathematical model of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real-world data and standard public-domain data sets corroborate these findings.
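The simple order-statistic combiners discussed here can be sketched as follows, taking one score vector per classifier and combining per class; the trim combiner shown is a basic trimmed mean, one instance of the linear-combination family the article analyzes:

```python
import statistics

def combine_order_statistic(outputs, stat="median"):
    """Combine classifier score vectors via an order statistic per class.
    `outputs` is a list of score vectors, one per classifier."""
    cols = list(zip(*outputs))  # one tuple of scores per class
    if stat == "median":
        return [statistics.median(c) for c in cols]
    if stat == "max":
        return [max(c) for c in cols]
    raise ValueError(stat)

def trimmed_mean_combiner(outputs, trim=1):
    """Trim combiner sketch: average the ordered scores per class after
    dropping the `trim` smallest and `trim` largest values."""
    cols = list(zip(*outputs))
    return [sum(sorted(c)[trim:len(c) - trim]) / (len(c) - 2 * trim)
            for c in cols]
```

With three classifiers and one badly miscalibrated one, the median and trimmed combiners ignore the outlying score while a plain average would not.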
Addai, Emmanuel Kwasi; Gabel, Dieter; Krause, Ulrich
2016-04-15
The risks associated with dust explosions still exist in industries that either process or handle combustible dust. This explosion risk can be prevented or mitigated by applying the principle of inherent safety (moderation). This is achieved by adding an inert material to a highly combustible material in order to decrease the ignition sensitivity of the combustible dust. The present paper deals with the experimental investigation of the influence of adding an inert dust on the minimum ignition energy and the minimum ignition temperature of combustible/inert dust mixtures. The experimental investigation was done in two laboratory-scale apparatuses: the Hartmann apparatus and the Godbert-Greenwald furnace for the minimum ignition energy and the minimum ignition temperature tests, respectively. This was achieved by mixing various amounts of three inert materials (magnesium oxide, ammonium sulphate and sand) and six combustible dusts (brown coal, lycopodium, toner, niacin, corn starch and high density polyethylene). Generally, increasing the inert material concentration increases the minimum ignition energy as well as the minimum ignition temperature until a threshold is reached where no ignition is obtained. The permissible range for the inert mixture to minimize the ignition risk lies between 60 and 80%.
Knowledge Uncertainty and Composed Classifier
Czech Academy of Sciences Publication Activity Database
Klimešová, Dana; Ocelíková, E.
2007-01-01
Roč. 1, č. 2 (2007), s. 101-105 ISSN 1998-0140 Institutional research plan: CEZ:AV0Z10750506 Keywords: Boosting architecture * contextual modelling * composed classifier * knowledge management * knowledge * uncertainty Subject RIV: IN - Informatics, Computer Science
The decision tree classifier - Design and potential [for Landsat-1 data]
Hauska, H.; Swain, P. H.
1975-01-01
A new classifier has been developed for the computerized analysis of remote sensor data. The decision tree classifier is essentially a maximum likelihood classifier using multistage decision logic. It is characterized by the fact that an unknown sample can be classified into a class using one or several decision functions in a successive manner. The classifier is applied to the analysis of data sensed by Landsat-1 over Kenosha Pass, Colorado. The classifier is illustrated by a tree diagram which for processing purposes is encoded as a string of symbols such that there is a unique one-to-one relationship between string and decision tree.
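A minimal sketch of the multistage decision logic described above: an unknown sample is routed through one or several decision functions in succession until it reaches a class label. The band names and thresholds are invented for illustration; the actual classifier applies maximum likelihood decision functions at each node.

```python
def classify(sample, node):
    """Walk a decision tree: each internal node applies one decision
    function; leaves carry class labels (multistage decision logic)."""
    while isinstance(node, tuple):        # internal node: (test, left, right)
        test, left, right = node
        node = left if test(sample) else right
    return node                           # leaf: a class label

# Hypothetical two-stage tree for two spectral bands:
# stage 1 separates water from land, stage 2 splits the land classes.
tree = (lambda s: s["band0"] < 0.2, "water",
        (lambda s: s["band1"] > 0.5, "vegetation", "soil"))

print(classify({"band0": 0.6, "band1": 0.7}, tree))  # vegetation
```

The nested-tuple layout also mirrors the abstract's remark that the tree can be encoded as a string of symbols in one-to-one correspondence with the diagram.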
Representative Vector Machines: A Unified Framework for Classical Classifiers.
Gui, Jie; Liu, Tongliang; Tao, Dacheng; Sun, Zhenan; Tan, Tieniu
2016-08-01
Classifier design is a fundamental problem in pattern recognition. A variety of pattern classification methods such as the nearest neighbor (NN) classifier, support vector machine (SVM), and sparse representation-based classification (SRC) have been proposed in the literature. These typical and widely used classifiers were originally developed from different theoretical or application motivations, and they are conventionally treated as independent and specific solutions for pattern classification. This paper proposes a novel pattern classification framework, namely, representative vector machines (or RVMs for short). The basic idea of RVMs is to assign the class label of a test example according to its nearest representative vector. The contributions of RVMs are twofold. On one hand, the proposed RVMs establish a unified framework of classical classifiers, because NN, SVM, and SRC can be interpreted as special cases of RVMs with different definitions of representative vectors. Thus, the underlying relationship among a number of classical classifiers is revealed for a better understanding of pattern classification. On the other hand, novel and advanced classifiers can be inspired within the framework of RVMs. For example, a robust pattern classification method called the discriminant vector machine (DVM) is motivated by RVMs. Given a test example, DVM first finds its k-NNs and then performs classification based on the robust M-estimator and manifold regularization. Extensive experimental evaluations on a variety of visual recognition tasks such as face recognition (Yale and face recognition grand challenge databases), object categorization (Caltech-101 dataset), and action recognition (Action Similarity LAbeliNg) demonstrate the advantages of DVM over other classifiers.
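The core RVM decision rule, assigning the label of the nearest representative vector, can be sketched as follows. This is a simplification of the paper's framework: with one representative per training point it reduces to 1-NN, and with class means as representatives (as in the invented example below) it reduces to a nearest-mean classifier.

```python
import math

def nearest_representative(x, representatives):
    """representatives: list of (vector, label) pairs; return the label
    of the representative vector closest to x in Euclidean distance."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    vec, label = min(representatives, key=lambda r: dist(x, r[0]))
    return label

# Hypothetical class means used as representative vectors.
reps = [((0.0, 0.0), "neg"), ((2.0, 2.0), "pos")]
print(nearest_representative((1.8, 2.1), reps))  # pos
```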
Minimum Price Guarantees In a Consumer Search Model
M.C.W. Janssen (Maarten); A. Parakhonyak (Alexei)
2009-01-01
This paper is the first to examine the effect of minimum price guarantees in a sequential search model. Minimum price guarantees are not advertised and only known to consumers when they come to the shop. We show that in such an environment, minimum price guarantees increase the value of
41 CFR 105-62.102 - Authority to originally classify.
2010-07-01
... originally classify. (a) Top secret, secret, and confidential. The authority to originally classify information as Top Secret, Secret, or Confidential may be exercised only by the Administrator and is delegable...
Wage inequality, minimum wage effects and spillovers
Stewart, Mark B.
2011-01-01
This paper investigates possible spillover effects of the UK minimum wage. The halt in the growth in inequality in the lower half of the wage distribution (as measured by the 50:10 percentile ratio) since the mid-1990s, in contrast to the continued inequality growth in the upper half of the distribution, suggests the possibility of a minimum wage effect and spillover effects on wages above the minimum. This paper analyses individual wage changes, using both a difference-in-differences estimat...
Neuro-fuzzy model for estimating race and gender from geometric distances of human face across pose
Nanaa, K.; Rahman, M. N. A.; Rizon, M.; Mohamad, F. S.; Mamat, M.
2018-03-01
Classifying human faces by race and gender is a vital process in face recognition. It contributes to an index database and eases 3D synthesis of the human face. Identifying race and gender based on intrinsic factors is problematic, which makes a nonlinear model better suited to the estimation process. In this paper, we aim to estimate race and gender across varied head poses. For this purpose, we collect a dataset from the PICS and CAS-PEAL databases, detect the landmarks and rotate them to the frontal pose. After the geometric distances are calculated, all distance values are normalized. Implementation is carried out using a neural network model and a fuzzy logic model, which are combined into an adaptive neuro-fuzzy model. The experimental results showed that optimizing the fuzzy membership functions gives a better assessment rate, and that estimating race contributes to a more accurate gender assessment.
Directory of Open Access Journals (Sweden)
K.K. Lim
2017-12-01
Existing evidence on the association between the built environment and cardiovascular disease (CVD) risk factors has focused on the general population, which may not generalize to higher-risk subgroups such as those with lower socio-economic status (SES). We examined the associations between distance to 5 public amenities from residential housing (public polyclinic, subsidized private clinic, healthier eatery, public park and train station) and 12 CVD risk factors (physical inactivity, medical histories and unhealthy dietary habits) among a study sample of low-income Singaporeans aged ≥40 years (N=1972). Using data from the Singapore Heart Foundation Health Mapping Exercise 2013–2015, we performed a series of logistic mixed effect regressions, accounting for clustering of respondents in residential blocks and multiple comparisons. Each regression analysis used the minimum distance (in km) between residential housing and each public amenity as an independent continuous variable and a single risk factor as the dependent variable, controlling for demographic characteristics. Increased distance (geographical inaccessibility) to a train station was significantly associated with lower odds of participation in sports, whereas greater distance to a subsidized private clinic was associated with lower odds of having high cholesterol diagnosed. Increasing distance to a park was positively associated with higher odds of less vegetable and fruit consumption, deep fried food and fast food consumption in the preceding week/month, high BMI at screening and history of diabetes, albeit not achieving statistical significance. Our findings highlight potential effects of health-promoting amenities on CVD risk factors in an urban low-income setting, suggesting gaps for further investigation. Keywords: Cardiovascular risk, Urban health, Socioeconomic status, Singapore, Health promotion, Primary prevention
Energy Technology Data Exchange (ETDEWEB)
Santos, Jose O. dos, E-mail: osmansantos@ig.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Sergipe (IFS), Lagarto, SE (Brazil); Munita, Casimiro S., E-mail: camunita@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Soares, Emilio A.A., E-mail: easoares@ufan.edu.br [Universidade Federal do Amazonas (UFAM), Manaus, AM (Brazil). Dept. de Geociencias
2013-07-01
The detection of outliers in geochemical studies is one of the main difficulties in the interpretation of a dataset, because outliers can disturb the statistical method. The search for outliers in geochemical studies is usually based on the Mahalanobis distance (MD), since points in multivariate space that lie at a distance larger than some predetermined value from the center of the data are considered outliers. However, the MD is very sensitive to the presence of discrepant samples. Many robust estimators for location and covariance have been introduced in the literature, such as the Minimum Covariance Determinant (MCD) estimator. Using MCD estimators to calculate the MD leads to the so-called Robust Mahalanobis Distance (RD). In this context, RD was used in this work to detect outliers in a geological study of samples collected from the confluence of the Negro and Solimoes rivers. The purpose of this study was to study the contributions of the sediments deposited by the Solimoes and Negro rivers to the filling of the tectonic depressions at Parana do Ariau. For that, 113 samples were analyzed by Instrumental Neutron Activation Analysis (INAA), in which the concentrations of As, Ba, Ce, Co, Cr, Cs, Eu, Fe, Hf, K, La, Lu, Na, Nd, Rb, Sb, Sc, Sm, U, Yb, Ta, Tb, Th and Zn were determined. From the dataset it was possible to construct the ellipse corresponding to the robust Mahalanobis distance for each group of samples. The samples found outside of the tolerance ellipse were considered outliers. The results showed that the Robust Mahalanobis Distance was more appropriate for the identification of the outliers, since it is a more restrictive method. (author)
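To make the outlier rule concrete, here is a minimal sketch of the (classical, non-robust) squared Mahalanobis distance for 2-D data. The MCD-based RD used in the paper replaces the sample mean and covariance below with robust estimates computed on the least-scattered data subset; the sample values here are invented.

```python
def mahalanobis_sq(points, x):
    """Squared Mahalanobis distance of point x from the centre of 2-D data,
    using the classical sample mean and covariance."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    det = sxx * syy - sxy * sxy
    dx, dy = x[0] - mx, x[1] - my
    # (dx, dy) @ inverse(covariance) @ (dx, dy)^T for a 2x2 covariance
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det

data = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9), (1.0, 0.95), (0.95, 1.05)]
# A point far from the cluster gets a much larger distance and would be
# flagged as an outlier by a chi-square cutoff (the "tolerance ellipse").
print(mahalanobis_sq(data, (3.0, 3.0)) > mahalanobis_sq(data, (1.0, 1.0)))  # True
```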
Minimum Variance Portfolios in the Brazilian Equity Market
Directory of Open Access Journals (Sweden)
Alexandre Rubesam
2013-03-01
We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, making it easily replicable by individual and institutional investors alike.
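For intuition, the global minimum variance weights have the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal two-asset sketch with an invented covariance matrix (the study itself estimates Σ with sample and multivariate GARCH methods over many stocks):

```python
def min_variance_weights(cov):
    """Global minimum-variance weights for two assets: w = C^{-1}1 / (1'C^{-1}1)."""
    (a, b), (_, d) = cov          # symmetric 2x2 covariance matrix
    det = a * d - b * b
    u1 = (d - b) / det            # first component of C^{-1} @ [1, 1]
    u2 = (a - b) / det            # second component
    s = u1 + u2
    return u1 / s, u2 / s

# Hypothetical covariance: asset 1 is less volatile, so it gets the larger weight.
w1, w2 = min_variance_weights([[0.04, 0.01], [0.01, 0.09]])
print(round(w1, 3), round(w2, 3))  # 0.727 0.273
```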
Minimum Covers of Fixed Cardinality in Weighted Graphs.
White, Lee J.
Reported is the result of research on combinatorial and algorithmic techniques for information processing. A method is discussed for obtaining minimum covers of specified cardinality from a given weighted graph. By the indicated method, it is shown that the family of minimum covers of varying cardinality is related to the minimum spanning tree of…
Energy Technology Data Exchange (ETDEWEB)
McQuinn, Kristen B. W. [University of Texas at Austin, McDonald Observatory, 2515 Speedway, Stop C1400 Austin, TX 78712 (United States); Skillman, Evan D. [Minnesota Institute for Astrophysics, School of Physics and Astronomy, 116 Church Street, S.E., University of Minnesota, Minneapolis, MN 55455 (United States); Dolphin, Andrew E. [Raytheon Company, 1151 E. Hermans Road, Tucson, AZ 85756 (United States); Berg, Danielle [Center for Gravitation, Cosmology and Astrophysics, Department of Physics, University of Wisconsin Milwaukee, 1900 East Kenwood Boulevard, Milwaukee, WI 53211 (United States); Kennicutt, Robert, E-mail: kmcquinn@astro.as.utexas.edu [Institute for Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom)
2016-07-20
Great investments of observing time have been dedicated to the study of nearby spiral galaxies with diverse goals ranging from understanding the star formation process to characterizing their dark matter distributions. Accurate distances are fundamental to interpreting observations of these galaxies, yet many of the best studied nearby galaxies have distances based on methods with relatively large uncertainties. We have started a program to derive accurate distances to these galaxies. Here we measure the distance to M51—the Whirlpool galaxy—from newly obtained Hubble Space Telescope optical imaging using the tip of the red giant branch method. We measure the distance to be 8.58 ± 0.10 Mpc (statistical), corresponding to a distance modulus of 29.67 ± 0.02 mag. Our distance is an improvement over previous results, as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian maximum likelihood technique that reduces measurement uncertainties.
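The distance-to-modulus conversion quoted above follows from the standard relation μ = 5 log₁₀(d / 10 pc); a quick check:

```python
import math

def distance_modulus(d_mpc):
    """Distance modulus mu = m - M = 5 * log10(d / 10 pc), with d given in Mpc."""
    return 5 * math.log10(d_mpc * 1e6 / 10)

print(round(distance_modulus(8.58), 2))  # 29.67, matching the quoted value
```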
A cardiorespiratory classifier of voluntary and involuntary electrodermal activity
Directory of Open Access Journals (Sweden)
Sejdic Ervin
2010-02-01
Background Electrodermal reactions (EDRs) can be attributed to many origins, including spontaneous fluctuations of electrodermal activity (EDA) and stimuli such as deep inspirations, voluntary mental activity and startling events. In fields that use EDA as a measure of psychophysiological state, the fact that EDRs may be elicited from many different stimuli is often ignored. This study attempts to classify observed EDRs as voluntary (i.e., generated from intentional respiratory or mental activity) or involuntary (i.e., generated from startling events or spontaneous electrodermal fluctuations). Methods Eight able-bodied participants were subjected to conditions that would cause a change in EDA: music imagery, startling noises, and deep inspirations. A user-centered cardiorespiratory classifier consisting of (1) an EDR detector, (2) a respiratory filter and (3) a cardiorespiratory filter was developed to automatically detect a participant's EDRs and to classify the origin of their stimulation as voluntary or involuntary. Results Detected EDRs were classified with a positive predictive value of 78%, a negative predictive value of 81% and an overall accuracy of 78%. Without the classifier, EDRs could only be correctly attributed as voluntary or involuntary with an accuracy of 50%. Conclusions The proposed classifier may enable investigators to form more accurate interpretations of electrodermal activity as a measure of an individual's psychophysiological state.
Deza, Michel Marie
2009-01-01
Distance metrics and distances have become an essential tool in many areas of pure and applied Mathematics. This title offers both independent introductions and definitions, while at the same time making cross-referencing easy through hyperlink-like boldfaced references to original definitions.
Who Benefits from a Minimum Wage Increase?
John W. Lopresti; Kevin J. Mumford
2015-01-01
This paper addresses the question of how a minimum wage increase affects the wages of low-wage workers. Most studies assume that there is a simple mechanical increase in the wage for workers earning a wage between the old and the new minimum wage, with some studies allowing for spillovers to workers with wages just above this range. Rather than assume that the wages of these workers would have remained constant, this paper estimates how a minimum wage increase impacts a low-wage worker's wage...
Learning to classify wakes from local sensory information
Alsalman, Mohamad; Colvert, Brendan; Kanso, Eva; Kanso Team
2017-11-01
Aquatic organisms exhibit remarkable abilities to sense local flow signals contained in their fluid environment and to surmise the origins of these flows. For example, fish can discern the information contained in various flow structures and utilize this information for obstacle avoidance and prey tracking. Flow structures created by flapping and swimming bodies are well characterized in the fluid dynamics literature; however, such characterization relies on classical methods that use an external observer to reconstruct global flow fields. The reconstructed flows, or wakes, are then classified according to the unsteady vortex patterns. Here, we propose a new approach for wake identification: we classify the wakes resulting from a flapping airfoil by applying machine learning algorithms to local flow information. In particular, we simulate the wakes of an oscillating airfoil in an incoming flow, extract the downstream vorticity information, and train a classifier to learn the different flow structures and classify new ones. This data-driven approach provides a promising framework for underwater navigation and detection in application to autonomous bio-inspired vehicles.
Bayesian Classifier for Medical Data from Doppler Unit
Directory of Open Access Journals (Sweden)
J. Málek
2006-01-01
Nowadays, hand-held ultrasonic Doppler units (probes) are often used for noninvasive screening of atherosclerosis in the arteries of the lower limbs. The mean velocity of blood flow in time and blood pressures are measured at several positions on each lower limb. By listening to the acoustic signal generated by the device or by reading the signal displayed on screen, a specialist can detect peripheral arterial disease (PAD). This project aims to design software that will be able to analyze data from such a device and classify it into several diagnostic classes. At the Department of Functional Diagnostics at the Regional Hospital in Liberec, a database of several hundred signals was collected. In cooperation with the specialist, the signals were manually classified into four classes. For each class, selected signal features were extracted and then used for training a Bayesian classifier. Another set of signals was used for evaluating and optimizing the parameters of the classifier. A success rate of slightly above 84% of correctly recognized diagnostic states was recently achieved on the test data.
An ensemble self-training protein interaction article classifier.
Chen, Yifei; Hou, Ping; Manderick, Bernard
2014-01-01
Protein-protein interaction (PPI) is essential to understanding the fundamental processes governing cell biology. The mining and curation of PPI knowledge are critical for analyzing proteomics data. Hence it is desirable to classify articles as PPI-related or not automatically. In order to build interaction article classification systems, an annotated corpus is needed. However, it is usually the case that only a small number of labeled articles can be obtained manually, while a large number of unlabeled articles are available. By combining ensemble learning and semi-supervised self-training, an ensemble self-training interaction classifier called EST_IACer is designed to classify PPI-related articles based on a small number of labeled articles and a large number of unlabeled articles. A biological-background-based feature weighting strategy is extended using the category information from both labeled and unlabeled data. Moreover, a heuristic constraint is put forward to select optimal instances from the unlabeled data to further improve performance. Experimental results show that EST_IACer can classify PPI-related articles effectively and efficiently.
Classifying Linear Canonical Relations
Lorand, Jonathan
2015-01-01
In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.
Classified facilities for environmental protection
International Nuclear Information System (INIS)
Anon.
1993-02-01
The legislation on classified facilities governs most dangerous or polluting industries or fixed activities. It rests on the law of 9 July 1976 concerning facilities classified for environmental protection and its application decree of 21 September 1977. This legislation, whose general texts appear in this volume 1, aims to prevent all risks and harmful effects coming from an installation (air, water or soil pollution, wastes, even aesthetic damage). The polluting or dangerous activities are defined in a list called the nomenclature, which subjects the facilities to a declaration or an authorization procedure. The authorization is delivered by the prefect at the end of an open and contradictory procedure after a public survey. In addition, the facilities can be subjected to technical regulations fixed by the Environment Minister (volume 2) or by the prefect for facilities subjected to declaration (volume 3). (A.B.)
Defending Malicious Script Attacks Using Machine Learning Classifiers
Directory of Open Access Journals (Sweden)
Nayeem Khan
2017-01-01
Web applications have become a primary target for cyber criminals, who inject malware, especially JavaScript, to perform malicious activities such as impersonation. Thus, it is imperative to detect such malicious code in real time, before any malicious activity is performed. This study proposes an efficient method of detecting previously unknown malicious JavaScript using an interceptor at the client side by classifying the key features of the malicious code. The feature subset was obtained by using the wrapper method for dimensionality reduction. Supervised machine learning classifiers were used on the dataset to achieve high accuracy. Experimental results show that our method can efficiently classify malicious code from benign code with promising results.
Directory of Open Access Journals (Sweden)
Ali Soner Kilinc
2017-08-01
A Linear Wireless Sensor Network (LWSN) is a kind of wireless sensor network in which the nodes are deployed in a line. Since the sensor nodes are energy restricted, energy efficiency becomes one of the most significant design issues for LWSNs, as for wireless sensor networks in general. With proper deployment, the power consumption can be minimized by adjusting the distance between the sensor nodes, known as the hop length. In this paper, analytical and algorithmic approaches are presented to determine the number of hops and sensor nodes for minimum power consumption in a linear wireless sensor network with equidistantly placed sensor nodes.
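A small sketch of the algorithmic side under the commonly assumed first-order radio model (the energy constants below are illustrative assumptions, not the paper's values): each hop costs a fixed electronics term plus a term growing with hop length, so splitting the line into more, shorter hops trades one cost against the other, and an intermediate hop count minimizes the total.

```python
def best_hop_count(total_dist, e_elec=50e-9, eps=100e-12, k=2, n_max=50):
    """Hop count minimizing per-bit energy for equidistant relays on a line,
    under an assumed first-order radio model: each hop costs
    2*e_elec + eps*(hop_length)**k joules per bit."""
    def energy(n):
        d_hop = total_dist / n
        return n * (2 * e_elec + eps * d_hop ** k)
    return min(range(1, n_max + 1), key=energy)

print(best_hop_count(100.0))  # 3: three hops of ~33 m each under these constants
```

With k = 2 the optimal hop length works out to sqrt(2*e_elec/eps) ≈ 31.6 m for these constants, which is why a 100 m line is best covered in 3 hops.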
Wren, Tishya A L; Mueske, Nicole M; Brophy, Christopher H; Pace, J Lee; Katzel, Mia J; Edison, Bianca R; VandenBerg, Curtis D; Zaslow, Tracy L
2018-03-30
Study Design Retrospective cohort. Background Return to sport (RTS) protocols after anterior cruciate ligament reconstruction (ACLR) often include assessment of hop distance symmetry. However, it is unclear if movement deficits are present regardless of hop symmetry. Objectives To assess biomechanics and symmetry of adolescent athletes following ACLR during a single leg hop for distance. Methods Forty-six patients with ACLR (5-12 months post-surgery; 27 female; age 15.6, SD 1.7 years) were classified as asymmetric or symmetric based on operative limb hop distance relative to the contralateral limb; biomechanics were compared among operative limbs, contralateral limbs and 24 symmetric controls (12 female; age 14.7, SD 1.5 years) using ANOVA. Results Compared to controls, asymmetric patients hopped a shorter distance on their operative limb (P<0.001), while symmetric patients hopped an intermediate distance on both sides (P≥0.12). During landing, operative limbs, regardless of hop distance, exhibited lower knee flexion moments compared to controls and the contralateral side (P≤0.04), with lower knee energy absorption than the contralateral side (P≤0.006). During take-off, both symmetric and asymmetric patients had less hip extension and smaller ankle range of motion on the operative side compared with controls (P≤0.05). Asymmetric patients also had lower hip range of motion on the operative, compared with the contralateral, side (P=0.001). Conclusion Both symmetric and asymmetric patients offloaded the operative knee; symmetric patients achieved symmetry in part by hopping a shorter distance on the contralateral side. Therefore, hop distance symmetry may not be an adequate test of single limb function and RTS readiness. Level of Evidence 2b. J Orthop Sports Phys Ther, Epub 30 Mar 2018. doi:10.2519/jospt.2018.7817.
The minimum wage in the Czech enterprises
Directory of Open Access Journals (Sweden)
Eva Lajtkepová
2010-01-01
Although the statutory minimum wage is not a new category, in the Czech Republic we encounter the definition and regulation of a minimum wage for the first time in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were subsequently determined by government regulation in February 1991. Since that time, the value of the minimum wage has been adjusted fifteen times (the last increase was in January 2007). The aim of this article is to present selected results of two surveys of the acceptance of the statutory minimum wage by Czech enterprises. The first survey makes use of data collected by questionnaire from 83 small and medium-sized enterprises in the South Moravia Region in 2005; the second uses data from 116 enterprises across the entire Czech Republic (in 2007). The data have been processed by means of standard methods of descriptive statistics and appropriate statistical analyses (Spearman rank correlation coefficient, Kendall coefficient, χ2 independence test, Kruskal-Wallis test, and others).
Fisher classifier and its probability of error estimation
Chittineni, C. B.
1979-01-01
Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
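A minimal sketch of the two-class Fisher rule in 2-D: the direction is w = S_w⁻¹(m₀ − m₁), and here the classification threshold is simply placed at the midpoint of the projected class means (the data are invented; the paper's contributions, the leave-one-out error estimate, the optimal threshold derivation and the multiclass generalization, are not reproduced):

```python
def fisher_classifier(class0, class1):
    """Return a function classifying 2-D points by Fisher's linear discriminant."""
    def mean(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    def add_scatter(pts, m, s):
        # accumulate within-class scatter (sxx, sxy, syy)
        return (s[0] + sum((p[0] - m[0]) ** 2 for p in pts),
                s[1] + sum((p[0] - m[0]) * (p[1] - m[1]) for p in pts),
                s[2] + sum((p[1] - m[1]) ** 2 for p in pts))

    m0, m1 = mean(class0), mean(class1)
    sxx, sxy, syy = add_scatter(class1, m1, add_scatter(class0, m0, (0.0, 0.0, 0.0)))
    det = sxx * syy - sxy * sxy
    dx, dy = m0[0] - m1[0], m0[1] - m1[1]
    w = ((syy * dx - sxy * dy) / det, (sxx * dy - sxy * dx) / det)  # S_w^{-1}(m0 - m1)
    thresh = (w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1])) / 2  # midpoint of projected means
    return lambda p: 0 if w[0] * p[0] + w[1] * p[1] > thresh else 1

clf = fisher_classifier([(2.0, 2.1), (2.2, 1.9), (1.9, 2.0)],
                        [(0.0, 0.1), (0.2, -0.1), (-0.1, 0.0)])
print(clf((2.1, 2.0)), clf((0.1, 0.0)))  # 0 1
```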
How unprecedented a solar minimum was it?
Russell, C T; Jian, L K; Luhmann, J G
2013-05-01
The end of the last solar cycle was at least 3 years late, and to date, the new solar cycle has seen mainly weaker activity since the onset of the rising phase toward the new solar maximum. The newspapers now even report when auroras are seen in Norway. This paper is an update of our review paper written during the deepest part of the last solar minimum [1]. We update the records of solar activity and its consequent effects on the interplanetary fields and solar wind density. The arrival of solar minimum allows us to use two techniques that predict sunspot maximum from readings obtained at solar minimum. It is clear that the Sun is still behaving strangely compared to the last few solar minima even though we are well beyond the minimum phase of the cycle 23-24 transition.
The K giant stars from the LAMOST survey data. I. Identification, metallicity, and distance
Energy Technology Data Exchange (ETDEWEB)
Liu, Chao; Deng, Li-Cai; Li, Jing; Gao, Shuang; Yang, Fan; Xu, Yan; Zhang, Yue-Yang; Xin, Yu; Wu, Yue [Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Datun Road 20A, Beijing 100012 (China); Carlin, Jeffrey L.; Newberg, Heidi Jo [Department of Physics, Applied Physics and Astronomy, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180 (United States); Smith, Martin C. [Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030 (China); Xue, Xiang-Xiang [Max Planck Institute for Astronomy, Königstuhl 17, Heidelberg D-69117 (Germany); Jin, Ge, E-mail: liuchao@nao.cas.cn [University of Science and Technology of China, Hefei 230026 (China)
2014-08-01
We present a support vector machine classifier to identify the K giant stars from the LAMOST survey directly using their spectral line features. The completeness of the identification is about 75% for tests based on LAMOST stellar parameters. The contamination in the identified K giant sample is lower than 2.5%. Applying the classification method to about two million LAMOST spectra observed during the pilot survey and the first year survey, we select 298,036 K giant candidates. The metallicities of the sample are also estimated with an uncertainty of 0.13 ∼ 0.29 dex based on the equivalent widths of Mg b and iron lines. A Bayesian method is then developed to estimate the posterior probability of the distance for the K giant stars, based on the estimated metallicity and 2MASS photometry. The synthetic isochrone-based distance estimates have been calibrated using 7 globular clusters with a wide range of metallicities. The uncertainty of the estimated distance modulus at K = 11 mag, which is the median brightness of the K giant sample, is about 0.6 mag, corresponding to ∼30% in distance. As a scientific verification case, the trailing arm of the Sagittarius stream is clearly identified with the selected K giant sample. Moreover, at about 80 kpc from the Sun, we use our K giant stars to confirm a detection of stream members near the apocenter of the trailing tail. These rediscoveries of the features of the Sagittarius stream illustrate the potential of the LAMOST survey for detecting substructures in the halo of the Milky Way.
Minimum-Cost Reachability for Priced Timed Automata
DEFF Research Database (Denmark)
Behrmann, Gerd; Fehnker, Ansgar; Hune, Thomas Seidelin
2001-01-01
This paper introduces the model of linearly priced timed automata as an extension of timed automata, with prices on both transitions and locations. For this model we consider the minimum-cost reachability problem: given a linearly priced timed automaton and a target state, determine … the minimum cost of executions from the initial state to the target state. This problem generalizes the minimum-time reachability problem for ordinary timed automata. We prove decidability of this problem by offering an algorithmic solution, which is based on a combination of branch-and-bound techniques…
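On a finite weighted graph abstraction, the minimum-cost reachability idea above reduces to uniform-cost search. The sketch below is not the paper's branch-and-bound algorithm for priced timed automata (which must handle symbolic clock constraints); it is only a minimal illustration of the underlying problem, with a hypothetical `min_cost_reachability` helper:

```python
import heapq

def min_cost_reachability(edges, start, target):
    """Uniform-cost search: cheapest total price from start to target.

    edges: dict mapping state -> list of (next_state, price) pairs.
    Returns the minimum cost, or None if the target is unreachable.
    """
    frontier = [(0, start)]          # (accumulated cost, state)
    best = {start: 0}
    while frontier:
        cost, state = heapq.heappop(frontier)
        if state == target:
            return cost
        if cost > best.get(state, float("inf")):
            continue                  # stale queue entry
        for nxt, price in edges.get(state, []):
            new_cost = cost + price
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt))
    return None

graph = {"s": [("a", 4), ("b", 1)], "b": [("a", 1), ("t", 7)], "a": [("t", 2)]}
print(min_cost_reachability(graph, "s", "t"))  # -> 4 (s -> b -> a -> t)
```

The branch-and-bound machinery of the paper prunes symbolic successor states whose accumulated price already exceeds the best known cost, which is the same dominance test as the `best` dictionary above, lifted to priced zones.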
Energy Technology Data Exchange (ETDEWEB)
Kupavskii, A B; Raigorodskii, A M [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)
2013-10-31
We investigate in detail some properties of distance graphs constructed on the integer lattice. Such graphs find wide applications in problems of combinatorial geometry, in particular, such graphs were employed to answer Borsuk's question in the negative and to obtain exponential estimates for the chromatic number of the space. This work is devoted to the study of the number of cliques and the chromatic number of such graphs under certain conditions. Constructions of sequences of distance graphs are given, in which the graphs have unit length edges and contain a large number of triangles that lie on a sphere of radius 1/√3 (which is the minimum possible). At the same time, the chromatic numbers of the graphs depend exponentially on their dimension. The results of this work strengthen and generalize some of the results obtained in a series of papers devoted to related issues. Bibliography: 29 titles.
Minimum Q Electrically Small Antennas
DEFF Research Database (Denmark)
Kim, O. S.
2012-01-01
Theoretically, the minimum radiation quality factor Q of an isolated resonance can be achieved in a spherical electrically small antenna by combining TM1m and TE1m spherical modes, provided that the stored energy in the antenna spherical volume is totally suppressed. Using closed-form expressions … for a multiarm spherical helix antenna confirm the theoretical predictions. For example, a 4-arm spherical helix antenna with a magnetic-coated perfectly electrically conducting core (ka=0.254) exhibits the Q of 0.66 times the Chu lower bound, or 1.25 times the minimum Q…
Reducing variability in the output of pattern classifiers using histogram shaping
International Nuclear Information System (INIS)
Gupta, Shalini; Kan, Chih-Wen; Markey, Mia K.
2010-01-01
Purpose: The authors present a novel technique based on histogram shaping to reduce the variability in the output and (sensitivity, specificity) pairs of pattern classifiers with identical ROC curves but differently distributed outputs. Methods: The authors identify different sources of variability in the output of linear pattern classifiers with identical ROC curves, which also result in classifiers with differently distributed outputs. They theoretically develop a novel technique based on the matching of the histograms of these differently distributed pattern classifier outputs to reduce the variability in their (sensitivity, specificity) pairs at fixed decision thresholds, and to reduce the variability in their actual output values. They empirically demonstrate the efficacy of the proposed technique by means of analyses of simulated data and real-world mammography data. Results: For the simulated data, with three different known sources of variability, and for the real-world mammography data with unknown sources of variability, the proposed classifier output calibration technique significantly reduced the variability in the classifiers' (sensitivity, specificity) pairs at fixed decision thresholds. Furthermore, for classifiers with monotonically or approximately monotonically related output variables, the histogram shaping technique also significantly reduced the variability in their actual output values. Conclusions: Classifier output calibration based on histogram shaping can be successfully employed to reduce the variability in the output values and (sensitivity, specificity) pairs of pattern classifiers with identical ROC curves but differently distributed outputs.
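The core of histogram shaping is a monotone mapping that sends one classifier's output distribution onto a reference distribution. A minimal rank-based sketch, assuming equal-sized samples and no ties (the paper's technique is more general than this):

```python
def histogram_match(outputs, reference):
    """Map classifier outputs onto the empirical distribution of `reference`.

    Each output is replaced by the reference value of equal rank, so both
    classifiers end up sharing one output distribution and a fixed decision
    threshold means the same thing for both. Pure-Python sketch.
    """
    order = sorted(range(len(outputs)), key=lambda i: outputs[i])
    ref_sorted = sorted(reference)
    matched = [0.0] * len(outputs)
    for rank, idx in enumerate(order):
        matched[idx] = ref_sorted[rank]   # same rank -> same quantile
    return matched

a = [0.2, 0.9, 0.5, 0.1]          # classifier A outputs
b = [10.0, 40.0, 30.0, 20.0]      # classifier B outputs (different scale)
print(histogram_match(a, b))       # -> [20.0, 40.0, 30.0, 10.0]
```

Because the mapping is monotone, the ranking of cases (and hence the ROC curve) is unchanged; only the output scale is recalibrated.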
Use of information barriers to protect classified information
International Nuclear Information System (INIS)
MacArthur, D.; Johnson, M.W.; Nicholas, N.J.; Whiteson, R.
1998-01-01
This paper discusses the detailed requirements for an information barrier (IB) for use with verification systems that employ intrusive measurement technologies. The IB would protect classified information in a bilateral or multilateral inspection of classified fissile material. Such a barrier must strike a balance between providing the inspecting party the confidence necessary to accept the measurement while protecting the inspected party's classified information. The authors discuss the structure required of an IB as well as the implications of the IB on detector system maintenance. A defense-in-depth approach is proposed which would provide assurance to the inspected party that all sensitive information is protected and to the inspecting party that the measurements are being performed as expected. The barrier could include elements of physical protection (such as locks, surveillance systems, and tamper indicators), hardening of key hardware components, assurance of capabilities and limitations of hardware and software systems, administrative controls, validation and verification of the systems, and error detection and resolution. Finally, an unclassified interface could be used to display and, possibly, record measurement results. The introduction of an IB into an analysis system may result in many otherwise innocuous components (detectors, analyzers, etc.) becoming classified and unavailable for routine maintenance by uncleared personnel. System maintenance and updating will be significantly simplified if the classification status of as many components as possible can be made reversible (i.e. the component can become unclassified following the removal of classified objects)
Directory of Open Access Journals (Sweden)
Eduardo Shimoda
2009-05-01
Full Text Available This study aimed to determine the minimum concentration of 2-phenoxyethanol for long-term exposure and to evaluate the effect of inter-suture distance on wound healing in the goldfish Carassius auratus. Twenty adult goldfish (standard length = 12.4 ± 1.1 cm; weight = 58.7 ± 17.2 g) were anesthetized in 2-phenoxyethanol at 1.2‰ and placed in an anesthesia delivery system at the following concentrations of 2-phenoxyethanol: 0.0 (control), 0.1, 0.2, 0.3 and 0.4‰, and the duration of sedation was measured. Fifteen days later, the fish were anesthetized using the same procedure, and a 36 mm incision was performed in the ventro-lateral region. The incision was sutured using a simple-interrupted pattern with 3, 6 or 9 mm inter-suture distances. Results demonstrated that 2-phenoxyethanol at 0.4‰ maintained sedation for surgical procedures of up to 60 minutes, and a 9 mm inter-suture distance optimized wound healing in goldfish.
International Nuclear Information System (INIS)
1994-05-01
This report provides new estimates of separation distances for nuclear power plant gaseous hydrogen storage facilities. Unacceptable damage to plant structures from hydrogen detonations will be prevented by having hydrogen storage facilities meet separation distance criteria recommended in this report. The revised standoff distances are based on improved calculations on hydrogen gas cloud detonations and structural analysis of reinforced concrete structures. Also, the results presented in this study do not depend upon equivalencing a hydrogen detonation to an equivalent TNT detonation. The static and stagnation pressures, wave velocity, and the shock wave impulse delivered to wall surfaces were computed for several different size hydrogen explosions. Separation distance equations were developed and were used to compute the minimum separation distance for six different wall cases and for seven detonating volumes (from 1.59 to 79.67 lbm of hydrogen). These improved calculation results were compared to previous calculations. The ratio between the separation distance predicted in this report versus that predicted for hydrogen detonation in previous calculations varies from 0 to approximately 4. Thus, the separation distances results from the previous calculations can be either overconservative or unconservative depending upon the set of hydrogen detonation parameters that are used. Consequently, it is concluded that the hydrogen-to-TNT detonation equivalency utilized in previous calculations should no longer be used
Stochastic variational approach to minimum uncertainty states
Energy Technology Data Exchange (ETDEWEB)
Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)
1995-05-21
We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)
Learner characteristics involved in distance learning
Energy Technology Data Exchange (ETDEWEB)
Cernicek, A.T.; Hahn, H.A.
1991-01-01
Distance learning represents a strategy for leveraging resources to solve educational and training needs. Although many distance learning programs have been developed, lessons learned regarding differences between distance learning and traditional education with respect to learner characteristics have not been well documented. Therefore, we conducted a survey of 20 distance learning professionals. The questionnaire was distributed to experts attending the second Distance Learning Conference sponsored by Los Alamos National Laboratory. This survey not only acquired demographic information from each of the respondents but also identified important distance learning student characteristics. Significant distance learner characteristics, which were revealed statistically and which influence the effectiveness of distance learning, include the following: reading level, student autonomy, and self-motivation. Distance learning cannot become a more useful and effective method of instruction without identifying and recognizing learner characteristics. It will be important to consider these characteristics when designing all distance learning courses. This paper will report specific survey findings and their implications for developing distance learning courses. 9 refs., 6 tabs.
Classifying sows' activity types from acceleration patterns
DEFF Research Database (Denmark)
Cornou, Cecile; Lundbye-Christensen, Søren
2008-01-01
An automated method of classifying sow activity using acceleration measurements would allow the individual sow's behavior to be monitored throughout the reproductive cycle; applications for detecting behaviors characteristic of estrus and farrowing or to monitor illness and welfare can be foreseen. … This article suggests a method of classifying five types of activity exhibited by group-housed sows. The method involves the measurement of acceleration in three dimensions. The five activities are: feeding, walking, rooting, lying laterally and lying sternally. Four time series of acceleration (the three…
Distance Magic-Type and Distance Antimagic-Type Labelings of Graphs
Freyberg, Bryan J.
Generally speaking, a distance magic-type labeling of a graph G of order n is a bijection l from the vertex set of the graph to the first n natural numbers or to the elements of a group of order n, with the property that the weight of each vertex is the same. The weight of a vertex x is defined as the sum (or appropriate group operation) of all the labels of vertices adjacent to x. If instead we require that all weights differ, then we refer to the labeling as a distance antimagic-type labeling. This idea can be generalized for directed graphs; the weight will take into consideration the direction of the arcs. In this manuscript, we provide new results for d-handicap labeling, a distance antimagic-type labeling, and introduce a new distance magic-type labeling called orientable Gamma-distance magic labeling. A d-handicap distance antimagic labeling (or just d-handicap labeling for short) of a graph G = (V, E) of order n is a bijection l from V to the set {1,2,...,n} with induced weight function [special characters omitted] such that l(x_i) = i and the sequence of weights w(x_1), w(x_2), ..., w(x_n) forms an arithmetic sequence with constant difference d at least 1. If a graph G admits a d-handicap labeling, we say G is a d-handicap graph. A d-handicap incomplete tournament, H(n,k,d), is an incomplete tournament of n teams ranked with the first n natural numbers such that each team plays exactly k games and the strength of schedule of the ith ranked team is d more than that of the (i+1)st ranked team. That is, strength of schedule increases arithmetically with strength of team. Constructing an H(n,k,d) is equivalent to finding a d-handicap labeling of a k-regular graph of order n. In Chapter 2 we provide general constructions for every d for large classes of both n and k, providing breadth and depth to the catalog of known H(n,k,d)'s. In Chapters 3 - 6, we introduce a new type of labeling called orientable Gamma-distance magic labeling. Let Gamma be an abelian group of order
Naive Bayesian classifiers for multinomial features: a theoretical analysis
CSIR Research Space (South Africa)
Van Dyk, E
2007-11-01
Full Text Available The authors investigate the use of naive Bayesian classifiers for multinomial feature spaces and derive error estimates for these classifiers. The error analysis is done by developing a mathematical model to estimate the probability density...
Rebouças Filho, Pedro P; Sarmento, Róger Moura; Holanda, Gabriel Bandeira; de Alencar Lima, Daniel
2017-09-01
Cerebral vascular accident (CVA), also known as stroke, is an important health problem worldwide and it affects 16 million people worldwide every year. About 30% of those that have a stroke die and 40% remain with serious physical limitations. However, recovery in the damaged region is possible if treatment is performed immediately. In the case of a stroke, Computed Tomography (CT) is the most appropriate technique to confirm the occurrence and to investigate its extent and severity. Stroke is an emergency problem for which early identification and measures are difficult; however, computer-aided diagnoses (CAD) can play an important role in obtaining information imperceptible to the human eye. Thus, this work proposes a new method for extracting features based on radiological density patterns of the brain, called Analysis of Brain Tissue Density (ABTD). The proposed method is a specific approach applied to CT images to identify and classify the occurrence of stroke diseases. The evaluation of the results of the ABTD extractor proposed in this paper were compared with extractors already established in the literature, such as features from Gray-Level Co-Occurrence Matrix (GLCM), Local Binary Patterns (LBP), Central Moments (CM), Statistical Moments (SM), Hu's Moments (HM) and Zernike's Moments (ZM). Using a database of 420 CT images of the skull, each extractor was applied with classifiers such as MLP, SVM, kNN, OPF and Bayesian to classify whether a CT image represented a healthy brain or one with an ischemic or hemorrhagic stroke. ABTD had the shortest extraction time and the highest average accuracy (99.30%) when combined with OPF using the Euclidean distance. Also, the average accuracy values for all classifiers were higher than 95%. The relevance of the results demonstrated that the ABTD method is a useful algorithm to extract features that can potentially be integrated with CAD systems to assist in stroke diagnosis.
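As a minimal illustration of the classification step (not the ABTD feature extractor itself), a nearest-neighbour rule using the Euclidean distance mentioned in the study might look like the sketch below; the feature vectors and labels are invented for illustration:

```python
import math

def nn_classify(train, query):
    """1-nearest-neighbour by Euclidean distance: assign the query the
    label of the closest training feature vector."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    label, _ = min(((lbl, dist(feat, query)) for feat, lbl in train),
                   key=lambda t: t[1])
    return label

# Hypothetical 2-D feature vectors, for illustration only.
train = [((0.2, 0.1), "healthy"),
         ((0.8, 0.9), "ischemic"),
         ((0.9, 0.2), "hemorrhagic")]
print(nn_classify(train, (0.25, 0.15)))  # -> healthy
```

Real ABTD feature vectors would have many more dimensions, and the study's best result paired them with an OPF classifier rather than plain kNN.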
Minimum entropy production principle
Czech Academy of Sciences Publication Activity Database
Maes, C.; Netočný, Karel
2013-01-01
Vol. 8, No. 7 (2013), pp. 9664-9677. ISSN 1941-6016. Institutional support: RVO:68378271. Keywords: MINEP. Subject RIV: BE - Theoretical Physics. http://www.scholarpedia.org/article/Minimum_entropy_production_principle
Cohen, A.M.; Beineke, L.W.; Wilson, R.J.; Cameron, P.J.
2004-01-01
In this chapter we investigate the classification of distance-transitive graphs: these are graphs whose automorphism groups are transitive on each of the sets of pairs of vertices at distance i, for i = 0, 1, .... We provide an introduction to the field. By use of the classification of finite
Traveling salesman problems with PageRank Distance on complex networks reveal community structure
Jiang, Zhongzhou; Liu, Jing; Wang, Shuai
2016-12-01
In this paper, we propose a new algorithm for community detection problems (CDPs) based on traveling salesman problems (TSPs), labeled as TSP-CDA. Since TSPs need to find a tour with minimum cost, cities close to each other are usually clustered in the tour. This inspired us to model CDPs as TSPs by taking each vertex as a city. Then, in the final tour, the vertices in the same community tend to cluster together, and the community structure can be obtained by cutting the tour into a couple of paths. There are two challenges. The first is to define a suitable distance between each pair of vertices which can reflect the probability that they belong to the same community. The second is to design a suitable strategy to cut the final tour into paths which can form communities. In TSP-CDA, we deal with these two challenges by defining a PageRank Distance and an automatic threshold-based cutting strategy. The PageRank Distance is designed with the intrinsic properties of CDPs in mind, and can be calculated efficiently. In the experiments, benchmark networks with 1000-10,000 nodes and varying structures are used to test the performance of TSP-CDA. A comparison is also made between TSP-CDA and two well-established community detection algorithms. The results show that TSP-CDA can find accurate community structure efficiently and outperforms the two existing algorithms.
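The abstract does not give the PageRank Distance formula, but the PageRank scores it builds on can be computed by plain power iteration. A sketch on an undirected graph stored as an adjacency dict (a pairwise distance for the TSP step would then be derived from such scores):

```python
def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank on a graph given as an adjacency dict.

    Each node's score is repeatedly redistributed to its neighbours,
    mixed with a uniform teleport term of weight (1 - damping).
    """
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            share = damping * rank[v] / max(len(adj[v]), 1)
            for u in adj[v]:
                nxt[u] += share
        rank = nxt
    return rank

# Star graph: the hub "c" should collect the highest score.
adj = {"c": ["a", "b", "d"], "a": ["c"], "b": ["c"], "d": ["c"]}
print(pagerank(adj)["c"] > pagerank(adj)["a"])  # -> True
```

In TSP-CDA the resulting per-vertex (or personalized) scores feed a distance that is small for vertices likely to share a community, so the TSP tour clusters them together before the cutting step.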
Minimum emittance in TBA and MBA lattices
Xu, Gang; Peng, Yue-Mei
2015-03-01
For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) and even multiple bend achromats (MBA) have been considered. This paper derived the necessary condition for achieving minimum emittance in TBA and MBA theoretically, where the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. Here, we also calculated the conditions attaining the minimum emittance of TBA related to phase advance in some special cases with a pure mathematics method. These results may give some directions on lattice design.
Minimum emittance in TBA and MBA lattices
International Nuclear Information System (INIS)
Xu Gang; Peng Yuemei
2015-01-01
For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) and even multiple bend achromats (MBA) have been considered. This paper derived the necessary condition for achieving minimum emittance in TBA and MBA theoretically, where the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. Here, we also calculated the conditions attaining the minimum emittance of TBA related to phase advance in some special cases with a pure mathematics method. These results may give some directions on lattice design. (authors)
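The angle condition stated in both records above can be written compactly. The second relation, the dipole angles summing to the cell's total bend for an M-dipole achromat, is an assumption added here for context, not a formula quoted from the paper:

```latex
\theta_{\mathrm{inner}} = 3^{1/3}\,\theta_{\mathrm{outer}},
\qquad
2\,\theta_{\mathrm{outer}} + (M-2)\,\theta_{\mathrm{inner}} = \theta_{\mathrm{cell}}
```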
Robust linear discriminant analysis with distance based estimators
Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina
2017-11-01
Linear discriminant analysis (LDA) is one of the supervised classification techniques concerning relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function to distinguish between populations and allocating future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, the LDA yields optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA highly relies on the sample mean and pooled sample covariance matrix which are known to be sensitive to outliers. To alleviate these conflicts, a new robust LDA using distance based estimators known as minimum variance vector (MVV) has been proposed in this study. The MVV estimators were used to substitute the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). Simulation and real data study were conducted to examine on the performance of the proposed RLDR measured in terms of misclassification error rates. The computational result showed that the proposed RLDR is better than the classical LDR and was comparable with the existing robust LDR.
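A classical two-group linear discriminant rule can be sketched for 2-feature data as below, with the sample mean and pooled covariance in the roles that the proposed method fills with robust MVV estimators (whose formula the abstract does not give), so treat the estimator choice as a placeholder:

```python
def linear_discriminant_rule(g1, g2):
    """Build a two-group linear discriminant rule for 2-feature data.

    Returns a function allocating a point to group 1 or group 2 via
    Fisher's rule: w . x > c, with w from the pooled covariance inverse
    applied to the mean difference, and c at the midpoint of the means.
    """
    def mean(g):
        n = len(g)
        return [sum(x[0] for x in g) / n, sum(x[1] for x in g) / n]

    def cov(g, m):
        n = len(g)
        sxx = sum((x[0] - m[0]) ** 2 for x in g) / (n - 1)
        syy = sum((x[1] - m[1]) ** 2 for x in g) / (n - 1)
        sxy = sum((x[0] - m[0]) * (x[1] - m[1]) for x in g) / (n - 1)
        return [[sxx, sxy], [sxy, syy]]

    m1, m2 = mean(g1), mean(g2)
    n1, n2 = len(g1), len(g2)
    c1, c2 = cov(g1, m1), cov(g2, m2)
    # Pooled covariance, then its explicit 2x2 inverse.
    p = [[((n1 - 1) * c1[i][j] + (n2 - 1) * c2[i][j]) / (n1 + n2 - 2)
          for j in range(2)] for i in range(2)]
    det = p[0][0] * p[1][1] - p[0][1] * p[1][0]
    inv = [[p[1][1] / det, -p[0][1] / det], [-p[1][0] / det, p[0][0] / det]]
    d = [m1[0] - m2[0], m1[1] - m2[1]]
    w = [inv[0][0] * d[0] + inv[0][1] * d[1],
         inv[1][0] * d[0] + inv[1][1] * d[1]]
    mid = [(m1[0] + m2[0]) / 2, (m1[1] + m2[1]) / 2]
    c = w[0] * mid[0] + w[1] * mid[1]
    return lambda x: 1 if w[0] * x[0] + w[1] * x[1] > c else 2
```

Substituting a robust location and scatter estimate for `mean` and the pooled `cov` is exactly the kind of swap the paper performs; the allocation rule itself is unchanged.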
Neural network classifier of attacks in IP telephony
Safarik, Jakub; Voznak, Miroslav; Mehic, Miralem; Partila, Pavol; Mikulec, Martin
2014-05-01
Various types of monitoring mechanisms allow us to detect and monitor the behavior of attackers in VoIP networks. Analysis of detected malicious traffic is crucial for further investigation and for hardening the network. This analysis is typically based on statistical methods, and this article brings a solution based on a neural network. The proposed algorithm is used as a classifier of attacks in a distributed monitoring network of independent honeypot probes. Information about attacks on these honeypots is collected on a centralized server and then classified. This classification is based on different mechanisms, one of which is the multilayer perceptron neural network. The article describes the inner structure of the neural network used and information about its implementation. The learning set for this neural network is based on real attack data collected from an IP telephony honeypot called Dionaea. We prepare the learning set from real attack data after collecting, cleaning and aggregating this information. After proper learning, the neural network is capable of classifying 6 types of the most commonly used VoIP attacks. Using a neural network classifier brings more accurate attack classification in a distributed system of honeypots. With this approach it is possible to detect malicious behavior in different parts of networks, which are logically or geographically divided, and to use the information from one network to harden security in other networks. The centralized server for the distributed set of nodes serves not only as a collector and classifier of attack data, but also as a mechanism for generating precaution steps against attacks.
Educational Triage in Open Distance Learning: Walking a Moral Tightrope
Directory of Open Access Journals (Sweden)
Paul Prinsloo
2014-09-01
Full Text Available Higher education, and more specifically, distance education, is in the midst of a rapidly changing environment. Higher education institutions increasingly rely on the harvesting and analyses of student data to inform key strategic decisions across a wide range of issues, including marketing, enrolment, curriculum development, the appointment of staff, and student assessment. In the light of persistent concerns regarding student success and retention in distance education contexts, the harvesting and analysis of student data, in particular in the emerging field of learning analytics, holds much promise. As such the notion of educational triage needs to be interrogated. Educational triage is defined as balancing between the futility or impact of the intervention juxtaposed with the number of students requiring care, the scope of care required, and the resources available for care/interventions. The central question posed by this article is "how do we make moral decisions when resources are (increasingly) limited?" An attempt is made to address this by discussing the use of data to support decisions regarding student support and examining the concept of educational triage. Despite the increase in examples of institutions implementing a triage-based approach to student support, there is a serious lack of supporting conceptual and theoretical development, and, more importantly, of consideration of the moral cost of triage in educational settings. This article provides a conceptual framework to realise the potential of educational triage to responsibly and ethically respond to legitimate concerns about the "revolving door" in distance and online learning and the sustainability of higher education, without compromising 'openness.' The conceptual framework does not attempt to provide a detailed map, but rather a compass consisting of principles to consider in using learning analytics to classify students according to their perceived risk of
41 CFR 50-202.2 - Minimum wage in all industries.
2010-07-01
... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Minimum wage in all... Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 202-MINIMUM WAGE DETERMINATIONS Groups of Industries § 50-202.2 Minimum wage in all industries. In all industries, the minimum wage applicable to...
Making Distance Visible: Assembling Nearness in an Online Distance Learning Programme
Ross, Jen; Gallagher, Michael Sean; Macleod, Hamish
2013-01-01
Online distance learners are in a particularly complex relationship with the educational institutions they belong to (Bayne, Gallagher, & Lamb, 2012). For part-time distance students, arrivals and departures can be multiple and invisible as students take courses, take breaks, move into independent study phases of a programme, find work or…
29 CFR 525.13 - Renewal of special minimum wage certificates.
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Renewal of special minimum wage certificates. 525.13... minimum wage certificates. (a) Applications may be filed for renewal of special minimum wage certificates.... (c) Workers with disabilities may not continue to be paid special minimum wages after notice that an...
An Empirical Analysis of the Relationship between Minimum Wage ...
African Journals Online (AJOL)
An Empirical Analysis of the Relationship between Minimum Wage, Investment and Economic Growth in Ghana. ... In addition, the ratio of public investment to tax revenue must increase as minimum wage increases since such complementary changes are more likely to lead to economic growth. Keywords: minimum wage ...
SVM classifier on chip for melanoma detection.
Afifi, Shereen; GholamHosseini, Hamid; Sinha, Roopak
2017-07-01
Support Vector Machine (SVM) is a common classifier used for efficient classification with high accuracy. SVM shows high accuracy for classifying melanoma (skin cancer) clinical images within computer-aided diagnosis systems used by skin cancer specialists to detect melanoma early and save lives. We aim to develop a medical low-cost handheld device that runs a real-time embedded SVM-based diagnosis system for use in primary care for early detection of melanoma. In this paper, an optimized SVM classifier is implemented onto a recent FPGA platform using the latest design methodology to be embedded into the proposed device for realizing online efficient melanoma detection on a single system on chip/device. The hardware implementation results demonstrate a high classification accuracy of 97.9% and a significant acceleration factor of 26 over an equivalent software implementation on an embedded processor, with 34% resource utilization and 2 watts power consumption. Consequently, the implemented system meets the crucial embedded-system constraints of high performance and of low cost, resource utilization and power consumption, while achieving high classification accuracy.
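The part such an FPGA design hard-wires is the SVM decision function: a multiply-accumulate over the feature vector followed by a sign test. A linear-kernel sketch, with weights and bias assumed pre-trained offline and the class labels purely illustrative (not taken from the paper):

```python
def svm_decide(weights, bias, features):
    """Linear-SVM decision function: the inference step that a hardware
    implementation realizes as multiply-accumulate plus compare.

    Returns +1 or -1 depending on which side of the learned hyperplane
    the feature vector falls.
    """
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score >= 0 else -1

# Hypothetical trained parameters and feature vector, for illustration.
print(svm_decide([1.0, -2.0], 0.5, [2.0, 1.0]))  # -> 1
```

A kernel SVM adds a sum over support vectors, but linear SVMs are common in embedded designs precisely because the per-sample cost collapses to one dot product.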
12 CFR 3.6 - Minimum capital ratios.
2010-01-01
... should have well-diversified risks, including no undue interest rate risk exposure; excellent control... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Minimum capital ratios. 3.6 Section 3.6 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY MINIMUM CAPITAL RATIOS; ISSUANCE...
12 CFR 615.5330 - Minimum surplus ratios.
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Minimum surplus ratios. 615.5330 Section 615.5330 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM FUNDING AND FISCAL AFFAIRS, LOAN POLICIES AND OPERATIONS, AND FUNDING OPERATIONS Surplus and Collateral Requirements § 615.5330 Minimum...
Li, Qingbo; Hao, Can; Kang, Xue; Zhang, Jialin; Sun, Xuejun; Wang, Wenbo; Zeng, Haishan
2017-11-27
Combining Fourier transform infrared spectroscopy (FTIR) with endoscopy, it is expected that noninvasive, rapid detection of colorectal cancer can be performed in vivo in the future. In this study, Fourier transform infrared spectra were collected from 88 endoscopic biopsy colorectal tissue samples (41 colitis and 47 cancers). A new method, viz., entropy weight local-hyperplane k-nearest-neighbor (EWHK), which is an improved version of K-local hyperplane distance nearest-neighbor (HKNN), is proposed for tissue classification. To mitigate the limitations of high dimensionality and of small nearest-neighbor values, the new EWHK method calculates feature weights based on information entropy. The average results of the random classification showed that the EWHK classifier for differentiating cancer from colitis samples produced a sensitivity of 81.38% and a specificity of 92.69%.
Theoretical Principles of Distance Education.
Keegan, Desmond, Ed.
This book contains the following papers examining the didactic, academic, analytic, philosophical, and technological underpinnings of distance education: "Introduction"; "Quality and Access in Distance Education: Theoretical Considerations" (D. Randy Garrison); "Theory of Transactional Distance" (Michael G. Moore);…
Energy Technology Data Exchange (ETDEWEB)
McQuinn, Kristen B. W. [University of Texas at Austin, McDonald Observatory, 2515 Speedway, Stop C1400 Austin, TX 78712 (United States); Skillman, Evan D. [Minnesota Institute for Astrophysics, School of Physics and Astronomy, 116 Church Street, SE, University of Minnesota, Minneapolis, MN 55455 (United States); Dolphin, Andrew E. [Raytheon Company, 1151 E. Hermans Road, Tucson, AZ 85756 (United States); Berg, Danielle [Center for Gravitation, Cosmology and Astrophysics, Department of Physics, University of Wisconsin Milwaukee, 1900 East Kenwood Boulevard, Milwaukee, WI 53211 (United States); Kennicutt, Robert, E-mail: kmcquinn@astro.as.utexas.edu [Institute for Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom)
2016-11-01
M104 (NGC 4594; the Sombrero galaxy) is a nearby, well-studied elliptical galaxy included in scores of surveys focused on understanding the details of galaxy evolution. Despite the importance of observations of M104, a consensus distance has not yet been established. Here, we use newly obtained Hubble Space Telescope optical imaging to measure the distance to M104 based on the tip of the red giant branch (TRGB) method. Our measurement yields the distance to M104 to be 9.55 ± 0.13 ± 0.31 Mpc, equivalent to a distance modulus of 29.90 ± 0.03 ± 0.07 mag. Our distance is an improvement over previous results as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian maximum likelihood technique that reduces measurement uncertainties. The most discrepant previous results are due to Tully–Fisher method distances, which are likely inappropriate for M104 given its peculiar morphology and structure. Our results are part of a larger program to measure accurate distances to a sample of well-known spiral galaxies (including M51, M74, and M63) using the TRGB method.
McQuinn, Kristen. B. W.; Skillman, Evan D.; Dolphin, Andrew E.; Berg, Danielle; Kennicutt, Robert
2016-07-01
Great investments of observing time have been dedicated to the study of nearby spiral galaxies with diverse goals ranging from understanding the star formation process to characterizing their dark matter distributions. Accurate distances are fundamental to interpreting observations of these galaxies, yet many of the best studied nearby galaxies have distances based on methods with relatively large uncertainties. We have started a program to derive accurate distances to these galaxies. Here we measure the distance to M51—the Whirlpool galaxy—from newly obtained Hubble Space Telescope optical imaging using the tip of the red giant branch method. We measure the distance to be 8.58 ± 0.10 Mpc (statistical), corresponding to a distance modulus of 29.67 ± 0.02 mag. Our distance is an improvement over previous results as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian maximum likelihood technique that reduces measurement uncertainties. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
Are contemporary tourists consuming distance?
DEFF Research Database (Denmark)
Larsen, Gunvor Riber
2012
Background: The background for this research, which explores how tourists represent distance and whether or not distance can be said to be consumed by contemporary tourists, is the increasing leisure mobility of people. Travelling for the purpose of visiting friends and relatives is increasing...... of understanding mobility at a conceptual level, and distance matters to people's manifest mobility: how they travel and how far they travel are central elements of their movements. Therefore leisure mobility (indeed all mobility) is the activity of relating across distance, either through actual corporeal...... metric representation. These representations are the focus for this research. Research Aim and Questions: The aim of this research is thus to explore how distance is being represented within the context of leisure mobility. Further, the aim is to explore how or whether distance is being consumed...
Performance of classification confidence measures in dynamic classifier systems
Czech Academy of Sciences Publication Activity Database
Štefka, D.; Holeňa, Martin
2013-01-01
Roč. 23, č. 4 (2013), s. 299-319 ISSN 1210-0552 R&D Projects: GA ČR GA13-17187S Institutional support: RVO:67985807 Keywords : classifier combining * dynamic classifier systems * classification confidence Subject RIV: IN - Informatics, Computer Science Impact factor: 0.412, year: 2013
Terminology report respect distance. The Use of the term respect distance in Posiva and SKB
International Nuclear Information System (INIS)
Lampinen, H.
2007-09-01
The term respect distance is used in some key publications of the Finnish Nuclear Waste Management Company, Posiva, and the Swedish Nuclear Waste Management Company, SKB (Svensk Kärnbränslehantering). Posiva and SKB researchers use the same terms in their reports, yet it is acknowledged that the two companies do not use the terms in the same way, though the differences are often subtle. This report is a literature study of the term 'respect distance' and the terms immediately associated with it. Vital terms related to the respect distance, and issues concerning the use of scale concepts in Posiva and SKB, are gathered at the end of the report. Posiva's respect distances consider the seismic, hydrological and mechanical properties of the deterministic deformation zones as important issues that constitute a risk for long-term safety. These requirements for respect distances are an interpretation of STUK's YVL 8.4 Guide. At present, Posiva's criteria regarding respect distances follow the instructions given in the Host Rock Classification system (HRC), whereas the size of a deformation zone to which respect distances are applied varies from regional to local major and minor. This and other criteria given for respect distances may, however, change in the near future as Posiva's Rock Suitability Criteria (RSC) programme proceeds. SKB's considerations of respect distances acknowledge that the hydraulic and mechanical aspects of a deformation zone have an effect on the respect distance. However, the seismic risk is considered to overshadow the other effects on a regional scale. The respect distance defined for a deformation zone is coupled with the size of a fracture where secondary slip could occur. In the safety assessment it is assumed that this fracture cuts a deposition hole location. In SKB the respect distance is determined for regional and local major deformation zones. The trace length of such a zone is defined as being ≥ 3 km. For deformation zones
Directory of Open Access Journals (Sweden)
Imam Widhiono
2016-09-01
Full Text Available In the agricultural landscape on the northern slope of Mount Slamet, the diversity of wild bee species as pollinators depends on forested habitats. This study aimed to assess the effects of distance from the forest edge on the diversity of wild bees on strawberry and tomato crops. This study was conducted from July 2014 to October 2014. The experimental fields contained tomato and strawberry with a total area of 4 ha (2 ha each) and were divided into five plots based on distance from the forest edge (0, 50, 100, 150, and 200 m). Wild bees were caught with kite netting between 7.00 and 9.00 a.m. on ten consecutive days. Wild bee diversity differed according to distance from the forest edge: the highest value was at 0 m for strawberry plots (H’ = 2.008, E = 0.72 and Chao1 = 16), and for tomato plots the highest diversity was at 50 m from the forest edge (H’ = 2.298, E = 0.95 and Chao1 = 11); the lowest was at 200 m in both plots. Wild bee species richness and abundance decreased with distance, resulting in the minimum diversity and abundance of wild bees at 200 m from the forest edge in both crops. How to Cite: Widhiono, I., & Sudiana, E. (2016). Impact of Distance from the Forest Edge on The Wild Bee Diversity on the Northern Slope of Mount Slamet. Biosaintifika: Journal of Biology & Biology Education, 8(2), 148-154.
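The indices reported above (Shannon H’, evenness E, Chao1 richness) follow standard definitions. A minimal sketch, with an invented abundance vector standing in for one plot's counts:

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum of p_i * ln(p_i) over observed species."""
    n = sum(counts)
    return -sum(c / n * math.log(c / n) for c in counts if c > 0)

def evenness(counts):
    """Pielou's evenness E = H' / ln(S), with S the number of observed species."""
    s = sum(1 for c in counts if c > 0)
    return shannon(counts) / math.log(s)

def chao1(counts):
    """Chao1 richness: S_obs + F1^2 / (2*F2), with F1 singletons and F2
    doubletons; bias-corrected form used when F2 = 0."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    return s_obs + f1 * f1 / (2 * f2) if f2 > 0 else s_obs + f1 * (f1 - 1) / 2

counts = [10, 6, 4, 2, 1, 1]   # hypothetical abundances for one plot
print(round(shannon(counts), 3), round(evenness(counts), 3), chao1(counts))
```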
SAR Target Recognition Based on Multi-feature Multiple Representation Classifier Fusion
Directory of Open Access Journals (Sweden)
Zhang Xinzheng
2017-10-01
Full Text Available In this paper, we present a Synthetic Aperture Radar (SAR) image target recognition algorithm based on multi-feature multiple representation learning classifier fusion. First, it extracts three features from the SAR images, namely principal component analysis, wavelet transform, and Two-Dimensional Slice Zernike Moments (2DSZM) features. Second, we harness the sparse representation classifier and the cooperative representation classifier with the above-mentioned features to get six predictive labels. Finally, we adopt classifier fusion to obtain the final recognition decision. We researched three different classifier fusion algorithms in our experiments, and the results demonstrate that using Bayesian decision fusion gives the best recognition performance. The method based on multi-feature multiple representation learning classifier fusion integrates the discrimination of multi-features and combines the sparse and cooperative representation classification performance to gain complementary advantages and to improve recognition accuracy. The experiments are based on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, and they demonstrate the effectiveness of the proposed approach.
5 CFR 551.601 - Minimum age standards.
2010-01-01
... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year... subject to its child labor provisions, with certain exceptions not applicable here. (b) 18-year minimum... occupation found and declared by the Secretary of Labor to be particularly hazardous for the employment of...
12 CFR 932.8 - Minimum liquidity requirements.
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Minimum liquidity requirements. 932.8 Section... CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.8 Minimum liquidity requirements. In addition to meeting the deposit liquidity requirements contained in § 965.3 of this chapter, each Bank...
Classification of EEG Signals using adaptive weighted distance nearest neighbor algorithm
Directory of Open Access Journals (Sweden)
E. Parvinnia
2014-01-01
Full Text Available Electroencephalogram (EEG) signals are often used to diagnose diseases such as seizures, Alzheimer's disease, and schizophrenia. One main problem with recorded EEG samples is that they are not equally reliable, due to artifacts at the time of recording. EEG signal classification algorithms should have a mechanism to handle this issue. Adaptive classifiers seem useful for biological signals such as EEG. In this paper, a general adaptive method named weighted distance nearest neighbor (WDNN) is applied to EEG signal classification to tackle this problem. This classification algorithm assigns a weight to each training sample to control its influence in classifying test samples. The weights of the training samples are used to find the nearest neighbor of an input query pattern. To assess the performance of this scheme, EEG signals of thirteen schizophrenic patients and eighteen normal subjects are analyzed for the classification of these two groups. Several features, including fractal dimension, band power, and autoregressive (AR) model parameters, are extracted from the EEG signals. The classification results are evaluated using leave-one-subject-out cross-validation for reliable estimation. The results indicate that the combination of WDNN and the selected features can significantly outperform the basic nearest neighbor and the other methods proposed in the past for the classification of these two groups. Therefore, this method can be a complementary tool for specialists to distinguish schizophrenia disorder.
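One plausible reading of the per-sample weighting idea can be sketched as follows. The actual WDNN learns the weights from training data; here the weights, features, and labels are hand-set purely for illustration:

```python
import math

def wdnn_classify(train, weights, labels, query):
    """Weighted-distance 1-NN sketch: each training sample carries a weight
    that scales its distance to the query, so unreliable (e.g. artifact-laden)
    recordings lose influence on the decision."""
    best_d, best_label = float("inf"), None
    for x, w, y in zip(train, weights, labels):
        d = w * math.dist(x, query)     # weight > 1 pushes the sample away
        if d < best_d:
            best_d, best_label = d, y
    return best_label

train = [(0.0, 0.0), (1.0, 1.0), (0.2, 0.1)]   # toy 2-D feature vectors
labels = ["normal", "schizophrenic", "normal"]
weights = [1.0, 1.0, 5.0]                      # third recording down-weighted
print(wdnn_classify(train, weights, labels, (0.9, 0.9)))
```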
Directory of Open Access Journals (Sweden)
Katarina Pucelj
2006-12-01
Full Text Available I would like to underline the role and importance of knowledge, which individuals acquire as a result of a learning process and experience. I have established that a form of learning such as distance learning definitely contributes to higher learning quality and leads to an innovative, dynamic and knowledge-based society. Knowledge and skills enable individuals to cope with and manage change, solve problems and also create new knowledge. Traditional learning practices face new circumstances: new and modern technologies appear that enable quick and quality-oriented knowledge implementation. The distance learning process centres on increasing citizens' quality of life and their competitiveness in the labour market, and on ensuring higher economic growth. Intellectual capital represents the greatest capital of any society, and knowledge is the key success factor for everyone who is fully aware of this. The flexibility, openness and willingness of people to adopt new IT solutions create a suitable environment for developing, and deciding to take up, distance learning.
[Hospitals failing minimum volumes in 2004: reasons and consequences].
Geraedts, M; Kühnen, C; Cruppé, W de; Blum, K; Ohmann, C
2008-02-01
In 2004 Germany introduced annual minimum volumes nationwide on five surgical procedures: kidney, liver, stem cell transplantation, complex oesophageal, and pancreatic interventions. Hospitals that fail to reach the minimum volumes are no longer allowed to perform the respective procedures unless they raise one of eight legally accepted exceptions. The goal of our study was to investigate how many hospitals fell short of the minimum volumes in 2004, whether and how this was justified, and whether hospitals that failed the requirements experienced any consequences. We analysed data on meeting the minimum volume requirements in 2004 that all German hospitals were obliged to publish as part of their biannual structured quality reports. We performed telephone interviews: a) with all hospitals not achieving the minimum volumes for complex oesophageal, and pancreatic interventions, and b) with the national umbrella organisations of all German sickness funds. In 2004, one quarter of all German acute care hospitals (N=485) performed 23,128 procedures where minimum volumes applied. 197 hospitals (41%) did not meet at least one of the minimum volumes. These hospitals performed N=715 procedures (3.1%) where the minimum volumes were not met. In 43% of these cases the hospitals raised legally accepted exceptions. In 33% of the cases the hospitals argued using reasons that were not legally acknowledged. 69% of those hospitals that failed to achieve the minimum volumes for complex oesophageal and pancreatic interventions did not experience any consequences from the sickness funds. However, one third of those hospitals reported that the sickness funds addressed the issue and partially announced consequences for the future. The sickness funds' umbrella organisations stated that there were only sparse activities related to the minimum volumes and that neither uniform registrations nor uniform proceedings in case of infringements of the standards had been agreed upon. In spite of the
Directory of Open Access Journals (Sweden)
Angga Sukma Wijaya
2017-01-01
Full Text Available The construction project of the hotel The Alimar Surabaya, a 7-storey building, occupies a 900 m² site with a building coverage (KDB) of 650 m² and a total floor area (KLB) of 4,550 m². The problem is that the building stands on a rather small site that directly borders residents' houses. With such limited room to manoeuvre it is difficult to place the site facilities. Planning the site layout facilities carefully can economize the use of construction space: the larger the area used for placing site facilities, the more time is spent travelling between facilities. Alternative site layouts therefore need to be generated to obtain optimal site facilities. In this study, the site layout facilities are planned with traveling distance and safety index, together called the multi-objectives function, as the criteria. The planning is essentially divided into two stages: the sub-structure works and the upper-structure works. Both stages can use either the equal or the unequal site layout method. The facility planning also accounts for the required area and the material facilities in the stock yard. After iterating and building scenarios with different site layout forms, the one with the minimum multi-objectives function value was selected; a Pareto-optimal chart was then used to show the minimum objective-function points. The results: for the sub-structure works, the optimal site layout is alternative 665, which has the lowest traveling distance and safety index, with TD = 13,246.18 m (a 3.30% reduction) and SI = 1,048 (a 5.76% reduction) relative to the initial plan. For the upper-structure works, the optimal site layout is alternative 122, which has
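The traveling distance (TD) objective in the site-layout abstract above is conventionally the trip frequency between each facility pair times the distance between them. A sketch of that one objective (the study also minimizes a safety index; all facility names and numbers below are invented for illustration):

```python
import math

def traveling_distance(positions, trips):
    """TD = sum over facility pairs of (trips between i and j) * (distance
    between i and j). positions: facility -> (x, y) in metres;
    trips: (i, j) -> trip frequency."""
    return sum(f * math.dist(positions[i], positions[j])
               for (i, j), f in trips.items())

layout = {"office": (0, 0), "stockyard": (30, 0), "tower crane": (30, 40)}
trips = {("office", "stockyard"): 10, ("stockyard", "tower crane"): 25}
print(traveling_distance(layout, trips))   # 10*30 + 25*40 = 1300.0
```

Comparing TD across candidate layouts (and pairing it with the safety index) is what lets the search rank the 665/122-style alternatives.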
Akhtar, Naveed; Mian, Ajmal
2017-10-03
We present a principled approach to learn a discriminative dictionary along with a linear classifier for hyperspectral classification. Our approach places Gaussian Process priors over the dictionary to account for the relative smoothness of the natural spectra, whereas the classifier parameters are sampled from multivariate Gaussians. We employ two Beta-Bernoulli processes to jointly infer the dictionary and the classifier. These processes are coupled under the same sets of Bernoulli distributions. In our approach, these distributions signify the frequency of the dictionary atom usage in representing class-specific training spectra, which also makes the dictionary discriminative. Due to the coupling between the dictionary and the classifier, the popularity of the atoms for representing different classes gets encoded into the classifier. This helps in predicting the class labels of test spectra that are first represented over the dictionary by solving a simultaneous sparse optimization problem. The labels of the spectra are predicted by feeding the resulting representations to the classifier. Our approach exploits the nonparametric Bayesian framework to automatically infer the dictionary size--the key parameter in discriminative dictionary learning. Moreover, it also has the desirable property of adaptively learning the association between the dictionary atoms and the class labels by itself. We use Gibbs sampling to infer the posterior probability distributions over the dictionary and the classifier under the proposed model, for which we derive analytical expressions. To establish the effectiveness of our approach, we test it on benchmark hyperspectral images. The classification performance is compared with the state-of-the-art dictionary learning-based classification methods.
Classifying a smoker scale in adult daily and nondaily smokers.
Pulvers, Kim; Scheuermann, Taneisha S; Romero, Devan R; Basora, Brittany; Luo, Xianghua; Ahluwalia, Jasjit S
2014-05-01
Smoker identity, or the strength of beliefs about oneself as a smoker, is a robust marker of smoking behavior. However, many nondaily smokers do not identify as smokers, underestimating their risk for tobacco-related disease and resulting in missed intervention opportunities. Assessing underlying beliefs about characteristics used to classify smokers may help explain the discrepancy between smoking behavior and smoker identity. This study examines the factor structure, reliability, and validity of the Classifying a Smoker scale among a racially diverse sample of adult smokers. A cross-sectional survey was administered through an online panel survey service to 2,376 current smokers who were at least 25 years of age. The sample was stratified to obtain equal numbers of 3 racial/ethnic groups (African American, Latino, and White) across smoking level (nondaily and daily smoking). The Classifying a Smoker scale displayed a single factor structure and excellent internal consistency (α = .91). Classifying a Smoker scores significantly increased at each level of smoking, F(3, 2375) = 23.68, p < .001. Participants with higher scores reported a stronger smoker identity, stronger dependence on cigarettes, greater health risk perceptions, and more smoking friends, and were more likely to carry cigarettes. Classifying a Smoker scores explained unique variance in smoking variables above and beyond that explained by smoker identity. The present study supports the use of the Classifying a Smoker scale among diverse, experienced smokers. Stronger endorsement of characteristics used to classify a smoker (i.e., stricter criteria) was positively associated with heavier smoking and related characteristics. Prospective studies are needed to inform prevention and treatment efforts.
The Distribution of the Sample Minimum-Variance Frontier
Raymond Kan; Daniel R. Smith
2008-01-01
In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...
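The population object behind the sample frontier studied above is the minimum-variance portfolio, w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A sketch of that formula for two assets using the closed-form 2×2 inverse (illustrative only; the paper's point is that the *sample* version of this frontier is a biased estimator of the population one):

```python
def gmv_weights(cov):
    """Global minimum-variance portfolio for two assets:
    w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv_row_sums = [(d - b) / det, (a - c) / det]   # Sigma^{-1} @ ones
    total = sum(inv_row_sums)                       # 1' Sigma^{-1} 1
    return [s / total for s in inv_row_sums]

cov = [[0.04, 0.01], [0.01, 0.09]]   # hypothetical covariance matrix
w = gmv_weights(cov)
print([round(x, 4) for x in w])      # weights sum to 1; low-variance asset dominates
```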
Classifying spaces with virtually cyclic stabilizers for linear groups
DEFF Research Database (Denmark)
Degrijse, Dieter Dries; Köhl, Ralf; Petrosyan, Nansen
2015-01-01
We show that every discrete subgroup of GL(n, ℝ) admits a finite-dimensional classifying space with virtually cyclic stabilizers. Applying our methods to SL(3, ℤ), we obtain a four-dimensional classifying space with virtually cyclic stabilizers and a decomposition of the algebraic K-theory of its...
Intuitive Action Set Formation in Learning Classifier Systems with Memory Registers
Simões, L.F.; Schut, M.C.; Haasdijk, E.W.
2008-01-01
An important design goal in Learning Classifier Systems (LCS) is to equally reinforce those classifiers which cause the level of reward supplied by the environment. In this paper, we propose a new method for action set formation in LCS. When applied to a Zeroth Level Classifier System with Memory Registers
Data Stream Classification Based on the Gamma Classifier
Directory of Open Access Journals (Sweden)
Abril Valeria Uriarte-Arcia
2015-01-01
Full Text Available The ever increasing data generation confronts us with the problem of handling online massive amounts of information. One of the biggest challenges is how to extract valuable information from these massive continuous data streams during single scanning. In a data stream context, data arrive continuously at high speed; therefore the algorithms developed to address this context must be efficient regarding memory and time management, and capable of detecting changes over time in the underlying distribution that generated the data. This work describes a novel method for the task of pattern classification over a continuous data stream based on an associative model. The proposed method is based on the Gamma classifier, which is inspired by the Alpha-Beta associative memories; both are supervised pattern recognition models. The proposed method is capable of handling the space and time constraints inherent in data stream scenarios. The Data Streaming Gamma classifier (DS-Gamma classifier) implements a sliding window approach to provide concept drift detection and a forgetting mechanism. In order to test the classifier, several experiments were performed using different data stream scenarios with real and synthetic data streams. The experimental results show that the method exhibits competitive performance when compared to other state-of-the-art algorithms.
Design of Robust Neural Network Classifiers
DEFF Research Database (Denmark)
Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads
1998-01-01
This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present...... a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization parameters. We...... suggest to adapt the outlier probability and regularisation parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential...
24 CFR 891.145 - Owner deposit (Minimum Capital Investment).
2010-04-01
... General Program Requirements § 891.145 Owner deposit (Minimum Capital Investment). As a Minimum Capital... Investment shall be one-half of one percent (0.5%) of the HUD-approved capital advance, not to exceed $25,000. ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Owner deposit (Minimum Capital...
Fast Exact Euclidean Distance (FEED): A new class of adaptable distance transforms
Schouten, Theo E.; van den Broek, Egon
2014-01-01
A new unique class of foldable distance transforms of digital images (DT) is introduced, baptized: Fast Exact Euclidean Distance (FEED) transforms. FEED class algorithms calculate the DT starting directly from the definition or rather its inverse. The principle of FEED class algorithms is
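The target output of any exact Euclidean distance transform can be stated in a few lines. Below is the naive O(pixels × objects) reference version; FEED reaches the same exact result far faster by letting each object pixel "feed" its distance outward, but the brute force makes the definition concrete:

```python
import math

def exact_edt(grid):
    """Brute-force exact Euclidean distance transform: for every pixel,
    the Euclidean distance to the nearest object pixel (value 1)."""
    objects = [(r, c) for r, row in enumerate(grid)
               for c, v in enumerate(row) if v == 1]
    return [[min(math.hypot(r - orow, c - ocol) for orow, ocol in objects)
             for c in range(len(grid[0]))]
            for r in range(len(grid))]

grid = [[1, 0, 0],
        [0, 0, 0],
        [0, 0, 1]]
dt = exact_edt(grid)
print(dt[0][0], round(dt[1][1], 3), dt[0][2])   # 0.0 1.414 2.0
```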
Distance covariance for stochastic processes
DEFF Research Database (Denmark)
Matsui, Muneya; Mikosch, Thomas Valentin; Samorodnitsky, Gennady
2017-01-01
The distance covariance of two random vectors is a measure of their dependence. The empirical distance covariance and correlation can be used as statistical tools for testing whether two random vectors are independent. We propose an analog of the distance covariance for two stochastic processes...
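The empirical distance covariance mentioned above has a compact recipe (Székely–Rizzo style, for scalar samples): double-centre each pairwise-distance matrix, then take the square root of the mean elementwise product. A minimal sketch:

```python
import math

def distance_covariance(x, y):
    """Empirical distance covariance of two equal-length scalar samples."""
    n = len(x)
    def centred(v):
        d = [[abs(v[i] - v[j]) for j in range(n)] for i in range(n)]
        rmean = [sum(row) / n for row in d]        # row means (= col means)
        gmean = sum(rmean) / n                     # grand mean
        return [[d[i][j] - rmean[i] - rmean[j] + gmean for j in range(n)]
                for i in range(n)]
    a, b = centred(x), centred(y)
    return math.sqrt(sum(a[i][j] * b[i][j]
                         for i in range(n) for j in range(n)) / n ** 2)

x = [1.0, 2.0, 3.0, 4.0]
print(distance_covariance(x, x) > 0.0)   # a sample co-varies with itself
print(distance_covariance(x, [7.0] * 4)) # a constant carries no dependence: 0.0
```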
Moeglin, Pierre; Vidal, Martine
2015-01-01
The purpose of this review, spanning over 12 years of publication of "Distances et Médiations des Savoirs" ("DMS"), formerly "Distance et Savoirs" ("DS") (2003-2014), is guided by the question of why and how French-speaking researchers addressed the issues of time, workload and costs in distance learning, and…
Minimum Wages and the Distribution of Family Incomes
Dube, Arindrajit
2017-01-01
Using the March Current Population Survey data from 1984 to 2013, I provide a comprehensive evaluation of how minimum wage policies influence the distribution of family incomes. I find robust evidence that higher minimum wages shift down the cumulative distribution of family incomes at the bottom, reducing the share of non-elderly individuals with incomes below 50, 75, 100, and 125 percent of the federal poverty threshold. The long run (3 or more years) minimum wage elasticity of the non-elde...
Combining multiple classifiers for age classification
CSIR Research Space (South Africa)
Van Heerden, C
2009-11-01
Full Text Available The authors compare several different classifier combination methods on a single task, namely speaker age classification. This task is well suited to combination strategies, since significantly different feature classes are employed. Support vector...
7 CFR 1610.5 - Minimum Bank loan.
2010-01-01
... 7 Agriculture 11 2010-01-01 2010-01-01 false Minimum Bank loan. 1610.5 Section 1610.5 Agriculture Regulations of the Department of Agriculture (Continued) RURAL TELEPHONE BANK, DEPARTMENT OF AGRICULTURE LOAN POLICIES § 1610.5 Minimum Bank loan. A Bank loan will not be made unless the applicant qualifies for a Bank...
Minimum Wage Effects in the Longer Run
Neumark, David; Nizalova, Olena
2007-01-01
Exposure to minimum wages at young ages could lead to adverse longer-run effects via decreased labor market experience and tenure, and diminished education and training, while beneficial longer-run effects could arise if minimum wages increase skill acquisition. Evidence suggests that as individuals reach their late 20s, they earn less the longer…
Maximum margin classifier working in a set of strings.
Koyano, Hitoshi; Hayashida, Morihiro; Akutsu, Tatsuya
2016-03-01
Numbers and numerical vectors account for a large portion of data. However, recently, the amount of string data generated has increased dramatically. Consequently, classifying string data is a common problem in many fields. The most widely used approach to this problem is to convert strings into numerical vectors using string kernels and subsequently apply a support vector machine that works in a numerical vector space. However, this non-one-to-one conversion involves a loss of information and makes it impossible to evaluate, using probability theory, the generalization error of a learning machine, considering that the given data to train and test the machine are strings generated according to probability laws. In this study, we approach this classification problem by constructing a classifier that works in a set of strings. To evaluate the generalization error of such a classifier theoretically, probability theory for strings is required. Therefore, we first extend a limit theorem for a consensus sequence of strings demonstrated by one of the authors and co-workers in a previous study. Using the obtained result, we then demonstrate that our learning machine classifies strings in an asymptotically optimal manner. Furthermore, we demonstrate the usefulness of our machine in practical data analysis by applying it to predicting protein-protein interactions using amino acid sequences and classifying RNAs by the secondary structure using nucleotide sequences.
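The lossy string-to-vector conversion the abstract argues against can be illustrated with the simplest string kernel, the k-spectrum kernel: map each string to its k-mer counts and take a dot product (a standard construction, not the authors' in-string-space classifier):

```python
from collections import Counter

def spectrum(s, k=2):
    """k-mer spectrum map: count every length-k substring of s.
    Distinct strings can map to identical spectra -- the information loss
    that motivates classifying directly in the set of strings."""
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

def spectrum_kernel(s, t, k=2):
    """Spectrum string kernel: dot product of the two k-mer count vectors."""
    fs, ft = spectrum(s, k), spectrum(t, k)
    return sum(n * ft[w] for w, n in fs.items())

print(spectrum_kernel("ACGT", "ACGA"))   # shares 'AC' and 'CG' -> 2
```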
29 CFR 783.43 - Computation of seaman's minimum wage.
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Computation of seaman's minimum wage. 783.43 Section 783.43...'s minimum wage. Section 6(b) requires, under paragraph (2) of the subsection, that an employee...'s minimum wage requirements by reason of the 1961 Amendments (see §§ 783.23 and 783.26). Although...
12 CFR 931.3 - Minimum investment in capital stock.
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Minimum investment in capital stock. 931.3... CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL STOCK § 931.3 Minimum investment in capital stock. (a) A Bank shall require each member to maintain a minimum investment in the capital stock of the Bank, both...
Current Directional Protection of Series Compensated Line Using Intelligent Classifier
Directory of Open Access Journals (Sweden)
M. Mollanezhad Heydarabadi
2016-12-01
Full Text Available The current inversion condition leads to incorrect operation of current-based directional relays in power systems with series compensation devices. This paper suggests applying an intelligent system to fault direction classification. A new current directional protection scheme based on an intelligent classifier is proposed for the series compensated line. The proposed scheme uses only half a cycle of pre-fault and post-fault current samples at the relay location to feed the classifier. Numerous forward and backward fault simulations under different system conditions on a transmission line with a fixed series capacitor are carried out using PSCAD/EMTDC software. The applicability of the decision tree (DT), probabilistic neural network (PNN) and support vector machine (SVM) is investigated using simulated data under different system conditions. The performance comparison of the classifiers indicates that the SVM is the most suitable classifier for fault direction discrimination. Backward faults can be accurately distinguished from forward faults even under current inversion, without requiring detection of the current inversion condition.
Obscenity detection using haar-like features and Gentle Adaboost classifier.
Mustafa, Rashed; Min, Yang; Zhu, Dingju
2014-01-01
Large exposure of skin area in an image is considered obscene. This fact alone may flag many false images containing skin-like objects, and may miss images that have only partially exposed skin but exposed erotogenic human body parts. This paper presents a novel method for detecting nipples in pornographic image contents. The nipple is treated as an erotogenic organ for identifying pornographic content in images. In this research, a Gentle AdaBoost (GAB) Haar-cascade classifier and Haar-like features were used to ensure detection accuracy. A skin filter applied prior to detection made the system more robust. The experiments showed that, considering accuracy, the haar-cascade classifier performs well, but to satisfy detection time, the train-cascade classifier is suitable. To validate the results, we used 1198 positive samples containing nipple objects and 1995 negative images. The detection rates for the haar-cascade and train-cascade classifiers are 0.9875 and 0.8429, respectively. The detection time is 0.162 seconds for the haar-cascade and 0.127 seconds for the train-cascade classifier.
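What makes Haar-cascade detection fast is that every Haar-like feature is a difference of rectangle sums, each computed in constant time from an integral image. A self-contained sketch of that primitive (toy pixel values invented for illustration; the actual GAB cascade stacks thousands of such features):

```python
def integral_image(img):
    """Summed-area table with a zero border: ii[r][c] holds the sum of img
    over rows < r and cols < c, so any rectangle sum costs four lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        for c in range(w):
            ii[r + 1][c + 1] = (img[r][c] + ii[r][c + 1]
                                + ii[r + 1][c] - ii[r][c])
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of the h x w rectangle with top-left corner (r, c)."""
    return ii[r + h][c + w] - ii[r][c + w] - ii[r + h][c] + ii[r][c]

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: left-half sum minus right-half sum,
    responding to vertical edges."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)

img = [[0, 0, 9, 9],
       [0, 0, 9, 9]]   # toy vertical-edge patch
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 2, 4))   # 0 - 36 = -36
```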
Critical Points in Distance Learning System
Directory of Open Access Journals (Sweden)
Airina Savickaitė
2013-08-01
Purpose – This article presents the results of a distance learning system analysis, i.e. the critical elements of the distance learning system. These critical points form part of a process model of interactivity/community in the online distance education environment. Most importantly, the critical points are associated with the distance learning participants. Design/methodology/approach – Comparative review of articles and analysis of a distance learning module. Findings – A modern person is a lifelong learner, and distance learning is a way to be one. The focus on the learner and on feedback is the most important aspect of a distance learning system. Attention should also be paid to the lecturer's appropriate knowledge and ability to convey information. Adapting the distance system is the way to improve learners' learning outcomes. Research limitations/implications – Different learning disciplines and learning methods may have different critical points. Practical implications – The results of the analysis could be important both for lecturers and for students who study distance education systems. There are recognizable critical points which may deteriorate the quality of learning. Originality/value – The study sought to develop distance learning systems and applications in order to improve the quality of knowledge. Keywords: distance learning, process model, critical points. Research type: review of literature and general overview.
Green, Adam E; Kraemer, David J M; Fugelsang, Jonathan A; Gray, Jeremy R; Dunbar, Kevin N
2010-01-01
Solving problems often requires seeing new connections between concepts or events that seemed unrelated at first. Innovative solutions of this kind depend on analogical reasoning, a relational reasoning process that involves mapping similarities between concepts. Brain-based evidence has implicated the frontal pole of the brain as important for analogical mapping. Separately, cognitive research has identified semantic distance as a key characteristic of the kind of analogical mapping that can support innovation (i.e., identifying similarities across greater semantic distance reveals connections that support more innovative solutions and models). However, the neural substrates of semantically distant analogical mapping are not well understood. Here, we used functional magnetic resonance imaging (fMRI) to measure brain activity during an analogical reasoning task, in which we parametrically varied the semantic distance between the items in the analogies. Semantic distance was derived quantitatively from latent semantic analysis. Across 23 participants, activity in an a priori region of interest (ROI) in left frontopolar cortex covaried parametrically with increasing semantic distance, even after removing effects of task difficulty. This ROI was centered on a functional peak that we previously associated with analogical mapping. To our knowledge, these data represent a first empirical characterization of how the brain mediates semantically distant analogical mapping.
Minimum-Cost Reachability for Priced Timed Automata
DEFF Research Database (Denmark)
Behrmann, Gerd; Fehnker, Ansgar; Hune, Thomas Seidelin
2001-01-01
This paper introduces the model of linearly priced timed automata as an extension of timed automata, with prices on both transitions and locations. For this model we consider the minimum-cost reachability problem: given a linearly priced timed automaton and a target state, determine the minimum cost of executions from the initial state to the target state. This problem generalizes the minimum-time reachability problem for ordinary timed automata. We prove decidability of this problem by offering an algorithmic solution, which is based on a combination of branch-and-bound techniques and a new notion of priced regions. The latter allows symbolic representation and manipulation of reachable states together with the cost of reaching them.
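The decidability proof relies on symbolic priced regions, which is beyond a short sketch. For intuition only: once a priced automaton is abstracted to a finite graph with non-negative edge costs (location prices folded into edge costs via dwell times), minimum-cost reachability reduces to Dijkstra's algorithm, a simple instance of the branch-and-bound idea. The graph encoding below is an illustrative assumption, not the paper's construction.

```python
import heapq

def min_cost_reachability(edges, start, target):
    """Least-cost path in a finite priced transition graph.
    edges: dict mapping state -> list of (next_state, cost >= 0) pairs.
    Returns the minimum cost, or None if the target is unreachable."""
    best = {start: 0}
    pq = [(0, start)]
    while pq:
        cost, state = heapq.heappop(pq)
        if state == target:
            return cost
        if cost > best.get(state, float("inf")):
            continue  # stale queue entry, a cheaper path was already found
        for nxt, c in edges.get(state, []):
            new_cost = cost + c
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(pq, (new_cost, nxt))
    return None
```

The real algorithm must handle the uncountable clock valuations symbolically, which is exactly what priced regions provide.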
Newell, Felicia D; Williams, Patricia L; Watt, Cynthia G
2014-05-09
This paper aims to assess the affordability of a nutritious diet for households earning minimum wage in Nova Scotia (NS) from 2002 to 2012 using an economic simulation that includes food costing and secondary data. The cost of the National Nutritious Food Basket (NNFB) was assessed with a stratified, random sample of grocery stores in NS during six time periods: 2002, 2004/2005, 2007, 2008, 2010 and 2012. The NNFB's cost was factored into affordability scenarios for three different household types relying on minimum wage earnings: a household of four; a lone mother with three children; and a lone man. Essential monthly living expenses were deducted from monthly net incomes using methods that were standardized from 2002 to 2012 to determine whether adequate funds remained to purchase a basic nutritious diet across the six time periods. A 79% increase to the minimum wage in NS has resulted in a decrease in the potential deficit faced by each household scenario in the period examined. However, the household of four and the lone mother with three children would still face monthly deficits ($44.89 and $496.77, respectively, in 2012) if they were to purchase a nutritiously sufficient diet. As a social determinant of health, risk of food insecurity is a critical public health issue for low wage earners. While it is essential to increase the minimum wage in the short term, adequately addressing income adequacy in NS and elsewhere requires a shift in thinking from a focus on minimum wage towards more comprehensive policies ensuring an adequate livable income for everyone.
Employment Effects of Minimum and Subminimum Wages. Recent Evidence.
Neumark, David
Using a specially constructed panel data set on state minimum wage laws and labor market conditions, Neumark and Wascher (1992) presented evidence that countered the claim that minimum wages could be raised with no cost to employment. They concluded that estimates indicating that minimum wages reduced employment on the order of 1-2 percent for a…
Minimum Wage Effects on Educational Enrollments in New Zealand
Pacheco, Gail A.; Cruickshank, Amy A.
2007-01-01
This paper empirically examines the impact of minimum wages on educational enrollments in New Zealand. A significant reform to the youth minimum wage since 2000 has resulted in some age groups undergoing a 91% rise in their real minimum wage over the last 10 years. Three panel least squares multivariate models are estimated from a national sample…
Equivalence of massive propagator distance and mathematical distance on graphs
International Nuclear Information System (INIS)
Filk, T.
1992-01-01
It is shown in this paper that the assignment of distance according to the massive propagator method and according to the mathematical definition (length of the minimal path) on arbitrary graphs with a bound on the degree leads to equivalent large-scale properties of the graph. In particular, the internal scaling dimension is the same for both definitions. This result holds for any fixed, non-vanishing mass, so that a truly inequivalent definition of distance requires the limit m → 0.
Zero forcing parameters and minimum rank problems
Barioli, F.; Barrett, W.; Fallat, S.M.; Hall, H.T.; Hogben, L.; Shader, B.L.; Driessche, van den P.; Holst, van der H.
2010-01-01
The zero forcing number Z(G), which is the minimum number of vertices in a zero forcing set of a graph G, is used to study the maximum nullity/minimum rank of the family of symmetric matrices described by G. It is shown that for a connected graph of order at least two, no vertex is in every zero forcing set.
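The definitions above can be made concrete with a small brute-force sketch (illustrative only, exponential in the graph order): repeatedly apply the color-change rule, a colored vertex with exactly one uncolored neighbor forces that neighbor, and search for the smallest initial set that forces the whole graph.

```python
from itertools import combinations

def forces_all(graph, colored):
    """Apply the color-change rule until it stabilizes; return True
    if the initial colored set forces every vertex of the graph."""
    colored = set(colored)
    changed = True
    while changed:
        changed = False
        for v in list(colored):
            uncolored = [u for u in graph[v] if u not in colored]
            if len(uncolored) == 1:  # v forces its unique uncolored neighbor
                colored.add(uncolored[0])
                changed = True
    return len(colored) == len(graph)

def zero_forcing_number(graph):
    """Z(G): smallest size of a zero forcing set.
    graph: dict mapping vertex -> list of neighbors."""
    vertices = list(graph)
    for k in range(1, len(vertices) + 1):
        for subset in combinations(vertices, k):
            if forces_all(graph, subset):
                return k
    return len(vertices)
```

For example, a path has Z = 1 (an endpoint forces along the path) while the complete graph K_n has Z = n - 1, consistent with the known bounds.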
Minimum bias measurement at 13 TeV
Orlando, Nicola; The ATLAS collaboration
2017-01-01
The modelling of Minimum Bias (MB) events is a crucial ingredient in learning about the description of soft QCD processes and in simulating the LHC environment with many concurrent pp interactions (pile-up). We summarise the ATLAS minimum bias measurements with proton-proton collisions at a 13 TeV centre-of-mass energy at the Large Hadron Collider.
Language distance and tree reconstruction
International Nuclear Information System (INIS)
Petroni, Filippo; Serva, Maurizio
2008-01-01
Languages evolve over time according to a process in which reproduction, mutation and extinction are all possible. This is very similar to haploid evolution for asexual organisms and for the mitochondrial DNA of complex ones. Exploiting this similarity, it is possible, in principle, to verify hypotheses concerning the relationship among languages and to reconstruct their family tree. The key point is the definition of the distances among pairs of languages in analogy with the genetic distances among pairs of organisms. Distances can be evaluated by comparing grammar and/or vocabulary, but while it is difficult, if not impossible, to quantify grammar distance, it is possible to measure a distance from vocabulary differences. The method used by glottochronology computes distances from the percentage of shared 'cognates', which are words with a common historical origin. The weak point of this method is that subjective judgment plays a significant role. Here we define the distance of two languages by considering a renormalized edit distance among words with the same meaning and averaging over the two hundred words contained in a Swadesh list. In our approach the vocabulary of a language is the analogue of DNA for organisms. The advantage is that we avoid subjectivity and, furthermore, reproducibility of results is guaranteed. We apply our method to the Indo-European and the Austronesian groups, considering, in both cases, fifty different languages. The two trees obtained are, in many respects, similar to those found by glottochronologists, with some important differences as regards the positions of a few languages. In order to support these different results we separately analyze the structure of the distances of these languages with respect to all the others.
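The distance measure described, an edit distance renormalized by word length and averaged over aligned meaning lists, can be sketched directly. The normalization by the longer word's length follows the description above; the toy word lists in the usage example are stand-ins for real Swadesh lists.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def normalized_distance(a, b):
    """Renormalized edit distance: divide by the longer word's length,
    so the result lies in [0, 1]."""
    return edit_distance(a, b) / max(len(a), len(b))

def language_distance(list_a, list_b):
    """Average normalized distance over aligned meaning lists
    (in practice, the two hundred entries of a Swadesh list)."""
    pairs = list(zip(list_a, list_b))
    return sum(normalized_distance(a, b) for a, b in pairs) / len(pairs)
```

For instance, English "hand" vs. Spanish "mano" gives a normalized distance of 0.5, and identical vocabularies give a language distance of 0, exactly the reproducible, judgment-free behavior the abstract emphasizes.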
Comprehensive long distance and real-time pipeline monitoring system based on fiber optic sensing
Energy Technology Data Exchange (ETDEWEB)
Nikles, Marc; Ravet, Fabien; Briffod, Fabien [Omnisens S.A., Morges (Switzerland)
2009-07-01
An increasing number of pipelines are constructed in remote regions affected by harsh environmental conditions. These pipeline routes often cross mountain areas characterized by unstable ground, where soil texture changes between winter and summer increase the probability of hazards. Due to the long distances to be monitored and the linear nature of pipelines, distributed fiber optic sensing techniques offer significant advantages and the capability to detect and localize pipeline disturbances with great precision. Furthermore, pipeline owner/operators lay fiber optic cable parallel to transmission pipelines for telecommunication purposes, so monitoring capabilities can be added to the communication system at minimum additional cost. The Brillouin-based Omnisens DITEST monitoring system has been used in several long distance pipeline projects. The technique is capable of measuring strain and temperature over hundreds of kilometers with meter-scale spatial resolution. Dedicated fiber optic cables have been developed for continuous strain and temperature monitoring, and their deployment along the pipeline has enabled permanent and continuous detection of ground movement, intrusion, and leaks. This paper presents a description of the fiber optic Brillouin-based DITEST sensing technique and its measurement performance and limits, while addressing future perspectives for pipeline monitoring. (author)
Minimum airflow reset of single-duct VAV terminal boxes
Cho, Young-Hum
Single duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve the level of thermal comfort and indoor air quality (IAQ) while at the same time lower overall energy costs. In principle, this minimum rate should be calculated according to the minimum ventilation requirement based on ASHRAE standard 62.1 and maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may result in occupant discomfort and energy waste. If the minimum rate of airflow is set too high, the AHUs will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the algorithm of the advanced VAV terminal box controller without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in confirmed studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort levels, and reducing the overall rate of energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points. Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and
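The abstract does not give the reset algorithm itself. As background, a common static calculation for a terminal box minimum set point takes the larger of the ventilation minimum (per ASHRAE Standard 62.1) and the airflow needed for the design heating load, using the IP-units sensible-heat relation Q = 1.08 · CFM · ΔT. The sketch below shows only that baseline calculation; the function names and example numbers are illustrative assumptions, not the study's reset logic.

```python
def heating_airflow_cfm(heating_load_btuh, supply_temp_f, room_temp_f):
    """Airflow (CFM) needed to meet a design heating load, from the
    sensible-heat relation Q = 1.08 * CFM * dT (IP units)."""
    dt = supply_temp_f - room_temp_f
    if dt <= 0:
        raise ValueError("supply air must be warmer than the room in heating")
    return heating_load_btuh / (1.08 * dt)

def minimum_airflow_setpoint(ventilation_cfm, heating_load_btuh,
                             supply_temp_f, room_temp_f):
    """Static minimum set point: never below the ventilation requirement,
    and sufficient for the design heating load."""
    return max(ventilation_cfm,
               heating_airflow_cfm(heating_load_btuh, supply_temp_f,
                                   room_temp_f))
```

The study's contribution is to reset this minimum dynamically instead of fixing it at the static value, which is where the comfort and energy savings come from.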
DEFF Research Database (Denmark)
Dalsgaard, Tage; Stewart, Frank; De Brabandere, Loreto
Oxygen concentrations were consistently below our detection limit of 90 nM for a distance of > 2000 km in the oxygen minimum zone (OMZ) along the coasts of Chile and Peru. In most cases, anammox and denitrification were only detected when in situ oxygen concentrations were below detection … differently to oxygen. When normalized to a housekeeping gene (rpoB), the expression of 4 out of 9 N-cycle genes changed with increasing oxygen concentration: the expression of ammonium monooxygenase (amoC) was stimulated, whereas expression of nitrite reductase (nirS) and nitric oxide reductase (nor) …
A systems biology-based classifier for hepatocellular carcinoma diagnosis.
Directory of Open Access Journals (Sweden)
Yanqiong Zhang
AIM: Diagnosis of hepatocellular carcinoma (HCC) at an early stage is crucial to the application of curative treatments, which are the only hope for increasing the life expectancy of patients. Recently, several large-scale studies have shed light on this problem through analysis of gene expression profiles to identify markers correlated with HCC progression. However, those marker sets shared few genes in common and were poorly validated using independent data. Therefore, we developed a systems biology-based classifier that combines differential gene expression with topological features of human protein interaction networks to enhance the ability to diagnose HCC. METHODS AND RESULTS: In the Oncomine platform, genes differentially expressed in HCC tissues relative to their corresponding normal tissues were filtered by a corrected Q value cut-off and Concept filters. The identified genes common to different microarray datasets were chosen as candidate markers. Their networks were then analyzed by GeneGO Meta-Core software and the hub genes were chosen. An HCC diagnostic classifier was then constructed by partial least squares modeling based on the microarray gene expression data of the hub genes. Validation of diagnostic performance showed that this classifier had high predictive accuracy (85.88-92.71%) and area under the ROC curve (approaching 1.0), and that the network topological features integrated into the classifier contribute greatly to its predictive performance. Furthermore, it has been demonstrated that this modeling strategy is applicable not only to HCC but also to other cancers. CONCLUSION: Our analysis suggests that a systems biology-based classifier combining differential gene expression and topological features of the human protein interaction network may enhance the diagnostic performance of an HCC classifier.
A Customizable Text Classifier for Text Mining
Directory of Open Access Journals (Sweden)
Yun-liang Zhang
2007-12-01
Text mining deals with complex and unstructured texts, and usually requires a particular collection of texts specific to one or more domains. We have developed a customizable text classifier that lets users mine such a collection automatically. It derives from the sentence category of the HNC theory and its corresponding techniques. It can start from a few texts, and it can adjust itself automatically or be adjusted by the user. The user can also control the number of domains chosen and decide the standard by which to choose the texts, based on demand and the abundance of materials. The performance of the classifier varies with the user's choices.
A survey of decision tree classifier methodology
Safavian, S. R.; Landgrebe, David
1991-01-01
Decision tree classifiers (DTCs) are used successfully in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition. Perhaps the most important feature of DTCs is their capability to break down a complex decision-making process into a collection of simpler decisions, thus providing a solution that is often easier to interpret. A survey of current methods for DTC design and of the various existing issues is presented. After considering potential advantages of DTCs over single-stage classifiers, the subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed.
Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers
Directory of Open Access Journals (Sweden)
M. Al-Rousan
2005-08-01
Building an accurate automatic sign language recognition system is of great importance for facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of the Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers: they do not require iterative training, and they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL recognition system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant: we achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
Interactive Distance Learning in Connecticut.
Pietras, Jesse John; Murphy, Robert J.
This paper provides an overview of distance learning activities in Connecticut and addresses the feasibility of such activities. Distance education programs have evolved from the one dimensional electronic mail systems to the use of sophisticated digital fiber networks. The Middlesex Distance Learning Consortium has developed a long-range plan to…
32 CFR 2004.21 - Protection of Classified Information [201(e)].
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Protection of Classified Information [201(e... PROGRAM DIRECTIVE NO. 1 Operations § 2004.21 Protection of Classified Information [201(e)]. Procedures for... coordination process. ...
Minimum wall pressure coefficient of orifice plate energy dissipater
Directory of Open Access Journals (Sweden)
Wan-zheng Ai
2015-01-01
Orifice plate energy dissipaters have been successfully used in large-scale hydropower projects due to their simple structure, convenient construction procedure, and high energy dissipation ratio. The minimum wall pressure coefficient of an orifice plate can indirectly reflect its cavitation characteristics: the lower the minimum wall pressure coefficient, the better the orifice plate resists cavitation damage. It is therefore important to study this coefficient. In this study, the coefficient and related parameters, such as the contraction ratio, defined as the ratio of the orifice plate diameter to the flood-discharging tunnel diameter; the relative thickness, defined as the ratio of the orifice plate thickness to the tunnel diameter; and the Reynolds number of the flow through the orifice plate, were theoretically analyzed, and their relationships were obtained through physical model experiments. It can be concluded that the minimum wall pressure coefficient is mainly governed by the contraction ratio and the relative thickness: the lower the contraction ratio and relative thickness, the larger the minimum wall pressure coefficient. The effect of the Reynolds number on the minimum wall pressure coefficient can be neglected when it is larger than 10^5. An empirical expression for calculating the minimum wall pressure coefficient is presented.
Yang, Hao; Cai, Bo-ning; Wang, Xiao-shen; Cong, Xiao-hu; Xu, Wei; Wang, Jin-yuan; Yang, Jun; Xu, Shou-ping; Ju, Zhong-jian; Ma, Lin
2016-02-23
BACKGROUND This study investigated and quantified the dosimetric impact of the distance from the tumor to the spinal cord and fractionation schemes for patients who received stereotactic body radiation therapy (SBRT) and hypofractionated simultaneous integrated boost (HF-SIB). MATERIAL AND METHODS Six modified planning target volumes (PTVs) for 5 patients with spinal metastases were created by artificial uniform extension in the region of PTV adjacent spinal cord with a specified minimum tumor to cord distance (0-5 mm). The prescription dose (biologic equivalent dose, BED) was 70 Gy in different fractionation schemes (1, 3, 5, and 10 fractions). For PTV V100, Dmin, D98, D95, and D1, spinal cord dose, conformity index (CI), V30 were measured and compared. RESULTS PTV-to-cord distance influenced PTV V100, Dmin, D98, and D95, and fractionation schemes influenced Dmin and D98, with a significant difference. Distances of ≥2 mm, ≥1 mm, ≥1 mm, and ≥0 mm from PTV to spinal cord meet dose requirements in 1, 3, 5, and 10 fractionations, respectively. Spinal cord dose, CI, and V30 were not impacted by PTV-to-cord distance and fractionation schemes. CONCLUSIONS Target volume coverage, Dmin, D98, and D95 were directly correlated with distance from the spinal cord for spine SBRT and HF-SIB. Based on our study, ≥2 mm, ≥1 mm, ≥1 mm, and ≥0 mm distance from PTV to spinal cord meets dose requirements in 1, 3, 5 and 10 fractionations, respectively.
Motivation in Distance Learning
Directory of Open Access Journals (Sweden)
Daniela Brečko
1996-12-01
It is estimated that motivation is one of the most important psychological functions, making it possible for people to learn even in conditions that do not meet their needs. In distance learning, a form of autonomous learning, motivation is of utmost importance: when adopting this method of learning, individuals have to stimulate themselves and take learning decisions on their own. These specific characteristics of distance learning should be taken into account, and all the different factors maintaining the motivation of participants in distance learning should be addressed. Motivation in distance learning can be stimulated with well-designed learning materials, clear instructions and guidelines, efficient feedback, personal contact between tutors and participants, stimulating learning letters, telephone calls, encouraging letters, and a positive relationship between tutor and participant.
Tang, Xiao-Bin; Meng, Jia; Wang, Peng; Cao, Ye; Huang, Xi; Wen, Liang-Sheng; Chen, Da
2016-04-01
A small-sized UAV (NH-UAV) airborne system with two gamma spectrometers (LaBr3 detector and HPGe detector) was developed to monitor activity concentration in serious nuclear accidents, such as the Fukushima nuclear accident. The efficiency calibration and determination of minimum detectable activity concentration (MDAC) of the specific system were studied by MC simulations at different flight altitudes, different horizontal distances from the detection position to the source term center and different source term sizes. Both air and ground radiation were considered in the models. The results obtained may provide instructive suggestions for in-situ radioactivity measurements of NH-UAV. Copyright © 2016 Elsevier Ltd. All rights reserved.
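The abstract does not state how the MDAC is computed. The conventional starting point for such a figure is the Currie detection limit, L_D = 2.71 + 4.65√B counts, converted to activity using the detector efficiency, counting time, and gamma emission probability. The sketch below is that textbook formula only, not the NH-UAV system's actual algorithm, and the example numbers are invented.

```python
import math

def minimum_detectable_activity(background_counts, efficiency,
                                live_time_s, gamma_yield=1.0):
    """Currie (1968) detection limit: L_D = 2.71 + 4.65 * sqrt(B) counts,
    converted to a minimum detectable activity in Bq via the absolute
    detection efficiency, counting live time, and gamma yield."""
    ld_counts = 2.71 + 4.65 * math.sqrt(background_counts)
    return ld_counts / (efficiency * live_time_s * gamma_yield)
```

In an airborne setup like the one described, the efficiency term itself depends on altitude and source geometry, which is precisely what the paper's MC simulations determine.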
Machine learning enhanced optical distance sensor
Amin, M. Junaid; Riza, N. A.
2018-01-01
Presented for the first time is a machine learning enhanced optical distance sensor. The sensor is based on our previously demonstrated distance measurement technique that uses an Electronically Controlled Variable Focus Lens (ECVFL) with a laser source to illuminate a target plane with a controlled optical beam spot. This spot, with varying spot sizes, is viewed by an off-axis camera, and the spot size data is processed to compute the distance. In particular, this paper proposes and demonstrates the use of a regularized polynomial regression based supervised machine learning algorithm to enhance the accuracy of the operational sensor. The algorithm uses the acquired features and corresponding labels, which are the actual target distance values, to train a machine learning model. The model is trained over a 1000 mm (1 m) experimental target distance range. Using the machine learning algorithm reduces the training set and testing set distance measurement errors. Applications for the proposed sensor include industrial distance sensing scenarios, where target-material-specific training models can be generated to realize distance measurements with low (<1%) measurement error.
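A regularized polynomial regression of the kind described can be sketched with NumPy: build a Vandermonde design matrix over the spot-size feature and solve the ridge normal equations. This is an illustrative reconstruction under assumed degree and regularization strength, not the authors' trained model.

```python
import numpy as np

def fit_poly_ridge(spot_sizes, distances, degree=3, lam=1e-3):
    """Regularized (ridge) polynomial regression: returns coefficients c
    solving (X^T X + lam*I) c = X^T y for a Vandermonde design matrix X."""
    X = np.vander(np.asarray(spot_sizes, dtype=float), degree + 1)
    y = np.asarray(distances, dtype=float)
    A = X.T @ X + lam * np.eye(degree + 1)
    return np.linalg.solve(A, X.T @ y)

def predict_distance(coeffs, spot_size):
    """Evaluate the fitted polynomial at a new spot-size measurement."""
    return float(np.polyval(coeffs, spot_size))
```

In the actual sensor the labels would be the measured target distances over the 1000 mm range, and the regularization keeps the high-degree terms from overfitting camera noise.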
Directory of Open Access Journals (Sweden)
Mamta Bansal
2017-05-01
For high-end security applications such as surveillance, a robust system capable of verifying a person under unconstrained conditions is needed. This paper presents an ear-based verification system using a new entropy function that changes not only the information gain function but also the information source values. This entropy function displays peculiar characteristics, such as splitting into two modes. Two types of entropy features, the Effective Gaussian Information source value and the Effective Exponential Information source value functions, are derived using the entropy function. To classify the entropy features, we devised a refined scores (RS) method that refines the scores generated using the Euclidean distance. The experimental results vindicate the superiority of the proposed method over those in the literature.
76 FR 15368 - Minimum Security Devices and Procedures
2011-03-21
... DEPARTMENT OF THE TREASURY Office of Thrift Supervision Minimum Security Devices and Procedures... concerning the following information collection. Title of Proposal: Minimum Security Devices and Procedures... security devices and procedures to discourage robberies, burglaries, and larcenies, and to assist in the...
A novel statistical method for classifying habitat generalists and specialists
DEFF Research Database (Denmark)
Chazdon, Robin L; Chao, Anne; Colwell, Robert K
2011-01-01
…: (1) generalist; (2) habitat A specialist; (3) habitat B specialist; and (4) too rare to classify with confidence. We illustrate our multinomial classification method using two contrasting data sets: (1) bird abundance in woodland and heath habitats in southeastern Australia and (2) tree abundance in second-growth (SG) and old-growth (OG) rain forests in the Caribbean lowlands of northeastern Costa Rica. We evaluate the multinomial model in detail for the tree data set. Our results for birds were highly concordant with a previous nonstatistical classification, but our method classified a higher fraction (57.7%) of bird species with statistical confidence. Based on a conservative specialization threshold and adjustment for multiple comparisons, 64.4% of tree species in the full sample were too rare to classify with confidence. Among the species classified, OG specialists constituted the largest…
Energy Technology Data Exchange (ETDEWEB)
Baba, Hiroshi; Saito, Tadashi; Takahashi, Naruto [Osaka Univ., Suita (Japan)] [and others]
1997-09-01
Fission product kinetic energies were measured by the double-energy method for thermal-neutron fission of {sup 235,233}U and proton-induced fission of {sup 238}U at 15.8-MeV excitation. From the obtained energy-mass correlation data, the kinetic-energy distribution was constructed for each mass bin to evaluate the first moment of the kinetic energy for a given fragment mass. The resulting kinetic energy was then converted to the effective distance between the charge centers at the moment of scission. The effective distances deduced for proton-induced fission were concluded to fall into two constant values, one for the asymmetric and the other for the symmetric mode, irrespective of mass, although an additional component was extracted in the asymmetric mass region. This indicates that the fission takes place via two well-defined saddles, followed by random neck rupture. On the contrary, the effective distances obtained for thermal-neutron-induced fission turned out to lie along the contour line at the same level as the equilibrium deformation in the two-dimensional potential map. This strongly suggests that it is essentially a barrier-penetrating type of fission rather than over-barrier fission. (author). 73 refs.
International Nuclear Information System (INIS)
Baba, Hiroshi; Saito, Tadashi; Takahashi, Naruto
1997-01-01
Fission product kinetic energies were measured by the double-energy method for thermal-neutron fission of 235,233 U and proton-induced fission of 238 U at 15.8-MeV excitation. From the obtained energy-mass correlation data, the kinetic-energy distribution was constructed for each mass bin to evaluate the first moment of the kinetic energy for a given fragment mass. The resulting kinetic energy was then converted to the effective distance between the charge centers at the moment of scission. The effective distances deduced for proton-induced fission were concluded to fall into two constant values, one for the asymmetric and the other for the symmetric mode, irrespective of mass, although an additional component was extracted in the asymmetric mass region. This indicates that the fission takes place via two well-defined saddles, followed by random neck rupture. On the contrary, the effective distances obtained for thermal-neutron-induced fission turned out to lie along the contour line at the same level as the equilibrium deformation in the two-dimensional potential map. This strongly suggests that it is essentially a barrier-penetrating type of fission rather than over-barrier fission. (author). 73 refs.
Ensemble of classifiers based network intrusion detection system performance bound
CSIR Research Space (South Africa)
Mkuzangwe, Nenekazi NP
2017-11-01
Full Text Available This paper provides a performance bound of a network intrusion detection system (NIDS) that uses an ensemble of classifiers. Currently researchers rely on implementing the ensemble of classifiers based NIDS before they can determine the performance...
Directory of Open Access Journals (Sweden)
Nazelie Kassabian
2014-06-01
Full Text Available Railway signaling is a safety system that has evolved over the past two centuries towards autonomous functionality. Recently, great effort has been devoted in this field to the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lowering railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance between two points of a given environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough ratios of correlation distance to Reference Station (RS) separation distance, the LMMSE brings a considerable advantage in estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
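The estimation step described above can be sketched as follows, assuming the exponential (Gauss-Markov) spatial correlation model C(d) = sigma2 * exp(-d / d_corr) and the standard zero-mean LMMSE form x_hat = C_xx (C_xx + sigma_n^2 I)^-1 y. The parameter values and one-dimensional station positions are illustrative, not from the paper.

```python
import math

def gm_cov(positions, sigma2, d_corr):
    """Gauss-Markov spatial covariance: C_ij = sigma2 * exp(-|p_i - p_j| / d_corr)."""
    n = len(positions)
    return [[sigma2 * math.exp(-abs(positions[i] - positions[j]) / d_corr)
             for j in range(n)] for i in range(n)]

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lmmse_estimate(y, positions, sigma2, d_corr, noise_var):
    """LMMSE estimate of the true DC vector from noisy measurements y:
    x_hat = C_xx (C_xx + noise_var * I)^-1 y."""
    n = len(y)
    Cxx = gm_cov(positions, sigma2, d_corr)
    Cyy = [[Cxx[i][j] + (noise_var if i == j else 0.0) for j in range(n)]
           for i in range(n)]
    w = solve(Cyy, y)  # w = Cyy^-1 y
    return [sum(Cxx[i][j] * w[j] for j in range(n)) for i in range(n)]
```

With vanishing measurement noise the estimate reproduces the measurements; with non-trivial noise the filter shrinks the vector toward the prior mean, which is the smoothing behavior the paper evaluates against the correlation-distance to RS-separation ratio.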
3 CFR - Implementation of the Executive Order, “Classified National Security Information”
2010-01-01
... 29, 2009 Implementation of the Executive Order, “Classified National Security Information” Memorandum..., “Classified National Security Information” (the “order”), which substantially advances my goals for reforming... or handles classified information shall provide the Director of the Information Security Oversight...
Bayes classifiers for imbalanced traffic accidents datasets.
Mujalli, Randa Oqab; López, Griselda; Garach, Laura
2016-03-01
Traffic accident data sets are usually imbalanced: the number of instances classified under the killed or severe injuries class (minority) is much lower than the number classified under the slight injuries class (majority). This, however, poses a challenging problem for classification algorithms and may yield a model that covers the slight-injury instances well while frequently misclassifying the killed or severe injury instances. Based on traffic accident data collected on urban and suburban roads in Jordan over three years (2009-2011), three different data balancing techniques were used: under-sampling, which removes instances of the majority class; oversampling, which creates new instances of the minority class; and a mixed technique that combines both. In addition, different Bayes classifiers were compared on the imbalanced and balanced data sets (Averaged One-Dependence Estimators, Weightily Averaged One-Dependence Estimators, and Bayesian networks) in order to identify factors that affect the severity of an accident. The results indicated that using the balanced data sets, especially those created using oversampling techniques, with Bayesian networks improved the classification of a traffic accident according to its severity and reduced the misclassification of killed and severe injury instances. On the other hand, the following variables were found to contribute to the occurrence of a killed casualty or a severe injury in a traffic accident: number of vehicles involved, accident pattern, number of directions, accident type, lighting, surface condition, and speed limit. This work, to the knowledge of the authors, is the first that aims at analyzing historical data records for traffic accidents occurring in Jordan and the first to apply balancing techniques to analyze injury severity of traffic accidents. Copyright © 2015 Elsevier Ltd. All rights reserved.
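A minimal sketch of the random-oversampling idea mentioned above (duplicating minority-class instances at random until all classes are equally represented). This is the generic technique, not the authors' exact procedure, and the seeded RNG is an assumption for reproducibility.

```python
import random

def random_oversample(X, y, seed=0):
    """Balance a data set by duplicating samples of under-represented classes
    until every class matches the size of the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    X_bal, y_bal = [], []
    for label, rows in by_class.items():
        extra = [rng.choice(rows) for _ in range(target - len(rows))]
        for row in rows + extra:
            X_bal.append(row)
            y_bal.append(label)
    return X_bal, y_bal
```

Under-sampling is the mirror image (randomly discarding majority-class rows down to the minority count), and the mixed technique combines both.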
76 FR 30243 - Minimum Security Devices and Procedures
2011-05-24
... DEPARTMENT OF THE TREASURY Office of Thrift Supervision Minimum Security Devices and Procedures.... Title of Proposal: Minimum Security Devices and Procedures. OMB Number: 1550-0062. Form Number: N/A... respect to the installation, maintenance, and operation of security devices and procedures to discourage...
Metcalfe, Kristian; Vaughan, Gregory; Vaz, Sandrine; Smith, Robert J
2015-12-01
Marine protected areas (MPAs) are the cornerstone of most marine conservation strategies, but the effectiveness of each one partly depends on its size and distance to other MPAs in a network. Despite this, current recommendations on ideal MPA size and spacing vary widely, and data are lacking on how these constraints might influence the overall spatial characteristics, socio-economic impacts, and connectivity of the resultant MPA networks. To address this problem, we tested the impact of applying different MPA size constraints in English waters. We used the Marxan spatial prioritization software to identify a network of MPAs that met conservation feature targets, whilst minimizing impacts on fisheries; modified the Marxan outputs with the MinPatch software to ensure each MPA met a minimum size; and used existing data on the dispersal distances of a range of species found in English waters to investigate the likely impacts of such spatial constraints on the region's biodiversity. Increasing MPA size had little effect on total network area or the location of priority areas, but as MPA size increased, fishing opportunity cost to stakeholders increased. In addition, as MPA size increased, the number of closely connected sets of MPAs in networks and the average distance between neighboring MPAs decreased, which consequently increased the proportion of the planning region that was isolated from all MPAs. These results suggest networks containing large MPAs would be more viable for the majority of the region's species that have small dispersal distances, but dispersal between MPA sets and spill-over of individuals into unprotected areas would be reduced. These findings highlight the importance of testing the impact of applying different MPA size constraints because there are clear trade-offs that result from the interaction of size, number, and distribution of MPAs in a network. © 2015 Society for Conservation Biology.
Cobalt-60 total body irradiation dosimetry at 220 cm source-axis distance
International Nuclear Information System (INIS)
Glasgow, G.P.; Mill, W.B.
1980-01-01
Adults with acute leukemia are treated with cyclophosphamide and total body irradiation (TBI) followed by autologous marrow transplants. For TBI, patients seated in a stand angled 45° above the floor are treated for about 2 hours at 220 cm source-axis distance (SAD) with sequential right and left lateral 87 cm x 87 cm fields to a 900 rad mid-pelvic dose at about 8 rad/min using a 5000 Ci cobalt unit. Maximum (lateral) to minimum (mid-plane) dose ratios are: hips, 1.15; shoulders, 1.30; and head, 1.05 (the head is shielded by a compensator filter). Organ doses are: small intestine, liver and kidneys, 1100 rad; lung, 1100 to 1200 rad; and heart, 1300 rad. Verification dosimetry reveals the prescribed dose is delivered to within ±5%. Details of the dosimetry of this treatment are presented
Does increasing the minimum wage reduce poverty in developing countries?
Gindling, T. H.
2014-01-01
Do minimum wage policies reduce poverty in developing countries? It depends. Raising the minimum wage could increase or decrease poverty, depending on labor market characteristics. Minimum wages target formal sector workers—a minority of workers in most developing countries—many of whom do not live in poor households. Whether raising minimum wages reduces poverty depends not only on whether formal sector workers lose jobs as a result, but also on whether low-wage workers live in poor households...
Long-distance calls in Neotropical primates
Directory of Open Access Journals (Sweden)
Oliveira Dilmar A.G.
2004-01-01
Full Text Available Long-distance calls are widespread among primates. Several studies concentrate on such calls in just one or in few species, while few studies have treated more general trends within the order. The common features that usually characterize these vocalizations are related to long-distance propagation of sounds. The proposed functions of primate long-distance calls can be divided into extragroup and intragroup ones. Extragroup functions relate to mate defense, mate attraction or resource defense, while intragroup functions involve group coordination or alarm. Among Neotropical primates, several species perform long-distance calls that seem more related to intragroup coordination, markedly in atelines. Callitrichids present long-distance calls that are employed both in intragroup coordination and intergroup contests or spacing. Examples of extragroup directed long-distance calls are the duets of titi monkeys and the roars and barks of howler monkeys. Considerable complexity and gradation exist in the long-distance call repertoires of some Neotropical primates, and female long-distance calls are probably more important in non-duetting species than usually thought. Future research must focus on larger trends in the evolution of primate long-distance calls, including the phylogeny of calling repertoires and the relationships between form and function in these signals.
Robustness of Distance-to-Default
DEFF Research Database (Denmark)
Jessen, Cathrine; Lando, David
2013-01-01
Distance-to-default is a remarkably robust measure for ranking firms according to their risk of default. The ranking seems to work despite the fact that the Merton model from which the measure is derived produces default probabilities that are far too small when applied to real data. We use simulations to investigate the robustness of the distance-to-default measure to different model specifications. Overall we find distance-to-default to be robust to a number of deviations from the simple Merton model that involve different asset value dynamics and different default triggering mechanisms. A notable exception is a model with stochastic volatility of assets. In this case both the ranking of firms and the estimated default probabilities using distance-to-default perform significantly worse. We therefore propose a volatility adjustment of the distance-to-default measure, that significantly...
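For reference, distance-to-default is conventionally computed from the Merton model as the number of asset-volatility standard deviations separating the expected log asset value from the default point; a sketch under that standard formulation (the numeric inputs below are illustrative):

```python
import math

def distance_to_default(asset_value, debt, mu, sigma, horizon=1.0):
    """Merton-style distance-to-default:
    DD = (ln(V/D) + (mu - sigma^2/2) * T) / (sigma * sqrt(T))."""
    return (math.log(asset_value / debt)
            + (mu - 0.5 * sigma ** 2) * horizon) / (sigma * math.sqrt(horizon))

def default_probability(dd):
    """Implied default probability under the lognormal asset model: N(-DD),
    using the standard normal CDF expressed via erfc."""
    return 0.5 * math.erfc(dd / math.sqrt(2))
```

The abstract's point is that these implied probabilities are far too small on real data, yet the ranking induced by DD remains informative.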
Implications of physical symmetries in adaptive image classifiers
DEFF Research Database (Denmark)
Sams, Thomas; Hansen, Jonas Lundbek
2000-01-01
It is demonstrated that rotational invariance and reflection symmetry of image classifiers lead to a reduction in the number of free parameters in the classifier. When used in adaptive detectors, e.g. neural networks, this may be used to decrease the number of training samples necessary to learn a given classification task, or to improve generalization of the neural network. Notably, the symmetrization of the detector does not compromise the ability to distinguish objects that break the symmetry. (C) 2000 Elsevier Science Ltd. All rights reserved.
Distance Education in Technological Age
Directory of Open Access Journals (Sweden)
R .C. SHARMA
2005-04-01
Full Text Available Distance Education in Technological Age, Romesh Verma (Editor), New Delhi: Anmol Publications, 2005, ISBN 81-261-2210-2, pp. 419. Reviewed by R C SHARMA, Regional Director, Indira Gandhi National Open University, INDIA. The advancements in information and communication technologies have brought significant changes in the way open and distance learning is provided to learners. The impact of such changes is quite visible in both developed and developing countries. Switching over to online modes, joining hands with private initiatives, and making a presence in foreign waters are some of the hallmarks of open and distance education (ODE) institutions in developing countries. The compilation of twenty-six essays on themes applicable to ODE has resulted in the book Distance Education in Technological Age. These essays follow a progressive style of narration, starting with the conceptual framework of distance education and how distance education emerged on the global scene and in India, and then going on to discuss the emergence of online distance education and research aspects in ODE. The initial four chapters provide a detailed account of the historical development and growth of distance education in India, and of the State Open University and National Open University models in India. Student support services are pivotal to any distance education effort, and much of its success depends on how well the support services are provided. These are discussed from national and international perspectives. The issues of collaborative learning, learning on demand, lifelong learning, the learning-unlearning-relearning model, and strategic alliances have also been given due space by the authors. An assortment of technologies, such as communication technology, domestic technology, information technology, mass media and entertainment technology, media technology and educational technology, gives an idea of how these technologies are being adopted in the open universities. The study
The SME gauge sector with minimum length
Belich, H.; Louzada, H. L. C.
2017-12-01
We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory.
The SME gauge sector with minimum length
Energy Technology Data Exchange (ETDEWEB)
Belich, H.; Louzada, H.L.C. [Universidade Federal do Espirito Santo, Departamento de Fisica e Quimica, Vitoria, ES (Brazil)
2017-12-15
We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory. (orig.)
Legal Consequences for a Bank When the Minimum Tier One Capital Requirement Is Not Met (Akibat Hukum Bagi Bank Bila Kewajiban Modal Inti Minimum Tidak Terpenuhi)
Directory of Open Access Journals (Sweden)
Indira Retno Aryatie
2012-02-01
Full Text Available As an implementation of the Indonesian Banking Architecture policy, the government issued Bank Indonesia Regulation No. 9/16/PBI/2007 on Minimum Tier One Capital, which increases the minimum capital to 100 billion rupiah. This paper discusses the legal complications a bank will face should it fail to fulfil the minimum ratio. As a follow-up to the Indonesian Banking Architecture policy, the government issued Bank Indonesia Regulation 9/16/PBI/2007 on the Minimum Tier One Capital of Banks, which raises the minimum tier-one capital of commercial banks to 100 billion rupiah. This paper discusses the legal consequences a bank will experience if that minimum capital obligation is not fulfilled.
36 CFR 1256.70 - What controls access to national security-classified information?
2010-07-01
... national security-classified information? 1256.70 Section 1256.70 Parks, Forests, and Public Property... HISTORICAL MATERIALS Access to Materials Containing National Security-Classified Information § 1256.70 What controls access to national security-classified information? (a) The declassification of and public access...
The impact of minimum wage adjustments on Vietnamese wage inequality
DEFF Research Database (Denmark)
Hansen, Henrik; Rand, John; Torm, Nina
Using Vietnamese Labour Force Survey data we analyse the impact of minimum wage changes on wage inequality. Minimum wages serve to reduce local wage inequality in the formal sectors by decreasing the gap between the median wages and the lower tail of the local wage distributions. In contrast, local wage inequality is increased in the informal sectors. Overall, the minimum wages decrease national wage inequality. Our estimates indicate a decrease in the wage distribution Gini coefficient of about 2 percentage points and an increase in the 10/50 wage ratio of 5-7 percentage points caused by the adjustment of the minimum wages from 2011 to 2012 that levelled the minimum wage across economic sectors.
Improved Iris Recognition through Fusion of Hamming Distance and Fragile Bit Distance.
Hollingsworth, Karen P; Bowyer, Kevin W; Flynn, Patrick J
2011-12-01
The most common iris biometric algorithm represents the texture of an iris using a binary iris code. Not all bits in an iris code are equally consistent. A bit is deemed fragile if its value changes across iris codes created from different images of the same iris. Previous research has shown that iris recognition performance can be improved by masking these fragile bits. Rather than ignoring fragile bits completely, we consider what beneficial information can be obtained from the fragile bits. We find that the locations of fragile bits tend to be consistent across different iris codes of the same eye. We present a metric, called the fragile bit distance, which quantitatively measures the coincidence of the fragile bit patterns in two iris codes. We find that score fusion of fragile bit distance and Hamming distance works better for recognition than Hamming distance alone. To our knowledge, this is the first and only work to use the coincidence of fragile bit locations to improve the accuracy of matches.
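A minimal sketch of the two distances and their score fusion. The masked fractional Hamming distance follows the standard iris-code convention; the fragile-bit distance shown here (fraction of positions where the two fragility patterns disagree) and the linear fusion weight are simplified stand-ins, not necessarily the paper's exact definitions.

```python
def hamming_distance(code_a, code_b, fragile_a, fragile_b):
    """Fractional Hamming distance over bits that are consistent (non-fragile)
    in both iris codes."""
    usable = [i for i in range(len(code_a))
              if not fragile_a[i] and not fragile_b[i]]
    if not usable:
        return 0.5  # no usable bits: neutral score (an assumption)
    return sum(code_a[i] != code_b[i] for i in usable) / len(usable)

def fragile_bit_distance(fragile_a, fragile_b):
    """Sketch of a fragile-bit distance: fraction of positions where the two
    fragility patterns disagree. Low values mean the fragile-bit locations
    coincide, which the paper found is typical of same-eye comparisons."""
    n = len(fragile_a)
    return sum(fragile_a[i] != fragile_b[i] for i in range(n)) / n

def fused_score(hd, fbd, alpha=0.5):
    """Simple weighted score fusion; alpha is an assumed tuning parameter."""
    return alpha * hd + (1 - alpha) * fbd
```

Lower fused scores indicate a better match, so a same-eye pair with agreeing bits and coinciding fragility patterns scores near zero.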
Risk control and the minimum significant risk
International Nuclear Information System (INIS)
Seiler, F.A.; Alvarez, J.L.
1996-01-01
Risk management implies that the risk manager can, by his actions, exercise at least a modicum of control over the risk in question. In the terminology of control theory, a management action is a control signal imposed as feedback on the system to bring about a desired change in the state of the system. In the terminology of risk management, an action is taken to bring a predicted risk to lower values. Even if it is assumed that the management action taken is 100% effective and that the projected risk reduction is infinitely well known, there is a lower limit to the desired effects that can be achieved. It is based on the fact that all risks, such as the incidence of cancer, exhibit a degree of variability due to a number of extraneous factors such as age at exposure, sex, location, and some lifestyle parameters such as smoking or the consumption of alcohol. If the control signal is much smaller than the variability of the risk, the signal is lost in the noise and control is lost. This defines a minimum controllable risk based on the variability of the risk over the population considered. This quantity is the counterpart of the minimum significant risk which is defined by the uncertainties of the risk model. Both the minimum controllable risk and the minimum significant risk are evaluated for radiation carcinogenesis and are shown to be of the same order of magnitude. For a realistic management action, the assumptions of perfectly effective action and perfect model prediction made above have to be dropped, resulting in an effective minimum controllable risk which is determined by both risk limits. Any action below that effective limit is futile, but it is also unethical due to the ethical requirement of doing more good than harm. Finally, some implications of the effective minimum controllable risk on the use of the ALARA principle and on the evaluation of remedial action goals are presented
Partial distance correlation with methods for dissimilarities
Székely, Gábor J.; Rizzo, Maria L.
2014-01-01
Distance covariance and distance correlation are scalar coefficients that characterize independence of random vectors in arbitrary dimension. Properties, extensions, and applications of distance correlation have been discussed in the recent literature, but the problem of defining the partial distance correlation has remained an open question of considerable interest. The problem of partial distance correlation is more complex than partial correlation, partly because the squared distance covariance...
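For context, the non-partial sample distance correlation is computed by double-centering the pairwise distance matrices, following the standard Székely-Rizzo V-statistic definition; a sketch for one-dimensional samples:

```python
import math

def _centered_dist(xs):
    """Double-centered pairwise distance matrix:
    A_ij = d_ij - rowmean_i - rowmean_j + grandmean."""
    n = len(xs)
    d = [[abs(xs[i] - xs[j]) for j in range(n)] for i in range(n)]
    row = [sum(d[i]) / n for i in range(n)]
    grand = sum(row) / n
    return [[d[i][j] - row[i] - row[j] + grand for j in range(n)]
            for i in range(n)]

def distance_correlation(xs, ys):
    """Sample distance correlation dCor(x, y) = dCov / sqrt(dVar_x * dVar_y)."""
    A, B = _centered_dist(xs), _centered_dist(ys)
    n = len(xs)
    dcov2 = max(0.0, sum(A[i][j] * B[i][j]
                         for i in range(n) for j in range(n)) / n ** 2)
    dvarx = sum(a * a for r in A for a in r) / n ** 2
    dvary = sum(b * b for r in B for b in r) / n ** 2
    if dvarx * dvary == 0:
        return 0.0  # a constant sample carries no dependence information
    return (dcov2 / math.sqrt(dvarx * dvary)) ** 0.5
```

The partial version that the paper develops additionally projects out the effect of a third variable in the Hilbert space of centered distance matrices.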
Making Distance Visible: Assembling Nearness in an Online Distance Learning Programme
Directory of Open Access Journals (Sweden)
Jen Ross
2013-09-01
Full Text Available Online distance learners are in a particularly complex relationship with the educational institutions they belong to (Bayne, Gallagher, & Lamb, 2012). For part-time distance students, arrivals and departures can be multiple and invisible as students take courses, take breaks, move into independent study phases of a programme, find work or family commitments overtaking their study time, experience personal upheaval or loss, and find alignments between their professional and academic work. These comings and goings indicate a fluid and temporary assemblage of engagement, not a permanent or stable state of either “presence” or “distance”. This paper draws from interview data from the “New Geographies of Learning” project, a research project exploring the notions of space and institution for the MSc in Digital Education at the University of Edinburgh, and from literature on distance learning and online community. The concept of nearness emerged from the data analyzing the comings and goings of students on a fully online programme. It proposes that “nearness” to a distance programme is a temporary assemblage of people, circumstances, and technologies. This state is difficult to establish and impossible to sustain in an uninterrupted way over the long period of time that many are engaged in part-time study. Interruptions and subsequent returns should therefore be seen as normal in the practice of studying as an online distance learner, and teachers and institutions should work to help students develop resilience in negotiating various states of nearness. Four strategies for increasing this resilience are proposed: recognising nearness as effortful; identifying affinities; valuing perspective shifts; and designing openings.
42 CFR 84.197 - Respirator containers; minimum requirements.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Respirator containers; minimum requirements. 84.197... Cartridge Respirators § 84.197 Respirator containers; minimum requirements. Respirators shall be equipped with a substantial, durable container bearing markings which show the applicant's name, the type and...
42 CFR 84.174 - Respirator containers; minimum requirements.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Respirator containers; minimum requirements. 84.174... Air-Purifying Particulate Respirators § 84.174 Respirator containers; minimum requirements. (a) Except..., durable container bearing markings which show the applicant's name, the type of respirator it contains...
Optimization of short amino acid sequences classifier
Barcz, Aleksy; Szymański, Zbigniew
This article describes processing methods used for short amino acid sequence classification. The data processed are 9-symbol string representations of amino acid sequences, divided into 49 data sets, each containing samples labeled as reacting or not reacting with a given enzyme. The goal of the classification is to determine, for a single enzyme, whether an amino acid sequence would react with it or not. Each data set is processed separately. Feature selection is performed to reduce the number of dimensions for each data set. The method used for feature selection consists of two phases. During the first phase, significant positions are selected using Classification and Regression Trees. Afterwards, symbols appearing at the selected positions are substituted with numeric values of amino acid properties taken from the AAindex database. In the second phase the new set of features is reduced using a correlation-based ranking formula and Gram-Schmidt orthogonalization. Finally, the preprocessed data are used for training LS-SVM classifiers. SPDE, an evolutionary algorithm, is used to obtain optimal hyperparameters for the LS-SVM classifier, such as the error penalty parameter C and kernel-specific hyperparameters. A simple score penalty is used to adapt the SPDE algorithm to the task of selecting classifiers with the best performance-measure values.
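The correlation-based ranking phase mentioned above might be sketched as follows, using plain Pearson correlation between each numeric feature and the class label; the article's exact ranking formula and the subsequent Gram-Schmidt orthogonalization step are not reproduced here.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0  # a constant feature or label carries no signal
    return cov / math.sqrt(vx * vy)

def rank_features(X, y):
    """Rank feature indices by |correlation| with the class label, best first.
    X is a list of numeric feature rows (e.g. AAindex property values
    substituted at the selected sequence positions)."""
    n_feat = len(X[0])
    scores = [(abs(pearson([row[j] for row in X], y)), j)
              for j in range(n_feat)]
    return [j for _, j in sorted(scores, reverse=True)]
```

The top-ranked features would then be orthogonalized and fed to the LS-SVM training stage.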