Gönen, Mehmet
2014-03-01
Coupled training of dimensionality reduction and classification has been proposed previously to improve prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find the intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance in terms of Hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and in semi-supervised learning tasks.
Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua
2014-01-01
Dimensionality reduction is an important step in ultrasound-image-based computer-aided diagnosis (CAD) for breast cancer. A recently proposed ℓ2,1-regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance on noise-corrupted data, and therefore has the potential to reduce the dimensionality of ultrasound image features. However, in clinical practice, collecting labeled instances is usually expensive and time-consuming, while unlabeled or undetermined instances are relatively easy to acquire; semi-supervised learning is therefore well suited to clinical CAD. Iterated Laplacian regularization (Iter-LR) is a new regularization method that has been shown to outperform traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to improve the classification accuracy of texture-feature-based breast ultrasound CAD, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm and apply it to reduce the feature dimensionality of ultrasound images for breast CAD. We compared Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all the other algorithms.
Flexible manifold embedding: a framework for semi-supervised and unsupervised dimension reduction.
Nie, Feiping; Xu, Dong; Tsang, Ivor Wai-Hung; Zhang, Changshui
2010-07-01
We propose a unified manifold learning framework for semi-supervised and unsupervised dimension reduction by employing a simple but effective linear regression function to map new data points. For semi-supervised dimension reduction, we aim to find the optimal prediction labels F for all the training samples X, the linear regression function h(X), and the regression residue F(0) = F - h(X) simultaneously. Our new objective function integrates two terms related to label fitness and manifold smoothness, as well as a flexible penalty term defined on the residue F(0). Our semi-supervised learning framework, referred to as flexible manifold embedding (FME), can effectively utilize label information from labeled data as well as the manifold structure of both labeled and unlabeled data. By modeling the mismatch between h(X) and F, we show that FME relaxes the hard linear constraint F = h(X) in manifold regularization (MR), allowing it to better cope with data sampled from a nonlinear manifold. In addition, we propose a simplified version (referred to as FME/U) for unsupervised dimension reduction. We also show that our proposed framework provides a unified view for explaining and understanding many semi-supervised, supervised, and unsupervised dimension reduction techniques. Comprehensive experiments on several benchmark databases demonstrate significant improvement over existing dimension reduction algorithms.
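The manifold-regularization baseline that FME relaxes can be illustrated with a minimal harmonic label-propagation sketch in pure Python. This is an illustration of the general idea (clamp labeled nodes, smooth scores over the graph), not the paper's FME objective; the graph and labels are invented for the example.

```python
def label_propagation(adj, seeds, iters=50):
    """Harmonic label propagation: clamp labeled nodes and repeatedly
    average each unlabeled node's score over its graph neighbours."""
    n = len(adj)
    f = [seeds.get(i, 0.0) for i in range(n)]
    for _ in range(iters):
        g = f[:]
        for i in range(n):
            if i in seeds:
                continue  # labeled nodes stay clamped to their given labels
            nbrs = [j for j in range(n) if adj[i][j]]
            g[i] = sum(f[j] for j in nbrs) / len(nbrs)
        f = g
    return f

# 5-node chain graph; node 0 is labeled 1.0, node 4 is labeled 0.0
chain = [[1 if abs(i - j) == 1 else 0 for j in range(5)] for i in range(5)]
scores = label_propagation(chain, {0: 1.0, 4: 0.0})
# scores interpolate linearly along the chain: ~[1.0, 0.75, 0.5, 0.25, 0.0]
```

On this chain the harmonic solution is exactly linear between the two clamped endpoints, which is the "manifold smoothness" behaviour the FME penalty term generalizes.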
Central subspace dimensionality reduction using covariance operators.
Kim, Minyoung; Pavlovic, Vladimir
2011-04-01
We consider the task of dimensionality reduction informed by real-valued multivariate labels. The problem is often treated as Dimensionality Reduction for Regression (DRR), whose goal is to find a low-dimensional representation, the central subspace, of the input data that preserves the statistical correlation with the targets. A class of DRR methods exploits the notion of inverse regression (IR) to discover central subspaces. Whereas most existing IR techniques rely on explicit output space slicing, we propose a novel method called the Covariance Operator Inverse Regression (COIR) that generalizes IR to nonlinear input/output spaces without explicit target slicing. COIR's unique properties make DRR applicable to problem domains with high-dimensional output data corrupted by potentially significant amounts of noise. Unlike recent kernel dimensionality reduction methods that employ iterative nonconvex optimization, COIR yields a closed-form solution. We also establish the link between COIR, other DRR techniques, and popular supervised dimensionality reduction methods, including canonical correlation analysis and linear discriminant analysis. We then extend COIR to semi-supervised settings where many of the input points lack their labels. We demonstrate the benefits of COIR on several important regression problems in both fully supervised and semi-supervised settings.
A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data
Directory of Open Access Journals (Sweden)
Hongchao Song
2017-01-01
Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance between observations and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples become similar and each sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble of k-nearest-neighbor-graph- (K-NNG-) based anomaly detectors. Benefiting from its nonlinear mapping ability, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset and represent the data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by combining all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves detection accuracy and reduces computational complexity.
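The k-NN scoring component of such detectors can be sketched in a few lines of pure Python: each point is scored by the mean distance to its k nearest neighbours, so isolated points stand out. This sketch covers only the distance-based detector, not the paper's DAE or ensemble stages, and the point set and k are illustrative.

```python
import math

def knn_anomaly_scores(data, k=2):
    """Score each point by the mean distance to its k nearest neighbours;
    isolated points receive large scores."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    scores = []
    for i, p in enumerate(data):
        ds = sorted(dist(p, q) for j, q in enumerate(data) if j != i)
        scores.append(sum(ds[:k]) / k)
    return scores

# four clustered points and one isolated point
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
scores = knn_anomaly_scores(points, k=2)
# the isolated point (5, 5) receives by far the largest score
```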
Cao, Peng; Liu, Xiaoli; Bao, Hang; Yang, Jinzhu; Zhao, Dazhe
2015-01-01
False-positive reduction (FPR) is a crucial step in computer-aided detection systems for the breast. The issues of imbalanced data distribution and the limited number of labeled samples complicate the classification procedure. To overcome these challenges, we propose oversampling and semi-supervised learning methods based on restricted Boltzmann machines (RBMs) to solve the classification of imbalanced data with few labeled samples. To evaluate the proposed method, we conducted a comprehensive performance study and compared its results with those of commonly used techniques. Experiments on the benchmark DDSM dataset demonstrate the effectiveness of the RBM-based oversampling and semi-supervised learning method in terms of geometric mean (G-mean) for false-positive reduction in breast CAD.
An alternative dimensional reduction prescription
International Nuclear Information System (INIS)
Edelstein, J.D.; Giambiagi, J.J.; Nunez, C.; Schaposnik, F.A.
1995-08-01
We propose an alternative dimensional reduction prescription which, with respect to Green functions, corresponds to dropping the extra spatial coordinate. From this, we construct the dimensionally reduced Lagrangians both for scalars and fermions, discussing bosonization and supersymmetry in the particular two-dimensional case. We argue that our proposal is in some situations more physical in the sense that it maintains the form of the interactions between particles, thus preserving the dynamics corresponding to the higher-dimensional space. (author). 12 refs
Fermion masses from dimensional reduction
International Nuclear Information System (INIS)
Kapetanakis, D.; Zoupanos, G.
1990-01-01
We consider the fermion masses in gauge theories obtained from ten dimensions through dimensional reduction on coset spaces. We calculate the general fermion mass matrix and we apply the mass formula in illustrative examples. (orig.)
Fermion masses from dimensional reduction
Energy Technology Data Exchange (ETDEWEB)
Kapetanakis, D. (National Research Centre for the Physical Sciences Democritos, Athens (Greece)); Zoupanos, G. (European Organization for Nuclear Research, Geneva (Switzerland))
1990-10-11
We consider the fermion masses in gauge theories obtained from ten dimensions through dimensional reduction on coset spaces. We calculate the general fermion mass matrix and we apply the mass formula in illustrative examples. (orig.).
Dimensional Reduction and Hadronic Processes
International Nuclear Information System (INIS)
Signer, Adrian; Stoeckinger, Dominik
2008-01-01
We consider the application of regularization by dimensional reduction to NLO corrections of hadronic processes. The general collinear singularity structure is discussed, the origin of the regularization-scheme dependence is identified and transition rules to other regularization schemes are derived.
Semisupervised kernel marginal Fisher analysis for face recognition.
Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun
2013-01-01
Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.
Dimensional reduction in quantum gravity
Energy Technology Data Exchange (ETDEWEB)
't Hooft, G. [Rijksuniversiteit Utrecht (Netherlands). Inst. voor Theoretische Fysica]
1994-12-31
The requirement that physical phenomena associated with gravitational collapse should be duly reconciled with the postulates of quantum mechanics implies that at a Planckian scale our world is not 3+1 dimensional. Rather, the observable degrees of freedom can best be described as if they were Boolean variables defined on a two-dimensional lattice, evolving with time. This observation, deduced from not much more than unitarity, entropy and counting arguments, implies severe restrictions on possible models of quantum gravity. Using cellular automata as an example it is argued that this dimensional reduction implies more constraints than the freedom we have in constructing models. This is the main reason why so far no completely consistent mathematical models of quantum black holes have been found. (author). 13 refs, 2 figs.
Dimensional reduction in anomaly mediation
International Nuclear Information System (INIS)
Boyda, Ed; Murayama, Hitoshi; Pierce, Aaron
2002-01-01
We offer a guide to dimensional reduction in theories with anomaly-mediated supersymmetry breaking. Evanescent operators proportional to ε arise in the bare Lagrangian when it is reduced from d=4 to d=4-2ε dimensions. In the course of a detailed diagrammatic calculation, we show that inclusion of these operators is crucial. The evanescent operators conspire to drive the supersymmetry-breaking parameters along anomaly-mediation trajectories across heavy particle thresholds, guaranteeing the ultraviolet insensitivity.
A trace ratio maximization approach to multiple kernel-based dimensionality reduction.
Jiang, Wenhao; Chung, Fu-lai
2014-01-01
Most dimensionality reduction techniques are based on a single metric or kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels, which are seen as different descriptions of the data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions, and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on a regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. The solutions for the proposed framework can be found via trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
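The trace-ratio iteration underlying such methods (alternate between updating the ratio λ = tr(WᵀAW)/tr(WᵀBW) and re-solving for W from A - λB) can be illustrated in the special case of diagonal matrices, where the eigenvectors are simply the coordinate axes. This diagonal simplification is an illustration only, not the MKL-TR algorithm itself; the input values are invented.

```python
def trace_ratio_diag(a, b, d, iters=20):
    """Trace-ratio iteration specialised to A = diag(a), B = diag(b):
    select d coordinates maximising sum(a_i) / sum(b_i). Each step sets
    lam = tr(W'AW)/tr(W'BW) for the current selection, then keeps the d
    largest scores a_i - lam * b_i (in the diagonal case the eigenvectors
    of A - lam*B are the coordinate axes)."""
    idx = list(range(d))  # arbitrary initial selection
    lam = 0.0
    for _ in range(iters):
        lam = sum(a[i] for i in idx) / sum(b[i] for i in idx)
        idx = sorted(range(len(a)), key=lambda i: a[i] - lam * b[i],
                     reverse=True)[:d]
    return sorted(idx), lam

# best 2-of-4 subset: coordinates 0 and 2 give (4 + 9) / (2 + 3) = 2.6
subset, ratio = trace_ratio_diag([4.0, 1.0, 9.0, 2.0],
                                 [2.0, 1.0, 3.0, 4.0], d=2)
```

The same fixed-point scheme carries over to full matrices, with the coordinate selection replaced by an eigendecomposition of A - λB.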
Reduction of infinite dimensional equations
Directory of Open Access Journals (Sweden)
Zhongding Li
2006-02-01
In this paper, we use the general Legendre transformation to show that the infinite-dimensional integrable equations can be reduced to a finite-dimensional integrable Hamiltonian system on an invariant set under the flow of the integrable equations. We then obtain the periodic or quasi-periodic solution of the equation. This generalizes the results of Lax and Novikov regarding the periodic or quasi-periodic solution of the KdV equation to the general case of isospectral Hamiltonian integrable equations. Finally, we discuss the AKNS hierarchy as a special example.
The dimensional reduction in a multi-dimensional cosmology
International Nuclear Information System (INIS)
Demianski, M.; Golda, Z.A.; Heller, M.; Szydlowski, M.
1986-01-01
Einstein's field equations are solved for the case of the eleven-dimensional vacuum spacetime which is the product R x Bianchi V x T^7, where T^7 is a seven-dimensional torus. Among all possible solutions, the authors identify those in which the macroscopic space expands and the microscopic space contracts to a finite size. The solutions with this property are 'typical' within the considered class. They implement the idea of a purely dynamical dimensional reduction. (author)
Coset space dimensional reduction of gauge theories
Energy Technology Data Exchange (ETDEWEB)
Kapetanakis, D. (Physik Dept., Technische Univ. Muenchen, Garching (Germany)); Zoupanos, G. (CERN, Geneva (Switzerland))
1992-10-01
We review the attempts to construct unified theories defined in higher dimensions which are dimensionally reduced over coset spaces. We employ the coset space dimensional reduction scheme, which permits the detailed study of the resulting four-dimensional gauge theories. In the context of this scheme we present the difficulties and the suggested ways out in the attempts to describe the observed interactions in a realistic way. (orig.).
Coset space dimensional reduction of gauge theories
International Nuclear Information System (INIS)
Kapetanakis, D.; Zoupanos, G.
1992-01-01
We review the attempts to construct unified theories defined in higher dimensions which are dimensionally reduced over coset spaces. We employ the coset space dimensional reduction scheme, which permits the detailed study of the resulting four-dimensional gauge theories. In the context of this scheme we present the difficulties and the suggested ways out in the attempts to describe the observed interactions in a realistic way. (orig.)
Robust Semi-Supervised Manifold Learning Algorithm for Classification
Directory of Open Access Journals (Sweden)
Mingxia Chen
2018-01-01
In recent years, manifold learning methods have been widely used in data classification to tackle the curse of dimensionality, since they can discover the potential intrinsic low-dimensional structures of high-dimensional data. Given partially labeled data, semi-supervised manifold learning algorithms have been proposed to predict the labels of the unlabeled points, taking label information into account. However, these semi-supervised manifold learning algorithms are not robust against noisy points, especially when the labeled data contain noise. In this paper, we propose a framework for robust semi-supervised manifold learning (RSSML) to address this problem. The noise levels of the labeled points are first predicted, and then a regularization term is constructed to reduce the impact of labeled points containing noise. A new robust semi-supervised optimization model is proposed by adding the regularization term to the traditional semi-supervised optimization model. Numerical experiments are given to show the improvement and efficiency of RSSML on noisy data sets.
Dimensional reduction of a generalized flux problem
International Nuclear Information System (INIS)
Moroz, A.
1992-01-01
In this paper, a generalized flux problem with Abelian and non-Abelian fluxes is considered. In the Abelian case we show that the generalized flux problem for tight-binding models of noninteracting electrons on either a 2n- or a (2n + 1)-dimensional lattice can always be reduced to an n-dimensional hopping problem. A residual freedom in this reduction enables one to identify equivalence classes of hopping Hamiltonians which have the same spectrum. In the non-Abelian case, the reduction is not possible in general unless the flux tensor factorizes into an Abelian one times an element of the corresponding algebra.
Multichannel transfer function with dimensionality reduction
Kim, Han Suk
2010-01-17
The design of transfer functions for volume rendering is a difficult task. This is particularly true for multi-channel data sets, where multiple data values exist for each voxel. In this paper, we propose a new method for transfer function design. Our new method provides a framework to combine multiple approaches and pushes the boundary of gradient-based transfer functions to multiple channels, while still keeping the dimensionality of transfer functions to a manageable level, i.e., a maximum of three dimensions, which can be displayed visually in a straightforward way. Our approach utilizes channel intensity, gradient, curvature and texture properties of each voxel. The high-dimensional data of the domain is reduced by applying recently developed nonlinear dimensionality reduction algorithms. In this paper, we used Isomap as well as a traditional algorithm, Principal Component Analysis (PCA). Our results show that these dimensionality reduction algorithms significantly improve the transfer function design process without compromising visualization accuracy. In this publication we report on the impact of the dimensionality reduction algorithms on transfer function design for confocal microscopy data.
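The PCA step mentioned above (projecting high-dimensional voxel features onto their directions of largest variance) can be sketched in pure Python via power iteration on the sample covariance matrix. This is a generic PCA sketch on invented 2-D data, not the paper's pipeline or its Isomap variant.

```python
def first_principal_component(data, iters=200):
    """Leading PCA direction via power iteration on the sample covariance."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # sample covariance C = X^T X / n on the centered data
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]  # renormalise each step
    return v

# points lying on the line y = x: the first component is (1, 1)/sqrt(2)
direction = first_principal_component([(0.0, 0.0), (1.0, 1.0),
                                       (2.0, 2.0), (3.0, 3.0)])
```

Further components would be obtained by deflating the covariance matrix and repeating; in practice an eigensolver replaces the hand-rolled iteration.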
Pole masses of quarks in dimensional reduction
International Nuclear Information System (INIS)
Avdeev, L.V.; Kalmykov, M.Yu.
1997-01-01
Pole masses of quarks in quantum chromodynamics are calculated to two-loop order in the framework of regularization by dimensional reduction. For the diagram with a light-quark loop, the non-Euclidean asymptotic expansion is constructed with the external momentum on the mass shell of a heavy quark.
General dimensional reduction of ten-dimensional supergravity and superstring
International Nuclear Information System (INIS)
Ferrara, S.; Porrati, M.
1986-01-01
Dimensional reductions of supergravity theories are shown to yield specific classes of four-dimensional no-scale models with N=4, 2 or 1 residual supersymmetry. The N=1 ''maximal'' supergravity Lagrangian, corresponding to the ''untwisted'' sector of orbifold compactification of superstrings, contains nine families and has a no-scale structure based on the Kaehler manifold [SU(3, 3+3n)/SU(3)xSU(3+3n)]x[SU(1, 1)/U(1)]. The quantum consistency of the resulting theories gives information on the non-Kaluza-Klein (string) ''twisted'' sector. (orig.)
Dimensionality reduction with unsupervised nearest neighbors
Kramer, Oliver
2013-01-01
This book is devoted to a novel approach for dimensionality reduction based on the famous nearest neighbor method that is a powerful classification and regression approach. It starts with an introduction to machine learning concepts and a real-world application from the energy domain. Then, unsupervised nearest neighbors (UNN) is introduced as efficient iterative method for dimensionality reduction. Various UNN models are developed step by step, reaching from a simple iterative strategy for discrete latent spaces to a stochastic kernel-based algorithm for learning submanifolds with independent parameterizations. Extensions that allow the embedding of incomplete and noisy patterns are introduced. Various optimization approaches are compared, from evolutionary to swarm-based heuristics. Experimental comparisons to related methodologies taking into account artificial test data sets and also real-world data demonstrate the behavior of UNN in practical scenarios. The book contains numerous color figures to illustr...
Dimensional reduction from entanglement in Minkowski space
International Nuclear Information System (INIS)
Brustein, Ram; Yarom, Amos
2005-01-01
Using a quantum field theoretic setting, we present evidence for dimensional reduction of any sub-volume of Minkowski space. First, we show that correlation functions of a class of operators restricted to a sub-volume of D-dimensional Minkowski space scale as its surface area. A simple example of such area scaling is provided by the energy fluctuations of a free massless quantum field in its vacuum state. This is reminiscent of area scaling of entanglement entropy but applies to quantum expectation values in a pure state, rather than to statistical averages over a mixed state. We then show, in a specific case, that fluctuations in the bulk have a lower-dimensional representation in terms of a boundary theory at high temperature. (author)
Dimensional reduction for D3-brane moduli
International Nuclear Information System (INIS)
Cownden, Brad; Frey, Andrew R.; Marsh, M.C. David; Underwood, Bret
2016-01-01
Warped string compactifications are central to many attempts to stabilize moduli and connect string theory with cosmology and particle phenomenology. We present a first-principles derivation of the low-energy 4D effective theory from dimensional reduction of a D3-brane in a warped Calabi-Yau compactification of type IIB string theory with imaginary self-dual 3-form flux, including effects of D3-brane motion beyond the probe approximation, and find the metric on the moduli space of brane positions, the universal volume modulus, and axions descending from the 4-form potential. As D3-branes may be considered as carrying either electric or magnetic charges for the self-dual 5-form field strength, we present calculations in both duality frames. Our results are consistent with, but extend significantly, earlier results on the low-energy effective theory arising from D3-branes in string compactifications.
Human semi-supervised learning.
Gibson, Bryan R; Rogers, Timothy T; Zhu, Xiaojin
2013-01-01
Most empirical work in human categorization has studied learning in either fully supervised or fully unsupervised scenarios. Most real-world learning scenarios, however, are semi-supervised: Learners receive a great deal of unlabeled information from the world, coupled with occasional experiences in which items are directly labeled by a knowledgeable source. A large body of work in machine learning has investigated how learning can exploit both labeled and unlabeled data provided to a learner. Using equivalences between models found in human categorization and machine learning research, we explain how these semi-supervised techniques can be applied to human learning. A series of experiments are described which show that semi-supervised learning models prove useful for explaining human behavior when exposed to both labeled and unlabeled data. We then discuss some machine learning models that do not have familiar human categorization counterparts. Finally, we discuss some challenges yet to be addressed in the use of semi-supervised models for modeling human categorization. Copyright © 2013 Cognitive Science Society, Inc.
Denoising and dimensionality reduction of genomic data
Capobianco, Enrico
2005-05-01
Genomics represents a challenging research field for many quantitative scientists, and recently a vast variety of statistical techniques and machine learning algorithms have been proposed and inspired by cross-disciplinary work with computational and systems biologists. In genomic applications, the researcher deals with noisy and complex high-dimensional feature spaces; a wealth of genes whose expression levels are experimentally measured, can often be observed for just a few time points, thus limiting the available samples. This unbalanced combination suggests that it might be hard for standard statistical inference techniques to come up with good general solutions, likewise for machine learning algorithms to avoid heavy computational work. Thus, one naturally turns to two major aspects of the problem: sparsity and intrinsic dimensionality. These two aspects are studied in this paper, where for both denoising and dimensionality reduction, a very efficient technique, i.e., Independent Component Analysis, is used. The numerical results are very promising, and lead to a very good quality of gene feature selection, due to the signal separation power enabled by the decomposition technique. We investigate how the use of replicates can improve these results, and deal with noise through a stabilization strategy which combines the estimated components and extracts the most informative biological information from them. Exploiting the inherent level of sparsity is a key issue in genetic regulatory networks, where the connectivity matrix needs to account for the real links among genes and discard many redundancies. Most experimental evidence suggests that real gene-gene connections represent indeed a subset of what is usually mapped onto either a huge gene vector or a typically dense and highly structured network. Inferring gene network connectivity from the expression levels represents a challenging inverse problem that is at present stimulating key research in biomedical
Semi-supervised consensus clustering for gene expression data analysis
Wang, Yunli; Pan, Youlian
2014-01-01
Background: Simple clustering methods such as hierarchical clustering and k-means are widely used for gene expression data analysis, but they are unable to deal with the noise and high dimensionality associated with microarray gene expression data. Consensus clustering appears to improve the robustness and quality of clustering results. Incorporating prior knowledge into the clustering process (semi-supervised clustering) has been shown to improve the consistency between the data partitioning and do...
Cosmological string solutions by dimensional reduction
International Nuclear Information System (INIS)
Behrndt, K.; Foerste, S.
1993-12-01
We obtain cosmological four-dimensional solutions of the low-energy effective string theory by reducing a five-dimensional black hole, and a black hole-de Sitter solution of Einstein gravity, down to four dimensions. The appearance of a cosmological constant in the five-dimensional Einstein-Hilbert action produces a special dilaton potential in the four-dimensional effective string action. Cosmological scenarios implemented by our solutions are discussed.
Discrete symmetries and coset space dimensional reduction
International Nuclear Information System (INIS)
Kapetanakis, D.; Zoupanos, G.
1989-01-01
We consider the discrete symmetries of all the six-dimensional coset spaces and we apply them to gauge theories defined in ten dimensions which are dimensionally reduced over these homogeneous spaces. Particular emphasis is given to the consequences of the discrete symmetries for the particle content as well as for the symmetry breaking a la Hosotani of the resulting four-dimensional theory. (orig.)
On dimensional reduction over coset spaces
International Nuclear Information System (INIS)
Kapetanakis, D.; Zoupanos, G.
1990-01-01
Gauge theories defined in higher dimensions can be dimensionally reduced over coset spaces giving definite predictions for the resulting four-dimensional theory. We present the most interesting features of these theories as well as an attempt to construct a model with realistic low energy behaviour within this framework. (author)
Coupled Semi-Supervised Learning
2010-05-01
Additionally, specify the expected category of each relation argument to enable type-checking. Subsystem components and the KI can benefit from methods that...confirm that our coupled semi-supervised learning approaches can scale to hundreds of predicates and can benefit from using a diverse set of...
A Semisupervised Cascade Classification Algorithm
Directory of Open Access Journals (Sweden)
Stamatis Karlos
2016-01-01
Classification is one of the most important tasks of data mining techniques and has been adopted by several modern applications. The shortage of labeled data in the majority of these applications has shifted the interest towards semisupervised methods. Under such schemes, the use of collected unlabeled data combined with a clearly smaller set of labeled examples leads to similar or even better classification accuracy than supervised algorithms, which use labeled examples exclusively during the training phase. A novel approach for improving semisupervised classification using the Cascade Classifier technique is presented in this paper. The main characteristic of the Cascade Classifier strategy is the use of a base classifier to enlarge the feature space by adding either the predicted class or the class probability distribution of the initial data. The classifier at the second level is supplied with the new dataset and extracts the decision for each instance. In this work, a self-trained NB∇C4.5 classifier algorithm is presented, which combines the characteristics of Naive Bayes as a base classifier with the speed of C4.5 for the final classification. We performed an in-depth comparison with other well-known semisupervised classification methods on standard benchmark datasets and conclude that the presented technique has better accuracy in most cases.
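The self-training idea used above (let a base classifier pseudo-label its most confident unlabeled instances and fold them back into the training set) can be sketched with a 1-NN base classifier in pure Python. This is a generic self-training sketch, not the paper's NB/C4.5 cascade; the points, labels, and confidence rule (distance to the labeled pool) are invented for the example.

```python
import math

def nn_label(labeled, x):
    """Label of the nearest labeled point (1-NN base classifier)."""
    return min(labeled, key=lambda p: math.dist(p[0], x))[1]

def self_train(labeled, unlabeled, rounds=3):
    """Self-training: repeatedly pseudo-label the unlabeled point closest
    to the current labeled pool (the 'most confident' one) and add it."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(min(rounds, len(pool))):
        x = min(pool, key=lambda u: min(math.dist(u, p[0]) for p in labeled))
        labeled.append((x, nn_label(labeled, x)))
        pool.remove(x)
    return labeled

seed = [((0.0, 0.0), "a"), ((10.0, 10.0), "b")]
result = self_train(seed, [(1.0, 1.0), (9.0, 9.0), (5.0, 4.0)])
# the ambiguous point (5, 4) is labeled last, once nearby pseudo-labels exist
```

In a cascade variant, the pseudo-labels (or class probabilities) would instead be appended as extra features for a second-level classifier.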
Multichannel transfer function with dimensionality reduction
Kim, Han Suk; Schulze, Jürgen P.; Cone, Angela C.; Sosinsky, Gina E.; Martone, Maryann E.
2010-01-01
Our new method provides a framework to combine multiple approaches and pushes the boundary of gradient-based transfer functions to multiple channels, while still keeping the dimensionality of transfer functions to a manageable level, i.e., a maximum
Dimensionality reduction of collective motion by principal manifolds
Gajamannage, Kelum; Butail, Sachit; Porfiri, Maurizio; Bollt, Erik M.
2015-01-01
While the existence of low-dimensional embedding manifolds has been shown in patterns of collective motion, the current battery of nonlinear dimensionality reduction methods is not amenable to the analysis of such manifolds. This is mainly due to the necessary spectral decomposition step, which limits control over the mapping from the original high-dimensional space to the embedding space. Here, we propose an alternative approach that demands a two-dimensional embedding which topologically summarizes the high-dimensional data. In this sense, our approach is closely related to the construction of one-dimensional principal curves that minimize orthogonal error to data points subject to smoothness constraints. Specifically, we construct a two-dimensional principal manifold directly in the high-dimensional space using cubic smoothing splines, and define the embedding coordinates in terms of geodesic distances. Thus, the mapping from the high-dimensional data to the manifold is defined in terms of local coordinates. Through representative examples, we show that compared to existing nonlinear dimensionality reduction methods, the principal manifold retains the original structure even in noisy and sparse datasets. The principal manifold finding algorithm is applied to configurations obtained from a dynamical system of multiple agents simulating a complex maneuver called predator mobbing, and the resulting two-dimensional embedding is compared with that of a well-established nonlinear dimensionality reduction method.
Stochastic confinement and dimensional reduction. 1
International Nuclear Information System (INIS)
Ambjoern, J.; Olesen, P.; Peterson, C.
1984-03-01
By Monte Carlo calculations on a 16⁴ lattice the authors investigate four-dimensional SU(2) lattice gauge theory with respect to the conjecture that at large distances this theory reduces approximately to two-dimensional SU(2) lattice gauge theory. Good numerical evidence is found for this conjecture. As a by-product the SU(2) string tension is also measured and good agreement is found with scaling. The 'adjoint string tension' is also found to have a reasonable scaling behaviour. (Auth.)
Stochastic confinement and dimensional reduction. Pt. 1
International Nuclear Information System (INIS)
Ambjoern, J.; Olesen, P.; Peterson, C.
1984-01-01
By Monte Carlo calculations on a 12⁴ lattice we investigate four-dimensional SU(2) lattice gauge theory with respect to the conjecture that at large distances this theory reduces approximately to two-dimensional SU(2) lattice gauge theory. We find good numerical evidence for this conjecture. As a by-product we also measure the SU(2) string tension and find reasonable agreement with scaling. The 'adjoint string tension' is also found to have a reasonable scaling behaviour. (orig.)
Optimistic semi-supervised least squares classification
DEFF Research Database (Denmark)
Krijthe, Jesse H.; Loog, Marco
2017-01-01
The goal of semi-supervised learning is to improve supervised classifiers by using additional unlabeled training examples. In this work we study a simple self-learning approach to semi-supervised learning applied to the least squares classifier. We show that a soft-label and a hard-label variant ...
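The self-learning scheme this abstract studies — fit on labeled data, impute labels on unlabeled data, refit — is easy to make concrete for the least squares classifier. The following is a minimal sketch of the hard-label variant under assumed conventions (labels encoded as ±1, a bias column appended), not the authors' exact formulation:

```python
import numpy as np

def self_trained_lsq(X_lab, y_lab, X_unlab, n_rounds=5):
    """Self-learning least squares classifier (hard-label variant sketch).

    y_lab is encoded as +/-1; the model is a linear least squares fit with a
    bias term, retrained after imputing hard labels on the unlabeled data.
    """
    def fit(X, y):
        A = np.hstack([X, np.ones((len(X), 1))])   # append bias column
        w, *_ = np.linalg.lstsq(A, y, rcond=None)
        return w
    def decision(w, X):
        return np.hstack([X, np.ones((len(X), 1))]) @ w
    w = fit(X_lab, y_lab)
    for _ in range(n_rounds):
        y_imp = np.sign(decision(w, X_unlab))       # hard labels for unlabeled
        w = fit(np.vstack([X_lab, X_unlab]), np.concatenate([y_lab, y_imp]))
    return w, decision
```

A soft-label variant would keep the real-valued decision values as targets instead of taking the sign.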
Parallel Framework for Dimensionality Reduction of Large-Scale Datasets
Directory of Open Access Journals (Sweden)
Sai Kiranmayee Samudrala
2015-01-01
Full Text Available Dimensionality reduction refers to a set of mathematical techniques used to reduce complexity of the original high-dimensional data, while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
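The spectral dimensionality reduction techniques the framework above parallelizes share a common pipeline: build a neighborhood graph, form a spectral operator, and eigendecompose it. A serial, single-node NumPy sketch of one such method (Laplacian eigenmaps) illustrates those key components; it is not the paper's parallel implementation, and the dense matrices here are exactly what would need distributed treatment at scale:

```python
import numpy as np

def laplacian_eigenmaps(X, k=5, d=2):
    """Minimal Laplacian eigenmaps: kNN graph -> graph Laplacian -> eigenvectors."""
    n = len(X)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # squared distances
    W = np.zeros((n, n))
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]  # k nearest neighbours (skip self)
    for i in range(n):
        W[i, idx[i]] = 1.0
    W = np.maximum(W, W.T)                    # symmetrize the adjacency
    L = np.diag(W.sum(axis=1)) - W            # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:d + 1]                   # skip the constant eigenvector
```

For points sampled along a line, the first embedding coordinate (the Fiedler vector) orders the points along the line.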
A regularized approach for geodesic-based semisupervised multimanifold learning.
Fan, Mingyu; Zhang, Xiaoqin; Lin, Zhouchen; Zhang, Zhongfei; Bao, Hujun
2014-05-01
Geodesic distance, as an essential measurement of data dissimilarity, has been successfully used in manifold learning. However, most geodesic distance-based manifold learning algorithms have two limitations when applied to classification: 1) class information is rarely used in computing the geodesic distances between data points on manifolds and 2) little attention has been paid to building an explicit dimension reduction mapping for extracting the discriminative information hidden in the geodesic distances. In this paper, we regard geodesic distance as a kind of kernel, which maps data from a linearly inseparable space to a linearly separable distance space. In doing this, a new semisupervised manifold learning algorithm, namely the regularized geodesic feature learning algorithm, is proposed. The method consists of three techniques: a semisupervised graph construction method, replacement of original data points with feature vectors which are built from geodesic distances, and a new semisupervised dimension reduction method for feature vectors. Experiments on the MNIST and USPS handwritten digit data sets, the MIT CBCL face versus nonface data set, and an intelligent traffic data set show the effectiveness of the proposed algorithm.
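The geodesic distances underlying such methods are typically approximated, as in Isomap, by shortest paths on a neighborhood graph. The sketch below shows that generic construction only (Euclidean kNN graph plus Floyd-Warshall); the paper's semisupervised graph construction, which also injects class information, is not reproduced here:

```python
import numpy as np

def geodesic_distances(X, k=5):
    """Approximate geodesic distances: Euclidean kNN graph + Floyd-Warshall."""
    n = len(X)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    G = np.full((n, n), np.inf)               # graph distances, inf = no edge
    np.fill_diagonal(G, 0.0)
    idx = np.argsort(D, axis=1)[:, 1:k + 1]   # k nearest neighbours of each point
    for i in range(n):
        for j in idx[i]:
            G[i, j] = G[j, i] = D[i, j]       # symmetrize the graph
    for m in range(n):                        # Floyd-Warshall shortest paths
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
    return G
```

For points on a curved manifold, the graph distance recovers arc length rather than straight-line distance, which is the property these algorithms exploit.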
Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery
Ochilov, S.; Alam, M. S.; Bal, A.
2006-05-01
The Fukunaga-Koontz transform (FKT) based technique offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space in which the feature classes have complementary eigenvectors. The dimensionality reduction technique based on this complementary eigenvector analysis can be described under two classes, desired class and background clutter, such that each basis function best represents one class while carrying the least amount of information from the second class. By selecting a few eigenvectors which are most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces data size, it provides significant advantages for near real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computation burden via the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
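The complementary-eigenvector property comes from whitening the sum of the two class autocorrelation matrices: in the whitened space the two class matrices share eigenvectors, and their eigenvalues sum to one, so a basis vector that is dominant for one class is weak for the other. A minimal sketch of the classical FKT (assuming the summed matrix is full rank) follows:

```python
import numpy as np

def fukunaga_koontz(X1, X2):
    """Classical FKT: shared basis whose class eigenvalues sum to 1.

    Returns the transform W (columns are basis vectors) and the class-1
    eigenvalues mu in [0, 1]; the class-2 eigenvalues are 1 - mu.
    """
    R1 = X1.T @ X1 / len(X1)              # class autocorrelation matrices
    R2 = X2.T @ X2 / len(X2)
    lam, Phi = np.linalg.eigh(R1 + R2)
    P = Phi / np.sqrt(np.maximum(lam, 1e-12))  # whitening: P^T (R1+R2) P = I
    S1 = P.T @ R1 @ P                     # in whitened space, S1 + S2 = I
    mu, V = np.linalg.eigh(S1)
    return P @ V, mu
```

Dimensionality reduction then keeps the columns of W whose eigenvalues mu are closest to 1 (most relevant to the desired class).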
Dimensional reduction near the deconfinement transition
International Nuclear Information System (INIS)
Kurkela, A.
2009-01-01
It is expected that incorporating the center symmetry in the conventional dimensionally reduced effective theory for high-temperature SU(N) Yang-Mills theory, EQCD, will considerably extend its applicability towards the deconfinement transition. In this talk, I will discuss the construction of such center-symmetric effective theories and present results from their lattice simulations in the case of two colors. The simulations demonstrate that unlike EQCD, the new center symmetric theory undergoes a second order confining phase transition in complete analogy with the full theory. I will also describe the perturbative and non-perturbative matching of the parameters of the effective theory, and outline ways to further improve its description of the physics near the deconfinement transition. (author)
Dimensionality reduction of quality of life indicators
Directory of Open Access Journals (Sweden)
Andrea Jindrová
2012-01-01
Full Text Available Selecting indicators for assessing the quality of life at the regional level is not unambiguous. Currently, there are no precisely defined indicators that would give comprehensive information about the quality of life on a local level. In this paper we focus on the determination (selection) of groups of indicators that can be interpreted, on the basis of the studied literature, as factors characterizing the quality of life, and on the application of methods to reduce the dimensionality of these indicators, drawing on the CULS KROK database, which provides statistics on the regional and district levels. To reduce the number of indicators and subsequently create derived variables that capture the relationships between selected indicators, multivariate statistical methods, in particular principal component analysis and factor analysis, were used. This paper also follows the methodology of the grant project “Methodological Approaches to assess Subjective Aspects of the life quality in regions of the Czech Republic”.
Ensemble learning with trees and rules: supervised, semi-supervised, unsupervised
In this article, we propose several new approaches for post-processing a large ensemble of conjunctive rules for supervised and semi-supervised learning problems. We show with various examples that for high dimensional regression problems the models constructed by post-processing the rules with ...
Loog, M.
2011-01-01
A rather simple semi-supervised version of the equally simple nearest mean classifier is presented. However simple, the proposed approach is of practical interest as the nearest mean classifier remains a relevant tool in biomedical applications or other areas dealing with relatively high-dimensional
Method of dimensionality reduction in contact mechanics and friction
Popov, Valentin L
2015-01-01
This book describes for the first time, in complete form, a simulation method for the fast calculation of contact properties and friction between rough surfaces. In contrast to existing simulation methods, the method of dimensionality reduction (MDR) is based on the exact mapping of various types of three-dimensional contact problems onto contacts of one-dimensional foundations. Within the confines of MDR, not only are three-dimensional systems reduced to one-dimensional ones, but the resulting degrees of freedom are also independent from one another. Therefore, MDR results in an enormous reduction of the development time for the numerical implementation of contact problems as well as of the direct computation time, and can ultimately assume a similar role in tribology as FEM has in structural mechanics or CFD methods in hydrodynamics. Furthermore, it substantially simplifies analytical calculation and presents a sort of “pocket book edition” of the entirety of contact mechanics. Measurements of the rheology of bodies in...
Wang, Jim Jing-Yan; Gao, Xin
2014-01-01
Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
Semi-supervised clustering methods.
Bair, Eric
2013-01-01
Cluster analysis methods seek to partition a data set into homogeneous subgroups. It is useful in a wide variety of applications, including document processing and modern genetics. Conventional clustering methods are unsupervised, meaning that there is no outcome variable nor is anything known about the relationship between the observations in the data set. In many situations, however, information about the clusters is available in addition to the values of the features. For example, the cluster labels of some observations may be known, or certain observations may be known to belong to the same cluster. In other cases, one may wish to identify clusters that are associated with a particular outcome variable. This review describes several clustering algorithms (known as "semi-supervised clustering" methods) that can be applied in these situations. The majority of these methods are modifications of the popular k-means clustering method, and several of them will be described in detail. A brief description of some other semi-supervised clustering algorithms is also provided.
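One of the k-means modifications the review describes uses known cluster labels for a subset of observations: the labeled points seed the centroids and keep their labels fixed while the remaining points are assigned as usual. A minimal sketch of that seeded variant (assuming every cluster has at least one labeled seed; the function name is hypothetical):

```python
import numpy as np

def seeded_kmeans(X, known_labels, k, n_iter=50):
    """Semi-supervised k-means: known labels seed centroids and stay fixed.

    known_labels[i] is the cluster index of point i, or -1 when unknown.
    Assumes each of the k clusters has at least one labeled seed.
    """
    fixed = known_labels >= 0
    # Initialize centroids from the labeled seeds.
    cents = np.stack([X[known_labels == c].mean(axis=0) for c in range(k)])
    labels = known_labels.copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - cents[None]) ** 2).sum(axis=2)
        labels = np.where(fixed, known_labels, d.argmin(axis=1))
        new = np.stack([X[labels == c].mean(axis=0) for c in range(k)])
        if np.allclose(new, cents):
            break
        cents = new
    return labels, cents
```

Pairwise must-link/cannot-link constraints, the other form of supervision mentioned above, require a different assignment step and are not shown here.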
Metric dimensional reduction at singularities with implications to Quantum Gravity
International Nuclear Information System (INIS)
Stoica, Ovidiu Cristinel
2014-01-01
A series of old and recent theoretical observations suggests that the quantization of gravity would be feasible, and some problems of Quantum Field Theory would go away, if, somehow, spacetime were to undergo a dimensional reduction at high energy scales. But an identification of the deep mechanism causing this dimensional reduction would still be desirable. The main contribution of this article is to show that dimensional reduction effects are due to General Relativity at singularities, and do not need to be postulated ad hoc. Recent advances in understanding the geometry of singularities do not require modification of General Relativity, being just non-singular extensions of its mathematics to the limit cases. They turn out to work fine for some known types of cosmological singularities (black holes and FLRW Big-Bang), allowing a choice of the fundamental geometric invariants and physical quantities which remain regular. The resulting equations are equivalent to the standard ones outside the singularities. One consequence of this mathematical approach to the singularities in General Relativity is a special, (geo)metric type of dimensional reduction: at singularities, the metric tensor becomes degenerate in certain spacetime directions, and some properties of the fields become independent of those directions. Effectively, it is as if one or more dimensions of spacetime just vanish at singularities. This suggests that it is worth exploring the possibility that the geometry of singularities leads naturally to the spontaneous dimensional reduction needed by Quantum Gravity. Highlights: • The singularities we introduce are described by finite geometric/physical objects. • Our singularities are accompanied by dimensional reduction effects. • They affect the metric, the measure, the topology, the gravitational DOF (Weyl = 0). • Effects proposed in other approaches to Quantum Gravity are obtained naturally. • The geometric dimensional reduction obtained...
Effective Image Database Search via Dimensionality Reduction
DEFF Research Database (Denmark)
Dahl, Anders Bjorholm; Aanæs, Henrik
2008-01-01
Image search using the bag-of-words image representation is investigated further in this paper. This approach has shown promising results for large scale image collections, making it relevant for Internet applications. The steps involved in the bag-of-words approach are feature extraction, vocabulary building, and searching with a query image. It is important to keep the computational cost low through all steps. In this paper we focus on the efficiency of the technique. To do that we substantially reduce the dimensionality of the features by the use of PCA and addition of color. Building of the visual vocabulary is typically done using k-means. We investigate a clustering algorithm based on the leader follower principle (LF-clustering), in which the number of clusters is not fixed. The adaptive nature of LF-clustering is shown to improve the quality of the visual vocabulary using this ...
Adaptive Sampling for Nonlinear Dimensionality Reduction Based on Manifold Learning
DEFF Research Database (Denmark)
Franz, Thomas; Zimmermann, Ralf; Goertz, Stefan
2017-01-01
We make use of the non-intrusive dimensionality reduction method Isomap in order to emulate nonlinear parametric flow problems that are governed by the Reynolds-averaged Navier-Stokes equations. Isomap is a manifold learning approach that provides a low-dimensional embedding space that is approxi... ...to detect and fill up gaps in the sampling in the embedding space. The performance of the proposed manifold filling method will be illustrated by numerical experiments, where we consider nonlinear parameter-dependent steady-state Navier-Stokes flows in the transonic regime.
A Tannakian approach to dimensional reduction of principal bundles
Álvarez-Cónsul, Luis; Biswas, Indranil; García-Prada, Oscar
2017-08-01
Let P be a parabolic subgroup of a connected simply connected complex semisimple Lie group G. Given a compact Kähler manifold X, the dimensional reduction of G-equivariant holomorphic vector bundles over X × G / P was carried out in Álvarez-Cónsul and García-Prada (2003). This raises the question of dimensional reduction of holomorphic principal bundles over X × G / P. The method of Álvarez-Cónsul and García-Prada (2003) is special to vector bundles; it does not generalize to principal bundles. In this paper, we adapt to equivariant principal bundles the Tannakian approach of Nori, to describe the dimensional reduction of G-equivariant principal bundles over X × G / P, and to establish a Hitchin-Kobayashi type correspondence. In order to be able to apply the Tannakian theory, we need to assume that X is a complex projective manifold.
Construction of N=8 supergravity theories by dimensional reduction
International Nuclear Information System (INIS)
Boucher, W.
1985-01-01
In this paper I ask which N=8 supergravity theories in four dimensions can be obtained by dimensional reduction of the N=1 supergravity theory in eleven dimensions. Several years ago Scherk and Schwarz produced a particular class of N=8 theories by giving a dimensional reduction scheme on the restricted class of coset spaces, G/H, with dim H=0 (and therefore dim G=7). I generalize their considerations by looking at arbitrary (seven-dimensional) coset spaces. Also, instead of giving a particular ansatz which happens to work, I set about the distinctly more difficult task of determining all ansatzes which produce N=8 theories. The basic ingredient of my dimensional reduction scheme is the demand that certain symmetries, including supersymmetry, be truncated consistently. I find the surprising result that the only N=8 theories obtainable within the context of my scheme are those theories already written down by Scherk and Schwarz. In particular dim H=0 and dim G=7. Independently of these considerations, I prove that any dimensional reduction scheme which consistently truncates supersymmetry must also be consistent with the equations of motion. I discuss Lorentz-invariant solutions of the theories of Scherk and Schwarz, pointing out that since the ansatz of Scherk and Schwarz consistently truncates supersymmetry, any solution of these theories is also a solution of the N=1 supergravity theory in eleven dimensions and, hence, in particular that there is a Freund-Rubin-type ansatz for these theories. However, I demonstrate that for most gauge groups the ansatz must be trivial, which implies that for these theories the cosmological constant of any Lorentz-invariant solution must be zero (classically). Finally, I make some comparisons with work by Manton on dimensional reduction. (orig.)
Directory of Open Access Journals (Sweden)
Zhi He
2017-10-01
Full Text Available Classification of hyperspectral image (HSI) is an important research topic in the remote sensing community. Significant efforts (e.g., deep learning) have been concentrated on this task. However, it is still an open issue to classify the high-dimensional HSI with a limited number of training samples. In this paper, we propose a semi-supervised HSI classification method inspired by the generative adversarial networks (GANs). Unlike the supervised methods, the proposed HSI classification method is semi-supervised, which can make full use of the limited labeled samples as well as the sufficient unlabeled samples. Core ideas of the proposed method are twofold. First, the three-dimensional bilateral filter (3DBF) is adopted to extract the spectral-spatial features by naturally treating the HSI as a volumetric dataset. The spatial information is integrated into the extracted features by 3DBF, which is propitious to the subsequent classification step. Second, GANs are trained on the spectral-spatial features for semi-supervised learning. A GAN contains two neural networks (i.e., generator and discriminator) trained in opposition to one another. The semi-supervised learning is achieved by adding samples from the generator to the features and increasing the dimension of the classifier output. Experimental results obtained on three benchmark HSI datasets have confirmed the effectiveness of the proposed method, especially with a limited number of labeled samples.
Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.
Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil
2017-01-19
Visualizing data by dimensionality reduction is an important strategy in Bioinformatics, which could help to discover hidden data properties and detect data quality issues, e.g. data noise, inappropriately labeled data, etc. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods could not be directly applied to biobrick datasets. Hereby, we use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. Besides, Laplacian Eigenmaps is 5 times faster than Isomap, and its visualization graph is more concentrated for discriminating biobricks. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks can be discriminated and inappropriately labeled biobricks can be determined, which could help to assess the quality of crowdsourcing-based synthetic biology databases and support biobrick selection.
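The normalized edit distance that plugs sequence data into Isomap and Laplacian Eigenmaps can be sketched as standard Levenshtein distance scaled to [0, 1]. The abstract does not state the exact normalization used; dividing by the longer sequence length, as below, is one common choice and should be read as an assumption:

```python
import numpy as np

def edit_distance(a, b):
    """Levenshtein distance by dynamic programming."""
    m, n = len(a), len(b)
    D = np.zeros((m + 1, n + 1), dtype=int)
    D[:, 0] = np.arange(m + 1)   # cost of deleting a prefix of a
    D[0, :] = np.arange(n + 1)   # cost of inserting a prefix of b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            D[i, j] = min(D[i - 1, j] + 1,        # deletion
                          D[i, j - 1] + 1,        # insertion
                          D[i - 1, j - 1] + cost) # match / substitution
    return int(D[m, n])

def normalized_edit_distance(a, b):
    """Edit distance scaled to [0, 1] by the longer sequence length (assumed)."""
    if not a and not b:
        return 0.0
    return edit_distance(a, b) / max(len(a), len(b))
```

Pairwise distances computed this way can then replace the Euclidean distances in any graph-based embedding method.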
Perturbative QCD lagrangian at large distances and stochastic dimensionality reduction
International Nuclear Information System (INIS)
Shintani, M.
1986-10-01
We construct a Lagrangian for perturbative QCD at large distances within the covariant operator formalism which explains the color confinement of quarks and gluons while maintaining unitarity of the S-matrix. It is also shown that when interactions are switched off, the mechanism of stochastic dimensionality reduction is operative in the system due to exact super-Lorentz symmetries. (orig.)
TPSLVM: a dimensionality reduction algorithm based on thin plate splines.
Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming
2014-10-01
Dimensionality reduction (DR) has been considered as one of the most significant tools for data analysis. One type of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), our proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. Also, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM) as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.
N-Dimensional LLL Reduction Algorithm with Pivoted Reflection
Directory of Open Access Journals (Sweden)
Zhongliang Deng
2018-01-01
Full Text Available The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and many of its variants have been widely used in cryptography, multiple-input-multiple-output (MIMO) communication systems, and carrier phase positioning in global navigation satellite systems (GNSS) to solve the integer least squares (ILS) problem. In this paper, we propose an n-dimensional LLL reduction algorithm (n-LLL), expanding the Lovász condition in the LLL algorithm to n-dimensional space in order to obtain a further reduced basis. We also introduce pivoted Householder reflection into the algorithm to optimize the reduction time. For an m-order positive definite matrix, analysis shows that the n-LLL reduction algorithm will converge within finitely many steps and always produce better results than the original LLL reduction algorithm with n > 2. The simulations clearly show that n-LLL is better than the original LLL in reducing the condition number of an ill-conditioned input matrix, with 39% improvement on average for typical cases, which can significantly reduce the search space for solving the ILS problem. The simulation results also show that the pivoted reflection has significantly reduced the number of swaps in the algorithm by 57%, making n-LLL a more practical reduction algorithm.
Superfluid hydrodynamics of polytropic gases: dimensional reduction and sound velocity
International Nuclear Information System (INIS)
Bellomo, N; Mazzarella, G; Salasnich, L
2014-01-01
Motivated by the fact that two-component confined fermionic gases in the Bardeen–Cooper–Schrieffer–Bose–Einstein condensate (BCS–BEC) crossover can be described through a hydrodynamical approach, we study these systems—both in the cigar-shaped configuration and in the disc-shaped one—by using a polytropic Lagrangian density. We start from the Popov Lagrangian density and obtain, after a dimensional reduction process, the equations that control the dynamics of such systems. By solving these equations we study the sound velocity as a function of the density, analyzing how the dimensionality affects this velocity. (paper)
A sparse grid based method for generative dimensionality reduction of high-dimensional data
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
Kantowski-Sachs multidimensional cosmological models and dynamical dimensional reduction
International Nuclear Information System (INIS)
Demianski, M.; Rome Univ.; Golda, Z.A.; Heller, M.; Szydlowski, M.
1988-01-01
Einstein's field equations are solved for a multidimensional spacetime (KS) × T^m, where (KS) is a four-dimensional Kantowski-Sachs spacetime and T^m is an m-dimensional torus. Among all possible vacuum solutions there is a large class of spacetimes in which the macroscopic space expands and the microscopic space contracts to a finite volume. We also consider a non-vacuum case and we explicitly solve the field equations for matter satisfying the Zel'dovich equation of state. In non-vacuum models with matter satisfying an equation of state p = γρ, 0 ≤ γ < 1, at a sufficiently late stage of evolution the microspace always expands and the dynamical dimensional reduction does not occur. (author)
One-loop dimensional reduction of the linear σ model
International Nuclear Information System (INIS)
Malbouisson, A.P.C.; Silva-Neto, M.B.; Svaiter, N.F.
1997-05-01
We perform the dimensional reduction of the linear σ model at the one-loop level. The effective Lagrangian of the reduced theory, obtained from the integration over the nonzero Matsubara frequencies, is exhibited. Thermal mass and coupling constant renormalization constants are given, as well as the thermal renormalization group equation which controls the dependence of the counterterms on the temperature. We also recover, for the reduced theory, the vacuum instability of the model for large N. (author)
Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.
Chen, Ke; Wang, Shihai
2011-01-01
Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results for benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to previous work.
Moment constrained semi-supervised LDA
DEFF Research Database (Denmark)
Loog, Marco
2012-01-01
This BNAIC compressed contribution provides a summary of the work originally presented at the First IAPR Workshop on Partially Supervised Learning and published in [5]. It outlines the idea behind supervised and semi-supervised learning and highlights the major shortcoming of many current methods...
Vinayaka : A Semi-Supervised Projected Clustering Method Using Differential Evolution
Satish Gajawada; Durga Toshniwal
2012-01-01
Differential Evolution (DE) is an algorithm for evolutionary optimization. Clustering problems have been solved by using DE based clustering methods, but these methods may fail to find clusters hidden in subspaces of high dimensional datasets. Subspace and projected clustering methods have been proposed in the literature to find subspace clusters that are present in subspaces of a dataset. In this paper we propose VINAYAKA, a semi-supervised projected clustering method based on DE. In this method DE opt...
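For readers unfamiliar with the optimizer underlying VINAYAKA, a minimal DE/rand/1/bin loop is sketched below; the parameter names F and CR follow common DE conventions, and this is a generic sketch of the optimizer rather than the clustering method proposed in the paper.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, n_gen=100):
    """Minimal DE/rand/1/bin sketch for minimizing f over box bounds."""
    random.seed(42)                      # fixed seed so the sketch is reproducible
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(n_gen):
        for i in range(pop_size):
            # three distinct donors, none equal to the target index i
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)   # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))  # clip mutant to the box
                else:
                    trial.append(pop[i][j])
            fc = f(trial)
            if fc <= cost[i]:            # greedy one-to-one selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

In a DE-based clustering method, each population member would encode candidate cluster centers (and, for projected clustering, subspace weights) and f would be a clustering quality criterion.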
Dimensional Reduction for the General Markov Model on Phylogenetic Trees.
Sumner, Jeremy G
2017-03-01
We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.
Kernel Based Nonlinear Dimensionality Reduction and Classification for Genomic Microarray
Directory of Open Access Journals (Sweden)
Lan Shu
2008-07-01
Genomic microarrays are powerful research tools in bioinformatics and modern medicinal research because they enable massively-parallel assays and simultaneous monitoring of thousands of gene expressions of biological samples. However, a simple microarray experiment often leads to very high-dimensional data and a huge amount of information; this vast amount of data challenges researchers to extract the important features and reduce the high dimensionality. In this paper, a nonlinear dimensionality reduction kernel method based on locally linear embedding (LLE) is proposed, and a fuzzy K-nearest neighbors algorithm which denoises datasets is introduced as a replacement for the classical LLE's KNN algorithm. In addition, a kernel method based support vector machine (SVM) is used to classify genomic microarray data sets in this paper. We demonstrate the application of the techniques to two published DNA microarray data sets. The experimental results confirm the superiority and high success rates of the presented method.
Supervised linear dimensionality reduction with robust margins for object recognition
Dornaika, F.; Assoum, A.
2013-01-01
Linear Dimensionality Reduction (LDR) techniques have become increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower-dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques rely on the use of local margins in order to get a good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit was crucial in order to get robust performance in the presence of outliers.
Object-based Dimensionality Reduction in Land Surface Phenology Classification
Directory of Open Access Journals (Sweden)
Brian E. Bunker
2016-11-01
Unsupervised classification or clustering of multi-decadal land surface phenology provides a spatio-temporal synopsis of natural and agricultural vegetation response to environmental variability and anthropogenic activities. Notwithstanding the detailed temporal information available in calibrated bi-monthly normalized difference vegetation index (NDVI) and comparable time series, typical pre-classification workflows average a pixel’s bi-monthly index within the larger multi-decadal time series. While this process is one practical way to reduce the dimensionality of time series with many hundreds of image epochs, it effectively dampens temporal variation from both intra- and inter-annual observations related to land surface phenology. Through a novel application of object-based segmentation aimed at spatial (not temporal) dimensionality reduction, all 294 image epochs from a Moderate Resolution Imaging Spectroradiometer (MODIS) bi-monthly NDVI time series covering the northern Fertile Crescent were retained (in homogeneous landscape units) as unsupervised classification inputs. Given the inherent challenges of in situ or manual image interpretation of land surface phenology classes, a cluster validation approach based on transformed divergence enabled comparison between traditional and novel techniques. Improved intra-annual contrast was clearly manifest in rain-fed agriculture and inter-annual trajectories showed increased cluster cohesion, reducing the overall number of classes identified in the Fertile Crescent study area from 24 to 10. Given careful segmentation parameters, this spatial dimensionality reduction technique augments the value of unsupervised learning to generate homogeneous land surface phenology units. By combining recent scalable computational approaches to image segmentation, future work can pursue new global land surface phenology products based on the high temporal resolution signatures of vegetation index time series.
Semisupervised Community Detection by Voltage Drops
Directory of Open Access Journals (Sweden)
Min Ji
2016-01-01
Many applications show that semisupervised community detection is one of the important topics and has attracted considerable attention in the study of complex networks. In this paper, based on the notion of voltage drops and discrete potential theory, a simple and fast semisupervised community detection algorithm is proposed. The label propagation through discrete potential transmission is accomplished by using voltage drops. The complexity of the proposal is O(V + E) for a sparse network with V vertices and E edges. The obtained voltage value of a vertex clearly reflects the relationship between the vertex and its community. The experimental results on four real networks and three benchmarks indicate that the proposed algorithm is effective and flexible. Furthermore, this algorithm is easily applied to graph-based machine learning methods.
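The voltage-drop idea — clamp the labeled vertices like fixed potentials and let values diffuse along edges — can be sketched as a simple iterative propagation scheme. This simplified version illustrates the general principle only; it is not the authors' O(V + E) algorithm, and the function name and interface are illustrative.

```python
import numpy as np

def propagate_labels(adj, seeds, n_iter=200):
    """Label propagation sketch in the spirit of potential/voltage methods:
    seed vertices are clamped (like fixed voltages) and every other vertex
    repeatedly averages its neighbours' label scores. `seeds` maps
    vertex -> class id; `adj` is a symmetric adjacency matrix."""
    n = adj.shape[0]
    classes = sorted(set(seeds.values()))
    F = np.zeros((n, len(classes)))
    for v, c in seeds.items():
        F[v, classes.index(c)] = 1.0
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                    # avoid division by zero on isolated nodes
    for _ in range(n_iter):
        F = adj @ F / deg                  # average neighbour scores
        for v, c in seeds.items():         # clamp the seeds ("fixed voltages")
            F[v] = 0.0
            F[v, classes.index(c)] = 1.0
    return F.argmax(axis=1)
```

Each unlabeled vertex ends up with the label whose "potential" is highest, mirroring how the voltage value of a vertex reflects its community membership.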
Semi-supervised Learning for Phenotyping Tasks.
Dligach, Dmitriy; Miller, Timothy; Savova, Guergana K
2015-01-01
Supervised learning is the dominant approach to automatic electronic health records-based phenotyping, but it is expensive due to the cost of manual chart review. Semi-supervised learning takes advantage of both scarce labeled and plentiful unlabeled data. In this work, we study a family of semi-supervised learning algorithms based on Expectation Maximization (EM) in the context of several phenotyping tasks. We first experiment with the basic EM algorithm. When the modeling assumptions are violated, basic EM leads to inaccurate parameter estimation. Augmented EM attenuates this shortcoming by introducing a weighting factor that downweights the unlabeled data. Cross-validation does not always lead to the best setting of the weighting factor and other heuristic methods may be preferred. We show that accurate phenotyping models can be trained with only a few hundred labeled (and a large number of unlabeled) examples, potentially providing substantial savings in the amount of the required manual chart review.
A SURVEY OF SEMI-SUPERVISED LEARNING
Amrita Sadarangani *, Dr. Anjali Jivani
2016-01-01
Semi-supervised learning involves using both labeled and unlabeled data to train a classifier or for clustering. Semi-supervised learning finds usage in many applications, since labeled data can be hard to find in many cases. Currently, a lot of research is being conducted in this area. This paper discusses the different algorithms of semi-supervised learning and then compares their advantages and limitations. The differences between supervised classification and semi-supervised classific...
Classification of gene expression data: A hubness-aware semi-supervised approach.
Buza, Krisztian
2016-04-01
Classification of gene expression data is the common denominator of various biomedical recognition tasks. However, obtaining class labels for large training samples may be difficult or even impossible in many cases. Therefore, semi-supervised classification techniques are required, as semi-supervised classifiers take advantage of unlabeled data. Gene expression data is high-dimensional, which gives rise to the phenomena known under the umbrella of the curse of dimensionality, one of its recently explored aspects being the presence of hubs, or hubness for short. Therefore, hubness-aware classifiers have been developed recently, such as Naive Hubness-Bayesian k-Nearest Neighbor (NHBNN). In this paper, we propose a semi-supervised extension of NHBNN which follows the self-training schema. As one of the core components of self-training is the certainty score, we propose a new hubness-aware certainty score. We performed experiments on publicly available gene expression data. These experiments show that the proposed classifier outperforms its competitors. We investigated the impact of each of the components (classification algorithm, semi-supervised technique, hubness-aware certainty score) separately and showed that each of these components is relevant to the performance of the proposed approach. Our results imply that our approach may increase classification accuracy and reduce computational costs (i.e., runtime). Based on the promising results presented in the paper, we envision that hubness-aware techniques will be used in various other biomedical machine learning tasks. In order to accelerate this process, we made an implementation of hubness-aware machine learning techniques publicly available in the PyHubs software package (http://www.biointelligence.hu/pyhubs), implemented in Python, one of the most popular programming languages of data science.
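The self-training schema described above can be sketched as follows. This version uses a plain k-NN classifier and the maximum class probability as a naive certainty score; the paper's NHBNN classifier and hubness-aware certainty score are not reproduced here, so treat every name below as illustrative.

```python
import numpy as np

def knn_proba(X_train, y_train, X, k=3, n_classes=2):
    """Class probabilities from a plain k-NN vote."""
    d = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    nn = d.argsort(axis=1)[:, :k]
    votes = y_train[nn]
    return np.stack([(votes == c).mean(axis=1) for c in range(n_classes)], axis=1)

def self_training(X_lab, y_lab, X_unlab, threshold=0.9, k=3, n_classes=2):
    """Self-training sketch: repeatedly pseudo-label the unlabeled points the
    current classifier is most certain about, then retrain on the grown set."""
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    while len(pool):
        P = knn_proba(X_lab, y_lab, pool, k, n_classes)
        conf = P.max(axis=1)               # naive certainty score
        keep = conf >= threshold
        if not keep.any():                 # nothing certain enough: stop
            break
        X_lab = np.vstack([X_lab, pool[keep]])
        y_lab = np.concatenate([y_lab, P[keep].argmax(axis=1)])
        pool = pool[~keep]
    return X_lab, y_lab
```

The certainty score is the pluggable component: substituting a hubness-aware score, as the paper proposes, changes only the line computing `conf`.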
Semi-supervised Learning with Deep Generative Models
Kingma, D.P.; Rezende, D.J.; Mohamed, S.; Welling, M.
2014-01-01
The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and
Projected estimators for robust semi-supervised classification
Krijthe, J.H.; Loog, M.
2017-01-01
For semi-supervised techniques to be applied safely in practice we at least want methods to outperform their supervised counterparts. We study this question for classification using the well-known quadratic surrogate loss function. Unlike other approaches to semi-supervised learning, the
International Nuclear Information System (INIS)
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-01-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed, if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance to gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
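The classic gradient-based active-subspace discovery that this abstract contrasts with can be sketched in a few lines: form the empirical matrix C = E[∇f ∇fᵀ] from gradient samples and keep its leading eigenvectors as the projection. This is the standard gradient-requiring approach, not the gradient-free GP method the paper develops; the function name is illustrative.

```python
import numpy as np

def active_subspace(grad_samples, k):
    """Classic active-subspace discovery: eigendecompose the empirical
    matrix C = E[grad f grad f^T] and keep the k leading eigenvectors
    as the projection W. grad_samples has one gradient per row."""
    C = grad_samples.T @ grad_samples / len(grad_samples)
    eigvals, eigvecs = np.linalg.eigh(C)        # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]           # reorder to descending
    return eigvecs[:, order[:k]], eigvals[order]
```

A sharp drop in the returned eigenvalue spectrum signals a low-dimensional active subspace; the link function is then learned on the projected inputs X @ W.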
Semi-supervised detection of intracranial pressure alarms using waveform dynamics
International Nuclear Information System (INIS)
Scalzo, Fabien; Hu, Xiao
2013-01-01
Patient monitoring systems in intensive care units (ICU) are usually set to trigger alarms when abnormal values are detected. Alarms are generated by threshold-crossing rules that lead to high false alarm rates. This is a recognized issue that causes alarm fatigue, waste of human resources, and increased patient risks. Recently developed smart alarm models require alarms to be validated by experts during the training phase. The manual annotation process involved is time-consuming and virtually impossible to achieve for the thousands of alarms recorded in the ICU every week. To tackle this problem, we investigate in this study if the use of semi-supervised learning methods, that can naturally integrate unlabeled data samples in the model, can be used to improve the accuracy of the alarm detection. As a proof of concept, the detection system is evaluated on intracranial pressure (ICP) signal alarms. Specific morphological and trending features are extracted from the ICP signal waveform to capture the dynamic of the signal prior to alarms. This study is based on a comprehensive dataset of 4791 manually labeled alarms recorded from 108 neurosurgical patients. A comparative analysis is provided between kernel spectral regression (SR-KDA) and support vector machine (SVM) both modified for the semi-supervised setting. Results obtained during the experimental evaluations indicate that the two models can significantly reduce false alarms using unlabeled samples; especially in the presence of a restrained number of labeled examples. At a true alarm recognition rate of 99%, the false alarm reduction rates improved from 9% (supervised) to 27% (semi-supervised) for SR-KDA, and from 3% (supervised) to 16% (semi-supervised) for SVM. (paper)
Graph-based semi-supervised learning
Subramanya, Amarnag
2014-01-01
While labeled data is expensive to prepare, ever increasing amounts of unlabeled data is becoming widely available. In order to adapt to this phenomenon, several semi-supervised learning (SSL) algorithms, which learn from labeled as well as unlabeled data, have been developed. In a separate line of work, researchers have started to realize that graphs provide a natural way to represent data in a variety of domains. Graph-based SSL algorithms, which bring together these two lines of work, have been shown to outperform the state-of-the-art in many applications in speech processing, computer visi
Joint Sparse Recovery With Semisupervised MUSIC
Wen, Zaidao; Hou, Biao; Jiao, Licheng
2017-05-01
Discrete multiple signal classification (MUSIC), with its low computational cost and mild condition requirement, has become a significant noniterative algorithm for joint sparse recovery (JSR). However, it fails in the rank-defective problem caused by coherent or a limited amount of multiple measurement vectors (MMVs). In this letter, we provide a novel perspective on this problem by interpreting JSR as a binary classification problem with respect to atoms. Meanwhile, MUSIC essentially constructs a supervised classifier based on the labeled MMVs, so that its performance will heavily depend on the quality and quantity of these training samples. From this viewpoint, we develop a semisupervised MUSIC (SS-MUSIC) in the spirit of machine learning, which declares that the insufficient supervised information in the training samples can be compensated from those unlabeled atoms. Instead of constructing a classifier in a fully supervised manner, we iteratively refine a semisupervised classifier by exploiting the labeled MMVs and some reliable unlabeled atoms simultaneously. In this way, the required conditions and iterations can be greatly relaxed and reduced. Numerical experimental results demonstrate that SS-MUSIC can achieve much better recovery performance than other MUSIC extended algorithms as well as some typical greedy algorithms for JSR in terms of iterations and recovery probability.
Solution path for manifold regularized semisupervised classification.
Wang, Gang; Wang, Fei; Chen, Tao; Yeung, Dit-Yan; Lochovsky, Frederick H
2012-04-01
Traditional learning algorithms use only labeled data for training. However, labeled examples are often difficult or time consuming to obtain since they require substantial human labeling efforts. On the other hand, unlabeled data are often relatively easy to collect. Semisupervised learning addresses this problem by using large quantities of unlabeled data with labeled data to build better learning algorithms. In this paper, we use the manifold regularization approach to formulate the semisupervised learning problem where a regularization framework which balances a tradeoff between loss and penalty is established. We investigate different implementations of the loss function and identify the methods which have the least computational expense. The regularization hyperparameter, which determines the balance between loss and penalty, is crucial to model selection. Accordingly, we derive an algorithm that can fit the entire path of solutions for every value of the hyperparameter. Its computational complexity after preprocessing is quadratic only in the number of labeled examples rather than the total number of labeled and unlabeled examples.
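For a fixed value of the regularization hyperparameter (one point on the solution path discussed above), a manifold-regularized least-squares problem has a closed-form solution. The sketch below assumes a squared loss on labeled points and the standard graph Laplacian penalty; it computes a single solution, not the full path, and all names are illustrative.

```python
import numpy as np

def lap_rls(adj, y_labeled, labeled_idx, lam=0.1):
    """Laplacian-regularized least squares sketch: minimize
    sum over labeled i of (f_i - y_i)^2  +  lam * f^T L f,
    whose minimizer solves the linear system (J + lam*L) f = J y,
    where J selects the labeled points and L is the graph Laplacian."""
    n = adj.shape[0]
    L = np.diag(adj.sum(axis=1)) - adj     # unnormalized graph Laplacian
    J = np.zeros((n, n))                   # diagonal selector of labeled points
    y = np.zeros(n)
    for idx, val in zip(labeled_idx, y_labeled):
        J[idx, idx] = 1.0
        y[idx] = val
    return np.linalg.solve(J + lam * L, J @ y)
```

Tracing the solution path amounts to following how this f changes as lam sweeps from 0 to infinity, which is what the paper's path algorithm does without re-solving from scratch at every value.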
Dimensional reduction in field theory and hidden symmetries in extended supergravity
International Nuclear Information System (INIS)
Kremmer, E.
1985-01-01
Dimensional reduction in field theories is discussed both in theories which do not include gravity and in gravity theories. In particular, 11-dimensional supergravity and its reduction to 4 dimensions is considered. Hidden symmetries of N=8 supergravity in 4 dimensions, in particular the global E7 and local SU(8) invariances, are detected. The hidden symmetries permit a geometric interpretation of the scalar fields.
Cross-Domain Semi-Supervised Learning Using Feature Formulation.
Xingquan Zhu
2011-12-01
Semi-Supervised Learning (SSL) traditionally makes use of unlabeled samples by including them in the training set through an automated labeling process. Such a primitive Semi-Supervised Learning (pSSL) approach suffers from a number of disadvantages, including false labeling and an inability to utilize out-of-domain samples. In this paper, we propose a formative Semi-Supervised Learning (fSSL) framework which explores hidden features between labeled and unlabeled samples to achieve semi-supervised learning. fSSL assumes that both labeled and unlabeled samples are generated from some hidden concepts with labeling information partially observable for some samples. The key to fSSL is to recover the hidden concepts, and take them as new features to link labeled and unlabeled samples for semi-supervised learning. Because unlabeled samples are only used to generate new features, and not explicitly included in the training set as in pSSL, fSSL overcomes the inherent disadvantages of the traditional pSSL methods, especially for samples not within the same domain as the labeled instances. Experimental results and comparisons demonstrate that fSSL significantly outperforms pSSL-based methods for both within-domain and cross-domain semi-supervised learning.
Discriminative semi-supervised feature selection via manifold regularization.
Xu, Zenglin; King, Irwin; Lyu, Michael Rung-Tsong; Jin, Rong
2010-07-01
Feature selection has attracted a huge amount of interest in both research and application communities of data mining. We consider the problem of semi-supervised feature selection, where we are given a small amount of labeled examples and a large amount of unlabeled examples. Since a small number of labeled samples are usually insufficient for identifying the relevant features, the critical problem arising from semi-supervised feature selection is how to take advantage of the information underneath the unlabeled data. To address this problem, we propose a novel discriminative semi-supervised feature selection method based on the idea of manifold regularization. The proposed approach selects features through maximizing the classification margin between different classes and simultaneously exploiting the geometry of the probability distribution that generates both labeled and unlabeled data. In comparison with previous semi-supervised feature selection algorithms, our proposed semi-supervised feature selection method is an embedded feature selection method and is able to find more discriminative features. We formulate the proposed feature selection method into a convex-concave optimization problem, where the saddle point corresponds to the optimal solution. To find the optimal solution, the level method, a fairly recent optimization method, is employed. We also present a theoretic proof of the convergence rate for the application of the level method to our problem. Empirical evaluation on several benchmark data sets demonstrates the effectiveness of the proposed semi-supervised feature selection method.
Regular graph construction for semi-supervised learning
International Nuclear Information System (INIS)
Vega-Oliveros, Didier A; Berton, Lilian; Eberle, Andre Mantini; Lopes, Alneu de Andrade; Zhao, Liang
2014-01-01
Semi-supervised learning (SSL) stands out for using a small amount of labeled points for data clustering and classification. In this scenario, graph-based methods allow the analysis of local and global characteristics of the available data by identifying classes or groups regardless of data distribution and representing submanifolds in Euclidean space. Most methods used in the literature for SSL classification do not worry about graph construction. However, regular graphs can obtain better classification accuracy compared to traditional methods such as k-nearest neighbor (kNN), since kNN favors the generation of hubs and is not appropriate for high-dimensional data. Nevertheless, methods commonly used for generating regular graphs have high computational cost. We tackle this problem by introducing an alternative method for the generation of regular graphs with better runtime performance compared to methods usually found in the area. Our technique is based on the preferential selection of vertices according to some topological measures, like closeness, generating at the end of the process a regular graph. Experiments using the global and local consistency method for label propagation show that our method provides better or equal classification rates in comparison with kNN.
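One simple way to limit the hub formation mentioned above when building a neighbourhood graph is to scan candidate edges in order of increasing distance and cap each vertex's degree. This greedy sketch conveys the idea of a (near-)regular graph, but it is a simplification and not the authors' topological-measure-based construction; every name here is illustrative.

```python
import numpy as np

def degree_capped_knn(X, k):
    """Greedy sketch of a (near-)regular neighbourhood graph: candidate
    edges are scanned in order of increasing distance and an edge is
    accepted only if both endpoints still have degree < k, which caps
    hub formation by construction."""
    n = len(X)
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    pairs = [(d[i, j], i, j) for i in range(n) for j in range(i + 1, n)]
    pairs.sort()                            # nearest pairs considered first
    deg = [0] * n
    adj = np.zeros((n, n), dtype=int)
    for _, i, j in pairs:
        if deg[i] < k and deg[j] < k:
            adj[i, j] = adj[j, i] = 1
            deg[i] += 1
            deg[j] += 1
    return adj
```

Unlike plain kNN, no vertex can accumulate more than k edges, so no vertex can become a hub; some vertices may end with fewer than k edges, hence "near-regular".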
Perturbative QCD Lagrangian at large distances and stochastic dimensionality reduction. Pt. 2
International Nuclear Information System (INIS)
Shintani, M.
1986-11-01
Using the method of stochastic dimensional reduction, we derive a four-dimensional quantum effective Lagrangian for the classical Yang-Mills system coupled to the Gaussian white noise. It is found that the Lagrangian coincides with the perturbative QCD at large distances constructed in our previous paper. That formalism is based on the local covariant operator formalism which maintains the unitarity of the S-matrix. Furthermore, we show the non-perturbative equivalence between super-Lorentz invariant sectors of the effective Lagrangian and two dimensional QCD coupled to the adjoint pseudo-scalars. This implies that stochastic dimensionality reduction by two is approximately operative in QCD at large distances. (orig.)
Semi-Supervised Transductive Hot Spot Predictor Working on Multiple Assumptions
Wang, Jim Jing-Yan; Almasri, Islam; Shi, Yuexiang; Gao, Xin
2014-01-01
None of the existing transductive semi-supervised algorithms takes all three semi-supervised assumptions, i.e., the smoothness, cluster, and manifold assumptions, into account during learning. In this paper, we propose a novel semi-supervised method for hot spot residue
Water-Induced Dimensionality Reduction in Metal-Halide Perovskites
Turedi, Bekir
2018-03-30
Metal-halide perovskite materials are highly attractive materials for optoelectronic applications. However, the instability of perovskite materials caused by moisture and heat-induced degradation impairs future prospects of using these materials. Here we employ water to directly transform films of the three-dimensional (3D) perovskite CsPbBr3 to stable two-dimensional (2D) perovskite-related CsPb2Br5. A sequential dissolution-recrystallization process governs this water-induced transformation under PbBr2-rich conditions. We find that these post-synthesized 2D perovskite-related material films exhibit excellent stability against humidity and high photoluminescence quantum yield. We believe that our results provide a new synthetic method to generate stable 2D perovskite-related materials that could be applicable for light-emitting device applications.
On symmetry reduction and exact solutions of the linear one-dimensional Schroedinger equation
International Nuclear Information System (INIS)
Barannik, L.L.
1996-01-01
Symmetry reduction of the Schroedinger equation with potential is carried out on subalgebras of the Lie algebra which is the direct sum of the special Galilei algebra and one-dimensional algebra. Some new exact solutions are obtained
Dimensional reduction and BRST approach to the description of a Regge trajectory
International Nuclear Information System (INIS)
Pashnev, A.I.; Tsulaya, M.M.
1997-01-01
The local free field theory for Regge trajectory is described in the framework of the BRST-quantization method. The corresponding BRST-charge is constructed with the help of the method of dimensional reduction
Congruent reduction and mode conversion in 4-dimensional plasmas
International Nuclear Information System (INIS)
Friedland, L.; Kaufman, A.N.
1987-04-01
Standard eikonal theory reduces, to N=1, the order of the system of equations underlying wave propagation in inhomogeneous plasmas. The condition for this remarkable reducibility is that only one eigenvalue of the unreduced NxN dispersion matrix D(k,x) vanishes at a time. If, however, two or more eigenvalues of D become simultaneously small, the geometric optics reduction scheme becomes singular. These regions are associated with linear mode conversion, and are described by higher order systems. A new reduction scheme based on congruent transformations of D is developed, and it is shown that, in "degenerate" plasma regions, a partial reduction of order is possible. The method comprises a constructive step-by-step procedure, which, in the most frequent (doubly) degenerate case, yields a second-order system describing the pairwise mode conversion problems, the solution of which in general geometry has been found recently.
Symmetries, integrals, and three-dimensional reductions of Plebanski's second heavenly equation
International Nuclear Information System (INIS)
Neyzi, F.; Sheftel, M. B.; Yazici, D.
2007-01-01
We study symmetries and conservation laws for Plebanski's second heavenly equation written as a first-order nonlinear evolutionary system which admits a multi-Hamiltonian structure. We construct an optimal system of one-dimensional subalgebras and all inequivalent three-dimensional symmetry reductions of the original four-dimensional system. We consider these two-component evolutionary systems in three dimensions as natural candidates for integrable systems
Coset Space Dimensional Reduction approach to the Standard Model
International Nuclear Information System (INIS)
Farakos, K.; Kapetanakis, D.; Koutsoumbas, G.; Zoupanos, G.
1988-01-01
We present a unified theory in ten dimensions based on the gauge group E8, which is dimensionally reduced to the Standard Model SU(3)c × SU(2)L × U(1)Y, which breaks further spontaneously to SU(3)c × U(1)em. The model gives similar predictions for sin²θW and proton decay as the minimal SU(5) GUT, while a natural choice of the coset space radii predicts light Higgs masses à la Coleman-Weinberg.
SemiBoost: boosting for semi-supervised learning.
Mallapragada, Pavan Kumar; Jin, Rong; Jain, Anil K; Liu, Yi
2009-11-01
Semi-supervised learning has attracted a significant amount of attention in pattern recognition and machine learning. Most previous studies have focused on designing special algorithms to effectively exploit the unlabeled data in conjunction with labeled data. Our goal is to improve the classification accuracy of any given supervised learning algorithm by using the available unlabeled examples. We call this the semi-supervised improvement problem, to distinguish the proposed approach from the existing approaches. We design a meta-semi-supervised learning algorithm that wraps around the underlying supervised algorithm and improves its performance using unlabeled data. This problem is particularly important when we need to train a supervised learning algorithm with a limited number of labeled examples and a multitude of unlabeled examples. We present a boosting framework for semi-supervised learning, termed SemiBoost. The key advantages of the proposed semi-supervised learning approach are: 1) performance improvement of any supervised learning algorithm with a multitude of unlabeled data, 2) efficient computation by the iterative boosting algorithm, and 3) exploiting both manifold and cluster assumptions in training classification models. An empirical study on 16 different data sets and text categorization demonstrates that the proposed framework improves the performance of several commonly used supervised learning algorithms, given a large number of unlabeled examples. We also show that the performance of the proposed algorithm, SemiBoost, is comparable to the state-of-the-art semi-supervised learning algorithms.
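The wrapper idea, improving an arbitrary supervised base learner with unlabeled data, can be sketched with a simplified self-training loop: each round, the current model pseudo-labels the unlabeled points most similar to the labeled set, and the base learner is refit on the enlarged training set. This is a hedged SemiBoost-flavored sketch, not the published algorithm (which combines weak classifiers with similarity-weighted confidences); the tiny nearest-centroid base learner is only there to keep the example self-contained.

```python
import numpy as np

class NearestCentroid:
    """Tiny stand-in base learner so the sketch is self-contained."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None] - self.mu_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]

def semiboost_like(Xl, yl, Xu, base=NearestCentroid, rounds=5, frac=0.2):
    """Each round: pseudo-label the unlabeled points closest to the labeled
    set, move the most confident fraction into the training set, refit."""
    Xl, yl, Xu = Xl.copy(), yl.copy(), Xu.copy()
    model = base().fit(Xl, yl)
    for _ in range(rounds):
        if len(Xu) == 0:
            break
        d = np.linalg.norm(Xu[:, None] - Xl[None], axis=2)
        conf = -d.min(axis=1)            # higher = closer to labeled data
        pred = model.predict(Xu)
        take = np.argsort(conf)[::-1][:max(1, int(frac * len(Xu)))]
        Xl = np.vstack([Xl, Xu[take]])
        yl = np.concatenate([yl, pred[take]])
        Xu = np.delete(Xu, take, axis=0)
        model = base().fit(Xl, yl)       # refit on the enlarged set
    return model
```

With only two labeled points per class on two well-separated blobs, the wrapped learner still recovers both class regions.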
Enhanced manifold regularization for semi-supervised classification.
Gan, Haitao; Luo, Zhizeng; Fan, Yingle; Sang, Nong
2016-06-01
Manifold regularization (MR) has become one of the most widely used approaches in the semi-supervised learning field. It has shown superiority by exploiting the local manifold structure of both labeled and unlabeled data. The manifold structure is modeled by constructing a Laplacian graph and then incorporated in learning through a smoothness regularization term. Hence the labels of labeled and unlabeled data vary smoothly along the geodesics on the manifold. However, MR has ignored the discriminative ability of the labeled and unlabeled data. To address the problem, we propose an enhanced MR framework for semi-supervised classification in which the local discriminative information of the labeled and unlabeled data is explicitly exploited. To make full use of labeled data, we first employ a semi-supervised clustering method to discover the underlying data space structure of the whole dataset. Then we construct a local discrimination graph to model the discriminative information of labeled and unlabeled data according to the discovered intrinsic structure. Therefore, data points that may be from different clusters, though similar on the manifold, are forced away from each other. Finally, the discrimination graph is incorporated into the MR framework. In particular, we utilize semi-supervised fuzzy c-means and Laplacian regularized Kernel minimum squared error for semi-supervised clustering and classification, respectively. Experimental results on several benchmark datasets and face recognition demonstrate the effectiveness of our proposed method.
Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.
Sun, Shiliang; Xie, Xijiong
2016-09-01
Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results of semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
One-dimensional reduction of viscous jets. II. Applications
Pitrou, Cyril
2018-04-01
In a companion paper [Phys. Rev. E 97, 043115 (2018), 10.1103/PhysRevE.97.043115], a formalism was developed for describing viscous fibers as one-dimensional objects. We apply it to the special case of a viscous fluid torus. This allows us to highlight the differences with the basic viscous string model and with its viscous rod model extension. In particular, an elliptic deformation of the torus section appears because of surface tension effects, and this cannot be described by either viscous string or viscous rod models. Furthermore, we study the Rayleigh-Plateau instability for periodic deformations around the perfect torus, and we show that the instability is not sufficient to lead to the torus breakup into several droplets before it collapses to a single spherical drop. Conversely, a rotating torus is dynamically attracted toward a stationary solution, around which the instability can develop freely and split the torus into multiple droplets.
One-dimensional reduction of viscous jets. I. Theory
Pitrou, Cyril
2018-04-01
We build a general formalism to describe thin viscous jets as one-dimensional objects with an internal structure. We present in full generality the steps needed to describe the viscous jets around their central line, and we argue that the Taylor expansion of all fields around that line is conveniently expressed in terms of symmetric trace-free tensors living in the two dimensions of the fiber sections. We recover the standard results of axisymmetric jets and we report the first and second corrections to the lowest order description, also allowing for a rotational component around the axis of symmetry. When applied to generally curved fibers, the lowest order description corresponds to a viscous string model whose sections are circular. However, when including the first corrections, we find that curved jets generically develop elliptic sections. Several subtle effects imply that the first corrections cannot be described by a rod model, since that amounts to selectively discarding some corrections. However, in a fast rotating frame, we find that the dominant effects induced by inertial and Coriolis forces should be correctly described by rod models. For completeness, we also recover the constitutive relations for forces and torques in rod models and exhibit a missing term in the lowest order expression of the viscous torque. Given that our method is based on tensors, the complexity of all computations is greatly reduced by using an appropriate tensor algebra package such as xAct, allowing us to obtain a one-dimensional description of curved viscous jets with all the first order corrections consistently included. Finally, we find a description for straight fibers with elliptic sections as a special case of these results, and recover that ellipticity is dynamically damped by surface tension. An application to toroidal viscous fibers is presented in the companion paper [Pitrou, Phys. Rev. E 97, 043116 (2018), 10.1103/PhysRevE.97.043116].
Center-vortex dominance after dimensional reduction of SU(2) lattice gauge theory
Gattnar, J.; Langfeld, K.; Schafke, A.; Reinhardt, H.
2000-01-01
The high-temperature phase of SU(2) Yang-Mills theory is addressed by means of dimensional reduction with a special emphasis on the properties of center vortices. For this purpose, the vortex vacuum which arises from center projection is studied in pure 3-dimensional Yang-Mills theory as well as in the 3-dimensional adjoint Higgs model which describes the high temperature phase of the 4-dimensional SU(2) gauge theory. We find center-dominance within the numerical accuracy of 10%.
Projected estimators for robust semi-supervised classification
DEFF Research Database (Denmark)
Krijthe, Jesse H.; Loog, Marco
2017-01-01
For semi-supervised techniques to be applied safely in practice we at least want methods to outperform their supervised counterparts. We study this question for classification using the well-known quadratic surrogate loss function. Unlike other approaches to semi-supervised learning, the procedure ... specifically, we prove that, measured on the labeled and unlabeled training data, this semi-supervised procedure never gives a lower quadratic loss than the supervised alternative. To our knowledge this is the first approach that offers such strong, albeit conservative, guarantees for improvement over the supervised solution. The characteristics of our approach are explicated using benchmark datasets to further understand the similarities and differences between the quadratic loss criterion used in the theoretical results and the classification accuracy typically considered in practice.
Pacharawongsakda, Eakasit; Theeramunkong, Thanaruk
2013-12-01
Predicting protein subcellular location is one of the major challenges in the Bioinformatics area, since such knowledge helps us understand protein functions and enables us to select the targeted proteins during the drug discovery process. While many computational techniques have been proposed to improve predictive performance for protein subcellular location, they have several shortcomings. In this work, we propose a method to solve three main issues in such techniques: i) manipulation of multiplex proteins, which may exist in or move between multiple cellular compartments, ii) handling of high dimensionality in the input and output spaces, and iii) the requirement of sufficient labeled data for model training. Towards these issues, this work presents a new computational method for predicting proteins which have either single or multiple locations. The proposed technique, namely iFLAST-CORE, incorporates dimensionality reduction in the feature and label spaces with a co-training paradigm for semi-supervised multi-label classification. For this purpose, the Singular Value Decomposition (SVD) is applied to transform the high-dimensional feature space and label space into lower-dimensional spaces. After that, due to the limitation of labeled data, the co-training regression makes use of unlabeled data by predicting the target values in the lower-dimensional spaces of unlabeled data. In the last step, the SVD components are used to project labels in the lower-dimensional space back to those in the original space, and an adaptive threshold is used to map a numeric value to a binary value for label determination. A set of experiments on viral proteins and gram-negative bacterial proteins shows that our proposed method improves the classification performance in terms of various evaluation metrics, such as Aiming (or Precision), Coverage (or Recall), and macro F-measure, compared to the traditional method that uses only labeled data.
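The SVD step of the pipeline, compressing a binary label matrix into a low-dimensional space and projecting predictions back with a threshold, can be sketched directly. This shows only that step (the co-training regression on unlabeled data is omitted), and the function names are illustrative, not the authors'.

```python
import numpy as np

def svd_label_reduce(Y, r):
    """Project a binary label matrix Y (n x L) into an r-dimensional
    label space via SVD, keeping the top-r singular directions."""
    U, s, Vt = np.linalg.svd(Y.astype(float), full_matrices=False)
    Z = U[:, :r] * s[:r]          # low-dimensional label representation
    return Z, Vt[:r]              # Vt[:r] projects back to label space

def svd_label_restore(Z, Vr, threshold=0.5):
    """Map low-dimensional label scores back to the original label space,
    then binarize with a fixed threshold (the paper uses an adaptive one)."""
    Yhat = Z @ Vr
    return (Yhat >= threshold).astype(int)
```

When the label matrix has rank at most r, the round trip is lossless; in general the reduction trades some reconstruction error for a much smaller regression target.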
Energy Technology Data Exchange (ETDEWEB)
Marquard, P.; Mihaila, L.; Steinhauser, M. [Karlsruhe Univ. (T.H.) (Germany). Inst. fuer Theoretische Teilchenphysik; Piclum, J.H. [Karlsruhe Univ. (T.H.) (Germany). Inst. fuer Theoretische Teilchenphysik]|[Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik
2007-02-15
We compute the relation between the pole quark mass and the minimally subtracted quark mass in the framework of QCD applying dimensional reduction as a regularization scheme. Special emphasis is put on the evanescent couplings and the renormalization of the ε-scalar mass. As a by-product we obtain the three-loop on-shell renormalization constants Z_m^OS and Z_2^OS in dimensional regularization and thus provide the first independent check of the analytical results computed several years ago. (orig.)
Sharpening the weak gravity conjecture with dimensional reduction
International Nuclear Information System (INIS)
Heidenreich, Ben; Reece, Matthew; Rudelius, Tom
2016-01-01
We investigate the behavior of the Weak Gravity Conjecture (WGC) under toroidal compactification and RG flows, finding evidence that WGC bounds for single photons become weaker in the infrared. By contrast, we find that a photon satisfying the WGC will not necessarily satisfy it after toroidal compactification when black holes charged under the Kaluza-Klein photons are considered. Doing so either requires an infinite number of states of different charges to satisfy the WGC in the original theory or a restriction on allowed compactification radii. These subtleties suggest that if the Weak Gravity Conjecture is true, we must seek a stronger form of the conjecture that is robust under compactification. We propose a “Lattice Weak Gravity Conjecture” that meets this requirement: a superextremal particle should exist for every charge in the charge lattice. The perturbative heterotic string satisfies this conjecture. We also use compactification to explore the extent to which the WGC applies to axions. We argue that gravitational instanton solutions in theories of axions coupled to dilaton-like fields are analogous to extremal black holes, motivating a WGC for axions. This is further supported by a match between the instanton action and that of wrapped black branes in a higher-dimensional UV completion.
Yuan, Fang; Wang, Guangyi; Wang, Xiaowei
2017-03-01
In this paper, smooth curve models of the meminductor and memcapacitor are designed, which are generalized from a memristor. Based on these models, a new five-dimensional chaotic oscillator that contains a meminductor and memcapacitor is proposed. Through dimensionality reduction, this five-dimensional system can be transformed into a three-dimensional system. The main work of this paper is to give the comparisons between the five-dimensional system and its dimensionality reduction model. To investigate the dynamic behaviors of the two systems, equilibrium points and stabilities are analyzed, and bifurcation diagrams and Lyapunov exponent spectra are used to explore their properties. In addition, digital signal processing technologies are used to realize this chaotic oscillator, and chaotic sequences are generated by the experimental device, which can be used in encryption applications.
Improving Semi-Supervised Learning with Auxiliary Deep Generative Models
DEFF Research Database (Denmark)
Maaløe, Lars; Sønderby, Casper Kaae; Sønderby, Søren Kaae
Deep generative models based upon continuous variational distributions parameterized by deep networks give state-of-the-art performance. In this paper we propose a framework for extending the latent representation with extra auxiliary variables in order to make the variational distribution more expressive for semi-supervised learning. By utilizing the stochasticity of the auxiliary variable we demonstrate how to train discriminative classifiers resulting in state-of-the-art performance within semi-supervised learning, exemplified by a 0.96% error on MNIST using 100 labeled data points. Furthermore...
Graph-Based Semi-Supervised Hyperspectral Image Classification Using Spatial Information
Jamshidpour, N.; Homayouni, S.; Safari, A.
2017-09-01
Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, there are still some problems that need specific attention. For example, the lack of enough labeled samples and the high dimensionality problem are the two most important issues which degrade the performance of supervised classification dramatically. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in an enormous amount. In this paper, we propose a graph-based semi-supervised classification method, which uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationship among pixels in the spectral and spatial spaces, respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods, such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is too scarce. When there were only five labeled samples for each class, the performance improved by 5.92% and 10.76% compared to spatial graph-based SSL, for the AVIRIS Indian Pine and Pavia University data sets, respectively.
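The joint-graph step, merging a spectral and a spatial graph Laplacian and then propagating the few available labels, can be sketched with the standard harmonic solution. The equal-weight merge and the harmonic propagation are assumptions for illustration; the paper's exact weighting and propagation rule may differ.

```python
import numpy as np

def laplacian(W):
    """Combinatorial graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

def joint_graph_ssl(W_spec, W_spat, y, labeled, alpha=0.5, reg=1e-9):
    """Merge spectral and spatial Laplacians with weight alpha, then
    propagate labels by minimizing f^T L f subject to the labeled values
    (harmonic solution: f_u = -L_uu^{-1} L_ul y_l)."""
    L = alpha * laplacian(W_spec) + (1 - alpha) * laplacian(W_spat)
    n = len(y)
    u = np.setdiff1d(np.arange(n), labeled)
    Luu = L[np.ix_(u, u)] + reg * np.eye(len(u))   # tiny ridge for stability
    Lul = L[np.ix_(u, labeled)]
    f = np.asarray(y, dtype=float).copy()
    f[u] = -np.linalg.solve(Luu, Lul @ f[labeled])
    return f
```

On a four-node chain with the endpoints labeled 0 and 1, the harmonic solution interpolates linearly along the graph.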
Information-theoretic semi-supervised metric learning via entropy regularization.
Niu, Gang; Dai, Bo; Yamada, Makoto; Sugiyama, Masashi
2014-08-01
We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH can be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even under noisy environments.
Scaling Properties of Dimensionality Reduction for Neural Populations and Network Models.
Directory of Open Access Journals (Sweden)
Ryan C Williamson
2016-12-01
Recent studies have applied dimensionality reduction methods to understand how the multi-dimensional structure of neural population activity gives rise to brain function. It is unclear, however, how the results obtained from dimensionality reduction generalize to recordings with larger numbers of neurons and trials, or how these results relate to the underlying network structure. We address these questions by applying factor analysis to recordings in the visual cortex of non-human primates and to spiking network models that self-generate irregular activity through a balance of excitation and inhibition. We compared the scaling trends of two key outputs of dimensionality reduction, shared dimensionality and percent shared variance, with neuron and trial count. We found that the scaling properties of networks with non-clustered and clustered connectivity differed, and that the in vivo recordings were more consistent with the clustered network. Furthermore, recordings from tens of neurons were sufficient to identify the dominant modes of shared variability that generalize to larger portions of the network. These findings can help guide the interpretation of dimensionality reduction outputs in regimes of limited neuron and trial sampling and help relate these outputs to the underlying network structure.
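The two outputs being compared, shared dimensionality and percent shared variance, are both simple functions of fitted factor-analysis parameters. The sketch below computes them from a loading matrix and private noise variances; fitting those parameters to spike counts (e.g., by EM) is not shown, and the 95% cutoff is an assumed convention.

```python
import numpy as np

def shared_variance_metrics(Lam, psi, cutoff=0.95):
    """Given factor-analysis parameters (loading matrix Lam, shape
    n_neurons x n_factors, and private noise variances psi), return:
    - shared dimensionality: number of eigenvalues of the shared
      covariance Lam @ Lam.T needed to reach `cutoff` of shared variance;
    - percent shared variance per neuron: shared / (shared + private)."""
    S = Lam @ Lam.T                                  # shared covariance
    ev = np.clip(np.sort(np.linalg.eigvalsh(S))[::-1], 0, None)
    d_shared = int(np.searchsorted(np.cumsum(ev) / ev.sum(), cutoff) + 1)
    pct_shared = np.diag(S) / (np.diag(S) + psi)
    return d_shared, pct_shared
```

For a loading matrix with two equally strong orthogonal factors and unit private noise, the shared dimensionality is 2 and each neuron's variance is half shared.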
Anisotropic inflation in a 5D standing wave braneworld and effective dimensional reduction
Energy Technology Data Exchange (ETDEWEB)
Gogberashvili, Merab, E-mail: gogber@gmail.com [Andronikashvili Institute of Physics, 6 Tamarashvili St., Tbilisi 0177, Georgia (United States); Javakhishvili State University, 3 Chavchavadze Ave., Tbilisi 0128, Georgia (United States); Herrera-Aguilar, Alfredo, E-mail: aha@fis.unam.mx [Instituto de Ciencias Físicas, Universidad Nacional Autónoma de México, Apdo. Postal 48-3, 62251 Cuernavaca, Morelos (Mexico); Instituto de Física y Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo, Edificio C-3, Ciudad Universitaria, CP 58040, Morelia, Michoacán (Mexico); Malagón-Morejón, Dagoberto, E-mail: malagon@fis.unam.mx [Instituto de Ciencias Físicas, Universidad Nacional Autónoma de México, Apdo. Postal 48-3, 62251 Cuernavaca, Morelos (Mexico); Instituto de Física y Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo, Edificio C-3, Ciudad Universitaria, CP 58040, Morelia, Michoacán (Mexico); Mora-Luna, Refugio Rigel, E-mail: rigel@ifm.umich.mx [Instituto de Física y Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo, Edificio C-3, Ciudad Universitaria, CP 58040, Morelia, Michoacán (Mexico)
2013-10-01
We investigate a cosmological solution within the framework of a 5D standing wave braneworld model generated by gravity coupled to a massless scalar phantom-like field. By obtaining a full exact solution of the model we found a novel dynamical mechanism in which the anisotropic nature of the primordial metric gives rise to (i) inflation along certain spatial dimensions, and (ii) deflation and a shrinking reduction of the number of spatial dimensions along other directions. This dynamical mechanism can be relevant for dimensional reduction in string and other higher-dimensional theories in the attempt of getting a 4D isotropic expanding space–time.
A finite-dimensional reduction method for slightly supercritical elliptic problems
Directory of Open Access Journals (Sweden)
Riccardo Molle
2004-01-01
We describe a finite-dimensional reduction method to find solutions for a class of slightly supercritical elliptic problems. A suitable truncation argument allows us to work in the usual Sobolev space even in the presence of supercritical nonlinearities: we modify the supercritical term in such a way as to have subcritical approximating problems; for these problems, the finite-dimensional reduction can be obtained by applying the methods already developed in the subcritical case; finally, we show that, if the truncation is realized at a sufficiently large level, then the solutions of the approximating problems, given by these methods, also solve the supercritical problems when the parameter is small enough.
International Nuclear Information System (INIS)
Del Frate, F.; Iapaolo, M.; Casadio, S.; Godin-Beekmann, S.; Petitdidier, M.
2005-01-01
Dimensionality reduction can be of crucial importance in the application of inversion schemes to atmospheric remote sensing data. In this study, the problem of dimensionality reduction in the retrieval of ozone concentration profiles from the radiance measurements provided by the instrument Global Ozone Monitoring Experiment (GOME) on board the ESA satellite ERS-2 is considered. By means of radiative transfer modelling, neural networks, and pruning algorithms, a complete procedure has been designed to extract the GOME spectral ranges most crucial for the inversion. The quality of the resulting retrieval algorithm has been evaluated by comparing its performance to that of other schemes and to co-located profiles obtained with lidar measurements.
Supersymmetry and the Parisi-Sourlas dimensional reduction: A rigorous proof
International Nuclear Information System (INIS)
Klein, A.; Landau, L.J.; Perez, J.F.
1984-01-01
Functional integrals that are formally related to the average correlation functions of a classical field theory in the presence of random external sources are given a rigorous meaning. Their dimensional reduction to the Schwinger functions of the corresponding quantum field theory in two fewer dimensions is proven. This is done by reexpressing those functional integrals as expectations of a supersymmetric field theory. The Parisi-Sourlas dimensional reduction of a supersymmetric field theory to a usual quantum field theory in two fewer dimensions is proven. (orig.)
Semi-supervised rail defect detection from imbalanced image data
Hajizadeh, S.; Nunez Vicencio, Alfredo; Tax, D.M.J.; Acarman, Tankut
2016-01-01
Rail defect detection by video cameras has recently gained much attention in both academia and industry. Rail image data has two properties: it is highly imbalanced towards the non-defective class, and it has a large number of unlabeled data samples available for semi-supervised learning.
Semi-Supervised Priors for Microblog Language Identification
Carter, S.; Tsagkias, E.; Weerkamp, W.; Boscarino, C.; Hofmann, K.; Jijkoun, V.; Meij, E.; de Rijke, M.; Weerkamp, W.
2011-01-01
Offering access to information in microblog posts requires successful language identification. Language identification on sparse and noisy data can be challenging. In this paper we explore the performance of a state-of-the-art n-gram-based language identifier, and we introduce two semi-supervised
A Semisupervised Support Vector Machines Algorithm for BCI Systems
Qin, Jianzhao; Li, Yuanqing; Sun, Wei
2007-01-01
As an emerging technology, brain-computer interfaces (BCIs) provide new communication channels that translate brain activities into control signals for devices such as computers and robots. In this study, we propose a semisupervised support vector machine (SVM) algorithm for BCI systems, aiming at reducing the time-consuming training process. In this algorithm, we apply a semisupervised SVM to translate the features extracted from electrical recordings of the brain into control signals. This SVM classifier is built from a small labeled data set and a large unlabeled data set. Meanwhile, to reduce the time for training the semisupervised SVM, we propose a batch-mode incremental learning method, which can also be easily applied to online BCI systems. Additionally, it is suggested in many studies that common spatial pattern (CSP) is very effective in discriminating two different brain states. However, CSP needs a sufficiently large labeled data set. In order to overcome this drawback of CSP, we suggest a two-stage feature extraction method for the semisupervised learning algorithm. We apply our algorithm to two BCI experimental data sets. The offline data analysis results demonstrate the effectiveness of our algorithm. PMID:18368141
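The self-training idea behind such semi-supervised classifiers can be sketched in a few lines. The snippet below is a generic, hypothetical illustration only: it uses a ridge-regularized least-squares classifier as a stand-in for the SVM solver and omits the paper's batch-mode incremental learning and CSP feature extraction.

```python
import numpy as np

def self_training(X_lab, y_lab, X_unlab, rounds=5, conf_thresh=0.8):
    # Generic self-training: repeatedly fit a classifier on the labeled pool,
    # pseudo-label confident unlabeled points, and add them to the pool.
    # A least-squares linear classifier stands in for the SVM; labels are +/-1.
    X, y = X_lab.copy(), y_lab.astype(float).copy()
    U = X_unlab.copy()
    w = None
    for _ in range(rounds):
        A = np.hstack([X, np.ones((len(X), 1))])              # bias column
        w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ y)
        if len(U) == 0:
            break
        scores = np.hstack([U, np.ones((len(U), 1))]) @ w
        conf = np.abs(scores) >= conf_thresh                  # confident points only
        if not conf.any():
            break
        X = np.vstack([X, U[conf]])
        y = np.concatenate([y, np.sign(scores[conf])])        # pseudo-labels
        U = U[~conf]
    return w

# Toy data: two well-separated Gaussian blobs, few labels, many unlabeled points.
rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(-2, 0.3, (3, 2)), rng.normal(2, 0.3, (3, 2))])
y_lab = np.array([-1., -1., -1., 1., 1., 1.])
X_unlab = np.vstack([rng.normal(-2, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
w = self_training(X_lab, y_lab, X_unlab)
pred = np.sign(np.hstack([X_unlab, np.ones((len(X_unlab), 1))]) @ w)
```

On well-separated data the loop labels the unlabeled pool correctly; the BCI setting additionally needs the two-stage feature extraction described in the abstract.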
Semi-supervised Eigenvectors for Locally-biased Learning
DEFF Research Database (Denmark)
Hansen, Toke Jansen; Mahoney, Michael W.
2012-01-01
In many applications, one has side information, e.g., labels that are provided in a semi-supervised manner, about a specific target region of a large data set, and one wants to perform machine learning and data analysis tasks "nearby" that pre-specified target region. Locally-biased problems of t...
Safe semi-supervised learning based on weighted likelihood.
Kawakita, Masanori; Takeuchi, Jun'ichi
2014-05-01
We are interested in developing a safe semi-supervised learning that works in any situation. Semi-supervised learning postulates that n′ unlabeled data are available in addition to n labeled data. However, almost all of the previous semi-supervised methods require additional assumptions (not only unlabeled data) to make improvements on supervised learning. If such assumptions are not met, then the methods possibly perform worse than supervised learning. Sokolovska, Cappé, and Yvon (2008) proposed a semi-supervised method based on a weighted likelihood approach. They proved that this method asymptotically never performs worse than supervised learning (i.e., it is safe) without any assumption. Their method is attractive because it is easy to implement and is potentially general. Moreover, it is deeply related to a certain statistical paradox. However, the method of Sokolovska et al. (2008) assumes a very limited situation, i.e., classification, discrete covariates, n′→∞ and a maximum likelihood estimator. In this paper, we extend their method by modifying the weight. We prove that our proposal is safe in a significantly wide range of situations as long as n≤n′. Further, we give a geometrical interpretation of the proof of safety through the relationship with the above-mentioned statistical paradox. Finally, we show that the above proposal is asymptotically safe even when n′
Label Information Guided Graph Construction for Semi-Supervised Learning.
Zhuang, Liansheng; Zhou, Zihan; Gao, Shenghua; Yin, Jingwen; Lin, Zhouchen; Ma, Yi
2017-09-01
In the literature, most existing graph-based semi-supervised learning methods only use the label information of observed samples in the label propagation stage, while ignoring such valuable information when learning the graph. In this paper, we argue that it is beneficial to consider the label information in the graph learning stage. Specifically, by enforcing the weight of edges between labeled samples of different classes to be zero, we explicitly incorporate the label information into state-of-the-art graph learning methods, such as the low-rank representation (LRR), and propose a novel semi-supervised graph learning method called semi-supervised low-rank representation. This results in a convex optimization problem with linear constraints, which can be solved by the linearized alternating direction method. Though we take LRR as an example, our proposed method is in fact very general and can be applied to any self-representation graph learning method. Experimental results on both synthetic and real data sets demonstrate that the proposed graph learning method can better capture the global geometric structure of the data, and therefore is more effective for semi-supervised learning tasks.
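The core constraint, zeroing graph edges between labeled samples of different classes, can be illustrated without the LRR machinery. The following sketch is an assumption-laden simplification, not the paper's convex solver: it applies the constraint to a plain Gaussian affinity graph and then runs standard label propagation.

```python
import numpy as np

def label_guided_propagation(X, y, alpha=0.9, sigma=1.0, iters=200):
    # y holds class indices for labeled points and -1 for unlabeled points.
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))            # Gaussian affinity graph
    np.fill_diagonal(W, 0.0)
    lab = y >= 0
    for i in np.where(lab)[0]:                    # the paper's key constraint:
        for j in np.where(lab)[0]:                # zero cross-class labeled edges
            if y[i] != y[j]:
                W[i, j] = 0.0
    D = W.sum(1)
    S = W / np.sqrt(np.outer(D, D) + 1e-12)       # symmetric normalization
    k = int(y.max()) + 1
    Y = np.zeros((n, k))
    Y[lab, y[lab]] = 1.0
    F = Y.copy()
    for _ in range(iters):                        # standard label propagation
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(1)

# Toy data: two clusters, one labeled point per cluster.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (15, 2)), rng.normal(4, 0.5, (15, 2))])
y = -np.ones(30, dtype=int)
y[0], y[15] = 0, 1
pred = label_guided_propagation(X, y)
```

With a cleaner graph, propagation assigns each cluster its seed label; in the paper the graph itself is learned (via LRR) under the same zero-edge constraint.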
Semi-supervised and unsupervised extreme learning machines.
Huang, Gao; Song, Shiji; Gupta, Jatinder N D; Wu, Cheng
2014-12-01
Extreme learning machines (ELMs) have proven to be efficient and effective learning mechanisms for pattern classification and regression. However, ELMs are primarily applied to supervised learning problems. Only a few existing research papers have used ELMs to explore unlabeled data. In this paper, we extend ELMs for both semi-supervised and unsupervised tasks based on the manifold regularization, thus greatly expanding the applicability of ELMs. The key advantages of the proposed algorithms are as follows: 1) both the semi-supervised ELM (SS-ELM) and the unsupervised ELM (US-ELM) exhibit learning capability and computational efficiency of ELMs; 2) both algorithms naturally handle multiclass classification or multicluster clustering; and 3) both algorithms are inductive and can handle unseen data at test time directly. Moreover, it is shown in this paper that all the supervised, semi-supervised, and unsupervised ELMs can actually be put into a unified framework. This provides new perspectives for understanding the mechanism of random feature mapping, which is the key concept in ELM theory. Empirical study on a wide range of data sets demonstrates that the proposed algorithms are competitive with the state-of-the-art semi-supervised or unsupervised learning algorithms in terms of accuracy and efficiency.
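A minimal numerical sketch of the SS-ELM idea follows; it is a simplified rendering under stated assumptions (dense Gaussian graph, ridge penalty, no normalization of the Laplacian), not the authors' exact formulation: a random tanh hidden layer plus a graph-Laplacian penalty computed over labeled and unlabeled points together.

```python
import numpy as np

def ss_elm(X, y, labeled, n_hidden=50, lam=1e-3, mu=1e-2, sigma=1.0, seed=0):
    # Random hidden layer (the ELM feature map): weights are never trained.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    # Graph Laplacian over labeled *and* unlabeled points (manifold term).
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    L = np.diag(A.sum(1)) - A
    k = int(y[labeled].max()) + 1
    Y = np.zeros((len(X), k))
    Y[labeled, y[labeled]] = 1.0
    J = np.diag(labeled.astype(float))        # fit error on labeled rows only
    beta = np.linalg.solve(
        H.T @ J @ H + lam * np.eye(n_hidden) + mu * H.T @ L @ H,
        H.T @ J @ Y)                          # closed-form output weights
    return (H @ beta).argmax(1)

# Toy data: two clusters, three labels per class, rest unlabeled.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (15, 2)), rng.normal(4, 0.5, (15, 2))])
y = np.zeros(30, dtype=int); y[15:] = 1
labeled = np.zeros(30, dtype=bool)
labeled[:3] = labeled[15:18] = True
pred = ss_elm(X, y, labeled)
```

The Laplacian term pulls predictions of nearby points together, which is how the unlabeled samples influence the otherwise purely supervised ridge solution.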
ANALYSIS OF IMPACT ON COMPOSITE STRUCTURES WITH THE METHOD OF DIMENSIONALITY REDUCTION
Directory of Open Access Journals (Sweden)
Valentin L. Popov
2015-04-01
In the present paper, we discuss the impact of rigid profiles on continua with non-local criteria for plastic yield. For the important case of media whose hardness is inversely proportional to the indentation radius, we suggest a rigorous treatment based on the method of dimensionality reduction (MDR) and study the example of indentation by a conical profile.
Some remarks on dimensional reduction of Gauge theories and model building
International Nuclear Information System (INIS)
Rudolph, G.; Karl-Marx-Universitaet, Leipzig; Volobujev, I.P.
1989-01-01
We study the group-theoretical aspect of dimensional reduction of pure gauge theories and propose a method of solving the constraint equations for scalar fields. We show that there are possibilities of model building which differ from those commonly used. In particular, we give examples in which the resulting potential is not of Higgs type. (orig.)
The N=4 supersymmetric E8 gauge theory and coset space dimensional reduction
International Nuclear Information System (INIS)
Olive, D.; West, P.
1983-01-01
Reasons are given to suggest that the N=4 supersymmetric E8 gauge theory be considered as a serious candidate for a physical theory. The symmetries of this theory are broken by a scheme based on coset space dimensional reduction. The resulting theory possesses four conventional generations of low-mass fermions together with their mirror particles. (orig.)
Ultraviolet finiteness of N = 8 supergravity, spontaneously broken by dimensional reduction
International Nuclear Information System (INIS)
Sezgin, E.; Nieuwenhuizen, P. van
1982-06-01
The one-loop corrections to scalar-scalar scattering in N = 8 supergravity, with 4 masses from dimensional reduction, are finite. We discuss various mechanisms that cancel the cosmological constant and infra-red divergences due to finite but non-vanishing tadpoles. (author)
Dimensional reduction of 10d heterotic string effective lagrangian with higher derivative terms
International Nuclear Information System (INIS)
Lalak, Z.; Pawelczyk, J.
1989-11-01
Dimensional reduction of the 10d Supergravity-Yang-Mills theories containing up to four derivatives is described. Unexpected nondiagonal corrections to the 4d gauge kinetic function and negative contributions to the scalar potential are found. We analyze the general structure of the resulting Lagrangian and discuss the possible phenomenological consequences. (author)
Dimensional reduction in Bose-Einstein-condensed alkali-metal vapors
International Nuclear Information System (INIS)
Salasnich, L.; Reatto, L.; Parola, A.
2004-01-01
We investigate the effects of dimensional reduction in atomic Bose-Einstein condensates (BECs) induced by a strong harmonic confinement in the cylindrical radial direction or in the cylindrical axial direction. The former case corresponds to a transition from three dimensions (3D) to 1D in cigar-shaped BECs, while the latter case corresponds to a transition from 3D to 2D in disk-shaped BECs. We analyze the first sound velocity in axially homogeneous cigar-shaped BECs and in radially homogeneous disk-shaped BECs. We also consider the dimensional reduction in a BEC confined by a harmonic potential in both the radial and the axial direction. By using a variational approach, we calculate monopole and quadrupole collective oscillations of the BEC. We find that the frequencies of these collective oscillations are related to the dimensionality and to the repulsive or attractive interatomic interaction.
Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.
Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui
2018-03-01
Changing the metric on the data may change the data distribution, hence a good distance metric can promote the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by a multiple kernel representation. By this approach, we project the data into a high-dimensional space, where the data can be well represented by linear ML. Then, we reformulate the linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.
Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio
2015-12-01
This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.
Efficient Computation of Entropy Gradient for Semi-Supervised Conditional Random Fields
National Research Council Canada - National Science Library
Mann, Gideon S; McCallum, Andrew
2007-01-01
Entropy regularization is a straightforward and successful method of semi-supervised learning that augments the traditional conditional likelihood objective function with an additional term that aims...
Restoration of dimensional reduction in the random-field Ising model at five dimensions
Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas
2017-04-01
The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D-2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible with those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D=5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤ D ≤ 5, examining equality at all studied dimensions.
Statistical mechanics of semi-supervised clustering in sparse graphs
International Nuclear Information System (INIS)
Ver Steeg, Greg; Galstyan, Aram; Allahverdyan, Armen E
2011-01-01
We theoretically study semi-supervised clustering in sparse graphs in the presence of pair-wise constraints on the cluster assignments of nodes. We focus on bi-cluster graphs and study the impact of semi-supervision for varying constraint density and overlap between the clusters. Recent results for unsupervised clustering in sparse graphs indicate that there is a critical ratio of within-cluster and between-cluster connectivities below which clusters cannot be recovered with better than random accuracy. The goal of this paper is to examine the impact of pair-wise constraints on the clustering accuracy. Our results suggest that the addition of constraints does not provide automatic improvement over the unsupervised case. When the density of the constraints is sufficiently small, their only impact is to shift the detection threshold while preserving the criticality. Conversely, if the density of (hard) constraints is above the percolation threshold, the criticality is suppressed and the detection threshold disappears.
Semi-Supervised Multiple Feature Analysis for Action Recognition
2013-11-26
in saving labeling costs while simultaneously achieving good performance. Most semi-supervised learning methods assume that nearby points are likely… 3, 5, 10 and 15) per category in the training set, thus resulting in … randomly labeled videos, with the remaining training videos unlabeled… with the increase of labeled training samples, the performance of all algorithms rises. Meanwhile, the performance differences between our method and
Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga-Koontz Transform
Binol, Hamidullah; Ochilov, Shuhrat; Alam, Mohammad S.; Bal, Abdullah
2017-02-01
Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving features belonging to the target is crucial to the performance of target detection/recognition techniques. The Fukunaga-Koontz Transform (FKT) based supervised band reduction technique can be used to meet this requirement. FKT achieves feature selection by transforming into a new space in which the feature classes have complementary eigenvectors. Analysis of these eigenvectors under two classes, target and background clutter, can be utilized for target-oriented band reduction, since each basis function best represents the target class while carrying the least information about the background class. By selecting the few eigenvectors most relevant to the target class, the dimension of hyperspectral data can be reduced, which presents significant advantages for near-real-time target detection applications. The nonlinear properties of the data can be extracted by a kernel approach, which provides better target features. Thus, we propose constructing a kernel FKT (KFKT) for target-oriented band reduction. The performance of the proposed KFKT-based target-oriented dimensionality reduction algorithm has been tested on two real-world hyperspectral data sets, and the results are reported.
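The linear FKT at the heart of this scheme is compact enough to sketch. The code below is a hypothetical illustration of plain FKT only (the paper's kernelized KFKT is not reproduced): it whitens the summed target and background correlation matrices, then keeps the eigenvectors most dominated by the target class.

```python
import numpy as np

def fkt_projection(X_target, X_background, n_keep=1):
    St = X_target.T @ X_target / len(X_target)              # target correlation
    Sb = X_background.T @ X_background / len(X_background)  # clutter correlation
    evals, E = np.linalg.eigh(St + Sb)
    P = E / np.sqrt(evals + 1e-12)            # whitening: P^T (St + Sb) P = I
    # In the whitened space St and Sb share eigenvectors, and their eigenvalues
    # sum to one: directions strong for the target are weak for the clutter.
    lt, V = np.linalg.eigh(P.T @ St @ P)
    order = np.argsort(lt)[::-1]              # most target-dominant first
    return P @ V[:, order[:n_keep]]

# Toy 3-band data: target varies on band 0, clutter on band 1, band 2 is quiet.
rng = np.random.default_rng(2)
Xt = rng.normal(size=(200, 3)) * np.array([3.0, 0.1, 0.1])
Xb = rng.normal(size=(200, 3)) * np.array([0.1, 3.0, 0.1])
F = fkt_projection(Xt, Xb, n_keep=1)
```

On this toy data the retained direction aligns with the target-dominated band, which is the band-reduction behavior the abstract describes.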
Reduction formalism for dimensionally regulated one-loop N-point integrals
International Nuclear Information System (INIS)
Binoth, T.; Guillet, J.Ph.; Heinrich, G.
2000-01-01
We consider one-loop scalar and tensor integrals with an arbitrary number of external legs relevant for multi-parton processes in massless theories. We present a procedure to reduce N-point scalar functions with generic 4-dimensional external momenta to box integrals in (4-2ε) dimensions. We derive a formula valid for arbitrary N and give an explicit expression for N=6. Further, a tensor reduction method for N-point tensor integrals is presented. We prove that generically higher-dimensional integrals contribute only at order ε for N≥5. The tensor reduction can be solved iteratively such that any tensor integral is expressible in terms of scalar integrals. Explicit formulas are given up to N=6.
Dimensionality Reduction Methods: Comparative Analysis of methods PCA, PPCA and KPCA
Directory of Open Access Journals (Sweden)
Jorge Arroyo-Hernández
2016-01-01
Dimensionality reduction methods are algorithms that map a data set into subspaces derived from the original space, of fewer dimensions, allowing a description of the data at a lower cost. Due to their importance, they are widely used in machine learning. This article presents a comparative analysis of the PCA, PPCA and KPCA dimensionality reduction methods. A reconstruction experiment on worm-shape data was performed using structures of landmarks located on the body contour, with each method using different numbers of principal components. The results showed that all methods can be seen as alternative processes. Nevertheless, thanks to its potential for analysis in the feature space and the presented method for computing its preimage, KPCA offers a better method for recognition and pattern extraction.
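A minimal numpy rendering of two of the compared methods may make the comparison concrete; it covers plain PCA and RBF-kernel KPCA only (PPCA and the preimage computation are omitted), and should be read as an illustrative sketch rather than the article's implementation.

```python
import numpy as np

def pca(X, k):
    # Project centered data onto the top-k right singular vectors.
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kpca(X, k, gamma=0.5):
    # Kernel PCA: eigendecompose the double-centered RBF Gram matrix.
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                   # center in feature space
    evals, V = np.linalg.eigh(Kc)
    idx = np.argsort(evals)[::-1][:k]
    return V[:, idx] * np.sqrt(np.maximum(evals[idx], 1e-12))

# Toy data: two far-apart blobs; both methods reduce 2D points to 1D scores.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.4, (15, 2)), rng.normal(5, 0.4, (15, 2))])
Z_pca = pca(X, 1)
Z_kpca = kpca(X, 1)
```

On this toy data the first kernel principal component separates the two blobs by sign, illustrating how KPCA exposes structure in the feature space induced by the kernel.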
Directory of Open Access Journals (Sweden)
Fubiao Feng
2017-03-01
Recently, graph embedding has drawn great attention for dimensionality reduction in hyperspectral imagery. For example, locality preserving projection (LPP) utilizes the typical Euclidean distance in a heat kernel to create an affinity matrix and projects the high-dimensional data into a lower-dimensional space. However, the Euclidean distance is not sufficiently correlated with the intrinsic spectral variation of a material, which may result in inappropriate graph representation. In this work, a graph-based discriminant analysis with spectral similarity (denoted as GDA-SS) measurement is proposed, which fully considers how spectral curves change across spectral bands. Experimental results based on real hyperspectral images demonstrate that the proposed method is superior to traditional methods, such as supervised LPP, and the state-of-the-art sparse graph-based discriminant analysis (SGDA).
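A bare-bones LPP sketch clarifies where the affinity enters; the GDA-SS idea amounts to replacing the Euclidean `d2` below with a spectral-similarity measure (e.g. spectral angle). Variable names and the toy data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def lpp(X, k=1, sigma=1.0):
    # Heat-kernel affinity from Euclidean distances; GDA-SS would substitute
    # a spectral-similarity measure for d2 here.
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(1))
    L = D - W
    # Generalized eigenproblem X^T L X a = lambda X^T D X a, via Cholesky.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])
    C = np.linalg.cholesky(B)
    M = np.linalg.solve(C, np.linalg.solve(C, A).T).T   # C^{-1} A C^{-T}
    evals, Yv = np.linalg.eigh(M)
    V = np.linalg.solve(C.T, Yv)                        # back-transform
    return V[:, :k]                  # smallest eigenvalues = smoothest projections

# Toy data: band 0 carries cluster structure, band 1 is local noise.
rng = np.random.default_rng(4)
x0 = np.concatenate([rng.normal(-3, 0.2, 20), rng.normal(3, 0.2, 20)])
x1 = rng.uniform(-1, 1, 40)
X = np.column_stack([x0, x1])
a = lpp(X, k=1)[:, 0]
```

LPP keeps neighbors close after projection, so its first direction weights the structured band much more heavily than the noisy one.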
Semi-supervised learning for ordinal Kernel Discriminant Analysis.
Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C
2016-12-01
Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.
N=2-Maxwell-Chern-Simons model with anomalous magnetic moment coupling via dimensional reduction
International Nuclear Information System (INIS)
Christiansen, H.R.; Cunha, M.S.; Helayel Neto, Jose A.; Manssur, L.R.U; Nogueira, A.L.M.A.
1998-02-01
An N=1-supersymmetric version of the Cremmer-Scherk-Kalb-Ramond model with non-minimal coupling to matter is built up both in terms of superfields and in a component field formalism. By adopting a dimensional reduction procedure, the N=2-D=3 counterpart of the model comes out, with two main features: a genuine (diagonal) Chern-Simons term and an anomalous magnetic moment coupling between matter and the gauge potential. (author)
Use of dimensionality reduction for structural mapping of hip joint osteoarthritis data
International Nuclear Information System (INIS)
Theoharatos, C; Fotopoulos, S; Boniatis, I; Panayiotakis, G; Panagiotopoulos, E
2009-01-01
A visualization-based, computer-oriented classification scheme is proposed for assessing the severity of hip osteoarthritis (OA) using dimensionality reduction techniques. The introduced methodology addresses the limited ability of physicians to structurally organize the entire available set of medical data into semantically similar categories, and provides the capability to make visual observations on the ensemble of data using low-dimensional biplots. In this work, 18 pelvic radiographs of patients with verified unilateral hip OA are evaluated by experienced physicians and assessed as Normal, Mild or Severe following the Kellgren and Lawrence scale. Two regions of interest corresponding to radiographic hip joint spaces are determined and representative features are extracted using a typical texture analysis technique. The structural organization of all hip OA data is accomplished using distance- and topology-preserving dimensionality reduction techniques. The resulting map is a low-dimensional biplot that reflects the intrinsic organization of the ensemble of available data and can be directly accessed by the physician. The proposed visualization scheme can potentially reveal critical data similarities and help the operator to visually assess their initial diagnosis. In addition, it can be used to detect putative clustering tendencies, examine the presence of data similarities and indicate the existence of possible false alarms in the initial perceptual evaluation.
Kusratmoko, Eko; Wibowo, Adi; Cholid, Sofyan; Pin, Tjiong Giok
2017-07-01
This paper presents the results of applying the participatory three-dimensional mapping (P3DM) method to facilitate the people of Cibanteng village in compiling a landslide disaster risk reduction program. Physical factors, such as high rainfall, topography, geology and land use, coupled with demographic and socio-economic conditions, make the Cibanteng region highly susceptible to landslides. During 2013-2014, two landslides occurred, causing economic losses as a result of damage to homes and farmland. Participatory mapping is one part of community-based disaster risk reduction (CBDRR), because the involvement of local communities is a prerequisite for sustainable disaster risk reduction. In this activity, participatory mapping was done in two ways, namely participatory two-dimensional mapping (P2DM), focusing on mapping the disaster areas, and participatory three-dimensional mapping (P3DM), covering the entire territory of the village. Based on the results of P3DM, the ability of the communities to understand the village environment spatially was well tested and honed, facilitating the preparation of the CBDRR programs. Furthermore, the P3DM method can be applied to other disaster areas, as it becomes a medium of effective dialogue between all levels of the involved communities.
Liu, Jing; Zhao, Songzheng; Wang, Gang
2018-01-01
With the development of Web 2.0 technology, social media websites have become lucrative but under-explored data sources for extracting adverse drug events (ADEs), which are a serious health problem. Besides ADE, other semantic relation types (e.g., drug indication and beneficial effect) can hold between the drug and adverse event mentions, making ADE relation extraction - distinguishing the ADE relationship from other relation types - necessary. However, conducting ADE relation extraction in a social media environment is not a trivial task because of the expertise-dependent, time-consuming and costly annotation process, and the feature space's high dimensionality attributed to intrinsic characteristics of social media data. This study aims to develop a framework for ADE relation extraction using patient-generated content in social media with better performance than that delivered by previous efforts. To achieve this objective, a general semi-supervised ensemble learning framework, SSEL-ADE, was developed. The framework exploits various lexical, semantic, and syntactic features, and integrates ensemble learning and semi-supervised learning. A series of experiments were conducted to verify the effectiveness of the proposed framework. Empirical results demonstrate the effectiveness of each component of SSEL-ADE and reveal that our proposed framework outperforms most existing ADE relation extraction methods. SSEL-ADE can facilitate enhanced ADE relation extraction performance, thereby providing more reliable support for pharmacovigilance. Moreover, the proposed semi-supervised ensemble methods have the potential of being applied to effectively deal with other social media-based problems. Copyright © 2017 Elsevier B.V. All rights reserved.
Qing Ye; Hao Pan; Changhua Liu
2015-01-01
A novel semisupervised extreme learning machine (ELM) with clustering discrimination manifold regularization (CDMR) framework named CDMR-ELM is proposed for semisupervised classification. By using unsupervised fuzzy clustering method, CDMR framework integrates clustering discrimination of both labeled and unlabeled data with twinning constraints regularization. Aiming at further improving the classification accuracy and efficiency, a new multiobjective fruit fly optimization algorithm (MOFOA)...
Directory of Open Access Journals (Sweden)
Mingwei Leng
2013-01-01
The accuracy of most existing semisupervised clustering algorithms based on a small labeled dataset is low when dealing with multidensity and imbalanced datasets, and labeling data is quite expensive and time consuming in many real-world applications. This paper focuses on active data selection and semisupervised clustering in multidensity and imbalanced datasets, and proposes an active semisupervised clustering algorithm. The proposed algorithm uses an active mechanism for data selection to minimize the amount of labeled data, and it utilizes multiple thresholds to expand labeled datasets on multidensity and imbalanced datasets. Three standard datasets and one synthetic dataset are used to demonstrate the proposed algorithm, and the experimental results show that the proposed semisupervised clustering algorithm has a higher accuracy and a more stable performance in comparison to other clustering and semisupervised clustering algorithms, especially when the datasets are multidensity and imbalanced.
Directory of Open Access Journals (Sweden)
Sathya Kumar Devireddy
2014-01-01
Objective: The aim was to assess the accuracy of three-dimensional anatomical reductions achieved by the open method of treatment in cases of displaced unilateral mandibular subcondylar fractures using preoperative (pre op) and postoperative (post op) computed tomography (CT) scans. Materials and Methods: In this prospective study, 10 patients with unilateral subcondylar fractures confirmed by an orthopantomogram were included. A pre op CT and a post op CT 1 week after the surgical procedure were taken in the axial, coronal and sagittal planes along with three-dimensional reconstruction. Standard anatomical parameters, which undergo changes due to fractures of the mandibular condyle, were measured in the pre and post op CT scans in three planes and statistically analysed for the accuracy of the reduction by comparing the following variables: (a) pre op fractured and nonfractured side, (b) post op fractured and nonfractured side, (c) pre op fractured and post op fractured side. P < 0.05 was considered significant. Results: Three-dimensional anatomical reduction was possible in 9 out of 10 cases (90%). The statistical analysis of each parameter for the three variables revealed (P < 0.05) that there was a gross change in the dimensions of the parameters obtained on the pre op fractured and nonfractured sides. When these parameters were assessed in the post op CT for the three variables, there was no statistical difference between the post op fractured side and the nonfractured side. The same parameters were analysed for the three variables on the pre op fractured and post op fractured sides, and a significant statistical difference was found, suggesting a considerable change in the dimensions of the fractured side post operatively. Conclusion: The statistical and clinical results in our study emphasise that it is possible to fix the condyle in its three-dimensional anatomical position with the open method of treatment and avoid post op degenerative joint changes. CT is the ideal imaging tool and should be used on
Directory of Open Access Journals (Sweden)
Ross S Williamson
2015-04-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
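The MID/LNP equivalence described in this abstract can be illustrated numerically: for simulated LNP data, the Poisson log-likelihood is maximized near the true stimulus filter. This is a minimal illustrative sketch, not the authors' code; the filter, the exponential nonlinearity, and the rate offset are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an LNP neuron: rate = exp(k . x - 1), spikes ~ Poisson(rate).
D, T = 8, 20000
k_true = rng.normal(size=D)
k_true /= np.linalg.norm(k_true)
X = rng.normal(size=(T, D))          # white Gaussian stimuli
rates = np.exp(X @ k_true - 1.0)     # offset keeps rates moderate
spikes = rng.poisson(rates)

def lnp_loglik(k, X, spikes):
    """Poisson log-likelihood of an LNP model with exponential nonlinearity,
    dropping the spike-count factorial constant."""
    r = np.exp(X @ k - 1.0)
    return np.sum(spikes * np.log(r) - r)

# The true filter should score higher than a random direction.
k_rand = rng.normal(size=D)
k_rand /= np.linalg.norm(k_rand)
ll_true = lnp_loglik(k_true, X, spikes)
ll_rand = lnp_loglik(k_rand, X, spikes)
```

Maximizing this log-likelihood over `k` is, per the abstract, equivalent to maximizing the empirical single-spike information when spiking is Poisson.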
2013-01-01
Background: The structured organization of cells in the brain plays a key role in its functional efficiency. This delicate organization is the consequence of the unique molecular identity of each cell, gradually established by precise spatiotemporal gene expression control during development. Currently, studies on the molecular-structural association are beginning to reveal how the spatiotemporal gene expression patterns are related to cellular differentiation and structural development. Results: In this article, we aim at a global, data-driven study of the relationship between gene expressions and neuroanatomy in the developing mouse brain. To enable visual explorations of the high-dimensional data, we map the in situ hybridization gene expression data to a two-dimensional space by preserving both the global and the local structures. Our results show that the developing brain anatomy is largely preserved in the reduced gene expression space. To provide a quantitative analysis, we cluster the reduced data into groups and measure the consistency with neuroanatomy at multiple levels. Our results show that the clusters in the low-dimensional space are more consistent with neuroanatomy than those in the original space. Conclusions: Gene expression patterns and developing brain anatomy are closely related. Dimensionality reduction and visual exploration facilitate the study of this relationship. PMID:23845024
Lifetime of rho meson in correlation with magnetic-dimensional reduction
Energy Technology Data Exchange (ETDEWEB)
Kawaguchi, Mamiya [Nagoya University, Department of Physics, Nagoya (Japan); Matsuzaki, Shinya [Nagoya University, Department of Physics, Nagoya (Japan); Nagoya University, Institute for Advanced Research, Nagoya (Japan)
2017-04-15
It is naively expected that in a strong magnetic field, Landau quantization prevents the neutral rho meson from decaying to a charged pion pair, so the neutral rho meson becomes long-lived. To examine this naive observation closely, we explicitly compute the charged pion loop in the magnetic field at the one-loop level, to evaluate the magnetic dependence of the lifetime of the neutral rho meson as well as its mass. Due to the dimensional reduction induced by the magnetic field (violation of Lorentz invariance), the polarization (spin s_z = 0, ±1) modes of the rho meson, as well as the corresponding pole mass and width, are decomposed in a nontrivial manner compared to the vacuum case. To see the significance of the reduction effect, we simply take the lowest-Landau-level approximation to analyze the spin-dependent rho masses and widths. We find that the "fate" of the rho meson may be more complicated because of the magnetic-dimensional reduction: as the magnetic field increases, the rho width for spin s_z = 0 starts to develop, reaches a peak, then vanishes at the critical magnetic field to which the folklore refers. In contrast, the decay rates of the other rho modes, with s_z = ±1, monotonically increase as the magnetic field develops. The correlation between the polarization dependence and the Landau-level truncation is also addressed. (orig.)
Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.
Glaser, Joshua I; Zamft, Bradley M; Church, George M; Kording, Konrad P
2015-01-01
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
Karayianni, Katerina N; Grimaldi, Keith A; Nikita, Konstantina S; Valavanis, Ioannis K
2015-01-01
This paper aims to elucidate the complex etiology underlying obesity by analysing data from a large nutrigenetics study, in which nutritional and genetic factors associated with obesity were recorded for around two thousand individuals. In our previous work, these data were analysed using artificial neural network methods, which identified optimised subsets of factors for predicting obesity status. Those methods did not reveal, however, how the selected factors interact with each other in the resulting predictive models. For that reason, parallel Multifactor Dimensionality Reduction (pMDR) was used here to further analyse the pre-selected subsets of nutrigenetic factors. Within pMDR, predictive models using up to eight factors were constructed, further reducing the input dimensionality, while rules describing the interactive effects of the selected factors were derived. In this way, it was possible to identify specific genetic variations and their interactive effects with particular nutritional factors, which are now under further study.
Denoising by semi-supervised kernel PCA preimaging
DEFF Research Database (Denmark)
Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai
2014-01-01
Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications, kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-image step that maps denoised feature-space points back to input space.
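The denoising pipeline this abstract describes (project onto leading kernel principal components, then estimate a pre-image in input space) can be sketched with scikit-learn's built-in ridge-regression pre-image approximation. This is the plain, not semi-supervised, variant; the data set and kernel parameters are illustrative.

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

# Noisy nonlinear data as a stand-in for observations to denoise.
X, _ = make_circles(n_samples=300, factor=0.3, noise=0.08, random_state=0)

kpca = KernelPCA(
    n_components=4,
    kernel="rbf",
    gamma=10.0,
    fit_inverse_transform=True,  # learns a ridge-regression pre-image map
    alpha=0.1,                   # ridge penalty of the pre-image regression
    random_state=0,
)
codes = kpca.fit_transform(X)               # reduced feature-space codes
X_denoised = kpca.inverse_transform(codes)  # approximate pre-images
```

The semi-supervised method of the paper replaces this generic pre-image estimate with one informed by labeled examples.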
International Nuclear Information System (INIS)
Fiziev, P P; Shirkov, D V
2012-01-01
The paper presents a generalization and further development of our recent publications, where solutions of the Klein–Fock–Gordon equation defined on a few particular D = (2 + 1)-dimensional static spacetime manifolds were considered. The latter involve toy models of two-dimensional spaces with axial symmetry, including dimensional reduction to the one-dimensional space as a singular limiting case. Here, the non-static models of space geometry with axial symmetry are under consideration. To make these models closer to physical reality, we define a set of ‘admissible’ shape functions ρ(t, z) as the (2 + 1)-dimensional Einstein equation solutions in the vacuum spacetime, in the presence of the Λ-term and for the spacetime filled with the standard ‘dust’. It is curious that in the last case the Einstein equations reduce to the well-known Monge–Ampère equation, thus enabling one to obtain the general solution of the Cauchy problem, as well as a set of other specific solutions involving one arbitrary function. A few explicit solutions of the Klein–Fock–Gordon equation in this set are given. An interesting qualitative feature of these solutions relates to the dimensional reduction points, their classification and time behavior. In particular, these new entities could provide us with novel insight into the nature of P- and T-violations and of the Big Bang. A short comparison with other attempts to utilize the dimensional reduction of the spacetime is given. (paper)
Wideband radar cross section reduction using two-dimensional phase gradient metasurfaces
Energy Technology Data Exchange (ETDEWEB)
Li, Yongfeng; Qu, Shaobo; Wang, Jiafu; Chen, Hongya [College of Science, Air Force Engineering University, Xi' an, Shaanxi 710051 (China); Zhang, Jieqiu [College of Science, Air Force Engineering University, Xi' an, Shaanxi 710051 (China); Electronic Materials Research Laboratory, Key Laboratory of Ministry of Education, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China); Xu, Zhuo [Electronic Materials Research Laboratory, Key Laboratory of Ministry of Education, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China); Zhang, Anxue [School of Electronics and Information Engineering, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China)
2014-06-02
Phase gradient metasurfaces (PGMs) are artificial surfaces that can provide pre-defined in-plane wave-vectors to manipulate the directions of refracted/reflected waves. In this Letter, we propose to achieve wideband radar cross section (RCS) reduction using two-dimensional (2D) PGMs. A 2D PGM was designed using a square combination of 49 split-ring sub-unit cells. The PGM can provide additional wave-vectors along the two in-plane directions simultaneously, leading to either surface wave conversion, deflected reflection, or diffuse reflection. Both simulation and experimental results verified the wideband, polarization-independent, high-efficiency RCS reduction induced by the 2D PGM.
Maximum margin semi-supervised learning with irrelevant data.
Yang, Haiqin; Huang, Kaizhu; King, Irwin; Lyu, Michael R
2015-10-01
Semi-supervised learning (SSL) is a typical learning paradigm that trains a model from both labeled and unlabeled data. Traditional SSL models usually assume the unlabeled data are relevant to the labeled data, i.e., that they follow the same distribution as the targeted labeled data. In this paper, we address a different, yet formidable scenario in semi-supervised classification, where the unlabeled data may contain data irrelevant to the labeled data. To tackle this problem, we develop a maximum margin model, named the tri-class support vector machine (3C-SVM), to utilize all available training data while seeking a hyperplane that separates the targeted data well. Our 3C-SVM exhibits several characteristics and advantages. First, it does not need any prior knowledge or explicit assumption on the data relatedness. On the contrary, it can relieve the effect of irrelevant unlabeled data based on the logistic principle and the maximum entropy principle. That is, 3C-SVM approaches an ideal classifier: it relies heavily on labeled data and is confident on the relevant data lying far away from the decision hyperplane, while maximally ignoring the irrelevant data, which are hardly distinguishable. Second, theoretical analysis is provided to prove under what conditions the irrelevant data can help to seek the hyperplane. Third, 3C-SVM is a generalized model that unifies several popular maximum margin models, including standard SVMs, semi-supervised SVMs (S(3)VMs), and SVMs learned from the universum (U-SVMs), as its special cases. More importantly, we deploy a concave-convex procedure to solve the proposed 3C-SVM, transforming the original mixed-integer program into a semi-definite programming relaxation, and finally into a sequence of quadratic programming subproblems, which yields the same worst-case time complexity as that of S(3)VMs. Finally, we demonstrate the effectiveness and efficiency of our proposed 3C-SVM through systematic experimental comparisons.
Semi-supervised morphosyntactic classification of Old Icelandic.
Urban, Kryztof; Tangherlini, Timothy R; Vijūnas, Aurelijus; Broadwell, Peter M
2014-01-01
We present IceMorph, a semi-supervised morphosyntactic analyzer of Old Icelandic. In addition to machine-read corpora and dictionaries, it applies a small set of declension prototypes to map corpus words to dictionary entries. A web-based GUI allows expert users to modify and augment data through an online process. A machine learning module incorporates prototype data, edit-distance metrics, and expert feedback to continuously update part-of-speech and morphosyntactic classification. An advantage of the analyzer is its ability to achieve competitive classification accuracy with minimum training data.
Directory of Open Access Journals (Sweden)
N.R. Sakthivel
2014-03-01
Bearing faults, impeller faults, seal faults and cavitation are the main causes of breakdown in a mono-block centrifugal pump; hence, the detection and diagnosis of these mechanical faults is crucial for its reliable operation. Based on continuous acquisition of signals with a data acquisition system, it is possible to classify the faults. This is achieved by extracting features from the measured data and employing data mining approaches to explore the structural information hidden in the acquired signals. In the present study, statistical features derived from the vibration data are used. In order to increase the robustness of the classifier and to reduce the data processing load, dimensionality reduction is necessary. In this paper, dimensionality reduction is performed using both traditional and nonlinear dimensionality reduction techniques, and the effectiveness of each technique is verified using visual analysis. The reduced feature set is then classified using a decision tree. The results obtained are compared with those generated by classifiers such as Naïve Bayes, Bayes Net and kNN. The aim is to identify the best dimensionality reduction technique–classifier combination.
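The dimensionality reduction–classifier combination this abstract evaluates can be sketched as a single pipeline. The synthetic data below is a placeholder, not the study's vibration features; the feature counts, class count (healthy plus four fault types) and PCA dimension are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for statistical vibration features of a pump with
# 5 conditions (healthy + bearing/impeller/seal faults and cavitation).
X, y = make_classification(
    n_samples=600, n_features=24, n_informative=10,
    n_classes=5, n_clusters_per_class=1, random_state=0,
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# One dimensionality-reduction-technique + classifier combination under test.
model = make_pipeline(StandardScaler(), PCA(n_components=8),
                      DecisionTreeClassifier(random_state=0))
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
```

Swapping the `PCA` step for a nonlinear embedding, or the tree for kNN or Naïve Bayes, reproduces the kind of combination comparison the paper performs.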
Directory of Open Access Journals (Sweden)
Dai Hongying
2013-01-01
Background: Multifactor Dimensionality Reduction (MDR) has been widely applied to detect gene-gene (GxG) interactions associated with complex diseases. Existing MDR methods summarize disease risk by a dichotomous predisposing model (high-risk/low-risk) derived from one optimal GxG interaction, which does not take the accumulated effects of multiple GxG interactions into account. Results: We propose an Aggregated-Multifactor Dimensionality Reduction (A-MDR) method that exhaustively searches for and detects significant GxG interactions to generate an epistasis-enriched gene network. An aggregated epistasis-enriched risk score, which takes multiple GxG interactions into account simultaneously, replaces the dichotomous predisposing risk variable and provides higher resolution in the quantification of disease susceptibility. We evaluate this new A-MDR approach in a broad range of simulations. We also present the results of applying the A-MDR method to a data set derived from Juvenile Idiopathic Arthritis patients treated with methotrexate (MTX), which revealed several GxG interactions in the folate pathway that were associated with treatment response. The epistasis-enriched risk score that pooled information from 82 significant GxG interactions distinguished MTX responders from non-responders with 82% accuracy. Conclusions: The proposed A-MDR is innovative within the MDR framework for investigating aggregated effects among GxG interactions. New measures (pOR, pRR and pChi) are proposed to detect multiple GxG interactions.
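The dichotomous high-risk/low-risk step that A-MDR builds on can be sketched for one two-SNP interaction: each genotype cell is labeled high-risk when its case/control ratio exceeds the overall ratio. The simulated SNP data and risk model below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Genotypes coded 0/1/2 for two SNPs; binary disease status, with a toy
# interaction: risk elevated when both SNPs carry a minor allele.
n = 2000
g1 = rng.integers(0, 3, size=n)
g2 = rng.integers(0, 3, size=n)
p = np.where((g1 >= 1) & (g2 >= 1), 0.6, 0.3)
status = rng.random(n) < p

def mdr_cells(g1, g2, status):
    """Core MDR step: mark each 3x3 genotype cell high-risk (True) when its
    case/control ratio exceeds the overall case/control ratio."""
    overall = status.sum() / max((~status).sum(), 1)
    high = np.zeros((3, 3), dtype=bool)
    for a in range(3):
        for b in range(3):
            m = (g1 == a) & (g2 == b)
            cases, ctrls = status[m].sum(), (~status[m]).sum()
            high[a, b] = cases / max(ctrls, 1) > overall
    return high

high_risk = mdr_cells(g1, g2, status)
```

A-MDR then replaces this single dichotomous table with a risk score aggregated over many significant interactions.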
Rhythmic dynamics and synchronization via dimensionality reduction: application to human gait.
Directory of Open Access Journals (Sweden)
Jie Zhang
Reliable characterization of the locomotor dynamics of human walking is vital to understanding the neuromuscular control of human locomotion and to disease diagnosis. However, the inherent oscillation and the ubiquity of noise in such non-strictly periodic signals pose great challenges to current methodologies. To this end, we exploit state-of-the-art pattern recognition technology and, specifically, dimensionality reduction techniques, and propose to reconstruct and characterize the dynamics accurately on the cycle scale of the signal. This is achieved by deriving a low-dimensional representation of the cycles through global optimization, which effectively preserves the topology of the cycles that are embedded in a high-dimensional Euclidean space. Our approach demonstrates a clear advantage over traditional methods in capturing the intrinsic dynamics and probing the subtle synchronization patterns of uni/bivariate oscillatory signals. Application to human gait data for healthy subjects and diabetics reveals a significant difference in the dynamics of ankle movements and ankle-knee coordination, but not in knee movements. These results indicate that the impaired sensory feedback from the feet due to diabetes does not influence the knee movement in general, and that normal human walking is not critically dependent on feedback from the peripheral nervous system.
Gao, Yang; Wang, Xuesong; Cheng, Yuhu; Wang, Z Jane
2015-08-01
To take full advantage of hyperspectral information while avoiding data redundancy and the curse of dimensionality, dimensionality reduction (DR) is particularly important for analyzing hyperspectral data. Exploiting the tensor characteristic of hyperspectral data, a DR algorithm based on a class-aware tensor neighborhood graph and patch alignment is proposed here. First, hyperspectral data are represented in tensor form through a window field to keep the spatial information of each pixel. Second, using a tensor distance criterion, a class-aware tensor neighborhood graph containing discriminating information is obtained. In the third step, employing the patch alignment framework extended to the tensor space, we can obtain globally optimal spectral-spatial information. Finally, the solution of the tensor subspace is calculated using an iterative method, and low-dimensional projection matrices for hyperspectral data are obtained accordingly. The proposed method effectively explores the spectral and spatial information in hyperspectral data simultaneously. Experimental results on three real hyperspectral datasets show that, compared with some popular vector- and tensor-based DR algorithms, the proposed method can yield better performance while requiring fewer tensor training samples.
Ray, S. Saha
2018-04-01
In this paper, the symmetry analysis and similarity reduction of the (2+1)-dimensional Bogoyavlensky-Konopelchenko (B-K) equation are investigated by means of the geometric approach of an invariance group, which is equivalent to the classical Lie symmetry method. Using the extended Harrison and Estabrook differential forms approach, the infinitesimal generators for the (2+1)-dimensional B-K equation are obtained. First, the vector field associated with the Lie group of transformations is derived. Then the symmetry reduction and the corresponding explicit exact solution of the (2+1)-dimensional B-K equation are obtained.
International Nuclear Information System (INIS)
Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua
2017-01-01
Extraction of sensitive features is a challenging but key task in data-driven machinery running-state identification. To solve this problem, a method for machinery running-state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous-amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running state. Then, the mixed-domain feature set is fed into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract intrinsic structure information from both labeled and unlabeled state samples, so the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class-discrimination information is integrated into the dimension reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are fed into a pattern recognition algorithm to identify the running state. The effectiveness of the proposed method is verified by a running-state identification case for a gearbox, and the results confirm the improved accuracy of the identification. (paper)
Krivov, Sergei V
2011-07-01
Dimensionality reduction is ubiquitous in the analysis of complex dynamics. Conventional dimensionality reduction techniques, however, focus on reproducing the underlying configuration space rather than the dynamics itself, so the constructed low-dimensional space does not provide a complete and accurate description of the dynamics. Here I describe how to perform dimensionality reduction while preserving the essential properties of the dynamics. The approach is illustrated by analyzing the game of chess, an archetype of complex dynamics. A variable that provides a complete and accurate description of chess dynamics is constructed. The winning probability is predicted by describing the game as a random walk on the free-energy landscape associated with the variable. The approach suggests a possible way of obtaining a simple yet accurate description of many important complex phenomena. The analysis of the chess game shows that the approach can quantitatively describe the dynamics of processes where human decision-making plays a central role, e.g., financial and social dynamics.
Active link selection for efficient semi-supervised community detection
Yang, Liang; Jin, Di; Wang, Xiao; Cao, Xiaochun
2015-01-01
Several semi-supervised community detection algorithms have been proposed recently to improve the performance of traditional topology-based methods. However, most of them focus on how to integrate supervised information with topology information; few pay attention to which information is critical for performance improvement. This leads to a large demand for supervised information, which is expensive or difficult to obtain in most fields. For this problem we propose an active link selection framework: we actively select the most uncertain and informative links for human labeling, for efficient utilization of the supervised information. We also disconnect the most likely inter-community edges to further improve efficiency. Our main idea is that, by connecting uncertain nodes to their community hubs and disconnecting the inter-community edges, one can sharpen the block structure of the adjacency matrix more efficiently than by randomly labeling links as existing methods do. Experiments on both synthetic and real networks demonstrate that our new approach significantly outperforms the existing methods in terms of the efficiency of using supervised information. It needs ~13% of the supervised information to achieve a performance similar to that of the original semi-supervised approaches. PMID:25761385
Semi-Supervised Multitask Learning for Scene Recognition.
Lu, Xiaoqiang; Li, Xuelong; Mou, Lichao
2015-09-01
Scene recognition has been widely studied to understand visual information at the level of objects and their relationships. Many methods have been proposed for scene recognition, but they have difficulty improving accuracy, mainly due to two limitations: 1) lack of analysis of intrinsic relationships across different scales, e.g., the initial input and its down-sampled versions, and 2) existence of redundant features. This paper develops a semi-supervised learning mechanism to reduce these two limitations. To address the first limitation, we propose a multitask model to integrate scene images of different resolutions. For the second limitation, we build a model of sparse feature selection-based manifold regularization (SFSMR) to select the optimal information and preserve the underlying manifold structure of the data. SFSMR combines the advantages of sparse feature selection and manifold regularization. Finally, we link the multitask model and SFSMR, and propose a semi-supervised learning method to reduce the two limitations. Experimental results show improved accuracy in scene recognition.
MULTI-LABEL ASRS DATASET CLASSIFICATION USING SEMI-SUPERVISED SUBSPACE CLUSTERING
National Aeronautics and Space Administration — MULTI-LABEL ASRS DATASET CLASSIFICATION USING SEMI-SUPERVISED SUBSPACE CLUSTERING MOHAMMAD SALIM AHMED, LATIFUR KHAN, NIKUNJ OZA, AND MANDAVA RAJESWARI Abstract....
The supersymmetric Adler-Bardeen theorem and regularization by dimensional reduction
International Nuclear Information System (INIS)
Ensign, P.; Mahanthappa, K.T.
1987-01-01
We examine the subtraction scheme dependence of the anomaly of the supersymmetric, gauge singlet axial current in pure and coupled supersymmetric Yang-Mills theories. Preserving supersymmetry and gauge invariance explicitly by using supersymmetric background field theory and dimensional reduction, we show that only the one-loop value of the axial anomaly is subtraction scheme independent, and that one can always define a subtraction scheme in which the Adler-Bardeen theorem is satisfied to all orders in perturbation theory. In general this subtraction scheme may be non-minimal, but in both the pure and the coupled theories, the Adler-Bardeen theorem is satisfied to two loops in minimal subtraction. (orig.)
Prototype Vector Machine for Large Scale Semi-Supervised Learning
Energy Technology Data Exchange (ETDEWEB)
Zhang, Kai; Kwok, James T.; Parvin, Bahram
2009-04-29
Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
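The low-rank kernel approximation from a small prototype set that this abstract relies on can be sketched with a Nyström-style construction. This is a generic illustration of the idea, not the PVM algorithm itself; the data, kernel width and prototype count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=0.05):
    """RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# 400 points, 40 randomly chosen prototypes.
X = rng.normal(size=(400, 5))
protos = X[rng.choice(len(X), size=40, replace=False)]

K_np = rbf(X, protos)       # n x m cross-kernel
K_pp = rbf(protos, protos)  # m x m prototype kernel
# Nystrom approximation: K ~ K_np K_pp^+ K_np^T (rank <= 40 instead of 400).
K_approx = K_np @ np.linalg.pinv(K_pp) @ K_np.T

K_full = rbf(X, X)
rel_err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
```

Working with the n x m factor `K_np` instead of the full n x n kernel is what gives prototype-based SSL solvers their scalability.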
Semi-Supervised Learning to Identify UMLS Semantic Relations.
Luo, Yuan; Uzuner, Ozlem
2014-01-01
The UMLS Semantic Network is constructed by experts and requires periodic expert review to update. We propose and implement a semi-supervised approach for automatically identifying UMLS semantic relations from narrative text in PubMed. Our method analyzes biomedical narrative text to collect semantic entity pairs, and extracts multiple semantic, syntactic and orthographic features for the collected pairs. We experiment with seeded k-means clustering with various distance metrics. We create and annotate a ground-truth corpus according to the top two levels of the UMLS semantic relation hierarchy. We evaluate our system on this corpus and characterize the learning curves of different clustering configurations. Using KL divergence consistently performs best on the held-out test data. With full seeding, we obtain macro-averaged F-measures above 70% for clustering the top-level UMLS relations (2-way), and above 50% for clustering the second-level relations (7-way).
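Seeded k-means, the core clustering step above, can be sketched with scikit-learn by passing centroids computed from the few labeled examples as the initialization. The blob data below is a stand-in for the paper's relation-pair feature vectors; class and seed counts are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Stand-in feature vectors for entity pairs belonging to 3 relation types.
X, y = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=0)

# Suppose 5 labeled examples per relation type are available: their means
# seed the centroids, so cluster i starts at (and tracks) relation type i.
seeds = np.vstack([X[y == c][:5].mean(axis=0) for c in range(3)])

km = KMeans(n_clusters=3, init=seeds, n_init=1, random_state=0)
labels = km.fit_predict(X)
```

Note that scikit-learn's `KMeans` uses Euclidean distance; the paper's best-performing KL-divergence metric would require a custom clustering loop.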
Semi-Supervised Generation with Cluster-aware Generative Models
DEFF Research Database (Denmark)
Maaløe, Lars; Fraccaro, Marco; Winther, Ole
2017-01-01
Deep generative models trained with large amounts of unlabelled data have proven to be powerful within the domain of unsupervised learning. Many real-life data sets contain a small number of labelled data points that are typically disregarded when training generative models. We propose the Cluster-aware Generative Model, which achieves a log-likelihood of −79.38 nats on permutation-invariant MNIST while also achieving competitive semi-supervised classification accuracies. The model can also be trained fully unsupervised, and still improves log-likelihood performance with respect to related methods.
Ye, Fei; Marchetti, P. A.; Su, Z. B.; Yu, L.
2017-09-01
The relation between braid and exclusion statistics is examined in one-dimensional systems, within the framework of Chern-Simons statistical transmutation in gauge invariant form with an appropriate dimensional reduction. If the matter action is anomalous, as for chiral fermions, a relation between braid and exclusion statistics can be established explicitly for both mutual and nonmutual cases. However, if it is not anomalous, the exclusion statistics of emergent low energy excitations is not necessarily connected to the braid statistics of the physical charged fields of the system. Finally, we also discuss the bosonization of one-dimensional anyonic systems through T-duality. Dedicated to the memory of Mario Tonin.
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
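The selection step of the non-dominated sorting genetic algorithm described above hinges on identifying Pareto-optimal (MRE, MCE) trade-offs. A minimal sketch of that core idea, with purely illustrative score values (not the paper's data or implementation):

```python
def pareto_front(points):
    """Return indices of non-dominated points (both objectives minimized)."""
    front = []
    for i, (a1, a2) in enumerate(points):
        dominated = any(
            (b1 <= a1 and b2 <= a2) and (b1 < a1 or b2 < a2)
            for j, (b1, b2) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Candidate auto-encoder configurations scored by the two objectives:
# (mean reconstruction error MRE, mean classification error MCE).
scores = [(0.10, 0.30), (0.20, 0.10), (0.15, 0.25), (0.30, 0.05), (0.25, 0.20)]
print(pareto_front(scores))
```

A single weighted sum of MRE and MCE (the scalarized approach the paper compares against) would pick only one of these points; non-dominated sorting keeps the whole trade-off front for the genetic algorithm to evolve.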
International Nuclear Information System (INIS)
Kaczmarek, J.
2002-01-01
Elementary processes responsible for phenomena in materials frequently occur at scales close to the atomic one; atomistic simulations are therefore important for materials science. On the other hand, continuum mechanics is widely applied in the mechanics of materials. It seems inevitable that both methods will gradually be integrated. A multiscale method for integrating these approaches, called a collection of dynamical systems with dimensional reduction, is introduced in this work. The dimensional reduction procedure realizes the transition between models at various scales, from an elementary dynamical system (EDS) to a reduced dynamical system (RDS). Mappings which transform variables and forces, a skeletal dynamical system (SDS), and a set of approximation and identification methods are the main components of this procedure. The skeletal dynamical system is a set of dynamical systems parameterized by some constants and has variables related to the dimensionally reduced model. These constants are identified with the aid of solutions of the elementary dynamical system. As a result we obtain a dimensionally reduced dynamical system which describes phenomena in an averaged way in comparison with the EDS. The concept of integrating atomistic simulations with continuum mechanics consists in using a dynamical system describing the evolution of atoms as the elementary dynamical system. Then, we introduce a continuum skeletal dynamical system within the dimensional reduction procedure. In order to construct such a system we have to modify the continuum mechanics formulation to some degree. Namely, we formalize the scale of averaging for the continuum theory and, as a result, consider a continuum with finite-dimensional fields only. Then, realization of the dimensional reduction is possible. A numerical example of the dimensional reduction procedure is shown: we consider a one-dimensional chain of atoms interacting through a Lennard-Jones potential. Evolution of this system is described by an elementary
A Novel Four-Dimensional Energy-Saving and Emission-Reduction System and Its Linear Feedback Control
Directory of Open Access Journals (Sweden)
Minggang Wang
2012-01-01
This paper reports a new four-dimensional energy-saving and emission-reduction chaotic system. The system is obtained in accordance with the complicated relationship between energy saving and emission reduction, carbon emission, economic growth, and new energy development. The dynamical behavior of the system is analyzed by means of Lyapunov exponents and equilibrium points. Linear feedback control methods are used to suppress chaos to an unstable equilibrium. Numerical simulations are presented to show these results.
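The abstract does not give the 4-D system's equations, so the sketch below demonstrates the control idea on the classic Lorenz system instead: a linear state feedback u = -k(x - xe) with sufficiently large gain shifts the closed-loop eigenvalues into the left half-plane and drives the trajectory to an otherwise unstable equilibrium.

```python
import numpy as np

# Stand-in demonstration on the Lorenz system (the paper's own 4-D
# equations are not reproduced in the abstract).
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
# The C+ equilibrium, unstable for these classic parameter values.
xe = np.array([np.sqrt(beta * (rho - 1)), np.sqrt(beta * (rho - 1)), rho - 1.0])

def step(x, k, dt=1e-3):
    f = np.array([sigma * (x[1] - x[0]),
                  x[0] * (rho - x[2]) - x[1],
                  x[0] * x[1] - beta * x[2]])
    return x + dt * (f + k * (xe - x))  # explicit Euler with feedback u = -k(x - xe)

x = np.array([1.0, 1.0, 1.0])
for _ in range(50_000):                 # integrate to t = 50
    x = step(x, k=30.0)
print(np.linalg.norm(x - xe))           # distance to the stabilized equilibrium
```

With k = 0 the trajectory stays chaotic; with k = 30 the feedback term dominates the local dynamics and the state settles onto the equilibrium, mirroring the "suppress chaos to unstable equilibrium" result.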
Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A
2007-01-01
The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.
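Generative topographic mapping is not available in common Python libraries, so the sketch below uses plain PCA as a stand-in to illustrate the same workflow: project 14-analyte records to two dimensions and flag records that fall far from the bulk. The data and the flagging rule are synthetic and illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for 14-analyte patient records.
records = rng.normal(size=(500, 14))
records[0] += 8.0  # one grossly anomalous record

X = records - records.mean(axis=0)
# PCA via SVD: rows of Vt are principal axes.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
coords2d = X @ Vt[:2].T  # 2-D projection of each record

# Flag the record farthest from the bulk in the projected plane.
d = np.linalg.norm(coords2d - np.median(coords2d, axis=0), axis=1)
flagged = int(np.argsort(d)[-1])
print(flagged)
```

In the real system the 2-D plot would be inspected (or thresholded) in real time before results are released; the stand-in above only shows why a 2-D embedding makes gross outliers immediately visible.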
A Fast and High-precision Orientation Algorithm for BeiDou Based on Dimensionality Reduction
Directory of Open Access Journals (Sweden)
ZHAO Jiaojiao
2015-05-01
A fast and high-precision orientation algorithm for BeiDou is proposed by deeply analyzing the constellation characteristics of BeiDou and the features of its GEO satellites. Taking advantage of the good east-west geometry, the baseline vector candidate values are first solved from the GEO satellite observations combined with dimensionality reduction theory. Then, an ambiguity function is used to judge the candidates in order to obtain the optimal baseline vector and the wide-lane integer ambiguities. On this basis, the B1 ambiguities are solved. Finally, the high-precision orientation is estimated from the determined B1 ambiguities. This new algorithm not only improves the ill-conditioning of the traditional algorithm, but also greatly reduces the ambiguity search region, thus allowing the integer ambiguities to be calculated in a single epoch. The algorithm is simulated with an actual BeiDou ephemeris, and the results show that the method is efficient and fast for orientation. It is capable of a very high single-epoch success rate (99.31%) and accurate attitude angles (the standard deviations of pitch and heading are 0.07° and 0.13°, respectively) in a real-time, dynamic environment.
Dimensionality reduction for the quantitative evaluation of a smartphone-based Timed Up and Go test.
Palmerini, Luca; Mellone, Sabato; Rocchi, Laura; Chiari, Lorenzo
2011-01-01
The Timed Up and Go is a clinical test to assess mobility in the elderly and in Parkinson's disease. Lately, instrumented versions of the test are being considered, where inertial sensors assess motion. To improve the pervasiveness, ease of use, and cost, we consider a smartphone's accelerometer as the measurement system. Several parameters (usually highly correlated) can be computed from the signals recorded during the test. To avoid redundancy and obtain the features that are most sensitive to the locomotor performance, dimensionality reduction was performed through principal component analysis (PCA). Forty-nine healthy subjects of different ages were tested. PCA was performed to extract new features (principal components) which are non-redundant combinations of the original parameters and account for most of the data variability. They can be useful for exploratory analysis and outlier detection. Then, a reduced set of the original parameters was selected through correlation analysis with the principal components. This set could be recommended for studies based on healthy adults. The proposed procedure could be used as a first-level feature selection in classification studies (e.g., healthy vs. Parkinson's disease, fallers vs. non-fallers) and could allow, in the future, a complete system for movement analysis to be incorporated in a smartphone.
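A minimal numpy sketch of the two-stage procedure (PCA, then correlation-based selection of original parameters). The "parameters" here are synthetic stand-ins, not the study's TUG measures:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Hypothetical test parameters: two highly correlated measures driven by one
# latent locomotor factor, plus two unrelated noise features.
core = rng.normal(size=n)
params = np.column_stack([
    core + 0.1 * rng.normal(size=n),      # e.g. total duration (illustrative)
    2 * core + 0.1 * rng.normal(size=n),  # e.g. sit-to-stand amplitude (illustrative)
    rng.normal(size=n),                   # unrelated feature A
    rng.normal(size=n),                   # unrelated feature B
])

X = (params - params.mean(0)) / params.std(0)   # standardize
_, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)                  # variance explained per PC
pc1 = X @ Vt[0]

# Select the original parameter most correlated with PC1 as its proxy.
corr = [abs(np.corrcoef(X[:, j], pc1)[0, 1]) for j in range(X.shape[1])]
best = int(np.argmax(corr))
print(best, explained[0])
```

The correlated pair collapses onto the first component (about half the total variance here), and the selection step then names a single original parameter to report instead of the abstract component, which is the paper's rationale for a reduced, interpretable parameter set.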
Faust, Kevin; Xie, Quin; Han, Dominick; Goyle, Kartikay; Volynskaya, Zoya; Djuric, Ugljesa; Diamandis, Phedias
2018-05-16
There is growing interest in utilizing artificial intelligence, and particularly deep learning, for computer vision in histopathology. While accumulating studies highlight expert-level performance of convolutional neural networks (CNNs) on focused classification tasks, most studies rely on probability distribution scores with empirically defined cutoff values based on post-hoc analysis. More generalizable tools that allow humans to visualize histology-based deep learning inferences and decision making are scarce. Here, we leverage t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce dimensionality and depict how CNNs organize histomorphologic information. Unique to our workflow, we develop a quantitative and transparent approach to visualizing classification decisions prior to softmax compression. By discretizing the relationships between classes on the t-SNE plot, we show we can super-impose randomly sampled regions of test images and use their distribution to render statistically-driven classifications. Therefore, in addition to providing intuitive outputs for human review, this visual approach can carry out automated and objective multi-class classifications similar to more traditional and less-transparent categorical probability distribution scores. Importantly, this novel classification approach is driven by a priori statistically defined cutoffs. It therefore serves as a generalizable classification and anomaly detection tool less reliant on post-hoc tuning. Routine incorporation of this convenient approach for quantitative visualization and error reduction in histopathology aims to accelerate early adoption of CNNs into generalized real-world applications where unanticipated and previously untrained classes are often encountered.
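As a rough illustration of the first step of this workflow (reducing deep-feature dimensionality with t-SNE), assuming scikit-learn is available; the feature vectors below are synthetic stand-ins for CNN activations, not histology data:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two synthetic "tissue classes" standing in for high-dimensional CNN features.
class_a = rng.normal(0.0, 1.0, size=(30, 16))
class_b = rng.normal(6.0, 1.0, size=(30, 16))
features = np.vstack([class_a, class_b])

# Embed in 2-D; in the paper's workflow, regions of this plane are then
# discretized so sampled test-image patches can vote on a class.
embedding = TSNE(n_components=2, perplexity=10.0, random_state=0).fit_transform(features)
print(embedding.shape)
```

Note that t-SNE has no out-of-sample transform, which is why the paper super-imposes sampled test regions onto the trained plot rather than re-embedding them.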
M-Isomap: Orthogonal Constrained Marginal Isomap for Nonlinear Dimensionality Reduction.
Zhang, Zhao; Chow, Tommy W S; Zhao, Mingbo
2013-02-01
Isomap is a well-known nonlinear dimensionality reduction (DR) method, aiming at preserving geodesic distances of all similarity pairs for delivering highly nonlinear manifolds. Isomap is efficient in visualizing synthetic data sets, but it usually delivers unsatisfactory results in benchmark cases. This paper incorporates pairwise constraints into Isomap and proposes a marginal Isomap (M-Isomap) for manifold learning. The pairwise Cannot-Link and Must-Link constraints are used to specify the types of neighborhoods. M-Isomap computes the shortest path distances over constrained neighborhood graphs and guides the nonlinear DR through separating the interclass neighbors. As a result, large margins between both inter- and intraclass clusters are delivered, and enhanced compactness of intracluster points is achieved at the same time. The validity of M-Isomap is examined by extensive simulations over synthetic data sets, University of California, Irvine (UCI) data sets, and the real benchmark Olivetti Research Laboratory (ORL), YALE, and CMU Pose, Illumination, and Expression (PIE) databases. The data visualization and clustering power of M-Isomap are compared with those of six related DR methods. The visualization results show that M-Isomap is able to deliver more separate clusters. Clustering evaluations also demonstrate that M-Isomap delivers comparable or even better results than some state-of-the-art DR algorithms.
Zhang, Lianbin; Chen, Guoying; Hedhili, Mohamed N.; Zhang, Hongnan; Wang, Peng
2012-01-01
In this study, three-dimensional (3D) graphene assemblies are prepared from graphene oxide (GO) by a facile in situ reduction-assembly method, using a novel, low-cost, and environment-friendly reducing medium which is a combination of oxalic acid
Kas, Recep; Hummadi, Khalid Khazzal; Kortlever, Ruud; de Wit, Patrick; Milbrat, Alexander; Luiten-Olieman, Maria W.J.; Benes, Nieck Edwin; Koper, Marc T.M.; Mul, Guido
2016-01-01
Aqueous-phase electrochemical reduction of carbon dioxide requires an active, earth-abundant electrocatalyst, as well as highly efficient mass transport. Here we report the design of a porous hollow fibre copper electrode with a compact three-dimensional geometry, which provides a large area,
Dimensionality Reduction and Information-Theoretic Divergence Between Sets of Ladar Images
National Research Council Canada - National Science Library
Gray, David M; Principe, Jose C
2008-01-01
... can be exploited while circumventing many of the problems associated with the so-called "curse of dimensionality." In this study, PCA techniques are used to find a low-dimensional sub-space representation of LADAR image sets...
Dimensional reduction of exceptional E6,E8 gauge groups and flavour chirality
International Nuclear Information System (INIS)
Koca, M.
1984-01-01
Ten-dimensional Yang-Mills gauge theories based on the exceptional groups E6 and E8 are reduced to four-dimensional flavour-chiral Yang-Mills-Higgs theories, where the extra six dimensions are identified with the compact G2/SU(3) and SO(7)/SO(6) coset spaces. A ten-dimensional E8 theory leads to three families of SU(5), one of which lies in the 144-dimensional representation of SO(10).
Directory of Open Access Journals (Sweden)
Valentin L. Popov
2014-04-01
The Method of Dimensionality Reduction (MDR) is a method of calculation and simulation of contacts of elastic and viscoelastic bodies. It consists essentially of two simple steps: (a) substitution of the three-dimensional continuum by a uniquely defined one-dimensional linearly elastic or viscoelastic foundation (Winkler foundation), and (b) transformation of the three-dimensional profile of the contacting bodies by means of the MDR-transformation. As soon as these two steps are completed, the contact problem can be considered to be solved. For axially symmetric contacts, only a small calculation by hand is required which does not exceed elementary calculus and will not be a barrier for any practically oriented engineer. Alternatively, the MDR can be implemented numerically, which is almost trivial due to the independence of the foundation elements. In spite of their simplicity, all the results are exact. The present paper is a short practical guide to the MDR.
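For the Hertzian (parabolic) profile f(r) = r²/(2R), the MDR-transformation gives the 1-D profile g(x) = x²/R, and summing the independent spring forces of the Winkler foundation reproduces the exact three-dimensional result F = (4/3)·E*·√R·d^(3/2). A numerical sketch of steps (a) and (b) with illustrative parameter values:

```python
import numpy as np

# Step (b): MDR-transformed profile of a sphere of radius R is g(x) = x**2 / R.
# Step (a): press it to depth d into independent springs of stiffness E* * dx.
E_star, R, d = 1.0e9, 0.01, 1.0e-5   # effective modulus [Pa], radius [m], depth [m]
dx = 1.0e-7
x = np.arange(-2e-3, 2e-3, dx)
g = x**2 / R                          # transformed 1-D profile
u = np.clip(d - g, 0.0, None)         # spring compressions inside the contact
F_mdr = E_star * np.sum(u) * dx       # total force from the independent springs

# Exact 3-D Hertz solution for comparison.
F_hertz = (4.0 / 3.0) * E_star * np.sqrt(R) * d**1.5
print(F_mdr, F_hertz)
```

The two forces agree to numerical integration accuracy, illustrating the paper's claim that, despite the drastic simplification, the MDR results for axially symmetric contacts are exact.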
Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation
Energy Technology Data Exchange (ETDEWEB)
Akhbardeh, Alireza; Jacobs, Michael A. [Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205 (United States) and Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205 (United States)]
2012-04-15
Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data, comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment
DEFF Research Database (Denmark)
Eckardt, Henrik; Lind, Dennis; Toendevold, Erik
2015-01-01
was evaluated on reconstructed coronal and sagittal images of the acetabulum. Results - The fracture severity and patient characteristics were similar in the 2 groups. In the 3D group, 46 of 72 patients (0.6) had a perfect result after open reduction and internal fixation, and in the control group, 17 of 42 (0...
Directory of Open Access Journals (Sweden)
Arehart Eric
2009-03-01
Background: The fidelity of DNA replication serves as the nidus for both genetic evolution and genomic instability fostering disease. Single nucleotide polymorphisms (SNPs) constitute greater than 80% of the genetic variation between individuals. A new theory regarding DNA replication fidelity has emerged in which selectivity is governed by base-pair geometry through interactions between the selected nucleotide, the complementary strand, and the polymerase active site. We hypothesize that specific nucleotide combinations in the flanking regions of SNP fragments are associated with mutation. Results: We modeled the relationship between DNA sequence and observed polymorphisms using the novel multifactor dimensionality reduction (MDR) approach. MDR was originally developed to detect synergistic interactions between multiple SNPs that are predictive of disease susceptibility. We initially assembled data from the Broad Institute as a pilot test for the hypothesis that flanking region patterns associate with mutagenesis (n = 2194). We then confirmed and expanded our inquiry with human SNPs within coding regions and their flanking sequences collected from the National Center for Biotechnology Information (NCBI) database (n = 29967) and a control set of sequences (coding regions not associated with SNP sites) randomly selected from the NCBI database (n = 29967). We discovered seven flanking region pattern associations in the Broad dataset which reached a minimum significance level of p ≤ 0.05. Significant models (p ...). Conclusion: The present study represents the first use of this computational methodology for modeling nonlinear patterns in molecular genetics. MDR was able to identify distinct nucleotide patterning around sites of mutations dependent upon the observed nucleotide change. We discovered one flanking region set that included five nucleotides clustered around a specific type of SNP site. Based on the strongly associated patterns identified in
A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis
Directory of Open Access Journals (Sweden)
Huanhuan Li
2017-08-01
The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance; data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety. Thus, the capacities of navigation safety and maritime traffic monitoring could be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex compared with traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, as a widely used dimensionality reduction method, Principal Component Analysis (PCA) is exploited to decompose the obtained distance matrix. In particular, the top k principal components with above 95% accumulative contribution rate are extracted by PCA, and the number of centers k is chosen. The k centers are found by the improved automatic center selection algorithm. In the last step, the improved center clustering algorithm with k clusters is implemented on the distance matrix to achieve the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our
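The first step of the pipeline (a DTW distance matrix between trajectories) can be sketched as follows; the toy 1-D "trajectories" are illustrative stand-ins, not AIS data:

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a) * len(b)) Dynamic Time Warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy trajectories: two similar routes (different lengths/sampling) and an outlier.
trajs = [np.sin(np.linspace(0.0, 3.0, 50)),
         np.sin(np.linspace(0.1, 3.1, 60)),
         np.cos(np.linspace(0.0, 3.0, 50)) + 2.0]
dist = np.array([[dtw(a, b) for b in trajs] for a in trajs])
print(dist)
```

Unlike a pointwise Euclidean distance, DTW handles trajectories of unequal length and sampling rate, which is why the similar pair ends up much closer than the outlier; the resulting matrix is what the second step decomposes with PCA before clustering.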
Directory of Open Access Journals (Sweden)
Enrico eChiovetto
2013-02-01
A long-standing hypothesis in the neuroscience community is that the CNS generates muscle activities to accomplish movements by combining a relatively small number of stereotyped patterns of muscle activations, often referred to as muscle synergies. Different definitions of synergies have been given in the literature. The most well-known are those of synchronous, time-varying, and temporal muscle synergies. Each of them is based on a different mathematical model used to factor EMG array recordings, collected during the execution of a variety of motor tasks, into a well-determined spatial, temporal, or spatio-temporal organization. This plurality of definitions and their separate application to complex tasks have so far complicated the comparison and interpretation of the results obtained across studies, and it has always remained unclear why and when one synergistic decomposition should be preferred to another. By using well-understood motor tasks such as elbow flexions and extensions, we aimed in this study to clarify what motor features are characterized by each kind of decomposition and to assess whether, when, and why one of them should be preferred to the others. We found that three temporal synergies, each accounting for specific temporal phases of the movements, could account for the majority of the data variation. Similar performances could be achieved by two synchronous synergies, encoding the agonist-antagonist nature of the two muscles considered, and by two time-varying muscle synergies, each encoding a task-related feature of the elbow movements, specifically their direction. Our findings support the notion that each EMG decomposition provides a set of well-interpretable muscle synergies, identifying reduction of dimensionality in different aspects of the movements. Taken together, our findings suggest that the decompositions are not equivalent and may imply different neurophysiological substrates.
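Synergy decompositions of this kind are commonly computed with non-negative matrix factorization of the EMG envelope matrix. A hedged sketch on synthetic two-synergy data (assuming scikit-learn is available; the activation patterns below are invented for illustration, not the study's recordings):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
# Two hypothetical temporal activation patterns (early and late bursts).
w1 = np.exp(-((t - 0.3) / 0.1) ** 2)
w2 = np.exp(-((t - 0.7) / 0.1) ** 2)
# Six "muscles", each a non-negative mixture of the two synergies plus noise.
mix = rng.uniform(0.2, 1.0, size=(6, 2))
emg = mix @ np.vstack([w1, w2]) + 0.01 * rng.uniform(size=(6, 100))

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(emg)   # muscle weights (6 x 2)
H = model.components_          # recovered temporal synergies (2 x 100)
err = np.linalg.norm(emg - W @ H) / np.linalg.norm(emg)
print(err)
```

Two factored components suffice to reconstruct the data almost exactly, which is the operational sense in which a small synergy set "accounts for the majority of the data variation".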
Directory of Open Access Journals (Sweden)
Zhang Jing
2016-01-01
To help physicians quickly find a required 3D model among massive numbers of medical models, we propose a novel retrieval method, called DRFVT, which combines dimensionality reduction (DR) and feature vector transformation (FVT). The DR method reduces the dimensionality of the feature vector; only the top M low-frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to solve the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval results than other proposed methods.
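A minimal sketch of the DR step, assuming the feature vector is real-valued and that "top M low-frequency coefficients" means the first M bins of the real FFT (the paper's exact conventions may differ):

```python
import numpy as np

def reduce_dft(feature_vec, M):
    """Keep only the M lowest-frequency DFT coefficients of a feature vector."""
    return np.fft.rfft(feature_vec)[:M]

def approx_from_dft(coeffs, n):
    """Reconstruct an n-point vector from the truncated low-frequency coefficients."""
    full = np.zeros(n // 2 + 1, dtype=complex)
    full[:len(coeffs)] = coeffs
    return np.fft.irfft(full, n=n)

# A smooth toy feature vector is well summarized by a few low-frequency terms.
t = np.arange(64)
v = np.sin(2 * np.pi * t / 64) + 0.5 * np.cos(4 * np.pi * t / 64)
kept = reduce_dft(v, M=8)                 # 64 real values -> 8 complex coefficients
v_hat = approx_from_dft(kept, n=64)
rel_err = np.linalg.norm(v - v_hat) / np.linalg.norm(v)
print(rel_err)
```

Retrieval would then compare the short coefficient vectors instead of the full features; truncating the high-frequency tail is also what gives the method its robustness to high-frequency noise.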
Directory of Open Access Journals (Sweden)
Chunyang Wang
2015-01-01
Studies of land use/cover can reflect changing patterns of population, economy, agricultural structure adjustment, policy, and traffic, and can better serve regional economic development and urban evolution. Fine land use/cover assessment using hyperspectral image classification is a growing focal area in many fields. Semisupervised learning, which exploits a large number of unlabeled samples together with a minority of labeled samples to effectively improve classification and prediction accuracy, has become a new research direction. In this paper, we propose improving fine land use/cover assessment based on a semisupervised hyperspectral classification method. Test analysis of the study area showed that the semisupervised classification method could improve the precision of the overall classification and the objective assessment of land use/cover results.
Active semi-supervised learning method with hybrid deep belief networks.
Zhou, Shusen; Chen, Qingcai; Wang, Xiaolong
2014-01-01
In this paper, we develop a novel semi-supervised learning algorithm called active hybrid deep belief networks (AHD) to address the semi-supervised sentiment classification problem with deep learning. First, we construct the first several hidden layers using restricted Boltzmann machines (RBM), which can reduce the dimension and abstract the information of the reviews quickly. Second, we construct the following hidden layers using convolutional restricted Boltzmann machines (CRBM), which can abstract the information of reviews effectively. Third, the constructed deep architecture is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. Finally, an active learning method is combined with the proposed deep architecture. We performed several experiments on five sentiment classification datasets and show that AHD is competitive with previous semi-supervised learning algorithms. Experiments are also conducted to verify the effectiveness of our proposed method with different numbers of labeled and unlabeled reviews, respectively.
Fully Decentralized Semi-supervised Learning via Privacy-preserving Matrix Completion.
Fierimonte, Roberto; Scardapane, Simone; Uncini, Aurelio; Panella, Massimo
2016-08-26
Distributed learning refers to the problem of inferring a function when the training data are distributed among different nodes. While significant work has been done in the contexts of supervised and unsupervised learning, the intermediate case of semi-supervised learning in the distributed setting has received less attention. In this paper, we propose an algorithm for this class of problems by extending the framework of manifold regularization. The main component of the proposed algorithm is a fully distributed computation of the adjacency matrix of the training patterns. To this end, we propose a novel algorithm for low-rank distributed matrix completion based on the framework of diffusion adaptation. Overall, the distributed semi-supervised algorithm is efficient and scalable, and it can preserve privacy through the inclusion of flexible privacy-preserving mechanisms for similarity computation. The experimental results and comparison on a wide range of standard semi-supervised benchmarks validate our proposal.
Ye, Qing; Pan, Hao; Liu, Changhua
2015-01-01
A novel semisupervised extreme learning machine (ELM) with a clustering discrimination manifold regularization (CDMR) framework, named CDMR-ELM, is proposed for semisupervised classification. Using an unsupervised fuzzy clustering method, the CDMR framework integrates clustering discrimination of both labeled and unlabeled data with twinning-constraint regularization. To further improve classification accuracy and efficiency, a new multiobjective fruit fly optimization algorithm (MOFOA) is developed to optimize crucial parameters of CDMR-ELM. The proposed MOFOA is implemented with two simultaneous objectives: minimizing the number of hidden nodes and the mean square error (MSE). Experiments on real datasets show that the proposed semisupervised classifier obtains better accuracy and efficiency with relatively few hidden nodes compared with other state-of-the-art classifiers.
Visual Vehicle Tracking Based on Deep Representation and Semisupervised Learning
Directory of Open Access Journals (Sweden)
Yingfeng Cai
2017-01-01
Discriminative tracking methods use binary classification to discriminate between the foreground and background and have achieved some useful results. However, labeled training samples are insufficient for them to achieve accurate tracking. Hence, discriminative classifiers must use their own classification results to update themselves, which may lead to feedback-induced tracking drift. To overcome these problems, we propose a semisupervised tracking algorithm that uses deep representation and transfer learning. First, a 2D multilayer deep belief network is trained with a large number of unlabeled samples, and the nonlinear mapping at the top of this network is extracted as the feature dictionary. This feature dictionary is then used to transfer-train and update a deep tracker. The positive samples for training are the tracked vehicles, and the negative samples are the background images. Finally, a particle filter is used to estimate the vehicle position. We demonstrate experimentally that the proposed vehicle tracking algorithm can effectively restrain drift while adapting to changes in vehicle appearance. Compared with similar algorithms, our method achieves a better tracking success rate and fewer average central-pixel errors.
Building an Arabic Sentiment Lexicon Using Semi-supervised Learning
Directory of Open Access Journals (Sweden)
Fawaz H.H. Mahyoub
2014-12-01
Sentiment analysis is the process of determining a predefined sentiment from text written in a natural language with respect to the entity to which it refers. A number of lexical resources are available to facilitate this task in English. One such resource is SentiWordNet, which assigns sentiment scores to words found in the English WordNet. In this paper, we present an Arabic sentiment lexicon that assigns sentiment scores to the words found in the Arabic WordNet. Starting from a small seed list of positive and negative words, we used semi-supervised learning to propagate the scores in the Arabic WordNet by exploiting the synset relations. Our algorithm assigned a positive sentiment score to more than 800 words, a negative score to more than 600, and a neutral score to more than 6000 words in the Arabic WordNet. The lexicon was evaluated by incorporating it into a machine learning-based classifier. The experiments were conducted on several Arabic sentiment corpora, and we were able to achieve 96% classification accuracy.
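The seed-propagation step described above can be sketched as a breadth-first spread of scores along synonym/antonym relations. The tiny graph, relation names, and damping factor below are hypothetical illustrations, not the authors' lexicon or algorithm:

```python
# Hypothetical mini-lexicon: edges are (word, word, relation), where
# "syn" propagates the same polarity and "ant" flips it.
edges = [("good", "fine", "syn"), ("good", "bad", "ant"),
         ("bad", "awful", "syn"), ("fine", "okay", "syn")]
scores = {"good": 1.0}          # seed list with one positive word
decay = 0.5                     # damping per hop (assumed)

frontier = ["good"]
while frontier:                 # breadth-first propagation over the graph
    nxt = []
    for w in frontier:
        for a, b, rel in edges:
            if w in (a, b):
                other = b if w == a else a
                if other not in scores:
                    sign = 1.0 if rel == "syn" else -1.0
                    scores[other] = sign * decay * scores[w]
                    nxt.append(other)
    frontier = nxt
```

Each word inherits a damped copy of its neighbor's score, with antonym edges flipping the sign, so "awful" ends up negative two hops from the positive seed.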
Semi-Supervised Learning for Classification of Protein Sequence Data
Directory of Open Access Journals (Sweden)
Brian R. King
2008-01-01
Protein sequence data continue to become available at an exponential rate. Annotation of the functional and structural attributes of these data lags far behind, with only a small fraction of the data understood and labeled by experimental methods. Classification methods based on semi-supervised learning can increase the overall accuracy of classifying partly labeled data in many domains, but very few methods exist that have shown their effect on protein sequence classification. We show how proven methods from text classification can be applied to protein sequence data, as we consider both existing and novel extensions to the basic methods, and demonstrate restrictions and differences that must be considered. We present comparative results against the transductive support vector machine and show superior results on the most difficult classification problems. Our results show that large repositories of unlabeled protein sequence data can indeed be used to improve predictive performance, particularly when fewer labeled protein sequences are available and/or the data are highly unbalanced in nature.
Contaminant source identification using semi-supervised machine learning
International Nuclear Information System (INIS)
Vesselinov, Velimir Valentinov; Alexandrov, Boian S.; O’Malley, Dan
2017-01-01
Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous geochemical constituents and processes may need to be simulated in these models, which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that decomposes the observed mixtures using the Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios, without any additional site information. NMFk is tested on synthetic and real-world site data. In addition, the NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents, for example isotope ratios), and delta notations (standard normalized stable isotope ratios).
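The core of the NMFk decomposition above is ordinary non-negative matrix factorization of the observed mixture matrix; the semi-supervised clustering over repeated runs (which estimates the number of sources) is a separate step. Below is a minimal NMF sketch with multiplicative updates on hypothetical synthetic mixing data; the variable names and the synthetic setup are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, k, iters=2000, eps=1e-9):
    """Lee-Seung multiplicative-update NMF: V (m x n) ~ W (m x k) @ H (k x n)."""
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update source signatures
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update mixing weights
    return W, H

# Synthetic "mixtures": 3 hidden sources mixed into 6 observation wells.
S = rng.random((3, 50))        # source geochemical signatures (assumed)
M = rng.random((6, 3))         # unknown mixing ratios
V = M @ S                      # observed concentrations
W, H = nmf(V, k=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the synthetic matrix is exactly rank 3 and non-negative, the relative reconstruction error should become small, illustrating why the factor count `k` can be probed by how well repeated factorizations fit.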
On the dimensional reduction of a gravitational theory containing higher-derivative terms
International Nuclear Information System (INIS)
Pollock, M.D.
1990-02-01
From the higher-dimensional gravitational theory $\hat{L} = \hat{R} - 2\hat{\Lambda} - \hat{\alpha}_1 \hat{R}^2 - \hat{\alpha}_2 \hat{R}_{AB}\hat{R}^{AB} - \hat{\alpha}_3 \hat{R}_{ABCD}\hat{R}^{ABCD}$, we derive the effective four-dimensional Lagrangian L. (author). 12 refs
Neuroanatomical heterogeneity of schizophrenia revealed by semi-supervised machine learning methods.
Honnorat, Nicolas; Dong, Aoyan; Meisenzahl-Lechner, Eva; Koutsouleris, Nikolaos; Davatzikos, Christos
2017-12-20
Schizophrenia is associated with heterogeneous clinical symptoms and neuroanatomical alterations. In this work, we aim to disentangle the patterns of neuroanatomical alterations underlying a heterogeneous population of patients using a semi-supervised clustering method. We apply this strategy to a cohort of patients with schizophrenia of varying extents of disease duration, and we describe the neuroanatomical, demographic and clinical characteristics of the subtypes discovered. We analyze the neuroanatomical heterogeneity of 157 patients diagnosed with schizophrenia, relative to a control population of 169 subjects, using a machine learning method called CHIMERA. CHIMERA clusters the differences between patients and a demographically-matched population of healthy subjects, rather than clustering patients themselves, thereby specifically assessing disease-related neuroanatomical alterations. Voxel-Based Morphometry was conducted to visualize the neuroanatomical patterns associated with each group. The clinical presentation and the demographics of the groups were then investigated. Three subgroups were identified. The first two differed substantially, in that one involved predominantly temporal-thalamic-peri-Sylvian regions, whereas the other involved predominantly frontal regions and the thalamus. Both subtypes included primarily male patients. The third pattern was a mix of these two and presented milder neuroanatomic alterations and comprised a comparable number of men and women. VBM and statistical analyses suggest that these groups could correspond to different neuroanatomical dimensions of schizophrenia. Our analysis suggests that schizophrenia presents distinct neuroanatomical variants. This variability points to the need for a dimensional neuroanatomical approach using data-driven, mathematically principled multivariate pattern analysis methods, and should be taken into account in clinical studies. Copyright © 2017 Elsevier B.V. All rights reserved.
Combination of supervised and semi-supervised regression models for improved unbiased estimation
DEFF Research Database (Denmark)
Arenas-Garía, Jeronimo; Moriana-Varo, Carlos; Larsen, Jan
2010-01-01
In this paper we investigate the steady-state performance of semisupervised regression models adjusted using a modified RLS-like algorithm, identifying the situations where the new algorithm is expected to outperform standard RLS. By using an adaptive combination of the supervised and semisupervised ...
Nicolini, Paolo; Frezzato, Diego
2013-06-21
Simplification of the description of chemical kinetics through dimensional reduction is particularly important for achieving an accurate numerical treatment of complex reacting systems, especially when stiff kinetics are considered and a comprehensive picture of the evolving system is required. To this aim several tools have been proposed in the past decades, such as sensitivity analysis, lumping approaches, and exploitation of time-scale separation. In addition, there are methods based on the existence of so-called slow manifolds, which are hyper-surfaces of lower dimension than the whole phase space and in whose neighborhood the slow evolution occurs after an initial fast transient. On the other hand, all tools contain to some extent a degree of subjectivity which seems to be irremovable. With reference to macroscopic and spatially homogeneous reacting systems under isothermal conditions, in this work we adopt a phenomenological approach to let the dimensional reduction emerge by itself from the mathematical structure of the evolution law. By transforming the original system of polynomial differential equations, which describes the chemical evolution, into a universal quadratic format, and by direct inspection of the high-order time-derivatives of the new dynamic variables, we formulate a conjecture which leads to the concept of an "attractiveness" region in the phase space where a well-defined state-dependent rate function ω follows the simple evolution $\dot{\omega} = -\omega^2$ along any trajectory up to the stationary state. This constitutes, by itself, a drastic dimensional reduction from a system of N-dimensional equations (N being the number of chemical species) to a one-dimensional and universal evolution law for such a characteristic rate. Step-by-step numerical inspections on model kinetic schemes are presented. In the companion paper [P. Nicolini and D. Frezzato, J. Chem. Phys. 138, 234102 (2013)] this outcome will be naturally related to the ...
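The conjectured rate law $\dot{\omega} = -\omega^2$ has the closed-form solution $\omega(t) = \omega_0 / (1 + \omega_0 t)$, so the claimed one-dimensional evolution is easy to check numerically. A small self-contained sketch (the initial value and step size are arbitrary choices, not from the paper):

```python
# Forward-Euler integration of d(omega)/dt = -omega**2 and comparison
# with the exact solution omega(t) = omega0 / (1 + omega0 * t).
omega0, dt, steps = 2.0, 1e-4, 10_000   # integrate over t in [0, 1]
omega = omega0
for _ in range(steps):
    omega += dt * (-omega ** 2)

T = dt * steps
exact = omega0 / (1.0 + omega0 * T)     # = 2/3 for these parameters
```

With this step size the first-order Euler scheme tracks the exact hyperbolic decay to within about one part in a thousand.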
Energy Technology Data Exchange (ETDEWEB)
Maier, Andreas; Wigstroem, Lars; Hofmann, Hannes G.; Hornegger, Joachim; Zhu, Lei; Strobel, Norbert; Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States); Center for Medical Image Science and Visualization, Linkoeping University, Linkoeping (Sweden); Pattern Recognition Laboratory, Department of Computer Science, Friedrich-Alexander University of Erlangen-Nuremberg, 91054 Erlangen (Germany); Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Siemens AG Healthcare, Forchheim 91301 (Germany)]
2011-11-15
Purpose: The combination of a quickly rotating C-arm gantry with a digital flat panel has enabled the acquisition of three-dimensional (3D) data in the interventional suite. However, image quality is still somewhat limited since the hardware has not been optimized for CT imaging. Adaptive anisotropic filtering has the ability to improve image quality by reducing the noise level, and thereby the radiation dose, without introducing noticeable blurring. By applying the filtering prior to 3D reconstruction, noise-induced streak artifacts are reduced as compared to processing in the image domain. Methods: 3D anisotropic adaptive filtering was used to process an ensemble of 2D x-ray views acquired along a circular trajectory around an object. After arranging the input data into a 3D space (2D projections + angle), the orientation of structures was estimated using a set of differently oriented filters. The resulting tensor representation of local orientation was utilized to control the anisotropic filtering. Low-pass filtering is applied only along structures to maintain high spatial frequency components perpendicular to these. The evaluation of the proposed algorithm includes numerical simulations, phantom experiments, and in-vivo data which were acquired using an AXIOM Artis dTA C-arm system (Siemens AG, Healthcare Sector, Forchheim, Germany). Spatial resolution and noise levels were compared with and without adaptive filtering. A human observer study was carried out to evaluate low-contrast detectability. Results: The adaptive anisotropic filtering algorithm was found to significantly improve low-contrast detectability by reducing the noise level by half (reduction of the standard deviation in certain areas from 74 to 30 HU). Virtually no degradation of high-contrast spatial resolution was observed in the modulation transfer function (MTF) analysis. Although the algorithm is computationally intensive, hardware acceleration using Nvidia's CUDA interface provided an 8.9-fold speedup.
Gui, Jiang; Moore, Jason H.; Williams, Scott M.; Andrews, Peter; Hillege, Hans L.; van der Harst, Pim; Navis, Gerjan; Van Gilst, Wiek H.; Asselbergs, Folkert W.; Gilbert-Diamond, Diane
2013-01-01
We present an extension of the two-class multifactor dimensionality reduction (MDR) algorithm that enables detection and characterization of epistatic SNP-SNP interactions in the context of a quantitative trait. The proposed Quantitative MDR (QMDR) method handles continuous data by modifying MDR's constructive induction algorithm to use a T-test. QMDR replaces the balanced accuracy metric with a T-test statistic as the score to determine the best interaction model. We used a simulation to ide...
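A compact way to see the QMDR scoring idea described above: pool multilocus genotype cells into "high" and "low" groups by comparing each cell's mean trait value with the overall mean, then score that grouping with a t statistic instead of balanced accuracy. The sketch below is a simplified illustration on simulated data; the simulation, the epistatic effect size, and all names are hypothetical:

```python
import math
import random

random.seed(1)

def t_stat(a, b):
    """Welch's t statistic between two samples of a quantitative trait."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical SNP pair (genotypes coded 0/1/2) with an epistatic effect.
genos = [(random.randint(0, 2), random.randint(0, 2)) for _ in range(200)]
trait = [g1 * g2 * 0.5 + random.gauss(0, 1) for g1, g2 in genos]

overall = sum(trait) / len(trait)
cells = {}                              # trait values per genotype cell
for g, y in zip(genos, trait):
    cells.setdefault(g, []).append(y)

# QMDR-style pooling: a cell is "high" if its mean exceeds the overall mean.
high = [y for g, ys in cells.items() if sum(ys) / len(ys) > overall for y in ys]
low = [y for g, ys in cells.items() if sum(ys) / len(ys) <= overall for y in ys]
score = t_stat(high, low)               # interaction score for this SNP pair
```

In QMDR this score would be computed for every SNP pair, and the pair with the largest t statistic selected as the best interaction model.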
Aviles, Angelica I.; Alsaleh, Samar; Sobrevilla, Pilar; Casals, Alicia
2016-03-01
The Robotic-Assisted Surgery approach overcomes the limitations of traditional laparoscopic and open surgeries. However, one of its major limitations is the lack of force feedback. Since there is no direct interaction between the surgeon and the tissue, there is no way of knowing how much force the surgeon is applying, which can result in irreversible injuries. The use of force sensors is not practical since they impose different constraints. Thus, we make use of a neuro-visual approach to estimate the applied forces, in which the 3D shape recovery together with the geometry of motion are used as input to a deep network based on the LSTM-RNN architecture. When deep networks are used in real time, pre-processing of data is a key factor in reducing complexity and improving network performance. A common pre-processing step is dimensionality reduction, which attempts to eliminate redundant and insignificant information by selecting a subset of relevant features to use in model construction. In this work, we show the effects of dimensionality reduction in a real-time application: estimating the applied force in Robotic-Assisted Surgeries. According to the results, we demonstrate the positive effects of dimensionality reduction on deep networks, including faster training, improved network performance, and overfitting prevention. We also show a significant accuracy improvement, ranging from about 33% to 86%, over existing approaches to force estimation.
Dimensional reduction of a general advection–diffusion equation in 2D channels
Kalinay, Pavol; Slanina, František
2018-06-01
Diffusion of point-like particles in a two-dimensional channel of varying width is studied. The particles are driven by an arbitrary space-dependent force. We construct a general recurrence procedure mapping the corresponding two-dimensional advection-diffusion equation onto the longitudinal coordinate x. Unlike previous specific cases, the presented procedure enables us to find the one-dimensional description of the confined diffusion even for non-conservative (vortex) forces, e.g. those caused by a flowing solvent dragging the particles. We show that the result is again the generalized Fick–Jacobs equation. Despite the absence of a scalar potential in the case of vortex forces, the effective one-dimensional scalar potential, as well as the corresponding quasi-equilibrium and the effective diffusion coefficient, can always be found.
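For reference, one common form of the generalized Fick–Jacobs equation that such mapping procedures produce (written here from the standard literature, not transcribed from this paper) is

```latex
\partial_t\, p(x,t) \;=\; \partial_x \left\{ D(x)\, e^{-\beta A(x)}\, \partial_x \left[ e^{\beta A(x)}\, p(x,t) \right] \right\},
```

where $p(x,t)$ is the marginal density along the channel axis, $D(x)$ is an effective position-dependent diffusion coefficient, and $A(x)$ is the effective one-dimensional free energy, combining the real potential with the entropic contribution $-\beta^{-1}\ln w(x)$ of the local channel width $w(x)$.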
The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification
Energy Technology Data Exchange (ETDEWEB)
Jason L. Wright; Milos Manic
2010-05-01
This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
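Of the three techniques compared, principal component analysis is the easiest to state concretely: center the feature matrix and project onto the top right-singular vectors. A minimal sketch; the synthetic "feature" data below is an illustrative assumption, not the paper's object-code features:

```python
import numpy as np

rng = np.random.default_rng(0)

def pca(X, k):
    """Project the rows of X onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                     # center each feature
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Hypothetical feature matrix: 100 samples in 20-D that actually vary
# only along 2 latent directions plus small isotropic noise.
Z = rng.normal(size=(100, 2))
B = rng.normal(size=(2, 20))
X = Z @ B + 0.01 * rng.normal(size=(100, 20))
Y = pca(X, 2)
```

Because the data are nearly rank 2, the two retained components capture essentially all of the variance, which is the regime where reducing dimensions costs little classification accuracy.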
arXiv Supersymmetric gauged matrix models from dimensional reduction on a sphere
Closset, Cyril; Seong, Rak-Kyeong
2018-05-04
It was recently proposed that $\mathcal{N} = 1$ supersymmetric gauged matrix models have a duality of order four, that is, a quadrality, reminiscent of infrared dualities of SQCD theories in higher dimensions. In this note, we show that the zero-dimensional quadrality proposal can be inferred from the two-dimensional Gadde-Gukov-Putrov triality. We consider two-dimensional $\mathcal{N} = (0, 2)$ SQCD compactified on a sphere with the half-topological twist. For a convenient choice of R-charge, the zero-mode sector on the sphere gives rise to a simple $\mathcal{N} = 1$ gauged matrix model. Triality on the sphere then implies a triality relation for the supersymmetric matrix model, which can be completed to the full quadrality.
Semi-supervised adaptation in ssvep-based brain-computer interface using tri-training
DEFF Research Database (Denmark)
Bender, Thomas; Kjaer, Troels W.; Thomsen, Carsten E.
2013-01-01
This paper presents a novel and computationally simple tri-training based semi-supervised steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI). It is implemented with autocorrelation-based features and a Naïve-Bayes classifier (NBC). The system uses nine characters...
Semi-Supervised Multi-View Ensemble Learning Based On Extracting Cross-View Correlation
Directory of Open Access Journals (Sweden)
ZALL, R.
2016-05-01
Correlated information between different views is useful for learning in multi-view data, and canonical correlation analysis (CCA) plays an important role in extracting this information. However, CCA only extracts the correlated information between paired data and cannot preserve the correlated information between within-class samples. In this paper, we propose a two-view semi-supervised learning method called semi-supervised random correlation ensemble based on spectral clustering (SS_RCE). SS_RCE uses a multi-view method based on spectral clustering which takes advantage of discriminative information in multiple views to estimate the labeling of unlabeled samples. In order to enhance the discriminative power of CCA features, we incorporate the labeling information of both unlabeled and labeled samples into CCA. We then use random correlation between within-class samples across views to extract diverse correlated features for training component classifiers. Furthermore, we extend a general model, namely SSMV_RCE, to construct an ensemble method to tackle semi-supervised learning in the presence of multiple views. Finally, we compare the proposed methods with existing multi-view feature extraction methods using multi-view semi-supervised ensembles. Experimental results on various multi-view data sets demonstrate the effectiveness of the proposed methods.
Multiclass semi-supervised learning for animal behavior recognition from accelerometer data
Tanha, J.; van Someren, M.; de Bakker, M.; Bouten, W.; Shamoun-Baranes, J.; Afsarmanesh, H.
2012-01-01
In this paper we present a new Multiclass semi-supervised learning algorithm that uses a base classifier in combination with a similarity function applied to all data to find a classifier that maximizes the margin and consistency over all data. A novel multiclass loss function is presented and used
Evaluation of Semi-supervised Learning for Classification of Protein Crystallization Imagery.
Sigdel, Madhav; Dinç, İmren; Dinç, Semih; Sigdel, Madhu S; Pusey, Marc L; Aygün, Ramazan S
2014-03-01
In this paper, we investigate the performance of two wrapper methods for semi-supervised learning algorithms for classification of protein crystallization images with limited labeled images. First, we evaluate the performance of the semi-supervised approach using self-training with naïve Bayes (NB) and sequential minimal optimization (SMO) as the base classifiers; the confidence values returned by these classifiers are used to select highly confident predictions for self-training. Second, we analyze the performance of Yet Another Two Stage Idea (YATSI) semi-supervised learning using NB, SMO, multilayer perceptron (MLP), J48 and random forest (RF) classifiers. These results are compared with basic supervised learning using the same training sets. We perform our experiments on a dataset consisting of 2250 protein crystallization images for different proportions of training and test data. Our results indicate that NB and SMO using both the self-training and YATSI semi-supervised approaches improve accuracies with respect to supervised learning. On the other hand, MLP, J48 and RF perform better using basic supervised learning. Overall, the random forest classifier yields the best accuracy with supervised learning for our dataset.
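The self-training wrapper evaluated above is easy to state generically: train a base classifier on the labeled pool, move its most confident predictions on unlabeled data into the pool, and repeat. A toy sketch with a nearest-centroid stand-in for the NB/SMO base classifiers; all data and names here are hypothetical:

```python
import random

random.seed(0)

def train_centroid(X, y):
    """Toy base classifier: one centroid per class (stand-in for NB/SMO)."""
    cents = {}
    for c in set(y):
        pts = [x for x, yy in zip(X, y) if yy == c]
        cents[c] = [sum(v) / len(pts) for v in zip(*pts)]
    return cents

def predict(cents, x):
    """Return (confidence, label); confidence is the squared-distance margin."""
    d = sorted((sum((a - b) ** 2 for a, b in zip(x, m)), c) for c, m in cents.items())
    return d[1][0] - d[0][0], d[0][1]

# Two well-separated blobs; only four points start out labeled.
lab = [([0.0, 0.0], 0), ([0.2, 0.1], 0), ([3.0, 3.0], 1), ([2.9, 3.2], 1)]
unl = [[random.gauss(0, 0.3), random.gauss(0, 0.3)] for _ in range(30)] \
    + [[random.gauss(3, 0.3), random.gauss(3, 0.3)] for _ in range(30)]

for _ in range(5):                      # self-training rounds
    model = train_centroid([p for p, _ in lab], [c for _, c in lab])
    scored = sorted(((predict(model, u), u) for u in unl), reverse=True)
    for (conf, c), u in scored[:10]:    # absorb the 10 most confident points
        lab.append((u, c))
        unl.remove(u)

model = train_centroid([p for p, _ in lab], [c for _, c in lab])
```

The confidence threshold (here "top 10 per round") is the knob that the wrapper methods in the paper tune via the base classifiers' own confidence values.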
Semi-supervised prediction of gene regulatory networks using machine learning algorithms.
Patel, Nihir; Wang, Jason T L
2015-10-01
Use of computational methods to predict gene regulatory networks (GRNs) from gene expression data is a challenging task. Many studies have been conducted using unsupervised methods to fulfill the task; however, such methods usually yield low prediction accuracies due to the lack of training data. In this article, we propose semi-supervised methods for GRN prediction by utilizing two machine learning algorithms, namely, support vector machines (SVM) and random forests (RF). The semi-supervised methods make use of unlabelled data for training. We investigated inductive and transductive learning approaches, both of which adopt an iterative procedure to obtain reliable negative training data from the unlabelled data. We then applied our semi-supervised methods to gene expression data of Escherichia coli and Saccharomyces cerevisiae, and evaluated the performance of our methods using the expression data. Our analysis indicated that the transductive learning approach outperformed the inductive learning approach for both organisms. However, there was no conclusive difference identified in the performance of SVM and RF. Experimental results also showed that the proposed semi-supervised methods performed better than existing supervised methods for both organisms.
A Cluster-then-label Semi-supervised Learning Approach for Pathology Image Classification.
Peikari, Mohammad; Salama, Sherine; Nofech-Mozes, Sharon; Martel, Anne L
2018-05-08
Completely labeled pathology datasets are often challenging and time-consuming to obtain. Semi-supervised learning (SSL) methods are able to learn from fewer labeled data points with the help of a large number of unlabeled data points. In this paper, we investigated the possibility of using clustering analysis to identify the underlying structure of the data space for SSL. A cluster-then-label method was proposed to identify high-density regions in the data space which were then used to help a supervised SVM in finding the decision boundary. We have compared our method with other supervised and semi-supervised state-of-the-art techniques using two different classification tasks applied to breast pathology datasets. We found that compared with other state-of-the-art supervised and semi-supervised methods, our SSL method is able to improve classification performance when a limited number of labeled data instances are made available. We also showed that it is important to examine the underlying distribution of the data space before applying SSL techniques to ensure semi-supervised learning assumptions are not violated by the data.
Integrative gene network construction to analyze cancer recurrence using semi-supervised learning.
Park, Chihyun; Ahn, Jaegyoon; Kim, Hyunjin; Park, Sanghyun
2014-01-01
The prognosis of cancer recurrence is an important research area in bioinformatics and is challenging due to the small sample sizes compared to the vast number of genes. There have been several attempts to predict cancer recurrence. Most studies employed a supervised approach, which uses only a few labeled samples. Semi-supervised learning can be a great alternative to solve this problem. There have been few attempts based on manifold assumptions to reveal the detailed roles of identified cancer genes in recurrence. In order to predict cancer recurrence, we proposed a novel semi-supervised learning algorithm based on a graph regularization approach. We transformed the gene expression data into a graph structure for semi-supervised learning and integrated protein interaction data with the gene expression data to select functionally-related gene pairs. Then, we predicted the recurrence of cancer by applying a regularization approach to the constructed graph containing both labeled and unlabeled nodes. The average improvement rate of accuracy for three different cancer datasets was 24.9% compared to existing supervised and semi-supervised methods. We performed functional enrichment on the gene networks used for learning. We identified that those gene networks are significantly associated with cancer-recurrence-related biological functions. Our algorithm was developed with standard C++ and is available in Linux and MS Windows formats in the STL library. The executable program is freely available at: http://embio.yonsei.ac.kr/~Park/ssl.php.
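The graph-regularization step described above can be reduced to a single linear solve: with graph Laplacian L and label vector y (zero on unlabeled nodes), minimizing ||f - y||^2 + lam * f^T L f gives f = (I + lam L)^{-1} y. A toy sketch on a hypothetical two-community graph, not the authors' gene networks:

```python
import numpy as np

# Toy graph: two complete 10-node communities joined by one weak bridge.
n = 20
W = np.zeros((n, n))
for i in range(10):
    for j in range(i + 1, 10):
        W[i, j] = W[j, i] = 1.0                       # community A
        W[i + 10, j + 10] = W[j + 10, i + 10] = 1.0   # community B
W[9, 10] = W[10, 9] = 0.1                             # bridge edge

L = np.diag(W.sum(axis=1)) - W      # unnormalized graph Laplacian
y = np.zeros(n)
y[0], y[19] = 1.0, -1.0             # one labeled node per class

# f = argmin ||f - y||^2 + lam * f^T L f   =>   (I + lam*L) f = y
f = np.linalg.solve(np.eye(n) + 5.0 * L, y)
pred = np.sign(f)
```

The labels diffuse through dense within-community edges while the weak bridge keeps the two communities' scores on opposite sides of zero.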
Two-dimensional dynamics of elasto-inertial turbulence and its role in polymer drag reduction
Sid, S.; Terrapon, V. E.; Dubief, Y.
2018-02-01
The goal of the present study is threefold: (i) to demonstrate the two-dimensional nature of the elasto-inertial instability in elasto-inertial turbulence (EIT), (ii) to identify the role of the bidimensional instability in three-dimensional EIT flows, and (iii) to establish the role of the small elastic scales in the mechanism of self-sustained EIT. Direct numerical simulations of viscoelastic fluid flows are performed in both two- and three-dimensional straight periodic channels using the Peterlin finitely extensible nonlinear elastic model (FENE-P). The Reynolds number is set to Reτ = 85, which is subcritical for two-dimensional flows but beyond the transition for three-dimensional ones. The polymer properties selected correspond to those of typical dilute polymer solutions, and two moderate Weissenberg numbers, Wiτ = 40 and 100, are considered. The simulation results show that sustained turbulence can be observed in two-dimensional subcritical flows, confirming the existence of a bidimensional elasto-inertial instability. The same type of instability is also observed in three-dimensional simulations where both Newtonian and elasto-inertial turbulent structures coexist. Depending on the Wi number, one type of structure can dominate and drive the flow. For large Wi values, the elasto-inertial instability tends to prevail over the Newtonian turbulence. This statement is supported by (i) the absence of typical Newtonian near-wall vortices and (ii) strong similarities between two- and three-dimensional flows when considering larger Wi numbers. The role of small elastic scales is investigated by introducing global artificial diffusion (GAD) in the hyperbolic transport equation for polymers. The aim is to measure how the flow reacts when the smallest elastic scales are progressively filtered out. The study results show that the introduction of large polymer diffusion in the system strongly damps a significant part of the elastic scales that are necessary to feed
Ico, G; Myung, A; Kim, B S; Myung, N V; Nam, J
2018-02-08
Despite the significant potential of organic piezoelectric materials in electro-mechanical or mechano-electrical applications that require light and flexible material properties, their intrinsically low piezoelectric performance compared to traditional inorganic materials has limited their full utilization. In this study, we demonstrate that dimensional reduction of poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) at the nanoscale by electrospinning, combined with an appropriate thermal treatment, induces a transformative enhancement in piezoelectric performance. Specifically, the piezoelectric coefficient (d33) reached up to −108 pm V⁻¹, approaching that of inorganic counterparts. Electrospun mats composed of thermo-treated 30 nm nanofibers with a thickness of 15 μm produced a consistent peak-to-peak voltage of 38.5 V and a power output of 74.1 μW at a strain of 0.26% while sustaining energy production over 10,000 repeated actuations. The exceptional piezoelectric performance was realized by the enhancement of piezoelectric dipole alignment and the materialization of flexoelectricity, both from the synergistic effects of dimensional reduction and thermal treatment. Our findings suggest that dimensionally controlled and thermally treated electrospun P(VDF-TrFE) nanofibers provide an opportunity to exploit their flexibility and durability for mechanically challenging applications while matching the piezoelectric performance of brittle, inorganic piezoelectric materials.
Rydzewski, J; Nowak, W
2016-04-12
In this work we propose an application of a nonlinear dimensionality reduction method to represent the high-dimensional configuration space of the ligand-protein dissociation process in a manner facilitating interpretation. Rugged ligand expulsion paths are mapped into two-dimensional space. The mapping retains the main structural changes occurring during the dissociation. The topological similarity of the reduced paths may be easily studied using the Fréchet distances, and we show that this measure facilitates machine learning classification of the diffusion pathways. Further, the low-dimensional configuration space allows for identification of residues active in transport during ligand diffusion from a protein. The utility of this approach is illustrated by examination of the configuration space of cytochrome P450cam involved in expelling camphor by means of enhanced all-atom molecular dynamics simulations. The expulsion trajectories are sampled and constructed on-the-fly during molecular dynamics simulations using the recently developed memetic algorithms [Rydzewski, J.; Nowak, W. J. Chem. Phys. 2015, 143(12), 124101]. We show that the memetic algorithms are effective for enforcing ligand diffusion and cavity exploration in the P450cam-camphor complex. Furthermore, we demonstrate that machine learning techniques are helpful in inspecting ligand diffusion landscapes and provide useful tools to examine structural changes accompanying rare events.
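The Fréchet comparison of reduced paths can be reproduced with the standard discrete Fréchet distance (the Eiter-Mannila dynamic program); the sketch below is a generic implementation, not the authors' code.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polylines P and Q
    (arrays of points): the minimal 'leash length' over all monotone
    couplings of the two vertex sequences, via dynamic programming."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    ca = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            d = np.linalg.norm(P[i] - Q[j])
            if i == 0 and j == 0:
                ca[i, j] = d
            elif i == 0:
                ca[i, j] = max(ca[0, j - 1], d)
            elif j == 0:
                ca[i, j] = max(ca[i - 1, 0], d)
            else:
                ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1],
                                   ca[i, j - 1]), d)
    return ca[-1, -1]
```

Pairwise distances computed this way yield a similarity matrix over dissociation paths that a standard classifier or clustering method can consume, which is the role the measure plays in the abstract.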
Semi-Supervised Transductive Hot Spot Predictor Working on Multiple Assumptions
Wang, Jim Jing-Yan
2014-05-23
Protein-protein interactions are critically dependent on just a few residues (“hot spots”) at the interfaces. Hot spots make a dominant contribution to the binding free energy and, if mutated, they can disrupt the interaction. As mutagenesis studies require significant experimental effort, there exists a need for accurate and reliable computational hot spot prediction methods. Compared to supervised hot spot prediction algorithms, semi-supervised prediction methods can take both the labeled and unlabeled residues in the dataset into consideration during the prediction procedure. The transductive support vector machine has been utilized for this task and demonstrated better prediction performance. To the best of our knowledge, however, none of the transductive semi-supervised algorithms takes all three semi-supervised assumptions, i.e., the smoothness, cluster and manifold assumptions, into account during learning. In this paper, we propose a novel semi-supervised method for hot spot residue prediction that considers all three semi-supervised assumptions using nonlinear models. Our algorithm, IterPropMCS, works in an iterative manner. In each iteration, the algorithm first propagates the labels of the labeled residues to the unlabeled ones along the shortest path between them on a graph, assuming that they lie on a nonlinear manifold. Then it selects the most confident residues as the labeled ones for the next iteration, according to the cluster and smoothness criteria, which are implemented by a nonlinear density estimator. Experiments on a benchmark dataset, using protein structure-based features, demonstrate that our approach is effective in predicting hot spots and compares favorably to other available methods. The results also show that our method outperforms the state-of-the-art transductive learning methods.
Chavez Chavez, Gustavo Ivan; Turkiyyah, George; Zampini, Stefano; Keyes, David E.
2017-01-01
and the cyclic reduction method. The setup and application phases of the preconditioner achieve log-linear complexity in memory footprint and number of operations, and numerical experiments exhibit good weak and strong scalability at large processor counts in a
Sponberg, Simon; Daniel, Thomas L; Fairhall, Adrienne L
2015-04-01
What are the features of movement encoded by changing motor commands? Do motor commands encode movement independently or can they be represented in a reduced set of signals (i.e. synergies)? Motor encoding poses a computational and practical challenge because many muscles typically drive movement, and simultaneous electrophysiology recordings of all motor commands are typically not available. Moreover, during a single locomotor period (a stride or wingstroke) the variation in movement may have high dimensionality, even if only a few discrete signals activate the muscles. Here, we apply the method of partial least squares (PLS) to extract the encoded features of movement based on the cross-covariance of motor signals and movement. PLS simultaneously decomposes both datasets and identifies only the variation in movement that relates to the specific muscles of interest. We use this approach to explore how the main downstroke flight muscles of an insect, the hawkmoth Manduca sexta, encode torque during yaw turns. We simultaneously record muscle activity and turning torque in tethered flying moths experiencing wide-field visual stimuli. We ask whether this pair of muscles acts as a muscle synergy (a single linear combination of activity) consistent with their hypothesized function of producing a left-right power differential. Alternatively, each muscle might individually encode variation in movement. We show that PLS feature analysis produces an efficient reduction of dimensionality in torque variation within a wingstroke. At first, the two muscles appear to behave as a synergy when we consider only their wingstroke-averaged torque. However, when we consider the PLS features, the muscles reveal independent encoding of torque. Using these features we can predictably reconstruct the variation in torque corresponding to changes in muscle activation. PLS-based feature analysis provides a general two-sided dimensionality reduction that reveals encoding in high dimensional
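The two-sided decomposition at the heart of this analysis can be sketched with the SVD form of PLS (one common variant, assumed here; the study's actual pipeline has further steps): the leading singular vectors of the cross-covariance between motor signals X and movement Y give paired weight vectors capturing only the movement variation that covaries with the muscles of interest.

```python
import numpy as np

def pls_modes(X, Y, n_modes=1):
    """PLS-SVD sketch: decompose the cross-covariance X^T Y of two
    centered datasets.  Returns paired weight vectors (one per mode)
    for the X block and the Y block, plus the singular values."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    C = Xc.T @ Yc / (len(X) - 1)            # cross-covariance matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U[:, :n_modes], Vt[:n_modes].T, s[:n_modes]
```

If only the first X variable drives Y, the leading X-side vector loads almost entirely on that variable, which is how the analysis distinguishes a shared synergy from independent encoding by each muscle.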
Zhang, Qiang; Li, Jiafeng; Zhuo, Li; Zhang, Hui; Li, Xiaoguang
2017-12-01
Color is one of the most stable attributes of vehicles and is often used as a valuable cue in important applications. Complex environmental factors, such as illumination, weather and noise, produce considerable diversity in the visual appearance of vehicle color, so vehicle color recognition in complex environments is a challenging task. State-of-the-art methods roughly take the whole image for color recognition, but many parts of the image, such as car windows, wheels and background, contain no color information, which has a negative impact on recognition accuracy. In this paper, a novel vehicle color recognition method using local vehicle-color saliency detection and dual-orientational dimensionality reduction of convolutional neural network (CNN) deep features is proposed. The novelty of the proposed method includes two parts: (1) a local vehicle-color saliency detection method is proposed to determine the vehicle color region of the vehicle image and exclude the influence of non-color regions on recognition accuracy; (2) a dual-orientational dimensionality reduction strategy is designed to greatly reduce the dimensionality of the deep features learnt from the CNN, which greatly mitigates the storage and computational burden of the subsequent processing while improving recognition accuracy. Furthermore, a linear support vector machine is adopted as the classifier, trained on the dimensionality-reduced features to obtain the recognition model. Experimental results on a public dataset demonstrate that the proposed method achieves superior recognition performance over the state-of-the-art methods.
International Nuclear Information System (INIS)
Fang, Guochang; Tian, Lixin; Sun, Mei; Fu, Min
2012-01-01
A novel three-dimensional energy-saving and emission-reduction chaotic system is proposed, which has not yet been reported in the literature. The system is established in accordance with the complicated relationships between energy-saving and emission-reduction, carbon emissions and economic growth. The dynamic behavior of the system is analyzed by means of Lyapunov exponents and bifurcation diagrams. With the undetermined coefficient method, expressions for the homoclinic orbits of the system are obtained. The Šilnikov theorem guarantees that the system has Smale horseshoes and horseshoe chaos. An artificial neural network (ANN) is used to identify the quantitative coefficients in the simulation models according to the statistical data of China, and an empirical study of the real system is carried out with the results in good agreement with the actual situation. It is found that the sooner and more thoroughly energy-saving and emission-reduction is started, the easier and sooner the maximum of the carbon emissions will be reached, so as to reduce carbon emissions and energy intensity. Numerical simulations are presented to demonstrate the results. -- Highlights: ► A non-linear dynamical method is used to model the energy-saving and emission-reduction system. ► The energy-saving and emission-reduction attractor is obtained. ► The unknown parameters of the energy-saving and emission-reduction system are identified from the statistical data. ► The achievements of energy-saving and emission-reduction are evaluated with the time-varying energy intensity calculation formula. ► Some statistical results based on the statistical data of China are presented, which are vivid and adherent to reality.
Green functions and dimensional reduction of quantum fields on product manifolds
International Nuclear Information System (INIS)
Haba, Z
2008-01-01
We discuss Euclidean Green functions on product manifolds P = N × M. We show that if M is compact and N is not compact then the Euclidean field on P can be approximated by its zero mode, which is a Euclidean field on N. We estimate the remainder of this approximation. We show that for large distances on N the remainder is small. If P = R^(D-1) × S_β, where S_β is a circle of radius β, then the result reduces to the well-known approximation of the D-dimensional finite-temperature quantum field theory by the (D-1)-dimensional one in the high-temperature limit. Analytic continuation of Euclidean fields is discussed briefly.
Mittal, Yogesh; Varghese, K George; Mohan, S; Jayakumar, N; Chhag, Somil
2016-03-01
The three-dimensional titanium plating system was developed by Farmand in 1995 to meet the requirements of semi-rigid fixation with fewer complications. The purpose of this in vivo prospective study was to evaluate and compare the clinical effectiveness of three-dimensional and two-dimensional titanium miniplates for open reduction and fixation of mandibular parasymphysis fractures. Thirty patients with non-comminuted mandibular parasymphysis fractures were divided randomly into two equal groups and were treated with 2 mm 3D and 2D miniplate systems, respectively. All patients were systematically monitored at the 1st, 2nd, 3rd and 6th week and the 3rd and 6th month postoperatively. The outcome parameters recorded were severity of pain, infection, mobility, occlusion derangement, paresthesia and implant failure. The data so collected were analyzed using the independent t test and Chi square test (α = .05). The results showed that one patient in each group had post-operative infection, occlusion derangement and mobility (p > .05). In Group A, one patient had paresthesia, while in Group B, two patients had paresthesia (p > .05). None of the patients in either group had implant failure. There was no statistically significant difference between the 3D and 2D miniplate systems in any of the recorded parameters at any follow-up (p > .05). 3D miniplates were found to be better than 2D miniplates in terms of cost, ease of surgery and operative time. However, 3D miniplates were unfavorable for cases where the fracture line was oblique or in close proximity to the mental foramen, where they were difficult to adapt and carried a greater chance of tooth-root damage and inadvertent injury to the mental nerve due to traction.
Tsukagoshi, Yuta; Kamada, Hiroshi; Kamegaya, Makoto; Takeuchi, Ryoko; Nakagawa, Shogo; Tomaru, Yohei; Tanaka, Kenta; Onishi, Mio; Nishino, Tomofumi; Yamazaki, Masashi
2018-05-02
Previous reports on patients with developmental dysplasia of the hip (DDH) showed that the prereduced femoral head was notably smaller and more nonspherical than the intact head, with growth failure observed at the proximal posteromedial area. We evaluated the shape of the femoral head cartilage in patients with DDH before and after reduction, with size and sphericity assessed using 3-dimensional (3D) magnetic resonance imaging (MRI). We studied 10 patients with unilateral DDH (all female) who underwent closed reduction. Patients with avascular necrosis of the femoral head on the plain radiograph 1 year after reduction were excluded. 3D MRI was performed before reduction and after reduction, at 2 years of age. 3D-image analysis software was used to reconstruct the multiplanes. After setting the axial, coronal, and sagittal planes in the software (based on the femoral shaft and neck axes), the smallest sphere that included the femoral head cartilage was drawn, the diameter was measured, and the center of the sphere was defined as the femoral head center. We measured the distance between the center and cartilage surface every 30 degrees on the 3 reconstructed planes. Sphericity of the femoral head was calculated using a ratio (the distance divided by each radius) and compared between prereduction and postreduction. The mean patient age was 7±3 and 26±3 months at the first and second MRI, respectively. The mean duration between the reduction and second MRI was 18±3 months. The femoral head diameter was 26.7±1.5 and 26.0±1.6 mm on the diseased and intact sides, respectively (P=0.069). The ratios of the posteromedial area on the axial plane and the proximoposterior area on the sagittal plane after reduction were significantly larger than before reduction (P<0.01). We demonstrated that the size of the reduced femoral head was nearly equal to that of the intact femoral head and that the growth failure area of the head before reduction, in the proximal posteromedial
On reduction and exact solutions of nonlinear many-dimensional Schroedinger equations
International Nuclear Information System (INIS)
Barannik, A.F.; Marchenko, V.A.; Fushchich, V.I.
1991-01-01
With the help of the canonical decomposition of an arbitrary subalgebra of the orthogonal algebra AO(n), the rank n and rank n-1 maximal subalgebras of the extended isochronous Galileo algebra, the rank n maximal subalgebras of the generalized extended classical Galileo algebra AG(a,n), the extended special Galileo algebra AG(2,n) and the extended whole Galileo algebra AG(3,n) are described. Using the rank n subalgebras, ansätze reducing the many-dimensional Schrödinger equations to ordinary differential equations are found. With the help of solutions of the reduced equations, exact solutions of the Schrödinger equation are constructed.
Dimensional reduction of the Standard Model coupled to a new singlet scalar field
Energy Technology Data Exchange (ETDEWEB)
Brauner, Tomáš [Faculty of Science and Technology, University of Stavanger,N-4036 Stavanger (Norway); Tenkanen, Tuomas V.I. [Department of Physics and Helsinki Institute of Physics,P.O. Box 64, FI-00014 University of Helsinki (Finland); Tranberg, Anders [Faculty of Science and Technology, University of Stavanger,N-4036 Stavanger (Norway); Vuorinen, Aleksi [Department of Physics and Helsinki Institute of Physics,P.O. Box 64, FI-00014 University of Helsinki (Finland); Weir, David J. [Faculty of Science and Technology, University of Stavanger,N-4036 Stavanger (Norway); Department of Physics and Helsinki Institute of Physics,P.O. Box 64, FI-00014 University of Helsinki (Finland)
2017-03-01
We derive an effective dimensionally reduced theory for the Standard Model augmented by a real singlet scalar. We treat the singlet as a superheavy field and integrate it out, leaving an effective theory involving only the Higgs and SU(2)_L × U(1)_Y gauge fields, identical to the one studied previously for the Standard Model. This opens up the possibility of efficiently computing the order and strength of the electroweak phase transition, numerically and nonperturbatively, in this extension of the Standard Model. Understanding the phase diagram is crucial for models of electroweak baryogenesis and for studying the production of gravitational waves at thermal phase transitions.
Algorithm for statistical noise reduction in three-dimensional ion implant simulations
International Nuclear Information System (INIS)
Hernandez-Mangas, J.M.; Arias, J.; Jaraiz, M.; Bailon, L.; Barbolla, J.
2001-01-01
As integrated circuit devices scale into the deep sub-micron regime, ion implantation will continue to be the primary means of introducing dopant atoms into silicon. Different types of impurity profiles such as ultra-shallow profiles and retrograde profiles are necessary for deep submicron devices in order to realize the desired device performance. A new algorithm to reduce the statistical noise in three-dimensional ion implant simulations both in the lateral and shallow/deep regions of the profile is presented. The computational effort in BCA Monte Carlo ion implant simulation is also reduced
Directory of Open Access Journals (Sweden)
Ernestina Martel
2018-06-01
Dimensionality reduction represents a critical preprocessing step in order to increase the efficiency and the performance of many hyperspectral imaging algorithms. However, dimensionality reduction algorithms, such as Principal Component Analysis (PCA), suffer from their computationally demanding nature, making it advisable to implement them on high-performance computer architectures for applications under strict latency constraints. This work presents the implementation of the PCA algorithm on two different high-performance devices, namely an NVIDIA Graphics Processing Unit (GPU) and a Kalray manycore, uncovering a highly valuable set of tips and tricks in order to take full advantage of the inherent parallelism of these high-performance computing platforms, and hence reducing the time required to process a given hyperspectral image. Moreover, the results achieved with different hyperspectral images have been compared with those obtained with a recently published field-programmable gate array (FPGA)-based implementation of the PCA algorithm, providing, for the first time in the literature, a comprehensive analysis that highlights the pros and cons of each option.
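For reference, the core of the PCA step being accelerated reduces to a few dense linear-algebra kernels, which is exactly what maps well onto GPUs and manycores. A CPU NumPy sketch (a GPU port would route the same operations through a CUDA-backed array library such as CuPy, an assumption on our part about the porting route):

```python
import numpy as np

def pca_reduce(X, n_components):
    """PCA via SVD of the centered data matrix: the mean-centering
    pass and the SVD/projection matrix products are the hot spots
    that high-performance implementations parallelize."""
    Xc = X - X.mean(axis=0)                  # center each band/feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # scores on the leading PCs
```

For a hyperspectral cube, X would be the pixels-by-bands matrix, and the returned scores are the reduced representation fed to downstream algorithms.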
International Nuclear Information System (INIS)
Ma, Yanjiao; Wang, Hui; Feng, Hanqing; Ji, Shan; Mao, Xuefeng; Wang, Rongfang
2014-01-01
Graphical abstract: Three-dimensional Fe, N-doped carbon foams prepared in two steps exhibited catalytic activity for the oxygen reduction reaction comparable to commercial Pt/C due to their unique structure and the synergistic effect of Fe and N atoms. - Highlights: • Three-dimensional Fe, N-doped carbon foams (3D-CF) were prepared. • 3D-CF exhibits catalytic activity comparable to Pt/C for the oxygen reduction reaction. • The enhanced activity of 3D-CF results from its unique structure. - Abstract: Three-dimensional (3D) Fe, N-doped carbon foams (3D-CF) as efficient cathode catalysts for the oxygen reduction reaction (ORR) in alkaline solution are reported. The 3D-CF exhibit an interconnected hierarchical pore structure. In addition, Fe, N-doped carbon without porous structure (Fe-N-C) and 3D N-doped carbon without Fe (3D-CF’) are prepared to verify the electrocatalytic activity of 3D-CF. The electrocatalytic performance of the as-prepared 3D-CF for the ORR shows that the onset potential on the 3D-CF electrode shifts positively by about 41 mV relative to those of 3D-CF’ and Fe-N-C, respectively. In addition, the onset potential on the 3D-CF electrode for the ORR is about 27 mV more negative than that on a commercial Pt/C electrode. 3D-CF also shows better methanol tolerance and durability than the commercial Pt/C catalyst. These results show that synthesizing 3D hierarchical pores with high specific surface area is an efficient way to improve ORR performance.
Dimensional reduction of a Lorentz and CPT-violating Maxwell-Chern-Simons model
Energy Technology Data Exchange (ETDEWEB)
Belich, H. Jr.; Helayel Neto, J.A. [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil). Coordenacao de Teoria de Campos e Particulas; Grupo de Fisica Teorica Jose Leite Lopes, Petropolis, RJ (Brazil); E-mails: belich@cbpf.br; helayel@cbpf.br; Ferreira, M.M. Jr. [Grupo de Fisica Teorica Jose Leite Lopes, Petropolis, RJ (Brazil); Maranhao Univ., Sao Luiz, MA (Brazil). Dept. de Fisica]. E-mail: manojr@cbpf.br; Orlando, M.T.D. [Grupo de Fisica Teorica Jose Leite Lopes, Petropolis, RJ (Brazil); Espirito Santo Univ., Vitoria, ES (Brazil). Dept. de Fisica e Quimica; E-mail: orlando@cce.ufes.br
2003-01-01
Taking as starting point a Lorentz and CPT non-invariant Chern-Simons-like model defined in 1+3 dimensions, we proceed to realize its dimensional reduction to D = 1+2. One then obtains a new planar model, composed of the Maxwell-Chern-Simons (MCS) sector, a Klein-Gordon massless scalar field, and a coupling term that mixes the gauge field to the external vector ν^μ. In spite of breaking Lorentz invariance in the particle frame, this model may preserve the CPT symmetry for a single particular choice of ν^μ. Analyzing the dispersion relations, one verifies that the reduced model exhibits stability, but causality can be jeopardized by some modes. The unitarity of the gauge sector is assured without any restriction, while the scalar sector is unitary only in the space-like case. (author)
Feature Space Dimensionality Reduction for Real-Time Vision-Based Food Inspection
Directory of Open Access Journals (Sweden)
Mai Moussa CHETIMA
2009-03-01
Machine vision solutions are becoming a standard for quality inspection in several manufacturing industries. In the processed-food industry, where the appearance attributes of the product are essential to customer satisfaction, visual inspection can be reliably achieved with machine vision. But such systems often involve the extraction of a larger number of features than those actually needed to ensure proper quality control, making the process less efficient and difficult to tune. This work experiments with several feature selection techniques in order to reduce the number of attributes analyzed by a real-time vision-based food inspection system. Identifying and removing as much irrelevant and redundant information as possible reduces the dimensionality of the data and allows classification algorithms to operate faster. In some cases, classification accuracy can even be improved. Filter-based and wrapper-based feature selectors are experimentally evaluated on different bakery products to identify the best performing approaches.
Reduction of the dimensionality and comparative analysis of multivariate radiological data
International Nuclear Information System (INIS)
Seddeek, M.K.; Kozae, A.M.; Sharshar, T.; Badran, H.M.
2009-01-01
Computational methods were used to reduce the dimensionality and to find clusters of multivariate data. The variables were the natural radioactivity contents and the texture characteristics of sand samples. The application of discriminant analysis revealed that samples with high negative values of the former score have the highest contamination with black sand. Principal component analysis (PCA) revealed that radioactivity concentrations alone are sufficient for the classification. Rough set analysis (RSA) showed that the concentration of 238U, 226Ra, or 232Th, combined with the concentration of 40K, can specify the clusters and characteristics of the sand. Both PCA and RSA show that 238U, 226Ra, and 232Th behave similarly. RSA revealed that one or two of them can be omitted without degrading predictions.
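The PCA step used above is the standard eigendecomposition of the sample covariance matrix. A compact sketch on synthetic correlated data (the variables mimic the "several correlated concentrations plus one unrelated variable" situation; the data are invented, not the radiological dataset):

```python
import numpy as np

def pca(X, n_components):
    """PCA via eigendecomposition of the sample covariance matrix.
    Returns the projected scores and the retained eigenvalues."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)          # eigh returns ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return Xc @ vecs[:, order], vals[order]

rng = np.random.default_rng(1)
# three correlated variables driven by one latent factor, plus one noise variable
base = rng.normal(size=(300, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(300, 3)),
               rng.normal(size=(300, 1))])
scores, var = pca(X, 2)
print(scores.shape)   # (300, 2)
```

The leading eigenvalue captures the shared latent factor, which is why, as in the abstract, strongly co-varying concentrations can be summarized by fewer components.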
Dimensional reduction of a Lorentz and CPT-violating Maxwell-Chern-Simons model
International Nuclear Information System (INIS)
Belich, H. Jr.; Helayel Neto, J.A.; Ferreira, M.M. Jr.; Maranhao Univ., Sao Luiz, MA; Orlando, M.T.D.; Espirito Santo Univ., Vitoria, ES
2003-01-01
Taking as a starting point a Lorentz- and CPT-non-invariant Chern-Simons-like model defined in 1+3 dimensions, we carry out its dimensional reduction to D = 1+2. One then obtains a new planar model, composed of the Maxwell-Chern-Simons (MCS) sector, a massless Klein-Gordon scalar field, and a coupling term that mixes the gauge field with the external vector ν^μ. In spite of breaking Lorentz invariance in the particle frame, this model may preserve CPT symmetry for a single particular choice of ν^μ. Analyzing the dispersion relations, one verifies that the reduced model exhibits stability, but causality can be jeopardized by some modes. The unitarity of the gauge sector is assured without any restriction, while the scalar sector is unitary only in the space-like case. (author)
An MPCA/LDA Based Dimensionality Reduction Algorithm for Face Recognition
Directory of Open Access Journals (Sweden)
Jun Huang
2014-01-01
Full Text Available We propose a face recognition algorithm based on both multilinear principal component analysis (MPCA) and linear discriminant analysis (LDA). Compared with existing face recognition methods, our approach treats face images as multidimensional tensors in order to find the optimal tensor subspace for accomplishing dimension reduction. LDA is used to project samples onto a new discriminant feature space, while the k-nearest-neighbor (KNN) rule is adopted for sample set classification. The results of our study and the developed algorithm are validated with the face databases ORL, FERET, and YALE and compared with the PCA, MPCA, and PCA + LDA methods, demonstrating an improvement in face recognition accuracy.
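The LDA-projection-plus-KNN stage of such a pipeline can be sketched for the two-class case (the tensor-based MPCA stage is omitted here; the Gaussian data, class means, and k value are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def fisher_lda(X, y):
    """Two-class Fisher discriminant direction w = Sw^-1 (m1 - m0)."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)   # within-class scatter
    return np.linalg.solve(Sw, m1 - m0)

def knn_predict(Xtr, ytr, Xte, k=3):
    """Majority vote among the k nearest training samples."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return (ytr[idx].mean(axis=1) > 0.5).astype(int)

rng = np.random.default_rng(2)
X0 = rng.normal(0.0, 1.0, (50, 5))
X1 = rng.normal(1.5, 1.0, (50, 5))
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 50)

w = fisher_lda(X, y)
z = (X @ w)[:, None]                   # 1-D discriminant feature
pred = knn_predict(z, y, z, k=3)
acc = (pred == y).mean()
print(acc > 0.9)
```

Projecting onto the discriminant direction before the KNN vote is what makes the neighborhood structure class-aware rather than purely geometric.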
Semi-supervised eigenvectors for large-scale locally-biased learning
DEFF Research Database (Denmark)
Hansen, Toke Jansen; Mahoney, Michael W.
2014-01-01
In many applications, one has side information, e.g., labels that are provided in a semi-supervised manner, about a specific target region of a large data set, and one wants to perform machine learning and data analysis tasks nearby that prespecified target region. For example, one might … -based machine learning and data analysis tools. At root, the reason is that eigenvectors are inherently global quantities, thus limiting the applicability of eigenvector-based methods in situations where one is interested in very local properties of the data. In this paper, we address this issue by providing … improved scaling properties. We provide several empirical examples demonstrating how these semi-supervised eigenvectors can be used to perform locally-biased learning; and we discuss the relationship between our results and recent machine learning algorithms that use global eigenvectors of the graph …
An Improved Semisupervised Outlier Detection Algorithm Based on Adaptive Feature Weighted Clustering
Directory of Open Access Journals (Sweden)
Tingquan Deng
2016-01-01
Full Text Available There exist already various approaches to outlier detection, in which semisupervised methods achieve encouraging superiority due to the introduction of prior knowledge. In this paper, an adaptive feature weighted clustering-based semisupervised outlier detection strategy is proposed. This method maximizes the membership degree of a labeled normal object to the cluster it belongs to and minimizes the membership degrees of a labeled outlier to all clusters. In consideration of distinct significance of features or components in a dataset in determining an object being an inlier or outlier, each feature is adaptively assigned different weights according to the deviation degrees between this feature of all objects and that of a certain cluster prototype. A series of experiments on a synthetic dataset and several real-world datasets are implemented to verify the effectiveness and efficiency of the proposal.
International Nuclear Information System (INIS)
Ikai, T.; Yoshimura, T.; Shinohara, A.; Takayama, T.; Sekine, T.
2006-01-01
The selenide-capped hexatechnetium cluster complex [Tc6(μ3-Se)8(CN)6]4- (1) was prepared by the reaction of the one-dimensional polymer complex [Tc6(μ3-Se)8Br4]2- with cyanide at high temperature. The similar reaction of the sulfide-capped hexatechnetium cluster complex [Tc6(μ3-S)8Br6]4- with cyanide gave the terminally substituted complex [Tc6(μ3-S)8(CN)6]4- (2). Single-crystal X-ray analysis of 1 and 2 showed that the Tc-Tc bond lengths become longer with larger ionic radius of the face-capping ligands, in the order S < Se … , and that of 2 showed it at 2119 cm-1. Each cyclic voltammogram of 1 and 2 showed a reversible one-electron redox wave assignable to the Tc6(III)/Tc5(III)Tc(IV) process. These redox potentials are shifted positively by about 0.4 V compared with those of the Re cluster analogs. (author)
International Nuclear Information System (INIS)
Langner, Ulrich W.; Keall, Paul J.
2010-01-01
Purpose: To quantify the magnitude and frequency of artifacts in simulated four-dimensional computed tomography (4D CT) images using three real-time acquisition methods (direction-dependent displacement acquisition, simultaneous displacement and phase acquisition, and simultaneous displacement and velocity acquisition) and to compare these methods with commonly used retrospective phase sorting. Methods and Materials: Image acquisition for the four 4D CT methods was simulated with different displacement and velocity tolerances for spheres with radii of 0.5 cm, 1.5 cm, and 2.5 cm, using 58 patient-measured tumors and respiratory motion traces. The magnitude and frequency of artifacts, CT doses, and acquisition times were computed for each method. Results: The mean artifact magnitude was 50% smaller for the three real-time methods than for retrospective phase sorting. The dose was ∼50% lower, but the acquisition time was 20% to 100% longer for the real-time methods than for retrospective phase sorting. Conclusions: Real-time acquisition methods can reduce the frequency and magnitude of artifacts in 4D CT images, as well as the imaging dose, but they increase the image acquisition time. The results suggest that direction-dependent displacement acquisition is the preferred real-time 4D CT acquisition method, because on average, the lowest dose is delivered to the patient and the acquisition time is the shortest for the resulting number and magnitude of artifacts.
Alzheimer's Disease Early Diagnosis Using Manifold-Based Semi-Supervised Learning.
Khajehnejad, Moein; Saatlou, Forough Habibollahi; Mohammadzade, Hoda
2017-08-20
Alzheimer's disease (AD) is currently ranked as the sixth leading cause of death in the United States and recent estimates indicate that the disorder may rank third, just behind heart disease and cancer, as a cause of death for older people. Clearly, predicting this disease in the early stages and preventing it from progressing is of great importance. The diagnosis of Alzheimer's disease (AD) requires a variety of medical tests, which leads to huge amounts of multivariate heterogeneous data. It can be difficult and exhausting to manually compare, visualize, and analyze this data due to the heterogeneous nature of medical tests; therefore, an efficient approach for accurate prediction of the condition of the brain through the classification of magnetic resonance imaging (MRI) images is greatly beneficial and yet very challenging. In this paper, a novel approach is proposed for the diagnosis of very early stages of AD through an efficient classification of brain MRI images, which uses label propagation in a manifold-based semi-supervised learning framework. We first apply voxel morphometry analysis to extract some of the most critical AD-related features of brain images from the original MRI volumes and also gray matter (GM) segmentation volumes. The features must capture the most discriminative properties that vary between a healthy and Alzheimer-affected brain. Next, we perform a principal component analysis (PCA)-based dimension reduction on the extracted features for faster yet sufficiently accurate analysis. To make the best use of the captured features, we present a hybrid manifold learning framework which embeds the feature vectors in a subspace. Next, using a small set of labeled training data, we apply a label propagation method in the created manifold space to predict the labels of the remaining images and classify them in the two groups of mild Alzheimer's and normal condition (MCI/NC). The accuracy of the classification using the proposed method is 93
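The label-propagation step described above can be illustrated on a toy graph: labels diffuse over an affinity matrix while the known labels are clamped each iteration. The affinity construction, clamping scheme, and 1-D "data" below are a minimal sketch, not the paper's manifold-learning implementation.

```python
import numpy as np

def label_propagation(W, y_init, labeled, n_iter=100):
    """Propagate labels over a row-normalized affinity matrix W,
    clamping the labeled points at every step."""
    P = W / W.sum(axis=1, keepdims=True)
    F = y_init.astype(float)
    for _ in range(n_iter):
        F = P @ F
        F[labeled] = y_init[labeled]       # clamp known labels
    return (F > 0.5).astype(int)

# two well-separated 1-D clusters, one labeled point in each
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
W = np.exp(-(x[:, None] - x[None, :]) ** 2)   # Gaussian affinities
y_init = np.array([0, 0, 0, 1, 0, 0])         # only points 0 and 3 are labeled
labeled = np.array([0, 3])
print(label_propagation(W, y_init, labeled))  # [0 0 0 1 1 1]
```

The two clamped labels spread through their own clusters but barely leak across the gap, which is the behavior that makes propagation effective when labeled scans are scarce.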
Tile-Based Semisupervised Classification of Large-Scale VHR Remote Sensing Images
Directory of Open Access Journals (Sweden)
Haikel Alhichri
2018-01-01
Full Text Available This paper deals with the problem of the classification of large-scale very high-resolution (VHR) remote sensing (RS) images in a semisupervised scenario, where we have a limited training set (less than ten training samples per class). Typical pixel-based classification methods are unfeasible for large-scale VHR images. Thus, as a practical and efficient solution, we propose to subdivide the large image into a grid of tiles and then classify the tiles instead of classifying pixels. Our proposed method uses the power of a pretrained convolutional neural network (CNN) to first extract descriptive features from each tile. Next, a neural network classifier (composed of 2 fully connected layers) is trained in a semisupervised fashion and used to classify all remaining tiles in the image. This basically presents a coarse classification of the image, which is sufficient for many RS applications. The second contribution deals with the employment of semisupervised learning to improve the classification accuracy. We present a novel semisupervised approach which exploits both the spectral and spatial relationships embedded in the remaining unlabelled tiles. In particular, we embed a spectral graph Laplacian in the hidden layer of the neural network. In addition, we apply regularization of the output labels using a spatial graph Laplacian and the random walker algorithm. Experimental results obtained by testing the method on two large-scale images acquired by the IKONOS2 sensor reveal promising capabilities of this method in terms of classification accuracy even with less than ten training samples per class.
Chavez Chavez, Gustavo Ivan
2017-12-07
We present a robust and scalable preconditioner for the solution of large-scale linear systems that arise from the discretization of elliptic PDEs amenable to rank compression. The preconditioner is based on hierarchical low-rank approximations and the cyclic reduction method. The setup and application phases of the preconditioner achieve log-linear complexity in memory footprint and number of operations, and numerical experiments exhibit good weak and strong scalability at large processor counts in a distributed memory environment. Numerical experiments with linear systems that feature symmetry and nonsymmetry, definiteness and indefiniteness, constant and variable coefficients demonstrate the preconditioner applicability and robustness. Furthermore, it is possible to control the number of iterations via the accuracy threshold of the hierarchical matrix approximations and their arithmetic operations, and the tuning of the admissibility condition parameter. Together, these parameters allow for optimization of the memory requirements and performance of the preconditioner.
Stanescu, Ana; Caragea, Doina
2015-01-01
Recent biochemical advances have led to inexpensive, time-efficient production of massive volumes of raw genomic data. Traditional machine learning approaches to genome annotation typically rely on large amounts of labeled data. The process of labeling data can be expensive, as it requires domain knowledge and expert involvement. Semi-supervised learning approaches that can make use of unlabeled data, in addition to small amounts of labeled data, can help reduce the costs associated with labeling. In this context, we focus on the problem of predicting splice sites in a genome using semi-supervised learning approaches. This is a challenging problem, due to the highly imbalanced distribution of the data, i.e., small number of splice sites as compared to the number of non-splice sites. To address this challenge, we propose to use ensembles of semi-supervised classifiers, specifically self-training and co-training classifiers. Our experiments on five highly imbalanced splice site datasets, with positive to negative ratios of 1-to-99, showed that the ensemble-based semi-supervised approaches represent a good choice, even when the amount of labeled data consists of less than 1% of all training data. In particular, we found that ensembles of co-training and self-training classifiers that dynamically balance the set of labeled instances during the semi-supervised iterations show improvements over the corresponding supervised ensemble baselines. In the presence of limited amounts of labeled data, ensemble-based semi-supervised approaches can successfully leverage the unlabeled data to enhance supervised ensembles learned from highly imbalanced data distributions. Given that such distributions are common for many biological sequence classification problems, our work can be seen as a stepping stone towards more sophisticated ensemble-based approaches to biological sequence annotation in a semi-supervised framework.
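A minimal self-training loop in the spirit of the approach above: a simple base learner pseudo-labels the unlabeled points it is confident about and adds them to the labeled pool. The nearest-centroid learner, confidence margin, and two-cluster data are illustrative assumptions; the paper uses ensembles with dynamic class balancing.

```python
import numpy as np

def self_train(Xl, yl, Xu, rounds=5, margin=1.0):
    """Self-training with a nearest-centroid base learner: each round,
    pseudo-label unlabeled points whose distance gap between the two
    class centroids exceeds `margin`, then absorb them."""
    Xl, yl, Xu = Xl.copy(), yl.copy(), Xu.copy()
    for _ in range(rounds):
        if len(Xu) == 0:
            break
        c0, c1 = Xl[yl == 0].mean(0), Xl[yl == 1].mean(0)
        d0 = np.linalg.norm(Xu - c0, axis=1)
        d1 = np.linalg.norm(Xu - c1, axis=1)
        pick = np.abs(d0 - d1) >= margin       # confident points only
        if not pick.any():
            break
        Xl = np.vstack([Xl, Xu[pick]])
        yl = np.concatenate([yl, (d1[pick] < d0[pick]).astype(int)])
        Xu = Xu[~pick]
    return Xl, yl

rng = np.random.default_rng(3)
Xl = np.array([[0.0, 0.0], [4.0, 4.0]])        # one labeled point per class
yl = np.array([0, 1])
Xu = np.vstack([rng.normal(0, 0.5, (30, 2)),   # 60 unlabeled points
                rng.normal(4, 0.5, (30, 2))])
Xl2, yl2 = self_train(Xl, yl, Xu)
print(len(yl2))                                # 2 seeds + 60 pseudo-labels
```

Co-training works the same way but uses two learners trained on different feature views, each labeling points for the other.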
GMDH-Based Semi-Supervised Feature Selection for Electricity Load Classification Forecasting
Directory of Open Access Journals (Sweden)
Lintao Yang
2018-01-01
Full Text Available With the development of smart power grids, communication network technology and sensor technology, there has been an exponential growth in complex electricity load data. Irregular electricity load fluctuations caused by the weather and holiday factors disrupt the daily operation of the power companies. To deal with these challenges, this paper investigates a day-ahead electricity peak load interval forecasting problem. It transforms the conventional continuous forecasting problem into a novel interval forecasting problem, and then further converts the interval forecasting problem into the classification forecasting problem. In addition, an indicator system influencing the electricity load is established from three dimensions, namely the load series, calendar data, and weather data. A semi-supervised feature selection algorithm is proposed to address an electricity load classification forecasting issue based on the group method of data handling (GMDH) technology. The proposed algorithm consists of three main stages: (1) training the basic classifier; (2) selectively marking the most suitable samples from the unclassified label data and adding them to an initial training set; and (3) training the classification models on the final training set and classifying the test samples. An empirical analysis of electricity load datasets from four Chinese cities is conducted. Results show that the proposed model can address the electricity load classification forecasting problem more efficiently and effectively than the FW-Semi FS (forward semi-supervised feature selection) and GMDH-U (GMDH-based semi-supervised feature selection for customer classification) models.
Active learning for semi-supervised clustering based on locally linear propagation reconstruction.
Chang, Chin-Chun; Lin, Po-Yi
2015-03-01
The success of semi-supervised clustering relies on the effectiveness of side information. To get effective side information, a new active learner learning pairwise constraints known as must-link and cannot-link constraints is proposed in this paper. Three novel techniques are developed for learning effective pairwise constraints. The first technique is used to identify samples less important to cluster structures. This technique makes use of a kernel version of locally linear embedding for manifold learning. Samples neither important to locally linear propagation reconstructions of other samples nor on flat patches in the learned manifold are regarded as unimportant samples. The second is a novel criterion for query selection. This criterion considers not only the importance of a sample to expanding the space coverage of the learned samples but also the expected number of queries needed to learn the sample. To facilitate semi-supervised clustering, the third technique yields inferred must-links for passing information about flat patches in the learned manifold to semi-supervised clustering algorithms. Experimental results have shown that the learned pairwise constraints can capture the underlying cluster structures and have proven the feasibility of the proposed approach. Copyright © 2014 Elsevier Ltd. All rights reserved.
Scaling up graph-based semisupervised learning via prototype vector machines.
Zhang, Kai; Lan, Liang; Kwok, James T; Vucetic, Slobodan; Parvin, Bahram
2015-03-01
When the amount of labeled data is limited, semisupervised learning can improve the learner's performance by also using the often easily available unlabeled data. In particular, a popular approach requires the learned function to be smooth on the underlying data manifold. By approximating this manifold as a weighted graph, such graph-based techniques can often achieve state-of-the-art performance. However, their high time and space complexities make them less attractive on large data sets. In this paper, we propose to scale up graph-based semisupervised learning using a set of sparse prototypes derived from the data. These prototypes serve as a small set of data representatives, which can be used to approximate the graph-based regularizer and to control model complexity. Consequently, both training and testing become much more efficient. Moreover, when the Gaussian kernel is used to define the graph affinity, a simple and principled method to select the prototypes can be obtained. Experiments on a number of real-world data sets demonstrate encouraging performance and scaling properties of the proposed approach. It also compares favorably with models learned via l1-regularization at the same level of model sparsity. These results demonstrate the efficacy of the proposed approach in producing highly parsimonious and accurate models for semisupervised learning.
Dimensional reduction of U(1) x SU(2) Chern-Simons bosonization: Application to the t - J model
International Nuclear Information System (INIS)
Marchetti, P.A.
1996-09-01
We perform a dimensional reduction of the U(1) x SU(2) Chern-Simons bosonization and apply it to the t - J model, relevant for high-Tc superconductors. This procedure yields a decomposition of the electron field into a product of two ''semionic'' fields, i.e. fields obeying Abelian braid statistics with statistics parameter θ = 1/4, one carrying the charge and the other the spin degrees of freedom. A mean-field theory is then shown to reproduce correctly the large-distance behaviour of the correlation functions of the 1D t - J model at t >> J. This result shows that to capture the essential physical properties of the model one needs a specific ''semionic'' form of spin-charge separation. (author). 31 refs
International Nuclear Information System (INIS)
Rogers, C; Schief, W K
2011-01-01
A 2+1-dimensional version of a non-isothermal gas dynamic system with origins in the work of Ovsiannikov and Dyson on spinning gas clouds is shown to admit a Hamiltonian reduction which is completely integrable when the adiabatic index γ = 2. This nonlinear dynamical subsystem is obtained via an elliptic vortex ansatz which is intimately related to the construction of a Lax pair in the integrable case. The general solution of the gas dynamic system is derived in terms of Weierstrass (elliptic) functions. The latter derivation makes use of a connection with a stationary nonlinear Schrödinger equation and a Steen–Ermakov–Pinney equation, the superposition principle of which is based on the classical Lamé equation
Directory of Open Access Journals (Sweden)
Tom Cattaert
Full Text Available We propose a novel multifactor dimensionality reduction method for epistasis detection in small or extended pedigrees, FAM-MDR. It combines features of the Genome-wide Rapid Association using Mixed Model And Regression approach (GRAMMAR) with Model-Based MDR (MB-MDR). We focus on continuous traits, although the method is general and can be used for outcomes of any type, including binary and censored traits. When comparing FAM-MDR with Pedigree-based Generalized MDR (PGMDR), which is a generalization of Multifactor Dimensionality Reduction (MDR) to continuous traits and related individuals, FAM-MDR was found to outperform PGMDR in terms of power in most of the considered simulated scenarios. Additional simulations revealed that PGMDR does not appropriately deal with multiple testing and consequently gives rise to overly optimistic results. FAM-MDR adequately deals with multiple testing in epistasis screens and is, in contrast, rather conservative by construction. Furthermore, simulations show that correcting for lower-order (main) effects is of utmost importance when claiming epistasis. As Type 2 Diabetes Mellitus (T2DM) is a complex phenotype likely influenced by gene-gene interactions, we applied FAM-MDR to examine data on glucose area-under-the-curve (GAUC), an endophenotype of T2DM for which multiple independent genetic associations have been observed, in the Amish Family Diabetes Study (AFDS). This application reveals that FAM-MDR makes more efficient use of the available data than PGMDR and can deal with multi-generational pedigrees more easily. In conclusion, we have validated FAM-MDR and compared it to PGMDR, the current state-of-the-art MDR method for family data, using both simulations and a practical dataset. FAM-MDR is found to outperform PGMDR in that it handles the multiple testing issue more correctly, has increased power, and efficiently uses all available information.
Li, Fengwang; Xue, Mianqi; Li, Jiezhen; Ma, Xinlei; Chen, Lu; Zhang, Xueji; MacFarlane, Douglas R; Zhang, Jie
2017-11-13
Two-dimensional (2D) materials are known to be useful in catalysis. Engineering 3D bulk materials into the 2D form can enhance the exposure of the active edge sites, which are believed to be the origin of the high catalytic activity. Reported herein is the production of 2D "few-layer" antimony (Sb) nanosheets by cathodic exfoliation. Application of this 2D engineering method turns Sb, an inactive material for CO2 reduction in its bulk form, into an active 2D electrocatalyst for reduction of CO2 to formate with high efficiency. The high activity is attributed to the exposure of a large number of catalytically active edge sites. Moreover, this cathodic exfoliation process can be coupled with the anodic exfoliation of graphite in a single-compartment cell for in situ production of a few-layer Sb nanosheets and graphene composite. The observed increased activity of this composite is attributed to the strong electronic interaction between graphene and Sb. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Zhang, Lianbin
2012-01-01
In this study, three-dimensional (3D) graphene assemblies are prepared from graphene oxide (GO) by a facile in situ reduction-assembly method, using a novel, low-cost, and environment-friendly reducing medium which is a combination of oxalic acid (OA) and sodium iodide (NaI). It is demonstrated that the combination of a reducing acid, OA, and NaI is indispensable for effective reduction of GO in the current study, and this unique combination (1) allows for tunable control over the volume of the thus-prepared graphene assemblies and (2) enables 3D graphene assemblies to be prepared from GO suspensions with a wide range of concentrations (0.1 to 4.5 mg mL-1). To the best of our knowledge, the GO concentration of 0.1 mg mL-1 is the lowest GO concentration ever reported for preparation of 3D graphene assemblies. The thus-prepared 3D graphene assemblies exhibit low density, highly porous structures, and electrically conducting properties. As a proof of concept, we show that by infiltrating a responsive polymer, polydimethylsiloxane (PDMS), into the resulting 3D conducting network of graphene, a conducting composite is obtained, which can be used as a sensing device for differentiating organic solvents with different polarity. © 2012 The Royal Society of Chemistry.
International Nuclear Information System (INIS)
Shao, Zhen; Yang, Shan-Lin; Gao, Fei
2014-01-01
Highlights: • A new stationary time series smoothing-based semiparametric model is established. • A novel semiparametric additive model based on piecewise smoothing is proposed. • We model the uncertainty of the data distribution for mid-term electricity forecasting. • We provide efficient long-horizon simulation and extraction for external variables. • We provide stable and accurate density predictions for mid-term electricity demand. - Abstract: Accurate mid-term electricity demand forecasting is critical for efficient electric planning, budgeting, and operating decisions. Mid-term electricity demand forecasting is notoriously complicated, since the demand is subject to a range of external drivers, such as climate change and economic development, which exhibit complex monthly, seasonal, and annual variations. Conventional models are based on the assumption that the original data is stable and normally distributed, an assumption that generally fails to explain the actual demand pattern. This paper proposes a new semiparametric additive model that, in addition to considering the uncertainty of the data distribution, includes practical discussions covering the applications of the external variables. To effectively detach the multi-dimensional volatility of mid-term demand, a novel piecewise smoothing method which allows reduction of the data dimensionality is developed. In addition, a semiparametric procedure that makes use of a bootstrap algorithm for density forecasting and model estimation is presented. Two typical cases in China are presented to verify the effectiveness of the proposed methodology. The results suggest that both meteorological and economic variables play a critical role in mid-term electricity consumption prediction in China, while the extracted economic factor is adequate to reveal the potentially complex relationship between electricity consumption and economic fluctuation. Overall, the proposed model can be easily applied to mid-term demand forecasting, and
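The conversion of continuous peak-load forecasting into interval classification, as described in the GMDH abstract above, amounts to a binning step. The load values and interval edges below are hypothetical, chosen only to show the transformation.

```python
import numpy as np

def to_interval_classes(load, edges):
    """Map continuous peak-load values to interval class labels,
    turning the forecasting task into a classification task."""
    return np.digitize(load, edges)

load = np.array([31.0, 48.5, 55.2, 40.1, 62.3])   # hypothetical daily peaks
edges = np.array([35.0, 45.0, 55.0])              # hypothetical interval bounds
classes = to_interval_classes(load, edges)
print(classes.tolist())  # [0, 2, 3, 1, 3]
```

A classifier then predicts the interval index for the next day, which is often more robust to irregular fluctuations than predicting the exact load value.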
DEFF Research Database (Denmark)
Eckardt, Henrik; Lind, Marianne
2015-01-01
BACKGROUND: Operative treatment of displaced calcaneal fractures should restore joint congruence, but conventional fluoroscopy is unable to fully visualize the subtalar joint. We questioned whether intraoperative 3-dimensional (3D) imaging would aid in the reduction of calcaneal fractures, resulting in improved articular congruence and implant positioning. METHOD: Sixty-two displaced calcaneal fractures were operated on using standard fluoroscopic views. When the surgeon had achieved a satisfactory reduction, an intraoperative 3D scan was conducted, and malreductions or implant imperfections were …
Can Semi-Supervised Learning Explain Incorrect Beliefs about Categories?
Kalish, Charles W.; Rogers, Timothy T.; Lang, Jonathan; Zhu, Xiaojin
2011-01-01
Three experiments with 88 college-aged participants explored how unlabeled experiences--learning episodes in which people encounter objects without information about their category membership--influence beliefs about category structure. Participants performed a simple one-dimensional categorization task in a brief supervised learning phase, then…
Park, W S; Kim, K D; Shin, H K; Lee, S H
2007-01-01
Metal artifact still remains one of the main drawbacks in craniofacial three-dimensional computed tomography (3D CT). In this study, we tested the efficacy of additional silicone dental impression material as a "tooth shield" for the reduction of metal artifacts caused by metal restorations and orthodontic appliances. Six phantoms with 4 teeth each were prepared for this in vitro study. Orthodontic brackets, bands, and amalgam restorations were placed in each tooth to reproduce various intraoral conditions. Standardized silicone shields were fabricated and placed around the teeth. CT image acquisition was performed with and without the silicone shields. The maximum value, mean, and standard deviation of Hounsfield units (HU) were compared in the presence and absence of the silicone shields. In every situation, metal artifacts were reduced in quality and quantity when silicone shields were used. Amalgam restorations produced the most serious metal artifacts. Silicone shields made of dental impression material might be an effective way to reduce the metal artifacts caused by dental restorations and orthodontic appliances. This will help obtain better 3D images from 3D CT in the craniofacial area.
Weckwerth, Wolfram
2008-02-01
In recent years, genomics has been extended to functional genomics. Toward the characterization of organisms or species on the genome level, changes on the metabolite and protein level have been shown to be essential to assign functions to genes and to describe the dynamic molecular phenotype. Gas chromatography (GC) and liquid chromatography coupled to mass spectrometry (GC- and LC-MS) are well suited for the fast and comprehensive analysis of ultracomplex metabolite samples. For the integration of metabolite profiles with quantitative protein profiles, a high throughput (HTP) shotgun proteomics approach using LC-MS and label-free quantification of unique proteins in a complex protein digest is described. Multivariate statistics are applied to examine sample pattern recognition based on data-dimensionality reduction and biomarker identification in plant systems biology. The integration of the data reveal multiple correlative biomarkers providing evidence for an increase of information in such holistic approaches. With computational simulation of metabolic networks and experimental measurements, it can be shown that biochemical regulation is reflected by metabolite network dynamics measured in a metabolomics approach. Examples in molecular plant physiology are presented to substantiate the integrative approach.
Semi-Supervised Clustering for High-Dimensional and Sparse Features
Yan, Su
2010-01-01
Clustering is one of the most common data mining tasks, used frequently for data organization and analysis in various application domains. Traditional machine learning approaches to clustering are fully automated and unsupervised where class labels are unknown a priori. In real application domains, however, some "weak" form of side…
A semi-supervised approach using label propagation to support citation screening.
Kontonatsios, Georgios; Brockmeier, Austin J; Przybyła, Piotr; McNaught, John; Mu, Tingting; Goulermas, John Y; Ananiadou, Sophia
2017-08-01
Citation screening, an integral process within systematic reviews that identifies citations relevant to the underlying research question, is a time-consuming and resource-intensive task. During the screening task, analysts manually assign a label to each citation, to designate whether a citation is eligible for inclusion in the review. Recently, several studies have explored the use of active learning in text classification to reduce the human workload involved in the screening task. However, existing approaches require a significant amount of manually labelled citations for the text classification to achieve a robust performance. In this paper, we propose a semi-supervised method that identifies relevant citations as early as possible in the screening process by exploiting the pairwise similarities between labelled and unlabelled citations to improve the classification performance without additional manual labelling effort. Our approach is based on the hypothesis that similar citations share the same label (e.g., if one citation should be included, then other similar citations should also be included). To calculate the similarity between labelled and unlabelled citations we investigate two different feature spaces, namely a bag-of-words and a spectral embedding based on the bag-of-words. The semi-supervised method propagates the classification codes of manually labelled citations to neighbouring unlabelled citations in the feature space. The automatically labelled citations are combined with the manually labelled citations to form an augmented training set. For evaluation purposes, we apply our method to reviews from the clinical and public health domains. The results show that our semi-supervised method with label propagation achieves statistically significant improvements over two state-of-the-art active learning approaches across both clinical and public health reviews. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
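The core label-propagation step described above can be sketched as follows. This is a minimal one-step illustration over a cosine-similarity feature space with made-up toy vectors; the paper's actual bag-of-words and spectral-embedding spaces are not reproduced here.

```python
# Minimal sketch of label propagation: each unlabeled citation receives the
# similarity-weighted majority label of its k most similar labeled neighbours.
# The feature vectors below are hypothetical toy data.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def propagate_labels(labeled, unlabeled, k=2):
    """Assign to each unlabeled vector the similarity-weighted majority
    label of its k most similar labeled neighbours."""
    out = []
    for x in unlabeled:
        neigh = sorted(labeled, key=lambda lv: -cosine(x, lv[0]))[:k]
        votes = {}
        for vec, lab in neigh:
            votes[lab] = votes.get(lab, 0.0) + cosine(x, vec)
        out.append(max(votes, key=votes.get))
    return out

labeled = [([1.0, 0.0, 0.1], "include"), ([0.9, 0.1, 0.0], "include"),
           ([0.0, 1.0, 0.9], "exclude"), ([0.1, 0.9, 1.0], "exclude")]
unlabeled = [[1.0, 0.1, 0.0], [0.0, 0.8, 1.0]]
print(propagate_labels(labeled, unlabeled))  # ['include', 'exclude']
```

In the paper, the propagated labels are then merged with the manually labelled citations to form the augmented training set.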
Optimizing area under the ROC curve using semi-supervised learning.
Wang, Shijun; Li, Diana; Petrick, Nicholas; Sahiner, Berkman; Linguraru, Marius George; Summers, Ronald M
2015-01-01
Receiver operating characteristic (ROC) analysis is a standard methodology to evaluate the performance of a binary classification system. The area under the ROC curve (AUC) is a performance metric that summarizes how well a classifier separates two classes. Traditional AUC optimization techniques are supervised learning methods that utilize only labeled data (i.e., the true class is known for all data) to train the classifiers. In this work, inspired by semi-supervised and transductive learning, we propose two new AUC optimization algorithms hereby referred to as semi-supervised learning receiver operating characteristic (SSLROC) algorithms, which utilize unlabeled test samples in classifier training to maximize AUC. Unlabeled samples are incorporated into the AUC optimization process, and their ranking relationships to labeled positive and negative training samples are considered as optimization constraints. The introduced test samples will cause the learned decision boundary in a multidimensional feature space to adapt not only to the distribution of labeled training data, but also to the distribution of unlabeled test data. We formulate the semi-supervised AUC optimization problem as a semi-definite programming problem based on the margin maximization theory. The proposed methods SSLROC1 (1-norm) and SSLROC2 (2-norm) were evaluated using 34 (determined by power analysis) randomly selected datasets from the University of California, Irvine machine learning repository. Wilcoxon signed rank tests showed that the proposed methods achieved significant improvement compared with state-of-the-art methods. The proposed methods were also applied to a CT colonography dataset for colonic polyp classification and showed promising results.
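The pairwise ranking relationships that SSLROC turns into optimization constraints come from the Wilcoxon-Mann-Whitney view of the AUC, which can be sketched as below. The toy scores are illustrative only; the semi-definite programming formulation itself is not reproduced.

```python
# The AUC equals the fraction of (positive, negative) score pairs that the
# classifier ranks correctly (Wilcoxon-Mann-Whitney statistic), with ties
# counted as half-correct.
def auc(pos_scores, neg_scores):
    correct = sum(1.0 if p > n else 0.5 if p == n else 0.0
                  for p in pos_scores for n in neg_scores)
    return correct / (len(pos_scores) * len(neg_scores))

# 8 of the 9 positive/negative pairs are ranked correctly -> AUC = 8/9
print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))
```

SSLROC extends exactly these pairwise relationships to include unlabeled test samples as additional ranking constraints.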
Davoudi, Alireza; Shiry Ghidary, Saeed; Sadatnejad, Khadijeh
2017-06-01
Objective. In this paper, we propose a nonlinear dimensionality reduction algorithm for the manifold of symmetric positive definite (SPD) matrices that considers the geometry of SPD matrices and provides a low-dimensional representation of the manifold with high class discrimination in a supervised or unsupervised manner. Approach. The proposed algorithm tries to preserve the local structure of the data by preserving distances to local means (DPLM) and also provides an implicit projection matrix. DPLM is linear in terms of the number of training samples. Main results. We performed several experiments on the multi-class dataset IIa from BCI competition IV and two other datasets from BCI competition III, namely datasets IIIa and IVa. The results show that our approach, as a dimensionality reduction technique, leads to superior results in comparison with other competitors in the related literature because of its robustness against outliers and the way it preserves the local geometry of the data. Significance. The experiments confirm that the combination of DPLM with filter geodesic minimum distance to mean as the classifier leads to superior performance compared with the state of the art on brain-computer interface competition IV dataset IIa. The statistical analysis also shows that our dimensionality reduction method performs significantly better than its competitors.
Improving head and body pose estimation through semi-supervised manifold alignment
Heili, Alexandre
2014-10-27
In this paper, we explore the use of a semi-supervised manifold alignment method for domain adaptation in the context of human body and head pose estimation in videos. We build upon an existing state-of-the-art system that leverages external labelled datasets for the body and head features, and the unlabelled test data with weak velocity labels, to perform a coupled estimation of the body and head pose. While this previous approach showed promising results, the learning of the underlying manifold structure of the features in the training and target data, and the need to align them, were not explored, despite the fact that the pose features between two datasets may vary according to the scene, e.g. due to a different camera point of view or perspective. In this paper, we propose to use a semi-supervised manifold alignment method to bring the training and target samples closer within the resulting embedded space. To this end, we consider an adaptation set from the target data and rely on (weak) labels, given for example by the velocity direction whenever it is reliable. These labels, along with the training labels, are used to bias the manifold distance within each manifold and to establish correspondences for alignment.
Multiple-Features-Based Semisupervised Clustering DDoS Detection Method
Directory of Open Access Journals (Sweden)
Yonghao Gu
2017-01-01
A DDoS attack stream converging at the victim host from different agent hosts can become very large, leading to system halt or network congestion. Therefore, it is necessary to propose an effective method to detect DDoS attack behavior in a massive data stream. To address the problems that supervised learning methods require large numbers of labeled data, which are often unavailable, and that the unsupervised k-means algorithm has relatively low detection accuracy and convergence speed, this paper presents a semisupervised clustering detection method using multiple features. In this detection method, we first select three features according to the characteristics of DDoS attacks to form the detection feature vector. Then, a Multiple-Features-Based Constrained-K-Means (MF-CKM) algorithm is proposed based on semisupervised clustering. Finally, using the MIT Laboratory Scenario (DDoS) 1.0 data set, we verify that the proposed method can improve the convergence speed and accuracy of the algorithm while using only a small amount of labeled data.
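The constrained-clustering idea of seeding clusters with the few labeled instances can be sketched as follows. This is a generic seeded k-means illustration with hypothetical 2-D feature vectors, not the exact MF-CKM algorithm.

```python
# Seeded (semi-supervised) k-means sketch: labeled points fix the initial
# centers and always stay in their own class cluster; unlabeled points are
# re-assigned to the nearest center on each iteration.
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def seeded_kmeans(labeled, unlabeled, iters=10):
    classes = sorted({lab for _, lab in labeled})
    centers = {c: mean([v for v, lab in labeled if lab == c]) for c in classes}
    assign = []
    for _ in range(iters):
        # unlabeled points go to the nearest current center
        assign = [min(classes, key=lambda c: dist2(x, centers[c]))
                  for x in unlabeled]
        # centers are refit on labeled members plus assigned unlabeled ones
        for c in classes:
            members = [v for v, lab in labeled if lab == c] + \
                      [x for x, a in zip(unlabeled, assign) if a == c]
            centers[c] = mean(members)
    return assign

labeled = [([0.0, 0.0], "normal"), ([10.0, 10.0], "attack")]
unlabeled = [[0.5, 0.2], [9.5, 10.1], [0.1, 0.4]]
print(seeded_kmeans(labeled, unlabeled))  # ['normal', 'attack', 'normal']
```

In the paper, the feature vectors would be the three DDoS-specific detection features rather than these toy coordinates.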
Visual texture perception via graph-based semi-supervised learning
Zhang, Qin; Dong, Junyu; Zhong, Guoqiang
2018-04-01
Perceptual features, for example direction, contrast and repetitiveness, are important visual factors for humans to perceive a texture. However, quantifying the scales of these perceptual features requires psychophysical experiments, which demand a large amount of human labor and time. This paper focuses on the task of obtaining the perceptual-feature scales of textures from a small number of textures whose perceptual scales were obtained through a rating psychophysical experiment (which we call labeled textures) together with a mass of unlabeled textures. This is a scenario for which semi-supervised learning is naturally suited. It is meaningful for texture perception research and helpful for expanding perceptual texture databases. A graph-based semi-supervised learning method called random multi-graphs (RMG for short) is proposed to deal with this task. We evaluate different kinds of features, including LBP, Gabor, and a kind of unsupervised deep features extracted by a PCA-based deep network. The experimental results show that our method can achieve satisfactory results regardless of the kind of texture features used.
Directory of Open Access Journals (Sweden)
Chunjing Song
2017-11-01
Wireless local area network (WLAN) fingerprint positioning is an indoor localization technique with high accuracy and low hardware requirements. However, collecting received signal strength (RSS) samples for the fingerprint database is time-consuming and labor-intensive, hindering the use of this technique. The popular crowdsourcing sampling technique has been introduced to reduce the workload of sample collection, but it faces two challenges: one is the heterogeneity of devices, which can significantly affect the positioning accuracy; the other is the requirement of users' intervention in traditional crowdsourcing, which reduces the practicality of the system. In response to these challenges, we have proposed a new WLAN indoor positioning strategy, which incorporates a new preprocessing method for RSS samples, an implicit crowdsourcing sampling technique, and a semi-supervised learning algorithm. First, implicit crowdsourcing does not require users' intervention; the acquisition program silently collects unlabeled samples, i.e., RSS samples without information about the position. Second, to cope with the heterogeneity of devices, the preprocessing method maps all the RSS values of samples to a uniform range and discretizes them. Finally, by using a large number of unlabeled samples together with some labeled samples, Co-Forest, the introduced semi-supervised learning algorithm, creates and repeatedly refines a random forest ensemble classifier that performs well for location estimation. The results of experiments conducted in a real indoor environment show that the proposed strategy reduces the demand for large quantities of labeled samples and achieves good positioning accuracy.
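The preprocessing step (mapping each device's RSS values to a uniform range, then discretizing) can be sketched as follows. The bin count and the dBm readings are illustrative assumptions, not values from the paper.

```python
# Sketch of the RSS preprocessing idea: map each device's raw RSS readings
# onto a common [0, 1] range and discretize them into bins, so that
# heterogeneous devices become directly comparable.
def preprocess_rss(samples, n_bins=10):
    lo, hi = min(samples), max(samples)
    span = hi - lo or 1.0
    out = []
    for s in samples:
        u = (s - lo) / span                           # map to uniform [0, 1]
        out.append(min(int(u * n_bins), n_bins - 1))  # discretize into bins
    return out

device_a = [-90.0, -70.0, -50.0]   # dBm readings from one device
device_b = [-85.0, -65.0, -45.0]   # same positions, different hardware
print(preprocess_rss(device_a))    # [0, 5, 9]
print(preprocess_rss(device_b))    # [0, 5, 9]
```

After this normalization, the two devices' readings fall into identical bins even though their raw dBm values differ by a constant offset.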
A multimedia retrieval framework based on semi-supervised ranking and relevance feedback.
Yang, Yi; Nie, Feiping; Xu, Dong; Luo, Jiebo; Zhuang, Yueting; Pan, Yunhe
2012-04-01
We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in the multimedia feature space and the historical RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated their advantages in precision, robustness, scalability, and computational efficiency.
Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.
Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie
2016-07-01
Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.
Hu, Weiming; Gao, Jin; Xing, Junliang; Zhang, Chao; Maybank, Stephen
2017-01-01
An appearance model adaptable to changes in object appearance is critical in visual object tracking. In this paper, we treat an image patch as a second-order tensor which preserves the original image structure. We design two graphs for characterizing the intrinsic local geometrical structure of the tensor samples of the object and the background. Graph embedding is used to reduce the dimensions of the tensors while preserving the structure of the graphs. Then, a discriminant embedding space is constructed. We prove two propositions for finding the transformation matrices which are used to map the original tensor samples to the tensor-based graph embedding space. In order to encode more discriminant information in the embedding space, we propose a transfer-learning-based semi-supervised strategy to iteratively adjust the embedding space, into which discriminative information obtained from earlier times is transferred. We apply the proposed semi-supervised tensor-based graph embedding learning algorithm to visual tracking. The new tracking algorithm captures an object's appearance characteristics during tracking and uses a particle filter to estimate the optimal object state. Experimental results on the CVPR 2013 benchmark dataset demonstrate the effectiveness of the proposed tracking algorithm.
A Novel Classification Algorithm Based on Incremental Semi-Supervised Support Vector Machine.
Directory of Open Access Journals (Sweden)
Fei Gao
For current computational intelligence techniques, a major challenge is how to learn new concepts in a changing environment. Traditional learning schemes cannot adequately address this problem due to the lack of a dynamic data selection mechanism. In this paper, inspired by the human learning process, a novel classification algorithm based on an incremental semi-supervised support vector machine (SVM) is proposed. Through the analysis of the prediction confidence of samples and the data distribution in a changing environment, a "soft-start" approach, a data selection mechanism and a data cleaning mechanism are designed, which complete the construction of our incremental semi-supervised learning system. Notably, with the ingenious design of our proposed algorithm, the computational complexity is reduced effectively. In addition, a detailed analysis is carried out for the possible appearance of new labeled samples in the learning process. The results show that our algorithm does not rely on a model of the sample distribution, has an extremely low rate of introducing wrong semi-labeled samples, and can effectively make use of the unlabeled samples to enrich the knowledge system of the classifier and improve the accuracy rate. Moreover, our method also has outstanding generalization performance and the ability to overcome concept drift in a changing environment.
Accuracy of latent-variable estimation in Bayesian semi-supervised learning.
Yamazaki, Keisuke
2015-09-01
Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than in unsupervised learning, and one concern is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of the estimation of latent variables. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It has been shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified. Copyright © 2015 Elsevier Ltd. All rights reserved.
Jiang, Yizhang; Wu, Dongrui; Deng, Zhaohong; Qian, Pengjiang; Wang, Jun; Wang, Guanjin; Chung, Fu-Lai; Choi, Kup-Sze; Wang, Shitong
2017-12-01
Recognition of epileptic seizures from offline EEG signals is very important in the clinical diagnosis of epilepsy. Compared with manual labeling of EEG signals by doctors, machine learning approaches can be faster and more consistent. However, the classification accuracy is usually not satisfactory for two main reasons: the distributions of the data used for training and testing may differ, and the amount of training data may not be enough. In addition, most machine learning approaches generate black-box models that are difficult to interpret. In this paper, we integrate transductive transfer learning, semi-supervised learning and the TSK fuzzy system to tackle these three problems. More specifically, we use transfer learning to reduce the discrepancy in data distribution between the training and testing data, employ semi-supervised learning to use the unlabeled testing data to remedy the shortage of training data, and adopt the TSK fuzzy system to increase model interpretability. Two learning algorithms are proposed to train the system. Our experimental results show that the proposed approaches can achieve better performance than many state-of-the-art seizure classification algorithms.
A Novel Semi-Supervised Electronic Nose Learning Technique: M-Training
Directory of Open Access Journals (Sweden)
Pengfei Jia
2016-03-01
When an electronic nose (E-nose) is used to distinguish different kinds of gases, the label information of the target gas can be lost due to operator error or other reasons, although this is not expected. Another fact is that the cost of obtaining labeled samples is usually higher than that of unlabeled ones. In most cases, the classification accuracy of an E-nose trained using labeled samples is higher than that of an E-nose trained with unlabeled ones, so gases without label information are usually not used to train an E-nose; however, discarding them wastes resources and can even delay the progress of research. In this work a novel multi-class semi-supervised learning technique called M-training is proposed to train E-noses with both labeled and unlabeled samples. We employ M-training to train an E-nose which is used to distinguish three indoor pollutant gases (benzene, toluene and formaldehyde). The data processing results prove that the classification accuracy of an E-nose trained by semi-supervised techniques (tri-training and M-training) is higher than that of an E-nose trained only with labeled samples, and the performance of M-training is better than that of tri-training because more base classifiers can be employed by M-training.
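The general flavor of exploiting unlabeled samples can be sketched with a simple confidence-thresholded self-training loop. The nearest-centroid base learner and the toy sensor vectors are assumptions; this is not the M-training algorithm itself, which combines multiple base classifiers.

```python
# Self-training sketch: a nearest-centroid classifier repeatedly labels the
# unlabeled sample it is most confident about (largest gap between the best
# and second-best class distances) and retrains on the enlarged labeled set.
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def centroids(data):
    cents = {}
    for vec, lab in data:
        cents.setdefault(lab, []).append(vec)
    return {lab: [sum(col) / len(vs) for col in zip(*vs)]
            for lab, vs in cents.items()}

def self_train(labeled, unlabeled, margin=1.0):
    labeled = list(labeled)
    pool = list(unlabeled)
    while pool:
        cents = centroids(labeled)
        best = None
        for x in pool:
            d = sorted((dist(x, c), lab) for lab, c in cents.items())
            conf = d[1][0] - d[0][0]   # best / second-best distance gap
            if best is None or conf > best[0]:
                best = (conf, x, d[0][1])
        if best[0] < margin:           # stop when confidence is too low
            break
        pool.remove(best[1])
        labeled.append((best[1], best[2]))
    return labeled

labeled = [([0.0, 0.0], "benzene"), ([5.0, 5.0], "toluene")]
unlabeled = [[0.3, 0.1], [4.8, 5.2], [2.5, 2.5]]
result = self_train(labeled, unlabeled)
print([lab for _, lab in result])
```

Note that the ambiguous sample near the decision boundary is deliberately left unlabeled, mirroring the idea that low-confidence predictions should not pollute the training set.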
A semi-supervised learning approach for RNA secondary structure prediction.
Yonemoto, Haruka; Asai, Kiyoshi; Hamada, Michiaki
2015-08-01
RNA secondary structure prediction is a key technology in RNA bioinformatics. Most algorithms for RNA secondary structure prediction use probabilistic models, in which the model parameters are trained with reliable RNA secondary structures. Because of the difficulty of determining RNA secondary structures by experimental procedures, such as NMR or X-ray crystal structural analyses, there are still many RNA sequences that could be useful for training whose secondary structures have not been experimentally determined. In this paper, we introduce a novel semi-supervised learning approach for training parameters in a probabilistic model of RNA secondary structures in which we employ not only RNA sequences with annotated secondary structures but also ones with unknown secondary structures. Our model is based on a hybrid of generative (stochastic context-free grammars) and discriminative models (conditional random fields) that has been successfully applied to natural language processing. Computational experiments indicate that the accuracy of secondary structure prediction is improved by incorporating RNA sequences with unknown secondary structures into training. To our knowledge, this is the first study of a semi-supervised learning approach for RNA secondary structure prediction. This technique will be useful when the number of reliable structures is limited. Copyright © 2015 Elsevier Ltd. All rights reserved.
Semi-supervised drug-protein interaction prediction from heterogeneous biological spaces.
Xia, Zheng; Wu, Ling-Yun; Zhou, Xiaobo; Wong, Stephen T C
2010-09-13
Predicting drug-protein interactions from heterogeneous biological data sources is a key step for in silico drug discovery. The difficulty of this prediction task lies in the rarity of known drug-protein interactions and myriad unknown interactions to be predicted. To meet this challenge, a manifold regularization semi-supervised learning method is presented to tackle this issue by using labeled and unlabeled information which often generates better results than using the labeled data alone. Furthermore, our semi-supervised learning method integrates known drug-protein interaction network information as well as chemical structure and genomic sequence data. Using the proposed method, we predicted certain drug-protein interactions on the enzyme, ion channel, GPCRs, and nuclear receptor data sets. Some of them are confirmed by the latest publicly available drug targets databases such as KEGG. We report encouraging results of using our method for drug-protein interaction network reconstruction which may shed light on the molecular interaction inference and new uses of marketed drugs.
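The manifold-regularization principle used above (labels should vary smoothly over a similarity graph) can be sketched as follows. The toy drug-similarity weights and the Jacobi-style fixed-point update are illustrative assumptions, not the paper's formulation over chemical and genomic kernels.

```python
# Graph-smoothing sketch: spread known interaction labels over a similarity
# graph so that similar drugs receive similar interaction scores.
def smooth_scores(y, W, lam=1.0, iters=100):
    """Approximately minimize
        sum_i (f_i - y_i)^2 + lam * sum_ij W_ij * (f_i - f_j)^2
    by a simple fixed-point (Jacobi-style) iteration."""
    n = len(y)
    f = list(y)
    for _ in range(iters):
        f = [(y[i] + lam * sum(W[i][j] * f[j] for j in range(n)))
             / (1.0 + lam * sum(W[i])) for i in range(n)]
    return f

# Three drugs: 0 and 1 are highly similar, 2 is dissimilar to both;
# only drug 0 has a known interaction (label 1).
W = [[0.0, 0.9, 0.1],
     [0.9, 0.0, 0.1],
     [0.1, 0.1, 0.0]]
y = [1.0, 0.0, 0.0]
f = smooth_scores(y, W)
print(f[1] > f[2])  # the similar drug inherits a higher interaction score
```

The iteration converges because the implied linear system is strictly diagonally dominant; the unlabeled drugs thus receive scores shaped by both the labeled drug and the graph structure.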
Irges, Nikos; Zoupanos, George
2011-01-01
We present an extension of the Standard Model inspired by the E_8 x E_8 Heterotic String. In order to obtain a reasonable effective Lagrangian, we neglect everything other than the ten-dimensional N=1 supersymmetric Yang-Mills sector associated with one of the gauge factors and certain couplings necessary for anomaly cancellation. We consider a compactified space-time M_4 x B_0 / Z_3, where B_0 is the nearly-Kaehler manifold SU(3)/U(1) x U(1) and Z_3 is a freely acting discrete group on B_0. We then dimensionally reduce the E_8 gauge theory on this manifold and employ the Wilson flux mechanism, leading in four dimensions to an SU(3)^3 gauge theory with the spectrum of an N=1 supersymmetric theory. We compute the effective four-dimensional Lagrangian and demonstrate that an extension of the Standard Model is obtained, with interesting features including a conserved baryon number and fixed tree-level Yukawa couplings and scalar potential. The spectrum contains new states such as right-handed neutrinos and heavy ...
Energy Technology Data Exchange (ETDEWEB)
Park, Sang Hyun [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 (United States); Gao, Yaozong, E-mail: yzgao@cs.unc.edu [Department of Computer Science, Department of Radiology, and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 (United States); Shi, Yinghuan, E-mail: syh@nju.edu.cn [State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713 (Korea, Republic of)
2014-11-01
Purpose: Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy of prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to the limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) fast deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation methods. Methods: The authors formulate the editing problem as a semisupervised learning problem which can utilize a priori knowledge of training data and also the valuable information from user interactions. Specifically, from a region of interest near the given user interactions, the appropriate training labels, which are well matched with the user interactions, can be locally searched from a training set. With voting from the selected training labels, both confident prostate and background voxels, as well as unconfident voxels can be estimated. To reflect informative relationship between voxels, location-adaptive features are selected from the confident voxels by using regression forest and Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced into the semisupervised learning algorithm. The labels of unconfident voxels are then predicted by regularizing semisupervised learning algorithm. Results: The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images. The correction was conducted three times with different user interactions performed at different time periods, in order to
Park, Sang Hyun; Gao, Yaozong; Shi, Yinghuan; Shen, Dinggang
2014-11-01
Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy of prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to the limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) fast deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation methods. The authors formulate the editing problem as a semisupervised learning problem which can utilize a priori knowledge of training data and also the valuable information from user interactions. Specifically, from a region of interest near the given user interactions, the appropriate training labels, which are well matched with the user interactions, can be locally searched from a training set. With voting from the selected training labels, both confident prostate and background voxels, as well as unconfident voxels can be estimated. To reflect informative relationship between voxels, location-adaptive features are selected from the confident voxels by using regression forest and Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced into the semisupervised learning algorithm. The labels of unconfident voxels are then predicted by regularizing semisupervised learning algorithm. The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images. The correction was conducted three times with different user interactions performed at different time periods, in order to evaluate both the efficiency
Deep Web Search Interface Identification: A Semi-Supervised Ensemble Approach
Directory of Open Access Journals (Sweden)
Hong Wang
2014-12-01
To surface the Deep Web, one crucial task is to predict whether a given web page has a search interface (a searchable HyperText Markup Language (HTML) form) or not. Previous studies have focused on supervised classification with labeled examples. However, labeled data are scarce, hard to obtain, and require tedious manual work, while unlabeled HTML forms are abundant and easy to collect. In this research, we consider the plausibility of using both labeled and unlabeled data to train better models that identify search interfaces more effectively. We present a semi-supervised co-training ensemble learning approach using both neural networks and decision trees to address the search interface identification problem. We show that the proposed model outperforms previous methods using only labeled data. We also show that adding unlabeled data improves the effectiveness of the proposed model.
Semi-supervised Probabilistic Distance Clustering and the Uncertainty of Classification
Iyigun, Cem; Ben-Israel, Adi
Semi-supervised clustering is an attempt to reconcile clustering (unsupervised learning) and classification (supervised learning, using prior information on the data). These two modes of data analysis are combined in a parameterized model, where the parameter θ ∈ [0, 1] is the weight attributed to the prior information: θ = 0 corresponds to pure clustering, and θ = 1 to pure classification. The results (cluster centers, classification rule) depend on the parameter θ; insensitivity to θ indicates that the prior information agrees with the intrinsic cluster structure and is otherwise redundant. This explains why some data sets (such as the Wisconsin breast cancer data; Merz and Murphy, UCI repository of machine learning databases, University of California, Irvine, CA) give good results for all reasonable classification methods. The uncertainty of classification is represented here by the geometric mean of the membership probabilities, shown to be an entropic distance related to the Kullback-Leibler divergence.
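The role of the weight θ can be illustrated with a toy centroid-based sketch (our own simplification in numpy, not the authors' probabilistic distance model; `theta_centers` and its initialization scheme are our assumptions):

```python
import numpy as np

def theta_centers(X, y, theta, n_iter=20):
    """Toy semi-supervised centroid estimation.

    theta = 0 -> pure k-means-style clustering (labels ignored);
    theta = 1 -> pure supervised class means (prior labels only);
    intermediate theta blends the two center estimates.
    """
    k = len(np.unique(y))
    # supervised estimate: class means from the prior labels
    sup = np.array([X[y == c].mean(axis=0) for c in range(k)])
    # unsupervised estimate: Lloyd iterations, initialized at the
    # class means so cluster-to-class correspondence stays fixed
    centers = sup.copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for c in range(k):
            if (assign == c).any():
                centers[c] = X[assign == c].mean(axis=0)
    # blend: weight theta on the prior-label information
    return (1 - theta) * centers + theta * sup
```

Initializing the Lloyd iterations at the class means keeps the cluster-to-class correspondence fixed, so the two center estimates can be blended directly; at θ = 1 the prior labels fully determine the centers, at θ = 0 they are ignored.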
The helpfulness of category labels in semi-supervised learning depends on category structure.
Vong, Wai Keen; Navarro, Daniel J; Perfors, Amy
2016-02-01
The study of semi-supervised category learning has generally focused on how additional unlabeled information with given labeled information might benefit category learning. The literature is also somewhat contradictory, sometimes appearing to show a benefit to unlabeled information and sometimes not. In this paper, we frame the problem differently, focusing on when labels might be helpful to a learner who has access to lots of unlabeled information. Using an unconstrained free-sorting categorization experiment, we show that labels are useful to participants only when the category structure is ambiguous and that people's responses are driven by the specific set of labels they see. We present an extension of Anderson's Rational Model of Categorization that captures this effect.
Semi-Supervised Half-Quadratic Nonnegative Matrix Factorization for Face Recognition
Alghamdi, Masheal M.
2014-05-01
Face recognition is a challenging problem in computer vision. Difficulties such as slight differences between similar faces of different people, changes in facial expression, light and illumination conditions, and pose variations add extra complications to face recognition research. Many algorithms have been devoted to solving the face recognition problem, among which the family of nonnegative matrix factorization (NMF) algorithms has been widely used as a compact data representation method, and different versions of NMF have been proposed. Wang et al. proposed the graph-based semi-supervised nonnegative learning (S2N2L) algorithm, which uses labeled data to construct intrinsic and penalty graphs that enforce separability of the labeled data, leading to greater discriminating power. Moreover, the geometrical structure of labeled and unlabeled data is preserved through the smoothness assumption, by creating a similarity graph that conserves the neighboring information for all labeled and unlabeled data. However, S2N2L is sensitive to light changes, illumination, and partial occlusion. In this thesis, we propose a Semi-Supervised Half-Quadratic NMF (SSHQNMF) algorithm that combines the benefits of S2N2L and the robust NMF by half-quadratic minimization (HQNMF) algorithm. Our algorithm improves upon S2N2L by replacing the Frobenius norm with a robust M-estimator loss function. A multiplicative update solution for our SSHQNMF algorithm is derived using half-quadratic (HQ) theory. Extensive experiments on the ORL, Yale-A, and a subset of the PIE data sets, covering nine M-estimator loss functions for both the SSHQNMF and HQNMF algorithms, are investigated and compared with several state-of-the-art supervised and unsupervised algorithms, along with the original S2N2L algorithm, in the context of classification, clustering, and robustness against partial occlusion. The proposed algorithm outperformed the other algorithms. Furthermore, SSHQNMF with Maximum Correntropy
Semi-supervised tracking of extreme weather events in global spatio-temporal climate datasets
Kim, S. K.; Prabhat, M.; Williams, D. N.
2017-12-01
Deep neural networks have been successfully applied to detecting extreme weather events in large-scale climate datasets, attaining performance that overshadows all previous hand-crafted methods. Recent work has shown that a multichannel spatiotemporal encoder-decoder CNN architecture can localize events with semi-supervised bounding boxes. Motivated by this work, we propose a new learning method based on Variational Auto-Encoders (VAE) and Long Short-Term Memory (LSTM) networks to track extreme weather events in spatio-temporal datasets. We treat spatio-temporal object tracking as learning the probabilistic distribution of the continuous latent features of an auto-encoder using stochastic variational inference. For this, we assume that our datasets are i.i.d. and that the latent features can be modeled by a Gaussian distribution. In the proposed method, we first train a VAE to generate an approximate posterior given multichannel climate input containing an extreme climate event at a fixed time. Then, we predict the bounding box, location, and class of extreme climate events using convolutional layers, given an input concatenating three features: the embedding, the sampled mean, and the standard deviation. Lastly, we train an LSTM on the concatenated input to learn the temporal structure of the dataset by recurrently feeding the output back into the next time-step's VAE input. Our contribution is two-fold. First, we show the first semi-supervised end-to-end architecture based on a VAE for tracking extreme weather events, which can be applied to massive unlabeled climate datasets. Second, the temporal movement of events is considered for bounding-box prediction using the LSTM, which can improve localization accuracy. To our knowledge, this technique has not been explored in either the climate community or the machine learning community.
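The sampling step this abstract leans on, drawing latent features from a Gaussian approximate posterior, is the standard VAE reparameterization trick; a minimal numpy sketch (function names and the concatenation layout are our illustration, not the authors' architecture):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, diag(exp(log_var))) via z = mu + sigma * eps,
    eps ~ N(0, I).  The noise is drawn outside the deterministic path,
    which is what lets gradients flow through mu and log_var in a VAE."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decoder_input(embedding, mu, log_var):
    """Concatenate the three features the abstract describes:
    an embedding, the posterior mean, and its standard deviation."""
    return np.concatenate([embedding, mu, np.exp(0.5 * log_var)], axis=-1)
```

With `log_var = 0` the posterior is a unit-variance Gaussian centered at `mu`, so averaging many samples recovers `mu`.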
Semi-supervised weighted kernel clustering based on gravitational search for fault diagnosis.
Li, Chaoshun; Zhou, Jianzhong
2014-09-01
Supervised learning methods, like the support vector machine (SVM), have been widely applied to diagnosing known faults; however, such methods fail to work correctly when a new or unknown fault occurs. Traditional unsupervised kernel clustering can be used for unknown fault diagnosis, but it cannot make use of historical classification information to improve diagnosis accuracy. In this paper, a semi-supervised kernel clustering model is designed to diagnose known and unknown faults. First, a novel semi-supervised weighted kernel clustering algorithm based on gravitational search (SWKC-GS) is proposed for clustering a dataset composed of labeled and unlabeled fault samples. The clustering model of SWKC-GS is defined based on the misclassification rate of the labeled samples and a fuzzy clustering index on the whole dataset. The gravitational search algorithm (GSA) is used to solve the clustering model, with the cluster centers, feature weights, and kernel function parameter selected as optimization variables. New fault samples are then identified and diagnosed by calculating the weighted kernel distance between them and the fault cluster centers. If the fault samples are unknown, they are added to the historical dataset, and SWKC-GS is used to partition the mixed dataset and update the clustering results for diagnosing the new fault. In experiments, the proposed method was applied to fault diagnosis for rotary bearings; SWKC-GS was compared not only with traditional clustering methods but also with SVM and neural networks for known fault diagnosis, and the proposed method was also applied to unknown fault diagnosis. The results show the effectiveness of the proposed method in achieving the expected diagnosis accuracy for both known and unknown faults of rotary bearings.
A semi-supervised learning framework for biomedical event extraction based on hidden topics.
Zhou, Deyu; Zhong, Dayou
2015-05-01
Scientists have devoted decades of effort to understanding the interactions between proteins or RNA production. This information might empower current knowledge of drug reactions or of the development of certain diseases. Nevertheless, due to its lack of explicit structure, the literature in the life sciences, one of the most important sources of this information, prevents computer-based systems from accessing it. Therefore, biomedical event extraction, which automatically acquires knowledge of molecular events from research articles, has recently attracted community-wide efforts. Most approaches are based on statistical models, requiring large-scale annotated corpora to precisely estimate the models' parameters; however, such corpora are usually difficult to obtain in practice. Employing un-annotated data through semi-supervised learning for biomedical event extraction is therefore a feasible solution and attracts growing interest. In this paper, a semi-supervised learning framework based on hidden topics for biomedical event extraction is presented. In this framework, sentences in the un-annotated corpus are elaborately and automatically assigned event annotations based on their distances to the sentences in the annotated corpus. More specifically, not only the structures of the sentences but also the hidden topics embedded in them are used to describe the distance. The sentences and newly assigned event annotations, together with the annotated corpus, are employed for training. Experiments were conducted on the multi-level event extraction corpus, a gold-standard corpus. Experimental results show that the proposed framework achieves more than a 2.2% improvement in F-score on biomedical event extraction compared to the state-of-the-art approach. The results suggest that by incorporating un-annotated data, the proposed framework indeed improves the performance of the state-of-the-art event extraction system and the similarity between sentences might be precisely
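The distance-based annotation transfer at the heart of this framework can be sketched generically as nearest-neighbor pseudo-labeling (a numpy toy with Euclidean distance; the paper's actual distance combines sentence structure with hidden topics, which is not modeled here):

```python
import numpy as np

def pseudo_label(X_lab, y_lab, X_unlab, max_dist):
    """Assign each unlabeled vector the label of its nearest labeled
    neighbor, but only when that distance is below max_dist; samples
    too far from any labeled point are left out (label -1) rather
    than guessed."""
    d = ((X_unlab[:, None, :] - X_lab[None, :, :]) ** 2).sum(-1) ** 0.5
    nearest = d.argmin(axis=1)
    labels = y_lab[nearest].copy()
    labels[d.min(axis=1) > max_dist] = -1
    return labels
```

The pseudo-labeled samples, together with the originally annotated ones, would then form the enlarged training set, mirroring the framework's use of distance-screened un-annotated sentences.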
Shi, Chengdi; Cai, Leyi; Hu, Wei; Sun, Junying
2017-09-19
Objective: To study the method of X-ray diagnosis of unstable pelvic fractures displaced in three-dimensional (3D) space and its clinical application in closed reduction. Five models of hemipelvic displacement were made in an adult pelvic specimen. Anteroposterior radiographs of the pelvis were analyzed in PACS. The method of X-ray diagnosis was applied in closed reductions. From February 2012 to June 2016, 23 patients (15 men, 8 women; mean age, 43.4 years) with unstable pelvic fractures were included. All patients were treated by closed reduction and percutaneous cannulated screw fixation of the pelvic ring. According to Tile's classification, the patients were classified as type B1 in 7 cases, B2 in 3, B3 in 3, C1 in 5, C2 in 3, and C3 in 2. The operation time and intraoperative blood loss were recorded. Postoperative images were evaluated by the Matta radiographic standards. The five models of displacement were made successfully, and their X-ray features were analyzed. For the clinical patients, the average operation time was 44.8 min (range, 20-90 min) and the average intraoperative blood loss was 35.7 mL (range, 20-100 mL). According to the Matta standards, 7 cases were excellent, 12 were good, and 4 were fair. The displacements of unstable pelvic fractures in 3D space can be diagnosed rapidly by X-ray analysis to guide closed reduction, with a satisfactory clinical outcome.
International Nuclear Information System (INIS)
Cardoso, W. B.; Avelar, A. T.; Bazeia, D.
2011-01-01
We deal with the three-dimensional Gross-Pitaevskii equation, which is used to describe a cloud of dilute bosonic atoms that interact under competing two- and three-body scattering potentials. We study the case where the cloud of atoms is strongly confined in two spatial dimensions, allowing us to build a one-dimensional nonlinear equation controlled by the nonlinearities and the confining potentials that trap the system along the longitudinal coordinate. We focus attention on specific limits dictated by the cubic and quintic coefficients, and we implement numerical simulations to help us quantify the validity of the procedure.
Directory of Open Access Journals (Sweden)
Min Liu
2018-03-01
Sidelobe reduction is a primary task for synthetic aperture radar (SAR) images. Various methods have been proposed for broadside SAR that can suppress the sidelobes effectively while maintaining high image resolution. Alternatively, squint SAR, especially highly squint SAR, has emerged as an important tool that provides more mobility and flexibility, and it has become a focus of recent research. One research challenge for squint SAR is resolving the severe range-azimuth coupling of the echo signals. Unlike in broadside SAR images, the range and azimuth sidelobes of squint SAR images no longer lie on the principal axes with high probability. Thus, spatially variant apodization (SVA) filters can hardly capture all of the sidelobe information, and hence the sidelobe reduction is not optimal. In this paper, we present an improved algorithm called double spatially variant apodization (D-SVA) for better sidelobe suppression. Satisfactory sidelobe reduction results are achieved with the proposed algorithm, as shown by comparing the squint SAR images to broadside SAR images. Simulation results also demonstrate the reliability and efficiency of the proposed method.
Semi-supervised prediction of SH2-peptide interactions from imbalanced high-throughput data.
Kundu, Kousik; Costa, Fabrizio; Huber, Michael; Reth, Michael; Backofen, Rolf
2013-01-01
Src homology 2 (SH2) domains are the largest family of peptide-recognition modules (PRMs) that bind to phosphotyrosine-containing peptides. Knowledge about the binding partners of SH2 domains is key to a deeper understanding of different cellular processes. Given the high binding specificity of SH2, in-silico ligand peptide prediction is of great interest. Currently, however, only a few approaches have been published for the prediction of SH2-peptide interactions. Their main shortcomings range from limited coverage to restrictive modeling assumptions (they are mainly based on position specific scoring matrices and do not take into consideration complex amino acid inter-dependencies) and high computational complexity. We propose a simple yet effective machine learning approach for a large set of known human SH2 domains. We used comprehensive data from micro-array and peptide-array experiments on 51 human SH2 domains. In order to deal with the high data imbalance and the high signal-to-noise ratio, we cast the problem in a semi-supervised setting. We report competitive predictive performance w.r.t. the state-of-the-art. Specifically, we obtain 0.83 AUC ROC and 0.93 AUC PR, in comparison to the 0.71 AUC ROC and 0.87 AUC PR previously achieved by the position specific scoring matrix (PSSM) based SMALI approach. Our work provides three main contributions. First, we showed that better models can be obtained when the information on non-interacting peptides (negative examples) is also used. Second, we improve performance by considering high-order correlations between the ligand positions, employing regularization techniques to effectively avoid overfitting. Third, we developed an approach to tackle the data imbalance problem using a semi-supervised strategy. Finally, we performed a genome-wide prediction of human SH2-peptide binding, uncovering several findings of biological relevance. We make our models and genome-wide predictions, for all the 51 SH2
Computerized breast cancer analysis system using three stage semi-supervised learning method.
Sun, Wenqing; Tseng, Tzu-Liang Bill; Zhang, Jianying; Qian, Wei
2016-10-01
A large amount of labeled medical image data is usually required to train a well-performing computer-aided detection (CAD) system. But the process of data labeling is time consuming, and potential ethical and logistical problems may also present complications. As a result, incorporating unlabeled data into a CAD system can be a feasible way to combat these obstacles. In this study we developed a three-stage semi-supervised learning (SSL) scheme that combines a small amount of labeled data with a larger amount of unlabeled data. The scheme modified our existing CAD system through the following three stages: data weighing, feature selection, and a newly proposed dividing co-training data labeling algorithm. Global density asymmetry features were added to the feature pool to reduce the false positive rate. Area under the curve (AUC) and accuracy were computed using 10-fold cross-validation to evaluate the performance of our CAD system. The image dataset includes mammograms from 400 women who underwent routine screening examinations; each pair contains either two cranio-caudal (CC) or two mediolateral-oblique (MLO) view mammograms from the right and left breasts. From these mammograms, 512 regions were extracted and used in this study; among them, 90 regions were treated as labeled while the rest were treated as unlabeled. Using our proposed scheme, the highest AUC observed in our research was 0.841, obtained with the 90 labeled data and all of the unlabeled data; this was 7.4% higher than using labeled data only. With an increasing amount of labeled data, the AUC difference between using mixed data and using labeled data only reached its peak when the amount of labeled data was around 60. This study demonstrated that our proposed three-stage semi-supervised learning can improve CAD performance by incorporating unlabeled data. Using unlabeled data is promising in computerized cancer research and may have a significant impact on future CAD systems
SSC-EKE: Semi-Supervised Classification with Extensive Knowledge Exploitation.
Qian, Pengjiang; Xi, Chen; Xu, Min; Jiang, Yizhang; Su, Kuan-Hao; Wang, Shitong; Muzic, Raymond F
2018-01-01
We introduce a new semi-supervised classification method that extensively exploits knowledge. The method has three steps. First, the manifold regularization mechanism, adapted from the Laplacian support vector machine (LapSVM), is adopted to mine the manifold structure embedded in all training data, especially in the numerous label-unknown data. Meanwhile, by converting the labels into pairwise constraints, the pairwise constraint regularization formula (PCRF) is designed to compensate for the few but valuable labelled data. Second, by further combining the PCRF with the manifold regularization, the precise manifold and pairwise constraint jointly regularized formula (MPCJRF) is achieved. Third, by incorporating the MPCJRF into the framework of the conventional SVM, our approach, referred to as semi-supervised classification with extensive knowledge exploitation (SSC-EKE), is developed. The significance of our research is fourfold: 1) The MPCJRF is an underlying adjustment, with respect to the pairwise constraints, to the graph Laplacian used to approximate the potential data manifold. This adjustment plays a correction role, as an unbiased estimate of the data manifold is difficult to obtain, whereas the pairwise constraints, converted from the given labels, have an overall high confidence level. 2) By transforming the values of the two terms in the MPCJRF so that they have the same range, with a trade-off factor varying within the interval [0, 1), the appropriate impact of the pairwise constraints on the graph Laplacian can be determined self-adaptively. 3) The implication of extensive knowledge exploitation is embodied in SSC-EKE: the labelled examples are used not only to control the empirical risk but also to constitute the MPCJRF, and all data, both labelled and unlabelled, are recruited for model smoothness and manifold regularization. 4) The complete framework of SSC-EKE organically incorporates multiple
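The graph-Laplacian machinery that manifold regularization builds on can be illustrated with plain label propagation over a k-NN graph (a numpy sketch of the general mechanism only, not SSC-EKE's MPCJRF; all parameter names are ours):

```python
import numpy as np

def propagate_labels(X, y, n_classes, k=3, alpha=0.9, n_iter=50):
    """Spread labels over a symmetrized k-NN graph.

    y uses -1 for unlabeled points.  Each iteration mixes a point's
    neighbors' label distribution (weight alpha) with its own clamped
    seed distribution (weight 1 - alpha), so labels diffuse along the
    data manifold encoded by the graph.
    """
    n = len(X)
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d[i])[:k]] = 1.0
    W = np.maximum(W, W.T)                     # symmetrize adjacency
    P = W / W.sum(axis=1, keepdims=True)       # row-stochastic transition
    seed = np.zeros((n, n_classes))
    seed[y >= 0, y[y >= 0]] = 1.0              # clamp known labels
    F = seed.copy()
    for _ in range(n_iter):
        F = alpha * (P @ F) + (1 - alpha) * seed
    return F.argmax(axis=1)
```

With two well-separated clusters and one seed label in each, the propagation assigns every point its cluster's seed label, which is the intuition behind using the graph Laplacian (here implicitly, via the normalized adjacency) as a smoothness regularizer.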
International Nuclear Information System (INIS)
Zhao, Lei; Li, Yuzhao; Zou, Rui; He, Bin; Zhu, Xiang; Liu, Yong; Wang, Junsong; Zhu, Yongguan
2013-01-01
Lake Yilong in Southwestern China has been under serious eutrophication threat during the past decades; however, the lake water remained clear until a sudden sharp increase in chlorophyll a (Chl a) and turbidity in 2009, without apparent change in external loading levels. To investigate the causes and examine the underlying mechanism, a three-dimensional hydrodynamic and water quality model was developed, simulating the flow circulation, pollutant fate and transport, and the interactions between nutrients, phytoplankton, and macrophytes. The calibrated and validated model was used to run three sets of scenarios for understanding the water quality responses to various load reduction intensities and ecological restoration measures. The results showed that (a) even if the nutrient loads are reduced by as much as 77%, the Chl a concentration decreases only by 50%; and (b) aquatic vegetation interacts strongly with phytoplankton, therefore requiring combined watershed and in-lake management for lake restoration. -- Highlights: ► We quantitatively investigated the non-linear lake responses to load reduction. ► The aquatic ecological condition had a great impact on algal blooms. ► Water quality improvement alone cannot ensure aquatic ecology restoration. -- The lake water quality responds to watershed load reduction in a nonlinear way, which requires combined watershed and in-lake management for lake restoration
International Nuclear Information System (INIS)
Yang Guimei; Chen Xing; Li Jie; Guo Zheng; Liu Jinhuai; Huang Xingjiu
2011-01-01
Highlights: → We synthesize Pd nanostructures using a dynamic bubble template. → We obtain Pd nanobuds and Pd nanodendrites by changing the reaction precursor. → We obtain Pd macroelectrode voltammetric behavior using a small amount of Pd material. → We demonstrate an ECE process. → O2 reduction at the Pd nanostructure/GCE is a 2-step, 4-electron process. - Abstract: Three-dimensional (3D) palladium (Pd) nanostructures (that is, nano-buds or nano-dendrites) are fabricated by bubble-dynamic-templated deposition of Pd onto a glassy carbon electrode (GCE). The morphology can be tailored by changing the precursor concentration and reaction time. Scanning electron microscopy images reveal that the nano-buds or nano-dendrites consist of nanoparticles 40-70 nm in diameter. The electrochemical reduction of oxygen at such 3D nanostructure electrodes in aqueous solution is reported. Data were collected using cyclic voltammetry. We demonstrate the Pd macroelectrode behavior of the Pd nanostructure-modified electrode by exploiting the diffusion model of macro-, micro-, and nano-architectures. In contrast to the bare GCE, a significant positive shift and splitting of the oxygen reduction peak (vs Ag/AgCl/saturated KCl) was observed at the Pd nanostructure-modified GCE.
Target discrimination method for SAR images based on semisupervised co-training
Wang, Yan; Du, Lan; Dai, Hui
2018-01-01
Synthetic aperture radar (SAR) target discrimination is usually performed in a supervised manner. However, supervised methods for SAR target discrimination may need many labeled training samples, whose acquisition is costly, time consuming, and sometimes impossible. This paper proposes an SAR target discrimination method based on semisupervised co-training, which utilizes a limited number of labeled samples and an abundant number of unlabeled samples. First, Lincoln features, widely used in SAR target discrimination, are extracted from the training samples and partitioned into two sets according to their physical meanings. Second, two support vector machine classifiers are iteratively co-trained with the two extracted feature sets based on the co-training algorithm. Finally, the trained classifiers are used to classify the test data. The experimental results on real SAR image data not only validate the effectiveness of the proposed method compared with traditional supervised methods, but also demonstrate the superiority of co-training over self-training, which uses only one feature set.
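Generic co-training of the kind described here can be sketched as follows (a numpy toy that swaps the paper's SVMs for nearest-centroid base learners; the two-view split and the confidence-based label exchange are the essential ingredients):

```python
import numpy as np

def centroid_fit(X, y):
    # class centroids for binary labels 0/1; assumes both classes
    # are present in the labeled pool
    return np.array([X[y == c].mean(axis=0) for c in range(2)])

def centroid_conf(X, centers):
    """Predicted label plus a confidence score: the margin between
    the distances to the two class centroids."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) ** 0.5
    return d.argmin(axis=1), np.abs(d[:, 0] - d[:, 1])

def co_train(V1, V2, y, n_rounds=5, per_round=1):
    """V1, V2: the two feature views of all samples.
    y: labels with -1 for unlabeled.  Each round, each view's model
    labels its most confident unlabeled samples for the shared pool."""
    y = y.copy()
    for _ in range(n_rounds):
        for V in (V1, V2):
            unlab = np.where(y < 0)[0]
            if len(unlab) == 0:
                return y
            centers = centroid_fit(V[y >= 0], y[y >= 0])
            pred, conf = centroid_conf(V[unlab], centers)
            order = np.argsort(-conf)[:per_round]
            y[unlab[order]] = pred[order]
    return y
```

Each view teaches the other: a sample that is easy to classify in view 1 becomes a new labeled example that refines the view-2 model, which is exactly the advantage over single-view self-training noted in the abstract.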
Semi-supervised vibration-based classification and condition monitoring of compressors
Potočnik, Primož; Govekar, Edvard
2017-09-01
Semi-supervised vibration-based classification and condition monitoring of the reciprocating compressors installed in refrigeration appliances is proposed in this paper. The method addresses the problem of industrial condition monitoring where prior class definitions are often not available or difficult to obtain from local experts. The proposed method combines feature extraction, principal component analysis, and statistical analysis for the extraction of initial class representatives, and compares the capability of various classification methods, including discriminant analysis (DA), neural networks (NN), support vector machines (SVM), and extreme learning machines (ELM). The use of the method is demonstrated on a case study which was based on industrially acquired vibration measurements of reciprocating compressors during the production of refrigeration appliances. The paper presents a comparative qualitative analysis of the applied classifiers, confirming the good performance of several nonlinear classifiers. If the model parameters are properly selected, then very good classification performance can be obtained from NN trained by Bayesian regularization, SVM and ELM classifiers. The method can be effectively applied for the industrial condition monitoring of compressors.
Porosity estimation by semi-supervised learning with sparsely available labeled samples
Lima, Luiz Alberto; Görnitz, Nico; Varella, Luiz Eduardo; Vellasco, Marley; Müller, Klaus-Robert; Nakajima, Shinichi
2017-09-01
This paper addresses the porosity estimation problem from seismic impedance volumes and porosity samples located in a small group of exploratory wells. Regression methods, trained on the impedance as inputs and the porosity as output labels, generally suffer from the extremely expensive (and hence sparsely available) porosity samples. To make optimal use of the valuable porosity data, a semi-supervised machine learning method, Transductive Conditional Random Field Regression (TCRFR), was proposed and showed good performance (Görnitz et al., 2017). TCRFR, however, still requires more labeled data than are usually available, which creates a gap when applying the method to the porosity estimation problem in realistic situations. In this paper, we aim to fill this gap by introducing two graph-based preprocessing techniques, which adapt the original TCRFR to extremely weakly supervised scenarios. Our new method outperforms previous automatic estimation methods on synthetic data and provides a result comparable to the manual, labor-intensive, time-consuming geostatistics approach on real data, proving its potential as a practical industrial tool.
Directory of Open Access Journals (Sweden)
Bin Hou
2016-08-01
Characterizing up-to-date information about the Earth's surface is an important application, providing insights for urban planning, resource monitoring, and environmental studies. A large number of change detection (CD) methods have been developed to address this task using remote sensing (RS) images. The advent of high resolution (HR) remote sensing images further challenges traditional CD methods and creates opportunities for object-based CD methods. While several kinds of geospatial objects can be recognized, this manuscript mainly focuses on buildings. Specifically, we propose a novel automatic approach that combines pixel-based strategies with object-based ones for detecting building changes in HR remote sensing images. A multiresolution contextual morphological transformation called extended morphological attribute profiles (EMAPs) allows the extraction of geometrical features related to the structures within the scene at different scales. Pixel-based post-classification is executed on EMAPs using hierarchical fuzzy clustering. Subsequently, hierarchical fuzzy frequency vector histograms are formed based on the image objects acquired by simple linear iterative clustering (SLIC) segmentation. Then, saliency and the morphological building index (MBI), extracted on difference images, are used to generate a pseudo training set. Ultimately, object-based semi-supervised classification is implemented on this training set by applying random forest (RF). Most of the important changes were detected by the proposed method in our experiments. Effectiveness was checked using both visual and numerical evaluation.
Semi-supervised learning based probabilistic latent semantic analysis for automatic image annotation
Institute of Scientific and Technical Information of China (English)
Tian Dongping
2017-01-01
In recent years, the multimedia annotation problem has attracted significant research attention in the multimedia and computer vision areas, especially automatic image annotation, whose purpose is to provide an efficient and effective search environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Then, because different image features with different magnitudes result in different performance for automatic image annotation, a Gaussian normalization method is utilized to normalize the different features extracted from effective image regions segmented by the normalized cuts algorithm, so as to preserve the intrinsic content of the images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model can significantly improve the performance of traditional PLSA for the task of automatic image annotation.
Automated Detection of Microaneurysms Using Scale-Adapted Blob Analysis and Semi-Supervised Learning
Energy Technology Data Exchange (ETDEWEB)
Adal, Kedir M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sidebe, Desire [Univ. of Burgundy, Dijon (France); Ali, Sharib [Univ. of Burgundy, Dijon (France); Chaum, Edward [Univ. of Tennessee, Knoxville, TN (United States); Karnowski, Thomas Paul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Meriaudeau, Fabrice [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2014-01-07
Despite several attempts, automated detection of microaneurysms (MAs) in digital fundus images remains an open problem, owing to the subtle appearance of MAs against the surrounding tissue. In this paper, the microaneurysm detection problem is modeled as finding interest regions, or blobs, in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are then introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated examples, is also proposed to train a classifier to detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques, as well as the applicability of the proposed features to fundus image analysis.
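The blob-finding and automatic scale-selection idea can be sketched with a scale-normalized Laplacian-of-Gaussian response, a standard interest-region detector; the synthetic dark blob and candidate scale grid below are assumptions for illustration, not the authors' exact descriptors:

```python
import numpy as np

def gaussian_kernel(sigma):
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    # Separable Gaussian smoothing: rows, then columns
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, tmp)

def scale_normalized_log(img, sigma):
    s = smooth(img, sigma)
    # 5-point Laplacian, normalized by sigma^2 so responses are comparable
    lap = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
           np.roll(s, 1, 1) + np.roll(s, -1, 1) - 4 * s)
    return sigma**2 * lap

# Synthetic dark blob (an MA-like spot) of sigma ~3 px on a flat background
yy, xx = np.mgrid[0:41, 0:41]
img = -np.exp(-((yy - 20.0)**2 + (xx - 20.0)**2) / (2 * 3.0**2))

scales = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0]
responses = [scale_normalized_log(img, s)[20, 20] for s in scales]
best = scales[int(np.argmax(responses))]   # automatic local-scale selection
```

The scale maximizing the normalized response tracks the blob size, which is what makes region descriptors computed at that scale "scale-adapted".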
Application of semi-supervised deep learning to lung sound analysis.
Chamberlain, Daniel; Kodgule, Rahul; Ganelin, Daniela; Miglani, Vivek; Fletcher, Richard Ribon
2016-08-01
The analysis of lung sounds, collected through auscultation, is a fundamental component of pulmonary disease diagnostics for primary care and of general patient monitoring for telemedicine. Despite advances in computation and algorithms, the goal of automated lung sound identification and classification has remained elusive. Over the past 40 years, published work in this field has demonstrated only limited success in identifying lung sounds, with most published studies using only a small number of patients. In this study, we present a semi-supervised deep learning algorithm for automatically classifying lung sounds from a relatively large number of patients (N=284). Focusing on the two most common lung sounds, wheeze and crackle, we present results from 11,627 sound files recorded from 11 different auscultation locations on these 284 patients with pulmonary disease. Of these sound files, 890 were labeled to evaluate the model, a set significantly larger than in previously published studies. Data were collected with a custom mobile phone application and a low-cost (US$30) electronic stethoscope. On this data set, our algorithm achieves ROC curves with AUCs of 0.86 for wheeze and 0.74 for crackle. Most importantly, this study demonstrates how semi-supervised deep learning can be used with larger data sets without requiring extensive labeling of data.
Tracking mobile users in wireless networks via semi-supervised colocalization.
Pan, Jeffrey Junfeng; Pan, Sinno Jialin; Yin, Jie; Ni, Lionel M; Yang, Qiang
2012-03-01
Recent years have witnessed the growing popularity of sensor and sensor-network technologies, supporting important practical applications. One of the fundamental issues is how to accurately locate a user with few labeled data in a wireless sensor network, where a major difficulty arises from the need to label large quantities of user location data, which in turn requires knowledge about the locations of signal transmitters or access points. To solve this problem, we have developed a novel machine learning-based approach that combines collaborative filtering with graph-based semi-supervised learning to learn both mobile users' locations and the locations of access points. Our framework exploits both labeled and unlabeled data from mobile devices and access points. In our two-phase solution, we first build a manifold-based model from a batch of labeled and unlabeled data in an offline training phase and then use a weighted k-nearest-neighbor method to localize a mobile client in an online localization phase. We extend the two-phase colocalization to an online and incremental model that can deal with labeled and unlabeled data that come sequentially and adapt to environmental changes. Finally, we embed an action model to the framework such that additional kinds of sensor signals can be utilized to further boost the performance of mobile tracking. Compared to other state-of-the-art systems, our framework has been shown to be more accurate while requiring less calibration effort in our experiments performed on three different testbeds.
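The online phase's weighted k-nearest-neighbor localization step can be sketched as follows; the toy radio map (RSS fingerprints and known positions) and the query are invented for illustration:

```python
import numpy as np

def wknn_localize(rss_query, rss_db, locations, k=3, eps=1e-6):
    """Weighted k-nearest-neighbor localization in RSS signal space.

    rss_db: (n, d) fingerprints; locations: (n, 2) known positions.
    The estimate is the inverse-distance-weighted mean of the k closest
    fingerprints' positions.
    """
    d = np.linalg.norm(rss_db - rss_query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)
    w /= w.sum()
    return w @ locations[idx]

# Toy radio map: 4 calibration points with 3-AP RSS fingerprints (dBm)
rss_db = np.array([[-40., -70., -80.],
                   [-70., -40., -80.],
                   [-80., -70., -40.],
                   [-60., -60., -60.]])
locations = np.array([[0., 0.], [10., 0.], [10., 10.], [5., 5.]])

# Query fingerprint close to the first calibration point
est = wknn_localize(np.array([-42., -68., -79.]), rss_db, locations)
```

In the two-phase framework, the manifold model built offline supplies the (partially inferred) fingerprint-to-location map that this online step queries.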
Semisupervised Learning Based Opinion Summarization and Classification for Online Product Reviews
Directory of Open Access Journals (Sweden)
Mita K. Dalal
2013-01-01
The growth of e-commerce has led to numerous websites that market and sell products and allow users to post reviews. It is typical for an online buyer to consult these reviews before making a buying decision; hence, automatic summarization of user reviews has great commercial significance. However, since product reviews are written by non-experts in unstructured natural-language text, summarizing them is challenging. This paper presents a semisupervised approach for mining online user reviews to generate comparative, feature-based statistical summaries that can guide a user in making an online purchase. It includes phases of preprocessing, feature extraction and pruning, feature-based opinion summarization, and overall opinion sentiment classification. Empirical studies indicate that the approach can identify opinionated sentences from blog reviews with a high average precision of 91% and can classify review polarity with a good average accuracy of 86%.
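A minimal sketch of the feature-based summarization and polarity steps, using a hand-made opinion lexicon in place of the paper's learned classifiers (the lexicons, reviews, and feature list are invented for illustration):

```python
# Tiny opinion lexicons; a real system would learn or expand these
POSITIVE = {"great", "good", "excellent", "sharp", "fast"}
NEGATIVE = {"poor", "bad", "slow", "blurry", "broken"}

def sentence_polarity(sentence):
    """Count lexicon hits: positive score -> 'pos', negative -> 'neg'."""
    words = sentence.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "pos" if score > 0 else "neg" if score < 0 else "neutral"

def summarize(reviews, features):
    """Per-feature opinion counts: a minimal feature-based summary."""
    summary = {f: {"pos": 0, "neg": 0} for f in features}
    for review in reviews:
        for sent in review.split("."):
            pol = sentence_polarity(sent)
            if pol == "neutral":
                continue
            for f in features:
                if f in sent.lower():
                    summary[f][pol] += 1
    return summary

reviews = ["The battery is great. The screen is blurry",
           "Excellent battery. Screen looks good"]
summary = summarize(reviews, ["battery", "screen"])
```

A comparative summary of this shape lets a buyer see, per product feature, how opinion is distributed across reviews.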
An immune-inspired semi-supervised algorithm for breast cancer diagnosis.
Peng, Lingxi; Chen, Wenbin; Zhou, Wubai; Li, Fufang; Yang, Jin; Zhang, Jiandong
2016-10-01
Breast cancer is the most frequently diagnosed life-threatening cancer worldwide and the leading cause of cancer death among women. Early, accurate diagnosis greatly improves the prospects of treating breast cancer. Researchers have approached this problem with various data mining and machine learning techniques, such as support vector machines and artificial neural networks. Computer immunology, an intelligent method inspired by the biological immune system, has also been successfully applied in pattern recognition, combinatorial optimization, and machine learning. However, most of these diagnosis methods are supervised, and labeled data are very expensive to obtain in biology and medicine. In this paper, we integrate state-of-the-art research in the life sciences with artificial intelligence and propose a semi-supervised learning algorithm to reduce the need for labeled data. We use two well-known benchmark breast cancer datasets acquired from the UCI machine learning repository. Extensive experiments are conducted and evaluated on these two datasets. Our experimental results demonstrate the effectiveness and efficiency of the proposed algorithm, showing that it is a promising automatic diagnosis method for breast cancer. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Postprocessing of Accidental Scenarios by Semi-Supervised Self-Organizing Maps
Directory of Open Access Journals (Sweden)
Francesco Di Maio
2017-01-01
Integrated Deterministic and Probabilistic Safety Analysis (IDPSA) of dynamic systems calls for efficient methods of accidental-scenario generation. Because the timing and sequencing of failure events along the scenarios must be considered, the number of scenarios to be generated increases with respect to conventional PSA. Consequently, postprocessing the scenarios to retrieve safety-relevant information about system behavior is challenged by the large number of generated scenarios, which makes the computational cost enormous and the retrieved information difficult to interpret. In the context of IDPSA, interpretation consists of classifying the generated scenarios as safe, failed, Near Misses (NMs), or Prime Implicants (PIs). To address this issue, we propose an ensemble of Semi-Supervised Self-Organizing Maps (SSSOMs) whose outcomes are combined according to two strategies: a locally weighted aggregation and a decision-tree-based aggregation. In the former, we resort to the Local Fusion (LF) principle to account for the classification reliability of the different SSSOM classifiers, whereas in the latter we build a classification scheme that selects the appropriate classifier (or ensemble of classifiers) for the type of scenario to be classified. The two strategies are applied to the postprocessing of the accidental scenarios of a dynamic U-Tube Steam Generator (UTSG).
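A semi-supervised self-organizing map can be sketched as an ordinary SOM trained on unlabeled scenario features, with map nodes labeled afterwards from the few labeled scenarios; the 2-D features, node count, and decay schedules below are illustrative assumptions, not the paper's SSSOM or its aggregation strategies:

```python
import numpy as np

def train_som(X, n_nodes=4, n_iter=200, lr0=0.5, sigma0=2.0):
    """Tiny 1-D self-organizing map trained on the rows of X."""
    rng = np.random.default_rng(0)
    W = X[rng.choice(len(X), n_nodes, replace=False)].astype(float)
    idx = np.arange(n_nodes)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x)**2).sum(axis=1))      # best-matching unit
        lr = lr0 * (1 - t / n_iter)                    # decaying learning rate
        sigma = max(sigma0 * (1 - t / n_iter), 0.5)    # shrinking neighborhood
        h = np.exp(-(idx - bmu)**2 / (2 * sigma**2))
        W += lr * h[:, None] * (x - W)
    return W

# Scenarios summarized by 2 features; two outcome classes, few labeled
safe = np.array([[0.1, 0.1], [0.2, 0.15], [0.15, 0.2], [0.1, 0.25]])
failed = np.array([[0.9, 0.8], [0.85, 0.9], [0.8, 0.85], [0.95, 0.8]])
X = np.vstack([safe, failed])
W = train_som(X)

# Semi-supervised step: label each SOM node by its nearest labeled scenario
labeled_X = np.array([[0.15, 0.18], [0.88, 0.84]])
labeled_y = np.array([0, 1])                           # 0 = safe, 1 = failed
node_labels = labeled_y[np.argmin(
    ((W[:, None] - labeled_X[None])**2).sum(-1), axis=1)]

# Classify a new scenario via its best-matching unit
new = np.array([0.12, 0.2])
pred = node_labels[np.argmin(((W - new)**2).sum(axis=1))]
```

An ensemble of such maps, combined by LF weights or a decision tree as described above, is what the paper applies to the generated scenarios.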
spa: Semi-Supervised Semi-Parametric Graph-Based Estimation in R
Directory of Open Access Journals (Sweden)
Mark Culp
2011-04-01
In this paper, we present an R package that combines feature-based (X) data and graph-based (G) data for prediction of the response Y. In this setting, Y is observed for a subset of the observations (labeled) and missing for the remainder (unlabeled). We examine an approach for fitting Y = Xβ + f(G), where β is a coefficient vector and f is a function over the vertices of the graph. The procedure is semi-supervised in nature (trained on the labeled and unlabeled sets) and requires iterative algorithms for fitting the estimate. The package provides several key functions for fitting and evaluating estimators of this type. It is illustrated on a text-analysis data set, where the observations are text documents (papers), the response is the category of paper (either applied or theoretical statistics), the X information is the name of the journal in which the paper appeared, and the graph is a co-citation network, with each vertex an observation and each edge weighted by the number of times the two papers cite a common paper. An application involving classification of protein location using a protein interaction graph, and an application involving classification on a manifold with part of the feature data converted to a graph, are also presented.
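A rough sketch of fitting Y = Xβ + f(G) (in Python rather than R, with plain ridge and graph-Laplacian penalties standing in for the package's semi-parametric machinery) alternates between smoothing f over the graph and refitting β on the labeled residuals; the toy graph, features, and penalty weights are illustrative assumptions:

```python
import numpy as np

# Toy semi-supervised set-up: 6 graph vertices, the first 4 labeled
X = np.array([[1., 0.], [1., 1.], [0., 1.], [1., 2.], [0., 2.], [2., 0.]])
y = np.array([1.0, 2.0, 1.5, 3.0])           # responses for vertices 0..3
A = np.array([[0, 1, 0, 0, 0, 1],            # adjacency matrix of G
              [1, 0, 1, 1, 0, 0],
              [0, 1, 0, 1, 1, 0],
              [0, 1, 1, 0, 1, 0],
              [0, 0, 1, 1, 0, 0],
              [1, 0, 0, 0, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A               # graph Laplacian

labeled = np.arange(4)
beta, f = np.zeros(2), np.zeros(6)
lam_f, lam_b = 1.0, 0.1                      # smoothing / ridge weights

for _ in range(100):
    # Fit f over ALL vertices: smooth on G, match residual on labeled nodes
    M = lam_f * L
    M[labeled, labeled] += 1.0
    r = np.zeros(6)
    r[labeled] = y - X[labeled] @ beta
    f = np.linalg.solve(M, r)
    # Ridge fit of beta on the labeled residual y - f
    Xl = X[labeled]
    beta = np.linalg.solve(Xl.T @ Xl + lam_b * np.eye(2),
                           Xl.T @ (y - f[labeled]))

pred = X @ beta + f                          # predictions for all 6 vertices
```

Because f is solved over all vertices, the unlabeled nodes influence the fit through the Laplacian penalty, which is what makes the procedure semi-supervised.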
Directory of Open Access Journals (Sweden)
Yanqi Hao
2015-07-01
Alternative splicing acts on transcripts from almost all human multi-exon genes. Notwithstanding its ubiquity, fundamental ramifications of splicing for protein expression remain unresolved. The number and identity of spliced transcripts that form stably folded proteins remain sources of considerable debate, due largely to the low coverage of experimental methods and the resulting absence of negative data. We circumvent this issue by developing a semi-supervised learning algorithm, positive unlabeled learning for splicing elucidation (PULSE; http://www.kimlab.org/software/pulse), which uses 48 features spanning various categories. We validated its accuracy on sets of bona fide protein isoforms and directly on mass spectrometry (MS) spectra, for an overall AU-ROC of 0.85. We predict that around 32% of "exon skipping" alternative splicing events produce stable proteins, suggesting that the process engenders a significant number of previously uncharacterized proteins. We also provide insights into the distribution of positive isoforms in various functional classes and into the structural effects of alternative splicing.
A semi-supervised classification algorithm using the TAD-derived background as training data
Fan, Lei; Ambeau, Brittany; Messinger, David W.
2013-05-01
In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps their expected number). Supervised approaches require an analyst to identify training data from which to learn the characteristics of the clusters, so that all other pixels can then be classified into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-nearest-neighbor graph model of the data, along with a spectral connected-components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining these ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT area and against the University of Pavia scene.
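The mutual k-nearest-neighbor graph and its connected components, the backbone of the TAD background model, can be sketched as follows (the toy 2-D "spectra" and choice of k are illustrative assumptions):

```python
import numpy as np

def mutual_knn_components(X, k=2):
    """Connected components of the mutual k-NN graph of the rows of X.

    An edge (i, j) exists only if i is among j's k nearest neighbors AND
    j is among i's; large components then serve as background ROIs.
    """
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]
    knn = np.zeros((n, n), dtype=bool)
    for i in range(n):
        knn[i, nn[i]] = True
    adj = knn & knn.T                        # keep mutual (symmetric) edges
    # Depth-first search for connected components
    comp = -np.ones(n, dtype=int)
    c = 0
    for s in range(n):
        if comp[s] >= 0:
            continue
        stack = [s]
        comp[s] = c
        while stack:
            u = stack.pop()
            for v in np.where(adj[u])[0]:
                if comp[v] < 0:
                    comp[v] = c
                    stack.append(v)
        c += 1
    return comp

# Two tight spectral clusters plus one isolated "anomaly" pixel
X = np.array([[0., 0.], [0.1, 0.], [0., 0.1],
              [5., 5.], [5.1, 5.], [5., 5.1],
              [20., 0.]])
comp = mutual_knn_components(X, k=2)
```

In TAD, the largest components would be kept as background ROIs for the GML or MDM classifier, and the isolated point would surface as an anomaly.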
Semi-Supervised Classification for Fault Diagnosis in Nuclear Power Plants
International Nuclear Information System (INIS)
Ma, Jian Ping; Jiang, Jin
2014-01-01
Pattern classification methods have become important tools for fault diagnosis in industrial systems. However, it is normally difficult to obtain reliable labeled data with which to train a supervised pattern classification model for applications in a nuclear power plant (NPP), whereas unlabeled data easily become available through the increased deployment of supervisory, control, and data acquisition (SCADA) systems. In this paper, a fault diagnosis scheme based on a semi-supervised classification (SSC) method is developed with specific applications to NPPs. In this scheme, newly measured plant data are treated as unlabeled data and are integrated with selected labeled data to train an SSC model, which is then used to estimate labels for the new data. Compared to exclusively supervised approaches, the proposed scheme requires significantly fewer labeled data points to train a classifier. Furthermore, it is shown that a higher degree of uncertainty in the labeled data can be tolerated. The developed scheme has been validated using data generated from a desktop NPP simulator and from a physical NPP simulator, using a graph-based SSC algorithm. Two case studies were used in the validation process. In the first, three faults were simulated on the desktop simulator; all were classified successfully with only four labeled data points per fault case. In the second, six types of fault were simulated on the physical NPP simulator, and all were successfully diagnosed. The results demonstrate that SSC is a promising tool for fault diagnosis.
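A graph-based SSC step of the kind used for validation can be sketched with classic label propagation on a similarity graph; the toy measurements, RBF bandwidth, and two fault classes are invented for illustration:

```python
import numpy as np

def label_propagation(W, y_init, labeled, n_iter=200):
    """Iterative graph label propagation over a similarity matrix W.

    y_init: (n, c) one-hot rows for labeled samples, zeros for unlabeled.
    Labeled rows are clamped back to their known labels each iteration.
    """
    P = W / W.sum(axis=1, keepdims=True)     # row-stochastic transitions
    F = y_init.copy()
    for _ in range(n_iter):
        F = P @ F
        F[labeled] = y_init[labeled]         # clamp the labeled data
    return F.argmax(axis=1)

# Toy "plant measurements": two fault clusters, one labeled point each
X = np.array([[0., 0.], [0.2, 0.1], [0.1, 0.3],
              [3., 3.], [3.2, 3.1], [2.9, 3.2]])
d2 = ((X[:, None] - X[None, :])**2).sum(-1)
W = np.exp(-d2 / 0.5)                        # RBF similarity graph

y_init = np.zeros((6, 2))
y_init[0, 0] = 1.0                           # sample 0 -> fault class 0
y_init[3, 1] = 1.0                           # sample 3 -> fault class 1
pred = label_propagation(W, y_init, labeled=np.array([0, 3]))
```

With one labeled sample per fault, the unlabeled measurements inherit the label of their cluster, mirroring how the scheme diagnoses new plant data.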
Arrangement and Applying of Movement Patterns in the Cerebellum Based on Semi-supervised Learning.
Solouki, Saeed; Pooyan, Mohammad
2016-06-01
Biological control systems have long been studied as a possible inspiration for the construction of robotic controllers. The cerebellum is known to be involved in the production and learning of smooth, coordinated movements, so its highly regular structure has been at the core of attention in theoretical and computational modeling. However, most models reflect particular features of the cerebellum without regard for the whole motor-command computational process. In this paper, we draw a logical relation between the most significant models of the cerebellum and introduce a new learning strategy for arranging movement patterns: cerebellar modular arrangement and applying of movement patterns based on semi-supervised learning (CMAPS). We treat the cerebellum as a large archive of patterns with an efficient organization for classifying and recalling them. The main idea is to achieve optimal use of memory locations through more than just a supervised learning and classification algorithm. Certainly, more experimental and physiological research is needed to confirm our hypothesis.
Adal, Kedir M; Sidibé, Désiré; Ali, Sharib; Chaum, Edward; Karnowski, Thomas P; Mériaudeau, Fabrice
2014-04-01
Despite several attempts, automated detection of microaneurysms (MAs) in digital fundus images remains an open problem, owing to the subtle appearance of MAs against the surrounding tissue. In this paper, the microaneurysm detection problem is modeled as finding interest regions, or blobs, in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated examples, is also proposed to train a classifier that can detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques, as well as the applicability of the proposed features to fundus image analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
An Efficient Semi-supervised Learning Approach to Predict SH2 Domain Mediated Interactions.
Kundu, Kousik; Backofen, Rolf
2017-01-01
The Src homology 2 (SH2) domain is an important subclass of modular protein domains that plays an indispensable role in several biological processes in eukaryotes. SH2 domains specifically bind the phosphotyrosine residues of their binding peptides to facilitate various molecular functions. To determine the subtle binding specificities of SH2 domains, it is very important to understand the intriguing mechanisms by which these domains recognize their target peptides in a complex cellular environment. Several attempts have been made to predict SH2-peptide interactions using high-throughput data. However, these data are often affected by a low signal-to-noise ratio, and the prediction methods have several additional shortcomings, such as linearity assumptions and high computational complexity. Thus, computational identification of SH2-peptide interactions from high-throughput data remains challenging. Here, we propose a machine learning approach based on an efficient semi-supervised learning technique for predicting 51 SH2-domain-mediated interactions in the human proteome. In our study, we successfully employed several strategies to tackle the major problems in computational identification of SH2-peptide interactions.
Multi-Label Classification by Semi-Supervised Singular Value Decomposition.
Jing, Liping; Shen, Chenyang; Yang, Liu; Yu, Jian; Ng, Michael K
2017-10-01
Multi-label problems arise in various domains, including automatic multimedia data categorization, and have generated significant interest in the computer vision and machine learning communities. However, existing methods do not adequately address two key challenges: exploiting correlations between labels and compensating for scarce or even missing labeled data. In this paper, we propose a semi-supervised singular value decomposition (SVD) to handle both challenges. The proposed model takes advantage of nuclear-norm regularization on the SVD to effectively capture label correlations. Meanwhile, it introduces manifold regularization on the mapping to capture the intrinsic structure among the data, which reduces the amount of labeled data required while improving classification performance. Furthermore, we designed an efficient algorithm, based on the alternating direction method of multipliers, to solve the proposed model, so it can handle large-scale data sets. Experimental results for synthetic and real-world multimedia data sets demonstrate that the proposed method exploits label correlations and obtains better label prediction results than state-of-the-art methods.
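The nuclear-norm term enters ADMM-style solvers through a singular-value soft-thresholding proximal step; a minimal sketch on an assumed low-rank label-score matrix (the threshold and matrix are illustrative, not the paper's full model):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm.

    Shrinks each singular value by tau; small (noise) singular values are
    zeroed, leaving a low-rank matrix that encodes label correlations.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt, s_shrunk

rng = np.random.default_rng(0)
# Rank-2 label-score matrix (8 samples x 5 labels) plus small noise
M = rng.random((8, 2)) @ rng.random((2, 5)) + 0.01 * rng.random((8, 5))
M_low, s = svt(M, tau=0.1)
```

Inside an ADMM loop, this step alternates with the manifold-regularized fit of the mapping, which is how the two regularizers interact in the model described above.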
Liu, Minmin; Li, Jian; Cai, Chao; Zhou, Ziwei; Ling, Yun; Liu, Rui
2017-08-01
Herein, we report a novel route to construct a hierarchical three-dimensional porous carbon (3DC) through a copolymer-silica assembly. In the synthesis, silica acts as a hard template and leads to the formation of an interconnected 3D macropore, whereas styrene-co-acrylonitrile polymer has been used as both a carbon source and a soft template for micro- and mesopores. The obtained 3DC materials possess a large surface area (∼550.5 m² g⁻¹), which facilitates high dispersion of Pt nanoparticles on the carbon support. The 3DC-supported Pt electrocatalyst shows excellent performance in the oxygen reduction reaction (ORR). The easy processing ability along with the characteristics of hierarchical porosity offers a new strategy for the preparation of carbon nanomaterials for energy application.
Directory of Open Access Journals (Sweden)
Fanny Perraudeau
2017-07-01
Novel single-cell transcriptome sequencing assays allow researchers to measure gene expression levels at the resolution of single cells and offer the unprecedented opportunity to investigate fundamental biological questions at the molecular level, such as stem cell differentiation or the discovery and characterization of rare cell types. However, such assays raise challenging statistical and computational questions and require the development of novel methodology and software. Using stem cell differentiation in the mouse olfactory epithelium as a case study, this integrated workflow provides a step-by-step tutorial on the methodology and associated software for four main tasks: (1) dimensionality reduction that accounts for zero inflation and overdispersion and adjusts for gene- and cell-level covariates; (2) cell clustering using resampling-based sequential ensemble clustering; (3) inference of cell lineages and pseudotimes; and (4) differential expression analysis along lineages.
Ali, Amir Monir
2018-01-01
The aim of this study was to evaluate the commercially available orthopedic metal artifact reduction (OMAR) technique in postoperative three-dimensional computed tomography (3DCT) reconstruction studies after spinal instrumentation and to investigate its clinical application. One hundred and twenty (120) patients with spinal metallic implants were included in the study. All underwent 3DCT reconstruction examinations using the OMAR software after informed consent was obtained and the study was approved by the institutional ethics committee. The degree of the artifacts, the related muscular density, the clearness of intermuscular fat planes, and the definition of the adjacent vertebrae were qualitatively evaluated, and the diagnostic satisfaction and quality of the 3D reconstruction images were thoroughly assessed. The majority (96.7%) of the 3DCT reconstruction images were considered satisfactory to excellent for diagnosis; only 3.3% were of unacceptable diagnostic quality. OMAR can effectively reduce metallic artifacts in patients with spinal instrumentation, yielding highly diagnostic 3DCT reconstruction images.
Yoon, Ki Ro; Kim, Dae Sik; Ryu, Won-Hee; Song, Sung Ho; Youn, Doo-Young; Jung, Ji-Won; Jeon, Seokwoo; Park, Yong Joon; Kim, Il-Doo
2016-08-23
The development of efficient bifunctional catalysts for the oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) is a key issue for high-performance Li-O2 batteries. Here, we propose a heterogeneous electrocatalyst consisting of LaMnO3 nanofibers (NFs) functionalized with RuO2 nanoparticles (NPs) and non-oxidized graphene nanoflakes (GNFs). The Li-O2 cell employing the tailored catalysts delivers excellent electrochemical performance, affording significantly reduced discharge/charge voltage gaps (1.0 V at 400 mA g(-1)) and superior cyclability over 320 cycles. The outstanding performance arises from (1) the networked LaMnO3 NFs providing ORR/OER sites without severe aggregation, (2) the synergistic coupling of RuO2 NPs, which further improves the OER activity and the electrical conductivity on the surface of the LaMnO3 NFs, and (3) the GNFs providing a fast electronic pathway as well as improved ORR kinetics. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Liang, Hui; Li, Chenwei; Chen, Tao; Cui, Liang; Han, Jingrui; Peng, Zhi; Liu, Jingquan
2018-02-01
Because of the urgent need for renewable resources, the oxygen reduction reaction (ORR) has been widely studied, and finding efficient, low-cost non-precious-metal catalysts is increasingly critical. In this study, melamine foam is used as a template to obtain a porous sulfur- and nitrogen-codoped graphene/carbon foam with uniformly distributed cobalt sulfide nanoparticles (Co1-xS/SNG/CF), prepared by a simple infiltration-drying-sulfuration method. Notably, the melamine foam not only serves as a three-dimensional support skeleton but also provides a nitrogen source without any environmental pollution. The Co1-xS/SNG/CF catalyst shows excellent oxygen reduction catalytic performance, with an onset potential of 0.99 V, the same as that of a Pt/C catalyst (Eonset = 0.99 V). Furthermore, the stability and methanol tolerance of Co1-xS/SNG/CF are superior to those of the Pt/C catalyst. Our work demonstrates a facile method to prepare an S- and N-codoped 3D graphene network decorated with Co1-xS nanoparticles, which may serve as a potential alternative to expensive Pt/C catalysts for the ORR.
Cai, Kai; Liu, Jiawei; Zhang, Huan; Huang, Zhao; Lu, Zhicheng; Foda, Mohamed F; Li, Tingting; Han, Heyou
2015-05-11
An intermediate-template-directed method has been developed for the synthesis of quasi-one-dimensional Au/PtAu heterojunction nanotubes by the heterogeneous nucleation and growth of Au on Te/Pt core-shell nanostructures in aqueous solution. The synthesized porous Au/PtAu bimetallic nanotubes (PABNTs) consist of a porous tubular framework and attached Au nanoparticles (AuNPs). The reaction intermediates played an important role in the preparation, fabricating the framework and providing a localized reducing agent for the reduction of the Au and Pt precursors. The Pt7Au PABNTs showed higher electrocatalytic activity and durability in the oxygen reduction reaction (ORR) in 0.1 M HClO4 than porous Pt nanotubes (PtNTs) and commercially available Pt/C. The mass activity of the PABNTs was 218% of that of commercial Pt/C after an accelerated durability test. This study demonstrates the potential of PABNTs as highly efficient electrocatalysts. In addition, this method provides a facile strategy for the synthesis of desirable hetero-nanostructures with controlled size and shape by utilizing an intermediate template. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
International Nuclear Information System (INIS)
Hu, Enlai; Gao, Xuehui; Etogo, Atangana; Xie, Yunlong; Zhong, Yijun; Hu, Yong
2014-01-01
Highlights: • 1D Bi2S3 nanostructures were prepared by a facile ethanol-assisted one-pot reaction. • The size and morphology of the products can be conveniently varied. • The sulfur source plays a crucial role in determining the morphologies of the products. • 1D Bi2S3 nanostructures exhibit enhanced photocatalytic reduction of Cr(VI). • Bi2S3 nanowires exhibit the highest photoreduction activity among the three samples. - Abstract: One-dimensional (1D) Bi2S3 nanostructures with various morphologies, including nanowires, nanorods, and nanotubes, have been successfully synthesized through a facile ethanol-assisted one-pot reaction. The size, morphology, and structure of the products can be conveniently controlled by simply adjusting the volume ratio of ethanol and water in the reaction system. Further experimental results indicate that the sulfur source also plays a crucial role in determining the product morphology. The synthetic strategy developed in this work is highly efficient in producing 1D Bi2S3 nanostructures of high quality and in large quantity. Photocatalysis experiments show that the as-prepared 1D Bi2S3 nanostructures possess significantly enhanced photocatalytic reduction of Cr(VI) under visible-light irradiation. In particular, Bi2S3 nanowires exhibit the highest photocatalytic activity and can be used repeatedly after being washed with dilute HCl.
Energy Technology Data Exchange (ETDEWEB)
Jiang, Zhihang; Ma, Yongjun [State Key Laboratory Cultivation Base for Nonmetal Composites and Functional Materials, Southwest University of Science and Technology, Mianyang 621010 (China); Zhou, Yong [Eco-materials and Renewable Energy Research Center (ERERC), School of Physics, National Lab of Solid State Microstructure, ERERC, Nanjing University, Nanjing 210093 (China); Hu, Shanglian [School of Life Science and Engineering, Southwest University of Science and Technology, Mianyang 621010 (China); Han, Chaojiang [State Key Laboratory Cultivation Base for Nonmetal Composites and Functional Materials, Southwest University of Science and Technology, Mianyang 621010 (China); Pei, Chonghua, E-mail: peichonghua@swust.edu.cn [State Key Laboratory Cultivation Base for Nonmetal Composites and Functional Materials, Southwest University of Science and Technology, Mianyang 621010 (China)
2013-10-15
Graphical abstract: - Highlights: • The Si/SiC composites were synthesized by one-step magnesiothermic reduction. • The mesoporous composites have a high specific surface area (655.7 m² g⁻¹). • The composites exhibited strong photoluminescence and better biocompatibility. • The mechanisms of formation and photoluminescence of the samples are discussed. - Abstract: By converting modified silica aerogels to the corresponding silicon/silicon carbide (Si/SiC) without losing the nanostructure, three-dimensional mesoporous (3DM) Si/SiC composites are successfully synthesized via one-step magnesiothermic reduction at relatively low temperature (650 °C). The phase composition and microstructure of the resulting samples are measured by X-ray diffraction (XRD), energy-dispersive X-ray spectroscopy (EDX), Raman spectroscopy, scanning electron microscopy (SEM), and transmission electron microscopy (TEM). N2-sorption isotherms show that the products have high Brunauer–Emmett–Teller (BET) specific surface areas (up to 656 m² g⁻¹) and narrow pore-size distributions (1.5–30 nm). The composites exhibit strong photoluminescence (PL) in the blue-green region (peak centered at 533 nm). We have begun work on the biocompatibility and on enhancing the PL of the samples. Given the excellent performance of the composites, significant applications can be expected in optoelectronics, biosensors, biological tracers, and so on.
Graph-Based Semi-Supervised Learning for Indoor Localization Using Crowdsourced Data
Directory of Open Access Journals (Sweden)
Liye Zhang
2017-04-01
Indoor positioning based on the received signal strength (RSS) of WiFi signals has become the most popular solution for indoor localization. To enable rapid deployment of indoor localization systems, solutions based on crowdsourcing have been proposed. However, compared to conventional methods, a crowdsourcing system involves many different devices, and fewer RSS values are collected by each device; the crowdsourced RSS values are therefore more erroneous and can result in significant localization errors. To eliminate signal-strength variations across diverse devices, a Linear Regression (LR) algorithm is proposed to solve the device-diversity problem in the crowdsourcing system. After uniform RSS values are obtained, a graph-based semi-supervised learning (G-SSL) method exploits the correlation between RSS values at nearby locations to estimate an optimal RSS value at each location, mitigating the negative effect of the erroneous measurements. Since the AP locations must be known in the G-SSL algorithm, the Compressed Sensing (CS) method is applied to estimate them precisely. Based on the AP locations and a simple signal-propagation model, the RSS difference between locations is calculated and used as an additional constraint to improve the performance of G-SSL. Furthermore, to exploit the sparsity of the weights used in G-SSL, we use the CS method to reconstruct these weights more accurately, further improving performance. Experimental results show improvements in both the smoothness of the radio map and the localization accuracy.
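The device-diversity correction can be sketched as a least-squares linear regression mapping one device's RSS readings onto a reference device's scale; the simulated offset, gain, and noise below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference device A readings (dBm) and a diverse device B that reads the
# same signals with a systematic gain/offset plus measurement noise
rss_a = -90 + 50 * rng.random(30)
rss_b = 1.1 * rss_a - 4 + rng.normal(0, 0.5, 30)

# Fit rss_a ~ w * rss_b + b by least squares to map B onto A's scale
A = np.vstack([rss_b, np.ones_like(rss_b)]).T
w, b = np.linalg.lstsq(A, rss_a, rcond=None)[0]
rss_b_calibrated = w * rss_b + b
```

Once all crowdsourced readings are on a common scale, the G-SSL smoothing over nearby locations described above can operate on consistent inputs.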
Directory of Open Access Journals (Sweden)
Jeroen van Roy
2018-03-01
Full Text Available Nowadays, quality inspection of fruit and vegetables is typically accomplished through visual inspection. Automation of this inspection is desirable to make it more objective. For this, hyperspectral imaging has been identified as a promising technique. When the field of view includes multiple objects, hypercubes should be segmented to assign individual pixels to different objects. Unsupervised and supervised methods have been proposed. While the latter are labour intensive as they require masking of the training images, the former are too computationally intensive for in-line use and may provide different results for different hypercubes. Therefore, a semi-supervised method is proposed to train a computationally efficient segmentation algorithm with minimal human interaction. As a first step, an unsupervised classification model is used to cluster spectra in similar groups. In the second step, a pixel selection algorithm applied to the output of the unsupervised classification is used to build a supervised model which is fast enough for in-line use. To evaluate this approach, it is applied to hypercubes of vine tomatoes and table grapes. After first derivative spectral preprocessing to remove intensity variation due to curvature and gloss effects, the unsupervised models segmented 86.11% of the vine tomato images correctly. Considering overall accuracy, sensitivity, specificity and time needed to segment one hypercube, partial least squares discriminant analysis (PLS-DA) was found to be the best choice for in-line use, when using one training image. By adding a second image, the segmentation results improved considerably, yielding an overall accuracy of 96.95% for segmentation of vine tomatoes and 98.52% for segmentation of table grapes, demonstrating the added value of the learning phase in the algorithm.
A semi-supervised method to detect seismic random noise with fuzzy GK clustering
International Nuclear Information System (INIS)
Hashemi, Hosein; Javaherian, Abdolrahim; Babuska, Robert
2008-01-01
We present a new method to detect random noise in seismic data using fuzzy Gustafson–Kessel (GK) clustering. First, using an adaptive distance norm, a matrix is constructed from the observed seismic amplitudes. The next step is to find the centres of ellipsoidal clusters and construct a partition matrix which determines the soft decision boundaries between seismic events and random noise. The GK algorithm updates the cluster centres in order to iteratively minimize the cluster variance. Multiplication of the fuzzy membership function with the values of each sample yields new sections; we name them 'clustered sections'. The seismic amplitude values of the clustered sections are assigned in a way that decreases the level of noise in the original noisy seismic input. In pre-stack data, it is essential to study the clustered sections in the f–k domain; finding the quantitative index for weighting the post-stack data needs a similar approach. Using the knowledge of a human specialist together with the fuzzy unsupervised clustering, the method is a semi-supervised random noise detection. The efficiency of this method is investigated on synthetic and real seismic data for both pre- and post-stack data. The results show a significant improvement of the input noisy sections without harming the important amplitude and phase information of the original data. The procedure for finding the final weights of each clustered section should be carried out carefully in order to keep almost all the evident seismic amplitudes in the output section. The method interactively uses the knowledge of the seismic specialist in detecting the noise.
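The core Gustafson–Kessel update, fuzzy c-means with an adaptive, covariance-derived Mahalanobis norm per cluster, can be sketched as follows. This is an illustrative toy on synthetic 2-D points, not the seismic-amplitude pipeline of the paper; the data, cluster count, fuzzifier m = 2, and iteration budget are all assumptions made for the sketch:

```python
import numpy as np

def gk_cluster(X, c=2, m=2.0, iters=60, seed=0):
    """Gustafson-Kessel fuzzy clustering: like fuzzy c-means, but each
    cluster i uses an adaptive distance norm A_i derived from its fuzzy
    covariance matrix F_i, so ellipsoidal clusters can be recovered."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = rng.random((c, n)); U /= U.sum(0)            # random fuzzy partition
    for _ in range(iters):
        Um = U ** m
        centers = Um @ X / Um.sum(1, keepdims=True)
        D2 = np.empty((c, n))
        for i in range(c):
            diff = X - centers[i]
            F = (Um[i, :, None, None] *
                 (diff[:, :, None] * diff[:, None, :])).sum(0) / Um[i].sum()
            # volume-normalized inverse covariance = adaptive norm matrix
            A = np.linalg.det(F) ** (1.0 / d) * np.linalg.inv(F)
            D2[i] = np.einsum('nd,de,ne->n', diff, A, diff)
        inv = np.maximum(D2, 1e-12) ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(0)                         # membership update
    return centers, U

# Two synthetic blobs standing in for "event" vs "noise" samples.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.7, (100, 2)),
               rng.normal(6.0, 0.7, (100, 2))])
centers, U = gk_cluster(X)
labels = U.argmax(0)
purity = max((labels[:100] == 0).mean() + (labels[100:] == 1).mean(),
             (labels[:100] == 1).mean() + (labels[100:] == 0).mean()) / 2
```

In the paper's setting, the resulting soft memberships (rows of U) would be multiplied with the seismic amplitudes to form the clustered sections; here they simply recover the two synthetic groups.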
Zhao, Xiaowei; Ning, Qiao; Chai, Haiting; Ma, Zhiqiang
2015-06-07
As a widespread type of protein post-translational modification (PTM), succinylation plays an important role in regulating protein conformation, function and physicochemical properties. Compared with labor-intensive and time-consuming experimental approaches, computational prediction of succinylation sites is much more desirable due to its convenience and speed. Currently, numerous computational models have been developed to identify PTM sites through various types of two-class machine learning algorithms. These methods require both positive and negative samples for training. However, designating the negative samples of PTMs is difficult, and if not properly done it can dramatically affect the performance of computational models. Therefore, in this work, we implemented the first application of the positive-samples-only learning (PSoL) algorithm to the succinylation site prediction problem; PSoL is a special class of semi-supervised machine learning that uses positive samples and unlabeled samples to train the model. Meanwhile, we proposed a novel computational predictor of succinylation sites called SucPred (succinylation site predictor) by using multiple feature encoding schemes. Promising results were obtained by the SucPred predictor, with an accuracy of 88.65% using 5-fold cross validation on the training dataset and an accuracy of 84.40% on the independent testing dataset, which demonstrates that the positive-samples-only learning algorithm presented here is particularly useful for the identification of protein succinylation sites. Moreover, the positive-samples-only learning algorithm can be applied with ease to build predictors for other types of PTM sites. A web server for predicting succinylation sites was developed and is freely accessible at http://59.73.198.144:8088/SucPred/. Copyright © 2015 Elsevier Ltd. All rights reserved.
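The PSoL idea, bootstrapping a negative set from unlabeled data and then expanding it iteratively, can be illustrated with a deliberately simple stand-in classifier. The original work uses SVM-style learners on real sequence features; the nearest-centroid scorer and synthetic 2-D points below are assumptions made purely for the sketch:

```python
import numpy as np

def psol(P, U, init_frac=0.1, rounds=5):
    """Positive-samples-only learning, minimally: unlabeled points farthest
    from the positive centroid seed the negative set, which is expanded
    each round with the most confidently negative unlabeled points."""
    mu_p = P.mean(0)
    k = max(1, int(init_frac * len(U)))
    d_pos = np.linalg.norm(U - mu_p, axis=1)
    neg_idx = set(np.argsort(d_pos)[-k:].tolist())   # initial reliable negatives
    for _ in range(rounds):
        mu_n = U[sorted(neg_idx)].mean(0)
        # higher score = farther from positives AND closer to negatives
        score = d_pos - np.linalg.norm(U - mu_n, axis=1)
        neg_idx |= set(np.argsort(score)[-k:].tolist())
    mu_n = U[sorted(neg_idx)].mean(0)

    def predict(X):
        """1 = positive (closer to the positive centroid)."""
        return (np.linalg.norm(X - mu_p, axis=1)
                < np.linalg.norm(X - mu_n, axis=1)).astype(int)
    return predict

# Synthetic setup: known positives P, unlabeled pool U with hidden classes.
rng = np.random.default_rng(2)
P = rng.normal(0.0, 0.5, (30, 2))
hidden_pos = rng.normal(0.0, 0.5, (30, 2))
hidden_neg = rng.normal(4.0, 0.5, (60, 2))
U = np.vstack([hidden_pos, hidden_neg])
predict = psol(P, U)
acc = (predict(hidden_pos).mean() + (1.0 - predict(hidden_neg)).mean()) / 2
```

The point of the sketch is the training protocol, not the classifier: only positives and unlabeled points are ever given to the learner, and the negative set is grown from the unlabeled pool.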
Ma, Xiaoke; Wang, Bingbo; Yu, Liang
2018-01-01
Community detection is fundamental for revealing the structure-functionality relationship in complex networks, and it involves two issues: a quantitative function for community quality and algorithms to discover communities. Despite significant research on each issue separately, few attempts have been made to establish the connection between the two. To attack this problem, a generalized quantification function is proposed for communities in weighted networks, providing a framework that unifies several well-known measures. We then prove that trace optimization of the proposed measure is equivalent to the objective functions of algorithms such as nonnegative matrix factorization, kernel K-means and spectral clustering. This serves as the theoretical foundation for designing algorithms for community detection. On the second issue, a semi-supervised spectral clustering algorithm is developed by exploiting the equivalence relation, combining nonnegative matrix factorization and spectral clustering. Different from traditional semi-supervised algorithms, the partial supervision is integrated into the objective of the spectral algorithm. Finally, through extensive experiments on both artificial and real-world networks, we demonstrate that the proposed method improves the accuracy of traditional spectral algorithms in community detection.
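The flavor of this approach, partial supervision injected directly into a spectral objective, can be conveyed by a much simpler stand-in: folding must-link/cannot-link constraints into the affinity matrix before a two-way spectral cut. This is not the paper's NMF-based formulation; the constraint encoding, the Gaussian kernel, and the synthetic data are assumptions for the sketch:

```python
import numpy as np

def semi_supervised_spectral(X, must_link=(), cannot_link=(), sigma=1.0):
    """Two-way spectral partitioning with partial supervision encoded in
    the affinities: must-link pairs get maximal weight, cannot-link pairs
    zero weight. The sign of the Fiedler vector (eigenvector of the
    second-smallest Laplacian eigenvalue) gives the bipartition."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    for i, j in must_link:
        W[i, j] = W[j, i] = 1.0
    for i, j in cannot_link:
        W[i, j] = W[j, i] = 0.0
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W
    _, vecs = np.linalg.eigh(L)          # eigh returns ascending eigenvalues
    return (vecs[:, 1] > 0).astype(int)

# Two synthetic communities, plus a pair of pairwise constraints.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.4, (20, 2)),
               rng.normal(3.0, 0.4, (20, 2))])
labels = semi_supervised_spectral(X, must_link=[(0, 1)], cannot_link=[(0, 20)])
```

In the paper the supervision enters the trace-optimization objective itself rather than the affinities, but the effect is the same in spirit: known pairwise relations bias the spectral partition.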
Directory of Open Access Journals (Sweden)
Wei Tian
2015-01-01
Full Text Available Background: The treatment of high-grade developmental spondylolisthesis (HGDS) is still challenging and controversial. In this study, we investigated the efficacy of posterior reduction and monosegmental fusion assisted by an intraoperative three-dimensional (3D) navigation system in managing HGDS. Methods: Thirteen consecutive HGDS patients were treated with posterior decompression, reduction and monosegmental fusion of L5/S1, assisted by an intraoperative 3D navigation system. The clinical and radiographic outcomes were evaluated, with a minimum follow-up of 2 years. The differences between the pre- and post-operative measures were statistically analyzed using a two-tailed, paired t-test. Results: At the most recent follow-up, 12 patients were pain-free. Only 1 patient had moderate pain. There were no permanent neurological complications or pseudarthrosis. Magnetic resonance imaging showed that there was no obvious disc degeneration in the adjacent segment. All radiographic parameters were improved. Mean slippage improved from 63.2% before surgery to 12.2% after surgery and 11.0% at the latest follow-up. Lumbar lordosis changed from 34.9 ± 13.3° preoperatively to 50.4 ± 9.9° postoperatively, and 49.3 ± 7.8° at the last follow-up. L5 incidence improved from 71.0 ± 11.3° to 54.0 ± 11.9° and did not change significantly at the last follow-up (53.1 ± 15.4°). While pelvic incidence remained unchanged, sacral slip significantly changed from 32.7 ± 12.5° preoperatively to 42.6 ± 9.8° postoperatively and remained constant at the last follow-up (44.4 ± 6.9°). Pelvic tilt significantly decreased from 38.4 ± 12.5° to 30.9 ± 8.1° and remained unchanged at the last follow-up (28.1 ± 11.2°). Conclusions: Posterior reduction and monosegmental fusion of L5/S1 assisted by intraoperative 3D navigation is an effective technique for managing high-grade dysplastic spondylolisthesis. A complete reduction of local deformity and excellent correction of overall
Liang, Yong; Chai, Hua; Liu, Xiao-Ying; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak
2016-03-01
One of the most important objectives of clinical cancer research is to diagnose cancer more accurately based on patients' gene expression profiles. Both the Cox proportional hazards model (Cox) and the accelerated failure time model (AFT) have been widely adopted for high-risk versus low-risk classification and survival time prediction in patients' clinical treatment. Nevertheless, two main dilemmas limit the accuracy of these prediction methods. One is that the small sample size and censored data remain a bottleneck for training a robust and accurate Cox classification model. In addition, tumours with similar phenotypes and prognoses can actually be completely different diseases at the genotype and molecular level. Thus, the utility of the AFT model for survival time prediction is limited when such biological differences between the diseases have not been previously identified. To overcome these two main dilemmas, we propose a novel semi-supervised learning method based on the Cox and AFT models to accurately predict the treatment risk and survival time of patients. Moreover, we adopt the efficient L1/2 regularization approach in the semi-supervised learning method to select the relevant genes that are significantly associated with the disease. The results of the simulation experiments show that the semi-supervised learning model can significantly improve the predictive performance of the Cox and AFT models in survival analysis. The proposed procedures have been successfully applied to four real microarray gene expression and artificial evaluation datasets. The advantages of our proposed semi-supervised learning method include: 1) significantly increasing the available training samples from censored data; 2) high capability for identifying the survival risk classes of patients in the Cox model; 3) high predictive accuracy for patients' survival time in the AFT model; 4) strong capability for relevant biomarker selection.
Consequently, our proposed semi-supervised
Directory of Open Access Journals (Sweden)
Beretta Lorenzo
2010-08-01
Full Text Available Abstract Background Epistasis is recognized as a fundamental part of the genetic architecture of individuals. Several computational approaches have been developed to model gene-gene interactions in case-control studies; however, none of them is suitable for time-dependent analysis. Herein we introduce the Survival Dimensionality Reduction (SDR) algorithm, a non-parametric method specifically designed to detect epistasis in lifetime datasets. Results The algorithm requires no specification of either the underlying survival distribution or the underlying interaction model, and it proved sufficiently powerful to detect a set of causative genes in synthetic epistatic lifetime datasets with a limited number of samples and a high degree of right-censorship (up to 70%). The SDR method was then applied to a series of 386 Dutch patients with active rheumatoid arthritis that were treated with anti-TNF biological agents. Among a set of 39 candidate genes, none of which showed a detectable marginal effect on anti-TNF response, the SDR algorithm did find that the rs1801274 SNP in the FcγRIIa gene and the rs10954213 SNP in the IRF5 gene non-linearly interact to predict clinical remission after anti-TNF biologicals. Conclusions Simulation studies and application in a real-world setting support the capability of the SDR algorithm to model epistatic interactions in candidate-gene studies in the presence of right-censored data. Availability: http://sourceforge.net/projects/sdrproject/
Inoue, Yuuji; Yoneyama, Masami; Nakamura, Masanobu; Takemura, Atsushi
2018-06-01
The two-dimensional Cartesian turbo spin-echo (TSE) sequence is widely used in routine clinical studies, but it is sensitive to respiratory motion. We investigated the k-space orders in Cartesian TSE that can effectively reduce motion artifacts. The purpose of this study was to demonstrate the relationship between k-space order and degree of motion artifacts using a moving phantom. We compared the degree of motion artifacts between linear and asymmetric k-space orders. The actual spacing of ghost artifacts in the asymmetric order was doubled compared with that in the linear order in the free-breathing situation. The asymmetric order clearly showed less sensitivity to incomplete breath-hold at the latter half of the imaging period. Because of the actual number of partitions of the k-space and the temporal filling order, the asymmetric k-space order of Cartesian TSE was superior to the linear k-space order for reduction of ghosting motion artifacts.
International Nuclear Information System (INIS)
Tang, Sheng; Zhou, Xuejun; Xu, Nengneng; Bai, Zhengyu; Qiao, Jinli; Zhang, Jiujun
2016-01-01
Highlights: • 3-D porous N-doped graphene was prepared using a one-step silica-template-free method. • A high specific surface area of 920 m² g⁻¹ was achieved for the 3-D porous N-doped graphene. • Much higher ORR activity was observed for N-doped graphene than for S-doped graphene in 0.1 M KOH. • The as-prepared catalyst gave a peak power density of 275 mW cm⁻² as a zinc–air battery cathode. - Abstract: Three-dimensional nanoporous nitrogen-doped graphene (3D-PNG) has been synthesized through a facile one-step synthesis method without an additional silica template. The as-prepared 3D-PNG was used as an electrocatalyst for the oxygen reduction reaction (ORR) and shows excellent electrochemical performance, demonstrated by half-cell electrochemical evaluation in 0.1 M KOH, including prominent ORR activity, four-electron selectivity and remarkable tolerance to methanol poisoning compared to a commercial 20% Pt/C catalyst. The physical and surface properties of the 3D-PNG catalyst were characterized by scanning electron microscopy (SEM), high-resolution transmission electron microscopy (TEM), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS) and BET surface area analysis. The experiments show that the 3D-PNG catalyst possesses a super-large specific surface area reaching 920 m² g⁻¹, which is superior to our most recently reported 3D-PNG synthesized with a silica template (670 m² g⁻¹) and to other doped graphene catalysts in the literature. When used to construct a zinc–air battery cathode, the 3D-PNG catalyst can give a discharge peak power density of 275 mW cm⁻². All the results point to a unique procedure for producing high-efficiency graphene-based non-noble-metal catalyst materials for electrochemical energy devices, including both fuel cells and metal–air batteries.
Awan, Muaaz Gul; Saeed, Fahad
2017-08-01
Modern high-resolution mass spectrometry instruments can generate millions of spectra in a single systems biology experiment. Each spectrum consists of thousands of peaks, but only a small number of peaks actively contribute to the deduction of peptides. Therefore, pre-processing of MS data to detect noisy and non-useful peaks is an active area of research. Most sequential noise-reducing algorithms are impractical to use as a pre-processing step due to their high time-complexity. In this paper, we present a GPU-based dimensionality-reduction algorithm, called G-MSR, for MS2 spectra. Our proposed algorithm uses novel data structures which optimize the memory and computational operations inside the GPU. These novel data structures include Binary Spectra and Quantized Indexed Spectra (QIS). The former helps in communicating essential information between the CPU and GPU using a minimum amount of data, while the latter enables us to store and process a complex 3-D data structure as a 1-D array while maintaining the integrity of the MS data. Our proposed algorithm also takes into account the limited memory of GPUs and switches between in-core and out-of-core modes based upon the size of the input data. G-MSR achieves a peak speed-up of 386x over its sequential counterpart and is shown to process over a million spectra in just 32 seconds. The code for this algorithm is available as GPL open source on GitHub at the following link: https://github.com/pcdslab/G-MSR.
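One of the core ideas, representing a spectrum compactly by keeping only informative peaks and quantizing intensities into a small number of classes, can be sketched on the CPU side. The GPU kernels, the exact Binary Spectra and QIS layouts, and all parameter values below are assumptions made for illustration, not the paper's implementation:

```python
import numpy as np

def reduce_spectrum(mz, intensity, levels=4, keep_top=50):
    """Toy spectrum reduction: retain the keep_top most intense peaks and
    quantize their intensities into `levels` classes (loosely in the
    spirit of the paper's Quantized Indexed Spectra)."""
    order = np.argsort(intensity)[::-1][:keep_top]    # most intense peaks
    mz_k, it_k = mz[order], intensity[order]
    # equal-width intensity bins over the retained dynamic range
    edges = np.linspace(it_k.min(), it_k.max(), levels + 1)
    q = np.clip(np.digitize(it_k, edges) - 1, 0, levels - 1)
    srt = np.argsort(mz_k)                            # restore m/z order
    return mz_k[srt], q[srt]

# A random toy MS2 spectrum with 1000 peaks.
rng = np.random.default_rng(4)
mz = np.sort(rng.uniform(100.0, 2000.0, 1000))
intensity = rng.exponential(1.0, 1000)
mz_red, q = reduce_spectrum(mz, intensity)
```

The reduced representation (m/z positions plus small integer intensity classes) is the kind of compact, fixed-width structure that maps well onto GPU arrays.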
Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong
2016-06-29
The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities on account of the natural connection between the visual appearance of products and their underlying qualities. Visual images captured from granulated products (GPs), e.g., cereal products or fabric textiles, are composed of a large number of independent particles or stochastically stacked, locally homogeneous fragments, whose analysis and understanding remain challenging. A method of image-statistical-modeling-based OPQI for GP quality grading and monitoring is presented, using a Weibull distribution (WD) model with a semi-supervised learning classifier. WD model parameters (WD-MPs) of the GP images' spatial structures, obtained with omnidirectional Gaussian derivative filtering (OGDF) and demonstrated theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading, integrating two independent classifiers with complementary natures to cope with scarce labeled samples. The effectiveness of the proposed OPQI method was verified in automated rice quality grading, where it was compared with commonly used methods and showed superior performance, laying a foundation for the quality control of GPs on assembly lines.
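The co-training pattern underlying COSC-Boosting, two classifiers trained on complementary views that pseudo-label each other's most confident unlabeled samples, can be sketched minimally. The nearest-centroid scorers, the two synthetic "views", and all parameters below are stand-ins invented for the sketch, not the paper's boosted classifiers or Weibull features:

```python
import numpy as np

class CentroidView:
    """Per-view nearest-centroid scorer, standing in for the two member
    classifiers integrated by the paper's COSC-Boosting scheme."""
    def fit(self, X, y):
        self.mu = np.array([X[y == c].mean(0) for c in (0, 1)])
        return self
    def scores(self, X):
        d = np.linalg.norm(X[:, None, :] - self.mu[None, :, :], axis=2)
        return d[:, 0] - d[:, 1]          # > 0 means closer to class 1
    def predict(self, X):
        return (self.scores(X) > 0).astype(int)

def co_train(Xa, Xb, y_init, rounds=6, per_round=5):
    """Minimal co-training: in each round, the classifier on each view
    pseudo-labels its most confident unlabeled samples, which then become
    training data for both views. y_init uses -1 for unlabeled."""
    y = y_init.copy()
    for _ in range(rounds):
        for Xv in (Xa, Xb):
            lab = np.where(y >= 0)[0]
            unl = np.where(y < 0)[0]
            if len(unl) == 0:
                break
            clf = CentroidView().fit(Xv[lab], y[lab])
            s = clf.scores(Xv[unl])
            top = np.argsort(-np.abs(s))[:per_round]   # most confident
            y[unl[top]] = (s[top] > 0).astype(int)
    lab = np.where(y >= 0)[0]
    return CentroidView().fit(Xa[lab], y[lab]), y

# Two synthetic "views" of the same 60 samples, only 4 of them labeled.
rng = np.random.default_rng(5)
y_true = np.repeat([0, 1], 30)
Xa = rng.normal(0.0, 0.6, (60, 2)) + 3.0 * y_true[:, None]
Xb = rng.normal(0.0, 0.6, (60, 2)) + 3.0 * y_true[:, None]
y_init = np.full(60, -1)
y_init[[0, 1, 30, 31]] = y_true[[0, 1, 30, 31]]
clf, y_pseudo = co_train(Xa, Xb, y_init)
acc = (clf.predict(Xa) == y_true).mean()
```

Starting from only four labels, the pseudo-labeling loop recovers labels for the whole pool, which is the scarce-labeled-sample situation the abstract targets.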
Directory of Open Access Journals (Sweden)
Hesong Shen
Full Text Available To investigate image quality and radiation dose of CT coronary angiography (CTCA) scanned using automatic tube current modulation (ATCM) and reconstructed by strong adaptive iterative dose reduction three-dimensional (AIDR3D). Eighty-four consecutive CTCA patients were collected for the study. All patients were scanned using ATCM and reconstructed with strong AIDR3D, standard AIDR3D and filtered back-projection (FBP), respectively. Two radiologists who were blinded to the patients' clinical data and reconstruction methods evaluated image quality. Quantitative image quality evaluation included image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). To evaluate image quality qualitatively, the coronary artery tree is divided into 15 segments based on the modified guidelines of the American Heart Association. Qualitative image quality was evaluated using a 4-point scale. Radiation dose was calculated based on the dose-length product. Compared with standard AIDR3D, strong AIDR3D had lower image noise and higher SNR and CNR; their differences were all statistically significant (P<0.05); compared with FBP, strong AIDR3D decreased image noise by 46.1%, increased SNR by 84.7%, and improved CNR by 82.2%; their differences were all statistically significant (P<0.05 or 0.001). Segments with diagnostic image quality for strong AIDR3D were 336 (100.0%), 486 (96.4%), and 394 (93.8%) in the proximal, middle, and distal parts, respectively; those for standard AIDR3D were 332 (98.8%), 472 (93.7%), and 378 (90.0%), respectively; those for FBP were 217 (64.6%), 173 (34.3%), and 114 (27.1%), respectively; total segments with diagnostic image quality for strong AIDR3D (1216, 96.5%) were higher than those for standard AIDR3D (1182, 93.8%) and FBP (504, 40.0%); the differences between strong AIDR3D and standard AIDR3D, and between strong AIDR3D and FBP, were all statistically significant (P<0.05 or 0.001). The mean effective radiation dose was (2.55±1.21) mSv. Compared with standard AIDR3D and FBP, CTCA
Directory of Open Access Journals (Sweden)
Nan Zhao
2014-05-01
Full Text Available Single nucleotide polymorphisms (SNPs) are among the most common types of genetic variation in complex genetic disorders. A growing number of studies link the functional role of SNPs with the networks and pathways mediated by the disease-associated genes. For example, many non-synonymous missense SNPs (nsSNPs) have been found near or inside the protein-protein interaction (PPI) interfaces. Determining whether such an nsSNP will disrupt or preserve a PPI is a challenging task to address, both experimentally and computationally. Here, we present this task as three related classification problems, and develop a new computational method, called the SNP-IN tool (non-synonymous SNP INteraction effect predictor). Our method predicts the effects of nsSNPs on PPIs, given the interaction's structure. It leverages supervised and semi-supervised feature-based classifiers, including our new Random Forest self-learning protocol. The classifiers are trained based on a dataset of comprehensive mutagenesis studies for 151 PPI complexes, with experimentally determined binding affinities of the mutant and wild-type interactions. Three classification problems were considered: (1) a 2-class problem (strengthening/weakening PPI mutations), (2) another 2-class problem (mutations that disrupt/preserve a PPI), and (3) a 3-class classification (detrimental/neutral/beneficial mutation effects). In total, 11 different supervised and semi-supervised classifiers were trained and assessed, resulting in promising performance, with the weighted f-measure ranging from 0.87 for Problem 1 to 0.70 for the most challenging Problem 3. By integrating the prediction results of the 2-class classifiers into the 3-class classifier, we further improved its performance for Problem 3. To demonstrate the utility of the SNP-IN tool, it was applied to study the nsSNP-induced rewiring of two disease-centered networks. The accurate and balanced performance of the SNP-IN tool makes it readily available to study the
Collins, W. D.; Wehner, M. F.; Prabhat, M.; Kurth, T.; Satish, N.; Mitliagkas, I.; Zhang, J.; Racah, E.; Patwary, M.; Sundaram, N.; Dubey, P.
2017-12-01
Anthropogenically-forced climate changes in the number and character of extreme storms have the potential to significantly impact human and natural systems. Current high-performance computing enables multidecadal simulations with global climate models at resolutions of 25 km or finer. Such high-resolution simulations are demonstrably superior in simulating extreme storms such as tropical cyclones than the coarser simulations available in the Coupled Model Intercomparison Project (CMIP5) and provide the capability to more credibly project future changes in extreme storm statistics and properties. The identification and tracking of storms in the voluminous model output is very challenging, as it is impractical to manually identify storms due to the enormous size of the datasets, and therefore automated procedures are used. Traditionally, these procedures rely on a multi-variate set of physical conditions based on known properties of the class of storms in question. In recent years, we have successfully demonstrated that Deep Learning produces state-of-the-art results for pattern detection in climate data. We have developed supervised and semi-supervised convolutional architectures for detecting and localizing tropical cyclones, extra-tropical cyclones and atmospheric rivers in simulation data. One of the primary challenges in the applicability of Deep Learning to climate data is the expensive training phase. Typical networks may take days to converge on 10 GB-sized datasets, while the climate science community has ready access to O(10 TB)-O(PB) sized datasets. In this work, we present the most scalable implementation of Deep Learning to date. We successfully scale a unified, semi-supervised convolutional architecture on all of the Cori Phase II supercomputer at NERSC. We use the IntelCaffe, MKL and MLSL libraries. We have optimized single-node MKL libraries to obtain 1-4 TF on single KNL nodes. We have developed a novel hybrid parameter update strategy to improve
Semi-Supervised Learning of Lift Optimization of Multi-Element Three-Segment Variable Camber Airfoil
Kaul, Upender K.; Nguyen, Nhan T.
2017-01-01
This chapter describes a new intelligent platform for learning optimal designs of morphing wings based on Variable Camber Continuous Trailing Edge Flaps (VCCTEF) in conjunction with a leading-edge flap called the Variable Camber Krueger (VCK). The new platform consists of a Computational Fluid Dynamics (CFD) methodology coupled with a semi-supervised learning methodology. The CFD component of the intelligent platform comprises a full Navier-Stokes solution capability (the NASA OVERFLOW solver with the Spalart-Allmaras turbulence model) that computes flow over a tri-element inboard NASA Generic Transport Model (GTM) wing section. Various VCCTEF/VCK settings and configurations were considered to explore optimal designs for high-lift flight during take-off and landing. Determining the globally optimal design of such a system requires an extremely large set of CFD simulations, which is not feasible to achieve in practice. To alleviate this problem, recourse was taken to a semi-supervised learning (SSL) methodology based on manifold regularization techniques. A reasonable space of CFD solutions was populated, and the SSL methodology was then used to fit this manifold in its entirety, including the gaps in the manifold where no CFD solutions were available. The SSL methodology, in conjunction with an elastodynamic solver (FiDDLE), was demonstrated in an earlier study involving structural health monitoring. These CFD-SSL methodologies define the new intelligent platform that forms the basis for our search for optimal wing designs. Although the present platform can be used in various other design and operational problems in engineering, this chapter focuses on the high-lift study of the VCK-VCCTEF system. The top few candidate design configurations were identified by solving the CFD problem in a small subset of the design space. The SSL component was trained on the design space and was then used in a predictive mode to populate a selected set of test points outside
Zhao, Nan; Han, Jing Ginger; Shyu, Chi-Ren; Korkin, Dmitry
2014-01-01
Single nucleotide polymorphisms (SNPs) are among the most common types of genetic variation in complex genetic disorders. A growing number of studies link the functional role of SNPs with the networks and pathways mediated by the disease-associated genes. For example, many non-synonymous missense SNPs (nsSNPs) have been found near or inside the protein-protein interaction (PPI) interfaces. Determining whether such nsSNP will disrupt or preserve a PPI is a challenging task to address, both experimentally and computationally. Here, we present this task as three related classification problems, and develop a new computational method, called the SNP-IN tool (non-synonymous SNP INteraction effect predictor). Our method predicts the effects of nsSNPs on PPIs, given the interaction's structure. It leverages supervised and semi-supervised feature-based classifiers, including our new Random Forest self-learning protocol. The classifiers are trained based on a dataset of comprehensive mutagenesis studies for 151 PPI complexes, with experimentally determined binding affinities of the mutant and wild-type interactions. Three classification problems were considered: (1) a 2-class problem (strengthening/weakening PPI mutations), (2) another 2-class problem (mutations that disrupt/preserve a PPI), and (3) a 3-class classification (detrimental/neutral/beneficial mutation effects). In total, 11 different supervised and semi-supervised classifiers were trained and assessed resulting in a promising performance, with the weighted f-measure ranging from 0.87 for Problem 1 to 0.70 for the most challenging Problem 3. By integrating prediction results of the 2-class classifiers into the 3-class classifier, we further improved its performance for Problem 3. To demonstrate the utility of SNP-IN tool, it was applied to study the nsSNP-induced rewiring of two disease-centered networks. The accurate and balanced performance of SNP-IN tool makes it readily available to study the rewiring of
Directory of Open Access Journals (Sweden)
Shi He
2015-01-01
Full Text Available A Bayesian hierarchical model is presented to classify very high resolution (VHR) images in a semi-supervised manner, in which a maximum entropy discrimination latent Dirichlet allocation (MedLDA) model and a bilateral filter are combined into a novel application framework. The primary contribution of this paper is to nullify the disadvantages of traditional probabilistic topic models regarding pixel-level supervised information and to achieve effective classification of VHR remote sensing images. The framework consists of the following two iterative steps. In the training stage, the model uses the central labeled pixel and its neighborhood as a squared labeled image object to train the classifiers. In the classification stage, each central unlabeled pixel with its neighborhood, as an unlabeled object, is assigned the user-provided geo-object class label with the maximum posterior probability. Gibbs sampling is adopted for model inference. The experimental results demonstrate that the proposed method outperforms two classical SVM-based supervised classification methods and probabilistic-topic-model-based classification methods.
Lu, Shen; Xia, Yong; Cai, Tom Weidong; Feng, David Dagan
2015-01-01
Dementia, and Alzheimer's disease (AD) in particular, is a global problem and a big threat to the aging population. An image-based computer-aided dementia diagnosis method is needed to provide doctors with help during medical image examination. Many machine-learning-based dementia classification methods using medical imaging have been proposed, and most of them achieve accurate results. However, most of these methods make use of supervised learning requiring a fully labeled image dataset, which usually is not practical in a real clinical environment. Using a large amount of unlabeled images can improve dementia classification performance. In this study we propose a new semi-supervised dementia classification method based on random manifold learning with affinity regularization. Three groups of spatial features are extracted from positron emission tomography (PET) images to construct an unsupervised random forest, which is then used to regularize the manifold learning objective function. The proposed method, the state-of-the-art Laplacian support vector machine (LapSVM) and a supervised SVM are applied to classify AD patients and normal controls (NC). The experimental results show that learning with unlabeled images indeed improves the classification performance, and our method outperforms LapSVM on the same dataset.
Chai, Hua; Li, Zi-Na; Meng, De-Yu; Xia, Liang-Yong; Liang, Yong
2017-10-12
Gene selection is an attractive and important task in cancer survival analysis. Most existing supervised learning methods can only use the labeled biological data, while the censored data (weakly labeled data), which far outnumber the labeled data, are ignored in model building. To utilize the information in the censored data, a semi-supervised learning framework (the Cox-AFT model) combining the Cox proportional hazards (Cox) and accelerated failure time (AFT) models has been used in cancer research, with better performance than the single Cox or AFT model. This method, however, is easily affected by noise. To alleviate this problem, in this paper we combine the Cox-AFT model with the self-paced learning (SPL) method to employ the information in the censored data more effectively in a self-learning way. SPL is a reliable and stable learning mechanism, recently proposed to simulate the human learning process, which helps the AFT model automatically identify and include samples of high confidence in training, minimizing interference from high noise. Utilizing the SPL method produces two direct advantages: (1) the utilization of censored data is further promoted; (2) the noise delivered to the model is greatly decreased. The experimental results demonstrate the effectiveness of the proposed model compared to the traditional Cox-AFT model.
Doostparast Torshizi, Abolfazl; Petzold, Linda R
2018-01-01
Data integration methods that combine data from different molecular levels such as genome, epigenome, transcriptome, etc., have received a great deal of interest in the past few years. It has been demonstrated that the synergistic effects of different biological data types can boost learning capabilities and lead to a better understanding of the underlying interactions among molecular levels. In this paper we present a graph-based semi-supervised classification algorithm that incorporates latent biological knowledge in the form of biological pathways with gene expression and DNA methylation data. The process of graph construction from biological pathways is based on detecting condition-responsive genes, where 3 sets of genes are finally extracted: all condition responsive genes, high-frequency condition-responsive genes, and P-value-filtered genes. The proposed approach is applied to ovarian cancer data downloaded from the Human Genome Atlas. Extensive numerical experiments demonstrate superior performance of the proposed approach compared to other state-of-the-art algorithms, including the latest graph-based classification techniques. Simulation results demonstrate that integrating various data types enhances classification performance and leads to a better understanding of interrelations between diverse omics data types. The proposed approach outperforms many of the state-of-the-art data integration algorithms.
International Nuclear Information System (INIS)
Kang, Jeong Hyun; Kim, Young Chul; Kim, Hyunki; Kim, Young Wan; Hur, Hyuk; Kim, Jin Soo; Min, Byung Soh; Kim, Hogeun; Lim, Joon Seok; Seong, Jinsil; Keum, Ki Chang; Kim, Nam Kyu
2010-01-01
Purpose: The aim of this study was to determine the correlation between tumor volume changes assessed by three-dimensional (3D) magnetic resonance (MR) volumetry and the histopathologic tumor response in rectal cancer patients undergoing preoperative chemoradiation therapy (CRT). Methods and Materials: A total of 84 patients who underwent preoperative CRT followed by radical surgery were prospectively enrolled in the study. The post-treatment tumor volume and tumor volume reduction ratio (% decrease ratio), as shown by 3D MR volumetry, were compared with the histopathologic response, as shown by T and N downstaging and the tumor regression grade (TRG). Results: There were no significant differences in the post-treatment tumor volume and the volume reduction ratio shown by 3D MR volumetry with respect to T and N downstaging and the tumor regression grade. In a multivariate analysis, the tumor volume reduction ratio was not significantly associated with T and N downstaging. The volume reduction ratio (>75%, p = 0.01) and the pretreatment carcinoembryonic antigen level (≤3 ng/ml, p = 0.01), but not the post-treatment volume shown by 3D MR (≤5 ml), were, however, significantly associated with an increased pathologic complete response rate. Conclusion: A tumor volume reduction ratio of more than 75% was significantly associated with a high pathologic complete response rate. Therefore, limited treatment options such as local excision or simple observation might be considered after preoperative CRT in this patient population.
Hasei, Tomohiro; Nakanishi, Haruka; Toda, Yumiko; Watanabe, Tetsushi
2012-08-31
3-Nitrobenzanthrone (3-NBA) is an extremely strong mutagen and a carcinogen inducing squamous cell carcinoma and adenocarcinoma in rats. We developed a new sensitive analytical method, a two-dimensional HPLC system coupled with on-line reduction, to quantify non-fluorescent 3-NBA as fluorescent 3-aminobenzanthrone (3-ABA). The two-dimensional HPLC system consisted of reversed-phase HPLC and normal-phase HPLC, connected with a switch valve. 3-NBA was purified by reversed-phase HPLC and reduced to 3-ABA in ethanol with a catalyst column packed with alumina coated with platinum. An alcoholic solvent is necessary for the reduction of 3-NBA, but 3-ABA is not fluorescent in an alcoholic solvent. Therefore, 3-ABA was separated from alcohol and impurities by normal-phase HPLC and detected with a fluorescence detector. Extracts from surface soil, airborne particles, size-classified airborne particles, and incinerator dust were applied to the two-dimensional HPLC system after clean-up with a silica gel column. 3-NBA, detected as 3-ABA, was found in the extracts as a single peak on the chromatograms without any interfering peaks. 3-NBA was detected in 4 incinerator dust samples (n=5). When size-classified airborne particles in the 7.0 μm size range were applied to the two-dimensional HPLC system after purification using a silica gel column, 3-NBA was detected in those particles. This is the first report of the size distribution of 3-NBA in airborne particles and of the detection of 3-NBA in incinerator dust.
Manifold Based Low-rank Regularization for Image Restoration and Semi-supervised Learning
Lai, Rongjie; Li, Jia
2017-01-01
Low-rank structures play an important role in recent advances on many problems in image science and data science. As a natural extension of low-rank structures to data with nonlinear structures, the concept of a low-dimensional manifold structure has been considered in many data processing problems. Inspired by this concept, we consider a manifold-based low-rank regularization as a linear approximation of manifold dimension. This regularization is less restrictive than global low-rank regularization.
Buffoni, Boris; Groves, Mark D.; Wahlén, Erik
2018-06-01
Fully localised solitary waves are travelling-wave solutions of the three-dimensional gravity-capillary water wave problem which decay to zero in every horizontal spatial direction. Their existence has been predicted on the basis of numerical simulations and model equations (in which context they are usually referred to as `lumps'), and a mathematically rigorous existence theory for strong surface tension (Bond number β greater than 1/3) has recently been given. In this article we present an existence theory for the physically more realistic case 0 < β < 1/3; a critical point of the reduced functional is found by minimising it over its natural constraint set.
Onoda, M
2003-01-01
The structural and electronic properties of the Li1+xV3O8 insertion electrode are described. For x ≲ 0.1 with nearly stoichiometric oxygen atoms, small polarons exist without carrier-creation energy at high temperatures, while at low temperatures the conduction may be of variable-range hopping (VRH) type. For x > 0.2, one-dimensional magnetic properties appear due to sizable exchange couplings, and order-disorder effects of additional Li ions may lead to significant changes in transport properties. For the intermediate composition 0 < x ≲ 0.1, strong randomness of the Li doping and the congenital oxygen deficiency cause VRH states even at high temperatures.
Zhang, Xiaotian; Yin, Jian; Zhang, Xu
2018-03-02
Increasing evidence suggests that dysregulation of microRNAs (miRNAs) may lead to a variety of diseases; therefore, identifying disease-related miRNAs is a crucial problem. Many computational approaches have been proposed to predict binary miRNA-disease associations. In this study, in order to predict the underlying miRNA-disease association types, a semi-supervised model, the network-based label propagation algorithm for multiple types of miRNA-disease associations (NLPMMDA), is proposed, using mutual information derived from a heterogeneous network. The NLPMMDA method integrates disease semantic similarity, miRNA functional similarity, and Gaussian interaction profile kernel similarity information of miRNAs and diseases to construct the heterogeneous network. NLPMMDA is a semi-supervised model that does not require verified negative samples. Leave-one-out cross-validation (LOOCV) was implemented for four known types of miRNA-disease associations and demonstrated the reliable performance of our method. Moreover, case studies of lung cancer and breast cancer confirmed the effectiveness of NLPMMDA in predicting novel miRNA-disease associations and their association types.
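The label-propagation step at the core of such methods can be sketched as follows; the toy similarity graph, the seed labels, and the damping parameter alpha are illustrative choices of ours, not the NLPMMDA specifics.

```python
import numpy as np

def label_propagation(W, Y, alpha=0.8, n_iter=200):
    """W: symmetric similarity matrix; Y: one-hot seed labels (zero rows for unlabeled nodes)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt          # symmetrically normalized graph
    F = Y.astype(float).copy()
    for _ in range(n_iter):                  # iterate F <- alpha*S*F + (1-alpha)*Y
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F.argmax(axis=1)                  # predicted class per node

# Toy graph: two triangles joined by one bridge edge, one labeled seed per cluster.
W = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
Y = np.zeros((6, 2))
Y[0, 0] = 1.0   # node 0 seeded with class 0
Y[5, 1] = 1.0   # node 5 seeded with class 1
print(label_propagation(W, Y))
```

Because the iteration is a contraction for alpha < 1, the labels converge; each unlabeled node inherits the class whose seed is best connected to it through the graph.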
International Nuclear Information System (INIS)
Ibrahim, R. S.; El-Kalaawy, O. H.
2006-01-01
The relativistic nonlinear self-consistent equations for a collisionless cold plasma with stationary ions [R. S. Ibrahim, IMA J. Appl. Math. 68, 523 (2003)] are extended to 3 and 3+1 dimensions. The resulting system of equations is reduced to the sine-Poisson equation. The truncated Painlevé expansion and reduction of the partial differential equation to a quadrature problem (RQ method) are described and applied to obtain the traveling wave solutions of the sine-Poisson equation for stationary and nonstationary equations in 3 and 3+1 dimensions describing the charge-density equilibrium configuration model.
International Nuclear Information System (INIS)
Yuan, Lizhi; Jiang, Luhua; Liu, Jing; Xia, Zhangxun; Wang, Suli; Sun, Gongquan
2014-01-01
Graphical abstract: - Highlights: • Ag nanoparticles were prepared using GO as the reductant without any stabilizers. • A composite support with a 3D structure was constructed from GO and carbon black. • The Ag/GO/C composite shows enhanced ORR activity compared with Ag/GO. - Abstract: A 3D graphene oxide/carbon sphere supported silver composite (Ag/GO/C) was synthesized using graphene oxide as the reducing agent. The reduction of Ag+ was monitored by ultraviolet-visible (UV-vis) absorption spectrometry, and the physical properties of the Ag/GO/C composite were characterized by Fourier transform infrared spectroscopy (FTIR), transmission electron microscopy (TEM), X-ray diffraction (XRD), and X-ray photoelectron spectroscopy (XPS). The results demonstrated that dispersive Ag nanoparticles are anchored uniformly on the surface of the GO sheets with a mean size of about 6.9 nm. Upon introducing carbon black, the Ag nanoparticles aggregated slightly. Compared with its counterpart Ag/GO, the Ag/GO/C composite showed a significantly enhanced activity towards the oxygen reduction reaction in alkaline media. The enhancement can be ascribed to the 3D composite support, which not only improves the electrical conductivity but also enhances the mass transport in the catalyst layer, facilitating the reactants' access to the active sites. Moreover, the Ag/GO/C composite exhibits good tolerance to alcohols, carbonates, and tetramethylammonium hydroxide. This work is expected to open a new pathway to using GO as a reducing agent to synthesize electrocatalysts without surfactants.
Manifold regularized multitask learning for semi-supervised multilabel image classification.
Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J
2013-02-01
It is a significant challenge to classify images with multiple labels using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features, because manifold regularization alone is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.
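The manifold-regularization idea described above — penalizing predictions that vary sharply across the data manifold — can be sketched for a single linear task as follows; the chain-graph construction, the ridge term, and all names and parameters are our illustrative assumptions, not the MRMTL formulation.

```python
import numpy as np

def laplacian(W):
    """Unnormalized graph Laplacian L = D - W of a similarity matrix W."""
    return np.diag(W.sum(axis=1)) - W

def manifold_ridge(X, y, W, lam=0.1, gamma=0.01):
    """Closed-form minimizer of ||Xw - y||^2 + gamma*||w||^2 + lam*(Xw)^T L (Xw)."""
    L = laplacian(W)
    A = X.T @ X + gamma * np.eye(X.shape[1]) + lam * X.T @ L @ X
    return np.linalg.solve(A, X.T @ y)

# Three samples connected in a chain; the Laplacian term rewards predictions
# that vary smoothly along the chain (i.e. along the data manifold).
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])   # bias column + one feature
y = np.array([0.0, 1.0, 2.0])
W = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
w = manifold_ridge(X, y, W)
print(w)
```

With `lam=0` and `gamma=0` the solver reduces to ordinary least squares, which is a convenient sanity check on the closed form.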
Hashimoto, Masayuki; Nagatani, Yukihiro; Oshio, Yasuhiko; Nitta, Norihisa; Yamashiro, Tsuneo; Tsukagoshi, Shinsuke; Ushio, Noritoshi; Mayumi, Masayuki; Kimoto, Tatsuya; Igarashi, Tomoyuki; Yoshigoe, Makoto; Iwai, Kyohei; Tanaka, Koki; Sato, Shigetaka; Sonoda, Akinaga; Otani, Hideji; Murata, Kiyoshi; Hanaoka, Jun
2018-01-01
To assess the feasibility of four-dimensional ultra-low-dose computed tomography (4D-ULDCT) for distinguishing pleural aspects with localized pleural adhesion (LPA) from those without, twenty-seven patients underwent 4D-ULDCT during a single respiration with a 16-cm coverage of the body axis. The presence and severity of LPA were confirmed by intraoperative thoracoscopic findings. A point on the pleura and a corresponding point on the outer edge of the costal bone were placed in identical axial planes at end-inspiration. The distance between the two points (PCD), traced by automatic tracking functions, was calculated at each respiratory phase. The maximal and average change amounts in PCD (PCD-MCA and PCD-ACA) were compared among 110 measurement points (MPs) without LPA, 16 MPs with mild LPA, and 10 MPs with severe LPA in the upper lung field cranial to the bronchial bifurcation (ULF), and 150 MPs without LPA, 17 MPs with mild LPA, and 9 MPs with severe LPA in the lower lung field caudal to the bronchial bifurcation (LLF), using the Mann-Whitney U test. In the LLF, PCD-ACA as well as PCD-MCA demonstrated a significant difference among non-LPA, mild LPA, and severe LPA (18.1±9.2, 12.3±6.2, and 5.0±3.3 mm) (p<0.05). Also in the ULF, PCD-ACA showed a significant difference among the three conditions (9.2±5.5, 5.7±2.8, and 2.2±0.4 mm, respectively) (p<0.05), whereas PCD-MCA for mild LPA was similar to that for non-LPA (12.3±5.9 and 17.5±11.0 mm). 4D-ULDCT could be a useful non-invasive preoperative assessment modality for detecting the presence and severity of LPA.
Yoon, Ki Ro; Choi, Jinho; Cho, Su-Ho; Jung, Ji-Won; Kim, Chanhoon; Cheong, Jun Young; Kim, Il-Doo
2018-03-01
Efficient electrocatalysts for the oxygen reduction reaction (ORR) are an essential component for the stable operation of various sustainable energy conversion and storage systems such as fuel cells and metal-air batteries. Herein, we report a facile preparation of meso/macroporous Co and N co-doped carbon nanofibers (Co-Nx@CNFs) as a high-performance and cost-effective electrocatalyst for the ORR. Co-Nx@CNFs are simply obtained by electrospinning of a Co precursor and bicomponent polymers (PVP/PAN) followed by temperature-controlled carbonization and a further activation step. The Co-Nx@CNF catalyst carbonized at 700 °C (Co-Nx@CNF700) shows outstanding ORR performance, i.e., an onset potential of 0.941 V and a half-wave potential of 0.814 V with an almost four-electron transfer pathway (n = 3.9). In addition, Co-Nx@CNF700 exhibits superior methanol tolerance and higher stability (>70 h) in a Zn-air battery in comparison with a Pt/C catalyst (∼30 h). The outstanding performance of the Co-Nx@CNF700 catalyst is attributed to (i) an enlarged surface area with bimodal porosity achieved by leaching of inactive species, (ii) an increase of exposed ORR-active Co-Nx moieties and graphitic edge sites, and (iii) enhanced electrical conductivity and corrosion resistance due to the numerous graphitic flakes in the carbon matrix.
Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments
Han, Wenjing; Coutinho, Eduardo; Li, Haifeng; Schuller, Björn; Yu, Xiaojie; Zhu, Xuan
2016-01-01
Coping with scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data that is often scarce and leads to models that do not generalize well. In this paper, we make an efficient combination of confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation in sound classification model training. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores; it then delivers the candidates with lower scores to human annotators, while those with high scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that our approach requires significantly fewer labeled instances to reach the same performance in both scenarios compared to Passive Learning, Active Learning, and Self-Training. A reduction of 52.2% in human-labeled instances is achieved in both the pool-based and stream-based scenarios on a sound classification task considering 16,930 sound instances.
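The confidence split at the heart of this combination of Active Learning and Self-Training can be sketched as follows; the probability threshold and function name are illustrative choices of ours, not the paper's.

```python
def split_by_confidence(probs, threshold=0.9):
    """probs: per-instance maximum class probabilities from the classifier.
    Returns (auto_label_idx, ask_human_idx)."""
    auto, ask = [], []
    for i, p in enumerate(probs):
        # High-confidence predictions are machine-labeled (self-training);
        # low-confidence ones are routed to human annotators (active learning).
        (auto if p >= threshold else ask).append(i)
    return auto, ask

probs = [0.97, 0.55, 0.92, 0.61, 0.99]
auto, ask = split_by_confidence(probs)
print(auto, ask)
```

In a pool-based setting this split is computed once per batch over the whole pool; in a stream-based setting it is applied to each arriving instance.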
Yao, Chen; Zhu, Xiaojin; Weigel, Kent A
2016-11-07
Genomic prediction for novel traits, which can be costly and labor-intensive to measure, is often hampered by low accuracy due to the limited size of the reference population. As an option to improve prediction accuracy, we introduced a semi-supervised learning strategy known as the self-training model and applied this method to genomic prediction of residual feed intake (RFI) in dairy cattle. We describe a self-training model that is wrapped around a support vector machine (SVM) algorithm, which enables it to use data from animals with and without measured phenotypes. Initially, an SVM model was trained using data from 792 animals with measured RFI phenotypes. Then, the resulting SVM was used to generate self-trained phenotypes for 3000 animals for which RFI measurements were not available. Finally, the SVM model was re-trained using data from up to 3792 animals, including those with measured and self-trained RFI phenotypes. Incorporation of additional animals with self-trained phenotypes enhanced the accuracy of genomic predictions compared to that of predictions derived from the subset of animals with measured phenotypes. The optimal ratio of animals with self-trained phenotypes to animals with measured phenotypes (2.5, 2.0, and 1.8) and the maximum increase achieved in prediction accuracy measured as the correlation between predicted and actual RFI phenotypes (5.9, 4.1, and 2.4%) decreased as the size of the initial training set (300, 400, and 500 animals with measured phenotypes) increased. The optimal number of animals with self-trained phenotypes may be smaller when prediction accuracy is measured as the mean squared error rather than the correlation between predicted and actual RFI phenotypes. Our results demonstrate that semi-supervised learning models that incorporate self-trained phenotypes can achieve genomic prediction accuracies comparable to those obtained with models using larger training sets that include only animals with measured phenotypes.
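The three-stage self-training loop described above can be sketched as follows. The paper wraps an SVM; here a one-nearest-neighbour predictor stands in so the example stays dependency-free, and all names and data are illustrative.

```python
def knn1_predict(train_X, train_y, x):
    """Return the label of the single nearest training point (1-D features)."""
    _, label = min(zip(train_X, train_y), key=lambda pair: abs(pair[0][0] - x[0]))
    return label

def self_train(labeled, unlabeled):
    """1) fit on measured data, 2) pseudo-label the unlabeled animals,
    3) return the enlarged training set of measured + self-trained labels."""
    X, y = zip(*labeled)
    pseudo = [(x, knn1_predict(X, y, x)) for x in unlabeled]
    return list(labeled) + pseudo

labeled = [([0.0], -1.0), ([1.0], 1.0)]     # (genotype feature, measured phenotype)
unlabeled = [[0.1], [0.9]]                  # animals without measured phenotypes
print(self_train(labeled, unlabeled))
```

In the study's terms, the final model is then re-trained on this enlarged set; the result above simply shows each unlabeled animal paired with its self-trained phenotype.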
International Nuclear Information System (INIS)
Karuppasamy, L.; Chen, C.Y.; Anandan, S.; Wu, J.J.
2017-01-01
Highlights: • Synthesis of a new class of Au nanocrystals enclosed by high-index surfaces, supported on MoO3 nanorods, by an ultrasonic probe method. • The supporting material reduces the loading of Au and acts as a co-catalyst. • The as-prepared electrocatalyst exhibits enhanced catalytic activity and stability towards both EOR and ORR. -- Abstract: The design of highly active electrocatalysts for ethanol electrooxidation (EOR) and the oxygen reduction reaction (ORR) is of great significance for the improvement of efficient direct ethanol fuel cells (DEFCs). Creating high-index-facet nanocrystals with abundant catalytically active sites of stepped atoms is an effective way to enhance electrocatalytic performance. In this article, we prepared high-index surface structures of Au nanocrystals supported on one-dimensional (1-D) MoO3 nanorods using a two-step ultrasonic probe irradiation method. The size and physical properties of the as-prepared electrocatalysts were studied using field emission scanning electron microscopy (FE-SEM), high-resolution transmission electron microscopy (HR-TEM), and X-ray diffraction (XRD). The catalytic activity of the as-prepared Au-MoO3 electrocatalyst was determined using cyclic voltammetry (CV), chronoamperometry (CA), electrochemical impedance spectroscopy (EIS), CO-stripping, and linear sweep voltammetry with a rotating disk electrode (LSV-RDE). As a consequence, the Au-MoO3 nanocomposite can be considered an effective electrocatalyst for both the ethanol oxidation and oxygen reduction reactions.
Fatehi, Moslem; Asadi, Hooshang H.
2017-04-01
In this study, the application of a transductive support vector machine (TSVM), an innovative semi-supervised learning algorithm, is proposed for mapping potential drill targets at a detailed exploration stage. Semi-supervised learning is a hybrid of supervised and unsupervised learning that simultaneously uses both training and non-training data to design a classifier. Using the TSVM algorithm, exploration layers at the Dalli porphyry Cu-Au deposit in central Iran were integrated to locate the boundary of the Cu-Au mineralization for further drilling. By applying this algorithm to the non-training (unlabeled) and limited training (labeled) Dalli exploration data, the study area was classified into two domains of Cu-Au ore and waste. The results were then validated against the earlier block models created using the available borehole and trench data. In addition to TSVM, the support vector machine (SVM) algorithm was also applied to the study area for comparison. Thirty percent of the labeled exploration data was used to evaluate the performance of the two algorithms. The results revealed 87 percent correct recognition accuracy for the TSVM algorithm and 82 percent for the SVM algorithm. The deepest inclined borehole, recently drilled in the western part of the Dalli deposit, indicated that the boundary of Cu-Au mineralization identified by the TSVM algorithm was only 15 m from the actual boundary intersected by this borehole. Based on the results of the TSVM algorithm, six new boreholes were suggested for further drilling at the Dalli deposit. This study showed that the TSVM algorithm could be a useful tool for delineating mineralization zones and, consequently, for more accurate drill hole planning.
Exploring Dimensionality Reduction for Text Mining
2007-05-04
The result is v: v = √(K2 · 1) (3.20). 6) Define K3 to be the element-by-element division of K2 by the product of v and its transpose: K3_{i,j} = K2_{i,j} / (v · v^T)_{i,j} (3.21). 7) Compute the singular value decomposition of K3 to get U, D, and V as specified in Equation 3.5. 8) The output points can then be obtained from this decomposition.
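Assuming K2 is a symmetric kernel matrix, the normalization-then-SVD steps above can be sketched as follows; the variable names follow the text, while the function name and data are illustrative.

```python
import numpy as np

def reduce_kernel(K2, k=2):
    v = np.sqrt(K2 @ np.ones(K2.shape[0]))   # v = sqrt(K2 . 1)
    K3 = K2 / np.outer(v, v)                 # K3_ij = K2_ij / (v v^T)_ij
    U, D, Vt = np.linalg.svd(K3)             # singular value decomposition of K3
    return U[:, :k] * D[:k]                  # k-dimensional output points

K2 = np.array([[2.0, 1.0], [1.0, 2.0]])      # illustrative symmetric kernel
print(reduce_kernel(K2, k=2).shape)
```

Dividing K2 by the outer product of v with itself is a standard symmetric normalization, after which the leading singular vectors (scaled by their singular values) serve as low-dimensional coordinates.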
Two-color QCD via dimensional reduction
Czech Academy of Sciences Publication Activity Database
Zhang, T.; Brauner, Tomáš; Kurkela, A.; Vuorinen, A.
2012-01-01
Vol. 2012, No. 139 (2012), pp. 1-16. ISSN 1126-6708. Institutional support: RVO:61389005. Keywords: thermal field theory; QCD; confinement. Subject RIV: BE - Theoretical Physics. Impact factor: 5.618 (2012).
Deep Belief Networks for dimensionality reduction
Noulas, A.K.; Kröse, B.J.A.
2008-01-01
Deep Belief Networks are probabilistic generative models which are composed by multiple layers of latent stochastic variables. The top two layers have symmetric undirected connections, while the lower layers receive directed top-down connections from the layer above. The current state-of-the-art
International Nuclear Information System (INIS)
Fujiwara, Yasuhiro; Yamaguchi, Isao; Ookoshi, Yusuke; Ootani, Yuriko; Matsuda, Tsuyoshi; Ishimori, Yoshiyuki; Hayashi, Hiroyuki; Miyati, Tosiaki; Kimura, Hirohiko
2007-01-01
The purpose of this study was to decrease vascular artifacts caused by the in-flow effect in three-dimensional inversion recovery prepared fast spoiled gradient recalled acquisition in the steady state (3D IR FSPGR) at 3.0 Tesla. We developed a 3D double IR FSPGR sequence and investigated its signal characteristics. The 3D double IR FSPGR sequence uses two inversion pulses, the first for obtaining tissue contrast and the second, applied at the neck region during the first IR period, for nulling the vascular signal. We optimized the scan parameters based on both phantom and in-vivo studies. As a result, the optimized parameters (first TI=700 ms, second TI=400 ms) produced much lower vessel signal than conventional 3D IR FSPGR over a wide imaging range, while preserving the signal-to-noise ratio (SNR) and gray/white matter contrast. Moreover, the reduced artifact was also confirmed by visual inspection of the images obtained in vivo using these parameters. Thus, 3D double IR FSPGR is a useful sequence for the acquisition of T1-weighted images at 3.0 Tesla. (author)
Energy Technology Data Exchange (ETDEWEB)
Fu, Shaofang; Zhu, Chengzhou; Song, Junhua; Feng, Shuo; Du, Dan; Engelhard, Mark H.; Xiao, Dongdong; Li, Dongsheng; Lin, Yuehe
2017-10-12
The investigation of highly active and cost-efficient electrocatalysts for the oxygen reduction reaction is of great importance to a wide range of clean energy devices, including fuel cells and metal-air batteries. Herein, the simultaneous formation of Co9S8 and N,S-codoped carbon was achieved in a dual-template system. First, Co(OH)2 nanosheets and tetraethyl orthosilicate were utilized to direct the formation of two-dimensional carbon precursors, which were then dispersed into a thiourea solution. After subsequent pyrolysis and template removal, Co9S8 catalysts confined in N,S-codoped porous carbon sheets (Co9S8/NSC) were obtained. Owing to their morphological and compositional advantages as well as synergistic effects, the resulting Co9S8/NSC catalysts with modified doping level and pyrolysis degree exhibit superior ORR catalytic activity and long-term stability compared with the state-of-the-art Pt/C catalyst in alkaline media. Remarkably, the as-prepared carbon composites also show exceptional methanol tolerance, indicating their potential application in fuel cells.
Directory of Open Access Journals (Sweden)
S. Szopa
2005-01-01
The objective of this work was to develop and assess an automatic procedure to generate reduced chemical schemes for the atmospheric photooxidation of volatile organic carbon (VOC) compounds. The procedure is based on (i) the development of a tool for writing fully explicit schemes for VOC oxidation (see the companion paper, Aumont et al., 2005), (ii) the application of several commonly used reduction methods to the fully explicit scheme, and (iii) the assessment of the resulting errors based on direct comparison between the reduced and full schemes. The reference scheme included seventy emitted VOCs chosen to be representative of both anthropogenic and biogenic emissions, and their atmospheric degradation chemistry required more than two million reactions among 350,000 species. Three methods were applied to reduce the size of the reference chemical scheme: (i) the use of operators, based on the redundancy of the reaction sequences involved in VOC oxidation, (ii) the grouping of primary species having similar reactivities into surrogate species, and (iii) the grouping of some secondary products into surrogate species. The number of species in the final reduced scheme is 147, small enough for practical inclusion in current three-dimensional models. Comparisons between the fully explicit and reduced schemes, carried out with a box model for several typical tropospheric conditions, showed that the reduced chemical scheme accurately predicts ozone concentrations and other aspects of oxidant chemistry for both polluted and clean tropospheric conditions.
Supersymmetric dimensional regularization
International Nuclear Information System (INIS)
Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.
1980-01-01
There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) first do all algebra exactly as in D = 4; (2) then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent; such extra rules, needed for superconformal anomalies, are discussed, as are problems associated with renormalizability and higher-order loops.
Robust methods for data reduction
Farcomeni, Alessio
2015-01-01
Robust Methods for Data Reduction gives a non-technical overview of robust data reduction techniques, encouraging the use of these important and useful methods in practical applications. The main areas covered include principal components analysis, sparse principal component analysis, canonical correlation analysis, factor analysis, clustering, double clustering, and discriminant analysis. The first part of the book illustrates how dimension reduction techniques synthesize available information by reducing the dimensionality of the data. The second part focuses on cluster and discriminant analysis.
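As a reminder of the core dimension-reduction step covered in the book's first part, principal components analysis projects centred data onto the directions of largest variance. A minimal sketch (random data for illustration only, not an example from the book):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)                      # centre each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # scores in the reduced space

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))   # 50 samples, 5 features (illustrative data)
Z = pca_reduce(X, 2)           # 50 samples in a 2-dimensional subspace
```

Because the singular values are returned in descending order, the first retained component always carries at least as much variance as the second.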
Wang, Yan; Ma, Guangkai; An, Le; Shi, Feng; Zhang, Pei; Lalush, David S.; Wu, Xi; Pu, Yifei; Zhou, Jiliu; Shen, Dinggang
2017-01-01
Objective To obtain high-quality positron emission tomography (PET) image with low-dose tracer injection, this study attempts to predict the standard-dose PET (S-PET) image from both its low-dose PET (L-PET) counterpart and corresponding magnetic resonance imaging (MRI). Methods It was achieved by patch-based sparse representation (SR), using the training samples with a complete set of MRI, L-PET and S-PET modalities for dictionary construction. However, the number of training samples with complete modalities is often limited. In practice, many samples generally have incomplete modalities (i.e., with one or two missing modalities) that thus cannot be used in the prediction process. In light of this, we develop a semi-supervised tripled dictionary learning (SSTDL) method for S-PET image prediction, which can utilize not only the samples with complete modalities (called complete samples) but also the samples with incomplete modalities (called incomplete samples), to take advantage of the large number of available training samples and thus further improve the prediction performance. Results Validation was done on a real human brain dataset consisting of 18 subjects, and the results show that our method is superior to the SR and other baseline methods. Conclusion This work proposed a new S-PET prediction method, which can significantly improve the PET image quality with low-dose injection. Significance The proposed method is favorable in clinical application since it can decrease the potential radiation risk for patients. PMID:27187939
Shapiro, Lawrence
2018-04-01
Putnam's criticisms of the identity theory attack a straw man. Fodor's criticisms of reduction attack a straw man. Properly interpreted, Nagel offered a conception of reduction that captures everything a physicalist could want. I update Nagel, introducing the idea of overlap, and show why multiple realization poses no challenge to reduction so construed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ficuciello, Fanny; Siciliano, Bruno
2016-07-01
A question that often arises among researchers working on artificial hands and robotic manipulation concerns the real meaning of synergies. Namely, are they a realistic representation of the central nervous system control of manipulation activities at different levels and of the sensory-motor manipulation apparatus of the human being, or do they constitute just a theoretical framework exploiting analytical methods to simplify the representation of grasping and manipulation activities? Apparently, this is not a simple question to answer and, in this regard, many minds from the fields of neuroscience and robotics are addressing the issue [1]. The interest of robotics is definitely oriented towards the adoption of synergies to tackle the control problem of devices with a high number of degrees of freedom (DoFs), which are required to achieve motor and learning skills comparable to those of humans. The synergy concept is useful for innovative underactuated design of anthropomorphic hands [2], while the resulting dimensionality reduction simplifies the control of biomedical devices such as myoelectric hand prostheses [3]. Synergies might also be useful in conjunction with the learning process [4]. This aspect is less explored, since few works on synergy-based learning have been realized in robotics. In learning new tasks through trial-and-error, physical interaction is important. On the other hand, advanced mechanical designs such as tendon-driven actuation, underactuated compliant mechanisms and hyper-redundant/continuum robots might exhibit enhanced capabilities of adapting to changing environments and learning from exploration. In particular, high DoFs and compliance increase the complexity of modelling and control of these devices. An analytical approach to manipulation planning requires a precise model of the object, an accurate description of the task, and an evaluation of the object affordance, which all make the process rather time consuming. The integration of
Indian Academy of Sciences (India)
Dimensional analysis is a useful tool which finds important applications in physics and engineering. It is most effective when there exist a maximal number of dimensionless quantities constructed out of the relevant physical variables. Though a complete theory of dimensional analysis was developed way back in 1914 in a.
Hakky, Tariq S; Martinez, Daniel; Yang, Christopher; Carrion, Rafael E
2015-01-01
Here we present the first video demonstration of reduction corporoplasty in the management of phallic disfigurement in a 17-year-old man with a history of sickle cell disease and priapism. Surgical management of aneurysmal dilation of the corpora has yet to be defined in the literature. We performed bilateral elliptical incisions over the lateral corpora as management of aneurysmal dilation of the corpora to correct phallic disfigurement. The patient tolerated the procedure well and has resolution of his corporal disfigurement. Reduction corporoplasty using bilateral lateral elliptical incisions in the management of aneurysmal dilation of the corpora is a safe and feasible operation in the management of phallic disfigurement.
Directory of Open Access Journals (Sweden)
Han Kyungsook
2010-06-01
Abstract Background Genetic interaction profiles are highly informative and helpful for understanding the functional linkages between genes, and therefore have been extensively exploited for annotating gene functions and dissecting specific pathway structures. However, our understanding is rather limited to the relationship between double concurrent perturbation and various higher-level phenotypic changes, e.g. those in cells, tissues or organs. Modifier screens, such as synthetic genetic arrays (SGA), can help us to understand the phenotype caused by combined gene mutations. Unfortunately, exhaustive tests on all possible combined mutations in any genome are vulnerable to combinatorial explosion and are infeasible either technically or financially. Therefore, an accurate computational approach to predict genetic interaction is highly desirable, and such methods have the potential of alleviating the bottleneck on experiment design. Results In this work, we introduce a computational systems biology approach for the accurate prediction of pairwise synthetic genetic interactions (SGI). First, a high-coverage and high-precision functional gene network (FGN) is constructed by integrating protein-protein interaction (PPI), protein complex and gene expression data; then, a graph-based semi-supervised learning (SSL) classifier is utilized to identify SGI, where the topological properties of protein pairs in the weighted FGN are used as input features of the classifier. We compare the proposed SSL method with the state-of-the-art supervised classifier, the support vector machine (SVM), on a benchmark dataset in S. cerevisiae to validate our method's ability to distinguish synthetic genetic interactions from non-interaction gene pairs. Experimental results show that the proposed method can accurately predict genetic interactions in S. cerevisiae (with a sensitivity of 92% and specificity of 91%). Noticeably, the SSL method is more efficient than SVM, especially for
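Graph-based semi-supervised classification of the kind described can be illustrated with a toy label-propagation sketch (the graph, seed labels, and the local-and-global-consistency style update below are illustrative assumptions, not the paper's FGN-based classifier):

```python
import numpy as np

def label_propagation(W, y, n_iter=100, alpha=0.9):
    """Graph-based semi-supervised label propagation.
    W: symmetric affinity matrix; y: seed labels (+1/-1, 0 = unlabeled).
    Iterates f <- alpha * S f + (1 - alpha) * y on the normalised graph S."""
    W = np.asarray(W, dtype=float)
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))   # symmetric degree normalisation
    f = np.asarray(y, dtype=float)
    y0 = f.copy()
    for _ in range(n_iter):
        f = alpha * S @ f + (1 - alpha) * y0
    return np.sign(f)

# Toy path graph 0-1-2-3 with the two endpoints labeled
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = np.array([1, 0, 0, -1])
preds = label_propagation(W, y)   # unlabeled nodes take their neighbours' label
```

On this symmetric chain the two interior nodes inherit the label of the nearer endpoint.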
Recursions of Symmetry Orbits and Reduction without Reduction
Directory of Open Access Journals (Sweden)
Andrei A. Malykh
2011-04-01
We consider a four-dimensional PDE possessing partner symmetries, mainly on the example of the complex Monge-Ampère equation (CMA). We use simultaneously two pairs of symmetries related by a recursion relation, which are mutually complex conjugate for CMA. For both pairs of partner symmetries, using Lie equations, we introduce explicitly group parameters as additional variables, replacing symmetry characteristics and their complex conjugates by derivatives of the unknown with respect to group parameters. We study the resulting system of six equations in the eight-dimensional space, which includes CMA, four equations of the recursion between partner symmetries and one integrability condition of this system. We use point symmetries of this extended system for performing its symmetry reduction with respect to group parameters, which facilitates solving the extended system. This procedure does not imply a reduction in the number of physical variables, and hence we end up with orbits of non-invariant solutions of CMA, generated by one partner symmetry not used in the reduction. These solutions are determined by six linear equations with constant coefficients in the five-dimensional space which are obtained by a three-dimensional Legendre transformation of the reduced extended system. We present algebraic and exponential examples of such solutions that govern Legendre-transformed Ricci-flat Kähler metrics with no Killing vectors. A similar procedure is briefly outlined for the Husain equation.
Kitchen, Helen J; Saratovsky, Ian; Hayward, Michael A
2010-07-14
Reaction of LaSrMnO(4) with CaH(2) at 420 degrees C yields LaSrMnO(3.67(3)). Raising the temperature to 480 degrees C yields the Mn(II) phase LaSrMnO(3.50(2)). Neutron powder diffraction data show both phases adopt body-centred orthorhombic crystal structures (LaSrMnO(3.67(3)), Immm: a = 3.7256(1) Å, b = 3.8227(1) Å, c = 13.3617(4) Å; LaSrMnO(3.50(2)), Immm: a = 3.7810(1) Å, b = 3.7936(1) Å, c = 13.3974(3) Å) with anion vacancies located within the equatorial MnO(2-x) planes of the materials. Analogous reactivity is observed between LaBaMnO(4) and CaH(2) to yield body-centred tetragonal reduced phases (LaBaMnO(3.53(3)), I4/mmm: a = 3.8872(1) Å, c = 13.6438(2) Å). Low-temperature neutron diffraction and magnetisation data show that LaSrMnO(3.5) and LaBaMnO(3.5) exhibit three-dimensional antiferromagnetic order below 155 K and 135 K respectively. Above these temperatures, they exhibit two-dimensional antiferromagnetic order, with paramagnetic behaviour observed above 480 K in both phases. The origin of the low-dimensional magnetic order and ordering of the anion vacancies in the reduced phases is discussed.
Directory of Open Access Journals (Sweden)
Tariq S. Hakky
2015-04-01
Objective Here we present the first video demonstration of reduction corporoplasty in the management of phallic disfigurement in a 17-year-old man with a history of sickle cell disease and priapism. Introduction Surgical management of aneurysmal dilation of the corpora has yet to be defined in the literature. Materials and Methods We performed bilateral elliptical incisions over the lateral corpora as management of aneurysmal dilation of the corpora to correct phallic disfigurement. Results The patient tolerated the procedure well and has resolution of his corporal disfigurement. Conclusions Reduction corporoplasty using bilateral lateral elliptical incisions in the management of aneurysmal dilation of the corpora is a safe and feasible operation in the management of phallic disfigurement.
International Nuclear Information System (INIS)
Olson, D.E.; Singh, A.K.
1986-01-01
Many safety-related piping systems in nuclear power plants have been oversupported. Since snubbers make up a large percentage of the pipe supports or restraints used in a plant, a plant's snubber population is much larger than required to adequately restrain the piping. This has resulted in operating problems and unnecessary expenses for maintenance and inservice inspections (ISIs) of snubbers. This paper presents an overview of snubber reduction, including: the incentives for removing snubbers, a historical perspective on how piping became oversupported, why it is possible to remove snubbers, and the costs and benefits of doing so
Higher-dimensional Bianchi type-VIh cosmologies
Lorenz-Petzold, D.
1985-09-01
The higher-dimensional perfect fluid equations of a generalization of the (1 + 3)-dimensional Bianchi type-VIh space-time are discussed. Bianchi type-V and Bianchi type-III space-times are also included as special cases. It is shown that the Chodos-Detweiler (1980) mechanism of cosmological dimensional-reduction is possible in these cases.
International Nuclear Information System (INIS)
Hamilton, M.A.
1990-01-01
During a radon gas screening program, elevated levels of radon gas were detected in homes on Mackinac Island, Mich. Six homes on foundations with crawl spaces were selected for a research project aimed at reducing radon gas concentrations, which ranged from 12.9 to 82.3 pCi/l. Using isolation and ventilation techniques, and variations thereof, radon concentrations were reduced to less than 1 pCi/l. This paper reports that these reductions were achieved using 3.5 mil cross laminated or 10 mil high density polyethylene plastic as a barrier without sealing to the foundation or support piers, solid and/or perforated plastic pipe and mechanical fans. Wind turbines were found to be ineffective at reducing concentrations to acceptable levels. Homeowners themselves installed all materials
Extended supersymmetry in four-dimensional Euclidean space
International Nuclear Information System (INIS)
McKeon, D.G.C.; Sherry, T.N.
2000-01-01
Since the generators of the two SU(2) groups which comprise SO(4) are not Hermitian conjugates of each other, the simplest supersymmetry algebra in four-dimensional Euclidean space more closely resembles the N=2 than the N=1 supersymmetry algebra in four-dimensional Minkowski space. An extended supersymmetry algebra in four-dimensional Euclidean space is considered in this paper; its structure resembles that of N=4 supersymmetry in four-dimensional Minkowski space. The relationship of this algebra to the algebra found by dimensionally reducing the N=1 supersymmetry algebra in ten-dimensional Euclidean space to four-dimensional Euclidean space is examined. The dimensional reduction of N=1 super Yang-Mills theory in ten-dimensional Minkowski space to four-dimensional Euclidean space is also considered
International Nuclear Information System (INIS)
Chang, Joe Y.; Zhang Xiaodong; Wang Xiaochun; Kang Yixiu; Riley, Beverly C.; Bilton, Stephen C.; Mohan, Radhe; Komaki, Ritsuko; Cox, James D.
2006-01-01
Purpose: To compare dose-volume histograms (DVH) in patients with non-small-cell lung cancer (NSCLC) treated by photon or proton radiotherapy. Methods and Materials: Dose-volume histograms were compared between photon, including three-dimensional conformal radiation therapy (3D-CRT), intensity-modulated radiation therapy (IMRT), and proton plans at doses of 66 Gy, 87.5 Gy in Stage I (n = 10) and 60-63 Gy, and 74 Gy in Stage III (n = 15). Results: For Stage I, the mean total lung V5, V10, and V20 were 31.8%, 24.6%, and 15.8%, respectively, for photon 3D-CRT with 66 Gy, whereas they were 13.4%, 12.3%, and 10.9%, respectively, with proton with dose escalation to 87.5 cobalt Gray equivalents (CGE) (p = 0.002). For Stage III, the mean total lung V5, V10, and V20 were 54.1%, 46.9%, and 34.8%, respectively, for photon 3D-CRT with 63 Gy, whereas they were 39.7%, 36.6%, and 31.6%, respectively, for proton with dose escalation to 74 CGE (p = 0.002). In all cases, the doses to lung, spinal cord, heart, esophagus, and integral dose were lower with proton therapy even compared with IMRT. Conclusions: Proton treatment appears to reduce dose to normal tissues significantly, even with dose escalation, compared with standard-dose photon therapy, either 3D-CRT or IMRT
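For readers unfamiliar with the DVH metrics quoted above, Vx denotes the percentage of an organ's volume receiving at least x Gy. A hypothetical sketch of the computation (the voxel doses below are invented, not the study's data):

```python
import numpy as np

def v_metric(dose_gy, threshold_gy):
    """Percentage of voxels receiving at least `threshold_gy` Gy."""
    dose_gy = np.asarray(dose_gy, dtype=float)
    return 100.0 * np.count_nonzero(dose_gy >= threshold_gy) / dose_gy.size

# Hypothetical lung dose distribution (one value per voxel, in Gy)
lung_dose = np.array([0.0, 2.0, 6.0, 8.0, 12.0, 15.0, 22.0, 30.0])

v5, v10, v20 = (v_metric(lung_dose, t) for t in (5, 10, 20))
# v5 = 75.0, v10 = 50.0, v20 = 25.0
```

In a real planning system the same thresholding is applied to the full 3-D dose grid restricted to the contoured organ.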
Directory of Open Access Journals (Sweden)
Wei Xiao
2016-01-01
A series of three-dimensional ZnxCd1-xS/reduced graphene oxide (ZnxCd1-xS/RGO) hybrid aerogels was successfully synthesized based on a one-pot hydrothermal approach, and the aerogels were subsequently used as visible-light-driven photocatalysts for photoreduction of Cr(VI) in water. Over 95% of Cr(VI) was photoreduced by the Zn0.5Cd0.5S/RGO aerogel material within 140 min, and such photocatalytic performance was superior to that of the other ZnxCd1-xS/RGO aerogel materials (x≠0.5) and bare Zn0.5Cd0.5S. It was assumed that the enhanced photocatalytic activity of Zn0.5Cd0.5S/RGO aerogel was attributed to its high specific surface area and the preferable synergetic catalytic effect between Zn0.5Cd0.5S and RGO. Besides, Zn0.5Cd0.5S/RGO aerogel materials were robust and durable enough that they could be reused several times with merely limited loss of photocatalytic activity. The chemical composition, phase, structure, and morphology of the Zn0.5Cd0.5S/RGO aerogel material were carefully examined by a number of techniques such as XRD, SEM, TEM, BET, and Raman characterizations. It was found that Zn0.5Cd0.5S/RGO aerogel possessed a hierarchically porous architecture with a specific surface area as high as 260.8 m2 g−1. The Zn0.5Cd0.5S component incorporated in the Zn0.5Cd0.5S/RGO aerogel existed in the form of solid-solution nanoparticles, which were uniformly distributed in the RGO matrix.
DEFF Research Database (Denmark)
Walder, Christian; Henao, Ricardo; Mørup, Morten
We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within-class variances, similar to Fisher discriminant analysis. The second, LS-KPCA, is a hybrid of least squares regression and kernel PCA. The final, LR-KPCA, is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets.
Hidden symmetries in five-dimensional supergravity
International Nuclear Information System (INIS)
Poessel, M.
2003-05-01
This thesis is concerned with the study of hidden symmetries in supergravity, which play an important role in the present picture of supergravity and string theory. Concretely, the appearance of a hidden G2(+2)/SO(4) symmetry is studied in the dimensional reduction of d=5, N=2 supergravity to three dimensions - a parallel model to the more famous E8(+8)/SO(16) case in eleven-dimensional supergravity. Extending previous partial results for the bosonic part, I give a derivation that includes fermionic terms. This sheds new light on the appearance of the local hidden symmetry SO(4) in the reduction, and shows up an unusual feature which follows from an analysis of the R-symmetry associated with N=4 supergravity and of the supersymmetry variations, and which has no parallel in the eleven-dimensional case: the emergence of an additional SO(3) as part of the enhanced local symmetry, invisible in the dimensional reduction of the gravitino, and corresponding to the fact that, of the SO(4) used in the coset model, only the diagonal SO(3) is visible immediately upon dimensional reduction. The uncovering of the hidden symmetries proceeds via the construction of the proper coset gravity in three dimensions, and matching it with the Lagrangian obtained from the reduction. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Hinrichsen, B [Max-Planck-Institute for Solid State Research, Heisenbergstrasse 1, D-70569 Stuttgart (Germany); Dinnebier, R E [Max-Planck-Institute for Solid State Research, Heisenbergstrasse 1, D-70569 Stuttgart (Germany); Rajiv, P [Max-Planck-Institute for Solid State Research, Heisenbergstrasse 1, D-70569 Stuttgart (Germany); Hanfland, M [European Synchrotron Radiation Facility, 6 rue Jules Horowitz, BP220, 38043 Grenoble Cedex (France); Grzechnik, A [Departamento de Fisica de la Materia Condensada, Facultad de Ciencia y Technologia, Universidad del Pais Vasco, Apartado 644, E-48080 Bilbao (Spain); Jansen, M [Max-Planck-Institute for Solid State Research, Heisenbergstrasse 1, D-70569 Stuttgart (Germany)
2006-06-28
Methods have been developed to facilitate the data analysis of multiple two-dimensional powder diffraction images. These include, among others, automatic detection and calibration of Debye-Scherrer ellipses using pattern recognition techniques, and signal filtering employing established statistical procedures like fractile statistics. All algorithms are implemented in the freely available program package Powder3D developed for the evaluation and graphical presentation of large powder diffraction data sets. As a case study, we report the pressure dependence of the crystal structure of iron antimony oxide FeSb2O4 (p ≤ 21 GPa, T = 298 K) using high-resolution angle-dispersive x-ray powder diffraction. FeSb2O4 shows two phase transitions in the measured pressure range. The crystal structures of all modifications consist of frameworks of Fe(2+)O6 octahedra and irregular Sb(3+)O4 polyhedra. At ambient conditions, FeSb2O4 crystallizes in space group P42/mbc (phase I). Between p = 3.2 GPa and 4.1 GPa it exhibits a displacive second-order phase transition to a structure of space group P21/c (phase II, a = 5.7792(4) Å, b = 8.3134(9) Å, c = 8.4545(11) Å, β = 91.879(10)°, at p = 4.2 GPa). A second phase transition occurs between p = 6.4 GPa and 7.4 GPa to a structure of space group P42/m (phase III, a = 7.8498(4) Å, c = 5.7452(5) Å, at p = 10.5 GPa). A nonlinear compression behaviour over the entire pressure range is observed, which can be described by three Vinet equations in the ranges from p = 0.52 GPa to p = 3.12 GPa, p = 4.2 GPa to p = 6.3 GPa and from p = 7.5 GPa to p = 19.8 GPa. The extrapolated bulk moduli of the high-pressure phases were determined to be K0 = 49(2) GPa for phase I, K0 = 27(3) GPa for phase II and K0 = 45(2) GPa for phase III. The crystal structures of all phases are refined against x-ray powder data measured at several pressures between p = 0.52 GPa
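The Vinet equation of state used in such compression fits has a standard closed form, P(V) = 3 K0 (1 - x) x^-2 exp[(3/2)(K0' - 1)(1 - x)] with x = (V/V0)^(1/3). A minimal sketch (the pressure derivative K0' = 4 and the volumes are illustrative assumptions, not the refined values from the study):

```python
import math

def vinet_pressure(v, v0, k0, k0_prime):
    """Vinet equation of state: pressure (GPa) at volume v, given
    zero-pressure volume v0, bulk modulus k0 (GPa), and its pressure
    derivative k0_prime (all dimensionless except k0)."""
    x = (v / v0) ** (1.0 / 3.0)
    return 3.0 * k0 * (1.0 - x) / x**2 * math.exp(1.5 * (k0_prime - 1.0) * (1.0 - x))

# At v = v0 the pressure is zero by construction; compressing the cell
# (v < v0) yields a positive pressure. k0 = 49 GPa echoes the phase I
# bulk modulus quoted above; k0_prime = 4.0 is an assumed value.
p_ambient = vinet_pressure(100.0, 100.0, 49.0, 4.0)
p_compressed = vinet_pressure(90.0, 100.0, 49.0, 4.0)
```

Fitting K0 (and optionally K0') to measured p-V pairs is then an ordinary nonlinear least-squares problem.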
Model reduction of parametrized systems
Ohlberger, Mario; Patera, Anthony; Rozza, Gianluigi; Urban, Karsten
2017-01-01
The special volume offers a global guide to new concepts and approaches concerning the following topics: reduced basis methods, proper orthogonal decomposition, proper generalized decomposition, approximation theory related to model reduction, learning theory and compressed sensing, stochastic and high-dimensional problems, system-theoretic methods, nonlinear model reduction, reduction of coupled problems/multiphysics, optimization and optimal control, state estimation and control, reduced order models and domain decomposition methods, Krylov-subspace and interpolatory methods, and applications to real industrial and complex problems. The book represents the state of the art in the development of reduced order methods. It contains contributions from internationally respected experts, guaranteeing a wide range of expertise and topics. Further, it reflects an important effort, carried out over the last 12 years, to build a growing research community in this field. Though not a textbook, some of the chapters ca...
Infinite dimensional gauge structure of Kaluza-Klein theories II: D>5
International Nuclear Information System (INIS)
Aulakh, C.S.; Sahdev, D.
1985-12-01
We carry out the dimensional reduction of the pure gravity sector of Kaluza-Klein theories without making truncations of any sort. This generalizes our previous result for the 5-dimensional case to 4+d (d>1) dimensions. The effective 4-dimensional action has the structure of an infinite-dimensional gauge theory.
Dimensional degression in AdS(d)
International Nuclear Information System (INIS)
Artsukevich, A. Yu.; Vasiliev, M. A.
2009-01-01
We analyze the pattern of fields in (d+1)-dimensional anti-de Sitter space in terms of those in d-dimensional anti-de Sitter space. The procedure, which is neither dimensional reduction nor dimensional compactification, is called dimensional degression. The analysis is performed group-theoretically for all totally symmetric bosonic and fermionic representations of the anti-de Sitter algebra. The field-theoretical analysis is done for a massive scalar field in AdS(d+d') and massless spin-one-half, spin-one, and spin-two fields in AdS(d+1). The mass spectra of the resulting towers of fields in AdS(d) are found. For the scalar field case, the obtained results extend to the shadow sector those obtained by Metsaev [Nucl. Phys. B, Proc. Suppl. 102, 100 (2001)] by a different method.
Model reduction for circuit simulation
Hinze, Michael; Maten, E Jan W Ter
2011-01-01
Simulation based on mathematical models plays a major role in computer aided design of integrated circuits (ICs). Decreasing structure sizes, increasing packing densities and driving frequencies require the use of refined mathematical models, and to take into account secondary, parasitic effects. This leads to very high dimensional problems which nowadays require simulation times too large for the short time-to-market demands in industry. Modern Model Order Reduction (MOR) techniques present a way out of this dilemma in providing surrogate models which keep the main characteristics of the device.
Cohomological reduction of sigma models
Energy Technology Data Exchange (ETDEWEB)
Candu, Constantin; Mitev, Vladimir; Schomerus, Volker [DESY, Hamburg (Germany). Theory Group; Creutzig, Thomas [North Carolina Univ., Chapel Hill, NC (United States). Dept. of Physics and Astronomy
2010-01-15
This article studies some features of quantum field theories with internal supersymmetry, focusing mainly on 2-dimensional non-linear sigma models which take values in a coset superspace. It is discussed how BRST operators from the target space supersymmetry algebra can be used to identify subsectors which are often simpler than the original model and may allow for an explicit computation of correlation functions. After an extensive discussion of the general reduction scheme, we present a number of interesting examples, including symmetric superspaces G/G^(Z2) and coset superspaces of the form G/G^(Z4). (orig.)
Joint statistics of strongly correlated neurons via dimensionality reduction
International Nuclear Information System (INIS)
Deniz, Taşkın; Rotter, Stefan
2017-01-01
The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input. (paper)
Dimensionality Reduction in Big Data with Nonnegative Matrix Factorization
2017-06-20
Multiplicative Update Rule (MUR), Projected Gradient Methods (PrG), Block Principal Pivoting method (BlP), Fast Active-set-like method (AcS), Fast...16], one of the robust ensemble methods, to classify the testing datasets. The proposed algorithm outperforms the other algorithms and PCA over all
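The Multiplicative Update Rule named in this record is the classic Lee-Seung scheme for nonnegative matrix factorization: the data matrix V is approximated by W H with all entries nonnegative, and W and H are rescaled element-wise each iteration. A minimal sketch (random data and an arbitrary rank, not the report's experiments):

```python
import numpy as np

def nmf_mur(V, r, n_iter=200, eps=1e-9):
    """Factor a nonnegative matrix V (m x n) as W (m x r) @ H (r x n)
    using Lee-Seung multiplicative updates for the Frobenius objective."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, stays nonnegative
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W, stays nonnegative
    return W, H

V = np.random.default_rng(1).random((20, 10))   # illustrative nonnegative data
W, H = nmf_mur(V, r=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative form guarantees nonnegativity is preserved at every step, which is what makes the factors interpretable as additive parts.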
Finite-dimensional reductions of the discrete Toda chain
International Nuclear Information System (INIS)
Kazakova, T G
2004-01-01
The problem of construction of integrable boundary conditions for the discrete Toda chain is considered. The restricted chains for properly chosen closure conditions are reduced to the well-known discrete Painlevé equations dP(III), dP(V), dP(VI). Lax representations for these discrete Painlevé equations are found
Gauss-Bonnet actions and their dimensionally reduced descendants
International Nuclear Information System (INIS)
Mueller-Hoissen, F.
1989-01-01
A brief introduction to Gauss-Bonnet type generalizations of the Einstein-Hilbert gravity action in more than four dimensions is given and the structure of associated (effective) theories obtained by dimensional reduction is discussed. (author)
AbouEisha, Hassan M.
2014-01-01
The problem of attribute reduction is an important problem related to feature selection and knowledge discovery. The problem of finding reducts with minimum cardinality is NP-hard. This paper suggests a new algorithm for finding exact reducts.
International Nuclear Information System (INIS)
Anon.
1991-01-01
This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
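The probability ellipse described in this chapter is obtained from the eigen-decomposition of the 2x2 position-error covariance matrix: the eigenvectors give the ellipse orientation and the square roots of the eigenvalues give its semi-axes. A minimal sketch (the covariance values are invented for illustration):

```python
import numpy as np

def error_ellipse(cov, k=1.0):
    """Semi-axes (a >= b) and orientation (radians) of the k-sigma
    probability ellipse for a 2x2 position covariance matrix."""
    vals, vecs = np.linalg.eigh(np.asarray(cov, dtype=float))
    a, b = k * np.sqrt(vals[::-1])              # semi-major, semi-minor axis
    theta = np.arctan2(vecs[1, 1], vecs[0, 1])  # direction of the major axis
    return a, b, theta

cov = [[4.0, 1.0], [1.0, 2.0]]   # hypothetical x/y error covariance
a, b, theta = error_ellipse(cov)
```

A probability circle (e.g. circular error probable) then replaces the two axes with a single radius chosen to enclose the desired probability mass.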
... considering breast reduction surgery, consult a board-certified plastic surgeon. It's important to understand what breast reduction surgery entails — including possible risks and complications — as ...
Dimensional cosmological principles
International Nuclear Information System (INIS)
Chi, L.K.
1985-01-01
The dimensional cosmological principles proposed by Wesson require that the density, pressure, and mass of cosmological models be functions of the dimensionless variables which are themselves combinations of the gravitational constant, the speed of light, and the spacetime coordinates. The space coordinate is not the comoving coordinate. In this paper, the dimensional cosmological principle and the dimensional perfect cosmological principle are reformulated by using the comoving coordinate. The dimensional perfect cosmological principle is further modified to allow the possibility that mass creation may occur. Self-similar spacetimes are found to be models obeying the new dimensional cosmological principle
Hamiltonian reduction and supersymmetric mechanics with Dirac monopole
International Nuclear Information System (INIS)
Bellucci, Stefano; Nersessian, Armen; Yeranyan, Armen
2006-01-01
We apply the technique of Hamiltonian reduction for the construction of three-dimensional N=4 supersymmetric mechanics specified by the presence of a Dirac monopole. For this purpose we take the conventional N=4 supersymmetric mechanics on the four-dimensional conformally-flat spaces and perform its Hamiltonian reduction to a three-dimensional system. We formulate the final system in the canonical coordinates, and present, in these terms, the explicit expressions of the Hamiltonian and supercharges. We show that, besides a magnetic monopole field, the resulting system is specified by the presence of a spin-orbit coupling term. A comparison with previous work is also carried out
Infinitary Combinatory Reduction Systems: Normalising Reduction Strategies
Ketema, J.; Simonsen, Jakob Grue
2010-01-01
We study normalising reduction strategies for infinitary Combinatory Reduction Systems (iCRSs). We prove that all fair, outermost-fair, and needed-fair strategies are normalising for orthogonal, fully-extended iCRSs. These facts properly generalise a number of results on normalising strategies in
Reduction of Large Dynamical Systems by Minimization of Evolution Rate
Girimaji, Sharath S.
1999-01-01
Reduction of a large system of equations to a lower-dimensional system of similar dynamics is investigated. For dynamical systems with disparate timescales, a criterion for determining redundant dimensions and a general reduction method based on the minimization of evolution rate are proposed.
Hidden symmetries in minimal five-dimensional supergravity
International Nuclear Information System (INIS)
Poessel, Markus; Silva, Sebastian
2004-01-01
We study the hidden symmetries arising in the dimensional reduction of d=5, N=2 supergravity to three dimensions. Extending previous partial results for the bosonic part, we give a derivation that includes fermionic terms, shedding light on the appearance of the local hidden symmetry SO(4) in the reduction.
Formulation of 11-dimensional supergravity in superspace
International Nuclear Information System (INIS)
Cremmer, E.; Ferrara, S.
1980-01-01
We formulate on-shell 11-dimensional supergravity in superspace and express its equations of motion in terms of purely geometrical quantities. All torsion and curvature components are solved in terms of a single superfield W_{rstu}, totally antisymmetric in its (flat vector) indices. The dimensional reduction of this formulation is expected to be related to the superspace formulation of N = 8 extended supergravity and might explain the origin of the hidden (local) SU(8) and (global) E_7 symmetries present in this theory. (orig.)
DEFF Research Database (Denmark)
Dimova, Slobodanka; Jensen, Christian
2013-01-01
This study represents an initial exploration of raters' comments and actual realisations of form reductions in L2 test speech performances. Performances of three L2 speakers were selected as case studies and illustrations of how reductions are evaluated by the raters. The analysis is based on audio/video recorded speech samples and written reports produced by two experienced raters after testing. Our findings suggest that reduction or reduction-like pronunciation features are found in tested L2 speech, but whenever raters identify and comment on such reductions, they tend to assess reductions negatively.
System reduction for nanoscale IC design
2017-01-01
This book describes the computational challenges posed by the progression toward nanoscale electronic devices and increasingly short design cycles in the microelectronics industry, and proposes methods of model reduction which facilitate circuit and device simulation for specific tasks in the design cycle. The goal is to develop and compare methods for system reduction in the design of high dimensional nanoelectronic ICs, and to test these methods in the practice of semiconductor development. Six chapters describe the challenges for numerical simulation of nanoelectronic circuits and suggest model reduction methods for constituting equations. These include linear and nonlinear differential equations tailored to circuit equations and drift diffusion equations for semiconductor devices. The performance of these methods is illustrated with numerical experiments using real-world data. Readers will benefit from an up-to-date overview of the latest model reduction methods in computational nanoelectronics.
Four-dimensional Hall mechanics as a particle on CP3
International Nuclear Information System (INIS)
Bellucci, Stefano; Casteill, Pierre-Yves; Nersessian, Armen
2003-01-01
In order to establish an explicit connection between the four-dimensional Hall effect on S^4 and the six-dimensional Hall effect on CP^3, we perform the Hamiltonian reduction of a particle moving on CP^3 in a constant magnetic field to four-dimensional Hall mechanics (i.e., a particle on S^4 in an SU(2) instanton field). This reduction corresponds to fixing the isospin of the latter system.
MCNP variance reduction overview
International Nuclear Information System (INIS)
Hendricks, J.S.; Booth, T.E.
1985-01-01
The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code
Andersson, Pher G
2008-01-01
With its comprehensive overview of modern reduction methods, this book features high-quality contributions allowing readers to find reliable solutions quickly and easily. The monograph treats the reduction of carbonyls, alkenes, imines and alkynes, as well as reductive aminations and cross- and Heck couplings, before finishing off with sections on kinetic resolutions and hydrogenolysis. An indispensable lab companion for every chemist.
Three-dimensional versus two-dimensional vision in laparoscopy
DEFF Research Database (Denmark)
Sørensen, Stine D; Savran, Mona Meral; Konge, Lars
2016-01-01
...through a two-dimensional (2D) projection on a monitor, which results in loss of depth perception. To counter this problem, 3D imaging for laparoscopy was developed. A systematic review of the literature was performed to assess the effect of 3D laparoscopy. METHODS: A systematic search of the literature... were cohort size and characteristics, skill trained or operation performed, instrument used, outcome measures, and conclusions. Two independent authors performed the search and data extraction. RESULTS: Three hundred and forty articles were screened for eligibility, and 31 RCTs were included in the review. Three trials were carried out in a clinical setting, and 28 trials used a simulated setting. Time was used as an outcome measure in all of the trials, and number of errors was used in 19 out of 31 trials. Twenty-two out of 31 trials (71 %) showed a reduction in performance time, and 12 out of 19...
International Nuclear Information System (INIS)
Brown, J.D.
1988-01-01
This book addresses the subject of gravity theories in two and three spacetime dimensions. The prevailing philosophy is that lower dimensional models of gravity provide a useful arena for developing new ideas and insights, which are applicable to four dimensional gravity. The first chapter consists of a comprehensive introduction to both two and three dimensional gravity, including a discussion of their basic structures. In the second chapter, the asymptotic structure of three dimensional Einstein gravity with a negative cosmological constant is analyzed. The third chapter contains a treatment of the effects of matter sources in classical two dimensional gravity. The fourth chapter gives a complete analysis of particle pair creation by electric and gravitational fields in two dimensions, and the resulting effect on the cosmological constant
Three dimensional strained semiconductors
Voss, Lars; Conway, Adam; Nikolic, Rebecca J.; Leao, Cedric Rocha; Shao, Qinghui
2016-11-08
In one embodiment, an apparatus includes a three dimensional structure comprising a semiconductor material, and at least one thin film in contact with at least one exterior surface of the three dimensional structure for inducing a strain in the structure, the thin film being characterized as providing at least one of: an induced strain of at least 0.05%, and an induced strain in at least 5% of a volume of the three dimensional structure. In another embodiment, a method includes forming a three dimensional structure comprising a semiconductor material, and depositing at least one thin film on at least one surface of the three dimensional structure for inducing a strain in the structure, the thin film being characterized as providing at least one of: an induced strain of at least 0.05%, and an induced strain in at least 5% of a volume of the structure.
Clustering high dimensional data
DEFF Research Database (Denmark)
Assent, Ira
2012-01-01
High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called 'curse of dimensionality', coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster...
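The distance-concentration effect this abstract describes is easy to demonstrate numerically. The sketch below is our own illustration, not code from the cited work: it draws uniform random points and shows that the relative contrast between the nearest and farthest pairwise distances shrinks as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_contrast(dim, n_points=100):
    """Relative contrast (max - min) / min over pairwise Euclidean distances
    of n_points uniform random points in the unit hypercube of dimension dim."""
    x = rng.uniform(size=(n_points, dim))
    # Pairwise squared distances via |a|^2 + |b|^2 - 2 a.b; clip tiny negatives.
    sq = (x ** 2).sum(axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * (x @ x.T), 0.0)
    d = np.sqrt(d2[np.triu_indices(n_points, k=1)])
    return (d.max() - d.min()) / d.min()

for dim in (2, 10, 100, 1000):
    print(dim, round(distance_contrast(dim), 3))
# The contrast drops by orders of magnitude as dim increases, so "near" and
# "far" neighbours become almost indistinguishable -- the effect the abstract
# says renders traditional clustering algorithms ineffective.
```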
Big Data, Biostatistics and Complexity Reduction
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2018-01-01
Roč. 14, č. 2 (2018), s. 24-32 ISSN 1801-5603 R&D Projects: GA MZd(CZ) NV15-29835A Institutional support: RVO:67985807 Keywords : Biostatistics * Big data * Multivariate statistics * Dimensionality * Variable selection Subject RIV: IN - Informatics, Computer Science OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8) https://www.ejbi.org/scholarly-articles/big-data-biostatistics-and-complexity-reduction.pdf
Hamiltonian formalism of two-dimensional Vlasov kinetic equation.
Pavlov, Maxim V
2014-12-08
In this paper, the two-dimensional Benney system describing long wave propagation of a finite depth fluid motion and the multi-dimensional Russo-Smereka kinetic equation describing a bubbly flow are considered. The Hamiltonian approach established by J. Gibbons for the one-dimensional Vlasov kinetic equation is extended to a multi-dimensional case. A local Hamiltonian structure associated with the hydrodynamic lattice of moments derived by D. J. Benney is constructed. A relationship between this hydrodynamic lattice of moments and the two-dimensional Vlasov kinetic equation is found. In the two-dimensional case, a Hamiltonian hydrodynamic lattice for the Russo-Smereka kinetic model is constructed. Simple hydrodynamic reductions are presented.
Super integrable four-dimensional autonomous mappings
International Nuclear Information System (INIS)
Capel, H W; Sahadevan, R; Rajakumar, S
2007-01-01
A systematic investigation of the complete integrability of a fourth-order autonomous difference equation of the type w(n + 4) = w(n)F(w(n + 1), w(n + 2), w(n + 3)) is presented. We identify seven distinct families of four-dimensional mappings which are super integrable and have three (independent) integrals via a duality relation as introduced in a recent paper by Quispel, Capel and Roberts (2005 J. Phys. A: Math. Gen. 38 3965-80). It is observed that these seven families can be related to the four-dimensional symplectic mappings with two integrals including all the four-dimensional periodic reductions of the integrable double-discrete modified Korteweg-deVries and sine-Gordon equations treated in an earlier paper by two of us (Capel and Sahadevan 2001 Physica A 289 86-106)
Nagatani, Yukihiro; Takahashi, Masashi; Murata, Kiyoshi; Ikeda, Mitsuru; Yamashiro, Tsuneo; Miyara, Tetsuhiro; Koyama, Hisanobu; Koyama, Mitsuhiro; Sato, Yukihisa; Moriya, Hiroshi; Noma, Satoshi; Tomiyama, Noriyuki; Ohno, Yoshiharu; Murayama, Sadayuki
2015-07-01
To compare lung nodule detection performance (LNDP) in computed tomography (CT) with adaptive iterative dose reduction using three-dimensional processing (AIDR3D) between ultra-low dose CT (ULDCT) and low dose CT (LDCT). This was part of the Area-detector Computed Tomography for the Investigation of Thoracic Diseases (ACTIve) Study, a multicenter research project being conducted in Japan. The Institutional Review Board approved this study and informed consent was obtained. Eighty-three subjects (body mass index, 23.3 ± 3.2) underwent chest CT at 6 institutions using identical scanners and protocols. In a single visit, each subject was scanned using different tube currents: 240, 120 and 20 mA (3.52, 1.74 and 0.29 mSv, respectively). Axial CT images with 2-mm thickness/increment were reconstructed using AIDR3D. The standard of reference (SOR) was determined based on CT images at 240 mA by consensus reading of 2 board-certified radiologists as to the presence of lung nodules with the longest diameter (LD) of more than 3 mm. Another 5 radiologists independently assessed and recorded the presence/absence of lung nodules and their locations by continuously-distributed rating in CT images at 20 mA (ULDCT) and 120 mA (LDCT). Receiver-operating characteristic (ROC) analysis was used to evaluate the LNDP of both methods in total and also in subgroups classified by LD (>4, 6 and 8 mm) and nodular characteristics (solid and ground glass nodules). For the SOR, 161 solid and 60 ground glass nodules were identified. No significant difference in LNDP for entire solid nodules was demonstrated between the two methods, as the area under the ROC curve (AUC) was 0.844 ± 0.017 in ULDCT and 0.876 ± 0.026 in LDCT (p=0.057). For ground glass nodules with an LD of 8 mm or more, LNDP was similar between the two methods, as the AUC was 0.899 ± 0.038 in ULDCT and 0.941 ± 0.030 in LDCT (p=0.144). ULDCT using AIDR3D, with a radiation dose equivalent to a chest x-ray, could have comparable LNDP to LDCT with AIDR3D except for smaller ground glass nodules.
Dimensional comparison theory.
Möller, Jens; Marsh, Herb W
2013-07-01
Although social comparison (Festinger, 1954) and temporal comparison (Albert, 1977) theories are well established, dimensional comparison is a largely neglected yet influential process in self-evaluation. Dimensional comparison entails a single individual comparing his or her ability in a (target) domain with his or her ability in a standard domain (e.g., "How good am I in math compared with English?"). This article reviews empirical findings from introspective, path-analytic, and experimental studies on dimensional comparisons, categorized into 3 groups according to whether they address the "why," "with what," or "with what effect" question. As the corresponding research shows, dimensional comparisons are made in everyday life situations. They impact on domain-specific self-evaluations of abilities in both domains: Dimensional comparisons reduce self-concept in the worse off domain and increase self-concept in the better off domain. The motivational basis for dimensional comparisons, their integration with recent social cognitive approaches, and the interdependence of dimensional, temporal, and social comparisons are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Unitarity cuts and Reduction to master integrals in d dimensions for one-loop amplitudes
Anastasiou, C; Feng, B; Kunszt, Z; Mastrolia, Pierpaolo; Anastasiou, Charalampos; Britto, Ruth; Feng, Bo; Kunszt, Zoltan; Mastrolia, Pierpaolo
2007-01-01
We present an alternative reduction to master integrals for one-loop amplitudes using a unitarity cut method in arbitrary dimensions. We carry out the reduction in two steps. The first step is a pure four-dimensional cut-integration of tree amplitudes with a mass parameter, and the second step is applying dimensional shift identities to master integrals. This reduction is performed at the integrand level, so that coefficients can be read out algebraically.
Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.
2012-01-01
The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data)...
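As a minimal illustration of the latent-variable idea in this abstract (our own toy example with simulated data, not the authors' code), the following applies PCA, a standard linear dimensionality reduction method, to the activity of many simulated neurons driven by a few shared latent signals:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate 50 neurons whose activity is driven by 3 shared latent variables
# plus a small amount of independent noise (all values are hypothetical).
T, n_neurons, n_latents = 500, 50, 3
latents = rng.normal(size=(T, n_latents))           # latent time courses
mixing = rng.normal(size=(n_latents, n_neurons))    # loading of each neuron
activity = latents @ mixing + 0.1 * rng.normal(size=(T, n_neurons))

# PCA via SVD of the mean-centred activity matrix.
centred = activity - activity.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()

# Project onto the top 3 principal components: a T x 3 trajectory in the
# reduced-dimensional space in which population activity can be visualised.
reduced = centred @ vt[:3].T

print(reduced.shape)               # (500, 3)
print(explained[:3].sum() > 0.9)   # True: 3 PCs capture most variance here
```

Because the simulated activity really is driven by 3 latents, the top 3 components dominate; on real neural data, choosing the dimensionality is exactly the problem the abstract raises.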
Three dimensional canonical transformations
International Nuclear Information System (INIS)
Tegmen, A.
2010-01-01
A generic construction of canonical transformations is given in three-dimensional phase spaces on which the Nambu bracket is imposed. First, the canonical transformations are defined as based on canonoid transformations. Second, it is shown that determination of the generating functions, and of the transformation itself for a given generating function, is possible by solving the corresponding Pfaffian differential equations. Generating functions of each type are introduced and all of them are listed. Infinitesimal canonical transformations are also discussed as a complementary subject. Finally, it is shown that decomposition of canonical transformations is also possible in three-dimensional phase spaces, as in the usual two-dimensional ones.
Gauged supergravities from M-theory reductions
Katmadas, Stefanos; Tomasiello, Alessandro
2018-04-01
In supergravity compactifications, there is in general no clear prescription on how to select a finite-dimensional family of metrics on the internal space, and a family of forms on which to expand the various potentials, such that the lower-dimensional effective theory is supersymmetric. We propose a finite-dimensional family of deformations for regular Sasaki-Einstein seven-manifolds M 7, relevant for M-theory compactifications down to four dimensions. It consists of integrable Cauchy-Riemann structures, corresponding to complex deformations of the Calabi-Yau cone M 8 over M 7. The non-harmonic forms we propose are the ones contained in one of the Kohn-Rossi cohomology groups, which is finite-dimensional and naturally controls the deformations of Cauchy-Riemann structures. The same family of deformations can be also described in terms of twisted cohomology of the base M 6, or in terms of Milnor cycles arising in deformations of M 8. Using existing results on SU(3) structure compactifications, we briefly discuss the reduction of M-theory on our class of deformed Sasaki-Einstein manifolds to four-dimensional gauged supergravity.
Reduction - competitive tomorrow
International Nuclear Information System (INIS)
Worley, L.; Bargerstock, S.
1995-01-01
Inventory reduction is one of the few initiatives that represent significant cost-reduction potential that does not result in personnel reduction. Centerior Energy's Perry nuclear power plant has embarked on an aggressive program to reduce inventory while maintaining plant material availability. Material availability to the plant was above 98%, but at an unacceptable 1994 inventory book value of $47 million with inventory carrying costs calculated at 30% annually
International Nuclear Information System (INIS)
Lowthian, W.E.
1993-01-01
Process Energy Reduction (PER) is a demand-side energy reduction approach which complements and often supplants other traditional energy reduction methods such as conservation and heat recovery. Because the application of PER is less obvious than the traditional methods, it takes some time to learn the steps as well as practice to become proficient in its use. However, the benefit is significant, often far outweighing the traditional energy reduction approaches. Furthermore, the method usually results in a better process having less waste and pollution along with improved yields, increased capacity, and lower operating costs
AbouEisha, Hassan M.
2014-01-01
The problem of attribute reduction is an important problem related to feature selection and knowledge discovery. The problem of finding reducts with minimum cardinality is NP-hard. This paper suggests a new algorithm for finding exact reducts with minimum cardinality. This algorithm transforms the initial table to a decision table of a special kind, applies a set of simplification steps to this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. I present results of computer experiments for a collection of decision tables from the UCI ML Repository. For many of the tables in the experiments, the simplification steps solved the problem.
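For readers unfamiliar with reducts, the toy sketch below illustrates the object being optimised. It is our own brute-force illustration over a made-up decision table, not the paper's algorithm (which uses table simplification plus dynamic programming and scales far better than exhaustive search):

```python
from itertools import combinations

# Toy decision table: each row is (attribute values..., decision).
rows = [
    (0, 0, 1, 'no'),
    (0, 1, 1, 'yes'),
    (1, 0, 0, 'no'),
    (1, 1, 0, 'yes'),
]
n_attrs = 3

def is_consistent(attrs):
    """A subset of attributes preserves the decision if rows that agree on
    those attributes always carry the same decision."""
    seen = {}
    for row in rows:
        key = tuple(row[i] for i in attrs)
        if seen.setdefault(key, row[-1]) != row[-1]:
            return False
    return True

def minimum_reduct():
    """Smallest attribute subset that still determines the decision.
    Exhaustive search: exponential in n_attrs, fine only for toy tables."""
    for k in range(1, n_attrs + 1):
        for attrs in combinations(range(n_attrs), k):
            if is_consistent(attrs):
                return attrs
    return tuple(range(n_attrs))

print(minimum_reduct())  # attribute 1 alone decides this toy table: (1,)
```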
Metallothermic reduction of molybdate
International Nuclear Information System (INIS)
Mukherjee, T.K.; Bose, D.K.
1987-01-01
This paper gives a brief account of the investigations conducted so far on the metallothermic reduction of high-grade molybdenite, with particular emphasis on the work carried out at the Bhabha Atomic Research Centre. Based on thermochemical considerations, the paper first introduces a number of metallic reductants suitable for use in the metallothermic reduction of molybdenite. Aluminium, sodium and tin are found to be suitable reducing agents, and accordingly they have found the most application in research and development efforts on the metallothermic reduction of molybdenite. The reduction with tin was conducted on a fairly large scale, both in vacuum and in a hydrogen atmosphere. The reaction was reported to be invariant, depending mainly on the reduction temperature, and a temperature of the order of 1250 to 1300 °C was required for good metal recovery. In comparison to tin, the aluminothermic reduction of molybdenite was studied more extensively; it was conducted in a closed bomb, in vacuum and also in the open atmosphere. In aluminothermic reduction, the influence of the amount of reducing agent, the amount of heat booster, the preheating temperature and the charging procedure on the metal yield was studied in detail. The reduction generally yielded massive molybdenum metal contaminated with aluminium as the major impurity element. Efforts were made to purify the reduced metal by arc melting, electron beam melting and molten salt electrorefining. 9 refs. (author)
High-dimensional data in economics and their (robust) analysis
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2017-01-01
Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Institutional support: RVO:67985556 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BA - General Mathematics OBOR OECD: Business and management http://library.utia.cas.cz/separaty/2017/SI/kalina-0474076.pdf
High-dimensional Data in Economics and their (Robust) Analysis
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2017-01-01
Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability
Microbial reductive dehalogenation.
Mohn, W W; Tiedje, J M
1992-01-01
A wide variety of compounds can be biodegraded via reductive removal of halogen substituents. This process can degrade toxic pollutants, some of which are not known to be biodegraded by any other means. Reductive dehalogenation of aromatic compounds has been found primarily in undefined, syntrophic anaerobic communities. We discuss ecological and physiological principles which appear to be important in these communities and evaluate how widely applicable these principles are. Anaerobic communities that catalyze reductive dehalogenation appear to differ in many respects. A large number of pure cultures which catalyze reductive dehalogenation of aliphatic compounds are known, in contrast to only a few organisms which catalyze reductive dehalogenation of aromatic compounds. Desulfomonile tiedjei DCB-1 is an anaerobe which dehalogenates aromatic compounds and is physiologically and morphologically unusual in a number of respects, including the ability to exploit reductive dehalogenation for energy metabolism. When possible, we use D. tiedjei as a model to understand dehalogenating organisms in the above-mentioned undefined systems. Aerobes use reductive dehalogenation for substrates which are resistant to known mechanisms of oxidative attack. Reductive dehalogenation, especially of aliphatic compounds, has recently been found in cell-free systems. These systems give us an insight into how and why microorganisms catalyze this activity. In some cases transition metal complexes serve as catalysts, whereas in other cases, particularly with aromatic substrates, the catalysts appear to be enzymes. PMID:1406492
International Nuclear Information System (INIS)
Holzfuss, J.
1996-01-01
Noise reduction is a problem being encountered in a variety of applications, such as environmental noise cancellation, signal recovery and separation. Passive noise reduction is done with the help of absorbers. Active noise reduction includes the transmission of phase inverted signals for the cancellation. This paper is about a threefold active approach to noise reduction. It includes the separation of a combined source, which consists of both a noise and a signal part. With the help of interaction with the source by scanning it and recording its response, modeling as a nonlinear dynamical system is achieved. The analysis includes phase space analysis and global radial basis functions as tools for the prediction used in a subsequent cancellation procedure. Examples are given which include noise reduction of speech. copyright 1996 American Institute of Physics
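The phase-inversion idea mentioned in this abstract can be shown in a toy setting. This is our own illustration with a perfectly known synthetic noise waveform; real active noise control must estimate the noise (e.g. adaptively, or by modeling the source as the abstract describes) rather than assume it is known:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 5 * t)         # wanted low-frequency component
noise = 0.8 * np.sin(2 * np.pi * 50 * t)   # unwanted higher-frequency tone
mixed = signal + noise                     # what a microphone would record

# Transmit a phase-inverted (180-degree shifted) copy of the noise.
anti_noise = -noise
recovered = mixed + anti_noise

# In this idealised case the cancellation is exact.
print(np.allclose(recovered, signal))  # True
```

The hard part in practice, and the subject of the paper, is obtaining a good enough model of the noise source that the transmitted anti-noise actually matches it.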
Energy Technology Data Exchange (ETDEWEB)
Nagatani, Yukihiro, E-mail: yatsushi@belle.shiga-med.ac.jp [Department of Radiology, Shiga University of Medical Science, Otsu 520-2192, Shiga (Japan); Takahashi, Masashi; Murata, Kiyoshi [Department of Radiology, Shiga University of Medical Science, Otsu 520-2192, Shiga (Japan); Ikeda, Mitsuru [Department of Radiological and Medical Laboratory Science, Nagoya University Graduate School of Medicine, Nagoya 461-8673, Aichi (Japan); Yamashiro, Tsuneo [Department of Radiology, Graduate School of Medical Science, University of the Ryukyus, Nishihara 903-0215, Okinawa (Japan); Miyara, Tetsuhiro [Department of Radiology, Graduate School of Medical Science, University of the Ryukyus, Nishihara 903-0215, Okinawa (Japan); Department of Radiology, Okinawa Prefectural Yaeyama Hospital, Ishigaki 907-0022, Okinawa (Japan); Koyama, Hisanobu [Department of Radiology, Kobe University Graduate School of Medicine, Kobe 650-0017, Hyogo (Japan); Koyama, Mitsuhiro [Department of Radiology, Osaka Medical College, Takatsuki 569-8686, Osaka (Japan); Sato, Yukihisa [Department of Radiology, Osaka University Graduate School of Medicine, Suita 565-0871, Osaka (Japan); Department of Radiology, Osaka Medical Center of Cancer and Cardiovascular Diseases, Osaka 537-8511, Osaka (Japan); Moriya, Hiroshi [Department of Radiology, Ohara General Hospital, Fukushima 960-8611 (Japan); Noma, Satoshi [Department of Radiology, Tenri Hospital, Tenri 632-8552, Nara (Japan); Tomiyama, Noriyuki [Department of Radiology, Osaka University Graduate School of Medicine, Suita 565-0871, Osaka (Japan); Ohno, Yoshiharu [Department of Radiology, Kobe University Graduate School of Medicine, Kobe 650-0017, Hyogo (Japan); Murayama, Sadayuki [Department of Radiology, Graduate School of Medical Science, University of the Ryukyus, Nishihara 903-0215, Okinawa (Japan)
2015-07-15
Highlights: • Using AIDR 3D, ULDCT showed comparable LND of solid nodules to LDCT. • Using AIDR 3D, LND of smaller GGN in ULDCT was inferior to that in LDCT. • Effective dose in ULDCT was about only twice of that in chest X-ray. • BMI values in study population were mostly in the normal range body habitus. - Abstract: Purpose: To compare lung nodule detection performance (LNDP) in computed tomography (CT) with adaptive iterative dose reduction using three dimensional processing (AIDR3D) between ultra-low dose CT (ULDCT) and low dose CT (LDCT). Materials and methods: This was part of the Area-detector Computed Tomography for the Investigation of Thoracic Diseases (ACTIve) Study, a multicenter research project being conducted in Japan. Institutional Review Board approved this study and informed consent was obtained. Eighty-three subjects (body mass index, 23.3 ± 3.2) underwent chest CT at 6 institutions using identical scanners and protocols. In a single visit, each subject was scanned using different tube currents: 240, 120 and 20 mA (3.52, 1.74 and 0.29 mSv, respectively). Axial CT images with 2-mm thickness/increment were reconstructed using AIDR3D. Standard of reference (SOR) was determined based on CT images at 240 mA by consensus reading of 2 board-certificated radiologists as to the presence of lung nodules with the longest diameter (LD) of more than 3 mm. Another 5 radiologists independently assessed and recorded presence/absence of lung nodules and their locations by continuously-distributed rating in CT images at 20 mA (ULDCT) and 120 mA (LDCT). Receiver-operating characteristic (ROC) analysis was used to evaluate LNDP of both methods in total and also in subgroups classified by LD (>4, 6 and 8 mm) and nodular characteristics (solid and ground glass nodules). Results: For SOR, 161 solid and 60 ground glass nodules were identified. No significant difference in LNDP for entire solid nodules was demonstrated between both methods, as area under ROC
Euclidean D-branes and higher-dimensional gauge theory
International Nuclear Information System (INIS)
Acharya, B.S.; Figueroa-O'Farrill, J.M.; Spence, B.; O'Loughlin, M.
1997-07-01
We consider euclidean D-branes wrapping around manifolds of exceptional holonomy in dimensions seven and eight. The resulting theory on the D-brane, that is, the dimensional reduction of 10-dimensional supersymmetric Yang-Mills theory, is a cohomological field theory which describes the topology of the moduli space of instantons. The 7-dimensional theory is an N_T=2 (or balanced) cohomological theory given by an action potential of Chern-Simons type. As a by-product of this method, we construct a related cohomological field theory which describes the monopole moduli space on a 7-manifold of G_2 holonomy. (author). 22 refs, 3 tabs
International Nuclear Information System (INIS)
Warren, J.L.
1990-01-01
The author focuses on wastes considered hazardous under the Resource Conservation and Recovery Act. This chapter discusses wastes that are of interest as well as the factors affecting the quantity of waste considered available for waste reduction. Estimates are provided of the quantities of wastes generated. Estimates of the potential for waste reduction are meaningful only to the extent that one can understand the amount of waste actually being generated. Estimates of waste reduction potential are summarized from a variety of government and nongovernment sources
Dimensional transition of the universe
International Nuclear Information System (INIS)
Terazawa, Hidezumi.
1989-08-01
In the extended n-dimensional Einstein theory of gravitation, where the spacetime dimension can be taken as a 'dynamical variable' which is determined by the 'Hamilton principle' of minimizing the extended Einstein-Hilbert action, it is suggested that our Universe of four-dimensional spacetime may encounter an astonishing dimensional transition into a new universe of three-dimensional or higher-than-four-dimensional spacetime. (author)
International Nuclear Information System (INIS)
Schroer, Bert; Freie Universitaet, Berlin
2005-02-01
It is not possible to compactly review the overwhelming literature on two-dimensional models in a meaningful way without a specific viewpoint; I have therefore tacitly added to the above title the words 'as theoretical laboratories for general quantum field theory'. I dedicate this contribution to the memory of J. A. Swieca with whom I have shared the passion of exploring 2-dimensional models for almost one decade. A shortened version of this article is intended as a contribution to the project 'Encyclopedia of mathematical physics' and comments, suggestions and critical remarks are welcome. (author)
Three-dimensional neuroimaging
International Nuclear Information System (INIS)
Toga, A.W.
1990-01-01
This book reports on new neuroimaging technologies that are revolutionizing the study of the brain by enabling investigators to visualize its structure and entire pattern of functional activity in three dimensions. The book provides a theoretical and practical explanation of the new science of creating three-dimensional computer images of the brain. The coverage includes a review of the technology and methodology of neuroimaging, the instrumentation and procedures, issues of quantification, analytic protocols, and descriptions of neuroimaging systems. Examples are given to illustrate the use of three-dimensional neuroimaging to quantify spatial measurements, perform analysis of autoradiographic and histological studies, and study the relationship between brain structure and function
International Nuclear Information System (INIS)
Zanchin, Vilson T.; Kleber, Antares; Lemos, Jose P.S.
2002-01-01
The dimensional reduction of black hole solutions in four-dimensional (4D) general relativity is performed and new 3D black hole solutions are obtained. Considering a 4D spacetime with one spacelike Killing vector, it is possible to split the Einstein-Hilbert-Maxwell action with a cosmological term in terms of 3D quantities. Definitions of quasilocal mass and charges in 3D spacetimes are reviewed. The analysis is then particularized to the toroidal charged rotating anti-de Sitter black hole. The reinterpretation of the fields and charges from a three-dimensional point of view is given in each case, and the causal structure is analyzed
A Corresponding Lie Algebra of a Reductive homogeneous Group and Its Applications
International Nuclear Information System (INIS)
Zhang Yu-Feng; Rui Wen-Juan; Wu Li-Xin
2015-01-01
With the help of a Lie algebra of a reductive homogeneous space G/K, where G is a Lie group and K is the resulting isotropy group, we introduce a Lax pair from which an expanding (2+1)-dimensional integrable hierarchy is obtained by applying the binormial-residue representation (BRR) method. Its Hamiltonian structure is derived from the trace identity for deducing (2+1)-dimensional integrable hierarchies, which was proposed by Tu et al. We further consider some reductions of the expanding integrable hierarchy obtained in the paper. The first reduction is precisely the (2+1)-dimensional AKNS hierarchy; the second type of reduction reveals an integrable coupling of the (2+1)-dimensional AKNS equation (also called the Davey-Stewartson hierarchy), a kind of (2+1)-dimensional Schrödinger equation, which was once reobtained by Tu, Feng and Zhang. It is interesting that a new (2+1)-dimensional integrable nonlinear coupled equation is generated from the reduction of part of the (2+1)-dimensional integrable coupling, which is further reduced to the standard (2+1)-dimensional diffusion equation along with a parameter. In addition, the well-known (1+1)-dimensional AKNS hierarchy and the (1+1)-dimensional nonlinear Schrödinger equation are both special cases of the (2+1)-dimensional expanding integrable hierarchy. Finally, we discuss a few discrete difference equations of the diffusion equation, whose stabilities are analyzed by making use of the von Neumann condition and the Fourier method. Some numerical solutions of a special stationary initial value problem of the (2+1)-dimensional diffusion equation are obtained, and the resulting convergence and estimation formulas are investigated. (paper)
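The von Neumann stability analysis mentioned in this abstract can be illustrated concretely. The sketch below (an illustration, not code from the paper) checks the classical stability bound of the explicit FTCS discretization of the 2D diffusion equation u_t = D(u_xx + u_yy), assuming equal grid spacing in both directions, by sampling the amplification factor over Fourier modes:

```python
import numpy as np

def ftcs_amplification(r, kx_dx, ky_dy):
    """Von Neumann amplification factor for the FTCS scheme applied to the
    2D diffusion equation u_t = D (u_xx + u_yy) on a uniform grid,
    where r = D*dt/dx**2 (equal spacing in x and y assumed)."""
    return 1.0 - 4.0 * r * (np.sin(kx_dx / 2) ** 2 + np.sin(ky_dy / 2) ** 2)

def is_von_neumann_stable(r, n=64):
    """Check |g| <= 1 over a grid of sampled Fourier modes."""
    angles = np.linspace(0.0, np.pi, n)
    g = ftcs_amplification(r, angles[:, None], angles[None, :])
    return bool(np.all(np.abs(g) <= 1.0 + 1e-12))

# The classical 2D FTCS bound is r <= 1/4; beyond it the scheme blows up.
print(is_von_neumann_stable(0.25), is_von_neumann_stable(0.30))
```

The worst case is the highest mode kx_dx = ky_dy = π, where g = 1 − 8r; requiring |g| ≤ 1 gives r ≤ 1/4.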
Breast reduction (mammoplasty) - slideshow
Medical Errors Reduction Initiative
National Research Council Canada - National Science Library
Mutter, Michael L
2005-01-01
The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...
Microbial reductive dehalogenation.
Mohn, W W; Tiedje, J M
1992-01-01
A wide variety of compounds can be biodegraded via reductive removal of halogen substituents. This process can degrade toxic pollutants, some of which are not known to be biodegraded by any other means. Reductive dehalogenation of aromatic compounds has been found primarily in undefined, syntrophic anaerobic communities. We discuss ecological and physiological principles which appear to be important in these communities and evaluate how widely applicable these principles are. Anaerobic commun...
Ceccio, Steven; Elbing, Brian; Winkel, Eric; Dowling, David; Perlin, Marc
2008-11-01
A set of experiments has been conducted at the US Navy's Large Cavitation Channel to investigate skin-friction drag reduction with the injection of air into a high Reynolds number turbulent boundary layer. Testing was performed on a 12.9 m long flat-plate test model with the surface hydraulically smooth and fully rough at downstream-distance-based Reynolds numbers to 220 million and at speeds to 20 m/s. Local skin-friction, near-wall bulk void fraction, and near-wall bubble imaging were monitored along the length of the model. The instrument suite was used to assess the requirements necessary to achieve air layer drag reduction (ALDR). Injection of air over a wide range of air fluxes showed that three drag reduction regimes exist: (1) bubble drag reduction, which has poor downstream persistence; (2) a transitional regime with a steep rise in drag reduction; and (3) the ALDR regime, where the drag reduction plateaus at 90% ± 10% over the entire model length with large void fractions in the near-wall region. These investigations revealed several requirements for ALDR: sufficient volumetric air fluxes, which increase approximately with the square of the free-stream speed; slightly higher air fluxes when the surface tension is reduced; higher air fluxes for rough surfaces; and sensitivity of ALDR formation to the inlet condition.
DEFF Research Database (Denmark)
Larsen, Mihail
De fire dimensioner (The Four Dimensions) is a humanistic handbook intended primarily for students and supervisors in the humanities, but it can also be read by others interested in what humanistic research is and can do. It grew out of a long life of committed research, teaching and dissemination at Roskilde University, and in that way it is also a contribution to the history of the university, which I helped to found. De fire dimensioner places the human being at the centre. But it is a centre that points beyond itself; a centre from which the world is viewed, experienced and understood. All human beings have a past and a future, and stretched between these points in time they think and act in space. Human existence encompasses all four dimensions. The four dimensions therefore also constitute a defence of a general education that permeates, and finds cultural expression in, our history, knowledge, practice and art.
dimensional nonlinear evolution equations
Indian Academy of Sciences (India)
in real-life situations, it is important to find their exact solutions. Further, in ... But little work has been done on the high-dimensional equations. ... Similarly, to determine the values of d and q, we balance the linear term of the lowest order in the equation.
Two dimensional generalizations of the Newcomb equation
International Nuclear Information System (INIS)
Dewar, R.L.; Pletzer, A.
1989-11-01
The Bineau reduction to scalar form of the equation governing ideal, zero frequency linearized displacements from a hydromagnetic equilibrium possessing a continuous symmetry is performed in 'universal coordinates', applicable to both the toroidal and helical cases. The resulting generalized Newcomb equation (GNE) has in general a more complicated form than the corresponding one dimensional equation obtained by Newcomb in the case of circular cylindrical symmetry, but in this cylindrical case the equation can be transformed to that of Newcomb. In the two dimensional case there is a transformation which leaves the form of the GNE invariant and simplifies the Frobenius expansion about a rational surface, especially in the limit of zero pressure gradient. The Frobenius expansion about a mode rational surface is developed and the connection with Hamiltonian transformation theory is shown. 17 refs
Quantum transport in d -dimensional lattices
International Nuclear Information System (INIS)
Manzano, Daniel; Chuang, Chern; Cao, Jianshu
2016-01-01
We show that both fermionic and bosonic uniform d -dimensional lattices can be reduced to a set of independent one-dimensional chains. This reduction leads to the expression for ballistic energy fluxes in uniform fermionic and bosonic lattices. By the use of the Jordan–Wigner transformation we can extend our analysis to spin lattices, proving the coexistence of both ballistic and non-ballistic subspaces in any dimension and for any system size. We then relate the nature of transport to the number of excitations in the homogeneous spin lattice, indicating that a single excitation always propagates ballistically and that the non-ballistic behaviour of uniform spin lattices is a consequence of the interaction between different excitations. (paper)
Extended inflation from higher dimensional theories
International Nuclear Information System (INIS)
Holman, R.; Kolb, E.W.; Vadas, S.L.; Wang, Yun.
1990-04-01
The possibility is considered that higher dimensional theories may, upon reduction to four dimensions, allow extended inflation to occur. Two separate models are analyzed. One is a very simple toy model consisting of higher dimensional gravity coupled to a scalar field whose potential allows for a first-order phase transition. The other is a more sophisticated model incorporating the effects of non-trivial field configurations (monopole, Casimir, and fermion bilinear condensate effects) that yield a non-trivial potential for the radius of the internal space. It was found that extended inflation does not occur in these models. It was also found that the bubble nucleation rate in these theories is time dependent unlike the case in the original version of extended inflation
Extended inflation from higher-dimensional theories
International Nuclear Information System (INIS)
Holman, R.; Kolb, E.W.; Vadas, S.L.; Wang, Y.
1991-01-01
We consider the possibility that higher-dimensional theories may, upon reduction to four dimensions, allow extended inflation to occur. We analyze two separate models. One is a very simple toy model consisting of higher-dimensional gravity coupled to a scalar field whose potential allows for a first-order phase transition. The other is a more sophisticated model incorporating the effects of nontrivial field configurations (monopole, Casimir, and fermion bilinear condensate effects) that yield a nontrivial potential for the radius of the internal space. We find that extended inflation does not occur in these models. We also find that the bubble nucleation rate in these theories is time dependent unlike the case in the original version of extended inflation
Incomplete Dirac reduction of constrained Hamiltonian systems
Energy Technology Data Exchange (ETDEWEB)
Chandre, C., E-mail: chandre@cpt.univ-mrs.fr
2015-10-15
First-class constraints constitute a potential obstacle to the computation of a Poisson bracket in Dirac’s theory of constrained Hamiltonian systems. Using the pseudoinverse instead of the inverse of the matrix defined by the Poisson brackets between the constraints, we show that a Dirac–Poisson bracket can be constructed, even if it corresponds to an incomplete reduction of the original Hamiltonian system. The uniqueness of Dirac brackets is discussed. The relevance of this procedure for infinite dimensional Hamiltonian systems is exemplified.
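The pseudoinverse construction described in this abstract can be sketched numerically. The following illustrative example (not code from the paper) builds the Dirac bracket matrix J_D = J − (J Gᵀ) C⁺ (G J) for canonical coordinates (q1, p1, q2, p2) with the second-class constraints φ1 = q2, φ2 = p2; using the Moore-Penrose pseudoinverse C⁺ instead of C⁻¹ is what allows the same formula to survive when first-class constraints make C singular:

```python
import numpy as np

# Canonical coordinates z = (q1, p1, q2, p2); J_ab = {z_a, z_b}
J = np.array([[ 0, 1, 0, 0],
              [-1, 0, 0, 0],
              [ 0, 0, 0, 1],
              [ 0, 0,-1, 0]], dtype=float)

# Constraints phi_1 = q2, phi_2 = p2; G holds their gradients d(phi_i)/dz
G = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

C = G @ J @ G.T            # C_ij = {phi_i, phi_j}
Cp = np.linalg.pinv(C)     # Moore-Penrose pseudoinverse: also defined
                           # when first-class constraints make C singular
J_D = J - (J @ G.T) @ Cp @ (G @ J)   # Dirac-Poisson bracket matrix

print(J_D)
```

Here the unconstrained pair keeps its canonical bracket {q1, p1}_D = 1, while every bracket involving the constrained pair (q2, p2) vanishes, as it should on the constraint surface.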
A supersymmetric reduction on the three-sphere
International Nuclear Information System (INIS)
Deger, Nihat Sadik; Samtleben, Henning; Sarıoğlu, Özgür; Van den Bleeken, Dieter
2015-01-01
We present the embedding of three-dimensional SO(4)⋉R^6 gauged N=4 supergravity with quaternionic target space SO(4,4)/(SO(4)×SO(4)) into D=6, N=(1,0) supergravity coupled to a single chiral tensor multiplet through a consistent reduction on AdS_3×S^3
Quantum theory without reduction
International Nuclear Information System (INIS)
Cini, Marcello; Levy-Leblond, J.-M.
1990-01-01
Quantum theory offers a strange, and perhaps unique, case in the history of science. Although research into its roots has provided important results in recent years, the debate goes on. Some theorists argue that quantum theory is weakened by the inclusion of the so called 'reduction of the state vector' in its foundations. Quantum Theory without Reduction presents arguments in favour of quantum theory as a consistent and complete theory without this reduction, and which is capable of explaining all known features of the measurement problem. This collection of invited contributions defines and explores different aspects of this issue, bringing an old debate into a new perspective, and leading to a more satisfying consensus about quantum theory. (author)
Measuring mandibular ridge reduction
International Nuclear Information System (INIS)
Steen, W.H.A.
1984-01-01
This thesis investigates the reduction in mandibular ridge height in complete-denture wearers and overdenture wearers. To follow this reduction in the anterior region as well as in the lateral sections of the mandible, an accurate and reproducible measuring method is a prerequisite, and a radiologic technique offers the best chance. A survey is given of the literature concerning the resorption process after the extraction of teeth. An oblique cephalometric radiographic technique is introduced as a promising method to measure mandibular ridge reduction. The reproducibility and the accuracy of the technique are determined. The reproducibility of the positioning of the mandible is improved by the introduction of a mandibular support which permits a precise repositioning of the edentulous jaw, even after long periods of investigation. (Auth.)
Effective Hamiltonian for 2-dimensional arbitrary spin Ising model
International Nuclear Information System (INIS)
Sznajd, J.; Polska Akademia Nauk, Wroclaw. Inst. Niskich Temperatur i Badan Strukturalnych
1983-08-01
The method of the reduction of the generalized arbitrary-spin 2-dimensional Ising model to spin-half Ising model is presented. The method is demonstrated in detail by calculating the effective interaction constants to the third order in cumulant expansion for the triangular spin-1 Ising model (the Blume-Emery-Griffiths model). (author)
Chronoprojective invariance of the five-dimensional Schroedinger formalism
International Nuclear Information System (INIS)
Perrin, M.; Burdet, G.; Duval, C.
1984-10-01
Invariance properties of the five-dimensional Schroedinger formalism describing a quantum test particle in the Newton-Cartan theory of gravitation are studied. The geometry which underlies these invariance properties is presented as a reduction of the O(5,2) conformal geometry; various applications are given
How to Reduce Dimensionality of Data: Robustness Point of View
Czech Academy of Sciences Publication Activity Database
Kalina, Jan; Rensová, D.
2015-01-01
Roč. 10, č. 1 (2015), s. 131-140 ISSN 1452-4864 R&D Projects: GA ČR GA13-17187S Institutional support: RVO:67985807 Keywords : data analysis * dimensionality reduction * robust statistics * principal component analysis * robust classification analysis Subject RIV: BB - Applied Statistics, Operational Research
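Principal component analysis, named among the keywords above, is the standard non-robust baseline that robustness-oriented dimensionality reduction methods are compared against. A minimal illustrative sketch (not code from the paper) of classical PCA via SVD of the column-centered data:

```python
import numpy as np

def pca_reduce(X, k):
    """Reduce an n x p data matrix X to n x k scores via classical PCA:
    center the columns, take the SVD, and project onto the top-k
    right-singular vectors (principal directions)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T   # scores in the k-dimensional subspace

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca_reduce(X, 2)
print(Z.shape)
```

Because PCA is driven by (non-robust) variances, a single gross outlier can rotate the recovered subspace, which is exactly the failure mode robust variants are designed to avoid.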
International Nuclear Information System (INIS)
Feinsilver, Philip; Schott, Rene
2009-01-01
We discuss topics related to finite-dimensional calculus in the context of finite-dimensional quantum mechanics. The truncated Heisenberg-Weyl algebra is called a TAA algebra after Tekin, Aydin and Arik, who formulated it in terms of orthofermions. It is shown how to use a matrix approach to implement analytic representations of the Heisenberg-Weyl algebra in univariate and multivariate settings. We provide examples for the univariate case. Krawtchouk polynomials are presented in detail, including a review of Krawtchouk polynomials that illustrates some curious properties of the Heisenberg-Weyl algebra, as well as presenting an approach to computing Krawtchouk expansions. From a mathematical perspective, we are providing indications as to how to implement, in finite terms, Rota's 'finite operator calculus'.
REDUCTIONS WITHOUT REGRET: SUMMARY
Energy Technology Data Exchange (ETDEWEB)
Swegle, J.; Tincher, D.
2013-09-16
This paper briefly summarizes the series in which we consider the possibilities for losing, or compromising, key capabilities of the U.S. nuclear force in the face of modernization and reductions. The first of the three papers takes an historical perspective, considering capabilities that were eliminated in past force reductions. The second paper is our attempt to define the needed capabilities looking forward in the context of the current framework for force modernization and the current picture of the evolving challenges of deterrence and assurance. The third paper then provides an example for each of our undesirable outcomes: the creation of roach motels, box canyons, and wrong turns.
Dimensional analysis for engineers
Simon, Volker; Gomaa, Hassan
2017-01-01
This monograph provides the fundamentals of dimensional analysis and illustrates the method by numerous examples for a wide spectrum of applications in engineering. The book covers thoroughly the fundamental definitions and the Buckingham theorem, as well as the choice of the system of basic units. The authors also include a presentation of model theory and similarity solutions. The target audience primarily comprises researchers and practitioners but the book may also be suitable as a textbook at university level.
Three Dimensional Dirac Semimetals
Zaheer, Saad
2014-03-01
Dirac points on the Fermi surface of two dimensional graphene are responsible for its unique electronic behavior. One can ask whether any three dimensional materials support similar pseudorelativistic physics in their bulk electronic spectra. This possibility has been investigated theoretically and is now supported by two successful experimental demonstrations reported during the last year. In this talk, I will summarize the various ways in which Dirac semimetals can be realized in three dimensions with primary focus on a specific theory developed on the basis of representations of crystal spacegroups. A three dimensional Dirac (Weyl) semimetal can appear in the presence (absence) of inversion symmetry by tuning parameters to the phase boundary separating a bulk insulating and a topological insulating phase. More generally, we find that specific rules governing crystal symmetry representations of electrons with spin lead to robust Dirac points at high symmetry points in the Brillouin zone. Combining these rules with microscopic considerations identifies six candidate Dirac semimetals. Another method towards engineering Dirac semimetals involves combining crystal symmetry and band inversion. Several candidate materials have been proposed utilizing this mechanism and one of the candidates has been successfully demonstrated as a Dirac semimetal in two independent experiments. Work carried out in collaboration with: Julia A. Steinberg, Steve M. Young, J.C.Y. Teo, C.L. Kane, E.J. Mele and Andrew M. Rappe.
Symmetry Reductions of a 1.5-Layer Ocean Circulation Model
International Nuclear Information System (INIS)
Huang Fei; Lou Senyue
2007-01-01
The (2+1)-dimensional nonlinear 1.5-layer ocean circulation model without external wind stress forcing is analyzed by using the classical Lie group approach. Some Lie point symmetries and their corresponding two-dimensional reduction equations are obtained.
Chernozhukov, Victor; Hansen, Christian; Spindler, Martin
2016-01-01
In this article the package High-dimensional Metrics (hdm) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e...
Three-dimensional ICT reconstruction
International Nuclear Information System (INIS)
Zhang Aidong; Li Ju; Chen Fa; Sun Lingxia
2005-01-01
The three-dimensional ICT reconstruction method is a hot topic of recent ICT technology research. In this context, qualified visual three-dimensional ICT pictures are achieved through the accumulation of multiple two-dimensional images in order, combined with thresholding and linear interpolation. Images of the reconstructed pictures in different directions and at different positions are obtained by rotation and interception, respectively. This convenient and quick method is significantly instructive for more complicated three-dimensional reconstruction of ICT images. (authors)
Three-dimensional ICT reconstruction
International Nuclear Information System (INIS)
Zhang Aidong; Li Ju; Chen Fa; Sun Lingxia
2004-01-01
The three-dimensional ICT reconstruction method is a hot topic of recent ICT technology research. In this context, qualified visual three-dimensional ICT pictures are achieved through the accumulation of multiple two-dimensional images in order, combined with thresholding and linear interpolation. Images of the reconstructed pictures in different directions and at different positions are obtained by rotation and interception, respectively. This convenient and quick method is significantly instructive for more complicated three-dimensional reconstruction of ICT images. (authors)
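The stack-threshold-interpolate pipeline described in the two abstracts above can be sketched in a few lines. The following minimal numpy illustration (an assumption-laden sketch, not the authors' code) thresholds each 2D slice and inserts linearly interpolated slices between neighbours to build a denser 3D volume:

```python
import numpy as np

def stack_slices(slices, threshold, upsample=2):
    """Build a 3D volume from ordered 2D CT slices: zero out values below
    `threshold` in each slice, then insert (upsample - 1) linearly
    interpolated slices between each pair of neighbours."""
    thresh = [np.where(s >= threshold, s, 0.0) for s in slices]
    volume = [thresh[0]]
    for a, b in zip(thresh, thresh[1:]):
        for j in range(1, upsample):        # in-between interpolated slices
            t = j / upsample
            volume.append((1 - t) * a + t * b)
        volume.append(b)
    return np.stack(volume)

# Three constant toy slices stand in for real CT sections:
slices = [np.full((4, 4), v, dtype=float) for v in (0.0, 1.0, 2.0)]
vol = stack_slices(slices, threshold=0.5, upsample=2)
print(vol.shape)
```

With three input slices and one interpolated slice per gap, the result has five slices; rotation and interception of the resulting volume (as in the abstracts) would then be slicing operations on this array.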
Dimensional control of die castings
Karve, Aniruddha Ajit
The demand for net shape die castings, which require little or no machining, is steadily increasing. Stringent customer requirements are forcing die casters to deliver high quality castings in increasingly short lead times. Dimensional conformance to customer specifications is an inherent part of die casting quality. The dimensional attributes of a die casting are essentially dependent upon many factors--the quality of the die and the degree of control over the process variables being the two major sources of dimensional error in die castings. This study focused on investigating the nature and the causes of dimensional error in die castings. The two major components of dimensional error i.e., dimensional variability and die allowance were studied. The major effort of this study was to qualitatively and quantitatively study the effects of casting geometry and process variables on die casting dimensional variability and die allowance. This was accomplished by detailed dimensional data collection at production die casting sites. Robust feature characterization schemes were developed to describe complex casting geometry in quantitative terms. Empirical modeling was utilized to quantify the effects of the casting variables on dimensional variability and die allowance for die casting features. A number of casting geometry and process variables were found to affect dimensional variability in die castings. The dimensional variability was evaluated by comparisons with current published dimensional tolerance standards. The casting geometry was found to play a significant role in influencing the die allowance of the features measured. The predictive models developed for dimensional variability and die allowance were evaluated to test their effectiveness. Finally, the relative impact of all the components of dimensional error in die castings was put into perspective, and general guidelines for effective dimensional control in the die casting plant were laid out. The results of
A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem
Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša
2014-01-01
Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...
Wake Management Strategies for Reduction of Turbomachinery Fan Noise
Waitz, Ian A.
1998-01-01
The primary objective of our work was to evaluate and test several wake management schemes for the reduction of turbomachinery fan noise. Throughout the course of this work we relied on several tools. These include 1) Two-dimensional steady boundary-layer and wake analyses using MISES (a thin-shear layer Navier-Stokes code), 2) Two-dimensional unsteady wake-stator interaction simulations using UNSFLO, 3) Three-dimensional, steady Navier-Stokes rotor simulations using NEWT, 4) Internal blade passage design using quasi-one-dimensional passage flow models developed at MIT, 5) Acoustic modeling using LINSUB, 6) Acoustic modeling using VO72, 7) Experiments in a low-speed cascade wind-tunnel, and 8) ADP fan rig tests in the MIT Blowdown Compressor.
One-Dimensionality and Whiteness
Calderon, Dolores
2006-01-01
This article is a theoretical discussion that links Marcuse's concept of one-dimensional society and the Great Refusal with critical race theory in order to achieve a more robust interrogation of whiteness. The author argues that in the context of the United States, the one-dimensionality that Marcuse condemns in "One-Dimensional Man" is best…
One-dimensional structures behind twisted and untwisted super Yang-Mills theory
Baulieu, Laurent
2011-01-01
We give a one-dimensional interpretation of the four-dimensional twisted N=1 super Yang-Mills theory on a Kaehler manifold by performing an appropriate dimensional reduction. We prove the existence of a 6-generator superalgebra, which does not possess any invariant Lagrangian but contains two different subalgebras that determine the twisted and untwisted formulations of the N=1 super Yang-Mills theory.
One-dimensional structures behind twisted and untwisted super Yang-Mills theory
Energy Technology Data Exchange (ETDEWEB)
Baulieu, Laurent [CERN, Geneve (Switzerland). Theoretical Div.; Toppan, Francesco, E-mail: baulieu@lpthe.jussieu.f, E-mail: toppan@cbpf.b [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil)
2010-07-01
We give a one-dimensional interpretation of the four-dimensional twisted N = 1 super Yang-Mills theory on a Kaehler manifold by performing an appropriate dimensional reduction. We prove the existence of a 6-generator superalgebra, which does not possess any invariant Lagrangian but contains two different subalgebras that determine the twisted and untwisted formulations of the N = 1 super Yang-Mills theory. (author)
One-dimensional structures behind twisted and untwisted super Yang-Mills theory
International Nuclear Information System (INIS)
Baulieu, Laurent
2010-01-01
We give a one-dimensional interpretation of the four-dimensional twisted N = 1 super Yang-Mills theory on a Kaehler manifold by performing an appropriate dimensional reduction. We prove the existence of a 6-generator superalgebra, which does not possess any invariant Lagrangian but contains two different subalgebras that determine the twisted and untwisted formulations of the N = 1 super Yang-Mills theory. (author)
The one-loop Green's functions of dimensionally reduced gauge theories
International Nuclear Information System (INIS)
Ketov, S.V.; Prager, Y.S.
1988-01-01
The dimensional regularization technique as well as that by dimensional reduction is applied to the calculation of the regularized one-loop Green's functions in d_0-dimensional Yang-Mills theory with real massless scalars and spinors in arbitrary (real) representations of a gauge group G. As a particular example, the supersymmetrically regularized one-loop Green's functions of the N=4 supersymmetric Yang-Mills model are derived. (author). 17 refs
Reduction of dinitrogen ligands
International Nuclear Information System (INIS)
Richards, R.L.
1983-01-01
Processes of dinitrogen ligand reduction in complexes of transition metals are considered. The basic character of the dinitrogen ligand is underlined. Data on X-ray photoelectron spectroscopy and intensities of ν(N_2) bands in the IR spectra of nitrogen complexes are given. The mechanism of protonation of an edge dinitrogen ligand is discussed. Model systems and the mechanism of nitrogenase are compared
Infinitary Combinatory Reduction Systems
DEFF Research Database (Denmark)
Ketema, Jeroen; Simonsen, Jakob Grue
2011-01-01
We define infinitary Combinatory Reduction Systems (iCRSs), thus providing the first notion of infinitary higher-order rewriting. The systems defined are sufficiently general that ordinary infinitary term rewriting and infinitary λ-calculus are special cases. Furthermore, we generalise a number...
Galactorrhea after reduction mammaplasty
Schuurman, A. H.; Assies, J.; van der Horst, C. M.; Bos, K. E.
1993-01-01
A case of extremely painful swelling of the breasts following a reduction mammaplasty is presented. There were no signs of an abscess or hematoma. A milky white fluid due to galactorrhea was evacuated at operation, and further galactorrhea was inhibited by medication. The pathogenesis of
African Journals Online (AJOL)
to inhibit proliferation of HeLa cells was determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) dye reduction assay. Extracts from roots of Agathisanthemum bojeri, Synaptolepis kirkii and Zanha africana and the leaf extract of Physalis peruviana at a concentration of 10 pg/ml inhibited cell ...
Gerards, Marco Egbertus Theodorus; Kuper, Jan; Kokkeler, Andre B.J.; Molenkamp, Egbert
2009-01-01
Reduction circuits are used to reduce rows of floating point values to single values. Binary floating point operators often have deep pipelines, which may cause hazards when many consecutive rows have to be reduced. We present an algorithm by which any number of consecutive rows of arbitrary lengths
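The hazard described above arises because a deeply pipelined adder cannot feed its own result back on the next cycle. A common workaround, sketched below as a toy software model (an illustration of the idea, not the authors' hardware algorithm), is to keep as many independent partial sums as the pipeline is deep, so each partial accumulation never waits on an in-flight result:

```python
def pipelined_row_reduce(row, depth):
    """Toy model of reducing one row with an adder pipeline of the given
    depth: keep `depth` independent partial sums, feed operands to them
    round-robin, then combine the partials at the end. In hardware this
    avoids read-after-write hazards on the pipelined adder."""
    partials = [0.0] * depth
    for i, x in enumerate(row):
        partials[i % depth] += x   # each partial accumulates independently
    total = 0.0
    for p in partials:             # final combine of the partial sums
        total += p
    return total

print(pipelined_row_reduce([1.0, 2.0, 3.0, 4.0, 5.0], depth=3))
```

Note that this reorders the additions, so floating point results can differ slightly from strict left-to-right summation; reduction-circuit designs must decide whether that is acceptable.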
Dixon-Souriau equations from a 5-dimensional spinning particle in a Kaluza-Klein framework
International Nuclear Information System (INIS)
Cianfrani, F.; Milillo, I.; Montani, G.
2007-01-01
The dimensional reduction of Papapetrou equations is performed in a 5-dimensional Kaluza-Klein background and Dixon-Souriau results for the motion of a charged spinning body are obtained. The splitting provides an electric dipole moment, and, for elementary particles, the induced parity and time-reversal violations are explained
Reduced order methods for modeling and computational reduction
Rozza, Gianluigi
2014-01-01
This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics. Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...
Discovering Hidden Controlling Parameters using Data Analytics and Dimensional Analysis
Del Rosario, Zachary; Lee, Minyong; Iaccarino, Gianluca
2017-11-01
Dimensional Analysis is a powerful tool, one which takes a priori information and produces important simplifications. However, if this a priori information - the list of relevant parameters - is missing a relevant quantity, then the conclusions from Dimensional Analysis will be incorrect. In this work, we present novel conclusions in Dimensional Analysis, which provide a means to detect this failure mode of missing or hidden parameters. These results are based on a restated form of the Buckingham Pi theorem that reveals a ridge function structure underlying all dimensionless physical laws. We leverage this structure by constructing a hypothesis test based on sufficient dimension reduction, allowing for an experimental data-driven detection of hidden parameters. Both theory and examples will be presented, using classical turbulent pipe flow as the working example. Keywords: experimental techniques, dimensional analysis, lurking variables, hidden parameters, buckingham pi, data analysis. First author supported by the NSF GRFP under Grant Number DGE-114747.
Two-dimensional NMR spectrometry
International Nuclear Information System (INIS)
Farrar, T.C.
1987-01-01
This article is the second in a two-part series. In part one (ANALYTICAL CHEMISTRY, May 15) the authors discussed one-dimensional nuclear magnetic resonance (NMR) spectra and some relatively advanced nuclear spin gymnastics experiments that provide a capability for selective sensitivity enhancements. In this article, an overview and some applications of two-dimensional NMR experiments are presented. These powerful experiments are important complements to the one-dimensional experiments. As in the more sophisticated one-dimensional experiments, the two-dimensional experiments involve three distinct time periods: a preparation period, t₀; an evolution period, t₁; and a detection period, t₂.
Wang, Wei; Yang, Jiong
With the rapid growth of computational biology and e-commerce applications, high-dimensional data has become very common. Thus, mining high-dimensional data is an urgent problem of great practical importance. However, there are some unique challenges in mining data of high dimensions, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in the high-dimensional space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification. We will discuss how these methods deal with the challenges of high dimensionality.
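The second challenge, the meaningfulness of similarity in high dimensions, can be illustrated numerically (a small sketch written for this note, not taken from the chapter): as dimensionality grows, Euclidean distances between random points concentrate, so the gap between the nearest and farthest neighbor shrinks.

```python
import math
import random

def relative_contrast(dim, n_points=500, seed=0):
    """(max - min) / min of Euclidean distances from the origin to random
    points in the unit hypercube [0, 1]^dim.  As dim grows, distances
    concentrate and the contrast collapses, so "nearest" and "farthest"
    neighbors become nearly indistinguishable."""
    rng = random.Random(seed)
    dists = []
    for _ in range(n_points):
        point = [rng.random() for _ in range(dim)]
        dists.append(math.sqrt(sum(x * x for x in point)))
    return (max(dists) - min(dists)) / min(dists)

for dim in (2, 10, 100, 1000):
    print(dim, round(relative_contrast(dim), 3))
```

In low dimensions the contrast is large (nearest neighbors are meaningful); by a thousand dimensions it has collapsed to a small fraction, which is why distance-based mining techniques need special care in high-dimensional spaces.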
Chemical Reduction Synthesis of Iron Aluminum Powders
Zurita-Méndez, N. N.; la Torre, G. Carbajal-De; Ballesteros-Almanza, L.; Villagómez-Galindo, M.; Sánchez-Castillo, A.; Espinosa-Medina, M. A.
In this study, a chemical reduction synthesis method for iron aluminum (FeAl) nano-dimensional intermetallic powders is described. The process has two stages: a salt reduction and solvent evaporation by a heat treatment at 1100 °C. The precursors of the synthesis are ferric chloride, aluminum foil chips, a 75/25 (v/v) toluene/THF mixture, and concentrated hydrochloric acid as initiator of the reaction. The reaction time was 20 days; the product obtained was dried at 60 °C for 2 h and calcined at 400, 800, and 1100 °C for 4 h each. X-ray diffraction (XRD) and scanning electron microscopy (SEM) techniques were used to characterize and confirm the synthesis products. The morphological and chemical characterization of the nano-dimensional powders showed the formation of agglomerated particles in a size range of approximately 150 nm to 1.0 μm. The composition of the powders was identified as corundum (Al2O3), iron aluminide (FeAl3), and iron-aluminum oxide ((Fe0.53Al0.47)2O3) phases. The formation of oxide phases was attributed to reaction with free atmospheric oxygen during the synthesis and sintering steps, which reduced the concentration of the iron aluminum phase.
Coset space dimension reduction of gauge theories
International Nuclear Information System (INIS)
Farakos, K.; Kapetanakis, D.; Koutsoumbas, G.; Zoupanos, G.
1989-01-01
A very interesting approach in the attempts to unify all the interactions is to consider that unification takes place in higher than four dimensions. The most ambitious program, based on the old Kaluza-Klein idea, is not able to reproduce the low-energy chiral nature of the weak interactions. A suggested way out was the introduction of Yang-Mills fields in the higher dimensional theory. From the particle physics point of view, the most important question is how such a theory behaves in four dimensions and in particular at low energies. Therefore most of our efforts concern studies of the properties of an attractive scheme, the Coset-Space-Dimensional-Reduction (C.S.D.R.) scheme, which permits the study of the effective four dimensional theory coming from a gauge theory defined in higher dimensions. Here we summarize the C.S.D.R. procedure and the main theorems it obeys, and present a realistic model which is the result of the model building efforts that take into account all the C.S.D.R. properties. (orig./HSI)
Higher (odd) dimensional quantum Hall effect and extended dimensional hierarchy
Directory of Open Access Journals (Sweden)
Kazuki Hasebe
2017-07-01
We demonstrate a dimensional ladder of higher dimensional quantum Hall effects by exploiting quantum Hall effects on arbitrary odd dimensional spheres. Non-relativistic and relativistic Landau models are analyzed on S2k−1 in the SO(2k−1) monopole background. The total sub-band degeneracy of the odd dimensional lowest Landau level is shown to be equal to the winding number from the base manifold S2k−1 to the one-dimension-higher SO(2k) gauge group. Based on the chiral Hopf maps, we clarify the underlying quantum Nambu geometry for the odd dimensional quantum Hall effect, and the resulting quantum geometry is naturally embedded in a one-dimension-higher quantum geometry. An origin of such a dimensional ladder connecting even and odd dimensional quantum Hall effects is illuminated from the viewpoint of the spectral flow of the Atiyah–Patodi–Singer index theorem in differential topology. We also present a BF topological field theory as an effective field theory in which membranes with different dimensions undergo non-trivial linking in odd dimensional space. Finally, an extended version of the dimensional hierarchy for higher dimensional quantum Hall liquids is proposed, and its relationship to quantum anomaly and D-brane physics is discussed.
Javidi, Bahram; Andres, Pedro
2014-01-01
Provides a broad overview of advanced multidimensional imaging systems, with contributions from leading researchers in the field. Multi-dimensional Imaging takes the reader from the introductory concepts through to the latest applications of these techniques. The book is split into three parts covering 3D image capture, processing, visualization, and display, using (1) a multi-view approach and (2) a holographic approach, followed by a third part addressing other 3D systems approaches, applications, and signal processing for advanced 3D imaging. This book describes recent developments, as well as the prospects and
Dimensional analysis made simple
International Nuclear Information System (INIS)
Lira, Ignacio
2013-01-01
An inductive strategy is proposed for teaching dimensional analysis to second- or third-year students of physics, chemistry, or engineering. In this strategy, Buckingham's theorem is seen as a consequence and not as the starting point. In order to concentrate on the basics, the mathematics is kept as elementary as possible. Simple examples are suggested for classroom demonstrations of the power of the technique and others are put forward for homework or experimentation, but instructors are encouraged to produce examples of their own. (paper)
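Buckingham's theorem, which this strategy arrives at as a consequence, also admits a compact computational demonstration (an illustrative sketch written for this note, not from the article): the dimensionless groups of a set of variables span the rational null space of their dimension matrix. The classic simple-pendulum example, with period t, length l, gravitational acceleration g, and mass m, works well here.

```python
from fractions import Fraction

def nullspace(A):
    """Rational null-space basis of matrix A (a list of rows), computed
    by Gauss-Jordan elimination with exact Fraction arithmetic."""
    rows = [[Fraction(x) for x in row] for row in A]
    m, n = len(rows), len(rows[0])
    pivots, r = [], 0
    for c in range(n):
        pivot = next((i for i in range(r, m) if rows[i][c] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]
        for i in range(m):
            if i != r and rows[i][c] != 0:
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    basis = []
    for fc in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[fc] = Fraction(1)
        for i, pc in enumerate(pivots):
            v[pc] = -rows[i][fc]
        basis.append(v)
    return basis

# Dimension matrix: columns are the variables (t, l, g, m),
# rows are the exponents of the base dimensions (M, L, T)
A = [
    [0, 0, 0, 1],   # M: only the mass carries it
    [0, 1, 1, 0],   # L: length and gravity
    [1, 0, -2, 0],  # T: period and gravity
]
for v in nullspace(A):
    print(v)
```

The single basis vector has exponents (2, -1, 1, 0), i.e. the lone dimensionless group is t²g/l, and the zero exponent on m shows at a glance that the period cannot depend on the mass.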
Osserman, Robert
2011-01-01
The basic component of several-variable calculus, two-dimensional calculus is vital to mastery of the broader field. This extensive treatment of the subject offers the advantage of a thorough integration of linear algebra and materials, which aids readers in the development of geometric intuition. An introductory chapter presents background information on vectors in the plane, plane curves, and functions of two variables. Subsequent chapters address differentiation, transformations, and integration. Each chapter concludes with problem sets, and answers to selected exercises appear at the end o
Araujo, Vitor; Viana, Marcelo
2010-01-01
In this book, the authors present the elements of a general theory for flows on three-dimensional compact boundaryless manifolds, encompassing flows with equilibria accumulated by regular orbits. The book aims to provide a global perspective of this theory and make it easier for the reader to digest the growing literature on this subject. This is not the first book on the subject of dynamical systems, but there are distinct aspects which together make this book unique. Firstly, this book treats mostly continuous time dynamical systems, instead of its discrete counterpart, exhaustively treated
Two dimensional simplicial paths
International Nuclear Information System (INIS)
Piso, M.I.
1994-07-01
Paths on the R³ real Euclidean manifold are defined as 2-dimensional simplicial strips which are orbits of the action of a discrete one-parameter group. It is proven that there exists at least one embedding of R³ in the free Z-module generated by S²(x₀). The speed is defined as the simplicial derivative of the path. If mass is attached to the simplex, the free Lagrangian is proportional to the width of the path. In the continuum limit, the relativistic form of the Lagrangian is recovered. (author). 7 refs
Three dimensional system integration
Papanikolaou, Antonis; Radojcic, Riko
2010-01-01
Three-dimensional (3D) integrated circuit (IC) stacking is the next big step in electronic system integration. It enables packing more functionality, as well as integration of heterogeneous materials, devices, and signals, in the same space (volume). This results in consumer electronics (e.g., mobile, handheld devices) which can run more powerful applications, such as full-length movies and 3D games, with longer battery life. This technology is so promising that it is expected to be a mainstream technology a few years from now, less than 10-15 years from its original conception. To achieve thi