WorldWideScience

Sample records for high-dimensional feature space

  1. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Full Text Available Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and these methods cannot be applied directly to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by partitioning the redundant features. In our method, the redundancy of the features is used to divide the original feature space. Each generated feature subset is then used to train a support vector machine, and the results of the individual classifiers are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms the others.
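
The partition-then-vote scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses a random feature partition rather than a redundancy-based one, and a nearest-centroid base learner as a simple stand-in for the SVM.

```python
import numpy as np

def partitioned_ensemble_predict(X_train, y_train, X_test, n_parts=4, seed=0):
    """Majority-vote ensemble: each member sees one partition of the features."""
    rng = np.random.default_rng(seed)
    d = X_train.shape[1]
    parts = np.array_split(rng.permutation(d), n_parts)  # random feature partition
    classes = np.unique(y_train)
    votes = np.zeros((X_test.shape[0], len(classes)), dtype=int)
    for cols in parts:
        # nearest-centroid classifier trained on this feature subset only
        centroids = np.stack([X_train[y_train == c][:, cols].mean(axis=0)
                              for c in classes])
        dists = np.linalg.norm(X_test[:, cols][:, None, :] - centroids[None], axis=2)
        pred = dists.argmin(axis=1)
        votes[np.arange(len(pred)), pred] += 1
    return classes[votes.argmax(axis=1)]  # majority vote across members
```

Swapping the base learner for an SVM (and the random split for a redundancy-driven one) recovers the structure the abstract describes.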

  2. High Dimensional Classification Using Features Annealed Independence Rules.

    Science.gov (United States)

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is still poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing, due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is of paramount importance to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistic, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
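
The two-stage recipe behind FAIR-style classifiers, screening features by the two-sample t-statistic and then applying the independence (diagonal) rule, can be sketched as follows. This is an illustrative reconstruction, not the authors' code:

```python
import numpy as np

def fair_select(X, y, k):
    """Rank features by absolute two-sample t-statistic; keep the top k."""
    X0, X1 = X[y == 0], X[y == 1]
    se = np.sqrt(X0.var(0, ddof=1) / len(X0) + X1.var(0, ddof=1) / len(X1))
    t = np.abs(X0.mean(0) - X1.mean(0)) / (se + 1e-12)
    return np.argsort(t)[::-1][:k]

def independence_rule(X_train, y_train, X_test, features):
    """Diagonal (independence) discriminant on the selected features only."""
    Xs, Xt = X_train[:, features], X_test[:, features]
    m0 = Xs[y_train == 0].mean(0); m1 = Xs[y_train == 1].mean(0)
    v = Xs.var(0, ddof=1) + 1e-12          # diagonal variance estimates
    d0 = (((Xt - m0) ** 2) / v).sum(1)
    d1 = (((Xt - m1) ** 2) / v).sum(1)
    return (d1 < d0).astype(int)
```

Classifying with all features instead of the selected subset is exactly the regime where noise accumulation in the centroid estimates degrades accuracy.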

  3. Reinforcement learning on slow features of high-dimensional input streams.

    Directory of Open Access Journals (Sweden)

    Robert Legenstein

    Full Text Available Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA) network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
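
Linear slow feature analysis itself is compact: whiten the input, then take the directions along which the whitened signal changes most slowly. A minimal sketch (the paper uses a hierarchical SFA network on visual streams; this shows only the one-layer linear core):

```python
import numpy as np

def slow_feature_analysis(X, n_components=1):
    """Linear SFA: projections of a time series (rows = time) that vary slowest.

    Steps: center, whiten, then take the eigenvectors of the covariance of the
    temporal differences with the *smallest* eigenvalues.
    """
    X = X - X.mean(0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    W = evecs / np.sqrt(evals + 1e-12)      # whitening matrix
    Z = X @ W                                # whitened signal, cov(Z) ~ I
    dcov = np.cov(np.diff(Z, axis=0), rowvar=False)
    devals, devecs = np.linalg.eigh(dcov)    # ascending: slowest first
    return Z @ devecs[:, :n_components]
```

Applied to a mixture of a slow and several fast oscillations, the first SFA component recovers the slow source up to sign and scale.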

  4. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    Science.gov (United States)

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
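
The core of such models is the standard precision-weighted combination of Gaussian cues, which is easy to state in code. This is the textbook cue-integration rule, not the paper's full high-dimensional word-recognition model:

```python
def integrate_cues(mu_a, var_a, mu_v, var_v):
    """Optimal (precision-weighted) combination of two Gaussian cues.

    Each cue's reliability is its inverse variance; the combined estimate
    weights each cue by its relative reliability, and the combined variance
    is never larger than that of the better single cue.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return mu, var
```

Inverse effectiveness would predict the largest visual benefit at the noisiest auditory settings (largest `var_a`); the abstract's point is that once words live in a high-dimensional feature space, the benefit instead peaks at intermediate noise.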

  5. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    Directory of Open Access Journals (Sweden)

    Wei Ji Ma

    Full Text Available Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.

  6. AN EFFECTIVE MULTI-CLUSTERING ANONYMIZATION APPROACH USING DISCRETE COMPONENT TASK FOR NON-BINARY HIGH DIMENSIONAL DATA SPACES

    Directory of Open Access Journals (Sweden)

    L.V. Arun Shalin

    2016-01-01

    Full Text Available Clustering is a process of grouping elements together such that the elements assigned to a cluster are more similar to each other than to the remaining data points. Certain difficulties are ubiquitous when clustering high-dimensional data. Previous work on anonymization methods for high-dimensional data spaces failed to address the problem of dimensionality reduction for non-binary databases. In this work we study methods for dimensionality reduction for non-binary databases; analyzing the behavior of dimensionality reduction for non-binary databases yields a performance improvement with the help of tag-based features. An effective multi-clustering anonymization approach called Discrete Component Task Specific Multi-Clustering (DCTSM) is presented for dimensionality reduction on non-binary databases. We first analyze the attributes in the non-binary database, and cluster projection identifies the sparseness degree of the dimensions. Additionally, with the quantum distribution on the multi-cluster dimensions, a solution for attribute relevancy and redundancy on non-binary data spaces is provided, resulting in a performance improvement based on tag-based features. Multi-clustering tag-based feature reduction extracts individual features, which are then replaced by the equivalent feature clusters (i.e., tag clusters). During training, the DCTSM approach uses multi-clusters instead of individual tag features, and during decoding individual features are replaced by the corresponding multi-clusters. To measure the effectiveness of the method, experiments are conducted on an existing anonymization method for high-dimensional data spaces and compared with the DCTSM approach using the Statlog German Credit Data Set. Improved tag feature extraction and minimum error rate compared to conventional anonymization

  7. Emotion-based Music Retrieval on a Well-reduced Audio Feature Space

    DEFF Research Database (Denmark)

    Ruxanda, Maria Magdalena; Chua, Bee Yong; Nanopoulos, Alexandros

    2009-01-01

    Music expresses emotion. A number of audio extracted features have influence on the perceived emotional expression of music. These audio features generate a high-dimensional space, on which music similarity retrieval can be performed effectively, with respect to human perception of the music-emotion. However, real-time systems that retrieve music over large music databases can achieve an order-of-magnitude performance increase by applying multidimensional indexing over a dimensionally reduced audio feature space. To meet this performance achievement, in this paper extensive studies are conducted on a number of dimensionality reduction algorithms, including both classic and novel approaches. The paper clearly envisages which dimensionality reduction techniques on the considered audio feature space can preserve on average the accuracy of the emotion-based music retrieval.
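
As a concrete example of one classic technique from this family, PCA via the SVD reduces a feature space while reporting how much variance each retained dimension preserves. This is a generic sketch, not tied to the paper's specific audio features or to the particular algorithms it evaluates:

```python
import numpy as np

def pca_reduce(X, n_components):
    """PCA via SVD: project rows of X onto the directions of largest variance.

    Returns the reduced data and the fraction of total variance explained
    by each retained component.
    """
    Xc = X - X.mean(0)                                  # center the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)   # principal axes in Vt
    explained = (S ** 2) / (S ** 2).sum()
    return Xc @ Vt[:n_components].T, explained[:n_components]
```

The explained-variance ratio is the usual diagnostic for deciding how far the audio feature space can be reduced before retrieval accuracy suffers.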

  8. A Feature Subset Selection Method Based On High-Dimensional Mutual Information

    Directory of Open Access Journals (Sweden)

    Chee Keong Kwoh

    2011-04-01

    Full Text Available Feature selection is an important step in building accurate classifiers and provides a better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov blanket of Y. We show that in some cases it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form. In addition, an exhaustive search over all combinations of features is a prerequisite for finding the optimal feature subsets for classifying these kinds of data sets. We show that our approach outperforms existing filter feature subset selection methods for most of the 24 selected benchmark data sets.
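
The XOR relation is the canonical case where pairwise mutual information fails: each feature alone carries zero information about the class, yet the pair determines it completely, and I(X; Y) = H(Y) certifies X as a Markov blanket of Y. A small plug-in estimator for discrete data illustrates this:

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Plug-in Shannon entropy (bits) of a discrete label sequence."""
    n = len(labels)
    return -sum((c / n) * np.log2(c / n) for c in Counter(labels).values())

def mutual_information(features, y):
    """I(X; Y) = H(Y) - H(Y|X) for a discrete feature tuple X and class Y."""
    n = len(y)
    groups = {}
    for xi, yi in zip(map(tuple, features), y):
        groups.setdefault(xi, []).append(yi)
    h_y_given_x = sum((len(ys) / n) * entropy(ys) for ys in groups.values())
    return entropy(y) - h_y_given_x
```

On XOR data the joint MI hits the entropy of the class while every single-feature MI is zero, so no algebraic combination of the pairwise terms can reproduce the joint value.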

  9. Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.

    Science.gov (United States)

    Balfer, Jenny; Hu, Ye; Bajorath, Jürgen

    2014-08-01

    Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2013-01-01

    Full Text Available Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the number of feature combinations escalates exponentially with the number of features. Unfortunately, in data mining, as well as in other engineering applications and bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since exhaustively trying every possible combination of features by brute force is infeasible, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing Swarm Search over several high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experimental results show that Swarm Search is able to attain relatively low classification error rates without shrinking the size of the feature subset to its minimum.
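
The fitness-driven search over feature masks can be illustrated with the simplest possible stochastic optimizer, a random-mutation hill climber; the actual Swarm Search would plug in a population-based metaheuristic (e.g., PSO) and an arbitrary classifier as the fitness function. A sketch under those simplifications:

```python
import numpy as np

def stochastic_feature_search(X, y, fitness, n_iters=300, seed=0):
    """Random-mutation hill climbing over boolean feature masks.

    Flip one feature bit per iteration and keep the candidate mask whenever
    the fitness (any classifier score) does not decrease.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    mask = rng.random(d) < 0.5
    best = fitness(X[:, mask], y) if mask.any() else -np.inf
    for _ in range(n_iters):
        cand = mask.copy()
        cand[rng.integers(d)] ^= True          # mutate one bit
        if not cand.any():
            continue
        f = fitness(X[:, cand], y)
        if f >= best:
            mask, best = cand, f
    return mask, best
```

Any classifier's accuracy can serve as `fitness`, which is exactly the flexibility the abstract highlights.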

  11. Manifold learning to interpret JET high-dimensional operational space

    International Nuclear Information System (INIS)

    Cannas, B; Fanni, A; Pau, A; Sias, G; Murari, A

    2013-01-01

    In this paper, the problem of visualization and exploration of the JET high-dimensional operational space is considered. The data come from plasma discharges selected from JET campaigns from C15 (year 2005) up to C27 (year 2009). The aim is to learn the possible manifold structure embedded in the data and to create representations of the plasma parameters on low-dimensional maps, which are understandable and which preserve the essential properties of the original data. A crucial issue for the design of such mappings is the quality of the dataset. This paper reports the details of the criteria used to properly select suitable signals downloaded from the JET databases in order to obtain a dataset of reliable observations. Moreover, a statistical analysis is performed to recognize the presence of outliers. Finally, data reduction, based on clustering methods, is performed to select a limited and representative number of samples for the operational space mapping. The high-dimensional operational space of JET is mapped using a widely used manifold learning method, the self-organizing map. The results are compared with other data visualization methods. The obtained maps can be used to identify characteristic regions of the plasma scenario, making it possible to discriminate between regions with a high risk of disruption and those with a low risk. (paper)
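
A self-organizing map of the kind used for such operational-space mappings fits in a few lines of NumPy. This is a generic online SOM with a shrinking Gaussian neighborhood, not the JET analysis pipeline:

```python
import numpy as np

def train_som(X, grid=(5, 5), n_iters=1500, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal self-organizing map trained online on rows of X."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, X.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(n_iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((weights - x) ** 2).sum(1))   # best-matching unit
        frac = t / n_iters
        lr = lr0 * (1 - frac)                          # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5              # shrinking neighborhood
        dist2 = ((coords - coords[bmu]) ** 2).sum(1)
        nb = np.exp(-dist2 / (2 * sigma ** 2))
        weights += lr * nb[:, None] * (x - weights)
    return weights.reshape(h, w, -1)
```

After training, each sample is assigned to its best-matching unit, and unit-level statistics (here, disruption risk) can be painted onto the 2-D grid.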

  12. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    Science.gov (United States)

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods such as product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which makes feature selection a natural choice. We also show that many dimensions in the FV/VLAD are noise, and discarding them via feature selection is better than compressing them together with the useful dimensions. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection has achieved both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.

  13. On dimensional reduction over coset spaces

    International Nuclear Information System (INIS)

    Kapetanakis, D.; Zoupanos, G.

    1990-01-01

    Gauge theories defined in higher dimensions can be dimensionally reduced over coset spaces giving definite predictions for the resulting four-dimensional theory. We present the most interesting features of these theories as well as an attempt to construct a model with realistic low energy behaviour within this framework. (author)

  14. Similarity-dissimilarity plot for visualization of high dimensional data in biomedical pattern classification.

    Science.gov (United States)

    Arif, Muhammad

    2012-06-01

    In pattern classification problems, feature extraction is an important step, and the quality of the features in discriminating different classes plays an important role. In real life, pattern classification may require a high-dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we propose a Similarity-Dissimilarity plot which can project a high-dimensional space onto a two-dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The Similarity-Dissimilarity plot can reveal information about the amount of overlap between the features of different classes. Separable data points of different classes are also visible on the plot and can be classified correctly using an appropriate classifier; hence, approximate classification accuracy can be predicted. Moreover, it is possible to know with which class the misclassified data points will be confused by the classifier. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight the important characteristics of the proposed plot, and some real-life examples from biomedical data are used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.
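
One plausible construction of the two plot axes, assumed here since the paper defines its own similarity and dissimilarity measures, is the distance to the nearest same-class neighbour versus the distance to the nearest other-class neighbour:

```python
import numpy as np

def similarity_dissimilarity(X, y):
    """For each point return (distance to nearest same-class neighbour,
    distance to nearest other-class neighbour).

    Plotting the second axis against the first gives a 2-D view independent
    of the feature dimension: well-separated points sit above the diagonal,
    overlapping or misclassification-prone points at or below it.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)                      # exclude self-distance
    same = np.where(y[:, None] == y[None, :], D, np.inf).min(1)
    diff = np.where(y[:, None] != y[None, :], D, np.inf).min(1)
    return same, diff
```

For clearly separated classes every point has its same-class neighbour closer than any other-class point, regardless of how many feature dimensions were used.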

  15. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.

    Science.gov (United States)

    Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.

  16. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    Science.gov (United States)

    Taşkin Kaya, Gülşen

    2013-10-01

    Recently, earthquake damage assessment using satellite images has been a very popular research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images, depending on the algorithm used. However, extracting and evaluating textural information is generally a time-consuming process, especially for large earthquake-affected areas, due to the size of the VHR image. Therefore, in order to produce a quick damage map, the most useful features describing damage patterns need to be known in advance, as do the redundant features. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. Not only spectral information but also textural information was used during the classification. For the textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray-level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. The method called HDMR was recently proposed as an efficient tool to capture the input
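
A gray-level co-occurrence matrix and one Haralick feature (contrast) can be computed directly; this toy version handles a single displacement, whereas the study sweeps window sizes and directions:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy),
    plus the Haralick contrast feature sum_{i,j} (i-j)^2 * p(i,j)."""
    g = np.zeros((levels, levels))
    h, w = image.shape
    for i in range(h - dy):
        for j in range(w - dx):
            g[image[i, j], image[i + dy, j + dx]] += 1
    g /= g.sum()                                # normalize to probabilities
    idx = np.arange(levels)
    contrast = ((idx[:, None] - idx[None, :]) ** 2 * g).sum()
    return g, contrast
```

A uniform patch has zero contrast, while an alternating (checkerboard-like) patch, the kind of abrupt texture rubble produces at high resolution, scores high; other Haralick features (energy, homogeneity, correlation) follow from the same matrix.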

  17. Visual scan-path analysis with feature space transient fixation moments

    Science.gov (United States)

    Dempere-Marco, Laura; Hu, Xiao-Peng; Yang, Guang-Zhong

    2003-05-01

    The study of eye movements provides useful insight into the cognitive processes underlying visual search tasks. The analysis of the dynamics of eye movements has often been approached from a purely spatial perspective. In many cases, however, it may not be possible to define meaningful or consistent dynamics without considering the features underlying the scan paths. In this paper, the definition of the feature space has been attempted through the concept of visual similarity and non-linear low dimensional embedding, which defines a mapping from the image space into a low dimensional feature manifold that preserves the intrinsic similarity of image patterns. This has enabled the definition of perceptually meaningful features without the use of domain specific knowledge. Based on this, this paper introduces a new concept called Feature Space Transient Fixation Moments (TFM). The approach presented tackles the problem of feature space representation of visual search through the use of TFM. We demonstrate the practical values of this concept for characterizing the dynamics of eye movements in goal directed visual search tasks. We also illustrate how this model can be used to elucidate the fundamental steps involved in skilled search tasks through the evolution of transient fixation moments.

  18. Low-Dimensional Feature Representation for Instrument Identification

    Science.gov (United States)

    Ihara, Mizuki; Maeda, Shin-Ichi; Ikeda, Kazushi; Ishii, Shin

    For monophonic music instrument identification, various feature extraction and selection methods have been proposed. One issue in instrument identification is that the same spectrum is not always observed even for the same instrument, owing to differences in recording conditions. Therefore, it is important to find non-redundant, instrument-specific features that maintain the information essential for high-quality instrument identification, so that they can be applied to various instrumental music analyses. As such a dimensionality reduction method, the authors propose the use of linear projection methods: local Fisher discriminant analysis (LFDA) and LFDA combined with principal component analysis (PCA). After experimentally clarifying that raw power spectra are actually good for instrument classification, the authors reduced the feature dimensionality by LFDA or by PCA followed by LFDA (PCA-LFDA). The reduced features achieved reasonably high identification performance, comparable to or higher than that of the raw power spectra and that reported in other existing studies. These results demonstrate that LFDA and PCA-LFDA can successfully extract low-dimensional instrument features that maintain the characteristic information of the instruments.

  19. A selective overview of feature screening for ultrahigh-dimensional data.

    Science.gov (United States)

    Liu, JingYuan; Zhong, Wei; Li, RunZe

    2015-10-01

    High-dimensional data have frequently been collected in many scientific areas including genomewide association study, biomedical imaging, tomography, tumor classifications, and finance. Analysis of high-dimensional data poses many challenges for statisticians. Feature selection and variable selection are fundamental for high-dimensional data analysis. The sparsity principle, which assumes that only a small number of predictors contribute to the response, is frequently adopted and deemed useful in the analysis of high-dimensional data. Following this general principle, a large number of variable selection approaches via penalized least squares or likelihood have been developed in the recent literature to estimate a sparse model and select significant variables simultaneously. While the penalized variable selection methods have been successfully applied in many high-dimensional analyses, modern applications in areas such as genomics and proteomics push the dimensionality of data to an even larger scale, where the dimension of data may grow exponentially with the sample size. This has been called ultrahigh-dimensional data in the literature. This work aims to present a selective overview of feature screening procedures for ultrahigh-dimensional data. We focus on insights into how to construct marginal utilities for feature screening on specific models and motivation for the need of model-free feature screening procedures.
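
The simplest marginal utility is the absolute correlation between each predictor and the response, as in sure independence screening for the linear model. A sketch:

```python
import numpy as np

def sis_screen(X, y, k):
    """Sure independence screening: rank predictors by absolute marginal
    correlation with the response and keep the top k."""
    Xc = (X - X.mean(0)) / (X.std(0) + 1e-12)     # standardize predictors
    yc = (y - y.mean()) / (y.std() + 1e-12)
    score = np.abs(Xc.T @ yc) / len(y)            # |sample correlation|
    return np.argsort(score)[::-1][:k]
```

Screening reduces an ultrahigh dimension to a moderate one in a single O(nd) pass, after which a penalized method can be applied to the surviving predictors; model-free variants replace the correlation with a rank- or distance-based utility.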

  20. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei

    2010-07-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms an LSB-forest that has strong quality guarantees, but dramatically improves the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which defeats most of the existing solutions. We show that, using an LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
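
The bucketing idea behind LSH for Euclidean NN search, which the LSB-tree builds on, can be shown with a single hash table of quantized random projections. This toy version omits the B-tree layering, multiple tables, and the quality guarantees of the LSB-forest:

```python
import numpy as np

class HashTableLSH:
    """Toy locality-sensitive hashing for Euclidean NN: bucket points by
    quantized random projections, then scan only the query's bucket."""

    def __init__(self, dim, n_bits=8, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(size=(n_bits, dim))   # random projection directions
        self.b = rng.uniform(0, w, n_bits)        # random offsets
        self.w = w                                 # bucket width
        self.buckets = {}

    def _key(self, x):
        return tuple(np.floor((self.A @ x + self.b) / self.w).astype(int))

    def index(self, X):
        self.X = X
        for i, x in enumerate(X):
            self.buckets.setdefault(self._key(x), []).append(i)

    def query(self, q):
        cand = self.buckets.get(self._key(q), [])
        if not cand:
            return None                            # real LSH probes more buckets
        d = np.linalg.norm(self.X[cand] - q, axis=1)
        return cand[int(d.argmin())]
```

Nearby points are likely to share a bucket key, so a query scans a small candidate set instead of the whole dataset; production implementations add multiple tables or, as here in the abstract, a B-tree organization to trade space for accuracy.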

  1. On High Dimensional Searching Spaces and Learning Methods

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Choros, Kazimierz

    2017-01-01

    ...and similarity functions, and discuss the pros and cons of using each of them. Conventional similarity functions evaluate objects in the vector space. Contrarily, Weighted Feature Distance (WFD) functions compare data objects in both feature and vector spaces, preventing the system from being affected by some...

  2. Multiple-output support vector machine regression with feature selection for arousal/valence space emotion assessment.

    Science.gov (United States)

    Torres-Valencia, Cristian A; Álvarez, Mauricio A; Orozco-Gutiérrez, Alvaro A

    2014-01-01

    Human emotion recognition (HER) allows the assessment of an affective state of a subject. Until recently, such emotional states were described in terms of discrete emotions, like happiness or contempt. In order to cover a high range of emotions, researchers in the field have introduced different dimensional spaces for emotion description that allow the characterization of affective states in terms of several variables or dimensions that measure distinct aspects of the emotion. One of the most common of such dimensional spaces is the bidimensional Arousal/Valence space. To the best of our knowledge, all HER systems so far have modelled the dimensions in these spaces independently. In this paper, we study the effect of modelling the output dimensions simultaneously and show experimentally the advantages of modelling them in this way. We consider a multimodal approach by including features from the electroencephalogram and a few physiological signals. For modelling the multiple outputs, we employ a multiple-output regressor based on support vector machines. We also include a feature-selection stage developed within an embedded approach known as Recursive Feature Elimination (RFE), initially proposed for SVMs. The results show that several features can be eliminated using the multiple-output support vector regressor with RFE without affecting the performance of the regressor. From the analysis of the features selected in smaller subsets via RFE, it can be observed that the signals most informative for arousal and valence discrimination are the electroencephalogram (EEG), electrooculogram/electromyogram (EOG/EMG) and the galvanic skin response (GSR).
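
    The recursive elimination loop described above can be sketched as follows; for a self-contained illustration, an ordinary least-squares fit stands in for the multiple-output support vector regressor, and features are ranked by their aggregate weight across both outputs. The synthetic data and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 10

# Synthetic multi-output data: only features 0-3 drive the two outputs.
X = rng.normal(size=(n, d))
W_true = np.zeros((d, 2))
W_true[:4] = 1.0 + rng.random((4, 2))      # clearly nonzero weights
Y = X @ W_true + 0.01 * rng.normal(size=(n, 2))

def rfe_multioutput(X, Y, n_keep):
    """Recursive Feature Elimination for a multi-output linear model.

    At each step the model is refit on the surviving features and the
    feature with the smallest aggregate squared weight across all output
    dimensions (here: arousal and valence) is eliminated.
    """
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        W, *_ = np.linalg.lstsq(X[:, active], Y, rcond=None)
        scores = np.sum(W ** 2, axis=1)        # aggregate over outputs
        active.pop(int(np.argmin(scores)))     # drop the weakest feature
    return active

selected = rfe_multioutput(X, Y, n_keep=4)
```

    Ranking by the weight aggregated over outputs is what makes the elimination "multiple-output": a feature survives if it helps either dimension.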

  3. Feature-Space Clustering for fMRI Meta-Analysis

    DEFF Research Database (Denmark)

    Goutte, Cyril; Hansen, Lars Kai; Liptrot, Mathew G.

    2001-01-01

    MRI sequences containing several hundreds of images, it is sometimes necessary to invoke feature extraction to reduce the dimensionality of the data space. A second interesting application is in the meta-analysis of fMRI experiments, where features are obtained from a possibly large number of single......-voxel analyses. In particular this allows the checking of the differences and agreements between different methods of analysis. Both approaches are illustrated on an fMRI data set involving visual stimulation, and we show that the feature space clustering approach yields nontrivial results and, in particular......, shows interesting differences between individual voxel analyses performed with traditional methods. © 2001 Wiley-Liss, Inc....

  4. Weakly infinite-dimensional spaces

    International Nuclear Information System (INIS)

    Fedorchuk, Vitalii V

    2007-01-01

    In this survey article two new classes of spaces are considered: m-C-spaces and w-m-C-spaces, m=2,3,...,∞. They are intermediate between the class of weakly infinite-dimensional spaces in the Alexandroff sense and the class of C-spaces. The classes of 2-C-spaces and w-2-C-spaces coincide with the class of weakly infinite-dimensional spaces, while the compact ∞-C-spaces are exactly the C-compact spaces of Haver. The main results of the theory of weakly infinite-dimensional spaces, including classification via transfinite Lebesgue dimensions and Luzin-Sierpinski indices, extend to these new classes of spaces. Weak m-C-spaces are characterised by means of essential maps to Henderson's m-compacta. The existence of hereditarily m-strongly infinite-dimensional spaces is proved.

  5. High-dimensional free-space optical communications based on orbital angular momentum coding

    Science.gov (United States)

    Zou, Li; Gu, Xiaofan; Wang, Le

    2018-03-01

    In this paper, we propose a high-dimensional free-space optical communication scheme using orbital angular momentum (OAM) coding. In the scheme, the transmitter encodes N-bit information by using a spatial light modulator to convert a Gaussian beam into a superposition of N OAM modes and a Gaussian mode; the receiver decodes the information through an OAM mode analyser which consists of an MZ interferometer with a rotating Dove prism, a photoelectric detector and a computer carrying out the fast Fourier transform. The scheme realizes high-dimensional free-space optical communication and decodes the information quickly and accurately. We have verified the feasibility of the scheme by exploiting 8 (4) OAM modes and a Gaussian mode to implement a 256-ary (16-ary) coded free-space optical communication link to transmit a 256-gray-scale (16-gray-scale) picture. The results show that a zero bit error rate has been achieved.
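
    The coding idea (one OAM mode per bit, recovered by a Fourier transform over the azimuthal angle) can be illustrated with a simplified numerical sketch. The Gaussian carrier mode, the interferometer and all physical optics are omitted, so this is an idealized model rather than the authors' setup.

```python
import numpy as np

n_phi = 256
phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)

def encode(bits, l_values):
    """Superpose one OAM mode exp(i*l*phi) per '1' bit (azimuthal slice only)."""
    field = np.zeros(n_phi, dtype=complex)
    for bit, l in zip(bits, l_values):
        if bit:
            field += np.exp(1j * l * phi)
    return field

def decode(field, l_values):
    """Project onto each mode via an FFT over the azimuthal angle."""
    coeff = np.fft.fft(field) / n_phi       # coefficient of exp(i*l*phi) sits at index l
    return [int(abs(coeff[l % n_phi]) > 0.5) for l in l_values]

l_values = [1, 2, 3, 4, 5, 6, 7, 8]        # 8 modes -> one byte per symbol
bits = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = decode(encode(bits, l_values), l_values)
```

    Orthogonality of the exp(ilφ) modes is exactly what makes the FFT bins separate the bits cleanly.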

  6. Elucidating high-dimensional cancer hallmark annotation via enriched ontology.

    Science.gov (United States)

    Yan, Shankai; Wong, Ka-Chun

    2017-09-01

    Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features, is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. https://github.com/cskyan/chmannot. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Distribution of high-dimensional entanglement via an intra-city free-space link.

    Science.gov (United States)

    Steinlechner, Fabian; Ecker, Sebastian; Fink, Matthias; Liu, Bo; Bavaresco, Jessica; Huber, Marcus; Scheidl, Thomas; Ursin, Rupert

    2017-07-24

    Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links.

  8. A Generic multi-dimensional feature extraction method using multiobjective genetic programming.

    Science.gov (United States)

    Zhang, Yang; Rockett, Peter I

    2009-01-01

    In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically-founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At very worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain a good classification accuracy.

  9. Study on the construction of multi-dimensional Remote Sensing feature space for hydrological drought

    International Nuclear Information System (INIS)

    Xiang, Daxiang; Tan, Debao; Wen, Xiongfei; Shen, Shaohong; Li, Zhe; Cui, Yuanlai

    2014-01-01

    Hydrological drought refers to an abnormal water shortage caused by precipitation and surface water shortages or a groundwater imbalance. Hydrological drought is reflected in a drop in surface water, a decrease in vegetation productivity, an increase in the temperature difference between day and night, and so on. Remote sensing permits the observation of surface water, vegetation, temperature and other information from a macro perspective. This paper analyzes the correlation and differentiation of remote sensing and surface-measured indicators, after selecting and extracting a series of representative remote sensing parameters, such as the vegetation index, surface temperature and surface water from HJ-1A/B CCD/IRS data, according to the spectral characteristics of surface features in remote sensing imagery. Finally, a multi-dimensional remote sensing feature space for hydrological drought is built on an intelligent collaborative model. Further, for the Dongting Lake area, two drought events are analyzed to verify the multi-dimensional features, using remote sensing data from different phases together with field observation data. The experimental results show that multi-dimensional features provide a good method for analyzing hydrological drought.
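
    Of the remote sensing parameters mentioned, the vegetation index is the simplest to illustrate. A generic NDVI computation (not the paper's collaborative model; the reflectance values are made up) might look like:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)   # small epsilon guards division

# Toy reflectance values: vegetation reflects strongly in the near-infrared.
veg = float(ndvi(0.50, 0.08))     # healthy vegetation -> NDVI near 0.72
bare = float(ndvi(0.30, 0.25))    # bare or drought-stressed surface -> NDVI near 0.09
```

    In a drought analysis, a sustained drop of such an index over an area is one axis of the multi-dimensional feature space; surface temperature and surface water extent would supply the others.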

  10. Reducing the Complexity of Genetic Fuzzy Classifiers in Highly-Dimensional Classification Problems

    Directory of Open Access Journals (Sweden)

    Dimitris G. Stavrakoudis

    2012-04-01

    Full Text Available This paper introduces the Fast Iterative Rule-based Linguistic Classifier (FaIRLiC), a Genetic Fuzzy Rule-Based Classification System (GFRBCS) which aims to reduce the structural complexity of the resulting rule base, as well as the computational requirements of its learning algorithm, especially when dealing with high-dimensional feature spaces. The proposed methodology follows the principles of the iterative rule learning (IRL) approach, whereby a rule extraction algorithm (REA) is invoked in an iterative fashion, producing one fuzzy rule at a time. The REA is performed in two successive steps: the first selects the relevant features of the currently extracted rule, whereas the second decides the antecedent part of the fuzzy rule, using the previously selected subset of features. The performance of the classifier is finally optimized through a genetic tuning post-processing stage. Comparative results in a hyperspectral remote sensing classification task, as well as on 12 real-world classification datasets, indicate the effectiveness of the proposed methodology in generating high-performing and compact fuzzy rule-based classifiers, even for very high-dimensional feature spaces.

  11. High-dimensional structured light coding/decoding for free-space optical communications free of obstructions.

    Science.gov (United States)

    Du, Jing; Wang, Jian

    2015-11-01

    Bessel beams carrying orbital angular momentum (OAM) with helical phase fronts exp(ilφ) (l = 0, ±1, ±2, …), where φ is the azimuthal angle and l is the topological number, are orthogonal to each other. This feature of Bessel beams provides a new dimension for coding/decoding data information on the OAM state of light, and the theoretical infinity of the topological number enables high-dimensional structured light coding/decoding for free-space optical communications. Moreover, Bessel beams are nondiffracting beams with the ability to recover by themselves in the face of obstructions, which is important for free-space optical communications relying on line-of-sight operation. By utilizing the OAM and nondiffracting characteristics of Bessel beams, we experimentally demonstrate obstruction-free optical m-ary coding/decoding over a 12 m distance using visible Bessel beams in a free-space optical communication system. We also study the bit error rate (BER) performance of hexadecimal and 32-ary coding/decoding based on Bessel beams with different topological numbers. After receiving 500 symbols at the receiver side, a zero BER for hexadecimal coding/decoding is observed when the obstruction is placed along the propagation path of the light.

  12. TripAdvisor^{N-D}: A Tourism-Inspired High-Dimensional Space Exploration Framework with Overview and Detail.

    Science.gov (United States)

    Nam, Julia EunJu; Mueller, Klaus

    2013-02-01

    Gaining a true appreciation of high-dimensional space remains difficult since all of the existing high-dimensional space exploration techniques serialize the space travel in some way. This is not so foreign to us since we, when traveling, also experience the world in a serial fashion. But we typically have access to a map to help with positioning, orientation, navigation, and trip planning. Here, we propose a multivariate data exploration tool that compares high-dimensional space navigation with a sightseeing trip. It decomposes this activity into five major tasks: 1) Identify the sights: use a map to identify the sights of interest and their location; 2) Plan the trip: connect the sights of interest along a specifiable path; 3) Go on the trip: travel along the route; 4) Hop off the bus: experience the location, look around, zoom into detail; and 5) Orient and localize: regain bearings in the map. We describe intuitive and interactive tools for all of these tasks, both global navigation within the map and local exploration of the data distributions. For the latter, we describe a polygonal touchpad interface which enables users to smoothly tilt the projection plane in high-dimensional space to produce multivariate scatterplots that best convey the data relationships under investigation. Motion parallax and illustrative motion trails aid in the perception of these transient patterns. We describe the use of our system within two applications: 1) the exploratory discovery of data configurations that best fit a personal preference in the presence of tradeoffs and 2) interactive cluster analysis via cluster sculpting in N-D.
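
    The core geometric operation, projecting N-D data onto an interactively tilted 2-D plane, can be sketched as follows. The frame construction and tilt parameterization are our own simplification for illustration, not the paper's touchpad interface.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 7))              # some 7-dimensional dataset

def orthonormal_frame(d, seed=0):
    """A random 2-D orthonormal frame in d-space: the 'projection plane'."""
    g = np.random.default_rng(seed)
    B, _ = np.linalg.qr(g.normal(size=(d, 2)))
    return B

def tilt(B, dim, angle):
    """Tilt the first plane axis toward coordinate axis `dim` by `angle`
    radians, then re-orthonormalize (a crude analogue of the touchpad)."""
    B = B.copy()
    e = np.zeros(B.shape[0])
    e[dim] = 1.0
    B[:, 0] = np.cos(angle) * B[:, 0] + np.sin(angle) * e
    Q, _ = np.linalg.qr(B)                 # restore an orthonormal frame
    return Q

B0 = orthonormal_frame(7)
B1 = tilt(B0, dim=3, angle=0.3)
P0, P1 = X @ B0, X @ B1                    # two 2-D scatterplot views
```

    Animating the scatterplot between P0 and P1 is what produces the motion-parallax cue the abstract mentions.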

  13. Mining High-Dimensional Data

    Science.gov (United States)

    Wang, Wei; Yang, Jiong

    With the rapid growth of computational biology and e-commerce applications, high-dimensional data becomes very common. Thus, mining high-dimensional data is an urgent problem of great practical importance. However, there are some unique challenges for mining data of high dimensions, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in high-dimensional space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification. We discuss how these methods deal with the challenges of high dimensionality.

  14. Online prediction of respiratory motion: multidimensional processing with low-dimensional feature learning

    International Nuclear Information System (INIS)

    Ruan, Dan; Keall, Paul

    2010-01-01

    Accurate real-time prediction of respiratory motion is desirable for effective motion management in radiotherapy for lung tumor targets. Recently, nonparametric methods have been developed and their efficacy in predicting one-dimensional respiratory-type motion has been demonstrated. To exploit the correlation among the various coordinates of the moving target, it is natural to extend the 1D method to multidimensional processing. However, the amount of learning data required for such an extension grows exponentially with the dimensionality of the problem, a phenomenon known as the 'curse of dimensionality'. In this study, we investigate a multidimensional prediction scheme based on kernel density estimation (KDE) in an augmented covariate-response space. To alleviate the 'curse of dimensionality', we explore the intrinsic lower-dimensional manifold structure and utilize principal component analysis (PCA) to construct a proper low-dimensional feature space, where kernel density estimation is feasible with the limited training data. Interestingly, the construction of this lower-dimensional representation reveals a useful decomposition of the variations in respiratory motion into the contribution from semiperiodic dynamics and that from random noise, as it is only sensible to perform prediction with respect to the former. The dimension-reduction idea proposed in this work is closely related to feature extraction used in machine learning, particularly in support vector machines. This work points out a pathway for processing high-dimensional data with limited training instances, and this principle applies well beyond the problem of target-coordinate-based respiratory motion prediction. A natural extension is prediction based on image intensity directly, which we will investigate in a continuation of this work. We used 159 lung target motion traces obtained with a Synchrony respiratory tracking system. Prediction performance of the low-dimensional feature learning
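
    A rough sketch of the two-stage idea, PCA to a low-dimensional feature space followed by a kernel (Nadaraya-Watson style) estimate, on a synthetic respiratory-like trace. The lag, horizon, bandwidth and number of components are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "respiratory-like" target trace: a semiperiodic 1-D signal
# embedded in three correlated coordinates, plus measurement noise.
t = np.linspace(0, 20, 400)
s = np.sin(t) + 0.3 * np.sin(2 * t)
X = np.stack([s, 0.8 * s, -0.5 * s], axis=1) + 0.01 * rng.normal(size=(400, 3))

# Covariates: the last `lag` samples; response: position `horizon` steps ahead.
lag, horizon = 5, 3
past = np.stack([X[i - lag:i].ravel() for i in range(lag, len(X) - horizon)])
future = X[lag + horizon:]
mean = past.mean(axis=0)

# PCA: keep a few principal components as the low-dimensional feature space.
_, _, Vt = np.linalg.svd(past - mean, full_matrices=False)
z = (past - mean) @ Vt[:3].T

def kde_predict(x_window, bandwidth=0.2):
    """Kernel estimate in the PCA feature space: a Gaussian-kernel
    weighted average of the training responses."""
    z_new = (x_window.ravel() - mean) @ Vt[:3].T
    w = np.exp(-0.5 * np.sum(((z - z_new) / bandwidth) ** 2, axis=1))
    return (w[:, None] * future).sum(axis=0) / w.sum()

pred = kde_predict(X[95:100])              # predict X[103] from samples 95..99
err = np.linalg.norm(pred - X[103])
```

    Fitting the kernel weights in the 3-D PCA space, instead of the 15-D raw covariate space, is what keeps the estimate feasible with few training windows.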

  15. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2010-01-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii

  16. The formation method of the feature space for the identification of fatigued bills

    Science.gov (United States)

    Kang, Dongshik; Oshiro, Ayumu; Ozawa, Kenji; Mitsui, Ikugo

    2014-10-01

    Fatigued bills cause problems such as paper jams in bill handling machines. In the discrimination of fatigued bills using an acoustic signal, the variation of the observed bill sound is considered to be one of the causes of misclassification. Therefore, a technique is needed to make the classification of fatigued bills more efficient. In this paper, we propose an algorithm that extracts feature quantities of bill sounds from the acoustic signal using frequency differences, and carry out discrimination experiments on fatigued bills with a Support Vector Machine (SVM). The frequency-difference feature can represent how the frequency components of an acoustic signal vary with the degree of bill fatigue. The generalization performance of an SVM does not depend on the number of dimensions of the feature space, even in a high-dimensional feature space such as that of bill-acoustic signals. Furthermore, an SVM can induce an optimal classifier which considers combinations of features by virtue of polynomial kernel functions.

  17. Discovering highly informative feature set over high dimensions

    KAUST Repository

    Zhang, Chongsheng; Masseglia, Florent; Zhang, Xiangliang

    2012-01-01

    For many textual collections, the number of features is often overly large. These features can be very redundant; it is therefore desirable to have a small, succinct, yet highly informative collection of features that describes the key characteristics of a dataset. Information theory is one tool that allows us to obtain this feature collection. With this paper, we mainly contribute to improving the efficiency of the process of selecting the most informative feature set over high-dimensional unlabeled data. We propose a heuristic theory for informative feature set selection from high-dimensional data. Moreover, we design data structures that enable us to compute the entropies of the candidate feature sets efficiently. We also develop a simple pruning strategy that eliminates hopeless candidates at each forward selection step. We test our method through experiments on real-world data sets, showing that our proposal is very efficient. © 2012 IEEE.
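
    The greedy forward-selection-with-pruning loop described above might look like the following sketch. This is our own simplified reading with toy binary features; the paper's data structures for fast entropy computation are not reproduced.

```python
import numpy as np

def joint_entropy(cols):
    """Empirical joint entropy (bits) of one or more discrete feature columns."""
    _, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(4)
n = 500
f0 = rng.integers(0, 2, n)           # informative
f1 = rng.integers(0, 2, n)           # informative, independent of f0
f2 = f0.copy()                       # redundant duplicate of f0
f3 = np.zeros(n, dtype=int)          # constant: carries no information
X = [f0, f1, f2, f3]

def forward_select(X, k):
    """Greedily add the feature with the largest joint-entropy gain; prune
    'hopeless' candidates whose gain at the current step is (near) zero."""
    selected, candidates = [], list(range(len(X)))
    while len(selected) < k and candidates:
        base = joint_entropy([X[i] for i in selected]) if selected else 0.0
        gains = {i: joint_entropy([X[j] for j in selected] + [X[i]]) - base
                 for i in candidates}
        best = max(gains, key=gains.get)
        if gains[best] < 1e-9:
            break                    # nothing left adds information
        selected.append(best)
        candidates = [i for i in candidates if i != best and gains[i] > 1e-9]
    return selected

chosen = forward_select(X, k=3)      # ends with two features: f1 plus one of {f0, f2}
```

    The pruning step is the efficiency lever: the constant feature and, later, the duplicate are dropped from the candidate pool instead of being re-evaluated at every iteration.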

  19. Extended supersymmetry in four-dimensional Euclidean space

    International Nuclear Information System (INIS)

    McKeon, D.G.C.; Sherry, T.N.

    2000-01-01

    Since the generators of the two SU(2) groups which comprise SO(4) are not Hermitian conjugates of each other, the simplest supersymmetry algebra in four-dimensional Euclidean space more closely resembles the N=2 than the N=1 supersymmetry algebra in four-dimensional Minkowski space. An extended supersymmetry algebra in four-dimensional Euclidean space is considered in this paper; its structure resembles that of N=4 supersymmetry in four-dimensional Minkowski space. The relationship of this algebra to the algebra found by dimensionally reducing the N=1 supersymmetry algebra in ten-dimensional Euclidean space to four-dimensional Euclidean space is examined. The dimensional reduction of N=1 super Yang-Mills theory in ten-dimensional Minkowski space to four-dimensional Euclidean space is also considered

  20. Feature Space Dimensionality Reduction for Real-Time Vision-Based Food Inspection

    Directory of Open Access Journals (Sweden)

    Mai Moussa CHETIMA

    2009-03-01

    Full Text Available Machine vision solutions are becoming a standard for quality inspection in several manufacturing industries. In the processed-food industry, where the appearance attributes of the product are essential to customer satisfaction, visual inspection can be reliably achieved with machine vision. But such systems often involve the extraction of a larger number of features than actually needed to ensure proper quality control, making the process less efficient and difficult to tune. This work experiments with several feature selection techniques in order to reduce the number of attributes analyzed by a real-time vision-based food inspection system. Identifying and removing as much irrelevant and redundant information as possible reduces the dimensionality of the data and allows classification algorithms to operate faster. In some cases, classification accuracy can even be improved. Filter-based and wrapper-based feature selectors are experimentally evaluated on different bakery products to identify the best-performing approaches.

  1. A Novel Method Using Abstract Convex Underestimation in Ab-Initio Protein Structure Prediction for Guiding Search in Conformational Feature Space.

    Science.gov (United States)

    Hao, Xiao-Hu; Zhang, Gui-Jun; Zhou, Xiao-Gen; Yu, Xu-Feng

    2016-01-01

    To address the problem of searching the protein conformational space in ab-initio protein structure prediction, a novel method using abstract convex underestimation (ACUE), based on the framework of an evolutionary algorithm, is proposed. Computing such conformations, essential to associate structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. As a consequence, the dimension of the protein conformational space should be reduced to a proper level. In this paper, the original high-dimensional conformational space was converted into a feature space whose dimension is considerably reduced by a feature extraction technique, and the underestimate space could then be constructed according to abstract convex theory. Thus, the entropy effect caused by searching in the high-dimensional conformational space could be avoided through such a conversion. Tight lower-bound estimate information was obtained to guide the searching direction, and the invalid searching area, in which the global optimal solution is not located, could be eliminated in advance. Moreover, instead of expensively calculating the energy of conformations in the original conformational space, the estimate value is employed to judge whether a conformation is worth exploring, thereby lowering the computational cost and making the searching process more efficient. Additionally, fragment assembly and the Monte Carlo method are combined to generate a series of metastable conformations by sampling in the conformational space. The proposed method provides a novel technique to solve the searching problem of protein conformational space. Twenty small-to-medium structurally diverse proteins were tested, and the proposed ACUE method was compared with ItFix, HEA, Rosetta and the developed method LEDE without underestimate information. Test results show that the ACUE method can more rapidly and more

  2. Evaluation of aqueductal patency in patients with hydrocephalus: Three-dimensional high-sampling efficiency technique(SPACE) versus two-dimensional turbo spin echo at 3 Tesla

    International Nuclear Information System (INIS)

    Ucar, Murat; Guryildirim, Melike; Tokgoz, Nil; Kilic, Koray; Borcek, Alp; Oner, Yusuf; Akkan, Koray; Tali, Turgut

    2014-01-01

    To compare the accuracy of diagnosing aqueductal patency and image quality between a high-spatial-resolution three-dimensional (3D) high-sampling-efficiency technique (sampling perfection with application optimized contrast using different flip angle evolutions [SPACE]) and T2-weighted (T2W) two-dimensional (2D) turbo spin echo (TSE) at 3 T in patients with hydrocephalus. This retrospective study included 99 patients diagnosed with hydrocephalus. T2W 3D-SPACE was added to the routine sequences, which consisted of T2W 2D-TSE, 3D-constructive interference steady state (CISS), and cine phase-contrast MRI (PC-MRI). Two radiologists independently evaluated the patency of the cerebral aqueduct and the image quality on T2W 2D-TSE and T2W 3D-SPACE. PC-MRI and 3D-CISS were used as the references for aqueductal patency and image quality, respectively. Inter-observer agreement was calculated using kappa statistics. The evaluations of aqueductal patency by T2W 3D-SPACE and T2W 2D-TSE were in agreement with PC-MRI in 100% (99/99; sensitivity, 100% [83/83]; specificity, 100% [16/16]) and 83.8% (83/99; sensitivity, 80.7% [67/83]; specificity, 100% [16/16]) of cases, respectively (p < 0.001). There was no significant difference in image quality between T2W 2D-TSE and T2W 3D-SPACE (p = 0.056). The kappa values for inter-observer agreement were 0.714 for T2W 2D-TSE and 0.899 for T2W 3D-SPACE. Three-dimensional SPACE is superior to 2D-TSE for the evaluation of aqueductal patency in hydrocephalus. T2W 3D-SPACE may hold promise as a highly accurate alternative to PC-MRI for the physiological and morphological evaluation of aqueductal patency.
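
    The agreement statistics used above (sensitivity, specificity, Cohen's kappa) are straightforward to compute from confusion-matrix counts. A small sketch; the rater labels below are hypothetical, only the sensitivity/specificity counts mirror the style of the abstract.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' binary labels (1 = patent, 0 = occluded)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    p1a, p1b = sum(a) / n, sum(b) / n
    pe = p1a * p1b + (1 - p1a) * (1 - p1b)                # chance agreement
    return (po - pe) / (1 - pe)

# Counts in the style of the abstract: 83 patent aqueducts, 16 occluded,
# with 16 patent cases missed on one sequence.
sens, spec = sens_spec(tp=67, fn=16, tn=16, fp=0)

# Toy inter-observer labels (hypothetical, not the study's data):
rater1 = [1, 1, 1, 0, 0, 0, 1, 0]
rater2 = [1, 1, 0, 0, 0, 0, 1, 1]
kappa = cohens_kappa(rater1, rater2)
```

    Kappa corrects the raw agreement rate for the agreement two raters would reach by chance, which is why it is preferred over percent agreement for inter-observer studies.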

  4. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Hongchao Song

    2017-01-01

    Full Text Available Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute neighborhood distances between observations and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples become similar and each sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble of k-nearest neighbor graph (K-NNG) based anomaly detectors. Benefiting from its capacity for nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset, representing the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by combining all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves detection accuracy and reduces computational complexity.
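
    A compact sketch of the two-part design, with PCA standing in for the deep autoencoder and plain k-NN distances over random subsamples acting as the ensemble of detectors. All sizes, seeds and the outlier construction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Nominal data: 50-D observations that live near a 3-D subspace.
n, d, k = 300, 50, 3
Z = rng.normal(size=(n, k))
A = rng.normal(size=(k, d))
X = Z @ A + 0.05 * rng.normal(size=(n, d))
outlier = np.array([8.0, -8.0, 8.0]) @ A   # far outside the nominal latent range

# Step 1: compress. PCA is a linear stand-in for the deep autoencoder here.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
encode = lambda x: (x - mean) @ Vt[:k].T
E = encode(X)

# Step 2: an ensemble of k-NN detectors, each built on a random subsample.
def knn_score(train, e, n_neighbors=5):
    """Anomaly score: mean distance to the nearest training codes."""
    dist = np.sort(np.linalg.norm(train - e, axis=1))
    return dist[:n_neighbors].mean()

subsets = [rng.choice(n, size=100, replace=False) for _ in range(5)]

def score(x):
    e = encode(x)
    return float(np.mean([knn_score(E[s], e) for s in subsets]))

normal_scores = np.array([score(x) for x in X[:20]])
outlier_score = score(outlier)
```

    Scoring in the compact code space sidesteps the distance-concentration problem the abstract describes, and averaging over subsamples both stabilizes the score and caps the cost of each k-NN query.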

  5. Dimensional reduction from entanglement in Minkowski space

    International Nuclear Information System (INIS)

    Brustein, Ram; Yarom, Amos

    2005-01-01

    Using a quantum field theoretic setting, we present evidence for dimensional reduction of any sub-volume of Minkowski space. First, we show that correlation functions of a class of operators restricted to a sub-volume of D-dimensional Minkowski space scale as its surface area. A simple example of such area scaling is provided by the energy fluctuations of a free massless quantum field in its vacuum state. This is reminiscent of area scaling of entanglement entropy but applies to quantum expectation values in a pure state, rather than to statistical averages over a mixed state. We then show, in a specific case, that fluctuations in the bulk have a lower-dimensional representation in terms of a boundary theory at high temperature. (author)

  6. The literary uses of high-dimensional space

    Directory of Open Access Journals (Sweden)

    Ted Underwood

    2015-12-01

    Full Text Available Debates over “Big Data” shed more heat than light in the humanities, because the term ascribes new importance to statistical methods without explaining how those methods have changed. What we badly need instead is a conversation about the substantive innovations that have made statistical modeling useful for disciplines where, in the past, it truly wasn’t. These innovations are partly technical, but more fundamentally expressed in what Leo Breiman calls a new “culture” of statistical modeling. Where 20th-century methods often required humanists to squeeze our unstructured texts, sounds, or images into some special-purpose data model, new methods can handle unstructured evidence more directly by modeling it in a high-dimensional space. This opens a range of research opportunities that humanists have barely begun to discuss. To date, topic modeling has received most attention, but in the long run, supervised predictive models may be even more important. I sketch their potential by describing how Jordan Sellers and I have begun to model poetic distinction in the long 19th century—revealing an arc of gradual change much longer than received literary histories would lead us to expect.

  7. High-level intuitive features (HLIFs) for intuitive skin lesion description.

    Science.gov (United States)

    Amelard, Robert; Glaister, Jeffrey; Wong, Alexander; Clausi, David A

    2015-03-01

    A set of high-level intuitive features (HLIFs) is proposed to quantitatively describe melanoma in standard camera images. Melanoma is the deadliest form of skin cancer. With rising incidence rates and subjectivity in current clinical detection methods, there is a need for melanoma decision support systems. Feature extraction is a critical step in melanoma decision support systems. Existing feature sets for analyzing standard camera images are comprised of low-level features, which exist in high-dimensional feature spaces and limit the system's ability to convey intuitive diagnostic rationale. The proposed HLIFs were designed to model the ABCD criteria commonly used by dermatologists such that each HLIF represents a human-observable characteristic. As such, intuitive diagnostic rationale can be conveyed to the user. Experimental results show that concatenating the proposed HLIFs with a full low-level feature set increased classification accuracy, and that HLIFs were able to separate the data better than low-level features with statistical significance. An example of a graphical interface for providing intuitive rationale is given.
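To make the notion of an HLIF concrete, the sketch below computes one hedged asymmetry-style score (in the spirit of the "A" in the ABCD criteria) from a binary lesion mask: the mask is mirrored about its own vertical axis and the fraction of non-overlapping lesion pixels is reported. The function name and exact definition are assumptions for illustration, not the authors' published feature set.

```python
import numpy as np

def asymmetry_hlif(mask):
    """Hedged HLIF sketch: asymmetry of a binary lesion mask.

    Crops the mask to its bounding box, flips it left-right, and returns the
    fraction of lesion pixels that do not overlap their mirror image
    (0 = perfectly symmetric, approaching 1 = highly asymmetric)."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    crop = mask[np.ix_(rows.nonzero()[0], cols.nonzero()[0])]
    mirrored = crop[:, ::-1]
    overlap = np.logical_and(crop, mirrored).sum()
    return 1.0 - overlap / crop.sum()
```

A human-observable score like this is what lets a decision support system report "the lesion is strongly asymmetric" instead of an opaque coordinate in a high-dimensional low-level feature space.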

  8. Quantum magnification of classical sub-Planck phase space features

    International Nuclear Information System (INIS)

    Hensinger, W.K.; Heckenberg, N.; Rubinsztein-Dunlop, H.; Delande, D.

    2002-01-01

    Full text: To understand the relationship between quantum mechanics and classical physics, a crucial question to be answered is how distinct classical dynamical phase space features translate into the quantum picture. This problem becomes even more interesting if these phase space features occupy a volume much smaller than ℎ in a phase space spanned by two non-commuting variables such as position and momentum. The question of whether phase space structures in quantum mechanics associated with sub-Planck scales have physical signatures has recently evoked a lot of discussion. Here we will show that sub-Planck classical dynamical phase space structures, for example regions of regular motion, can give rise to states whose phase space representation is of size ℎ or larger. This is illustrated using period-1 regions of regular motion (modes of oscillatory motion of a particle in a modulated well) whose volume is distinctly smaller than Planck's constant. They are magnified in the quantum picture and appear as states whose phase space representation is of size ℎ or larger. Cold atoms provide an ideal test bed to probe such fundamental aspects of quantum and classical dynamics. In the experiment a Bose-Einstein condensate is loaded into a far-detuned optical lattice. The lattice depth is modulated, resulting in the emergence of regions of regular motion surrounded by chaotic motion in the phase space spanned by the position and momentum of the atoms along the standing wave. Sub-Planck-scale features in the classical phase space are magnified and appear as distinct broad peaks in the atomic momentum distribution. The corresponding quantum analysis shows states of size ℎ which can be associated with much smaller classical dynamical phase space features. This effect may be considered as the dynamical equivalent of the Goldstone and Jaffe theorem, which predicts the existence of at least one bound state at a bend in a two- or three-dimensional spatial potential.

  9. Six-dimensional real and reciprocal space small-angle X-ray scattering tomography.

    Science.gov (United States)

    Schaff, Florian; Bech, Martin; Zaslansky, Paul; Jud, Christoph; Liebi, Marianne; Guizar-Sicairos, Manuel; Pfeiffer, Franz

    2015-11-19

    When used in combination with raster scanning, small-angle X-ray scattering (SAXS) has proven to be a valuable imaging technique of the nanoscale, for example of bone, teeth and brain matter. Although two-dimensional projection imaging has been used to characterize various materials successfully, its three-dimensional extension, SAXS computed tomography, poses substantial challenges, which have yet to be overcome. Previous work using SAXS computed tomography was unable to preserve oriented SAXS signals during reconstruction. Here we present a solution to this problem and obtain a complete SAXS computed tomography, which preserves oriented scattering information. By introducing virtual tomography axes, we take advantage of the two-dimensional SAXS information recorded on an area detector and use it to reconstruct the full three-dimensional scattering distribution in reciprocal space for each voxel of the three-dimensional object in real space. The presented method could be of interest for a combined six-dimensional real and reciprocal space characterization of mesoscopic materials with hierarchically structured features with length scales ranging from a few nanometres to a few millimetres--for example, biomaterials such as bone or teeth, or functional materials such as fuel-cell or battery components.

  10. On the space dimensionality based on metrics

    International Nuclear Information System (INIS)

    Gorelik, G.E.

    1978-01-01

    A new approach to space-time dimensionality is suggested which makes it possible to account for a dimensionality that varies with the scale of the phenomenon. An attempt is made to give a definition of dimensionality that is equivalent to the conventional one for Euclidean spaces and manifolds. The conventional definition of manifold dimensionality is connected with the possibility of a homeomorphic mapping of Euclidean space onto some neighborhood of each point of the manifold.

  11. Inference for feature selection using the Lasso with high-dimensional data

    DEFF Research Database (Denmark)

    Brink-Jensen, Kasper; Ekstrøm, Claus Thorn

    2014-01-01

    Penalized regression models such as the Lasso have proved useful for variable selection in many fields - especially for situations with high-dimensional data where the number of predictors far exceeds the number of observations. These methods identify and rank variables of importance but do not generally provide any inference for the selected variables. Thus, the variables selected might be the "most important" but need not be significant. We propose a significance test for the selection found by the Lasso. We introduce a procedure that computes inference and p-values for features chosen by the Lasso. This method rephrases the null hypothesis and uses a randomization approach which ensures that the error rate is controlled even for small samples. We demonstrate the ability of the algorithm to compute p-values of the expected magnitude with simulated data using a multitude of scenarios...
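One simple randomization scheme in this spirit can be sketched as follows. This is a hedged stand-in, not the authors' exact procedure: a plain coordinate-descent Lasso is fitted, and a permutation null for the coefficient of a selected feature is built by refitting on label-permuted data; all names and the choice of null are assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent Lasso (columns of X assumed roughly standardized)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual for coord j
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

def permutation_pvalue(X, y, j, lam, n_perm=200, seed=0):
    """Hedged sketch of a randomization p-value for a Lasso-selected feature j:
    permute y, refit, and compare |beta_j| against its permutation null."""
    rng = np.random.default_rng(seed)
    obs = abs(lasso_cd(X, y, lam)[j])
    null = [abs(lasso_cd(X, rng.permutation(y), lam)[j]) for _ in range(n_perm)]
    return (1 + sum(b >= obs for b in null)) / (1 + n_perm)
```

Because the p-value is computed from a finite set of refits rather than an asymptotic distribution, the error rate stays controlled even for small samples, which is the key point of the abstract.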

  12. Four Dimensional Trace Space Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez, M.

    2005-02-10

    Future high energy colliders and FELs (Free Electron Lasers) such as the proposed LCLS (Linac Coherent Light Source) at SLAC require high brightness electron beams. In general a high brightness electron beam will contain a large number of electrons that occupy a short longitudinal duration and can be focused to a small transverse area while having small transverse divergences. Therefore the beam must have a high peak current and occupy small areas in transverse phase space and so have small transverse emittances. Additionally the beam should propagate at high energy and have a low energy spread to reduce chromatic effects. The requirements of the LCLS, for example, are pulses which contain 10^10 electrons in a temporal duration of 10 ps FWHM with projected normalized transverse emittances of 1π mm mrad [1]. Currently the most promising method of producing such a beam is the RF photoinjector. The GTF (Gun Test Facility) at SLAC was constructed to produce and characterize laser and electron beams which fulfill the LCLS requirements. Emittance measurements of the electron beam at the GTF contain evidence of strong coupling between the transverse dimensions of the beam. This thesis explores the effects of this coupling on the determination of the projected emittances of the electron beam. In the presence of such a coupling the projected normalized emittance is no longer a conserved quantity. The conserved quantity is the normalized full four dimensional phase space occupied by the beam. A method to determine the presence and evaluate the strength of the coupling in emittance measurements made in the laboratory is developed. A method to calculate the four dimensional volume the beam occupies in phase space using quantities available in the laboratory environment is also developed. Results of measurements made of the electron beam at the GTF that demonstrate these concepts are presented and discussed.

  13. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
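The (truncated) abstract concerns variable kernel density estimation, where each kernel is given its own bandwidth. A minimal 1-D sample-point sketch, with each data point's bandwidth set to its k-th-nearest-neighbour distance (one common heuristic, not necessarily the estimator studied in this work):

```python
import numpy as np

def variable_kde(data, query, k=10):
    """Sample-point variable-bandwidth Gaussian KDE (1-D sketch).

    Each data point gets its own bandwidth - its distance to its k-th
    nearest neighbour - so kernels widen in sparse regions of the sample."""
    data = np.asarray(data, float)
    d = np.abs(data[:, None] - data[None, :])
    h = np.sort(d, axis=1)[:, k]                  # per-point kNN bandwidth
    q = np.asarray(query, float)
    kern = np.exp(-0.5 * ((q[:, None] - data[None, :]) / h) ** 2) \
           / (h * np.sqrt(2 * np.pi))
    return kern.mean(axis=1)
```

The central difficulty the abstract points at is exactly the choice of these bandwidths: a fixed global h over-smooths dense regions and under-smooths sparse ones, and the effect worsens in high dimensions.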

  14. Guiding exploration in conformational feature space with Lipschitz underestimation for ab-initio protein structure prediction.

    Science.gov (United States)

    Hao, Xiaohu; Zhang, Guijun; Zhou, Xiaogen

    2018-04-01

    Computing conformations, which is essential for associating structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. Consequently, the dimension of the protein conformational space should be reduced to a proper level, and an effective exploration algorithm should be proposed. In this paper, a plug-in method for guiding exploration in conformational feature space with Lipschitz underestimation (LUE) for ab-initio protein structure prediction is proposed. The conformational space is first converted into an ultrafast shape recognition (USR) feature space. Based on the USR feature space, the conformational space can be further converted into an underestimation space according to Lipschitz estimation theory for guiding exploration. As a consequence of the use of the underestimation model, the tight lower-bound estimate can be used to guide exploration, invalid sampling areas can be eliminated in advance, and the number of energy function evaluations can be reduced. The proposed method provides a novel technique for exploring the protein conformational space. LUE is applied to the differential evolution (DE) algorithm and to the Metropolis Monte Carlo (MMC) algorithm available in Rosetta; when LUE is applied to DE and MMC, candidate conformations are screened by the underestimation method prior to energy calculation and selection. Further, LUE is compared with DE and MMC by testing on 15 small-to-medium structurally diverse proteins. Test results show that near-native protein structures with higher accuracy can be obtained more rapidly and efficiently with the use of LUE. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Online feature selection with streaming features.

    Science.gov (United States)

    Wu, Xindong; Yu, Kui; Ding, Wei; Wang, Hao; Zhu, Xingquan

    2013-05-01

    We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In this paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.
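The on-the-fly relevance/redundancy logic described above can be sketched with a simple correlation-based stand-in for the statistical tests used in OSFS. This is a hedged illustration of the control flow only (each arriving feature is either discarded as irrelevant, discarded as redundant, or kept); thresholds and names are assumptions.

```python
import numpy as np

def stream_select(feature_stream, y, rel_thresh=0.3, red_thresh=0.9):
    """Hedged sketch of online streaming feature selection: each arriving
    feature must pass a relevance test (correlation with the label) and a
    redundancy test (correlation with already-selected features)."""
    selected = []                                  # list of (index, column) pairs
    for i, x in enumerate(feature_stream):
        if abs(np.corrcoef(x, y)[0, 1]) < rel_thresh:
            continue                               # irrelevant: discard on arrival
        if any(abs(np.corrcoef(x, s)[0, 1]) > red_thresh for _, s in selected):
            continue                               # redundant with a kept feature
        selected.append((i, x))
    return [i for i, _ in selected]
```

The key property mirrored here is that decisions are made as features arrive, without ever holding the full feature space in memory.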

  16. Manifold Learning with Self-Organizing Mapping for Feature Extraction of Nonlinear Faults in Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Lin Liang

    2015-01-01

    Full Text Available A new method for automatically extracting low-dimensional features with a self-organizing mapping manifold is proposed for the detection of nonlinear faults in rotating machinery (such as rubbing or pedestal looseness). In the phase space reconstructed from a single vibration signal, the self-organizing map (SOM) with an expectation-maximization iteration algorithm is used to divide the local neighborhoods adaptively without manual intervention. After that, the local tangent space alignment algorithm is adopted to compress the high-dimensional phase space into a low-dimensional feature space. The proposed method takes advantage of manifold learning for low-dimensional feature extraction and of the adaptive neighborhood construction of the SOM, and can extract intrinsic fault features of interest in a two-dimensional projection space. To evaluate the performance of the proposed method, the Lorenz system was simulated and data from rotating machinery with nonlinear faults were obtained for test purposes. Compared with holospectrum approaches, the results reveal that the proposed method is superior in identifying faults and effective for rotating machinery condition monitoring.
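The phase space reconstruction step above (building a multidimensional state space from a single vibration signal) is commonly done by time-delay embedding; a minimal sketch, with the embedding dimension and delay as assumed parameters:

```python
import numpy as np

def delay_embed(x, dim=3, tau=2):
    """Reconstruct a phase space from a single signal by time-delay embedding:
    row t of the result is (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
```

The SOM neighborhood division and local tangent space alignment described in the abstract then operate on the rows of this embedded matrix rather than on the raw signal.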

  17. Teleportation schemes in infinite dimensional Hilbert spaces

    International Nuclear Information System (INIS)

    Fichtner, Karl-Heinz; Freudenberg, Wolfgang; Ohya, Masanori

    2005-01-01

    The success of quantum mechanics is due to the discovery that nature is described in infinite-dimensional Hilbert spaces, so that it is desirable to demonstrate the quantum teleportation process in a certain infinite-dimensional Hilbert space. We describe the teleportation process in an infinite-dimensional Hilbert space by giving simple examples.

  18. Discrete symmetries and coset space dimensional reduction

    International Nuclear Information System (INIS)

    Kapetanakis, D.; Zoupanos, G.

    1989-01-01

    We consider the discrete symmetries of all the six-dimensional coset spaces and we apply them in gauge theories defined in ten dimensions which are dimensionally reduced over these homogeneous spaces. Particular emphasis is given to the consequences of the discrete symmetries on the particle content as well as on the symmetry breaking a la Hosotani of the resulting four-dimensional theory. (orig.)

  19. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    Science.gov (United States)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
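The sparse-grid idea above can be made concrete by counting points: a regular sparse grid keeps only the hierarchical subgrids whose level multi-index l satisfies |l|_1 <= n + d - 1, reducing the point count from (2^n - 1)^d to O(2^n n^(d-1)). A small combinatorial sketch of the standard interior-point construction (not the authors' code):

```python
import itertools
from math import prod

def sparse_grid_size(n, d):
    """Interior points of a regular sparse grid of level n in d dimensions:
    sum, over level multi-indices l with |l|_1 <= n + d - 1, of the
    hierarchical subgrid sizes prod_i 2^(l_i - 1)."""
    return sum(prod(2 ** (li - 1) for li in l)
               for l in itertools.product(range(1, n + 1), repeat=d)
               if sum(l) <= n + d - 1)

def full_grid_size(n, d):
    """Interior points of the full tensor-product grid of level n."""
    return (2 ** n - 1) ** d
```

For d = 1 the two counts coincide; as d grows, the sparse grid avoids the exponential blow-up of the full tensor product, which is what makes the grid-based discretization in the abstract scale linearly in the number of data points rather than in the number of full-grid nodes.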

  20. Coset space dimensional reduction of gauge theories

    Energy Technology Data Exchange (ETDEWEB)

    Kapetanakis, D. (Physik Dept., Technische Univ. Muenchen, Garching (Germany)); Zoupanos, G. (CERN, Geneva (Switzerland))

    1992-10-01

    We review the attempts to construct unified theories defined in higher dimensions which are dimensionally reduced over coset spaces. We employ the coset space dimensional reduction scheme, which permits the detailed study of the resulting four-dimensional gauge theories. In the context of this scheme we present the difficulties and the suggested ways out in the attempts to describe the observed interactions in a realistic way. (orig.).

  2. Inferring biological tasks using Pareto analysis of high-dimensional data.

    Science.gov (United States)

    Hart, Yuval; Sheftel, Hila; Hausser, Jean; Szekely, Pablo; Ben-Moshe, Noa Bossel; Korem, Yael; Tendler, Avichai; Mayo, Avraham E; Alon, Uri

    2015-03-01

    We present the Pareto task inference method (ParTI; http://www.weizmann.ac.il/mcb/UriAlon/download/ParTI) for inferring biological tasks from high-dimensional biological data. Data are described as a polytope, and features maximally enriched closest to the vertices (or archetypes) allow identification of the tasks the vertices represent. We demonstrate that human breast tumors and mouse tissues are well described by tetrahedrons in gene expression space, with specific tumor types and biological functions enriched at each of the vertices, suggesting four key tasks.

  3. Fractal electrodynamics via non-integer dimensional space approach

    Science.gov (United States)

    Tarasov, Vasily E.

    2015-09-01

    Using the recently suggested vector calculus for non-integer dimensional space, we consider electrodynamics problems in the isotropic case. This calculus allows us to describe fractal media in the framework of continuum models with non-integer dimensional space. We consider electric and magnetic fields of fractal media with charges and currents in the framework of continuum models with non-integer dimensional spaces. Applications of the fractal Gauss's law, the fractal Ampere's circuital law, the fractal Poisson equation for the electric potential, and an equation for the fractal stream of charges are suggested. Lorentz invariance and the speed of light in fractal electrodynamics are discussed. An expression for the effective refractive index of non-integer dimensional space is suggested.

  4. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    Science.gov (United States)

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    A high-dimensional feature space generally degrades classification performance in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary-encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
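A binary-encoded genetic algorithm over feature masks, as described above, can be sketched as follows. This is a hedged toy version, not the authors' implementation: a nearest-centroid classifier stands in for whatever classifier is wrapped during training, and the GA operators (roulette selection, one-point crossover, bit-flip mutation) are conventional choices.

```python
import numpy as np

def centroid_accuracy(X, y, mask):
    """Training accuracy of a nearest-centroid classifier on the masked features."""
    if mask.sum() == 0:
        return 0.0
    Xm = X[:, mask.astype(bool)]
    c0, c1 = Xm[y == 0].mean(axis=0), Xm[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xm - c1, axis=1) < np.linalg.norm(Xm - c0, axis=1)).astype(int)
    return (pred == y).mean()

def gene_mask_ga(X, y, pop=20, gens=30, p_mut=0.05, seed=0):
    """Hedged sketch of gene masking: a binary GA evolves a feature mask that
    maximizes classifier accuracy; masked-out genes never reach the classifier."""
    rng = np.random.default_rng(seed)
    P = rng.integers(0, 2, size=(pop, X.shape[1]))          # random initial masks
    for _ in range(gens):
        fit = np.array([centroid_accuracy(X, y, m) for m in P])
        parents = P[rng.choice(pop, size=pop, p=fit / fit.sum())]
        cut = rng.integers(1, X.shape[1], size=pop)         # one-point crossover
        children = np.array([np.r_[parents[i, :c], parents[(i + 1) % pop, c:]]
                             for i, c in enumerate(cut)])
        flip = rng.random(children.shape) < p_mut           # bit-flip mutation
        P = np.where(flip, 1 - children, children)
    fit = np.array([centroid_accuracy(X, y, m) for m in P])
    return P[fit.argmax()]
```

The returned mask doubles as the interpretability output mentioned in the abstract: the set bits identify the features the classifier actually relied on.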

  5. On infinite-dimensional state spaces

    International Nuclear Information System (INIS)

    Fritz, Tobias

    2013-01-01

    It is well known that the canonical commutation relation [x, p] = i can be realized only on an infinite-dimensional Hilbert space. While any finite set of experimental data can also be explained in terms of a finite-dimensional Hilbert space by approximating the commutation relation, Occam's razor prefers the infinite-dimensional model in which [x, p] = i holds on the nose. One will necessarily have to make this kind of reasoning in any approach which tries to detect the infinite-dimensionality. One drawback of using the canonical commutation relation for this purpose is that it has unclear operational meaning. Here, we identify an operationally well-defined context from which an analogous conclusion can be drawn: if two unitary transformations U, V on a quantum system satisfy the relation V⁻¹U²V = U³, then finite-dimensionality entails the relation UV⁻¹UV = V⁻¹UVU; this implication strongly fails in some infinite-dimensional realizations. This is a result from combinatorial group theory for which we give a new proof. This proof adapts to the consideration of cases where the assumed relation V⁻¹U²V = U³ holds only up to ε and then yields a lower bound on the dimension.

  7. Using a Feature Subset Selection method and Support Vector Machine to address curse of dimensionality and redundancy in Hyperion hyperspectral data classification

    Directory of Open Access Journals (Sweden)

    Amir Salimi

    2018-04-01

    Full Text Available The curse of dimensionality resulting from insufficient training samples and redundancy is considered an important problem in the supervised classification of hyperspectral data. This problem can be handled by Feature Subset Selection (FSS) methods and the Support Vector Machine (SVM). FSS methods can manage the redundancy by removing redundant spectral bands. Moreover, kernel-based methods, especially the SVM, have a high ability to classify limited-sample data sets. This paper mainly aims to assess the capability of an FSS method and the SVM under curse-of-dimensionality conditions and to compare the results with the Artificial Neural Network (ANN), when they are used to classify alteration zones of the Hyperion hyperspectral image acquired from the largest Iranian porphyry copper complex. The results demonstrated that by decreasing the training samples, the accuracy of the SVM decreased by just 1.8%, while the accuracy of the ANN was reduced markedly, i.e. by 14.01%. In addition, a hybrid FSS was applied to reduce the dimension of the Hyperion data. Accordingly, of the 165 usable spectral bands of Hyperion, only 18 bands were selected as the most important and informative. Although this dimensionality reduction could not substantially improve the performance of the SVM, the ANN revealed a significant improvement in computational time and a slight enhancement in average accuracy. Therefore, the SVM, as a method with low sensitivity to the size of the training data set and the feature space, can be applied to curse-of-dimensionality problems. Also, FSS methods can improve the performance of non-kernel-based classifiers by eliminating redundant features. Keywords: Curse of dimensionality, Feature Subset Selection, Hydrothermal alteration, Hyperspectral, SVM

  8. Using High-Dimensional Image Models to Perform Highly Undetectable Steganography

    Science.gov (United States)

    Pevný, Tomáš; Filler, Tomáš; Bas, Patrick

    This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus be undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database, and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.

  9. Feature Import Vector Machine: A General Classifier with Flexible Feature Selection.

    Science.gov (United States)

    Ghosh, Samiran; Wang, Yazhen

    2015-02-01

    The support vector machine (SVM) and other reproducing kernel Hilbert space (RKHS) based classifier systems have been drawing much attention recently due to their robustness and generalization capability. The general theme is to construct classifiers based on the training data in a high-dimensional space by using all available dimensions. The SVM achieves huge data compression by selecting only the few observations which lie close to the boundary of the classifier function. However, when the number of observations is not very large (small n) but the number of dimensions/features is large (large p), it is not necessary that all available features are of equal importance in the classification context. Selecting a useful fraction of the available features may result in huge data compression. In this paper we propose an algorithmic approach by means of which such an optimal set of features can be selected. In short, we reverse the traditional sequential observation selection strategy of the SVM to one of sequential feature selection. To achieve this we have modified the solution proposed by Zhu and Hastie (2005) in the context of the import vector machine (IVM), to select an optimal sub-dimensional model to build the final classifier with sufficient accuracy.

  10. Individual-based models for adaptive diversification in high-dimensional phenotype spaces.

    Science.gov (United States)

    Ispolatov, Iaroslav; Madhok, Vaibhav; Doebeli, Michael

    2016-02-07

    Most theories of evolutionary diversification are based on equilibrium assumptions: they are either based on optimality arguments involving static fitness landscapes, or they assume that populations first evolve to an equilibrium state before diversification occurs, as exemplified by the concept of evolutionary branching points in adaptive dynamics theory. Recent results indicate that adaptive dynamics may often not converge to equilibrium points and instead generate complicated trajectories if evolution takes place in high-dimensional phenotype spaces. Even though some analytical results on diversification in complex phenotype spaces are available, to study this problem in general we need to reconstruct individual-based models from the adaptive dynamics generating the non-equilibrium dynamics. Here we first provide a method to construct individual-based models such that they faithfully reproduce the given adaptive dynamics attractor without diversification. We then show that a propensity to diversify can be introduced by adding Gaussian competition terms that generate frequency dependence while still preserving the same adaptive dynamics. For sufficiently strong competition, the disruptive selection generated by frequency-dependence overcomes the directional evolution along the selection gradient and leads to diversification in phenotypic directions that are orthogonal to the selection gradient. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. A high-order integral solver for scalar problems of diffraction by screens and apertures in three-dimensional space

    Energy Technology Data Exchange (ETDEWEB)

    Bruno, Oscar P., E-mail: obruno@caltech.edu; Lintner, Stéphane K.

    2013-11-01

    We present a novel methodology for the numerical solution of problems of diffraction by infinitely thin screens in three-dimensional space. Our approach relies on new integral formulations as well as associated high-order quadrature rules. The new integral formulations involve weighted versions of the classical integral operators related to the thin-screen Dirichlet and Neumann problems as well as a generalization to the open-surface problem of the classical Calderón formulae. The high-order quadrature rules we introduce for these operators, in turn, resolve the multiple Green function and edge singularities (which occur at arbitrarily close distances from each other, and which include weakly singular as well as hypersingular kernels) and thus give rise to super-algebraically fast convergence as the discretization sizes are increased. When used in conjunction with Krylov-subspace linear algebra solvers such as GMRES, the resulting solvers produce results of high accuracy in small numbers of iterations for low and high frequencies alike. We demonstrate our methodology with a variety of numerical results for screen and aperture problems at high frequencies—including simulation of classical experiments such as the diffraction by a circular disc (featuring in particular the famous Poisson spot), evaluation of interference fringes resulting from diffraction across two nearby circular apertures, as well as solution of problems of scattering by more complex geometries consisting of multiple scatterers and cavities.

  12. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data over which the approximated function is defined reside on a low-dimensional manifold, and in principle approximating the function over this manifold should improve approximation performance. It has been shown that projecting the data manifold into a lower-dimensional space, followed by neural network approximation of the function over this space, provides a more precise approximation than neural network approximation in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for generating the low-dimensional projection. We illustrate these results with the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
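
    The claim above — that approximating over a low-dimensional projection can beat working in the ambient space — can be illustrated with a deliberately simple stand-in (1-nearest-neighbor regression instead of a neural network; the data, noise dimensions, and seed below are invented for the sketch):

    ```python
    import random

    def one_nn_regress(train_x, train_y, q):
        """Predict with the target value of the Euclidean nearest training point."""
        i = min(range(len(train_x)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(train_x[j], q)))
        return train_y[i]

    rng = random.Random(1)
    f = lambda t: t * t                                       # target on a 1-D manifold
    embed = lambda t: [t] + [rng.uniform(-1, 1) for _ in range(9)]  # 9 noise dims

    train_t = [i / 19 for i in range(20)]
    train_x = [embed(t) for t in train_t]
    train_y = [f(t) for t in train_t]
    test_t = [rng.random() for _ in range(50)]

    # 1-NN error in the 10-D ambient space vs. on the 1-D projection (coordinate 0)
    amb_err = sum(abs(one_nn_regress(train_x, train_y, embed(t)) - f(t))
                  for t in test_t) / len(test_t)
    proj_err = sum(abs(one_nn_regress([[x[0]] for x in train_x], train_y, [t]) - f(t))
                   for t in test_t) / len(test_t)
    ```

    Because the noise coordinates dominate Euclidean distances in the ambient space, the projected regression recovers far more accurate neighbors even from the same sparse sample.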

  13. Data analysis in high-dimensional sparse spaces

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder

    classification techniques for high-dimensional problems are presented: Sparse discriminant analysis, sparse mixture discriminant analysis and orthogonality constrained support vector machines. The first two introduce sparseness to the well known linear and mixture discriminant analysis and thereby provide low...... are applied to classifications of fish species, ear canal impressions used in the hearing aid industry, microbiological fungi species, and various cancerous tissues and healthy tissues. In addition, novel applications of sparse regressions (also called the elastic net) to the medical, concrete, and food...

  14. Supporting Dynamic Quantization for High-Dimensional Data Analytics.

    Science.gov (United States)

    Guzun, Gheorghi; Canahuate, Guadalupe

    2017-05-01

    Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive exploration of high-dimensional data, indexing support for ad hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness in characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions.
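
    The record does not spell out its localized distance functions, but one generic form of the idea is to score each candidate using only the k dimensions in which it best matches the query (everything below is an illustration of that idea, not the QED scheme itself):

    ```python
    def localized_dist(q, x, k):
        """Distance computed over only the k dimensions where x best matches q."""
        diffs = sorted(abs(a - b) for a, b in zip(q, x))
        return sum(d * d for d in diffs[:k]) ** 0.5

    def knn(query, data, k_neighbors, k_dims):
        """k-nearest neighbors under the localized distance."""
        return sorted(range(len(data)),
                      key=lambda i: localized_dist(query, data[i], k_dims))[:k_neighbors]
    ```

    With k equal to the full dimensionality this reduces to the ordinary Euclidean distance; smaller k discounts the dimensions in which a point differs most from the query.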

  15. Three-dimensional oscillator and Coulomb systems reduced from Kaehler spaces

    International Nuclear Information System (INIS)

    Nersessian, Armen; Yeranyan, Armen

    2004-01-01

    We define the oscillator and Coulomb systems on four-dimensional spaces with U(2)-invariant Kaehler metric and perform their Hamiltonian reduction to the three-dimensional oscillator and Coulomb systems specified by the presence of Dirac monopoles. We find the Kaehler spaces with conic singularity, where the oscillator and Coulomb systems on three-dimensional sphere and two-sheet hyperboloid originate. Then we construct the superintegrable oscillator system on three-dimensional sphere and hyperboloid, coupled to a monopole, and find their four-dimensional origins. In the latter case the metric of configuration space is a non-Kaehler one. Finally, we extend these results to the family of Kaehler spaces with conic singularities

  16. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors...... the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets....... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including...

  17. Spinors and supersymmetry in four-dimensional Euclidean space

    International Nuclear Information System (INIS)

    McKeon, D.G.C.; Sherry, T.N.

    2001-01-01

    Spinors in four-dimensional Euclidean space are treated using the decomposition of the Euclidean space SO(4) symmetry group into SU(2)xSU(2). Both 2- and 4-spinor representations of this SO(4) symmetry group are shown to differ significantly from the corresponding spinor representations of the SO(3, 1) symmetry group in Minkowski space. The simplest self conjugate supersymmetry algebra allowed in four-dimensional Euclidean space is demonstrated to be an N=2 supersymmetry algebra which resembles the N=2 supersymmetry algebra in four-dimensional Minkowski space. The differences between the two supersymmetry algebras gives rise to different representations; in particular an analysis of the Clifford algebra structure shows that the momentum invariant is bounded above by the central charges in 4dE, while in 4dM the central charges bound the momentum invariant from below. Dimensional reduction of the N=1 SUSY algebra in six-dimensional Minkowski space (6dM) to 4dE reproduces our SUSY algebra in 4dE. This dimensional reduction can be used to introduce additional generators into the SUSY algebra in 4dE. Well known interpolating maps are used to relate the N=2 SUSY algebra in 4dE derived in this paper to the N=2 SUSY algebra in 4dM. The nature of the spinors in 4dE allows us to write an axially gauge invariant model which is shown to be both Hermitian and anomaly-free. No equivalent model exists in 4dM. Useful formulae in 4dE are collected together in two appendixes

  18. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    International Nuclear Information System (INIS)

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao

    2017-01-01

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem, and in the process introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
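
    The diffusion-map/GP pipeline above is involved; its final regression step can be caricatured by a kernel smoother that accepts any distance function. Below, a Euclidean placeholder stands in where the record would plug in the diffusion distance (a sketch of the "regression on a custom metric" idea, not the authors' Gaussian process procedure):

    ```python
    import math

    def kernel_regress(train_x, train_y, q, dist, h):
        """Nadaraya-Watson estimate: Gaussian weights on a pluggable distance."""
        w = [math.exp(-(dist(x, q) / h) ** 2) for x in train_x]
        return sum(wi * yi for wi, yi in zip(w, train_y)) / sum(w)

    def euclid(a, b):
        """Placeholder metric; the record would use a diffusion distance here."""
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    ```

    Swapping `euclid` for a manifold-aware distance changes which training points are treated as "close", which is exactly the leverage the diffusion construction provides.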

  19. Two-dimensional wavelet transform feature extraction for porous silicon chemical sensors.

    Science.gov (United States)

    Murguía, José S; Vergara, Alexander; Vargas-Olmos, Cecilia; Wong, Travis J; Fonollosa, Jordi; Huerta, Ramón

    2013-06-27

    Designing reliable, fast-responding, highly sensitive, and low-power-consuming chemo-sensory systems has long been a major goal in chemo-sensing. This goal, however, presents a difficult challenge because chemo-sensory detectors exhibiting all of these ideal characteristics remain largely unrealizable to date. This paper presents a unique perspective on capturing more in-depth insights into the physicochemical interactions of two distinct, selectively chemically modified porous silicon (pSi) film-based optical gas sensors by implementing an innovative signal-processing methodology, namely the two-dimensional discrete wavelet transform. Specifically, the method consists of using the two-dimensional discrete wavelet transform as a feature extraction method to capture the non-stationary behavior of the bi-dimensional pSi rugate sensor response. Utilizing a comprehensive set of measurements collected from each of the aforementioned optically based chemical sensors, we evaluate the significance of our approach on a complex, six-dimensional chemical analyte discrimination/quantification task. Because bi-dimensional aspects naturally govern the optical sensor response to chemical analytes, our findings provide evidence that the proposed feature extraction strategy may be a valuable tool for deepening our understanding of the performance of optically based chemical sensors, as well as an important step toward their implementation in more realistic chemo-sensing applications.
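
    A one-level 2-D Haar transform is the simplest instance of the two-dimensional discrete wavelet transform used above as a feature extractor. A minimal sketch (it assumes even image dimensions; the record does not say which wavelet it uses, and subband naming conventions vary):

    ```python
    def haar2d(img):
        """One level of the 2-D Haar transform; returns (LL, LH, HL, HH) subbands."""
        h, w = len(img), len(img[0])
        LL, LH, HL, HH = [[[0.0] * (w // 2) for _ in range(h // 2)] for _ in range(4)]
        for i in range(0, h, 2):
            for j in range(0, w, 2):
                a, b = img[i][j], img[i][j + 1]
                c, d = img[i + 1][j], img[i + 1][j + 1]
                LL[i // 2][j // 2] = (a + b + c + d) / 4  # local average
                LH[i // 2][j // 2] = (a - b + c - d) / 4  # horizontal detail
                HL[i // 2][j // 2] = (a + b - c - d) / 4  # vertical detail
                HH[i // 2][j // 2] = (a - b - c + d) / 4  # diagonal detail
        return LL, LH, HL, HH

    def subband_energy(band):
        """A simple scalar feature per subband."""
        return sum(v * v for row in band for v in row)
    ```

    Subband energies (or other statistics of the coefficients) then serve as compact features describing the non-stationary 2-D sensor response.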

  20. Topology as fluid geometry two-dimensional spaces, volume 2

    CERN Document Server

    Cannon, James W

    2017-01-01

    This is the second of a three volume collection devoted to the geometry, topology, and curvature of 2-dimensional spaces. The collection provides a guided tour through a wide range of topics by one of the twentieth century's masters of geometric topology. The books are accessible to college and graduate students and provide perspective and insight to mathematicians at all levels who are interested in geometry and topology. The second volume deals with the topology of 2-dimensional spaces. The attempts encountered in Volume 1 to understand length and area in the plane lead to examples most easily described by the methods of topology (fluid geometry): finite curves of infinite length, 1-dimensional curves of positive area, space-filling curves (Peano curves), 0-dimensional subsets of the plane through which no straight path can pass (Cantor sets), etc. Volume 2 describes such sets. All of the standard topological results about 2-dimensional spaces are then proved, such as the Fundamental Theorem of Algebra (two...

  1. Two-dimensional black holes and non-commutative spaces

    International Nuclear Information System (INIS)

    Sadeghi, J.

    2008-01-01

    We study the effects of non-commutative spaces on two-dimensional black hole. The event horizon of two-dimensional black hole is obtained in non-commutative space up to second order of perturbative calculations. A lower limit for the non-commutativity parameter is also obtained. The observer in that limit in contrast to commutative case see two horizon

  2. Anisotropic fractal media by vector calculus in non-integer dimensional space

    Energy Technology Data Exchange (ETDEWEB)

    Tarasov, Vasily E., E-mail: tarasov@theory.sinp.msu.ru [Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University, Moscow 119991 (Russian Federation)

    2014-08-15

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of the fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. In this paper differential operators of first and second orders for fractional space and non-integer dimensional space are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. Non-integer dimensional space that is product of spaces with different dimensions allows us to give continuum models for anisotropic type of the media. The Poisson's equation for fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of suggested generalization of vector calculus for anisotropic fractal materials and media.

  3. Anisotropic fractal media by vector calculus in non-integer dimensional space

    Science.gov (United States)

    Tarasov, Vasily E.

    2014-08-01

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of the fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. In this paper differential operators of first and second orders for fractional space and non-integer dimensional space are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. Non-integer dimensional space that is product of spaces with different dimensions allows us to give continuum models for anisotropic type of the media. The Poisson's equation for fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of suggested generalization of vector calculus for anisotropic fractal materials and media.

  4. Anisotropic fractal media by vector calculus in non-integer dimensional space

    International Nuclear Information System (INIS)

    Tarasov, Vasily E.

    2014-01-01

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of the fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. In this paper differential operators of first and second orders for fractional space and non-integer dimensional space are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. Non-integer dimensional space that is product of spaces with different dimensions allows us to give continuum models for anisotropic type of the media. The Poisson's equation for fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of suggested generalization of vector calculus for anisotropic fractal materials and media

  5. Oversampling the Minority Class in the Feature Space.

    Science.gov (United States)

    Perez-Ortiz, Maria; Gutierrez, Pedro Antonio; Tino, Peter; Hervas-Martinez, Cesar

    2016-09-01

    The imbalanced nature of some real-world data is one of the current challenges for machine learning researchers. One common approach oversamples the minority class through convex combination of its patterns. We explore the general idea of synthetic oversampling in the feature space induced by a kernel function (as opposed to input space). If the kernel function matches the underlying problem, the classes will be linearly separable and synthetically generated patterns will lie on the minority class region. Since the feature space is not directly accessible, we use the empirical feature space (EFS) (a Euclidean space isomorphic to the feature space) for oversampling purposes. The proposed method is framed in the context of support vector machines, where the imbalanced data sets can pose a serious hindrance. The idea is investigated in three scenarios: 1) oversampling in the full and reduced-rank EFSs; 2) a kernel learning technique maximizing the data class separation to study the influence of the feature space structure (implicitly defined by the kernel function); and 3) a unified framework for preferential oversampling that spans some of the previous approaches in the literature. We support our investigation with extensive experiments over 50 imbalanced data sets.
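
    The paper performs its oversampling in the empirical feature space; the underlying convex-combination step is shown here in input space for simplicity (a SMOTE-like sketch of the common approach the abstract mentions, not the authors' EFS method):

    ```python
    import random

    def oversample(minority, n_new, rng=None):
        """Synthesize n_new points as convex combinations of random minority pairs."""
        rng = rng or random.Random(0)
        out = []
        for _ in range(n_new):
            a, b = rng.sample(minority, 2)
            lam = rng.random()
            out.append([lam * u + (1 - lam) * v for u, v in zip(a, b)])
        return out
    ```

    Each synthetic pattern lies on the segment between two minority patterns, so it stays inside the minority class's convex hull; the paper's contribution is to do this in the kernel-induced space, where that hull better matches the class region.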

  6. Dimensional regularization in configuration space

    International Nuclear Information System (INIS)

    Bollini, C.G.; Giambiagi, J.J.

    1995-09-01

    Dimensional regularization is introduced in configuration space by Fourier transforming the perturbative momentum-space Green functions in D dimensions. For this transformation Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs
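
    The Bochner-theorem step mentioned above reduces the D-dimensional Fourier transform of a radial function to a one-dimensional Hankel-type integral; a standard form of that reduction (stated here for context, not taken from the record itself) is

    ```latex
    \tilde{f}(k) \;=\; \frac{(2\pi)^{D/2}}{k^{\,D/2-1}} \int_0^\infty f(r)\, J_{D/2-1}(kr)\, r^{D/2}\, dr ,
    ```

    where J_ν is the Bessel function of the first kind. Since D enters only as a parameter of the Bessel order and the power of r, the transform can be continued to complex dimension, which is what lets the momentum-space Green functions be carried into configuration space without introducing auxiliary Feynman or Bogoliubov-Shirkov parameters.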

  7. Superintegrability in two-dimensional Euclidean space and associated polynomial solutions

    International Nuclear Information System (INIS)

    Kalnins, E.G.; Miller, W. Jr; Pogosyan, G.S.

    1996-01-01

    In this work we examine the basis functions for those classical and quantum mechanical systems in two dimensions which admit separation of variables in at least two coordinate systems. We do this for the corresponding systems defined in Euclidean space and on the two-dimensional sphere. We present all of these cases from a unified point of view. In particular, all of the spectral functions that arise via variable separation have their essential features expressed in terms of their zeros. The principal new results are the details of the polynomial bases for each of the nonsubgroup cases, not just the subgroup Cartesian and polar coordinate cases, and the details of the structure of the quadratic algebras. We also study the polynomial eigenfunctions in elliptic coordinates of the N-dimensional isotropic quantum oscillator. 28 refs., 1 tab

  8. Restoring the Generalizability of SVM Based Decoding in High Dimensional Neuroimage Data

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Variance inflation is caused by a mismatch between linear projections of test and training data when projections are estimated on training sets smaller than the dimensionality of the feature space. We demonstrate that variance inflation can lead to an increased neuroimage decoding error rate...

  9. The use of virtual reality to reimagine two-dimensional representations of three-dimensional spaces

    Science.gov (United States)

    Fath, Elaine

    2015-03-01

    A familiar realm in the world of two-dimensional art is the craft of taking a flat canvas and creating, through color, size, and perspective, the illusion of a three-dimensional space. Using well-explored tricks of logic and sight, impossible landscapes such as those by surrealists de Chirico or Salvador Dalí seem to be windows into new and incredible spaces which appear to be simultaneously feasible and utterly nonsensical. As real-time 3D imaging becomes increasingly prevalent as an artistic medium, this process takes on an additional layer of depth: no longer is two-dimensional space restricted to strategies of light, color, line and geometry to create the impression of a three-dimensional space. A digital interactive environment is a space laid out in three dimensions, allowing the user to explore impossible environments in a way that feels very real. In this project, surrealist two-dimensional art was researched and reimagined: what would stepping into a de Chirico or a Magritte look and feel like, if the depth and distance created by light and geometry were not simply single-perspective illusions, but fully formed and explorable spaces? 3D environment-building software is allowing us to step into these impossible spaces in ways that 2D representations leave us yearning for. This art project explores what we gain--and what gets left behind--when these impossible spaces become doors, rather than windows. Using sketching, Maya 3D rendering software, and the Unity Engine, surrealist art was reimagined as a fully navigable real-time digital environment. The surrealist movement and its key artists were researched for their use of color, geometry, texture, and space and how these elements contributed to their work as a whole, which often conveys feelings of unexpectedness or uneasiness. The end goal was to preserve these feelings while allowing the viewer to actively engage with the space.

  10. We live in the quantum 4-dimensional Minkowski space-time

    OpenAIRE

    Hwang, W-Y. Pauchy

    2015-01-01

    We try to define "our world" by stating that "we live in the quantum 4-dimensional Minkowski space-time with the force-field gauge group $SU_c(3) \times SU_L(2) \times U(1) \times SU_f(3)$ built in from the outset". We begin by explaining what "space" and "time" mean for us - the 4-dimensional Minkowski space-time - then proceed to the quantum 4-dimensional Minkowski space-time. In our world there are fields, or point-like particles. Particle physics is described by the so-called ...

  11. Multiple-canister flow and transport code in 2-dimensional space. MCFT2D: user's manual

    International Nuclear Information System (INIS)

    Lim, Doo-Hyun

    2006-03-01

    A two-dimensional numerical code, MCFT2D (Multiple-Canister Flow and Transport code in 2-Dimensional space), has been developed for groundwater flow and radionuclide transport analyses in a water-saturated high-level radioactive waste (HLW) repository with multiple canisters. A multiple-canister configuration and a non-uniform flow field of the host rock are incorporated in the MCFT2D code, so the effects of a heterogeneous flow field of the host rock on migration of nuclides can be investigated. MCFT2D makes it possible to account for varying degrees of dependency of the canister configuration in nuclide migration in a water-saturated HLW repository, whereas previous studies assumed this dependency to be either complete independence or perfect dependence. This report presents features of the MCFT2D code, numerical simulations using the code, and graphical representation of the numerical results. (author)

  12. Feature Screening for Ultrahigh Dimensional Categorical Data with Applications.

    Science.gov (United States)

    Huang, Danyang; Li, Runze; Wang, Hansheng

    2014-01-01

    Ultrahigh dimensional data with both categorical responses and categorical covariates are frequently encountered in the analysis of big data, for which feature screening has become an indispensable statistical tool. We propose a Pearson chi-square based feature screening procedure for categorical response with ultrahigh dimensional categorical covariates. The proposed procedure can be directly applied for detection of important interaction effects. We further show that the proposed procedure possesses the screening consistency property in the terminology of Fan and Lv (2008). We investigate the finite sample performance of the proposed procedure by Monte Carlo simulation studies, and illustrate the proposed method with two empirical datasets.
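
    A minimal version of Pearson chi-square screening for categorical features can be sketched as follows (marginal statistics only; the interaction-detection and consistency results in the record are not reproduced here):

    ```python
    from collections import Counter

    def chi2_stat(x, y):
        """Pearson chi-square statistic between two categorical sequences."""
        n = len(x)
        px, py = Counter(x), Counter(y)
        pxy = Counter(zip(x, y))
        stat = 0.0
        for a in px:
            for b in py:
                e = px[a] * py[b] / n          # expected count under independence
                stat += (pxy[(a, b)] - e) ** 2 / e
        return stat

    def screen(features, y, top_k):
        """Rank feature columns by chi-square with the response; keep the top_k."""
        scores = [(chi2_stat(col, y), idx) for idx, col in enumerate(features)]
        return [idx for _, idx in sorted(scores, reverse=True)[:top_k]]
    ```

    Features independent of the response score near zero, while strongly associated features score near the sample size, so ranking by the statistic retains the informative covariates.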

  13. Feature selection for high-dimensional integrated data

    KAUST Repository

    Zheng, Charles

    2012-04-26

    Motivated by the problem of identifying correlations between genes or features of two related biological systems, we propose a model of feature selection in which only a subset of the predictors Xt are dependent on the multidimensional variate Y, and the remainder of the predictors constitute a “noise set” Xu independent of Y. Using Monte Carlo simulations, we investigated the relative performance of two methods: thresholding and singular-value decomposition, in combination with stochastic optimization to determine “empirical bounds” on the small-sample accuracy of an asymptotic approximation. We demonstrate the utility of the thresholding and SVD feature selection methods with respect to a recent infant intestinal gene expression and metagenomics dataset.
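
    Of the two methods compared, thresholding is the simpler; a sketch using absolute Pearson correlation with a scalar response (the record's Y is multidimensional, so this is a simplified illustration of the idea, not the paper's estimator):

    ```python
    def pearson(u, v):
        """Sample Pearson correlation between two equal-length sequences."""
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        su = sum((a - mu) ** 2 for a in u) ** 0.5
        sv = sum((b - mv) ** 2 for b in v) ** 0.5
        return cov / (su * sv)

    def threshold_select(features, y, tau):
        """Keep indices of columns whose |correlation with y| exceeds tau."""
        return [i for i, col in enumerate(features) if abs(pearson(col, y)) > tau]
    ```

    Predictors in the "noise set" have correlations concentrated near zero, so a suitable threshold tau separates them from the Y-dependent subset.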

  14. Feature selection for high-dimensional integrated data

    KAUST Repository

    Zheng, Charles; Schwartz, Scott; Chapkin, Robert S.; Carroll, Raymond J.; Ivanov, Ivan

    2012-01-01

    Motivated by the problem of identifying correlations between genes or features of two related biological systems, we propose a model of feature selection in which only a subset of the predictors Xt are dependent on the multidimensional variate Y, and the remainder of the predictors constitute a “noise set” Xu independent of Y. Using Monte Carlo simulations, we investigated the relative performance of two methods: thresholding and singular-value decomposition, in combination with stochastic optimization to determine “empirical bounds” on the small-sample accuracy of an asymptotic approximation. We demonstrate the utility of the thresholding and SVD feature selection methods with respect to a recent infant intestinal gene expression and metagenomics dataset.

  15. A Two-Dimensional Solar Tracking Stationary Guidance Method Based on Feature-Based Time Series

    Directory of Open Access Journals (Sweden)

    Keke Zhang

    2018-01-01

    Full Text Available The amount of energy a satellite acquires has a direct impact on its operational capacity. For practical high-functional-density microsatellites, solar tracking guidance design for the solar panels plays an extremely important role. Targeting stationary tracking problems in a new system that acquires energy to the greatest extent through panels mounted on a two-dimensional turntable, a two-dimensional solar tracking stationary guidance method based on feature-based time series was proposed under the constraint of limited satellite attitude coupling control capability. By analyzing solar vector variation characteristics within an orbit period and solar vector changes over the whole life cycle, the method establishes a two-dimensional solar tracking guidance model based on feature-based time series, realizing automatic switching of the feature-based time series and stationary guidance under different β angles and maximum angular velocity control; it is applicable to near-earth orbits of all orbital inclinations. The method was employed to design a two-dimensional solar tracking stationary guidance system, and a mathematical simulation of guidance performance was carried out under diverse conditions against the background of in-orbit application. The simulation results show that the solar tracking accuracy of the two-dimensional stationary guidance reaches 10° or better under the integrated constraints, which meets engineering application requirements.

  16. Quantum phase space points for Wigner functions in finite-dimensional spaces

    OpenAIRE

    Luis Aina, Alfredo

    2004-01-01

    We introduce quantum states associated with single phase space points in the Wigner formalism for finite-dimensional spaces. We consider both continuous and discrete Wigner functions. This analysis provides a procedure for a direct practical observation of the Wigner functions for states and transformations without inversion formulas.

  17. Quantum phase space points for Wigner functions in finite-dimensional spaces

    International Nuclear Information System (INIS)

    Luis, Alfredo

    2004-01-01

    We introduce quantum states associated with single phase space points in the Wigner formalism for finite-dimensional spaces. We consider both continuous and discrete Wigner functions. This analysis provides a procedure for a direct practical observation of the Wigner functions for states and transformations without inversion formulas.

  18. Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.

    Science.gov (United States)

    Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela

    2016-12-01

    Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate, that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.

  19. Features of the Gravity Probe B Space Vehicle

    Science.gov (United States)

    Reeve, William; Green, Gaylord

    2007-04-01

    Space vehicle performance enabled successful relativity data collection throughout the Gravity Probe B mission. Precision pointing and drag-free translation control was maintained using proportional helium micro-thrusters. Electrical power was provided by rigid, double sided solar arrays. The 1.8 kelvin science instrument temperature was maintained using the largest cryogenic liquid helium dewar ever flown in space. The flight software successfully performed autonomous operations and safemode protection. Features of the Gravity Probe B Space Vehicle mechanisms include: 1) sixteen helium micro-thrusters, the first proportional thrusters flown in space, and large-orifice thruster isolation valves, 2) seven precision and high-authority mass trim mechanisms, 3) four non-pyrotechnic, highly reliable solar array deployment and release mechanism sets. Early incremental prototyping was used extensively to reduce spacecraft development risk. All spacecraft systems were redundant and provided multiple failure tolerance in critical systems. Lockheed Martin performed the spacecraft design, systems engineering, hardware and software integration, environmental testing and launch base operations, as well as on-orbit operations support for the Gravity Probe B space science experiment.

  20. Electromagnetic-field equations in the six-dimensional space-time R6

    International Nuclear Information System (INIS)

    Teli, M.T.; Palaskar, D.

    1984-01-01

    Maxwell's equations (without monopoles) for electromagnetic fields are obtained in six-dimensional space-time. The equations possess structural symmetry in space and time, field and source densities. Space-time-symmetric conservation laws and field solutions are obtained. The results are successfully correlated with their four-dimensional space-time counterparts.

  1. Computer-aided diagnosis for phase-contrast X-ray computed tomography: quantitative characterization of human patellar cartilage with high-dimensional geometric features.

    Science.gov (United States)

    Nagarajan, Mahesh B; Coan, Paola; Huber, Markus B; Diemoz, Paul C; Glaser, Christian; Wismüller, Axel

    2014-02-01

    Phase-contrast computed tomography (PCI-CT) has shown tremendous potential as an imaging modality for visualizing human cartilage with high spatial resolution. Previous studies have demonstrated the ability of PCI-CT to visualize (1) structural details of the human patellar cartilage matrix and (2) changes to chondrocyte organization induced by osteoarthritis. This study investigates the use of high-dimensional geometric features in characterizing such chondrocyte patterns in the presence or absence of osteoarthritic damage. Geometrical features derived from the scaling index method (SIM) and statistical features derived from gray-level co-occurrence matrices were extracted from 842 regions of interest (ROI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. These features were subsequently used in a machine learning task with support vector regression to classify ROIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver-operating characteristic curve (AUC). SIM-derived geometrical features exhibited the best classification performance (AUC, 0.95 ± 0.06) and were most robust to changes in ROI size. These results suggest that such geometrical features can provide a detailed characterization of the chondrocyte organization in the cartilage matrix in an automated and non-subjective manner, while also enabling classification of cartilage as healthy or osteoarthritic with high accuracy. Such features could potentially serve as imaging markers for evaluating osteoarthritis progression and its response to different therapeutic intervention strategies.
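
The classification pipeline described above can be sketched end to end: feature vectors per ROI go into a kernel SVM and performance is summarized by the ROC AUC. This is a minimal stand-in, not the authors' pipeline; the synthetic feature values, dimensions, and the class shift merely imitate SIM/GLCM features of healthy vs. osteoarthritic ROIs.

```python
# Sketch: classify per-ROI texture-feature vectors (healthy vs osteoarthritic)
# with an SVM and evaluate with ROC AUC. Feature values are synthetic stand-ins
# for the SIM/GLCM features described in the abstract; sizes are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, d = 400, 20                                  # ROIs and features (hypothetical)
X_healthy = rng.normal(0.0, 1.0, (n // 2, d))
X_oa = rng.normal(0.6, 1.0, (n // 2, d))        # shifted class stands in for OA
X = np.vstack([X_healthy, X_oa])
y = np.array([0] * (n // 2) + [1] * (n // 2))

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.2f}")
```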

  2. Efficient Divide-And-Conquer Classification Based on Feature-Space Decomposition

    OpenAIRE

    Guo, Qi; Chen, Bo-Wei; Jiang, Feng; Ji, Xiangyang; Kung, Sun-Yuan

    2015-01-01

    This study presents a divide-and-conquer (DC) approach based on feature-space decomposition for classification. When large-scale datasets are present, typical approaches usually employ truncated kernel methods on the feature space or DC approaches on the sample space. However, these do not guarantee separability between classes, owing to overfitting. To overcome such problems, this work proposes a novel DC approach on feature spaces consisting of three steps. Firstly, we divide the feature ...
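
The feature-space divide-and-conquer idea can be sketched as follows: split the features into disjoint subsets, train one classifier per subset, and combine by majority vote. This is a toy illustration, not the paper's algorithm; a random partition stands in for the redundancy-based split, and the subset count is an assumption.

```python
# Sketch: divide-and-conquer on the feature space -- disjoint feature subsets,
# one SVM per subset, predictions combined by majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=30,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_groups = 3                                   # number of feature subsets (assumed)
groups = np.array_split(np.random.default_rng(0).permutation(X.shape[1]), n_groups)
models = [SVC().fit(X_tr[:, g], y_tr) for g in groups]

votes = np.stack([m.predict(X_te[:, g]) for m, g in zip(models, groups)])
y_hat = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote (binary labels)
print("ensemble accuracy:", (y_hat == y_te).mean())
```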

  3. Weighted simultaneous algebraic reconstruction technique for tomosynthesis imaging of objects with high-attenuation features

    International Nuclear Information System (INIS)

    Levakhina, Y. M.; Müller, J.; Buzug, T. M.; Duschka, R. L.; Vogt, F.; Barkhausen, J.

    2013-01-01

    Purpose: This paper introduces a nonlinear weighting scheme into the backprojection operation within the simultaneous algebraic reconstruction technique (SART). It is designed for tomosynthesis imaging of objects with high-attenuation features in order to reduce limited-angle artifacts. Methods: The algorithm estimates which projections potentially produce artifacts in a voxel. The contribution of those projections to the updating term is reduced. In order to identify those projections automatically, a four-dimensional backprojected-space representation is used. Weighting coefficients are calculated based on a dissimilarity measure evaluated in this space. For each combination of an angular view direction and a voxel position, an individual weighting coefficient for the updating term is calculated. Results: The feasibility of the proposed approach is shown based on reconstructions of the following real three-dimensional tomosynthesis datasets: a mammography quality phantom, an apple with metal needles, a dried finger bone in water, and a human hand. Datasets were acquired with a Siemens Mammomat Inspiration tomosynthesis device and reconstructed using SART with and without the suggested weighting. Out-of-focus artifacts are described using line profiles and measured using the standard deviation (STD) in the plane and below the plane which contains the artifact-causing features. The distribution of artifacts in the axial direction is measured using an artifact spread function (ASF). The volumes reconstructed with the weighting scheme demonstrate reduced out-of-focus artifacts, lower STD (meaning a reduction of artifacts), and a narrower ASF compared to nonweighted SART reconstruction. This is achieved for different kinds of structures: point-like structures such as phantom features, long structures such as metal needles, and fine structures such as trabecular bone structures. Conclusions: Results indicate the feasibility of the proposed algorithm to reduce typical
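
A SART update with per-projection weights in the backprojection can be sketched in a few lines. This is a generic weighted-SART skeleton, not the paper's method: the paper derives its weights from a dissimilarity measure in a 4D backprojected space, whereas here `w` is simply user-supplied, and the toy system matrix is illustrative.

```python
# Sketch of a SART iteration whose backprojection is weighted per projection:
# x <- x + lam * A^T (w * normalized residual) / weighted column sums.
import numpy as np

def weighted_sart(A, b, w, n_iter=50, lam=0.5):
    """A: (m, n) system matrix, b: (m,) measurements, w: (m,) projection weights."""
    x = np.zeros(A.shape[1])
    row_sums = A.sum(axis=1)                     # normalization over each ray
    col_sums = (w[:, None] * A).sum(axis=0)      # weighted normalization per voxel
    col_sums[col_sums == 0] = 1.0
    for _ in range(n_iter):
        resid = np.divide(b - A @ x, row_sums,
                          out=np.zeros_like(b), where=row_sums != 0)
        x = x + lam * (A.T @ (w * resid)) / col_sums
    return x

# Toy check: an identity-like system recovers the phantom exactly.
A = np.eye(4)
x_true = np.array([1.0, 2.0, 0.5, 3.0])
x_rec = weighted_sart(A, A @ x_true, w=np.ones(4))
print(np.round(x_rec, 3))
```

With unit weights this reduces to plain SART; shrinking `w[i]` damps the influence of projection `i` on the update, which is the mechanism the weighting scheme exploits.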

  4. Weighted simultaneous algebraic reconstruction technique for tomosynthesis imaging of objects with high-attenuation features

    Energy Technology Data Exchange (ETDEWEB)

    Levakhina, Y. M. [Institute of Medical Engineering, University of Luebeck, Luebeck 23562, Germany and Graduate School for Computing in Medicine and Life Sciences, Luebeck 23562 (Germany); Mueller, J.; Buzug, T. M. [Institute of Medical Engineering, University of Luebeck, Luebeck 23562 (Germany); Duschka, R. L.; Vogt, F.; Barkhausen, J. [Clinic for Radiology, University Clinics Schleswig-Holstein, Luebeck 23562 (Germany)

    2013-03-15

    Purpose: This paper introduces a nonlinear weighting scheme into the backprojection operation within the simultaneous algebraic reconstruction technique (SART). It is designed for tomosynthesis imaging of objects with high-attenuation features in order to reduce limited-angle artifacts. Methods: The algorithm estimates which projections potentially produce artifacts in a voxel. The contribution of those projections to the updating term is reduced. In order to identify those projections automatically, a four-dimensional backprojected-space representation is used. Weighting coefficients are calculated based on a dissimilarity measure evaluated in this space. For each combination of an angular view direction and a voxel position, an individual weighting coefficient for the updating term is calculated. Results: The feasibility of the proposed approach is shown based on reconstructions of the following real three-dimensional tomosynthesis datasets: a mammography quality phantom, an apple with metal needles, a dried finger bone in water, and a human hand. Datasets were acquired with a Siemens Mammomat Inspiration tomosynthesis device and reconstructed using SART with and without the suggested weighting. Out-of-focus artifacts are described using line profiles and measured using the standard deviation (STD) in the plane and below the plane which contains the artifact-causing features. The distribution of artifacts in the axial direction is measured using an artifact spread function (ASF). The volumes reconstructed with the weighting scheme demonstrate reduced out-of-focus artifacts, lower STD (meaning a reduction of artifacts), and a narrower ASF compared to nonweighted SART reconstruction. This is achieved for different kinds of structures: point-like structures such as phantom features, long structures such as metal needles, and fine structures such as trabecular bone structures. Conclusions: Results indicate the feasibility of the proposed algorithm to reduce typical

  5. Green functions and scattering amplitudes in many-dimensional space

    International Nuclear Information System (INIS)

    Fabre de la Ripelle, M.

    1993-01-01

    Methods for solving scattering problems are studied in many-dimensional space. The Green function and scattering amplitudes are given in terms of the required asymptotic behaviour of the wave function. The Born approximation and the optical theorem are derived in many-dimensional space. Phase-shift analyses are performed for hypercentral potentials and for non-hypercentral potentials by use of the hyperspherical adiabatic approximation. (author)

  6. Identification of Architectural Functions in A Four-Dimensional Space

    Directory of Open Access Journals (Sweden)

    Firza Utama

    2012-06-01

    Full Text Available This research has explored the possibilities and concept of architectural space in a virtual environment. The virtual environment exists as a different concept, and challenges the constraints of the physical world. One of the possibilities in a virtual environment is that it is able to extend the spatial dimension beyond the physical three dimensions. To take advantage of this possibility, this research has applied some geometrical four-dimensional (4D) methods to define virtual architectural space. The spatial characteristics of 4D space are established by analyzing the four-dimensional structure that can be comprehended by a human participant for its spatial quality, and by developing a system to control the fourth axis of movement. Multiple three-dimensional spaces that fluidly change their volume have been defined as one of the possibilities of the virtual architectural space concept in order to enrich our understanding of virtual spatial experience.

  7. Generalized space-charge limited current and virtual cathode behaviors in one-dimensional drift space

    International Nuclear Information System (INIS)

    Yang, Zhanfeng; Liu, Guozhi; Shao, Hao; Chen, Changhua; Sun, Jun

    2013-01-01

    This paper reports the space-charge limited current (SLC) and virtual cathode behaviors in one-dimensional grounded drift space. A simple general analytical solution and an approximate solution for the planar diode are given. Through a semi-analytical method, a general solution for SLC in one-dimensional drift space is obtained. The behaviors of virtual cathode in the drift space, including dominant frequency, electron transit time, position, and transmitted current, are yielded analytically. The relationship between the frequency of the virtual cathode oscillation and the injected current presented may explain previously reported numerical works. Results are significant in facilitating estimations and further analytical studies
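
For orientation, the textbook special case that the drift-space analysis above generalizes is the classical space-charge limited current density of a 1D planar diode (Child–Langmuir law); this specific formula is background, not a result of the abstract:

```latex
% Classical 1D planar-diode Child-Langmuir law: gap d, applied voltage V,
% electron charge e and mass m_e, vacuum permittivity \varepsilon_0.
J_{\mathrm{SCL}} \;=\; \frac{4\,\varepsilon_0}{9}\,
  \sqrt{\frac{2e}{m_e}}\;\frac{V^{3/2}}{d^{2}}
```

In a grounded drift space the limiting injected current and the ensuing virtual-cathode dynamics differ from the diode case, which is precisely what the semi-analytical treatment in the abstract addresses.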

  8. Research on the development of space target detecting system and three-dimensional reconstruction technology

    Science.gov (United States)

    Li, Dong; Wei, Zhen; Song, Dawei; Sun, Wenfeng; Fan, Xiaoyan

    2016-11-01

    With the development of space technology, the number of spacecraft and debris increases year by year. The demand for detection and identification of spacecraft is growing strongly, which supports the cataloguing, crash warning, and protection of aerospace vehicles. The majority of existing approaches to three-dimensional reconstruction rely on scattering-centre correlation based on the radar high-resolution range profile (HRRP). This paper proposes a novel method to reconstruct the three-dimensional scattering-centre structure of a target from a sequence of radar ISAR images, which consists of three steps. The first is the azimuth scaling of consecutive ISAR images based on the fractional Fourier transform (FrFT). The second is the extraction of scattering centres and matching between adjacent ISAR images using a grid method. Finally, from the coordinate matrix of the scattering centres, the three-dimensional scattering-centre structure is reconstructed using an improved factorization method. The three-dimensional structure is stable and intuitive, which provides a new way to improve the identification probability and reduce the complexity of the model-matching library. A satellite model is reconstructed from four consecutive ISAR images using the proposed method. The simulation results show that the method achieves satisfactory consistency and accuracy.
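
The core of a factorization method can be sketched with numpy: stack matched 2D scattering-centre coordinates from several views into a measurement matrix, centre it, and recover rank-3 structure by SVD. This is a generic Tomasi–Kanade-style sketch under an orthographic-projection assumption, not the paper's improved variant; all sizes and the random view rotations are illustrative.

```python
# Sketch of the rank-3 factorization step: F views of P matched scattering
# centres form a 2F x P measurement matrix W; after centroid registration,
# SVD of W recovers 3D structure up to an affine ambiguity.
import numpy as np

rng = np.random.default_rng(1)
P = 12                                   # scattering centres (hypothetical)
S_true = rng.normal(size=(3, P))         # true 3D structure
F = 5                                    # number of views
W_rows = []
for _ in range(F):
    R, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random rotation
    W_rows.append(R[:2] @ S_true)        # orthographic projection per view
W = np.vstack(W_rows)                    # 2F x P measurement matrix
W -= W.mean(axis=1, keepdims=True)       # register each row to the centroid

U, s, Vt = np.linalg.svd(W, full_matrices=False)
S_hat = np.diag(np.sqrt(s[:3])) @ Vt[:3]  # structure up to a linear transform
print("residual beyond rank 3:", s[3])
```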

  9. Green function and scattering amplitudes in many dimensional space

    International Nuclear Information System (INIS)

    Fabre de la Ripelle, M.

    1991-06-01

    Methods for solving scattering problems are studied in many-dimensional space. The Green function and scattering amplitudes are given in terms of the required asymptotic behaviour of the wave function. The Born approximation and the optical theorem are derived in many-dimensional space. Phase-shift analyses are developed for hypercentral potentials and for non-hypercentral potentials with the hyperspherical adiabatic approximation. (author) 16 refs., 3 figs

  10. The space-time model according to dimensional continuous space-time theory

    International Nuclear Information System (INIS)

    Martini, Luiz Cesar

    2014-01-01

    This article results from the Dimensional Continuous Space-Time Theory, whose introductory theory was presented in [1]. A theoretical model of the Continuous Space-Time is presented. The wave equation of time in an absolutely stationary empty-space referential is described in detail. The complex time, that is, the time fixed on the infinite-phase-time-speed referential, is deduced from the New View of Relativity Theory, which is being submitted simultaneously with this article at this congress. Finally, considering the inseparable Space-Time, the wave-particle duality equation is presented.

  11. Few helium atoms in quasi two-dimensional space

    International Nuclear Information System (INIS)

    Kilic, Srecko; Vranjes, Leandra

    2003-01-01

    Two, three and four ³He and ⁴He atoms in quasi two-dimensional space above graphite and cesium surfaces and in a 'harmonic' potential perpendicular to the surface have been studied. Using some previously examined variational wave functions and the Diffusion Monte Carlo procedure, it has been shown that all molecules: dimers, trimers and tetramers, are bound more strongly than in pure two- and three-dimensional space. The enhancement of binding with respect to unrestricted space is more pronounced on cesium than on graphite. Furthermore, for ³He₃ (³He₄) on all studied surfaces, there is an indication that the configuration of a dimer and a 'free' particle (two dimers) may be equivalently established

  12. TH-CD-207A-07: Prediction of High Dimensional State Subject to Respiratory Motion: A Manifold Learning Approach

    International Nuclear Information System (INIS)

    Liu, W; Sawant, A; Ruan, D

    2016-01-01

    Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared-error (RMSE) for both 200ms and 600ms lookahead lengths. Results: The proposed method outperformed PCA-based approach with statistically higher prediction accuracy. In one-dimensional feature subspace, our method achieved mean prediction accuracy of 0.86mm and 0.89mm for 200ms and 600ms lookahead lengths respectively, compared to 0.95mm and 1.04mm from PCA-based method. The paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction
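
The kernel-PCA prediction loop above (reduce, predict in the latent space, recover a pre-image) can be sketched with scikit-learn. Two hedges: the data are a toy periodic stand-in for volumetric/surface states, the latent-space predictor is a naive linear look-ahead, and sklearn's learned inverse map replaces the fixed-point pre-image iteration described in the abstract.

```python
# Sketch: kernel-PCA dimension reduction of high-dimensional respiratory states,
# prediction on the low-dimensional feature manifold, pre-image recovery.
import numpy as np
from sklearn.decomposition import KernelPCA

t = np.linspace(0, 8 * np.pi, 400)
states = np.sin(t)[:, None] * np.ones((1, 50))   # toy periodic "surface" states
states += 0.01 * np.random.default_rng(0).normal(size=states.shape)

# fit_inverse_transform=True learns a (ridge-regression) pre-image map.
kpca = KernelPCA(n_components=1, kernel="rbf", gamma=0.1,
                 fit_inverse_transform=True)
z = kpca.fit_transform(states)                   # low-dimensional features

z_pred = 2 * z[-1] - z[-2]                       # naive linear look-ahead
x_pred = kpca.inverse_transform(z_pred.reshape(1, -1))  # back to state space
print(x_pred.shape)
```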

  13. Motion of gas in highly rarefied space

    Science.gov (United States)

    Chirkunov, Yu A.

    2017-10-01

    A model describing the motion of gas in a highly rarefied space received the unlucky number 13 in the list of basic models of gas motion in three-dimensional space obtained by L.V. Ovsyannikov. For a given initial pressure distribution, a special choice of mass Lagrangian variables leads to a system describing this motion with one fewer independent variable. Hence, there is a foliation of a highly rarefied gas with respect to pressure. In a strongly rarefied space, for each given initial pressure distribution, all gas particles are localized on a two-dimensional surface that moves with time in this space. We found some exact solutions of the obtained system that describe processes taking place inside a tornado. For this system we found all nontrivial conservation laws of the first order. In addition to the classical conservation laws, the system has another conservation law, which generalizes the energy conservation law. With an additional condition we found one more generalized energy conservation law.

  14. Lorentz covariant tempered distributions in two-dimensional space-time

    International Nuclear Information System (INIS)

    Zinov'ev, Yu.M.

    1989-01-01

    The problem of describing Lorentz covariant distributions without any spectral condition has hitherto remained unsolved even for two-dimensional space-time. Attempts to solve this problem have already been made. Zharinov obtained an integral representation for the Laplace transform of Lorentz invariant distributions with support in the product of two-dimensional future light cones. However, this integral representation does not make it possible to obtain a complete description of the corresponding Lorentz invariant distributions. In this paper the author gives a complete description of Lorentz covariant distributions for two-dimensional space-time. No spectral condition is assumed.

  15. Multi-Scale Singularity Trees: Soft-Linked Scale-Space Hierarchies

    DEFF Research Database (Denmark)

    Somchaipeng, Kerawit; Sporring, Jon; Kreiborg, Sven

    2005-01-01

    We consider images as manifolds embedded in a hybrid of a high dimensional space of coordinates and features. Using the proposed energy functional and mathematical landmarks, images are partitioned into segments. The nesting of image segments occurring at catastrophe points in the scale-space is ...

  16. Mannheim Curves in Nonflat 3-Dimensional Space Forms

    Directory of Open Access Journals (Sweden)

    Wenjing Zhao

    2015-01-01

    Full Text Available We consider the Mannheim curves in nonflat 3-dimensional space forms (Riemannian or Lorentzian) and we give the concept of Mannheim curves. In addition, we investigate the properties of nonnull Mannheim curves and their partner curves. We come to the conclusion that a necessary and sufficient condition is that a linear relationship with constant coefficients exists between the curvature and the torsion of the given original curves. In the case of null curves, we reveal that there are no null Mannheim curves in the 3-dimensional de Sitter space.

  17. Effective traffic features selection algorithm for cyber-attacks samples

    Science.gov (United States)

    Li, Yihong; Liu, Fangzheng; Du, Zhenyu

    2018-05-01

    By studying defense schemes against network attacks, this paper proposes an effective traffic-feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. Firstly, the algorithm divides the original feature set into an attack-traffic feature set and a background-traffic feature set by clustering. Then, it calculates the variation in clustering performance after removing a certain feature. Finally, the degree of distinctiveness of each feature vector is evaluated according to the result; the effective features are those whose degree of distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set. In this way, the dimensionality of the features is reduced, and so is the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has advantages over other selection algorithms.
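
The remove-a-feature-and-remeasure idea can be sketched with k-means++ and a clustering-quality score. This is an illustrative reduction of the algorithm, not a reimplementation: silhouette score stands in for the paper's clustering-performance measure, the data are synthetic, and the selection threshold of zero is an assumption.

```python
# Sketch: score how clustering quality changes when each feature is removed;
# features whose removal degrades the clustering are judged distinctive.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=3, n_features=6, random_state=0)
X = np.hstack([X, np.random.default_rng(0).normal(size=(300, 4))])  # noise dims

def cluster_score(data):
    labels = KMeans(n_clusters=3, init="k-means++",
                    n_init=10, random_state=0).fit_predict(data)
    return silhouette_score(data, labels)

base = cluster_score(X)
drops = [base - cluster_score(np.delete(X, j, axis=1)) for j in range(X.shape[1])]
selected = [j for j, d in enumerate(drops) if d > 0]   # threshold = 0 (assumed)
print("selected features:", selected)
```

Removing a noise feature tends to *improve* the silhouette (negative drop), so only the informative columns survive the threshold.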

  18. Blended particle filters for large-dimensional chaotic dynamical systems

    Science.gov (United States)

    Majda, Andrew J.; Qi, Di; Sapsis, Themistoklis P.

    2014-01-01

    A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886
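
The 40-dimensional Lorenz 96 model used as the test bed above is easy to reproduce; a minimal integrator (RK4, forcing F = 8 for chaotic dynamics, step size and horizon chosen for illustration) looks like this:

```python
# Sketch: the 40-dimensional Lorenz 96 model, dx_i/dt =
# (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F with cyclic indices,
# integrated with a fourth-order Runge-Kutta step.
import numpy as np

def l96_rhs(x, F=8.0):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.05):
    k1 = l96_rhs(x)
    k2 = l96_rhs(x + 0.5 * dt * k1)
    k3 = l96_rhs(x + 0.5 * dt * k2)
    k4 = l96_rhs(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = 8.0 * np.ones(40)
x[0] += 0.01                        # small perturbation to trigger chaos
for _ in range(1000):
    x = rk4_step(x)
print("state bounded:", np.abs(x).max() < 30)
```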

  19. An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data

    DEFF Research Database (Denmark)

    Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira

    2011-01-01

    than a global property. Different from existing approaches, it is not grid-based and is unbiased with respect to dimensionality. Thus, its performance is impervious to grid resolution as well as the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired outliers, thus mitigating the issue of high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces.

  20. Linear embeddings of finite-dimensional subsets of Banach spaces into Euclidean spaces

    International Nuclear Information System (INIS)

    Robinson, James C

    2009-01-01

    This paper treats the embedding of finite-dimensional subsets of a Banach space B into finite-dimensional Euclidean spaces. When the Hausdorff dimension of X − X is finite and k > d_H(X − X), a prevalent set of linear maps from B into R^k are injective on X. The proof motivates the definition of the 'dual thickness exponent', which is the key to proving that a prevalent set of such linear maps have Hölder continuous inverse when the box-counting dimension of X is finite and k > 2d_B(X). A related argument shows that if the Assouad dimension of X − X is finite and k > d_A(X − X), a prevalent set of such maps are bi-Lipschitz with logarithmic corrections. This provides a new result for compact homogeneous metric spaces via the Kuratowski embedding of (X, d) into L^∞(X)

  1. An angle-based subspace anomaly detection approach to high-dimensional data: With an application to industrial fault detection

    International Nuclear Information System (INIS)

    Zhang, Liangwei; Lin, Jing; Karim, Ramin

    2015-01-01

    The accuracy of traditional anomaly detection techniques implemented on full-dimensional spaces degrades significantly as dimensionality increases, thereby hampering many real-world applications. This work proposes an approach to selecting meaningful feature subspace and conducting anomaly detection in the corresponding subspace projection. The aim is to maintain the detection accuracy in high-dimensional circumstances. The suggested approach assesses the angle between all pairs of two lines for one specific anomaly candidate: the first line is connected by the relevant data point and the center of its adjacent points; the other line is one of the axis-parallel lines. Those dimensions which have a relatively small angle with the first line are then chosen to constitute the axis-parallel subspace for the candidate. Next, a normalized Mahalanobis distance is introduced to measure the local outlier-ness of an object in the subspace projection. To comprehensively compare the proposed algorithm with several existing anomaly detection techniques, we constructed artificial datasets with various high-dimensional settings and found the algorithm displayed superior accuracy. A further experiment on an industrial dataset demonstrated the applicability of the proposed algorithm in fault detection tasks and highlighted another of its merits, namely, to provide preliminary interpretation of abnormality through feature ordering in relevant subspaces. - Highlights: • An anomaly detection approach for high-dimensional reliability data is proposed. • The approach selects relevant subspaces by assessing vectorial angles. • The novel ABSAD approach displays superior accuracy over other alternatives. • Numerical illustration confirms its efficacy in fault detection applications
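
The angle-based subspace selection described above can be sketched in numpy. This is a simplified illustration of the idea, not the published ABSAD algorithm: a diagonal (per-dimension standardized) distance stands in for the normalized Mahalanobis distance, and k, the subspace size, and the planted anomaly are all assumptions.

```python
# Sketch: for a candidate point, pick the dimensions whose axis makes a small
# angle with the line from the point to the centroid of its k nearest
# neighbours, then score outlier-ness in that axis-parallel subspace.
import numpy as np

def relevant_subspace(X, idx, k=10, n_dims=2):
    x = X[idx]
    d2 = ((X - x) ** 2).sum(axis=1)
    nn = np.argsort(d2)[1:k + 1]                   # skip the point itself
    v = X[nn].mean(axis=0) - x                     # line to neighbourhood centre
    cos = np.abs(v) / (np.linalg.norm(v) + 1e-12)  # |cos| with each axis
    return np.argsort(cos)[-n_dims:], nn           # small angle = large |cos|

def local_outlierness(X, idx, k=10, n_dims=2):
    dims, nn = relevant_subspace(X, idx, k, n_dims)
    sub, ref = X[idx][dims], X[nn][:, dims]
    z = (sub - ref.mean(axis=0)) / (ref.std(axis=0) + 1e-12)
    return np.sqrt((z ** 2).sum())                 # simplified Mahalanobis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15))
X[0, :2] += 8.0                                    # planted anomaly in dims 0, 1
scores = [local_outlierness(X, i) for i in range(len(X))]
print("most anomalous index:", int(np.argmax(scores)))
```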

  2. Phase space interrogation of the empirical response modes for seismically excited structures

    Science.gov (United States)

    Paul, Bibhas; George, Riya C.; Mishra, Sudib K.

    2017-07-01

    Conventional Phase Space Interrogation (PSI) for structural damage assessment relies on exciting the structure with a low-dimensional chaotic waveform, thereby significantly limiting its applicability to large structures. The PSI technique is presently extended to structures subjected to seismic excitation. The high dimensionality of the phase space for seismic responses is overcome by Empirical Mode Decomposition (EMD), which decomposes the responses into a number of intrinsic low-dimensional oscillatory modes, referred to as Intrinsic Mode Functions (IMFs). Along with their low dimensionality, a few IMFs retain sufficient information about the system dynamics to reflect the damage-induced changes. The mutually conflicting demands of low dimensionality and sufficiency of dynamic information are balanced by the optimal choice of IMF(s), which is shown to be the third/fourth IMFs. The optimal IMF(s) are employed for reconstruction of the phase space attractor following Takens' embedding theorem. The widely referred Changes in Phase Space Topology (CPST) feature is then applied to these phase portrait(s) to derive the damage-sensitive feature, referred to as the CPST of the IMFs (CPST-IMF). The legitimacy of CPST-IMF as a damage-sensitive feature is established by assessing its variation with a number of damage scenarios benchmarked in the IASC-ASCE building. The damage localization capability, remarkable tolerance to noise contamination, and robustness of the feature under different seismic excitations are demonstrated.
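
The attractor-reconstruction step above is a standard time-delay embedding; a minimal sketch (with a sine wave standing in for an IMF, and an assumed delay and embedding dimension) is:

```python
# Sketch: time-delay (Takens) embedding of a scalar response -- e.g. one IMF
# from the EMD step -- into an m-dimensional reconstructed phase space.
import numpy as np

def delay_embed(x, m=3, tau=5):
    """Return the (len(x) - (m-1)*tau) x m matrix of delay vectors."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

t = np.linspace(0, 20 * np.pi, 2000)
imf = np.sin(t)                        # stand-in for an intrinsic mode function
attractor = delay_embed(imf, m=3, tau=25)
print(attractor.shape)
```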

  3. Dual dimensionality reduction reveals independent encoding of motor features in a muscle synergy for insect flight control.

    Science.gov (United States)

    Sponberg, Simon; Daniel, Thomas L; Fairhall, Adrienne L

    2015-04-01

    What are the features of movement encoded by changing motor commands? Do motor commands encode movement independently or can they be represented in a reduced set of signals (i.e. synergies)? Motor encoding poses a computational and practical challenge because many muscles typically drive movement, and simultaneous electrophysiology recordings of all motor commands are typically not available. Moreover, during a single locomotor period (a stride or wingstroke) the variation in movement may have high dimensionality, even if only a few discrete signals activate the muscles. Here, we apply the method of partial least squares (PLS) to extract the encoded features of movement based on the cross-covariance of motor signals and movement. PLS simultaneously decomposes both datasets and identifies only the variation in movement that relates to the specific muscles of interest. We use this approach to explore how the main downstroke flight muscles of an insect, the hawkmoth Manduca sexta, encode torque during yaw turns. We simultaneously record muscle activity and turning torque in tethered flying moths experiencing wide-field visual stimuli. We ask whether this pair of muscles acts as a muscle synergy (a single linear combination of activity) consistent with their hypothesized function of producing a left-right power differential. Alternatively, each muscle might individually encode variation in movement. We show that PLS feature analysis produces an efficient reduction of dimensionality in torque variation within a wingstroke. At first, the two muscles appear to behave as a synergy when we consider only their wingstroke-averaged torque. However, when we consider the PLS features, the muscles reveal independent encoding of torque. Using these features we can predictably reconstruct the variation in torque corresponding to changes in muscle activation. PLS-based feature analysis provides a general two-sided dimensionality reduction that reveals encoding in high dimensional

  4. Dual dimensionality reduction reveals independent encoding of motor features in a muscle synergy for insect flight control.

    Directory of Open Access Journals (Sweden)

    Simon Sponberg

    2015-04-01

Full Text Available What are the features of movement encoded by changing motor commands? Do motor commands encode movement independently or can they be represented in a reduced set of signals (i.e. synergies)? Motor encoding poses a computational and practical challenge because many muscles typically drive movement, and simultaneous electrophysiology recordings of all motor commands are typically not available. Moreover, during a single locomotor period (a stride or wingstroke) the variation in movement may have high dimensionality, even if only a few discrete signals activate the muscles. Here, we apply the method of partial least squares (PLS) to extract the encoded features of movement based on the cross-covariance of motor signals and movement. PLS simultaneously decomposes both datasets and identifies only the variation in movement that relates to the specific muscles of interest. We use this approach to explore how the main downstroke flight muscles of an insect, the hawkmoth Manduca sexta, encode torque during yaw turns. We simultaneously record muscle activity and turning torque in tethered flying moths experiencing wide-field visual stimuli. We ask whether this pair of muscles acts as a muscle synergy (a single linear combination of activity) consistent with their hypothesized function of producing a left-right power differential. Alternatively, each muscle might individually encode variation in movement. We show that PLS feature analysis produces an efficient reduction of dimensionality in torque variation within a wingstroke. At first, the two muscles appear to behave as a synergy when we consider only their wingstroke-averaged torque. However, when we consider the PLS features, the muscles reveal independent encoding of torque. Using these features we can predictably reconstruct the variation in torque corresponding to changes in muscle activation. PLS-based feature analysis provides a general two-sided dimensionality reduction that reveals encoding in

  5. Dual Dimensionality Reduction Reveals Independent Encoding of Motor Features in a Muscle Synergy for Insect Flight Control

    Science.gov (United States)

    Sponberg, Simon; Daniel, Thomas L.; Fairhall, Adrienne L.

    2015-01-01

    What are the features of movement encoded by changing motor commands? Do motor commands encode movement independently or can they be represented in a reduced set of signals (i.e. synergies)? Motor encoding poses a computational and practical challenge because many muscles typically drive movement, and simultaneous electrophysiology recordings of all motor commands are typically not available. Moreover, during a single locomotor period (a stride or wingstroke) the variation in movement may have high dimensionality, even if only a few discrete signals activate the muscles. Here, we apply the method of partial least squares (PLS) to extract the encoded features of movement based on the cross-covariance of motor signals and movement. PLS simultaneously decomposes both datasets and identifies only the variation in movement that relates to the specific muscles of interest. We use this approach to explore how the main downstroke flight muscles of an insect, the hawkmoth Manduca sexta, encode torque during yaw turns. We simultaneously record muscle activity and turning torque in tethered flying moths experiencing wide-field visual stimuli. We ask whether this pair of muscles acts as a muscle synergy (a single linear combination of activity) consistent with their hypothesized function of producing a left-right power differential. Alternatively, each muscle might individually encode variation in movement. We show that PLS feature analysis produces an efficient reduction of dimensionality in torque variation within a wingstroke. At first, the two muscles appear to behave as a synergy when we consider only their wingstroke-averaged torque. However, when we consider the PLS features, the muscles reveal independent encoding of torque. Using these features we can predictably reconstruct the variation in torque corresponding to changes in muscle activation. PLS-based feature analysis provides a general two-sided dimensionality reduction that reveals encoding in high dimensional
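The two-sided decomposition this record describes can be sketched in a few lines: the PLS features are the singular vectors of the cross-covariance between the motor data X and the movement data Y. The data below are synthetic stand-ins (not the moth recordings), so this is an illustration of the decomposition only, not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: per-wingstroke muscle features X (n strokes x p)
# and within-stroke torque waveforms Y (n x q), sharing a 2-d latent drive.
n, p, q = 200, 4, 50
latent = rng.standard_normal((n, 2))
X = latent @ rng.standard_normal((2, p)) + 0.1 * rng.standard_normal((n, p))
Y = latent @ rng.standard_normal((2, q)) + 0.1 * rng.standard_normal((n, q))

# PLS-SVD: decompose the cross-covariance C = Xc' Yc / (n - 1); paired singular
# vectors give motor-side and movement-side features ordered by shared covariance.
Xc, Yc = X - X.mean(0), Y - Y.mean(0)
C = Xc.T @ Yc / (n - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

x_scores = Xc @ U      # motor-side feature scores (n x p)
y_scores = Yc @ Vt.T   # movement-side feature scores
```

By construction the leading score pair is strongly correlated, which is the sense in which PLS extracts only the movement variation that relates to the recorded muscles.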

  6. Superconductivity and the existence of Nambu's three-dimensional phase space mechanics

    International Nuclear Information System (INIS)

    Angulo, R.; Gonzalez-Bernardo, C.A.; Rodriguez-Gomez, J.; Kalnay, A.J.; Perez-M, F.; Tello-Llanos, R.A.

    1984-01-01

    Nambu proposed a generalization of hamiltonian mechanics such that three-dimensional phase space is allowed. Thanks to a recent paper by Holm and Kupershmidt we are able to show the existence of such three-dimensional phase space systems in superconductivity. (orig.)

  7. Supersymmetric quantum mechanics in three-dimensional space, 1

    International Nuclear Information System (INIS)

    Ui, Haruo

    1984-01-01

As a direct generalization of the model of supersymmetric quantum mechanics by Witten, which describes the motion of a spin one-half particle in one-dimensional space, we construct a model of supersymmetric quantum mechanics in three-dimensional space, which describes the motion of a spin one-half particle in central and spin-orbit potentials in the context of nonrelativistic quantum mechanics. With the simplest choice of the (super) potential, this model is shown to reduce to the model of the harmonic oscillator plus a constant spin-orbit potential of unit strength of both positive and negative signs, which was studied in detail in our recent paper in connection with "accidental degeneracy" as well as the "graded groups". This simplest model is discussed in some detail as an example of a three-dimensional supersymmetric quantum mechanical system in which the supersymmetry is an exact symmetry of the system. A more general choice of polynomial superpotential is also discussed. It is shown that the supersymmetry cannot be spontaneously broken for any polynomial superpotential in our three-dimensional model; this result is contrasted with the corresponding one in the one-dimensional model. (author)

  8. Accurate facade feature extraction method for buildings from three-dimensional point cloud data considering structural information

    Science.gov (United States)

    Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia

    2018-05-01

    Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.

  9. Visualising very large phylogenetic trees in three dimensional hyperbolic space

    Directory of Open Access Journals (Sweden)

    Liberles David A

    2004-04-01

Full Text Available Abstract Background Common existing phylogenetic tree visualisation tools are not able to display readable trees with more than a few thousand nodes. These existing methodologies are based in two-dimensional space. Results We introduce the idea of visualising phylogenetic trees in three-dimensional hyperbolic space with the Walrus graph visualisation tool and have developed a conversion tool that enables the conversion of standard phylogenetic tree formats to Walrus' format. With Walrus, it becomes possible to visualise and navigate phylogenetic trees with more than 100,000 nodes. Conclusion Walrus enables desktop visualisation of very large phylogenetic trees in three-dimensional hyperbolic space. This application is potentially useful for visualisation of the tree of life and for functional genomics derivatives, like The Adaptive Evolution Database (TAED).

  10. High-dimensional orbital angular momentum entanglement concentration based on Laguerre–Gaussian mode selection

    International Nuclear Information System (INIS)

    Zhang, Wuhong; Su, Ming; Wu, Ziwen; Lu, Meng; Huang, Bingwei; Chen, Lixiang

    2013-01-01

Twisted photons enable the definition of a Hilbert space beyond two dimensions by orbital angular momentum (OAM) eigenstates. Here we propose a feasible entanglement concentration experiment to enhance the quality of high-dimensional entanglement shared by twisted photon pairs. Our approach starts from a full characterization of the entangled spiral bandwidth and then relies on careful selection of the Laguerre–Gaussian (LG) modes with specific radial and azimuthal indices p and ℓ. In particular, we demonstrate the possibility of high-dimensional entanglement concentration residing in an OAM subspace of up to 21 dimensions. By means of LabVIEW simulations with spatial light modulators, we show that the Shannon dimensionality can be employed to quantify the quality of the present concentration. Our scheme holds promise for quantum information applications defined in high-dimensional Hilbert space. (letter)

  11. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2013-12-01

    Objective. Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.

  12. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity.

    Science.gov (United States)

    Cowley, Benjamin R; Kaufman, Matthew T; Butler, Zachary S; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M

    2013-12-01

    Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.

  13. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2014-01-01

    Objective Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than three, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance DataHigh was developed to fulfill a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity. PMID:24216250
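The core visualization idea in the DataHigh records above, navigating a continuum of 2-d projections of a higher-dimensional latent space, can be sketched as follows. DataHigh itself is a Matlab GUI; this is a minimal numpy stand-in with synthetic latent trajectories, and the rotation-step size is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical latent trajectories: n timepoints in a d-dimensional latent space.
d, n = 8, 300
Z = np.cumsum(rng.standard_normal((n, d)), axis=0)

def random_projection_2d(d, rng):
    """Orthonormal 2-d projection plane (QR of a Gaussian matrix)."""
    A = rng.standard_normal((d, 2))
    Q, _ = np.linalg.qr(A)
    return Q  # d x 2, with Q.T @ Q = I

# A 'continuum' of views: perturb and re-orthonormalize the projection plane in
# small steps, so successive 2-d views change smoothly, as in GUI navigation.
Q = random_projection_2d(d, rng)
views = []
for _ in range(10):
    Q, _ = np.linalg.qr(Q + 0.05 * rng.standard_normal((d, 2)))
    views.append(Z @ Q)  # n x 2 coordinates ready to plot
```

Each entry of `views` is one frame of the smooth sweep through projection planes; plotting them in sequence reproduces the navigation effect the paper describes.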

  14. Extending the Generalised Pareto Distribution for Novelty Detection in High-Dimensional Spaces.

    Science.gov (United States)

    Clifton, David A; Clifton, Lei; Hugueny, Samuel; Tarassenko, Lionel

    2014-01-01

Novelty detection involves the construction of a "model of normality" and then classifies test data as either "normal" or "abnormal" with respect to that model; for this reason, it is often termed one-class classification. The approach is suitable for cases in which examples of "normal" behaviour are commonly available but cases of "abnormal" data are comparatively rare. When performing novelty detection, we are typically most interested in the tails of the normal model, because it is in these tails that a decision boundary between "normal" and "abnormal" regions of data space usually lies. Extreme value statistics provides an appropriate theoretical framework for modelling the tails of univariate (or low-dimensional) distributions, using the generalised Pareto distribution (GPD), which can be shown to be the limiting distribution for data occurring in the tails of most practically encountered probability distributions. This paper provides an extension of the GPD that allows the modelling of probability distributions of arbitrarily high dimension, such as occur when using complex, multimodal, multivariate distributions for performing novelty detection in most real-life cases. We demonstrate our extension to the GPD using examples from patient physiological monitoring, in which we have acquired data from hospital patients in large clinical studies of high-acuity wards, and in which we wish to detect "abnormal" patient data, such that early warning of patient physiological deterioration may be provided.
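A minimal univariate sketch of the tail-modelling step (the paper's contribution is the high-dimensional extension, which is not reproduced here): fit a GPD to threshold exceedances with the classical method-of-moments estimators and score a test point by its tail probability. The data, threshold choice, and estimator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Normal" training data; the tail above a high threshold is modelled by a GPD.
data = rng.normal(size=20000)
u = np.quantile(data, 0.95)     # tail threshold (5% exceedance rate)
y = data[data > u] - u          # exceedances over u

# Method-of-moments GPD fit: shape xi, scale sigma from the exceedance moments.
m, v = y.mean(), y.var(ddof=1)
xi = 0.5 * (1.0 - m * m / v)
sigma = 0.5 * m * (m * m / v + 1.0)

def gpd_sf(z, xi, sigma):
    """P(Y > z) for the fitted GPD (survival function)."""
    if abs(xi) < 1e-12:
        return np.exp(-z / sigma)
    return np.maximum(1.0 + xi * z / sigma, 0.0) ** (-1.0 / xi)

# Novelty score of a test point x: unconditional tail probability
# P(X > x) = P(X > u) * P(Y > x - u | X > u).
x_test = u + 1.0
p_exceed = 0.05 * gpd_sf(x_test - u, xi, sigma)
```

Small `p_exceed` flags the test point as lying far into the tail, i.e. "abnormal" with respect to the model of normality.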

  15. Mappings with closed range and finite dimensional linear spaces

    International Nuclear Information System (INIS)

    Iyahen, S.O.

    1984-09-01

    This paper looks at two settings, each of continuous linear mappings of linear topological spaces. In one setting, the domain space is fixed while the range space varies over a class of linear topological spaces. In the second setting, the range space is fixed while the domain space similarly varies. The interest is in when the requirement that the mappings have a closed range implies that the domain or range space is finite dimensional. Positive results are obtained for metrizable spaces. (author)

  16. A scale space approach for unsupervised feature selection in mass spectra classification for ovarian cancer detection.

    Science.gov (United States)

    Ceccarelli, Michele; d'Acierno, Antonio; Facchiano, Angelo

    2009-10-15

Mass spectrometry spectra, widely used in proteomics studies as a screening tool for protein profiling and for detecting discriminatory signals, are high-dimensional data. A large number of local maxima (a.k.a. peaks) have to be analyzed as part of computational pipelines aimed at realizing efficient predictive and screening protocols. At these data dimensions and sample sizes, the risk of over-fitting and selection bias is pervasive. Therefore the development of bioinformatics methods based on unsupervised feature extraction can lead to general tools applicable to several fields of predictive proteomics. We propose a method for feature selection and extraction grounded in the theory of multi-scale spaces for high-resolution spectra derived from analysis of serum, and then use support vector machines for classification. In particular, we use a database containing 216 sample spectra divided into 115 cancer and 91 control samples. The overall accuracy averaged over a large cross-validation study is 98.18%. The area under the ROC curve of the best selected model is 0.9962. We improve previously known results on the problem with the same data, with the advantage that the proposed method has an unsupervised feature selection phase. All the developed code, as MATLAB scripts, can be downloaded from http://medeaserver.isa.cnr.it/dacierno/spectracode.htm.
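The unsupervised scale-space idea, keeping only peaks that persist across smoothing scales, can be sketched as follows. The synthetic spectrum, the scale set, and the shift tolerance are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy spectrum (hypothetical): three true peaks on 2000 bins plus noise.
x = np.linspace(0.0, 1.0, 2000)
spectrum = sum(np.exp(-0.5 * ((x - c) / 0.004) ** 2) for c in (0.2, 0.5, 0.8))
spectrum = spectrum + 0.05 * rng.standard_normal(x.size)

def smooth(y, sigma_bins):
    """One scale-space level: Gaussian smoothing by direct convolution."""
    r = int(4 * sigma_bins)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma_bins) ** 2)
    return np.convolve(y, k / k.sum(), mode="same")

def local_maxima(y):
    return np.flatnonzero((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])) + 1

# Unsupervised selection: keep coarse-scale maxima that also appear (within a
# small shift tolerance) at every finer scale; noise peaks die out as the
# smoothing scale grows, real peaks persist.
scales = (4, 8, 16)
maxima = [local_maxima(smooth(spectrum, s)) for s in scales]
peaks = [i for i in maxima[-1]
         if all(np.any(np.abs(m - i) <= 5) for m in maxima[:-1])]
```

The surviving `peaks` (here near bins 400, 1000, and 1599) would then feed a downstream classifier such as an SVM.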

  17. Three-dimensional features on oscillating microbubbles streaming flows

    Science.gov (United States)

    Rossi, Massimiliano; Marin, Alvaro G.; Wang, Cheng; Hilgenfeldt, Sascha; Kähler, Christian J.

    2013-11-01

Ultrasound-driven oscillating micro-bubbles have been used as active actuators in microfluidic devices to perform manifold tasks such as mixing, sorting and manipulation of microparticles. A common configuration consists of side-bubbles, created by trapping air pockets in blind channels perpendicular to the main channel direction. This configuration results in bubbles with a semi-cylindrical shape that create a streaming flow generally considered quasi-two-dimensional. However, recent experiments performed with three-dimensional velocimetry methods have shown that microparticles can follow significant three-dimensional trajectories, especially in regions close to the bubble interface. Several possible reasons will be discussed, such as boundary effects of the bottom/top wall, deformation of the bubble interface leading to more complex vibrational modes, or bubble-particle interactions. In the present investigation, precise measurements of particle trajectories close to the bubble interface are performed by means of 3D Astigmatic Particle Tracking Velocimetry. The results allow us to characterize quantitatively the three-dimensional features of the streaming flow and to estimate their implications for practical applications such as particle trapping, sorting or mixing.

  18. On the Zeeman Effect in highly excited atoms: 2. Three-dimensional case

    International Nuclear Information System (INIS)

    Baseia, B.; Medeiros e Silva Filho, J.

    1984-01-01

A previous result, found in two-dimensional hydrogen atoms, is extended to the three-dimensional case. A mapping of the four-dimensional space R⁴ onto R³, which establishes an equivalence between Coulomb and harmonic potentials, is used to show that an exact solution of the Zeeman effect in highly excited atoms cannot be reached. (Author) [pt
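Mappings of this kind are quadratic, of Kustaanheimo-Stiefel/Hopf type, and the identity |x| = |u|² is what converts the Coulomb problem into a harmonic oscillator. A quick numerical check of that identity, using one standard choice of the map (the paper's exact conventions may differ):

```python
import numpy as np

rng = np.random.default_rng(4)

def hopf_map(u):
    """Quadratic map R^4 -> R^3 (Hopf/Kustaanheimo-Stiefel type)."""
    u1, u2, u3, u4 = u
    return np.array([
        2.0 * (u1 * u3 + u2 * u4),
        2.0 * (u2 * u3 - u1 * u4),
        u1**2 + u2**2 - u3**2 - u4**2,
    ])

u = rng.standard_normal(4)
x = hopf_map(u)
# Key identity: |x| = |u|^2, so r in Coulomb coordinates is a squared radius in
# oscillator coordinates; this underlies the Coulomb/harmonic correspondence.
```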

  19. Embedding of attitude determination in n-dimensional spaces

    Science.gov (United States)

    Bar-Itzhack, Itzhack Y.; Markley, F. Landis

    1988-01-01

The problem of attitude determination in n-dimensional spaces is addressed. The proper parameters are found, and it is shown that not all three-dimensional methods have useful extensions to higher dimensions. It is demonstrated that Rodrigues parameters are conveniently extendable to other dimensions. An algorithm for using these parameters in the general n-dimensional case is developed and tested with a four-dimensional example. The correct mathematical description of angular velocities is addressed, showing that angular velocity in n dimensions cannot be represented by a vector but rather by a tensor of the second rank. Only in three dimensions can the angular velocity be described by a vector.
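The n-dimensional extension of Rodrigues parameters can be illustrated with the Cayley transform, which builds an n-dimensional rotation from a skew-symmetric matrix (the same rank-two skew tensor that, as the record notes, represents angular velocity when n is not 3). A numpy sketch; the paper's exact parametrization may differ.

```python
import numpy as np

rng = np.random.default_rng(5)

def cayley_rotation(S):
    """Rotation from a skew-symmetric S via the Cayley transform
    R = (I - S)^-1 (I + S), the n-dimensional analogue of the
    Rodrigues-parameter construction."""
    n = S.shape[0]
    I = np.eye(n)
    return np.linalg.solve(I - S, I + S)

n = 4
A = rng.standard_normal((n, n))
S = 0.5 * (A - A.T)        # skew-symmetric: n(n-1)/2 free parameters
R = cayley_rotation(S)     # proper orthogonal: R.T @ R = I, det R = 1
```

Note that only for n = 3 does the count n(n-1)/2 equal n, which is why angular velocity reassembles into a vector in three dimensions alone.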

  20. Charged fluid distribution in higher dimensional spheroidal space-time

    Indian Academy of Sciences (India)

    A general solution of Einstein field equations corresponding to a charged fluid distribution on the background of higher dimensional spheroidal space-time is obtained. The solution generates several known solutions for superdense star having spheroidal space-time geometry.

  1. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); CREST, JST, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Shiro, Masanori [Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); Mathematical Neuroinformatics Group, Advanced Industrial Science and Technology, Tsukuba, Ibaraki 305-8568 (Japan); Takahashi, Nozomu; Mas, Paloma [Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193 (Spain)

    2015-01-15

The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. A general mathematical model requires many parameters, which are difficult to fit to the relatively short high-dimensional time series typically observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing explicitly for approximation errors. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  2. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    Science.gov (United States)

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. A general mathematical model requires many parameters, which are difficult to fit to the relatively short high-dimensional time series typically observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing explicitly for approximation errors. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  3. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    International Nuclear Information System (INIS)

    Hirata, Yoshito; Aihara, Kazuyuki; Suzuki, Hideyuki; Shiro, Masanori; Takahashi, Nozomu; Mas, Paloma

    2015-01-01

The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. A general mathematical model requires many parameters, which are difficult to fit to the relatively short high-dimensional time series typically observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing explicitly for approximation errors. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data
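The barycentric-coordinate step can be posed as a small linear program: find convex weights over library states that minimize the worst-coordinate approximation error of a query state. The sketch below uses scipy.optimize.linprog on synthetic data; the error norm and the construction of the library are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)

# Approximate a query state y by a convex (barycentric) combination of k
# library states (columns of P), via the LP:
#   min eps  s.t.  -eps <= (P w - y)_i <= eps,  sum(w) = 1,  w >= 0.
d, k = 5, 12
P = rng.standard_normal((d, k))       # columns: observed phase-space states
w_true = rng.dirichlet(np.ones(k))
y = P @ w_true                        # query lies inside the convex hull

# Decision variables: [w (k entries), eps (1 entry)].
c = np.zeros(k + 1)
c[-1] = 1.0                           # minimize eps
A_ub = np.block([[P, -np.ones((d, 1))],    #  P w - eps <=  y
                 [-P, -np.ones((d, 1))]])  # -P w - eps <= -y
b_ub = np.concatenate([y, -y])
A_eq = np.concatenate([np.ones(k), [0.0]])[None, :]  # sum(w) = 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * k + [(0, None)])
w, eps = res.x[:k], res.x[-1]
```

Because y is built inside the hull, the optimal eps is essentially zero; for a query outside the hull, eps reports the explicit approximation error the abstract mentions.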

  4. Statistical Identification of Composed Visual Features Indicating High Likelihood of Grasp Success

    DEFF Research Database (Denmark)

    Thomsen, Mikkel Tang; Bodenhagen, Leon; Krüger, Norbert

    2013-01-01

…configurations of three 3D surface features that predict grasping actions with a high success probability. The strategy is based on first computing spatial relations between visual entities and secondly exploring the cross-space of these relational features and grasping actions. The data foundation … for identifying such indicative feature constellations is generated in a simulated environment wherein visual features are extracted and a large number of grasping actions are evaluated through dynamic simulation. Based on the identified feature constellations, we validate by applying the acquired knowledge…

  5. Feature extraction with deep neural networks by a generalized discriminant analysis.

    Science.gov (United States)

    Stuhlsatz, André; Lippel, Jens; Zielke, Thomas

    2012-04-01

    We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
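GerDA generalizes classical LDA, which is easy to sketch for contrast: with C classes the discriminative subspace has at most C - 1 dimensions, obtained from the within-class and between-class scatter matrices. Synthetic Gaussian data and numpy only; this is the linear baseline the paper's DNN approach improves upon, not GerDA itself.

```python
import numpy as np

rng = np.random.default_rng(7)

# Three Gaussian classes in 10-d; LDA can use at most C - 1 = 2 dimensions.
C, per, d = 3, 100, 10
means = 3.0 * rng.standard_normal((C, d))
X = np.concatenate([m + rng.standard_normal((per, d)) for m in means])
labels = np.repeat(np.arange(C), per)

mu = X.mean(0)
Sw = sum(np.cov(X[labels == c].T) for c in range(C))       # within-class scatter
Sb = sum(per * np.outer(X[labels == c].mean(0) - mu,
                        X[labels == c].mean(0) - mu) for c in range(C))

# Discriminant directions: leading eigenvectors of Sw^-1 Sb (rank(Sb) <= C - 1).
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(evals.real)[::-1]
W = evecs.real[:, order[:C - 1]]    # d x (C - 1) projection matrix
Z = (X - mu) @ W                    # low-dimensional discriminative features
```

The eigenvalues beyond C - 1 vanish, which is the "intrinsic dimensionality bounded by the number of classes" property the abstract refers to.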

  6. Characterization of discontinuities in high-dimensional stochastic problems on adaptive sparse grids

    International Nuclear Information System (INIS)

    Jakeman, John D.; Archibald, Richard; Xiu Dongbin

    2011-01-01

In this paper we present a set of efficient algorithms for the detection and identification of discontinuities in high-dimensional space. The method is based on an extension of polynomial annihilation for discontinuity detection in low dimensions. Compared to earlier work, the present method offers significant improvements for high-dimensional problems. The core of the algorithms relies on adaptive refinement of sparse grids. It is demonstrated that in the commonly encountered cases where a discontinuity resides on a small subset of the dimensions, the present method becomes 'optimal', in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space. The details of the algorithms are presented and various numerical examples are used to demonstrate the efficacy of the method.
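A one-dimensional caricature of the annihilation idea: an m-th order finite difference annihilates polynomials of degree below m, so the indicator is near zero on smooth regions and on the order of the jump across a discontinuity. The adaptive sparse-grid machinery is the paper's real contribution and is not reproduced here; the function and threshold below are illustrative.

```python
import numpy as np

# Piecewise-smooth test function: a quadratic with a unit jump at x = 0.3.
x = np.linspace(-1.0, 1.0, 201)
f = np.where(x < 0.3, x**2, x**2 + 1.0)

# Order-3 differences annihilate the quadratic part exactly, so only the
# stencils straddling the jump produce non-negligible values.
m = 3
ind = np.abs(np.diff(f, n=m))        # annihilation-style discontinuity indicator
edges = np.flatnonzero(ind > 0.5)    # stencil positions flagged as discontinuous
```

Only the three stencils that straddle the jump (around bin 128) are flagged; everywhere else the indicator is at floating-point level.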

  7. State-space dimensionality in short-memory hidden-variable theories

    International Nuclear Information System (INIS)

    Montina, Alberto

    2011-01-01

Recently we have presented a hidden-variable model of measurements for a qubit where the hidden-variable state-space dimension is one-half the quantum-state manifold dimension. The absence of a short-memory (Markov) dynamics is the price paid for this dimensional reduction. The conflict between having the Markov property and achieving the dimensional reduction was proved by Montina [A. Montina, Phys. Rev. A 77, 022104 (2008)] using an additional hypothesis of trajectory relaxation. Here we analyze this hypothesis in more detail, introduce the concept of an invertible process, and report a proof that clarifies the role played by the topology of the hidden-variable space. This is accomplished by requiring suitable regularity properties of the conditional probability governing the dynamics. In the case of minimal dimension, the set of continuous hidden variables is identified with an object living in an N-dimensional Hilbert space whose dynamics is described by the Schroedinger equation. A method for generating the economical non-Markovian model for the qubit is also presented.

  8. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest.

    Science.gov (United States)

    Ma, Suliang; Chen, Mingxuan; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-04-16

    Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type are key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition and the random forest algorithm, an effective identification system was developed in this paper. First, because Shannon entropy gives an incomplete description, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of each feature variable, and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experiments show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods.
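As a rough illustration of the wavelet packet time-frequency energy-rate (WTFER) feature described above, the sketch below decomposes a signal with a Haar wavelet packet and reports each leaf node's share of the total energy; the wavelet choice, depth, and synthetic signal are assumptions for illustration, not details from the paper.

```python
import numpy as np

def haar_wp_energy_rates(signal, depth):
    """Energy rate of each wavelet-packet leaf at the given depth, using
    Haar filters. Signal length must be divisible by 2**depth."""
    nodes = [np.asarray(signal, dtype=float)]
    for _ in range(depth):
        next_nodes = []
        for a in nodes:
            lo = (a[0::2] + a[1::2]) / np.sqrt(2.0)  # approximation band
            hi = (a[0::2] - a[1::2]) / np.sqrt(2.0)  # detail band
            next_nodes.extend([lo, hi])
        nodes = next_nodes
    energies = np.array([np.sum(n ** 2) for n in nodes])
    return energies / energies.sum()             # rates sum to 1

rng = np.random.default_rng(0)
vibration = rng.standard_normal(1024)            # stand-in for an HVCB vibration signal
rates = haar_wp_energy_rates(vibration, depth=3)
print(rates)  # 8 energy rates summing to 1
```

Each such energy-rate vector would form one row of the feature matrix handed to the random forest, whose feature importances can then prune the feature space.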

  9. hdm: High-dimensional metrics

    OpenAIRE

    Chernozhukov, Victor; Hansen, Christian; Spindler, Martin

    2016-01-01

    In this article the package High-dimensional Metrics (hdm) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e...

  10. Hypergraph-based anomaly detection of high-dimensional co-occurrences.

    Science.gov (United States)

    Silva, Jorge; Willett, Rebecca

    2009-03-01

    This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs in which edges may connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain, without any feature selection or dimensionality reduction, is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and it requires no tuning, bandwidth, or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown to outperform other state-of-the-art methods.

  11. Higher-dimensional relativistic-fluid spheres

    International Nuclear Information System (INIS)

    Patel, L. K.; Ahmedabad, Gujarat Univ.

    1997-01-01

    The authors consider the hydrostatic equilibrium of relativistic fluid spheres in a D-dimensional space-time. Three physically viable interior solutions of the Einstein field equations corresponding to perfect-fluid spheres in a D-dimensional space-time are obtained. When D = 4 they reduce to the Tolman IV solution, the Mehra solution and the Finch-Skea solution. The solutions are smoothly matched with the D-dimensional Schwarzschild exterior solution at the boundary r = a of the fluid sphere. Some physical features and other related details of the solutions are briefly discussed. A brief description of two other new solutions for higher-dimensional perfect-fluid spheres is also given.

  12. An Incremental Classification Algorithm for Mining Data with Feature Space Heterogeneity

    Directory of Open Access Journals (Sweden)

    Yu Wang

    2014-01-01

    Feature space heterogeneity exists in many real-world data sets: some features are of different importance for classification over different subsets. Moreover, the pattern of feature space heterogeneity might change dynamically over time as more and more data are accumulated. In this paper, we develop an incremental classification algorithm, Supervised Clustering for Classification with Feature Space Heterogeneity (SCCFSH), to address this problem. In our approach, supervised clustering is used to obtain a number of clusters such that the samples in each cluster are from the same class. After the removal of outliers, the relevance of the features in each cluster is calculated from their variations within the cluster. This feature relevance is then incorporated into the distance calculation used for classification. The main advantage of SCCFSH is that it solves a classification problem with feature space heterogeneity in an incremental way, which is favorable for online classification tasks with continuously changing data. Experimental results on a series of data sets and an application to a database marketing problem show the efficiency and effectiveness of the proposed approach.
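The classification step described above can be sketched as a relevance-weighted nearest-cluster rule. The relevance formula (inverse within-cluster variance) and the toy data are assumptions for illustration; the paper's exact definitions may differ.

```python
import numpy as np

def cluster_stats(points):
    """Centroid and per-feature relevance for one supervised cluster.
    Low within-cluster variance -> high relevance (one plausible choice)."""
    centroid = points.mean(axis=0)
    relevance = 1.0 / (points.var(axis=0) + 1e-6)
    return centroid, relevance

def classify(x, clusters):
    """Label of the cluster minimizing the relevance-weighted distance."""
    best_label, best_dist = None, np.inf
    for centroid, relevance, label in clusters:
        d = np.sum(relevance * (x - centroid) ** 2)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Class 0's cluster is tight in feature 0; class 1's is tight in feature 1.
a = np.array([[0.0, 0.0], [0.1, 2.0], [-0.1, -2.0]])
b = np.array([[5.0, 1.0], [-5.0, 1.1], [0.0, 0.9]])
clusters = [cluster_stats(a) + (0,), cluster_stats(b) + (1,)]
print(classify(np.array([0.05, -3.0]), clusters))  # 0
```

The incremental aspect of SCCFSH would then amount to updating cluster centroids and relevances as new labeled samples arrive.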

  13. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    Science.gov (United States)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image content used in mobile phone services, digital libraries, and catalog services is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for a desired image. Even though new images are profitable to service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, we propose a feature-based collaborative filtering (FBCF) method that reflects the user's most recent preference by representing his purchase sequence in the visual feature space. The proposed approach represents the images purchased in the past as feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides higher-quality recommendations and better performance than typical collaborative filtering and content-based filtering techniques.

  14. Rolling Bearing Fault Diagnosis Using Modified Neighborhood Preserving Embedding and Maximal Overlap Discrete Wavelet Packet Transform with Sensitive Features Selection

    Directory of Open Access Journals (Sweden)

    Fei Dong

    2018-01-01

    In order to enhance the performance of bearing fault diagnosis and classification, feature extraction and feature dimensionality reduction have become more important. The original statistical feature set was calculated from single-branch reconstruction vibration signals obtained using the maximal overlap discrete wavelet packet transform (MODWPT). In order to reduce the redundant information in the original statistical feature set, feature selection by adjusted Rand index and sum of within-class mean deviations (FSASD) was proposed to select fault-sensitive features. Furthermore, a modified feature dimensionality reduction method, supervised neighborhood preserving embedding with label information (SNPEL), was proposed to realize low-dimensional representations of the high-dimensional feature space. Finally, vibration signals collected from two experimental test rigs were employed to evaluate the performance of the proposed procedure. The results demonstrate the effectiveness, adaptability, and superiority of the proposed procedure, which can serve as the basis of an intelligent bearing fault diagnosis system.

  15. Vector calculus in non-integer dimensional space and its applications to fractal media

    Science.gov (United States)

    Tarasov, Vasily E.

    2015-02-01

    We suggest a generalization of vector calculus to the case of non-integer dimensional space. First- and second-order operations such as the gradient, divergence, and the scalar and vector Laplace operators for non-integer dimensional space are defined. For simplification we consider scalar and vector fields that are independent of angles. We formulate a generalization of vector calculus for rotationally covariant scalar and vector functions. This generalization allows us to describe fractal media and materials in the framework of continuum models with non-integer dimensional space. As examples of application of the suggested calculus, we consider the elasticity of fractal materials (a fractal hollow ball and a fractal cylindrical pipe with pressure inside and outside), the steady distribution of heat in fractal media, and the electric field of a fractal charged cylinder. We solve the corresponding equations for non-integer dimensional space models.

  16. A kernel-based multi-feature image representation for histopathology image classification

    International Nuclear Information System (INIS)

    Moreno, J.; Caicedo, J.; Gonzalez, F.

    2010-01-01

    This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined together in a unique image representation space using kernel functions. This feature space is further enhanced by the application of latent semantic analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, support vector machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown a successful performance in a classification task using a dataset with 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on an SVM directly applied to the original kernel.
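The kernel-combination idea can be sketched as follows: compute one Gram matrix per histogram feature and add them, since a sum of valid kernels is again a valid kernel. The histogram-intersection kernel and the synthetic histograms below are illustrative assumptions; the combined matrix could then be passed to an SVM with a precomputed kernel.

```python
import numpy as np

def intersection_kernel(H):
    """Gram matrix of the histogram-intersection kernel for rows of H."""
    n = H.shape[0]
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = np.minimum(H[i], H[j]).sum()
    return K

rng = np.random.default_rng(1)
color_hists = rng.dirichlet(np.ones(16), size=5)   # 5 images, 16 color bins
texture_hists = rng.dirichlet(np.ones(8), size=5)  # 5 images, 8 texture bins

# Summing per-feature Gram matrices yields the combined representation space.
K_combined = intersection_kernel(color_hists) + intersection_kernel(texture_hists)
print(K_combined.shape)  # (5, 5)
```

Because each histogram is normalized, the diagonal of each per-feature Gram matrix is 1, so the combined diagonal is the number of feature types summed.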


  18. High-dimensional cluster analysis with the Masked EM Algorithm

    Science.gov (United States)

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high-channel-count neural probes. In this problem, only a small subset of features provides information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data and to real-world high-channel-count spike sorting data. PMID:25149694

  19. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    OpenAIRE

    Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2012-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data)...

  20. Model-based Clustering of High-Dimensional Data in Astrophysics

    Science.gov (United States)

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of measurement capabilities. As a consequence, data are nowadays frequently high-dimensional and available in bulk or as streams. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show disappointing behavior in high-dimensional spaces, mainly due to their dramatic over-parameterization. Recent developments in model-based classification overcome these drawbacks and allow high-dimensional data to be classified efficiently, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.

  1. Computing and visualizing time-varying merge trees for high-dimensional data

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick [Univ. of Leipzig (Germany); Heine, Christian [Univ. of Kaiserslautern (Germany); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morozov, Dmitry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scheuermann, Gerik [Univ. of Leipzig (Germany)

    2017-06-03

    We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree, a structure for identifying topological features based on thresholding in scalar fields. The method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.

  2. Space moving target detection using time domain feature

    Science.gov (United States)

    Wang, Min; Chen, Jin-yong; Gao, Feng; Zhao, Jin-yu

    2018-01-01

    Traditional space target detection methods mainly use the spatial characteristics of the star map to detect targets, and cannot make full use of time-domain information. This paper presents a new space moving target detection method based on time-domain features. We first construct the time-spectral data of the star map, then analyze the time-domain features of the main objects (target, stars, and background) in star maps, and finally detect the moving targets using the single-pulse feature of the time-domain signal. Experimental results on real star-map target detection show that the proposed method can effectively detect the trajectories of moving targets in a star map sequence, achieving a detection probability of 99% at a false alarm rate of about 8×10^-5, which outperforms the compared algorithms.
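A toy version of the single-pulse idea: a moving target crosses a given pixel only once, producing one short pulse in that pixel's time series, whereas a star is persistently bright across frames and a background pixel contains only noise. The robust threshold and synthetic series below are illustrative assumptions, not the paper's detector.

```python
import numpy as np

def is_single_pulse(series, k=8.0):
    """Flag a pixel time series containing exactly one bright frame,
    using a median/MAD robust z-score threshold."""
    med = np.median(series)
    mad = np.median(np.abs(series - med)) + 1e-9
    above = (series - med) / mad > k
    return bool(above.sum() == 1)      # exactly one pulse -> moving target

rng = np.random.default_rng(2)
n_frames = 50
background = rng.normal(10.0, 1.0, n_frames)   # noise-only pixel
star = background + 100.0                      # persistently bright pixel
target = background.copy()
target[23] += 80.0                             # target crosses at frame 23

print(is_single_pulse(target), is_single_pulse(star), is_single_pulse(background))
```

Note that the star is not flagged: its brightness is constant relative to its own time series, so no single frame stands out.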

  3. Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014

    CERN Document Server

    Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina

    2016-01-01

    This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...

  4. Topological properties of function spaces $C_k(X,2)$ over zero-dimensional metric spaces $X$

    OpenAIRE

    Gabriyelyan, S.

    2015-01-01

    Let $X$ be a zero-dimensional metric space and $X'$ its derived set. We prove the following assertions: (1) the space $C_k(X,2)$ is an Ascoli space iff $C_k(X,2)$ is a $k_\mathbb{R}$-space iff either $X$ is locally compact or $X$ is not locally compact but $X'$ is compact, (2) $C_k(X,2)$ is a $k$-space iff either $X$ is a topological sum of a Polish locally compact space and a discrete space or $X$ is not locally compact but $X'$ is compact, (3) $C_k(X,2)$ is a sequential space iff $X$ is a Pol...

  5. The feature-weighted receptive field: an interpretable encoding model for complex feature spaces.

    Science.gov (United States)

    St-Yves, Ghislain; Naselaris, Thomas

    2017-06-20

    We introduce the feature-weighted receptive field (fwRF), an encoding model designed to balance expressiveness, interpretability and scalability. The fwRF is organized around the notion of a feature map: a transformation of visual stimuli into visual features that preserves the topology of visual space (but not necessarily the native resolution of the stimulus). The key assumption of the fwRF model is that activity in each voxel encodes variation in a spatially localized region across multiple feature maps. This region is fixed for all feature maps; however, the contribution of each feature map to voxel activity is weighted. Thus, the model has two separable sets of parameters: "where" parameters that characterize the location and extent of pooling over visual features, and "what" parameters that characterize tuning to visual features. The "where" parameters are analogous to classical receptive fields, while the "what" parameters are analogous to classical tuning functions. Because these parameters are separable, the fwRF model's complexity is independent of the resolution of the underlying feature maps. This makes it possible to estimate models with thousands of high-resolution feature maps from relatively small amounts of data. Once a fwRF model has been estimated from data, spatial pooling and feature tuning can be read off directly with no (or very little) additional post-processing or in-silico experimentation. We describe an optimization algorithm for estimating fwRF models from data acquired during standard visual neuroimaging experiments. We then demonstrate the model's application to two distinct sets of features: Gabor wavelets and features supplied by a deep convolutional neural network. We show that when Gabor feature maps are used, the fwRF model recovers receptive fields and spatial frequency tuning functions consistent with known organizational principles of the visual cortex. We also show that a fwRF model can be used to regress entire deep

  6. An Integrated Approach to Parameter Learning in Infinite-Dimensional Space

    Energy Technology Data Exchange (ETDEWEB)

    Boyd, Zachary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wendelberger, Joanne Roth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-14

    The availability of sophisticated modern physics codes has greatly extended the ability of domain scientists to understand the processes underlying their observations of complicated phenomena, but it has also introduced the curse of dimensionality via the many user-set parameters available to tune. Many of these parameters are naturally expressed as functional data, such as initial temperature distributions, equations of state, and controls. Thus, when attempting to find parameters that match observed data, navigating parameter space becomes highly non-trivial, especially considering that accurate simulations can be expensive both in terms of time and money. Existing solutions include batch-parallel simulations, high-dimensional derivative-free optimization, and expert guessing, all of which make some contribution to solving the problem but do not completely resolve the issue. In this work, we explore the possibility of coupling all three of these techniques by designing user-guided, batch-parallel optimization schemes. Our motivating example is a neutron diffusion partial differential equation where the time-varying multiplication factor serves as the unknown control parameter to be learned. We find that a simple, batch-parallelizable, random-walk scheme is able to make some progress on the problem but does not by itself produce satisfactory results. After reducing the dimensionality of the problem using functional principal component analysis (fPCA), we are able to track the progress of the solver in a visually simple way, as well as view the associated principal components. This allows a human to make reasonable guesses about which points in the state space the random walker should try next. Thus, by combining the random walker's ability to find descent directions with the human's understanding of the underlying physics, it is possible to use expensive simulations more efficiently and more quickly arrive at the
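The combination of fPCA reduction and a batch-parallelizable random walk might be sketched as follows, with a cheap quadratic misfit standing in for the expensive simulation; all specifics (basis size, step size, objective) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 100)

# Ensemble of candidate control functions -> principal components (fPCA on a grid).
ensemble = np.array([np.sin(2 * np.pi * f * t) for f in rng.uniform(0.5, 3.0, 40)])
mean = ensemble.mean(axis=0)
_, _, Vt = np.linalg.svd(ensemble - mean, full_matrices=False)
basis = Vt[:3]                                   # keep 3 principal components

target = np.sin(2 * np.pi * 1.5 * t)             # "observed data" to match

def objective(coeffs):
    control = mean + coeffs @ basis              # back to function space
    return np.sum((control - target) ** 2)       # misfit, stand-in for a simulation

# Random walk in the 3-dimensional coefficient space, keeping improvements.
coeffs = np.zeros(3)
best = objective(coeffs)
for _ in range(200):                             # proposals could run in batch-parallel
    proposal = coeffs + rng.normal(0.0, 0.5, 3)
    score = objective(proposal)
    if score < best:
        coeffs, best = proposal, score

print(best)  # misfit after the walk, no larger than the starting misfit
```

The user-guided element would replace the blind Gaussian proposals with directions suggested by a human inspecting the principal-component coefficients.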

  7. Convergent-beam electron diffraction study of incommensurately modulated crystals. Pt. 2. (3 + 1)-dimensional space groups

    International Nuclear Information System (INIS)

    Terauchi, Masami; Takahashi, Mariko; Tanaka, Michiyoshi

    1994-01-01

    The convergent-beam electron diffraction (CBED) method for determining three-dimensional space groups is extended to the determination of the (3 + 1)-dimensional space groups of one-dimensional incommensurately modulated crystals. It is clarified that an approximate dynamical extinction line appears in the CBED discs of the reflections caused by an incommensurate modulation. The extinction enables the space-group determination of the (3 + 1)-dimensional crystals, i.e. the one-dimensional incommensurately modulated crystals. An example of the dynamical extinction line is shown using an incommensurately modulated crystal of Sr2Nb2O7. Tables of the dynamical extinction lines appearing in CBED patterns are given for all the (3 + 1)-dimensional space groups of incommensurately modulated crystals.

  8. Three-dimensional space: locomotory style explains memory differences in rats and hummingbirds.

    Science.gov (United States)

    Flores-Abreu, I Nuri; Hurly, T Andrew; Ainge, James A; Healy, Susan D

    2014-06-07

    While most animals live in a three-dimensional world, they move through it to different extents depending on their mode of locomotion: terrestrial animals move vertically less than do swimming and flying animals. As nearly everything we know about how animals learn and remember locations in space comes from two-dimensional experiments in the horizontal plane, here we determined whether the use of three-dimensional space by a terrestrial and a flying animal was correlated with memory for a rewarded location. In the cubic mazes in which we trained and tested rats and hummingbirds, rats moved more vertically than horizontally, whereas hummingbirds moved equally in the three dimensions. Consistent with their movement preferences, rats were more accurate in relocating the horizontal component of a rewarded location than they were in the vertical component. Hummingbirds, however, were more accurate in the vertical dimension than they were in the horizontal, a result that cannot be explained by their use of space. Either as a result of evolution or ontogeny, it appears that birds and rats prioritize horizontal versus vertical components differently when they remember three-dimensional space.

  9. Dirac equation in 5- and 6-dimensional curved space-time manifolds

    International Nuclear Information System (INIS)

    Vladimirov, Yu.S.; Popov, A.D.

    1984-01-01

    A program for constructing a unified multidimensional theory of gravitation, electromagnetism and electrically charged matter, with a transition from 5-dimensional variants to a 6-dimensional theory possessing signature (+----+), is developed. To write the Dirac equation in 5- and 6-dimensional curved space-time manifolds, the tetrad formalism and the γ-matrix formulation of the General Relativity Theory are used. It is shown that the 6-dimensional case unifies the two particular cases of the 5-dimensional theory and corresponds to two possibilities of the theory developed by Kadyshevski.

  10. Clustering high dimensional data

    DEFF Research Database (Denmark)

    Assent, Ira

    2012-01-01

    High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called ‘curse of dimensionality’, coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with an increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster...

  11. A biologically inspired scale-space for illumination invariant feature detection

    International Nuclear Information System (INIS)

    Vonikakis, Vasillios; Chrysostomou, Dimitrios; Kouskouridas, Rigas; Gasteratos, Antonios

    2013-01-01

    This paper presents a new illumination-invariant operator, combining the nonlinear characteristics of biological center-surround cells with the classic difference-of-Gaussians operator. It specifically targets underexposed image regions, exhibiting increased sensitivity to low contrast while not affecting performance in correctly exposed ones. The proposed operator can be used to create a scale-space, which in turn can be part of a SIFT-based detector module. The main advantage of this illumination-invariant scale-space is that, using just one global threshold, keypoints can be detected in both dark and bright image regions. In order to evaluate the degree of illumination invariance that the proposed operator, as well as other existing operators, exhibits, a new benchmark dataset is introduced. It features a greater variety of imaging conditions than existing databases, containing real scenes under various degrees and combinations of uniform and non-uniform illumination. Experimental results show that the proposed detector extracts a greater number of features, with a high level of repeatability, compared to other approaches, for both uniform and non-uniform illumination. This, along with its simple implementation, renders the proposed feature detector particularly appropriate for outdoor vision systems working in environments under uncontrolled illumination conditions.

  12. Feature-space-based FMRI analysis using the optimal linear transformation.

    Science.gov (United States)

    Sun, Fengrong; Morris, Drew; Lee, Wayne; Taylor, Margot J; Mills, Travis; Babyn, Paul S

    2010-09-01

    The optimal linear transformation (OLT), a feature-space image analysis technique, was first presented in the field of MRI. This paper proposes a method for extending OLT from MRI to functional MRI (fMRI) to improve activation-detection performance over conventional approaches to fMRI analysis. In this method, first, ideal hemodynamic response time series for the different stimuli were generated by convolving the theoretical hemodynamic response model with the stimulus timing. Second, constructing hypothetical signature vectors for the different activity patterns of interest from the ideal hemodynamic responses, OLT was used to extract features from the fMRI data. The resultant feature space had particular geometric clustering properties. It was then classified into groups, each pertaining to an activity pattern of interest; the applied signature vector for each group was obtained by averaging. Third, using the applied signature vectors, OLT was applied again to generate fMRI composite images with high SNRs for the desired activity patterns. Simulations and a blocked fMRI experiment were employed to verify the method and compare it with the general linear model (GLM)-based analysis. The simulation studies and the experimental results indicated the superiority of the proposed method over the GLM-based analysis in detecting brain activities.
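The first step, generating an ideal hemodynamic response by convolving a hemodynamic response model with the stimulus timing, can be sketched directly; the single-gamma HRF, sampling interval, and event times below are illustrative assumptions (canonical models are typically double-gamma).

```python
import numpy as np

TR = 1.0                                   # sampling interval, seconds (assumed)
t = np.arange(0, 30, TR)
hrf = (t ** 5) * np.exp(-t)                # gamma-shaped HRF, peak near t = 5 s
hrf /= hrf.max()

n_scans = 100
stimulus = np.zeros(n_scans)
stimulus[[10, 40, 70]] = 1.0               # three brief stimulus events

# Ideal hemodynamic response time series = stimulus timing (*) HRF.
ideal_response = np.convolve(stimulus, hrf)[:n_scans]
print(ideal_response.argmax())  # 15 (first onset at scan 10 plus the ~5-scan HRF lag)
```

Each stimulus condition would yield one such ideal time series, from which the hypothetical signature vectors for OLT are constructed.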

  13. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'

  14. Construction of high-dimensional universal quantum logic gates using a Λ system coupled with a whispering-gallery-mode microresonator.

    Science.gov (United States)

    He, Ling Yan; Wang, Tie-Jun; Wang, Chuan

    2016-07-11

    High-dimensional quantum systems provide a higher quantum channel capacity, which exhibits potential applications in quantum information processing. However, high-dimensional universal quantum logic gates are difficult to achieve directly with only high-dimensional interaction between two quantum systems, and a large number of two-dimensional gates is required to build even a small high-dimensional quantum circuit. In this paper, we propose a scheme to implement a general controlled-flip (CF) gate in which a high-dimensional single photon serves as the target qudit and a stationary qubit works as the control logic, by employing a three-level Λ-type system coupled with a whispering-gallery-mode microresonator. In our scheme, the required number of interactions between the photon and the solid-state system is greatly reduced compared with the traditional method, which decomposes the high-dimensional Hilbert space into two-dimensional subspaces, and the scheme operates on a shorter temporal scale, easing experimental realization. Moreover, we discuss the performance and feasibility of our hybrid CF gate, concluding that it can easily be extended to the 2n-dimensional case and is feasible with current technology.
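    The gate's logical action (as opposed to its cavity-QED implementation) can be written down directly. A minimal sketch, assuming the basis ordering |control, target> and taking "flip" to mean the level reversal |j> -> |d-1-j> on the target qudit; both conventions are assumptions, not taken from the paper:

```python
# Hedged sketch of a controlled-flip gate on a qubit (control) and a
# d-level qudit (target): identity on the target when the control is |0>,
# the level reversal |j> -> |d-1-j> when the control is |1>.
def controlled_flip(d):
    """(2d x 2d) matrix of CF = |0><0| (x) I_d + |1><1| (x) F_d."""
    n = 2 * d
    m = [[0.0] * n for _ in range(n)]
    for j in range(d):
        m[j][j] = 1.0                      # control |0>: identity on target
        m[d + j][d + (d - 1 - j)] = 1.0    # control |1>: flip target levels
    return m

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

CF = controlled_flip(4)   # a ququart target: an 8 x 8 permutation matrix
```

Since the flip is an involution, CF applied twice is the identity, which the assertions below check.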

  15. Non-retinotopic feature processing in the absence of retinotopic spatial layout and the construction of perceptual space from motion.

    Science.gov (United States)

    Ağaoğlu, Mehmet N; Herzog, Michael H; Oğmen, Haluk

    2012-10-15

    The spatial representation of a visual scene in the early visual system is well known. The optics of the eye map the three-dimensional environment onto two-dimensional images on the retina. These retinotopic representations are preserved in the early visual system. Retinotopic representations and processing are among the most prevalent concepts in visual neuroscience. However, it has long been known that a retinotopic representation of the stimulus is neither sufficient nor necessary for perception. Saccadic Stimulus Presentation Paradigm and the Ternus-Pikler displays have been used to investigate non-retinotopic processes with and without eye movements, respectively. However, neither of these paradigms eliminates the retinotopic representation of the spatial layout of the stimulus. Here, we investigated how stimulus features are processed in the absence of a retinotopic layout and in the presence of retinotopic conflict. We used anorthoscopic viewing (slit viewing) and pitted a retinotopic feature-processing hypothesis against a non-retinotopic feature-processing hypothesis. Our results support the predictions of the non-retinotopic feature-processing hypothesis and demonstrate the ability of the visual system to operate non-retinotopically at a fine feature processing level in the absence of a retinotopic spatial layout. Our results suggest that perceptual space is actively constructed from the perceptual dimension of motion. The implications of these findings for normal ecological viewing conditions are discussed. Copyright 2012 Elsevier Ltd. All rights reserved.

  16. High-Dimensional Metrics in R

    OpenAIRE

    Chernozhukov, Victor; Hansen, Chris; Spindler, Martin

    2016-01-01

    The package High-dimensional Metrics (hdm) is an evolving collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e.g., treatment or poli...

  17. Application of data mining in three-dimensional space time reactor model

    International Nuclear Information System (INIS)

    Jiang Botao; Zhao Fuyu

    2011-01-01

    A high-fidelity three-dimensional space time nodal method has been developed to simulate the dynamics of the reactor core for real time simulation. This three-dimensional reactor core mathematical model is composed of six sub-models: a neutron kinetics model, a decay heat model, a fuel conduction model, a thermal hydraulics model, a lower plenum model, and a core flow distribution model. During simulation, each sub-model produces operational data, and much valuable information reflecting the reactor core's operating status can be hidden within them; discovering this information is the primary concern. Data mining (DM) was created and developed to solve exactly this kind of problem, in engineering as well as in business fields. Generally speaking, data mining is the process of finding useful and interesting information in a huge data pool. The Support Vector Machine (SVM) is a data mining technique that has appeared in recent years, and Support Vector Regression (SVR) is the variant of SVM applied to regression problems. This paper presents only two significant sub-models of the three-dimensional reactor core mathematical model, the nodal space time neutron kinetics model and the thermal hydraulics model, based on which the neutron flux and enthalpy distributions of the core are obtained by solving the three-dimensional nodal space time kinetics equations and the energy equations for single and two-phase flows, respectively. Moreover, it describes how the three-dimensional reactor core model can also be used to calculate and determine the reactivity effects of moderator temperature, boron concentration, fuel temperature, coolant void, xenon worth, samarium worth, control element assembly (CEA) positions, and core burnup status. The main mathematical theory of SVR is then introduced briefly, on the basis of which SVR is applied to the data generated by two sample calculations, a rod ejection transient and axial

  18. Faithful representation of similarities among three-dimensional shapes in human vision.

    Science.gov (United States)

    Cutzu, F; Edelman, S

    1996-01-01

    Efficient and reliable classification of visual stimuli requires that their representations reside in a low-dimensional and, therefore, computationally manageable feature space. We investigated the ability of the human visual system to derive such representations from the sensory input, a highly nontrivial task given the million or so dimensions of the visual signal at its entry point to the cortex. In a series of experiments, subjects were presented with sets of parametrically defined shapes; the points in the common high-dimensional parameter space corresponding to the individual shapes formed regular planar (two-dimensional) patterns such as a triangle, a square, etc. We then used multidimensional scaling to arrange the shapes in planar configurations, dictated by their experimentally determined perceived similarities. The resulting configurations closely resembled the original arrangements of the stimuli in the parameter space. This achievement of the human visual system was replicated by a computational model derived from a theory of object representation in the brain, according to which similarities between objects, and not the geometry of each object, need to be faithfully represented. PMID: 8876260
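    The analysis step the abstract relies on, classical multidimensional scaling, can be sketched compactly. The four-point unit-square configuration below is an illustrative stand-in, not the paper's stimulus set; since MDS recovers the configuration only up to rotation and reflection, only pairwise distances are checked:

```python
# Hedged sketch of classical multidimensional scaling (MDS): recover a
# planar configuration from pairwise dissimilarities by double-centering
# and extracting the top-2 eigenpairs with power iteration + deflation.
import math

def classical_mds_2d(D, iters=300):
    n = len(D)
    D2 = [[D[i][j] ** 2 for j in range(n)] for i in range(n)]
    rm = [sum(row) / n for row in D2]          # row means
    gm = sum(rm) / n                           # grand mean
    # Double-centered Gram matrix B = -1/2 * J D2 J.
    B = [[-0.5 * (D2[i][j] - rm[i] - rm[j] + gm) for j in range(n)]
         for i in range(n)]
    axes = []
    for r in range(2):
        # Deterministic, asymmetric start vector for power iteration.
        v = [math.sin(17.0 * (k + 1) + 3.0 * r) for k in range(n)]
        lam = 0.0
        for _ in range(iters):
            w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
            lam = math.sqrt(sum(x * x for x in w))
            v = [x / lam for x in w]
        axes.append([math.sqrt(lam) * x for x in v])
        # Deflate the found eigenpair before extracting the next one.
        B = [[B[i][j] - lam * v[i] * v[j] for j in range(n)]
             for i in range(n)]
    return [[axes[0][i], axes[1][i]] for i in range(n)]

# Unit square: edge length 1, diagonals sqrt(2).
s = math.sqrt(2.0)
D = [[0, 1, s, 1],
     [1, 0, 1, s],
     [s, 1, 0, 1],
     [1, s, 1, 0]]
pts = classical_mds_2d(D)
```

For exactly Euclidean dissimilarities like these, the embedded points reproduce the original distances to machine precision.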

  19. Naked singularities in higher dimensional Vaidya space-times

    International Nuclear Information System (INIS)

    Ghosh, S. G.; Dadhich, Naresh

    2001-01-01

    We investigate the end state of the gravitational collapse of a null fluid in higher-dimensional space-times. Both naked singularities and black holes are shown to develop as the final outcome of the collapse. The naked singularity spectrum of the collapsing Vaidya region (4D) gets covered as the number of dimensions increases, and hence higher dimensions favor a black hole over a naked singularity. The cosmic censorship conjecture would be fully respected for a space of infinite dimension.

  20. Large anterior temporal Virchow-Robin spaces: unique MR imaging features

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Anthony T. [Monash University, Neuroradiology Service, Monash Imaging, Monash Health, Melbourne, Victoria (Australia); Chandra, Ronil V. [Monash University, Neuroradiology Service, Monash Imaging, Monash Health, Melbourne, Victoria (Australia); Monash University, Department of Surgery, Faculty of Medicine, Nursing and Health Sciences, Melbourne (Australia); Trost, Nicholas M. [St Vincent' s Hospital, Neuroradiology Service, Melbourne (Australia); McKelvie, Penelope A. [St Vincent' s Hospital, Anatomical Pathology, Melbourne (Australia); Stuckey, Stephen L. [Monash University, Neuroradiology Service, Monash Imaging, Monash Health, Melbourne, Victoria (Australia); Monash University, Southern Clinical School, Faculty of Medicine, Nursing and Health Sciences, Melbourne (Australia)

    2015-05-01

    Large Virchow-Robin (VR) spaces may mimic cystic tumor. The anterior temporal subcortical white matter is a recently described preferential location, with only 18 reported cases. Our aim was to identify unique MR features that could increase prospective diagnostic confidence. Thirty-nine cases were identified between November 2003 and February 2014. Demographic and clinical data and the initial radiological report were retrospectively reviewed. Two neuroradiologists reviewed all MR imaging; a neuropathologist reviewed the histological data. Median age was 58 years (range 24-86 years); the majority (69 %) were female. There were no clinical symptoms that could be directly referable to the lesion. Two thirds were considered to be VR spaces on the initial radiological report. Mean maximal size was 9 mm (range 5-17 mm); the majority (79 %) had perilesional T2 or fluid-attenuated inversion recovery (FLAIR) hyperintensity. The following were identified as potentially unique MR features: focal cortical distortion by an adjacent branch of the middle cerebral artery (92 %), smaller adjacent VR spaces (26 %), and a contiguous cerebrospinal fluid (CSF) intensity tract (21 %). Surgery was performed in three asymptomatic patients; histopathology confirmed VR spaces. The unique MR features were retrospectively identified in all three patients. Large anterior temporal lobe VR spaces commonly demonstrate perilesional T2 or FLAIR signal and can be misdiagnosed as cystic tumor. Potentially unique MR features that could increase prospective diagnostic confidence include focal cortical distortion by an adjacent branch of the middle cerebral artery, smaller adjacent VR spaces, and a contiguous CSF intensity tract. (orig.)

  1. Boosting Discriminant Learners for Gait Recognition Using MPCA Features

    Directory of Open Access Journals (Sweden)

    Haiping Lu

    2009-01-01

    Full Text Available This paper proposes a boosted linear discriminant analysis (LDA solution on features extracted by the multilinear principal component analysis (MPCA to enhance gait recognition performance. Three-dimensional gait objects are projected in the MPCA space first to obtain low-dimensional tensorial features. Then, lower-dimensional vectorial features are obtained through discriminative feature selection. These feature vectors are then fed into an LDA-style booster, where several regularized and weakened LDA learners work together to produce a strong learner through a novel feature weighting and sampling process. The LDA learner employs a simple nearest-neighbor classifier with a weighted angle distance measure for classification. The experimental results on the NIST/USF “Gait Challenge” data-sets show that the proposed solution has successfully improved the gait recognition performance and outperformed several state-of-the-art gait recognition algorithms.
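    The final classification stage can be illustrated with a minimal sketch of a nearest-neighbor classifier under a weighted angle distance. The weight vector, the two-dimensional toy gallery, and the exact form of the weighting are assumptions for illustration; the paper's feature vectors are the boosted MPCA+LDA projections:

```python
# Hedged sketch of the last step of the pipeline: nearest-neighbor
# classification with a weighted angle distance between feature vectors.
import math

def weighted_angle_distance(x, y, w):
    """Angle between x and y under per-dimension weights w.
    The common form cos(theta) = sum(w*x*y) / (||x||_w * ||y||_w) is
    assumed; the paper's exact weighting may differ."""
    num = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    nx = math.sqrt(sum(wi * xi * xi for wi, xi in zip(w, x)))
    ny = math.sqrt(sum(wi * yi * yi for wi, yi in zip(w, y)))
    return math.acos(max(-1.0, min(1.0, num / (nx * ny))))

def nearest_neighbor(query, gallery, labels, w):
    d = [weighted_angle_distance(query, g, w) for g in gallery]
    return labels[d.index(min(d))]

# Toy gallery: one prototype feature vector per enrolled subject.
gallery = [[1.0, 0.1], [0.1, 1.0]]
labels = ["subject_A", "subject_B"]
w = [1.0, 1.0]
print(nearest_neighbor([0.9, 0.2], gallery, labels, w))  # "subject_A"
```

Because the distance depends only on direction, it is insensitive to the overall magnitude of a probe's feature vector, which is one motivation for angle-based matching in gait recognition.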

  2. Built spaces and features associated with user satisfaction in maternity waiting homes in Malawi.

    Science.gov (United States)

    McIntosh, Nathalie; Gruits, Patricia; Oppel, Eva; Shao, Amie

    2018-07-01

    To assess satisfaction with maternity waiting home built spaces and features among women who are at risk of underutilizing maternity waiting homes (i.e. residential facilities that temporarily house near-term pregnant mothers close to healthcare facilities that provide obstetrical care). Specifically, we wanted to answer the questions: (1) Are built spaces and features associated with maternity waiting home user satisfaction? (2) Can built spaces and features designed to improve hygiene, comfort, privacy and function improve maternity waiting home user satisfaction? And (3) Which built spaces and features are most important for maternity waiting home user satisfaction? A cross-sectional study comparing satisfaction with standard and non-standard maternity waiting home designs. Between December 2016 and February 2017 we surveyed expectant mothers at two maternity waiting homes that differed in the design of their built spaces and features. We used bivariate analyses to assess whether built spaces and features were associated with satisfaction. We compared ratings of built spaces and features between the two maternity waiting homes using chi-square and t-tests to assess whether design features intended to improve hygiene, comfort, privacy and function were associated with higher satisfaction. We used exploratory robust regression analysis to examine the relationship between built spaces and features and maternity waiting home satisfaction. Two maternity waiting homes in Malawi, one that incorporated non-standardized design features to improve hygiene, comfort, privacy, and function (Kasungu maternity waiting home) and the other with a standard maternity waiting home design (Dowa maternity waiting home). 322 expectant mothers at risk of underutilizing maternity waiting homes (i.e. first-time mothers and those with no pregnancy risk factors) who had stayed at the Kasungu or Dowa maternity waiting homes.
There were significant differences in ratings of built spaces and features between the

  3. Feature Selection and Kernel Learning for Local Learning-Based Clustering.

    Science.gov (United States)

    Zeng, Hong; Cheung, Yiu-ming

    2011-08-01

    The performance of most clustering algorithms relies heavily on the representation of the data in the input space or in the Hilbert space of kernel methods. This paper aims to obtain an appropriate data representation through feature selection or kernel learning within the framework of Local Learning-Based Clustering (LLC) (Wu and Schölkopf 2006), which can outperform global learning-based methods when dealing with high-dimensional data lying on a manifold. Specifically, we associate a weight with each feature or kernel and incorporate it into the built-in regularization of the LLC algorithm to take into account the relevance of each feature or kernel for the clustering. The weights are then estimated iteratively during the clustering process. We show that the resulting weighted regularization, with an additional constraint on the weights, is equivalent to a known sparsity-promoting penalty; hence, the weights of irrelevant features or kernels can be shrunk toward zero. Extensive experiments show the efficacy of the proposed methods on benchmark data sets.
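    The shrinkage behavior described above can be illustrated schematically: each feature carries a nonnegative weight, and a sparsity-promoting soft-threshold drives the weights of irrelevant features to exactly zero. The relevance scores and threshold rule below are illustrative stand-ins, not the LLC paper's actual update equations:

```python
# Schematic of the weighting idea in the abstract: soft-threshold the
# per-feature relevance scores so irrelevant features get weight exactly
# zero, then renormalize the surviving weights to sum to one.
# Scores and threshold are illustrative assumptions.
def soft_threshold_weights(relevance, threshold):
    shrunk = [max(0.0, r - threshold) for r in relevance]
    total = sum(shrunk)
    if total == 0.0:
        raise ValueError("threshold removed every feature")
    return [s / total for s in shrunk]

# Features 3 and 4 are noise: their relevance falls below the threshold.
relevance = [0.9, 0.7, 0.8, 0.05, 0.1]
weights = soft_threshold_weights(relevance, threshold=0.2)
```

The hard zeros are the point: unlike plain renormalization, the soft-threshold performs feature selection, not just reweighting.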

  4. Quantum vacuum energy in two dimensional space-times

    International Nuclear Information System (INIS)

    Davies, P.C.W.; Fulling, S.A.

    1977-01-01

    The paper presents in detail the renormalization theory of the energy-momentum tensor of a two dimensional massless scalar field which has been used elsewhere to study the local physics in a model of black hole evaporation. The treatment is generalized to include the Casimir effect occurring in spatially finite models. The essence of the method is evaluation of the field products in the tensor as functions of two points, followed by covariant subtraction of the discontinuous terms arising as the points coalesce. In two dimensional massless theories, conformal transformations permit exact calculations to be performed. The results are applied here to some special cases, primarily space-times of constant curvature, with emphasis on the existence of distinct 'vacuum' states associated naturally with different conformal coordinate systems. The relevance of the work to the general problems of defining observables and of classifying and interpreting states in curved-space quantum field theory is discussed. (author)

  5. Quantum vacuum energy in two dimensional space-times

    Energy Technology Data Exchange (ETDEWEB)

    Davies, P C.W.; Fulling, S A [King' s Coll., London (UK). Dept. of Mathematics

    1977-04-21

    The paper presents in detail the renormalization theory of the energy-momentum tensor of a two dimensional massless scalar field which has been used elsewhere to study the local physics in a model of black hole evaporation. The treatment is generalized to include the Casimir effect occurring in spatially finite models. The essence of the method is evaluation of the field products in the tensor as functions of two points, followed by covariant subtraction of the discontinuous terms arising as the points coalesce. In two dimensional massless theories, conformal transformations permit exact calculations to be performed. The results are applied here to some special cases, primarily space-times of constant curvature, with emphasis on the existence of distinct 'vacuum' states associated naturally with different conformal coordinate systems. The relevance of the work to the general problems of defining observables and of classifying and interpreting states in curved-space quantum field theory is discussed.

  6. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework for the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
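    The decomposition at the heart of the method can be illustrated on a grid: a two-parameter response splits into a mean, two univariate terms, and an interaction remainder. The test function is an assumption for illustration; for an additive function the interaction term vanishes identically:

```python
# Sketch of the functional ANOVA idea behind the method: decompose a
# two-parameter response on a tensor grid into a mean f0, univariate
# terms f1(x), f2(y), and an interaction remainder f12(x, y).
def anova_2d(f, xs, ys):
    grid = [[f(x, y) for y in ys] for x in xs]
    nx, ny = len(xs), len(ys)
    f0 = sum(sum(row) for row in grid) / (nx * ny)          # grand mean
    f1 = [sum(grid[i]) / ny - f0 for i in range(nx)]        # x main effect
    f2 = [sum(grid[i][j] for i in range(nx)) / nx - f0
          for j in range(ny)]                               # y main effect
    f12 = [[grid[i][j] - f0 - f1[i] - f2[j] for j in range(ny)]
           for i in range(nx)]                              # interaction
    return f0, f1, f2, f12

xs = [0.0, 0.5, 1.0]
ys = [0.0, 0.5, 1.0]
# Additive test function: the interaction term should vanish.
f0, f1, f2, f12 = anova_2d(lambda x, y: 1.0 + 2.0 * x + 3.0 * y, xs, ys)
```

When the interaction terms are negligible, as here, the model is a union of low-dimensional pieces, which is exactly what makes ANOVA-based collocation cheap in high-dimensional parameter spaces.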

  7. Space Station services and design features for users

    Science.gov (United States)

    Kurzhals, Peter R.; Mckinney, Royce L.

    1987-01-01

    The operational design features and services planned for the NASA Space Station will furnish, in addition to novel opportunities and facilities, lower costs through interface standardization and automation and faster access by means of computer-aided integration and control processes. By furnishing a basis for large-scale space exploitation, the Space Station will possess industrial production and operational services capabilities that may be used by the private sector for commercial ventures; it could also ultimately support lunar and planetary exploration spacecraft assembly and launch facilities.

  8. Three-Dimensional Messages for Interstellar Communication

    Science.gov (United States)

    Vakoch, Douglas A.

    One of the challenges facing independently evolved civilizations separated by interstellar distances is to communicate information unique to one civilization. One commonly proposed solution is to begin with two-dimensional pictorial representations of mathematical concepts and physical objects, in the hope that this will provide a foundation for overcoming linguistic barriers. However, significant aspects of such representations are highly conventional, and may not be readily intelligible to a civilization with different conventions. The process of teaching conventions of representation may be facilitated by the use of three-dimensional representations redundantly encoded in multiple formats (e.g., as both vectors and as rasters). After having illustrated specific conventions for representing mathematical objects in a three-dimensional space, this method can be used to describe a physical environment shared by transmitter and receiver: a three-dimensional space defined by the transmitter-receiver axis, and containing stars within that space. This method can be extended to show three-dimensional representations varying over time. Having clarified conventions for representing objects potentially familiar to both sender and receiver, novel objects can subsequently be depicted. This is illustrated through sequences showing interactions between human beings, which provide information about human behavior and personality. Extensions of this method may allow the communication of such culture-specific features as aesthetic judgments and religious beliefs. Limitations of this approach will be noted, with specific reference to ETI who are not primarily visual.

  9. High-Resolution Reciprocal Space Mapping for Characterizing Deformation Structures

    DEFF Research Database (Denmark)

    Pantleon, Wolfgang; Wejdemann, Christian; Jakobsen, Bo

    2014-01-01

    With high-angular resolution three-dimensional X-ray diffraction (3DXRD), quantitative information is gained about dislocation structures in individual grains in the bulk of a macroscopic specimen by acquiring reciprocal space maps. In high-resolution 3D reciprocal space maps of tensile-deformed copper, individual, almost dislocation-free subgrains are identified from high-intensity peaks and distinguished by their unique combination of orientation and elastic strain; dislocation walls manifest themselves as a smooth cloud of lower intensity. The elastic strain shows only minor variations within the subgrains. The subgrain dynamics is followed in situ during varying loading conditions by reciprocal space mapping: during uninterrupted tensile deformation, formation of subgrains is observed concurrently with broadening of Bragg reflections shortly after the onset of plastic deformation. When the traction is terminated, stress

  10. Two-dimensional echocardiographic features of right ventricular infarction

    International Nuclear Information System (INIS)

    D'Arcy, B.; Nanda, N.C.

    1982-01-01

    Real-time, two-dimensional echocardiographic studies were performed in 10 patients with acute myocardial infarction who had clinical features suggestive of right ventricular involvement. All patients showed right ventricular wall motion abnormalities. In the four-chamber view, seven patients showed akinesis of the entire right ventricular diaphragmatic wall and three showed akinesis of segments of the diaphragmatic wall. Segmental dyskinetic areas involving the right ventricular free wall were identified in four patients. One patient showed a large right ventricular apical aneurysm. Other echocardiographic features included enlargement of the right ventricle in eight cases, paradoxical ventricular septal motion in seven cases, tricuspid incompetence in eight cases, dilation of the stomach in four cases and localized pericardial effusion in two cases. Right ventricular infarction was confirmed by radionuclide methods in seven patients, at surgery in one patient and at autopsy in two patients

  11. How the flip target behaves in four-dimensional space

    International Nuclear Information System (INIS)

    Antillon, A.; Kats, J.

    1985-01-01

    We use available coupling theory to understand how a flip target in a 4-dimensional phase space reduces a Gaussian beam of particles. Experimental evidence at the AGS can be qualitatively explained by this theory.

  12. Quantum interest in (3+1)-dimensional Minkowski space

    International Nuclear Information System (INIS)

    Abreu, Gabriel; Visser, Matt

    2009-01-01

    The so-called 'quantum inequalities', and the 'quantum interest conjecture', use quantum field theory to impose significant restrictions on the temporal distribution of the energy density measured by a timelike observer, potentially preventing the existence of exotic phenomena such as 'Alcubierre warp drives' or 'traversable wormholes'. Both the quantum inequalities and the quantum interest conjecture can be reduced to statements concerning the existence or nonexistence of bound states for a certain one-dimensional quantum mechanical pseudo-Hamiltonian. Using this approach, we shall provide a simple variational proof of one version of the quantum interest conjecture in (3+1)-dimensional Minkowski space.

  13. Normed kernel function-based fuzzy possibilistic C-means (NKFPCM) algorithm for high-dimensional breast cancer database classification with feature selection is based on Laplacian Score

    Science.gov (United States)

    Lestari, A. W.; Rustam, Z.

    2017-07-01

    In the last decade, breast cancer has become a focus of world attention, as this disease is one of the primary leading causes of death for women. It is therefore necessary to have the correct precautions and treatment. In previous studies, the Fuzzy Kernel K-Medoid algorithm has been used for multi-class data. This paper proposes an algorithm to classify the high-dimensional data of breast cancer using Fuzzy Possibilistic C-Means (FPCM) and a new method based on clustering analysis using Normed Kernel Function-Based Fuzzy Possibilistic C-Means (NKFPCM). The objective of this paper is to obtain the best accuracy in classification of breast cancer data. In order to improve the accuracy of the two methods, candidate features are evaluated using feature selection based on the Laplacian Score. The results compare the accuracy and running time of FPCM and NKFPCM with and without feature selection.
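    The Laplacian Score used for the feature selection step can be sketched directly (in He, Cai and Niyogi's formulation, features that vary smoothly over the sample similarity graph receive a low score and are preferred). The toy similarity matrix below is an assumption; the paper builds the graph from the breast cancer data itself:

```python
# Hedged sketch of the Laplacian Score for one feature: with similarity
# matrix S, degrees d and D-weighted centered feature ft, the score is
# (ft' L ft) / (ft' D ft), using ft' L ft = 1/2 * sum S_ij (ft_i-ft_j)^2.
def laplacian_score(feature, S):
    n = len(S)
    d = [sum(S[i]) for i in range(n)]                 # node degrees
    mean = sum(f * di for f, di in zip(feature, d)) / sum(d)
    ft = [f - mean for f in feature]                  # D-weighted centering
    num = sum(S[i][j] * (ft[i] - ft[j]) ** 2
              for i in range(n) for j in range(n)) / 2.0   # ft' L ft
    den = sum(di * fi * fi for di, fi in zip(d, ft))       # ft' D ft
    return num / den

# Two tight sample clusters: samples 0-1 and 2-3.
S = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]
smooth = [0.0, 0.1, 1.0, 1.1]   # varies with the clusters -> low score
noisy = [0.0, 1.0, 0.0, 1.0]    # ignores the clusters -> high score
```

Ranking features by ascending score and keeping the lowest-scoring ones is the selection rule the abstract applies before clustering.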

  14. Method of solving conformal models in D-dimensional space I

    International Nuclear Information System (INIS)

    Fradkin, E.S.; Palchik, M.Y.

    1996-01-01

    We study the Hilbert space of conformal field theory in D-dimensional space. The latter is shown to have a model-independent structure. The states of matter fields and gauge fields form orthogonal subspaces. The dynamical principle fixing the choice of model may be formulated either in each of these subspaces or in their direct sum. In the latter case, gauge interactions are necessarily present in the model. We formulate the conditions specifying the class of models in which gauge interactions are neglected. The anomalous Ward identities are derived. Different values of the anomalous parameters (D-dimensional analogs of a central charge, including operator ones) correspond to different models. The structure of these models is analogous to that of 2-dimensional conformal theories. Each model is specified by a D-dimensional analog of a null vector. The exact solutions of the simplest models of this type are examined. It is shown that these models are equivalent to Lagrangian models of scalar fields with a triple interaction. The values of the dimensions of such fields are calculated, and closed sets of differential equations for the higher Green functions are derived. Copyright 1996 Academic Press, Inc.

  15. Hyper dimensional phase-space solver and its application to laser-matter

    Energy Technology Data Exchange (ETDEWEB)

    Kondoh, Yoshiaki; Nakamura, Takashi; Yabe, Takashi [Department of Energy Sciences, Tokyo Institute of Technology, Yokohama, Kanagawa (Japan)

    2000-03-01

    A new numerical scheme for solving the hyper-dimensional Vlasov-Poisson equation in phase space is described. At each time step, the distribution function and its first derivatives are advected in phase space by the Cubic Interpolated Propagation (CIP) scheme. Although a cell within grid points is interpolated by a cubic polynomial, no matrix solutions are required. The scheme guarantees the exact conservation of the mass. The numerical results show good agreement with the theory. Even if we reduce the number of grid points in the v-direction, the scheme still gives stable, accurate and reasonable results with memory storage comparable to particle simulations. Owing to this fact, the scheme has been successfully generalized in a straightforward way to deal with six-dimensional, or full-dimensional, problems. (author)
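    The advection step of the scheme can be sketched in one dimension: each grid point traces back along the characteristic, and a cubic Hermite polynomial built from the function and its derivative on the upwind cell supplies the new values of both. Periodic boundaries, constant positive velocity, and CFL < 1 are simplifying assumptions relative to the full Vlasov-Poisson solver:

```python
# Hedged 1D sketch of the CIP (Cubic Interpolated Propagation) idea:
# advect both the profile f and its derivative g = df/dx by evaluating a
# cubic Hermite polynomial on the upwind cell at the departure point.
import math

def cip_step(f, g, u, dt, dx):
    n = len(f)
    xi = -u * dt                      # displacement to the departure point
    nf, ng = [0.0] * n, [0.0] * n
    for i in range(n):
        iup = (i - 1) % n             # upwind neighbor for u > 0 (periodic)
        D = -dx                       # signed cell width toward upwind node
        a = (g[i] + g[iup]) / D**2 + 2.0 * (f[i] - f[iup]) / D**3
        b = 3.0 * (f[iup] - f[i]) / D**2 - (2.0 * g[i] + g[iup]) / D
        nf[i] = ((a * xi + b) * xi + g[i]) * xi + f[i]     # cubic value
        ng[i] = (3.0 * a * xi + 2.0 * b) * xi + g[i]       # cubic slope
    return nf, ng

# Advect a sine wave a quarter period (16 of 64 cells) at CFL = 0.5;
# it should match the shifted exact solution up to interpolation error.
n, dx, u, dt = 64, 1.0 / 64, 1.0, 0.5 / 64
f = [math.sin(2 * math.pi * i * dx) for i in range(n)]
g = [2 * math.pi * math.cos(2 * math.pi * i * dx) for i in range(n)]
for _ in range(32):
    f, g = cip_step(f, g, u, dt, dx)
```

Carrying the derivative along with the value is what lets CIP stay accurate on coarse grids without solving any linear systems, the property the abstract emphasizes.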

  16. Hyper dimensional phase-space solver and its application to laser-matter

    International Nuclear Information System (INIS)

    Kondoh, Yoshiaki; Nakamura, Takashi; Yabe, Takashi

    2000-01-01

    A new numerical scheme for solving the hyper-dimensional Vlasov-Poisson equation in phase space is described. At each time step, the distribution function and its first derivatives are advected in phase space by the Cubic Interpolated Propagation (CIP) scheme. Although a cell within grid points is interpolated by a cubic polynomial, no matrix solutions are required. The scheme guarantees the exact conservation of the mass. The numerical results show good agreement with the theory. Even if we reduce the number of grid points in the v-direction, the scheme still gives stable, accurate and reasonable results with memory storage comparable to particle simulations. Owing to this fact, the scheme has been successfully generalized in a straightforward way to deal with six-dimensional, or full-dimensional, problems. (author)

  17. Elasticity of fractal materials using the continuum model with non-integer dimensional space

    Science.gov (United States)

    Tarasov, Vasily E.

    2015-01-01

    Using a generalization of vector calculus for space with non-integer dimension, we consider elastic properties of fractal materials. Fractal materials are described by continuum models with non-integer dimensional space. A generalization of elasticity equations for non-integer dimensional space, and its solutions for the equilibrium case of fractal materials are suggested. Elasticity problems for fractal hollow ball and cylindrical fractal elastic pipe with inside and outside pressures, for rotating cylindrical fractal pipe, for gradient elasticity and thermoelasticity of fractal materials are solved.

  18. Possibilities of identifying cyber attack in noisy space of n-dimensional abstract system

    Energy Technology Data Exchange (ETDEWEB)

    Jašek, Roman; Dvořák, Jiří; Janková, Martina; Sedláček, Michal [Tomas Bata University in Zlin Nad Stranemi 4511, 760 05 Zlin, Czech republic jasek@fai.utb.cz, dvorakj@aconte.cz, martina.jankova@email.cz, michal.sedlacek@email.cz (Czech Republic)

    2016-06-08

    This article briefly mentions selected options of the current concept for identifying cyber attacks from the perspective of the new cyberspace of a real system. In this cyberspace, an n-dimensional abstract system is defined, containing elements of the spatial arrangement of partial system elements such as the micro-environment of cyber systems surrounded by other suitably arranged corresponding noise space. This space is gradually supplemented by a new image of dynamic processes in a discrete environment, corresponding again to an n-dimensional expression of time space defining existence and also the prediction of expected cyber attacks in the noise space. Noises are seen here as useful and necessary for modern information and communication technologies (e.g. in processes of applied cryptography in ICT), and then as so-called useless noises designed for initial (necessary) filtering of this highly aggressive environment and of the expected offensive background in future cyber war (e.g. the destruction of unmanned means by an electromagnetic pulse, or the destruction of new safety barriers created on principles of the electrostatic field or on other principles of modern physics, etc.). The key to these new options is the expression of abstract systems based on models of the microelements of cyber systems and their hierarchical concept within the structure of the n-dimensional system in the given cyberspace. The aim of this article is to highlight a possible systemic expression of the cyberspace of an abstract system and possible identification in the time-spatial expression of the real environment (on microelements of cyber systems and their surroundings, with noise characteristics and a time dimension in the dynamics of the microelements' own time and the external time defined by the real environment). The article was based on a partial task of faculty-specific research.

  19. Possibilities of identifying cyber attack in noisy space of n-dimensional abstract system

    International Nuclear Information System (INIS)

    Jašek, Roman; Dvořák, Jiří; Janková, Martina; Sedláček, Michal

    2016-01-01

    This article briefly mentions some selected options of the current concept for identifying cyber attacks from the perspective of the new cyberspace of a real system. In the cyberspace, an n-dimensional abstract system is defined, containing elements of the spatial arrangement of partial system elements such as the micro-environment of cyber systems surrounded by another suitably arranged corresponding noise space. This space is also gradually supplemented by a new image of dynamic processes in a discrete environment, corresponding again to an n-dimensional expression of the time space defining existence and also the prediction of expected cyber attacks in the noise space. Noises are seen here as useful and necessary for modern information and communication technologies (e.g. in processes of applied cryptography in ICT), and then there are the so-called useless noises designed for initial (necessary) filtering of this highly aggressive environment and, in the future, of an expectedly offensive background in cyber war (e.g. the destruction of unmanned means by an electromagnetic pulse, or the destruction of new safety barriers created on principles of the electrostatic field or on other principles of modern physics, etc.). The key to these new options is the expression of abstract systems based on models of microelements of cyber systems and their hierarchical concept in the structure of an n-dimensional system in the given cyberspace. The aim of this article is to highlight a possible systemic expression of the cyberspace of an abstract system and its possible identification in a time-spatial expression of the real environment (on microelements of cyber systems and their surroundings, with noise characteristics and a time dimension in the dynamics of the microelements' own time and the external time defined by the real environment). The article was based on a partial task of faculty-specific research.

  20. Possibilities of identifying cyber attack in noisy space of n-dimensional abstract system

    Science.gov (United States)

    Jašek, Roman; Dvořák, Jiří; Janková, Martina; Sedláček, Michal

    2016-06-01

    This article briefly mentions some selected options of the current concept for identifying cyber attacks from the perspective of the new cyberspace of a real system. In the cyberspace, an n-dimensional abstract system is defined, containing elements of the spatial arrangement of partial system elements such as the micro-environment of cyber systems surrounded by another suitably arranged corresponding noise space. This space is also gradually supplemented by a new image of dynamic processes in a discrete environment, corresponding again to an n-dimensional expression of the time space defining existence and also the prediction of expected cyber attacks in the noise space. Noises are seen here as useful and necessary for modern information and communication technologies (e.g. in processes of applied cryptography in ICT), and then there are the so-called useless noises designed for initial (necessary) filtering of this highly aggressive environment and, in the future, of an expectedly offensive background in cyber war (e.g. the destruction of unmanned means by an electromagnetic pulse, or the destruction of new safety barriers created on principles of the electrostatic field or on other principles of modern physics, etc.). The key to these new options is the expression of abstract systems based on models of microelements of cyber systems and their hierarchical concept in the structure of an n-dimensional system in the given cyberspace. The aim of this article is to highlight a possible systemic expression of the cyberspace of an abstract system and its possible identification in a time-spatial expression of the real environment (on microelements of cyber systems and their surroundings, with noise characteristics and a time dimension in the dynamics of the microelements' own time and the external time defined by the real environment). The article was based on a partial task of faculty-specific research.

  1. Predicting the bounds of large chaotic systems using low-dimensional manifolds.

    Directory of Open Access Journals (Sweden)

    Asger M Haugaard

    Full Text Available Predicting extrema of chaotic systems in high-dimensional phase space remains a challenge. Methods which give extrema that are valid in the long term have thus far been restricted to models of only a few variables. Here, a method is presented which treats extrema of chaotic systems as belonging to discretised manifolds of low dimension (low-D) embedded in high-dimensional (high-D) phase space. As a central feature, the method exploits that strange attractor dimension is generally much smaller than parent system phase space dimension. This is important, since the computational cost associated with discretised manifolds depends exponentially on their dimension. Thus, systems that would otherwise be associated with tremendous computational challenges can be tackled on a laptop. As a test, bounding manifolds are calculated for high-D modifications of the canonical Duffing system. Parameters can be set such that the bounding manifold displays harmonic behaviour even if the underlying system is chaotic. Thus, solving for one post-transient forcing cycle of the bounding manifold predicts the extrema of the underlying chaotic problem indefinitely.
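    As a toy illustration of the kind of system the abstract above refers to, the following sketch integrates a single forced Duffing oscillator with RK4 and reads off its post-transient extrema by brute force. This is not the paper's bounding-manifold method; all parameter values are assumptions chosen to give a bounded double-well attractor.

```python
import numpy as np

# Forced Duffing oscillator x'' + d*x' + a*x + b*x**3 = g*cos(w*t).
# Parameter values are illustrative assumptions, not taken from the paper.
def duffing_rhs(t, s, d=0.3, a=-1.0, b=1.0, g=0.37, w=1.2):
    x, v = s
    return np.array([v, -d * v - a * x - b * x**3 + g * np.cos(w * t)])

def rk4_step(f, t, s, h):
    """One classical Runge-Kutta 4 step of size h."""
    k1 = f(t, s)
    k2 = f(t + h / 2, s + h * k1 / 2)
    k3 = f(t + h / 2, s + h * k2 / 2)
    k4 = f(t + h, s + h * k3)
    return s + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def post_transient_bounds(t_end=400.0, h=0.01, transient=200.0):
    """Brute-force extrema of x(t) after discarding the transient."""
    s, t = np.array([0.1, 0.0]), 0.0
    lo = hi = None
    while t < t_end:
        s = rk4_step(duffing_rhs, t, s, h)
        t += h
        if t > transient:
            lo = s[0] if lo is None else min(lo, s[0])
            hi = s[0] if hi is None else max(hi, s[0])
    return lo, hi

lo, hi = post_transient_bounds()
```

    The brute-force loop is exactly what the bounding-manifold approach avoids: it scales with integration time, whereas the paper's method solves one post-transient forcing cycle of a low-dimensional manifold.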

  2. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huttmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.; Bednarczyk, P.

    1992-01-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig
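    A minimal sketch of the multi-dimensional gating idea described above, run on synthetic 3-fold coincidence data. This is not the EUROGAM analysis code; the energies, gate windows, and event counts are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic 3-fold coincidence events (E1, E2, E3) in keV: a flat background
# plus a "cascade" that always emits 300, 500 and 700 keV together.
bg = rng.uniform(100.0, 1000.0, size=(10000, 3))
cascade = np.tile([300.0, 500.0, 700.0], (2000, 1)) + rng.normal(0.0, 2.0, (2000, 3))
triples = np.vstack([bg, cascade])

def gate(events, lo, hi, axis):
    """Boolean mask of events whose energy on one axis lies inside [lo, hi]."""
    return (events[:, axis] >= lo) & (events[:, axis] <= hi)

# Double gate on E1 ~ 300 keV and E2 ~ 500 keV, then histogram E3:
# the coincident 700 keV line stands out while background is suppressed.
mask = gate(triples, 295.0, 305.0, 0) & gate(triples, 495.0, 505.0, 1)
spectrum, edges = np.histogram(triples[mask, 2], bins=90, range=(100.0, 1000.0))
peak_energy = edges[np.argmax(spectrum)]
```

    Each added gate multiplies the background suppression, which is why higher-fold coincidences resolve small photopeaks that are invisible in the raw singles spectrum.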

  3. Multi-dimensional analysis of high resolution {gamma}-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S; Huttmeier, U J; France, G de; Haas, B; Romain, P; Theisen, Ch; Vivien, J P; Zen, J [Centre National de la Recherche Scientifique (CNRS), 67 - Strasbourg (France); Bednarczyk, P [Institute of Nuclear Physics, Cracow (Poland)

    1992-08-01

    High resolution {gamma}-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8{pi} spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig.

  4. Salient Region Detection via Feature Combination and Discriminative Classifier

    Directory of Open Access Journals (Sweden)

    Deming Kong

    2015-01-01

    Full Text Available We introduce a novel approach to detect salient regions of an image via feature combination and discriminative classifier. Our method, which is based on hierarchical image abstraction, uses the logistic regression approach to map the regional feature vector to a saliency score. Four saliency cues are used in our approach, including color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, which is an atomic feature of each segmented region in the image. By mapping the four-dimensional regional feature to a fifteen-dimensional feature vector, we can linearly separate the salient regions from the clustered background by finding an optimal linear combination of feature coefficients in the fifteen-dimensional feature space and finally fuse the saliency maps across multiple levels. Furthermore, we introduce the weighted salient image center into our saliency analysis task. Extensive experiments on two large benchmark datasets show that the proposed approach achieves the best performance over several state-of-the-art approaches.
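    The regional-feature-to-saliency mapping described above can be sketched with a plain logistic regression, fit here by gradient descent in numpy. The four toy cues and the synthetic training data are assumptions for illustration only, not the authors' actual features or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic per-region feature vectors: four toy cues standing in for global
# color contrast, center-boundary prior, color compactness, and objectness.
X = np.vstack([rng.normal(1.0, 0.5, (n, 4)),     # salient regions
               rng.normal(-1.0, 0.5, (n, 4))])   # background regions
y = np.concatenate([np.ones(n), np.zeros(n)])

# Fit logistic regression by plain gradient descent on the log-loss.
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def saliency(region_features):
    """Saliency score in (0, 1): predicted probability of the salient class."""
    return 1.0 / (1.0 + np.exp(-(region_features @ w + b)))
```

    In the paper the score of each region at each abstraction level is then fused across levels into the final saliency map; here a single level suffices to show the mapping step.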

  5. A covariant form of the Maxwell's equations in four-dimensional spaces with an arbitrary signature

    International Nuclear Information System (INIS)

    Lukac, I.

    1991-01-01

    The concept of duality in the four-dimensional spaces with the arbitrary constant metric is strictly mathematically formulated. A covariant model for covariant and contravariant bivectors in this space based on three four-dimensional vectors is proposed. 14 refs

  6. High-dimensional atom localization via spontaneously generated coherence in a microwave-driven atomic system.

    Science.gov (United States)

    Wang, Zhiping; Chen, Jinyu; Yu, Benli

    2017-02-20

    We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detecting probability and precision of 2D and 3D atom localization behaviors can be significantly improved via adjusting the system parameters, the phase, amplitude, and initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which provides some potential applications in high-dimensional atom nanolithography.

  7. Spinorial characterizations of surfaces into 3-dimensional pseudo-Riemannian space forms

    OpenAIRE

    Lawn , Marie-Amélie; Roth , Julien

    2011-01-01

    9 pages; We give a spinorial characterization of isometrically immersed surfaces of arbitrary signature into 3-dimensional pseudo-Riemannian space forms. For Lorentzian surfaces, this generalizes a recent work of the first author in $\\mathbb{R}^{2,1}$ to other Lorentzian space forms. We also characterize immersions of Riemannian surfaces in these spaces. From this we can deduce analogous results for timelike immersions of Lorentzian surfaces in space forms of corresponding signature, as well ...

  8. Three-dimensional studies on resorption spaces and developing osteons.

    Science.gov (United States)

    Tappen, N C

    1977-07-01

    Resorption spaces and their continuations as developing osteons were traced in serial cross sections from decalcified long bones of dogs, baboons and a man, and from a human rib. Processes of formation of osteons and transverse (Volkmann's) canals can be inferred from three-dimensional studies. Deposits of new osseous tissue begin to line the walls of the spaces soon after termination of resorption. The first deposits are osteoid, usually stained very darkly by the silver nitrate procedure utilized, but a lighter osteoid zone adjacent to the canals occurs frequently. Osteoid linings continue to be produced as lamellar bone forms around them; the large canals of immature osteons usually narrow very gradually. Frequently they terminate both proximally and distally as resorption spaces, indicating that osteons often advance in opposite directions as they develop. Osteoclasts of resorption spaces tunnel preferentially into highly mineralized bone, and usually do not use previously existing canals as templates for their advance. Osteons evidently originate by localized resorption of one side of the wall of an existing vascular channel in bone, with subsequent orientation of the resorption front along the axis of the shaft. Advancing resorption spaces also apparently stimulate the formation of numerous additional transverse canal connections to neighboring longitudinal canals. Serial tracing and silver nitrate differential staining combine to reveal many of the processes of bone remodeling at work, and facilitate quantitative treatment of the data. Further uses in studies of bone tissue and associated cells are recommended.

  9. A systematic exploration of the micro-blog feature space for teens stress detection.

    Science.gov (United States)

    Zhao, Liang; Li, Qi; Xue, Yuanyuan; Jia, Jia; Feng, Ling

    2016-01-01

    In the modern stressful society, growing teenagers experience severe stress from different aspects, from school to friends, from self-cognition to inter-personal relationships, which negatively influences their smooth and healthy development. Being timely and accurately aware of teenagers' psychological stress and providing effective measures to help immature teenagers cope with stress are highly valuable to both teenagers and human society. Previous work demonstrates the feasibility of sensing teenagers' stress from their tweeting contents and context on the open social media platform-micro-blog. However, a tweet is still too short for teens to express their stressful status in a comprehensive way. Considering the topic continuity from the tweeting content to the follow-up comments and responses between the teenager and his/her friends, we combine the content of comments and responses under the tweet to supplement the tweet content. Also, such friends' caring comments like "what happened?", "Don't worry!", "Cheer up!", etc. provide hints to the teenager's stressful status. Hence, in this paper, we propose to systematically explore the micro-blog feature space, comprised of four kinds of features [tweeting content features (FW), posting features (FP), interaction features (FI), and comment-response features (FC) between teenagers and friends] for teenagers' stress category and stress level detection. We extract and analyze these feature values and their impacts on teens' stress detection. We evaluate the framework through a real user study of 36 high school students aged 17. Different classifiers are employed to detect potential stress categories and corresponding stress levels. Experimental results show that all the features in the feature space positively affect stress detection, and linguistic negative emotion, proportion of negative sentences, friends' caring comments and teen's reply rate play more significant roles than the remaining features. 
Micro-blog platform provides
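    The four feature groups above can be sketched as a single per-thread feature vector. The word lists, thresholds, and feature definitions below are illustrative assumptions, not the paper's actual lexicon or model.

```python
# Illustrative stand-ins for a negative-emotion lexicon and caring phrases.
NEG_WORDS = {"sad", "tired", "stressed", "alone", "exam"}
CARING_PHRASES = ("what happened", "don't worry", "cheer up")

def thread_features(tweet, comments, replies):
    """Vector [FW, FP, FI, FC, reply_rate] for one tweet plus its thread."""
    words = tweet.lower().split()
    fw = sum(w in NEG_WORDS for w in words) / max(len(words), 1)  # content: negative-word ratio
    fp = 1.0 if len(words) < 8 else 0.0                           # posting: very short post
    fi = len(comments) / 10.0                                     # interaction: comment volume
    fc = sum(any(p in c.lower() for p in CARING_PHRASES)
             for c in comments) / max(len(comments), 1)           # friends' caring comments
    reply_rate = len(replies) / max(len(comments), 1)             # teen's reply rate
    return [fw, fp, fi, fc, reply_rate]

feats = thread_features(
    "so stressed and tired before the exam",
    ["what happened?", "cheer up!", "nice photo"],
    ["thanks"],
)
```

    A vector of this form would then feed any of the classifiers the study compares; the point here is only how comment and response context supplements the short tweet text.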

  10. Asymptotic analysis of fundamental solutions of Dirac operators on even dimensional Euclidean spaces

    International Nuclear Information System (INIS)

    Arai, A.

    1985-01-01

    We analyze the short distance asymptotic behavior of some quantities formed out of fundamental solutions of Dirac operators on even dimensional Euclidean spaces with finite dimensional matrix-valued potentials. (orig.)

  11. Non-Euclidean geometry and curvature two-dimensional spaces, volume 3

    CERN Document Server

    Cannon, James W

    2017-01-01

    This is the final volume of a three volume collection devoted to the geometry, topology, and curvature of 2-dimensional spaces. The collection provides a guided tour through a wide range of topics by one of the twentieth century's masters of geometric topology. The books are accessible to college and graduate students and provide perspective and insight to mathematicians at all levels who are interested in geometry and topology. Einstein showed how to interpret gravity as the dynamic response to the curvature of space-time. Bill Thurston showed us that non-Euclidean geometries and curvature are essential to the understanding of low-dimensional spaces. This third and final volume aims to give the reader a firm intuitive understanding of these concepts in dimension 2. The volume first demonstrates a number of the most important properties of non-Euclidean geometry by means of simple infinite graphs that approximate that geometry. This is followed by a long chapter taken from lectures the author gave at MSRI, wh...

  12. Three-dimensional space charge distribution measurement in electron beam irradiated PMMA

    International Nuclear Information System (INIS)

    Imaizumi, Yoichi; Suzuki, Ken; Tanaka, Yasuhiro; Takada, Tatsuo

    1996-01-01

    The localized space charge distribution in electron beam irradiated PMMA was investigated using the pulsed electroacoustic method. Using a conventional space charge measurement system, the distribution only in the depth direction (Z) can be measured, assuming the charges are distributed uniformly in the horizontal (X-Y) plane. However, it is difficult to measure the distribution of space charge accumulated in a small area. Therefore, we have developed a new system to measure the three-dimensional space charge distribution using the pulsed electroacoustic method. The system has a small electrode with a diameter of 1 mm and a motor-driven X-Y stage to move the sample. Using the data measured at many points, the three-dimensional distribution was obtained. To estimate the system performance, electron beam irradiated PMMA was used. The electron beam was irradiated from a transmission electron microscope (TEM). The depth of the injected electrons was controlled using various metal masks. The measurement results were compared with theoretically calculated values of the electron range. (author)

  13. Self-dual phase space for (3 +1 )-dimensional lattice Yang-Mills theory

    Science.gov (United States)

    Riello, Aldo

    2018-01-01

    I propose a self-dual deformation of the classical phase space of lattice Yang-Mills theory, in which both the electric and magnetic fluxes take value in the compact gauge Lie group. A local construction of the deformed phase space requires the machinery of "quasi-Hamiltonian spaces" by Alekseev et al., which is reviewed here. The result is a full-fledged finite-dimensional and gauge-invariant phase space, the self-duality properties of which are largely enhanced in (3 +1 ) spacetime dimensions. This enhancement is due to a correspondence with the moduli space of an auxiliary noncommutative flat connection living on a Riemann surface defined from the lattice itself, which in turn equips the duality between electric and magnetic fluxes with a neat geometrical interpretation in terms of a Heegaard splitting of the space manifold. Finally, I discuss the consequences of the proposed deformation on the quantization of the phase space, its quantum gravitational interpretation, as well as its relevance for the construction of (3 +1 )-dimensional topological field theories with defects.

  14. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection.

    Science.gov (United States)

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used for reducing the dimensionality of features to speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. Hence, first we design a similarity measure between the context information to take word co-occurrences and phrase chunks around the features into account. Then we introduce the similarity of context information into the importance measure of the features to substitute for document and term frequency. On this basis we propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.
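    A hedged sketch of the idea above (not the authors' exact measure): score each candidate feature by the cosine similarity of its context co-occurrence vectors in positive versus negative training articles, and keep the features whose contexts differ most between the two classes. All counts below are synthetic.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two context co-occurrence vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# context[word] = (co-occurrence vector in positive docs,
#                  co-occurrence vector in negative docs); synthetic counts.
context = {
    "interact": (np.array([5.0, 1.0, 0.0]), np.array([0.0, 1.0, 4.0])),
    "bind":     (np.array([4.0, 2.0, 1.0]), np.array([1.0, 1.0, 3.0])),
    "the":      (np.array([9.0, 8.0, 7.0]), np.array([9.0, 7.0, 8.0])),
}

# Low cross-class context similarity => discriminative feature: a stop word
# like "the" appears in the same contexts in both classes and scores ~1.
ranked = sorted(context, key=lambda w: cosine(*context[w]))
selected = ranked[:2]   # keep the two most discriminative features
```

    Ranking by context similarity rather than by raw document or term frequency is what lets the method account for how a feature is used, not merely how often it occurs.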

  15. A qualitative numerical study of high dimensional dynamical systems

    Science.gov (United States)

    Albers, David James

    Since Poincare, the father of modern mathematical dynamical systems, much effort has been exerted to achieve a qualitative understanding of the physical world via a qualitative understanding of the functions we use to model the physical world. In this thesis, we construct a numerical framework suitable for a qualitative, statistical study of dynamical systems using the space of artificial neural networks. We analyze the dynamics along intervals in parameter space, separating the set of neural networks into roughly four regions: the fixed point to the first bifurcation; the route to chaos; the chaotic region; and a transition region between chaos and finite-state neural networks. The study is primarily with respect to high-dimensional dynamical systems. We make the following general conclusions as the dimension of the dynamical system is increased: the probability of the first bifurcation being of type Neimark-Sacker is greater than ninety percent; the most probable route to chaos is via a cascade of bifurcations of high-period periodic orbits, quasi-periodic orbits, and 2-tori; there exists an interval of parameter space such that hyperbolicity is violated on a countable, Lebesgue measure 0, "increasingly dense" subset; chaos is much more likely to persist with respect to parameter perturbation in the chaotic region of parameter space as the dimension is increased; moreover, as the number of positive Lyapunov exponents is increased, the likelihood that any significant portion of these positive exponents can be perturbed away decreases with increasing dimension. The maximum Kaplan-Yorke dimension and the maximum number of positive Lyapunov exponents increases linearly with dimension. The probability of a dynamical system being chaotic increases exponentially with dimension. The results with respect to the first bifurcation and the route to chaos comment on previous results of Newhouse, Ruelle, Takens, Broer, Chenciner, and Iooss. 
Moreover, results regarding the high-dimensional
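    One of the quantities central to the study above, the largest Lyapunov exponent, can be estimated for a one-dimensional map by averaging log|f'(x)| along a trajectory. The logistic map below is a stand-in assumption for illustration (the thesis works with neural networks, not this map).

```python
import numpy as np

def largest_lyapunov(r=4.0, n=10000, x0=0.3):
    """Average log|f'(x)| along a trajectory of the logistic map f(x) = r*x*(1-x)."""
    x, acc = x0, 0.0
    for _ in range(n):
        acc += np.log(abs(r * (1.0 - 2.0 * x)))  # |f'(x)| at the current point
        x = r * x * (1.0 - x)                    # iterate the map
    return acc / n

lam = largest_lyapunov()   # for r = 4 the exact value is ln 2
```

    A positive estimate indicates chaos; counting how many such exponents are positive in a high-dimensional system (via the analogous matrix/QR procedure) is what the statistics in the abstract are built on.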

  16. CONFRONTING THREE-DIMENSIONAL TIME-DEPENDENT JET SIMULATIONS WITH HUBBLE SPACE TELESCOPE OBSERVATIONS

    International Nuclear Information System (INIS)

    Staff, Jan E.; Niebergal, Brian P.; Ouyed, Rachid; Pudritz, Ralph E.; Cai, Kai

    2010-01-01

    We perform state-of-the-art, three-dimensional, time-dependent simulations of magnetized disk winds, carried out to simulation scales of 60 AU, in order to confront optical Hubble Space Telescope observations of protostellar jets. We 'observe' the optical forbidden line emission produced by shocks within our simulated jets and compare these with actual observations. Our simulations reproduce the rich structure of time-varying jets, including jet rotation far from the source, an inner (up to 400 km s⁻¹) and outer (less than 100 km s⁻¹) component of the jet, and jet widths of up to 20 AU, in agreement with observed jets. These simulations, when compared with the data, are able to constrain disk wind models. In particular, models featuring a disk magnetic field with a modest radial spatial variation across the disk are favored.

  17. Space discretization in SN methods: Features, improvements and convergence patterns

    International Nuclear Information System (INIS)

    Coppa, G.G.M.; Lapenta, G.; Ravetto, P.

    1990-01-01

    A comparative analysis of the space discretization schemes currently used in S_N methods is performed, and special attention is devoted to direct integration techniques. Some improvements are proposed in one- and two-dimensional applications, which are based on suitable choices for the spatial variation of the collision source. A study of the convergence pattern is carried out for eigenvalue calculations within the space asymptotic approximation by means of both analytical and numerical investigations. (orig.)

  18. Geodesics on a hot plate: an example of a two-dimensional curved space

    International Nuclear Information System (INIS)

    Erkal, Cahit

    2006-01-01

    The equation of the geodesics on a hot plate with a radially symmetric temperature profile is derived using the Lagrangian approach. Numerical solutions are presented with an eye towards (a) teaching two-dimensional curved space and the metric used to determine the geodesics, (b) revealing some characteristics of two-dimensional curved spacetime, and (c) providing insight into understanding the curved space which emerges in teaching relativity. In order to provide a deeper insight, we also present the analytical solutions and show that they represent circles whose characteristics depend on the curvature of the space, the conductivity and the coefficient of thermal expansion.

  19. Geodesics on a hot plate: an example of a two-dimensional curved space

    Energy Technology Data Exchange (ETDEWEB)

    Erkal, Cahit [Department of Geology, Geography, and Physics, University of Tennessee, Martin, TN 38238 (United States)

    2006-07-01

    The equation of the geodesics on a hot plate with a radially symmetric temperature profile is derived using the Lagrangian approach. Numerical solutions are presented with an eye towards (a) teaching two-dimensional curved space and the metric used to determine the geodesics, (b) revealing some characteristics of two-dimensional curved spacetime, and (c) providing insight into understanding the curved space which emerges in teaching relativity. In order to provide a deeper insight, we also present the analytical solutions and show that they represent circles whose characteristics depend on the curvature of the space, the conductivity and the coefficient of thermal expansion.

  20. Absolute continuity of autophage measures on finite-dimensional vector spaces

    Energy Technology Data Exchange (ETDEWEB)

    Raja, C R.E. [Stat-Math Unit, Indian Statistical Institute, Bangalore (India); [Abdus Salam International Centre for Theoretical Physics, Trieste (Italy)]. E-mail: creraja@isibang.ac.in

    2002-06-01

    We consider a class of measures called autophage which was introduced and studied by Szekely for measures on the real line. We show that the autophage measures on finite-dimensional vector spaces over real or Q{sub p} are infinitely divisible without idempotent factors and are absolutely continuous with bounded continuous density. We also show that certain semistable measures on such vector spaces are absolutely continuous. (author)

  1. Intertwined Hamiltonians in two-dimensional curved spaces

    International Nuclear Information System (INIS)

    Aghababaei Samani, Keivan; Zarei, Mina

    2005-01-01

    The problem of intertwined Hamiltonians in two-dimensional curved spaces is investigated. Explicit results are obtained for the Euclidean plane, Minkowski plane, Poincaré half-plane (AdS₂), de Sitter plane (dS₂), sphere, and torus. It is shown that the intertwining operator is related to the Killing vector fields and the isometry group of the corresponding space. It is shown that the intertwined potentials are closely connected to the integral curves of the Killing vector fields. Two problems are considered as applications of the formalism presented in the paper. The first one is the problem of Hamiltonians with equispaced energy levels and the second one is the problem of Hamiltonians whose spectrum is like the spectrum of a free particle

  2. Numerical relativity for D dimensional space-times: Head-on collisions of black holes and gravitational wave extraction

    International Nuclear Information System (INIS)

    Witek, Helvi; Nerozzi, Andrea; Zilhao, Miguel; Herdeiro, Carlos; Gualtieri, Leonardo; Cardoso, Vitor; Sperhake, Ulrich

    2010-01-01

    Higher dimensional black holes play an exciting role in fundamental physics, such as high energy physics. In this paper, we use the formalism and numerical code reported in [1] to study the head-on collision of two black holes. For this purpose we provide a detailed treatment of gravitational wave extraction in generic D dimensional space-times, which uses the Kodama-Ishibashi formalism. For the first time, we present the results of numerical simulations of the head-on collision in five space-time dimensions, together with the relevant physical quantities. We show that the total radiated energy, when two black holes collide from rest at infinity, is approximately (0.089±0.006)% of the center of mass energy, slightly larger than the 0.055% obtained in the four-dimensional case, and that the ringdown signal at late time is in very good agreement with perturbative calculations.

  3. Introducing the Dimensional Continuous Space-Time Theory

    International Nuclear Information System (INIS)

    Martini, Luiz Cesar

    2013-01-01

    This article is an introduction to a new theory. The name of the theory is justified by the dimensional description of the continuous space-time of matter, energy and empty space, which gathers all the real things that exist in the universe. The theory presents itself as the consolidation of the classical, quantum and relativity theories. A basic equation that describes the formation of the Universe, relating time, space, matter, energy and movement, is deduced. The four fundamental physical constants (the speed of light in empty space, the gravitational constant, Boltzmann's constant and Planck's constant), as well as the masses of the fundamental particles, the electrical charges, the energies, the empty space and time, are also obtained from this basic equation. This theory provides a new vision of the Big Bang and of how the galaxies, stars, black holes and planets were formed. Based on it, it is possible to have a perfect comprehension of the wave-particle duality, which is an intrinsic characteristic of matter and energy. It will be possible to comprehend the formation of orbitals and to derive the equations of atomic orbits. It presents a singular comprehension of the relativity of mass, length and time. It is demonstrated that the continuous space-time is tridimensional, inelastic and temporally instantaneous, eliminating the possibility of spatial folds, slot space, wormholes, time travel and parallel universes. It is shown that many concepts, like dark matter and the strong forces that hypothetically keep the cohesion of the atomic nucleons, are without sense.

  4. A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation

    Directory of Open Access Journals (Sweden)

    Zhang Jing

    2016-01-01

    Full Text Available To assist physicians in quickly finding the required 3D model within a large collection of medical models, we propose a novel retrieval method, called DRFVT, which combines dimensionality reduction (DR) and feature vector transformation (FVT). The DR method reduces the dimensionality of the feature vector; only the top M low-frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector into a new feature vector to address the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval than other previously proposed methods.
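As a rough illustration of the DR step described above, the following sketch keeps only the top M low-frequency DFT coefficients of a real-valued feature vector. The vector length and M are invented for the demo; this is not the authors' implementation.

```python
import numpy as np

def dft_reduce(feature_vec, m):
    """Reduce dimensionality by keeping the M lowest-frequency DFT coefficients."""
    coeffs = np.fft.rfft(feature_vec)   # one-sided spectrum of a real-valued vector
    return coeffs[:m]                   # low frequencies carry the coarse shape

# Hypothetical 64-dimensional shape descriptor
rng = np.random.default_rng(0)
vec = rng.normal(size=64)
reduced = dft_reduce(vec, 8)
print(reduced.shape)  # → (8,)
```

The retained coefficients are complex, so the reduced representation carries both amplitude and phase of the low-frequency shape content.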

  5. Three-dimensional reciprocal space x-ray coherent scattering tomography of two-dimensional object.

    Science.gov (United States)

    Zhu, Zheyuan; Pang, Shuo

    2018-04-01

    X-ray coherent scattering tomography is a powerful tool for discriminating biological tissues and bio-compatible materials. Conventional x-ray scattering tomography frameworks can only resolve an isotropic scattering profile under the assumption that the material is amorphous or in powder form, which does not hold for many samples, especially biological samples with orientation-dependent structure. Previous tomography schemes based on x-ray coherent scattering either failed to preserve the scattering pattern from samples with preferred orientations or required an elaborate data acquisition scheme, which could limit their application in practical settings. Here, we demonstrate a simple imaging modality that preserves the anisotropic scattering signal in the three-dimensional reciprocal (momentum transfer) space of a two-dimensional sample layer. By incorporating detector movement along the direction of the x-ray beam, combined with a tomographic data acquisition scheme, we match the five dimensions of the measurements with the five dimensions (three in the momentum transfer domain and two in the spatial domain) of the object. We employed a collimated pencil beam from a table-top copper-anode x-ray tube, along with a panel detector, to investigate the feasibility of our method. We have demonstrated x-ray coherent scattering tomographic imaging at a spatial resolution of ~2 mm and a momentum transfer resolution of 0.01 Å^-1 for the rotation-invariant scattering direction. For any arbitrary, non-rotation-invariant direction, the same spatial and momentum transfer resolution can be achieved based on the spatial information from the rotation-invariant direction. The reconstructed scattering profile of each pixel from the experiment is consistent with the x-ray diffraction profile of each material. The three-dimensional scattering pattern recovered from the measurement reveals the partially ordered molecular structure of the Teflon wrap in our sample. We extend the applicability of conventional x-ray coherent scattering tomography to …

  6. High-dimensional quantum key distribution based on multicore fiber using silicon photonic integrated circuits

    DEFF Research Database (Denmark)

    Ding, Yunhong; Bacco, Davide; Dalgaard, Kjeld

    2017-01-01

    is intrinsically limited to 1 bit/photon. Here we propose and experimentally demonstrate, for the first time, a high-dimensional quantum key distribution protocol based on space division multiplexing in multicore fiber using silicon photonic integrated lightwave circuits. We successfully realized three mutually......-dimensional quantum states, and enables breaking the information efficiency limit of traditional quantum key distribution protocols. In addition, the silicon photonic circuits used in our work integrate variable optical attenuators, highly efficient multicore fiber couplers, and Mach-Zehnder interferometers, enabling...

  7. Numerical relativity for D dimensional axially symmetric space-times: Formalism and code tests

    International Nuclear Information System (INIS)

    Zilhao, Miguel; Herdeiro, Carlos; Witek, Helvi; Nerozzi, Andrea; Sperhake, Ulrich; Cardoso, Vitor; Gualtieri, Leonardo

    2010-01-01

    The numerical evolution of Einstein's field equations in a generic background has the potential to answer a variety of important questions in physics: from applications to the gauge-gravity duality, to modeling black hole production in TeV gravity scenarios, to analysis of the stability of exact solutions, and to tests of cosmic censorship. In order to investigate these questions, we extend numerical relativity to more general space-times than those investigated hitherto, by developing a framework to study the numerical evolution of D dimensional vacuum space-times with an SO(D-2) isometry group for D≥5, or SO(D-3) for D≥6. Performing a dimensional reduction on a (D-4) sphere, the D dimensional vacuum Einstein equations are rewritten as a 3+1 dimensional system with source terms, and presented in the Baumgarte, Shapiro, Shibata, and Nakamura formulation. This allows the use of existing 3+1 dimensional numerical codes with small adaptations. Brill-Lindquist initial data are constructed in D dimensions and a procedure to match them to our 3+1 dimensional evolution equations is given. We have implemented our framework by adapting the Lean code and perform a variety of simulations of nonspinning black hole space-times. Specifically, we present a modified moving puncture gauge, which facilitates long-term stable simulations in D=5. We further demonstrate the internal consistency of the code by studying convergence and comparing numerical versus analytic results in the case of geodesic slicing for D=5, 6.

  8. Nonrenormalizable quantum field models in four-dimensional space-time

    International Nuclear Information System (INIS)

    Raczka, R.

    1978-01-01

    The construction of no-cutoff Euclidean Green's functions for nonrenormalizable interactions L_I(φ) = λ ∫ dδ(ε) :exp(εφ): in four-dimensional space-time is carried out. It is shown that all axioms for the generating functional of the Euclidean Green's functions are satisfied, except perhaps SO(4) invariance.

  9. The dimensionality of stellar chemical space using spectra from the Apache Point Observatory Galactic Evolution Experiment

    Science.gov (United States)

    Price-Jones, Natalie; Bovy, Jo

    2018-03-01

    Chemical tagging of stars based on their similar compositions can offer new insights about the star formation and dynamical history of the Milky Way. We investigate the feasibility of identifying groups of stars in chemical space by forgoing the use of model-derived abundances in favour of direct analysis of spectra. This facilitates the propagation of measurement uncertainties and does not pre-suppose knowledge of which elements are important for distinguishing stars in chemical space. We use ~16 000 red giant and red clump H-band spectra from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) and perform polynomial fits to remove trends not due to abundance-ratio variations. Using expectation maximized principal component analysis, we find principal components with high signal in the wavelength regions most important for distinguishing between stars. Different subsamples of red giant and red clump stars are all consistent with needing about 10 principal components to accurately model the spectra above the level of the measurement uncertainties. The dimensionality of stellar chemical space that can be investigated in the H band is therefore ≲10. For APOGEE observations with typical signal-to-noise ratios of 100, the number of chemical space cells within which stars cannot be distinguished is approximately 10^(10±2) × (5 ± 2)^(n−10), with n the number of principal components. This high dimensionality and the fine-grained sampling of chemical space are a promising first step towards chemical tagging based on spectra alone.
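The dimensionality argument above can be illustrated on synthetic data. The sketch below uses mock spectra with invented sizes and noise level, and plain SVD-based PCA rather than the paper's expectation-maximized variant; it counts how many principal components rise above a crude noise ceiling.

```python
import numpy as np

rng = np.random.default_rng(42)
n_stars, n_pix, n_true = 500, 200, 10      # invented sizes, not the APOGEE sample

# Mock continuum-normalized spectra: n_true latent abundance factors plus noise
factors = rng.normal(size=(n_stars, n_true))
loadings = rng.normal(size=(n_true, n_pix))
noise_sigma = 0.01                         # roughly S/N ~ 100
spectra = 0.02 * factors @ loadings + rng.normal(scale=noise_sigma, size=(n_stars, n_pix))

# Classical PCA via SVD of the mean-subtracted data matrix
centered = spectra - spectra.mean(axis=0)
sing_vals = np.linalg.svd(centered, compute_uv=False)

# Count components whose singular value exceeds a crude noise ceiling
noise_ceiling = 2.0 * noise_sigma * (np.sqrt(n_stars) + np.sqrt(n_pix))
n_signif = int((sing_vals > noise_ceiling).sum())
print(n_signif)
```

With the latent rank set to 10, the recovered count matches the true number of abundance factors, mirroring the paper's "about 10 components" conclusion on a toy scale.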

  10. Sparse representation of multi parametric DCE-MRI features using K-SVD for classifying gene expression based breast cancer recurrence risk

    Science.gov (United States)

    Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina

    2014-03-01

    We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm on multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information into a few coefficients, but we also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUC of the K-SVD based (K=4, L=2), the ANOVA based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
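K-SVD alternates a sparse-coding step with a dictionary update. The sketch below shows only the sparse-coding half, using Orthogonal Matching Pursuit with the paper's settings K=4 and L=2 on synthetic data. The orthonormal toy dictionary is an assumption for the demo (it makes recovery exact), not part of K-SVD itself.

```python
import numpy as np

def omp(D, x, n_nonzero=2):
    """Orthogonal Matching Pursuit: greedily pick atoms, then refit by least squares."""
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))  # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

rng = np.random.default_rng(1)
# Hypothetical dictionary with K=4 atoms for 8-dimensional feature vectors;
# orthonormal atoms (via QR) make the demo recovery exact.
D, _ = np.linalg.qr(rng.normal(size=(8, 4)))
x = 1.5 * D[:, 0] - 0.7 * D[:, 3]       # a feature vector built from two atoms (L=2)
code = omp(D, x, n_nonzero=2)
print(np.flatnonzero(code))             # → [0 3]
```

In full K-SVD, the codes produced here would feed a rank-one update of each dictionary column before re-coding, iterating until convergence.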

  11. Independent screening for single-index hazard rate models with ultrahigh dimensional features

    DEFF Research Database (Denmark)

    Gorst-Rasmussen, Anders; Scheike, Thomas

    2013-01-01

    can be viewed as the natural survival equivalent of correlation screening. We state conditions under which the method admits the sure screening property within a class of single-index hazard rate models with ultrahigh dimensional features and describe the generally detrimental effect of censoring...

  12. Dimensionality analysis of multiparticle production at high energies

    International Nuclear Information System (INIS)

    Chilingaryan, A.A.

    1989-01-01

    An algorithm for the analysis of multiparticle final states is offered. From the Renyi dimensionalities calculated from experimental data, whether hadron distributions over rapidity intervals or particle distributions in an N-dimensional momentum space, we can judge the degree of correlation of the particles and identify the momentum-space projections and regions where singularities of the probability measure are observed. The method is tested in a series of calculations with samples of points from fractal objects and with samples obtained by means of different generators of pseudo- and quasi-random numbers. 27 refs.; 11 figs
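The Renyi dimensionalities at the heart of the method can be estimated by box counting. The sketch below (synthetic point samples, illustrative scales, and q=2) recovers the expected dimensions of a plane-filling and a line-like sample; none of the specifics come from the paper.

```python
import numpy as np

def renyi_dimension(points, q=2.0, eps_list=(0.05, 0.1, 0.2)):
    """Estimate the Renyi dimension D_q from box-occupancy probabilities at several scales."""
    logs_eps, logs_sum = [], []
    for eps in eps_list:
        boxes = np.floor(points / eps).astype(int)          # assign each point to a box
        _, counts = np.unique(boxes, axis=0, return_counts=True)
        p = counts / counts.sum()                           # occupancy probabilities
        logs_eps.append(np.log(eps))
        logs_sum.append(np.log(np.sum(p ** q)) / (q - 1.0))
    slope, _ = np.polyfit(logs_eps, logs_sum, 1)            # D_q is the scaling slope
    return slope

rng = np.random.default_rng(11)
plane = rng.uniform(size=(20000, 2))                        # points filling a 2-D region
line = np.stack([np.linspace(0, 1, 20000)] * 2, axis=1)     # points on a 1-D diagonal
print(round(renyi_dimension(plane), 1), round(renyi_dimension(line), 1))  # ~2.0 and ~1.0
```

A correlated multiparticle sample would sit between these extremes, which is what the method exploits.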

  13. Geometry of quantum dynamics in infinite-dimensional Hilbert space

    Science.gov (United States)

    Grabowski, Janusz; Kuś, Marek; Marmo, Giuseppe; Shulman, Tatiana

    2018-04-01

    We develop a geometric approach to quantum mechanics based on the concept of the Tulczyjew triple. Our approach is genuinely infinite-dimensional, i.e. we do not restrict considerations to finite-dimensional Hilbert spaces, contrary to many other works on the geometry of quantum mechanics, and include a Lagrangian formalism in which self-adjoint (Schrödinger) operators are obtained as Lagrangian submanifolds associated with the Lagrangian. As a byproduct we also obtain results concerning coadjoint orbits of the unitary group in infinite dimensions, embedding of pure states in the unitary group, and self-adjoint extensions of symmetric relations.

  14. Resonance-Based Time-Frequency Manifold for Feature Extraction of Ship-Radiated Noise

    Science.gov (United States)

    Yan, Jiaquan; Sun, Haixin; Chen, Hailan; Junejo, Naveed Ur Rehman; Cheng, En

    2018-01-01

    In this paper, a novel time-frequency signature using resonance-based sparse signal decomposition (RSSD), phase space reconstruction (PSR), time-frequency distribution (TFD) and manifold learning is proposed for feature extraction of ship-radiated noise, which is called resonance-based time-frequency manifold (RTFM). This is suitable for analyzing signals with oscillatory, non-stationary and non-linear characteristics in a situation of serious noise pollution. Unlike the traditional methods which are sensitive to noise and just consider one side of oscillatory, non-stationary and non-linear characteristics, the proposed RTFM can provide the intact feature signature of all these characteristics in the form of a time-frequency signature by the following steps: first, RSSD is employed on the raw signal to extract the high-oscillatory component and abandon the low-oscillatory component. Second, PSR is performed on the high-oscillatory component to map the one-dimensional signal to the high-dimensional phase space. Third, TFD is employed to reveal non-stationary information in the phase space. Finally, manifold learning is applied to the TFDs to fetch the intrinsic non-linear manifold. A proportional addition of the top two RTFMs is adopted to produce the improved RTFM signature. All of the case studies are validated on real audio recordings of ship-radiated noise. Case studies of ship-radiated noise on different datasets and various degrees of noise pollution manifest the effectiveness and robustness of the proposed method. PMID:29565288
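The PSR step above, mapping a one-dimensional signal into a higher-dimensional phase space, can be sketched as a standard time-delay embedding; the signal, embedding dimension and delay below are illustrative choices, not the paper's.

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Map a 1-D signal to points in a `dim`-dimensional phase space (Takens embedding)."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

# Hypothetical oscillatory component, standing in for the RSSD output
t = np.linspace(0, 4 * np.pi, 400)
signal = np.sin(t)
points = delay_embed(signal, dim=3, tau=5)
print(points.shape)  # → (390, 3)
```

Each row is a delayed coordinate vector (x(t), x(t+τ), x(t+2τ)); the subsequent TFD and manifold-learning stages operate on the trajectory these points trace out.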

  15. Influence of cusps and intersections on the Wilson loop in ν-dimensional space

    International Nuclear Information System (INIS)

    Bezerra, V.B.

    1984-01-01

    A discussion is given about the influence of cusps and intersections on the calculation of the Wilson loop in ν-dimensional space. In particular, for the two-dimensional case, it is shown that there are no divergences. (Author) [pt

  16. Dimensional Analysis with space discrimination applied to Fickian diffusion phenomena

    International Nuclear Information System (INIS)

    Diaz Sanchidrian, C.; Castans, M.

    1989-01-01

    Dimensional analysis with space discrimination is applied to Fickian diffusion phenomena in order to transform the partial differential equations into ordinary ones, and also to obtain Fick's second law in dimensionless form. (Author)

  17. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    Science.gov (United States)

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
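FLANN's structures (randomized k-d forests, priority search k-means trees) are more elaborate, but the pruning idea they build on can be illustrated with a minimal single k-d tree for exact nearest-neighbour search; all data and sizes below are synthetic.

```python
import numpy as np

def build_kdtree(points, depth=0):
    """Recursively split points on alternating axes (classic k-d tree)."""
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]
    points = points[np.argsort(points[:, axis])]
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nn_search(node, query, best=None):
    """Exact nearest neighbour with branch pruning on the splitting plane."""
    if node is None:
        return best
    d = np.linalg.norm(query - node["point"])
    if best is None or d < best[0]:
        best = (d, node["point"])
    diff = query[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nn_search(near, query, best)
    if abs(diff) < best[0]:          # the far half-space may still hold a closer point
        best = nn_search(far, query, best)
    return best

rng = np.random.default_rng(7)
pts = rng.normal(size=(200, 3))
tree = build_kdtree(pts)
q = np.zeros(3)
dist, nearest = nn_search(tree, q)
assert np.isclose(dist, np.linalg.norm(pts - q, axis=1).min())  # agrees with brute force
```

FLANN's randomized forest searches several such trees with rotated split choices and a shared priority queue, trading exactness for large speedups in high dimensions.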

  18. Optimal Feature Space Selection in Detecting Epileptic Seizure based on Recurrent Quantification Analysis and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Saleh LAshkari

    2016-06-01

    Full Text Available Selecting optimal features based on the nature of the phenomenon and on high discriminant ability is very important in data classification problems. Since Recurrence Quantification Analysis (RQA) does not require any assumption about the stationarity or the size of the signal and the noise, it may be useful for epileptic seizure detection. In this study, RQA was used to discriminate ictal EEG from normal EEG, with optimal features selected by a combination of a genetic algorithm and a Bayesian classifier. Recurrence plots of a hundred samples in each of the two categories were obtained with five distance norms: Euclidean, Maximum, Minimum, Normalized and Fixed Norm. In order to choose the optimal threshold for each norm, ten thresholds ε were generated, and the best feature space was then selected by the genetic algorithm in combination with the Bayesian classifier. The results show that the proposed method is capable of discriminating ictal EEG from normal EEG; for the Minimum norm and 0.1 < ε < 1, the accuracy was 100%. In addition, the sensitivity of the proposed framework to the ε and distance-norm parameters was low. The optimal feature presented in this study is Trans, which was selected in most feature spaces with high accuracy.

  19. The new Big Bang Theory according to dimensional continuous space-time theory

    International Nuclear Information System (INIS)

    Martini, Luiz Cesar

    2014-01-01

    This new view of the Big Bang Theory results from the Dimensional Continuous Space-Time Theory, whose introduction was presented in [1]. This theory is based on the concept that the primitive Universe before the Big Bang consisted only of elementary cells of potential energy disposed side by side. In the primitive Universe there were no particles, charges or movement, and the temperature of the Universe was absolute zero Kelvin. Time was always present, even in the primitive Universe; time is an integral part of empty space, it is the dynamic energy of space, and it is responsible for the movement of matter and energy inside the Universe. Empty space is totally stationary; the primitive Universe was infinite and totally occupied by elementary cells of potential energy. At its event, the Big Bang started the production of matter and charges, the liberation of energy, dynamic movement, an increase in temperature and the formation of galaxies according to a specific formation law. This article presents the theoretical formation of the galaxies starting from a basic equation of the Dimensional Continuous Space-Time Theory.

  20. The New Big Bang Theory according to Dimensional Continuous Space-Time Theory

    Science.gov (United States)

    Martini, Luiz Cesar

    2014-04-01

    This new view of the Big Bang Theory results from the Dimensional Continuous Space-Time Theory, whose introduction was presented in [1]. This theory is based on the concept that the primitive Universe before the Big Bang consisted only of elementary cells of potential energy disposed side by side. In the primitive Universe there were no particles, charges or movement, and the temperature of the Universe was absolute zero Kelvin. Time was always present, even in the primitive Universe; time is an integral part of empty space, it is the dynamic energy of space, and it is responsible for the movement of matter and energy inside the Universe. Empty space is totally stationary; the primitive Universe was infinite and totally occupied by elementary cells of potential energy. At its event, the Big Bang started the production of matter and charges, the liberation of energy, dynamic movement, an increase in temperature and the formation of galaxies according to a specific formation law. This article presents the theoretical formation of the galaxies starting from a basic equation of the Dimensional Continuous Space-Time Theory.

  1. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. Utilizing patients' health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of post-myocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator, and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally performs slightly better than both in terms of mean squared error, when a bias-based analysis is used.

  2. On construction of two-dimensional Riemannian manifolds embedded into enveloping Euclidean (pseudo-Euclidean) space

    International Nuclear Information System (INIS)

    Saveliev, M.V.

    1983-01-01

    In the framework of the algebraic approach, a construction of exactly integrable two-dimensional Riemannian manifolds embedded into an enveloping Euclidean (pseudo-Euclidean) space R_N of arbitrary dimension is presented. The construction is based on a reformulation of the Gauss, Peterson-Codazzi and Ricci equations in the form of a Lax-type representation in two-dimensional space. Here the Lax pair operators take values in the algebra SO(N)

  3. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan

    2017-03-27

    In this paper, our goal is to model functional and effective (directional) connectivity in a network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally-agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we will estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we will use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have gained some insights into learning in a rat engaged in a non-spatial memory task.

  4. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert; Ombao, Hernando

    2017-01-01

    In this paper, our goal is to model functional and effective (directional) connectivity in a network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally-agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we will estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we will use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have gained some insights into learning in a rat engaged in a non-spatial memory task.
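The two-stage LASSLE idea (a sparsity-inducing first step, then a least-squares refit on the selected support) can be sketched on a generic sparse regression. The ISTA solver, data sizes and λ below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ista_lasso(X, y, lam, iters=500):
    """Stage 1: LASSO via iterative soft-thresholding, to find a sparse support."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        g = b - X.T @ (X @ b - y) / L      # gradient step
        b = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return b

def lassle(X, y, lam):
    """Stage 2: refit by least squares restricted to the LASSO support (debiasing)."""
    support = np.flatnonzero(ista_lasso(X, y, lam) != 0)
    b = np.zeros(X.shape[1])
    if support.size:
        b[support], *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    return b

rng = np.random.default_rng(5)
n, p = 200, 50
X = rng.normal(size=(n, p))
true_b = np.zeros(p)
true_b[[2, 17]] = [1.0, -2.0]              # sparse coefficients, standing in for VAR entries
y = X @ true_b + 0.1 * rng.normal(size=n)
b_hat = lassle(X, y, lam=20.0)
print(np.flatnonzero(b_hat))
```

The refit in stage 2 removes the shrinkage bias that the LASSO step introduces on the active coefficients, which is exactly the motivation the abstract gives for the hybrid.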

  5. The features of space-planning and outfitting decisions

    Energy Technology Data Exchange (ETDEWEB)

    Voronov, N.A.; Bezrukov, A.K.

    1982-01-01

    The features of space-planning and outfitting solutions for a primary housing, which was assembled together with the No. 1 auxiliary housing, are examined. The primary factors behind the unusual design decision on the depth of the main housing structure (12 meters) are noted.

  6. A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality.

    Science.gov (United States)

    Wang, Xueyi

    2012-02-08

    The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the nearest cluster to the query object and uses the triangle inequality to reduce the distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction in distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree based k-NN algorithm for all datasets and performs better than a ball-tree based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high dimensional spaces.
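A minimal sketch of the two kMkNN stages on synthetic data (the cluster count, sizes and the plain Lloyd's k-means are illustrative assumptions): k-means preprocessing, then a search that visits clusters nearest-first and skips members via the triangle inequality d(q,x) ≥ |d(q,c) − d(x,c)|.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Buildup stage: plain Lloyd's k-means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def kmknn_query(X, centers, labels, q):
    """Searching stage: exact 1-NN, pruning members with the triangle inequality."""
    d_center = np.linalg.norm(centers - q, axis=1)
    best_d, best_i, dist_calcs = np.inf, -1, 0
    for c in np.argsort(d_center):                 # visit clusters nearest-first
        members = np.flatnonzero(labels == c)
        member_to_center = np.linalg.norm(X[members] - centers[c], axis=1)
        for i, r in zip(members, member_to_center):
            # d(q, x) >= |d(q, c) - d(x, c)|: skip members that cannot beat the best
            if abs(d_center[c] - r) >= best_d:
                continue
            d = np.linalg.norm(X[i] - q)
            dist_calcs += 1
            if d < best_d:
                best_d, best_i = d, i
    return best_i, best_d, dist_calcs

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 8))
centers, labels = kmeans(X, k=10)
q = rng.normal(size=8)
idx, dist, calcs = kmknn_query(X, centers, labels, q)
assert idx == int(np.argmin(np.linalg.norm(X - q, axis=1)))  # matches brute force
print(calcs, "distance calculations instead of", len(X))
```

The pruning bound is valid regardless of clustering quality, so the search stays exact; clustering only affects how many distance calculations are skipped.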

  7. Time-dependent gravitating solitons in five dimensional warped space-times

    CERN Document Server

    Giovannini, Massimo

    2007-01-01

    Time-dependent soliton solutions are explicitly derived in a five-dimensional theory endowed with one (warped) extra-dimension. Some of the obtained geometries, everywhere well defined and technically regular, smoothly interpolate between two five-dimensional anti-de Sitter space-times for fixed value of the conformal time coordinate. Time dependent solutions containing both topological and non-topological sectors are also obtained. Supplementary degrees of freedom can be also included and, in this case, the resulting multi-soliton solutions may describe time-dependent kink-antikink systems.

  8. Discriminative kernel feature extraction and learning for object recognition and detection

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning is critical for object recognition and detection. By embedding context cue of image attributes into the kernel descriptors, we propose a set of novel kernel descriptors called context kernel descriptors (CKD). The motivation of CKD is to use the spatial consistency...... even in high-dimensional space. In addition, the latent connection between Rényi quadratic entropy and the mapping data in kernel feature space further facilitates us to capture the geometric structure as well as the information about the underlying labels of the CKD using CSQMI. Thus the resulting...... codebook and reduced CKD are discriminative. We report superior performance of our algorithm for object recognition on benchmark datasets like Caltech-101 and CIFAR-10, as well as for detection on a challenging chicken feet dataset....

  9. High-Dimensional Adaptive Particle Swarm Optimization on Heterogeneous Systems

    International Nuclear Information System (INIS)

    Wachowiak, M P; Sarlo, B B; Foster, A E Lambe

    2014-01-01

    Much work has recently been reported in parallel GPU-based particle swarm optimization (PSO). Motivated by the encouraging results of these investigations, while also recognizing the limitations of GPU-based methods for big problems using a large amount of data, this paper explores the efficacy of employing other types of parallel hardware for PSO. Most commodity systems feature a variety of architectures whose high-performance capabilities can be exploited. In this paper, high-dimensional problems and those that employ a large amount of external data are explored within the context of heterogeneous systems. Large problems are decomposed into constituent components, and analyses are undertaken of which components would benefit from multi-core or GPU parallelism. The current study therefore provides another demonstration that "supercomputing on a budget" is possible when subtasks of large problems are run on hardware most suited to these tasks. Experimental results show that large speedups can be achieved on high dimensional, data-intensive problems. Cost functions must first be analysed for parallelization opportunities, and hardware assigned based on the particular task

  10. A modular CUDA-based framework for scale-space feature detection in video streams

    International Nuclear Information System (INIS)

    Kinsner, M; Capson, D; Spence, A

    2010-01-01

    Multi-scale image processing techniques enable extraction of features where the size of a feature is either unknown or changing, but the requirement to process image data at multiple scale levels imposes a substantial computational load. This paper describes the architecture and emerging results from the implementation of a GPGPU-accelerated scale-space feature detection framework for video processing. A discrete scale-space representation is generated for image frames within a video stream, and multi-scale feature detection metrics are applied to detect ridges and Gaussian blobs at video frame rates. A modular structure is adopted, in which common feature extraction tasks such as non-maximum suppression and local extrema search may be reused across a variety of feature detectors. Extraction of ridge and blob features is achieved at faster than 15 frames per second on video sequences from a machine vision system, utilizing an NVIDIA GTX 480 graphics card. By design, the framework is easily extended to additional feature classes through the inclusion of feature metrics to be applied to the scale-space representation, and using common post-processing modules to reduce the required CPU workload. The framework is scalable across multiple and more capable GPUs, and enables previously intractable image processing at video frame rates using commodity computational hardware.
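A scale-space blob detector of the kind described can be sketched in a few lines: a synthetic image, a difference-of-Gaussians approximation to the Laplacian, and illustrative scales (no GPU acceleration; none of the specifics are from the paper's framework).

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian filter built from 1-D convolutions (no external dependencies)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

def detect_blob(img, sigmas=(2, 3, 4, 6)):
    """Pick the scale and position maximizing a difference-of-Gaussians response."""
    best_val, best_pos, best_sigma = -np.inf, None, None
    for s in sigmas:
        # DoG with a fixed 1.6 ratio approximates the scale-normalized Laplacian
        dog = gaussian_blur(img, s) - gaussian_blur(img, 1.6 * s)
        i, j = np.unravel_index(np.argmax(dog), dog.shape)
        if dog[i, j] > best_val:
            best_val, best_pos, best_sigma = dog[i, j], (int(i), int(j)), s
    return best_pos, best_sigma

# Synthetic frame: one bright Gaussian blob of width 5 centred at (40, 20)
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 40) ** 2 + (xx - 20) ** 2) / (2 * 5.0 ** 2))
pos, scale = detect_blob(img)
print(pos, scale)
```

A GPU framework like the one described runs the per-scale filtering and extrema search as parallel kernels; the per-pixel arithmetic is identical.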

  11. Improving feature selection process resistance to failures caused by curse-of-dimensionality effects

    Czech Academy of Sciences Publication Activity Database

    Somol, Petr; Grim, Jiří; Novovičová, Jana; Pudil, P.

    2011-01-01

    Roč. 47, č. 3 (2011), s. 401-425 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593 Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : feature selection * curse of dimensionality * over-fitting * stability * machine learning * dimensionality reduction Subject RIV: IN - Informatics, Computer Science Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/RO/somol-0368741.pdf

  12. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...

  13. Space-dependent step features: Transient breakdown of slow-roll, homogeneity, and isotropy during inflation

    International Nuclear Information System (INIS)

    Lerner, Rose N.; McDonald, John

    2009-01-01

    A step feature in the inflaton potential can model a transient breakdown of slow-roll inflation. Here we generalize the step feature to include space-dependence, allowing it also to model a breakdown of homogeneity and isotropy. The space-dependent inflaton potential generates a classical curvature perturbation mode characterized by the wave number of the step inhomogeneity. For inhomogeneities small compared with the horizon at the step, space-dependence has a small effect on the curvature perturbation. Therefore, the smoothly oscillating quantum power spectrum predicted by the homogeneous step is robust with respect to subhorizon space-dependence. For inhomogeneities equal to or greater than the horizon at the step, the space-dependent classical mode can dominate, producing a curvature perturbation in which modes of wave number determined by the step inhomogeneity are superimposed on the oscillating power spectrum. Generation of a space-dependent step feature may therefore provide a mechanism to introduce primordial anisotropy into the curvature perturbation. Space-dependence also modifies the quantum fluctuations, in particular, via resonancelike features coming from mode coupling to amplified superhorizon modes. However, these effects are small relative to the classical modes.

  14. Development of the three dimensional flow model in the SPACE code

    International Nuclear Information System (INIS)

    Oh, Myung Taek; Park, Chan Eok; Kim, Shin Whan

    2014-01-01

    SPACE (Safety and Performance Analysis CodE) is a nuclear plant safety analysis code, which has been developed in the Republic of Korea through a joint research between the Korean nuclear industry and research institutes. The SPACE code has been developed with multi-dimensional capabilities as a requirement of the next generation safety code. It allows users to more accurately model the multi-dimensional flow behavior that can be exhibited in components such as the core, lower plenum, upper plenum and downcomer region. Based on generalized models, the code can model any configuration or type of fluid system. All the geometric quantities of mesh are described in terms of cell volume, centroid, face area, and face center, so that it can naturally represent not only the one dimensional (1D) or three dimensional (3D) Cartesian system, but also the cylindrical mesh system. It is possible to simulate large and complex domains by modelling the complex parts with a 3D approach and the rest of the system with a 1D approach. By 1D/3D co-simulation, more realistic conditions and component models can be obtained, providing a deeper understanding of complex systems, and it is expected to overcome the shortcomings of 1D system codes. (author)

  15. Spinorial Characterizations of Surfaces into 3-dimensional Pseudo-Riemannian Space Forms

    International Nuclear Information System (INIS)

    Lawn, Marie-Amélie; Roth, Julien

    2011-01-01

    We give a spinorial characterization of isometrically immersed surfaces of arbitrary signature into 3-dimensional pseudo-Riemannian space forms. This generalizes a recent work of the first author for spacelike immersed Lorentzian surfaces in ℝ^(2,1) to other Lorentzian space forms. We also characterize immersions of Riemannian surfaces in these spaces. From this we can deduce analogous results for timelike immersions of Lorentzian surfaces in space forms of corresponding signature, as well as for spacelike and timelike immersions of surfaces of signature (0, 2), hence achieving a complete spinorial description for this class of pseudo-Riemannian immersions.

  16. Three-Dimensional, Transgenic Cell Models to Quantify Space Genotoxic Effects

    Science.gov (United States)

    Gonda, S. R.; Sognier, M. A.; Wu, H.; Pingerelli, P. L.; Glickman, B. W.; Dawson, David L. (Technical Monitor)

    1999-01-01

    The space environment contains radiation and chemical agents known to be mutagenic and carcinogenic to humans. Additionally, microgravity is a complicating factor that may modify or synergize induced genotoxic effects. Most in vitro models fail to use human cells (making risk extrapolation to humans more difficult), overlook the dynamic effect of tissue intercellular interactions on genotoxic damage, and lack the sensitivity required to measure low-dose effects. Currently a need exists for a model test system that simulates cellular interactions present in tissue, and can be used to quantify genotoxic damage induced by low levels of radiation and chemicals, and extrapolate assessed risk to humans. A state-of-the-art, three-dimensional, multicellular tissue equivalent cell culture model will be presented. It consists of mammalian cells genetically engineered to contain multiple copies of defined target genes for genotoxic assessment. NASA-designed bioreactors were used to coculture mammalian cells into spheroids. The cells used were human mammary epithelial cells (H184135) and Stratagene's (Austin, Texas) Big Blue(TM) Rat 2 lambda fibroblasts. The fibroblasts were genetically engineered to contain a high-density target gene for mutagenesis (60 copies of lacI/lacZ per cell). Tissue equivalent spheroids were routinely produced by inoculation of 2 to 7 X 10(exp 5) fibroblasts with Cytodex 3 beads (150 micrometers in diameter) at a 20:1 cell:bead ratio into 50-ml HARV bioreactors (Synthecon, Inc.). Fibroblasts were cultured for 5 days, an equivalent number of epithelial cells added, and the fibroblast/epithelial cell coculture continued for 21 days. Three-dimensional spheroids with diameters ranging from 400 to 600 micrometers were obtained. Histological and immunohistochemical characterization revealed i) both cell types present in the spheroids, with fibroblasts located primarily in the center, surrounded by epithelial cells; ii) synthesis of extracellular matrix

  17. High dimension feature extraction based visualized SOM fault diagnosis method and its application in p-xylene oxidation process☆

    Institute of Scientific and Technical Information of China (English)

    Ying Tian; Wenli Du; Feng Qian

    2015-01-01

    Purified terephthalic acid (PTA) is an important chemical raw material. P-xylene (PX) is transformed to terephthalic acid (TA) through an oxidation process and TA is refined to produce PTA. The PX oxidation reaction is a complex process involving a three-phase reaction of gas, liquid and solid. To monitor the process and to improve the product quality, as well as to visualize the fault type clearly, a fault diagnosis method based on self-organizing map (SOM) and a high-dimensional feature extraction method, local tangent space alignment (LTSA), is proposed. In this method, LTSA can reduce the dimension and keep the topology information simultaneously, and SOM distinguishes various states on the output map. Monitoring results of the PX oxidation reaction process indicate that the LTSA–SOM can effectively detect and visualize the fault type.
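The SOM half of the LTSA–SOM scheme is straightforward to sketch in NumPy. This minimal implementation is illustrative only: the paper pairs it with LTSA for dimension reduction (omitted here; scikit-learn's `LocallyLinearEmbedding(method='ltsa')` is one available implementation), and all names and training parameters below are example choices.

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM: pull the best-matching unit (BMU) and its grid
    neighbourhood toward each sample, shrinking radius and rate over time."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                      axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        frac = t / iters
        lr = lr0 * (1 - frac)                      # decaying learning rate
        sigma = sigma0 * np.exp(-3 * frac) + 1e-9  # shrinking neighbourhood
        d = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        g = np.exp(-np.sum((coords - np.array(bmu))**2, axis=2)
                   / (2 * sigma**2))
        weights += lr * g[..., None] * (x - weights)
    return weights

def map_sample(weights, x):
    """Output-map coordinate of the best-matching unit for a sample."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)
```

Samples from different process states land in different regions of the output map, which is the visualization the paper exploits for fault typing.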

  18. Quantum theory of spinor field in four-dimensional Riemannian space-time

    International Nuclear Information System (INIS)

    Shavokhina, N.S.

    1996-01-01

    The review deals with the spinor field in the four-dimensional Riemannian space-time. The field obeys the Dirac-Fock-Ivanenko equation. Principles of quantization of the spinor field in the Riemannian space-time are formulated, which in the particular case of the plane space-time are equivalent to the canonical rules of quantization. The formulated principles are exemplified by the De Sitter space-time. The study of quantum field theory in the De Sitter space-time is interesting because it itself leads to a method of an invariant well for plane space-time. However, the study of the quantum spinor field theory in an arbitrary Riemannian space-time allows one to take into account the influence of the external gravitational field on the quantized spinor field. 60 refs

  19. Quantization of coset space σ-models coupled to two-dimensional gravity

    International Nuclear Information System (INIS)

    Korotkin, D.; Samtleben, H.

    1996-07-01

    The mathematical framework for an exact quantization of the two-dimensional coset space σ-models coupled to dilaton gravity, that arise from dimensional reduction of gravity and supergravity theories, is presented. The two-time Hamiltonian formulation is obtained, which describes the complete phase space of the model in the whole isomonodromic sector. The Dirac brackets arising from the coset constraints are calculated. Their quantization allows to relate exact solutions of the corresponding Wheeler-DeWitt equations to solutions of a modified (Coset) Knizhnik-Zamolodchikov system. On the classical level, a set of observables is identified, that is complete for essential sectors of the theory. Quantum counterparts of these observables and their algebraic structure are investigated. Their status in alternative quantization procedures is discussed, employing the link with Hamiltonian Chern-Simons theory. (orig.)

  20. Online Feature Selection for Classifying Emphysema in HRCT Images

    Directory of Open Access Journals (Sweden)

    M. Prasad

    2008-06-01

    Full Text Available Feature subset selection, applied as a pre-processing step to machine learning, is valuable in dimensionality reduction, eliminating irrelevant data and improving classifier performance. In the classic formulation of the feature selection problem, it is assumed that all the features are available at the beginning. However, in many real world problems, there are scenarios where not all features are present initially and must be integrated as they become available. In such scenarios, online feature selection provides an efficient way to sort through a large space of features. It is in this context that we introduce online feature selection for the classification of emphysema, a smoking-related disease that appears as low attenuation regions in High Resolution Computed Tomography (HRCT) images. The technique was successfully evaluated on 61 HRCT scans and compared with different online feature selection approaches, including hill climbing, best first search, grafting, and correlation-based feature selection. The results were also compared against "density mask", a standard approach used for emphysema detection in medical image analysis.
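The hill-climbing flavour of online feature selection mentioned above can be sketched as a greedy wrapper: features "arrive" one at a time and are kept only if they reduce cross-validated error. The nearest-centroid classifier used here is an illustrative stand-in, not the paper's evaluation setup, and all names are example choices.

```python
import numpy as np

def centroid_cv_error(X, y, feats, folds=5):
    """Cross-validated error of a nearest-centroid rule on a feature subset."""
    idx = np.arange(len(y))
    errs = []
    for f in range(folds):
        test = idx % folds == f
        train = ~test
        Xtr, Xte = X[train][:, feats], X[test][:, feats]
        ytr, yte = y[train], y[test]
        cents = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = [min(cents, key=lambda c: np.linalg.norm(x - cents[c]))
                for x in Xte]
        errs.append(np.mean(np.array(pred) != yte))
    return float(np.mean(errs))

def online_forward_select(X, y, max_feats=5):
    """Greedy hill climbing: keep an arriving feature only if it lowers CV error."""
    selected, err = [], 1.0
    for j in range(X.shape[1]):          # features become available one at a time
        trial = selected + [j]
        e = centroid_cv_error(X, y, trial)
        if e < err:
            selected, err = trial, e
        if len(selected) >= max_feats:
            break
    return selected, err
```

Best-first search and grafting differ in how aggressively they revisit earlier decisions; the greedy loop above never does, which is what makes it cheap enough to run online.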

  1. Research of features and structure of electoral space of Ukraine in 2014 with the use of synthetic approach

    Directory of Open Access Journals (Sweden)

    M. M. Shelemba

    2015-02-01

    Full Text Available The article substantiates the expediency of using an original synthetic model to study the features and structure of the electoral space of Ukraine in 2014. The methodological principles of the synthetic model are set out; it combines qualitative and quantitative methods of electoral-space research, among them factor and correlation analysis. The synthetic model (approach), built on the best available scientific approaches, takes into account the features and development trends of the electoral space of Ukraine. The features and structure of the electoral space of Ukraine in 2014 are analysed with the proposed model. Applying the author's synthetic model, factor and correlation analysis are used to explain support for political parties during election campaigns as a function of the most important factors and correlates. It was found that electoral choice depends most strongly on the expectations of the region. The article shows that, at the present stage, using the Human Development Index as the most significant social correlate in studies of Ukrainian election campaigns is reasonable and yields reliable results. It is also shown that a high level of correlation is maintained at a high level of party support, so the chosen social correlates remain informative across all variants of the expert research.

  2. Automatic feature extraction in large fusion databases by using deep learning approach

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Gonzalo, E-mail: gonzalo.farias@ucv.cl [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile); Dormido-Canto, Sebastián [Departamento de Informática y Automática, UNED, Madrid (Spain); Vega, Jesús; Rattá, Giuseppe [Asociación EURATOM/CIEMAT Para Fusión, CIEMAT, Madrid (Spain); Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile)

    2016-11-15

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously when selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in the deep learning approach have provided an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to get more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process to select appropriate techniques normally takes a long time. This work describes the use of recent advances in deep learning to find a good feature representation automatically. The implementation of a special neural network called sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results have shown that it is possible to get robust classifiers with a high success rate, in spite of the fact that the feature space is reduced to less than 0.02% of the original one.
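A sparse autoencoder of the kind described can be sketched in a few dozen lines of NumPy. This toy single-hidden-layer version with an L1 activity penalty and hand-written gradients is illustrative only; it does not reproduce the TJ-II setup, and every name and hyperparameter is an example choice.

```python
import numpy as np

def train_sparse_autoencoder(X, hidden=8, lr=0.1, l1=1e-3, epochs=500, seed=0):
    """Single-hidden-layer autoencoder with an L1 activity penalty.
    Encoder: tanh(X W1 + b1); decoder: linear. Full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    losses = []
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # hidden code (the learned features)
        R = H @ W2 + b2                   # reconstruction
        err = R - X
        losses.append(np.mean(err**2) + l1 * np.mean(np.abs(H)))
        # backpropagation
        dR = 2 * err / (n * d)
        dW2 = H.T @ dR; db2 = dR.sum(axis=0)
        dH = dR @ W2.T + l1 * np.sign(H) / H.size   # sparsity pushes codes to 0
        dZ = dH * (1 - H**2)
        dW1 = X.T @ dZ; db1 = dZ.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return (W1, b1), losses
```

The hidden activations `H` are the automatically learned features that would replace a hand-crafted representation before classification.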

  3. Automatic feature extraction in large fusion databases by using deep learning approach

    International Nuclear Information System (INIS)

    Farias, Gonzalo; Dormido-Canto, Sebastián; Vega, Jesús; Rattá, Giuseppe; Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín

    2016-01-01

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously when selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in the deep learning approach have provided an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to get more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process to select appropriate techniques normally takes a long time. This work describes the use of recent advances in deep learning to find a good feature representation automatically. The implementation of a special neural network called sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results have shown that it is possible to get robust classifiers with a high success rate, in spite of the fact that the feature space is reduced to less than 0.02% of the original one.

  4. Three-dimensional theory for interaction between atomic ensembles and free-space light

    International Nuclear Information System (INIS)

    Duan, L.-M.; Cirac, J.I.; Zoller, P.

    2002-01-01

    Atomic ensembles have been shown to be a promising candidate for implementations of quantum information processing by many recently discovered schemes. All these schemes are based on the interaction between optical beams and atomic ensembles. To describe these interactions, one assumes either a cavity-QED model or a one-dimensional light propagation model, which is still inadequate for a full prediction and understanding of most of the current experimental efforts that actually take place in three-dimensional free space. Here, we propose a perturbative theory to describe the three-dimensional effects in the interaction between atomic ensembles and free-space light with a level configuration important for several applications. The calculations reveal some significant effects that were not known before from the other approaches, such as the inherent mode-mismatching noise and the optimal mode-matching conditions. The three-dimensional theory confirms the collective enhancement of the signal-to-noise ratio, which is believed to be one of the main advantages of the ensemble-based quantum information processing schemes; however, it also shows that this enhancement needs to be understood in a more subtle way with an appropriate mode-matching method.

  5. Feature fusion using kernel joint approximate diagonalization of eigen-matrices for rolling bearing fault identification

    Science.gov (United States)

    Liu, Yongbin; He, Bing; Liu, Fang; Lu, Siliang; Zhao, Yilei

    2016-12-01

    Fault pattern identification is a crucial step for the intelligent fault diagnosis of real-time health conditions in monitoring a mechanical system. However, many challenges exist in extracting the effective feature from vibration signals for fault recognition. A new feature fusion method is proposed in this study to extract new features using kernel joint approximate diagonalization of eigen-matrices (KJADE). In the method, the input space that is composed of original features is mapped into a high-dimensional feature space by nonlinear mapping. Then, the new features can be estimated through the eigen-decomposition of the fourth-order cumulative kernel matrix obtained from the feature space. Therefore, the proposed method could be used to reduce data redundancy because it extracts the inherent pattern structure of different fault classes as it is nonlinear by nature. The integration evaluation factor of between-class and within-class scatters (SS) is employed to depict the clustering performance quantitatively, and the new feature subset extracted by the proposed method is fed into a multi-class support vector machine for fault pattern identification. Finally, the effectiveness of the proposed method is verified by experimental vibration signals with different bearing fault types and severities. Results of several cases show that the KJADE algorithm is efficient in feature fusion for bearing fault identification.
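KJADE itself diagonalizes fourth-order cumulant matrices built in the kernel feature space; only its shared first step, mapping the inputs through a kernel and eigen-decomposing the centred kernel matrix, is sketched here. This is a kernel-PCA-style simplification with illustrative names, and the RBF kernel is an assumed choice, not one the abstract specifies.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix of the RBF (Gaussian) kernel."""
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def kernel_features(X, n_components=2, gamma=1.0):
    """Centre the kernel matrix and eigen-decompose it (the kernel-PCA step
    that precedes the cumulant diagonalization in KJADE-like methods)."""
    K = rbf_kernel(X, gamma)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n   # centring matrix
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    # project onto the leading kernel eigenvectors -> fused nonlinear features
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 1e-12))
```

Because the mapping is nonlinear, classes that overlap in the original feature space can separate along the leading kernel components, which is the clustering behaviour the SS factor in the paper quantifies.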

  6. Gauge fields in nonlinear group realizations involving two-dimensional space-time symmetry

    International Nuclear Information System (INIS)

    Machacek, M.E.; McCliment, E.R.

    1975-01-01

    It is shown that gauge fields may be consistently introduced into a model Lagrangian previously considered by the authors. The model is suggested by the spontaneous breaking of a Lorentz-type group into a quasiphysical two-dimensional space-time and one internal degree of freedom, loosely associated with charge. The introduction of zero-mass gauge fields makes possible the absorption via the Higgs mechanism of the Goldstone fields that appear in the model despite the fact that the Goldstone fields do not transform as scalars. Specifically, gauge invariance of the Yang-Mills type requires the introduction of two sets of massless gauge fields. The transformation properties in two-dimensional space-time suggest that one set is analogous to a charge doublet that behaves like a second-rank tensor in real four-dimensional space time. The other set suggests a spin-one-like charge triplet. Via the Higgs mechanism, the first set absorbs the Goldstone fields and acquires mass. The second set remains massless. If massive gauge fields are introduced, the associated currents are not conserved and the Higgs mechanism is no longer fully operative. The Goldstone fields are not eliminated, but coupling between the Goldstone fields and the gauge fields does shift the mass of the antisymmetric second-rank-tensor gauge field components

  7. Execution spaces for simple higher dimensional automata

    DEFF Research Database (Denmark)

    Raussen, Martin

    2012-01-01

    Higher dimensional automata (HDA) are highly expressive models for concurrency in Computer Science, cf van Glabbeek (Theor Comput Sci 368(1–2): 168–194, 2006). For a topologist, they are attractive since they can be modeled as cubical complexes—with an inbuilt restriction for directions of allowa......

  8. High-dimensional single-cell cancer biology.

    Science.gov (United States)

    Irish, Jonathan M; Doxie, Deon B

    2014-01-01

    Cancer cells are distinguished from each other and from healthy cells by features that drive clonal evolution and therapy resistance. New advances in high-dimensional flow cytometry make it possible to systematically measure mechanisms of tumor initiation, progression, and therapy resistance on millions of cells from human tumors. Here we describe flow cytometry techniques that enable a "single-cell" view of cancer. High-dimensional techniques like mass cytometry enable multiplexed single-cell analysis of cell identity, clinical biomarkers, signaling network phospho-proteins, transcription factors, and functional readouts of proliferation, cell cycle status, and apoptosis. This capability pairs well with a signaling profiles approach that dissects mechanism by systematically perturbing and measuring many nodes in a signaling network. Single-cell approaches enable study of cellular heterogeneity of primary tissues and turn cell subsets into experimental controls or opportunities for new discovery. Rare populations of stem cells or therapy-resistant cancer cells can be identified and compared to other types of cells within the same sample. In the long term, these techniques will enable tracking of minimal residual disease (MRD) and disease progression. By better understanding biological systems that control development and cell-cell interactions in healthy and diseased contexts, we can learn to program cells to become therapeutic agents or target malignant signaling events to specifically kill cancer cells. Single-cell approaches that provide deep insight into cell signaling and fate decisions will be critical to optimizing the next generation of cancer treatments combining targeted approaches and immunotherapy.

  9. Training astronauts using three-dimensional visualisations of the International Space Station.

    Science.gov (United States)

    Rycroft, M; Houston, A; Barker, A; Dahlstron, E; Lewis, N; Maris, N; Nelles, D; Bagaoutdinov, R; Bodrikov, G; Borodin, Y; Cheburkov, M; Ivanov, D; Karpunin, P; Katargin, R; Kiselyev, A; Kotlayarevsky, Y; Schetinnikov, A; Tylerov, F

    1999-03-01

    Recent advances in personal computer technology have led to the development of relatively low-cost software to generate high-resolution three-dimensional images. The capability both to rotate and zoom in on these images superposed on appropriate background images enables high-quality movies to be created. These developments have been used to produce realistic simulations of the International Space Station on CD-ROM. This product is described and its potentialities demonstrated. With successive launches, the ISS is gradually built up, and visualised over a rotating Earth against the star background. It is anticipated that this product's capability will be useful when training astronauts to carry out EVAs around the ISS. Simulations inside the ISS are also very realistic. These should prove invaluable when familiarising the ISS crew with their future workplace and home. Operating procedures can be taught and perfected. "What if" scenario models can be explored and this facility should be useful when training the crew to deal with emergency situations which might arise. This CD-ROM product will also be used to make the general public more aware of, and hence enthusiastic about, the International Space Station programme.

  10. Do features of public open spaces vary according to neighbourhood socio-economic status?

    Science.gov (United States)

    Crawford, David; Timperio, Anna; Giles-Corti, Billie; Ball, Kylie; Hume, Clare; Roberts, Rebecca; Andrianopoulos, Nick; Salmon, Jo

    2008-12-01

    This study examined the relations between neighbourhood socio-economic status and features of public open spaces (POS) hypothesised to influence children's physical activity. Data were from the first follow-up of the Children Living in Active Neighbourhoods (CLAN) Study, which involved 540 families of 5-6 and 10-12-year-old children in Melbourne, Australia. The Socio-Economic Indexes for Areas (SEIFA) Index of Relative Socio-economic Advantage/Disadvantage was used to assign a socioeconomic index score to each child's neighbourhood, based on postcode. Participant addresses were geocoded using a Geographic Information System. The Open Space 2002 spatial data set was used to identify all POS within an 800 m radius of each participant's home. The features of each of these POS (1497) were audited. Variability of POS features was examined across quintiles of neighbourhood SEIFA. Compared with POS in lower socioeconomic neighbourhoods, POS in the highest socioeconomic neighbourhoods had more amenities (e.g. picnic tables and drink fountains) and were more likely to have trees that provided shade, a water feature (e.g. pond, creek), walking and cycling paths, lighting, signage regarding dog access and signage restricting other activities. There were no differences across neighbourhoods in the number of playgrounds or the number of recreation facilities (e.g. number of sports catered for on courts and ovals, the presence of other facilities such as athletics tracks, skateboarding facility and swimming pool). This study suggests that POS in high socioeconomic neighbourhoods possess more features that are likely to promote physical activity amongst children.

  11. Quantum scattering theory of a single-photon Fock state in three-dimensional spaces.

    Science.gov (United States)

    Liu, Jingfeng; Zhou, Ming; Yu, Zongfu

    2016-09-15

    A quantum scattering theory is developed for Fock states scattered by two-level systems in three-dimensional free space. It is built upon the one-dimensional scattering theory developed in waveguide quantum electrodynamics. The theory fully quantizes the incident light as Fock states and uses a non-perturbative method to calculate the scattering matrix.

  12. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    Science.gov (United States)

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique seen in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map to have an asymptotically equivalent convergence point of estimated parameters, referred to as the vicarious map. As a demonstration to find vicarious maps, we consider a feature space that limits the length of data, and derive a necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Three dimensional monocular human motion analysis in end-effector space

    DEFF Research Database (Denmark)

    Hauberg, Søren; Lapuyade, Jerome; Engell-Nørregård, Morten Pol

    2009-01-01

    In this paper, we present a novel approach to three dimensional human motion estimation from monocular video data. We employ a particle filter to perform the motion estimation. The novelty of the method lies in the choice of state space for the particle filter. Using a non-linear inverse kinemati...

  14. Model space dimensionalities for multiparticle fermion systems

    International Nuclear Information System (INIS)

    Draayer, J.P.; Valdes, H.T.

    1985-01-01

    A menu-driven program for determining the dimensionalities of fixed-(J) [or (J,T)] model spaces built by distributing identical fermions (electrons, neutrons, protons) or two distinguishable fermion types (neutron-proton and isospin formalisms) among any mixture of positive and negative parity spherical orbitals is presented. The algorithm, built around the elementary difference formula d(J)=d(M=J)-d(M=J+1), takes full advantage of M->-M and particle-hole symmetries. A 96 K version of the program suffices for as complicated a case as d[(+1/2, +3/2, +5/2, +7/2-11/2)sup(n-26) J=2⁺, T=7]=210,442,716,722 found in the 0hω valence space of ¹²⁶₅₆Ba₇₀. The program calculates the total fixed-(Jsup(π)) or fixed-(Jsup(π),T) dimensionality of a model space generated by distributing a specified number of fermions among a set of input positive and negative parity (π) spherical (j) orbitals. The user is queried at each step to select among various options: 1. formalism - identical particle, neutron-proton, isospin; 2. orbits - number, +/-2*J of all orbits; 3. limits - minimum/maximum number of particles of each parity; 4. specifics - number of particles, +/-2*J (total), 2*T; 5. continue - same orbit structure, new case, quit. Though designed for nuclear applications (jj-coupling), the program can be used in the atomic case (LS-coupling) so long as half-integer spin values (j=l+-1/2) are input for the valence orbitals. Multiple occurrences of a given j value are properly taken into account. A minor extension provides labelling information for a generalized seniority classification scheme. The program logic is an adaptation of methods used in statistical spectroscopy to evaluate configuration averages. Indeed, the need for fixed-symmetry level densities in spectral distribution theory motivated this work. The methods extend to other group structures where there are M-like additive quantum labels. (orig.)
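The elementary difference formula d(J)=d(M=J)-d(M=J+1) can be implemented directly by brute-force enumeration of Pauli-allowed substate configurations. The sketch below is practical only for tiny spaces; the program described above instead exploits the M->-M and particle-hole symmetries to reach cases like the Ba example. Names and the half-integer-spin encoding (2j values as integers) are illustrative choices.

```python
from itertools import combinations
from collections import Counter

def fixed_j_dimensions(two_js, n):
    """Fixed-J dimensionalities for n identical fermions in orbitals with the
    given 2j values, via d(J) = d(M=J) - d(M=J+1)."""
    # enumerate all magnetic substates: for each orbital, 2m runs -2j..2j
    substates = []
    for idx, two_j in enumerate(two_js):
        for two_m in range(-two_j, two_j + 1, 2):
            substates.append((idx, two_m))
    # Pauli principle: choose n distinct substates; tally total 2M
    dM = Counter()
    for combo in combinations(substates, n):
        dM[sum(m for _, m in combo)] += 1
    # difference formula, starting from the lowest allowed non-negative M
    dJ = {}
    two_m = 0 if n % 2 == 0 else 1
    while dM.get(two_m, 0) > 0:
        dJ[two_m / 2] = dM.get(two_m, 0) - dM.get(two_m + 2, 0)
        two_m += 2
    return dJ
```

For two identical fermions in a single j=7/2 shell this reproduces the textbook result: J = 0, 2, 4, 6, each occurring once, and the (2J+1)-weighted dimensions sum to C(8,2) = 28.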

  15. Feature selection for domain knowledge representation through multitask learning

    CSIR Research Space (South Africa)

    Rosman, Benjamin S

    2014-10-01

    Full Text Available represent stimuli of interest, and rich feature sets which increase the dimensionality of the space and thus the difficulty of the learning problem. We focus on a multitask reinforcement learning setting, where the agent is learning domain knowledge...

  16. Simultaneous Channel and Feature Selection of Fused EEG Features Based on Sparse Group Lasso

    Directory of Open Access Journals (Sweden)

    Jin-Jia Wang

    2015-01-01

Full Text Available Feature extraction and classification of EEG signals are core parts of brain-computer interfaces (BCIs). Due to the high dimension of the EEG feature vector, an effective feature selection algorithm has become an integral part of research studies. In this paper, we present a new method based on a wrapped Sparse Group Lasso for channel and feature selection of fused EEG signals. The high-dimensional fused features are first obtained; they include the power spectrum, time-domain statistics, AR model, and wavelet coefficient features extracted from the preprocessed EEG signals. The wrapped channel and feature selection method is then applied, which uses a logistic regression model with a Sparse Group Lasso penalty function. The model is fitted on the training data, and parameter estimates are obtained by modified blockwise coordinate descent and coordinate gradient descent methods. The best parameters and feature subset are selected by using 10-fold cross-validation. Finally, the test data are classified using the trained model. Compared with existing channel and feature selection methods, results show that the proposed method is more suitable, more stable, and faster for high-dimensional feature fusion. It can simultaneously achieve channel and feature selection with a lower error rate. The test accuracy on the data used from the international BCI Competition IV reached 84.72%.
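The Sparse Group Lasso penalty used here combines an l1 term (feature-level sparsity) with a grouped l2 term that can zero out a whole channel's feature block at once. A minimal sketch of its proximal operator, the core update inside blockwise coordinate-descent solvers (the group layout and parameter names are illustrative, not taken from the paper):

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise l1 proximal step."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_sparse_group_lasso(w, groups, lam1, lam2, step=1.0):
    """Proximal operator of lam1*||w||_1 + lam2*sum_g ||w_g||_2.
    `groups` maps a group name to the indices of its features
    (e.g. all features extracted from one EEG channel)."""
    out = np.zeros_like(w)
    for idx in groups.values():
        z = soft_threshold(w[idx], step * lam1)        # feature-level sparsity
        norm = np.linalg.norm(z)
        if norm > 0.0:                                 # group-level shrinkage
            out[idx] = max(0.0, 1.0 - step * lam2 / norm) * z
    return out

# Two "channels" with three features each; the penalty kills channel 2 entirely
w = np.array([2.0, -1.5, 0.8, 0.1, -0.05, 0.02])
groups = {"ch1": np.arange(0, 3), "ch2": np.arange(3, 6)}
w_new = prox_sparse_group_lasso(w, groups, lam1=0.1, lam2=0.5)
```

The second group's small coefficients are zeroed as a block, which is exactly the mechanism that performs channel selection and feature selection simultaneously.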

  17. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    International Nuclear Information System (INIS)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-01-01

Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data.
To train the
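For contrast, the "classic approach" to discovering the active subspace that the abstract criticizes — an eigendecomposition of the empirical gradient covariance C = E[∇f ∇fᵀ] — can be sketched in a few lines (a toy illustration with an assumed test function, not the authors' gradient-free GP method):

```python
import numpy as np

def active_subspace(grad_samples, k):
    """Leading k-dimensional active subspace from gradient samples.
    grad_samples: (N, D) array of gradients of f at N input points."""
    C = grad_samples.T @ grad_samples / grad_samples.shape[0]  # E[grad grad^T]
    eigval, eigvec = np.linalg.eigh(C)           # eigh returns ascending order
    W = eigvec[:, ::-1][:, :k]                   # top-k eigenvectors
    return W, eigval[::-1]

# Toy ridge response f(x) = sin(w . x): all variation lies along w
rng = np.random.default_rng(0)
D, N = 10, 500
w = np.zeros(D); w[0] = 3.0; w[1] = 4.0          # true 1-D active direction
X = rng.standard_normal((N, D))
grads = np.cos(X @ w)[:, None] * w[None, :]      # exact gradient of sin(w . x)
W, spectrum = active_subspace(grads, k=1)
```

A sharp drop after the first eigenvalue signals a one-dimensional AS; the recovered column of W aligns with w up to sign.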

  20. Dimensionality reduction of collective motion by principal manifolds

    Science.gov (United States)

    Gajamannage, Kelum; Butail, Sachit; Porfiri, Maurizio; Bollt, Erik M.

    2015-01-01

    While the existence of low-dimensional embedding manifolds has been shown in patterns of collective motion, the current battery of nonlinear dimensionality reduction methods is not amenable to the analysis of such manifolds. This is mainly due to the necessary spectral decomposition step, which limits control over the mapping from the original high-dimensional space to the embedding space. Here, we propose an alternative approach that demands a two-dimensional embedding which topologically summarizes the high-dimensional data. In this sense, our approach is closely related to the construction of one-dimensional principal curves that minimize orthogonal error to data points subject to smoothness constraints. Specifically, we construct a two-dimensional principal manifold directly in the high-dimensional space using cubic smoothing splines, and define the embedding coordinates in terms of geodesic distances. Thus, the mapping from the high-dimensional data to the manifold is defined in terms of local coordinates. Through representative examples, we show that compared to existing nonlinear dimensionality reduction methods, the principal manifold retains the original structure even in noisy and sparse datasets. The principal manifold finding algorithm is applied to configurations obtained from a dynamical system of multiple agents simulating a complex maneuver called predator mobbing, and the resulting two-dimensional embedding is compared with that of a well-established nonlinear dimensionality reduction method.
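One ingredient of the construction — defining embedding coordinates through geodesic rather than straight-line distances — can be sketched with a k-nearest-neighbour graph and Dijkstra shortest paths (a generic illustration; the paper itself builds the manifold with cubic smoothing splines):

```python
import heapq
import numpy as np

def knn_graph(points, k):
    """Symmetric k-nearest-neighbour graph as an adjacency dict,
    weighted by Euclidean distance."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    adj = {i: {} for i in range(len(points))}
    for i in range(len(points)):
        for j in np.argsort(d[i])[1:k + 1]:      # skip the point itself
            adj[i][j] = adj[j][i] = d[i, j]
    return adj

def geodesics(adj, src):
    """Dijkstra shortest-path (graph-geodesic) distances from src."""
    dist = {v: np.inf for v in adj}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        du, u = heapq.heappop(heap)
        if du > dist[u]:
            continue
        for v, w in adj[u].items():
            if du + w < dist[v]:
                dist[v] = du + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Points on a half circle: the geodesic distance tracks the arc length (~pi),
# while the chordal (straight-line) distance between the endpoints is 2.
t = np.linspace(0, np.pi, 40)
arc = np.c_[np.cos(t), np.sin(t)]
adj = knn_graph(arc, k=2)
g = geodesics(adj, 0)
```

On curved data such as collective-motion configurations, the graph-geodesic distance preserves the intrinsic structure that chordal distances flatten away.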

  1. Coherent states on horospheric three-dimensional Lobachevsky space

    Energy Technology Data Exchange (ETDEWEB)

    Kurochkin, Yu., E-mail: y.kurochkin@ifanbel.bas-net.by; Shoukavy, Dz., E-mail: shoukavy@ifanbel.bas-net.by [Institute of Physics, National Academy of Sciences of Belarus, 68 Nezalezhnasci Ave., Minsk 220072 (Belarus); Rybak, I., E-mail: Ivan.Rybak@astro.up.pt [Institute of Physics, National Academy of Sciences of Belarus, 68 Nezalezhnasci Ave., Minsk 220072 (Belarus); Instituto de Astrofísica e Ciências do Espaço, CAUP, Rua das Estrelas, 4150-762 Porto (Portugal); Faculdade de Ciências, Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal)

    2016-08-15

In the paper it is shown that, due to the separation of variables in the Laplace-Beltrami operator (the Hamiltonian of a free quantum particle) in horospheric and quasi-Cartesian coordinates of three-dimensional Lobachevsky space, it is possible to introduce standard (“conventional” according to Perelomov [Generalized Coherent States and Their Applications (Springer-Verlag, 1986), p. 320]) coherent states. Some problems (an oscillator on the horosphere, a charged particle in the analog of a constant uniform magnetic field) for which coherent states are suitable for the treatment were considered.

  2. Overall feature of EAST operation space by using simple Core-SOL-Divertor model

    International Nuclear Information System (INIS)

    Hiwatari, R.; Hatayama, A.; Zhu, S.; Takizuka, T.; Tomita, Y.

    2005-01-01

We have developed a simple Core-SOL-Divertor (C-S-D) model to investigate qualitatively the overall features of the operational space for the integrated core and edge plasma. To construct the simple C-S-D model, a simple core plasma model based on the ITER physics guidelines and a two-point SOL-divertor model are used. The simple C-S-D model is applied to the study of the EAST operational space with lower hybrid current drive experiments under various trade-offs among the basic plasma parameters. Effective methods for extending the operation space are also presented. This study of the EAST operation space makes it evident that the C-S-D model is a useful tool for understanding qualitatively the overall features of the plasma operation space. (author)

  3. Aspects of high-dimensional theories in embedding spaces

    International Nuclear Information System (INIS)

    Maia, M.D.; Mecklenburg, W.

    1983-01-01

The question of whether physical meaning may be attributed to the extra dimensions provided by embedding procedures as applied to physical space-times is discussed. The similarities and differences between the present picture and conventional Kaluza-Klein pictures are commented on. (Author) [pt

  4. Non extensive statistics and entropic gravity in a non-integer dimensional space

    International Nuclear Information System (INIS)

    Abreu, Everton M.C.; Ananias Neto, Jorge; Godinho, Cresus F.L.

    2013-01-01

Full text: The idea that gravity can originate from thermodynamic features began with the discovery that black hole physics is connected to the laws of thermodynamics. These concepts were strongly boosted after Jacobson's work, where the Einstein equations were obtained from general thermodynamic arguments. In a recent work, Padmanabhan obtained an interpretation of gravity as an equipartition law. In Verlinde's thermo-gravitational formalism, temperature and acceleration are connected via the Unruh effect. At the same time, he combined the holographic principle with an equipartition law, where the number of bits is proportional to the area of the holographic surface. Bits were used to define the microscopic degrees of freedom. With these ingredients, the entropic force combined with the holographic principle and the equipartition law gives rise to Newton's law of gravitation. A possible interpretation of Verlinde's result is that gravity is not an underlying concept but an emergent one: it originates from the statistical behavior of the microscopic degrees of freedom of the holographic screen. Following these ideas, the literature has grown rapidly, from entropic derivations of the Coulomb force and symmetry considerations to cosmology and loop quantum gravity. In this work we introduce Newton's constant in a fractal space as a function of the nonextensive one. With this result we establish a relation between the Tsallis nonextensive parameter and the dimension of this fractal space. Using Verlinde's formalism, we combine these fractal ideas with the concept of entropic gravity to calculate the number of bits of a holographic surface in this non-integer dimensional space, a fractal holographic screen. We introduce a fundamental, Planck-like length into this space as a function of the radius of this fractal holographic screen. Finally, we consider higher dimensions in this analysis. (author)

  5. SHOVAV-JUEL. A one dimensional space-time kinetic code for pebble-bed high-temperature reactors with temperature and Xenon feedback

    International Nuclear Information System (INIS)

    Nabbi, R.; Meister, G.; Finken, R.; Haben, M.

    1982-09-01

The present report describes the modelling basis and the structure of the neutron kinetics code SHOVAV-Juel. Information is given for users regarding the application of the code and the generation of the input data. SHOVAV-Juel is a one-dimensional space-time code based on a multigroup diffusion approach with four energy groups and six groups of delayed neutrons. It has been developed for the analysis of the transient behaviour of high-temperature reactors with a pebble-bed core. The reactor core is modelled by horizontal segments to which different material compositions can be assigned. The temperature dependence of the reactivity is taken into account by using temperature-dependent neutron cross sections. For the simulation of transients over an extended time range, the time dependence of the reactivity absorption by xenon-135 is taken into account. (orig./RW)

  6. Coupled multiview autoencoders with locality sensitivity for three-dimensional human pose estimation

    Science.gov (United States)

    Yu, Jialin; Sun, Jifeng; Luo, Shasha; Duan, Bichao

    2017-09-01

Estimating three-dimensional (3D) human poses from a single camera is usually implemented by searching pose candidates with image descriptors. Existing methods usually suppose that the mapping from feature space to pose space is linear, but in fact this mapping is highly nonlinear, which heavily degrades the performance of 3D pose estimation. We propose a method to recover 3D pose from a silhouette image. It is based on multiview feature embedding (MFE) and locality-sensitive autoencoders (LSAEs). On the one hand, we first describe the manifold-regularized sparse low-rank approximation for MFE, and the input image is then characterized by a fused feature descriptor. On the other hand, both the fused feature and its corresponding 3D pose are separately encoded by LSAEs. A two-layer back-propagation neural network is trained by parameter fine-tuning and then used to map the encoded 2D features to encoded 3D poses. Our LSAE ensures a good preservation of the local topology of data points. Experimental results demonstrate the effectiveness of our proposed method.

  7. High-resolution two-dimensional and three-dimensional modeling of wire grid polarizers and micropolarizer arrays

    Science.gov (United States)

    Vorobiev, Dmitry; Ninkov, Zoran

    2017-11-01

    Recent advances in photolithography allowed the fabrication of high-quality wire grid polarizers for the visible and near-infrared regimes. In turn, micropolarizer arrays (MPAs) based on wire grid polarizers have been developed and used to construct compact, versatile imaging polarimeters. However, the contrast and throughput of these polarimeters are significantly worse than one might expect based on the performance of large area wire grid polarizers or MPAs, alone. We investigate the parameters that affect the performance of wire grid polarizers and MPAs, using high-resolution two-dimensional and three-dimensional (3-D) finite-difference time-domain simulations. We pay special attention to numerical errors and other challenges that arise in models of these and other subwavelength optical devices. Our tests show that simulations of these structures in the visible and near-IR begin to converge numerically when the mesh size is smaller than ˜4 nm. The performance of wire grid polarizers is very sensitive to the shape, spacing, and conductivity of the metal wires. Using 3-D simulations of micropolarizer "superpixels," we directly study the cross talk due to diffraction at the edges of each micropolarizer, which decreases the contrast of MPAs to ˜200∶1.

  8. A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube

    Science.gov (United States)

    Zou, Shuzhi; Zhao, Li; Hu, Kongfa

The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high-dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high-dimensional data cube into low-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.
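The minimal-cubing idea that the algorithm extends can be sketched as follows: precompute cuboids only within small fragments of the dimension set, store tuple-ID sets, and answer queries that span fragments by intersecting those sets (a simplified sketch; all names are illustrative):

```python
from collections import defaultdict
from itertools import combinations

def build_shell_fragments(rows, dims, frag_size):
    """Precompute inverted indices (value tuple -> row ids) for every
    cuboid inside each dimension fragment."""
    frags = [dims[i:i + frag_size] for i in range(0, len(dims), frag_size)]
    shell = {}
    for frag in frags:
        for r in range(1, len(frag) + 1):
            for cuboid in combinations(frag, r):
                index = defaultdict(set)
                for tid, row in enumerate(rows):
                    index[tuple(row[d] for d in cuboid)].add(tid)
                shell[cuboid] = index
    return shell, frags

def point_query(shell, frags, cond):
    """COUNT query whose conditions may span several fragments:
    intersect the tid-sets from each fragment's precomputed cuboid."""
    tids = None
    for frag in frags:
        sub = tuple(d for d in frag if d in cond)
        if not sub:
            continue
        ids = shell[sub].get(tuple(cond[d] for d in sub), set())
        tids = ids if tids is None else tids & ids
    return len(tids) if tids is not None else 0

rows = [{"A": "a1", "B": "b1", "C": "c1", "D": "d1"},
        {"A": "a1", "B": "b2", "C": "c1", "D": "d2"},
        {"A": "a2", "B": "b1", "C": "c1", "D": "d1"}]
shell, frags = build_shell_fragments(rows, ["A", "B", "C", "D"], frag_size=2)
n = point_query(shell, frags, {"A": "a1", "C": "c1"})  # spans two fragments
```

Only the small per-fragment cuboids are materialized, so the precomputation cost grows with the fragment size rather than with the full dimensionality.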

  9. Intelligent feature selection techniques for pattern classification of Lamb wave signals

    International Nuclear Information System (INIS)

    Hinders, Mark K.; Miller, Corey A.

    2014-01-01

Lamb wave interaction with flaws is a complex, three-dimensional phenomenon, which often frustrates signal interpretation schemes based on mode arrival time shifts predicted by dispersion curves. As the flaw severity increases, scattering and mode conversion effects will often dominate the time-domain signals, obscuring available information about flaws because multiple modes may arrive on top of each other. Even for idealized flaw geometries, the scattering and mode conversion behavior of Lamb waves is very complex. Here, multi-mode Lamb waves in a metal plate are propagated across a rectangular flat-bottom hole in a sequence of pitch-catch measurements corresponding to the double crosshole tomography geometry. The flaw is sequentially deepened, with the Lamb wave measurements repeated at each flaw depth. Lamb wave tomography reconstructions are used to identify which waveforms have interacted with the flaw and thereby carry information about its depth. Multiple features are extracted from each of the Lamb wave signals using wavelets, which are then fed to statistical pattern classification algorithms that identify flaw severity. In order to achieve the highest classification accuracy, an optimal feature space is required, but it is never known a priori which features will be best. For structural health monitoring, we make use of the fact that physical flaws, such as corrosion, will only increase in severity over time. This allows us to identify feature vectors which are topologically well-behaved by requiring that sequential classes “line up” in feature vector space. An intelligent feature selection routine is illustrated that identifies favorable class distributions in multi-dimensional feature spaces using computational homology theory. Betti numbers and formal classification accuracies are calculated for each feature space subset to establish a correlation between the topology of the class distribution and the corresponding classification accuracy.

  10. Solution of the two-dimensional space-time reactor kinetics equation by a locally one-dimensional method

    International Nuclear Information System (INIS)

    Chen, G.S.; Christenson, J.M.

    1985-01-01

In this paper, the authors present some initial results from an investigation of the application of a locally one-dimensional (LOD) finite difference method to the solution of the two-dimensional, two-group reactor kinetics equations. Although the LOD method is relatively well known, it apparently has not been previously applied to the space-time kinetics equations. In this investigation, the LOD results were benchmarked against similar computational results (using the same computing environment, the same programming structure, and the same sample problems) obtained by the TWIGL program. For all of the problems considered, the LOD method provided accurate results in one-half to one-eighth of the time required by the TWIGL program.
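The LOD idea benchmarked here — replacing one implicit two-dimensional step by two implicit one-dimensional sweeps, each a cheap tridiagonal solve — can be sketched for the plain heat equation u_t = u_xx + u_yy (a generic illustration, not the two-group kinetics solver itself):

```python
import numpy as np

def tridiag_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system (a: sub-, b: main,
    c: super-diagonal); a and c have length n-1."""
    n = len(d)
    b = b.astype(float); d = d.astype(float)   # work on copies
    for i in range(1, n):                      # forward elimination
        m = a[i - 1] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = np.zeros(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def lod_step(u, r):
    """One locally one-dimensional step: implicit sweep along axis 0,
    then along axis 1. r = dt/h^2; zero Dirichlet boundary values."""
    n = u.shape[0]
    sub = np.full(n - 1, -r)
    diag = np.full(n, 1.0 + 2.0 * r)
    for j in range(n):                         # sweep along axis 0
        u[:, j] = tridiag_solve(sub, diag, sub, u[:, j])
    for i in range(n):                         # sweep along axis 1
        u[i, :] = tridiag_solve(sub, diag, sub, u[i, :])
    return u

n = 20
u = np.random.default_rng(1).random((n, n))
total0 = u.max()
for _ in range(50):
    u = lod_step(u, r=0.5)                     # implicit sweeps: stable for any r
```

Each full 2-D step costs only O(n^2) via the 1-D solves, which is the source of the speedup the paper reports over a fully two-dimensional implicit scheme.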

  11. Optical asymmetric cryptography using a three-dimensional space-based model

    International Nuclear Information System (INIS)

    Chen, Wen; Chen, Xudong

    2011-01-01

    In this paper, we present optical asymmetric cryptography combined with a three-dimensional (3D) space-based model. An optical multiple-random-phase-mask encoding system is developed in the Fresnel domain, and one random phase-only mask and the plaintext are combined as a series of particles. Subsequently, the series of particles is translated along an axial direction, and is distributed in a 3D space. During image decryption, the robustness and security of the proposed method are further analyzed. Numerical simulation results are presented to show the feasibility and effectiveness of the proposed optical image encryption method

  12. Dynamics of a neuron model in different two-dimensional parameter-spaces

    Science.gov (United States)

    Rech, Paulo C.

    2011-03-01

We report some two-dimensional parameter-space diagrams numerically obtained for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that, regardless of the combination of parameters, a typical scenario is preserved: for every choice of two parameters, the parameter space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that there exist periodic regions close to this chaotic region, separated by the comb teeth, that organize themselves in period-adding bifurcation cascades.

  13. The curvature and the algebra of Killing vectors in five-dimensional space

    International Nuclear Information System (INIS)

    Rcheulishvili, G.

    1990-12-01

This paper presents the Killing vectors for a five-dimensional space with a given line element. The algebras formed by these vectors are written down. The curvature two-forms are described. (author). 10 refs

  14. High-dimensional covariance estimation with high-dimensional data

    CERN Document Server

    Pourahmadi, Mohsen

    2013-01-01

Methods for estimating sparse and large covariance matrices. Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and machine learning.
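One of the simplest estimators in this family — hard-thresholding the off-diagonal entries of the sample covariance when the dimension p is comparable to the sample size n — can be sketched as follows (a generic illustration of the technique, not code from the book):

```python
import numpy as np

def threshold_covariance(X, t):
    """Sample covariance with off-diagonal hard-thresholding:
    off-diagonal entries smaller than t in magnitude are zeroed."""
    S = np.cov(X, rowvar=False)
    T = np.where(np.abs(S) >= t, S, 0.0)
    np.fill_diagonal(T, np.diag(S))       # variances are never thresholded
    return T

# p comparable to n, with a sparse (tridiagonal) true covariance
rng = np.random.default_rng(0)
p, n = 30, 60
Sigma = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
L = np.linalg.cholesky(Sigma)
X = rng.standard_normal((n, p)) @ L.T
S_hat = threshold_covariance(X, t=0.25)
```

Thresholding zeroes most of the pure-noise entries of the sample covariance, so the estimate is both sparser and closer to the truth than the raw sample covariance in this regime.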

  15. A three-dimensional phase space dynamical model of the Earth's radiation belt

    International Nuclear Information System (INIS)

    Boscher, D. M.; Beutier, T.; Bourdarie, S.

    1996-01-01

A three-dimensional phase space model of the Earth's radiation belt is presented. We have taken into account magnetic and electric radial diffusion, pitch angle diffusion due to Coulomb interactions and interactions with plasmaspheric hiss, and Coulomb drag. First, a steady state of the belt is presented. Two main maxima are obtained, corresponding to the inner and outer parts of the belt. Then, we have modelled a simple injection at the external boundary. The resulting particle transport resembles what was measured aboard satellites. A high-energy particle loss is found by comparing the model results and the measurements; it remains to be explained.

  16. Canonical Groups for Quantization on the Two-Dimensional Sphere and One-Dimensional Complex Projective Space

    International Nuclear Information System (INIS)

    Sumadi A H A; H, Zainuddin

    2014-01-01

Using Isham's group-theoretic quantization scheme, we construct the canonical groups of the systems on the two-dimensional sphere and the one-dimensional complex projective space, which are homeomorphic. In the first case, we take SO(3) as the natural canonical Lie group of rotations of the two-sphere and find all the possible Hamiltonian vector fields, followed by verifying the commutator and Poisson bracket algebra correspondences with the Lie algebra of the group. In the second case, the same technique is used to define the Lie group, in this case SU(2), of CP¹. We show that one can simply use a coordinate transformation from S² to CP¹ to obtain all the Hamiltonian vector fields of CP¹. We explicitly show that the Lie algebra structures of both canonical groups are locally homomorphic. On the other hand, globally their corresponding canonical groups act on different geometries, the latter of which is almost complex. Thus the canonical group for CP¹ is the double-covering group of SO(3), namely SU(2). The relevance of the proposed formalism is to understand the idea of CP¹ as the space where the qubit lives, known as the Bloch sphere.

  17. Detection of Coronal Mass Ejections Using Multiple Features and Space-Time Continuity

    Science.gov (United States)

    Zhang, Ling; Yin, Jian-qin; Lin, Jia-ben; Feng, Zhi-quan; Zhou, Jin

    2017-07-01

Coronal Mass Ejections (CMEs) release tremendous amounts of energy in the solar system, which has an impact on satellites, power facilities and wireless transmission. To effectively detect a CME in Large Angle Spectrometric Coronagraph (LASCO) C2 images, we propose a novel algorithm to locate the suspected CME regions, using the Extreme Learning Machine (ELM) method and taking into account grayscale and texture features. Furthermore, space-time continuity is used in the detection algorithm to exclude false CME regions. The algorithm includes three steps: i) define the feature vector, which contains textural and grayscale features of a running difference image; ii) design the detection algorithm based on the ELM method according to the feature vector; iii) improve the detection accuracy rate by using a decision rule based on space-time continuity. Experimental results show the efficiency and the superiority of the proposed algorithm in the detection of CMEs compared with other traditional methods. In addition, our algorithm is insensitive to most noise.
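The Extreme Learning Machine used as the detector is easy to sketch: hidden weights are drawn at random and fixed, so training reduces to one least-squares solve for the output layer (a generic ELM sketch on synthetic two-class data, not the paper's grayscale/texture pipeline):

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine: random fixed hidden layer,
    least-squares fit of the output weights only."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)      # random nonlinear features

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        T = np.eye(int(y.max()) + 1)[y]          # one-hot class targets
        # the only "training": a single linear least-squares solve
        self.beta, *_ = np.linalg.lstsq(self._hidden(X), T, rcond=None)
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Two Gaussian blobs as a stand-in for CME / non-CME feature vectors
rng = np.random.default_rng(1)
X0 = rng.standard_normal((100, 5)) - 1.5
X1 = rng.standard_normal((100, 5)) + 1.5
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)
clf = ELM(n_hidden=40).fit(X, y)
acc = np.mean(clf.predict(X) == y)
```

Because no iterative backpropagation is involved, training is essentially instantaneous, which is what makes the ELM attractive for scanning large volumes of coronagraph imagery.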

  18. Gauge constructs and immersions of four-dimensional spacetimes in (4 + k)-dimensional flat spaces: algebraic evaluation of gravity fields

    International Nuclear Information System (INIS)

    Edelen, Dominic G B

    2003-01-01

    Local action of the fundamental group SO(a, 4 + k - a) is used to show that any solution of an algebraically closed differential system, that is generated from matrix Lie algebra valued 1-forms on a four-dimensional parameter space, will generate families of immersions of four-dimensional spacetimes R 4 in flat (4 + k)-dimensional spaces M 4+k with compatible signature. The algorithm is shown to work with local action of SO(a, 4 + k - a) replaced by local action of GL(4 + k). Immersions generated by local action of the Poincare group on the target spacetime are also obtained. Evaluations of the line elements, immersion loci and connection and curvature forms of these immersions are algebraic. Families of immersions that depend on one or more arbitrary functions are calculated for 1 ≤ k ≤ 4. Appropriate sections of graphs of the conformal factor for two and three interacting line singularities immersed in M 6 are given in appendix A. The local immersion theorem given in appendix B shows that all local solutions of the immersion problem are obtained by use of this method and an algebraic extension in exceptional cases

  19. Ghosts in high dimensional non-linear dynamical systems: The example of the hypercycle

    International Nuclear Information System (INIS)

    Sardanyes, Josep

    2009-01-01

Ghost-induced delayed transitions are analyzed in high-dimensional non-linear dynamical systems by means of the hypercycle model. The hypercycle is a network of catalytically-coupled self-replicating RNA-like macromolecules, and has been suggested to be involved in the transition from non-living to living matter in the context of early prebiotic evolution. It is demonstrated that, in the vicinity of the saddle-node bifurcation for symmetric hypercycles, the persistence time before extinction, T_ε, tends to infinity as n→∞ (n being the number of units of the hypercycle), thus suggesting that an increase in the number of hypercycle units involves a longer resilient time before extinction because of the ghost. Furthermore, by means of numerical analysis, the dynamics of three large hypercycle networks is also studied, focusing on their extinction dynamics associated with the ghosts. Such networks allow one to explore the properties of the ghosts living in high-dimensional phase spaces with n = 5, n = 10 and n = 15 dimensions. These hypercyclic networks, in agreement with other works, are shown to exhibit self-maintained oscillations governed by stable limit cycles. The bifurcation scenarios for these hypercycles are analyzed, as well as the effect of the phase space dimensionality on the delayed transition phenomena and on the scaling properties of the ghosts near the bifurcation threshold.
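The elementary symmetric hypercycle underlying this analysis follows replicator dynamics dx_i/dt = x_i (k x_{i-1} - Phi), with Phi = k Σ_j x_j x_{j-1} keeping the state on the simplex. A minimal Euler-integration sketch (illustrative parameters; the paper's model with decay terms and the saddle-node ghost is richer):

```python
import numpy as np

def hypercycle_step(x, k=1.0, dt=0.01):
    """One Euler step of the elementary symmetric hypercycle:
    dx_i/dt = x_i * (k * x_{i-1} - phi), phi = k * sum_i x_i x_{i-1},
    which keeps the state on the simplex (sum x_i = 1)."""
    growth = k * np.roll(x, 1)            # catalytic support from unit i-1
    phi = np.dot(x, growth)               # mean fitness (dilution flux)
    return x + dt * x * (growth - phi)

def simulate(n, steps=20000, dt=0.01, seed=0):
    """Integrate an n-member hypercycle from a random simplex point."""
    rng = np.random.default_rng(seed)
    x = rng.random(n)
    x /= x.sum()
    traj = np.empty((steps, n))
    for t in range(steps):
        traj[t] = x
        x = hypercycle_step(x, dt=dt)
    return traj

traj = simulate(n=5)
```

Consistent with the abstract, for n ≤ 4 the trajectory relaxes to the interior fixed point x_i = 1/n, while for n = 5 it settles onto the self-maintained oscillations of a stable limit cycle.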

  20. On renormalisation of the quantum stress tensor in curved space-time by dimensional regularisation

    International Nuclear Information System (INIS)

    Bunch, T.S.

    1979-01-01

    Using dimensional regularisation, a prescription is given for obtaining a finite renormalised stress tensor in curved space-time. Renormalisation is carried out by renormalising coupling constants in the n-dimensional Einstein equation generalised to include tensors which are fourth order in derivatives of the metric. Except for the special case of a massless conformal field in a conformally flat space-time, this procedure is not unique. There exists an infinite one-parameter family of renormalisation ansatze differing from each other in the finite renormalisation that takes place. Nevertheless, the renormalised stress tensor for a conformally invariant field theory acquires a nonzero trace which is independent of the renormalisation ansatz used and which has a value in agreement with that obtained by other methods. A comparison is made with some earlier work using dimensional regularisation which is shown to be in error. (author)

  1. Quadcopter control in three-dimensional space using a noninvasive motor imagery based brain-computer interface

    Science.gov (United States)

    LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin

    2013-01-01

    Objective At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment in which a BCI controls a robotic quadcopter in three-dimensional physical space using noninvasive scalp EEG in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that operation of a real-world device has on subjects’ control, in comparison with a two-dimensional virtual cursor task. Approach Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a three-dimensional physical space. Visual feedback was provided via a forward-facing camera on the hull of the drone. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m/s. Significance Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user’s ability to interact with the environment via a computer and through the use of thought alone. We demonstrate for the first time the ability to control a flying robot in three-dimensional physical space using noninvasive scalp-recorded EEG in humans. Our work indicates the potential of noninvasive EEG-based BCI systems to accomplish complex control in three-dimensional physical space. The present study may serve as a framework for the investigation of multidimensional noninvasive brain-computer interface control in a physical environment using telepresence robotics. PMID:23735712

  2. Finding and Visualizing Relevant Subspaces for Clustering High-Dimensional Astronomical Data Using Connected Morphological Operators

    NARCIS (Netherlands)

    Ferdosi, Bilkis J.; Buddelmeijer, Hugo; Trager, Scott; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.

    2010-01-01

    Data sets in astronomy are growing to enormous sizes. Modern astronomical surveys provide not only image data but also catalogues of millions of objects (stars, galaxies), each object with hundreds of associated parameters. Exploration of this very high-dimensional data space poses a huge challenge.

  3. CT features of invasion of sublingual space by malignant oropharyngeal tumors

    International Nuclear Information System (INIS)

    Wei Yi; Xiao Jiahe; Zhou Xiangping; Deng Kaihong

    2003-01-01

    Objective: To investigate the CT features of invasion of the sublingual space by malignant oropharyngeal tumors, in order to provide more accurate information for clinical treatment. Methods: Fifty-eight cases of pathologically proven malignant oropharyngeal tumors were collected and retrospectively analyzed. Results: Invasion of the sublingual space by malignant oropharyngeal tumors was seen in 14 cases; of these, 7 reached the sublingual space through the tongue base, 3 through the parapharyngeal space, 2 through the pterygomandibular raphe, and 2 through uncertain routes. Invasion of the sublingual space manifested on CT as obliteration of the fat plane in the sublingual space and involvement of the sublingual vessels in the space. Conclusion: Malignant oropharyngeal tumors can invade the adjacent sublingual space via the tongue base, pterygomandibular raphe, and parapharyngeal space. The invasion manifests on CT as effacement of the sublingual fat plane and envelopment of the sublingual vessels

  4. Classification Influence of Features on Given Emotions and Its Application in Feature Selection

    Science.gov (United States)

    Xing, Yin; Chen, Chuang; Liu, Li-Long

    2018-04-01

    In order to address the large amount of redundant data in high-dimensional speech emotion features, we analyze the extracted speech emotion features in depth and select the better ones. Firstly, a given emotion is classified using each feature individually. Secondly, the features are ranked by recognition rate in descending order. Then, the optimal threshold on the features is determined by the recognition-rate criterion. Finally, the better features are obtained. When applied to the Berlin and Chinese emotional data sets, the experimental results show that this feature selection method outperforms other traditional methods.
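
    The four steps above amount to scoring each feature by its single-feature recognition rate and keeping only those above a threshold. A minimal sketch on synthetic data (the data, the midpoint-threshold classifier and the 0.65 cutoff are illustrative assumptions, not the paper's actual features or criterion):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for speech-emotion features: 2 informative + 8 noise columns.
    n = 200
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 10))
    X[:, 0] += 2.0 * y          # informative feature
    X[:, 1] -= 1.5 * y          # informative feature

    def single_feature_rate(x, y):
        """Recognition rate of a one-feature midpoint-threshold classifier."""
        m0, m1 = x[y == 0].mean(), x[y == 1].mean()
        thr = 0.5 * (m0 + m1)
        pred = (x > thr).astype(int) if m1 > m0 else (x <= thr).astype(int)
        return (pred == y).mean()

    # Step 1-2: classify with each feature alone, rank rates in descending order.
    rates = np.array([single_feature_rate(X[:, j], y) for j in range(X.shape[1])])
    order = np.argsort(rates)[::-1]
    # Step 3-4: keep features whose recognition rate exceeds a chosen threshold.
    selected = [j for j in order if rates[j] > 0.65]
    print(selected)
    ```

    On this synthetic data only the two informative columns clear the cutoff; the noise features score near chance (0.5).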

  5. Static stability of a three-dimensional space truss. M.S. Thesis - Case Western Reserve Univ., 1994

    Science.gov (United States)

    Shaker, John F.

    1995-01-01

    In order to deploy large flexible space structures it is necessary to develop support systems that are strong and lightweight. The most recent example of this aerospace design need is vividly evident in the space station solar array assembly. In order to accommodate both weight limitations and strength performance criteria, ABLE Engineering has developed the Folding Articulating Square Truss (FASTMast) support structure. The FASTMast is a space truss/mechanism hybrid that can provide system support while adhering to stringent packaging demands. However, due to its slender nature and anticipated loading, stability characterization is a critical part of the design process. Furthermore, the dire consequences sure to result from a catastrophic instability quickly provide the motivation for careful examination of this problem. The fundamental components of the space station solar array system are the (1) solar array blanket system, (2) FASTMast support structure, and (3) mast canister assembly. The FASTMast, once fully deployed from the canister, will provide support to the solar array blankets. A unique feature of this structure is that the system responds linearly within a certain range of operating loads and nonlinearly when that range is exceeded. The source of nonlinear behavior in this case is a changing stiffness state resulting from an inability of diagonal members to resist applied loads. The principal objective of this study was to establish the failure modes involving instability of the FASTMast structure. Also of great interest during this effort was establishing a reliable analytical approach capable of effectively predicting critical values at which the mast becomes unstable. Due to the dual nature of structural response inherent to this problem, both linear and nonlinear analyses are required to characterize the mast in terms of stability. The approach employed herein is one that can be considered systematic in nature. The analysis begins with one

  6. Quantum theory of string in the four-dimensional space-time

    International Nuclear Information System (INIS)

    Pron'ko, G.P.

    1986-01-01

    The Lorentz-invariant quantum theory of the string is constructed in four-dimensional space-time. Unlike the traditional approach, which results in the breaking of Lorentz invariance, our method is based on the use of other variables to describe string configurations. The method of an auxiliary spectral problem for periodic potentials is the main tool in the construction of these new variables

  7. Automated Analysis and Classification of Histological Tissue Features by Multi-Dimensional Microscopic Molecular Profiling.

    Directory of Open Access Journals (Sweden)

    Daniel P Riordan

    Full Text Available Characterization of the molecular attributes and spatial arrangements of cells and features within complex human tissues provides a critical basis for understanding processes involved in development and disease. Moreover, the ability to automate steps in the analysis and interpretation of histological images that currently require manual inspection by pathologists could revolutionize medical diagnostics. Toward this end, we developed a new imaging approach called multidimensional microscopic molecular profiling (MMMP) that can measure several independent molecular properties in situ at subcellular resolution for the same tissue specimen. MMMP involves repeated cycles of antibody or histochemical staining, imaging, and signal removal, which ultimately can generate information analogous to a multidimensional flow cytometry analysis on intact tissue sections. We performed a MMMP analysis on a tissue microarray containing a diverse set of 102 human tissues using a panel of 15 informative antibody stains and 5 histochemical stains plus DAPI. Large-scale unsupervised analysis of MMMP data, and visualization of the resulting classifications, identified molecular profiles that were associated with functional tissue features. We then directly annotated H&E images from this MMMP series such that canonical histological features of interest (e.g. blood vessels, epithelium, red blood cells) were individually labeled. By integrating image annotation data, we identified molecular signatures that were associated with specific histological annotations and we developed statistical models for automatically classifying these features. The classification accuracy for automated histology labeling was objectively evaluated using a cross-validation strategy, and significant accuracy (with a median per-pixel rate of 77% per feature from 15 annotated samples) for de novo feature prediction was obtained. These results suggest that high-dimensional profiling may advance the

  8. Plasma and process characterization of high power magnetron physical vapor deposition with integrated plasma equipment--feature profile model

    International Nuclear Information System (INIS)

    Zhang Da; Stout, Phillip J.; Ventzek, Peter L.G.

    2003-01-01

    High power magnetron physical vapor deposition (HPM-PVD) has recently emerged for metal deposition into deep submicron features in state-of-the-art integrated circuit fabrication. However, its plasma characteristics and process mechanism are not well known. An integrated plasma equipment-feature profile modeling infrastructure has therefore been developed for HPM-PVD deposition, and it has been applied to simulating copper seed deposition with an Ar background gas for damascene metallization. The equipment-scale model is based on the hybrid plasma equipment model [M. Grapperhaus et al., J. Appl. Phys. 83, 35 (1998); J. Lu and M. J. Kushner, ibid., 89, 878 (2001)], which couples a three-dimensional Monte Carlo sputtering module within a two-dimensional fluid model. The plasma kinetics of thermalized, athermal, and ionized metals and the contributions of these species to feature deposition are resolved. A Monte Carlo technique is used to derive the angular distribution of athermal metals. Simulations show that in typical HPM-PVD processing, Ar+ is the dominant ionized species driving sputtering. Athermal metal neutrals are the dominant deposition precursors owing to operation at high target power and low pressure. The angular distribution of athermals is off-axis and more focused than that of thermal neutrals. These athermal characteristics favor sufficient and uniform deposition on the sidewall of the feature, which is the critical area in small-feature filling. In addition, athermals lead to thick bottom coverage. An appreciable fraction (∼10%) of the metals incident on the wafer are ionized. The ionized metals also contribute to bottom deposition in the absence of sputtering. We have studied the impact of process and equipment parameters on HPM-PVD. Simulations show that target power impacts both plasma ionization and target sputtering. The Ar+ ion density increases nearly linearly with target power, differing from the behavior of typical ionized PVD processing. The

  9. Discriminative topological features reveal biological network mechanisms

    Directory of Open Access Journals (Sweden)

    Levovitz Chaya

    2004-11-01

    Full Text Available Abstract Background Recent genomic and bioinformatic advances have motivated the development of numerous network models intending to describe graphs of biological, technological, and sociological origin. In most cases the success of a model has been evaluated by how well it reproduces a few key features of the real-world data, such as degree distributions, mean geodesic lengths, and clustering coefficients. Often pairs of models can reproduce these features with indistinguishable fidelity despite being generated by vastly different mechanisms. In such cases, these few target features are insufficient to distinguish which of the different models best describes real-world networks of interest; moreover, it is not clear a priori that any of the presently existing algorithms for network generation offers a predictive description of the networks inspiring them. Results We present a method to assess systematically which of a set of proposed network generation algorithms gives the most accurate description of a given biological network. To derive discriminative classifiers, we construct a mapping from the set of all graphs to a high-dimensional (in principle infinite-dimensional) "word space". This map defines an input space for classification schemes which allow us to state unambiguously which models are most descriptive of a given network of interest. Our training sets include networks generated from 17 models either drawn from the literature or introduced in this work. We show that different duplication-mutation schemes best describe the E. coli genetic network, the S. cerevisiae protein interaction network, and the C. elegans neuronal network, out of a set of network models including a linear preferential attachment model and a small-world model. Conclusions Our method is a first step towards systematizing network models and assessing their predictability, and we anticipate its usefulness for a number of communities.
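
    The "word space" of this record maps each graph to a high-dimensional feature vector on which discriminative classifiers are trained. As a loose, much smaller stand-in for that construction, one can map a graph to normalized counts of small subgraphs (edges, wedges, triangles); the function below is an illustrative assumption, not the authors' actual mapping:

    ```python
    import numpy as np

    def graph_features(n, edges):
        """Map a graph to a tiny 'word' vector: edge, wedge (2-path) and triangle
        counts, normalized by node count - a crude stand-in for a word space."""
        adj = np.zeros((n, n), dtype=int)
        for u, v in edges:
            adj[u, v] = adj[v, u] = 1
        deg = adj.sum(axis=1)
        n_edges = int(deg.sum()) // 2
        n_wedges = int((deg * (deg - 1) // 2).sum())          # paths of length 2
        n_tris = int(np.trace(np.linalg.matrix_power(adj, 3))) // 6  # tr(A^3)/6
        return np.array([n_edges, n_wedges, n_tris], dtype=float) / n

    # A triangle and a 3-node path are distinguishable in this feature space.
    tri = graph_features(3, [(0, 1), (1, 2), (0, 2)])
    path = graph_features(3, [(0, 1), (1, 2)])
    print(tri, path)
    ```

    Feature vectors of this kind, computed for graphs drawn from competing generative models, would then feed any standard classifier to decide which model best matches a given real network.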

  10. K-dimensional trio coherent states

    International Nuclear Information System (INIS)

    Yi, Hyo Seok; Nguyen, Ba An; Kim, Jaewan

    2004-01-01

    We introduce a novel class of higher-order, three-mode states called K-dimensional trio coherent states. We study their mathematical properties and prove that they form a complete set in a truncated Fock space. We also study their physical content by explicitly showing that they exhibit nonclassical features such as oscillatory number distribution, sub-Poissonian statistics, Cauchy-Schwarz inequality violation and phase-space quantum interferences. Finally, we propose an experimental scheme to realize the state with K = 2 in the quantized vibronic motion of a trapped ion

  11. A study of metaheuristic algorithms for high dimensional feature selection on microarray data

    Science.gov (United States)

    Dankolo, Muhammad Nasiru; Radzi, Nor Haizan Mohamed; Sallehuddin, Roselina; Mustaffa, Noorfa Haszlinna

    2017-11-01

    Microarray systems enable experts to examine gene profiles at the molecular level using machine learning algorithms, increasing the potential for classification and diagnosis of many diseases at the gene expression level. However, numerous difficulties may affect the efficiency of machine learning algorithms, including the vast number of gene features in the original data. Many of these features may be unrelated to the intended analysis, so feature selection needs to be performed during data pre-processing. Many feature selection algorithms have been developed and applied to microarray data, including metaheuristic optimization algorithms. This paper discusses the application of metaheuristic algorithms for feature selection in microarray datasets. This study reveals that the algorithms yield interesting results with limited resources, thereby saving the computational expense of machine learning algorithms.
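
    As one concrete example of the metaheuristics surveyed here, a minimal genetic algorithm can search the space of gene subsets, scoring each binary mask with a wrapped classifier. In the sketch below, the synthetic "microarray" data, the nearest-centroid fitness, the size penalty and the GA settings are all illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy stand-in for microarray data: 40 samples, 30 "genes", only genes 0-2 informative.
    n, d = 40, 30
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, d))
    X[:, :3] += 2.5 * y[:, None]

    def fitness(mask):
        """Accuracy of a nearest-centroid classifier on the selected genes,
        minus a small penalty so the search favours compact subsets."""
        if not mask.any():
            return 0.0
        Z = X[:, mask]
        c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
        pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
        return (pred == y).mean() - 0.01 * mask.sum()

    # Minimal genetic algorithm: truncation selection plus bit-flip mutation.
    pop = rng.random((20, d)) < 0.5
    for _ in range(30):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-10:]]             # keep the best half
        children = elite[rng.integers(0, 10, size=10)].copy()
        children ^= rng.random(children.shape) < 0.05     # bit-flip mutation
        pop = np.vstack([elite, children])

    best = max(pop, key=fitness)
    print(np.flatnonzero(best))
    ```

    Real studies typically wrap stronger classifiers and add crossover, but the structure — encode subsets as bit strings, evaluate, select, perturb — is the common core of GA-style feature selection.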

  12. The (2+1)-dimensional axial universes—solutions to the Einstein equations, dimensional reduction points and Klein–Fock–Gordon waves

    International Nuclear Information System (INIS)

    Fiziev, P P; Shirkov, D V

    2012-01-01

    The paper presents a generalization and further development of our recent publications, where solutions of the Klein–Fock–Gordon equation defined on a few particular D = (2 + 1)-dimensional static spacetime manifolds were considered. The latter involve toy models of two-dimensional spaces with axial symmetry, including dimensional reduction to the one-dimensional space as a singular limiting case. Here, the non-static models of space geometry with axial symmetry are under consideration. To make these models closer to physical reality, we define a set of ‘admissible’ shape functions ρ(t, z) as the (2 + 1)-dimensional Einstein equation solutions in the vacuum spacetime, in the presence of the Λ-term and for the spacetime filled with the standard ‘dust’. It is curious that in the last case the Einstein equations reduce to the well-known Monge–Ampère equation, thus enabling one to obtain the general solution of the Cauchy problem, as well as a set of other specific solutions involving one arbitrary function. A few explicit solutions of the Klein–Fock–Gordon equation in this set are given. An interesting qualitative feature of these solutions relates to the dimensional reduction points, their classification and time behavior. In particular, these new entities could provide us with novel insight into the nature of P- and T-violations and of the Big Bang. A short comparison with other attempts to utilize the dimensional reduction of the spacetime is given. (paper)

  13. On Kubo-Martin-Schwinger states of classical dynamical systems with the infinite-dimensional phase space

    International Nuclear Information System (INIS)

    Arsen'ev, A.A.

    1979-01-01

    An example of a classical dynamical system with an infinite-dimensional phase space, satisfying the analogue of the Kubo-Martin-Schwinger conditions for classical dynamics, is constructed explicitly. The connection between the constructed system and Fock-space dynamics is pointed out

  14. Highly conducting one-dimensional solids

    CERN Document Server

    Evrard, Roger; Doren, Victor

    1979-01-01

    Although the problem of a metal in one dimension has long been known to solid-state physicists, it was not until the synthesis of real one-dimensional or quasi-one-dimensional systems that this subject began to attract considerable attention. This has been due in part to the search for high-temperature superconductivity and the possibility of reaching this goal with quasi-one-dimensional substances. A period of intense activity began in 1973 with the report of a measurement of an apparently divergent conductivity peak in TTF-TCNQ. Since then a great deal has been learned about quasi-one-dimensional conductors. The emphasis now has shifted from trying to find materials of very high conductivity to the many interesting problems of physics and chemistry involved. But many questions remain open and are still under active investigation. This book gives a review of the experimental as well as theoretical progress made in this field over the last years. All the chapters have been written by scientists who have ...

  15. Tool Wear Prediction in Ti-6Al-4V Machining through Multiple Sensor Monitoring and PCA Features Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Alessandra Caggiano

    2018-03-01

    Full Text Available Machining of titanium alloys is characterised by extremely rapid tool wear due to the high cutting temperature and the strong adhesion at the tool-chip and tool-workpiece interface, caused by the low thermal conductivity and high chemical reactivity of Ti alloys. With the aim of monitoring the tool conditions during dry turning of Ti-6Al-4V alloy, a machine learning procedure based on the acquisition and processing of cutting force, acoustic emission and vibration sensor signals during turning is implemented. A number of sensorial features are extracted from the acquired sensor signals in order to feed machine learning paradigms based on artificial neural networks. To reduce the large dimensionality of the sensorial features, an advanced feature extraction methodology based on Principal Component Analysis (PCA) is proposed. PCA allowed the identification of a smaller number of features (k = 2 features), the principal component scores, obtained through linear projection of the original d features into a new space with reduced dimensionality k = 2, sufficient to describe the variance of the data. By feeding artificial neural networks with the PCA features, an accurate diagnosis of tool flank wear (VBmax) was achieved, with predicted values very close to the measured tool wear values.

  16. Tool Wear Prediction in Ti-6Al-4V Machining through Multiple Sensor Monitoring and PCA Features Pattern Recognition

    Science.gov (United States)

    2018-01-01

    Machining of titanium alloys is characterised by extremely rapid tool wear due to the high cutting temperature and the strong adhesion at the tool-chip and tool-workpiece interface, caused by the low thermal conductivity and high chemical reactivity of Ti alloys. With the aim of monitoring the tool conditions during dry turning of Ti-6Al-4V alloy, a machine learning procedure based on the acquisition and processing of cutting force, acoustic emission and vibration sensor signals during turning is implemented. A number of sensorial features are extracted from the acquired sensor signals in order to feed machine learning paradigms based on artificial neural networks. To reduce the large dimensionality of the sensorial features, an advanced feature extraction methodology based on Principal Component Analysis (PCA) is proposed. PCA allowed the identification of a smaller number of features (k = 2 features), the principal component scores, obtained through linear projection of the original d features into a new space with reduced dimensionality k = 2, sufficient to describe the variance of the data. By feeding artificial neural networks with the PCA features, an accurate diagnosis of tool flank wear (VBmax) was achieved, with predicted values very close to the measured tool wear values. PMID:29522443

  17. Tool Wear Prediction in Ti-6Al-4V Machining through Multiple Sensor Monitoring and PCA Features Pattern Recognition.

    Science.gov (United States)

    Caggiano, Alessandra

    2018-03-09

    Machining of titanium alloys is characterised by extremely rapid tool wear due to the high cutting temperature and the strong adhesion at the tool-chip and tool-workpiece interface, caused by the low thermal conductivity and high chemical reactivity of Ti alloys. With the aim of monitoring the tool conditions during dry turning of Ti-6Al-4V alloy, a machine learning procedure based on the acquisition and processing of cutting force, acoustic emission and vibration sensor signals during turning is implemented. A number of sensorial features are extracted from the acquired sensor signals in order to feed machine learning paradigms based on artificial neural networks. To reduce the large dimensionality of the sensorial features, an advanced feature extraction methodology based on Principal Component Analysis (PCA) is proposed. PCA allowed the identification of a smaller number of features (k = 2 features), the principal component scores, obtained through linear projection of the original d features into a new space with reduced dimensionality k = 2, sufficient to describe the variance of the data. By feeding artificial neural networks with the PCA features, an accurate diagnosis of tool flank wear (VBmax) was achieved, with predicted values very close to the measured tool wear values.
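
    The PCA step described in these records — projecting the d sensorial features onto k = 2 principal component scores that capture most of the variance — can be sketched via the SVD of the centered data matrix. The synthetic "sensor" data below is an illustrative assumption, not the study's measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy stand-in for the d sensorial features: 100 cuts, 12 correlated features
    # driven by two underlying wear-related factors plus small noise.
    latent = rng.normal(size=(100, 2))
    W = rng.normal(size=(2, 12))
    X = latent @ W + 0.1 * rng.normal(size=(100, 12))   # d = 12 observed features

    Xc = X - X.mean(axis=0)                  # center before PCA
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = 2
    scores = Xc @ Vt[:k].T                   # principal component scores (k = 2 features)
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    print(scores.shape, round(explained, 3))
    ```

    The two score columns would then feed the downstream classifier (an artificial neural network in the study); the `explained` ratio checks that k = 2 components indeed describe most of the variance.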

  18. Anatomical features of acute mitral valve repair dysfunction: Additional value of three-dimensional echocardiography.

    Science.gov (United States)

    Derkx, Salomé; Nguyen, Virginia; Cimadevilla, Claire; Verdonk, Constance; Lepage, Laurent; Raffoul, Richard; Nataf, Patrick; Vahanian, Alec; Messika-Zeitoun, David

    2017-03-01

    Recurrence of mitral regurgitation after mitral valve repair is correlated with unfavourable left ventricular remodelling and poor outcome. This pictorial review describes the echocardiographic features of three types of acute mitral valve repair dysfunction, and the additional value of three-dimensional echocardiography. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  19. Influence of cusps and intersections on the calculation of the Wilson loop in ν-dimensional space

    International Nuclear Information System (INIS)

    Bezerra, V.B.

    1984-01-01

    A discussion is given about the influence of cusps and intersections on the calculation of the Wilson Loop in ν-dimensional space. In particular, for the two-dimensional case, it is shown that there are no divergences. (Author) [pt

  20. Three-Dimensional Navier-Stokes Calculations Using the Modified Space-Time CESE Method

    Science.gov (United States)

    Chang, Chau-lyan

    2007-01-01

    The space-time conservation element solution element (CESE) method is modified to address the robustness issues of high-aspect-ratio, viscous, near-wall meshes. In this new approach, the dependent variable gradients are evaluated using element edges and the corresponding neighboring solution elements while keeping the original flux integration procedure intact. As such, the excellent flux conservation property is retained and the new edge-based gradients evaluation significantly improves the robustness for high-aspect ratio meshes frequently encountered in three-dimensional, Navier-Stokes calculations. The order of accuracy of the proposed method is demonstrated for oblique acoustic wave propagation, shock-wave interaction, and hypersonic flows over a blunt body. The confirmed second-order convergence along with the enhanced robustness in handling hypersonic blunt body flow calculations makes the proposed approach a very competitive CFD framework for 3D Navier-Stokes simulations.

  1. 3D radiation sensors with three dimensional electrodes

    CERN Document Server

    Da Via, Cinzia; Parker, Sherwood

    2018-01-01

    This book covers the technical properties, fabrication details, measurement results and applications of three-dimensional silicon radiation sensors. Such devices are currently used in the ATLAS experiment at the European Centre for Particle Physics (CERN) for particle tracking in high energy physics. They are the most radiation-hard devices ever fabricated, with applications in neutron detection, medical dosimetry and space. Written by the leading names in this field, the book explains to non-experts the essential features of silicon particle detectors, interactions of radiation with matter, radiation damage effects, and micro-fabrication. It also provides a historical view of these developments.

  2. Neutrino stress tensor regularization in two-dimensional space-time

    International Nuclear Information System (INIS)

    Davies, P.C.W.; Unruh, W.G.

    1977-01-01

    The method of covariant point-splitting is used to regularize the stress tensor for a massless spin 1/2 (neutrino) quantum field in an arbitrary two-dimensional space-time. A thermodynamic argument is used as a consistency check. The result shows that the physical part of the stress tensor is identical with that of the massless scalar field (in the absence of Casimir-type terms) even though the formally divergent expression is equal to the negative of the scalar case. (author)

  3. Coset Space Dimensional Reduction approach to the Standard Model

    International Nuclear Information System (INIS)

    Farakos, K.; Kapetanakis, D.; Koutsoumbas, G.; Zoupanos, G.

    1988-01-01

    We present a unified theory in ten dimensions based on the gauge group E_8, which is dimensionally reduced to the Standard Model SU(3)_c × SU(2)_L × U(1), which breaks further spontaneously to SU(3)_c × U(1)_em. The model gives similar predictions for sin^2 θ_W and proton decay as the minimal SU(5) GUT, while a natural choice of the coset space radii predicts light Higgs masses a la Coleman-Weinberg

  4. Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery

    Science.gov (United States)

    Ochilov, S.; Alam, M. S.; Bal, A.

    2006-05-01

    The Fukunaga-Koontz transform (FKT) based technique offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space in which the feature classes have complementary eigenvectors. The dimensionality reduction technique based on this complementary eigenvector analysis can be described for two classes, the desired class and the background clutter, such that each basis function best represents one class while carrying the least amount of information from the second class. By selecting a few eigenvectors which are most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces the data size, it provides significant advantages for near-real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden of the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
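
    The FKT construction can be sketched directly: whiten with respect to the summed class covariance S1 + S2, after which the two whitened class covariances share a common eigenvector basis whose eigenvalues sum to one — a basis vector with eigenvalue near 1 for one class necessarily has eigenvalue near 0 for the other, which is the "complementary eigenvector" property above. A minimal sketch on synthetic two-class data (the data is an illustrative assumption, not a real hyperspectral cube):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy two-class data standing in for target / background spectra (d = 6 bands).
    A = rng.normal(size=(200, 6)); A[:, 0] *= 3.0     # class-1 variance concentrated in band 0
    B = rng.normal(size=(200, 6)); B[:, -1] *= 3.0    # class-2 variance concentrated in band 5

    S1 = np.cov(A, rowvar=False)
    S2 = np.cov(B, rowvar=False)

    # Whitening operator for the summed covariance S1 + S2.
    w, V = np.linalg.eigh(S1 + S2)
    P = V @ np.diag(w ** -0.5)

    l1, E = np.linalg.eigh(P.T @ S1 @ P)   # eigenvalues of the whitened class-1 covariance
    l2 = 1.0 - l1                          # class 2 shares E, with complementary eigenvalues
    # Basis vectors with l1 near 1 best represent class 1; near 0, class 2.
    print(np.round(l1, 3))
    ```

    Keeping only the few columns of `P @ E` with the largest `l1` gives the desired-class-oriented reduced basis described in the abstract.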

  5. Highly-Expressive Spaces of Well-Behaved Transformations: Keeping It Simple

    DEFF Research Database (Denmark)

    Freifeld, Oren; Hauberg, Søren; Batmanghelich, Kayhan

    We propose novel finite-dimensional spaces of Rn → Rn transformations, n ∈ {1, 2, 3}, derived from (continuously-defined) parametric stationary velocity fields. Particularly, we obtain these transformations, which are diffeomorphisms, by fast and highly-accurate integration of continuous piecewise...... transformations). Its applications include, but are not limited to: unconstrained optimization over monotonic functions; modeling cumulative distribution functions or histograms; time warping; image registration; landmark-based warping; real-time diffeomorphic image editing....

  6. The role of extreme orbits in the global organization of periodic regions in parameter space for one dimensional maps

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Diogo Ricardo da, E-mail: diogo_cost@hotmail.com [Departamento de Física, UNESP – Universidade Estadual Paulista, Av. 24A, 1515, Bela Vista, 13506-900, Rio Claro, SP (Brazil); Hansen, Matheus [Departamento de Física, UNESP – Universidade Estadual Paulista, Av. 24A, 1515, Bela Vista, 13506-900, Rio Claro, SP (Brazil); Instituto de Física, Univ. São Paulo, Rua do Matão, Cidade Universitária, 05314-970, São Paulo – SP (Brazil); Guarise, Gustavo [Departamento de Física, UNESP – Universidade Estadual Paulista, Av. 24A, 1515, Bela Vista, 13506-900, Rio Claro, SP (Brazil); Medrano-T, Rene O. [Departamento de Ciências Exatas e da Terra, UNIFESP – Universidade Federal de São Paulo, Rua São Nicolau, 210, Centro, 09913-030, Diadema, SP (Brazil); Department of Mathematics, Imperial College London, London SW7 2AZ (United Kingdom); Leonel, Edson D. [Departamento de Física, UNESP – Universidade Estadual Paulista, Av. 24A, 1515, Bela Vista, 13506-900, Rio Claro, SP (Brazil); Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34151 Trieste (Italy)

    2016-04-22

    We show that extreme orbits, trajectories that connect local maximum and minimum values of one-dimensional maps, play a major role in the parameter space of dissipative systems, dictating the organization of the windows of periodicity and hence producing sets of shrimp-like structures. Here we solve three fundamental problems regarding the distribution of these sets and give: (i) their precise localization in the parameter space, even for sets of very high periods; (ii) their local and global distributions along cascades; and (iii) the association of these cascades to complicated sets of periodicity. The extreme orbits prove to be a powerful indicator for investigating the organization of windows of periodicity in parameter planes. As applications of the theory, we obtain some results for the circle map and the perturbed logistic map. The formalism presented here can be extended to many other nonlinear and dissipative systems. - Highlights: • Extreme orbits and the organization of periodic regions in parameter space. • One-dimensional dissipative mappings. • The circle map and also a time-perturbed logistic map were studied.
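
    The kind of periodicity computation this abstract alludes to can be sketched for the logistic map x ↦ a·x·(1−x), one of the maps the authors study: a numerical period detector fixes a parameter value, discards a transient, and looks for the smallest repeat of the orbit. The transient length, tolerance, and sample parameter values below are illustrative choices, not the authors' settings.

```python
def attractor_period(a, x0=0.5, transient=2000, max_period=32, tol=1e-9):
    """Period of the logistic-map attractor at parameter a, or 0 if none found."""
    x = x0
    for _ in range(transient):          # discard the transient
        x = a * x * (1 - x)
    ref = x
    for p in range(1, max_period + 1):  # look for the smallest repeat
        x = a * x * (1 - x)
        if abs(x - ref) < tol:
            return p
    return 0                            # treated as chaotic / very high period

print(attractor_period(2.8))   # 1  (stable fixed point)
print(attractor_period(3.2))   # 2  (period-2 window)
print(attractor_period(3.83))  # 3  (the period-3 window)
```

    Scanning such a detector over a grid of two parameters is what produces the colored periodicity diagrams in which shrimp-like structures appear.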

  7. The role of extreme orbits in the global organization of periodic regions in parameter space for one dimensional maps

    International Nuclear Information System (INIS)

    Costa, Diogo Ricardo da; Hansen, Matheus; Guarise, Gustavo; Medrano-T, Rene O.; Leonel, Edson D.

    2016-01-01

    We show that extreme orbits, trajectories that connect local maximum and minimum values of one-dimensional maps, play a major role in the parameter space of dissipative systems, dictating the organization of the windows of periodicity and hence producing sets of shrimp-like structures. Here we solve three fundamental problems regarding the distribution of these sets and give: (i) their precise localization in the parameter space, even for sets of very high periods; (ii) their local and global distributions along cascades; and (iii) the association of these cascades to complicated sets of periodicity. The extreme orbits prove to be a powerful indicator for investigating the organization of windows of periodicity in parameter planes. As applications of the theory, we obtain some results for the circle map and the perturbed logistic map. The formalism presented here can be extended to many other nonlinear and dissipative systems. - Highlights: • Extreme orbits and the organization of periodic regions in parameter space. • One-dimensional dissipative mappings. • The circle map and also a time-perturbed logistic map were studied.

  8. 3-dimensional interactive space (3DIS)

    International Nuclear Information System (INIS)

    Veitch, S.; Veitch, J.; West, S.J.

    1991-01-01

    This paper reports on the 3DIS security system, which uses standard CCTV cameras to create 3-dimensional detection zones around valuable assets within protected areas. An intrusion into a zone changes light values and triggers an alarm that is annunciated, while images from multiple cameras are recorded. 3DIS lowers nuisance alarm rates and provides superior automated surveillance capability. Performance is improved over 2-D systems because activity around, above, or below the zone does not cause an alarm. Invisible 3-D zones protect assets as small as a pin or as large as a 747 jetliner. Detection zones are created by excising subspaces from the overlapping fields of view of two or more video cameras. Hundreds of zones may coexist, operating simultaneously. Intrusion into any 3-D zone will cause a coincidental change in light values, triggering an alarm specific to that space.
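
    The coincidence logic described above — an alarm fires only when light values change in all camera views that overlap on a zone — can be sketched in a few lines. This is a toy illustration under assumed names and thresholds, not the actual 3DIS implementation:

```python
def zone_triggered(prev_frames, curr_frames, threshold=30):
    """Return True if every camera watching the zone sees a light-value change.

    prev_frames / curr_frames: per-camera mean light values over the pixels
    that cover the excised 3-D zone (illustrative representation).
    """
    changes = [abs(c - p) for p, c in zip(prev_frames, curr_frames)]
    # A coincidental change in *all* overlapping views implies a 3-D
    # intrusion; a change in only one view (activity in front of or behind
    # the zone) is ignored, which is what suppresses nuisance alarms.
    return all(ch > threshold for ch in changes)

# Activity crosses only camera 1's line of sight: no alarm.
print(zone_triggered([100, 100], [170, 100]))  # False
# Intrusion into the 3-D zone changes both views: alarm.
print(zone_triggered([100, 100], [170, 160]))  # True
```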

  9. A hybrid fault diagnosis approach based on mixed-domain state features for rotating machinery.

    Science.gov (United States)

    Xue, Xiaoming; Zhou, Jianzhong

    2017-01-01

    To further improve diagnosis accuracy and efficiency, a hybrid fault diagnosis approach based on mixed-domain state features, which systematically blends statistical analysis with artificial intelligence techniques, is proposed in this work for rolling element bearings. To simplify the fault diagnosis problem, the execution of the proposed method is divided into three steps, i.e., preliminary fault detection, fault type recognition, and fault degree identification. In the first step, a preliminary judgment about the health status of the equipment is made by a statistical analysis method based on permutation entropy theory. If a fault exists, the following two processes based on artificial intelligence are performed to further recognize the fault type and then identify the fault degree. For these two subsequent steps, mixed-domain state features containing time-domain, frequency-domain, and multi-scale features are extracted to represent the fault characteristics under different working conditions. As a powerful time-frequency analysis method, the fast EEMD method is employed to obtain the multi-scale features. Furthermore, owing to information redundancy and the submergence of the original feature space, a novel manifold learning method (modified LGPCA) is introduced to realize low-dimensional representations of the high-dimensional feature space. Finally, two cases, each with 12 working conditions, were employed to evaluate the performance of the proposed method, with vibration signals measured from an experimental rolling element bearing test bench. The analysis results showed the effectiveness and superiority of the proposed method, whose diagnostic scheme is well suited to practical application. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
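
    The preliminary-detection statistic named above, permutation entropy, has a compact standard (Bandt–Pompe-style) definition: count ordinal patterns of embedded windows and take the normalized Shannon entropy of their distribution. The embedding order, delay, and example signal below are illustrative assumptions, not the paper's exact settings:

```python
import math
from itertools import permutations

def permutation_entropy(signal, order=3, delay=1):
    """Normalized permutation entropy in [0, 1] of a 1-D signal."""
    counts = {p: 0 for p in permutations(range(order))}
    n = len(signal) - (order - 1) * delay
    for i in range(n):
        window = [signal[i + j * delay] for j in range(order)]
        # Ordinal pattern: the argsort (rank order) of samples in the window.
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] += 1
    probs = [c / n for c in counts.values() if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))

# A strictly monotone (perfectly regular) signal uses a single ordinal
# pattern, so its normalized permutation entropy is zero:
print(permutation_entropy(list(range(100))))  # 0.0
```

    A healthy bearing's comparatively regular vibration yields a lower value than an irregular, faulty one, which is what makes the statistic usable as a cheap preliminary health check.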

  10. High-Dimensional Quantum Information Processing with Linear Optics

    Science.gov (United States)

    Fitzpatrick, Casey A.

    Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that utilize quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century. Today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single photon detectors and quantum repeaters. Another more abstract strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, numerous novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated photon imaging scheme using orbital angular momentum (OAM) states to detect rotational symmetries in objects using measurements, as well as building images out of those interactions is reported. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study based on a generalization of the standard optical multiport for applications in quantum computation and simulation is reported upon. The architecture allows photons to reverse momentum inside the device. This in turn enables realistic implementation of controllable linear-optical scattering vertices for

  11. Modeling Dispersion of Chemical-Biological Agents in Three Dimensional Living Space

    International Nuclear Information System (INIS)

    William S. Winters

    2002-01-01

    This report documents a series of calculations designed to demonstrate Sandia's capability in modeling the dispersal of chemical and biological agents in complex three-dimensional spaces. The transport of particles representing biological agents is modeled in a single room and in several connected rooms. The influence of particle size, particle weight, and injection method is studied.

  12. Eigenmodes of three-dimensional spherical spaces and their application to cosmology

    International Nuclear Information System (INIS)

    Lehoucq, Roland; Weeks, Jeffrey; Uzan, Jean-Philippe; Gausmann, Evelise; Luminet, Jean-Pierre

    2002-01-01

    This paper investigates the computation of the eigenmodes of the Laplacian operator in multi-connected three-dimensional spherical spaces. General mathematical results and analytical solutions for lens and prism spaces are presented. Three complementary numerical methods are developed and compared with our analytic results and previous investigations. The cosmological applications of these results are discussed, focusing on the cosmic microwave background (CMB) anisotropies. In particular, whereas in the Euclidean case too-small universes are excluded by present CMB data, in the spherical case, candidate topologies will always exist even if the total energy density parameter of the universe is very close to unity
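
    For context, the spectrum of the Laplacian on the simply connected three-sphere is classical textbook material (stated here as background, not as this paper's contribution); the multiconnected spaces the paper studies select subsets of it:

```latex
% Eigenmodes of the Laplacian on the unit three-sphere S^3:
\Delta_{S^3}\, Y_k = -\,k(k+2)\, Y_k , \qquad k = 0, 1, 2, \dots ,
\qquad \operatorname{mult}(\lambda_k) = (k+1)^2 .
```

    For a multiconnected quotient S^3/Γ, the allowed eigenmodes are the Γ-invariant subset of these, with correspondingly reduced multiplicities; computing that subset for lens and prism spaces is the core of the paper.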

  13. Eigenmodes of three-dimensional spherical spaces and their application to cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Lehoucq, Roland [CE-Saclay, DSM/DAPNIA/Service d' Astrophysique, F-91191 Gif sur Yvette (France); Weeks, Jeffrey [15 Farmer St, Canton, NY 13617-1120 (United States); Uzan, Jean-Philippe [Institut d' Astrophysique de Paris, GReCO, CNRS-FRE 2435, 98 bis, Bd Arago, 75014 Paris (France); Gausmann, Evelise [Instituto de Fisica Teorica, Rua Pamplona, 145 Bela Vista - Sao Paulo - SP, CEP 01405-900 (Brazil); Luminet, Jean-Pierre [Laboratoire Univers et Theories, CNRS-FRE 2462, Observatoire de Paris, F-92195 Meudon (France)

    2002-09-21

    This paper investigates the computation of the eigenmodes of the Laplacian operator in multi-connected three-dimensional spherical spaces. General mathematical results and analytical solutions for lens and prism spaces are presented. Three complementary numerical methods are developed and compared with our analytic results and previous investigations. The cosmological applications of these results are discussed, focusing on the cosmic microwave background (CMB) anisotropies. In particular, whereas in the Euclidean case too-small universes are excluded by present CMB data, in the spherical case, candidate topologies will always exist even if the total energy density parameter of the universe is very close to unity.

  14. Vehicle Color Recognition with Vehicle-Color Saliency Detection and Dual-Orientational Dimensionality Reduction of CNN Deep Features

    Science.gov (United States)

    Zhang, Qiang; Li, Jiafeng; Zhuo, Li; Zhang, Hui; Li, Xiaoguang

    2017-12-01

    Color is one of the most stable attributes of vehicles and is often used as a valuable cue in important applications. Complex environmental factors, such as illumination, weather, and noise, produce considerable diversity in the visual characteristics of vehicle color, making vehicle color recognition in complex environments a challenging task. State-of-the-art methods roughly take the whole image for color recognition, but many parts of the image, such as the windows, wheels, and background, contain no color information, which negatively impacts recognition accuracy. In this paper, a novel vehicle color recognition method using local vehicle-color saliency detection and dual-orientational dimensionality reduction of convolutional neural network (CNN) deep features is proposed. The novelty of the proposed method includes two parts: (1) a local vehicle-color saliency detection method is proposed to determine the color region of the vehicle image and exclude the influence of non-color regions on recognition accuracy; (2) a dual-orientational dimensionality reduction strategy is designed to greatly reduce the dimensionality of the deep features learnt from the CNN, which greatly mitigates the storage and computational burden of subsequent processing while improving recognition accuracy. Furthermore, a linear support vector machine is adopted as the classifier, trained on the dimensionality-reduced features to obtain the recognition model. Experimental results on a public dataset demonstrate that the proposed method achieves superior recognition performance over state-of-the-art methods.

  15. Room Scanner representation and measurement of three-dimensional spaces using a smartphone

    International Nuclear Information System (INIS)

    Bejarano Rodriguez, Mauricio

    2013-01-01

    An algorithm was designed to measure and represent three-dimensional spaces using the resources available on a smartphone. The implementation of sensor fusion made it possible to use basic trigonometry to calculate the lengths of the walls and the corners of the room. The OpenGL library was used to create and visualize the three-dimensional model of the measured interior space, and a library was created to export the represented model to other commercial formats. A certain level of degradation is observed when measuring long distances, because the algorithm depends on the inclination angle of the smartphone to perform the measurements; for this reason, more accurate measurements are obtained at higher elevations. The capture process was modified to correct the margin of error when measuring a soccer field: by subdividing the measurement area, the algorithm recorded measurements with less than a 3% margin of error. (author) [es
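
    The "basic trigonometry" step can be sketched as follows: with the phone held at a known height, the horizontal distance to a sighted floor point follows from the tilt angle reported by sensor fusion, and a wall length follows from two corner sightings via the law of cosines. All names, the 1.5 m height, and the angle conventions are illustrative assumptions, not the paper's actual formulas:

```python
import math

def floor_distance(phone_height_m, tilt_deg):
    """Horizontal distance to the floor point the camera is aimed at.

    tilt_deg is measured from the vertical (aiming straight down = 0).
    """
    return phone_height_m * math.tan(math.radians(tilt_deg))

def wall_length(phone_height_m, tilt_corner_a, tilt_corner_b, angle_between_deg):
    """Length of a wall from sightings of its two corners (law of cosines)."""
    a = floor_distance(phone_height_m, tilt_corner_a)
    b = floor_distance(phone_height_m, tilt_corner_b)
    g = math.radians(angle_between_deg)
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(g))

# Aiming 45 degrees below the horizontal-from-vertical midpoint, the
# distance equals the holding height:
print(round(floor_distance(1.5, 45.0), 3))  # 1.5
```

    Because the tangent diverges as the aim approaches the horizontal, a fixed angular sensor error translates into a growing distance error at long range, which is consistent with the degradation the abstract reports and with the subdivision strategy used to correct it.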

  16. A HYBRID FILTER AND WRAPPER FEATURE SELECTION APPROACH FOR DETECTING CONTAMINATION IN DRINKING WATER MANAGEMENT SYSTEM

    Directory of Open Access Journals (Sweden)

    S. VISALAKSHI

    2017-07-01

    Full Text Available Feature selection is an important task in predictive models which helps to identify irrelevant features in high-dimensional datasets. In the case of this water contamination detection dataset, a standard wrapper algorithm alone cannot be applied because of its complexity. To overcome this computational complexity and lighten the process, a filter-wrapper based algorithm has been proposed. In this work, reducing the feature space is a significant component of detecting water contamination. The main findings are as follows: (1) since the main goal is speeding up the feature selection process, the proposed filter-based feature pre-selection is applied, guaranteeing that useful data are unlikely to be discarded in the initial stage, as discussed briefly in this paper; (2) the resulting features are then filtered by a Genetic Algorithm coded with the Support Vector Machine method, which helps to narrow down the subset of features with high accuracy and decreases the expense. Experimental results show that the proposed methods trim down redundant features effectively and achieve better classification accuracy.
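
    The two-stage filter → wrapper idea can be sketched end to end: a cheap correlation filter pre-selects candidates so useful features survive the first pass, and a wrapper then scores subsets with a classifier. To stay self-contained, the paper's GA-coded SVM is replaced here by a tiny exhaustive search scored with a nearest-centroid classifier; every name and the toy data are illustrative assumptions:

```python
import math
from itertools import combinations

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0

def filter_stage(X, y, k):
    """Stage 1 (filter): keep the k features most correlated with the label."""
    scores = [(abs(pearson([row[j] for row in X], y)), j) for j in range(len(X[0]))]
    return [j for _, j in sorted(scores, reverse=True)[:k]]

def centroid_accuracy(X, y, subset):
    """Score a feature subset by nearest-centroid classification accuracy."""
    proj = [[row[j] for j in subset] for row in X]
    groups = {}
    for v, label in zip(proj, y):
        groups.setdefault(label, []).append(v)
    cents = {c: [sum(col) / len(vs) for col in zip(*vs)] for c, vs in groups.items()}
    def nearest(v):
        return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(v, cents[c])))
    return sum(nearest(v) == label for v, label in zip(proj, y)) / len(y)

def wrapper_stage(X, y, candidates, size=2):
    """Stage 2 (wrapper): exhaustively score small subsets of the candidates."""
    return max(combinations(candidates, size),
               key=lambda s: centroid_accuracy(X, y, list(s)))

# Feature 0 cleanly separates the classes; the wrapper rejects noisy feature 1.
X = [[0, 5, 0], [0, 9, 1], [1, 2, 0], [1, 7, 1]]
y = [0, 0, 1, 1]
best = wrapper_stage(X, y, filter_stage(X, y, 3))
print(best)  # (0, 2)
```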

  17. Three-Dimensional Elastomeric Scaffolds Designed with Cardiac-Mimetic Structural and Mechanical Features

    Science.gov (United States)

    Neal, Rebekah A.; Jean, Aurélie; Park, Hyoungshin; Wu, Patrick B.; Hsiao, James; Engelmayr, George C.; Langer, Robert

    2013-01-01

    Tissue-engineered constructs, at the interface of material science, biology, engineering, and medicine, have the capacity to improve outcomes for cardiac patients by providing living cells and degradable biomaterials that can regenerate the native myocardium. With an ultimate goal of both delivering cells and providing mechanical support to the healing heart, we designed three-dimensional (3D) elastomeric scaffolds with (1) stiffnesses and anisotropy mimicking explanted myocardial specimens as predicted by finite-element (FE) modeling, (2) systematically varied combinations of rectangular pore pattern, pore aspect ratio, and strut width, and (3) structural features approaching tissue scale. Based on predicted mechanical properties, three scaffold designs were selected from eight candidates for fabrication from poly(glycerol sebacate) by micromolding from silicon wafers. Large 20×20 mm scaffolds with high aspect ratio features (5:1 strut height:strut width) were reproducibly cast, cured, and demolded at a relatively high throughput. Empirically measured mechanical properties demonstrated that scaffolds were cardiac mimetic and validated FE model predictions. Two-layered scaffolds providing fully interconnected pore networks were fabricated by layer-by-layer assembly. C2C12 myoblasts cultured on one-layered scaffolds exhibited specific patterns of cell elongation and interconnectivity that appeared to be guided by the scaffold pore pattern. Neonatal rat heart cells cultured on two-layered scaffolds for 1 week were contractile, both spontaneously and in response to electrical stimulation, and expressed sarcomeric α-actinin, a cardiac biomarker. This work not only demonstrated several scaffold designs that promoted functional assembly of rat heart cells, but also provided the foundation for further computational and empirical investigations of 3D elastomeric scaffolds for cardiac tissue engineering. PMID:23190320

  18. Twistor Cosmology and Quantum Space-Time

    International Nuclear Information System (INIS)

    Brody, D.C.; Hughston, L.P.

    2005-01-01

    The purpose of this paper is to present a model of a 'quantum space-time' in which the global symmetries of space-time are unified in a coherent manner with the internal symmetries associated with the state space of quantum-mechanics. If we take into account the fact that these distinct families of symmetries should in some sense merge and become essentially indistinguishable in the unified regime, our framework may provide an approximate description of or elementary model for the structure of the universe at early times. The quantum elements employed in our characterisation of the geometry of space-time imply that the pseudo-Riemannian structure commonly regarded as an essential feature in relativistic theories must be dispensed with. Nevertheless, the causal structure and the physical kinematics of quantum space-time are shown to persist in a manner that remains highly analogous to the corresponding features of the classical theory. In the case of the simplest conformally flat cosmological models arising in this framework, the twistorial description of quantum space-time is shown to be effective in characterising the various physical and geometrical properties of the theory. As an example, a sixteen-dimensional analogue of the Friedmann-Robertson-Walker cosmologies is constructed, and its chronological development is analysed in some detail. More generally, whenever the dimension of a quantum space-time is an even perfect square, there exists a canonical way of breaking the global quantum space-time symmetry so that a generic point of quantum space-time can be consistently interpreted as a quantum operator taking values in Minkowski space. In this scenario, the breakdown of the fundamental symmetry of the theory is due to a loss of quantum entanglement between space-time and internal quantum degrees of freedom. It is thus possible to show in a certain specific sense that the classical space-time description is an emergent feature arising as a consequence of a

  19. Faster exact algorithms for computing Steiner trees in higher dimensional Euclidean spaces

    DEFF Research Database (Denmark)

    Fonseca, Rasmus; Brazil, Marcus; Winter, Pawel

    The Euclidean Steiner tree problem asks for a network of minimum total length interconnecting a finite set of points in d-dimensional space. For d ≥ 3, only one practical algorithmic approach exists for this problem --- proposed by Smith in 1992. A number of refinements of Smith's algorithm have...

  20. Single cell proteomics in biomedicine: High-dimensional data acquisition, visualization, and analysis.

    Science.gov (United States)

    Su, Yapeng; Shi, Qihui; Wei, Wei

    2017-02-01

    New insights into cellular heterogeneity in the last decade have spurred the development of a variety of single-cell omics tools at a lightning pace. The resultant high-dimensional single-cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single-cell proteomic tools, with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single-cell data. The underlying assumptions, unique features, and limitations of the analytical methods, together with the designated biological questions they seek to answer, will be discussed. Particular attention will be given to those information-theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Dynamics of a neuron model in different two-dimensional parameter-spaces

    International Nuclear Information System (INIS)

    Rech, Paulo C.

    2011-01-01

    We report some two-dimensional parameter-space diagrams numerically obtained for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that regardless of the combination of parameters a typical scenario is preserved: for every choice of two parameters, the parameter space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that periodic regions exist close to this chaotic region, separated by the comb teeth, which organize themselves in period-adding bifurcation cascades. - Research highlights: → We report parameter spaces obtained for the Hindmarsh-Rose neuron model. → Regardless of the combination of parameters, a typical scenario is preserved. → The scenario presents a comb-shaped chaotic region immersed in a periodic region. → Periodic regions near the chaotic region are organized in period-adding bifurcation cascades.

  2. On spinless null propagation in five-dimensional space-times with approximate space-like Killing symmetry

    Energy Technology Data Exchange (ETDEWEB)

    Breban, Romulus [Institut Pasteur, Paris Cedex 15 (France)

    2016-09-15

    Five-dimensional (5D) space-time symmetry greatly facilitates how a 4D observer perceives the propagation of a single spinless particle in a 5D space-time. In particular, if the 5D geometry is independent of the fifth coordinate then the 5D physics may be interpreted as 4D quantum mechanics. In this work we address the case where the symmetry is approximate, focusing on the case where the 5D geometry depends weakly on the fifth coordinate. We show that concepts developed for the case of exact symmetry approximately hold when other concepts such as decaying quantum states, resonant quantum scattering, and Stokes drag are adopted, as well. We briefly comment on the optical model of the nuclear interactions and Millikan's oil drop experiment. (orig.)

  3. Pair production of Dirac particles in a d + 1-dimensional noncommutative space-time

    Energy Technology Data Exchange (ETDEWEB)

    Ousmane Samary, Dine [Perimeter Institute for Theoretical Physics, Waterloo, ON (Canada); University of Abomey-Calavi, International Chair in Mathematical Physics and Applications (ICMPA-UNESCO Chair), Cotonou (Benin); N' Dolo, Emanonfi Elias; Hounkonnou, Mahouton Norbert [University of Abomey-Calavi, International Chair in Mathematical Physics and Applications (ICMPA-UNESCO Chair), Cotonou (Benin)

    2014-11-15

    This work addresses the computation of the probability of fermionic particle pair production in d + 1-dimensional noncommutative Moyal space. Using Seiberg-Witten maps, which establish relations between noncommutative and commutative field variables, up to the first order in the noncommutative parameter θ, we derive the probability density of vacuum-vacuum pair production of Dirac particles. The cases of constant electromagnetic, alternating time-dependent, and space-dependent electric fields are considered and discussed. (orig.)

  4. Cooperative simulation of lithography and topography for three-dimensional high-aspect-ratio etching

    Science.gov (United States)

    Ichikawa, Takashi; Yagisawa, Takashi; Furukawa, Shinichi; Taguchi, Takafumi; Nojima, Shigeki; Murakami, Sadatoshi; Tamaoki, Naoki

    2018-06-01

    A topography simulation of high-aspect-ratio etching considering the transport of ions and neutrals is performed, and the mechanism of reactive ion etching (RIE) residues in three-dimensional corner patterns is revealed. Limited ion flux and CF2 diffusion from the wide space of the corner are found to contribute to the RIE residues. Cooperative simulation of lithography and topography is used to solve the RIE residue problem.

  5. High-dimensional change-point estimation: Combining filtering with convex optimization

    OpenAIRE

    Soh, Yong Sheng; Chandrasekaran, Venkat

    2017-01-01

    We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem such as the filtered derivative method are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice frequently possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional...
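
    The classical filtered-derivative baseline the abstract starts from can be sketched directly for a scalar sequence: a moving-average difference spikes at a mean shift, and its largest excursion locates the change point. The window size and the toy signal are illustrative choices:

```python
def filtered_derivative(x, window):
    """D[t] = mean(x[t:t+window]) - mean(x[t-window:t]) for all valid t."""
    out = []
    for t in range(window, len(x) - window + 1):
        left = sum(x[t - window:t]) / window
        right = sum(x[t:t + window]) / window
        out.append(right - left)
    return out

def change_point(x, window):
    """Index in x where the filtered derivative is largest in magnitude."""
    d = filtered_derivative(x, window)
    t = max(range(len(d)), key=lambda i: abs(d[i]))
    return t + window  # shift back to the index in x

signal = [0.0] * 50 + [5.0] * 50   # mean shift at index 50
print(change_point(signal, 10))    # 50
```

    The undesirable scaling the authors point to arises when each sample is itself high-dimensional: naively averaging and thresholding coordinate-wise ignores the latent low-dimensional structure their convex-optimization approach exploits.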

  6. Spiral CT features and anatomic basis of posterior pararenal space involvement in acute pancreatitis

    International Nuclear Information System (INIS)

    Min Pengqiu; Yan Zhihan; Yang Hengxuan; Liu Zaiyi; Song Bin; Wu Bing; Zhang Jin; Liu Rongbo

    2005-01-01

    Objective: To evaluate spiral CT features and anatomic basis of the posterior pararenal space (PPS) involvement in acute pancreatitis (AP). Methods: CT images of 87 cases with AP were retrospectively studied with focus on spiral CT features, incidence of the PPS involvement, and its correlations with the posterior renal fascia or lateroconal fascia. Results: Our study showed that the incidence of the PPS involvement was 47% (41/87), with Grade A 53% (46/87), Grade B 24% (21/87), and Grade C 23% (20/87), and Grade 0 53% (46/87), Grade I 22% (19/87), and Grade II 25% (22/87), respectively. The pancreatitis-related fluid collection in the PPS was continuous with that in the anterior pararenal space or with the fluid between the two laminae of the posterior renal fascia. In 3 follow-up cases, pseudocysts in the PPS were continuous with that in the anterior pararenal space below the cone of renal fascia. Conclusion: Spiral CT features of the PPS involvement vary from mild inflammatory changes to fluid collection or a phlegmonous mass. Fluid within the anterior pararenal space in AP flows into the PPS by three routes. (authors)

  7. Irreducible quantum group modules with finite dimensional weight spaces

    DEFF Research Database (Denmark)

    Pedersen, Dennis Hasselstrøm

    a finitely generated U_q-module which has finite dimensional weight spaces and is a sum of those. Our approach follows the procedures used by S. Fernando and O. Mathieu to solve the corresponding problem for semisimple complex Lie algebra modules. To achieve this we have to overcome a number of obstacles...... not present in the classical case. In the process we also construct twisting functors rigorously for quantum group modules, study twisted Verma modules and show that these admit a Jantzen filtration with corresponding Jantzen sum formula....

  8. Three-dimensional labeling program for elucidation of the geometric properties of biological particles in three-dimensional space.

    Science.gov (United States)

    Nomura, A; Yamazaki, Y; Tsuji, T; Kawasaki, Y; Tanaka, S

    1996-09-15

    For all biological particles such as cells or cellular organelles, there are three-dimensional coordinates representing the centroid or center of gravity. These coordinates and other numerical parameters such as volume, fluorescence intensity, surface area, and shape are referred to in this paper as geometric properties, which may provide critical information for the clarification of in situ mechanisms of molecular and cellular functions in living organisms. We have established a method for the elucidation of these properties, designated the three-dimensional labeling program (3DLP). Algorithms of 3DLP are so simple that this method can be carried out through the use of software combinations in image analysis on a personal computer. To evaluate 3DLP, it was applied to a 32-cell-stage sea urchin embryo, double stained with FITC for cellular protein of blastomeres and propidium iodide for nuclear DNA. A stack of optical serial section images was obtained by confocal laser scanning microscopy. The method was found effective for determining geometric properties and should prove applicable to the study of many different kinds of biological particles in three-dimensional space.
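
    The geometric properties the program extracts can be sketched as follows: given a binarized stack of serial sections, connected voxels are grouped into particles and each particle's volume and centroid are reported. This is a minimal illustrative reconstruction, not the 3DLP code itself; 6-connectivity and the toy 2×2×2 stack are assumptions:

```python
def label_particles(stack):
    """stack[z][y][x] in {0,1}; returns a (volume, centroid) pair per particle."""
    nz, ny, nx = len(stack), len(stack[0]), len(stack[0][0])
    seen = set()
    particles = []
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if stack[z][y][x] and (z, y, x) not in seen:
                    voxels, todo = [], [(z, y, x)]
                    seen.add((z, y, x))
                    while todo:  # flood fill over 6-connected neighbors
                        cz, cy, cx = todo.pop()
                        voxels.append((cz, cy, cx))
                        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                            n = (cz + dz, cy + dy, cx + dx)
                            if (0 <= n[0] < nz and 0 <= n[1] < ny and 0 <= n[2] < nx
                                    and stack[n[0]][n[1]][n[2]] and n not in seen):
                                seen.add(n)
                                todo.append(n)
                    vol = len(voxels)
                    centroid = tuple(sum(c[i] for c in voxels) / vol for i in range(3))
                    particles.append((vol, centroid))
    return particles

# Two separate single-voxel "particles" in opposite corners of a 2x2x2 stack:
stack = [[[1, 0], [0, 0]], [[0, 0], [0, 1]]]
print(label_particles(stack))  # [(1, (0.0, 0.0, 0.0)), (1, (1.0, 1.0, 1.0))]
```

    Other geometric properties mentioned in the abstract (total fluorescence intensity, surface area, shape descriptors) would be accumulated per particle in the same flood-fill pass.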

  9. Three-body problem in d-dimensional space: Ground state, (quasi)-exact-solvability

    Science.gov (United States)

    Turbiner, Alexander V.; Miller, Willard; Escobar-Ruiz, M. A.

    2018-02-01

    As a straightforward generalization and extension of our previous paper [A. V. Turbiner et al., "Three-body problem in 3D space: Ground state, (quasi)-exact-solvability," J. Phys. A: Math. Theor. 50, 215201 (2017)], we study the aspects of the quantum and classical dynamics of a 3-body system with equal masses, each body with d degrees of freedom, with interaction depending only on mutual (relative) distances. The study is restricted to solutions in the space of relative motion which are functions of mutual (relative) distances only. It is shown that the ground state (and some other states) in the quantum case and the planar trajectories (which are in the interaction plane) in the classical case are of this type. The quantum (and classical) Hamiltonian for which these states are eigenfunctions is derived. It corresponds to a three-dimensional quantum particle moving in a curved space with special d-dimension-independent metric in a certain d-dependent singular potential, while at d = 1, it elegantly degenerates to a two-dimensional particle moving in flat space. It admits a description in terms of pure geometrical characteristics of the interaction triangle which is defined by the three relative distances. The kinetic energy of the system is d-independent; it has a hidden sl(4, R) Lie (Poisson) algebra structure, alternatively, the hidden algebra h(3) typical for the H3 Calogero model as in the d = 3 case. We find an exactly solvable three-body S3-permutationally invariant, generalized harmonic oscillator-type potential as well as a quasi-exactly solvable three-body sextic polynomial type potential with singular terms. For both models, an extra first order integral exists. For d = 1, the whole family of 3-body (two-dimensional) Calogero-Moser-Sutherland systems as well as the Tremblay-Turbiner-Winternitz model is reproduced. It is shown that a straightforward generalization of the 3-body (rational) Calogero model to d > 1 leads to two primitive quasi

  10. Execution spaces for simple higher dimensional automata

    DEFF Research Database (Denmark)

    Raussen, Martin

    Higher Dimensional Automata (HDA) are highly expressive models for concurrency in Computer Science, cf van Glabbeek [26]. For a topologist, they are attractive since they can be modeled as cubical complexes - with an inbuilt restriction for directions of allowable (d-)paths. In Raussen [25], we...

  11. Filaments of Meaning in Word Space

    OpenAIRE

    Karlgren, Jussi; Holst, Anders; Sahlgren, Magnus

    2008-01-01

    Word space models, in the sense of vector space models built on distributional data taken from texts, are used to model semantic relations between words. We argue that the high dimensionality of typical vector space models leads to unintuitive effects on modeling likeness of meaning and that the local structure of word spaces is where interesting semantic relations reside. We show that the local structure of word spaces has substantially different dimensionality and character than the global s...

  12. A Meta-Heuristic Regression-Based Feature Selection for Predictive Analytics

    Directory of Open Access Journals (Sweden)

    Bharat Singh

    2014-11-01

    Full Text Available Selecting an optimal feature subset from a very large number of features in high-dimensional data is an NP-complete problem. Because conventional optimization techniques are unable to tackle large-scale feature selection problems, meta-heuristic algorithms are widely used. In this paper, we propose a particle swarm optimization technique that utilizes regression techniques for feature selection. We then use the selected features to classify the data. Classification accuracy is used as a criterion to evaluate classifier performance, and classification is accomplished through the use of k-nearest neighbour (KNN) and Bayesian techniques. Various high-dimensional data sets are used to evaluate the usefulness of the proposed approach. Results show that our approach outperforms other conventional feature selection algorithms.
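
    The wrapper pipeline this abstract describes (a swarm searching over feature subsets, each subset scored by a classifier built on it) can be sketched in a minimal form. This is not the authors' implementation: the binary PSO update, the sigmoid transfer function, all parameter values and the synthetic data are illustrative assumptions, and leave-one-out 1-NN stands in for the KNN/Bayesian classifiers used in the paper.

    ```python
    import math
    import random

    random.seed(0)

    def knn_accuracy(X, y, mask):
        """Leave-one-out 1-NN accuracy using only the features enabled in mask."""
        feats = [j for j, keep in enumerate(mask) if keep]
        if not feats:
            return 0.0
        correct = 0
        for i in range(len(X)):
            nearest, best_d = None, float("inf")
            for k in range(len(X)):
                if k == i:
                    continue
                d = sum((X[i][j] - X[k][j]) ** 2 for j in feats)
                if d < best_d:
                    nearest, best_d = k, d
            correct += (y[nearest] == y[i])
        return correct / len(X)

    def pso_select(X, y, n_particles=8, iters=15):
        """Binary PSO over feature masks, scored by the wrapper above."""
        dim = len(X[0])
        pos = [[random.random() < 0.5 for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pscore = [knn_accuracy(X, y, p) for p in pos]
        g = max(range(n_particles), key=lambda i: pscore[i])
        gbest, gscore = pbest[g][:], pscore[g]
        for _ in range(iters):
            for i in range(n_particles):
                for j in range(dim):
                    vel[i][j] = (0.7 * vel[i][j]
                                 + 1.5 * random.random() * (pbest[i][j] - pos[i][j])
                                 + 1.5 * random.random() * (gbest[j] - pos[i][j]))
                    # sigmoid transfer: velocity -> probability that feature j is kept
                    pos[i][j] = random.random() < 1.0 / (1.0 + math.exp(-vel[i][j]))
                score = knn_accuracy(X, y, pos[i])
                if score > pscore[i]:
                    pbest[i], pscore[i] = pos[i][:], score
                    if score > gscore:
                        gbest, gscore = pos[i][:], score
        return gbest, gscore

    # Synthetic data: features 0 and 1 carry the class signal, the other 6 are noise.
    X = [[random.gauss(cls, 0.3), random.gauss(-cls, 0.3)]
         + [random.gauss(0.0, 1.0) for _ in range(6)]
         for cls in (0, 1) for _ in range(20)]
    y = [cls for cls in (0, 1) for _ in range(20)]
    mask, acc = pso_select(X, y)
    ```

    The swarm tends to keep the two informative features, since masks containing them score higher in the leave-one-out evaluation than masks dominated by the noise dimensions.
    
    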

  13. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  14. High dimensional neurocomputing growth, appraisal and applications

    CERN Document Server

    Tripathi, Bipin Kumar

    2015-01-01

    The book presents a coherent understanding of computational intelligence from the perspective of what is known as "intelligent computing" with high-dimensional parameters. It critically discusses the central issues of high-dimensional neurocomputing, such as quantitative representation of signals, extending the dimensionality of neurons, supervised and unsupervised learning, and the design of higher-order neurons. The strong point of the book is its clarity and the ability of the underlying theory to unify our understanding of high-dimensional computing where conventional methods fail. Plenty of application-oriented problems are presented for evaluating, monitoring and maintaining the stability of adaptive learning machines. The author has taken care to cover the breadth and depth of the subject, both qualitatively and quantitatively. The book is intended to enlighten the scientific community, ranging from advanced undergraduates to engineers, scientists and seasoned researchers in computational intelligenc...

  15. The Three-Dimensional Morphology of VY Canis Majoris. II. Polarimetry and the Line-of-Sight Distribution of the Ejecta

    Science.gov (United States)

    Jones, Terry Jay; Humphreys, Roberta M.; Helton, L. Andrew; Gui, Changfeng; Huang, Xiang

    2007-06-01

    We use imaging polarimetry taken with the HST Advanced Camera for Surveys High Resolution Camera to explore the three-dimensional structure of the circumstellar dust distribution around the red supergiant VY Canis Majoris. The polarization vectors of the nebulosity surrounding VY CMa show a strong centrosymmetric pattern in all directions except directly east and range from 10% to 80% in fractional polarization. In regions that are optically thin, and therefore likely to have only single scattering, we use the fractional polarization and photometric color to locate the physical position of the dust along the line of sight. Most of the individual arclike features and clumps seen in the intensity image are also features in the fractional polarization map. These features must be distinct geometric objects. If they were just local density enhancements, the fractional polarization would not change so abruptly at the edge of the feature. The location of these features in the ejecta of VY CMa using polarimetry provides a determination of their three-dimensional geometry independent of, but in close agreement with, the results from our study of their kinematics (Paper I). Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

  16. Learning Low-Dimensional Metrics

    OpenAIRE

    Jain, Lalit; Mason, Blake; Nowak, Robert

    2017-01-01

    This paper investigates the theoretical foundations of metric learning, focused on four key questions that are not fully addressed in prior work: 1) we consider learning general low-dimensional (low-rank) metrics as well as sparse metrics; 2) we develop upper and lower (minimax) bounds on the generalization error; 3) we quantify the sample complexity of metric learning in terms of the dimension of the feature space and the dimension/rank of the underlying metric; 4) we also bound the accuracy ...

  17. Quality-aware features-based noise level estimator for block matching and three-dimensional filtering algorithm

    Science.gov (United States)

    Xu, Shaoping; Hu, Lingyan; Yang, Xiaohui

    2016-01-01

    The performance of conventional denoising algorithms is usually controlled by one or several parameters whose optimal settings depend on the contents of the processed images and the characteristics of the noises. Among these parameters, noise level is a fundamental parameter that is always assumed to be known by most of the existing denoising algorithms (so-called nonblind denoising algorithms), which largely limits the applicability of these nonblind denoising algorithms in many applications. Moreover, these nonblind algorithms do not always achieve the best denoised images in visual quality even when fed with the actual noise level parameter. To address these shortcomings, in this paper we propose a new quality-aware features-based noise level estimator (NLE), which consists of quality-aware features extraction and optimal noise level parameter prediction. First, considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we utilize the marginal statistics of two local contrast operators, i.e., the gradient magnitude and the Laplacian of Gaussian (LOG), to extract quality-aware features. The proposed quality-aware features have very low computational complexity, making them well suited for time-constrained applications. Then we propose a learning-based framework where the noise level parameter is estimated based on the quality-aware features. Based on the proposed NLE, we develop a blind block matching and three-dimensional filtering (BBM3D) denoising algorithm which is capable of effectively removing additive white Gaussian noise, even coupled with impulse noise. The noise level parameter of the BBM3D algorithm is automatically tuned according to the quality-aware features, guaranteeing the best performance. As such, the classical block matching and three-dimensional algorithm can be transformed into a blind one in an unsupervised manner. Experimental results demonstrate that the
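
    The first stage of the estimator described above, extracting cheap local-contrast statistics that track the noise level, can be sketched in pure Python. This is only an illustrative stand-in, not the paper's NLE: a simple forward-difference gradient replaces the gradient-magnitude and LoG marginal statistics, and the image size and noise model are assumptions for the demonstration.

    ```python
    import math
    import random
    import statistics

    def gradient_magnitude_stats(img):
        """Mean and standard deviation of finite-difference gradient magnitudes,
        a crude stand-in for the marginal statistics of local-contrast operators."""
        mags = []
        for r in range(len(img) - 1):
            for c in range(len(img[0]) - 1):
                gx = img[r][c + 1] - img[r][c]  # horizontal forward difference
                gy = img[r + 1][c] - img[r][c]  # vertical forward difference
                mags.append(math.hypot(gx, gy))
        return statistics.mean(mags), statistics.stdev(mags)

    random.seed(1)
    flat = [[128.0] * 32 for _ in range(32)]  # structure-free test image
    noisy = [[v + random.gauss(0.0, 10.0) for v in row] for row in flat]
    m_flat, _ = gradient_magnitude_stats(flat)
    m_noisy, _ = gradient_magnitude_stats(noisy)
    # m_noisy grows with the noise standard deviation while m_flat is 0 here;
    # a learned regressor can then map such statistics to a noise-level estimate.
    ```

    In the actual method these statistics feed a learning-based predictor of the noise level parameter; the point of the sketch is only that local-contrast statistics respond monotonically to additive noise and are very cheap to compute.
    
    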

  18. TSAR: a program for automatic resonance assignment using 2D cross-sections of high dimensionality, high-resolution spectra

    Energy Technology Data Exchange (ETDEWEB)

    Zawadzka-Kazimierczuk, Anna; Kozminski, Wiktor [University of Warsaw, Faculty of Chemistry (Poland); Billeter, Martin, E-mail: martin.billeter@chem.gu.se [University of Gothenburg, Biophysics Group, Department of Chemistry and Molecular Biology (Sweden)

    2012-09-15

    While NMR studies of proteins typically aim at structure, dynamics or interactions, resonance assignments represent in almost all cases the initial step of the analysis. With increasing complexity of the NMR spectra, for example due to decreasing extent of ordered structure, this task often becomes both difficult and time-consuming, and the recording of high-dimensional data with high resolution may be essential. Random sampling of the evolution time space, combined with sparse multidimensional Fourier transform (SMFT), allows for efficient recording of very high dimensional spectra (≥4 dimensions) while maintaining high resolution. However, the nature of this data demands automation of the assignment process. Here we present the program TSAR (Tool for SMFT-based Assignment of Resonances), which exploits all advantages of SMFT input. Moreover, it is flexible enough to process data from any type of experiment that provides sequential connectivities. The algorithm was tested on several protein samples, including a disordered 81-residue fragment of the δ subunit of RNA polymerase from Bacillus subtilis containing various repetitive sequences. For our test examples, TSAR achieves a high percentage of assigned residues without any erroneous assignments.

  19. Search of wormholes in different dimensional non-commutative inspired space-times with Lorentzian distribution

    Energy Technology Data Exchange (ETDEWEB)

    Bhar, Piyali; Rahaman, Farook [Jadavpur University, Department of Mathematics, Kolkata, West Bengal (India)

    2014-12-01

    In this paper we ask whether wormhole solutions exist in different dimensional noncommutativity-inspired spacetimes. It is well known that the noncommutativity of space is an outcome of string theory and that it replaces the usual point-like object with a smeared object. Here we have chosen the Lorentzian distribution as the density function in the noncommutativity-inspired spacetime. We have observed that wormhole solutions exist only in four and five dimensions; in higher than five dimensions no wormhole exists. For five-dimensional spacetime, we get a wormhole for a restricted region. In the usual four-dimensional spacetime, we get a stable wormhole which is asymptotically flat. (orig.)

  20. An alternative to scale-space representation for extracting local features in image recognition

    DEFF Research Database (Denmark)

    Andersen, Hans Jørgen; Nguyen, Phuong Giang

    2012-01-01

    In image recognition, the common approach for extracting local features using a scale-space representation usually has three main steps: first, interest points are extracted at different scales; next, from a patch around each interest point the rotation is calculated with corresponding orientation... and compensation; and finally a descriptor is computed for the derived patch (i.e. feature of the patch). To avoid the memory- and computation-intensive process of constructing the scale-space, we use a method where no scale-space is required. This is done by dividing the given image into a number of triangles... with sizes dependent on the content of the image, at the location of each triangle. In this paper, we will demonstrate that by rotation of the interest regions at the triangles it is possible in grey scale images to achieve a recognition precision comparable with that of MOPS. The test of the proposed method...

  1. Horizontal biases in rats’ use of three-dimensional space

    Science.gov (United States)

    Jovalekic, Aleksandar; Hayman, Robin; Becares, Natalia; Reid, Harry; Thomas, George; Wilson, Jonathan; Jeffery, Kate

    2011-01-01

    Rodent spatial cognition studies allow links to be made between neural and behavioural phenomena, and much is now known about the encoding and use of horizontal space. However, the real world is three dimensional, providing cognitive challenges that have yet to be explored. Motivated by neural findings suggesting weaker encoding of vertical than horizontal space, we examined whether rats show a similar behavioural anisotropy when distributing their time freely between vertical and horizontal movements. We found that in two- or three-dimensional environments with a vertical dimension, rats showed a prioritization of horizontal over vertical movements in both foraging and detour tasks. In the foraging tasks, the animals executed more horizontal than vertical movements and adopted a “layer strategy” in which food was collected from one horizontal level before moving to the next. In the detour tasks, rats preferred the routes that allowed them to execute the horizontal leg first. We suggest three possible reasons for this behavioural bias. First, as suggested by Grobety and Schenk [5], it allows minimisation of energy expenditure, inasmuch as costly vertical movements are minimised. Second, it may be a manifestation of the temporal discounting of effort, in which animals value delayed effort as less costly than immediate effort. Finally, it may be that at the neural level rats encode the vertical dimension less precisely, and thus prefer to bias their movements in the more accurately encoded horizontal dimension. We suggest that all three factors are related, and all play a part. PMID:21419172

  2. Three-Dimensional Finite Element Analysis on Stress Distribution of Internal Implant-Abutment Engagement Features.

    Science.gov (United States)

    Cho, Sung-Yong; Huh, Yun-Hyuk; Park, Chan-Jin; Cho, Lee-Ra

    To investigate the stress distribution in an implant-abutment complex with a preloaded abutment screw by comparing implant-abutment engagement features using three-dimensional finite element analysis (FEA). For FEA modeling, two implants-one with a single (S) engagement system and the other with a double (D) engagement system-were placed in the human mandibular molar region. Two types of abutments (hexagonal, conical) were connected to the implants. Different implant models (a single implant, two parallel implants, and mesial and tilted distal implants with 1-mm bone loss) were assumed. A static axial force and a 45-degree oblique force of 200 N were applied as the sum of vectors to the top of the prosthetic occlusal surface with a preload of 30 Ncm in the abutment screw. The von Mises stresses at the implant-abutment and abutment-screw interfaces were measured. In the single implant model, the S-conical abutment type exhibited broader stress distribution than the S-hexagonal abutment. In the double engagement system, the stress concentration was high in the lower contact area of the implant-abutment engagement. In the tilted implant model, the stress concentration point was different from that in the parallel implant model because of the difference in the bone level. The double engagement system demonstrated a high stress concentration at the lower contact area of the implant-abutment interface. To decrease the stress concentration, the type of engagement features of the implant-abutment connection should be carefully considered.

  3. Exploring space-time structure of human mobility in urban space

    Science.gov (United States)

    Sun, J. B.; Yuan, J.; Wang, Y.; Si, H. B.; Shan, X. M.

    2011-03-01

    Understanding of human mobility in urban space benefits the planning and provision of municipal facilities and services. Due to the high penetration of cell phones, mobile cellular networks provide information for urban dynamics with a large spatial extent and continuous temporal coverage in comparison with traditional approaches. The original data investigated in this paper were collected by cellular networks in a southern city of China, recording the population distribution by dividing the city into thousands of pixels. The space-time structure of urban dynamics is explored by applying Principal Component Analysis (PCA) to the original data, from temporal and spatial perspectives between which there is a dual relation. Based on the results of the analysis, we have discovered four underlying rules of urban dynamics: low intrinsic dimensionality, three categories of common patterns, dominance of periodic trends, and temporal stability. It implies that the space-time structure can be captured well by remarkably few temporal or spatial predictable periodic patterns, and the structure unearthed by PCA evolves stably over time. All these features play a critical role in the applications of forecasting and anomaly detection.
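
    The core computation described above, applying PCA to a pixels-by-time population matrix to expose a few dominant periodic patterns, can be sketched without any linear-algebra library. The power-iteration routine, the synthetic "city" data and the sampling interval below are illustrative assumptions, not the study's actual pipeline.

    ```python
    import math
    import random

    def first_principal_component(M, iters=200):
        """Leading temporal pattern of a pixels-by-time matrix via power
        iteration on the covariance X^T X (no linear-algebra library needed)."""
        n, t = len(M), len(M[0])
        means = [sum(row[j] for row in M) / n for j in range(t)]
        X = [[row[j] - means[j] for j in range(t)] for row in M]
        v = [random.random() for _ in range(t)]
        for _ in range(iters):
            Xv = [sum(x * vi for x, vi in zip(row, v)) for row in X]        # X v
            w = [sum(X[i][j] * Xv[i] for i in range(n)) for j in range(t)]  # X^T (X v)
            norm = math.sqrt(sum(c * c for c in w))
            v = [c / norm for c in w]
        return v

    random.seed(2)
    # Toy "city": every pixel follows one daily rhythm, scaled per pixel plus noise.
    daily = [math.sin(2 * math.pi * h / 24) for h in range(48)]
    M = [[(i + 1) * d + random.gauss(0.0, 0.1) for d in daily] for i in range(30)]
    pc1 = first_principal_component(M)
    # The leading component recovers the shared periodic trend (up to sign).
    cos_sim = abs(sum(a * b for a, b in zip(pc1, daily)))
    cos_sim /= math.sqrt(sum(d * d for d in daily))
    ```

    This is the "low intrinsic dimensionality" observation in miniature: thirty pixel time series collapse onto essentially one predictable periodic pattern.
    
    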

  4. Neural representations of emotion are organized around abstract event features.

    Science.gov (United States)

    Skerry, Amy E; Saxe, Rebecca

    2015-08-03

    Research on emotion attribution has tended to focus on the perception of overt expressions of at most five or six basic emotions. However, our ability to identify others' emotional states is not limited to perception of these canonical expressions. Instead, we make fine-grained inferences about what others feel based on the situations they encounter, relying on knowledge of the eliciting conditions for different emotions. In the present research, we provide convergent behavioral and neural evidence concerning the representations underlying these concepts. First, we find that patterns of activity in mentalizing regions contain information about subtle emotional distinctions conveyed through verbal descriptions of eliciting situations. Second, we identify a space of abstract situation features that well captures the emotion discriminations subjects make behaviorally and show that this feature space outperforms competing models in capturing the similarity space of neural patterns in these regions. Together, the data suggest that our knowledge of others' emotions is abstract and high dimensional, that brain regions selective for mental state reasoning support relatively subtle distinctions between emotion concepts, and that the neural representations in these regions are not reducible to more primitive affective dimensions such as valence and arousal.

  5. Features in chemical kinetics. I. Signatures of self-emerging dimensional reduction from a general format of the evolution law.

    Science.gov (United States)

    Nicolini, Paolo; Frezzato, Diego

    2013-06-21

    Simplification of chemical kinetics description through dimensional reduction is particularly important to achieve an accurate numerical treatment of complex reacting systems, especially when stiff kinetics are considered and a comprehensive picture of the evolving system is required. To this aim several tools have been proposed in the past decades, such as sensitivity analysis, lumping approaches, and exploitation of time scales separation. In addition, there are methods based on the existence of the so-called slow manifolds, which are hyper-surfaces of lower dimension than the whole phase-space and in whose neighborhood the slow evolution occurs after an initial fast transient. On the other hand, all tools contain to some extent a degree of subjectivity which seems to be irremovable. With reference to macroscopic and spatially homogeneous reacting systems under isothermal conditions, in this work we shall adopt a phenomenological approach to let the dimensional reduction emerge by itself from the mathematical structure of the evolution law. By transforming the original system of polynomial differential equations, which describes the chemical evolution, into a universal quadratic format, and making a direct inspection of the high-order time-derivatives of the new dynamic variables, we then formulate a conjecture which leads to the concept of an "attractiveness" region in the phase-space where a well-defined state-dependent rate function ω has the simple evolution ω̇ = -ω² along any trajectory up to the stationary state. This constitutes, by itself, a drastic dimensional reduction from a system of N coupled equations (where N is the number of chemical species) to a one-dimensional and universal evolution law for such a characteristic rate. Step-by-step numerical inspections on model kinetic schemes are presented. In the companion paper [P. Nicolini and D. Frezzato, J. Chem. Phys. 138, 234102 (2013)] this outcome will be naturally related to the
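
    For reference, the quoted one-dimensional evolution law for the characteristic rate, ω̇ = -ω², integrates in closed form by separation of variables (this is just the solution of the stated ODE, not additional material from the paper):

    ```latex
    \dot{\omega} = -\omega^{2}
    \quad\Longrightarrow\quad
    -\frac{\mathrm{d}\omega}{\omega^{2}} = \mathrm{d}t
    \quad\Longrightarrow\quad
    \frac{1}{\omega(t)} - \frac{1}{\omega_{0}} = t
    \quad\Longrightarrow\quad
    \omega(t) = \frac{\omega_{0}}{1 + \omega_{0}\,t},
    ```

    so within the attractiveness region the rate decays algebraically toward the stationary state along any trajectory, independently of the number N of species.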

  6. Doubly sparse factor models for unifying feature transformation and feature selection

    International Nuclear Information System (INIS)

    Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato; Matsumoto, Narihisa; Sugase-Miyamoto, Yasuko

    2010-01-01

    A number of unsupervised learning methods for high-dimensional data are largely divided into two groups based on their procedures, i.e., (1) feature selection, which discards irrelevant dimensions of the data, and (2) feature transformation, which constructs new variables by transforming and mixing over all dimensions. We propose a method that both selects and transforms features in a common Bayesian inference procedure. Our method imposes a doubly automatic relevance determination (ARD) prior on the factor loading matrix. We propose a variational Bayesian inference for our model and demonstrate the performance of our method on both synthetic and real data.

  7. Doubly sparse factor models for unifying feature transformation and feature selection

    Energy Technology Data Exchange (ETDEWEB)

    Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato [ERATO, Okanoya Emotional Information Project, Japan Science Technology Agency, Saitama (Japan); Matsumoto, Narihisa; Sugase-Miyamoto, Yasuko, E-mail: okada@k.u-tokyo.ac.j [Human Technology Research Institute, National Institute of Advanced Industrial Science and Technology, Ibaraki (Japan)

    2010-06-01

    A number of unsupervised learning methods for high-dimensional data are largely divided into two groups based on their procedures, i.e., (1) feature selection, which discards irrelevant dimensions of the data, and (2) feature transformation, which constructs new variables by transforming and mixing over all dimensions. We propose a method that both selects and transforms features in a common Bayesian inference procedure. Our method imposes a doubly automatic relevance determination (ARD) prior on the factor loading matrix. We propose a variational Bayesian inference for our model and demonstrate the performance of our method on both synthetic and real data.

  8. Exploring high dimensional data with Butterfly: a novel classification algorithm based on discrete dynamical systems.

    Science.gov (United States)

    Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken

    2014-03-01

    We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems collectively referred to as the chaos game that are closely related to iterative function systems. The goal of the algorithm was to create a human readable representation of high dimensional patient data that was capable of detecting unrevealed subclusters of patients from within anticipated classifications. This provides a mechanism to further pursue a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical system portion of the algorithm is designed to come after some feature selection filter and before some model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate features selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification) and finally a visual representation of the top classification models are returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning as the top performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix) including a public lung cancer
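
    The chaos-game mechanism at the heart of Butterfly can be sketched minimally: each subject's feature vector, discretized to symbols, drives an iterated function system over the unit square, so subjects with similar profiles land near each other in 2D without geometric projections or principal components. The corner assignment, the binning and the synthetic groups below are illustrative assumptions, not the published algorithm.

    ```python
    import random

    # Corner assignment for the iterated function system (an illustrative choice).
    CORNERS = {0: (0.0, 0.0), 1: (0.0, 1.0), 2: (1.0, 0.0), 3: (1.0, 1.0)}

    def chaos_game_point(symbols):
        """Jump halfway from the current point toward the corner named by each
        symbol; the final point encodes the whole sequence of feature values."""
        x, y = 0.5, 0.5
        for s in symbols:
            cx, cy = CORNERS[s]
            x, y = (x + cx) / 2.0, (y + cy) / 2.0
        return x, y

    def discretize(values):
        """Toy equal-width binning of [0, 1] feature values into 4 symbols."""
        return [min(3, max(0, int(v * 4))) for v in values]

    random.seed(3)
    # Two groups of subjects whose 10 feature values occupy different ranges.
    group_a = [[random.uniform(0.0, 0.2) for _ in range(10)] for _ in range(5)]
    group_b = [[random.uniform(0.8, 1.0) for _ in range(10)] for _ in range(5)]
    pts_a = [chaos_game_point(discretize(v)) for v in group_a]
    pts_b = [chaos_game_point(discretize(v)) for v in group_b]
    # Similar feature profiles collapse to nearby 2D points: group_a near (0, 0),
    # group_b near (1, 1).
    ```

    The "memory-type mechanism" mentioned in the abstract is visible here: every symbol in the sequence leaves a trace in the final coordinates, with later symbols weighted more heavily.
    
    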

  9. Collapsing perfect fluid in self-similar five dimensional space-time and cosmic censorship

    International Nuclear Information System (INIS)

    Ghosh, S.G.; Sarwe, S.B.; Saraykar, R.V.

    2002-01-01

    We investigate the occurrence and nature of naked singularities in the gravitational collapse of a self-similar adiabatic perfect fluid in a five dimensional space-time. The naked singularities are found to be gravitationally strong in the sense of Tipler and thus violate the cosmic censorship conjecture

  10. A non-Abelian SO(8) monopole as generalization of Dirac-Yang monopoles for a 9-dimensional space

    International Nuclear Information System (INIS)

    Le, Van-Hoang; Nguyen, Thanh-Son

    2011-01-01

    We establish an explicit form of a non-Abelian SO(8) monopole in a 9-dimensional space and show that it is indeed a direct generalization of the Dirac and Yang monopoles. Using the generalized Hurwitz transformation, we have found a connection between a 16-dimensional harmonic oscillator and a 9-dimensional hydrogenlike atom in the field of the SO(8) monopole (MICZ-Kepler problem). Using this connection, the dynamical symmetry group of the 9-dimensional MICZ-Kepler problem is found to be SO(10, 2).

  11. Nonlinear sigma models with compact hyperbolic target spaces

    Energy Technology Data Exchange (ETDEWEB)

    Gubser, Steven [Joseph Henry Laboratories, Princeton University, Princeton, NJ 08544 (United States); Saleem, Zain H. [Department of Physics and Astronomy, University of Pennsylvania,Philadelphia, PA 19104 (United States); National Center for Physics, Quaid-e-Azam University Campus,Islamabad 4400 (Pakistan); Schoenholz, Samuel S. [Department of Physics and Astronomy, University of Pennsylvania,Philadelphia, PA 19104 (United States); Stoica, Bogdan [Walter Burke Institute for Theoretical Physics, California Institute of Technology,452-48, Pasadena, CA 91125 (United States); Stokes, James [Department of Physics and Astronomy, University of Pennsylvania,Philadelphia, PA 19104 (United States)

    2016-06-23

    We explore the phase structure of nonlinear sigma models with target spaces corresponding to compact quotients of hyperbolic space, focusing on the case of a hyperbolic genus-2 Riemann surface. The continuum theory of these models can be approximated by a lattice spin system which we simulate using Monte Carlo methods. The target space possesses interesting geometric and topological properties which are reflected in novel features of the sigma model. In particular, we observe a topological phase transition at a critical temperature, above which vortices proliferate, reminiscent of the Kosterlitz-Thouless phase transition in the O(2) model [V.L. Berezinskii, Sov. Phys. JETP 34 (1972) 610; J.M. Kosterlitz and D.J. Thouless, J. Phys. C 6 (1973) 1181]. Unlike in the O(2) case, there are many different types of vortices, suggesting a possible analogy to the Hagedorn treatment of statistical mechanics of a proliferating number of hadron species. Below the critical temperature the spins cluster around six special points in the target space known as Weierstrass points. The diversity of compact hyperbolic manifolds suggests that our model is only the simplest example of a broad class of statistical mechanical models whose main features can be understood essentially in geometric terms.

  12. Nonlinear sigma models with compact hyperbolic target spaces

    International Nuclear Information System (INIS)

    Gubser, Steven; Saleem, Zain H.; Schoenholz, Samuel S.; Stoica, Bogdan; Stokes, James

    2016-01-01

    We explore the phase structure of nonlinear sigma models with target spaces corresponding to compact quotients of hyperbolic space, focusing on the case of a hyperbolic genus-2 Riemann surface. The continuum theory of these models can be approximated by a lattice spin system which we simulate using Monte Carlo methods. The target space possesses interesting geometric and topological properties which are reflected in novel features of the sigma model. In particular, we observe a topological phase transition at a critical temperature, above which vortices proliferate, reminiscent of the Kosterlitz-Thouless phase transition in the O(2) model [V.L. Berezinskii, Sov. Phys. JETP 34 (1972) 610; J.M. Kosterlitz and D.J. Thouless, J. Phys. C 6 (1973) 1181]. Unlike in the O(2) case, there are many different types of vortices, suggesting a possible analogy to the Hagedorn treatment of statistical mechanics of a proliferating number of hadron species. Below the critical temperature the spins cluster around six special points in the target space known as Weierstrass points. The diversity of compact hyperbolic manifolds suggests that our model is only the simplest example of a broad class of statistical mechanical models whose main features can be understood essentially in geometric terms.

  13. Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks.

    Science.gov (United States)

    Vlachas, Pantelis R; Byeon, Wonmin; Wan, Zhong Y; Sapsis, Themistoklis P; Koumoutsakos, Petros

    2018-05-01

    We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.

  14. An Improved Ensemble Learning Method for Classifying High-Dimensional and Imbalanced Biomedicine Data.

    Science.gov (United States)

    Yu, Hualong; Ni, Jun

    2014-01-01

    Training classifiers on skewed data is technically challenging, and the task becomes more difficult still when the data are simultaneously high-dimensional. Skewed data of this kind appear frequently in biomedicine. In this study, we address the problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedicine data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of the Accuracy, F-measure, G-mean and AUC evaluation criteria, and it can therefore be regarded as an effective and efficient tool for dealing with high-dimensional and imbalanced biomedical data.
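The subspace-plus-majority-voting core of such ensembles fits in a few lines. This toy version swaps in a nearest-centroid base learner for the paper's SVM and draws feature subsets uniformly at random rather than by the paper's FSS strategy (all names and parameters are illustrative):

```python
import numpy as np

class SubspaceVotingEnsemble:
    """Majority-vote ensemble; each base learner sees a random feature subset."""
    def __init__(self, n_learners=11, subspace_frac=0.5, seed=0):
        self.n_learners = n_learners
        self.subspace_frac = subspace_frac
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        d = X.shape[1]
        k = max(1, int(self.subspace_frac * d))
        self.members_ = []
        for _ in range(self.n_learners):
            feats = self.rng.choice(d, size=k, replace=False)
            # Nearest-centroid base learner trained on the selected subspace.
            centroids = np.array([X[y == c][:, feats].mean(axis=0) for c in self.classes_])
            self.members_.append((feats, centroids))
        return self

    def predict(self, X):
        votes = np.zeros((X.shape[0], self.classes_.size), dtype=int)
        for feats, centroids in self.members_:
            # Distance from every sample to every class centroid in this subspace.
            dists = np.linalg.norm(X[:, feats, None] - centroids.T[None], axis=1)
            votes[np.arange(X.shape[0]), dists.argmin(axis=1)] += 1
        return self.classes_[votes.argmax(axis=1)]
```

For imbalanced data, asBagging would additionally rebalance each bootstrap sample before fitting the base learner; that step is omitted here.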

  15. Detection of relationships among multi-modal brain imaging meta-features via information flow.

    Science.gov (United States)

    Miller, Robyn L; Vergara, Victor M; Calhoun, Vince D

    2018-01-15

    Neuroscientists and clinical researchers are awash in data from an ever-growing number of imaging and other bio-behavioral modalities. This flow of brain imaging data, taken under resting and various task conditions, combines with available cognitive measures, behavioral information, genetic data, and other potentially salient biomedical and environmental information to create a rich but diffuse data landscape. The conditions being studied with brain imaging data are often extremely complex, and it is common for researchers to employ more than one imaging, behavioral or biological data modality (e.g., genetics) in their investigations. While the field has advanced significantly in its approach to multimodal data, the vast majority of studies still ignore joint information among two or more features or modalities. We propose an intuitive framework based on conditional probabilities for understanding information exchange between features in what we are calling a feature meta-space; that is, a space consisting of many individual feature spaces. Features can have any dimension and can be drawn from any data source or modality. No a priori assumptions are made about the functional form (e.g., linear, polynomial, exponential) of captured inter-feature relationships. We demonstrate the framework's ability to identify relationships between disparate features of varying dimensionality by applying it to a large multi-site, multi-modal clinical dataset balanced between schizophrenia patients and controls. In our application it exposes both expected (previously observed) relationships and novel relationships rarely investigated by clinical researchers. To the best of our knowledge there is not presently a comparably efficient way to capture relationships of indeterminate functional form between features of arbitrary dimension and type. We are introducing this method as an initial foray into a space that remains relatively underpopulated.
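The paper's conditional-probability framework is not reproduced here, but the flavor of assumption-free dependence detection between two features can be sketched with a histogram-based normalized mutual information (an illustrative stand-in, not the authors' method):

```python
import numpy as np

def conditional_dependence(x, y, bins=8):
    """Histogram estimate of how much knowing x reduces uncertainty about y.

    Returns normalized mutual information I(x;y)/H(y): 0 for independent
    features, approaching 1 when y is fully determined by x. No functional
    form (linear, polynomial, ...) of the relationship is assumed.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    mi = (p[nz] * np.log(p[nz] / (px @ py)[nz])).sum()
    hy = -(py[py > 0] * np.log(py[py > 0])).sum()
    return mi / hy
```

A nonlinear relation such as y = x² is invisible to linear correlation but scores high here, which is the kind of indeterminate-form relationship the framework targets.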

  16. Parametric analysis of diffuser requirements for high expansion ratio space engine

    Science.gov (United States)

    Wojciechowski, C. J.; Anderson, P. G.

    1981-01-01

    A supersonic diffuser ejector design computer program was developed. Using empirically modified one dimensional flow methods the diffuser ejector geometry is specified by the code. The design code results for calculations up to the end of the diffuser second throat were verified. Diffuser requirements for sea level testing of high expansion ratio space engines were defined. The feasibility of an ejector system using two commonly available turbojet engines feeding two variable area ratio ejectors was demonstrated.

  17. Enhancement of Solar Cell Efficiency for Space Applications Using Two-Dimensional Photonic Crystals

    Directory of Open Access Journals (Sweden)

    Postigo P.A.

    2017-01-01

    The effects of a nanopatterned photonic crystal (PC) structure on the surface of a solar cell can be usefully employed to increase the energy conversion efficiency, which may be critical for space applications. In this work, we have measured the reflectance (R) and transmittance (T) of thin InP layers (270 nm thick) bonded to a glass substrate and nanopatterned with holes down to the glass in a triangular symmetry lattice with lattice parameter a = 450 nm, maintaining a value of r/a = 0.32. The optical spectra were measured with angular resolution in the range from 0.55 to 2.0 eV. There are noticeable changes in the spectra of the PC sample, with minima and maxima of R and T clearly shifted with respect to the unpatterned sample, and new features that significantly alter the overall lineshape of each spectrum. Those features correspond, in a first approximation, to the well-known Fano-like resonances of the discrete photonic modes of the PC lattice, and they have been used before to determine experimentally the position of the PC bands. The observed features can be translated to the optical absorption (A), defined as A = 1 - R - T, provided scattering effects are low or negligible. The generated absorption spectra show enhancements above and below the electronic band edge of the InP that can be correlated with the photonic band structure. Even for a thicker semiconductor layer, the abovementioned effects justify the use of a photonic crystal front surface with sub-wavelength motifs. In this way, we have fabricated and characterized a complete Ge/InGaP solar cell with a 2D-PC on its front surface. An increase in the photocurrent of up to 8% was achieved on a solar cell with 40% of its surface covered with a PC pattern. Enhancements of the external quantum efficiency (EQE) of 22% for a wide range of wavelengths, and of up to 46% for specific wavelengths, have been measured without the use of any anti-reflection coating (ARC).

  18. Mapping the Indonesian territory, based on pollution, social demography and geographical data, using self organizing feature map

    Science.gov (United States)

    Hernawati, Kuswari; Insani, Nur; Bambang S. H., M.; Nur Hadi, W.; Sahid

    2017-08-01

    This research aims to map the 33 (thirty-three) provinces of Indonesia into a clustered model based on data on air, water and soil pollution, as well as social demography and geography data. The method used in this study is an unsupervised method built on the basic concept of Kohonen or Self-Organizing Feature Maps (SOFM). Design parameters for the model are derived from data related directly or indirectly to pollution: demographic and social data, pollution levels of air, water and soil, and the geographical situation of each province. The parameters used consist of 19 features/characteristics, including the human development index, the number of vehicles, the availability of water-absorbing plants and flood prevention, as well as the geographic and demographic situation. The data used were secondary data from the Central Statistics Agency (BPS), Indonesia. The data are mapped by the SOFM from a high-dimensional vector space into a two-dimensional vector space according to closeness of location in terms of Euclidean distance. The resulting outputs are represented as clustered groupings: the thirty-three provinces fall into five clusters, each with different features/characteristics and levels of pollution. The result can be used to support the prevention and resolution of pollution problems in each cluster in an effective and efficient way.
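A minimal Kohonen SOFM of the kind described above can be sketched in NumPy. Grid size, learning-rate and neighbourhood schedules below are our illustrative choices, not the study's settings:

```python
import numpy as np

def train_som(X, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen SOFM: maps rows of X onto a 2-D grid of units."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    # Unit coordinates on the 2-D grid, used by the neighbourhood function.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    W = rng.normal(0, 0.1, (n_units, X.shape[1]))
    t, t_max = 0, epochs * len(X)
    for _ in range(epochs):
        for x in rng.permutation(X):
            frac = t / t_max
            lr = lr0 * (1 - frac)                 # decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5     # shrinking neighbourhood radius
            bmu = np.linalg.norm(W - x, axis=1).argmin()   # best-matching unit
            h = np.exp(-np.linalg.norm(coords - coords[bmu], axis=1) ** 2
                       / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)
            t += 1
    return W, coords

def assign(X, W):
    # Cluster label of each sample = index of its best-matching unit.
    return np.linalg.norm(X[:, None] - W[None], axis=2).argmin(axis=1)
```

In the study's setting each row of `X` would be a province's 19-feature vector, and provinces sharing (or neighbouring) a best-matching unit form a cluster.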

  19. Fault diagnosis of rotating machine by isometric feature mapping

    International Nuclear Information System (INIS)

    Zhang, Yun; Li, Benwei; Wang, Lin; Wang, Wen; Wang, Zibin

    2013-01-01

    Principal component analysis (PCA) and linear discriminant analysis (LDA) are well-known linear dimensionality reduction methods for fault classification. However, being linear, they perform poorly on high-dimensional data with nonlinear geometric structure. As a kernel extension of PCA, kernel PCA can be used for nonlinear fault classification, but its performance depends largely on the kernel function, which can only be selected empirically from a finite set of candidates. Thus, a novel rotating machine fault diagnosis approach is proposed, based on the geometrically motivated nonlinear dimensionality reduction method known as isometric feature mapping (Isomap). The approach can effectively extract the intrinsic nonlinear manifold features embedded in high-dimensional fault data sets. Experimental results with rotor and rolling bearing data show that the proposed approach overcomes the flaws of conventional fault pattern recognition approaches and clearly improves fault classification performance.

  20. Principal Components of Superhigh-Dimensional Statistical Features and Support Vector Machine for Improving Identification Accuracies of Different Gear Crack Levels under Different Working Conditions

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2015-01-01

    Gears are widely used in gearboxes to transmit power from one shaft to another. Gear crack is one of the most frequent gear fault modes found in industry. Identification of different gear crack levels is beneficial in preventing unexpected machine breakdown and reducing economic loss, because gear crack leads to gear tooth breakage. In this paper, an intelligent fault diagnosis method for identifying different gear crack levels under different working conditions is proposed. First, statistical features are extracted from the continuous wavelet transform at different scales. The number of statistical features extracted by the proposed method is 920, so the extracted feature set is superhigh dimensional. To reduce the dimensionality of the extracted statistical features and generate new, significant low-dimensional statistical features, a simple and effective method, principal component analysis, is used. To further improve identification accuracies of different gear crack levels under different working conditions, a support vector machine is employed. Three experiments are investigated to show the superiority of the proposed method, and comparisons with other existing gear crack level identification methods are conducted. The results show that the proposed method has the highest identification accuracy among all existing methods.
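The PCA reduction step used above is a few lines of linear algebra; here is a generic SVD-based sketch (not the authors' code), taking a (samples × features) matrix such as the 920-dimensional statistical feature set:

```python
import numpy as np

def pca_reduce(F, n_components=10):
    """Project a (samples x features) matrix onto its top principal components."""
    mu = F.mean(axis=0)
    Fc = F - mu                          # centre each feature
    U, s, Vt = np.linalg.svd(Fc, full_matrices=False)
    scores = Fc @ Vt[:n_components].T    # new low-dimensional statistical features
    explained = (s[:n_components] ** 2) / (s ** 2).sum()
    return scores, explained
```

The `scores` matrix would then be fed to the SVM classifier; `explained` reports the fraction of variance each retained component carries, which guides the choice of `n_components`.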

  1. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.

  3. A high-speed computerized tomography image reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1983-01-01

    The necessity of developing real-time computerized tomography (CT), aiming at the dynamic observation of organs such as the heart, has lately been advocated. Its realization requires image reconstruction markedly faster than in present CTs. Although various reconstruction methods have been proposed, the only method in practical use at present is the filtered backprojection (FBP) method, which gives high-quality image reconstruction but takes much computing time. In the past, the two-dimensional Fourier transform (TFT) method was regarded as unsuitable for practical use because the image quality obtained was not good, despite being a promising route to high-speed reconstruction owing to its lower computing time. However, since it was revealed that the image quality of the TFT method depends greatly on the interpolation accuracy in two-dimensional Fourier space, the authors have developed a high-speed calculation algorithm that obtains high-quality images, by studying the relationship between image quality and interpolation method. In this algorithm, the number of radial data sampling points in Fourier space is increased by a power of two, and linear or spline interpolation is used. Comparison of this method with the present FBP method led to the conclusion that the image quality is almost the same for practical image matrices, while the computation time of the TFT method is about 1/10 that of the FBP method and the memory requirement is reduced by about 20%. (Wakatsuki, Y.)

  4. Advancing three-dimensional MEMS by complementary laser micro manufacturing

    Science.gov (United States)

    Palmer, Jeremy A.; Williams, John D.; Lemp, Tom; Lehecka, Tom M.; Medina, Francisco; Wicker, Ryan B.

    2006-01-01

    This paper describes improvements that enable engineers to create three-dimensional MEMS in a variety of materials. It also provides a means for selectively adding three-dimensional, high-aspect-ratio features to pre-existing PMMA micro molds for subsequent LIGA processing. This complementary method involves in situ construction of three-dimensional micro molds, either in a stand-alone configuration or directly adjacent to features formed by x-ray lithography. Three-dimensional micro molds are created by micro stereolithography (MSL), an additive rapid prototyping technology. Alternatively, three-dimensional features may be added by direct femtosecond laser micro machining. Parameters for optimal femtosecond laser micro machining of PMMA at 800 nanometers are presented. The technical discussion also includes strategies for enhancements in the context of material selection and post-process surface finish. This approach may lead to practical, cost-effective 3-D MEMS with the surface finish and throughput advantages of x-ray lithography. Accurate three-dimensional metal microstructures are demonstrated. Challenges remain in process planning for micro stereolithography and in the development of buried features following femtosecond laser micro machining.

  5. Harnessing high-dimensional hyperentanglement through a biphoton frequency comb

    Science.gov (United States)

    Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wei Wong, Chee

    2015-08-01

    Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.

  6. Resolving molecular vibronic structure using high-sensitivity two-dimensional electronic spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Bizimana, Laurie A.; Brazard, Johanna; Carbery, William P.; Gellen, Tobias; Turner, Daniel B., E-mail: dturner@nyu.edu [Department of Chemistry, New York University, 100 Washington Square East, New York, New York 10003 (United States)

    2015-10-28

    Coherent multidimensional optical spectroscopy is an emerging technique for resolving structure and ultrafast dynamics of molecules, proteins, semiconductors, and other materials. A current challenge is the quality of kinetics that are examined as a function of waiting time. Inspired by noise-suppression methods of transient absorption, here we incorporate shot-by-shot acquisitions and balanced detection into coherent multidimensional optical spectroscopy. We demonstrate that implementing noise-suppression methods in two-dimensional electronic spectroscopy not only improves the quality of features in individual spectra but also increases the sensitivity to ultrafast time-dependent changes in the spectral features. Measurements on cresyl violet perchlorate are consistent with the vibronic pattern predicted by theoretical models of a highly displaced harmonic oscillator. The noise-suppression methods should benefit research into coherent electronic dynamics, and they can be adapted to multidimensional spectroscopies across the infrared and ultraviolet frequency ranges.
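The payoff of shot-by-shot balanced detection can be illustrated with a toy noise model (all magnitudes invented for illustration): fluctuations common to the probe and reference arms cancel in their difference, leaving mostly uncorrelated detector noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n_shots = 10_000
true_signal = 0.05                                   # pump-induced change to be measured
common_noise = 0.2 * rng.standard_normal(n_shots)    # shot-to-shot laser fluctuations
detector_noise = 0.01 * rng.standard_normal((2, n_shots))

probe = 1.0 + true_signal + common_noise + detector_noise[0]
reference = 1.0 + common_noise + detector_noise[1]

unbalanced = probe               # single-detector measurement: dominated by laser noise
balanced = probe - reference     # balanced measurement: common noise cancels shot by shot

print(unbalanced.std(), balanced.std())   # the balanced spread is far smaller
```

In this model the shot-to-shot laser fluctuations dominate the single-detector spread, while the balanced difference retains the signal with only the uncorrelated detector noise, which is why the 2D spectral features become cleaner.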

  7. Three-Dimensional Precession Feature Extraction of Ballistic Targets Based on Narrowband Radar Network

    Directory of Open Access Journals (Sweden)

    Zhao Shuang

    2017-02-01

    Micro-motion is a crucial feature used in ballistic target recognition. To address the problem that single-view observations cannot extract true micro-motion parameters, we propose a novel algorithm based on a narrowband radar network to extract three-dimensional precession features. First, we construct a precession model of the cone-shaped target and, as a precondition, account for the possible invisibility of scattering centers. We then analyze in detail the micro-Doppler modulation caused by the precession. Next, we match each scattering center across the different perspectives based on the ratio of the top scattering center's micro-Doppler frequency modulation coefficients, and extract the 3D coning vector of the target by establishing associated multi-aspect equation systems. In addition, we estimate the feature parameters by utilizing the correlation of the micro-Doppler frequency modulation coefficients of the three scattering centers combined with a frequency compensation method. We then calculate the coordinates of the conical point at each moment and reconstruct its motion in 3D space. Finally, we provide simulation results to validate the proposed algorithm.

  8. Edge Detection from High Resolution Remote Sensing Images using Two-Dimensional log Gabor Filter in Frequency Domain

    International Nuclear Information System (INIS)

    Wang, K; Yu, T; Meng, Q Y; Wang, G K; Li, S P; Liu, S H

    2014-01-01

    Edges are vital features for describing the structural information of images, especially high spatial resolution remote sensing images, where edge features can be used to define the boundaries between different ground objects. Edge detection is therefore important in remote sensing image processing. Even though many different edge detection algorithms have been proposed, it remains difficult to extract edge features from high spatial resolution remote sensing images containing complex ground objects. This paper introduces a novel method to detect edges in such images based on the frequency domain. First, the image is Fourier transformed by FFT to obtain the magnitude spectrum (frequency image). Then, the frequency spectrum is analyzed using radius and angle sampling. Next, a two-dimensional log Gabor filter with optimal parameters is designed according to the result of the spectrum analysis. Finally, the dot product of the Fourier transform and the log Gabor filter is inverse Fourier transformed to obtain the detected edges. The experimental results show that the proposed algorithm detects edge features from high resolution remote sensing images effectively.

  9. Covariance Method of the Tunneling Radiation from High Dimensional Rotating Black Holes

    Science.gov (United States)

    Li, Hui-Ling; Han, Yi-Wen; Chen, Shuai-Ru; Ding, Cong

    2018-04-01

    In this paper, the Angheben-Nadalini-Vanzo-Zerbini (ANVZ) covariance method is used to study the tunneling radiation from the Kerr-Gödel black hole and the Myers-Perry black hole with two independent angular momenta. By solving the Hamilton-Jacobi equation and separating the variables, the radial equation of motion of a tunneling particle is obtained. Using the near-horizon approximation and the proper spatial distance, we calculate the tunneling rate and the temperature of the Hawking radiation. The ANVZ covariance method is thus extended to the study of tunneling radiation from high dimensional black holes.
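For orientation, the Hamilton-Jacobi tunneling calculation follows the standard semiclassical pattern (generic relations, not the covariant details specific to the paper):

```latex
% Hamilton-Jacobi equation for the classical action S of the tunneling particle:
g^{\mu\nu}\,\partial_\mu S\,\partial_\nu S + m^2 = 0 .
% Separating variables and integrating the radial equation across the horizon
% produces an imaginary contribution to the action, giving the tunneling rate
\Gamma \sim \exp\!\left(-\frac{2\,\mathrm{Im}\,S}{\hbar}\right).
% Matching this to a thermal Boltzmann factor identifies the Hawking temperature:
\Gamma \sim \exp\!\left(-\frac{E}{T_H}\right)
\;\Longrightarrow\;
T_H = \frac{\hbar\,E}{2\,\mathrm{Im}\,S}.
```

The near-horizon approximation enters when evaluating the radial integral, where the metric function is expanded to first order in the proper distance from the horizon.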

  10. Feature Selection via Chaotic Antlion Optimization.

    Directory of Open Access Journals (Sweden)

    Hossam M Zawbaa

    Selecting a subset of relevant properties from a large set of features that describe a dataset is a challenging machine learning task. In biology, for instance, advances in the available technologies enable the generation of a very large number of biomarkers that describe the data. Choosing the more informative markers while performing high-accuracy classification over the data can be a daunting task, particularly if the data are high dimensional. An often adopted approach is to formulate feature selection as a biobjective optimization problem, with the aim of maximizing the performance of the data analysis model (the quality of the data training fitting) while minimizing the number of features used. We propose an optimization approach for the feature selection problem that considers a "chaotic" version of the antlion optimizer, a nature-inspired algorithm that mimics the hunting mechanism of antlions in nature. The balance between exploration of the search space and exploitation of the best solutions is a challenge in multi-objective optimization. The exploration/exploitation rate is controlled by the parameter I, which limits the random walk range of the ants/prey. This variable is increased iteratively in a quasi-linear manner to decrease the exploration rate as the optimization progresses. The quasi-linear growth of the variable I may lead to premature convergence in some cases and to trapping in local minima in others. The chaotic system proposed here attempts to improve the tradeoff between exploration and exploitation. The methodology is evaluated using different chaotic maps on a number of feature selection datasets. To ensure generality, we used ten biological datasets, but we also used other types of data from various sources. The results are compared with the particle swarm optimizer and with genetic algorithm variants for feature selection using a set of quality metrics.
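The role of the chaotic map can be sketched by contrast with the quasi-linear schedule for the parameter I. The version below uses the logistic map as an illustrative example (the paper evaluates several maps, and its exact schedule may differ):

```python
import numpy as np

def quasi_linear_I(t, t_max, I_max=1e5):
    """Quasi-linear growth of the walk-shrinking ratio I over iterations."""
    return 1.0 + I_max * (t / t_max)

def chaotic_I(t, t_max, I_max=1e5, c0=0.7):
    """Same overall trend, but modulated by the logistic chaotic map x <- 4x(1-x),
    so the effective exploration rate keeps fluctuating instead of shrinking
    monotonically (which can help escape local minima)."""
    x = c0
    for _ in range(t):          # iterate the map t times
        x = 4.0 * x * (1.0 - x)
    return 1.0 + I_max * (t / t_max) * x
```

The quasi-linear schedule is strictly increasing, so late iterations barely explore; the chaotic schedule intermittently re-opens the random-walk range even late in the run.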

  11. Numerical Study of Three Dimensional Effects in Longitudinal Space-Charge Impedance

    Energy Technology Data Exchange (ETDEWEB)

    Halavanau, A. [NICADD, DeKalb; Piot, P. [NICADD, DeKalb

    2015-06-01

    Longitudinal space-charge (LSC) effects are generally considered detrimental in free-electron lasers as they can seed instabilities. Such "microbunching instabilities" were recently shown to be potentially useful for supporting the generation of broadband coherent radiation pulses [1, 2]. There has therefore been increasing interest in devising accelerator beamlines capable of sustaining this LSC instability as a mechanism to produce a coherent light source. To date most of these studies have been carried out with a one-dimensional impedance model for the LSC. In this paper we use an N-body "Barnes-Hut" algorithm [3] to simulate the 3D space-charge force in the beam, combined with elegant [4], and explore the limitations of the 1D model often used.

  12. State-space representation of instationary two-dimensional airfoil aerodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Marcus; Matthies, Hermann G. [Institute of Scientific Computing, Technical University Braunschweig, Hans-Sommer-Str. 65, Braunschweig 38106 (Germany)

    2004-03-01

    In the aero-elastic analysis of wind turbines, the need to include a model of the local, two-dimensional instationary aerodynamic loads, commonly referred to as a dynamic stall model, has become obvious in recent years. In this contribution an alternative choice for such a model is described, based on the DLR model. Its derivation is governed by the flow physics, thus enabling interpolation between different profile geometries. An advantage of the proposed model is its state-space form, i.e. a system of differential equations, which facilitates the important tasks of aeroelastic stability and sensitivity investigations. The model is validated with numerical calculations.

  13. Fast multi-dimensional NMR by minimal sampling

    Science.gov (United States)

    Kupče, Ēriks; Freeman, Ray

    2008-03-01

    A new scheme is proposed for very fast acquisition of three-dimensional NMR spectra based on minimal sampling, instead of the customary step-wise exploration of all of evolution space. The method relies on prior experiments to determine accurate values for the evolving frequencies and intensities from the two-dimensional 'first planes' recorded by setting t1 = 0 or t2 = 0. With this prior knowledge, the entire three-dimensional spectrum can be reconstructed by an additional measurement of the response at a single location (t1∗,t2∗) where t1∗ and t2∗ are fixed values of the evolution times. A key feature is the ability to resolve problems of overlap in the acquisition dimension. Applied to a small protein, agitoxin, the three-dimensional HNCO spectrum is obtained 35 times faster than systematic Cartesian sampling of the evolution domain. The extension to multi-dimensional spectroscopy is outlined.
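The minimal-sampling idea has a simple one-dimensional analogue: once the evolving frequencies are known from prior experiments, the amplitudes follow from a handful of fixed-time samples by linear least squares, with no systematic grid over evolution time. Frequencies and sampling times below are invented for illustration:

```python
import numpy as np

freqs = np.array([55.0, 120.0, 310.0])    # known from the 'first plane' experiments
amps_true = np.array([1.0, 0.4, 0.7])

# Sample the signal at just a few fixed evolution times t* instead of a full grid.
t_star = np.array([0.0, 0.0013, 0.0029, 0.0047])
signal = (amps_true[None] * np.exp(2j * np.pi * freqs[None] * t_star[:, None])).sum(axis=1)

# With the frequencies fixed, the amplitudes solve a small linear system.
A = np.exp(2j * np.pi * freqs[None] * t_star[:, None])
amps_est, *_ = np.linalg.lstsq(A, signal, rcond=None)
print(np.abs(amps_est))
```

With the amplitudes recovered, the full spectrum can be synthesized at any resolution, which is the essence of reconstructing the 3D spectrum from a single extra measurement point.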

  14. The dimensional reduction in a multi-dimensional cosmology

    International Nuclear Information System (INIS)

    Demianski, M.; Golda, Z.A.; Heller, M.; Szydlowski, M.

    1986-01-01

    Einstein's field equations are solved for the case of the eleven-dimensional vacuum spacetime which is the product R x Bianchi V x T^7, where T^7 is a seven-dimensional torus. Among all possible solutions, the authors identify those in which the macroscopic space expands and the microscopic space contracts to a finite size. The solutions with this property are 'typical' within the considered class. They implement the idea of a purely dynamical dimensional reduction. (author)

  15. Dimensional control of die castings

    Science.gov (United States)

    Karve, Aniruddha Ajit

    The demand for net shape die castings, which require little or no machining, is steadily increasing. Stringent customer requirements are forcing die casters to deliver high quality castings in increasingly short lead times. Dimensional conformance to customer specifications is an inherent part of die casting quality. The dimensional attributes of a die casting are essentially dependent upon many factors--the quality of the die and the degree of control over the process variables being the two major sources of dimensional error in die castings. This study focused on investigating the nature and the causes of dimensional error in die castings. The two major components of dimensional error i.e., dimensional variability and die allowance were studied. The major effort of this study was to qualitatively and quantitatively study the effects of casting geometry and process variables on die casting dimensional variability and die allowance. This was accomplished by detailed dimensional data collection at production die casting sites. Robust feature characterization schemes were developed to describe complex casting geometry in quantitative terms. Empirical modeling was utilized to quantify the effects of the casting variables on dimensional variability and die allowance for die casting features. A number of casting geometry and process variables were found to affect dimensional variability in die castings. The dimensional variability was evaluated by comparisons with current published dimensional tolerance standards. The casting geometry was found to play a significant role in influencing the die allowance of the features measured. The predictive models developed for dimensional variability and die allowance were evaluated to test their effectiveness. Finally, the relative impact of all the components of dimensional error in die castings was put into perspective, and general guidelines for effective dimensional control in the die casting plant were laid out. 

  16. Three-dimensional laparoscopy vs 2-dimensional laparoscopy with high-definition technology for abdominal surgery

    DEFF Research Database (Denmark)

    Fergo, Charlotte; Burcharth, Jakob; Pommergaard, Hans-Christian

    2017-01-01

    BACKGROUND: This systematic review investigates newer generation 3-dimensional (3D) laparoscopy vs 2-dimensional (2D) laparoscopy in terms of error rating, performance time, and subjective assessment, as early comparisons have shown contradictory results due to technological shortcomings. DATA SOURCES: … Central Register of Controlled Trials database. CONCLUSIONS: Of 643 articles, 13 RCTs were included, of which 2 were clinical trials. Nine of 13 trials (69%) and 10 of 13 trials (77%) found a significant reduction in performance time and error, respectively, with the use of 3D-laparoscopy. Overall, 3D-laparoscopy was found to be superior or equal to 2D-laparoscopy. All trials featuring subjective evaluation found a superiority of 3D-laparoscopy. More clinical RCTs are still awaited for the convincing results to be reproduced.

  17. Simulation of the diffraction pattern of one dimensional quasicrystal ...

    African Journals Online (AJOL)

    The effects of the variation of atomic spacing ratio of a one dimensional quasicrystal material are investigated. The work involves the use of the solid state simulation code, Laue written by Silsbee and Drager. We are able to observe the general features of the diffraction pattern by a quasicrystal. In addition, it has been found ...

  18. The Euclidean scalar Green function in the five-dimensional Kaluza-Klein magnetic monopole space-time

    International Nuclear Information System (INIS)

    Bezerra de Mello, E.R.

    2006-01-01

    In this paper we present, in integral form, the Euclidean Green function associated with a massless scalar field in the five-dimensional Kaluza-Klein magnetic monopole superposed with a global monopole, admitting a nontrivial coupling between the field and the geometry. This Green function is expressed as the sum of two contributions: the first, related to the uncharged component of the field, is similar to the Green function associated with a scalar field in a four-dimensional global monopole space-time. The second contains the information of all the other components. Using this Green function it is possible to study the vacuum polarization effects on this space-time. Explicitly we calculate the renormalized vacuum expectation value ⟨Φ*(x)Φ(x)⟩_Ren, which in turn is also expressed as the sum of two contributions

  19. Engineering two-photon high-dimensional states through quantum interference

    Science.gov (United States)

    Zhang, Yingwen; Roux, Filippus S.; Konrad, Thomas; Agnew, Megan; Leach, Jonathan; Forbes, Andrew

    2016-01-01

    Many protocols in quantum science, for example, linear optical quantum computing, require access to large-scale entangled quantum states. Such systems can be realized through many-particle qubits, but this approach often suffers from scalability problems. An alternative strategy is to consider a smaller number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase the storage and processing potential of quantum information systems. We demonstrate the controlled engineering of two-photon high-dimensional states entangled in their orbital angular momentum through Hong-Ou-Mandel interference. We prepare a large range of high-dimensional entangled states and implement precise quantum state filtering. We characterize the full quantum state before and after the filter, and are thus able to determine that only the antisymmetric component of the initial state remains. This work paves the way for high-dimensional processing and communication of multiphoton quantum states, for example, in teleportation beyond qubits. PMID:26933685

  20. Scattering of three-dimensional plane waves in a self-reinforced half-space lying over a triclinic half-space

    Science.gov (United States)

    Gupta, Shishir; Pramanik, Abhijit; Smita; Pramanik, Snehamoy

    2018-06-01

    The behaviour of plane waves at the interface of a triclinic half-space and a self-reinforced half-space is discussed, with possible applications during wave propagation. Analytical expressions for the phase velocities of reflected and refracted quasi-compressional and quasi-shear waves under initial stress are derived carefully. Closed-form expressions for the amplitude ratios of the reflection and refraction coefficients of the three quasi-plane waves are developed mathematically by applying appropriate boundary conditions. Graphs are plotted to exhibit the consequences of initial stress on the three-dimensional plane-wave reflection and refraction coefficients. Some special cases that coincide with the fundamental properties of several layers are designed to express the reflection and refraction coefficients.

  1. Quantum limits to information about states for finite dimensional Hilbert space

    International Nuclear Information System (INIS)

    Jones, K.R.W.

    1990-01-01

    A refined bound for the correlation information of an N-trial apparatus is developed via a heuristic argument for Hilbert spaces of arbitrary finite dimensionality. Conditional upon the proof of an easily motivated inequality, it was possible to find the optimal apparatus for large-ensemble quantum inference, thereby solving the asymptotic optimal state determination problem. In this way an alternative inferential uncertainty principle is defined, which is then contrasted with the usual Heisenberg uncertainty principle. 6 refs

  2. Quality and efficiency in high dimensional Nearest neighbor search

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2009-01-01

    Nearest neighbor (NN) search in high dimensional space is an important problem in many applications. Ideally, a practical solution (i) should be implementable in a relational database, and (ii) its query cost should grow sub-linearly with the dataset size, regardless of the data and query distributions. Despite the bulk of NN literature, no solution fulfills both requirements, except locality sensitive hashing (LSH). The existing LSH implementations are either rigorous or ad hoc. Rigorous-LSH ensures good quality of query results, but requires expensive space and query cost. Although adhoc-LSH is more efficient, it abandons quality control, i.e., the neighbor it outputs can be arbitrarily bad. As a result, currently no method is able to ensure both quality and efficiency simultaneously in practice. Motivated by this, we propose a new access method called the locality sensitive B-tree (LSB-tree) that enables fast high-dimensional NN search with excellent quality. The combination of several LSB-trees leads to a structure called the LSB-forest that ensures the same result quality as rigorous-LSH, but reduces its space and query cost dramatically. The LSB-forest also outperforms adhoc-LSH, even though the latter has no quality guarantee. Besides its appealing theoretical properties, the LSB-tree itself also serves as an effective index that consumes linear space, and supports efficient updates. Our extensive experiments confirm that the LSB-tree is faster than (i) the state of the art of exact NN search by two orders of magnitude, and (ii) the best (linear-space) method of approximate retrieval by an order of magnitude, and at the same time, returns neighbors with much better quality.
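    The rigorous-vs-ad hoc trade-off above builds on the basic random-projection LSH scheme. As a rough illustration (a plain E2LSH-style sketch, not the LSB-tree; the class name and all parameter defaults here are assumptions for the example), the following hashes points with quantized random projections and answers queries from the union of matching buckets:

```python
import numpy as np

class LSHIndex:
    """Random-projection (E2LSH-style) index: each table hashes a point to a
    bucket key made of n_bits quantized projections floor((a.x + b) / w)."""

    def __init__(self, dim, n_tables=8, n_bits=4, w=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(size=(n_tables, n_bits, dim))
        self.b = rng.uniform(0.0, w, size=(n_tables, n_bits))
        self.w = w
        self.tables = [dict() for _ in range(n_tables)]
        self.points = []

    def _keys(self, x):
        # One integer tuple (bucket key) per hash table.
        h = np.floor((self.a @ x + self.b) / self.w).astype(int)
        return [tuple(row) for row in h]

    def add(self, x):
        idx = len(self.points)
        self.points.append(np.asarray(x, dtype=float))
        for table, key in zip(self.tables, self._keys(x)):
            table.setdefault(key, []).append(idx)

    def query(self, q):
        """Return the index of the closest point among bucket candidates."""
        cand = set()
        for table, key in zip(self.tables, self._keys(q)):
            cand.update(table.get(key, []))
        if not cand:                      # no bucket hit: fall back to a scan
            cand = range(len(self.points))
        return min(cand, key=lambda i: np.linalg.norm(self.points[i] - q))
```

    Like adhoc-LSH, a scheme this simple gives no quality guarantee: the true nearest neighbor may hash into no inspected bucket, which is exactly the gap the LSB-tree is designed to close.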

  3. EEMD Independent Extraction for Mixing Features of Rotating Machinery Reconstructed in Phase Space

    Directory of Open Access Journals (Sweden)

    Zaichao Ma

    2015-04-01

    Empirical Mode Decomposition (EMD), due to its adaptive decomposition property for non-linear and non-stationary signals, has been widely used in vibration analyses for rotating machinery. However, EMD suffers from mode mixing, which makes it difficult to extract features independently. Although the improved EMD, well known as ensemble EMD (EEMD), has been proposed, mode mixing is alleviated only to a certain degree. Moreover, EEMD needs to determine the amplitude of the added noise. In this paper, we propose Phase Space Ensemble Empirical Mode Decomposition (PSEEMD), integrating Phase Space Reconstruction (PSR) and Manifold Learning (ML), to modify EEMD. We also provide the principle and detailed procedure of PSEEMD, and perform analyses on a simulation signal and an actual vibration signal derived from a rubbing rotor. The results show that PSEEMD is more efficient and convenient than EEMD in extracting the mixing features from the investigated signal and in optimizing the amplitude of the necessary added noise. Additionally, PSEEMD can extract weak features interfered with by a certain amount of noise.
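    The PSR step that PSEEMD builds on is a Takens-style delay-coordinate embedding of the 1-D vibration signal. A minimal sketch of that step alone (function name and parameters are assumptions; the paper's full pipeline adds EEMD and manifold learning on top):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens delay-coordinate embedding: map a 1-D series x onto points
    (x[i], x[i+tau], ..., x[i+(dim-1)*tau]) in a dim-dimensional phase space."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau          # number of reconstructed points
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)
```

    Each row of the result is one reconstructed phase-space point; choosing the embedding dimension `dim` and delay `tau` is itself a classical topic (e.g. false nearest neighbors, mutual information).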

  4. A preliminary feasibility study of passive in-core thermionic reactors for highly compact space nuclear power systems

    International Nuclear Information System (INIS)

    Parlos, A.G.; Khan, E.U.; Frymire, R.; Negron, S.; Thomas, J.K.; Peddicord, K.L.

    1991-01-01

    Results of a preliminary feasibility study on a new concept for highly compact space reactor power systems are presented. Notwithstanding the preliminary nature of the present study, the results, which include a new space reactor configuration and its associated technologies, indicate promising avenues for the development of highly compact space reactors. The calculations reported in this study include a neutronic design trade-off study using a two-dimensional neutron transport model, as well as a simplified one-dimensional thermal analysis of the reactor core. In arriving at the most desirable configuration, various options have been considered and analyzed, and their advantages and disadvantages have been compared. However, because of space limitations, only the most favorable reactor configuration is presented in this summary

  5. An optical flow-based state-space model of the vocal folds

    DEFF Research Database (Denmark)

    Granados, Alba; Brunskog, Jonas

    2017-01-01

    High-speed movies of the vocal fold vibration are valuable data to reveal vocal fold features for voice pathology diagnosis. This work presents a suitable Bayesian model and a purely theoretical discussion for further development of a framework for continuum biomechanical features estimation. A linear and Gaussian nonstationary state-space model is proposed and thoroughly discussed. The evolution model is based on a self-sustained three-dimensional finite element model of the vocal folds, and the observation model involves a dense optical flow algorithm. The results show that the method is able to capture different deformation patterns between the computed optical flow and the finite element deformation, controlled by the choice of the model tissue parameters.

  6. An optical flow-based state-space model of the vocal folds.

    Science.gov (United States)

    Granados, Alba; Brunskog, Jonas

    2017-06-01

    High-speed movies of the vocal fold vibration are valuable data to reveal vocal fold features for voice pathology diagnosis. This work presents a suitable Bayesian model and a purely theoretical discussion for further development of a framework for continuum biomechanical features estimation. A linear and Gaussian nonstationary state-space model is proposed and thoroughly discussed. The evolution model is based on a self-sustained three-dimensional finite element model of the vocal folds, and the observation model involves a dense optical flow algorithm. The results show that the method is able to capture different deformation patterns between the computed optical flow and the finite element deformation, controlled by the choice of the model tissue parameters.

  7. A Dissimilarity Measure for Clustering High- and Infinite Dimensional Data that Satisfies the Triangle Inequality

    Science.gov (United States)

    Socolovsky, Eduardo A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The cosine or correlation measures of similarity used to cluster high dimensional data are interpreted as projections, and the orthogonal components are used to define a complementary dissimilarity measure to form a similarity-dissimilarity measure pair. Using a geometrical approach, a number of properties of this pair are established. This approach is also extended to general inner-product spaces of any dimension. These properties include the triangle inequality for the defined dissimilarity measure, error estimates for the triangle inequality and bounds on both measures that can be obtained with a few floating-point operations from previously computed values of the measures. The bounds and error estimates for the similarity and dissimilarity measures can be used to reduce the computational complexity of clustering algorithms and enhance their scalability, and the triangle inequality allows the design of clustering algorithms for high dimensional distributed data.
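    The projection interpretation above suggests a concrete pair: cosine similarity s, and as dissimilarity the norm of the orthogonal component of the normalized projection, d = sqrt(1 - s^2), i.e. the sine of the angle between the vectors. A small sketch (function names are illustrative), restricted to nonnegative data so that all angles lie in [0, pi/2], where the triangle inequality for d is easiest to verify numerically:

```python
import numpy as np

def cos_sim(x, y):
    """Cosine similarity: the projection ('similarity') half of the pair."""
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def dissim(x, y):
    """Norm of the orthogonal component of the normalized projection,
    i.e. the sine of the angle between x and y."""
    s = cos_sim(x, y)
    return float(np.sqrt(max(0.0, 1.0 - s * s)))
```

    For angles a, b in [0, pi/2] one has sin(a + b) <= sin a + sin b, which is the geometric reason the dissimilarity can satisfy the triangle inequality on such data.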

  8. Evaluation of Superficial and Dimensional Quality Features in Metallic Micro-Channels Manufactured by Micro-End-Milling

    Directory of Open Access Journals (Sweden)

    Claudio Giardini

    2013-04-01

    Miniaturization encourages the development of new manufacturing processes capable of fabricating features, like micro-channels, for different applications, such as fuel cells, heat exchangers, microfluidic devices and micro-electromechanical systems (MEMS). Many studies have been conducted on heat and fluid transfer in micro-channels, and the results appear to deviate significantly from conventional theory, due to measurement errors and fabrication methods. The present research, in order to deal with this opportunity, is focused on a set of experiments in the micro-milling of channels made of aluminum, titanium alloys and stainless steel, varying parameters such as spindle speed, depth of cut per pass (ap), channel depth (d), feed per tooth (fz) and coolant application. The experimental results were analyzed in terms of dimensional error, channel profile shape deviation from rectangular and surface quality (burr and roughness). The micro-milling process was capable of offering the quality features required of micro-channeled devices. Critical phenomena, like run-out, ploughing, minimum chip thickness and tool wear, were encountered as explanations for the deviations in shape and for the surface quality of the micro-channels. The application of coolant and a low depth of cut per pass were significant in obtaining better superficial quality and a smaller dimensional error. In conclusion, the integration of superficial and geometrical features in the study of the quality of micro-channeled devices made of different metallic materials contributes to the understanding of the impact of calibrated cutting conditions in MEMS applications.

  9. Tracking in Object Action Space

    DEFF Research Database (Denmark)

    Krüger, Volker; Herzog, Dennis

    2013-01-01

    the space of the object affordances, i.e., the space of possible actions that are applied on a given object. This way, 3D body tracking reduces to action tracking in the object (and context) primed parameter space of the object affordances. This reduces the high-dimensional joint-space to a low...

  10. Application of space-angle synthesis to two-dimensional neutral-particle transport problems of weapon physics

    International Nuclear Information System (INIS)

    Roberds, R.M.

    1975-01-01

    A space-angle synthesis (SAS) method has been developed for treating the steady-state, two-dimensional transport of neutrons and gamma rays from a point source of simulated nuclear weapon radiation in air. The method was validated by applying it to the problem of neutron transport from a point source in air over a ground interface, and then comparing the results to those obtained by DOT, a state-of-the-art, discrete-ordinates code. In the SAS method, the energy dependence of the Boltzmann transport equation was treated in the standard multigroup manner. The angular dependence was treated by expanding the flux in specially tailored trial functions and applying the method of weighted residuals, which analytically integrated the transport equation over all angles. The weighted-residual approach was analogous to the conventional spherical-harmonics (P_N) method, with the exception that the tailored expansion allowed for more rapid convergence than a spherical-harmonics P_1 expansion and resulted in a greater degree of accuracy. The trial functions used in the expansion were odd and even combinations of selected trial solutions, the trial solutions being shaped ellipsoids which approximated the angular distribution of the neutron flux in one-dimensional space. The parameters which described the shape of the ellipsoid varied only with energy group and spatial medium, and were obtained from a one-dimensional discrete-ordinates calculation. Thus, approximate transport solutions were made available for all two-dimensional problems of a certain class by using tabulated parameters obtained from a single, one-dimensional calculation

  11. High dimensional entanglement

    CSIR Research Space (South Africa)

    McLaren, M.

    2012-07-01

    High dimensional entanglement. M. McLaren (1,2), F.S. Roux (1) & A. Forbes (1,2,3). 1. CSIR National Laser Centre, PO Box 395, Pretoria 0001; 2. School of Physics, University of Stellenbosch, Private Bag X1, 7602, Matieland; 3. School of Physics, University of Kwazulu...

  12. Low dimensionality semiconductors: modelling of excitons via a fractional-dimensional space

    Science.gov (United States)

    Christol, P.; Lefebvre, P.; Mathieu, H.

    1993-09-01

    An interaction space with a fractional dimension is used to calculate in a simple way the binding energies of excitons confined in quantum wells, superlattices and quantum well wires. A very simple formulation provides this energy versus the non-integer dimensionality of the physical environment of the electron-hole pair. The problem then comes down to determining the dimensionality α. We show that the latter can be expressed from the characteristics of the microstructure. α varies continuously from 3 (bulk material) to 2 for quantum wells and superlattices, and from 3 to 1 for quantum well wires, according to the confinement of the carrier motion. Quite fair agreement is obtained with other theoretical calculations and experimental data, and this model coherently describes both three-dimensional limiting cases for quantum wells (L_w → 0 and L_w → ∞) and the whole range of periods of the superlattice. Such a simple model presents great interest for spectroscopists, though it does not aim to compete with accurate but often tedious variational calculations.
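    The standard result underlying this fractional-dimensional approach (quoted here from the general literature on fractional-dimensional excitons, not verbatim from this paper, so worth checking against the original) is that the exciton bound states form a generalized Rydberg series in the dimensionality α:

```latex
E_n \;=\; -\,\frac{R^{*}}{\left(n + \dfrac{\alpha - 3}{2}\right)^{2}}, \qquad n = 1, 2, \ldots
```

    where R* is the effective Rydberg of the pair. For α = 3 this reduces to the bulk series E_n = -R*/n², while the ideal two-dimensional limit α = 2 gives the familiar enhancement E_1 = -4R*.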

  13. Evaluating Stability and Comparing Output of Feature Selectors that Optimize Feature Subset Cardinality

    Czech Academy of Sciences Publication Activity Database

    Somol, Petr; Novovičová, Jana

    2010-01-01

    Roč. 32, č. 11 (2010), s. 1921-1939 ISSN 0162-8828 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593; GA ČR GA102/07/1594 Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : feature selection * feature stability * stability measures * similarity measures * sequential search * individual ranking * feature subset-size optimization * high dimensionality * small sample size Subject RIV: BD - Theory of Information Impact factor: 5.027, year: 2010 http://library.utia.cas.cz/separaty/2010/RO/somol-0348726.pdf

  14. The N=4 supersymmetric E8 gauge theory and coset space dimensional reduction

    International Nuclear Information System (INIS)

    Olive, D.; West, P.

    1983-01-01

    Reasons are given to suggest that the N=4 supersymmetric E8 gauge theory be considered as a serious candidate for a physical theory. The symmetries of this theory are broken by a scheme based on coset space dimensional reduction. The resulting theory possesses four conventional generations of low-mass fermions together with their mirror particles. (orig.)

  15. Large parallel volumes of finite and compact sets in d-dimensional Euclidean space

    DEFF Research Database (Denmark)

    Kampf, Jürgen; Kiderlen, Markus

    The r-parallel volume V(C_r) of a compact subset C in d-dimensional Euclidean space is the volume of the set C_r of all points of Euclidean distance at most r > 0 from C. According to Steiner's formula, V(C_r) is a polynomial in r when C is convex. For finite sets C satisfying a certain geometric...

  16. High-resolution Self-Organizing Maps for advanced visualization and dimension reduction.

    Science.gov (United States)

    Saraswati, Ayu; Nguyen, Van Tuc; Hagenbuchner, Markus; Tsoi, Ah Chung

    2018-05-04

    Kohonen's Self-Organizing feature Map (SOM) provides an effective way to project high dimensional input features onto a low dimensional display space while preserving the topological relationships among the input features. Recent advances in algorithms that take advantage of modern computing hardware introduced the concept of high-resolution SOMs (HRSOMs). This paper investigates the capabilities and applicability of the HRSOM as a visualization tool for cluster analysis and its suitability to serve as a pre-processor in ensemble learning models. The evaluation is conducted on a number of established benchmarks and real-world learning problems, namely, the policeman benchmark, two web spam detection problems, a network intrusion detection problem, and a malware detection problem. It is found that the visualization resulting from an HRSOM provides new insights concerning these learning problems. It is furthermore shown empirically that broad benefits from the use of HRSOMs in both clustering and classification problems can be expected.
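    An HRSOM differs from a classical SOM mainly in map resolution and training efficiency; the underlying projection idea can be sketched with a small classical Kohonen map. All hyperparameters and the simple linear decay schedules below are illustrative assumptions, not the paper's training procedure:

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a minimal 2-D Kohonen SOM and return the (gy, gx, d) weight grid."""
    rng = np.random.default_rng(seed)
    gy, gx = grid
    # Grid coordinates of every unit, used by the neighborhood kernel.
    coords = np.stack(np.meshgrid(np.arange(gy), np.arange(gx), indexing="ij"),
                      axis=-1).reshape(-1, 2).astype(float)
    w = rng.uniform(data.min(), data.max(), size=(gy * gx, data.shape[1]))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
        frac = t / iters
        lr = lr0 * (1.0 - frac)                       # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5           # shrinking neighborhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian neighborhood
        w += lr * h[:, None] * (x - w)                # pull units toward x
    return w.reshape(gy, gx, -1)

def project(w, x):
    """Return the grid position of the best-matching unit for sample x."""
    gy, gx, d = w.shape
    flat = w.reshape(-1, d)
    return np.unravel_index(int(np.argmin(((flat - x) ** 2).sum(axis=1))),
                            (gy, gx))
```

    Well-separated clusters in the input space land on distinct regions of the grid, which is the topology-preserving property the visualization relies on; an HRSOM applies the same idea at a much finer grid resolution.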

  17. Holography in three-dimensional Kerr-de Sitter space with a gravitational Chern-Simons term

    International Nuclear Information System (INIS)

    Park, Mu-In

    2008-01-01

    The holographic description of the three-dimensional Kerr-de Sitter space with a gravitational Chern-Simons term is studied, in the context of dS/CFT correspondence. The space has only one (cosmological) event horizon and its mass and angular momentum are identified from the holographic energy-momentum tensor at the asymptotic infinity. The thermodynamic entropy of the cosmological horizon is computed directly from the first law of thermodynamics, with the conventional Hawking temperature, and it is found that the usual Gibbons-Hawking entropy is modified. It is remarked that, due to the gravitational Chern-Simons term, (a) the results go beyond the analytic continuation from AdS, (b) the maximum-mass/N-bound conjecture may be violated and (c) the three-dimensional cosmology is chiral. A statistical mechanical computation of the entropy, from a Cardy-like formula for a dual CFT at the asymptotic boundary, is discussed. Some remarks on the technical differences in the Chern-Simons energy-momentum tensor, from the literature, are also made

  18. Direct observation of strain in bulk subgrains and dislocation walls by high angular resolution three-dimensional X-ray diffraction

    DEFF Research Database (Denmark)

    Jakobsen, Bo; Lienert, U.; Almer, J.

    2008-01-01

    The X-ray diffraction (XRD) method "high angular resolution 3DXRD" is briefly introduced, and results are presented for a single bulk grain in a polycrystalline copper sample deformed in tension. It is found that the three-dimensional reciprocal-space intensity distribution of a 400 reflection...

  19. Half-integer resonance crossing in high-intensity rings

    Directory of Open Access Journals (Sweden)

    A. V. Fedotov

    2002-02-01

    A detailed study of the influence of space charge on the crossing of second-order resonances is presented and associated with the space-charge limit of high-intensity rings. Two-dimensional simulation studies are compared with envelope models, which agree in the finding of an increased intensity limit due to the coherent frequency shift. This result is also found for realistic bunched beams with multiturn injection painting. Characteristic features such as the influence of tune splitting, structure resonances, and the role of envelope instabilities are discussed in detail. The theoretical limits are found to be in good agreement with the performance of high-intensity proton machines.

  20. Quantum correlation of high dimensional system in a dephasing environment

    Science.gov (United States)

    Ji, Yinghua; Ke, Qiang; Hu, Juju

    2018-05-01

    For a high dimensional spin-S system embedded in a dephasing environment, we theoretically analyze the time evolution of quantum correlation and entanglement via the Frobenius norm and negativity. The quantum correlation dynamics can be considered as a function of the decoherence parameters, including the ratio between the system oscillator frequency ω0 and the reservoir cutoff frequency ωc, and the environment temperature. It is shown that the quantum correlation can not only measure the nonclassical correlation of the considered system, but also exhibits better robustness against dissipation. In addition, the decoherence exhibits non-Markovian features and the quantum correlation freeze phenomenon. The former is much weaker than in the sub-Ohmic or Ohmic thermal reservoir environment.

  1. A Kronecker product splitting preconditioner for two-dimensional space-fractional diffusion equations

    Science.gov (United States)

    Chen, Hao; Lv, Wen; Zhang, Tongtong

    2018-05-01

    We study preconditioned iterative methods for the linear system arising in the numerical discretization of a two-dimensional space-fractional diffusion equation. Our approach is based on a formulation of the discrete problem that is shown to be the sum of two Kronecker products. By making use of an alternating Kronecker product splitting iteration technique we establish a class of fixed-point iteration methods. Theoretical analysis shows that the new method converges to the unique solution of the linear system. Moreover, the optimal choice of the involved iteration parameters and the corresponding asymptotic convergence rate are computed exactly when the eigenvalues of the system matrix are all real. The basic iteration is accelerated by a Krylov subspace method like GMRES. The corresponding preconditioner is in the form of a Kronecker product structure and requires at each iteration the solution of a set of discrete one-dimensional fractional diffusion equations. We use structure-preserving approximations to the discrete one-dimensional fractional diffusion operators in the action of the preconditioning matrix. Numerical examples are presented to illustrate the effectiveness of this approach.
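    The alternating Kronecker-product splitting can be sketched on a dense toy problem. The code below is an HSS-style sketch under stated assumptions, not the authors' method: a standard (non-fractional) 1-D diffusion matrix stands in for the fractional operators, the splitting parameter alpha is fixed rather than chosen optimally, and there is no GMRES acceleration or structure-preserving fast solve:

```python
import numpy as np

def kron_splitting_solve(T1, T2, b, alpha=1.0, tol=1e-10, maxit=500):
    """Solve (I x T1 + T2 x I) u = b (x = Kronecker product) by an alternating
    splitting iteration: each half-step inverts only one Kronecker factor
    shifted by alpha, in the spirit of HSS/ADI-type methods."""
    n, m = T1.shape[0], T2.shape[0]
    A1 = np.kron(np.eye(m), T1)   # I  x T1
    A2 = np.kron(T2, np.eye(n))   # T2 x I
    M = A1 + A2                   # the Kronecker-sum system matrix
    E = np.eye(n * m)
    u = np.zeros_like(b, dtype=float)
    for _ in range(maxit):
        # half-step 1: (alpha*E + A1) u_half = (alpha*E - A2) u + b
        u_half = np.linalg.solve(alpha * E + A1, (alpha * E - A2) @ u + b)
        # half-step 2: (alpha*E + A2) u_new = (alpha*E - A1) u_half + b
        u = np.linalg.solve(alpha * E + A2, (alpha * E - A1) @ u_half + b)
        if np.linalg.norm(M @ u - b) <= tol * np.linalg.norm(b):
            break
    return u
```

    For symmetric positive definite T1 and T2 this alternating iteration converges for any alpha > 0; in the paper's setting each shifted solve reduces to a set of one-dimensional fractional diffusion problems, which is what makes the splitting cheap as a preconditioner.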

  2. High-resolution non-destructive three-dimensional imaging of integrated circuits.

    Science.gov (United States)

    Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H R; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel

    2017-03-15

    Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography-a high-resolution coherent diffractive imaging technique-can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.

  3. High-resolution non-destructive three-dimensional imaging of integrated circuits

    Science.gov (United States)

    Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H. R.; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel

    2017-03-01

    Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography—a high-resolution coherent diffractive imaging technique—can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.

  4. Heuristic algorithms for feature selection under Bayesian models with block-diagonal covariance structure.

    Science.gov (United States)

    Foroughi Pour, Ali; Dalton, Lori A

    2018-03-21

    Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable, since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms. The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and leukemia data indicates they also output many of the genes and pathways linked to the cancers under study. Bayesian feature selection is a promising framework for small-sample high-dimensional data, in particular biomarker discovery applications. When applied to cancer data, these algorithms output many genes already shown to be involved in cancer as well as potentially new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, particularly useful for studying gene interactions and gene networks.

  5. Mixed Interaction Spaces

    DEFF Research Database (Denmark)

    Lykke-Olesen, Andreas; Eriksson, E.; Hansen, T.R.

In this paper, we describe a new interaction technique for mobile devices named Mixed Interaction Space that uses the camera of the mobile device to track the position, size and rotation of a fixed-point. In this demonstration we will present a system that uses a hand-drawn circle, colored object...... or a person’s face as a fixed-point to determine the location of the device. We use these features as a four-dimensional input vector to a set of different applications....

  6. Laurent series expansion of sunrise-type diagrams using configuration space techniques

    International Nuclear Information System (INIS)

    Groote, S.; Koerner, J.G.; Pivovarov, A.A.

    2004-01-01

We show that configuration space techniques can be used to efficiently calculate the complete Laurent series ε-expansion of sunrise-type diagrams to any loop order in D-dimensional space-time for any external momentum and for arbitrary mass configurations. For negative powers of ε the results are obtained in analytical form. For positive powers of ε, including the finite ε⁰ contribution, the result is obtained numerically in terms of low-dimensional integrals. We present general features of the calculation and provide exemplary results up to five-loop order, which are compared to available results in the literature. (orig.)

  7. Quantum trajectories in complex space: One-dimensional stationary scattering problems

    International Nuclear Information System (INIS)

    Chou, C.-C.; Wyatt, Robert E.

    2008-01-01

    One-dimensional time-independent scattering problems are investigated in the framework of the quantum Hamilton-Jacobi formalism. The equation for the local approximate quantum trajectories near the stagnation point of the quantum momentum function is derived, and the first derivative of the quantum momentum function is related to the local structure of quantum trajectories. Exact complex quantum trajectories are determined for two examples by numerically integrating the equations of motion. For the soft potential step, some particles penetrate into the nonclassical region, and then turn back to the reflection region. For the barrier scattering problem, quantum trajectories may spiral into the attractors or from the repellers in the barrier region. Although the classical potentials extended to complex space show different pole structures for each problem, the quantum potentials present the same second-order pole structure in the reflection region. This paper not only analyzes complex quantum trajectories and the total potentials for these examples but also demonstrates general properties and similar structures of the complex quantum trajectories and the quantum potentials for one-dimensional time-independent scattering problems

  8. Learning Low Dimensional Convolutional Neural Networks for High-Resolution Remote Sensing Image Retrieval

    Directory of Open Access Journals (Sweden)

    Weixun Zhou

    2017-05-01

    Full Text Available Learning powerful feature representations for image retrieval has always been a challenging task in the field of remote sensing. Traditional methods focus on extracting low-level hand-crafted features which are not only time-consuming but also tend to achieve unsatisfactory performance due to the complexity of remote sensing images. In this paper, we investigate how to extract deep feature representations based on convolutional neural networks (CNNs for high-resolution remote sensing image retrieval (HRRSIR. To this end, several effective schemes are proposed to generate powerful feature representations for HRRSIR. In the first scheme, a CNN pre-trained on a different problem is treated as a feature extractor since there are no sufficiently-sized remote sensing datasets to train a CNN from scratch. In the second scheme, we investigate learning features that are specific to our problem by first fine-tuning the pre-trained CNN on a remote sensing dataset and then proposing a novel CNN architecture based on convolutional layers and a three-layer perceptron. The novel CNN has fewer parameters than the pre-trained and fine-tuned CNNs and can learn low dimensional features from limited labelled images. The schemes are evaluated on several challenging, publicly available datasets. The results indicate that the proposed schemes, particularly the novel CNN, achieve state-of-the-art performance.

  9. Entanglement of arbitrary superpositions of modes within two-dimensional orbital angular momentum state spaces

    International Nuclear Information System (INIS)

    Jack, B.; Leach, J.; Franke-Arnold, S.; Ireland, D. G.; Padgett, M. J.; Yao, A. M.; Barnett, S. M.; Romero, J.

    2010-01-01

    We use spatial light modulators (SLMs) to measure correlations between arbitrary superpositions of orbital angular momentum (OAM) states generated by spontaneous parametric down-conversion. Our technique allows us to fully access a two-dimensional OAM subspace described by a Bloch sphere, within the higher-dimensional OAM Hilbert space. We quantify the entanglement through violations of a Bell-type inequality for pairs of modal superpositions that lie on equatorial, polar, and arbitrary great circles of the Bloch sphere. Our work shows that SLMs can be used to measure arbitrary spatial states with a fidelity sufficient for appropriate quantum information processing systems.

  10. A proposed framework on hybrid feature selection techniques for handling high dimensional educational data

    Science.gov (United States)

    Shahiri, Amirah Mohamed; Husain, Wahidah; Rashid, Nur'Aini Abd

    2017-10-01

Huge amounts of data in educational datasets may cause problems in producing quality data. Recently, data mining approaches have been increasingly used by educational data mining researchers to analyze data patterns. However, many research studies have concentrated on selecting suitable learning algorithms instead of performing a feature selection process. As a result, these data suffer from high computational complexity and require longer computation times for classification. The main objective of this research is to provide an overview of feature selection techniques that have been used to analyze the most significant features. This research then proposes a framework to improve the quality of students' datasets. The proposed framework uses filter- and wrapper-based techniques to support the prediction process in a future study.
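The filter-then-wrapper idea described above can be sketched as a two-stage pipeline. This is a minimal illustration, not the authors' proposed framework; the particular filter (ANOVA F-score) and wrapper (greedy forward selection) are assumptions chosen because they are readily available in scikit-learn.

```python
# Hypothetical filter + wrapper feature selection pipeline (a sketch,
# not the framework from the record above).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, SequentialFeatureSelector, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Synthetic stand-in for an educational dataset.
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)

# Filter stage: keep the 20 features with the highest ANOVA F-score.
filt = SelectKBest(f_classif, k=20)
# Wrapper stage: greedy forward selection guided by a classifier.
wrap = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                 n_features_to_select=5,
                                 direction="forward", cv=3)
pipe = Pipeline([("filter", filt), ("wrapper", wrap),
                 ("clf", LogisticRegression(max_iter=1000))])
pipe.fit(X, y)
acc = pipe.score(X, y)
```

The cheap filter prunes most of the dimensionality before the expensive classifier-in-the-loop wrapper runs, which is the usual motivation for hybrid schemes.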

  11. Features of public open spaces and physical activity among children: findings from the CLAN study.

    Science.gov (United States)

    Timperio, Anna; Giles-Corti, Billie; Crawford, David; Andrianopoulos, Nick; Ball, Kylie; Salmon, Jo; Hume, Clare

    2008-11-01

To examine associations between features of public open spaces and children's physical activity. 163 children aged 8-9 years and 334 adolescents aged 13-15 years from Melbourne, Australia participated in 2004. A Geographic Information System was used to identify all public open spaces (POS) within 800 m of participants' homes and their closest POS. The features of all POS identified were audited in 2004/5. Accelerometers measured moderate-to-vigorous physical activity (MVPA) after school and on weekends. Linear regression analyses examined associations between features of the closest POS and participants' MVPA. Most participants had a POS within 800 m of their home. The presence of playgrounds was positively associated with younger boys' weekend MVPA (B=24.9 min/day; p<0.05). Other features of POS were associated with participants' MVPA, although mixed associations were evident. Further research is required to clarify these complex relationships.

  12. Clustering high dimensional data using RIA

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Nazrina [School of Quantitative Sciences, College of Arts and Sciences, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia)

    2015-05-15

Clustering may simply represent a convenient method for organizing a large data set so that it can easily be understood and information can efficiently be retrieved. However, identifying clusters in high-dimensional data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional functions cannot capture the pattern dissimilarity among objects. In this article, we use an alternative dissimilarity measure called the Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. We notice that it can obtain clusters easily and hence avoid the curse of dimensionality. It also manages to cluster large data sets with mixed numeric and categorical values.
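An angle-based dissimilarity in principal-component space, in the spirit of the record above, can be sketched as follows. This is only an illustrative construction: the actual RIA involves robust covariance estimation and robust principal component scores, which are omitted here.

```python
# Sketch of an angle dissimilarity computed in principal-component space
# (illustrative only; not the full RIA construction).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))

# Eigenstructure of the covariance matrix -> principal component scores.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
vals, vecs = np.linalg.eigh(cov)
scores = Xc @ vecs[:, ::-1]          # columns ordered by decreasing eigenvalue

def angle_dissimilarity(a, b):
    """Angle between two score vectors, in [0, pi]."""
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(c, -1.0, 1.0))

d = angle_dissimilarity(scores[0], scores[1])
```

Because the dissimilarity is an angle rather than a Euclidean distance, it does not grow with the number of dimensions, which is one way such measures sidestep the curse of dimensionality.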

  13. Asymptotically Honest Confidence Regions for High Dimensional

    DEFF Research Database (Denmark)

    Caner, Mehmet; Kock, Anders Bredahl

    While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However...... develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow...

  14. Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding.

    Science.gov (United States)

    Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping

    2015-07-06

Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually distribute on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so how to implement feature extraction and dimensionality reduction and improve recognition performance is a crucial task. In this paper, a novel machinery fault diagnosis approach is proposed based on a statistical locally linear embedding (S-LLE) algorithm, which is an extension of LLE that exploits fault class label information. The approach first builds high-dimensional feature vectors from vibration signals via time-domain, frequency-domain and empirical mode decomposition (EMD) feature extraction, and then extracts the intrinsic manifold features by translating this complex mode space into a salient low-dimensional feature space with the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, in the reduced feature space, pattern classification and fault diagnosis are carried out easily and rapidly by a classifier. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach obviously improves the classification performance of fault pattern recognition and outperforms the other traditional approaches.
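The features → manifold learning → classifier pipeline described above can be sketched with off-the-shelf components. Two substitutions are assumptions for the sake of a runnable example: scikit-learn's unsupervised LocallyLinearEmbedding stands in for the supervised S-LLE variant (which is not in standard libraries), and the digits dataset stands in for vibration-derived feature vectors.

```python
# Sketch of a manifold-learning fault-diagnosis pipeline; plain LLE and the
# digits dataset are stand-ins, not the S-LLE method or bearing data.
from sklearn.datasets import load_digits
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Dimensionality reduction: 64-dim inputs -> 10-dim manifold coordinates.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=10, random_state=0)
Z_tr = lle.fit_transform(X_tr)
Z_te = lle.transform(X_te)

# Classification in the reduced feature space.
clf = SVC(kernel="rbf").fit(Z_tr, y_tr)
acc = clf.score(Z_te, y_te)
```

The supervised S-LLE would additionally use the class labels when constructing the embedding, so the classes separate better in the low-dimensional space than with plain LLE.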

  15. Classical testing particles and (4 + N)-dimensional theories of space-time

    International Nuclear Information System (INIS)

    Nieto-Garcia, J.A.

    1986-01-01

The Lagrangian theory of a classical relativistic spinning test particle (top) developed by Hanson and Regge and by Hojman is briefly reviewed. Special attention is devoted to the constraints imposed on the dynamical variables associated with the system of this theory. The equations for a relativistic top are formulated in a way suitable for use in the study of geometrical properties of the (4 + N)-dimensional Kaluza-Klein background. It is shown that the equations of motion of a top in five dimensions reduce to the Hanson-Regge generalization of the Bargmann-Michel-Telegdi equations of motion in four dimensions when suitable conditions on the spin tensor are imposed. The classical bosonic relativistic string theory is discussed and the connection of this theory with the top theory is examined. It is found that the relation between the string and the top leads naturally to the consideration of a 3-dimensional extended system (called terron) which sweeps out a 4-dimensional surface as it evolves in space-time. By using a square root procedure based on ideas by Teitelboim a theory of a supersymmetric top is developed. The quantization of the new supersymmetric system is discussed. Conclusions and suggestions for further research are given

  16. Convergence rates and finite-dimensional approximations for nonlinear ill-posed problems involving monotone operators in Banach spaces

    International Nuclear Information System (INIS)

    Nguyen Buong.

    1992-11-01

The purpose of this paper is to investigate convergence rates for an operator version of Tikhonov regularization, constructed by a dual mapping, for nonlinear ill-posed problems involving monotone operators in real reflexive Banach spaces. The obtained results are considered in combination with finite-dimensional approximations of the space. An example is considered for illustration. (author). 15 refs

  17. Reliable Fault Classification of Induction Motors Using Texture Feature Extraction and a Multiclass Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Jia Uddin

    2014-01-01

Full Text Available This paper proposes a method for the reliable fault detection and classification of induction motors using two-dimensional (2D) texture features and a multiclass support vector machine (MCSVM). The proposed model first converts time-domain vibration signals to 2D gray images whose texture (or repetitive) patterns reflect fault characteristics, and extracts these texture features by generating the dominant neighborhood structure (DNS) map. Principal component analysis (PCA) is then used to reduce the dimensionality of the feature vector containing the extracted texture features, since a high-dimensional feature vector can degrade classification performance; this yields an effective feature vector of discriminative fault features for diagnosis. Finally, the proposed approach utilizes one-against-all (OAA) multiclass support vector machines (MCSVMs) to identify induction motor failures. In this study, the Gaussian radial basis function kernel is used with the OAA MCSVMs to deal with nonlinear fault features. Experimental results demonstrate that the proposed approach outperforms three state-of-the-art fault diagnosis algorithms in terms of fault classification accuracy, yielding an average classification accuracy of 100% even in noisy environments.
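The PCA-then-one-against-all-SVM stage of the approach above can be sketched as follows. The DNS texture-feature extraction from vibration images is omitted; random synthetic features stand in for it, which is an assumption made only so the example runs.

```python
# Sketch of PCA dimensionality reduction followed by one-against-all
# RBF-kernel SVMs; synthetic features replace the DNS texture features.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Four fault classes, 100-dimensional feature vectors.
X, y = make_classification(n_samples=300, n_features=100, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

model = make_pipeline(
    PCA(n_components=10),                               # reduce dimensionality
    OneVsRestClassifier(SVC(kernel="rbf", gamma="scale"))  # OAA MCSVM
)
model.fit(X, y)
acc = model.score(X, y)
```

One-against-all trains one binary RBF-SVM per fault class and assigns each sample to the class whose machine gives the highest decision score.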

  18. Evaluating Clustering in Subspace Projections of High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Günnemann, Stephan; Assent, Ira

    2009-01-01

    Clustering high dimensional data is an emerging research field. Subspace clustering or projected clustering group similar objects in subspaces, i.e. projections, of the full space. In the past decade, several clustering paradigms have been developed in parallel, without thorough evaluation...... and comparison between these paradigms on a common basis. Conclusive evaluation and comparison is challenged by three major issues. First, there is no ground truth that describes the "true" clusters in real world data. Second, a large variety of evaluation measures have been used that reflect different aspects...... of the clustering result. Finally, in typical publications authors have limited their analysis to their favored paradigm only, while paying other paradigms little or no attention. In this paper, we take a systematic approach to evaluate the major paradigms in a common framework. We study representative clustering...

  19. The consensus in the two-feature two-state one-dimensional Axelrod model revisited

    International Nuclear Information System (INIS)

    Biral, Elias J P; Tilles, Paulo F C; Fontanari, José F

    2015-01-01

The Axelrod model for the dissemination of culture exhibits a rich spatial distribution of cultural domains, which depends on the values of the two model parameters: F, the number of cultural features, and q, the common number of states each feature can assume. In the one-dimensional model with F = q = 2, which is closely related to the constrained voter model, Monte Carlo simulations indicate the existence of multicultural absorbing configurations in which at least one macroscopic domain coexists with a multitude of microscopic ones in the thermodynamic limit. However, rigorous analytical results for the infinite system starting from the configuration where all cultures are equally likely show convergence to only monocultural or consensus configurations. Here we show that this disagreement is due simply to the order in which the time-asymptotic limit and the thermodynamic limit are taken in the simulations. In addition, we show how the consensus-only result can be derived using Monte Carlo simulations of finite chains. (paper)

  20. The consensus in the two-feature two-state one-dimensional Axelrod model revisited

    Science.gov (United States)

    Biral, Elias J. P.; Tilles, Paulo F. C.; Fontanari, José F.

    2015-04-01

The Axelrod model for the dissemination of culture exhibits a rich spatial distribution of cultural domains, which depends on the values of the two model parameters: F, the number of cultural features, and q, the common number of states each feature can assume. In the one-dimensional model with F = q = 2, which is closely related to the constrained voter model, Monte Carlo simulations indicate the existence of multicultural absorbing configurations in which at least one macroscopic domain coexists with a multitude of microscopic ones in the thermodynamic limit. However, rigorous analytical results for the infinite system starting from the configuration where all cultures are equally likely show convergence to only monocultural or consensus configurations. Here we show that this disagreement is due simply to the order in which the time-asymptotic limit and the thermodynamic limit are taken in the simulations. In addition, we show how the consensus-only result can be derived using Monte Carlo simulations of finite chains.
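A Monte Carlo step of the one-dimensional Axelrod model with F = q = 2, as simulated in the record above, can be sketched in a few lines. Details such as the periodic boundary condition and the chain length are assumptions of this sketch, not taken from the paper.

```python
# Minimal Monte Carlo sketch of the 1D Axelrod model with F = q = 2
# (periodic chain assumed; not the authors' exact simulation setup).
import numpy as np

rng = np.random.default_rng(1)
N, F = 30, 2
state = rng.integers(0, 2, size=(N, F))   # each site holds F binary features

for _ in range(50_000):
    i = rng.integers(N)
    j = (i + rng.choice([-1, 1])) % N              # random neighbor on the chain
    overlap = int(np.sum(state[i] == state[j]))
    # Interact with probability overlap/F, only if partially similar.
    if 0 < overlap < F and rng.random() < overlap / F:
        k = rng.choice(np.flatnonzero(state[i] != state[j]))
        state[i, k] = state[j, k]                  # copy one differing feature
```

A configuration is absorbing once every adjacent pair of sites is either fully identical or fully distinct, since then no interaction can change any feature.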

  1. 3D Embedded Reconfigurable Riometer for Heliospheric Space Missions

    Science.gov (United States)

    Dekoulis, George

    2016-07-01

This paper describes the development of a new three-dimensional embedded reconfigurable Riometer for performing remote sensing of planetary magnetospheres. The system complements the in situ measurements of probe or orbiter magnetospheric space missions. The new prototype features a multi-frequency mode that allows measurements at frequencies where the signatures of heliospheric physics events are distinct in the ionized planetary plasma. For our planet, such measurements are meaningful for frequencies below 55 MHz. Observation frequencies above 55 MHz yield direct measurements of the Cosmic Microwave Background intensity. The system acts as a prototyping platform for subsequent space exploration phased-array imaging experiments, owing to its high-intensity scientific processing capabilities. The performance improvement over existing systems in operation is in the range of 80%, due to the state-of-the-art hardware and scientific processing used.

  2. Synthesis of Joint Volumes, Visualization of Paths, and Revision of Viewing Sequences in a Multi-dimensional Seismic Data Viewer

    Science.gov (United States)

    Chen, D. M.; Clapp, R. G.; Biondi, B.

    2006-12-01

Ricksep is a freely available interactive viewer for multi-dimensional data sets. The viewer is very useful for simultaneous display of multiple data sets from different viewing angles, animation of movement along a path through the data space, and selection of local regions for data processing and information extraction. Several new viewing features are added to enhance the program's functionality in the following three aspects. First, two new data synthesis algorithms are created to adaptively combine information from a data set with mostly high-frequency content, such as seismic data, and another data set with mainly low-frequency content, such as velocity data. Using the algorithms, these two data sets can be synthesized into a single data set which resembles the high-frequency data set on a local scale and at the same time resembles the low-frequency data set on a larger scale. As a result, the originally separated high- and low-frequency details can now be more accurately and conveniently studied together. Second, a projection algorithm is developed to display paths through the data space. Paths are geophysically important because they represent wells into the ground. Two difficulties often associated with tracking paths are that they normally cannot be seen clearly inside multi-dimensional spaces and that depth information is lost along the direction of projection when ordinary projection techniques are used. The new algorithm projects samples along the path in three orthogonal directions and effectively restores important depth information by using variable projection parameters which are functions of the distance away from the path. Multiple paths in the data space can be generated using different character symbols as positional markers, and users can easily create, modify, and view paths in real time. Third, a viewing history list is implemented which enables Ricksep's users to create, edit and save a recipe for the sequence of viewing states. Then, the recipe

  3. Positioning with stationary emitters in a two-dimensional space-time

    International Nuclear Information System (INIS)

    Coll, Bartolome; Ferrando, Joan Josep; Morales, Juan Antonio

    2006-01-01

The basic elements of relativistic positioning systems in a two-dimensional space-time were introduced in a previous work [Phys. Rev. D 73, 084017 (2006)], where geodesic positioning systems, constituted by two geodesic emitters, were considered in a flat space-time. Here, we want to show in what precise senses positioning systems allow one to perform relativistic gravimetry. For this purpose, we consider stationary positioning systems, constituted by two uniformly accelerated emitters separated by a constant distance, in two different situations: absence of a gravitational field (Minkowski plane) and presence of a gravitational mass (Schwarzschild plane). The physical coordinates constituted by the electromagnetic signals broadcasting the proper time of the emitters are the so-called emission coordinates, and we show that, in such emission coordinates, the trajectories of the emitters in both situations, the absence and presence of a gravitational field, are identical. The interesting point is that, in spite of this fact, particular additional information on the system or on the user allows us not only to distinguish both space-times, but also to complete the dynamical description of emitters and user and even to measure the mass of the gravitational field. The precise information under which these dynamical and gravimetric results may be obtained is carefully pointed out

  4. Three-dimensionality of space and the quantum bit: an information-theoretic approach

    International Nuclear Information System (INIS)

    Müller, Markus P; Masanes, Lluís

    2013-01-01

    It is sometimes pointed out as a curiosity that the state space of quantum two-level systems, i.e. the qubit, and actual physical space are both three-dimensional and Euclidean. In this paper, we suggest an information-theoretic analysis of this relationship, by proving a particular mathematical result: suppose that physics takes place in d spatial dimensions, and that some events happen probabilistically (not assuming quantum theory in any way). Furthermore, suppose there are systems that carry ‘minimal amounts of direction information’, interacting via some continuous reversible time evolution. We prove that this uniquely determines spatial dimension d = 3 and quantum theory on two qubits (including entanglement and unitary time evolution), and that it allows observers to infer local spatial geometry from probability measurements. (paper)

  5. Euclidean scalar Green function in a higher dimensional global monopole space-time

    International Nuclear Information System (INIS)

    Bezerra de Mello, E.R.

    2002-01-01

We construct the explicit Euclidean scalar Green function associated with a massless field in a higher dimensional global monopole space-time, i.e., a (1+d)-space-time with d≥3 which presents a solid angle deficit. Our result is expressed in terms of an infinite sum of products of Legendre functions with Gegenbauer polynomials. Although this Green function cannot be expressed in a closed form, for the specific case where the solid angle deficit is very small, it is possible to develop the sum and obtain the Green function in a more workable expression. Having this expression it is possible to calculate the vacuum expectation value of some relevant operators. As an application of this formalism, we calculate the renormalized vacuum expectation value of the square of the scalar field, ⟨Φ²(x)⟩_Ren, and the energy-momentum tensor, ⟨T_μν(x)⟩_Ren, for the global monopole space-time with spatial dimensions d=4 and d=5

  6. Dimensional verification of high aspect micro structures using FIB-SEM

    DEFF Research Database (Denmark)

    Zhang, Yang; Hansen, Hans Nørgaard

    2013-01-01

-SEM) assisted by Spip®. The micro features are circular holes 10 μm in diameter and 20 μm deep, with a 20 μm pitch. Various inspection methods were attempted to obtain dimensional information. Due to the dimensions, neither an optical instrument nor an atomic force microscope (AFM) was capable of performing the measurement...

  7. A k-space method for acoustic propagation using coupled first-order equations in three dimensions.

    Science.gov (United States)

    Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C

    2009-09-01

    A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
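The spectral evaluation of spatial derivatives at the heart of the k-space method above amounts to multiplication by ik in Fourier space. The following one-dimensional sketch illustrates the idea (the paper works in three dimensions with staggered grids, temporal correction, and a perfectly matched layer, all omitted here).

```python
# 1D spectral derivative via FFT: d/dx corresponds to multiplication
# by i*k in Fourier space (simplified illustration of the k-space idea).
import numpy as np

n = 128
L = 2 * np.pi
x = np.arange(n) * L / n
u = np.sin(x)

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)          # angular wavenumbers
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))   # spectral derivative
err = np.max(np.abs(du - np.cos(x)))                # compare with exact cos(x)
```

For smooth periodic fields the error is at machine-precision level, which is why spectral derivatives allow much coarser grids than finite differences; the price is the global FFT, the all-to-all communication bottleneck the abstract mentions.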

  8. Three-dimensional features of GAM zonal flows in the HL-2A tokamak

    International Nuclear Information System (INIS)

    Yan, L.W.; Cheng, J.; Hong, W.Y.; Zhao, K.J.; Lan, T.; Dong, J.Q.; Liu, A.D.; Yu, C.X.; Yu, D.L.; Qian, J.; Huang, Y.; Yang, Q.W.; Ding, X.T.; Liu, Y.; Pan, C.H.

    2007-01-01

A novel design of the three-step Langmuir probe (TSLP) array has been developed to investigate the zonal flow (ZF) physics in the HL-2A tokamak. Three TSLP arrays are applied to measure the three-dimensional (3D) features of ZFs. They are separated by 65 mm in the poloidal and 800 mm in the toroidal direction, respectively. The 3D properties of the geodesic acoustic mode (GAM) ZFs are presented. The poloidal and toroidal modes of the radial electric fields of the GAM perturbations are simultaneously determined in the HL-2A tokamak for the first time. The modes have narrow radial wave numbers (k_r ρ_i = 0.03-0.07) and short radial scale lengths (2.4-4.2 cm). High coherence of both the GAM and the ambient turbulence separated by 22.5° toroidally along a magnetic field line is observed, which contrasts with the high coherence of the GAM and the low coherence of the ambient turbulence away from the field line. The nonlinear three-wave coupling between the turbulent fluctuations and the ZFs is a plausible mechanism for flow generation. The skewness and kurtosis spectra of the probability distribution function of the potential perturbations are contrasted with the corresponding bicoherence for the first time, which supports the three-wave coupling mechanism

  9. Progress in nanoscale dry processes for fabrication of high-aspect-ratio features: How can we control critical dimension uniformity at the bottom?

    Science.gov (United States)

    Ishikawa, Kenji; Karahashi, Kazuhiro; Ishijima, Tatsuo; Cho, Sung Il; Elliott, Simon; Hausmann, Dennis; Mocuta, Dan; Wilson, Aaron; Kinoshita, Keizo

    2018-06-01

    In this review, we discuss the progress of emerging dry processes for nanoscale fabrication of high-aspect-ratio features, including emerging design technology for manufacturability. Experts in the fields of plasma processing have contributed to addressing the increasingly challenging demands of nanoscale deposition and etching technologies for high-aspect-ratio features. The discussion of our atomic-scale understanding of physicochemical reactions involving ion bombardment and neutral transport presents the major challenges shared across the plasma science and technology community. Focus is placed on advances in fabrication technology that control surface reactions on three-dimensional features, as well as state-of-the-art techniques used in semiconductor manufacturing with a brief summary of future challenges.

  10. A Semidefinite Programming Based Search Strategy for Feature Selection with Mutual Information Measure.

    Science.gov (United States)

    Naghibi, Tofigh; Hoffmann, Sarah; Pfister, Beat

    2015-08-01

Feature subset selection, as a special case of the general subset selection problem, has been the topic of a considerable number of studies due to the growing importance of data-mining applications. In the feature subset selection problem there are two main issues that need to be addressed: (i) finding an appropriate measure function that can be computed fairly quickly and robustly for high-dimensional data; and (ii) a search strategy to optimize the measure over the subset space in a reasonable amount of time. In this article, mutual information between features and class labels is considered to be the measure function. Two series expansions for mutual information are proposed, and it is shown that most heuristic criteria suggested in the literature are truncated approximations of these expansions. It is well known that searching the whole subset space is an NP-hard problem. Here, instead of the conventional sequential search algorithms, we suggest a parallel search strategy based on semidefinite programming (SDP) that can search the subset space in polynomial time. By exploiting the similarities between the proposed algorithm and an instance of the maximum-cut problem in graph theory, the approximation ratio of this algorithm is derived and compared with the approximation ratio of the backward elimination method. The experiments show that it can be misleading to judge the quality of a measure solely on the basis of classification accuracy, without taking the effect of the non-optimal search strategy into account.
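Using mutual information between features and class labels as the measure function can be sketched as follows. This shows only a simple MI ranking, one of the sequential-style baselines such SDP approaches are compared against, not the SDP search strategy itself.

```python
# Ranking features by mutual information with the class labels
# (a simple baseline, not the SDP-based search from the record above).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# With shuffle=False the informative features are the first 4 columns.
X, y = make_classification(n_samples=300, n_features=20, n_informative=4,
                           shuffle=False, random_state=0)

mi = mutual_info_classif(X, y, random_state=0)   # one MI estimate per feature
top5 = np.argsort(mi)[::-1][:5]                  # indices of the 5 best features
```

Ranking features independently ignores redundancy and interactions between features, which is precisely the weakness that motivates optimizing MI over whole subsets, e.g. with the SDP relaxation the paper proposes.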

  11. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert J.; Ombao, Hernando

    2017-01-01

    aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel

  12. Study of Dynamic Features of Surface Plasma in High-Power Disk Laser Welding

    International Nuclear Information System (INIS)

    Wang Teng; Gao Xiangdong; Seiji, Katayama; Jin, Xiaoli

    2012-01-01

    High-speed photography was used to obtain the dynamic changes in the surface plasma during a high-power disk laser welding process. A color space clustering algorithm to extract the edge information of the surface plasma region was developed in order to improve the accuracy of image processing. With a comparative analysis of the plasma features, i.e., area and height, and the characteristics of the welded seam, the relationship between the surface plasma and the stability of the laser welding process was characterized, which provides a basic understanding for the real-time monitoring of laser welding.

  13. Hidden discriminative features extraction for supervised high-order time series modeling.

    Science.gov (United States)

    Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee

    2016-11-01

    In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named as Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter to extract optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) reduces dimensionality while robustly mining the underlying discriminative features, ii) results in effective interpretable features that lead to an improved classification and visualization, and iii) reduces the processing time during the training stage and the filtering of the projection by solving the generalized eigenvalue issue at each alternation step. Two real third-order tensor-structures of time series datasets (an epilepsy electroencephalogram (EEG) that is modeled as channel×frequency bin×time frame and a microarray data that is modeled as gene×sample×time) were used for the evaluation of the TDFE. The experiment results corroborate the advantages of the proposed method with averages of 98.26% and 89.63% for the classification accuracies of the epilepsy dataset and the microarray dataset, respectively. These performance averages represent an improvement on those of the matrix-based algorithms and recent tensor-based, discriminant-decomposition approaches; this is especially the case considering the small number of samples that are used in practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
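    The orthogonal Tucker decomposition underlying TDFE can be sketched, under simplifying assumptions, with a plain higher-order SVD (HOSVD): one orthonormal factor matrix per mode plus a core tensor. This is not the authors' discriminative (scatter-maximizing) variant — just the undecorated decomposition it builds on; shapes and names are illustrative.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    """Truncated higher-order SVD: orthonormal factor per mode plus a core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = tensor
    for mode, u in enumerate(factors):
        # Project mode `mode` onto the factor's column space.
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# A small channel x frequency-bin x time-frame tensor, decomposed at full ranks.
rng = np.random.default_rng(1)
x = rng.standard_normal((4, 5, 6))
core, factors = hosvd(x, (4, 5, 6))

# At full ranks the decomposition is exact: rebuild and compare.
recon = core
for mode, u in enumerate(factors):
    recon = np.moveaxis(np.tensordot(u, np.moveaxis(recon, mode, 0), axes=1), 0, mode)
print(np.allclose(recon, x))
```

    TDFE would replace the plain SVD per mode with factors chosen to maximize between-class and minimize within-class scatter, alternating over modes.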

  14. A New Three-Dimensional Indoor Positioning Mechanism Based on Wireless LAN

    Directory of Open Access Journals (Sweden)

    Jiujun Cheng

    2014-01-01

    Full Text Available Research on two-dimensional indoor positioning based on wireless LAN and location-fingerprint methods has matured, but in actual indoor positioning situations users are also concerned about the height at which they stand. Because three-dimensional indoor positioning expands the positioning range, more features are needed to describe the location fingerprint, and directly applying a machine learning algorithm reduces the classification ability. To solve this problem, in this paper a “divide and conquer” strategy is adopted; that is, the three-dimensional location space is first clustered into a number of service areas by the k-medoids algorithm, and then a multicategory SVM with fewer features is created for each service area for further positioning. Our experiment shows that the error distance resolution of the approach with the k-medoids algorithm and multicategory SVM is higher than that of the approach with SVM alone, and the former can effectively decrease the “crazy prediction.”
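    The clustering stage of the "divide and conquer" strategy can be sketched with a plain k-medoids partition of fingerprint vectors into service areas. This is a minimal numpy sketch, not the paper's implementation (which follows the clustering with a multicategory SVM per service area); the data and names are invented.

```python
import numpy as np

def k_medoids(points, k, iters=20, seed=0):
    """Plain PAM-style k-medoids: medoids are actual data points."""
    rng = np.random.default_rng(seed)
    medoids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - medoids[None], axis=2)
        assign = d.argmin(axis=1)
        new = []
        for c in range(k):
            cluster = points[assign == c]
            if len(cluster) == 0:
                new.append(medoids[c])
                continue
            # New medoid minimizes total within-cluster distance.
            within = np.linalg.norm(cluster[:, None] - cluster[None], axis=2).sum(axis=1)
            new.append(cluster[within.argmin()])
        new = np.array(new)
        if np.allclose(new, medoids):
            break
        medoids = new
    return medoids, assign

# Two well-separated "service areas" of 3-D fingerprint vectors.
rng = np.random.default_rng(2)
area_a = rng.standard_normal((30, 3))
area_b = rng.standard_normal((30, 3)) + 10.0
pts = np.vstack([area_a, area_b])
medoids, assign = k_medoids(pts, 2)
print(len(set(assign[:30].tolist())) == 1 and len(set(assign[30:].tolist())) == 1)
```

    In the paper's pipeline a separate classifier would then be trained per cluster, so each one only needs the features relevant to its own service area.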

  15. Three-dimensional volume rendering of tibiofibular joint space and quantitative analysis of change in volume due to tibiofibular syndesmosis diastases

    International Nuclear Information System (INIS)

    Taser, F.; Shafiq, Q.; Ebraheim, N.A.

    2006-01-01

    The diagnosis of ankle syndesmosis injuries is made by various imaging techniques. The present study was undertaken to examine whether the three-dimensional reconstruction of axial CT images and calculation of the volume of tibiofibular joint space enhances the sensitivity of diastases diagnoses or not. Six adult cadaveric ankle specimens were used for spiral CT-scan assessment of tibiofibular syndesmosis. After the specimens were dissected, external fixation was performed and diastases of 1, 2, and 3 mm was simulated by a precalibrated device. Helical CT scans were obtained with 1.0-mm slice thickness. The data was transferred to the computer software AcquariusNET. Then the contours of the tibiofibular syndesmosis joint space were outlined on each axial CT slice and the collection of these slices were stacked using the computer software AutoCAD 2005, according to the spatial arrangement and geometrical coordinates between each slice, to produce a three-dimensional reconstruction of the joint space. The area of each slice and the volume of the entire tibiofibular joint space were calculated. The tibiofibular joint space at the 10th-mm slice level was also measured on axial CT scan images at normal, 1, 2 and 3-mm joint space diastases. The three-dimensional volume-rendering of the tibiofibular syndesmosis joint space from the spiral CT data demonstrated the shape of the joint space and has been found to be a sensitive method for calculating joint space volume. We found that, from normal to 1 mm, a 1-mm diastasis increases approximately 43% of the joint space volume, while from 1 to 3 mm, there is about a 20% increase for each 1-mm increase. Volume calculation using this method can be performed in cases of syndesmotic instability after ankle injuries and for preoperative and postoperative evaluation of the integrity of the tibiofibular syndesmosis. (orig.)
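    The volume computation described above — stacking per-slice cross-sectional areas at a known slice thickness — reduces to a simple sum. The following is a minimal sketch under that assumption, with a cylinder as a synthetic sanity check; it is illustrative only, not the AutoCAD-based reconstruction used in the study.

```python
import numpy as np

def stacked_volume(slice_areas_mm2, thickness_mm=1.0):
    """Approximate a 3-D volume from per-slice cross-sectional areas (mm^2)."""
    return float(np.sum(slice_areas_mm2) * thickness_mm)

# Sanity check: a cylinder of radius 5 mm and height 20 mm in 1 mm slices.
areas = np.full(20, np.pi * 5.0 ** 2)   # constant cross-section
vol = stacked_volume(areas)             # should approximate pi * r^2 * h
print(abs(vol - np.pi * 25 * 20) < 1e-9)

# Relative volume increase between two joint-space states, as reported in the study.
increase = (stacked_volume(areas * 1.43) - vol) / vol
print(round(increase, 2))  # 0.43, i.e. a 43% increase
```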

  16. Three-dimensional propagation and absorption of high frequency Gaussian beams in magnetoactive plasmas

    International Nuclear Information System (INIS)

    Nowak, S.; Orefice, A.

    1994-01-01

    In today's high frequency systems employed for plasma diagnostics, power heating, and current drive the behavior of the wave beams is appreciably affected by the self-diffraction phenomena due to their narrow collimation. In the present article the three-dimensional propagation of Gaussian beams in inhomogeneous and anisotropic media is analyzed, starting from a properly formulated dispersion relation. Particular attention is paid, in the case of electromagnetic electron cyclotron (EC) waves, to the toroidal geometry characterizing tokamak plasmas, to the power density evolution on the advancing wave fronts, and to the absorption features occurring when a beam crosses an EC resonant layer

  17. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.

    Science.gov (United States)

    Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel

    2011-05-09

    Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection, and therefore a number of feature selection procedures have been developed. Regularisation approaches extend the SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds a global optimal solution more rapidly and more precisely. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of the median number of features selected than Elastic Net SVM, and often predicted better than Elastic Net SVM in terms of misclassification error. Finally, we applied the penalization methods described above to four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were the first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions for the optimization of tuning parameters. The penalized SVM

  18. New superintegrable models with position-dependent mass from Bertrand's Theorem on curved spaces

    Energy Technology Data Exchange (ETDEWEB)

    Ballesteros, A; Herranz, F J [Departamento de Fisica, Universidad de Burgos, E-09001 Burgos (Spain); Enciso, A [Departamento de Fisica Teorica II, Universidad Complutense, E-28040 Madrid (Spain); Ragnisco, O; Riglioni, D, E-mail: angelb@ubu.es, E-mail: aenciso@fis.ucm.es, E-mail: fjherranz@ubu.es, E-mail: ragnisco@fis.uniroma3.it, E-mail: riglioni@fis.uniroma3.it [Dipartimento di Fisica, Universita di Roma Tre and Instituto Nazionale di Fisica Nucleare sezione di Roma Tre, Via Vasca Navale 84, I-00146 Roma (Italy)

    2011-03-01

    A generalized version of Bertrand's theorem on spherically symmetric curved spaces is presented. This result is based on the classification of (3+1)-dimensional (Lorentzian) Bertrand spacetimes, that gives rise to two families of Hamiltonian systems defined on certain 3-dimensional (Riemannian) spaces. These two systems are shown to be either the Kepler or the oscillator potentials on the corresponding Bertrand spaces, and both of them are maximally superintegrable. Afterwards, the relationship between such Bertrand Hamiltonians and position-dependent mass systems is explicitly established. These results are illustrated through the example of a superintegrable (nonlinear) oscillator on a Bertrand-Darboux space, whose quantization and physical features are also briefly addressed.

  19. Three-dimensional reconstruction of highly complex microscopic samples using scanning electron microscopy and optical flow estimation.

    Directory of Open Access Journals (Sweden)

    Ahmadreza Baghaie

    Full Text Available The Scanning Electron Microscope (SEM), as one of the major research and industrial instruments for imaging micro-scale samples and surfaces, has received extensive attention since its emergence. However, the acquired micrographs still remain two-dimensional (2D). In the current work a novel and highly accurate approach is proposed to recover the hidden third dimension by use of multi-view image acquisition of the microscopic samples combined with pre/post-processing steps including sparse feature-based stereo rectification, nonlocal-based optical flow estimation for dense matching, and finally depth estimation. Employing the proposed approach, three-dimensional (3D) reconstructions of highly complex microscopic samples were achieved to facilitate the interpretation of the topology and geometry of surface/shape attributes of the samples. As a byproduct of the proposed approach, high-definition 3D printed models of the samples can be generated as a tangible means of physical understanding. Extensive comparisons with the state-of-the-art reveal the strength and superiority of the proposed method in uncovering the details of highly complex microscopic samples.

  20. Feature extraction and classification of clouds in high resolution panchromatic satellite imagery

    Science.gov (United States)

    Sharghi, Elan

    The development of sophisticated remote sensing sensors is rapidly increasing, and the vast amount of satellite imagery collected is too much to be analyzed manually by a human image analyst. It has become necessary for a tool to be developed to automate the job of an image analyst. This tool would need to intelligently detect and classify objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®) was designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC) to perform exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as a possible ship object. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. An examination of a texture analysis method, line detection using the Hough transform, and edge detection using wavelets are explored as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.
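    The K-Nearest Neighbors classifier that the extracted features are supplied to can be sketched in a few lines of numpy. This toy version is illustrative only — the thesis's texture, Hough-transform, and wavelet features are replaced here by synthetic 2-D vectors, and the class names are invented.

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    d = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[counts.argmax()]

# Toy 2-D "feature" vectors: class 0 = cloud-like, class 1 = ship-like.
rng = np.random.default_rng(3)
clouds = rng.standard_normal((20, 2))
ships = rng.standard_normal((20, 2)) + 6.0
X = np.vstack([clouds, ships])
y = np.array([0] * 20 + [1] * 20)

print(knn_predict(X, y, np.array([6.0, 6.0])))  # near the ship cluster
print(knn_predict(X, y, np.array([0.0, 0.0])))  # near the cloud cluster
```

    In practice the parameter k would be tuned, as the thesis does, alongside the SVM hyperparameters of the alternative classifier.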

  1. Linear feature selection in texture analysis - A PLS based method

    DEFF Research Database (Denmark)

    Marques, Joselene; Igel, Christian; Lillholm, Martin

    2013-01-01

    We present a texture analysis methodology that combines uncommitted machine-learning techniques and partial least squares (PLS) in a fully automatic framework. Our approach introduces a robust PLS-based dimensionality reduction (DR) step to specifically address outliers and high-dimensional feature...... and considering all CV groups, the methods selected 36 % of the original features available. The diagnosis evaluation reached a generalization area-under-the-ROC curve of 0.92, which was higher than established cartilage-based markers known to relate to OA diagnosis....

  2. Families of null surfaces in the Minkowski tri dimensional space-time and its associated differential equations

    International Nuclear Information System (INIS)

    Silva O, G.; Garcia G, P.

    2004-01-01

    In this work we describe the procedure for obtaining all the families of third-order ordinary differential equations connected by a contact transformation such that a conformal three-dimensional Minkowski metric is defined on their spaces of solutions. (Author)

  3. Small-angle X-ray scattering tensor tomography: model of the three-dimensional reciprocal-space map, reconstruction algorithm and angular sampling requirements.

    Science.gov (United States)

    Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel

    2018-01-01

    Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.

  4. Image Classification Using Biomimetic Pattern Recognition with Convolutional Neural Networks Features

    Science.gov (United States)

    Huo, Guanying

    2017-01-01

    As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using the hierarchical structure inspired by mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, there are some shortcomings of traditional CNN models in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and therefore can overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three famous image classification benchmarks, that is, MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method for the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher in comparison with the other four methods in most cases. PMID:28316614

  5. A spectral multiscale hybridizable discontinuous Galerkin method for second order elliptic problems

    KAUST Repository

    Efendiev, Yalchin R.; Lazarov, Raytcho D.; Moon, Minam; Shi, Ke

    2015-01-01

    of multiscale trace spaces. Using local snapshots, we avoid high dimensional representation of trace spaces and use some local features of the solution space in constructing a low dimensional trace space. We investigate the solvability and numerically study

  6. Human Factors in Green Office Building Design: The Impact of Workplace Green Features on Health Perceptions in High-Rise High-Density Asian Cities

    Directory of Open Access Journals (Sweden)

    Fei Xue

    2016-10-01

    Full Text Available There is a growing concern about human factors in green building, which is imperative in high-rise high-density urban environments. This paper describes our attempts to explore the influence of workplace green features (such as green certification, ventilation mode, and building morphology) on health perceptions (personal sensation, sensorial assumptions, healing performance) based on a survey in Hong Kong and Singapore. The results validated the relationship between green features and health perceptions in the workplace environment. Remarkably, participants from the air-conditioned offices revealed significantly higher concerns about health issues than participants from the mixed-ventilated offices. The mixed-ventilation design performs as a bridge to connect the indoor environment and outdoor space, which enables people to have contact with nature. Additionally, the preferred building morphology of the workplace is the pattern of a building complex instead of a single building. The complex form integrates the configuration of courtyards, podium gardens, green terraces, public plazas, and other types of open spaces with the building clusters, which contributes to better health perceptions. This research contributes to the rationalization and optimization of passive climate-adaptive design strategies for green buildings in high-density tropical or subtropical cities.

  7. Mining potential biomarkers associated with space flight in Caenorhabditis elegans experienced Shenzhou-8 mission with multiple feature selection techniques

    International Nuclear Information System (INIS)

    Zhao, Lei; Gao, Ying; Mi, Dong; Sun, Yeqing

    2016-01-01

    Highlights: • A combined algorithm is proposed to mine biomarkers of spaceflight in C. elegans. • This algorithm makes the feature selection more reliable and robust. • Applying this algorithm predicts 17 positive biomarkers of space environment stress. • The strategy can be used as a general method to select important features. - Abstract: To identify the potential biomarkers associated with space flight, a combined algorithm, which integrates the feature selection techniques, was used to deal with the microarray datasets of Caenorhabditis elegans obtained in the Shenzhou-8 mission. Compared with the ground control treatment, a total of 86 differentially expressed (DE) genes in response to the space synthetic environment or the space radiation environment were identified by two filter methods. Then the top 30 ranking genes were selected by the random forest algorithm. Gene Ontology annotation and functional enrichment analyses showed that these genes were mainly associated with metabolic processes. Furthermore, clustering analysis showed that 17 of these genes are positive, including 9 for the space synthetic environment and 8 for the space radiation environment only. These genes could be used as biomarkers to reflect the space environment stresses. In addition, we also found that microgravity is the main stress factor changing the expression patterns of biomarkers for the short-duration spaceflight.
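    The filter stage of such a combined pipeline can be illustrated with a per-gene two-sample t statistic, one common filter criterion. The record does not specify which two filters were used, so this is an assumption for illustration only; the expression matrix and group names below are synthetic.

```python
import numpy as np

def t_statistics(group1, group2):
    """Per-feature two-sample t statistics (equal-variance pooled form)."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = group1.mean(axis=0), group2.mean(axis=0)
    v1, v2 = group1.var(axis=0, ddof=1), group2.var(axis=0, ddof=1)
    sp = np.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / (sp * np.sqrt(1 / n1 + 1 / n2))

# Synthetic expression matrix: 1000 "genes", the first 5 truly shifted
# between flight and ground-control conditions.
rng = np.random.default_rng(7)
flight = rng.standard_normal((12, 1000))
flight[:, :5] += 3.0
ground = rng.standard_normal((12, 1000))

t = np.abs(t_statistics(flight, ground))
top = np.argsort(t)[::-1][:10]            # ten highest-|t| genes
print(set(range(5)) <= set(top.tolist()))  # the shifted genes rank near the top
```

    A second filter (or a random-forest importance ranking, as in the record) would then refine this candidate list before clustering.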

  8. Mining potential biomarkers associated with space flight in Caenorhabditis elegans experienced Shenzhou-8 mission with multiple feature selection techniques

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Lei [Institute of Environmental Systems Biology, College of Environmental Science and Engineering, Dalian Maritime University, Dalian 116026 (China); Gao, Ying [Center of Medical Physics and Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Shushanhu Road 350, Hefei 230031 (China); Mi, Dong, E-mail: mid@dlmu.edu.cn [Department of Physics, Dalian Maritime University, Dalian 116026 (China); Sun, Yeqing, E-mail: yqsun@dlmu.edu.cn [Institute of Environmental Systems Biology, College of Environmental Science and Engineering, Dalian Maritime University, Dalian 116026 (China)

    2016-09-15

    Highlights: • A combined algorithm is proposed to mine biomarkers of spaceflight in C. elegans. • This algorithm makes the feature selection more reliable and robust. • Applying this algorithm predicts 17 positive biomarkers of space environment stress. • The strategy can be used as a general method to select important features. - Abstract: To identify the potential biomarkers associated with space flight, a combined algorithm, which integrates the feature selection techniques, was used to deal with the microarray datasets of Caenorhabditis elegans obtained in the Shenzhou-8 mission. Compared with the ground control treatment, a total of 86 differentially expressed (DE) genes in response to the space synthetic environment or the space radiation environment were identified by two filter methods. Then the top 30 ranking genes were selected by the random forest algorithm. Gene Ontology annotation and functional enrichment analyses showed that these genes were mainly associated with metabolic processes. Furthermore, clustering analysis showed that 17 of these genes are positive, including 9 for the space synthetic environment and 8 for the space radiation environment only. These genes could be used as biomarkers to reflect the space environment stresses. In addition, we also found that microgravity is the main stress factor changing the expression patterns of biomarkers for the short-duration spaceflight.

  9. Feature Selection for Motor Imagery EEG Classification Based on Firefly Algorithm and Learning Automata.

    Science.gov (United States)

    Liu, Aiming; Chen, Kun; Liu, Quan; Ai, Qingsong; Xie, Yi; Chen, Anqi

    2017-11-08

    Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensions of features represent a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG. We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain-computer interface competition data and real-time data acquired in our designed experiments were used to verify the validity of the proposed method. Compared with genetic and adaptive weight particle swarm optimization algorithms, the experimental results show that our proposed method effectively eliminates redundant features, and improves the classification accuracy of MI EEG signals. In addition, a real-time brain-computer interface system was implemented to verify the feasibility of our proposed methods being applied in practical brain-computer interface systems.
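    The CSP step mentioned above can be sketched with whitening plus an eigendecomposition, numpy only. This is a minimal illustration on synthetic two-channel trials, not the authors' full FA/LA pipeline; the trial data and variable names are invented.

```python
import numpy as np

def csp_filters(trials_a, trials_b):
    """Common spatial patterns via whitening + eigendecomposition."""
    def avg_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Whiten the composite covariance ca + cb.
    evals, evecs = np.linalg.eigh(ca + cb)
    whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # Eigenvalues of the whitened class-a covariance lie in (0, 1); the
    # extremes give filters maximizing variance for one class over the other.
    s, u = np.linalg.eigh(whiten @ ca @ whiten.T)
    order = np.argsort(s)[::-1]
    return (u[:, order].T @ whiten), s[order]

# Synthetic 2-channel trials: class a is strong on channel 0, class b on channel 1.
rng = np.random.default_rng(4)
trials_a = [np.diag([3.0, 1.0]) @ rng.standard_normal((2, 200)) for _ in range(10)]
trials_b = [np.diag([1.0, 3.0]) @ rng.standard_normal((2, 200)) for _ in range(10)]
W, eigvals = csp_filters(trials_a, trials_b)
print(eigvals[0] > 0.5 > eigvals[-1])  # first filter favors class a, last favors class b
```

    Log-variances of the filtered trials would then form the feature vector passed on to the feature selector and the SRDA classifier.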

  10. Feature Selection for Motor Imagery EEG Classification Based on Firefly Algorithm and Learning Automata

    Directory of Open Access Journals (Sweden)

    Aiming Liu

    2017-11-01

    Full Text Available Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensions of features represent a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG. We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain–computer interface competition data and real-time data acquired in our designed experiments were used to verify the validity of the proposed method. Compared with genetic and adaptive weight particle swarm optimization algorithms, the experimental results show that our proposed method effectively eliminates redundant features, and improves the classification accuracy of MI EEG signals. In addition, a real-time brain–computer interface system was implemented to verify the feasibility of our proposed methods being applied in practical brain–computer interface systems.

  11. Estimate of the largest Lyapunov characteristic exponent of a high dimensional atmospheric global circulation model: a sensitivity analysis

    International Nuclear Information System (INIS)

    Guerrieri, A.

    2009-01-01

    In this report the largest Lyapunov characteristic exponent of a high dimensional atmospheric global circulation model of intermediate complexity has been estimated numerically. A sensitivity analysis has been carried out by varying the equator-to-pole temperature difference, the space resolution and the value of some parameters employed by the model. Chaotic and non-chaotic regimes of circulation have been found.
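    For intuition on the quantity being estimated, here is a minimal sketch of the largest Lyapunov exponent for a one-dimensional toy system (the logistic map), computed as the average log-derivative along an orbit. This is a far simpler setting than a global circulation model and purely illustrative.

```python
import numpy as np

def largest_lyapunov_logistic(r, x0=0.4, n_transient=1000, n_steps=50000):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated as the orbit average of log|f'(x)| = log|r (1 - 2x)|."""
    x = x0
    for _ in range(n_transient):       # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_steps):
        total += np.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n_steps

lam_chaotic = largest_lyapunov_logistic(4.0)   # known value: ln 2 > 0 (chaos)
lam_stable = largest_lyapunov_logistic(2.5)    # stable fixed point: negative
print(lam_chaotic > 0 > lam_stable)
```

    A positive exponent signals a chaotic regime, a negative one a non-chaotic regime — the same dichotomy the report maps out while varying the equator-to-pole temperature difference and resolution.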

  12. Medical X-ray Image Hierarchical Classification Using a Merging and Splitting Scheme in Feature Space.

    Science.gov (United States)

    Fesharaki, Nooshin Jafari; Pourghassem, Hossein

    2013-07-01

    Due to the daily mass production and the widespread variation of medical X-ray images, it is necessary to classify these images for searching and retrieval purposes, especially for content-based medical image retrieval systems. In this paper, a medical X-ray image hierarchical classification structure based on a novel merging and splitting scheme and using shape and texture features is proposed. In the first level of the proposed structure, to improve the classification performance, similar classes with regard to shape content are grouped into general overlapped classes based on merging measures and shape features. In the next levels of this structure, the overlapped classes are split into smaller classes based on the classification performance of a combination of shape and texture features, or texture features only. This procedure continues through the last levels until all the classes are formed separately. Moreover, to optimize the feature vector in the proposed structure, we use an orthogonal forward selection algorithm according to a Mahalanobis class separability measure as a feature selection and reduction algorithm. In other words, according to the complexity and inter-class distance of each class, a sub-space of the feature space is selected in each level and then a supervised merging and splitting scheme is applied to form the hierarchical classification. The proposed structure is evaluated on a database consisting of 2158 medical X-ray images of 18 classes (IMAGECLEF 2005 database), and an accuracy rate of 93.6% in the last level of the hierarchical structure is obtained for an 18-class classification problem.
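    The Mahalanobis class-separability measure that guides the feature selection can be sketched as the Mahalanobis distance between two class means under a pooled covariance. A minimal numpy sketch with invented toy classes, not the paper's full orthogonal forward selection procedure:

```python
import numpy as np

def mahalanobis_separability(class_a, class_b):
    """Mahalanobis distance between two class means under a pooled covariance."""
    mu = class_a.mean(axis=0) - class_b.mean(axis=0)
    na, nb = len(class_a), len(class_b)
    pooled = ((na - 1) * np.cov(class_a.T) + (nb - 1) * np.cov(class_b.T)) / (na + nb - 2)
    return float(np.sqrt(mu @ np.linalg.solve(pooled, mu)))

# More widely separated class means should score higher.
rng = np.random.default_rng(5)
a = rng.standard_normal((50, 4))
b_near = rng.standard_normal((50, 4)) + 0.5
b_far = rng.standard_normal((50, 4)) + 3.0
print(mahalanobis_separability(a, b_far) > mahalanobis_separability(a, b_near))
```

    Forward selection would greedily add the feature whose inclusion most increases such a separability score at each level of the hierarchy.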

  13. Penalized feature selection and classification in bioinformatics

    OpenAIRE

    Ma, Shuangge; Huang, Jian

    2008-01-01

    In bioinformatics studies, supervised classification with high-dimensional input variables is frequently encountered. Examples routinely arise in genomic, epigenetic and proteomic studies. Feature selection can be employed along with classifier construction to avoid over-fitting, to generate more reliable classifier and to provide more insights into the underlying causal relationships. In this article, we provide a review of several recently developed penalized feature selection and classific...

  14. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
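    A sparse regularized estimator of the kind reviewed — the LASSO in the p >> n setting — can be sketched with cyclic coordinate descent and soft-thresholding. This is a minimal numpy version on invented data, not a production solver.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """LASSO via cyclic coordinate descent: minimizes 0.5||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j, then soft-threshold.
            resid = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ resid
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

# p >> n toy problem with a 3-sparse true coefficient vector.
rng = np.random.default_rng(6)
n, p = 50, 200
X = rng.standard_normal((n, p))
true = np.zeros(p)
true[[0, 1, 2]] = [3.0, -2.0, 1.5]
y = X @ true + 0.1 * rng.standard_normal(n)

beta = lasso_cd(X, y, lam=5.0)
print(np.count_nonzero(beta))  # far fewer than p: a sparse model
```

    The Graphical LASSO and non-convex methods such as the TREX mentioned in the abstract replace or modify this L1 penalty, but the sparsity-inducing soft-thresholding idea is the common core.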

  15. Fourier transform infrared spectroscopy microscopic imaging classification based on spatial-spectral features

    Science.gov (United States)

    Liu, Lian; Yang, Xiukun; Zhong, Mingliang; Liu, Yao; Jing, Xiaojun; Yang, Qin

    2018-04-01

    The discrete fractional Brownian incremental random (DFBIR) field is used to describe the irregular, random, and highly complex shapes of natural objects such as coastlines and biological tissues, for which traditional Euclidean geometry cannot be used. In this paper, an anisotropic variable window (AVW) directional operator based on the DFBIR field model is proposed for extracting spatial characteristics of Fourier transform infrared spectroscopy (FTIR) microscopic imaging. Probabilistic principal component analysis first extracts spectral features, and then the spatial features of the proposed AVW directional operator are combined with the former to construct a spatial-spectral structure, which increases feature-related information and helps a support vector machine classifier to obtain more efficient distribution-related information. Compared to Haralick’s grey-level co-occurrence matrix, Gabor filters, and local binary patterns (e.g. uniform LBPs, rotation-invariant LBPs, uniform rotation-invariant LBPs), experiments on three FTIR spectroscopy microscopic imaging datasets show that the proposed AVW directional operator is more advantageous in terms of classification accuracy, particularly for low-dimensional spaces of spatial characteristics.

  16. Joint High-Dimensional Bayesian Variable and Covariance Selection with an Application to eQTL Analysis

    KAUST Repository

    Bhadra, Anindya

    2013-04-22

    We describe a Bayesian technique to (a) perform a sparse joint selection of significant predictor variables and significant inverse covariance matrix elements of the response variables in a high-dimensional linear Gaussian sparse seemingly unrelated regression (SSUR) setting and (b) perform an association analysis between the high-dimensional sets of predictors and responses in such a setting. To search the high-dimensional model space, where both the number of predictors and the number of possibly correlated responses can be larger than the sample size, we demonstrate that a marginalization-based collapsed Gibbs sampler, in combination with spike and slab type of priors, offers a computationally feasible and efficient solution. As an example, we apply our method to an expression quantitative trait loci (eQTL) analysis on publicly available single nucleotide polymorphism (SNP) and gene expression data for humans where the primary interest lies in finding the significant associations between the sets of SNPs and possibly correlated genetic transcripts. Our method also allows for inference on the sparse interaction network of the transcripts (response variables) after accounting for the effect of the SNPs (predictor variables). We exploit properties of Gaussian graphical models to make statements concerning conditional independence of the responses. Our method compares favorably to existing Bayesian approaches developed for this purpose. © 2013, The International Biometric Society.

  17. Conformal field theory in conformal space

    International Nuclear Information System (INIS)

    Preitschopf, C.R.; Vasiliev, M.A.

    1999-01-01

    We present a new framework for a Lagrangian description of conformal field theories in various dimensions based on a local version of d + 2-dimensional conformal space. The results include a true gauge theory of conformal gravity in d = (1, 3) and any standard matter coupled to it. An important feature is the automatic derivation of the conformal gravity constraints, which are necessary for the analysis of the matter systems

  18. Dimensionality Reduction Methods: Comparative Analysis of methods PCA, PPCA and KPCA

    Directory of Open Access Journals (Sweden)

    Jorge Arroyo-Hernández

    2016-01-01

    Full Text Available Dimensionality reduction methods are algorithms that map a data set into subspaces derived from the original space, with fewer dimensions, allowing a description of the data at lower cost. Because of their importance, they are widely used in machine learning processes. This article presents a comparative analysis of the PCA, PPCA and KPCA dimensionality reduction methods. A reconstruction experiment on worm-shape data was performed using landmark structures located on the body contour, with each method retaining different numbers of principal components. The results showed that all the methods can be regarded as alternative processes. Nevertheless, thanks to its potential for analysis in the feature space and the method presented for computing its preimage, KPCA offers a better approach for recognition and pattern-extraction tasks
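    The core of KPCA as compared here is an eigendecomposition of the centered Gram matrix rather than of the data covariance. A minimal numpy sketch, assuming an RBF kernel (the kernel choice and `gamma` value are illustrative, not taken from the article):

```python
import numpy as np

def kpca(X, n_components, gamma=1.0):
    """Kernel PCA with an RBF kernel: eigendecompose the centered
    Gram matrix and return the projected data."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)              # ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]       # largest first
    # scale eigenvectors so feature-space components have unit norm
    alphas = vecs[:, :n_components] / np.sqrt(
        np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas
```

    The returned columns are mutually orthogonal, with squared norms equal to the leading eigenvalues, mirroring the ordinary PCA scores that the linear methods produce.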

  19. From Ambiguities to Insights: Query-based Comparisons of High-Dimensional Data

    Science.gov (United States)

    Kowalski, Jeanne; Talbot, Conover; Tsai, Hua L.; Prasad, Nijaguna; Umbricht, Christopher; Zeiger, Martha A.

    2007-11-01

    Genomic technologies will revolutionize drug discovery and development; that much is universally agreed upon. The high dimension of data from such technologies has challenged available data analytic methods; that much is apparent. To date, large-scale data repositories have not been utilized in ways that permit their wealth of information to be efficiently processed for knowledge, presumably due in large part to inadequate analytical tools to address numerous comparisons of high-dimensional data. In candidate gene discovery, expression comparisons are often made between two features (e.g., cancerous versus normal), such that the enumeration of outcomes is manageable. With multiple features, the setting becomes more complex, in terms of comparing expression levels of tens of thousands of transcripts across hundreds of features. In this case, the number of outcomes, while enumerable, becomes rapidly large and unmanageable, and scientific inquiries become more abstract, such as "which one of these (compounds, stimuli, etc.) is not like the others?" We develop analytical tools that promote more extensive, efficient, and rigorous utilization of the public data resources generated by the massive support of genomic studies. Our work innovates by enabling access to such metadata with logically formulated scientific inquiries that define, compare and integrate query-comparison pair relations for analysis. We demonstrate our computational tool's potential to address an outstanding biomedical informatics issue of identifying reliable molecular markers in thyroid cancer. Our proposed query-based comparison (QBC) facilitates access to and efficient utilization of metadata through logically formed inquiries, expressed as query-based comparisons, by organizing and comparing results from biotechnologies to address applications in biomedicine.

  20. Experimental two-dimensional quantum walk on a photonic chip.

    Science.gov (United States)

    Tang, Hao; Lin, Xiao-Feng; Feng, Zhen; Chen, Jing-Yuan; Gao, Jun; Sun, Ke; Wang, Chao-Yue; Lai, Peng-Cheng; Xu, Xiao-Yun; Wang, Yao; Qiao, Lu-Feng; Yang, Ai-Lin; Jin, Xian-Min

    2018-05-01

    Quantum walks, by virtue of coherent superposition and quantum interference, have exponential superiority over their classical counterparts in applications such as quantum search and quantum simulation. The quantum-enhanced power is closely related to the state space of the quantum walk, which can be expanded by enlarging the photon number and/or the dimensions of the evolution network; the former, however, is considerably challenging due to the probabilistic generation of single photons and multiplicative loss. We demonstrate a two-dimensional continuous-time quantum walk by using the external geometry of photonic waveguide arrays, rather than the internal degrees of freedom of photons. Using femtosecond laser direct writing, we construct a large-scale three-dimensional structure that forms a two-dimensional lattice with up to 49 × 49 nodes on a photonic chip. We demonstrate spatial two-dimensional quantum walks using heralded single photons and single-photon-level imaging. We analyze the quantum transport properties by observing the ballistic evolution pattern and the variance profile, which agree well with simulation results. We further reveal the transient nature that is a unique feature of quantum walks beyond one dimension. An architecture that allows a quantum walk to evolve freely in all directions and at a large scale, combined with defect and disorder control, may bring about powerful and versatile quantum walk machines for classically intractable problems.

  1. Linearized fermion-gravitation system in a (2+1)-dimensional space-time with Chern-Simons data

    International Nuclear Information System (INIS)

    Mello, E.R.B. de.

    1990-01-01

    The fermion-graviton system at linearized level in a (2+1)-dimensional space-time with the gravitational Chern-Simons term is studied. In this approximation it is shown that this system presents anomalous rotational properties and spin, in analogy with the gauge field-matter system. (A.C.A.S.) [pt

  2. Curve Evolution in Subspaces and Exploring the Metameric Class of Histogram of Gradient Orientation based Features using Nonlinear Projection Methods

    DEFF Research Database (Denmark)

    Tatu, Aditya Jayant

    This thesis deals with two unrelated issues, restricting curve evolution to subspaces and computing image patches in the equivalence class of Histogram of Gradient orientation based features using nonlinear projection methods. Curve evolution is a well known method used in various applications like...... tracking interfaces, active contour based segmentation methods and others. It can also be used to study shape spaces, as deforming a shape can be thought of as evolving its boundary curve. During curve evolution a curve traces out a path in the infinite dimensional space of curves. Due to application...... specific requirements like shape priors or a given data model, and due to limitations of the computer, the computed curve evolution forms a path in some finite dimensional subspace of the space of curves. We give methods to restrict the curve evolution to a finite dimensional linear or implicitly defined...

  3. Free massless fermionic fields of arbitrary spin in d-dimensional anti-de Sitter space

    Energy Technology Data Exchange (ETDEWEB)

    Vasiliev, M A

    1988-04-25

    Free massless fermionic fields of arbitrary spins, corresponding to fully symmetric tensor-spinor irreducible representations of the flat little group SO(d-2), are described in d-dimensional anti-de Sitter space in terms of differential forms. Appropriate linearized higher-spin curvature 2-forms are found. Explicitly gauge invariant higher-spin actions are constructed in terms of these linearized curvatures.

  4. Multisymplectic Structure-Preserving in Simple Finite Element Method in High Dimensional Case

    Institute of Scientific and Technical Information of China (English)

    BAI Yong-Qiang; LIU Zhen; PEI Ming; ZHENG Zhu-Jun

    2003-01-01

    In this paper, we study a finite element scheme for some semi-linear elliptic boundary value problems in high-dimensional space. With a uniform mesh, we find that the numerical scheme derived from the finite element method can keep a preserved multisymplectic structure.

  5. Matrix correlations for high-dimensional data: The modified RV-coefficient

    NARCIS (Netherlands)

    Smilde, A.K.; Kiers, H.A.L.; Bijlsma, S.; Rubingh, C.M.; Erk, M.J. van

    2009-01-01

    Motivation: Modern functional genomics generates high-dimensional datasets. It is often convenient to have a single simple number characterizing the relationship between pairs of such high-dimensional datasets in a comprehensive way. Matrix correlations are such numbers and are appealing since they

  6. Massive quantum field theory in two-dimensional Robertson-Walker space-time

    International Nuclear Information System (INIS)

    Bunch, T.S.; Christensen, S.M.; Fulling, S.A.

    1978-01-01

    The stress tensor of a massive scalar field, as an integral over normal modes (which are not mere plane waves), is regularized by covariant point separation. When the expectation value in a Parker-Fulling adiabatic vacuum state is expanded in the limit of small curvature-to-mass ratios, the series coincides in each order with the Schwinger-DeWitt-Christensen proper-time expansion. The renormalization ansatz suggested by these expansions (which applies to arbitrary curvature-to-mass ratios and arbitrary quantum state) can be implemented at the integrand level for practical computations. The renormalized tensor (1) passes in the massless limit, for appropriate choice of state, to the known vacuum stress of a massless field, (2) agrees with the explicit results of Bernard and Duncan for a special model, and (3) has a nonzero vacuum expectation value in the two-dimensional ''Milne universe'' (flat space in hyperbolic coordinates). Following Wald, we prove that the renormalized tensor is conserved and point out that there is no arbitrariness in the renormalization procedure. The general approach of this paper is applicable to four-dimensional models

  7. Three-dimensional spatiotemporal features for fast content-based retrieval of focal liver lesions.

    Science.gov (United States)

    Roy, Sharmili; Chi, Yanling; Liu, Jimin; Venkatesh, Sudhakar K; Brown, Michael S

    2014-11-01

    Content-based image retrieval systems for 3-D medical datasets still largely rely on 2-D image-based features extracted from a few representative slices of the image stack. Most 2-D features that are currently used in the literature not only model a 3-D tumor incompletely but are also highly expensive in terms of computation time, especially for high-resolution datasets. Radiologist-specified semantic labels are sometimes used along with image-based 2-D features to improve the retrieval performance. Since radiological labels show large interuser variability, are often unstructured, and require user interaction, their use as lesion characterizing features is highly subjective, tedious, and slow. In this paper, we propose a 3-D image-based spatiotemporal feature extraction framework for fast content-based retrieval of focal liver lesions. All the features are computer generated and are extracted from four-phase abdominal CT images. Retrieval performance and query processing times for the proposed framework is evaluated on a database of 44 hepatic lesions comprising of five pathological types. Bull's eye percentage score above 85% is achieved for three out of the five lesion pathologies and for 98% of query lesions, at least one same type of lesion is ranked among the top two retrieved results. Experiments show that the proposed system's query processing is more than 20 times faster than other already published systems that use 2-D features. With fast computation time and high retrieval accuracy, the proposed system has the potential to be used as an assistant to radiologists for routine hepatic tumor diagnosis.

  8. Three-dimensional growth of human endothelial cells in an automated cell culture experiment container during the SpaceX CRS-8 ISS space mission - The SPHEROIDS project.

    Science.gov (United States)

    Pietsch, Jessica; Gass, Samuel; Nebuloni, Stefano; Echegoyen, David; Riwaldt, Stefan; Baake, Christin; Bauer, Johann; Corydon, Thomas J; Egli, Marcel; Infanger, Manfred; Grimm, Daniela

    2017-04-01

    Human endothelial cells (ECs) were sent to the International Space Station (ISS) to determine the impact of microgravity on the formation of three-dimensional structures. For this project, an automatic experiment unit (EU) was designed to allow cell culture in space. To enable safe cell culture, cell nourishment, and fixation after a pre-programmed timeframe, the materials used for construction of the EUs were tested for biocompatibility. These tests revealed a high biocompatibility for all parts of the EUs that were in contact with the cells or the medium used. Most importantly, we found polyether ether ketone suitable for surrounding the incubation chamber; it kept cellular viability above 80% and allowed the cells to adhere as long as they were exposed to normal gravity. After assembly of the EU, the ECs were cultured therein, where they showed good cell viability for at least 14 days. In addition, the functionality of the automatic medium exchange and fixation procedures was confirmed. Two days before launch, the ECs were cultured in the EUs, which were afterwards mounted on the SpaceX CRS-8 rocket. Five and 12 days after launch the cells were fixed. Subsequent analyses revealed a scaffold-free formation of spheroids in space. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Two-Dimensional Space-Time Dependent Multi-group Diffusion Equation with SLOR Method

    International Nuclear Information System (INIS)

    Yulianti, Y.; Su'ud, Z.; Waris, A.; Khotimah, S. N.

    2010-01-01

    Research on two-dimensional space-time diffusion equations with the SLOR (Successive-Line Over-Relaxation) method has been carried out. The SLOR method is chosen because it is an iterative method that does not require the whole element matrix to be defined. The research is divided into two cases, a homogeneous case and a heterogeneous case. In the homogeneous case a step reactivity is inserted; in the heterogeneous case both step and ramp reactivities are inserted. In general, the simulation results are in agreement, even though at some points there are differences.
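    The line-relaxation idea behind SLOR can be illustrated on a simpler model problem. The sketch below applies SLOR to the steady 2-D Laplace equation on a unit square (not the time-dependent multigroup diffusion system of the paper): each interior grid line is solved exactly as a tridiagonal system via the Thomas algorithm, and the new line is then over-relaxed. Grid size and relaxation factor are illustrative choices.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def slor_laplace(u, omega=1.5, sweeps=200):
    """SLOR for the 2-D Laplace equation with Dirichlet boundaries:
    each interior row is solved exactly, then over-relaxed."""
    ny, nx = u.shape
    m = nx - 2
    a = np.full(m, -1.0); b = np.full(m, 4.0); c = np.full(m, -1.0)
    a[0] = c[-1] = 0.0
    for _ in range(sweeps):
        for j in range(1, ny - 1):
            # right-hand side: north/south neighbors plus west/east boundaries
            rhs = u[j - 1, 1:-1] + u[j + 1, 1:-1]
            rhs[0] += u[j, 0]
            rhs[-1] += u[j, -1]
            new_row = thomas(a, b, c, rhs)
            u[j, 1:-1] += omega * (new_row - u[j, 1:-1])
    return u
```

    With one boundary edge held at 1 and the rest at 0, the converged value at the center of the square is 0.25 by symmetry, which makes a convenient correctness check.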

  10. Data driven analysis of rain events: feature extraction, clustering, microphysical /macro physical relationship

    Science.gov (United States)

    Djallel Dilmi, Mohamed; Mallet, Cécile; Barthes, Laurent; Chazottes, Aymeric

    2017-04-01

    The study of rain time series records is mainly carried out using rainfall rate or rain accumulation parameters estimated over a fixed integration time (typically 1 min, 1 hour or 1 day). In this study we use the concept of a rain event. In fact, the discrete and intermittent nature of rain processes makes the definition of some features inadequate when they are defined over a fixed duration. Long integration times (hour, day) mix rainy and clear-air periods in the same sample, while small integration times (seconds, minutes) lead to noisy data with great sensitivity to detector characteristics. Analyzing whole rain events instead of individual short samples of fixed duration clarifies the relationships between features, in particular between macrophysical and microphysical ones. This approach suppresses the intra-event variability partly due to measurement uncertainties and allows focusing on physical processes. An algorithm based on a Genetic Algorithm (GA) and Self-Organizing Maps (SOM) is developed to obtain a parsimonious characterization of rain events using a minimal set of variables. The use of an SOM is justified by the fact that it maps a high-dimensional data space onto a two-dimensional space while preserving the initial space topology as much as possible, in an unsupervised way. The obtained SOM reveals the dependencies between variables and consequently allows redundant variables to be removed, leading to a minimal subset of only five features (the event duration, the rain rate peak, the rain event depth, the event rain rate standard deviation and the absolute rain rate variation of order 0.5). To confirm the relevance of the five selected features, the corresponding SOM is analyzed. This analysis clearly shows the existence of relationships between features. It also shows the independence of the inter-event time (IETp) feature and the weak dependence of the dry percentage in event (Dd%e) feature. This confirms ...
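    The SOM's role here, mapping a high-dimensional feature space onto a 2-D grid while preserving topology, can be sketched with a minimal self-organizing map. The grid size, decay schedules, and training data below are illustrative assumptions, not the GA+SOM pipeline of the abstract.

```python
import numpy as np

def train_som(X, grid=(6, 6), iters=1500, seed=0):
    """Minimal rectangular SOM: best-matching-unit search plus a
    shrinking Gaussian neighborhood update."""
    rng = rng_ = np.random.default_rng(seed)
    gy, gx = grid
    codebook = rng.normal(size=(gy * gx, X.shape[1]))
    coords = np.array([(i, j) for i in range(gy) for j in range(gx)], float)
    sigma0, lr0 = max(gy, gx) / 2.0, 0.5
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))
        frac = t / iters
        sigma = sigma0 * (0.3 / sigma0) ** frac      # neighborhood shrinks
        lr = lr0 * 0.01 ** frac                      # learning rate decays
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1)
                   / (2 * sigma ** 2))
        codebook += lr * h[:, None] * (x - codebook)
    return codebook

def quantization_error(X, codebook):
    """Mean distance from each sample to its nearest codebook vector."""
    d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d.min(axis=1)).mean()
```

    After training, the quantization error drops well below that of the random initial codebook, and neighboring grid units end up representing nearby regions of the input space.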

  11. Reactive scattering with row-orthonormal hyperspherical coordinates. 4. Four-dimensional-space Wigner rotation function for pentaatomic systems.

    Science.gov (United States)

    Kuppermann, Aron

    2011-05-14

    The row-orthonormal hyperspherical coordinate (ROHC) approach to calculating state-to-state reaction cross sections and bound state levels of N-atom systems requires the use of angular momentum tensors and Wigner rotation functions in a space of dimension N - 1. The properties of those tensors and functions are discussed for arbitrary N and determined for N = 5 in terms of the 6 Euler angles involved in 4-dimensional space.

  12. Three-dimensional reconstruction of indoor whole elements based on mobile LiDAR point cloud data

    Science.gov (United States)

    Gong, Yuejian; Mao, Wenbo; Bi, Jiantao; Ji, Wei; He, Zhanjun

    2014-11-01

    Ground-based LiDAR is one of the most effective city-modeling tools at present and has been widely used for three-dimensional reconstruction of outdoor objects. For indoor objects, however, there are technical bottlenecks due to the lack of a GPS signal. In this paper, based on high-precision indoor point cloud data obtained by LiDAR, an advanced indoor mobile measuring system, high-precision models were built for all indoor ancillary facilities. The point cloud data we employed also contain a color feature, extracted by fusion with CCD images. Thus the data carry both spatial geometric features and spectral information, which can be used for constructing object surfaces and restoring the color and texture of the geometric model. Based on the Autodesk CAD platform and with the help of the PointSence plug-in, three-dimensional reconstruction of whole indoor elements was realized. Specifically, Pointools Edit Pro was adopted to edit the point cloud, and different types of indoor point cloud data were then processed, including data format conversion, outline extraction and texture mapping of the point cloud model. Finally, three-dimensional visualization of the real-world indoor scene was completed. Experimental results showed that high-precision 3D point cloud data obtained by indoor mobile measuring equipment can be used for 3-D reconstruction of whole indoor elements, and that the methods proposed in this paper can efficiently realize it. Moreover, the modeling precision could be controlled within 5 cm, which proved to be a satisfactory result.

  13. High-dimensional quantum cloning and applications to quantum hacking.

    Science.gov (United States)

    Bouchard, Frédéric; Fickler, Robert; Boyd, Robert W; Karimi, Ebrahim

    2017-02-01

    Attempts at cloning a quantum system result in the introduction of imperfections in the state of the copies. This is a consequence of the no-cloning theorem, which is a fundamental law of quantum physics and the backbone of security for quantum communications. Although perfect copies are prohibited, a quantum state may be copied with maximal accuracy via various optimal cloning schemes. Optimal quantum cloning, which lies at the border of the physical limit imposed by the no-signaling theorem and the Heisenberg uncertainty principle, has been experimentally realized for low-dimensional photonic states. However, an increase in the dimensionality of quantum systems is greatly beneficial to quantum computation and communication protocols. Nonetheless, no experimental demonstration of optimal cloning machines has hitherto been shown for high-dimensional quantum systems. We perform optimal cloning of high-dimensional photonic states by means of the symmetrization method. We show the universality of our technique by conducting cloning of numerous arbitrary input states and fully characterize our cloning machine by performing quantum state tomography on cloned photons. In addition, a cloning attack on a Bennett and Brassard (BB84) quantum key distribution protocol is experimentally demonstrated to reveal the robustness of high-dimensional states in quantum cryptography.

  14. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    Science.gov (United States)

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  15. Non-commutative phase space and its space-time symmetry

    International Nuclear Information System (INIS)

    Li Kang; Dulat Sayipjamal

    2010-01-01

    First a description of 2+1 dimensional non-commutative (NC) phase space is presented, and then we find that in this formulation the generalized Bopp's shift has a symmetric representation and one can easily and straightforwardly define the star product on NC phase space. Then we define non-commutative Lorentz transformations both on NC space and NC phase space. We also discuss the Poincare symmetry. Finally we point out that our NC phase space formulation and the NC Lorentz transformations are applicable to any even dimensional NC space and NC phase space. (authors)

  16. Theoretical formulation of finite-dimensional discrete phase spaces: I. Algebraic structures and uncertainty principles

    International Nuclear Information System (INIS)

    Marchiolli, M.A.; Ruzzi, M.

    2012-01-01

    We propose a self-consistent theoretical framework for a wide class of physical systems characterized by a finite space of states which allows us, within several mathematical virtues, to construct a discrete version of the Weyl–Wigner–Moyal (WWM) formalism for finite-dimensional discrete phase spaces with toroidal topology. As a first and important application from this ab initio approach, we initially investigate the Robertson–Schrödinger (RS) uncertainty principle related to the discrete coordinate and momentum operators, as well as its implications for physical systems with periodic boundary conditions. The second interesting application is associated with a particular uncertainty principle inherent to the unitary operators, which is based on the Wiener–Khinchin theorem for signal processing. Furthermore, we also establish a modified discrete version for the well-known Heisenberg–Kennard–Robertson (HKR) uncertainty principle, which exhibits additional terms (or corrections) that resemble the generalized uncertainty principle (GUP) into the context of quantum gravity. The results obtained from this new algebraic approach touch on some fundamental questions inherent to quantum mechanics and certainly represent an object of future investigations in physics. - Highlights: ► We construct a discrete version of the Weyl–Wigner–Moyal formalism. ► Coherent states for finite-dimensional discrete phase spaces are established. ► Discrete coordinate and momentum operators are properly defined. ► Uncertainty principles depend on the topology of finite physical systems. ► Corrections for the discrete Heisenberg uncertainty relation are also obtained.

  17. Feature Selection and Blind Source Separation in an EEG-Based Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Michael H. Thaut

    2005-11-01

    Full Text Available Most EEG-based BCI systems make use of well-studied patterns of brain activity. However, those systems involve tasks that indirectly map to simple binary commands such as “yes” or “no” or require many weeks of biofeedback training. We hypothesized that signal processing and machine learning methods can be used to discriminate EEG in a direct “yes”/“no” BCI from a single session. Blind source separation (BSS and spectral transformations of the EEG produced a 180-dimensional feature space. We used a modified genetic algorithm (GA wrapped around a support vector machine (SVM classifier to search the space of feature subsets. The GA-based search found feature subsets that outperform full feature sets and random feature subsets. Also, BSS transformations of the EEG outperformed the original time series, particularly in conjunction with a subset search of both spaces. The results suggest that BSS and feature selection can be used to improve the performance of even a “direct,” single-session BCI.
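    The GA-wrapped subset search described above can be sketched compactly. In this illustration a nearest-centroid classifier stands in for the SVM (to keep the sketch self-contained), binary labels 0/1 are assumed, and the population size, mutation rate, and crossover scheme are illustrative choices, not the authors' modified GA.

```python
import numpy as np

def fitness(mask, Xtr, ytr, Xte, yte):
    """Hold-out accuracy of a nearest-centroid classifier on the
    selected features (binary labels 0/1 assumed)."""
    if not mask.any():
        return 0.0
    Xa, Xb = Xtr[:, mask], Xte[:, mask]
    cents = np.array([Xa[ytr == c].mean(axis=0) for c in (0, 1)])
    pred = np.argmin(((Xb[:, None, :] - cents[None]) ** 2).sum(-1), axis=1)
    return (pred == yte).mean()

def ga_select(Xtr, ytr, Xte, yte, pop=30, gens=30, seed=1):
    """GA wrapper: evolve boolean feature masks by elitism,
    one-point crossover, and bit-flip mutation."""
    rng = np.random.default_rng(seed)
    d = Xtr.shape[1]
    P = rng.random((pop, d)) < 0.5
    for _ in range(gens):
        f = np.array([fitness(m, Xtr, ytr, Xte, yte) for m in P])
        P = P[np.argsort(f)[::-1]]           # best first
        elite = P[: pop // 2]
        children = []
        for _ in range(pop - len(elite)):
            a, b = elite[rng.integers(len(elite), size=2)]
            cut = rng.integers(1, d)
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(d) < 0.05      # mutation
            children.append(child ^ flip)
        P = np.vstack([elite, children])
    f = np.array([fitness(m, Xtr, ytr, Xte, yte) for m in P])
    return P[np.argmax(f)]
```

    On synthetic data with a couple of informative dimensions buried in noise, the evolved mask recovers the informative features and outscores random subsets, mirroring the abstract's finding that GA-found subsets outperform full and random feature sets.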

  18. Characterization of 3-dimensional superconductive thin film components for gravitational experiments in space

    Energy Technology Data Exchange (ETDEWEB)

    Hechler, S.; Nawrodt, R.; Nietzsche, S.; Vodel, W.; Seidel, P. [Friedrich-Schiller-Univ. Jena (Germany). Inst. fuer Festkoerperphysik; Dittus, H. [ZARM, Univ. Bremen (Germany); Loeffler, F. [Physikalisch-Technische Bundesanstalt, Braunschweig (Germany)

    2007-07-01

    Superconducting quantum interference devices (SQUIDs) are used for highly precise gravitational experiments. One of the most impressive experiments is the Satellite Test of the Equivalence Principle (STEP) of NASA/ESA. The STEP mission aims to detect a possible violation of Einstein's equivalence principle at an extreme level of accuracy of 1 part in 10{sup 18} in space. In this contribution we present automatic measurement equipment to characterize 3-dimensional superconducting thin-film components for STEP, e.g. pick-up coils and test masses. The characterization is done by measuring the transition temperature between the normal and the superconducting state using a specially built anti-cryostat. Above all, the setup was designed for use in normal LHe transport Dewars. The sample chamber has a volume of 150 cm{sup 3} and can be fully temperature controlled over a range from 4.2 K to 300 K with a resolution of better than 100 mK. (orig.)

  19. Rare event simulation in finite-infinite dimensional space

    International Nuclear Information System (INIS)

    Au, Siu-Kui; Patelli, Edoardo

    2016-01-01

    Modern engineering systems are becoming increasingly complex. Assessing their risk by simulation is intimately related to the efficient generation of rare failure events. Subset Simulation is an advanced Monte Carlo method for risk assessment and it has been applied in different disciplines. Pivotal to its success is the efficient generation of conditional failure samples, which is generally non-trivial. Conventionally an independent-component Markov Chain Monte Carlo (MCMC) algorithm is used, which is applicable to high-dimensional problems (i.e., a large number of random variables) without suffering from the ‘curse of dimension’. Experience suggests that the algorithm may perform even better for high-dimensional problems. Motivated by this, for any given problem we construct an equivalent problem where each random variable is represented by an arbitrary (hence possibly infinite) number of ‘hidden’ variables. We study analytically the limiting behavior of the algorithm as the number of hidden variables increases indefinitely. This leads to a new algorithm that is more generic and offers greater flexibility and control. It coincides with an algorithm recently suggested by independent researchers, where a joint Gaussian distribution is imposed between the current sample and the candidate. The present work provides theoretical reasoning and insights into the algorithm.
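    The joint-Gaussian proposal mentioned at the end of the abstract admits a very short sketch: in standard normal space, the candidate is correlated with the current sample, is N(0, I) marginally, and is accepted only if it also lies in the failure region, which leaves the conditional distribution invariant. The correlation value and the half-space failure region below are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

def conditional_sample(x, indicator, rho=0.8, rng=None):
    """One MCMC step targeting the standard normal restricted to a
    failure region F: propose jointly Gaussian with correlation rho,
    accept only if the candidate also lies in F."""
    if rng is None:
        rng = np.random.default_rng()
    cand = rho * x + np.sqrt(1.0 - rho ** 2) * rng.normal(size=x.shape)
    return cand if indicator(cand) else x
```

    Running the chain inside an illustrative failure region {x : x1 > 1.5} reproduces the known conditional mean of a truncated standard normal, which is a simple sanity check on the invariance argument.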

  20. On spectral distribution of high dimensional covariation matrices

    DEFF Research Database (Denmark)

    Heinrich, Claudio; Podolskij, Mark

    In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points...... of the underlying Brownian diffusion and we assume that N/n -> c in (0,oo). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory....
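    The object of study, the empirical spectral distribution of a realized covariation matrix in the N/n -> c regime, can be simulated in a few lines. The toy below takes a constant identity integrand (pure Brownian motion), in which case the eigenvalue distribution is well known to approach the Marchenko–Pastur law with ratio c = N/n; this classical comparison is illustrative and not a result of the paper.

```python
import numpy as np

def realized_covariation(increments):
    """Realized covariation matrix: sum of outer products of the
    n high-frequency increment vectors (input shape N x n)."""
    return increments @ increments.T

rng = np.random.default_rng(0)
N, n = 100, 400                                  # dimension, sample count: c = 0.25
dX = rng.normal(scale=np.sqrt(1.0 / n), size=(N, n))   # Brownian increments
M = realized_covariation(dX)
evals = np.linalg.eigvalsh(M)                    # empirical spectral distribution
```

    For c = 0.25 the Marchenko–Pastur support is [(1 - sqrt(c))**2, (1 + sqrt(c))**2] = [0.25, 2.25], so the simulated eigenvalues cluster around mean 1 and stay within (slightly widened) edges of that interval.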