WorldWideScience

Sample records for self-adaptive randomized subspace

  1. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory]; Hyman, James M [Los Alamos National Laboratory]; Robinson, Bruce A [Los Alamos National Laboratory]; Higdon, Dave [Los Alamos National Laboratory]; Ter Braak, Cajo J F [NETHERLANDS]; Diks, Cees G H [UNIV OF AMSTERDAM]

    2008-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
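
    As a concrete illustration of the randomized-subspace proposal idea described in this abstract, the Python sketch below builds a differential-evolution jump for one chain that perturbs only a randomly selected subset of parameters. The function name, the jitter term, and the default crossover probability are illustrative assumptions, not the authors' reference implementation; the candidate returned would still be accepted or rejected with the usual Metropolis rule.

        import numpy as np

        def de_subspace_proposal(chains, i, cr=0.9, rng=None):
            """Hypothetical DREAM-style proposal: jump chain i using the difference
            of two other chains, restricted to a randomized subspace of parameters."""
            rng = np.random.default_rng() if rng is None else rng
            n, d = chains.shape
            r1, r2 = rng.choice([k for k in range(n) if k != i], size=2, replace=False)
            mask = rng.random(d) < cr                 # randomized subspace selection
            if not mask.any():
                mask[rng.integers(d)] = True          # always update at least one dimension
            d_eff = int(mask.sum())
            gamma = 2.38 / np.sqrt(2.0 * d_eff)       # commonly used DE jump rate
            proposal = chains[i].copy()
            proposal[mask] += gamma * (chains[r1, mask] - chains[r2, mask]) \
                + 1e-6 * rng.standard_normal(d_eff)   # small jitter for ergodicity
            return proposal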

  2. Random matrix improved subspace clustering

    KAUST Repository

    Couillet, Romain; Kammoun, Abla

    2017-01-01

    This article introduces a spectral method for statistical subspace clustering. The method is built upon standard kernel spectral clustering techniques, but carefully tuned using theoretical insights from random matrix theory. We show ...

  3. Random matrix improved subspace clustering

    KAUST Repository

    Couillet, Romain

    2017-03-06

    This article introduces a spectral method for statistical subspace clustering. The method is built upon standard kernel spectral clustering techniques, but carefully tuned using theoretical insights from random matrix theory. We show in particular that our method provides high clustering performance while standard kernel choices provably fail. An application to user grouping based on vector channel observations in the context of massive MIMO wireless communication networks is provided.

  4. Random Subspace Aggregation for Cancer Prediction with Gene Expression Profiles

    Directory of Open Access Journals (Sweden)

    Liying Yang

    2016-01-01

    Full Text Available Background. Precisely predicting cancer is crucial for cancer treatment. Gene expression profiles make it possible to analyze patterns between genes and cancers on a genome-wide scale. Gene expression data analysis, however, is confronted with enormous challenges owing to its characteristics, such as high dimensionality, small sample size, and low signal-to-noise ratio. Results. This paper proposes a method, termed RS_SVM, to predict cancer from gene expression profiles by aggregating SVMs trained on random subspaces. After choosing gene features through statistical analysis, RS_SVM randomly selects feature subsets to yield random subspaces, trains SVM classifiers accordingly, and then aggregates the SVM classifiers to capture the advantage of ensemble learning. Experiments on eight real gene expression datasets are performed to validate the RS_SVM method. Experimental results show that RS_SVM achieved better classification accuracy and generalization performance in contrast with a single SVM, K-nearest neighbor, decision tree, Bagging, AdaBoost, and other state-of-the-art methods. Experiments also explored the effect of subspace size on prediction performance. Conclusions. The proposed RS_SVM method yielded superior performance in analyzing gene expression profiles, which demonstrates that RS_SVM provides a good channel for such biological data.
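
    A minimal sketch of the random-subspace aggregation idea, written with scikit-learn's BaggingClassifier configured to subsample features rather than samples (assuming scikit-learn 1.2+, where the base learner is passed as estimator); the synthetic data, subspace fraction, and ensemble size are illustrative assumptions, not the RS_SVM settings from the paper.

        from sklearn.svm import SVC
        from sklearn.ensemble import BaggingClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.datasets import make_classification

        # Stand-in for a filtered gene-expression matrix (samples x genes).
        X, y = make_classification(n_samples=80, n_features=500, n_informative=20,
                                   random_state=0)

        rs_svm = BaggingClassifier(
            estimator=SVC(kernel="linear"),   # base SVM trained on each subspace
            n_estimators=50,                  # number of random subspaces
            max_features=0.1,                 # fraction of genes per subspace
            bootstrap=False,                  # keep all samples ...
            bootstrap_features=True,          # ... but draw feature subsets at random
            random_state=0,
        )
        print(cross_val_score(rs_svm, X, y, cv=5).mean())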

  5. Random subspaces for encryption based on a private shared Cartesian frame

    International Nuclear Information System (INIS)

    Bartlett, Stephen D.; Hayden, Patrick; Spekkens, Robert W.

    2005-01-01

    A private shared Cartesian frame is a novel form of private shared correlation that allows for both private classical and quantum communication. Cryptography using a private shared Cartesian frame has the remarkable property that asymptotically, if perfect privacy is demanded, the private classical capacity is three times the private quantum capacity. We demonstrate that if the requirement for perfect privacy is relaxed, then it is possible to use the properties of random subspaces to nearly triple the private quantum capacity, almost closing the gap between the private classical and quantum capacities

  6. Invariant subspaces

    CERN Document Server

    Radjavi, Heydar

    2003-01-01

    This broad survey spans a wealth of studies on invariant subspaces, focusing on operators on separable Hilbert space. Largely self-contained, it requires only a working knowledge of measure theory, complex analysis, and elementary functional analysis. Subjects include normal operators, analytic functions of operators, shift operators, examples of invariant subspace lattices, compact operators, and the existence of invariant and hyperinvariant subspaces. Additional chapters cover certain results on von Neumann algebras, transitive operator algebras, algebras associated with invariant subspaces, ...

  7. Fault Diagnosis for Hydraulic Servo System Using Compressed Random Subspace Based ReliefF

    Directory of Open Access Journals (Sweden)

    Yu Ding

    2018-01-01

    Full Text Available Playing an important role in electromechanical systems, the hydraulic servo system is crucial to mechanical systems such as engineering machinery, metallurgical machinery, ships, and other equipment. Fault diagnosis based on monitoring and sensory signals plays an important role in avoiding catastrophic accidents and enormous economic losses. This study presents a fault diagnosis scheme for the hydraulic servo system using the compressed random subspace based ReliefF (CRSR) method. From the point of view of feature selection, the scheme utilizes the CRSR method to determine the most stable feature combination that simultaneously contains the most adequate information. Based on the feature selection structure of ReliefF, CRSR employs feature integration rules in the compressed domain. Meanwhile, CRSR substitutes information entropy and fuzzy membership for the traditional distance measurement index. The proposed CRSR method is able to enhance the robustness of the feature information against interference while selecting the feature combination with balanced information expressing ability. To demonstrate the effectiveness of the proposed CRSR method, a hydraulic servo system joint simulation model is constructed in HyPneu and Simulink, and three fault modes are injected to generate the validation data.

  8. Pathological Brain Detection Using Wiener Filtering, 2D-Discrete Wavelet Transform, Probabilistic PCA, and Random Subspace Ensemble Classifier

    Directory of Open Access Journals (Sweden)

    Debesh Jha

    2017-01-01

    Full Text Available Accurate diagnosis of pathological brain images is important for patient care, particularly in the early phase of the disease. Although numerous studies have used machine-learning techniques for the computer-aided diagnosis (CAD of pathological brain, previous methods encountered challenges in terms of the diagnostic efficiency owing to deficiencies in the choice of proper filtering techniques, neuroimaging biomarkers, and limited learning models. Magnetic resonance imaging (MRI is capable of providing enhanced information regarding the soft tissues, and therefore MR images are included in the proposed approach. In this study, we propose a new model that includes Wiener filtering for noise reduction, 2D-discrete wavelet transform (2D-DWT for feature extraction, probabilistic principal component analysis (PPCA for dimensionality reduction, and a random subspace ensemble (RSE classifier along with the K-nearest neighbors (KNN algorithm as a base classifier to classify brain images as pathological or normal ones. The proposed methods provide a significant improvement in classification results when compared to other studies. Based on 5×5 cross-validation (CV, the proposed method outperforms 21 state-of-the-art algorithms in terms of classification accuracy, sensitivity, and specificity for all four datasets used in the study.

  9. Spatial prediction of landslides using a hybrid machine learning approach based on Random Subspace and Classification and Regression Trees

    Science.gov (United States)

    Pham, Binh Thai; Prakash, Indra; Tien Bui, Dieu

    2018-02-01

    A hybrid machine learning approach of Random Subspace (RSS) and Classification And Regression Trees (CART) is proposed to develop a model named RSSCART for spatial prediction of landslides. This model is a combination of the RSS method, which is known as an efficient ensemble technique, and CART, which is a state-of-the-art classifier. The Luc Yen district of Yen Bai province, a prominent landslide-prone area of Viet Nam, was selected for the model development. Performance of the RSSCART model was evaluated through the Receiver Operating Characteristic (ROC) curve, statistical analysis methods, and the Chi Square test. Results were compared with other benchmark landslide models, namely Support Vector Machines (SVM), single CART, Naïve Bayes Trees (NBT), and Logistic Regression (LR). In the development of the model, ten important landslide-affecting factors related to geomorphology, geology, and geo-environment were considered, namely slope angle, elevation, slope aspect, curvature, lithology, distance to faults, distance to rivers, distance to roads, and rainfall. Performance of the RSSCART model (AUC = 0.841) is the best compared with the other popular landslide models, namely SVM (0.835), single CART (0.822), NBT (0.821), and LR (0.723). These results indicate that RSSCART is a promising method for spatial landslide prediction.

  10. Greedy subspace clustering.

    Science.gov (United States)

    2016-09-01

    We consider the problem of subspace clustering: given points that lie on or near the union of many low-dimensional linear subspaces, recover the subspaces. To this end, one first identifies sets of points close to the same subspace and uses the sets ...

  11. Relevant Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2009-01-01

    Subspace clustering aims at detecting clusters in any subspace projection of a high dimensional space. As the number of possible subspace projections is exponential in the number of dimensions, the result is often tremendously large. Recent approaches fail to reduce results to relevant subspace clusters. Their results are typically highly redundant, i.e. many clusters are detected multiple times in several projections. In this work, we propose a novel model for relevant subspace clustering (RESCU). We present a global optimization which detects the most interesting non-redundant subspace clusters. ... RESCU achieves top clustering quality while competing approaches show greatly varying performance.

  12. OpenSubspace

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2009-01-01

    Subspace clustering and projected clustering are recent research areas for clustering in high dimensional spaces. As the field is rather young, there is a lack of comparative studies on the advantages and disadvantages of the different algorithms. Part of the underlying problem is the lack of available open source implementations that could be used by researchers to understand, compare, and extend subspace and projected clustering algorithms. In this paper, we discuss the requirements for open source evaluation software. We propose OpenSubspace, an open source framework that meets these requirements. OpenSubspace integrates state-of-the-art performance measures and visualization techniques to foster research in subspace and projected clustering.

  13. Self-adapted sliding scale spectroscopy ADC

    International Nuclear Information System (INIS)

    Xu Qichun; Wang Jingjin

    1992-01-01

    The traditional sliding scale technique causes a disabled range that is equal to the sliding length, and thus reduces the analysis range of an MCA. A method for reducing an ADC's DNL, called the self-adapted sliding scale method, has been designed and tested. With this method, the disabled range caused by the traditional sliding scale method is eliminated by a random trial scale, and no additional amplitude discriminator with a swing threshold is needed. A special trial-and-correct logic is presented. The tested DNL of the spectroscopy ADC described here is less than 0.5%.
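
    The toy Python sketch below illustrates the classic sliding scale principle that this record improves upon: a random offset is added before conversion and subtracted digitally afterwards, so the DNL averages out at the cost of a disabled range near full scale. The transfer curve and parameters are invented for illustration; the record's self-adapted trial scale removes that disabled range.

        import numpy as np

        rng = np.random.default_rng(0)
        N_BITS = 10
        # Toy ADC transfer curve with deliberately non-uniform code bin widths (DNL).
        _widths = 1.0 + 0.2 * rng.standard_normal(2 ** N_BITS)
        _edges = np.cumsum(_widths) / _widths.sum() * 2 ** N_BITS

        def adc(v):
            """Digitize an analog value (in LSB units) with the toy nonlinear ADC."""
            return int(np.searchsorted(_edges, v))

        def sliding_scale(v, slide_bits=8):
            """Classic sliding-scale conversion: add a random analog offset before the
            ADC and subtract its digital equivalent afterwards, so repeated conversions
            of the same value use different code bins and the DNL averages out.
            Inputs within 2**slide_bits LSB of full scale overflow the converter,
            which is the 'disabled range' the self-adapted method eliminates."""
            k = int(rng.integers(0, 2 ** slide_bits))
            return adc(v + k) - k

        # Averaging many conversions of the same analog value smooths out the DNL.
        codes = [sliding_scale(300.3) for _ in range(1000)]
        print(np.mean(codes))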

  14. On Covering Approximation Subspaces

    Directory of Open Access Journals (Sweden)

    Xun Ge

    2009-06-01

    Full Text Available Let (U'; C') be a subspace of a covering approximation space (U; C) and X ⊂ U'. In this paper, we show that ... and B'(X) ⊂ B(X) ∩ U'. Also, ... iff (U; C) has Property Multiplication. Furthermore, some connections between outer (resp. inner) definable subsets in (U; C) and outer (resp. inner) definable subsets in (U'; C') are established. These results answer a question on covering approximation subspaces posed by J. Li, and are helpful for obtaining further applications of Pawlak rough set theory in pattern recognition and artificial intelligence.

  15. On the dimension of subspaces with bounded Schmidt rank

    International Nuclear Information System (INIS)

    Cubitt, Toby; Montanaro, Ashley; Winter, Andreas

    2008-01-01

    We consider the question of how large a subspace of a given bipartite quantum system can be when the subspace contains only highly entangled states. This is motivated in part by results of Hayden et al. [e-print arXiv:quant-ph/0407049; Commun. Math. Phys. 265, 95 (2006)], which show that in large d×d-dimensional systems there exist random subspaces of dimension almost d², all of whose states have entropy of entanglement at least log d − O(1). It is also a generalization of results on the dimension of completely entangled subspaces, which have connections with the construction of unextendible product bases. Here we take as entanglement measure the Schmidt rank, and determine, for every pair of local dimensions d_A and d_B, and every r, the largest dimension of a subspace consisting only of entangled states of Schmidt rank r or larger. This exact answer is a significant improvement on the best bounds that can be obtained using the random subspace techniques in Hayden et al. We also determine the converse: the largest dimension of a subspace with an upper bound on the Schmidt rank. Finally, we discuss the question of subspaces containing only states with Schmidt rank equal to r.

  16. Semitransitive subspaces of operators

    Czech Academy of Sciences Publication Activity Database

    Bernik, J.; Drnovšek, R.; Hadwin, D.; Jafarian, A.; Bukovšek, D.K.; Košir, T.; Fijavž, M.K.; Laffey, T.; Livshits, L.; Mastnak, M.; Meshulam, R.; Müller, Vladimír; Nordgren, E.; Okniński, J.; Omladič, M.; Radjavi, H.; Sourour, A.; Timoney, R.

    2006-01-01

    Vol. 15, No. 1 (2006), pp. 225-238. E-ISSN 1081-3810. Institutional research plan: CEZ:AV0Z10190503. Keywords: semitransitive subspaces. Subject RIV: BA - General Mathematics. Impact factor: 0.322, year: 2006. http://www.math.technion.ac.il/iic/ela

  17. Diversity in random subspacing ensembles

    NARCIS (Netherlands)

    Tsymbal, A.; Pechenizkiy, M.; Cunningham, P.; Kambayashi, Y.; Mohania, M.K.; Wöß, W.

    2004-01-01

    Ensembles of learnt models constitute one of the main current directions in machine learning and data mining. It was shown experimentally and theoretically that in order for an ensemble to be effective, it should consist of classifiers having diversity in their predictions. A number of ways are ...

  18. Seismic noise attenuation using an online subspace tracking algorithm

    Science.gov (United States)

    Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang

    2018-02-01

    We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be directly applied to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with the state-of-the-art algorithms, the proposed denoising method can obtain better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method by leaving less residual noise and by saving half of the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
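
    For orientation, the snippet below shows the batch TSVD (rank-reduction) baseline that this record compares against; the proposed method instead updates the signal subspace incrementally, sample by sample, via gradient steps on the Grassmannian. The toy data and rank are assumptions for illustration.

        import numpy as np

        def tsvd_denoise(d, rank):
            """Rank-r approximation of a data matrix via truncated SVD (batch baseline)."""
            u, s, vt = np.linalg.svd(d, full_matrices=False)
            return (u[:, :rank] * s[:rank]) @ vt[:rank]

        # Toy example: a rank-3 'signal' matrix plus random noise.
        rng = np.random.default_rng(1)
        signal = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 60))
        noisy = signal + 0.5 * rng.standard_normal((200, 60))
        print(np.linalg.norm(tsvd_denoise(noisy, 3) - signal) / np.linalg.norm(signal))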

  19. Self-Adaptive Systems for Machine Intelligence

    CERN Document Server

    He, Haibo

    2011-01-01

    This book will advance the understanding and application of self-adaptive intelligent systems; therefore it will potentially benefit the long-term goal of replicating certain levels of brain-like intelligence in complex and networked engineering systems. It will provide new approaches for adaptive systems within uncertain environments. This creates an opportunity to evaluate the strengths and weaknesses of the current state of the art, give rise to new research directions, and educate future professionals in this domain. Self-adaptive intelligent systems have wide application ...

  20. Subspace K-means clustering.

    Science.gov (United States)

    Timmerman, Marieke E; Ceulemans, Eva; De Roover, Kim; Van Leeuwen, Karla

    2013-12-01

    To achieve an insightful clustering of multivariate data, we propose subspace K-means. Its central idea is to model the centroids and cluster residuals in reduced spaces, which allows for dealing with a wide range of cluster types and yields rich interpretations of the clusters. We review the existing related clustering methods, including deterministic, stochastic, and unsupervised learning approaches. To evaluate subspace K-means, we performed a comparative simulation study, in which we manipulated the overlap of subspaces, the between-cluster variance, and the error variance. The study shows that the subspace K-means algorithm is sensitive to local minima but that the problem can be reasonably dealt with by using partitions of various cluster procedures as a starting point for the algorithm. Subspace K-means performs very well in recovering the true clustering across all conditions considered and appears to be superior to its competitor methods: K-means, reduced K-means, factorial K-means, mixtures of factor analyzers (MFA), and MCLUST. The best competitor method, MFA, showed a performance similar to that of subspace K-means in easy conditions but deteriorated in more difficult ones. Using data from a study on parental behavior, we show that subspace K-means analysis provides a rich insight into the cluster characteristics, in terms of both the relative positions of the clusters (via the centroids) and the shape of the clusters (via the within-cluster residuals).

  1. Geometric mean for subspace selection.

    Science.gov (United States)

    Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J

    2009-02-01

    Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes which are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between the different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem in comparison with the linear dimensionality reduction step in FLDA and several of its representative extensions.
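
    For concreteness, the first two criteria can be transcribed as below; the projection matrix W, the pairwise divergences D_ij, and the normalization are notational assumptions rather than the paper's exact symbols.

        \max_{W}\Big(\prod_{1\le i<j\le c} D_{ij}(W)\Big)^{1/q},
        \qquad
        \max_{W}\Big(\prod_{1\le i<j\le c} \frac{D_{ij}(W)}{\sum_{k<l} D_{kl}(W)}\Big)^{1/q},
        \qquad q=\binom{c}{2}

    Here D_ij(W) is the KL divergence between classes i and j after projection by W; since the logarithm of a geometric mean is the average of logarithms, criterion 1 is equivalent to maximizing the sum of log D_ij(W), which penalizes small pairwise divergences far more than the arithmetic mean implicit in FLDA.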

  2. Self-adapted thermocouple-diagnostic complex

    International Nuclear Information System (INIS)

    Alekseev, S.V.; Grankovskij, K.Eh.; Olejnikov, P.P.; Prijmak, S.V.; Shikalov, V.F.

    2003-01-01

    A self-adapted thermocouple-diagnostic complex (STDC) for obtaining reliable data on the coolant temperature in NPP reactors is described. The STDC is based on thermal pulse monitoring of a thermocouple in the measuring channel of a reactor. The measurement method and the STDC composition are substantiated. It is shown that introduction of the developed STDC ensures precise and reliable temperature monitoring in reactors of all types.

  3. The Role of Item Feedback in Self-Adapted Testing.

    Science.gov (United States)

    Roos, Linda L.; And Others

    1997-01-01

    The importance of item feedback in self-adapted testing was studied by comparing feedback and no feedback conditions for computerized adaptive tests and self-adapted tests taken by 363 college students. Results indicate that item feedback is not necessary to realize score differences between self-adapted and computerized adaptive testing. (SLD)

  4. Shape analysis with subspace symmetries

    KAUST Repository

    Berner, Alexander

    2011-04-01

    We address the problem of partial symmetry detection, i.e., the identification of building blocks a complex shape is composed of. Previous techniques identify parts that relate to each other by simple rigid mappings, similarity transforms, or, more recently, intrinsic isometries. Our approach generalizes the notion of partial symmetries to more general deformations. We introduce subspace symmetries whereby we characterize similarity by requiring the set of symmetric parts to form a low dimensional shape space. We present an algorithm to discover subspace symmetries based on detecting linearly correlated correspondences among graphs of invariant features. We evaluate our technique on various data sets. We show that for models with pronounced surface features, subspace symmetries can be found fully automatically. For complicated cases, a small amount of user input is used to resolve ambiguities. Our technique computes dense correspondences that can subsequently be used in various applications, such as model repair and denoising. © 2010 The Author(s).

  5. Scalable Density-Based Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2011-01-01

    For knowledge discovery in high dimensional databases, subspace clustering detects clusters in arbitrary subspace projections. Scalability is a crucial issue, as the number of possible projections is exponential in the number of dimensions. We propose a scalable density-based subspace clustering method that steers mining to few selected subspace clusters. Our novel steering technique reduces subspace processing by identifying and clustering promising subspaces and their combinations directly. Thereby, it narrows down the search space while maintaining accuracy. Thorough experiments on real and synthetic databases show that steering is efficient and scalable, with high quality results. For future work, our steering paradigm for density-based subspace clustering opens research potential for speeding up other subspace clustering approaches as well.

  6. Parallel Monitors for Self-adaptive Sessions

    Directory of Open Access Journals (Sweden)

    Mario Coppo

    2016-06-01

    Full Text Available The paper presents a data-driven model of self-adaptivity for multiparty sessions. System choreography is prescribed by a global type. Participants are incarnated by processes associated with monitors, which control their behaviour. Each participant can access and modify a set of global data, which are able to trigger adaptations in the presence of critical changes of values. The use of the parallel composition for building global types, monitors and processes enables a significant degree of flexibility: an adaptation step can dynamically reconfigure a set of participants only, without altering the remaining participants, even if the two groups communicate.

  7. Subspace K-means clustering

    NARCIS (Netherlands)

    Timmerman, Marieke E.; Ceulemans, Eva; De Roover, Kim; Van Leeuwen, Karla

    2013-01-01

    To achieve an insightful clustering of multivariate data, we propose subspace K-means. Its central idea is to model the centroids and cluster residuals in reduced spaces, which allows for dealing with a wide range of cluster types and yields rich interpretations of the clusters. We review the ...

  8. Subspace preservation, subspace locality, and gluing of completely positive maps

    International Nuclear Information System (INIS)

    Aaberg, Johan

    2004-01-01

    Three concepts concerning completely positive maps (CPMs) and trace preserving CPMs (channels) are introduced and investigated. These are named subspace preserving (SP) CPMs, subspace local (SL) channels, and gluing of CPMs. SP CPMs have, in the case of trace preserving CPMs, a simple interpretation as those which preserve probability weights on a given orthogonal sum decomposition of the Hilbert space of a quantum system. The proposed definition of subspace locality of quantum channels is an attempt to answer the question of what kind of restriction should be put on a channel, if it is to act 'locally' with respect to two 'locations', when these naturally correspond to a separation of the total Hilbert space in an orthogonal sum of subspaces, rather than a tensor product decomposition. As a description of the concept of gluings of quantum channels, consider a pair of 'evolution machines', each with the ability to evolve the internal state of a 'particle' inserted into its input. Each of these machines is characterized by a channel describing the operation the internal state has experienced when the particle is returned at the output. Suppose a particle is put in a superposition between the input of the first and the second machine. Here it is shown that the total evolution caused by a pair of such devices is not uniquely determined by the channels of the two machines. Such 'global' channels describing the machine pair are examples of gluings of the two single machine channels. Various expressions to generate the set of SP and SL channels, as well as expressions to generate the set of gluings of given channels, are deduced. We discuss conceptual aspects of the nature of these channels and the nature of the non-uniqueness of gluings.

  9. Self-Adaptive Step Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Shuhao Yu

    2013-01-01

    Full Text Available In the standard firefly algorithm, each firefly has the same step settings and its value decreases from iteration to iteration. Therefore, it may fall into a local optimum. Furthermore, the decrease of the step is tied to the maximum number of iterations, which influences the convergence speed and precision. In order to avoid falling into local optima and to reduce the impact of the maximum number of iterations, a self-adaptive step firefly algorithm is proposed in this paper. Its core idea is to set the step of each firefly so that it varies with the iteration, according to each firefly's historical information and current situation. Experiments are made to show the performance of our approach compared with the standard FA, based on sixteen standard benchmark test functions. The results reveal that our method can prevent premature convergence and improve the convergence speed and accuracy.
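
    The following sketch shows one way a per-firefly, self-adaptive step can be wired into the standard firefly update; the specific adaptation rule (shrink the step after an improving move, grow it otherwise) is an illustrative assumption, not the paper's exact formula.

        import numpy as np

        def self_adaptive_firefly(f, dim, n=25, iters=200, beta0=1.0, gamma=1.0, seed=0):
            """Minimal firefly algorithm (minimization) with a per-firefly adaptive step."""
            rng = np.random.default_rng(seed)
            x = rng.uniform(-5, 5, (n, dim))
            alpha = np.full(n, 0.5)                      # individual random-step sizes
            fit = np.apply_along_axis(f, 1, x)
            for _ in range(iters):
                for i in range(n):
                    old = fit[i]
                    for j in range(n):
                        if fit[j] < fit[i]:              # move toward brighter firefly j
                            r2 = np.sum((x[i] - x[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)
                            x[i] += beta * (x[j] - x[i]) + alpha[i] * (rng.random(dim) - 0.5)
                            fit[i] = f(x[i])
                    # Self-adaptive step: shrink after an improving move, grow otherwise.
                    alpha[i] *= 0.95 if fit[i] < old else 1.05
            return x[np.argmin(fit)], fit.min()

        best_x, best_f = self_adaptive_firefly(lambda v: np.sum(v ** 2), dim=5)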

  10. Subspace Arrangement Codes and Cryptosystems

    Science.gov (United States)

    2011-05-09

  11. Robust adaptive subspace detection in impulsive noise

    KAUST Repository

    Ben Atitallah, Ismail

    2016-09-13

    This paper addresses the design of the Adaptive Subspace Matched Filter (ASMF) detector in the presence of compound Gaussian clutters and a mismatch in the steering vector. In particular, we consider the case wherein the ASMF uses the regularized Tyler estimator (RTE) to estimate the clutter covariance matrix. Under this setting, a major question that needs to be addressed concerns the setting of the threshold and the regularization parameter. To answer this question, we consider the regime in which the number of observations used to estimate the RTE and their dimensions grow large together. Recent results from random matrix theory are then used in order to approximate the false alarm and detection probabilities by deterministic quantities. The latter are optimized in order to maximize an upper bound on the asymptotic detection probability while keeping the asymptotic false alarm probability at a fixed rate. © 2016 IEEE.

  12. Robust adaptive subspace detection in impulsive noise

    KAUST Repository

    Ben Atitallah, Ismail; Kammoun, Abla; Alouini, Mohamed-Slim; Al-Naffouri, Tareq Y.

    2016-01-01

    This paper addresses the design of the Adaptive Subspace Matched Filter (ASMF) detector in the presence of compound Gaussian clutters and a mismatch in the steering vector. In particular, we consider the case wherein the ASMF uses the regularized Tyler estimator (RTE) to estimate the clutter covariance matrix. Under this setting, a major question that needs to be addressed concerns the setting of the threshold and the regularization parameter. To answer this question, we consider the regime in which the number of observations used to estimate the RTE and their dimensions grow large together. Recent results from random matrix theory are then used in order to approximate the false alarm and detection probabilities by deterministic quantities. The latter are optimized in order to maximize an upper bound on the asymptotic detection probability while keeping the asymptotic false alarm probability at a fixed rate. © 2016 IEEE.

  13. Consistency Analysis of Nearest Subspace Classifier

    OpenAIRE

    Wang, Yi

    2015-01-01

    The Nearest subspace classifier (NSS) finds an estimation of the underlying subspace within each class and assigns data points to the class that corresponds to its nearest subspace. This paper mainly studies how well NSS can be generalized to new samples. It is proved that NSS is strongly consistent under certain assumptions. For completeness, NSS is evaluated through experiments on various simulated and real data sets, in comparison with some other linear model based classifiers. It is also ...

  14. Controllable Subspaces of Open Quantum Dynamical Systems

    International Nuclear Information System (INIS)

    Zhang Ming; Gong Erling; Xie Hongwei; Hu Dewen; Dai Hongyi

    2008-01-01

    This paper discusses the concept of controllable subspace for open quantum dynamical systems. It is constructively demonstrated that combining structural features of decoherence-free subspaces with the ability to perform open-loop coherent control on open quantum systems will allow decoherence-free subspaces to be controllable. This is in contrast to the observation that open quantum dynamical systems are not open-loop controllable. To a certain extent, this paper gives an alternative control theoretical interpretation on why decoherence-free subspaces can be useful for quantum computation.

  15. Subspace Based Blind Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Hayashi, Kazunori; Matsushima, Hiroki; Sakai, Hideaki

    2012-01-01

    The paper proposes a subspace based blind sparse channel estimation method using ℓ1-ℓ2 optimization, replacing the ℓ2-norm minimization in the conventional subspace based method with an ℓ1-norm minimization problem. Numerical results confirm that the proposed method can significantly improve ...

  16. External Evaluation Measures for Subspace Clustering

    DEFF Research Database (Denmark)

    Günnemann, Stephan; Färber, Ines; Müller, Emmanuel

    2011-01-01

    ... research area of subspace clustering. We formalize general quality criteria for subspace clustering measures not yet addressed in the literature. We compare the existing external evaluation methods based on these criteria and pinpoint limitations. We propose a novel external evaluation measure which meets ...

  17. Subspace exclusion zones for damage localization

    DEFF Research Database (Denmark)

    Bernal, Dionisio; Ulriksen, Martin Dalgaard

    2018-01-01

    ... this is exploited in the context of structural damage localization to cast the Subspace Exclusion Zone (SEZ) scheme, which locates damage by reconstructing the captured field quantity shifts from analytical subspaces indexed by postulated boundaries, the so-called exclusion zones (EZs), in a model of the structure ...

  18. Subspace learning from image gradient orientations

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    2012-01-01

    We introduce the notion of subspace learning from image gradient orientations for appearance-based object recognition. As image data is typically noisy and noise is substantially different from Gaussian, traditional subspace learning from pixel intensities fails very often to estimate reliably the ...

  19. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF) ...

  20. Code subspaces for LLM geometries

    Science.gov (United States)

    Berenstein, David; Miller, Alexandra

    2018-03-01

    We consider effective field theory around classical background geometries with a gauge theory dual, specifically those in the class of LLM geometries. These are dual to half-BPS states of N= 4 SYM. We find that the language of code subspaces is natural for discussing the set of nearby states, which are built by acting with effective fields on these backgrounds. This work extends our previous work by going beyond the strict infinite N limit. We further discuss how one can extract the topology of the state beyond N→∞ and find that, as before, uncertainty and entanglement entropy calculations provide a useful tool to do so. Finally, we discuss obstructions to writing down a globally defined metric operator. We find that the answer depends on the choice of reference state that one starts with. Therefore, within this setup, there is ambiguity in trying to write an operator that describes the metric globally.

  1. SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE

    Science.gov (United States)

    Davies, C. B.

    1994-01-01

    SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is ...
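
    As a simplified stand-in for the line-by-line adaption described above, the sketch below redistributes points along a single coordinate line by equidistributing a gradient-based weight; SAGE itself solves a tridiagonal spring system with smoothness and orthogonality constraints, so this only illustrates the one-dimensional clustering idea, with an invented weight function.

        import numpy as np

        def adapt_1d(x, f, n_new=None):
            """Redistribute grid points so each interval carries equal gradient-based weight."""
            n_new = len(x) if n_new is None else n_new
            w = 1.0 + np.abs(np.gradient(f, x))          # tension ~ local solution gradient
            cw = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
            targets = np.linspace(0.0, cw[-1], n_new)    # equal weight per new interval
            return np.interp(targets, cw, x)

        x = np.linspace(0.0, 1.0, 101)
        f = np.tanh(50 * (x - 0.5))                      # sharp gradient near x = 0.5
        x_adapted = adapt_1d(x, f)                       # points cluster near the front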

  2. Sinusoidal Order Estimation Using Angles between Subspaces

    Directory of Open Access Journals (Sweden)

    Søren Holdt Jensen

    2009-01-01

    Full Text Available We consider the problem of determining the order of a parametric model from a noisy signal based on the geometry of the space. More specifically, we do this using the nontrivial angles between the candidate signal subspace model and the noise subspace. The proposed principle is closely related to the subspace orthogonality property known from the MUSIC algorithm, and we study its properties and compare it to other related measures. For the problem of estimating the number of complex sinusoids in white noise, a computationally efficient implementation exists, and this problem is therefore considered in detail. In computer simulations, we compare the proposed method to various well-known methods for order estimation. These show that the proposed method outperforms the other previously published subspace methods and that it is more robust to the noise being colored than the previously published methods.

  3. Active Subspaces for Wind Plant Surrogate Modeling

    Energy Technology Data Exchange (ETDEWEB)

    King, Ryan N [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Quick, Julian [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Dykes, Katherine L [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Adcock, Christiane [Massachusetts Institute of Technology]

    2018-01-12

    Understanding the uncertainty in wind plant performance is crucial to their cost-effective design and operation. However, conventional approaches to uncertainty quantification (UQ), such as Monte Carlo techniques or surrogate modeling, are often computationally intractable for utility-scale wind plants because of poor convergence rates or the curse of dimensionality. In this paper we demonstrate that wind plant power uncertainty can be well represented with a low-dimensional active subspace, thereby achieving a significant reduction in the dimension of the surrogate modeling problem. We apply the active subspaces technique to UQ of plant power output with respect to uncertainty in turbine axial induction factors, and find that a single active subspace direction dominates the sensitivity in power output. When this single active subspace direction is used to construct a quadratic surrogate model, the number of model unknowns can be reduced by up to 3 orders of magnitude without compromising performance on unseen test data. We conclude that the dimension reduction achieved with active subspaces makes surrogate-based UQ approaches tractable for utility-scale wind plants.
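
    A minimal sketch of how an active subspace is estimated from sampled gradients of the quantity of interest (here standing in for plant power with respect to axial induction factors); the Monte Carlo estimate of the gradient outer-product matrix and the toy test function are assumptions for illustration.

        import numpy as np

        def active_subspace(grads, k=1):
            """Leading k eigenvectors of the average gradient outer product define
            the active directions; rows of `grads` are sampled gradients."""
            c = grads.T @ grads / grads.shape[0]         # Monte Carlo estimate of E[∇f ∇f^T]
            eigvals, eigvecs = np.linalg.eigh(c)
            order = np.argsort(eigvals)[::-1]
            return eigvals[order], eigvecs[:, order[:k]]

        # Toy check: f(x) = (w·x)^2 varies only along w, so one active direction.
        rng = np.random.default_rng(0)
        w = rng.standard_normal(10); w /= np.linalg.norm(w)
        x = rng.uniform(-1, 1, (500, 10))
        grads = (2 * (x @ w))[:, None] * w               # ∇f = 2 (w·x) w
        vals, basis = active_subspace(grads, k=1)
        print(abs(basis[:, 0] @ w))                      # ≈ 1: direction recovered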

  4. Active Subspaces of Airfoil Shape Parameterizations

    Science.gov (United States)

    Grey, Zachary J.; Constantine, Paul G.

    2018-05-01

    Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.

  5. DySOA : Making service systems self-adaptive

    NARCIS (Netherlands)

    Siljee, J; Bosloper, [No Value; Nijhuis, J; Hammer, D; Benatallah, B; Casati, F; Traverso, P

    2005-01-01

    Service-centric systems exist in a very dynamic environment. This requires these systems to adapt at runtime in order to keep fulfilling their QoS. In order to create self-adaptive service systems, developers should not only design the service architecture, but also need to design the ...

  6. Subspace confinement: how good is your qubit?

    International Nuclear Information System (INIS)

    Devitt, Simon J; Schirmer, Sonia G; Oi, Daniel K L; Cole, Jared H; Hollenberg, Lloyd C L

    2007-01-01

    The basic operating element of standard quantum computation is the qubit, an isolated two-level system that can be accurately controlled, initialized and measured. However, the majority of proposed physical architectures for quantum computation are built from systems that contain much more complicated Hilbert space structures. Hence, defining a qubit requires the identification of an appropriate controllable two-dimensional sub-system. This prompts the obvious question of how well a qubit, thus defined, is confined to this subspace, and whether we can experimentally quantify the potential leakage into states outside the qubit subspace. We demonstrate how subspace leakage can be characterized using minimal theoretical assumptions by examining the Fourier spectrum of the oscillation experiment

  7. Monomial codes seen as invariant subspaces

    Directory of Open Access Journals (Sweden)

    García-Planas María Isabel

    2017-08-01

    Full Text Available It is well known that cyclic codes are very useful because of their applications, since they are not computationally expensive and encoding can be easily implemented. The relationship between cyclic codes and invariant subspaces is also well known. In this paper a generalization of this relationship is presented between monomial codes over a finite field F and hyperinvariant subspaces of F^n under an appropriate linear transformation. Using techniques of Linear Algebra it is possible to deduce certain properties for this particular type of codes, generalizing known results on cyclic codes.

  8. Matrix Krylov subspace methods for image restoration

    Directory of Open Access Journals (Sweden)

    khalide jbilou

    2015-09-01

    Full Text Available In the present paper, we consider some matrix Krylov subspace methods for solving ill-posed linear matrix equations, including those arising in the restoration of blurred and noisy images. Applying the well known Tikhonov regularization procedure leads to a Sylvester matrix equation depending on the Tikhonov regularization parameter. We apply the matrix versions of the well known Krylov subspace methods, namely the least squares (LSQR) and the conjugate gradient (CG) methods, to obtain approximate solutions representing the restored images. Some numerical tests are presented to show the effectiveness of the proposed methods.
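
    As a hedged illustration of how Tikhonov regularization yields a Sylvester-type matrix equation, consider the common separable-blur model G = A X B, with A and B the blurring matrices (notation assumed, not taken from the paper):

        \min_{X}\ \|AXB-G\|_F^2+\mu^2\|X\|_F^2
        \quad\Longrightarrow\quad
        A^{T}A\,X\,BB^{T}+\mu^{2}X=A^{T}GB^{T}

    This is a linear matrix equation in the restored image X whose solution depends on the regularization parameter \mu, and which the matrix LSQR and CG iterations approximate within a Krylov subspace.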

  9. Quantum Computing in Decoherence-Free Subspace Constructed by Triangulation

    OpenAIRE

    Bi, Qiao; Guo, Liu; Ruda, H. E.

    2010-01-01

    A formalism for quantum computing in decoherence-free subspaces is presented. The constructed subspaces are partially triangulated with respect to an index related to the environment. The quantum states in the subspaces are just projected states, which are ruled by a subdynamic kinetic equation. These projected states can be used to perform ideal quantum logical operations without decoherence.

  10. Quantum Computing in Decoherence-Free Subspace Constructed by Triangulation

    Directory of Open Access Journals (Sweden)

    Qiao Bi

    2010-01-01

    Full Text Available A formalism for quantum computing in decoherence-free subspaces is presented. The constructed subspaces are partially triangulated with respect to an index related to the environment. The quantum states in the subspaces are just projected states, which are ruled by a subdynamic kinetic equation. These projected states can be used to perform ideal quantum logical operations without decoherence.

  11. LBAS: Lanczos Bidiagonalization with Subspace Augmentation for Discrete Inverse Problems

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Abe, Kyniyoshi

    The regularizing properties of Lanczos bidiagonalization are powerful when the underlying Krylov subspace captures the dominating components of the solution. In some applications the regularized solution can be further improved by augmenting the Krylov subspace with a low-dimensional subspace that ...

  12. A Novel Self-Adaptive Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Kaiping Luo

    2013-01-01

    Full Text Available The harmony search algorithm is a music-inspired optimization technique and has been successfully applied to diverse scientific and engineering problems. However, like other metaheuristic algorithms, it still faces two difficulties: parameter setting and finding the optimal balance between diversity and intensity in searching. This paper proposes a novel, self-adaptive search mechanism for optimization problems with continuous variables. This new variant can automatically configure the evolutionary parameters in accordance with problem characteristics, such as the scale and the boundaries, and dynamically select evolutionary strategies in accordance with its search performance. The new variant simplifies the parameter setting and efficiently solves all types of optimization problems with continuous variables. Statistical test results show that this variant is considerably robust and outperforms the original harmony search (HS), improved harmony search (IHS), and other self-adaptive variants for large-scale optimization problems and constrained problems.

  13. Differential Evolution Algorithm with Self-Adaptive Population Resizing Mechanism

    Directory of Open Access Journals (Sweden)

    Xu Wang

    2013-01-01

    Full Text Available A differential evolution (DE) algorithm with a self-adaptive population resizing mechanism, SapsDE, is proposed to enhance the performance of DE by dynamically choosing one of two mutation strategies and tuning control parameters in a self-adaptive manner. More specifically, more appropriate mutation strategies along with their parameter settings can be determined adaptively according to the previous status at different stages of the evolution process. To verify the performance of SapsDE, 17 benchmark functions with a wide range of dimensions and diverse complexities are used. Nonparametric statistical procedures were performed for multiple comparisons between the proposed algorithm and five well-known DE variants from the literature. Simulation results show that SapsDE is effective and efficient. It also exhibits much superior results compared with the other five algorithms employed in the comparison in most of the cases.

  14. Subspace System Identification of the Kalman Filter

    Directory of Open Access Journals (Sweden)

    David Di Ruscio

    2003-07-01

    Full Text Available Some proofs concerning a subspace identification algorithm are presented. It is proved that the Kalman filter gain and the noise innovations process can be identified directly from known input and output data without explicitly solving the Riccati equation. Furthermore, it is proved, in general and for colored inputs, that subspace identification of the states only is possible if the deterministic part of the system is known or identified beforehand. However, if the inputs are white, then it is proved that the states can be identified directly. Some alternative projection matrices which can be used to compute the extended observability matrix directly from the data are presented. Furthermore, an efficient method for computing the deterministic part of the system is presented. The closed loop subspace identification problem is also addressed, and it is shown that this problem is solved and unbiased estimates are obtained by simply including a filter in the feedback. Furthermore, an algorithm for consistent closed loop subspace estimation is presented. This algorithm uses the controller parameters in order to overcome the bias problem.

  15. Quantum Zeno subspaces induced by temperature

    Energy Technology Data Exchange (ETDEWEB)

    Militello, B.; Scala, M.; Messina, A. [Dipartimento di Fisica dell' Universita di Palermo, Via Archirafi 36, I-90123 Palermo (Italy)

    2011-08-15

    We discuss the partitioning of the Hilbert space of a quantum system induced by the interaction with another system at thermal equilibrium, showing that the higher the temperature the more effective is the formation of Zeno subspaces. We show that our analysis keeps its validity even in the case of interaction with a bosonic reservoir, provided appropriate limitations of the relevant bandwidth.

  16. Counting Subspaces of a Finite Vector Space

    Indian Academy of Sciences (India)

    Counting Subspaces of a Finite Vector Space – 1. Amritanshu Prasad. General Article. Resonance – Journal of Science Education, Volume 15, Issue 11, November 2010, pp. 977-987.

  17. A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters

    Science.gov (United States)

    Wang, Zhihao; Yi, Jing

    2016-01-01

    For the shortcoming of the fuzzy c-means algorithm (FCM) of needing to know the number of clusters in advance, this paper proposed a new self-adaptive method to determine the optimal number of clusters. Firstly, a density-based algorithm was put forward. The algorithm, according to the characteristics of the dataset, automatically determined the possible maximum number of clusters instead of using the empirical rule √n, and obtained the optimal initial cluster centroids, improving the limitation of FCM that randomly selected cluster centroids lead the convergence result to a local minimum. Secondly, by introducing a penalty function, this paper proposed a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensured that when the number of clusters verges on the number of objects in the dataset, the value of the clustering validity index does not monotonically decrease toward zero, so that the choice of the optimal number of clusters does not lose robustness and decision power. Then, based on these studies, a self-adaptive FCM algorithm was put forward to estimate the optimal number of clusters by an iterative trial-and-error process. At last, experiments were done on the UCI, KDD Cup 1999, and synthetic datasets, which showed that the method not only effectively determined the optimal number of clusters, but also reduced the iterations of FCM and yielded a stable clustering result. PMID:28042291
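
    A compact sketch of the trial-and-error loop around plain fuzzy c-means; the Xie-Beni index is used here only as a familiar stand-in for the paper's new compactness/separation index, and the data and parameter values are illustrative assumptions.

        import numpy as np

        def fcm(x, c, m=2.0, iters=100, seed=0):
            """Plain fuzzy c-means: returns centroids and the membership matrix."""
            rng = np.random.default_rng(seed)
            u = rng.random((c, len(x)))
            u /= u.sum(axis=0)                                        # columns sum to 1
            for _ in range(iters):
                um = u ** m
                v = um @ x / um.sum(axis=1, keepdims=True)            # centroids
                d = np.linalg.norm(x[None, :, :] - v[:, None, :], axis=2) + 1e-12
                u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
            return v, u

        def xie_beni(x, v, u, m=2.0):
            """Xie-Beni validity index (smaller is better): compactness over separation."""
            d2 = np.linalg.norm(x[None, :, :] - v[:, None, :], axis=2) ** 2
            compact = np.sum((u ** m) * d2)
            sep = np.min([np.sum((v[i] - v[j]) ** 2)
                          for i in range(len(v)) for j in range(len(v)) if i != j])
            return compact / (len(x) * sep)

        # Trial-and-error over candidate cluster numbers on synthetic 2-D data.
        rng = np.random.default_rng(1)
        x = np.vstack([rng.normal(k, 0.3, (50, 2)) for k in range(3)])
        scores = {c: xie_beni(x, *fcm(x, c)) for c in range(2, 7)}
        best_c = min(scores, key=scores.get)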

  18. On Self-Adaptive Method for General Mixed Variational Inequalities

    Directory of Open Access Journals (Sweden)

    Abdellah Bnouhachem

    2008-01-01

    Full Text Available We suggest and analyze a new self-adaptive method for solving general mixed variational inequalities, which can be viewed as an improvement of the method of Noor (2003). Global convergence of the new method is proved under the same assumptions as Noor's method. Some preliminary computational results are given to illustrate the efficiency of the proposed method. Since general mixed variational inequalities include general variational inequalities, quasivariational inequalities, and nonlinear (implicit) complementarity problems as special cases, the results proved in this paper continue to hold for these problems.

  19. Unsupervised spike sorting based on discriminative subspace learning.

    Science.gov (United States)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-01-01

    Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. In this paper, we present two unsupervised spike sorting algorithms based on discriminative subspace learning. The first algorithm simultaneously learns the discriminative feature subspace and performs clustering. It uses histogram of features in the most discriminative projection to detect the number of neurons. The second algorithm performs hierarchical divisive clustering that learns a discriminative 1-dimensional subspace for clustering in each level of the hierarchy until achieving almost unimodal distribution in the subspace. The algorithms are tested on synthetic and in-vivo data, and are compared against two widely used spike sorting methods. The comparative results demonstrate that our spike sorting methods can achieve substantially higher accuracy in lower dimensional feature space, and they are highly robust to noise. Moreover, they provide significantly better cluster separability in the learned subspace than in the subspace obtained by principal component analysis or wavelet transform.

  20. INDOOR SUBSPACING TO IMPLEMENT INDOORGML FOR INDOOR NAVIGATION

    Directory of Open Access Journals (Sweden)

    H. Jung

    2015-10-01

    Full Text Available According to the increasing demand for indoor navigation, there have been many attempts to develop applicable indoor networks. Representing a room as a single node is not sufficient for complex and large buildings. As OGC established IndoorGML, subspacing to partition space for constructing a logical network was introduced. Concerning subspacing for indoor networks, transition spaces like halls or corridors also have to be considered. This study presents the subspacing process for creating an indoor network in a shopping mall. Furthermore, categorization of transition spaces is performed and subspacing of these spaces is considered. Halls and squares in the mall are specifically defined for subspacing. Finally, an implementation of the subspacing process for the indoor network is presented.

  1. Indoor Subspacing to Implement Indoorgml for Indoor Navigation

    Science.gov (United States)

    Jung, H.; Lee, J.

    2015-10-01

    According to the increasing demand for indoor navigation, there have been many attempts to develop applicable indoor networks. Representing a room as a single node is not sufficient for complex and large buildings. As OGC established IndoorGML, subspacing to partition space for constructing a logical network was introduced. Concerning subspacing for indoor networks, transition spaces like halls or corridors also have to be considered. This study presents the subspacing process for creating an indoor network in a shopping mall. Furthermore, categorization of transition spaces is performed and subspacing of these spaces is considered. Halls and squares in the mall are specifically defined for subspacing. Finally, an implementation of the subspacing process for the indoor network is presented.

  2. On the maximal dimension of a completely entangled subspace for ...

    Indian Academy of Sciences (India)


    The maximal dimension is max dim S = d1d2···dk − (d1 + ··· + dk) + k − 1, where the maximum is taken over E, the collection of all completely entangled subspaces. When H1 = H2 and k = 2, an explicit orthonormal basis of a maximal completely entangled subspace of H1 ⊗ H2 is given. We also introduce a more delicate notion of a perfectly entangled subspace for a multipartite ...

  3. A Self Adaptive Differential Evolution Algorithm for Global Optimization

    Science.gov (United States)

    Kumar, Pravesh; Pant, Millie

    This paper presents a new Differential Evolution algorithm based on the hybridization of adaptive control parameters and trigonometric mutation. First we propose a self-adaptive DE named ADE where the choice of the control parameters F and Cr is not fixed at constant values but is adapted iteratively. The proposed algorithm is further modified by applying trigonometric mutation to it, and the corresponding algorithm is named ATDE. The performance of ATDE is evaluated on a set of 8 benchmark functions and the results are compared with the classical DE algorithm in terms of average fitness function value, number of function evaluations, convergence time and success rate. The numerical results show the competitiveness of the proposed algorithm.
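    A minimal sketch of the self-adaptive idea follows: F and Cr are occasionally re-drawn per individual and survive only when the trial vector wins. This jDE-style rule is used here as a stand-in; the record's exact iterative adaptation and the trigonometric mutation of ATDE are not reproduced.

```python
# Minimal self-adaptive DE sketch (DE/rand/1/bin with per-individual F and Cr).
import numpy as np

def self_adaptive_de(f, bounds, pop_size=30, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    F = np.full(pop_size, 0.5)
    Cr = np.full(pop_size, 0.9)
    for _ in range(gens):
        for i in range(pop_size):
            # self-adaptation: occasionally redraw the control parameters
            Fi = rng.uniform(0.1, 0.9) if rng.random() < 0.1 else F[i]
            Cri = rng.random() if rng.random() < 0.1 else Cr[i]
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            mutant = pop[r1] + Fi * (pop[r2] - pop[r3])          # DE/rand/1
            cross = rng.random(dim) < Cri
            cross[rng.integers(dim)] = True                      # force one gene
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            ft = f(trial)
            if ft <= fit[i]:                                     # greedy selection
                pop[i], fit[i], F[i], Cr[i] = trial, ft, Fi, Cri
    best = np.argmin(fit)
    return pop[best], fit[best]

sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = self_adaptive_de(sphere, bounds=[(-5, 5)] * 10)
print(x_best, f_best)
```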

  4. Acoustic levitation with self-adaptive flexible reflectors.

    Science.gov (United States)

    Hong, Z Y; Xie, W J; Wei, B

    2011-07-01

    Two kinds of flexible reflectors are proposed and examined in this paper to improve the stability of a single-axis acoustic levitator, especially in the case of levitating high-density and high-temperature samples. One kind has a deformable reflecting surface, and the other kind has an elastic support, both of which are self-adaptive to the change of acoustic radiation pressure. High-density materials such as iridium (density 22.6 g cm⁻³) are stably levitated at room temperature with a soft reflector made of colloid as well as a rigid reflector supported by a spring. In addition, the containerless melting and solidification of binary In-Bi eutectic alloy (melting point 345.8 K) and ternary Ag-Cu-Ge eutectic alloy (melting point 812 K) are successfully achieved by applying the elastically supported reflector with the assistance of a laser beam.

  5. Central subspace dimensionality reduction using covariance operators.

    Science.gov (United States)

    Kim, Minyoung; Pavlovic, Vladimir

    2011-04-01

    We consider the task of dimensionality reduction informed by real-valued multivariate labels. The problem is often treated as Dimensionality Reduction for Regression (DRR), whose goal is to find a low-dimensional representation, the central subspace, of the input data that preserves the statistical correlation with the targets. A class of DRR methods exploits the notion of inverse regression (IR) to discover central subspaces. Whereas most existing IR techniques rely on explicit output space slicing, we propose a novel method called the Covariance Operator Inverse Regression (COIR) that generalizes IR to nonlinear input/output spaces without explicit target slicing. COIR's unique properties make DRR applicable to problem domains with high-dimensional output data corrupted by potentially significant amounts of noise. Unlike recent kernel dimensionality reduction methods that employ iterative nonconvex optimization, COIR yields a closed-form solution. We also establish the link between COIR, other DRR techniques, and popular supervised dimensionality reduction methods, including canonical correlation analysis and linear discriminant analysis. We then extend COIR to semi-supervised settings where many of the input points lack their labels. We demonstrate the benefits of COIR on several important regression problems in both fully supervised and semi-supervised settings.

  6. Seismic noise attenuation using an online subspace tracking algorithm

    NARCIS (Netherlands)

    Zhou, Yatong; Li, Shuhua; Zhang, D.; Chen, Yangkang

    2018-01-01

    We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient

  7. On the numerical stability analysis of pipelined Krylov subspace methods

    Czech Academy of Sciences Publication Activity Database

    Carson, E.T.; Rozložník, Miroslav; Strakoš, Z.; Tichý, P.; Tůma, M.

    submitted 2017 (2018) R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : Krylov subspace methods * the conjugate gradient method * numerical stability * inexact computations * delay of convergence * maximal attainable accuracy * pipelined Krylov subspace methods * exascale computations

  8. Subspace methods for pattern recognition in intelligent environment

    CERN Document Server

    Jain, Lakhmi

    2014-01-01

    This research book provides a comprehensive overview of the state-of-the-art subspace learning methods for pattern recognition in intelligent environments. With the fast development of internet and computer technologies, the amount of available data is rapidly increasing in our daily life. How to extract core information or useful features is an important issue. Subspace methods are widely used for dimension reduction and feature extraction in pattern recognition. They transform high-dimensional data to a lower-dimensional space (subspace) where most of the information is retained. The book covers a broad spectrum of subspace methods including linear, nonlinear and multilinear subspace learning methods and applications. The applications include face alignment, face recognition, medical image analysis, remote sensing image classification, traffic sign recognition, image clustering, super resolution, edge detection, and multi-view facial image synthesis.

  9. Towards Self-adaptation for Dependable Service-Oriented Systems

    Science.gov (United States)

    Cardellini, Valeria; Casalicchio, Emiliano; Grassi, Vincenzo; Lo Presti, Francesco; Mirandola, Raffaela

    Increasingly complex information systems operating in dynamic environments ask for management policies able to deal intelligently and autonomously with problems and tasks. An attempt to deal with these aspects can be found in the Service-Oriented Architecture (SOA) paradigm that foresees the creation of business applications from independently developed services, where services and applications build up complex dependencies. Therefore the dependability of SOA systems strongly depends on their ability to self-manage and adapt themselves to cope with changes in the operating conditions and to meet the required dependability with a minimum of resources. In this paper we propose a model-based approach to the realization of self-adaptable SOA systems, aimed at the fulfillment of dependability requirements. Specifically, we provide a methodology driving the system adaptation and we discuss the architectural issues related to its implementation. To bring this approach to fruition, we developed a prototype tool and we show the results that can be achieved with a simple example.

  10. Self-adaptive calibration for staring infrared sensors

    Science.gov (United States)

    Kendall, William B.; Stocker, Alan D.

    1993-10-01

    This paper presents a new, self-adaptive technique for the correction of non-uniformities (fixed-pattern noise) in high-density infrared focal-plane detector arrays. We have developed a new approach to non-uniformity correction in which we use multiple image frames of the scene itself, and take advantage of the aim-point wander caused by jitter, residual tracking errors, or deliberately induced motion. Such wander causes each detector in the array to view multiple scene elements, and each scene element to be viewed by multiple detectors. It is therefore possible to formulate (and solve) a set of simultaneous equations from which correction parameters can be computed for the detectors. We have tested our approach with actual images collected by the ARPA-sponsored MUSIC infrared sensor. For these tests we employed a 60-frame (0.75-second) sequence of terrain images for which an out-of-date calibration was deliberately used. The sensor was aimed at a point on the ground via an operator-assisted tracking system having a maximum aim-point wander on the order of ten pixels. With these data, we were able to improve the calibration accuracy by a factor of approximately 100.
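    The following is a deliberately simplified, offset-only illustration of the underlying idea, namely that jittered frames of the same scene over-determine the per-detector corrections; the record's actual simultaneous-equation formulation, gain terms, and sub-pixel registration are not reproduced, and the integer shifts are assumed known.

```python
# Simplified offset-only sketch: alternate between a scene estimate and
# per-pixel offsets using frames with known integer jitter (np.roll wraps
# around, which is a further simplification).
import numpy as np

def estimate_offsets(frames, shifts, iters=10):
    """frames: list of (H, W) images; shifts: list of (dy, dx) integer jitter."""
    H, W = frames[0].shape
    offsets = np.zeros((H, W))
    for _ in range(iters):
        # 1) scene estimate: average the offset-corrected, de-jittered frames
        scene = np.zeros((H, W))
        for img, (dy, dx) in zip(frames, shifts):
            scene += np.roll(img - offsets, (-dy, -dx), axis=(0, 1))
        scene /= len(frames)
        # 2) offset estimate: mean residual between each frame and shifted scene
        off_acc = np.zeros((H, W))
        for img, (dy, dx) in zip(frames, shifts):
            off_acc += img - np.roll(scene, (dy, dx), axis=(0, 1))
        offsets = off_acc / len(frames)
        offsets -= offsets.mean()            # fix the arbitrary global constant
    return offsets

# synthetic test: a smooth scene, fixed-pattern offsets, jittered sampling
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
scene_true = np.sin(xx / 6.0) + np.cos(yy / 9.0)
fpn = rng.normal(0, 0.3, size=(64, 64))
shifts = [(int(d[0]), int(d[1])) for d in rng.integers(-5, 6, size=(20, 2))]
frames = [np.roll(scene_true, (dy, dx), axis=(0, 1)) + fpn for dy, dx in shifts]
est = estimate_offsets(frames, shifts)
print("residual FPN std:", np.std(fpn - fpn.mean() - est))
```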

  11. Relational Database Extension Oriented, Self-adaptive Imagery Pyramid Model

    Directory of Open Access Journals (Sweden)

    HU Zhenghua

    2015-06-01

    Full Text Available With the development of remote sensing technology, especially the improvement of sensor resolution, the amount of image data is increasing. This imposes higher requirements on managing huge amounts of data efficiently and intelligently, and how to access massive remote sensing data efficiently has become an increasingly popular topic. In this paper, considering the current development status of spatial data management systems, we propose a self-adaptive strategy for image blocking and a method for LoD (level of detail) model construction that adapts to the combination of database storage, network transmission and the hardware of the client. Confirmed by experiments, this imagery management mechanism can achieve intelligent and efficient storage and access under a variety of different database, network and client conditions. This study provides a feasible idea and method for efficient image data management, contributing to the efficient access and management of remote sensing image data based on database technology in a networked C/S architecture.

  12. An alternative subspace approach to EEG dipole source localization

    Science.gov (United States)

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.

  13. An alternative subspace approach to EEG dipole source localization

    International Nuclear Information System (INIS)

    Xu Xiaoliang; Xu, Bobby; He Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist

  14. Microscopic theory of dynamical subspace for large amplitude collective motion

    International Nuclear Information System (INIS)

    Sakata, Fumihiko; Marumori, Toshio; Ogura, Masanori.

    1986-01-01

    A full quantum theory appropriate for describing large amplitude collective motion is proposed by exploiting the basic idea of the semi-classical theory so far developed within the time-dependent Hartree-Fock theory. A central problem of the quantum theory is how to determine an optimal representation called a dynamical representation specific for the collective subspace where the large amplitude collective motion is replicated as precisely as possible. As an extension of the semi-classical theory where the concept of an approximate integral surface played an important role, the collective subspace is properly characterized by introducing a concept of an approximate invariant subspace of the Hamiltonian. (author)

  15. A New Inexact Inverse Subspace Iteration for Generalized Eigenvalue Problems

    Directory of Open Access Journals (Sweden)

    Fatemeh Mohammad

    2014-05-01

    Full Text Available In this paper, we present an inexact inverse subspace iteration method for computing a few eigenpairs of the generalized eigenvalue problem Ax = λBx [Q. Ye and P. Zhang, Inexact inverse subspace iteration for generalized eigenvalue problems, Linear Algebra and its Applications, 434 (2011) 1697-1715]. In particular, the linear convergence property of the inverse subspace iteration is preserved.
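    A small dense-matrix sketch of the flavour of such a method is given below, assuming A and B are symmetric positive definite; the "inexact" ingredient is modelled by capping the inner CG iterations, which is not the cited paper's exact scheme.

```python
# Sketch of (inexact) inverse subspace iteration for A x = lambda B x with
# A, B symmetric positive definite; inner solves A Y = B X are only approximate.
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.linalg import cg, LinearOperator

def inexact_inverse_subspace_iteration(A, B, k=3, outer=30, inner_iters=20, seed=0):
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    X = np.linalg.qr(rng.standard_normal((n, k)))[0]
    Aop = LinearOperator((n, n), matvec=lambda v: A @ v)
    for _ in range(outer):
        # inexact inner solve of A Y = B X: only a few CG iterations per column
        Y = np.column_stack([cg(Aop, B @ x, maxiter=inner_iters)[0] for x in X.T])
        X, _ = np.linalg.qr(Y)                  # re-orthonormalize the block
    Ar, Br = X.T @ A @ X, X.T @ B @ X           # Rayleigh-Ritz extraction
    vals, V = eigh(Ar, Br)
    return vals, X @ V

# small SPD test pencil
n = 200
A = np.diag(np.arange(1.0, n + 1))
B = np.eye(n) + 0.01 * np.diag(np.ones(n - 1), 1) + 0.01 * np.diag(np.ones(n - 1), -1)
vals, vecs = inexact_inverse_subspace_iteration(A, B, k=3)
print("smallest Ritz values:", np.sort(vals)[:3])
```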

  16. A self-adaptive feedforward rf control system for linacs

    International Nuclear Information System (INIS)

    Zhang Renshan; Ben-Zvi, I.; Xie Jialin

    1993-01-01

    The design and performance of a self-adaptive feedforward rf control system are reported. The system was built for the linac of the Accelerator Test Facility (ATF) at Brookhaven National Laboratory. Variables of time along the linac macropulse, such as field or phase, are discretized and represented as vectors. Upon turn-on, or after a large change in the operating point, the control system acquires the response of the system to test signal vectors and generates a linearized system response matrix. During operation an error vector is generated by comparing the linac variable vectors with a target vector. The error vector is multiplied by the inverse of the system's matrix to generate a correction vector, which is added to an operating point vector. This control system can be used to control a klystron to produce flat rf amplitude and phase pulses, to control an rf cavity to reduce the rf field fluctuation, and to compensate the energy spread among bunches in an rf linac. Beam loading effects can be corrected and a programmed ramp can be produced. The performance of the control system has been evaluated on the control of a klystron's output as well as an rf cavity. Both amplitude and phase have been regulated simultaneously. In initial tests, the rf output from a klystron has been regulated to an amplitude fluctuation of less than ±0.3% and a phase variation of less than ±0.6deg. The rf field of the ATF's photo-cathode microwave gun cavity has been regulated to ±5% in amplitude and simultaneously to ±1deg in phase. Regulating just the rf field amplitude in the rf gun cavity, we have achieved amplitude fluctuation of less than ±2%. (orig.)
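    A generic sketch of the measure-then-invert feedforward idea follows: probe the plant with test vectors to build a linearized response matrix, then repeatedly map the error through its pseudo-inverse to update the drive vector. The plant model and all parameters below are hypothetical stand-ins, not the ATF klystron or gun cavity.

```python
# Generic adaptive feedforward correction toward a flat target pulse.
import numpy as np

rng = np.random.default_rng(1)
n = 32                                          # samples across the macropulse

def plant(drive):
    """Hypothetical mildly nonlinear, noisy amplitude response vs. time."""
    smooth = np.convolve(drive, np.ones(3) / 3, mode="same")
    return 0.9 * smooth + 0.02 * smooth ** 2 + 0.002 * rng.standard_normal(n)

# 1) system identification: response matrix from perturbed test vectors
drive0 = np.full(n, 0.5)
y0 = plant(drive0)
eps = 1e-2
M = np.column_stack([(plant(drive0 + eps * e) - y0) / eps
                     for e in np.eye(n)])       # column j = response to test j
M_pinv = np.linalg.pinv(M)

# 2) operation: iterative feedforward correction toward a flat target pulse
target = np.full(n, 0.55)
drive = drive0.copy()
for it in range(10):
    error = target - plant(drive)
    drive += 0.7 * (M_pinv @ error)             # damped correction step
    print(f"iter {it}: max |error| = {np.abs(error).max():.4f}")
```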

  17. A hydraulic hybrid propulsion method for automobiles with self-adaptive system

    International Nuclear Information System (INIS)

    Wu, Wei; Hu, Jibin; Yuan, Shihua; Di, Chongfeng

    2016-01-01

    A hydraulic hybrid vehicle with a self-adaptive system is proposed. The mode-switching between the driving mode and the hydraulic regenerative braking mode is realised by pressure cross-feedback control. Extensive simulation and test results are presented. The control parameters are reduced and the energy efficiency can be increased by the self-adaptive system. The mode-switching response is fast. The response time can be adjusted by changing the controlling spool diameter of the hydraulically operated check valve in the self-adaptive system. The closing of the valve becomes faster with a smaller controlling spool diameter. The hydraulic regenerative braking mode can be achieved by changing the hydraulic transformer controlled angle. Compared with the conventional electric-hydraulic system, the self-adaptive system for the hydraulic hybrid vehicle mode-switching has a higher reliability and a lower cost. The efficiency of the hydraulic regenerative braking is also increased. - Highlights: • A new hybrid system with a self-adaptive system for automobiles is presented. • The mode-switching is realised by the pressure cross-feedback control. • The energy efficiency can be increased with the self-adaptive system. • The control parameters are reduced with the self-adaptive system.

  18. Subspace Correction Methods for Total Variation and $\\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo; Schönlieb, Carola-Bibiane

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a

  19. Optimal Design of Large Dimensional Adaptive Subspace Detectors

    KAUST Repository

    Ben Atitallah, Ismail

    2016-05-27

    This paper addresses the design of Adaptive Subspace Matched Filter (ASMF) detectors in the presence of a mismatch in the steering vector. These detectors are coined as adaptive in reference to the step of utilizing an estimate of the clutter covariance matrix using training data of signal-free observations. To estimate the clutter covariance matrix, we employ regularized covariance estimators that, by construction, force the eigenvalues of the covariance estimates to be greater than a positive scalar. While this feature is likely to increase the bias of the covariance estimate, it presents the advantage of improving its conditioning, thus making the regularization suitable for handling high dimensional regimes. In this paper, we consider the setting of the regularization parameter and the threshold for ASMF detectors in both Gaussian and Compound Gaussian clutters. In order to allow for a proper selection of these parameters, it is essential to analyze the false alarm and detection probabilities. For tractability, such a task is carried out under the asymptotic regime in which the number of observations and their dimensions grow simultaneously large, thereby allowing us to leverage existing results from random matrix theory. Simulation results are provided in order to illustrate the relevance of the proposed design strategy and to compare the performances of the proposed ASMF detectors versus Adaptive Normalized Matched Filter (ANMF) detectors under mismatch scenarios.

  20. Optimal Design of Large Dimensional Adaptive Subspace Detectors

    KAUST Repository

    Ben Atitallah, Ismail; Kammoun, Abla; Alouini, Mohamed-Slim; Alnaffouri, Tareq Y.

    2016-01-01

    This paper addresses the design of Adaptive Subspace Matched Filter (ASMF) detectors in the presence of a mismatch in the steering vector. These detectors are coined as adaptive in reference to the step of utilizing an estimate of the clutter covariance matrix using training data of signal-free observations. To estimate the clutter covariance matrix, we employ regularized covariance estimators that, by construction, force the eigenvalues of the covariance estimates to be greater than a positive scalar. While this feature is likely to increase the bias of the covariance estimate, it presents the advantage of improving its conditioning, thus making the regularization suitable for handling high dimensional regimes. In this paper, we consider the setting of the regularization parameter and the threshold for ASMF detectors in both Gaussian and Compound Gaussian clutters. In order to allow for a proper selection of these parameters, it is essential to analyze the false alarm and detection probabilities. For tractability, such a task is carried out under the asymptotic regime in which the number of observations and their dimensions grow simultaneously large, thereby allowing us to leverage existing results from random matrix theory. Simulation results are provided in order to illustrate the relevance of the proposed design strategy and to compare the performances of the proposed ASMF detectors versus Adaptive Normalized Matched Filter (ANMF) detectors under mismatch scenarios.

  1. On spectral subspaces and their applications to automorphism groups

    International Nuclear Information System (INIS)

    Olesen, Dorte

    1974-03-01

    An attempt is made to give a survey of the theory of spectra and spectral subspaces of group representations in an abstract Banach space setting. The theory is applied to the groups of automorphisms of operator algebras (mostly C*-algebras) and some important results of interest for mathematical physicists are proved (restrictions of the bitransposed action, spectral subspaces for the transposed action on a C*-algebra, and positive states and representations of R^n). [fr]

  2. EVD Dualdating Based Online Subspace Learning

    Directory of Open Access Journals (Sweden)

    Bo Jin

    2014-01-01

    Full Text Available Conventional incremental PCA methods usually only discuss the situation of adding samples. In this paper, we consider two different cases: deleting samples and simultaneously adding and deleting samples. To avoid the NP-hard problem of downdating SVD without right singular vectors and specific position information, we choose to use EVD instead of SVD, which is used by most IPCA methods. First, we propose an EVD updating and downdating algorithm, called EVD dualdating, which permits simultaneous arbitrary adding and deleting operations, via transforming the EVD of the covariance matrix into an SVD updating problem plus an EVD of a small autocorrelation matrix. A comprehensive analysis is delivered to express the essence, expansibility, and computational complexity of EVD dualdating. A mathematical theorem proves that if the whole data matrix satisfies the low-rank-plus-shift structure, EVD dualdating is an optimal rank-k estimator in the sequential environment. A selection method based on eigenvalues is presented to determine the optimal rank k of the subspace. Then, we propose three incremental/decremental PCA methods: EVDD-IPCA, EVDD-DPCA, and EVDD-IDPCA, which are adaptive to the varying mean. Finally, plenty of comparative experiments demonstrate that EVDD-based methods outperform conventional incremental/decremental PCA methods in both efficiency and accuracy.

  3. Self-adaptive phosphor coating technology for wafer-level scale chip packaging

    International Nuclear Information System (INIS)

    Zhou Linsong; Rao Haibo; Wang Wei; Wan Xianlong; Liao Junyuan; Wang Xuemei; Zhou Da; Lei Qiaolin

    2013-01-01

    A new self-adaptive phosphor coating technology has been successfully developed, which adopts a slurry method combined with a self-exposure process. A phosphor suspension in a water-soluble photoresist is applied, exposed to the LED blue light itself, and developed to form a conformal phosphor coating that is self-adaptive to the angular distribution of the blue-light intensity and gives better-performing spatial color uniformity. The self-adaptive phosphor coating technology has been successfully applied on the wafer surface to realize a wafer-level scale conformal phosphor coating. The first-stage experiments show satisfying results and give an adequate demonstration of the flexibility of the self-adaptive coating technology for WLSCP applications. (semiconductor devices)

  4. Beyond Reactive Planning: Self Adaptive Software and Self Modeling Software in Predictive Deliberation Management

    National Research Council Canada - National Science Library

    Lenahan, Jack; Nash, Michael P; Charles, Phil

    2008-01-01

    We present the following hypothesis: predictive deliberation management using self-adapting and self-modeling software will be required to provide mission planning adjustments after the start of a mission...

  5. Finite element method for solving Kohn-Sham equations based on self-adaptive tetrahedral mesh

    International Nuclear Information System (INIS)

    Zhang Dier; Shen Lihua; Zhou Aihui; Gong Xingao

    2008-01-01

    A finite element (FE) method with a self-adaptive mesh-refinement technique is developed for solving the density functional Kohn-Sham equations. The FE method adopts local piecewise polynomial basis functions, which produce sparsely structured Hamiltonian matrices. The method is well suited for parallel implementation without using the Fourier transform. In addition, the self-adaptive mesh-refinement technique can control the computational accuracy and efficiency with an optimal mesh density in different regions.

  6. Two-qubit quantum computing in a projected subspace

    International Nuclear Information System (INIS)

    Bi Qiao; Ruda, H.E.; Zhan, M.S.

    2002-01-01

    A formulation for performing quantum computing in a projected subspace is presented, based on the subdynamical kinetic equation (SKE) for an open quantum system. The eigenvectors of the kinetic equation are shown to remain invariant before and after interaction with the environment. However, the eigenvalues in the projected subspace exhibit a type of phase shift to the evolutionary states. This phase shift does not destroy the decoherence-free (DF) property of the subspace because the associated fidelity is 1. This permits a universal formalism to be presented--the eigenprojectors of the free part of the Hamiltonian for the system and bath may be used to construct a DF projected subspace based on the SKE. To eliminate possible phase or unitary errors induced by the change in the eigenvalues, a cancellation technique is proposed, using the adjustment of the coupling time, and applied to a two-qubit computing system. A general criterion for constructing a DF-projected subspace from the SKE is discussed. Finally, a proposal for using triangulation to realize a decoherence-free subsystem based on the SKE is presented. The concrete formulation for a two-qubit model is given exactly. Our approach is general and appears to be applicable to any type of decoherence

  7. Adiabatic evolution of decoherence-free subspaces and its shortcuts

    Science.gov (United States)

    Wu, S. L.; Huang, X. L.; Li, H.; Yi, X. X.

    2017-10-01

    The adiabatic theorem and shortcuts to adiabaticity for time-dependent open quantum systems are explored in this paper. Starting from the definition of dynamical stable decoherence-free subspace, we show that, under a compact adiabatic condition, the quantum state remains in the time-dependent decoherence-free subspace with an extremely high purity, even though the dynamics of the open quantum system may not be adiabatic. The adiabatic condition mentioned here in the adiabatic theorem for open systems is very similar to that for closed quantum systems, except that the operators required to change slowly are the Lindblad operators. We also show that the adiabatic evolution of decoherence-free subspaces depends on the existence of instantaneous decoherence-free subspaces, which requires that the Hamiltonian of open quantum systems be engineered according to the incoherent control protocol. In addition, shortcuts to adiabaticity for adiabatic decoherence-free subspaces are also presented based on the transitionless quantum driving method. Finally, we provide an example that consists of a two-level system coupled to a broadband squeezed vacuum field to show our theory. Our approach employs Markovian master equations and the theory can apply to finite-dimensional quantum open systems.

  8. Independence and totalness of subspaces in phase space methods

    Science.gov (United States)

    Vourdas, A.

    2018-04-01

    The concepts of independence and totalness of subspaces are introduced in the context of quasi-probability distributions in phase space, for quantum systems with finite-dimensional Hilbert space. It is shown that due to the non-distributivity of the lattice of subspaces, there are various levels of independence, from pairwise independence up to (full) independence. Pairwise totalness, totalness and other intermediate concepts are also introduced, which roughly express that the subspaces overlap strongly among themselves, and they cover the full Hilbert space. A duality between independence and totalness, that involves orthocomplementation (logical NOT operation), is discussed. Another approach to independence is also studied, using Rota's formalism on independent partitions of the Hilbert space. This is used to define informational independence, which is proved to be equivalent to independence. As an application, the pentagram (used in discussions on contextuality) is analysed using these concepts.

  9. Reduced-Rank Adaptive Filtering Using Krylov Subspace

    Directory of Open Access Journals (Sweden)

    Sergueï Burykh

    2003-01-01

    Full Text Available A unified view of several recently introduced reduced-rank adaptive filters is presented. As all considered methods use Krylov subspace for rank reduction, the approach taken in this work is inspired from Krylov subspace methods for iterative solutions of linear systems. The alternative interpretation so obtained is used to study the properties of each considered technique and to relate one reduced-rank method to another as well as to algorithms used in computational linear algebra. Practical issues are discussed and low-complexity versions are also included in our study. It is believed that the insight developed in this paper can be further used to improve existing reduced-rank methods according to known results in the domain of Krylov subspace methods.

  10. A generalized Schwinger boson mapping with a physical subspace

    International Nuclear Information System (INIS)

    Scholtz, F.G.; Geyer, H.B.

    1988-01-01

    We investigate the existence of a physical subspace for generalized Schwinger boson mappings of SO(2n+1) contains SO(2n) in view of previous observations by Marshalek and the recent construction of such a mapping and subspace for SO(8) by Kaup. It is shown that Kaup's construction can be attributed to the existence of a unique SO(8) automorphism. We proceed to construct a generalized Schwinger-type mapping for SO(2n+1) contains SO(2n) which, in contrast to a similar attempt by Yamamura and Nishiyama, indeed has a corresponding physical subspace. This new mapping includes in the special case of SO(8) the mapping by Kaup which is equivalent to the one given by Yamamura and Nishiyama for n=4. Nevertheless, we indicate the limitations of the generalized Schwinger mapping regarding its applicability to situations where one seeks to establish a direct link between phenomenological boson models and an underlying fermion microscopy. (orig.)

  11. Subspace Correction Methods for Total Variation and $\\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on ℓ1-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.

  12. Roller Bearing Monitoring by New Subspace-Based Damage Indicator

    Directory of Open Access Journals (Sweden)

    G. Gautier

    2015-01-01

    Full Text Available A frequency-band subspace-based damage identification method for fault diagnosis in roller bearings is presented. Subspace-based damage indicators are obtained by filtering the vibration data in the frequency range where damage is likely to occur, that is, around the bearing characteristic frequencies. The proposed method is validated by considering simulated data of a damaged bearing. Also, an experimental case is considered which focuses on vibration data collected from a run-to-failure test. It is shown that the proposed method can detect bearing defects and, as such, it appears to be an efficient tool for diagnosis purposes.

  13. Krylov subspace methods for solving large unsymmetric linear systems

    International Nuclear Information System (INIS)

    Saad, Y.

    1981-01-01

    Some algorithms based upon a projection process onto the Krylov subspace K_m = span(r_0, A r_0, ..., A^(m-1) r_0) are developed, generalizing the method of conjugate gradients to unsymmetric systems. These methods are extensions of Arnoldi's algorithm for solving eigenvalue problems. The convergence is analyzed in terms of the distance of the solution to the subspace K_m and some error bounds are established showing, in particular, a similarity with the conjugate gradient method (for symmetric matrices) when the eigenvalues are real. Several numerical experiments are described and discussed
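    As a compact illustration of projection onto K_m, the sketch below builds an Arnoldi basis and minimizes the residual over the subspace (a GMRES-style solve); restarting, breakdown handling beyond the trivial case, and the error bounds discussed in the record are omitted.

```python
# Arnoldi basis of K_m, then least-squares residual minimization over K_m.
import numpy as np

def arnoldi_solve(A, b, x0=None, m=30):
    n = len(b)
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):                      # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                     # lucky breakdown
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1); e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return x0 + Q[:, :m] @ y

rng = np.random.default_rng(0)
n = 120
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # unsymmetric, well conditioned
b = rng.standard_normal(n)
x = arnoldi_solve(A, b, m=60)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```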

  14. A subspace preconditioning algorithm for eigenvector/eigenvalue computation

    Energy Technology Data Exchange (ETDEWEB)

    Bramble, J.H.; Knyazev, A.V.; Pasciak, J.E.

    1996-12-31

    We consider the problem of computing a modest number of the smallest eigenvalues along with orthogonal bases for the corresponding eigenspaces of a symmetric positive definite matrix. In our applications, the dimension of the matrix is large and the cost of inverting it is prohibitive. In this paper, we shall develop an effective parallelizable technique for computing these eigenvalues and eigenvectors utilizing subspace iteration and preconditioning. Estimates will be provided which show that the preconditioned method converges linearly and uniformly in the matrix dimension when used with a uniform preconditioner under the assumption that the approximating subspace is close enough to the span of the desired eigenvectors.

  15. Control of beam halo-chaos using neural network self-adaptation method

    International Nuclear Information System (INIS)

    Fang Jinqing; Huang Guoxian; Luo Xiaoshu

    2004-11-01

    Taking advantage of neural network control methods for nonlinear complex systems, control of beam halo-chaos in the periodic focusing channels (network) of high-intensity accelerators is studied by a feed-forward back-propagating neural network self-adaptation method. The envelope radius of the high-intensity proton beam is brought to the matching beam radius by suitably selecting the control structure of the neural network and the linear feedback coefficient, and by adjusting the coefficients of the neural network. The beam halo-chaos is obviously suppressed and the shaking size is greatly reduced after the neural network self-adaptation control is applied. (authors)

  16. Fast regularizing sequential subspace optimization in Banach spaces

    International Nuclear Information System (INIS)

    Schöpfer, F; Schuster, T

    2009-01-01

    We are concerned with fast computations of regularized solutions of linear operator equations in Banach spaces in case only noisy data are available. To this end we modify recently developed sequential subspace optimization methods in such a way that the therein employed Bregman projections onto hyperplanes are replaced by Bregman projections onto stripes whose width is in the order of the noise level

  17. Experimental Comparison of Signal Subspace Based Noise Reduction Methods

    DEFF Research Database (Denmark)

    Hansen, Peter Søren Kirk; Hansen, Per Christian; Hansen, Steffen Duus

    1999-01-01

    The signal subspace approach for non-parametric speech enhancement is considered. Several algorithms have been proposed in the literature but only partly analyzed. Here, the different algorithms are compared, and the emphasis is put onto the limiting factors and practical behavior of the estimators...

  18. Recursive subspace identification for in flight modal analysis of airplanes

    OpenAIRE

    De Cock , Katrien; Mercère , Guillaume; De Moor , Bart

    2006-01-01

    In this paper recursive subspace identification algorithms are applied to track the modal parameters of airplanes on-line during test flights. The ability to track changes in the damping ratios and the influence of the forgetting factor are studied through simulations.

  19. Von Neumann algebras as complemented subspaces of B(H)

    DEFF Research Database (Denmark)

    Christensen, Erik; Wang, Liguang

    2014-01-01

    Let M be a von Neumann algebra of type II1 which is also a complemented subspace of B(H). We establish an algebraic criterion, which ensures that M is an injective von Neumann algebra. As a corollary we show that if M is a complemented factor of type II1 on a Hilbert space H, then M is injective...

  20. Lie n-derivations on J-subspace lattice algebras

    Indian Academy of Sciences (India)

    all x ∈ K and all A ∈ Alg L. Based on this result, a complete characterization of linear n-Lie derivations on Alg L is obtained. Keywords. J-subspace lattice algebras; Lie derivations; Lie n-derivations; derivations. 2010 Mathematics Subject Classification. 47B47, 47L35. 1. Introduction. Let A be an algebra. Recall that a linear ...

  1. Intrinsic Grassmann Averages for Online Linear and Robust Subspace Learning

    DEFF Research Database (Denmark)

    Chakraborty, Rudrasis; Hauberg, Søren; Vemuri, Baba C.

    2017-01-01

    Principal Component Analysis (PCA) is a fundamental method for estimating a linear subspace approximation to high-dimensional data. Many algorithms exist in literature to achieve a statistically robust version of PCA called RPCA. In this paper, we present a geometric framework for computing the p...

  2. Active Subspace Methods for Data-Intensive Inverse Problems

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Qiqi [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)]

    2017-04-27

    The project has developed theory and computational tools to exploit active subspaces to reduce the dimension in statistical calibration problems. This dimension reduction enables MCMC methods to calibrate otherwise intractable models. The same theoretical and computational tools can also reduce the measurement dimension for calibration problems that use large stores of data.
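    The standard active-subspace construction, an eigendecomposition of the averaged outer product of sampled gradients, can be sketched as follows on a toy function; the project's own models and MCMC machinery are not reproduced.

```python
# Active subspace via the spectrum of C = E[grad f grad f^T].
import numpy as np

def active_subspace(grad_f, dim, n_samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_samples, dim))
    G = np.array([grad_f(x) for x in X])                # sampled gradients
    C = G.T @ G / n_samples                             # averaged outer product
    evals, evecs = np.linalg.eigh(C)
    order = np.argsort(evals)[::-1]                     # descending eigenvalues
    return evals[order], evecs[:, order]

# toy model: f depends on x only through two linear combinations w1.x and w2.x
dim = 10
w1 = np.linspace(1.0, 2.0, dim); w2 = np.ones(dim)
grad_f = lambda x: 2.0 * (w1 @ x) * w1 + 0.1 * np.cos(w2 @ x) * w2

evals, W = active_subspace(grad_f, dim)
print("leading eigenvalues:", np.round(evals[:4], 3))   # sharp drop after 2
active_dirs = W[:, :2]                                  # reduced calibration space
```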

  3. Subspace identification of distributed clusters of homogeneous systems

    NARCIS (Netherlands)

    Yu, C.; Verhaegen, M.H.G.

    2017-01-01

    This note studies the identification of a network comprised of interconnected clusters of LTI systems. Each cluster consists of homogeneous dynamical systems, and its interconnections with the rest of the network are unmeasurable. A subspace identification method is proposed for identifying a single

  4. A parallel direct solver for the self-adaptive hp Finite Element Method

    KAUST Repository

    Paszyński, Maciej R.; Pardo, David; Torres-Verdín, Carlos; Demkowicz, Leszek F.; Calo, Victor M.

    2010-01-01

    measurement simulations problems. We measure the execution time and memory usage of the solver over a large regular mesh with 1.5 million degrees of freedom as well as on the highly non-regular mesh, generated by the self-adaptive h p-FEM, with finite elements

  5. A Self-adaptive Scope Allocation Scheme for Labeling Dynamic XML Documents

    NARCIS (Netherlands)

    Shen, Y.; Feng, L.; Shen, T.; Wang, B.

    This paper proposes a self-adaptive scope allocation scheme for labeling dynamic XML documents. It is general, light-weight and can be built upon existing data retrieval mechanisms. Bayesian inference is used to compute the actual scope allocated for labeling a certain node based on both the prior

  6. Extending and implementing the Self-adaptive Virtual Processor for distributed memory architectures

    NARCIS (Netherlands)

    van Tol, M.W.; Koivisto, J.

    2011-01-01

    Many-core architectures of the future are likely to have distributed memory organizations and need fine grained concurrency management to be used effectively. The Self-adaptive Virtual Processor (SVP) is an abstract concurrent programming model which can provide this, but the model and its current

  7. Design optimization and analysis of selected thermal devices using self-adaptive Jaya algorithm

    International Nuclear Information System (INIS)

    Rao, R.V.; More, K.C.

    2017-01-01

    Highlights: • Self-adaptive Jaya algorithm is proposed for optimal design of thermal devices. • Optimization of heat pipe, cooling tower, heat sink and thermo-acoustic prime mover is presented. • Results of the proposed algorithm are better than the other optimization techniques. • The proposed algorithm may be conveniently used for the optimization of other devices. - Abstract: The present study explores the use of an improved Jaya algorithm called the self-adaptive Jaya algorithm for the optimal design of selected thermal devices, viz., heat pipe, cooling tower, honeycomb heat sink and thermo-acoustic prime mover. Four different optimization case studies of the selected thermal devices are presented. Researchers had attempted the same design problems in the past using the niched Pareto genetic algorithm (NPGA), response surface method (RSM), leap-frog optimization program with constraints (LFOPC) algorithm, teaching-learning based optimization (TLBO) algorithm, grenade explosion method (GEM) and multi-objective genetic algorithm (MOGA). The results achieved by using the self-adaptive Jaya algorithm are compared with those achieved by using the NPGA, RSM, LFOPC, TLBO, GEM and MOGA algorithms. The self-adaptive Jaya algorithm is proved superior as compared to the other optimization methods in terms of the results, computational effort and function evaluations.
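    The basic Jaya move (drift toward the current best and away from the current worst, with no algorithm-specific control parameters) is sketched below; the random population re-sizing shown is only a stand-in for the record's self-adaptive population scheme, and the thermal-device objectives are replaced by a generic test function.

```python
# Jaya move plus a simple population re-sizing step standing in for the
# self-adaptive population scheme.
import numpy as np

def self_adaptive_jaya(f, bounds, pop0=20, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop0, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        best, worst = pop[np.argmin(fit)], pop[np.argmax(fit)]
        for i in range(len(pop)):
            r1, r2 = rng.random(dim), rng.random(dim)
            cand = pop[i] + r1 * (best - np.abs(pop[i])) - r2 * (worst - np.abs(pop[i]))
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fit[i]:                      # keep only improvements
                pop[i], fit[i] = cand, fc
        # stand-in self-adaptive step: randomly grow or shrink the population
        new_size = max(5, int(round(len(pop) * (1.0 + rng.uniform(-0.5, 0.5)))))
        order = np.argsort(fit)
        if new_size <= len(pop):
            pop, fit = pop[order[:new_size]], fit[order[:new_size]]
        else:
            extra = rng.uniform(lo, hi, size=(new_size - len(pop), dim))
            pop = np.vstack([pop, extra])
            fit = np.concatenate([fit, [f(x) for x in extra]])
    i = np.argmin(fit)
    return pop[i], fit[i]

rosen = lambda x: float(np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))
x_best, f_best = self_adaptive_jaya(rosen, bounds=[(-2, 2)] * 5)
print(x_best, f_best)
```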

  8. A Subspace Approach to the Structural Decomposition and Identification of Ankle Joint Dynamic Stiffness.

    Science.gov (United States)

    Jalaleddini, Kian; Tehrani, Ehsan Sobhani; Kearney, Robert E

    2017-06-01

    The purpose of this paper is to present a structural decomposition subspace (SDSS) method for decomposition of the joint torque into intrinsic, reflexive, and voluntary torques and identification of joint dynamic stiffness. First, it formulates a novel state-space representation for the joint dynamic stiffness modeled by a parallel-cascade structure with a concise parameter set that provides a direct link between the state-space representation matrices and the parallel-cascade parameters. Second, it presents a subspace method for the identification of the new state-space model that involves two steps: 1) the decomposition of the intrinsic and reflex pathways and 2) the identification of an impulse response model of the intrinsic pathway and a Hammerstein model of the reflex pathway. Extensive simulation studies demonstrate that SDSS has significant performance advantages over some other methods. Thus, SDSS was more robust under high noise conditions, converging where others failed; it was more accurate, giving estimates with lower bias and random errors. The method also worked well in practice and yielded high-quality estimates of intrinsic and reflex stiffnesses when applied to experimental data at three muscle activation levels. The simulation and experimental results demonstrate that SDSS accurately decomposes the intrinsic and reflex torques and provides accurate estimates of physiologically meaningful parameters. SDSS will be a valuable tool for studying joint stiffness under functionally important conditions. It has important clinical implications for the diagnosis, assessment, objective quantification, and monitoring of neuromuscular diseases that change muscle tone.

  9. Geodesic Flow Kernel Support Vector Machine for Hyperspectral Image Classification by Unsupervised Subspace Feature Transfer

    Directory of Open Access Journals (Sweden)

    Alim Samat

    2016-03-01

    Full Text Available In order to deal with scenarios where the training data, used to deduce a model, and the validation data have different statistical distributions, we study the problem of transformed subspace feature transfer for domain adaptation (DA) in the context of hyperspectral image classification via a geodesic Gaussian flow kernel based support vector machine (GFKSVM). To show the superior performance of the proposed approach, conventional support vector machines (SVMs) and state-of-the-art DA algorithms, including information-theoretical learning of discriminative cluster for domain adaptation (ITLDC), joint distribution adaptation (JDA), and joint transfer matching (JTM), are also considered. Additionally, unsupervised linear and nonlinear subspace feature transfer techniques including principal component analysis (PCA), randomized nonlinear principal component analysis (rPCA), factor analysis (FA) and non-negative matrix factorization (NNMF) are investigated and compared. Experiments on two real hyperspectral images show the cross-image classification performances of the GFKSVM, confirming its effectiveness and suitability when applied to hyperspectral images.

  10. Subspace-based interference removal methods for a multichannel biomagnetic sensor array

    Science.gov (United States)

    Sekihara, Kensuke; Nagarajan, Srikantan S.

    2017-10-01

    Objective. In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial domain) signal subspace by introducing a new definition of signal subspace in the time domain. Approach. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights over existing interference removal methods from a unified perspective. Main results and significance. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as the time domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently-proposed dual signal subspace projection. Our analysis using the notion of the time domain signal space projection reveals implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.

  11. Different structures on subspaces of OsckM

    Directory of Open Access Journals (Sweden)

    Čomić Irena

    2013-01-01

    Full Text Available The geometry of OsckM spaces was introduced by R. Miron and Gh. Atanasiu in [6] and [7]. The theory of these spaces was developed by R. Miron and his cooperators from Romania, Japan and other countries in several books and many papers. Only some of them are mentioned in the references. Here we recall the construction of adapted bases in T(OsckM) and T*(OsckM), which are comprehensive with the J structure. The theory of two complementary families of subspaces is presented as it was done in [2] and [4]. The operators J, J, θ, θ, p, p* are introduced in the ambient space and subspaces. Some new relations between them are established. The action of these operators on Liouville vector fields is examined.

  12. Evaluating Clustering in Subspace Projections of High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Günnemann, Stephan; Assent, Ira

    2009-01-01

    Clustering high dimensional data is an emerging research field. Subspace clustering or projected clustering group similar objects in subspaces, i.e. projections, of the full space. In the past decade, several clustering paradigms have been developed in parallel, without thorough evaluation...... and comparison between these paradigms on a common basis. Conclusive evaluation and comparison is challenged by three major issues. First, there is no ground truth that describes the "true" clusters in real world data. Second, a large variety of evaluation measures have been used that reflect different aspects...... of the clustering result. Finally, in typical publications authors have limited their analysis to their favored paradigm only, while paying other paradigms little or no attention. In this paper, we take a systematic approach to evaluate the major paradigms in a common framework. We study representative clustering...

  13. Integrated Phoneme Subspace Method for Speech Feature Extraction

    Directory of Open Access Journals (Sweden)

    Park Hyunsin

    2009-01-01

    Full Text Available Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequency filter bank domain. Transformations are based on principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA). Furthermore, this paper introduces a new feature extraction technique that collects the correlation information among phoneme subspaces and reconstructs the feature space for representing phonemic information efficiently. The proposed speech feature vector is generated by projecting an observed vector onto an integrated phoneme subspace (IPS) based on PCA or ICA. The performance of the new feature was evaluated for isolated word speech recognition. The proposed method provided higher recognition accuracy than conventional methods in clean and reverberant environments.

  14. Invariant subspaces in some function spaces on symmetric spaces. II

    International Nuclear Information System (INIS)

    Platonov, S S

    1998-01-01

    Let G be a semisimple connected Lie group with finite centre, K a maximal compact subgroup of G, and M=G/K a Riemannian symmetric space of non-compact type. We study the problem of describing the structure of closed linear subspaces in various function spaces on M that are invariant under the quasiregular representation of the group G. We consider the case when M is a symplectic symmetric space of rank 1

  15. Quantum cloning of mixed states in symmetric subspaces

    International Nuclear Information System (INIS)

    Fan Heng

    2003-01-01

    A quantum-cloning machine for arbitrary mixed states in symmetric subspaces is proposed. This quantum-cloning machine can be used to copy part of the output state of another quantum-cloning machine and is useful in quantum computation and quantum information. The shrinking factor of this quantum cloning achieves the well-known upper bound. When the input is identical pure states, two different fidelities of this cloning machine are optimal.

  16. Bi Sparsity Pursuit: A Paradigm for Robust Subspace Recovery

    Science.gov (United States)

    2016-09-27

    The success of sparse models in computer vision and machine learning is due to the fact that high-dimensional data is distributed in a union of low-dimensional subspaces in many real-world ...

  17. Index Formulae for Subspaces of Kreĭn Spaces

    NARCIS (Netherlands)

    Dijksma, Aad; Gheondea, Aurelian

    1996-01-01

    For a subspace S of a Kreĭn space K and an arbitrary fundamental decomposition K = K-[+]K+ of K, we prove the index formula κ-(S) + dim(S⊥ ∩ K+) = κ+(S⊥) + dim(S ∩ K-), where κ±(S) stands for the positive/negative signature of S. The difference dim(S ∩ K-) - dim(S⊥ ∩ K+), provided it is well

  18. An adaptation of Krylov subspace methods to path following

    Energy Technology Data Exchange (ETDEWEB)

    Walker, H.F. [Utah State Univ., Logan, UT (United States)]

    1996-12-31

    Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
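    A minimal tangent-predictor/Newton-corrector sketch of the path-following setup is given below, with the corrector step constrained to be orthogonal to the approximate tangent; the Krylov solvers and preconditioning discussed in the record are replaced by a dense solve on a toy curve.

```python
# Predictor-corrector tracking of F(x, s) = 0 with corrector steps orthogonal
# to the approximate tangent (dense linear algebra stands in for Krylov solves).
import numpy as np

def null_tangent(Jz, t_prev):
    _, _, Vt = np.linalg.svd(Jz)
    t = Vt[-1]                                   # null-space direction of J
    if t_prev is not None and t @ t_prev < 0:    # keep a consistent orientation
        t = -t
    return t

def track_curve(F, J, z0, steps=40, h=0.1, newton_iters=8, tol=1e-10):
    """z = (x..., s); J(z) returns the full (n x n+1) Jacobian of F."""
    zs = [np.asarray(z0, float)]
    t_prev = None
    for _ in range(steps):
        t = null_tangent(J(zs[-1]), t_prev)      # approximate tangent direction
        z = zs[-1] + h * t                       # predictor step
        for _ in range(newton_iters):            # corrector: F = 0 and step ⟂ t
            A = np.vstack([J(z), t])             # augmented square system
            rhs = np.concatenate([-F(z), [0.0]])
            z = z + np.linalg.solve(A, rhs)
            if np.linalg.norm(F(z)) < tol:
                break
        zs.append(z)
        t_prev = t
    return np.array(zs)

# toy curve: the unit circle x0^2 + x1^2 - 1 = 0 traced in the (x0, x1) plane
F = lambda z: np.array([z[0] ** 2 + z[1] ** 2 - 1.0])
J = lambda z: np.array([[2 * z[0], 2 * z[1]]])
path = track_curve(F, J, z0=[1.0, 0.0])
print(path[:5])
```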

  19. Subspace-based Inverse Uncertainty Quantification for Nuclear Data Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Khuwaileh, B.A., E-mail: bakhuwai@ncsu.edu; Abdel-Khalik, H.S.

    2015-01-15

    Safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. An inverse problem can be defined and solved to assess the sources of uncertainty, and experimental effort can be subsequently directed to further reduce the uncertainty associated with these sources. In this work a subspace-based algorithm for inverse sensitivity/uncertainty quantification (IS/UQ) has been developed to enable analysts to account for all sources of nuclear data uncertainties in support of target accuracy assessment-type analysis. An approximate analytical solution of the optimization problem is used to guide the search for the dominant uncertainty subspace. By limiting the search to a subspace, the degrees of freedom available for the optimization search are significantly reduced. A quarter PWR fuel assembly is modeled and the accuracy of the multiplication factor and the fission reaction rate are used as reactor attributes whose uncertainties are to be reduced. Numerical experiments are used to demonstrate the computational efficiency of the proposed algorithm. Our ongoing work is focusing on extending the proposed algorithm to account for various forms of feedback, e.g., thermal-hydraulics and depletion effects.

  20. Comparison Study of Subspace Identification Methods Applied to Flexible Structures

    Science.gov (United States)

    Abdelghani, M.; Verhaegen, M.; Van Overschee, P.; De Moor, B.

    1998-09-01

    In the past few years, various time domain methods for identifying dynamic models of mechanical structures from modal experimental data have appeared. Much attention has been given recently to so-called subspace methods for identifying state space models. This paper presents a detailed comparison study of these subspace identification methods: the eigensystem realisation algorithm with observer/Kalman filter Markov parameters computed from input/output data (ERA/OM), the robust version of the numerical algorithm for subspace system identification (N4SID), and a refined version of the past outputs scheme of the multiple-output error state space (MOESP) family of algorithms. The comparison is performed by simulating experimental data using the five mode reduced model of the NASA Mini-Mast structure. The general conclusion is that for the case of white noise excitations as well as coloured noise excitations, the N4SID/MOESP algorithms perform equally well but give better results (improved transfer function estimates, improved estimates of the output) compared to the ERA/OM algorithm. The key computational step in the three algorithms is the approximation of the extended observability matrix of the system to be identified, for N4SID/MOESP, or of the observer for the system to be identified, for the ERA/OM. Furthermore, the three algorithms only require the specification of one dimensioning parameter.
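
    The shared key step singled out above, approximating the extended observability matrix, can be sketched in a few lines. The snippet below uses an output-only (stochastic) flavour with a block-Hankel projection and an SVD; it is a simplified illustration, not the N4SID, MOESP, or ERA/OM implementations compared in the paper, and the test signal and dimensions are assumptions.

```python
# Simplified sketch: estimate the extended observability matrix from an SVD of
# projected block-Hankel output data, then read off C and A.
import numpy as np

def block_hankel(y, rows):
    """Stack `rows` block rows of the output sequence y (shape: samples x l)."""
    N, l = y.shape
    cols = N - rows + 1
    return np.vstack([y[i:i + cols].T for i in range(rows)])   # (rows*l, cols)

def estimate_observability(y, i=10, order=2):
    H = block_hankel(y, 2 * i)
    l = y.shape[1]
    Yp, Yf = H[:i * l], H[i * l:]                  # past / future block rows
    # orthogonal projection of the future onto the row space of the past
    proj = Yf @ Yp.T @ np.linalg.pinv(Yp @ Yp.T) @ Yp
    U, s, _ = np.linalg.svd(proj, full_matrices=False)
    O = U[:, :order] * np.sqrt(s[:order])          # extended observability matrix
    C = O[:l]                                      # first block row
    A = np.linalg.pinv(O[:-l]) @ O[l:]             # shift-invariance structure
    return A, C

# usage on a toy damped oscillation observed through one channel
t = np.arange(2000) * 0.01
y = (np.exp(-0.05 * t) * np.sin(2 * np.pi * 1.5 * t))[:, None]
y += 0.01 * np.random.randn(*y.shape)
A, C = estimate_observability(y, i=20, order=2)
print(np.abs(np.angle(np.linalg.eigvals(A))) / 0.01 / (2 * np.pi))  # roughly 1.5 Hz
```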

  1. Improved Stochastic Subspace System Identification for Structural Health Monitoring

    Science.gov (United States)

    Chang, Chia-Ming; Loh, Chin-Hsiung

    2015-07-01

    Structural health monitoring acquires structural information through numerous sensor measurements. Vibrational measurement data allow the dynamic characteristics of structures to be extracted, in particular the modal properties such as natural frequencies, damping, and mode shapes. Stochastic subspace system identification has been recognized as a powerful tool that can represent a structure in modal coordinates. To obtain high-quality identification results, this tool must expend considerable computation on a large set of measurements. In this study, a stochastic system identification framework is proposed to improve the efficiency and quality of the conventional stochastic subspace system identification. This framework includes 1) measured signal processing, 2) efficient space projection, 3) system order selection, and 4) modal property derivation. The measured signal processing employs the singular spectrum analysis algorithm to lower the noise components as well as to present a data set in a reduced dimension. The subspace is subsequently derived from the data set presented in a delayed coordinate. With the proposed order selection criteria, the number of structural modes is determined, resulting in the modal properties. This system identification framework is applied to a real-world bridge for exploring the feasibility in real-time applications. The results show that this improved system identification method significantly decreases computational time, while high-quality modal parameters are still attained.
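
    Step (1) of the framework, singular-spectrum-analysis preprocessing, can be illustrated with a basic SSA denoiser: embed the record in a trajectory matrix, truncate its SVD, and diagonal-average back to a time series. The window length, rank, and test signal below are illustrative choices, not the paper's settings.

```python
# Basic singular spectrum analysis (SSA) denoiser used as a stand-in for the
# measured-signal-processing step: trajectory matrix -> truncated SVD ->
# diagonal averaging (Hankelization).
import numpy as np

def ssa_denoise(x, window=50, rank=4):
    N = len(x)
    K = N - window + 1
    X = np.column_stack([x[i:i + window] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                  # rank-truncated
    out = np.zeros(N)
    counts = np.zeros(N)
    for i in range(window):          # diagonal averaging back to a 1-D series
        for j in range(K):
            out[i + j] += Xr[i, j]
            counts[i + j] += 1
    return out / counts

# usage: noisy two-mode vibration record
t = np.arange(3000) * 0.005
clean = np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.sin(2 * np.pi * 6.5 * t)
noisy = clean + 0.5 * np.random.randn(t.size)
denoised = ssa_denoise(noisy, window=100, rank=4)
print(np.std(noisy - clean), np.std(denoised - clean))   # noise level reduced
```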

  2. Self-adaptive demodulation for polarization extinction ratio in distributed polarization coupling.

    Science.gov (United States)

    Zhang, Hongxia; Ren, Yaguang; Liu, Tiegen; Jia, Dagong; Zhang, Yimo

    2013-06-20

    A self-adaptive method for distributed polarization extinction ratio (PER) demodulation is demonstrated. It is characterized by dynamic PER threshold coupling intensity (TCI) and nonuniform PER iteration step length (ISL). Based on the preset PER calculation accuracy and original distribution coupling intensity, TCI and ISL can be made self-adaptive to determine contributing coupling points inside the polarizing devices. Distributed PER is calculated by accumulating those coupling points automatically and selectively. Two different kinds of polarization-maintaining fibers are tested, and PERs are obtained after merely 3-5 iterations using the proposed method. Comparison experiments with a Thorlabs commercial instrument are also conducted, and results show high consistency. In addition, the optimum preset PER calculation accuracy of 0.05 dB is obtained through many repeated experiments.

  3. Smart Electrochemical Energy Storage Devices with Self-Protection and Self-Adaptation Abilities.

    Science.gov (United States)

    Yang, Yun; Yu, Dandan; Wang, Hua; Guo, Lin

    2017-12-01

    Currently, with booming development and worldwide usage of rechargeable electrochemical energy storage devices, their safety issues, operation stability, service life, and user experience are garnering special attention. Smart and intelligent energy storage devices with self-protection and self-adaptation abilities aiming to address these challenges are being developed with great urgency. In this Progress Report, we highlight recent achievements in the field of smart energy storage systems that could early-detect incoming internal short circuits and self-protect against thermal runaway. Moreover, intelligent devices that are able to take actions and self-adapt in response to external mechanical disruption or deformation, i.e., exhibiting self-healing or shape-memory behaviors, are discussed. Finally, insights into the future development of smart rechargeable energy storage devices are provided. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. A Self-Adaptive Hidden Markov Model for Emotion Classification in Chinese Microblogs

    Directory of Open Access Journals (Sweden)

    Li Liu

    2015-01-01

    We propose a modified version of the hidden Markov model (HMM) classifier, called self-adaptive HMM, whose parameters are optimized by Particle Swarm Optimization algorithms. Since manually labeling a large-scale dataset is difficult, we also employ entropy to decide whether a new unlabeled tweet should be included in the training dataset after being assigned an emotion using our HMM-based approach. In the experiment, we collected about 200,000 Chinese tweets from Sina Weibo. The results show that the F-score of our approach reaches 76% on happiness and fear and 65% on anger, surprise, and sadness. In addition, the self-adaptive HMM classifier outperforms Naive Bayes and Support Vector Machine on recognition of happiness, anger, and sadness.
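
    The entropy-based self-training rule described above can be sketched as follows: a pseudo-labelled tweet is added to the training set only if the posterior over emotion classes is confident (low entropy). The threshold and the five-class posteriors below are placeholders; the PSO-tuned HMM itself is not reproduced.

```python
# Entropy test on a classifier's posterior to decide whether an unlabeled
# sample is confident enough to be added to the training set with its
# pseudo-label.
import numpy as np

def posterior_entropy(probs):
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def maybe_add_to_training(probs, threshold=0.5):
    """Return (accept, pseudo_label) for one unlabeled sample."""
    h = posterior_entropy(probs)
    return h < threshold, int(np.argmax(probs))

# usage with hypothetical posteriors over 5 emotion classes
confident = [0.90, 0.04, 0.03, 0.02, 0.01]
uncertain = [0.30, 0.25, 0.20, 0.15, 0.10]
print(maybe_add_to_training(confident))   # (True, 0)  -> added with pseudo-label
print(maybe_add_to_training(uncertain))   # (False, 0) -> left out
```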

  5. Research on a Pulmonary Nodule Segmentation Method Combining Fast Self-Adaptive FCM and Classification

    Directory of Open Access Journals (Sweden)

    Hui Liu

    2015-01-01

    Full Text Available The key problem of computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues fast and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and single neighboring pixels and the spatial similarity between central pixels and their neighborhood, and it effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms.

  6. Location-Based Self-Adaptive Routing Algorithm for Wireless Sensor Networks in Home Automation

    Directory of Open Access Journals (Sweden)

    Hong SeungHo

    2011-01-01

    Full Text Available The use of wireless sensor networks in home automation (WSNHA) is attractive due to their characteristics of self-organization, high sensing fidelity, low cost, and potential for rapid deployment. Although the AODVjr routing algorithm in IEEE 802.15.4/ZigBee and other routing algorithms have been designed for wireless sensor networks, not all are suitable for WSNHA. In this paper, we propose a location-based self-adaptive routing algorithm for WSNHA called WSNHA-LBAR. It confines route discovery flooding to a cylindrical request zone, which reduces the routing overhead and decreases broadcast storm problems in the MAC layer. It also automatically adjusts the size of the request zone using a self-adaptive algorithm based on Bayes' theorem. This makes WSNHA-LBAR more adaptable to the changes of the network state and easier to implement. Simulation results show improved network reliability as well as reduced routing overhead.

  7. A Least Square-Based Self-Adaptive Localization Method for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Baoguo Yu

    2016-01-01

    Full Text Available In wireless sensor network (WSN) localization methods based on the Received Signal Strength Indicator (RSSI), it is usually required to determine the parameters of the radio signal propagation model before estimating the distance between the anchor node and an unknown node with reference to their communication RSSI value, and finally a localization algorithm is used to estimate the location of the unknown node. However, this localization method, though high in localization accuracy, has weaknesses such as a complex working procedure and poor system versatility. Concerning these defects, a self-adaptive WSN localization method based on least square is proposed, which uses the least square criterion to estimate the parameters of the radio signal propagation model and thereby reduces the amount of computation in the estimation process. The experimental results show that the proposed self-adaptive localization method achieves high processing efficiency while satisfying the high localization accuracy requirement. Conclusively, the proposed method is of definite practical value.
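
    A minimal sketch of the least-squares idea follows: fit the log-distance path-loss model RSSI(d) = A - 10*n*log10(d) to calibration measurements, then invert it to range estimates. The model form is the standard one usually assumed in RSSI ranging; the data values are synthetic and the paper's full localization step is not shown.

```python
# Least-squares fit of the log-distance path-loss parameters (A, n) from
# RSSI measurements at known distances, then inversion to a range estimate.
import numpy as np

def fit_path_loss(distances, rssi):
    """Least-squares estimate of (A, n) in RSSI = A - 10*n*log10(d)."""
    X = np.column_stack([np.ones_like(distances), -10.0 * np.log10(distances)])
    (A, n), *_ = np.linalg.lstsq(X, rssi, rcond=None)
    return A, n

def rssi_to_distance(rssi, A, n):
    return 10 ** ((A - rssi) / (10.0 * n))

# synthetic calibration data between nodes with known separations
rng = np.random.default_rng(0)
d = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
true_A, true_n = -40.0, 2.2
measured = true_A - 10 * true_n * np.log10(d) + rng.normal(0, 1.0, d.size)

A_hat, n_hat = fit_path_loss(d, measured)
print(A_hat, n_hat)                            # close to -40, 2.2
print(rssi_to_distance(-55.0, A_hat, n_hat))   # range estimate for a new reading
```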

  8. Control of suspended low-gravity simulation system based on self-adaptive fuzzy PID

    Science.gov (United States)

    Chen, Zhigang; Qu, Jiangang

    2017-09-01

    In this paper, an active suspended low-gravity simulation system is proposed to follow the vertical motion of the spacecraft. Firstly, the working principle and mathematical model of the low-gravity simulation system are presented. In order to establish the balance process and suppress the strong position interference of the system, a self-adaptive fuzzy PID control strategy is proposed. It combines the PID controller with a fuzzy control strategy, so that the control system can be automatically adjusted by changing the proportional, integral, and differential parameters of the controller in real time. At last, we use Simulink tools to verify the performance of the controller. The results show that, with the self-adaptive fuzzy PID method, the system can reach the balanced state quickly without overshoot or oscillation and can follow a speed of 3 m/s, while the simulation accuracy of the system can reach 95.9% or more.
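
    The gain-adjustment idea can be sketched with a PID controller whose proportional, integral, and differential gains are rescaled online from the error and error rate. The two-rule adjustment below is a crude stand-in for a full fuzzy rule base, and the toy plant and constants are assumptions, not the suspended low-gravity system model.

```python
# PID controller with gains rescaled online from the error and error rate,
# as a simplified stand-in for fuzzy gain adaptation.
class SelfAdaptivePID:
    def __init__(self, kp, ki, kd, dt):
        self.kp0, self.ki0, self.kd0, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def _gains(self, e, de):
        # crude stand-in for fuzzy inference: large error -> stronger P,
        # fast-changing error -> stronger D, small error -> relatively more I
        ae = min(abs(e), 1.0)
        ade = min(abs(de), 1.0)
        return (self.kp0 * (1.0 + 0.5 * ae),
                self.ki0 * (1.0 + 0.5 * (1.0 - ae)),
                self.kd0 * (1.0 + 0.5 * ade))

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        de = (e - self.prev_error) / self.dt
        kp, ki, kd = self._gains(e, de)
        self.integral += e * self.dt
        self.prev_error = e
        return kp * e + ki * self.integral + kd * de

# usage on a toy first-order plant tracking a 3 m/s velocity command
dt, v, ctrl = 0.01, 0.0, SelfAdaptivePID(2.0, 0.5, 0.05, 0.01)
for _ in range(2000):
    u = ctrl.step(3.0, v)
    v += dt * (-0.5 * v + u)      # dv/dt = -0.5 v + u
print(v)                           # approaches 3.0
```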

  9. QoS-aware self-adaptation of communication protocols in a pervasive service middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Hansen, Klaus Marius; Fernandes, João

    2010-01-01

    Pervasive computing is characterized by heterogeneous devices that usually have scarce resources requiring optimized usage. These devices may use different communication protocols which can be switched at runtime. As different communication protocols have different quality of service (QoS) properties, this motivates optimized self-adaption of protocols for devices, e.g., considering power consumption and other QoS requirements, e.g. round trip time (RTT) for service invocations, throughput, and reliability. In this paper, we present an extensible approach for self-adaptation of communication protocols for pervasive web services, where protocols are designed as reusable connectors and our middleware infrastructure can hide the complexity of using different communication protocols to upper layers. We also propose to use Genetic Algorithms (GAs) to find optimized configurations at runtime...

  10. Study on the pressure self-adaptive water-tight junction box in underwater vehicle

    Directory of Open Access Journals (Sweden)

    Haocai Huang

    2012-09-01

    Full Text Available Underwater vehicles play a very important role in underwater engineering. Water-tight junction box (WJB) is one of the key components in underwater vehicle. This paper puts forward a pressure self-adaptive water-tight junction box (PSAWJB) which improves the reliability of the WJB significantly by solving the sealing and pressure problems in conventional WJB design. By redundancy design method, the pressure self-adaptive equalizer (PSAE) is designed in such a way that it consists of a piston pressure-adaptive compensator (PPAC) and a titanium film pressure-adaptive compensator (TFPAC). According to hydro-mechanical simulations, the operating volume of the PSAE is more than or equal to 11.6 % of the volume of WJB liquid system. Furthermore, the required operating volume of the PSAE also increases as the gas content of oil, hydrostatic pressure or temperature difference increases. The reliability of the PSAWJB is proved by hyperbaric chamber tests.

  11. Self-adapting metal-ceramic coating for biomass and waste incineration plants

    Energy Technology Data Exchange (ETDEWEB)

    Faulstich, Martin [Technische Univ. Muenchen (Germany); Fehr, Karl Thomas; Ye, Ya-Ping [Ludwig-Maximilians-Univ., Muenchen (Germany); Loeh, Ingrid; Mocker, Mario; Wolf, Gerhard [ATZ Entwicklungszentrum, Sulzbach-Rosenberg (Germany)

    2010-07-01

    Thermally sprayed coatings might become a reasonable alternative to cost-intensive cladding of heat exchangers in biomass and waste incineration. Shortcomings of these coatings might be overcome by a double-layer system, consisting of Alloy 625 covered with yttria-stabilized zirconia. Under appropriate conditions, re-crystallized zirconium oxide and chromium oxide form a dense, self-adapting and self-healing barrier against further infiltration of gaseous species. (orig.)

  12. The self-adaptation to dynamic failures for efficient virtual organization formations in grid computing context

    International Nuclear Information System (INIS)

    Han Liangxiu

    2009-01-01

    Grid computing aims to enable 'resource sharing and coordinated problem solving in dynamic, multi-institutional virtual organizations (VOs)'. However, due to the nature of heterogeneous and dynamic resources, dynamic failures in the distributed grid environment usually occur more often than in traditional computation platforms, which causes failed VO formations. In this paper, we develop a novel self-adaptive mechanism for handling dynamic failures during VO formations. Such a self-adaptive scheme allows an individual member of a VO to automatically find another available or replaceable one once a failure happens, and therefore makes systems recover automatically from dynamic failures. We define dynamic failure situations of a system by using two standard indicators: mean time between failures (MTBF) and mean time to recover (MTTR). We model both MTBF and MTTR as Poisson distributions. We investigate and analyze the efficiency of the proposed self-adaptation mechanism under dynamic failures by comparing the success probability of VO formations before and after adopting it in three different cases: (1) different failure situations; (2) different organizational structures and scales; (3) different task complexities. The experimental results show that the proposed scheme can automatically adapt to dynamic failures and effectively improve the dynamic VO formation performance in the event of node failures, which provides a valuable addition to the field.

  13. The Study of Reinforcement Learning for Traffic Self-Adaptive Control under Multiagent Markov Game Environment

    Directory of Open Access Journals (Sweden)

    Lun-Hui Xu

    2013-01-01

    Full Text Available The urban traffic self-adaptive control problem is dynamic and uncertain, so the states of the traffic environment are hard to observe. An efficient agent which controls a single intersection can be discovered automatically via multiagent reinforcement learning. However, in the majority of the previous works on this approach, each agent needed perfectly observed information when interacting with the environment and learned individually with less efficient coordination. This study casts traffic self-adaptive control as a multiagent Markov game problem. The design employs a traffic signal control agent (TSCA) for each signalized intersection that coordinates with neighboring TSCAs. A mathematical model for the TSCAs’ interaction is built based on a nonzero-sum Markov game, which has been applied to let TSCAs learn how to cooperate. A multiagent Markov game reinforcement learning approach is constructed on the basis of single-agent Q-learning. This method lets each TSCA learn to update its Q-values under the joint actions and imperfect information. The convergence of the proposed algorithm is analyzed theoretically. The simulation results show that the proposed method is convergent and effective in a realistic traffic self-adaptive control setting.
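
    The learning rule underlying the approach can be sketched as a tabular Q-update for a single traffic signal control agent whose Q-table is indexed by its local state and the joint action of itself and one neighbouring agent. States, actions, rewards, and parameters below are illustrative placeholders, not the paper's Markov game model.

```python
# Tabular Q-learning update for one TSCA under joint actions with a neighbour.
import numpy as np

n_states, n_own_actions, n_neighbor_actions = 8, 2, 2
Q = np.zeros((n_states, n_own_actions, n_neighbor_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(1)

def choose_action(state):
    """Epsilon-greedy over own actions, averaging over the neighbour's action."""
    if rng.random() < epsilon:
        return int(rng.integers(n_own_actions))
    return int(np.argmax(Q[state].mean(axis=1)))

def update(state, own_a, neigh_a, reward, next_state):
    """Q-learning update under the observed joint action."""
    td_target = reward + gamma * Q[next_state].max()
    Q[state, own_a, neigh_a] += alpha * (td_target - Q[state, own_a, neigh_a])

# usage with a fake environment transition (queue-length based reward)
s, a, na = 3, choose_action(3), 1
r, s_next = -4.2, 5            # e.g. negative total queue length after the step
update(s, a, na, r, s_next)
print(Q[3])
```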

  14. Excluding Noise from Short Krylov Subspace Approximations to the Truncated Singular Value Decomposition (SVD)

    Science.gov (United States)

    2017-09-27

    100 times larger for the minimal Krylov subspace. (Figure residue: plots of ||Ĝ^{-1}||_F and of ||x|| for min_x ||Ĝx - b|| against Krylov subspace dimension, comparing the SVD approximation with the Krylov subspace K_n(G, u(0)).)

  15. Gamow state vectors as functionals over subspaces of the nuclear space

    International Nuclear Information System (INIS)

    Bohm, A.

    1979-12-01

    Exponentially decaying Gamow state vectors are obtained from S-matrix poles in the lower half of the second sheet, and are defined as functionals over a subspace of the nuclear space, PHI. Exponentially growing Gamow state vectors are obtained from S-matrix poles in the upper half of the second sheet, and are defined as functionals over another subspace of PHI. On functionals over these two subspaces the dynamical group of time development splits into two semigroups

  16. A subspace approach to high-resolution spectroscopic imaging.

    Science.gov (United States)

    Lam, Fan; Liang, Zhi-Pei

    2014-04-01

    To accelerate spectroscopic imaging using sparse sampling of (k,t)-space and subspace (or low-rank) modeling to enable high-resolution metabolic imaging with good signal-to-noise ratio. The proposed method, called SPectroscopic Imaging by exploiting spatiospectral CorrElation, exploits a unique property known as partial separability of spectroscopic signals. This property indicates that high-dimensional spectroscopic signals reside in a very low-dimensional subspace and enables special data acquisition and image reconstruction strategies to be used to obtain high-resolution spatiospectral distributions with good signal-to-noise ratio. More specifically, a hybrid chemical shift imaging/echo-planar spectroscopic imaging pulse sequence is proposed for sparse sampling of (k,t)-space, and a low-rank model-based algorithm is proposed for subspace estimation and image reconstruction from sparse data with the capability to incorporate prior information and field inhomogeneity correction. The performance of the proposed method has been evaluated using both computer simulations and phantom studies, which produced very encouraging results. For two-dimensional spectroscopic imaging experiments on a metabolite phantom, a factor of 10 acceleration was achieved with a minimal loss in signal-to-noise ratio compared to the long chemical shift imaging experiments and with a significant gain in signal-to-noise ratio compared to the accelerated echo-planar spectroscopic imaging experiments. The proposed method, SPectroscopic Imaging by exploiting spatiospectral CorrElation, is able to significantly accelerate spectroscopic imaging experiments, making high-resolution metabolic imaging possible. Copyright © 2014 Wiley Periodicals, Inc.
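
    The partial-separability property exploited above can be illustrated numerically: the space-time (Casorati) matrix of a spectroscopic signal is very low rank, so a few temporal basis functions and spatial coefficient maps reproduce it. The sketch below only shows this rank-L factorization on a fully sampled synthetic phantom; the actual method estimates the temporal subspace from training data and fits coefficients to sparse (k,t) samples, which is not reproduced here.

```python
# Partial separability illustration: a rank-L space-time (Casorati) matrix is
# recovered almost exactly from its first L singular components.
import numpy as np

n_vox, n_t, L = 256, 512, 3
t = np.arange(n_t) / n_t
# synthetic temporal basis: three decaying "metabolite" resonances (assumed)
basis = np.stack([np.exp(-t / 0.2) * np.cos(2 * np.pi * f * n_t * t / 10)
                  for f in (1.0, 2.0, 3.5)])                 # (L, n_t)
coeffs = np.random.rand(n_vox, L)                            # spatial maps
casorati = coeffs @ basis + 0.01 * np.random.randn(n_vox, n_t)

U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
print(s[:6] / s[0])              # only ~L significant singular values
recon = (U[:, :L] * s[:L]) @ Vt[:L]
print(np.linalg.norm(recon - casorati) / np.linalg.norm(casorati))  # small error
```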

  17. Subspace-based analysis of the ERT inverse problem

    Science.gov (United States)

    Ben Hadj Miled, Mohamed Khames; Miller, Eric L.

    2004-05-01

    In a previous work, we proposed a source-type formulation to the electrical resistance tomography (ERT) problem. Specifically, we showed that inhomogeneities in the medium can be viewed as secondary sources embedded in the homogeneous background medium and located at positions associated with variation in electrical conductivity. Assuming a piecewise constant conductivity distribution, the support of equivalent sources is equal to the boundary of the inhomogeneity. The estimation of the anomaly shape takes the form of an inverse source-type problem. In this paper, we explore the use of subspace methods to localize the secondary equivalent sources associated with discontinuities in the conductivity distribution. Our first alternative is the multiple signal classification (MUSIC) algorithm which is commonly used in the localization of multiple sources. The idea is to project a finite collection of plausible pole (or dipole) sources onto an estimated signal subspace and select those with largest correlations. In ERT, secondary sources are excited simultaneously but in different ways, i.e. with distinct amplitude patterns, depending on the locations and amplitudes of primary sources. If the number of receivers is "large enough", different source configurations can lead to a set of observation vectors that span the data subspace. However, since sources that are spatially close to each other have highly correlated signatures, separation of such signals becomes very difficult in the presence of noise. To overcome this problem we consider iterative MUSIC algorithms like R-MUSIC and RAP-MUSIC. These recursive algorithms pose a computational burden as they require multiple large combinatorial searches. Results obtained with these algorithms using simulated data of different conductivity patterns are presented.
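
    To make the projection step concrete, the following sketch runs generic narrowband MUSIC on a toy uniform linear array: estimate the noise subspace from the sample covariance and scan candidate signatures against it. In the ERT setting the steering vectors would be replaced by pole or dipole source signatures, which is not reproduced here; array size, angles, and noise level are assumptions.

```python
# Generic MUSIC: eigen-decompose the sample covariance, take the noise
# subspace, and locate sources as peaks of the pseudospectrum.
import numpy as np

rng = np.random.default_rng(0)
m, snapshots = 12, 200                      # sensors, observation vectors
true_angles = np.deg2rad([-20.0, 15.0])     # two sources

def steering(theta, m):
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta)) / np.sqrt(m)

A = np.column_stack([steering(th, m) for th in true_angles])        # (m, 2)
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
X = A @ S + 0.1 * (rng.standard_normal((m, snapshots))
                   + 1j * rng.standard_normal((m, snapshots)))

R = X @ X.conj().T / snapshots              # sample covariance
w, V = np.linalg.eigh(R)                    # ascending eigenvalues
En = V[:, :m - 2]                           # noise subspace (assume 2 sources)

grid = np.deg2rad(np.linspace(-90, 90, 721))
pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(th, m)) ** 2
                   for th in grid])
is_peak = (pseudo[1:-1] > pseudo[:-2]) & (pseudo[1:-1] > pseudo[2:])
cand = np.where(is_peak)[0] + 1
top2 = cand[np.argsort(pseudo[cand])[-2:]]
print(np.rad2deg(np.sort(grid[top2])))      # close to [-20, 15]
```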

  18. A Krylov Subspace Method for Unstructured Mesh SN Transport Computation

    International Nuclear Information System (INIS)

    Yoo, Han Jong; Cho, Nam Zin; Kim, Jong Woon; Hong, Ser Gi; Lee, Young Ouk

    2010-01-01

    Hong et al. have developed a computer code MUST (Multi-group Unstructured geometry SN Transport) for neutral particle transport calculations in three-dimensional unstructured geometry. In this code, the discrete ordinates transport equation is solved by using the discontinuous finite element method (DFEM) or the subcell balance methods with linear discontinuous expansion. In this paper, the conventional source iteration in the MUST code is replaced by the Krylov subspace method to reduce computing time, and the numerical test results are given

  19. Perturbation for Frames for a Subspace of a Hilbert Space

    DEFF Research Database (Denmark)

    Christensen, Ole; deFlicht, C.; Lennard, C.

    1997-01-01

    We extend a classical result stating that a sufficiently small perturbation $\{g_i\}$ of a Riesz sequence $\{f_i\}$ in a Hilbert space $H$ is again a Riesz sequence. It turns out that the analog result for a frame does not hold unless the frame is complete. However, we are able to prove a very similar result for frames in the case where the gap between the subspaces $\overline{span}\{f_i\}$ and $\overline{span}\{g_i\}$ is small enough. We give a geometric interpretation of the result.

  20. Automatic detection of multiple UXO-like targets using magnetic anomaly inversion and self-adaptive fuzzy c-means clustering

    Science.gov (United States)

    Yin, Gang; Zhang, Yingtang; Fan, Hongbo; Ren, Guoquan; Li, Zhining

    2017-12-01

    We have developed a method for automatically detecting UXO-like targets based on magnetic anomaly inversion and self-adaptive fuzzy c-means clustering. Magnetic anomaly inversion methods are used to estimate the initial locations of multiple UXO-like sources. Although these initial locations have some errors with respect to the real positions, they form dense clouds around the actual positions of the magnetic sources. Then we use the self-adaptive fuzzy c-means clustering algorithm to cluster these initial locations. The estimated number of cluster centroids represents the number of targets and the cluster centroids are regarded as the locations of magnetic targets. Effectiveness of the method has been demonstrated using synthetic datasets. Computational results show that the proposed method can be applied to the case of several UXO-like targets that are randomly scattered within a confined, shallow subsurface volume. A field test was carried out to test the validity of the proposed method and the experimental results show that the prearranged magnets can be detected unambiguously and located precisely.

  1. Kalman Filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry.

    Science.gov (United States)

    Zhang, Yuxin; Chen, Shuo; Deng, Kexin; Chen, Bingyao; Wei, Xing; Yang, Jiafei; Wang, Shi; Ying, Kui

    2017-01-01

    To develop a self-adaptive and fast thermometry method by combining the original hybrid magnetic resonance thermometry method and the bio heat transfer equation (BHTE) model. The proposed Kalman filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry, abbreviated as KalBHT hybrid method, introduced the BHTE model to synthesize a window on the regularization term of the hybrid algorithm, which leads to a self-adaptive regularization both spatially and temporally with change of temperature. Further, to decrease the sensitivity to accuracy of the BHTE model, Kalman filter is utilized to update the window at each iteration time. To investigate the effect of the proposed model, computer heating simulation, phantom microwave heating experiment and dynamic in-vivo model validation of liver and thoracic tumor were conducted in this study. The heating simulation indicates that the KalBHT hybrid algorithm achieves more accurate results without adjusting λ to a proper value in comparison to the hybrid algorithm. The results of the phantom heating experiment illustrate that the proposed model is able to follow temperature changes in the presence of motion and the temperature estimated also shows less noise in the background and surrounding the hot spot. The dynamic in-vivo model validation with heating simulation demonstrates that the proposed model has a higher convergence rate, more robustness to susceptibility problem surrounding the hot spot and more accuracy of temperature estimation. In the healthy liver experiment with heating simulation, the RMSE of the hot spot of the proposed model is reduced to about 50% compared to the RMSE of the original hybrid model and the convergence time becomes only about one fifth of the hybrid model. The proposed model is able to improve the accuracy of the original hybrid algorithm and accelerate the convergence rate of MR temperature estimation.

  2. Parameters identification of photovoltaic models using self-adaptive teaching-learning-based optimization

    International Nuclear Information System (INIS)

    Yu, Kunjie; Chen, Xu; Wang, Xin; Wang, Zhenlei

    2017-01-01

    Highlights: • SATLBO is proposed to identify the PV model parameters efficiently. • In SATLBO, the learners self-adaptively select different learning phases. • An elite learning is developed in teacher phase to perform local searching. • A diversity learning is proposed in learner phase to maintain population diversity. • SATLBO achieves the first in ranking on overall performance among nine algorithms. - Abstract: Parameters identification of photovoltaic (PV) model based on measured current-voltage characteristic curves plays an important role in the simulation and evaluation of PV systems. To accurately and reliably identify the PV model parameters, a self-adaptive teaching-learning-based optimization (SATLBO) is proposed in this paper. In SATLBO, the learners can self-adaptively select different learning phases based on their knowledge level. The better learners are more likely to choose the learner phase for improving the population diversity, while the worse learners tend to choose the teacher phase to enhance the convergence rate. Thus, learners at different levels focus on different searching abilities to efficiently enhance the performance of algorithm. In addition, to improve the searching ability of different learning phases, an elite learning strategy and a diversity learning method are introduced into the teacher phase and learner phase, respectively. The performance of SATLBO is firstly evaluated on 34 benchmark functions, and experimental results show that SATLBO achieves the first in ranking on the overall performance among nine algorithms. Then, SATLBO is employed to identify parameters of different PV models, i.e., single diode, double diode, and PV module. Experimental results indicate that SATLBO exhibits high accuracy and reliability compared with other parameter extraction methods.
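
    The self-adaptive phase selection can be sketched as follows: each learner chooses the learner phase or the teacher phase with a probability derived from its fitness rank, and the updates themselves are standard TLBO moves. The probability rule, constants, and the stand-in objective below are illustrative assumptions, not the paper's exact scheme or the PV-model fitting problem.

```python
# TLBO with rank-based self-adaptive phase selection: better learners favour
# the learner phase, worse learners favour the teacher phase.
import numpy as np

def sphere(x):                      # stand-in objective (PV fitting not shown)
    return float(np.sum(x ** 2))

def satlbo_like(fun, dim=5, pop=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    f = np.array([fun(x) for x in X])
    for _ in range(iters):
        rank = np.argsort(np.argsort(f))             # rank 0 = best learner
        p_learner = 1.0 - rank / (pop - 1)           # better -> learner phase
        teacher = X[np.argmin(f)]
        mean = X.mean(axis=0)
        for i in range(pop):
            if rng.random() < p_learner[i]:          # learner phase
                j = int(rng.integers(pop))
                while j == i:
                    j = int(rng.integers(pop))
                d = X[i] - X[j] if f[i] < f[j] else X[j] - X[i]
                cand = X[i] + rng.random(dim) * d
            else:                                    # teacher phase
                TF = int(rng.integers(1, 3))         # teaching factor 1 or 2
                cand = X[i] + rng.random(dim) * (teacher - TF * mean)
            cand = np.clip(cand, lo, hi)
            fc = fun(cand)
            if fc < f[i]:                            # greedy acceptance
                X[i], f[i] = cand, fc
    return X[np.argmin(f)], f.min()

best_x, best_f = satlbo_like(sphere)
print(best_f)        # near 0 for the toy objective
```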

  3. LogDet Rank Minimization with Application to Subspace Clustering

    Directory of Open Access Journals (Sweden)

    Zhao Kang

    2015-01-01

    Full Text Available Low-rank matrix is desired in many machine learning and computer vision problems. Most of the recent studies use the nuclear norm as a convex surrogate of the rank operator. However, all singular values are simply added together by the nuclear norm, and thus the rank may not be well approximated in practical problems. In this paper, we propose using a log-determinant (LogDet) function as a smooth and closer, though nonconvex, approximation to rank for obtaining a low-rank representation in subspace clustering. An augmented Lagrange multipliers strategy is applied to iteratively optimize the LogDet-based nonconvex objective function on potentially large-scale data. By making use of the angular information of principal directions of the resultant low-rank representation, an affinity graph matrix is constructed for spectral clustering. Experimental results on motion segmentation and face clustering data demonstrate that the proposed method often outperforms state-of-the-art subspace clustering algorithms.
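
    A small numeric illustration of why a log-determinant surrogate can track rank more closely than the nuclear norm: with L(X) = sum_i log(1 + sigma_i(X)), large singular values contribute roughly log(sigma) instead of sigma, so one dominant direction no longer dominates the sum. The clustering pipeline itself (augmented Lagrange optimization and spectral clustering on the learned representation) is not reproduced.

```python
# Compare the true rank, the nuclear norm, and a LogDet-style surrogate
# sum_i log(1 + sigma_i) on a matrix with one dominant singular value.
import numpy as np

rng = np.random.default_rng(0)

def surrogates(X):
    s = np.linalg.svd(X, compute_uv=False)
    rank = int(np.sum(s > 1e-8))
    nuclear = s.sum()
    logdet_like = np.sum(np.log1p(s))
    return rank, nuclear, logdet_like

# a rank-3 matrix with one dominant direction
U = np.linalg.qr(rng.standard_normal((50, 3)))[0]
V = np.linalg.qr(rng.standard_normal((40, 3)))[0]
X = U @ np.diag([100.0, 1.0, 0.5]) @ V.T

print(surrogates(X))   # rank 3; nuclear ~101.5; logdet-style ~5.7 (closer in scale)
```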

  4. View subspaces for indexing and retrieval of 3D models

    Science.gov (United States)

    Dutagaci, Helin; Godil, Afzal; Sankur, Bülent; Yemez, Yücel

    2010-02-01

    View-based indexing schemes for 3D object retrieval are gaining popularity since they provide good retrieval results. These schemes are coherent with the theory that humans recognize objects based on their 2D appearances. The view-based techniques also allow users to search with various queries such as binary images, range images and even 2D sketches. The previous view-based techniques use classical 2D shape descriptors such as Fourier invariants, Zernike moments, Scale Invariant Feature Transform-based local features and 2D Digital Fourier Transform coefficients. These methods describe each object independently of others. In this work, we explore data-driven subspace models, such as Principal Component Analysis, Independent Component Analysis and Nonnegative Matrix Factorization, to describe the shape information of the views. We treat the depth images obtained from various points of the view sphere as 2D intensity images and train a subspace to extract the inherent structure of the views within a database. We also show the benefit of categorizing shapes according to their eigenvalue spread. Both the shape categorization and data-driven feature set conjectures are tested on the PSB database and compared with the competitor view-based 3D shape retrieval algorithms.

  5. Subspace methods for identification of human ankle joint stiffness.

    Science.gov (United States)

    Zhao, Y; Westwick, D T; Kearney, R E

    2011-11-01

    Joint stiffness, the dynamic relationship between the angular position of a joint and the torque acting about it, describes the dynamic, mechanical behavior of a joint during posture and movement. Joint stiffness arises from both intrinsic and reflex mechanisms, but the torques due to these mechanisms cannot be measured separately experimentally, since they appear and change together. Therefore, the direct estimation of the intrinsic and reflex stiffnesses is difficult. In this paper, we present a new, two-step procedure to estimate the intrinsic and reflex components of ankle stiffness. In the first step, a discrete-time, subspace-based method is used to estimate a state-space model for overall stiffness from the measured overall torque and then predict the intrinsic and reflex torques. In the second step, continuous-time models for the intrinsic and reflex stiffnesses are estimated from the predicted intrinsic and reflex torques. Simulations and experimental results demonstrate that the algorithm estimates the intrinsic and reflex stiffnesses accurately. The new subspace-based algorithm has three advantages over previous algorithms: 1) It does not require iteration, and therefore, will always converge to an optimal solution; 2) it provides better estimates for data with high noise or short sample lengths; and 3) it provides much more accurate results for data acquired under the closed-loop conditions, that prevail when subjects interact with compliant loads.

  6. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    Science.gov (United States)

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
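
    The diagonal-averaging step and the resulting signal-subspace projector can be sketched directly; the maximum-entropy covariance extrapolation mentioned above is omitted, and the array size, source directions, and snapshot count are synthetic assumptions.

```python
# Toeplitz-constrained covariance by subdiagonal averaging, then a
# signal-subspace projector from its dominant eigenvectors.
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_average(R):
    """Average each subdiagonal of a (Hermitian) sample covariance."""
    m = R.shape[0]
    first_col = np.array([np.mean(np.diag(R, -k)) for k in range(m)])
    return toeplitz(first_col)            # Hermitian Toeplitz (r = conj(c))

def signal_projector(R, n_sources):
    w, V = np.linalg.eigh(R)              # ascending eigenvalues
    Es = V[:, -n_sources:]                # dominant (signal) eigenvectors
    return Es @ Es.conj().T

# synthetic: 2 plane-wave targets on a 32-element array, very few snapshots
rng = np.random.default_rng(3)
m, snapshots = 32, 8
steer = lambda u: np.exp(1j * np.pi * np.arange(m) * u) / np.sqrt(m)
A = np.column_stack([steer(-0.3), steer(0.4)])
X = A @ (rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots)))
X += 0.3 * (rng.standard_normal((m, snapshots)) + 1j * rng.standard_normal((m, snapshots)))

R_sample = X @ X.conj().T / snapshots
P = signal_projector(toeplitz_average(R_sample), n_sources=2)
print(np.linalg.norm(P @ steer(-0.3)), np.linalg.norm(P @ steer(0.4)))  # close to 1
```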

  7. Structured Kernel Subspace Learning for Autonomous Robot Navigation.

    Science.gov (United States)

    Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai

    2018-02-14

    This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to safely navigate in a dynamic environment due to challenges, such as the varying quality and complexity of training data with unwanted noises. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on the recent advances in nuclear-norm and l1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning in various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.

  8. Enhancing Low-Rank Subspace Clustering by Manifold Regularization.

    Science.gov (United States)

    Liu, Junmin; Chen, Yijun; Zhang, JiangShe; Xu, Zongben

    2014-07-25

    Recently, the low-rank representation (LRR) method has achieved great success in subspace clustering (SC), which aims to cluster the data points that lie in a union of low-dimensional subspaces. Given a set of data points, LRR seeks the lowest rank representation among the many possible linear combinations of the bases in a given dictionary or in terms of the data itself. However, LRR only considers the global Euclidean structure, while the local manifold structure, which is often important for many real applications, is ignored. In this paper, to exploit the local manifold structure of the data, a manifold regularization characterized by a Laplacian graph has been incorporated into LRR, leading to our proposed Laplacian regularized LRR (LapLRR). An efficient optimization procedure, which is based on the alternating direction method of multipliers (ADMM), is developed for LapLRR. Experimental results on synthetic and real data sets are presented to demonstrate that the performance of LRR has been enhanced by using the manifold regularization.

  9. Automatic synthesis of MEMS devices using self-adaptive hybrid metaheuristics

    DEFF Research Database (Denmark)

    Tutum, Cem Celal; Fan, Zhun

    2011-01-01

    Simultaneous minimization of size and power input of a MEMS device, while investigating the optimum geometrical configuration as the main concern. The major contribution of this paper is the application of self-adaptive memetic computing in MEMS design. An evolutionary multi-objective optimization (EMO) technique, in particular the non-dominated sorting genetic algorithm (NSGA-II), has been applied together with a pattern recognition statistical tool, i.e. Principal Component Analysis (PCA), to find multiple trade-off solutions in an efficient manner. Following this, a gradient-based local search, i.e. sequential quadratic...

  10. Self-adaptive multimethod optimization applied to a tailored heating forging process

    Science.gov (United States)

    Baldan, M.; Steinberg, T.; Baake, E.

    2018-05-01

    The presented paper describes an innovative self-adaptive multi-objective optimization code. Investigation goals concern proving the superiority of this code compared to NSGA-II and applying it to an inductor’s design case study addressed to a “tailored” heating forging application. The choice of the frequency and the heating time is followed by the determination of the number of turns and their positions. Finally, a straightforward optimization is performed in order to minimize energy consumption using “optimal control”.

  11. Self adaptive internal combustion engine control for hydrogen mixtures based on piezoelectric dynamic cylinder pressure transducers

    Energy Technology Data Exchange (ETDEWEB)

    Courteau, R.; Bose, T. K. [Universite du Quebec a Trois-Rivieres, Hydrogen Research Institute, Trois-Rivieres, PQ (Canada)

    2004-07-01

    An algorithm for self-adaptive tuning of an internal combustion engine is proposed, based on a Kalman filter operating on a few selected metrics of the dynamic pressure curve. Piezoelectric transducers are devices to monitor dynamic cylinder pressure; spark plugs with embedded piezo elements are now available to provide diagnostic engine functions. Such transducers are also capable of providing signals to the engine controller to perform auto tuning, a function that is considered very useful particularly in vehicles using alternative fuels whose characteristics frequently show variations between fill-ups. 2 refs., 2 figs.

  12. Self-Adaptive Operator Scheduling using the Religion-Based EA

    DEFF Research Database (Denmark)

    Thomsen, Rene; Krink, Thiemo

    2002-01-01

    of their application is determined by a constant parameter, such as a fixed mutation rate. However, recent studies have shown that the optimal usage of a variation operator changes during the EA run. In this study, we combined the idea of self-adaptive mutation operator scheduling with the Religion-Based EA (RBEA), which is an agent model with spatially structured and variable-sized subpopulations (religions). In our new model (OSRBEA), we used a selection of different operators, such that each operator type was applied within one specific subpopulation only. Our results indicate that the optimal choice...

  13. A Self-Adaptive Evolutionary Approach to the Evolution of Aesthetic Maps for a RTS Game

    OpenAIRE

    Lara-Cabrera, Raúl; Cotta, Carlos; Fernández-Leiva, Antonio J.

    2014-01-01

    Procedural content generation (PCG) is a research field on the rise, with numerous papers devoted to this topic. This paper presents a PCG method based on a self-adaptive evolution strategy for the automatic generation of maps for the real-time strategy (RTS) game PlanetWars. These maps are generated in order to fulfill the aesthetic preferences of the user, as implied by her assessment of a collection of maps used as a training set. A topological approach is used for the characterization of the...

  14. Towards Static Analysis of Policy-Based Self-adaptive Computing Systems

    DEFF Research Database (Denmark)

    Margheri, Andrea; Nielson, Hanne Riis; Nielson, Flemming

    2016-01-01

    For supporting the design of self-adaptive computing systems, the PSCEL language offers a principled approach that relies on declarative definitions of adaptation and authorisation policies enforced at runtime. Policies permit managing system components by regulating their interactions and by dynamically introducing new actions to accomplish task-oriented goals. However, the runtime evaluation of policies and their effects on system components make the prediction of system behaviour challenging. In this paper, we introduce the construction of a flow graph that statically points out the policy evaluations that can take place at runtime and exploit it to analyse the effects of policy evaluations on the progress of system components.

  15. Time stepping free numerical solution of linear differential equations: Krylov subspace versus waveform relaxation

    NARCIS (Netherlands)

    Bochev, Mikhail A.; Oseledets, I.V.; Tyrtyshnikov, E.E.

    2013-01-01

    The aim of this paper is two-fold. First, we propose an efficient implementation of the continuous time waveform relaxation method based on block Krylov subspaces. Second, we compare this new implementation against Krylov subspace methods combined with the shift and invert technique.

  16. Subspace in Linear Algebra: Investigating Students' Concept Images and Interactions with the Formal Definition

    Science.gov (United States)

    Wawro, Megan; Sweeney, George F.; Rabin, Jeffrey M.

    2011-01-01

    This paper reports on a study investigating students' ways of conceptualizing key ideas in linear algebra, with the particular results presented here focusing on student interactions with the notion of subspace. In interviews conducted with eight undergraduates, we found students' initial descriptions of subspace often varied substantially from…

  17. Agent-based station for on-line diagnostics by self-adaptive laser Doppler vibrometry.

    Science.gov (United States)

    Serafini, S; Paone, N; Castellini, P

    2013-12-01

    A self-adaptive diagnostic system based on laser vibrometry is proposed for quality control of mechanical defects by vibration testing; it is developed for appliances at the end of an assembly line, but its characteristics are generally suited for testing most types of electromechanical products. It consists of a laser Doppler vibrometer, equipped with scanning mirrors and a camera, which implements self-adaptive behaviour for optimizing the measurement. The system is conceived as a Quality Control Agent (QCA) and it is part of a Multi Agent System that supervises the entire production line. The QCA behaviour is defined so as to minimize measurement uncertainty during the on-line tests and to compensate for target mis-positioning under guidance of a vision system. Best measurement conditions are reached by maximizing the amplitude of the optical Doppler beat signal (signal quality) and consequently minimizing uncertainty. In this paper, the optimization strategy for measurement enhancement achieved by the down-hill algorithm (Nelder-Mead algorithm) and its effect on signal quality improvement is discussed. Tests on a washing machine in controlled operating conditions allow the efficacy of the method to be evaluated; a significant reduction of noise on vibration velocity spectra is observed. Results from on-line tests are presented, which demonstrate the potential of the system for industrial quality control.

  18. A Self-adaptive Dynamic Evaluation Model for Diabetes Mellitus, Based on Evolutionary Strategies

    Directory of Open Access Journals (Sweden)

    An-Jiang Lu

    2016-03-01

    Full Text Available In order to evaluate diabetes mellitus objectively and accurately, this paper builds a self-adaptive dynamic evaluation model for diabetes mellitus, based on evolutionary strategies. First of all, on the basis of a formalized description of the evolutionary process of diabetes syndromes, using a state transition function, it judges whether a disease is evolutionary through an excitation parameter. It then provides evidence for the rebuilding of the evaluation index system. After that, by abstracting and rebuilding the composition of evaluation indexes, it makes use of a heuristic algorithm to determine the composition of the evolved evaluation index set of diabetes mellitus. It then calculates the weight of each index in the evolved evaluation index set of diabetes mellitus by building a dependency matrix and realizes the self-adaptive dynamic evaluation of diabetes mellitus under an evolutionary environment. Using this evaluation model, it is possible to quantify all kinds of diagnosis and treatment experiences of diabetes and finally to adopt ideal diagnosis and treatment measures for different diabetic patients.

  19. Enhancement of combined heat and power economic dispatch using self adaptive real-coded genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Subbaraj, P. [Kalasalingam University, Srivilliputhur, Tamilnadu 626 190 (India); Rengaraj, R. [Electrical and Electronics Engineering, S.S.N. College of Engineering, Old Mahabalipuram Road, Thirupporur (T.K), Kalavakkam, Kancheepuram (Dist.) 603 110, Tamilnadu (India); Salivahanan, S. [S.S.N. College of Engineering, Old Mahabalipuram Road, Thirupporur (T.K), Kalavakkam, Kancheepuram (Dist.) 603 110, Tamilnadu (India)

    2009-06-15

    In this paper, a self adaptive real-coded genetic algorithm (SARGA) is implemented to solve the combined heat and power economic dispatch (CHPED) problem. The self adaptation is achieved by means of tournament selection along with simulated binary crossover (SBX). The selection process has a powerful exploration capability by creating tournaments between two solutions. The better solution is chosen and placed in the mating pool, leading to better convergence and reduced computational burden. The SARGA integrates a penalty parameterless constraint handling strategy and simultaneously handles equality and inequality constraints. Population diversity is introduced by making use of the distribution index in the SBX operator to create better offspring. This leads to high diversity in the population, which can increase the probability of reaching the global optimum and prevent premature convergence. The SARGA is applied to solve the CHPED problem with a bounded feasible operating region which has a large number of local minima. The numerical results demonstrate that the proposed method can find a solution towards the global optimum and compares favourably with other recent methods in terms of solution quality, handling constraints and computation time. (author)
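
    The simulated binary crossover operator at the core of SARGA can be sketched on its own, including the distribution index that controls how far offspring spread from their parents. The full SARGA machinery (tournament selection, penalty-parameterless constraint handling, the CHPED model) is not reproduced, and the parent vectors below are arbitrary.

```python
# Simulated binary crossover (SBX) for real-coded genetic algorithms.
import numpy as np

def sbx(parent1, parent2, eta=10.0, rng=None):
    """SBX producing one pair of offspring; eta is the distribution index."""
    rng = np.random.default_rng() if rng is None else rng
    p1, p2 = np.asarray(parent1, float), np.asarray(parent2, float)
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

# usage: a large eta keeps children close to the parents (less diversity),
# a small eta spreads them out (more diversity)
rng = np.random.default_rng(42)
p1, p2 = np.array([100.0, 50.0]), np.array([120.0, 70.0])
print(sbx(p1, p2, eta=20.0, rng=rng))   # near the parents
print(sbx(p1, p2, eta=2.0, rng=rng))    # more spread out
```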

  20. An Efficient and Self-Adapting Localization in Static Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Wei Dong

    2009-08-01

    Full Text Available Localization is one of the most important subjects in Wireless Sensor Networks (WSNs). To reduce the number of beacons and adopt probabilistic methods, some particle filter-based mobile beacon-assisted localization approaches have been proposed, such as Mobile Beacon-assisted Localization (MBL), Adapting MBL (A-MBL), and the method proposed by Hang et al. Some new significant problems arise in these approaches, however. The first question is which probability distribution should be selected as the dynamic model in the prediction stage. The second is whether the unknown node adopts neighbors’ observations in the update stage. The third is how to find a self-adapting mechanism to achieve more flexibility in the adapting stage. In this paper, we give the theoretical analysis and experimental evaluations to suggest which probability distribution in the dynamic model should be adopted to improve the efficiency in the prediction stage. We also give the condition for whether the unknown node should use the observations from its neighbors to improve the accuracy. Finally, we propose a Self-Adapting Mobile Beacon-assisted Localization (SA-MBL) approach to achieve more flexibility and achieve almost the same performance as A-MBL.

  1. Agent-based station for on-line diagnostics by self-adaptive laser Doppler vibrometry

    Science.gov (United States)

    Serafini, S.; Paone, N.; Castellini, P.

    2013-12-01

    A self-adaptive diagnostic system based on laser vibrometry is proposed for quality control of mechanical defects by vibration testing; it is developed for appliances at the end of an assembly line, but its characteristics are generally suited for testing most types of electromechanical products. It consists of a laser Doppler vibrometer, equipped with scanning mirrors and a camera, which implements self-adaptive behaviour for optimizing the measurement. The system is conceived as a Quality Control Agent (QCA) and it is part of a Multi Agent System that supervises the entire production line. The QCA behaviour is defined so as to minimize measurement uncertainty during the on-line tests and to compensate for target mis-positioning under guidance of a vision system. Best measurement conditions are reached by maximizing the amplitude of the optical Doppler beat signal (signal quality) and consequently minimizing uncertainty. In this paper, the optimization strategy for measurement enhancement achieved by the down-hill algorithm (Nelder-Mead algorithm) and its effect on signal quality improvement is discussed. Tests on a washing machine in controlled operating conditions allow the efficacy of the method to be evaluated; a significant reduction of noise on vibration velocity spectra is observed. Results from on-line tests are presented, which demonstrate the potential of the system for industrial quality control.

  2. A Fiber Bragg Grating Interrogation System with Self-Adaption Threshold Peak Detection Algorithm.

    Science.gov (United States)

    Zhang, Weifang; Li, Yingwu; Jin, Bo; Ren, Feifei; Wang, Hongxun; Dai, Wei

    2018-04-08

    A Fiber Bragg Grating (FBG) interrogation system with a self-adaption threshold peak detection algorithm is proposed and experimentally demonstrated in this study. This system is composed of a field programmable gate array (FPGA) and advanced RISC machine (ARM) platform, a tunable Fabry-Perot (F-P) filter and an optical switch. To improve system resolution, the F-P filter was employed. Because this filter is non-linear, the central wavelengths shift, and this deviation is compensated by parts of the circuit. Time-division multiplexing (TDM) of FBG sensors is achieved by an optical switch, with the system able to accommodate a combination of 256 FBG sensors. A wavelength scanning speed of 800 Hz can be achieved by the FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adaption threshold is designed and the peak recognition rate is 100%. Experiments with different temperatures were conducted to demonstrate the effectiveness of the system. Four FBG sensors were examined in the thermal chamber without stress. When the temperature changed from 0 °C to 100 °C, the degree of linearity between central wavelengths and temperature was about 0.999, with the temperature sensitivity being 10 pm/°C. The static interrogation precision was able to reach 0.5 pm. Through the comparison of different peak detection algorithms and interrogation approaches, the system was verified to have an optimum comprehensive performance in terms of precision, capacity and speed.
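
    The self-adaption threshold idea can be sketched as a peak search whose detection threshold is derived from the scan itself rather than fixed in advance. The specific rule below (median plus a multiple of the median absolute deviation) and the synthetic spectrum are assumptions standing in for the paper's algorithm.

```python
# Peak detection with a threshold adapted to the scan's own baseline statistics.
import numpy as np

def adaptive_peak_detect(wavelengths, intensity, k=6.0):
    baseline = np.median(intensity)
    mad = np.median(np.abs(intensity - baseline)) + 1e-12
    threshold = baseline + k * mad                 # self-adapting threshold
    above = intensity > threshold
    is_peak = (above[1:-1]
               & (intensity[1:-1] >= intensity[:-2])
               & (intensity[1:-1] >= intensity[2:]))
    idx = np.where(is_peak)[0] + 1
    return wavelengths[idx], threshold

# synthetic scan: two FBG peaks at 1545 nm and 1550 nm on a noisy baseline
wl = np.linspace(1540, 1555, 3001)
spec = 0.02 * np.random.randn(wl.size)
for centre in (1545.0, 1550.0):
    spec += np.exp(-((wl - centre) / 0.05) ** 2)
peaks, thr = adaptive_peak_detect(wl, spec)
print(peaks, thr)     # detected wavelengths cluster near 1545 and 1550 nm
```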

  3. Dynamic Self-Adaptive Reliability Control for Electric-Hydraulic Systems

    Directory of Open Access Journals (Sweden)

    Yi Wan

    2015-02-01

    Full Text Available High-speed electric-hydraulic proportional control is a new development of the hydraulic control technique with high reliability, low cost, energy efficiency, and easy maintenance; it is widely used in industrial manufacturing and production. However, there are still some unresolved challenges, the most notable being the difficulty of meeting the high-stability and real-time requirements with classical control algorithms, owing to the highly nonlinear characteristics of the system. In this paper, we propose a dynamic self-adaptive mixed control method based on the least squares support vector machine (LSSVM) and the genetic algorithm for high-speed electric-hydraulic proportional control systems. The LSSVM is used to identify and adjust online a nonlinear electric-hydraulic proportional system, the genetic algorithm is used to optimize the control law of the controlled system, and dynamic self-adaptive internal model control and predictive control are implemented by using the mixed intelligent method. The internal model and the inverse control model are adjusted online together. At the same time, a time-dependent Hankel matrix is constructed based on sample data; thus a finite-dimensional solution can be optimized in a finite-dimensional space. The results of simulation experiments show that the dynamic characteristics are greatly improved by the mixed intelligent control strategy, and good tracking and high stability are achieved under high-frequency-response conditions.

  4. Self-Adaptive Event-Driven Simulation of Multi-Scale Plasma Systems

    Science.gov (United States)

    Omelchenko, Yuri; Karimabadi, Homayoun

    2005-10-01

    Multi-scale plasmas pose a formidable computational challenge. Explicit time-stepping models suffer from the global CFL restriction, and efficient application of adaptive mesh refinement (AMR) to systems with irregular dynamics (e.g. turbulence, diffusion-convection-reaction, particle acceleration, etc.) may be problematic. To address these issues, we developed an alternative approach to time stepping: self-adaptive discrete-event simulation (DES). DES has its origins in operations research, war games and telecommunications. We combine finite-difference and particle-in-cell techniques with this methodology under two assumptions: (1) a local time increment dt for a discrete quantity f can be expressed in terms of a physically meaningful quantum value df; (2) f is considered to be modified only when its change exceeds df. Event-driven time integration is self-adaptive as it makes use of causality rules rather than parametric time dependencies. This technique enables asynchronous, flux-conservative updates of the solution in accordance with local temporal scales, removes the curse of the global CFL condition, eliminates unnecessary computation in inactive spatial regions, and results in robust and fast parallelizable codes. It can be naturally combined with various mesh refinement techniques. We discuss applications of this technology to diffusion-convection-reaction systems and to hybrid simulations of magnetosonic shocks.
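
    The event-driven update rule described above can be conveyed with a toy sketch: a cell value f is changed only when its accumulated change reaches a quantum df, so each cell advances with its own local time instead of a global CFL-limited step. The 1-D diffusion example and the simplification of not rescheduling neighbour events are assumptions made purely for illustration.

      import heapq
      import numpy as np

      def event_driven_diffusion(f, D, dx, df, t_end):
          f = np.asarray(f, dtype=float).copy()
          n = len(f)
          rate = lambda i: D * (f[(i - 1) % n] - 2.0 * f[i] + f[(i + 1) % n]) / dx**2
          # Each cell schedules its next event at the time where |rate| * dt == df.
          events = [(df / max(abs(rate(i)), 1e-12), i) for i in range(n)]
          heapq.heapify(events)
          while events:
              t, i = heapq.heappop(events)
              if t > t_end:
                  break
              f[i] += np.sign(rate(i)) * df       # apply one quantum of change
              r = abs(rate(i))                    # reschedule this cell only
              if r > 1e-12:
                  heapq.heappush(events, (t + df / r, i))
          return f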

  5. A chaos wolf optimization algorithm with self-adaptive variable step-size

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2017-10-01

    Full Text Available To address the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size is proposed. The algorithm is based on the swarm intelligence of a wolf pack and fully simulates the predation behavior and prey distribution of wolves. It possesses three intelligent behaviors, namely migration, summons and siege, and is further characterized by a “winner-take-all” competition rule and a “survival of the fittest” update mechanism. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical, complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The results indicate that CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate, and it also demonstrates high robustness and global searching ability.

  6. Self-Adaptive MOEA Feature Selection for Classification of Bankruptcy Prediction Data

    Science.gov (United States)

    Gaspar-Cunha, A.; Recio, G.; Costa, L.; Estébanez, C.

    2014-01-01

    Bankruptcy prediction is a vast area of finance and accounting whose importance lies in its relevance for creditors and investors when evaluating the likelihood that a company will go bankrupt. As companies become more complex, they develop sophisticated schemes to hide their real situation. In turn, estimating the credit risk associated with counterparts or predicting bankruptcy becomes harder. Evolutionary algorithms have shown to be an excellent tool for dealing with complex problems in finance and economics where a large number of irrelevant features are involved. This paper provides a methodology for feature selection in the classification of bankruptcy data sets using an evolutionary multiobjective approach that simultaneously minimises the number of features and maximises the classifier quality measure (e.g., accuracy). The proposed methodology makes use of self-adaptation by applying the feature selection algorithm while simultaneously optimising the parameters of the classifier used. The methodology was applied to four different sets of data. The results obtained showed the utility of self-adapting the classifier. PMID:24707201

  7. A chaos wolf optimization algorithm with self-adaptive variable step-size

    Science.gov (United States)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To address the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size is proposed. The algorithm is based on the swarm intelligence of a wolf pack and fully simulates the predation behavior and prey distribution of wolves. It possesses three intelligent behaviors, namely migration, summons and siege, and is further characterized by a "winner-take-all" competition rule and a "survival of the fittest" update mechanism. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical, complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The results indicate that CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate, and it also demonstrates high robustness and global searching ability.

  8. A Fiber Bragg Grating Interrogation System with Self-Adaption Threshold Peak Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Weifang Zhang

    2018-04-01

    Full Text Available A Fiber Bragg Grating (FBG) interrogation system with a self-adaption threshold peak detection algorithm is proposed and experimentally demonstrated in this study. The system is composed of a field programmable gate array (FPGA) and advanced RISC machine (ARM) platform, a tunable Fabry–Perot (F–P) filter and an optical switch. To improve system resolution, the F–P filter was employed. Because the filter response is non-linear, the central wavelengths shift; this deviation is compensated by parts of the circuit. Time-division multiplexing (TDM) of FBG sensors is achieved by an optical switch, enabling the system to multiplex a combination of 256 FBG sensors. A wavelength scanning speed of 800 Hz is achieved by the FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adaption threshold is designed, and the peak recognition rate is 100%. Experiments at different temperatures were conducted to demonstrate the effectiveness of the system. Four FBG sensors were examined in a thermal chamber without stress. When the temperature changed from 0 °C to 100 °C, the linearity between central wavelengths and temperature was about 0.999, with a temperature sensitivity of 10 pm/°C. The static interrogation precision was able to reach 0.5 pm. Through the comparison of different peak detection algorithms and interrogation approaches, the system was verified to have optimum comprehensive performance in terms of precision, capacity and speed.

  9. Self-adaptive change detection in streaming data with non-stationary distribution

    KAUST Repository

    Zhang, Xiangliang

    2010-01-01

    Non-stationary distribution, in which the data distribution evolves over time, is a common issue in many application fields, e.g., intrusion detection and grid computing. Detecting changes in massive streaming data with a non-stationary distribution helps to raise alarms on anomalies, to clean noise, and to report new patterns. In this paper, we employ a novel approach for detecting changes in streaming data with the purpose of improving the quality of modeling the data streams. By observing outliers, this change detection approach uses a weighted standard deviation to monitor the evolution of the distribution of data streams. A cumulative statistical test, Page-Hinkley, is employed to collect the evidence of changes in distribution. The parameter used for reporting the changes is self-adaptively adjusted according to the distribution of the data streams, rather than set to a fixed empirical value. The self-adaptability of the novel approach enhances the effectiveness of modeling data streams by catching distribution changes in a timely manner. We validated the approach on an online clustering framework with the benchmark KDDcup 1999 intrusion detection data set as well as with a real-world grid data set. The validation results demonstrate its better performance, achieving higher accuracy and a lower percentage of outliers compared to other change detection approaches. © 2010 Springer-Verlag.
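
    A minimal Python sketch of the Page-Hinkley test with a self-adaptive alarm threshold is given below; the specific adaptation rule (a multiple of an exponentially weighted standard deviation of the stream) is an illustrative assumption rather than the exact rule of the cited work.

      class PageHinkley:
          def __init__(self, delta=0.005, lam_factor=50.0, alpha=0.99):
              self.delta = delta            # tolerance for small fluctuations
              self.lam_factor = lam_factor  # scales the adaptive threshold
              self.alpha = alpha            # forgetting factor for mean/variance
              self.mean = 0.0
              self.var = 0.0
              self.cum = 0.0                # cumulative deviation
              self.min_cum = 0.0            # running minimum of the deviation

          def update(self, x):
              # exponentially weighted mean and variance of the stream
              diff = x - self.mean
              self.mean += (1.0 - self.alpha) * diff
              self.var = self.alpha * (self.var + (1.0 - self.alpha) * diff * diff)
              # Page-Hinkley cumulative statistic and its running minimum
              self.cum += x - self.mean - self.delta
              self.min_cum = min(self.min_cum, self.cum)
              # threshold adapts to the spread of the data stream
              lam = self.lam_factor * (self.var ** 0.5 + 1e-12)
              return (self.cum - self.min_cum) > lam   # True signals a change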

  10. Linear Subspace Ranking Hashing for Cross-Modal Retrieval.

    Science.gov (United States)

    Li, Kai; Qi, Guo-Jun; Ye, Jun; Hua, Kien A

    2017-09-01

    Hashing has attracted a great deal of research in recent years due to its effectiveness for the retrieval and indexing of large-scale high-dimensional multimedia data. In this paper, we propose a novel ranking-based hashing framework that maps data from different modalities into a common Hamming space where the cross-modal similarity can be measured using Hamming distance. Unlike existing cross-modal hashing algorithms where the learned hash functions are binary space partitioning functions, such as the sign and threshold function, the proposed hashing scheme takes advantage of a new class of hash functions closely related to rank correlation measures which are known to be scale-invariant, numerically stable, and highly nonlinear. Specifically, we jointly learn two groups of linear subspaces, one for each modality, so that features' ranking orders in different linear subspaces maximally preserve the cross-modal similarities. We show that the ranking-based hash function has a natural probabilistic approximation which transforms the original highly discontinuous optimization problem into one that can be efficiently solved using simple gradient descent algorithms. The proposed hashing framework is also flexible in the sense that the optimization procedures are not tied up to any specific form of loss function, which is typical for existing cross-modal hashing methods, but rather we can flexibly accommodate different loss functions with minimal changes to the learning steps. We demonstrate through extensive experiments on four widely-used real-world multimodal datasets that the proposed cross-modal hashing method can achieve competitive performance against several state-of-the-arts with only moderate training and testing time.

  11. DETECTION OF CHANGES OF THE SYSTEM TECHNICAL STATE USING STOCHASTIC SUBSPACE OBSERVATION METHOD

    Directory of Open Access Journals (Sweden)

    Andrzej Puchalski

    2014-03-01

    Full Text Available System diagnostics based on vibroacoustic signals, carried out by means of stochastic subspace methods, is undertaken in this paper. Subspace methods are based on tools of numerical linear algebra. The considered solutions belong to data-driven diagnostic methods, leading to the generation of residuals that allow failure recognition of elements and assemblies in machines and devices. In the paper, the diagnostic algorithm based on the subspace observation method is applied to the assessment of the valve system of a spark ignition engine.

  12. The influence of different PAST-based subspace trackers on DaPT parameter estimation

    Science.gov (United States)

    Lechtenberg, M.; Götze, J.

    2012-09-01

    In the context of parameter estimation, subspace-based methods like ESPRIT have become common. They require a subspace separation, e.g., based on eigenvalue/eigenvector decomposition. In time-varying environments, this can be done by subspace trackers. One class of these is based on the PAST algorithm. Our non-linear parameter estimation algorithm DaPT builds on top of the ESPRIT algorithm. Evaluation of the different variants of the PAST algorithm shows which variant is worthwhile in the context of frequency estimation.
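
    For reference, the basic PAST recursion that the compared variants build on can be sketched as follows; it tracks an r-dimensional signal subspace W from streaming snapshots x with a forgetting factor beta. This is a generic textbook form, not the specific variant recommended in the cited study.

      import numpy as np

      def past_update(W, P, x, beta=0.97):
          # W: (n, r) current subspace estimate, P: (r, r) inverse correlation.
          y = W.conj().T @ x                        # compressed snapshot
          h = P @ y
          g = h / (beta + y.conj().T @ h)           # gain vector
          P = (P - np.outer(g, h.conj())) / beta    # update inverse correlation
          e = x - W @ y                             # projection-approximation error
          W = W + np.outer(e, g.conj())             # update subspace basis
          return W, P

      # Typical initialization: W = first r columns of the identity, P = np.eye(r).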

  13. Krylov subspace method with communication avoiding technique for linear system obtained from electromagnetic analysis

    International Nuclear Information System (INIS)

    Ikuno, Soichiro; Chen, Gong; Yamamoto, Susumu; Itoh, Taku; Abe, Kuniyoshi; Nakamura, Hiroaki

    2016-01-01

    The Krylov subspace method and the variable preconditioned Krylov subspace method with a communication avoiding technique for a linear system obtained from electromagnetic analysis are numerically investigated. In the k−skip Krylov method, the inner products are expanded in the Krylov basis, and the inner product calculations are reduced to scalar operations. The k−skip CG method is applied as the inner-loop solver of the variable preconditioned Krylov subspace method, and the converged solution of the electromagnetic problem is obtained using this method. (author)

  14. Geometric subspace updates with applications to online adaptive nonlinear model reduction

    DEFF Research Database (Denmark)

    Zimmermann, Ralf; Peherstorfer, Benjamin; Willcox, Karen

    2018-01-01

    In many scientific applications, including model reduction and image processing, subspaces are used as ansatz spaces for the low-dimensional approximation and reconstruction of the state vectors of interest. We introduce a procedure for adapting an existing subspace based on information from...... Estimation (GROUSE). We establish for GROUSE a closed-form expression for the residual function along the geodesic descent direction. Specific applications of subspace adaptation are discussed in the context of image processing and model reduction of nonlinear partial differential equation systems....

  15. A neural learning classifier system with self-adaptive constructivism for mobile robot control.

    Science.gov (United States)

    Hurst, Jacob; Bull, Larry

    2006-01-01

    For artificial entities to achieve true autonomy and display complex lifelike behavior, they will need to exploit appropriate adaptable learning algorithms. In this context adaptability implies flexibility guided by the environment at any given time and an open-ended ability to learn appropriate behaviors. This article examines the use of constructivism-inspired mechanisms within a neural learning classifier system architecture that exploits parameter self-adaptation as an approach to realize such behavior. The system uses a rule structure in which each rule is represented by an artificial neural network. It is shown that appropriate internal rule complexity emerges during learning at a rate controlled by the learner and that the structure indicates underlying features of the task. Results are presented in simulated mazes before moving to a mobile robot platform.

  16. Iterative approach to self-adapting and altitude-dependent regularization for atmospheric profile retrievals.

    Science.gov (United States)

    Ridolfi, Marco; Sgheri, Luca

    2011-12-19

    In this paper we present the IVS (Iterative Variable Strength) method, an altitude-dependent, self-adapting Tikhonov regularization scheme for atmospheric profile retrievals. The method is based on a similar scheme we proposed in 2009. The new method does not need any specifically tuned minimization routine, hence it is more robust and faster. We test the self-consistency of the method using simulated observations of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS). We then compare the new method with both our previous scheme and the scalar method currently implemented in the MIPAS on-line processor, using both synthetic and real atmospheric limb measurements. The IVS method shows very good performances.

  17. Self adaptive internal combustion engine control for hydrogen mixtures based on piezoelectric dynamic cylinder pressure transducers

    International Nuclear Information System (INIS)

    Courteau, R.; Bose, T.K.

    2004-01-01

    Piezoelectric transducers offer an effective, non-intrusive way to monitor dynamic cylinder pressure in internal combustion engines. Devices dedicated to this purpose are appearing on the market, often in the form of spark plugs with embedded piezo elements. Dynamic cylinder pressure is typically used to provide diagnostic functions, or to help map an engine after it is designed. With the advent of powerful signal processor chips, it is now possible to embed enough computing power in the engine controller to perform auto tuning based on the signals provided by such transducers. Such functionality is very useful if the fuel characteristics vary between fill ups, as is often the case with alternative fuels. We propose here an algorithm for self-adaptive tuning based on a Kalman filter operating on a few selected metrics of the dynamic pressure curve. (author)

  18. Integrable discretizations and self-adaptive moving mesh method for a coupled short pulse equation

    International Nuclear Information System (INIS)

    Feng, Bao-Feng; Chen, Junchao; Chen, Yong; Maruno, Ken-ichi; Ohta, Yasuhiro

    2015-01-01

    In the present paper, integrable semi-discrete and fully discrete analogues of a coupled short pulse (CSP) equation are constructed. The key to the construction are the bilinear forms and determinant structure of the solutions of the CSP equation. We also construct N-soliton solutions for the semi-discrete and fully discrete analogues of the CSP equations in the form of Casorati determinants. In the continuous limit, we show that the fully discrete CSP equation converges to the semi-discrete CSP equation, then further to the continuous CSP equation. Moreover, the integrable semi-discretization of the CSP equation is used as a self-adaptive moving mesh method for numerical simulations. The numerical results agree with the analytical results very well. (paper)

  19. Self-adaptive tensor network states with multi-site correlators

    Science.gov (United States)

    Kovyrshin, Arseny; Reiher, Markus

    2017-12-01

    We introduce the concept of self-adaptive tensor network states (SATNSs) based on multi-site correlators. The SATNS ansatz gradually extends its variational space by incorporating the most important next-order correlators into the ansatz for the wave function. The selection of these correlators is guided by entanglement-entropy measures from quantum information theory. By sequentially introducing variational parameters and adjusting them to the system under study, the SATNS ansatz keeps their number significantly smaller than the total number of full-configuration-interaction parameters. The SATNS ansatz is studied for manganocene in its lowest-energy sextet and doublet states; the latter is known to be difficult to describe. It is shown that the SATNS parametrization solves the convergence issues found for previous correlator-based tensor network states.

  20. Achieving Optimal Self-Adaptivity for Dynamic Tuning of Organic Semiconductors through Resonance Engineering.

    Science.gov (United States)

    Tao, Ye; Xu, Lijia; Zhang, Zhen; Chen, Runfeng; Li, Huanhuan; Xu, Hui; Zheng, Chao; Huang, Wei

    2016-08-03

    Current static-state explorations of organic semiconductors for optimal material properties and device performance are hindered by limited insights into the dynamically changed molecular states and charge transport and energy transfer processes upon device operation. Here, we propose a simple yet successful strategy, resonance variation-based dynamic adaptation (RVDA), to realize optimized self-adaptive properties in donor-resonance-acceptor molecules by engineering the resonance variation for dynamic tuning of organic semiconductors. Organic light-emitting diodes hosted by these RVDA materials exhibit remarkably high performance, with external quantum efficiencies up to 21.7% and favorable device stability. Our approach, which supports simultaneous realization of dynamically adapted and selectively enhanced properties via resonance engineering, illustrates a feasible design map for the preparation of smart organic semiconductors capable of dynamic structure and property modulations, promoting the studies of organic electronics from static to dynamic.

  1. Design of 2-D Recursive Filters Using Self-adaptive Mutation Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    Lianghong Wu

    2011-08-01

    Full Text Available This paper investigates a novel approach to the design of two-dimensional recursive digital filters using the differential evolution (DE) algorithm. The design task is reformulated as a constrained minimization problem and is solved by a Self-adaptive Mutation DE algorithm (SAMDE), which adopts an adaptive mutation operator that combines the advantages of the DE/rand/1/bin and DE/best/2/bin strategies. As a result, its convergence performance is greatly improved, and the numerical experiments confirm this conclusion. The proposed SAMDE approach is applied to a numerical example and compared with previous design methods. The computational experiments show that the SAMDE approach can obtain better results than previous design methods.
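
    The sketch below illustrates a differential evolution generation in which the mutation operator switches between DE/rand/1/bin and DE/best/2/bin; the switching schedule shown (more DE/best/2 as the run progresses) is an assumption made for illustration and is not the exact SAMDE adaptation rule.

      import numpy as np

      def de_generation(pop, fitness, obj, F=0.5, CR=0.9, gen=0, max_gen=100, rng=None):
          # pop: (n, d) population with n >= 5; fitness: (n,) objective values.
          rng = rng or np.random.default_rng()
          n, d = pop.shape
          best = pop[np.argmin(fitness)]
          p_best = gen / max_gen                    # assumed adaptation schedule
          for i in range(n):
              idx = rng.choice([j for j in range(n) if j != i], size=4, replace=False)
              r1, r2, r3, r4 = pop[idx]
              if rng.random() < p_best:             # DE/best/2 mutation
                  mutant = best + F * (r1 - r2) + F * (r3 - r4)
              else:                                 # DE/rand/1 mutation
                  mutant = r1 + F * (r2 - r3)
              cross = rng.random(d) < CR            # binomial crossover
              cross[rng.integers(d)] = True         # keep at least one mutant gene
              trial = np.where(cross, mutant, pop[i])
              f_trial = obj(trial)
              if f_trial <= fitness[i]:             # greedy selection
                  pop[i], fitness[i] = trial, f_trial
          return pop, fitness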

  2. A self-adaptive k-means classifier for business incentive in a fashion design environment

    Directory of Open Access Journals (Sweden)

    O.R. Vincent

    2018-01-01

    Full Text Available An incentive mechanism for targeting markets for fashion designers is proposed. Recent research has focused on the art, style or design, while a few studies were based on traditional practice. In this study, economics is considered a major factor in the fashion world, analyzed through six attributes, namely style, color, fabric, brand, price and size, that could bring about commercial success. A dataset of 1000 customer records was used and categorized into original, combined and new designs using a self-adaptive k-means algorithm, which extracts from the dataset common attributes that would foster better business. The results are useful to designers for knowing the types of designs usually ordered by customers, together with the design code, and which combinations of attributes attract high patronage. In addition, customers would have easy access to the best and current designs, derived from a combination of the most patronized designs.
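
    One common way to make k-means "self-adaptive" is to let the data choose the number of clusters; the Python sketch below does this with a silhouette criterion. The attribute encoding and the selection rule are illustrative assumptions, not details taken from the cited study.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score

      def self_adaptive_kmeans(X, k_min=2, k_max=10, random_state=0):
          # X: numeric matrix of encoded design attributes (rows = records).
          best_k, best_score, best_model = None, -1.0, None
          for k in range(k_min, min(k_max, len(X) - 1) + 1):
              model = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X)
              score = silhouette_score(X, model.labels_)
              if score > best_score:
                  best_k, best_score, best_model = k, score, model
          return best_k, best_model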

  3. Modeling and Design of Fault-Tolerant and Self-Adaptive Reconfigurable Networked Embedded Systems

    Directory of Open Access Journals (Sweden)

    Jürgen Teich

    2006-06-01

    Full Text Available Automotive, avionic, or body-area networks are systems that consist of several communicating control units specialized for certain purposes. Typically, different constraints regarding fault tolerance, availability and also flexibility are imposed on these systems. In this article, we will present a novel framework for increasing fault tolerance and flexibility by solving the problem of hardware/software codesign online. Based on field-programmable gate arrays (FPGAs in combination with CPUs, we allow migrating tasks implemented in hardware or software from one node to another. Moreover, if not enough hardware/software resources are available, the migration of functionality from hardware to software or vice versa is provided. Supporting such flexibility through services integrated in a distributed operating system for networked embedded systems is a substantial step towards self-adaptive systems. Beside the formal definition of methods and concepts, we describe in detail a first implementation of a reconfigurable networked embedded system running automotive applications.

  4. Multi-objective optimization problems concepts and self-adaptive parameters with mathematical and engineering applications

    CERN Document Server

    Lobato, Fran Sérgio

    2017-01-01

    This book is aimed at undergraduate and graduate students in applied mathematics or computer science, as a tool for solving real-world design problems. The present work covers fundamentals in multi-objective optimization and applications in mathematical and engineering system design using a new optimization strategy, namely the Self-Adaptive Multi-objective Optimization Differential Evolution (SA-MODE) algorithm. This strategy is proposed in order to reduce the number of evaluations of the objective function through dynamic update of canonical Differential Evolution parameters (population size, crossover probability and perturbation rate). The methodology is applied to solve mathematical functions considering test cases from the literature and various engineering systems design, such as cantilevered beam design, biochemical reactor, crystallization process, machine tool spindle design, rotary dryer design, among others.

  5. A self-adapting herding model: The agent judge-abilities influence the dynamic behaviors

    Science.gov (United States)

    Dong, Linrong

    2008-10-01

    We propose a self-adapting herding model, in which the financial markets consist of agent clusters with different sizes and market desires. The ratio of successful exchange and merger depends on the volatility of the market and the market desires of the agent clusters. The desires are assigned in terms of the wealth of the agent clusters when they merge. After an exchange, the desire of the benefiting cluster remains the same, while the desire of the losing one is altered in a way correlated with the agents' judging ability. A parameter R is assigned to all agents to denote this judging ability. The numerical calculations show that the dynamic behaviors of the market are distinctly influenced by R, including the exponents of the probability distribution of agent cluster sizes, the volatility autocorrelation of the returns, and the intensity and frequency of the volatility.

  6. Self-adaptive Green-Ampt infiltration parameters obtained from measured moisture processes

    Directory of Open Access Journals (Sweden)

    Long Xiang

    2016-07-01

    Full Text Available The Green-Ampt (G-A infiltration model (i.e., the G-A model is often used to characterize the infiltration process in hydrology. The parameters of the G-A model are critical in applications for the prediction of infiltration and associated rainfall-runoff processes. Previous approaches to determining the G-A parameters have depended on pedotransfer functions (PTFs or estimates from experimental results, usually without providing optimum values. In this study, rainfall simulators with soil moisture measurements were used to generate rainfall in various experimental plots. Observed runoff data and soil moisture dynamic data were jointly used to yield the infiltration processes, and an improved self-adaptive method was used to optimize the G-A parameters for various types of soil under different rainfall conditions. The two G-A parameters, i.e., the effective hydraulic conductivity and the effective capillary drive at the wetting front, were determined simultaneously to describe the relationships between rainfall, runoff, and infiltration processes. Through a designed experiment, the method for determining the G-A parameters was proved to be reliable in reflecting the effects of pedologic background in G-A type infiltration cases and deriving the optimum G-A parameters. Unlike PTF methods, this approach estimates the G-A parameters directly from infiltration curves obtained from rainfall simulation experiments so that it can be used to determine site-specific parameters. This study provides a self-adaptive method of optimizing the G-A parameters through designed field experiments. The parameters derived from field-measured rainfall-infiltration processes are more reliable and applicable to hydrological models.
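
    For orientation, the Green-Ampt relation that the optimized parameters enter is f = K (1 + psi * d_theta / F), with f the infiltration capacity, K the effective hydraulic conductivity, psi the effective capillary drive at the wetting front, d_theta the moisture deficit, and F the cumulative infiltration. The simple explicit time stepping below is an illustrative sketch, not the calibration procedure of the study.

      def green_ampt_infiltration(K, psi, d_theta, t_end, dt=60.0, F0=1e-4):
          # Units must be consistent (e.g. K in mm/s, psi in mm, t in s).
          F, t, history = F0, 0.0, []
          while t < t_end:
              f = K * (1.0 + psi * d_theta / F)    # infiltration capacity
              F += f * dt                          # cumulative infiltration
              t += dt
              history.append((t, f, F))
          return history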

  7. Self-adapting denoising, alignment and reconstruction in electron tomography in materials science

    Energy Technology Data Exchange (ETDEWEB)

    Printemps, Tony, E-mail: tony.printemps@cea.fr [Université Grenoble Alpes, F-38000 Grenoble (France); CEA, LETI, MINATEC Campus, F-38054 Grenoble (France); Mula, Guido [Dipartimento di Fisica, Università di Cagliari, Cittadella Universitaria, S.P. 8km 0.700, 09042 Monserrato (Italy); Sette, Daniele; Bleuet, Pierre; Delaye, Vincent; Bernier, Nicolas; Grenier, Adeline; Audoit, Guillaume; Gambacorti, Narciso; Hervé, Lionel [Université Grenoble Alpes, F-38000 Grenoble (France); CEA, LETI, MINATEC Campus, F-38054 Grenoble (France)

    2016-01-15

    An automatic procedure for electron tomography is presented. This procedure is adapted for specimens that can be fashioned into a needle-shaped sample and has been evaluated on inorganic samples. It consists of self-adapting denoising, automatic and accurate alignment including detection and correction of tilt axis, and 3D reconstruction. We propose the exploitation of a large amount of information of an electron tomography acquisition to achieve robust and automatic mixed Poisson–Gaussian noise parameter estimation and denoising using undecimated wavelet transforms. The alignment is made by mixing three techniques, namely (i) cross-correlations between neighboring projections, (ii) common line algorithm to get a precise shift correction in the direction of the tilt axis and (iii) intermediate reconstructions to precisely determine the tilt axis and shift correction in the direction perpendicular to that axis. Mixing alignment techniques turns out to be very efficient and fast. Significant improvements are highlighted in both simulations and real data reconstructions of porous silicon in high angle annular dark field mode and agglomerated silver nanoparticles in incoherent bright field mode. 3D reconstructions obtained with minimal user-intervention present fewer artefacts and less noise, which permits easier and more reliable segmentation and quantitative analysis. After careful sample preparation and data acquisition, the denoising procedure, alignment and reconstruction can be achieved within an hour for a 3D volume of about a hundred million voxels, which is a step toward a more routine use of electron tomography. - Highlights: • Goal: perform a reliable and user-independent 3D electron tomography reconstruction. • Proposed method: self-adapting denoising and alignment prior to 3D reconstruction. • Noise estimation and denoising are performed using wavelet transform. • Tilt axis determination is done automatically as well as projection alignment.

  8. Self-adaptive global best harmony search algorithm applied to reactor core fuel management optimization

    International Nuclear Information System (INIS)

    Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.; Valavi, K.

    2013-01-01

    Highlights: • SGHS enhanced the convergence rate of LPO using some improvements in comparison to basic HS and GHS. • SGHS optimization algorithm obtained averagely better fitness relative to basic HS and GHS algorithms. • Upshot of the SGHS implementation in the LPO reveals its flexibility, efficiency and reliability. - Abstract: The aim of this work is to apply the new developed optimization algorithm, Self-adaptive Global best Harmony Search (SGHS), for PWRs fuel management optimization. SGHS algorithm has some modifications in comparison with basic Harmony Search (HS) and Global-best Harmony Search (GHS) algorithms such as dynamically change of parameters. For the demonstration of SGHS ability to find an optimal configuration of fuel assemblies, basic Harmony Search (HS) and Global-best Harmony Search (GHS) algorithms also have been developed and investigated. For this purpose, Self-adaptive Global best Harmony Search Nodal Expansion package (SGHSNE) has been developed implementing HS, GHS and SGHS optimization algorithms for the fuel management operation of nuclear reactor cores. This package uses developed average current nodal expansion code which solves the multi group diffusion equation by employment of first and second orders of Nodal Expansion Method (NEM) for two dimensional, hexagonal and rectangular geometries, respectively, by one node per a FA. Loading pattern optimization was performed using SGHSNE package for some test cases to present the SGHS algorithm capability in converging to near optimal loading pattern. Results indicate that the convergence rate and reliability of the SGHS method are quite promising and practically, SGHS improves the quality of loading pattern optimization results relative to HS and GHS algorithms. As a result, it has the potential to be used in the other nuclear engineering optimization problems
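
    A simplified harmony-search step in the spirit of the SGHS idea is sketched below: the harmony memory consideration rate, pitch adjustment rate and bandwidth are varied during the run instead of being fixed. The particular schedules are illustrative assumptions and differ from the exact SGHS update rules used for the loading pattern optimization.

      import numpy as np

      def harmony_search(obj, bounds, memory_size=20, iters=2000, rng=None):
          rng = rng or np.random.default_rng()
          lo, hi = np.asarray(bounds, dtype=float).T
          dim = len(lo)
          memory = lo + rng.random((memory_size, dim)) * (hi - lo)
          scores = np.array([obj(h) for h in memory])
          for it in range(iters):
              hmcr = 0.90 + 0.09 * it / iters            # consideration rate drifts up
              par = 0.35 - 0.25 * it / iters             # pitch adjustment drifts down
              bw = (hi - lo) * 0.1 * (1.0 - it / iters)  # bandwidth shrinks
              new = np.empty(dim)
              for j in range(dim):
                  if rng.random() < hmcr:
                      new[j] = memory[rng.integers(memory_size), j]
                      if rng.random() < par:
                          new[j] += rng.uniform(-1.0, 1.0) * bw[j]
                  else:
                      new[j] = rng.uniform(lo[j], hi[j])
              new = np.clip(new, lo, hi)
              s = obj(new)
              worst = int(np.argmax(scores))
              if s < scores[worst]:                      # replace worst harmony
                  memory[worst], scores[worst] = new, s
          best = int(np.argmin(scores))
          return memory[best], scores[best]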

  9. Forecasting the natural gas demand in China using a self-adapting intelligent grey model

    International Nuclear Information System (INIS)

    Zeng, Bo; Li, Chuan

    2016-01-01

    Reasonably forecasting demands of natural gas in China is of significance as it could aid Chinese government in formulating energy policies and adjusting industrial structures. To this end, a self-adapting intelligent grey prediction model is proposed in this paper. Compared with conventional grey models which have the inherent drawbacks of fixed structure and poor adaptability, the proposed new model can automatically optimize model parameters according to the real data characteristics of modeling sequence. In this study, the proposed new model, discrete grey model, even difference grey model and classical grey model were employed, respectively, to simulate China's natural gas demands during 2002–2010 and forecast demands during 2011–2014. The results show the new model has the best simulative and predictive precision. Finally, the new model is used to forecast China's natural gas demand during 2015–2020. The forecast shows the demand will grow rapidly over the next six years. Therefore, in order to maintain the balance between the supplies and the demands for the natural gas in the future, Chinese government needs to take some measures, such as importing huge amounts of natural gas from abroad, increasing the domestic yield, using more alternative energy, and reducing the industrial reliance on natural gas. - Highlights: • A self-adapting intelligent grey prediction model (SIGM) is proposed in this paper. • The SIGM has the advantage of working with exponential functions and linear functions. • The SIGM solves the drawbacks of fixed structure and poor adaptability of grey models. • The demand of natural gas in China is successfully forecasted using the SIGM model. • The study findings can help Chinese government reasonably formulate energy policies.
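
    The classical GM(1,1) grey model that the self-adapting model builds on, and against which it is compared, can be sketched as follows; the structural and parameter adaptation of the new model is not reproduced here.

      import numpy as np

      def gm11_forecast(x0, steps=5):
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                           # accumulated generating sequence
          z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
          B = np.column_stack((-z1, np.ones_like(z1)))
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          k = np.arange(len(x0) + steps)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          x0_hat = np.concatenate(([x0[0]], np.diff(x1_hat)))
          return x0_hat                                # fitted values followed by forecasts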

  10. A parallel direct solver for the self-adaptive hp Finite Element Method

    KAUST Repository

    Paszyński, Maciej R.

    2010-03-01

    In this paper we present a new parallel multi-frontal direct solver, dedicated to the hp Finite Element Method (hp-FEM). The self-adaptive hp-FEM generates, in a fully automatic mode, a sequence of hp-meshes delivering exponential convergence of the error with respect to the number of degrees of freedom (d.o.f.) as well as the CPU time, by performing a sequence of hp refinements starting from an arbitrary initial mesh. The solver constructs an initial elimination tree for an arbitrary initial mesh, and expands the elimination tree each time the mesh is refined. This allows us to keep track of the order of elimination for the solver. The solver also minimizes the memory usage, by de-allocating partial LU factorizations computed during the elimination stage of the solver, and recomputes them for the backward substitution stage, by utilizing only about 10% of the computational time necessary for the original computations. The solver has been tested on 3D Direct Current (DC) borehole resistivity measurement simulation problems. We measure the execution time and memory usage of the solver over a large regular mesh with 1.5 million degrees of freedom as well as on the highly non-regular mesh, generated by the self-adaptive hp-FEM, with finite elements of various sizes and polynomial orders of approximation varying from p = 1 to p = 9. From the presented experiments it follows that the parallel solver scales well up to the maximum number of utilized processors. The limit for the solver scalability is the maximum sequential part of the algorithm: the computations of the partial LU factorizations over the longest path, coming from the root of the elimination tree down to the deepest leaf. © 2009 Elsevier Inc. All rights reserved.

  11. MULTI-LABEL ASRS DATASET CLASSIFICATION USING SEMI-SUPERVISED SUBSPACE CLUSTERING

    Data.gov (United States)

    National Aeronautics and Space Administration — MULTI-LABEL ASRS DATASET CLASSIFICATION USING SEMI-SUPERVISED SUBSPACE CLUSTERING MOHAMMAD SALIM AHMED, LATIFUR KHAN, NIKUNJ OZA, AND MANDAVA RAJESWARI Abstract....

  12. A Comparative Study for Orthogonal Subspace Projection and Constrained Energy Minimization

    National Research Council Canada - National Science Library

    Du, Qian; Ren, Hsuan; Chang, Chein-I

    2003-01-01

    ...: orthogonal subspace projection (OSP) and constrained energy minimization (CEM). It is shown that they are closely related and essentially equivalent provided that the noise is white with large SNR...

  13. Predictor-Year Subspace Clustering Based Ensemble Prediction of Indian Summer Monsoon

    Directory of Open Access Journals (Sweden)

    Moumita Saha

    2016-01-01

    Full Text Available Forecasting the Indian summer monsoon is a challenging task due to its complex and nonlinear behavior. A large number of global climatic variables with varying interaction patterns over the years influence the monsoon. Various statistical and neural prediction models have been proposed for forecasting the monsoon, but many of them fail to capture variability over the years. The skill of monsoon predictor variables also evolves over time. In this article, we propose a joint clustering of monsoon years and predictors for understanding and predicting the monsoon. This is achieved by a subspace clustering algorithm. It groups the years based on the prevailing global climatic conditions using a statistical clustering technique and subsequently, for each such group, identifies significant climatic predictor variables that assist in better prediction. A prediction model is built for each cluster using a random forest of regression trees. Prediction of the aggregate and regional monsoon is attempted. A mean absolute error of 5.2% is obtained for forecasting the aggregate Indian summer monsoon. Errors in predicting the regional monsoons are also moderate in comparison to the high variation of regional precipitation. The proposed joint-clustering based ensemble model is observed to be superior to existing monsoon prediction models, and it also surpasses general non-clustering based prediction models.

  14. Concept of a collective subspace associated with the invariance principle of the Schroedinger equation

    International Nuclear Information System (INIS)

    Marumori, Toshio; Hayashi, Akihisa; Tomoda, Toshiaki; Kuriyama, Atsushi; Maskawa, Toshihide

    1980-01-01

    The aim of this series of papers is to propose a microscopic theory to go beyond the situations where collective motions are described by the random phase approximation, i.e., by small amplitude harmonic oscillations about equilibrium. The theory is thus appropriate for the microscopic description of the large amplitude collective motion of soft nuclei. The essential idea is to develop a method to determine the collective subspace (or submanifold) in the many-particle Hilbert space in an optimal way, on the basis of a fundamental principle called the invariance principle of the Schroedinger equation. By using the principle within the framework of the Hartree-Fock theory, it is shown that the theory can clarify the structure of the so-called ''phonon-bands'' by self-consistently deriving the collective Hamiltonian where the number of the ''physical phonon'' is conserved. The purpose of this paper is not to go into detailed quantitative discussion, but rather to develop the basic idea. (author)

  15. Closed and Open Loop Subspace System Identification of the Kalman Filter

    Directory of Open Access Journals (Sweden)

    David Di Ruscio

    2009-04-01

    Full Text Available Some methods for consistent closed loop subspace system identification presented in the literature are analyzed and compared to a recently published subspace algorithm for both open as well as for closed loop data, the DSR_e algorithm. Some new variants of this algorithm are presented and discussed. Simulation experiments are included in order to illustrate if the algorithms are variance efficient or not.

  16. Subspace orthogonalization for substructuring preconditioners for nonsymmetric systems of linear equations

    Energy Technology Data Exchange (ETDEWEB)

    Starke, G. [Universitaet Karlsruhe (Germany)

    1994-12-31

    For nonselfadjoint elliptic boundary value problems which are preconditioned by a substructuring method, i.e., nonoverlapping domain decomposition, the author introduces and studies the concept of subspace orthogonalization. In subspace orthogonalization variants of Krylov methods the computation of inner products and vector updates, and the storage of basis elements is restricted to a (presumably small) subspace, in this case the edge and vertex unknowns with respect to the partitioning into subdomains. The author investigates subspace orthogonalization for two specific iterative algorithms, GMRES and the full orthogonalization method (FOM). This is intended to eliminate certain drawbacks of the Arnoldi-based Krylov subspace methods mentioned above. Above all, the length of the Arnoldi recurrences grows linearly with the iteration index which is therefore restricted to the number of basis elements that can be held in memory. Restarts become necessary and this often results in much slower convergence. The subspace orthogonalization methods, in contrast, require the storage of only the edge and vertex unknowns of each basis element which means that one can iterate much longer before restarts become necessary. Moreover, the computation of inner products is also restricted to the edge and vertex points which avoids the disturbance of the computational flow associated with the solution of subdomain problems. The author views subspace orthogonalization as an alternative to restarting or truncating Krylov subspace methods for nonsymmetric linear systems of equations. Instead of shortening the recurrences, one restricts them to a subset of the unknowns which has to be carefully chosen in order to be able to extend this partial solution to the entire space. The author discusses the convergence properties of these iteration schemes and its advantages compared to restarted or truncated versions of Krylov methods applied to the full preconditioned system.

  17. Compiling the functional data-parallel language SaC for Microgrids of Self-Adaptive Virtual Processors

    NARCIS (Netherlands)

    Grelck, C.; Herhut, S.; Jesshope, C.; Joslin, C.; Lankamp, M.; Scholz, S.-B.; Shafarenko, A.

    2009-01-01

    We present preliminary results from compiling the high-level, functional and data-parallel programming language SaC into a novel multi-core design: Microgrids of Self-Adaptive Virtual Processors (SVPs). The side-effect free nature of SaC in conjunction with its data-parallel foundation make it an

  18. Simulations research of the global predictive control with self-adaptive in the gas turbine of the nuclear power plant

    International Nuclear Information System (INIS)

    Su Jie; Xia Guoqing; Zhang Wei

    2007-01-01

    To further improve the dynamic control capabilities of the gas turbine of a nuclear power plant, this paper proposes applying a self-adaptive global predictive control algorithm to the rotational speed control of the gas turbine, presenting the control structure and the design of the controller on the basis of the mathematical model of the gas turbine of the nuclear power plant. The simulation results show that the response to a change of the gas turbine speed under the self-adaptive global predictive control algorithm is ten seconds faster than that under the PID control algorithm, and the output value of the gas turbine speed under the PID control algorithm is 1%-2% higher than that under the self-adaptive global predictive control algorithm. This shows that the self-adaptive global predictive control algorithm can better control the speed of the gas turbine of the nuclear power plant and obtain a better control effect. (authors)

  19. Self-adaptation in Software-intensive Cyber-physical Systems: From System Goals to Architecture Configurations

    Czech Academy of Sciences Publication Activity Database

    Gerostathopoulos, I.; Bureš, Tomáš; Hnětynka, P.; Keznikl, Jaroslav; Kit, M.; Plášil, F.; Plouzeau, N.

    2016-01-01

    Roč. 122, December (2016), s. 378-397 ISSN 0164-1212 Grant - others:GA MŠk(CZ) LD15051 Institutional support: RVO:67985807 Keywords : cyber–physical systems * self-adaptivity * dependability Subject RIV: JC - Computer Hardware ; Software Impact factor: 2.444, year: 2016

  20. Overview of hybrid subspace methods for uncertainty quantification, sensitivity analysis

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Bang, Youngsuk; Wang, Congjian

    2013-01-01

    Highlights: ► We overview the state-of-the-art in uncertainty quantification and sensitivity analysis. ► We overview new developments in above areas using hybrid methods. ► We give a tutorial introduction to above areas and the new developments. ► Hybrid methods address the explosion in dimensionality in nonlinear models. ► Representative numerical experiments are given. -- Abstract: The role of modeling and simulation has been heavily promoted in recent years to improve understanding of complex engineering systems. To realize the benefits of modeling and simulation, concerted efforts in the areas of uncertainty quantification and sensitivity analysis are required. The manuscript intends to serve as a pedagogical presentation of the material to young researchers and practitioners with little background on the subjects. We believe this is important as the role of these subjects is expected to be integral to the design, safety, and operation of existing as well as next generation reactors. In addition to covering the basics, an overview of the current state-of-the-art will be given with particular emphasis on the challenges pertaining to nuclear reactor modeling. The second objective will focus on presenting our own development of hybrid subspace methods intended to address the explosion in the computational overhead required when handling real-world complex engineering systems.

  1. Subspace identification of Hammerstein models using support vector machines

    International Nuclear Information System (INIS)

    Al-Dhaifallah, Mujahed

    2011-01-01

    System identification is the art of finding mathematical tools and algorithms that build an appropriate mathematical model of a system from measured input and output data. Hammerstein model, consisting of a memoryless nonlinearity followed by a dynamic linear element, is often a good trade-off as it can represent some dynamic nonlinear systems very accurately, but is nonetheless quite simple. Moreover, the extensive knowledge about LTI system representations can be applied to the dynamic linear block. On the other hand, finding an effective representation for the nonlinearity is an active area of research. Recently, support vector machines (SVMs) and least squares support vector machines (LS-SVMs) have demonstrated powerful abilities in approximating linear and nonlinear functions. In contrast with other approximation methods, SVMs do not require a-priori structural information. Furthermore, there are well established methods with guaranteed convergence (ordinary least squares, quadratic programming) for fitting LS-SVMs and SVMs. The general objective of this research is to develop new subspace algorithms for Hammerstein systems based on SVM regression.

  2. Parallelised Krylov subspace method for reactor kinetics by IQS approach

    International Nuclear Information System (INIS)

    Gupta, Anurag; Modak, R.S.; Gupta, H.P.; Kumar, Vinod; Bhatt, K.

    2005-01-01

    Nuclear reactor kinetics involves numerical solution of space-time-dependent multi-group neutron diffusion equation. Two distinct approaches exist for this purpose: the direct (implicit time differencing) approach and the improved quasi-static (IQS) approach. Both the approaches need solution of static space-energy-dependent diffusion equations at successive time-steps; the step being relatively smaller for the direct approach. These solutions are usually obtained by Gauss-Seidel type iterative methods. For a faster solution, the Krylov sub-space methods have been tried and also parallelised by many investigators. However, these studies seem to have been done only for the direct approach. In the present paper, parallelised Krylov methods are applied to the IQS approach in addition to the direct approach. It is shown that the speed-up obtained for IQS is higher than that for the direct approach. The reasons for this are also discussed. Thus, the use of IQS approach along with parallelised Krylov solvers seems to be a promising scheme

  3. MODAL TRACKING of A Structural Device: A Subspace Identification Approach

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Franco, S. N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ruggiero, E. L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Emmons, M. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lopez, I. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Stoops, L. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-03-20

    Mechanical devices operating in an environment contaminated by noise, uncertainties, and extraneous disturbances lead to low signal-to-noise-ratios creating an extremely challenging processing problem. To detect/classify a device subsystem from noisy data, it is necessary to identify unique signatures or particular features. An obvious feature would be resonant (modal) frequencies emitted during its normal operation. In this report, we discuss a model-based approach to incorporate these physical features into a dynamic structure that can be used for such an identification. The approach we take after pre-processing the raw vibration data and removing any extraneous disturbances is to obtain a representation of the structurally unknown device along with its subsystems that capture these salient features. One approach is to recognize that unique modal frequencies (sinusoidal lines) appear in the estimated power spectrum that are solely characteristic of the device under investigation. Therefore, the objective of this effort is based on constructing a black box model of the device that captures these physical features that can be exploited to “diagnose” whether or not the particular device subsystem (track/detect/classify) is operating normally from noisy vibrational data. Here we discuss the application of a modern system identification approach based on stochastic subspace realization techniques capable of both (1) identifying the underlying black-box structure thereby enabling the extraction of structural modes that can be used for analysis and modal tracking as well as (2) indicators of condition and possible changes from normal operation.

  4. Removing Ocular Movement Artefacts by a Joint Smoothened Subspace Estimator

    Directory of Open Access Journals (Sweden)

    Ronald Phlypo

    2007-01-01

    Full Text Available To cope with the severe masking of background cerebral activity in the electroencephalogram (EEG by ocular movement artefacts, we present a method which combines lower-order, short-term and higher-order, long-term statistics. The joint smoothened subspace estimator (JSSE calculates the joint information in both statistical models, subject to the constraint that the resulting estimated source should be sufficiently smooth in the time domain (i.e., has a large autocorrelation or self predictive power. It is shown that the JSSE is able to estimate a component from simulated data that is superior with respect to methodological artefact suppression to those of FastICA, SOBI, pSVD, or JADE/COM1 algorithms used for blind source separation (BSS. Interference and distortion suppression are of comparable order when compared with the above-mentioned methods. Results on patient data demonstrate that the method is able to suppress blinking and saccade artefacts in a fully automated way.

  5. Reverse time migration by Krylov subspace reduced order modeling

    Science.gov (United States)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

    Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations associated with the performance cost of reverse time migration are the intensive computation of the forward and backward simulations, time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm that addresses all the aforementioned factors. Our method uses the Krylov subspace method to compute certain mode shapes of the velocity model, which are used as an orthogonal basis for reduced order modeling. Reverse time migration by reduced order modeling is well suited to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational cost of reverse time migration by several orders of magnitude compared with reverse time migration by the finite element method.

  6. Supervised orthogonal discriminant subspace projects learning for face recognition.

    Science.gov (United States)

    Chen, Yu; Xu, Xiao-Hong

    2014-02-01

    In this paper, a new linear dimension reduction method called supervised orthogonal discriminant subspace projection (SODSP) is proposed, which addresses high-dimensionality of data and the small sample size problem. More specifically, given a set of data points in the ambient space, a novel weight matrix that describes the relationship between the data points is first built. And in order to model the manifold structure, the class information is incorporated into the weight matrix. Based on the novel weight matrix, the local scatter matrix as well as non-local scatter matrix is defined such that the neighborhood structure can be preserved. In order to enhance the recognition ability, we impose an orthogonal constraint into a graph-based maximum margin analysis, seeking to find a projection that maximizes the difference, rather than the ratio between the non-local scatter and the local scatter. In this way, SODSP naturally avoids the singularity problem. Further, we develop an efficient and stable algorithm for implementing SODSP, especially, on high-dimensional data set. Moreover, the theoretical analysis shows that LPP is a special instance of SODSP by imposing some constraints. Experiments on the ORL, Yale, Extended Yale face database B and FERET face database are performed to test and evaluate the proposed algorithm. The results demonstrate the effectiveness of SODSP. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. A Variational Approach to Video Registration with Subspace Constraints.

    Science.gov (United States)

    Garg, Ravi; Roussos, Anastasios; Agapito, Lourdes

    2013-01-01

    This paper addresses the problem of non-rigid video registration, or the computation of optical flow from a reference frame to each of the subsequent images in a sequence, when the camera views deformable objects. We exploit the high correlation between 2D trajectories of different points on the same non-rigid surface by assuming that the displacement of any point throughout the sequence can be expressed in a compact way as a linear combination of a low-rank motion basis. This subspace constraint effectively acts as a trajectory regularization term leading to temporally consistent optical flow. We formulate it as a robust soft constraint within a variational framework by penalizing flow fields that lie outside the low-rank manifold. The resulting energy functional can be decoupled into the optimization of the brightness constancy and spatial regularization terms, leading to an efficient optimization scheme. Additionally, we propose a novel optimization scheme for the case of vector valued images, based on the dualization of the data term. This allows us to extend our approach to deal with colour images which results in significant improvements on the registration results. Finally, we provide a new benchmark dataset, based on motion capture data of a flag waving in the wind, with dense ground truth optical flow for evaluation of multi-frame optical flow algorithms for non-rigid surfaces. Our experiments show that our proposed approach outperforms state of the art optical flow and dense non-rigid registration algorithms.

  8. Self-adaptive robot training of stroke survivors for continuous tracking movements

    Directory of Open Access Journals (Sweden)

    Morasso Pietro

    2010-03-01

    Background Although robot therapy is progressively becoming an accepted method of treatment for stroke survivors, few studies have investigated how to adapt the robot/subject interaction forces in an automatic way. The paper is a feasibility study of a novel self-adaptive robot controller to be applied with continuous tracking movements. Methods The haptic robot Braccio di Ferro is used in a tracking task. The proposed control architecture is based on three main modules: (1) a force field generator that combines a nonlinear attractive field and a viscous field; (2) a performance evaluation module; (3) an adaptive controller. The first module operates in a continuous-time fashion; the other two modules operate in an intermittent way and are triggered at the end of the current block of trials. The controller progressively decreases the gain of the force field within a session, but operates in a non-monotonic way between sessions: it remembers the minimum gain achieved in a session and propagates it to the next one, which starts with a block whose gain is greater than the previous one. The initial assistance gains are chosen according to a minimal assistance strategy. The scheme can also be applied with closed eyes in order to enhance the role of proprioception in learning and control. Results The preliminary results with a small group of patients (10 chronic hemiplegic subjects) show that the scheme is robust and promotes a statistically significant improvement in performance indicators as well as a recalibration of the visual and proprioceptive channels. The results confirm that the minimally assistive, self-adaptive strategy is well tolerated by severely impaired subjects and is beneficial also for less severe patients. Conclusions The experiments provide detailed information about the stability and robustness of the adaptive controller of robot assistance that could be quite relevant for the design of future large scale

  9. Optimization of an Autonomous Car Controller Using a Self-Adaptive Evolutionary Strategy

    Directory of Open Access Journals (Sweden)

    Tae Seong Kim

    2012-09-01

    Autonomous cars control the steering wheel, the acceleration and brake pedals, the gears and the clutch using sensory information from multiple sources. Like a human driver, the controller understands the current situation on the road from the live stream of sensory values. The decision-making module often suffers from the limited range of the sensors and from complexity due to the large number of sensors and actuators. Because it is tedious and difficult to design the controller manually by trial and error, it is desirable to use intelligent optimization algorithms. In this work, we propose optimizing the parameters of an autonomous car controller using self-adaptive evolutionary strategies (SAESs), which co-evolve solutions and mutation steps for each parameter. We also describe how the most generalized parameter set can be retrieved from the optimization process. Open-source car racing simulation software (TORCS) is used to test the goodness of the proposed methods on 6 different tracks. Experimental results show that the SAES is competitive with the authors' manual design and with a simple ES.
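
    A generic (μ, λ) self-adaptive evolution strategy sketch with log-normal adaptation of per-parameter mutation steps, in the spirit of the SAES above. The fitness function is a placeholder for a TORCS lap evaluation, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, mu, lam, gens = 10, 5, 20, 100
tau, tau_i = 1 / np.sqrt(2 * dim), 1 / np.sqrt(2 * np.sqrt(dim))

def fitness(x):
    # placeholder objective; a real run would score a TORCS lap with parameters x
    return -np.sum(x ** 2)

# each individual carries its solution vector and its own mutation step sizes
pop = [(rng.standard_normal(dim), np.full(dim, 0.3)) for _ in range(mu)]
for g in range(gens):
    children = []
    for _ in range(lam):
        x, s = pop[rng.integers(mu)]
        # log-normal self-adaptation of the step sizes, then mutation of the solution
        s_new = s * np.exp(tau * rng.standard_normal() + tau_i * rng.standard_normal(dim))
        children.append((x + s_new * rng.standard_normal(dim), s_new))
    children.sort(key=lambda c: fitness(c[0]), reverse=True)
    pop = children[:mu]                 # comma selection: parents are discarded
best = max(pop, key=lambda c: fitness(c[0]))[0]
```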

  10. A self-adaptive genetic algorithm to estimate JA model parameters considering minor loops

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Hai-liang; Wen, Xi-shan; Lan, Lei; An, Yun-zhu; Li, Xiao-ping

    2015-01-15

    A self-adaptive genetic algorithm for estimating Jiles–Atherton (JA) magnetic hysteresis model parameters is presented. The fitness function is established based on the distances between equidistant key points of the normalized hysteresis loops. A linear function and a logarithm function are both adopted to code the five parameters of the JA model. Roulette wheel selection is used, and the selection pressure is adjusted adaptively by deducting a proportional term that depends on the current generation. The crossover operator is established by combining arithmetic crossover and multipoint crossover. Nonuniform mutation is improved by adjusting the mutation ratio adaptively. The algorithm is used to estimate the parameters of the hysteresis loops of a silicon-steel sheet, and the results are in good agreement with published data. - Highlights: • We present a method to find JA parameters for both major and minor loops. • The fitness function is based on distances between key points of normalized loops. • The selection pressure is adjusted adaptively based on generations.
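
    A minimal sketch of two ingredients named above: a fitness built from distances between equidistant key points of normalized loops, and a generation-dependent mutation ratio. The key-point sampling and the decay law are illustrative assumptions, not taken from the record.

```python
import numpy as np

def normalize_loop(h, b):
    """Scale a hysteresis loop (H, B samples) into the unit square."""
    return (h - h.min()) / np.ptp(h), (b - b.min()) / np.ptp(b)

def loop_fitness(h_meas, b_meas, h_sim, b_sim, n_key=50):
    """Smaller summed distance between equidistant key points gives larger fitness."""
    im = np.linspace(0, len(h_meas) - 1, n_key).astype(int)
    isim = np.linspace(0, len(h_sim) - 1, n_key).astype(int)
    hm, bm = normalize_loop(h_meas[im], b_meas[im])
    hs, bs = normalize_loop(h_sim[isim], b_sim[isim])
    return 1.0 / (1.0 + np.hypot(hm - hs, bm - bs).sum())

def adaptive_mutation_ratio(r0, generation, max_gen):
    """Nonuniform mutation: the ratio decays as the generations advance."""
    return r0 * (1.0 - generation / max_gen)
```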

  11. Self-Adaptive Context Aware Routing Protocol for Unicast Communication in Delay and Tolerant Network

    Directory of Open Access Journals (Sweden)

    Yunbo Chen

    2014-05-01

    At present, most research on mobile networks focuses on the overhead of a known path between the sender and the receiver. However, practical application demands are becoming increasingly distributed and decentralized. The Delay Tolerant Network (DTN) emerged from this background and can effectively bridge the gap between mobile networks and practical application demands. In this paper, a Self-Adaptive Context Aware Routing Protocol (SACARP) for unicast communication in delay tolerant networks is presented. According to the real-time context information of the DTN, Kalman filter theory is introduced to predict the mobility state of candidate message-ferrying nodes, from which the optimal selection strategy for the ferrying nodes is derived. Simulation experiments have shown that, compared with familiar single-copy and multi-copy protocols, the SACARP proposed in this paper has better transmission performance and stability; in particular, when the network load is light, the protocol keeps a good performance with fewer connections and less buffer space.
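
    A sketch of the kind of prediction step mentioned above: a constant-velocity Kalman filter tracking a candidate ferrying node's position so its next location can be anticipated when ranking ferry candidates. The state model, noise levels and observation values are illustrative assumptions, not from the record.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition for [position, velocity]
H = np.array([[1.0, 0.0]])               # only position is observed
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.5]])                    # measurement noise covariance

def kf_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the latest observed position z
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for z in (np.array([1.0]), np.array([2.1]), np.array([2.9])):   # toy position samples
    x, P = kf_step(x, P, z)
predicted_next_position = (F @ x)[0]     # used when ranking candidate ferry nodes
```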

  12. Self-Adaptive Contention Aware Routing Protocol for Intermittently Connected Mobile Networks

    KAUST Repository

    Elwhishi, Ahmed

    2013-07-01

    This paper introduces a novel multicopy routing protocol, called Self-Adaptive Utility-based Routing Protocol (SAURP), for Delay Tolerant Networks (DTNs) that are possibly composed of a vast number of devices in miniature such as smart phones of heterogeneous capacities in terms of energy resources and buffer spaces. SAURP is characterized by the ability of identifying potential opportunities for forwarding messages to their destinations via a novel utility function-based mechanism, in which a suite of environment parameters, such as wireless channel condition, nodal buffer occupancy, and encounter statistics, are jointly considered. Thus, SAURP can reroute messages around nodes experiencing high-buffer occupancy, wireless interference, and/or congestion, while taking a considerably small number of transmissions. The developed utility function in SAURP is proved to be able to achieve optimal performance, which is further analyzed via a stochastic modeling approach. Extensive simulations are conducted to verify the developed analytical model and compare the proposed SAURP with a number of recently reported encounter-based routing approaches in terms of delivery ratio, delivery delay, and the number of transmissions required for each message delivery. The simulation results show that SAURP outperforms all the counterpart multicopy encounter-based routing protocols considered in the study.

  13. A self-adaptive thermal switch array for rapid temperature stabilization under various thermal power inputs

    International Nuclear Information System (INIS)

    Geng, Xiaobao; Patel, Pragnesh; Narain, Amitabh; Meng, Dennis Desheng

    2011-01-01

    A self-adaptive thermal switch array (TSA) based on actuation by low-melting-point alloy droplets is reported that stabilizes the temperature of a heat-generating microelectromechanical system (MEMS) device within a predetermined range (i.e. the optimal working temperature of the device) with neither a control circuit nor electrical power consumption. When the temperature is below this range, the TSA stays off and works as a thermal insulator. Therefore, the MEMS device can quickly heat itself up to its optimal working temperature during startup. Once this temperature is reached, the TSA is automatically turned on to increase the thermal conductance, working as an effective thermal spreader. As a result, the MEMS device tends to stay at its optimal working temperature without complex thermal management components and the associated parasitic power loss. A prototype TSA was fabricated and characterized to prove the concept. The stabilization temperatures under various power inputs have been studied both experimentally and theoretically. When the power input was increased from 3.8 to 5.8 W, the temperature of the device increased by only 2.5 °C owing to the stabilization effect of the TSA.

  14. Self-Adaptive On-Chip System Based on Cross-Layer Adaptation Approach

    Directory of Open Access Journals (Sweden)

    Kais Loukil

    2013-01-01

    The emergence of mobile and battery-operated multimedia systems and the diversity of supported applications pose new challenges for the design efficiency of these systems, which must provide a maximum application quality of service (QoS) in the presence of a dynamically varying environment. These optimization problems cannot be entirely solved at design time, and some efficiency gains can be obtained at run time by means of self-adaptivity. In this paper, we propose a new cross-layer hardware (HW)/software (SW) adaptation solution for embedded mobile systems. It supports application QoS under real-time and lifetime constraints via coordinated adaptation in the hardware, operating system (OS), and application layers. Our method relies on an original middleware solution used by both global and local managers. The global manager (GM) handles large, long-term variations whereas the local manager (LM) is used to guarantee real-time constraints. The GM acts in all three layers whereas the LM acts in the application and OS layers only. The main role of the GM is to select the best configuration for each application to meet the constraints of the system and respect the preferences of the user. The proposed approach has been applied to a 3D graphics application and successfully implemented on an Altera FPGA.

  15. Dim small targets detection based on self-adaptive caliber temporal-spatial filtering

    Science.gov (United States)

    Fan, Xiangsuo; Xu, Zhiyong; Zhang, Jianlin; Huang, Yongmei; Peng, Zhenming

    2017-09-01

    To boost the detectability of dim small targets, this paper begins by using improved anisotropy for background prediction (IABP), followed by target enhancement by improved high-order cumulants (HQS). Finally, on the basis of this image pre-processing, to address the problem of missed and false detections caused by the fixed caliber of traditional pipeline filtering, this paper uses the targets' multi-frame movement correlation in the time-space domain, combined with scale-space theory, to propose a temporal-spatial filtering algorithm that allows the caliber to adapt to changes in the targets' scale, effectively solving the detection issues brought by an unchanged caliber when the size of the targets decreases or increases. Experiments showed that the improved anisotropic background prediction stays faithful to the true background of the original image to the maximum extent, presenting an overall performance superior to other background prediction methods; the improved HQS significantly increased the signal-to-noise ratio of the images; and when the signal-to-noise ratio was lower than 2.6 dB, this detection algorithm could still effectively eliminate noise and detect targets. For the algorithm, the lowest signal-to-noise ratio of a detectable target is 0.37.

  16. EMD self-adaptive selecting relevant modes algorithm for FBG spectrum signal

    Science.gov (United States)

    Chen, Yong; Wu, Chun-ting; Liu, Huan-lin

    2017-07-01

    Noise may reduce the demodulation accuracy of a fiber Bragg grating (FBG) sensing signal and thus degrade the quality of sensing detection, so recovering the signal from the observed noisy data is necessary. In this paper, a precise self-adaptive algorithm for selecting relevant modes is proposed to remove the noise from the signal. Empirical mode decomposition (EMD) is first used to decompose the signal into a set of modes. A pseudo-mode cancellation step is introduced to identify and eliminate false modes, and then the Mutual Information (MI) of partial modes is calculated. MI is used to estimate the critical point between the high- and low-frequency components. Simulation results show that the proposed algorithm estimates the critical point more accurately than traditional algorithms for FBG spectral signals. Compared with similar algorithms, the signal-to-noise ratio of the signal can be improved by more than 10 dB after processing with the proposed algorithm, and the correlation coefficient can be increased by 0.5, demonstrating a better de-noising effect.

  17. Design of a self-adaptive fuzzy PID controller for piezoelectric ceramics micro-displacement system

    Science.gov (United States)

    Zhang, Shuang; Zhong, Yuning; Xu, Zhongbao

    2008-12-01

    In order to improve the control precision of the piezoelectric ceramics (PZT) micro-displacement system, a self-adaptive fuzzy Proportional-Integral-Derivative (PID) controller is designed by combining the traditional digital PID controller with fuzzy control. The algorithm builds a fuzzy control rule table from the fuzzy control rules and fuzzy reasoning; through this table, the PID parameters can be adjusted online during real-time control. Furthermore, automatic selective control is achieved according to the change of the error. The controller combines the good dynamic capability of fuzzy control and the high steady-state precision of PID control, adopting fuzzy control and PID control in different segments of time. In the initial and middle stages of the system's transient, that is, when the error is larger than a threshold value, fuzzy control is used to adjust the control variable, making full use of the fast response of fuzzy control. When the error is smaller than the threshold and the system is approaching the steady state, PID control is adopted to eliminate the static error. The problems of PZT in the field of precise positioning are thereby overcome. The experimental results prove that the scheme is correct and practicable.
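
    A minimal sketch (not the authors' controller) of the segmented strategy described above: a coarse fuzzy-style correction while the error exceeds a threshold, and a conventional PID law near steady state. The gains, the threshold, and the stand-in for the fuzzy rule table are illustrative.

```python
class SegmentedController:
    def __init__(self, kp, ki, kd, threshold):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.threshold = threshold           # error level separating the two modes
        self.integral = 0.0
        self.prev_error = 0.0

    def fuzzy_like_action(self, error, d_error):
        # crude stand-in for a fuzzy rule table: large error -> large correction
        return 2.0 * self.kp * error + 0.5 * self.kd * d_error

    def update(self, error, dt):
        d_error = (error - self.prev_error) / dt
        if abs(error) > self.threshold:      # transient phase: fast, coarse correction
            u = self.fuzzy_like_action(error, d_error)
            self.integral = 0.0              # avoid integral wind-up far from the set-point
        else:                                # near steady state: PID removes the static error
            self.integral += error * dt
            u = self.kp * error + self.ki * self.integral + self.kd * d_error
        self.prev_error = error
        return u

controller = SegmentedController(kp=1.2, ki=0.4, kd=0.05, threshold=0.5)
u = controller.update(error=0.8, dt=0.01)    # toy call; a real loop would run at a fixed rate
```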

  18. Self-adapted and tunable graphene strain sensors for detecting both subtle and large human motions.

    Science.gov (United States)

    Tao, Lu-Qi; Wang, Dan-Yang; Tian, He; Ju, Zhen-Yi; Liu, Ying; Pang, Yu; Chen, Yuan-Quan; Yang, Yi; Ren, Tian-Ling

    2017-06-22

    Conventional strain sensors rarely have both a high gauge factor and a large strain range simultaneously, so they can only be used in specific situations where only a high sensitivity or a large strain range is required. However, for detecting human motions that include both subtle and large motions, such strain sensors cannot meet the diverse demands simultaneously. Here, we present, for the first time, laser-patterned graphene strain sensors with self-adapted and tunable performance. A series of strain sensors with either an ultrahigh gauge factor or a preferable strain range can be fabricated simultaneously via one-step laser patterning, and they are suitable for detecting all human motions. The strain sensors have a GF of up to 457 with a strain range of 35%, or a strain range of up to 100% with a GF of 268. Most importantly, the performance of the strain sensors can be easily tuned by adjusting the patterns of the graphene, so that the sensors can meet diverse demands in both subtle- and large-motion situations. The graphene strain sensors show significant potential in applications such as wearable electronics, health monitoring and intelligent robots. Furthermore, the facile, fast and low-cost fabrication method will make commercial applications possible and practical in the future.

  19. Self-Adaptive Contention Aware Routing Protocol for Intermittently Connected Mobile Networks

    KAUST Repository

    Elwhishi, Ahmed; Ho, Pin-Han; Naik, K.; Shihada, Basem

    2013-01-01

    This paper introduces a novel multicopy routing protocol, called Self-Adaptive Utility-based Routing Protocol (SAURP), for Delay Tolerant Networks (DTNs) that are possibly composed of a vast number of devices in miniature such as smart phones of heterogeneous capacities in terms of energy resources and buffer spaces. SAURP is characterized by the ability of identifying potential opportunities for forwarding messages to their destinations via a novel utility function-based mechanism, in which a suite of environment parameters, such as wireless channel condition, nodal buffer occupancy, and encounter statistics, are jointly considered. Thus, SAURP can reroute messages around nodes experiencing high-buffer occupancy, wireless interference, and/or congestion, while taking a considerably small number of transmissions. The developed utility function in SAURP is proved to be able to achieve optimal performance, which is further analyzed via a stochastic modeling approach. Extensive simulations are conducted to verify the developed analytical model and compare the proposed SAURP with a number of recently reported encounter-based routing approaches in terms of delivery ratio, delivery delay, and the number of transmissions required for each message delivery. The simulation results show that SAURP outperforms all the counterpart multicopy encounter-based routing protocols considered in the study.

  20. A self-adaptive genetic algorithm to estimate JA model parameters considering minor loops

    International Nuclear Information System (INIS)

    Lu, Hai-liang; Wen, Xi-shan; Lan, Lei; An, Yun-zhu; Li, Xiao-ping

    2015-01-01

    A self-adaptive genetic algorithm for estimating Jiles–Atherton (JA) magnetic hysteresis model parameters is presented. The fitness function is established based on the distances between equidistant key points of the normalized hysteresis loops. A linear function and a logarithm function are both adopted to code the five parameters of the JA model. Roulette wheel selection is used, and the selection pressure is adjusted adaptively by deducting a proportional term that depends on the current generation. The crossover operator is established by combining arithmetic crossover and multipoint crossover. Nonuniform mutation is improved by adjusting the mutation ratio adaptively. The algorithm is used to estimate the parameters of the hysteresis loops of a silicon-steel sheet, and the results are in good agreement with published data. - Highlights: • We present a method to find JA parameters for both major and minor loops. • The fitness function is based on distances between key points of normalized loops. • The selection pressure is adjusted adaptively based on generations.

  1. A self-adaptive metamaterial beam with digitally controlled resonators for subwavelength broadband flexural wave attenuation

    Science.gov (United States)

    Li, Xiaopeng; Chen, Yangyang; Hu, Gengkai; Huang, Guoliang

    2018-04-01

    Designing lightweight materials and/or structures for broadband low-frequency noise/vibration mitigation is an issue of fundamental importance both practically and theoretically. In this paper, by leveraging the concept of frequency-dependent effective stiffness control, we numerically and experimentally demonstrate, for the first time, a self-adaptive metamaterial beam with digital-circuit-controlled mechanical resonators for strong and broadband flexural wave attenuation at subwavelength scales. Digital controllers capable of feedback control of piezoelectric shunts are integrated into the mechanical resonators of the metamaterial, and the transfer function is semi-analytically determined to realize an effective bending stiffness that is a quadratic function of the wave frequency, yielding adaptive band gaps. The digital and analog control circuits that form the backbone of the system are experimentally realized while guaranteeing stability of the whole electromechanical system over the entire frequency range, which has been the most challenging problem so far. Our experimental results are in good agreement with numerical predictions and demonstrate strong wave attenuation over a frequency region almost three times wider than the bandwidth of a passive metamaterial. The proposed metamaterial could find a range of applications in the design of elastic wave control devices.

  2. Architecture and Knowledge-Driven Self-Adaptive Security in Smart Space

    Directory of Open Access Journals (Sweden)

    Antti Evesti

    2013-03-01

    Dynamic and heterogeneous smart spaces cause challenges for security because it is impossible to anticipate all the possible changes at design time. Self-adaptive security is an applicable solution for this challenge. This paper presents an architectural approach for security adaptation in smart spaces. The approach combines an adaptation loop, an Information Security Measuring Ontology (ISMO) and a smart space security-control model. The adaptation loop includes phases to monitor, analyze, plan and execute changes in the smart space. The ISMO offers input knowledge for the adaptation loop, and the security-control model enforces dynamic access control policies. The approach is novel because it defines the whole adaptation loop and the knowledge required in each phase of the adaptation. The contributions are validated as part of a smart space pilot implementation. The approach offers reusable and extensible means to achieve adaptive security in smart spaces and up-to-date access control for devices that appear in the space. Hence, the approach supports the work of smart space application developers.

  3. Experimental investigation of biomimetic self-pumping and self-adaptive transpiration cooling.

    Science.gov (United States)

    Jiang, Pei-Xue; Huang, Gan; Zhu, Yinhai; Xu, Ruina; Liao, Zhiyuan; Lu, Taojie

    2017-09-01

    Transpiration cooling is an effective way to protect high-heat-flux walls. However, the pumps of a transpiration cooling system make the system more complex and increase its load, which is a major challenge for practical applications. A biomimetic self-pumping transpiration cooling system, inspired by transpiration in trees, which requires no pumps, was developed. An experimental investigation showed that the water coolant automatically flowed from the water tank to the hot surface over a height difference of 80 mm without any pumps. A self-adaptive transpiration cooling system was then developed based on this mechanism. The system effectively cooled the hot surface, with the surface temperature kept at about 373 K when the heating flame temperature was 1639 K and the heat flux was about 0.42 MW m⁻². The cooling efficiency reached 94.5%. The coolant mass flow rate adaptively increased with increasing flame heat flux from 0.24 MW m⁻² to 0.42 MW m⁻², while the cooled surface temperature stayed around 373 K. Schlieren pictures showed a protective steam layer on the hot surface which blocked the flame heat flux to the hot surface. The thickness of the protective steam layer also increased with increasing heat flux.

  4. A new Self-Adaptive disPatching System for local clusters

    Science.gov (United States)

    Kan, Bowen; Shi, Jingyan; Lei, Xiaofeng

    2015-12-01

    The scheduler is one of the most important components of a high performance cluster. This paper introduces a self-adaptive dispatching system (SAPS) based on Torque[1] and Maui[2]. It improves cluster resource utilization and the overall throughput of tasks, and provides extra functions for administrators and users. First, in order to allow the scheduling of GPUs, a GPU scheduling module based on Torque and Maui has been developed. Second, SAPS analyses the relationship between the number of queued jobs and the number of idle job slots, and then tunes the priority of users' jobs dynamically, so that more jobs run and fewer job slots stay idle. Third, integrated with the monitoring function, SAPS excludes nodes in error states as detected by the monitor and returns them to the cluster after the nodes have recovered. In addition, SAPS provides a series of function modules including a batch monitoring management module, a comprehensive scheduling accounting module and a real-time alarm module. The aim of SAPS is to enhance the reliability and stability of Torque and Maui. Currently, SAPS has been running stably on a local cluster at IHEP (Institute of High Energy Physics, Chinese Academy of Sciences), with more than 12,000 CPU cores and 50,000 jobs running each day. Monitoring has shown that resource utilization has been improved by more than 26%, and the management workload for both administrators and users has been reduced greatly.
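
    A toy sketch of the priority-tuning idea described above: when job slots sit idle while users have jobs queued, raise those users' priorities in proportion to their share of the queue. The data layout, constants, and the adjustment rule itself are hypothetical; they are not taken from SAPS, Torque, or Maui.

```python
def tune_priorities(users, idle_slots, base_priority=1000, step=100):
    """users: list of dicts like {'name': str, 'queued': int, 'running': int}."""
    total_queued = sum(u['queued'] for u in users)
    adjustments = {}
    for u in users:
        if idle_slots > 0 and total_queued > 0:
            share = u['queued'] / total_queued           # this user's share of the backlog
            adjustments[u['name']] = base_priority + int(step * share * idle_slots)
        else:
            adjustments[u['name']] = base_priority       # nothing to rebalance
    return adjustments

# Example: the user with the larger backlog gets the bigger boost.
print(tune_priorities([{'name': 'alice', 'queued': 30, 'running': 5},
                       {'name': 'bob', 'queued': 10, 'running': 20}], idle_slots=8))
```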

  5. Conjunctive patches subspace learning with side information for collaborative image retrieval.

    Science.gov (United States)

    Zhang, Lining; Wang, Lipo; Lin, Weisi

    2012-08-01

    Content-Based Image Retrieval (CBIR) has attracted substantial attention during the past few years for its potential practical applications to image management. A variety of Relevance Feedback (RF) schemes have been designed to bridge the semantic gap between the low-level visual features and the high-level semantic concepts for an image retrieval task. Various Collaborative Image Retrieval (CIR) schemes aim to utilize the user historical feedback log data with similar and dissimilar pairwise constraints to improve the performance of a CBIR system. However, existing subspace learning approaches with explicit label information cannot be applied for a CIR task, although the subspace learning techniques play a key role in various computer vision tasks, e.g., face recognition and image classification. In this paper, we propose a novel subspace learning framework, i.e., Conjunctive Patches Subspace Learning (CPSL) with side information, for learning an effective semantic subspace by exploiting the user historical feedback log data for a CIR task. The CPSL can effectively integrate the discriminative information of labeled log images, the geometrical information of labeled log images and the weakly similar information of unlabeled images together to learn a reliable subspace. We formally formulate this problem into a constrained optimization problem and then present a new subspace learning technique to exploit the user historical feedback log data. Extensive experiments on both synthetic data sets and a real-world image database demonstrate the effectiveness of the proposed scheme in improving the performance of a CBIR system by exploiting the user historical feedback log data.

  6. Domain decomposed preconditioners with Krylov subspace methods as subdomain solvers

    Energy Technology Data Exchange (ETDEWEB)

    Pernice, M. [Univ. of Utah, Salt Lake City, UT (United States)

    1994-12-31

    Domain decomposed preconditioners for nonsymmetric partial differential equations typically require the solution of problems on the subdomains. Most implementations employ exact solvers to obtain these solutions. Consequently work and storage requirements for the subdomain problems grow rapidly with the size of the subdomain problems. Subdomain solves constitute the single largest computational cost of a domain decomposed preconditioner, and improving the efficiency of this phase of the computation will have a significant impact on the performance of the overall method. The small local memory available on the nodes of most message-passing multicomputers motivates consideration of the use of an iterative method for solving subdomain problems. For large-scale systems of equations that are derived from three-dimensional problems, memory considerations alone may dictate the need for using iterative methods for the subdomain problems. In addition to reduced storage requirements, use of an iterative solver on the subdomains allows flexibility in specifying the accuracy of the subdomain solutions. Substantial savings in solution time are possible if the quality of the domain decomposed preconditioner is not degraded too much by relaxing the accuracy of the subdomain solutions. While some work in this direction has been conducted for symmetric problems, similar studies for nonsymmetric problems appear not to have been pursued. This work represents a first step in this direction, and explores the effectiveness of performing subdomain solves using several transpose-free Krylov subspace methods: GMRES, transpose-free QMR, CGS, and a smoothed version of CGS. Depending on the difficulty of the subdomain problem and the convergence tolerance used, a reduction in solution time is possible in addition to the reduced memory requirements. The domain decomposed preconditioner is a Schur complement method in which the interface operators are approximated using interface probing.
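
    A small SciPy sketch of the general idea of inexact subdomain solves inside a preconditioner: a non-overlapping block (block-Jacobi-like) preconditioner whose subdomain problems are solved by inner GMRES with a loose tolerance. The toy operator, block partition, and tolerances are illustrative, the `rtol` keyword assumes SciPy 1.12 or later (older releases use `tol`), and this is not the Schur complement method of the record.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, nsub = 400, 4
A = sp.diags([-1.0, 2.2, -1.0], [-1, 0, 1], shape=(n, n), format='csr')   # toy operator
b = np.ones(n)
blocks = np.array_split(np.arange(n), nsub)          # non-overlapping "subdomains"

def apply_preconditioner(r):
    z = np.zeros_like(r)
    for idx in blocks:
        A_sub = A[idx][:, idx]                        # restriction to the subdomain
        z[idx], _ = spla.gmres(A_sub, r[idx], rtol=1e-2, maxiter=20)   # inexact inner solve
    return z

M = spla.LinearOperator((n, n), matvec=apply_preconditioner)
x, info = spla.gmres(A, b, M=M, rtol=1e-8)            # outer Krylov solve, preconditioned
print(info, np.linalg.norm(A @ x - b))
```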

  7. Extending the subspace hybrid method for eigenvalue problems in reactor physics calculation

    International Nuclear Information System (INIS)

    Zhang, Q.; Abdel-Khalik, H. S.

    2013-01-01

    This paper presents an innovative hybrid Monte Carlo-deterministic method, denoted the SUBSPACE method, designed to improve the efficiency of hybrid methods for reactor analysis applications. The SUBSPACE method achieves its high computational efficiency by taking advantage of the existing correlations between the desired responses. Recently, significant gains in computational efficiency have been demonstrated using this method for source-driven problems. Within this work the mathematical theory behind the SUBSPACE method is introduced and extended to address core-wide k-eigenvalue problems. The method's efficiency is demonstrated on a three-dimensional quarter-core problem, where responses are sought at the pin-cell level. The SUBSPACE method is compared with the FW-CADIS method and is found to be more efficient for the test problem used, because the FW-CADIS method solves a forward eigenvalue problem and an adjoint fixed-source problem whereas the SUBSPACE method solves only an adjoint fixed-source problem. Based on the favorable results obtained here, we are confident that the applicability of Monte Carlo to large-scale reactor analysis could be realized in the near future. (authors)

  8. Subspace-Based Holistic Registration for Low-Resolution Facial Images

    Directory of Open Access Journals (Sweden)

    Boom BJ

    2010-01-01

    Subspace-based holistic registration is introduced as an alternative to landmark-based face registration, which performs poorly on low-resolution images, such as those obtained in camera surveillance applications. The proposed registration method finds the alignment by maximizing the similarity score between a probe and a gallery image. We use a novel probabilistic framework for both user-independent and user-specific face registration. The similarity is calculated using the probability that the face image is correctly aligned in a face subspace, but we additionally take into account the probability that the face is misaligned, based on the residual error in the dimensions perpendicular to the face subspace. We perform extensive experiments on the FRGCv2 database to evaluate the impact that the face registration methods have on face recognition. Subspace-based holistic registration on low-resolution images can improve face recognition in comparison with landmark-based registration on high-resolution images. The performance of the tested face recognition methods after subspace-based holistic registration on a low-resolution version of the FRGC database is similar to that after manual registration.

  9. Self-adaptive method to distinguish inner and outer contours of industrial computed tomography image for rapid prototype

    International Nuclear Information System (INIS)

    Duan Liming; Ye Yong; Zhang Xia; Zuo Jian

    2013-01-01

    A self-adaptive identification method is proposed for more accurate and efficient judgment of the inner and outer contours of industrial computed tomography (CT) slice images. The convexity-concavity of each single-pixel-wide closed contour is first identified with the angle method. Then, contours with concave vertices are distinguished as inner or outer contours with the ray method, and contours without concave vertices are distinguished with the extreme coordinate value method. The distinguishing method is thus chosen automatically according to the convexity and concavity of each contour, so the disadvantages of any single method, such as the long runtime of the ray method and the fallibility of the extreme coordinate method, are avoided. The experiments prove the adaptability, efficiency, and accuracy of the self-adaptive method. (authors)

  10. A study on directional resistivity logging-while-drilling based on self-adaptive hp-FEM

    Science.gov (United States)

    Liu, Dejun; Li, Hui; Zhang, Yingying; Zhu, Gengxue; Ai, Qinghui

    2014-12-01

    Numerical simulation of the resistivity logging-while-drilling (LWD) tool response provides guidance for designing novel logging instruments and interpreting real-time logging data. In this paper, based on a self-adaptive hp-finite element method (hp-FEM) algorithm, we analyze the LWD tool response against model parameters and briefly illustrate the geosteering capabilities of directional resistivity LWD. Numerical simulation results indicate that changing the source spacing has an obvious influence on the investigation depth and detection precision of the resistivity LWD tool, and that changing the frequency can improve the resolution of both low-resistivity and high-resistivity formations. The simulation results also indicate that the self-adaptive hp-FEM algorithm has good convergence speed and calculation accuracy, and that it is suitable for simulating the response of resistivity LWD tools to guide geosteering.

  11. A self-adapting and altitude-dependent regularization method for atmospheric profile retrievals

    Directory of Open Access Journals (Sweden)

    M. Ridolfi

    2009-03-01

    MIPAS is a Fourier transform spectrometer, operating onboard the ENVISAT satellite since July 2002. The online retrieval algorithm produces geolocated profiles of temperature and of the volume mixing ratios of six key atmospheric constituents: H2O, O3, HNO3, CH4, N2O and NO2. In the validation phase, oscillations beyond the error bars were observed in several profiles, particularly in CH4 and N2O.

    To tackle this problem, a Tikhonov regularization scheme has been implemented in the retrieval algorithm. The applied regularization is however rather weak in order to preserve the vertical resolution of the profiles.

    In this paper we present a self-adapting and altitude-dependent regularization approach that detects whether the analyzed observations contain information about small-scale profile features, and determines the strength of the regularization accordingly. The objective of the method is to smooth out artificial oscillations as much as possible, while preserving the fine detail features of the profile when related information is detected in the observations.

    The proposed method is checked for self consistency, its performance is tested on MIPAS observations and compared with that of some other regularization schemes available in the literature. In all the considered cases the proposed scheme achieves a good performance, thanks to its altitude dependence and to the constraints employed, which are specific of the inversion problem under consideration. The proposed method is generally applicable to iterative Gauss-Newton algorithms for the retrieval of vertical distribution profiles from atmospheric remote sounding measurements.
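
    A minimal sketch of an altitude-dependent Tikhonov-regularized retrieval step of the general kind described above, where the smoothing strength varies per altitude level. The forward-model matrix, the first-difference smoothing operator, and the choice of per-level strengths are illustrative assumptions; the information-content test used in the record to set them is not reproduced.

```python
import numpy as np

def regularized_retrieval(K, y, lambdas):
    """Solve min ||y - K x||^2 + ||L x||^2 with an altitude-dependent strength.

    K: forward-model Jacobian (n_meas x m levels), y: measurements,
    lambdas: per-level regularization strengths (length m).
    """
    m = K.shape[1]
    D = np.diff(np.eye(m), axis=0)          # first-difference smoothing operator, (m-1) x m
    L = np.diag(lambdas[:-1]) @ D           # stronger smoothing where lambdas is larger
    lhs = K.T @ K + L.T @ L
    rhs = K.T @ y
    return np.linalg.solve(lhs, rhs)

# lambdas would be set large where the observations carry little small-scale
# information and small where fine vertical structure is actually resolved.
```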

  12. A Timed Colored Petri Net Simulation-Based Self-Adaptive Collaboration Method for Production-Logistics Systems

    OpenAIRE

    Zhengang Guo; Yingfeng Zhang; Xibin Zhao; Xiaoyu Song

    2017-01-01

    Complex and customized manufacturing requires a high level of collaboration between production and logistics in a flexible production system. With the widespread use of Internet of Things technology in manufacturing, a great amount of real-time, multi-source manufacturing and logistics data is created that can be used for production-logistics collaboration. To solve the aforementioned problems, this paper proposes a timed colored Petri net simulation-based self-adaptive colla...

  13. Self-adaptive strain-relaxation optimization for high-energy lithium storage material through crumpling of graphene.

    Science.gov (United States)

    Zhao, Yunlong; Feng, Jiangang; Liu, Xue; Wang, Fengchao; Wang, Lifen; Shi, Changwei; Huang, Lei; Feng, Xi; Chen, Xiyuan; Xu, Lin; Yan, Mengyu; Zhang, Qingjie; Bai, Xuedong; Wu, Hengan; Mai, Liqiang

    2014-08-01

    High-energy lithium battery materials based on conversion/alloying reactions have tremendous potential applications in new-generation energy storage devices. However, these applications are limited by inherently large volume variations and sluggish kinetics. Here we report a self-adaptive strain-relaxed electrode, obtained through the crumpling of graphene to serve as highly stretchable protective shells on a metal framework, to overcome these limitations. The graphene sheets are self-assembled and deeply crumpled into a pinecone-like structure through a contraction-strain-driven crumpling method. The as-prepared electrode exhibits a high specific capacity (2,165 mAh g(-1)) and a fast charge-discharge rate (20 A g(-1)) with no capacity fading in 1,000 cycles. This kind of crumpled graphene shows self-adaptive behaviour, spontaneously unfolding and folding in synchrony with the cyclic expansion-contraction volumetric variation of the core material, which releases strain and maintains good electric contact simultaneously. It is expected that such findings will facilitate the applications of crumpled graphene and of self-adaptive materials.

  14. Optimizing the data acquisition rate for a remotely controllable structural monitoring system with parallel operation and self-adaptive sampling

    International Nuclear Information System (INIS)

    Sheng, Wenjuan; Guo, Aihuang; Liu, Yang; Azmi, Asrul Izam; Peng, Gang-Ding

    2011-01-01

    We present a novel technique that optimizes the real-time remote monitoring and control of dispersed civil infrastructures. The monitoring system is based on fiber Bragg grating (FBG) sensors and transfers data via Ethernet. This technique combines parallel operation and self-adaptive sampling to increase the data acquisition rate in remotely controllable structural monitoring systems. The compact parallel operation mode is highly efficient at achieving the highest possible data acquisition rate for the FBG-sensor-based local data acquisition system. Self-adaptive sampling is introduced to continuously coordinate local acquisition and remote control for data acquisition rate optimization. Key issues which impact the operation of the whole system, such as the real-time data acquisition rate, data processing capability, and buffer usage, are investigated. The results show that, by introducing parallel operation and self-adaptive sampling, the data acquisition rate can be increased several-fold without affecting the operating performance of either local data acquisition or remote process control.

  15. Multi-objective optimization of p-xylene oxidation process using an improved self-adaptive differential evolution algorithm

    Institute of Scientific and Technical Information of China (English)

    Lili Tao; Bin Xu; Zhihua Hu; Weimin Zhong

    2017-01-01

    The rising global use of polyester fiber has contributed to strong demand for terephthalic acid (TPA). The liquid-phase catalytic oxidation of p-xylene (PX) to TPA is regarded as a critical and efficient chemical process in industry [1]. The PX oxidation reaction involves many complex side reactions, among which acetic acid combustion and PX combustion are the most important. As TPA is the target product of this oxidation process, its quality and yield are of great concern. However, improving the qualified product yield can bring about high energy consumption, which means that the economic objectives of this process cannot be achieved simultaneously because the two objectives are in conflict with each other. In this paper, an improved self-adaptive multi-objective differential evolution algorithm is proposed to handle such multi-objective optimization problems. The immune concept is introduced into the self-adaptive multi-objective differential evolution algorithm (SADE) to strengthen the local search ability and optimization accuracy. The proposed algorithm is successfully tested on several benchmark test problems, and performance measures such as convergence and divergence metrics are calculated. Subsequently, the multi-objective optimization of an industrial PX oxidation process is carried out using the proposed immune self-adaptive multi-objective differential evolution algorithm (ISADE). Optimization results indicate that application of ISADE can greatly improve the yield of TPA with low combustion loss and without degrading TPA quality.
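
    A single-objective, jDE-style skeleton of self-adaptive differential evolution, where each individual carries its own F and CR that are occasionally resampled and survive only when they produce a better trial. This is a generic sketch, not ISADE: the immune-inspired local search and Pareto handling of the record are omitted, and the sphere objective stands in for the process model.

```python
import numpy as np

rng = np.random.default_rng(2)

def objective(x):
    return np.sum(x ** 2)        # placeholder for the PX-oxidation process objectives

dim, NP, gens = 10, 30, 200
pop = rng.uniform(-5.0, 5.0, (NP, dim))
F = np.full(NP, 0.5)             # per-individual mutation factor
CR = np.full(NP, 0.9)            # per-individual crossover rate
fit = np.array([objective(p) for p in pop])

for g in range(gens):
    for i in range(NP):
        # self-adaptation: occasionally resample this individual's F and CR
        Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
        CRi = rng.random() if rng.random() < 0.1 else CR[i]
        a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = pop[a] + Fi * (pop[b] - pop[c])
        cross = rng.random(dim) < CRi
        cross[rng.integers(dim)] = True              # ensure at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = objective(trial)
        if f_trial <= fit[i]:                        # successful F/CR are inherited
            pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi

best = pop[np.argmin(fit)]
```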

  16. Persymmetric Adaptive Detectors of Subspace Signals in Homogeneous and Partially Homogeneous Clutter

    Directory of Open Access Journals (Sweden)

    Ding Hao

    2015-08-01

    In the field of adaptive radar detection, an effective strategy to improve the detection performance is to exploit the structural information of the covariance matrix, especially in the case of insufficient reference cells. Thus, in this study, the problem of detecting multidimensional subspace signals is discussed by considering the persymmetric structure of the clutter covariance matrix, i.e., the covariance matrix is symmetric about its cross diagonal. Persymmetric adaptive detectors are derived on the basis of the one-step principle as well as the two-step Generalized Likelihood Ratio Test (GLRT) in homogeneous and partially homogeneous clutter. The proposed detectors exploit the structural information of the covariance matrix at the design stage. Simulation results suggest a performance improvement over existing detectors when reference cells are insufficient. Moreover, the detection performance is assessed with respect to the effects of the covariance matrix, the signal subspace dimension, mismatch of the signal subspace, and signal fluctuations.

  17. Estimation of direction of arrival of a moving target using subspace based approaches

    Science.gov (United States)

    Ghosh, Ripul; Das, Utpal; Akula, Aparna; Kumar, Satish; Sardana, H. K.

    2016-05-01

    In this work, array processing techniques based on subspace decomposition of the signal have been evaluated for estimating the direction of arrival of moving targets using acoustic signatures. Three subspace-based approaches - Incoherent Wideband Multiple Signal Classification (IWM), Least Squares Estimation of Signal Parameters via Rotational Invariance Techniques (LS-ESPRIT) and Total Least Squares ESPRIT (TLS-ESPRIT) - are considered. Their performance is compared with conventional time delay estimation (TDE) approaches such as Generalized Cross Correlation (GCC) and Average Square Difference Function (ASDF). Performance evaluation has been conducted on experimentally generated data consisting of acoustic signatures of four different types of civilian vehicles moving in defined geometrical trajectories. The mean absolute error and standard deviation of the DOA estimates w.r.t. ground truth are used as performance evaluation metrics. Lower mean errors confirm the superiority of subspace-based approaches over TDE-based techniques. Amongst the compared methods, LS-ESPRIT showed better performance.
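
    For illustration, a narrowband MUSIC sketch on a uniform linear array with half-wavelength spacing, which is the textbook form of the subspace idea behind IWM; the array geometry and snapshot model are assumptions and do not reproduce the acoustic setup of the record.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg):
    """X: complex snapshot matrix (sensors x snapshots); returns the MUSIC pseudo-spectrum."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance of the array outputs
    eigvals, eigvecs = np.linalg.eigh(R)         # eigenvalues in ascending order
    En = eigvecs[:, :M - n_sources]              # noise subspace (smallest eigenvalues)
    spectrum = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(1j * np.pi * np.arange(M) * np.sin(th))   # ULA steering vector, d = lambda/2
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)

# Usage: the peaks of music_spectrum(X, 1, np.arange(-90, 90, 0.5)) give the DOA estimates.
```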

  18. The Detection of Subsynchronous Oscillation in HVDC Based on the Stochastic Subspace Identification Method

    Directory of Open Access Journals (Sweden)

    Chen Shi

    2014-01-01

    Subsynchronous oscillation (SSO) is usually caused by series compensation, power system stabilizers (PSS), high voltage direct current transmission (HVDC) and other power electronic equipment, and it can affect the safe operation of the generator shaft and even of the whole system. It is therefore very important to identify the modal parameters of SSO so that effective control strategies can be taken. Since the identification accuracy of traditional methods is not high enough, the stochastic subspace identification (SSI) method is proposed to improve the identification accuracy of subsynchronous oscillation modes. The stochastic subspace identification method was compared with two other methods on the subsynchronous oscillation IEEE benchmark model and on a Xiang-Shang HVDC system model; the simulation results show that the stochastic subspace identification method has the advantages of high identification precision, high operating efficiency and strong noise immunity.
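
    A covariance-driven SSI skeleton for orientation: build a block Hankel matrix of output correlations, take an SVD, recover the state matrix from the shift structure of the observability matrix, and convert its eigenvalues to modal frequencies and damping ratios. The lag counts, model order, and correlation estimator are illustrative choices, not those of the record.

```python
import numpy as np

def ssi_cov(y, fs, order, n_lags=40):
    """y: output signals, shape (channels, samples); fs: sampling rate; order: model order."""
    y = np.atleast_2d(y)
    l, N = y.shape
    # output correlation matrices R_k for k = 0 .. 2*n_lags-1
    R = [y[:, k:] @ y[:, :N - k].T / (N - k) for k in range(2 * n_lags)]
    # block Hankel matrix of correlations
    H = np.block([[R[i + j + 1] for j in range(n_lags)] for i in range(n_lags)])
    U, s, Vt = np.linalg.svd(H)
    O = U[:, :order] * np.sqrt(s[:order])                  # estimated observability matrix
    A = np.linalg.pinv(O[:-l]) @ O[l:]                     # shift invariance gives the state matrix
    lam = np.log(np.linalg.eig(A)[0].astype(complex)) * fs # continuous-time poles
    freqs = np.abs(lam) / (2 * np.pi)                      # modal frequencies in Hz
    damping = -lam.real / np.abs(lam)                      # modal damping ratios
    return freqs, damping
```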

  19. Boundary regularity of Nevanlinna domains and univalent functions in model subspaces

    International Nuclear Information System (INIS)

    Baranov, Anton D; Fedorovskiy, Konstantin Yu

    2011-01-01

    In the paper we study boundary regularity of Nevanlinna domains, which have appeared in problems of uniform approximation by polyanalytic polynomials. A new method for constructing Nevanlinna domains with essentially irregular nonanalytic boundaries is suggested; this method is based on finding appropriate univalent functions in model subspaces, that is, in subspaces of the form K_Θ = H² ⊖ ΘH², where Θ is an inner function. To describe the irregularity of the boundaries of the domains obtained, recent results by Dolzhenko about boundary regularity of conformal mappings are used. Bibliography: 18 titles.

  20. Embeddings of model subspaces of the Hardy space: compactness and Schatten-von Neumann ideals

    International Nuclear Information System (INIS)

    Baranov, Anton D

    2009-01-01

    We study properties of the embedding operators of model subspaces K_Θ^p (defined by inner functions) in the Hardy space H^p (coinvariant subspaces of the shift operator). We find a criterion for the embedding of K_Θ^p in L^p(μ) to be compact, similar to the Volberg-Treil theorem on bounded embeddings, and give a positive answer to a question of Cima and Matheson. The proof is based on Bernstein-type inequalities for functions in K_Θ^p. We investigate measures μ such that the embedding operator belongs to some Schatten-von Neumann ideal.

  1. Robust subspace estimation using low-rank optimization theory and applications

    CERN Document Server

    Oreifej, Omar

    2014-01-01

    Various fundamental applications in computer vision and machine learning require finding the basis of a certain subspace. Examples of such applications include face detection, motion estimation, and activity recognition. Increasing interest has recently been placed on this area as a result of significant advances in the mathematics of matrix rank optimization. Interestingly, robust subspace estimation can be posed as a low-rank optimization problem, which can be solved efficiently using techniques such as the method of Augmented Lagrange Multipliers. In this book, the authors discuss fundame
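
    As one concrete instance of the low-rank formulation mentioned above, the sketch below implements standard Robust PCA (low-rank plus sparse decomposition) with a simple inexact augmented Lagrange multiplier loop; the parameter defaults follow common conventions and are not taken from the book.

```python
import numpy as np

def shrink(X, tau):
    """Elementwise soft-thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular value soft-thresholding."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(D, lam=None, mu=None, iters=200, tol=1e-7):
    """Decompose D into a low-rank part L (the subspace) and a sparse outlier part S."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(D).sum()
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(iters):
        L = svd_threshold(D - S + Y / mu, 1.0 / mu)     # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)            # sparse update
        residual = D - L - S
        Y += mu * residual                              # dual (multiplier) update
        if np.linalg.norm(residual) <= tol * np.linalg.norm(D):
            break
    return L, S
```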

  2. Outlier Ranking via Subspace Analysis in Multiple Views of the Data

    DEFF Research Database (Denmark)

    Muller, Emmanuel; Assent, Ira; Iglesias, Patricia

    2012-01-01

    We propose Outrank, a novel outlier ranking concept. Outrank exploits subspace analysis to determine the degree of outlierness. It considers different subsets of the attributes as individual outlier properties. It compares clustered regions in arbitrary subspaces and derives an outlierness score for each object. Its principled integration of multiple views into an outlierness measure uncovers outliers that are not detectable in the full attribute space. Our experimental evaluation demonstrates that Outrank successfully determines a high quality outlier ranking, and outperforms state-of-the-art outlierness measures.

  3. Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Søren Holdt

    We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV and ULLIV). In addition we show how the subspace-based algorithms can be evaluated and compared by means of simple FIR filter interpretations. The algorithms are illustrated with working Matlab code and applications in speech processing.
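
    A small Python illustration (rather than the Matlab code referenced in the record) of the core rank-reduction step: embed a noisy frame in a Hankel matrix, truncate its SVD to an assumed signal rank, and average the anti-diagonals back into a time series. The frame length, matrix shape and rank are illustrative parameters.

```python
import numpy as np
from scipy.linalg import hankel

def subspace_denoise(x, rank, n_rows=64):
    """Rank-reduction noise suppression for one frame x (1-D numpy array)."""
    n_cols = len(x) - n_rows + 1
    H = hankel(x[:n_rows], x[n_rows - 1:])            # n_rows x n_cols Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # keep only the signal subspace
    # average the anti-diagonals to map the low-rank matrix back to a signal
    y = np.zeros(len(x)); counts = np.zeros(len(x))
    for i in range(n_rows):
        y[i:i + n_cols] += H_low[i]
        counts[i:i + n_cols] += 1
    return y / counts

# Toy usage: a noisy sinusoid cleaned with an assumed rank of 2.
t = np.arange(512) / 8000.0
noisy = np.sin(2 * np.pi * 440 * t) + 0.3 * np.random.default_rng(3).standard_normal(t.size)
clean = subspace_denoise(noisy, rank=2)
```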

  4. Krylov subspace methods for the solution of large systems of ODE's

    DEFF Research Database (Denmark)

    Thomsen, Per Grove; Bjurstrøm, Nils Henrik

    1998-01-01

    In air pollution modelling, large systems of ODEs arise. Solving such systems may be done efficiently by semi-implicit Runge-Kutta methods. The internal stages may be solved using Krylov subspace methods. The efficiency of this approach is investigated and verified.

  5. Extended Krylov subspaces approximations of matrix functions. Application to computational electromagnetics

    Energy Technology Data Exchange (ETDEWEB)

    Druskin, V.; Lee, Ping [Schlumberger-Doll Research, Ridgefield, CT (United States); Knizhnerman, L. [Central Geophysical Expedition, Moscow (Russian Federation)

    1996-12-31

    There is now a growing interest in using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. In the event that computing the matrix inverse is relatively inexpensive, it is sometimes attractive to solve the ODE using extended Krylov subspaces, generated by the actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.

  6. Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Søren Holdt

    2007-01-01

    We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal and rank-revealing triangular decompositions. The algorithms are illustrated with working Matlab code and applications in speech processing.

  7. Metastable decoherence-free subspaces and electromagnetically induced transparency in interacting many-body systems

    DEFF Research Database (Denmark)

    Macieszczak, Katarzyna; Zhou, Yanli; Hofferberth, Sebastian

    2017-01-01

    to stationarity this leads to a slow dynamics, which renders the typical assumption of fast relaxation invalid. We derive analytically the effective nonequilibrium dynamics in the decoherence-free subspace, which features coherent and dissipative two-body interactions. We discuss the use of this scenario...

  8. A Framework for Evaluation and Exploration of Clustering Algorithms in Subspaces of High Dimensional Databases

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2011-01-01

    Few comparative studies on the advantages and disadvantages of the different algorithms exist. Part of the underlying problem is the lack of available open source implementations that could be used by researchers to understand, compare, and extend subspace and projected clustering algorithms. In this work, we

  9. A block Krylov subspace time-exact solution method for linear ordinary differential equation systems

    NARCIS (Netherlands)

    Bochev, Mikhail A.

    2013-01-01

    We propose a time-exact Krylov-subspace-based method for solving linear ordinary differential equation systems of the form $y'=-Ay+g(t)$ and $y''=-Ay+g(t)$, where $y(t)$ is the unknown function. The method consists of two stages. The first stage is an accurate piecewise polynomial approximation of

  10. Sparse subspace clustering for data with missing entries and high-rank matrix completion.

    Science.gov (United States)

    Fan, Jicong; Chow, Tommy W S

    2017-09-01

    Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Third-order nonlinear differential operators preserving invariant subspaces of maximal dimension

    International Nuclear Information System (INIS)

    Qu Gai-Zhu; Zhang Shun-Li; Li Yao-Long

    2014-01-01

    In this paper, third-order nonlinear differential operators are studied. It is shown that they are quadratic forms when they preserve invariant subspaces of maximal dimension. A complete description of third-order quadratic operators with constant coefficients is obtained. One example is given to derive special solutions for evolution equations with third-order quadratic operators. (general)

  12. A frequency domain subspace algorithm for mixed causal, anti-causal LTI systems

    NARCIS (Netherlands)

    Fraanje, Rufus; Verhaegen, Michel; Verdult, Vincent; Pintelon, Rik

    2003-01-01

    The paper extends the subspace identification method for estimating state-space models from frequency response function (FRF) samples, proposed by McKelvey et al. (1996), to mixed causal/anti-causal systems, and shows that other frequency domain subspace algorithms can be extended similarly. The method

  13. Koopman Invariant Subspaces and Finite Linear Representations of Nonlinear Dynamical Systems for Control.

    Science.gov (United States)

    Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kutz, J Nathan

    2016-01-01

    In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control.
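
    A minimal exact-DMD sketch of the computation referred to above: the snapshot columns may be raw states or nonlinear observables g(x), and the eigenvalues and modes of the reduced operator approximate the Koopman spectrum on that subspace. The truncation rank and snapshot layout are illustrative.

```python
import numpy as np

def dmd(G, r):
    """G: snapshot matrix whose columns are observables g(x_0), ..., g(x_m); r: truncation rank."""
    X, Y = G[:, :-1], G[:, 1:]                       # time-shifted snapshot pairs
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vt[:r].conj().T       # rank-r truncation
    A_tilde = (U.conj().T @ Y @ V) / s               # reduced linear (Koopman-approximating) operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ V / s) @ W                          # exact DMD modes
    return eigvals, modes

# Toy usage on a linear system, where DMD recovers the true eigenvalues.
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
G = np.empty((2, 50)); G[:, 0] = [1.0, 1.0]
for k in range(49):
    G[:, k + 1] = A_true @ G[:, k]
eigvals, modes = dmd(G, r=2)
```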

  14. A highly self-adaptive cold plate for the single-phase mechanically pumped fluid loop for spacecraft thermal management

    International Nuclear Information System (INIS)

    Wang, Ji-Xiang; Li, Yun-Ze; Zhang, Hong-Sheng; Wang, Sheng-Nan; Liang, Yi-Hao; Guo, Wei; Liu, Yang; Tian, Shao-Ping

    2016-01-01

    Highlights: • A highly self-adaptive cold plate integrated with a paraffin-based actuator is proposed. • Higher operating economy is attained due to an energy-efficient strategy. • Greater compatibility with the current space control system is obtained. • A model was established theoretically to design the system efficiently. • Strong self-adaptability of the cold plate is observed experimentally. - Abstract: Aiming to improve the conventional single-phase mechanically pumped fluid loop applied in spacecraft thermal control systems, a novel actively-pumped loop using a distributed thermal control strategy was proposed. The flow control system for each branch consists primarily of a thermal control valve integrated with a paraffin-based actuator residing in the front part of each corresponding cold plate, where both the coolant flow rate and the cold plate's heat removal capability are controlled sensitively according to the heat load upon the cold plate, through a conversion between thermal and mechanical energy. The operating economy improves remarkably because no energy is consumed in the flow control process. Additionally, by integrating the sensor, controller and actuator functions, the scheme also simplifies the structure of the traditional mechanically pumped fluid loop. Building on this novel scheme, a mathematical model for the design process of the highly specialized cold plate was established theoretically. A validating prototype system was built on the basis of the design method and the scheduled objective of the controlled temperature (43 °C). Temperature control performances of the highly self-adaptive cold plate under various operating conditions were then tested experimentally. In almost all experiments, the controlled temperature remains within a range of ±2 °C around the set-point. It can be concluded that this self-driven control system is stable, with sufficiently fast transient responses and sufficiently small steady

  15. Using CUDA Technology for Defining the Stiffness Matrix in the Subspace of Eigenvectors

    Directory of Open Access Journals (Sweden)

    Yu. V. Berchun

    2015-01-01

    Full Text Available The aim is to improve the performance of solving a problem of deformable solid mechanics through the use of GPGPU. The paper describes technologies for computing systems using both a central and a graphics processor and provides motivation for using CUDA technology as the efficient one. The paper also analyses methods to solve the problem of defining natural frequencies and waveforms, i.e. an iteration method in the subspace. The method includes several stages. The paper considers the most resource-hungry stage, which defines the stiffness matrix in the subspace of eigenforms, and gives the mathematical interpretation of this stage. The GPU choice as a computing device is justified. The paper presents an algorithm for calculating the stiffness matrix in the subspace of eigenforms taking into consideration the features of the input data. The global stiffness matrix is very sparse, and its size can reach tens of millions. Therefore, it is represented as a set of the stiffness matrices of the single elements of a model. The paper analyses methods of data representation in the software and selects the best practices for GPU computing. It describes the software implementation using CUDA technology to calculate the stiffness matrix in the subspace of eigenforms. Due to the nature of the input data, it is impossible to use the universal libraries of matrix computations (cuSPARSE and cuBLAS) for loading the GPU. For efficient use of GPU resources in the software implementation, the stiffness matrices of elements are assembled into block matrices of a special form. The advantages of using shared memory in GPU calculations are described. The transfer to GPU computations allowed a twentyfold increase in performance (as compared to the multithreaded CPU implementation) on a model of middle dimensions (about 2 million degrees of freedom). Such an acceleration of one stage speeds up defining the natural frequencies and waveforms by the iteration method in a subspace.
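
    The resource-hungry stage described here is, in essence, the projection K_red = Φᵀ K Φ accumulated element by element rather than from an assembled global matrix. The plain NumPy sketch below shows that accumulation on the CPU (the element list and DOF maps are hypothetical names of ours); the record's contribution is mapping exactly this loop onto CUDA block matrices and shared memory.

```python
import numpy as np

def reduced_stiffness(element_K, element_dofs, Phi):
    """Compute K_red = Phi^T K Phi by accumulating element contributions.
    element_K:    list of (k x k) element stiffness matrices
    element_dofs: list of length-k index arrays into the global DOFs
    Phi:          (n_dof x m) matrix of eigenvectors (subspace basis)"""
    m = Phi.shape[1]
    K_red = np.zeros((m, m))
    for Ke, dofs in zip(element_K, element_dofs):
        Pe = Phi[dofs, :]            # rows of the basis touched by this element
        K_red += Pe.T @ Ke @ Pe      # local contribution to the projection
    return K_red
```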

  16. Lyapunov vectors and assimilation in the unstable subspace: theory and applications

    International Nuclear Information System (INIS)

    Palatella, Luigi; Carrassi, Alberto; Trevisan, Anna

    2013-01-01

    Based on a limited number of noisy observations, estimation algorithms provide a complete description of the state of a system at current time. Estimation algorithms that go under the name of assimilation in the unstable subspace (AUS) exploit the nonlinear stability properties of the forecasting model in their formulation. Errors that grow due to sensitivity to initial conditions are efficiently removed by confining the analysis solution in the unstable and neutral subspace of the system, the subspace spanned by Lyapunov vectors with positive and zero exponents, while the observational noise does not disturb the system along the stable directions. The formulation of the AUS approach in the context of four-dimensional variational assimilation (4DVar-AUS) and the extended Kalman filter (EKF-AUS) and its application to chaotic models is reviewed. In both instances, the AUS algorithms are at least as efficient but simpler to implement and computationally less demanding than their original counterparts. As predicted by the theory when error dynamics is linear, the optimal subspace dimension for 4DVar-AUS is given by the number of positive and null Lyapunov exponents, while the EKF-AUS algorithm, using the same unstable and neutral subspace, recovers the solution of the full EKF algorithm, but dealing with error covariance matrices of a much smaller dimension and significantly reducing the computational burden. Examples of the application to a simplified model of the atmospheric circulation and to the optimal velocity model for traffic dynamics are given. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Lyapunov analysis: from dynamical systems theory to applications’. (paper)
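
    A schematic of the AUS-type analysis step described above, assuming the columns of `E` already span the unstable-neutral subspace (e.g., obtained from Lyapunov or bred vectors) and `Gamma` is the reduced error covariance; this is a generic Kalman update with P = EΓEᵀ, not the exact 4DVar-AUS or EKF-AUS formulation of the paper.

```python
import numpy as np

def aus_analysis(x_f, y, H, R, E, Gamma):
    """Schematic AUS-type analysis: the forecast error covariance is
    represented only in the unstable-neutral subspace, P = E Gamma E^T,
    where the columns of E (n x m) span that subspace and Gamma (m x m)
    is the reduced covariance, so the correction is confined to span(E)."""
    HE = H @ E                                   # (p x m) observed subspace directions
    S = HE @ Gamma @ HE.T + R                    # innovation covariance
    K = E @ Gamma @ HE.T @ np.linalg.inv(S)      # gain with columns in span(E)
    x_a = x_f + K @ (y - H @ x_f)                # analysis state
    Gamma_a = Gamma - Gamma @ HE.T @ np.linalg.inv(S) @ HE @ Gamma
    return x_a, Gamma_a                          # updated reduced covariance
```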

  17. Koopman Invariant Subspaces and Finite Linear Representations of Nonlinear Dynamical Systems for Control

    Science.gov (United States)

    Brunton, Steven L.; Brunton, Bingni W.; Proctor, Joshua L.; Kutz, J. Nathan

    2016-01-01

    In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control. PMID:26919740

  18. Noise-robust unsupervised spike sorting based on discriminative subspace learning with outlier handling.

    Science.gov (United States)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2017-06-01

    Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, the clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to a poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. The proposed algorithm uses discriminative subspace learning to extract low dimensional and most discriminative features from the spike waveforms and performs clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. By providing more accurate information about the activity of a larger number of individual neurons, with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain machine interface studies.
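
    A toy rendition of the iterative subspace-selection loop described above, using scikit-learn's LDA and Gaussian mixture model; the paper's outlier handling and automatic selection of the number of clusters are omitted, and the helper name is ours.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

def iterative_lda_gmm(waveforms, n_clusters=3, n_iter=10):
    """Alternate between (a) clustering in the current low-dimensional
    subspace with a GMM and (b) re-learning the subspace with LDA using
    the current cluster labels. waveforms: (n_spikes x n_samples)."""
    # initial subspace and labels from PCA + GMM
    feats = PCA(n_components=n_clusters - 1).fit_transform(waveforms)
    labels = GaussianMixture(n_clusters).fit_predict(feats)
    for _ in range(n_iter):
        lda = LinearDiscriminantAnalysis(n_components=n_clusters - 1)
        feats = lda.fit_transform(waveforms, labels)       # discriminative subspace
        new_labels = GaussianMixture(n_clusters).fit_predict(feats)
        if np.array_equal(new_labels, labels):             # labelling has converged
            break
        labels = new_labels
    return labels, feats
```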

  19. Noise-robust unsupervised spike sorting based on discriminative subspace learning with outlier handling

    Science.gov (United States)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2017-06-01

    Objective. Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, the clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches leading to a poor sorting accuracy especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. Approach. The proposed algorithm uses discriminative subspace learning to extract low dimensional and most discriminative features from the spike waveforms and perform clustering with automatic detection of the number of the clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of the clusters. Main results. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. Significance. By providing more accurate information about the activity of more number of individual neurons with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain machine interface studies.

  20. Design and Numerical Analysis of a Novel Counter-Rotating Self-Adaptable Wave Energy Converter Based on CFD Technology

    Directory of Open Access Journals (Sweden)

    Chongfei Sun

    2018-03-01

    Full Text Available The lack of an efficient and reliable power supply is currently one of the bottlenecks restricting the practical application of unmanned ocean detectors. Wave energy is the most widely distributed ocean energy, with the obvious advantages of high energy density and predictability. In this paper, a novel wave energy converter (WEC for power supply of low-power unmanned ocean detectors is proposed, which is a small-scale counter-rotating self-adaptive point absorber-type WEC. The double-layer counter-rotating absorbers can achieve the torque balance of the whole device. Besides, the self-adaptation of the blade to the water flow can maintain a unidirectional continuous rotation of the single-layer absorber. The WEC has several advantages, including small occupied space, simple exchange process and convenient modular integration. It is expected to meet the power demand of low-power ocean detectors. Through modeling and CFD analysis, it was found that the power and efficiency characteristics of WEC are greatly influenced by the relative flow velocity, the blade angle of the absorber and the interaction between the upper and lower absorbers. A physical prototype of the WEC was made and some related experiments were conducted to verify the feasibility of WEC working principle and the reliability of CFD analysis.

  1. Speckle noise reduction technique for Lidar echo signal based on self-adaptive pulse-matching independent component analysis

    Science.gov (United States)

    Xu, Fan; Wang, Jiaxing; Zhu, Daiyin; Tu, Qi

    2018-04-01

    Speckle noise has always been a particularly tricky problem in improving the ranging capability and accuracy of Lidar system especially in harsh environment. Currently, effective speckle de-noising techniques are extremely scarce and should be further developed. In this study, a speckle noise reduction technique has been proposed based on independent component analysis (ICA). Since normally few changes happen in the shape of laser pulse itself, the authors employed the laser source as a reference pulse and executed the ICA decomposition to find the optimal matching position. In order to achieve the self-adaptability of algorithm, local Mean Square Error (MSE) has been defined as an appropriate criterion for investigating the iteration results. The obtained experimental results demonstrated that the self-adaptive pulse-matching ICA (PM-ICA) method could effectively decrease the speckle noise and recover the useful Lidar echo signal component with high quality. Especially, the proposed method achieves 4 dB more improvement of signal-to-noise ratio (SNR) than a traditional homomorphic wavelet method.

  2. A Timed Colored Petri Net Simulation-Based Self-Adaptive Collaboration Method for Production-Logistics Systems

    Directory of Open Access Journals (Sweden)

    Zhengang Guo

    2017-03-01

    Full Text Available Complex and customized manufacturing requires a high level of collaboration between production and logistics in a flexible production system. With the widespread use of Internet of Things technology in manufacturing, a great amount of real-time and multi-source manufacturing data and logistics data is created, that can be used to perform production-logistics collaboration. To solve the aforementioned problems, this paper proposes a timed colored Petri net simulation-based self-adaptive collaboration method for Internet of Things-enabled production-logistics systems. The method combines the schedule of token sequences in the timed colored Petri net with real-time status of key production and logistics equipment. The key equipment is made ‘smart’ to actively publish or request logistics tasks. An integrated framework based on a cloud service platform is introduced to provide the basis for self-adaptive collaboration of production-logistics systems. A simulation experiment is conducted by using colored Petri nets (CPN Tools to validate the performance and applicability of the proposed method. Computational experiments demonstrate that the proposed method outperforms the event-driven method in terms of reductions of waiting time, makespan, and electricity consumption. This proposed method is also applicable to other manufacturing systems to implement production-logistics collaboration.

  3. Parametric recursive system identification and self-adaptive modeling of the human energy metabolism for adaptive control of fat weight.

    Science.gov (United States)

    Őri, Zsolt P

    2017-05-01

    A mathematical model has been developed to facilitate indirect measurements of difficult to measure variables of the human energy metabolism on a daily basis. The model performs recursive system identification of the parameters of the metabolic model of the human energy metabolism using the law of conservation of energy and principle of indirect calorimetry. Self-adaptive models of the utilized energy intake prediction, macronutrient oxidation rates, and daily body composition changes were created utilizing Kalman filter and the nominal trajectory methods. The accuracy of the models was tested in a simulation study utilizing data from the Minnesota starvation and overfeeding study. With biweekly macronutrient intake measurements, the average prediction error of the utilized carbohydrate intake was -23.2 ± 53.8 kcal/day, fat intake was 11.0 ± 72.3 kcal/day, and protein was 3.7 ± 16.3 kcal/day. The fat and fat-free mass changes were estimated with an error of 0.44 ± 1.16 g/day for fat and -2.6 ± 64.98 g/day for fat-free mass. The daily metabolized macronutrient energy intake and/or daily macronutrient oxidation rate and the daily body composition change from directly measured serial data are optimally predicted with a self-adaptive model with Kalman filter that uses recursive system identification.

  4. Goal-Oriented Self-Adaptive hp Finite Element Simulation of 3D DC Borehole Resistivity Simulations

    KAUST Repository

    Calo, Victor M.

    2011-05-14

    In this paper we present a goal-oriented self-adaptive hp Finite Element Method (hp-FEM) with shared data structures and a parallel multi-frontal direct solver. The algorithm automatically generates (without any user interaction) a sequence of meshes delivering exponential convergence of a prescribed quantity of interest with respect to the number of degrees of freedom. The sequence of meshes is generated from a given initial mesh, by performing h (breaking elements into smaller elements), p (adjusting polynomial orders of approximation) or hp (both) refinements on the finite elements. The new parallel implementation utilizes a computational mesh shared between multiple processors. All computational algorithms, including automatic hp goal-oriented adaptivity and the solver work fully in parallel. We describe the parallel self-adaptive hp-FEM algorithm with shared computational domain, as well as its efficiency measurements. We apply the methodology described to the three-dimensional simulation of the borehole resistivity measurement of direct current through casing in the presence of invasion.

  5. Hankel Matrix Correlation Function-Based Subspace Identification Method for UAV Servo System

    Directory of Open Access Journals (Sweden)

    Minghong She

    2018-01-01

    Full Text Available For the closed-loop subspace model identification problem, we propose a zero space projection method based on the estimation of correlation functions to fill the block Hankel matrix of the identification model by combining linear algebra with geometry. By using the same projection of related data in the time-offset set and LQ decomposition, the multiplication operation of the projection is achieved and a dynamics estimate of the unknown equipment system model is obtained. Consequently, we have solved the problem of biased estimation caused when the open-loop subspace identification algorithm is applied to closed-loop identification. A simulation example is given to show the effectiveness of the proposed approach. Finally, the practicability of the identification algorithm is verified by a hardware test of a UAV servo system in a real environment.

  6. Visual tracking based on the sparse representation of the PCA subspace

    Science.gov (United States)

    Chen, Dian-bing; Zhu, Ming; Wang, Hui-li

    2017-09-01

    We construct a collaborative model of the sparse representation and the subspace representation. First, we represent the tracking target in the principal component analysis (PCA) subspace, and then we employ an L1 regularization to restrict the sparsity of the residual term, an L2 regularization term to restrict the sparsity of the representation coefficients, and an L2 norm to restrict the distance between the reconstruction and the target. Then we implement the algorithm in the particle filter framework. Furthermore, an iterative method is presented to get the global minimum of the residual and the coefficients. Finally, an alternative template update scheme is adopted to avoid the tracking drift caused by inaccurate updates. In the experiment, we test the algorithm on 9 sequences, and compare the results with 5 state-of-the-art methods. According to the results, we can conclude that our algorithm is more robust than the other methods.
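
    One plausible reading of the collaborative objective described above, written in our own notation (y the observed target patch, U the PCA basis, c the coefficients, e the residual, λ₁ and λ₂ trade-off weights; not necessarily the authors' exact formulation):

```latex
\min_{c,\,e}\;\; \| y - U c - e \|_2^2 \;+\; \lambda_1 \| e \|_1 \;+\; \lambda_2 \| c \|_2^2
```

    The resulting reconstruction error can then be used to score each candidate particle in the filter.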

  7. Detecting anomalies in crowded scenes via locality-constrained affine subspace coding

    Science.gov (United States)

    Fan, Yaxiang; Wen, Gongjian; Qiu, Shaohua; Li, Deren

    2017-07-01

    Video anomaly event detection is the process of finding an abnormal event deviation compared with the majority of normal or usual events. The main challenges are the high structure redundancy and the dynamic changes in the scenes of surveillance videos. To address these problems, we present a framework for anomaly detection and localization in videos that is based on locality-constrained affine subspace coding (LASC) and a model updating procedure. In our algorithm, LASC attempts to reconstruct the test sample by its top-k nearest subspaces, which are obtained by segmenting the normal samples space using a clustering method. A sample with a large reconstruction cost is detected as abnormal by setting a threshold. To adapt to the scene changes over time, a model updating strategy is proposed. We experiment on two public datasets: the UCSD dataset and the Avenue dataset. The results demonstrate that our method achieves competitive performance at 700 fps on a single desktop PC.
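
    A simplified stand-in for the pipeline sketched above: cluster the normal samples, fit one local PCA subspace per cluster, and flag a test sample whose reconstruction cost over its top-k nearest subspaces exceeds a threshold. The clustering method, subspace dimension, and thresholding below are assumptions of ours, not the authors' exact affine coding.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def fit_local_subspaces(normal_feats, n_clusters=8, dim=5):
    """Segment the normal-sample space into clusters and fit one local
    subspace per cluster via PCA (a stand-in for the LASC dictionary)."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(normal_feats)
    subspaces = []
    for c in range(n_clusters):
        pts = normal_feats[km.labels_ == c]
        subspaces.append(PCA(n_components=max(1, min(dim, len(pts) - 1))).fit(pts))
    return km, subspaces

def anomaly_score(x, km, subspaces, top_k=3):
    """Reconstruction cost of x over its top-k nearest local subspaces;
    a large score flags an anomaly once a threshold is chosen."""
    d = np.linalg.norm(km.cluster_centers_ - x, axis=1)
    costs = []
    for c in np.argsort(d)[:top_k]:
        pca = subspaces[c]
        rec = pca.inverse_transform(pca.transform(x[None, :]))[0]
        costs.append(np.linalg.norm(x - rec))
    return min(costs)
```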

  8. Subspace-based optimization method for inverse scattering problems with an inhomogeneous background medium

    International Nuclear Information System (INIS)

    Chen, Xudong

    2010-01-01

    This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging

  9. Expedited Holonomic Quantum Computation via Net Zero-Energy-Cost Control in Decoherence-Free Subspace.

    Science.gov (United States)

    Pyshkin, P V; Luo, Da-Wei; Jing, Jun; You, J Q; Wu, Lian-Ao

    2016-11-25

    Holonomic quantum computation (HQC) may not show its full potential in quantum speedup due to the prerequisite of a long coherent runtime imposed by the adiabatic condition. Here we show that the conventional HQC can be dramatically accelerated by using external control fields, of which the effectiveness is exclusively determined by the integral of the control fields in the time domain. This control scheme can be realized with net zero energy cost and it is fault-tolerant against fluctuation and noise, significantly relaxing the experimental constraints. We demonstrate how to realize the scheme via decoherence-free subspaces. In this way we unify quantum robustness merits of this fault-tolerant control scheme, the conventional HQC and decoherence-free subspace, and propose an expedited holonomic quantum computation protocol.

  10. Experimental fault-tolerant quantum cryptography in a decoherence-free subspace

    International Nuclear Information System (INIS)

    Zhang Qiang; Pan Jianwei; Yin Juan; Chen Tengyun; Lu Shan; Zhang Jun; Li Xiaoqiang; Yang Tao; Wang Xiangbin

    2006-01-01

    We experimentally implement a fault-tolerant quantum key distribution protocol with two photons in a decoherence-free subspace [Phys. Rev. A 72, 050304(R) (2005)]. It is demonstrated that our protocol can yield a good key rate even with a large bit-flip error rate caused by collective rotation, while the usual realization of the Bennett-Brassard 1984 protocol cannot produce any secure final key given the same channel. Since the experiment is performed in polarization space and does not need the calibration of a reference frame, important applications in free-space quantum communication are expected. Moreover, our method can also be used to robustly transmit an arbitrary two-level quantum state in a type of decoherence-free subspace

  11. Recursive Subspace Identification of AUV Dynamic Model under General Noise Assumption

    Directory of Open Access Journals (Sweden)

    Zheping Yan

    2014-01-01

    Full Text Available A recursive subspace identification algorithm for autonomous underwater vehicles (AUVs) is proposed in this paper. Due to its advantages in handling nonlinearities and couplings, the AUV model investigated here is, for the first time, constructed as a Hammerstein model with nonlinear feedback in the linear part. To better take environmental and sensor noises into consideration, the identification problem is treated as an errors-in-variables (EIV) one, which means that the identification procedure operates under a general noise assumption. In order to make the algorithm recursive, a propagator method (PM) based subspace approach is extended into the EIV framework to form the recursive identification method called the PM-EIV algorithm. With several identification experiments carried out on the AUV simulation platform, the proposed algorithm demonstrates its effectiveness and feasibility.

  12. Cumulant-Based Coherent Signal Subspace Method for Bearing and Range Estimation

    Directory of Open Access Journals (Sweden)

    Bourennane Salah

    2007-01-01

    Full Text Available A new method for simultaneous range and bearing estimation for buried objects in the presence of an unknown Gaussian noise is proposed. This method uses the MUSIC algorithm with noise subspace estimated by using the slice fourth-order cumulant matrix of the received data. The higher-order statistics aim at the removal of the additive unknown Gaussian noise. The bilinear focusing operator is used to decorrelate the received signals and to estimate the coherent signal subspace. A new source steering vector is proposed including the acoustic scattering model at each sensor. Range and bearing of the objects at each sensor are expressed as a function of those at the first sensor. This leads to the improvement of object localization anywhere, in the near-field or in the far-field zone of the sensor array. Finally, the performances of the proposed method are validated on data recorded during experiments in a water tank.
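
    For orientation, a classical MUSIC pseudo-spectrum for a uniform linear array is sketched below; the method in the record replaces the sample covariance with a slice of the fourth-order cumulant matrix (and adds a focusing operator for coherent signals), which is not reproduced here.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d_over_lambda=0.5):
    """Classical MUSIC pseudo-spectrum for a uniform linear array.
    X: (n_sensors x n_snapshots) complex snapshots. The record above
    estimates the noise subspace from a fourth-order cumulant slice to
    suppress unknown Gaussian noise; the plain sample covariance is used
    here for brevity."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]               # sample covariance
    w, V = np.linalg.eigh(R)                      # eigenvalues in ascending order
    En = V[:, : n_sensors - n_sources]            # noise subspace
    spec = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d_over_lambda *
                   np.arange(n_sensors) * np.sin(theta))  # steering vector
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)                         # peaks indicate source bearings
```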

  13. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyperspectral camera using a broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability of the data. Therefore we propose to use kernel versions of these projections; the kernel maximum autocorrelation factor transform is found to outperform the linear methods as well as kernel principal components in producing interesting projections of the data.
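
    The linear-versus-kernel comparison above can be mimicked with scikit-learn; kernel MAF is not available there, so kernel PCA stands in as the nonlinear projection, and the data array, band count, and kernel width below are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

# X: (n_pixels x n_bands) spectra unfolded from a hyperspectral cube;
# random placeholder data with an assumed 165 NIR bands.
X = np.random.rand(1000, 165)

lin_scores = PCA(n_components=3).fit_transform(X)                       # linear subspace projection
rbf_scores = KernelPCA(n_components=3, kernel="rbf",
                       gamma=1e-2).fit_transform(X)                     # nonlinear (kernel) projection
# Kernel MAF is not implemented in scikit-learn; kernel PCA illustrates
# the kind of nonlinear subspace projection the record compares against.
```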

  14. Quantum theory of dynamical collective subspace for large-amplitude collective motion

    International Nuclear Information System (INIS)

    Sakata, Fumihiko; Marumori, Toshio; Ogura, Masanori.

    1986-03-01

    By placing emphasis on conceptual correspondence to the ''classical'' theory which has been developed within the framework of the time-dependent Hartree-Fock theory, a full quantum theory appropriate for describing large-amplitude collective motion is proposed. A central problem of the quantum theory is how to determine an optimal representation called a dynamical representation; the representation is specific for the collective subspace where the large-amplitude collective motion is replicated as satisfactorily as possible. As an extension of the classical theory where the concept of an approximate integral surface plays an important role, the dynamical representation is properly characterized by introducing a concept of an approximate invariant subspace of the Hamiltonian. (author)

  15. Krylov subspace method for evaluating the self-energy matrices in electron transport calculations

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik Brandenborg; Hansen, Per Christian; Petersen, D. E.

    2008-01-01

    We present a Krylov subspace method for evaluating the self-energy matrices used in the Green's function formulation of electron transport in nanoscale devices. A procedure based on the Arnoldi method is employed to obtain solutions of the quadratic eigenvalue problem associated with the infinite … calculations. Numerical tests within a density functional theory framework are provided to validate the accuracy and robustness of the proposed method, which in most cases is an order of magnitude faster than conventional methods.

  16. Probabilistic energy management of a renewable microgrid with hydrogen storage using self-adaptive charge search algorithm

    International Nuclear Information System (INIS)

    Niknam, Taher; Golestaneh, Faranak; Shafiei, Mehdi

    2013-01-01

    Micro Grids (MGs) are clusters of the DER (Distributed Energy Resource) units and loads which can operate in both grid-connected and island modes. This paper addresses a probabilistic cost optimization scheme under uncertain environment for the MGs with several multiple Distributed Generation (DG) units. The purpose of the proposed approach is to make decisions regarding to optimizing the production of the DG units and power exchange with the upstream network for a Combined Heat and Power (CHP) system. A PEMFCPP (Proton Exchange Membrane Fuel cell power plant) is considered as a prime mover of the CHP system. An electrochemical model for representation and performance of the PEMFC is applied. In order to best use of the FCPP, hydrogen production and storage management are carried out. An economic model is organized to calculate the operation cost of the MG based on the electrochemical model of the PEMFC and hydrogen storage. The proposed optimization scheme comprises a self-adaptive Charged System Search (CSS) linked to the 2m + 1 point estimate method. The 2m + 1 point estimate method is employed to cover the uncertainty in the following data: the hourly market tariffs, electrical and thermal load demands, available output power of the PhotoVoltaic (PV) and Wind Turbines (WT) units, fuel prices, hydrogen selling price, operation temperature of the FC and pressure of the reactant gases of FC. The Self-adaptive CSS (SCSS) is organized based on the CSS algorithm and is upgraded by some modification approaches, mainly a self-adaptive reformation approach. In the proposed reformation method, two updating approaches are considered. Each particle based on the ability of those approaches to find optimal solutions in the past iterations, chooses one of them to improve its solution. The effectiveness of the proposed approach is verified on a multiple-DG MG in the grid-connected mode. -- Highlights: ► Consider the effect of Hydrogen produced by PEMFC on MGs. ► Combines

  17. Antifouling composites with self-adaptive controlled release based on an active compound intercalated into layered double hydroxides

    Science.gov (United States)

    Yang, Miaosen; Gu, Lianghua; Yang, Bin; Wang, Li; Sun, Zhiyong; Zheng, Jiyong; Zhang, Jinwei; Hou, Jian; Lin, Cunguo

    2017-12-01

    This paper reports a novel method to prepare the antifouling composites with properties of self-adaptive controlled release (defined as control the release rate autonomously and adaptively according to the change of environmental conditions) by intercalation of sodium paeonolsilate (PAS) into MgAl and ZnAl layered double hydroxide (LDH) with the molar ratio (M2+/M3+) of 2:1 and 3:1, respectively. The powder X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FT-IR) confirm the intercalation of PAS into the galleries of LDH. The controlled release behavior triggered by temperature for the PAS-LDH composites has been investigated, and the results show that the release rate of all PAS-LDH composites increases as the increase of temperature. However, the MgAl-PAS-LDH composites (Mg2Al-PAS-LDH and Mg3Al-PAS-LDH) exhibit the increased release rate of 0.21 ppm/°C from 15 to 30 °C in 3.5% NaCl solution, more than three times of the ZnAl-PAS-LDH composites (0.06 ppm/°C), owing to the confined microenvironment influenced by metal types in LDH layers. In addition, a possible diffusion-controlled process with surface diffusion, bulk diffusion and heterogeneous flat surface diffusion has been revealed via fitting four kinetic equations. Moreover, to verify the practical application of the PAS-LDH composites, a model coating denoted as Mg2Al-PAS-LDH coating was fabricated. The release result displays that the release rate increases or decreases as temperature altered at 15 and 25 °C alternately, indicating its self-adaptive controlled release behavior with temperature. Moreover, the superior resistance to the settlement of Ulva spores at 15 and 25 °C was observed for the Mg2Al-PAS-LDH coating, as a result of the controllable release of antifoulant. Therefore, this work provides a facile and effective method for the fabrication of antifouling composites with self-adaptive controlled release behavior in response to temperature, which can be used to prolong

  18. Application of Earthquake Subspace Detectors at Kilauea and Mauna Loa Volcanoes, Hawai`i

    Science.gov (United States)

    Okubo, P.; Benz, H.; Yeck, W.

    2016-12-01

    Recent studies have demonstrated the capabilities of earthquake subspace detectors for detailed cataloging and tracking of seismicity in a number of regions and settings. We are exploring the application of subspace detectors at the United States Geological Survey's Hawaiian Volcano Observatory (HVO) to analyze seismicity at Kilauea and Mauna Loa volcanoes. Elevated levels of microseismicity and occasional swarms of earthquakes associated with active volcanism here present cataloging challenges due to the sheer numbers of earthquakes and an intrinsically low signal-to-noise environment featuring oceanic microseism and volcanic tremor in the ambient seismic background. With high-quality continuous recording of seismic data at HVO, we apply subspace detectors (Harris and Dodge, 2011, Bull. Seismol. Soc. Am., doi: 10.1785/0120100103) during intervals of noteworthy seismicity. Waveform templates are drawn from Magnitude 2 and larger earthquakes within clusters of earthquakes cataloged in the HVO seismic database. At Kilauea, we focus on seismic swarms in the summit caldera region where, despite continuing eruptions from vents in the summit region and in the east rift zone, geodetic measurements reflect a relatively inflated volcanic state. We also focus on seismicity beneath and adjacent to Mauna Loa's summit caldera that appears to be associated with geodetic expressions of gradual volcanic inflation, and where precursory seismicity clustered prior to both of Mauna Loa's most recent eruptions in 1975 and 1984. We recover several times more earthquakes with the subspace detectors - down to roughly 2 magnitude units below the templates, based on relative amplitudes - compared to the numbers of cataloged earthquakes. The increased numbers of detected earthquakes in these clusters, and the ability to associate and locate them, allow us to infer details of the spatial and temporal distributions and possible variations in stresses within these key regions of the volcanoes.

  19. Uncertainty calculation for modal parameters used with stochastic subspace identification: an application to a bridge structure

    Science.gov (United States)

    Hsu, Wei-Ting; Loh, Chin-Hsiung; Chao, Shu-Hsien

    2015-03-01

    The stochastic subspace identification method (SSI) has been proven to be an efficient algorithm for the identification of linear time-invariant systems using multivariate measurements. Generally, the modal parameters estimated through SSI may be afflicted with statistical uncertainty, e.g. undefined measurement noise, non-stationary excitation, a finite number of data samples, etc. Therefore, the identified results are subject to variance errors. Accordingly, the concept of the stabilization diagram can help users to identify the correct model, i.e. by removing the spurious modes. Modal parameters are estimated at successive model orders, where the physical modes of the system are extracted and separated from the spurious modes. Besides, an uncertainty computation scheme was derived for the calculation of uncertainty bounds for modal parameters at some given model order. The uncertainty bounds of damping ratios are particularly interesting, as damping ratios are difficult to estimate. In this paper, an automated stochastic subspace identification algorithm is addressed. First, the identification of modal parameters through covariance-driven stochastic subspace identification from output-only measurements is used for discussion. A systematic investigation of the criteria for the stabilization diagram is presented. Secondly, an automated algorithm for post-processing the stabilization diagram is demonstrated. Finally, the computation of uncertainty bounds for each mode at all model orders in the stabilization diagram is utilized to determine system natural frequencies and damping ratios. A demonstration of this study on the system identification of a three-span steel bridge under operating conditions is presented. It is shown that the proposed new operation procedure for the automated covariance-driven stochastic subspace identification can enhance the robustness and reliability in structural health monitoring.

  20. Projected Gauss-Seidel subspace minimization method for interactive rigid body dynamics

    DEFF Research Database (Denmark)

    Silcowitz-Hansen, Morten; Abel, Sarah Maria Niebe; Erleben, Kenny

    2010-01-01

    artifacts such as viscous or damped contact response. In this paper, we present a new approach to contact force determination. We formulate the contact force problem as a nonlinear complementarity problem, and discretize the problem to derive the Projected Gauss–Seidel method. We combine the Projected Gauss......–Seidel method with a subspace minimization method. Our new method shows improved qualities and superior convergence properties for specific configurations....
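
    A bare-bones Projected Gauss-Seidel iteration for a linear complementarity problem of the kind such contact formulations reduce to; the subspace-minimization step the paper combines it with is only indicated by a comment, and the function name is ours.

```python
import numpy as np

def projected_gauss_seidel(A, b, n_iter=200):
    """Projected Gauss-Seidel for the linear complementarity problem
    0 <= x  perp  A x + b >= 0. A is assumed to have a nonzero diagonal."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        for i in range(len(b)):
            r = b[i] + A[i] @ x - A[i, i] * x[i]   # residual of row i excluding x_i
            x[i] = max(0.0, -r / A[i, i])          # solve row i, project onto x_i >= 0
        # (a subspace-minimization step on the current active set would go here)
    return x
```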

  1. Residual and Backward Error Bounds in Minimum Residual Krylov Subspace Methods

    Czech Academy of Sciences Publication Activity Database

    Paige, C. C.; Strakoš, Zdeněk

    2002-01-01

    Roč. 23, č. 6 (2002), s. 1899-1924 ISSN 1064-8275 R&D Projects: GA AV ČR IAA1030103 Institutional research plan: AV0Z1030915 Keywords : linear equations * eigenproblem * large sparse matrices * iterative solutions * Krylov subspace methods * Arnoldi method * GMRES * modified Gram-Schmidt * least squares * total least squares * singular values Subject RIV: BA - General Mathematics Impact factor: 1.291, year: 2002

  2. On Optimal Short Recurrences for Generating Orthogonal Krylov Subspace Bases. Dedicated to Gene Golub

    Czech Academy of Sciences Publication Activity Database

    Liesen, J.; Strakoš, Zdeněk

    2008-01-01

    Roč. 50, č. 3 (2008), s. 485-503 ISSN 0036-1445 R&D Projects: GA AV ČR 1ET400300415; GA AV ČR IAA100300802 Institutional research plan: CEZ:AV0Z10300504 Keywords : Krylov subspace methods * orthogonal bases * short recurrences * conjugate gradient-like methods Subject RIV: IN - Informatics, Computer Science Impact factor: 2.739, year: 2008

  3. Subspace based adaptive denoising of surface EMG from neurological injury patients

    Science.gov (United States)

    Liu, Jie; Ying, Dongwen; Zev Rymer, William; Zhou, Ping

    2014-10-01

    Objective: After neurological injuries such as spinal cord injury, voluntary surface electromyogram (EMG) signals recorded from affected muscles are often corrupted by interferences, such as spurious involuntary spikes and background noises produced by physiological and extrinsic/accidental origins, imposing difficulties for signal processing. Such interferences are difficult to mitigate using conventional methods. The aim of this study was to develop a subspace-based denoising method to suppress involuntary background spikes contaminating voluntary surface EMG recordings. Approach: The Karhunen-Loeve transform was utilized to decompose a noisy signal into a signal subspace and a noise subspace. An optimal estimate of the EMG signal is derived from the signal subspace and the noise power. Specifically, this estimator is capable of making a tradeoff between interference reduction and signal distortion. Since the estimator partially relies on the estimate of noise power, an adaptive method was presented to sequentially track the variation of interference power. The proposed method was evaluated using both semi-synthetic and real surface EMG signals. Main results: The experiments confirmed that the proposed method can effectively suppress interferences while keeping the distortion of the voluntary EMG signal at a low level. The proposed method can greatly facilitate further signal processing, such as onset detection of voluntary muscle activity. Significance: The proposed method can provide a powerful tool for suppressing background spikes and noise contaminating voluntary surface EMG signals of paretic muscles after neurological injuries, which is of great importance for their multi-purpose applications.
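
    A schematic of the signal-subspace idea described above: eigen-decompose the covariance of framed EMG segments, keep the directions whose energy exceeds the (separately estimated) noise power, and apply a Wiener-type gain to trade interference reduction against distortion. This is a generic KLT/subspace estimator, not the authors' exact adaptive tracker, and the function name is ours.

```python
import numpy as np

def subspace_denoise(frames, noise_power):
    """frames: (n_frames x frame_len) matrix of overlapping EMG segments.
    Eigenvectors of the sample covariance whose eigenvalues exceed the
    estimated noise power span the retained signal subspace."""
    mean = frames.mean(axis=0)
    C = np.cov(frames - mean, rowvar=False)        # frame_len x frame_len covariance
    w, V = np.linalg.eigh(C)                       # eigenvalues in ascending order
    keep = w > noise_power                         # signal-subspace selection
    gain = np.zeros_like(w)
    gain[keep] = (w[keep] - noise_power) / w[keep] # Wiener-like shrinkage per direction
    H = V @ np.diag(gain) @ V.T                    # estimator expressed in the KLT basis
    return (frames - mean) @ H + mean
```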

  4. Banach C*-algebras not containing a subspace isomorphic to C0

    International Nuclear Information System (INIS)

    Basit, B.

    1989-09-01

    If X is a locally Hausdorff space and C0(X) the Banach algebra of continuous functions on X vanishing at infinity, we showed that a subalgebra A of C0(X) is finite dimensional if it does not contain a subspace isomorphic to the Banach space C0 of complex sequences converging to zero. In this paper we extend this result to noncommutative Banach C*-algebras and Banach* algebras. 10 refs

  5. N-screen aware multicriteria hybrid recommender system using weight based subspace clustering.

    Science.gov (United States)

    Ullah, Farman; Sarwar, Ghulam; Lee, Sungchang

    2014-01-01

    This paper presents a recommender system for N-screen services in which users have multiple devices with different capabilities. In N-screen services, a user can use various devices at different locations and times and can change a device while the service is running. N-screen aware recommendation seeks to improve the user experience with recommended content by considering the user's N-screen device attributes, such as screen resolution, media codec, remaining battery time, and access network, as well as the user's temporal usage pattern information, which are not considered in existing recommender systems. For N-screen aware recommendation support, this work introduces a user device profile collaboration agent, manager, and N-screen control server to acquire and manage the user's N-screen device profiles. Furthermore, a multicriteria hybrid framework is suggested that incorporates the N-screen device information with user preferences and demographics. In addition, we propose an individual feature and subspace weight based clustering (IFSWC) to assign different weights to each subspace and each feature within a subspace in the hybrid framework. The proposed system improves accuracy, precision, and scalability, and mitigates the sparsity and cold start issues. The simulation results demonstrate the effectiveness of the proposed system.

  6. Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis

    Science.gov (United States)

    Freund, Roland W.

    1991-01-01

    We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric respectively shifted Hermitian linear systems. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.

  7. A Rank-Constrained Matrix Representation for Hypergraph-Based Subspace Clustering

    Directory of Open Access Journals (Sweden)

    Yubao Sun

    2015-01-01

    Full Text Available This paper presents a novel, rank-constrained matrix representation combined with hypergraph spectral analysis to enable the recovery of the original subspace structures of corrupted data. Real-world data are frequently corrupted with both sparse error and noise. Our matrix decomposition model separates the low-rank, sparse error, and noise components from the data in order to enhance robustness to the corruption. In order to obtain the desired rank representation of the data within a dictionary, our model directly utilizes rank constraints by restricting the upper bound of the rank range. An alternative projection algorithm is proposed to estimate the low-rank representation and separate the sparse error from the data matrix. To further capture the complex relationship between data distributed in multiple subspaces, we use hypergraph to represent the data by encapsulating multiple related samples into one hyperedge. The final clustering result is obtained by spectral decomposition of the hypergraph Laplacian matrix. Validation experiments on the Extended Yale Face Database B, AR, and Hopkins 155 datasets show that the proposed method is a promising tool for subspace clustering.

  8. Numerical solution of stiff burnup equation with short half lived nuclides by the Krylov subspace method

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Tatsumi, Masahiro; Sugimura, Naoki

    2007-01-01

    The Krylov subspace method is applied to solve nuclide burnup equations used for lattice physics calculations. The Krylov method is an efficient approach for solving ordinary differential equations with stiff nature such as the nuclide burnup with short lived nuclides. Some mathematical fundamentals of the Krylov subspace method and its application to burnup equations are discussed. Verification calculations are carried out in a PWR pin-cell geometry with UO2 fuel. A detailed burnup chain that includes 193 fission products and 28 heavy nuclides is used in the verification calculations. The shortest half life found in the present burnup chain is approximately 30 s (106Rh). Therefore, conventional methods (e.g., the Taylor series expansion with scaling and squaring) tend to require longer computation time due to numerical stiffness. Comparison with other numerical methods (e.g., the 4th-order Runge-Kutta-Gill) reveals that the Krylov subspace method can provide accurate solutions for the detailed burnup chain used in the present study with short computation time. (author)
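
    The computational core is the action of a matrix exponential, n(t) = exp(A t) n(0), on a stiff burnup matrix. The toy sketch below evaluates that action on a three-nuclide decay chain with SciPy's expm_multiply; SciPy's routine is Taylor-based rather than the Krylov scheme of the record, but it computes the same quantity without forming exp(A t) explicitly. Decay constants and chain structure are illustrative only.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import expm_multiply

# Toy chain A -> B -> C with decay constants lam (1/s); a real burnup
# matrix also carries transmutation terms and roughly 200 nuclides.
lam_a, lam_b = 1e-2, np.log(2) / 30.0          # a 30 s half-life mimics 106Rh
A = csc_matrix(np.array([[-lam_a,  0.0,   0.0],
                         [ lam_a, -lam_b, 0.0],
                         [ 0.0,    lam_b, 0.0]]))
n0 = np.array([1.0, 0.0, 0.0])                  # initial number densities
dt = 3600.0                                     # one-hour burnup step

# Action of the matrix exponential on the density vector, exp(A*dt) @ n0,
# evaluated without building the dense matrix exponential of the stiff system.
n_dt = expm_multiply(A * dt, n0)
```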

  9. Independent Subspace Analysis of the Sea Surface Temperature Variability: Non-Gaussian Sources and Sensitivity to Sampling and Dimensionality

    Directory of Open Access Journals (Sweden)

    Carlos A. L. Pires

    2017-01-01

    Full Text Available We propose an expansion of multivariate time-series data into maximally independent source subspaces. The search is made among rotations of prewhitened data which maximize non-Gaussianity of candidate sources. We use a tensorial invariant approximation of the multivariate negentropy in terms of a linear combination of squared coskewness and cokurtosis. By solving a high-order singular value decomposition problem, we extract the axes associated with most non-Gaussianity. Moreover, an estimate of the Gaussian subspace is provided by the trailing singular vectors. The independent subspaces are obtained through the search of “quasi-independent” components within the estimated non-Gaussian subspace, followed by the identification of groups with significant joint negentropies. Sources result essentially from the coherency of extremes of the data components. The method is then applied to the global sea surface temperature anomalies, equatorward of 65°, after being tested with non-Gaussian surrogates consistent with the data anomalies. The main emerging independent components and subspaces, supposedly generated by independent forcing, include different variability modes, namely, The East-Pacific, the Central Pacific, and the Atlantic Niños, the Atlantic Multidecadal Oscillation, along with the subtropical dipoles in the Indian, South Pacific, and South-Atlantic oceans. Benefits and usefulness of independent subspaces are then discussed.

  10. The fabrication techniques of Z-pinch targets. Techniques of fabricating self-adapted Z-pinch wire-arrays

    International Nuclear Information System (INIS)

    Qiu Longhui; Wei Yun; Liu Debin; Sun Zuoke; Yuan Yuping

    2002-01-01

    In order to fabricate wire arrays for use in Z-pinch physical experiments, the following fabrication techniques are investigated: a gold layer with a thickness of about 1-1.5 μm is electroplated onto the surface of ultra-fine tungsten wires; fibers of deuterated-polystyrene (DPS) with diameters from 30 to 100 microns are made from molten DPS; and two kinds of planar wire-arrays and four types of annular wire-arrays are designed, which are able to adapt to the variation of the distance between the cathode and anode inside the target chamber. Furthermore, wire-arrays are fabricated from tungsten wires with diameters from 5-24 μm. The on-site test shows that the wire-arrays can self-adapt to the distance changes perfectly.

  11. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    Science.gov (United States)

    Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya

    2011-02-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu [Phys. Rev. D 78, 123524 (2008)] and Schmidt [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ∼ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  12. Self-Adapting Routing Overlay Network for Frequently Changing Application Traffic in Content-Based Publish/Subscribe System

    Directory of Open Access Journals (Sweden)

    Meng Chi

    2014-01-01

    Full Text Available In the large-scale distributed simulation area, the topology of the overlay network cannot always rapidly adapt to frequently changing application traffic to reduce the overall traffic cost. In this paper, we propose a self-adapting routing strategy for frequently changing application traffic in a content-based publish/subscribe system. The strategy first learns from the observed traffic information and then uses this information to predict the application traffic in the future. Finally, the strategy reconfigures the topology of the overlay network based on this prediction to reduce the overall traffic cost. A predicting path is also introduced to reduce the number of reconfigurations. Compared to other strategies, the experimental results show that the strategy proposed in this paper can reduce the overall traffic cost of the publish/subscribe system with fewer reconfigurations.

  13. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    International Nuclear Information System (INIS)

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-01-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al.[Phys. Rev. D 78, 123524 (2008)] and Schmidt et al.[Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k∼20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  14. Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.

    Science.gov (United States)

    Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B

    2017-10-15

    We experimentally demonstrate self-adaptive coded 5×100 Gb/s WDM polarization multiplexed 16 quadrature amplitude modulation transmission over a 100 km fiber link, which is enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back using control plane logic and messaging to the transmitter side for code adaptation, where the binary data are adaptively encoded with three types of large-girth low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.
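
    The control-plane feedback in the record above amounts to mapping a monitored OSNR value to one of three LDPC code rates. The sketch below illustrates only that decision step; the OSNR thresholds are assumed for illustration and are not taken from the paper.

        # Minimal sketch of threshold-based code-rate adaptation.
        # The threshold values are hypothetical; the paper does not state them here.
        def select_ldpc_rate(osnr_db):
            """Map a monitored OSNR reading (dB) to one of three LDPC code rates."""
            if osnr_db >= 22.0:      # assumed threshold for the weakest code (rate 0.8)
                return 0.8
            elif osnr_db >= 19.0:    # assumed threshold for rate 0.75
                return 0.75
            return 0.7               # fall back to the strongest code

        # Example: the monitor reports 20.5 dB, so the transmitter switches to rate 0.75.
        print(select_ldpc_rate(20.5))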

  15. A Readout Integrated Circuit (ROIC) employing self-adaptive background current compensation technique for Infrared Focal Plane Array (IRFPA)

    Science.gov (United States)

    Zhou, Tong; Zhao, Jian; He, Yong; Jiang, Bo; Su, Yan

    2018-05-01

    A novel self-adaptive background current compensation circuit applied to infrared focal plane arrays is proposed in this paper, which can compensate the background current generated under different conditions. A double-threshold detection strategy is designed to estimate and eliminate the background currents, which significantly reduces the hardware overhead and improves the uniformity among different pixels. In addition, the circuit is compatible with various categories of infrared thermo-sensitive materials. Test results from a 4 × 4 experimental chip show that the proposed circuit achieves high precision, wide applicability and intelligent self-adaptive operation. Tape-out of the 320 × 240 readout circuit, as well as the bonding, encapsulation and imaging verification of the uncooled infrared focal plane array, have also been completed.

  16. A self-adaptive chaotic particle swarm algorithm for short term hydroelectric system scheduling in deregulated environment

    International Nuclear Information System (INIS)

    Jiang Chuanwen; Bompard, Etorre

    2005-01-01

    This paper proposes a short term hydroelectric plant dispatch model based on the rule of maximizing the benefit. The optimal dispatch model is a large scale nonlinear programming problem with multiple constraints and variables, and to solve the short term generation scheduling of a hydro-system in a deregulated environment this paper proposes a novel self-adaptive chaotic particle swarm optimization algorithm. Since chaotic maps combine determinism with ergodicity and stochastic-like behavior, the proposed approach introduces a chaotic map and an adaptive scaling term into the particle swarm optimization algorithm, which increases its convergence rate and resulting precision. The new method has been examined and tested on a practical hydro-system. The results are promising and show the effectiveness and robustness of the proposed approach in comparison with the traditional particle swarm optimization algorithm.
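
    The record does not give the exact chaotic map or scaling rule; the logistic map is the choice most commonly used for chaotic initialization and perturbation in such hybrids, and the sketch below assumes it. It only shows how a chaotic sequence can seed a particle swarm inside box bounds, not the authors' full dispatch model.

        import numpy as np

        def logistic_map_sequence(n, x0=0.345):
            """Chaotic sequence in (0, 1) from the logistic map x_{k+1} = 4 x_k (1 - x_k)."""
            x = np.empty(n)
            x[0] = x0
            for k in range(1, n):
                x[k] = 4.0 * x[k - 1] * (1.0 - x[k - 1])
            return x

        def chaotic_init(n_particles, lower, upper):
            """Spread initial particles over the box [lower, upper] with a chaotic sequence."""
            lower, upper = np.asarray(lower, float), np.asarray(upper, float)
            seq = logistic_map_sequence(n_particles * lower.size).reshape(n_particles, -1)
            return lower + seq * (upper - lower)

        swarm = chaotic_init(20, lower=[0.0, 0.0], upper=[1.0, 5.0])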

  17. Enhancing Artificial Bee Colony Algorithm with Self-Adaptive Searching Strategy and Artificial Immune Network Operators for Global Optimization

    Directory of Open Access Journals (Sweden)

    Tinggui Chen

    2014-01-01

    Full Text Available Artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in the local optimum when handling functions that have a narrow curving valley, a high eccentric ellipse, or complex multimodal structure. As a result, we propose an enhanced ABC algorithm called EABC by introducing a self-adaptive searching strategy and artificial immune network operators to improve exploitation and exploration. The simulation results on a suite of unimodal and multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments.

  18. An improved self-adaptive ant colony algorithm based on genetic strategy for the traveling salesman problem

    Science.gov (United States)

    Wang, Pan; Zhang, Yi; Yan, Dong

    2018-05-01

    Ant Colony Algorithm (ACA) is a powerful and effective algorithm for solving combinatorial optimization problems, and it has been successfully applied to the traveling salesman problem (TSP). However, it tends to converge prematurely to non-global optima and its computation time can be long. To overcome these shortcomings, a new method is presented: an improved self-adaptive Ant Colony Algorithm based on a genetic strategy. The proposed method adopts an adaptive strategy to adjust the parameters dynamically, and new crossover and inversion operations from the genetic strategy are used. We also carry out experiments using well-known instances from TSPLIB. The experimental results show that the proposed method outperforms the basic Ant Colony Algorithm and several improved ACA variants in both solution quality and convergence time. The numerical results also show that the proposed optimization method can achieve results close to the currently best known solutions.

  19. Breaking of separability condition for dynamical collective subspace; Onset of quantum chaos in large-amplitude collective motion

    Energy Technology Data Exchange (ETDEWEB)

    Sakata, Fumihiko [Tokyo Univ., Tanashi (Japan). Inst. for Nuclear Study; Yamamoto, Yoshifumi; Marumori, Toshio; Iida, Shinji; Tsukuma, Hidehiko

    1989-11-01

    It is the purpose of the present paper to study the 'global structure' of the state space of an N-body interacting fermion system, which exhibits regular, transient and stochastic phases depending on the strength of the interaction. An optimum representation called a dynamical representation plays an essential role in this investigation. The concept of the dynamical representation was introduced in the quantum theory of dynamical subspace in our previous paper, in order to determine self-consistently an optimum collective subspace as well as an optimum collective Hamiltonian. In that theory, furthermore, dynamical conditions called the separability and stability conditions were provided in order to identify the optimum collective subspace as an approximate invariant subspace of the Hamiltonian. The physical meaning of these conditions is clarified from the viewpoint of relating their breaking to a bifurcation of the collectivity and an onset of quantum chaos from the regular collective motion, by illustrating the general idea with numerical results obtained for a simple soluble model. It turns out that the onset of the stochastic phase is associated with dissolution of the quantum numbers that specify the collective subspace, and that this dissolution is induced by the breaking of the separability condition in the dynamical representation. (author).

  20. Quantum Gate Operations in Decoherence-Free Subspace with Superconducting Charge Qubits inside a Cavity

    International Nuclear Information System (INIS)

    Yi-Min, Wang; Yan-Li, Zhou; Lin-Mei, Liang; Cheng-Zu, Li

    2009-01-01

    We propose a feasible scheme to achieve universal quantum gate operations in a decoherence-free subspace with superconducting charge qubits placed in a microwave cavity. Single-logic-qubit gates can be realized with cavity-assisted interaction, which possesses the advantages of unconventional geometric gate operation. The two-logic-qubit controlled-phase gate between subsystems can be constructed with the help of a variable electrostatic transformer. The collective decoherence can be successfully avoided in our well-designed system. Moreover, a GHZ state for logical qubits can also be easily produced in this system.

  1. Consistency analysis of subspace identification methods based on a linear regression approach

    DEFF Research Database (Denmark)

    Knudsen, Torben

    2001-01-01

    In the literature, results can be found which claim consistency for the subspace method under certain quite weak assumptions. Unfortunately, a new result gives a counter example showing inconsistency under these assumptions and then gives new, stricter sufficient assumptions which, however, do not include important model structures such as Box-Jenkins. Based on a simple least squares approach, this paper shows the possible inconsistency under the weak assumptions and develops only slightly stricter assumptions sufficient for consistency and which include any model structure...

  2. Subspace Barzilai-Borwein Gradient Method for Large-Scale Bound Constrained Optimization

    International Nuclear Information System (INIS)

    Xiao Yunhai; Hu Qingjie

    2008-01-01

    An active set subspace Barzilai-Borwein gradient algorithm for large-scale bound constrained optimization is proposed. The active sets are estimated by an identification technique. The search direction consists of two parts: some of the components are simply defined; the other components are determined by the Barzilai-Borwein gradient method. In this work, a nonmonotone line search strategy that guarantees global convergence is used. Preliminary numerical results show that the proposed method is promising, and competitive with the well-known SPG method on a subset of bound constrained problems from the CUTEr collection.
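
    For orientation, the sketch below shows only the plain projected Barzilai-Borwein iteration on box constraints that the record above builds on; the paper's active-set identification and nonmonotone line search are not reproduced, and the step-size safeguard is an illustrative assumption.

        import numpy as np

        def projected_bb(grad, x0, lower, upper, n_iter=200, alpha0=1.0):
            """Projected gradient steps with the Barzilai-Borwein step size
            alpha_k = (s^T s) / (s^T y), where s = x_k - x_{k-1} and y = g_k - g_{k-1}."""
            x = np.clip(np.asarray(x0, float), lower, upper)
            g = grad(x)
            alpha = alpha0
            for _ in range(n_iter):
                x_new = np.clip(x - alpha * g, lower, upper)     # project onto the box
                g_new = grad(x_new)
                s, y = x_new - x, g_new - g
                sy = s @ y
                alpha = (s @ s) / sy if sy > 1e-12 else alpha0   # BB1 step with a simple safeguard
                x, g = x_new, g_new
            return x

        # Example: minimize ||x - c||^2 subject to 0 <= x <= 1.
        c = np.array([2.0, -1.0, 0.3])
        x_star = projected_bb(lambda x: 2.0 * (x - c), np.zeros(3), 0.0, 1.0)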

  3. Experimental Study of Generalized Subspace Filters for the Cocktail Party Situation

    DEFF Research Database (Denmark)

    Christensen, Knud Bank; Christensen, Mads Græsbøll; Boldt, Jesper B.

    2016-01-01

    This paper investigates the potential performance of generalized subspace filters for speech enhancement in cocktail party situations with very poor signal/noise ratio, e.g. down to -15 dB. The performance metrics output signal/noise ratio, signal/distortion ratio, speech quality rating and speech intelligibility rating are mapped as functions of two algorithm parameters, revealing clear trade-off options between noise, distortion and subjective performance and a recommended choice of trade-off. Given sufficiently good noise statistics, SNR improvements around 20 dB as well as PESQ quality and STOI...

  4. A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes

    Science.gov (United States)

    Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester

    2010-01-01

    A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides Ax = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.

  5. A Comfort-Aware Energy Efficient HVAC System Based on the Subspace Identification Method

    Directory of Open Access Journals (Sweden)

    O. Tsakiridis

    2016-01-01

    Full Text Available A proactive heating method is presented aiming at reducing the energy consumption in an HVAC system while maintaining the thermal comfort of the occupants. The proposed technique fuses time predictions for the zones’ temperatures, based on a deterministic subspace identification method, and zones’ occupancy predictions, based on a mobility model, in a decision scheme that is capable of regulating the balance between the total energy consumed and the total discomfort cost. Simulation results for various occupation-mobility models demonstrate the efficiency of the proposed technique.

  6. Practical Low Data-Complexity Subspace-Trail Cryptanalysis of Round-Reduced PRINCE

    DEFF Research Database (Denmark)

    Grassi, Lorenzo; Rechberger, Christian

    2016-01-01

    Subspace trail cryptanalysis is a very recent new cryptanalysis technique, and includes differential, truncated differential, impossible differential, and integral attacks as special cases. In this paper, we consider PRINCE, a widely analyzed block cipher proposed in 2012. After the identification......-plaintext category. The attacks have been verified using a C implementation. Of independent interest, we consider a variant of PRINCE in which ShiftRows and MixLayer operations are exchanged in position. In particular, our result shows that the position of ShiftRows and MixLayer operations influences the security...

  7. Towards automatic music transcription: note extraction based on independent subspace analysis

    Science.gov (United States)

    Wellhausen, Jens; Hoynck, Michael

    2005-01-01

    Due to the increasing amount of music available electronically, the need for automatic search, retrieval and classification systems for music becomes more and more important. In this paper an algorithm for automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications, music analysis and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music which is examined.

  8. Investigation of the stochastic subspace identification method for on-line wind turbine tower monitoring

    Science.gov (United States)

    Dai, Kaoshan; Wang, Ying; Lu, Wensheng; Ren, Xiaosong; Huang, Zhenhua

    2017-04-01

    Structural health monitoring (SHM) of wind turbines has been applied in the wind energy industry to obtain their real-time vibration parameters and to ensure their optimum performance. For SHM, the accuracy of its results and the efficiency of its measurement methodology and data processing algorithm are the two major concerns. Selection of proper measurement parameters could improve such accuracy and efficiency. The Stochastic Subspace Identification (SSI) is a widely used data processing algorithm for SHM. This research discusses the accuracy and efficiency of SHM using the SSI method to identify vibration parameters of on-line wind turbine towers. Proper measurement parameters, such as optimum measurement duration, are recommended.

  9. Speech Denoising in White Noise Based on Signal Subspace Low-rank Plus Sparse Decomposition

    Directory of Open Access Journals (Sweden)

    yuan Shuai

    2017-01-01

    Full Text Available In this paper, a new subspace speech enhancement method using low-rank and sparse decomposition is presented. In the proposed method, we first structure the corrupted data as a Toeplitz matrix and estimate its effective rank for the underlying human speech signal. The low-rank and sparse decomposition is then performed, guided by this speech rank value, to remove the noise. Extensive experiments have been carried out in white Gaussian noise conditions, and the experimental results show that the proposed method performs better than conventional speech enhancement methods, in terms of yielding less residual noise and lower speech distortion.
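
    The record does not spell out the decomposition algorithm; the sketch below uses a generic alternating scheme (singular value thresholding for the low-rank part, soft thresholding for the sparse part) on a Toeplitz-embedded frame, with thresholds chosen arbitrarily, as one plausible way to realize a low-rank plus sparse split.

        import numpy as np
        from scipy.linalg import toeplitz

        def svt(M, tau):
            """Singular value thresholding: shrink all singular values by tau."""
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        def low_rank_plus_sparse(M, tau=1.0, lam=0.1, n_iter=50):
            """Alternate between a low-rank estimate L and a sparse residual S so that M ~ L + S."""
            L = np.zeros_like(M)
            S = np.zeros_like(M)
            for _ in range(n_iter):
                L = svt(M - S, tau)
                S = np.sign(M - L) * np.maximum(np.abs(M - L) - lam, 0.0)   # soft threshold
            return L, S

        # Toy usage: Toeplitz-embed a noisy frame before the decomposition.
        x = np.sin(0.1 * np.arange(256)) + 0.1 * np.random.randn(256)
        m = 64
        M = toeplitz(x[m - 1:], x[m - 1::-1])   # (len(x)-m+1) x m Toeplitz data matrix
        L, S = low_rank_plus_sparse(M)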

  10. Optimal Sizing for Wind/PV/Battery System Using Fuzzy c-Means Clustering with Self-Adapted Cluster Number

    Directory of Open Access Journals (Sweden)

    Xin Liu

    2017-01-01

    Full Text Available Integrating wind generation, photovoltaic power, and battery storage to form hybrid power systems has been recognized as promising for renewable energy development. However, considering the system complexity and the uncertainty of renewable energies such as wind and solar, it is difficult to obtain practical solutions for these systems. In this paper, optimal sizing for a wind/PV/battery system is realized by trading off technical and economic factors. Firstly, the fuzzy c-means clustering algorithm is modified with self-adapted parameters to extract useful information from historical data. Furthermore, a Markov model is combined with it to determine the chronological system states of natural resources and load. Finally, a power balance strategy is introduced to guide the optimization process, carried out with a genetic algorithm, to establish the optimal configuration with minimized cost while guaranteeing reliability and environmental factors. A case study of an island hybrid power system is analyzed, and the simulation results are compared with the general FCM method and the chronological method to validate the effectiveness of the proposed method.
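
    As a point of reference for the clustering stage, the sketch below implements only the standard fuzzy c-means updates (memberships and centers); the self-adapted parameters and self-adapted cluster number described in the record are not reproduced, and the fuzzifier m = 2 is an assumed default.

        import numpy as np

        def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
            """Standard fuzzy c-means: alternate membership and center updates."""
            rng = np.random.default_rng(seed)
            U = rng.random((X.shape[0], n_clusters))
            U /= U.sum(axis=1, keepdims=True)                    # memberships sum to 1 per sample
            for _ in range(n_iter):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = 1.0 / d ** (2.0 / (m - 1.0))                 # u_ik proportional to d_ik^(-2/(m-1))
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

        X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4.0])
        centers, U = fuzzy_c_means(X, n_clusters=2)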

  11. PDE-Foam - a probability-density estimation method using self-adapting phase-space binning

    CERN Document Server

    Dannheim, Dominik; Voigt, Alexander; Grahn, Karl-Johan; Speckmayer, Peter

    2009-01-01

    Probability-Density Estimation (PDE) is a multivariate discrimination technique based on sampling signal and background densities defined by event samples from data or Monte-Carlo (MC) simulations in a multi-dimensional phase space. To efficiently use large event samples to estimate the probability density, a binary search tree (range searching) is used in the PDE-RS implementation. It is a generalisation of standard likelihood methods and a powerful classification tool for problems with highly non-linearly correlated observables. In this paper, we present an innovative improvement of the PDE method that uses a self-adapting binning method to divide the multi-dimensional phase space into a finite number of hyper-rectangles (cells). The binning algorithm adjusts the size and position of a predefined number of cells inside the multidimensional phase space, minimizing the variance of the signal and background densities inside the cells. The binned density information is stored in binary trees, allowing for a very ...

  12. Modular high-voltage bias generator powered by dual-looped self-adaptive wireless power transmission.

    Science.gov (United States)

    Xie, Kai; Huang, An-Feng; Li, Xiao-Ping; Guo, Shi-Zhong; Zhang, Han-Lu

    2015-04-01

    We propose a modular high-voltage (HV) bias generator powered by a novel transmitter-sharing inductively coupled wireless power transmission technology, aimed at extending the generator's flexibility and configurability. To solve the problems caused by an uncertain number of modules, a dual-looped self-adaptive control method is proposed that is capable of tracking the resonance frequency while maintaining a relatively stable induction voltage for each HV module. The method combines a phase-locked loop and a current feedback loop, which ensures an accurate resonance state and a relatively constant boost ratio for each module, simplifying the architecture of the boost stage and improving the total efficiency. A prototype was built and tested. The input voltage drop of each module is less than 14% if the module number varies from 3 to 10; resonance tracking is completed within 60 ms. The efficiency of the coupling structure reaches up to 95%, whereas the total efficiency approaches 73% for a rated output. Furthermore, this technology can be used in various multi-load wireless power supply applications.

  13. Design and Implementation of a Smart LED Lighting System Using a Self Adaptive Weighted Data Fusion Algorithm

    Science.gov (United States)

    Sung, Wen-Tsai; Lin, Jia-Syun

    2013-01-01

    This work aims to develop a smart LED lighting system, which is remotely controlled by Android apps via handheld devices, e.g., smartphones, tablets, and so forth. The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of a system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS 232/485 and a human computer interface on a touch screen. The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing on sensed data is made through a self adaptive weighted data fusion algorithm. A low variation in data fusion together with a high stability is experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by command given on the human computer interface, and the reading on a multimeter can be displayed thereon via the server. This proposed smart LED lighting system can be remotely controlled and self learning mode can be enabled by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.
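
    The record does not detail the fusion rule; a common self-adaptive weighted fusion weights each sensor by the inverse of its estimated error variance, and that is what the sketch below assumes (the sensor noise levels in the example are made up).

        import numpy as np

        def adaptive_weighted_fusion(readings):
            """Fuse repeated readings from several sensors with weights
            w_i = (1 / sigma_i^2) / sum_j (1 / sigma_j^2), estimated from the data itself."""
            readings = np.asarray(readings, float)          # shape: (n_sensors, n_samples)
            variances = readings.var(axis=1, ddof=1) + 1e-12
            weights = (1.0 / variances) / np.sum(1.0 / variances)
            return weights @ readings.mean(axis=1), weights

        # Three light sensors observing the same scene with different noise levels.
        rng = np.random.default_rng(1)
        data = 300.0 + rng.normal(0.0, [1.0, 5.0, 10.0], size=(100, 3)).T
        fused_lux, w = adaptive_weighted_fusion(data)       # the least noisy sensor dominates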

  14. Monolithic quasi-sliding-mode controller for SIDO buck converter with a self-adaptive free-wheeling current level

    International Nuclear Information System (INIS)

    Wu Xiaobo; Liu Qing; Zhao Menglian; Chen Mingyang

    2013-01-01

    An analog implementation of a novel fixed-frequency quasi-sliding-mode controller for single-inductor dual-output (SIDO) buck converter in pseudo-continuous conduction mode (PCCM) with a self-adaptive freewheeling current level (SFCL) is presented. Both small and large signal variations around the operation point are considered to achieve better transient response so as to reduce the cross-regulation of this SIDO buck converter. Moreover, an internal integral loop is added to suppress the steady-state regulation error introduced by conventional PWM-based sliding mode controllers. Instead of keeping it as a constant value, the free-wheeling current level varies according to the load condition to maintain high power efficiency and less cross-regulation at the same time. To verify the feasibility of the proposed controller, an SIDO buck converter with two regulated output voltages, 1.8 V and 3.3 V, is designed and fabricated in HEJIAN 0.35 μm CMOS process. Simulation and experiment results show that the transient time of this SIDO buck converter drops to 10 μs while the cross-regulation is reduced to 0.057 mV/mA, when its first load changes from 50 to 100 mA. (semiconductor integrated circuits)

  15. Monolithic quasi-sliding-mode controller for SIDO buck converter with a self-adaptive free-wheeling current level

    Science.gov (United States)

    Xiaobo, Wu; Qing, Liu; Menglian, Zhao; Mingyang, Chen

    2013-01-01

    An analog implementation of a novel fixed-frequency quasi-sliding-mode controller for single-inductor dual-output (SIDO) buck converter in pseudo-continuous conduction mode (PCCM) with a self-adaptive freewheeling current level (SFCL) is presented. Both small and large signal variations around the operation point are considered to achieve better transient response so as to reduce the cross-regulation of this SIDO buck converter. Moreover, an internal integral loop is added to suppress the steady-state regulation error introduced by conventional PWM-based sliding mode controllers. Instead of keeping it as a constant value, the free-wheeling current level varies according to the load condition to maintain high power efficiency and less cross-regulation at the same time. To verify the feasibility of the proposed controller, an SIDO buck converter with two regulated output voltages, 1.8 V and 3.3 V, is designed and fabricated in HEJIAN 0.35 μm CMOS process. Simulation and experiment results show that the transient time of this SIDO buck converter drops to 10 μs while the cross-regulation is reduced to 0.057 mV/mA, when its first load changes from 50 to 100 mA.

  16. Self-adaptive Newton-based iteration strategy for the LES of turbulent multi-scale flows

    International Nuclear Information System (INIS)

    Daude, F.; Mary, I.; Comte, P.

    2014-01-01

    An improvement of the efficiency of implicit schemes based on Newton-like methods for the simulation of turbulent flows by compressible LES or DNS is proposed. It hinges on a zonal Self-Adaptive Newton method (hereafter denoted SAN), capable of taking advantage of Newton convergence rate heterogeneities in multi-scale flow configurations due to a strong spatial variation of the mesh resolution, such as transitional or turbulent flows controlled by small actuators or passive devices. Thanks to a predictor of the local Newton convergence rate, SAN provides computational savings by allocating resources in the regions where they are most needed. The consistency with explicit time integration and the efficiency of the method are checked in three test cases: (i) the standard test case of 2-D linear advection of a vortex, on three different two-block grids; (ii) transition to 3-D turbulence on the lee side of an airfoil at high angle of attack, which features a challenging laminar separation bubble with turbulent reattachment; and (iii) a passively controlled turbulent transonic cavity flow, for which the CPU time is reduced by a factor of 10 with respect to the baseline algorithm, illustrating the interest of the proposed algorithm. (authors)

  17. Design and Implementation of a Smart LED Lighting System Using a Self Adaptive Weighted Data Fusion Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-Tsai Sung

    2013-12-01

    Full Text Available This work aims to develop a smart LED lighting system, which is remotely controlled by Android apps via handheld devices, e.g., smartphones, tablets, and so forth. The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of a system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS 232/485 and a human computer interface on a touch screen. The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing on sensed data is made through a self adaptive weighted data fusion algorithm. A low variation in data fusion together with a high stability is experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by command given on the human computer interface, and the reading on a multimeter can be displayed thereon via the server. This proposed smart LED lighting system can be remotely controlled and self learning mode can be enabled by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.

  18. Light-Weight and Versatile Monitor for a Self-Adaptive Software Framework for IoT Systems

    Directory of Open Access Journals (Sweden)

    Young-Joo Kim

    2016-01-01

    Full Text Available Today, various Internet of Things (IoT) devices and applications are being developed. Such IoT devices have different hardware (HW) and software (SW) capabilities; therefore, most applications require customization when IoT devices are changed or new applications are created. However, the applications executed on these devices are not optimized for power and performance because IoT device systems do not provide suitable static and dynamic information about fast-changing system resources and applications. Therefore, this paper proposes a light-weight and versatile monitor for a self-adaptive software framework to automatically control system resources according to the system status. The monitor helps running applications guarantee low power consumption and high performance for an optimal environment. The proposed monitor has two components: a monitoring component, which provides real-time static and dynamic information about system resources and applications, and a controlling component, which supports real-time control of system resources. For the experimental verification, we created a video transport system based on IoT devices and measured the CPU utilization by dynamic voltage and frequency scaling (DVFS) for the monitor. The results demonstrate that, for up to 50 monitored processes, the monitor shows an average CPU utilization of approximately 4% in the three DVFS modes and demonstrates maximum optimization in the Performance mode of DVFS.

  19. An angle-based subspace anomaly detection approach to high-dimensional data: With an application to industrial fault detection

    International Nuclear Information System (INIS)

    Zhang, Liangwei; Lin, Jing; Karim, Ramin

    2015-01-01

    The accuracy of traditional anomaly detection techniques implemented on full-dimensional spaces degrades significantly as dimensionality increases, thereby hampering many real-world applications. This work proposes an approach to selecting meaningful feature subspace and conducting anomaly detection in the corresponding subspace projection. The aim is to maintain the detection accuracy in high-dimensional circumstances. The suggested approach assesses the angle between all pairs of two lines for one specific anomaly candidate: the first line is connected by the relevant data point and the center of its adjacent points; the other line is one of the axis-parallel lines. Those dimensions which have a relatively small angle with the first line are then chosen to constitute the axis-parallel subspace for the candidate. Next, a normalized Mahalanobis distance is introduced to measure the local outlier-ness of an object in the subspace projection. To comprehensively compare the proposed algorithm with several existing anomaly detection techniques, we constructed artificial datasets with various high-dimensional settings and found the algorithm displayed superior accuracy. A further experiment on an industrial dataset demonstrated the applicability of the proposed algorithm in fault detection tasks and highlighted another of its merits, namely, to provide preliminary interpretation of abnormality through feature ordering in relevant subspaces. - Highlights: • An anomaly detection approach for high-dimensional reliability data is proposed. • The approach selects relevant subspaces by assessing vectorial angles. • The novel ABSAD approach displays superior accuracy over other alternatives. • Numerical illustration approves its efficacy in fault detection applications
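
    The sketch below is one plausible reading of the angle-based selection described above: pick the axes most closely aligned with the line from a candidate point to the centre of its neighbourhood, then score the point with a Mahalanobis-type distance restricted to those axes. The neighbourhood size, the fraction of retained dimensions and the covariance regularization are illustrative assumptions, not values from the paper.

        import numpy as np

        def absad_score(X, idx, k=10, keep=0.5):
            """Angle-guided subspace outlier score for the point X[idx] (illustrative sketch)."""
            x = X[idx]
            nbr = np.argsort(np.sum((X - x) ** 2, axis=1))[1:k + 1]   # k nearest neighbours
            ref = X[nbr].mean(axis=0) - x                             # line to the neighbourhood centre
            align = np.abs(ref) / (np.linalg.norm(ref) + 1e-12)       # |cos| of angle with each axis
            dims = np.argsort(align)[::-1][:max(1, int(keep * X.shape[1]))]
            Z = X[nbr][:, dims]
            cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(len(dims))  # regularized local covariance
            diff = x[dims] - Z.mean(axis=0)
            return float(diff @ np.linalg.solve(cov, diff))           # larger = more anomalous

        X = np.random.randn(200, 20)
        X[0, :3] += 6.0                                               # plant an anomaly in a 3-D subspace
        scores = [absad_score(X, i) for i in range(len(X))]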

  20. Simultaneous multislice magnetic resonance fingerprinting with low-rank and subspace modeling.

    Science.gov (United States)

    Bo Zhao; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L; Setsompop, Kawin

    2017-07-01

    Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has a potential to allow for a 3× speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice.

  1. Robust Switching Control and Subspace Identification for Flutter of Flexible Wing

    Directory of Open Access Journals (Sweden)

    Yizhe Wang

    2018-01-01

    Full Text Available Active flutter suppression and subspace identification for a flexible wing model using a micro fiber composite actuator were experimentally studied in a low speed wind tunnel. A NACA0006 thin airfoil model was used as the experimental object to verify the performance of the identification algorithm and the designed controller. The equation of the coupled fluid, vibration, and piezoelectric motion was theoretically analyzed and experimentally identified under open-loop and closed-loop conditions by the subspace method for controller design. A robust pole placement algorithm in terms of linear matrix inequalities that accommodates the model uncertainty caused by identification deviation and flow speed variation was utilized to stabilize the divergent aeroelastic system. For further enlarging the flutter envelope, additional controllers were designed for the models beyond the flutter speed. Wind speed was measured online as the decision parameter for switching between the controllers. To ensure stability under arbitrary switching, the common Lyapunov function method was applied to design the robust pole placement controllers for the different models so that the closed-loop system shared a common Lyapunov function. Wind tunnel results showed that the designed controllers could stabilize the time varying aeroelastic system over a wide range under arbitrary switching.

  2. An Improved EMD-Based Dissimilarity Metric for Unsupervised Linear Subspace Learning

    Directory of Open Access Journals (Sweden)

    Xiangchun Yu

    2018-01-01

    Full Text Available We investigate a novel way of robust face image feature extraction by adopting methods based on Unsupervised Linear Subspace Learning to extract a small number of good features. Firstly, the face image is divided into blocks with a specified size, and then we propose and extract a pooled Histogram of Oriented Gradient (pHOG) over each block. Secondly, an improved Earth Mover’s Distance (EMD) metric is adopted to measure the dissimilarity between blocks of one face image and the corresponding blocks from the rest of the face images. Thirdly, considering the limitations of the original Locality Preserving Projections (LPP), we propose the Block Structure LPP (BSLPP), which effectively preserves the structural information of face images. Finally, an adjacency graph is constructed and a small number of good features of a face image are obtained by methods based on Unsupervised Linear Subspace Learning. A series of experiments have been conducted on several well-known face databases to evaluate the effectiveness of the proposed algorithm. In addition, we construct noise, geometric distortion, slight translation, and slight rotation versions of the AR and Extended Yale B face databases, and we verify the robustness of the proposed algorithm when faced with a certain degree of these disturbances.

  3. Mitigating Wind Induced Noise in Outdoor Microphone Signals Using a Singular Spectral Subspace Method

    Directory of Open Access Journals (Sweden)

    Omar Eldwaik

    2018-01-01

    Full Text Available Wind induced noise is one of the major concerns in outdoor acoustic signal acquisition. It affects many field measurement and audio recording scenarios. Filtering such noise is known to be difficult due to its broadband and time varying nature. In this paper, a new method to mitigate wind induced noise in microphone signals is developed. Instead of applying filtering techniques, wind induced noise is statistically separated from the wanted signals in a singular spectral subspace. The paper is presented in the context of handling microphone signals acquired outdoors for acoustic sensing and environmental noise monitoring or soundscape sampling. The method includes two complementary stages, namely decomposition and reconstruction. The first stage decomposes the mixed signals in eigen-subspaces, then selects and groups the principal components according to their contributions to wind noise and wanted signals in the singular spectrum domain. The second stage reconstructs the signals in the time domain, resulting in the separation of wind noise and wanted signals. Results show that microphone wind noise is separable in the singular spectrum domain, as evidenced by the weighted correlation. The new method might be generalized to other outdoor sound acquisition applications.
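
    The two stages described above correspond to standard singular spectrum analysis; the sketch below shows a generic embed/decompose/regroup/reconstruct pass. The grouping rule that separates wind noise from the wanted signal in the paper is not reproduced; here the example simply keeps the two leading components, and the window length is an arbitrary choice.

        import numpy as np
        from scipy.linalg import hankel

        def ssa_decompose(x, window):
            """Embed the signal in a trajectory (Hankel) matrix and return its SVD factors."""
            X = hankel(x[:window], x[window - 1:])           # window x (len(x) - window + 1)
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U, s, Vt

        def ssa_reconstruct(U, s, Vt, components, n):
            """Rebuild a time series from selected components by diagonal averaging."""
            X = (U[:, components] * s[components]) @ Vt[components]
            out, count = np.zeros(n), np.zeros(n)
            for i in range(X.shape[0]):
                for j in range(X.shape[1]):
                    out[i + j] += X[i, j]
                    count[i + j] += 1
            return out / count

        x = np.sin(2 * np.pi * 0.02 * np.arange(1000)) + 0.5 * np.random.randn(1000)
        U, s, Vt = ssa_decompose(x, window=50)
        clean = ssa_reconstruct(U, s, Vt, components=[0, 1], n=len(x))   # keep the 2 leading components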

  4. Development of Subspace-based Hybrid Monte Carlo-Deterministric Algorithms for Reactor Physics Calculations

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-01-01

    The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3-10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained here, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  5. Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L

    2018-02-01

    This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is pronounced especially when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  6. An efficient preconditioning technique using Krylov subspace methods for 3D characteristics solvers

    International Nuclear Information System (INIS)

    Dahmani, M.; Le Tellier, R.; Roy, R.; Hebert, A.

    2005-01-01

    The Generalized Minimal RESidual (GMRES) method, using a Krylov subspace projection, is adapted and implemented to accelerate a 3D iterative transport solver based on the characteristics method. Another acceleration technique called the self-collision rebalancing technique (SCR) can also be used to accelerate the solution or as a left preconditioner for GMRES. The GMRES method is usually used to solve a linear algebraic system (Ax = b). It uses K(r(0), A) as the projection subspace and AK(r(0), A) for the orthogonalization of the residual. This paper compares the performance of these two combined methods on various problems. To implement the GMRES iterative method, the characteristics equations are derived in linear algebra formalism by using the equivalence between the method of characteristics and the method of collision probability, ending up with a linear algebraic system involving fluxes and currents. Numerical results show good performance of the GMRES technique, especially for cases presenting large material heterogeneity with a scattering ratio close to 1. Similarly, the SCR preconditioning slightly increases the GMRES efficiency.
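
    To make the Krylov machinery concrete, the sketch below is a bare, unpreconditioned GMRES without restarts: it builds an orthonormal basis of K_m(r0, A) with the Arnoldi process and minimizes the residual over x0 + K_m through a small least-squares solve. It is a generic illustration only; the transport solver and SCR preconditioning of the record are not represented, and the dense test matrix is arbitrary.

        import numpy as np

        def gmres(A, b, x0=None, m=30, tol=1e-8):
            """Unrestarted GMRES built on the Arnoldi process over K_m(r0, A)."""
            n = len(b)
            x0 = np.zeros(n) if x0 is None else x0
            r0 = b - A @ x0
            beta = np.linalg.norm(r0)
            if beta < tol:
                return x0
            Q = np.zeros((n, m + 1))
            H = np.zeros((m + 1, m))
            Q[:, 0] = r0 / beta
            for j in range(m):
                w = A @ Q[:, j]
                for i in range(j + 1):                         # modified Gram-Schmidt
                    H[i, j] = Q[:, i] @ w
                    w -= H[i, j] * Q[:, i]
                H[j + 1, j] = np.linalg.norm(w)
                if H[j + 1, j] < 1e-14:                        # happy breakdown: solution lies in K_m
                    m = j + 1
                    break
                Q[:, j + 1] = w / H[j + 1, j]
            e1 = np.zeros(m + 1)
            e1[0] = beta
            y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
            return x0 + Q[:, :m] @ y

        A = np.diag(np.arange(1.0, 101.0)) + 0.01 * np.random.randn(100, 100)
        b = np.ones(100)
        x = gmres(A, b, m=60)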

  7. Gene selection for microarray data classification via subspace learning and manifold regularization.

    Science.gov (United States)

    Tang, Chang; Cao, Lijuan; Zheng, Xiao; Wang, Minhui

    2017-12-19

    With the rapid development of DNA microarray technology, large amounts of genomic data have been generated. Classification of these microarray data is a challenging task since gene expression data are often characterized by thousands of genes but only a small number of samples. In this paper, an effective gene selection method is proposed to select the best subset of genes for microarray data, with the irrelevant and redundant genes removed. Compared with the original data, the selected gene subset can benefit the classification task. We formulate the gene selection task as a manifold regularized subspace learning problem. In detail, a projection matrix is used to project the original high dimensional microarray data into a lower dimensional subspace, with the constraint that the original genes can be well represented by the selected genes. Meanwhile, the local manifold structure of the original data is preserved by a Laplacian graph regularization term on the low-dimensional data space. The projection matrix can serve as an importance indicator for the different genes. An iterative update algorithm is developed for solving the problem. Experimental results on six publicly available microarray datasets and one clinical dataset demonstrate that the proposed method performs better than other state-of-the-art methods in terms of microarray data classification.

  8. Structural damage diagnosis based on on-line recursive stochastic subspace identification

    International Nuclear Information System (INIS)

    Loh, Chin-Hsiung; Weng, Jian-Huang; Liu, Yi-Cheng; Lin, Pei-Yang; Huang, Shieh-Kung

    2011-01-01

    This paper presents a recursive stochastic subspace identification (RSSI) technique for on-line and almost real-time structural damage diagnosis using output-only measurements. Through RSSI the time-varying natural frequencies of a system can be identified. To reduce the computation time of the LQ decomposition in RSSI, the Givens rotation as well as the matrix operation for appending a new data set are derived. The relationship between the size of the Hankel matrix and the data length in each shifted moving window is examined so as to extract the time-varying features of the system without loss of generality and to establish on-line and almost real-time system identification. The result from the RSSI technique can also be applied to structural damage diagnosis. Off-line data-driven stochastic subspace identification is used first to establish the system matrix from the measurements of an undamaged (reference) case. Then the RSSI technique incorporating a Kalman estimator is used to extract the dynamic characteristics of the system through continuous monitoring data. The predicted residual error is defined as a damage feature and, through outlier statistics, provides an indicator of damage. Verification of the proposed identification algorithm is conducted using bridge scouring test data and the white noise response data of a reinforced concrete frame structure.
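
    For context only, the sketch below shows the simpler batch, covariance-driven form of stochastic subspace identification (block Hankel of output covariances, SVD, shift invariance) that output-only methods of this kind build on; the recursive LQ/Givens updating and Kalman estimator of the record are not reproduced, and the model order, block size and toy signal are arbitrary assumptions.

        import numpy as np

        def cov_ssi_frequencies(Y, order, n_block=20, dt=0.01):
            """Covariance-driven SSI (batch): estimate modal frequencies from output-only data
            Y of shape (n_samples, n_channels)."""
            N, l = Y.shape
            R = [Y[k:N - n_block + k].T @ Y[:N - n_block] / (N - n_block)
                 for k in range(1, 2 * n_block + 1)]                      # output covariance lags
            H = np.block([[R[i + j] for j in range(n_block)] for i in range(n_block)])
            U, s, _ = np.linalg.svd(H)
            O = U[:, :order] * np.sqrt(s[:order])                         # observability matrix
            A = np.linalg.pinv(O[:-l]) @ O[l:]                            # shift invariance
            lam = np.log(np.linalg.eigvals(A).astype(complex)) / dt       # continuous-time poles
            return np.sort(np.abs(lam) / (2 * np.pi))

        # Toy two-channel response dominated by a lightly damped 1.5 Hz mode.
        t = np.arange(0, 60, 0.01)
        y = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.02 * t) + 0.1 * np.random.randn(len(t))
        Y = np.column_stack([y, np.gradient(y, 0.01)])
        print(cov_ssi_frequencies(Y, order=2))   # expected: both poles of the conjugate pair near 1.5 Hz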

  9. Quantum probabilities as Dempster-Shafer probabilities in the lattice of subspaces

    International Nuclear Information System (INIS)

    Vourdas, A.

    2014-01-01

    The orthocomplemented modular lattice of subspaces L[H(d)] of a quantum system with d-dimensional Hilbert space H(d) is considered. A generalized additivity relation which holds for Kolmogorov probabilities is violated by quantum probabilities in the full lattice L[H(d)] (it is only valid within the Boolean subalgebras of L[H(d)]). This suggests the use of more general (than Kolmogorov) probability theories, and here the Dempster-Shafer probability theory is adopted. An operator D(H1, H2), which quantifies deviations from Kolmogorov probability theory, is introduced, and it is shown to be intimately related to the commutator of the projectors P(H1), P(H2) onto the subspaces H1, H2. As an application, it is shown that the proof of the inequalities of Clauser, Horne, Shimony, and Holt for a system of two spin-1/2 particles is valid for Kolmogorov probabilities, but not for Dempster-Shafer probabilities. The violation of these inequalities in experiments supports the interpretation of quantum probabilities as Dempster-Shafer probabilities.

  10. High resolution through-the-wall radar image based on beamspace eigenstructure subspace methods

    Science.gov (United States)

    Yoon, Yeo-Sun; Amin, Moeness G.

    2008-04-01

    Through-the-wall imaging (TWI) is a challenging problem, even if the wall parameters and characteristics are known to the system operator. Proper target classification and correct imaging interpretation require the application of high resolution techniques using limited array size. In inverse synthetic aperture radar (ISAR), signal subspace methods such as Multiple Signal Classification (MUSIC) are used to obtain high resolution imaging. In this paper, we adopt signal subspace methods and apply them to the 2-D spectrum obtained from the delay-and-sum beamforming image. This is in contrast to ISAR, where raw data, in frequency and angle, is directly used to form the estimate of the covariance matrix and array response vector. Using beams rather than raw data has two main advantages, namely, it improves the signal-to-noise ratio (SNR) and can correctly image typical indoor extended targets, such as tables and cabinets, as well as point targets. The paper presents both simulated and experimental results using synthesized and real data. It compares the performance of beam-space MUSIC and the Capon beamformer. The experimental data is collected at the test facility in the Radar Imaging Laboratory, Villanova University.
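
    The eigenstructure step behind beam-space MUSIC is the same as in classical MUSIC: estimate a covariance matrix, split its eigenvectors into signal and noise subspaces, and scan a steering vector against the noise subspace. The sketch below shows only that classical narrowband estimator for a uniform linear array operating on raw snapshots; the paper's 2-D beam-space formulation for through-the-wall images is not reproduced, and the array size, spacing and source angles are arbitrary.

        import numpy as np

        def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
            """Classical narrowband MUSIC pseudospectrum for a uniform linear array.
            X: complex snapshots (n_sensors x n_snapshots); d: element spacing in wavelengths."""
            n_sensors = X.shape[0]
            R = X @ X.conj().T / X.shape[1]                   # sample covariance matrix
            _, V = np.linalg.eigh(R)
            En = V[:, :n_sensors - n_sources]                 # noise subspace (smallest eigenvalues)
            p = np.empty(len(angles))
            for i, theta in enumerate(np.deg2rad(angles)):
                a = np.exp(-2j * np.pi * d * np.arange(n_sensors) * np.sin(theta))
                p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
            return angles, p

        # Two sources at -20 and 30 degrees seen by an 8-element half-wavelength ULA.
        rng = np.random.default_rng(0)
        n, snaps = 8, 200
        A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(n), np.sin(np.deg2rad([-20.0, 30.0]))))
        S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
        X = A @ S + 0.1 * (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps)))
        angles, p = music_spectrum(X, n_sources=2)            # peaks appear near -20 and 30 degrees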

  11. On the selection of user-defined parameters in data-driven stochastic subspace identification

    Science.gov (United States)

    Priori, C.; De Angelis, M.; Betti, R.

    2018-02-01

    The paper focuses on the time domain output-only technique called Data-Driven Stochastic Subspace Identification (DD-SSI); in order to identify modal models (frequencies, damping ratios and mode shapes), the role of its user-defined parameters is studied, and rules to determine their minimum values are proposed. Such investigation is carried out using, first, the time histories of structural responses to stationary excitations, with a large number of samples, satisfying the hypothesis on the input imposed by DD-SSI. Then, the case of non-stationary seismic excitations with a reduced number of samples is considered. In this paper, partitions of the data matrix different from the one proposed in the SSI literature are investigated, together with the influence of different choices of the weighting matrices. The study is carried out considering two different applications: (1) data obtained from vibration tests on a scaled structure and (2) in-situ tests on a reinforced concrete building. Referring to the former, the identification of a steel frame structure tested on a shaking table is performed using its responses in terms of absolute accelerations to a stationary (white noise) base excitation and to non-stationary seismic excitations of low intensity. Black-box and modal models are identified in both cases and the results are compared with those from an input-output subspace technique. With regards to the latter, the identification of a complex hospital building is conducted using data obtained from ambient vibration tests.

  12. Robust Adaptive Beamforming with Sensor Position Errors Using Weighted Subspace Fitting-Based Covariance Matrix Reconstruction.

    Science.gov (United States)

    Chen, Peng; Yang, Yixin; Wang, Yong; Ma, Yuanliang

    2018-05-08

    When sensor position errors exist, the performance of recently proposed interference-plus-noise covariance matrix (INCM)-based adaptive beamformers may be severely degraded. In this paper, we propose a weighted subspace fitting-based INCM reconstruction algorithm to overcome sensor displacement for linear arrays. By estimating the rough signal directions, we construct a novel possible mismatched steering vector (SV) set. We analyze the proximity of the signal subspace from the sample covariance matrix (SCM) and the space spanned by the possible mismatched SV set. After solving an iterative optimization problem, we reconstruct the INCM using the estimated sensor position errors. Then we estimate the SV of the desired signal by solving an optimization problem with the reconstructed INCM. The main advantage of the proposed algorithm is its robustness against SV mismatches dominated by unknown sensor position errors. Numerical examples show that even if the position errors are up to half of the assumed sensor spacing, the output signal-to-interference-plus-noise ratio is only reduced by 4 dB. Beam patterns plotted using experiment data show that the interference suppression capability of the proposed beamformer outperforms other tested beamformers.

  13. CLAss-Specific Subspace Kernel Representations and Adaptive Margin Slack Minimization for Large Scale Classification.

    Science.gov (United States)

    Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan

    2018-02-01

    In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization to iteratively improve the classification accuracy by adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large scale data. We propose two such frameworks: memory efficient sequential processing for sequential data processing and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare with state-of-the-art techniques to verify the validity of the proposed techniques.

  14. Parallel algorithms for unconstrained optimization by multisplitting with inexact subspace search - the abstract

    Energy Technology Data Exchange (ETDEWEB)

    Renaut, R.; He, Q. [Arizona State Univ., Tempe, AZ (United States)

    1994-12-31

    A new parallel iterative algorithm for unconstrained optimization by multisplitting is proposed. In this algorithm the original problem is split into a set of small optimization subproblems which are solved using well known sequential algorithms. These algorithms are iterative in nature, e.g. the DFP variable metric method. Here the authors use sequential algorithms based on an inexact subspace search, which is an extension of the usual idea of an inexact line search. Essentially, the idea of the inexact line search for nonlinear minimization is that at each iteration the authors only find an approximate minimum in the line search direction. Hence by inexact subspace search they mean that, instead of finding the minimum of the subproblem at each iteration, they do an incomplete downhill search to give an approximate minimum. Some convergence and numerical results for this algorithm will be presented. Further, the original theory will be generalized to the situation with a singular Hessian. Applications to nonlinear least squares problems will be presented. Experimental results will be presented for implementations on an Intel iPSC/860 Hypercube with 64 nodes as well as on the Intel Paragon.

  15. On the Kalman Filter error covariance collapse into the unstable subspace

    Directory of Open Access Journals (Sweden)

    A. Trevisan

    2011-03-01

    Full Text Available When the Extended Kalman Filter is applied to a chaotic system, the rank of the error covariance matrices, after a sufficiently large number of iterations, reduces to N+ + N0, where N+ and N0 are the numbers of positive and null Lyapunov exponents. This is due to the collapse of the solution of the full Extended Kalman Filter onto the unstable and neutral tangent subspace. Therefore the solution is the same as the solution obtained by confining the assimilation to the space spanned by the Lyapunov vectors with non-negative Lyapunov exponents. Theoretical arguments and numerical verification are provided to show that the asymptotic state and covariance estimates of the full EKF and of its reduced form, with assimilation in the unstable and neutral subspace (EKF-AUS), are the same. The consequences of these findings for applications of Kalman-type filters to chaotic models are discussed.

  16. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    Science.gov (United States)

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problems but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state of the art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.

  17. Decomposition of Near-Infrared Spectroscopy Signals Using Oblique Subspace Projections: Applications in Brain Hemodynamic Monitoring

    Directory of Open Access Journals (Sweden)

    Alexander Caicedo

    2016-11-01

    Full Text Available Clinical data comprise a large number of synchronously collected biomedical signals that are measured at different locations. Deciphering the interrelationships of these signals can yield important information about their dependence, providing some useful clinical diagnostic data. For instance, by computing the coupling between Near-Infrared Spectroscopy signals (NIRS) and systemic variables, the status of the hemodynamic regulation mechanisms can be assessed. In this paper we introduce an algorithm for the decomposition of NIRS signals into additive components. The algorithm, SIgnal DEcomposition based on Oblique Subspace Projections (SIDE-ObSP), assumes that the measured NIRS signal is a linear combination of the systemic measurements, following the linear regression model y = Ax + ϵ. SIDE-ObSP decomposes the output such that each component in the decomposition represents the sole linear influence of one corresponding regressor variable. This decomposition scheme aims at providing a better understanding of the relation between NIRS and systemic variables, and at providing a framework for the clinical interpretation of regression algorithms, thereby facilitating their introduction into clinical practice. SIDE-ObSP combines oblique subspace projections (ObSP) with the structure of a mean average system in order to define adequate signal subspaces. To guarantee smoothness in the estimated regression parameters, as observed in normal physiological processes, we impose a Tikhonov regularization using a matrix differential operator. We evaluate the performance of SIDE-ObSP by using a synthetic dataset, and present two case studies in the field of cerebral hemodynamics monitoring using NIRS. In addition, we compare the performance of this method with other system identification techniques. In the first case study, data from 20 neonates during the first three days of life were used; here SIDE-ObSP decoupled the influence of changes in arterial oxygen
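
    The core operation can be illustrated generically (this is not the SIDE-ObSP implementation; matrix names and sizes are ours): an oblique projector keeps the component of a measurement lying in one regressor subspace while annihilating the contribution of the remaining regressors.

      import numpy as np

      def oblique_projector(S, T):
          """Projector onto col(S) along col(T): leaves col(S) untouched, kills col(T)."""
          Pt_perp = np.eye(T.shape[0]) - T @ np.linalg.pinv(T)   # orthogonal complement of col(T)
          return S @ np.linalg.inv(S.T @ Pt_perp @ S) @ S.T @ Pt_perp

      rng = np.random.default_rng(0)
      S = rng.standard_normal((6, 2))     # subspace of the regressor of interest
      T = rng.standard_normal((6, 3))     # subspace spanned by the remaining regressors
      P = oblique_projector(S, T)

      y = S @ np.array([1.0, -2.0]) + T @ np.array([0.5, 0.3, -1.0])
      print(np.allclose(P @ y, S @ np.array([1.0, -2.0])))       # True: isolates the S-component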

  18. Decomposition of Near-Infrared Spectroscopy Signals Using Oblique Subspace Projections: Applications in Brain Hemodynamic Monitoring.

    Science.gov (United States)

    Caicedo, Alexander; Varon, Carolina; Hunyadi, Borbala; Papademetriou, Maria; Tachtsidis, Ilias; Van Huffel, Sabine

    2016-01-01

    Clinical data comprise a large number of synchronously collected biomedical signals that are measured at different locations. Deciphering the interrelationships of these signals can yield important information about their dependence, providing some useful clinical diagnostic data. For instance, by computing the coupling between Near-Infrared Spectroscopy signals (NIRS) and systemic variables, the status of the hemodynamic regulation mechanisms can be assessed. In this paper we introduce an algorithm for the decomposition of NIRS signals into additive components. The algorithm, SIgnal DEcomposition based on Oblique Subspace Projections (SIDE-ObSP), assumes that the measured NIRS signal is a linear combination of the systemic measurements, following the linear regression model y = Ax + ϵ. SIDE-ObSP decomposes the output such that each component in the decomposition represents the sole linear influence of one corresponding regressor variable. This decomposition scheme aims at providing a better understanding of the relation between NIRS and systemic variables, and at providing a framework for the clinical interpretation of regression algorithms, thereby facilitating their introduction into clinical practice. SIDE-ObSP combines oblique subspace projections (ObSP) with the structure of a mean average system in order to define adequate signal subspaces. To guarantee smoothness in the estimated regression parameters, as observed in normal physiological processes, we impose a Tikhonov regularization using a matrix differential operator. We evaluate the performance of SIDE-ObSP by using a synthetic dataset, and present two case studies in the field of cerebral hemodynamics monitoring using NIRS. In addition, we compare the performance of this method with other system identification techniques. In the first case study, data from 20 neonates during the first 3 days of life were used; here SIDE-ObSP decoupled the influence of changes in arterial oxygen saturation from the

  19. Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing

    Science.gov (United States)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2013-12-01

    Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beamforming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals, but instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time-series. We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
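
    The dimensionality-monitoring idea can be sketched on a toy array (our illustration; sizes, noise levels and the 95% energy threshold are arbitrary): a coherent arrival across healthy channels occupies a low-dimensional subspace, and a single malfunctioning channel inflates the number of principal components needed to explain the data.

      import numpy as np

      def effective_dimension(X, energy=0.95):
          """Number of principal components needed to capture `energy` of the variance
          of a (channels x samples) data window."""
          s = np.linalg.svd(X - X.mean(axis=1, keepdims=True), compute_uv=False)
          frac = np.cumsum(s**2) / np.sum(s**2)
          return int(np.searchsorted(frac, energy) + 1)

      rng = np.random.default_rng(0)
      t = np.linspace(0, 10, 2000)
      signal = np.sin(2 * np.pi * 1.5 * t)                     # coherent arrival across the array
      good = signal + 0.1 * rng.standard_normal((16, t.size))  # 16 healthy channels
      bad = good.copy()
      bad[3] = 5.0 * rng.standard_normal(t.size)               # one malfunctioning channel

      print(effective_dimension(good), effective_dimension(bad))
      # the broken channel raises the subspace dimension of the array-wide time series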

  20. pSum-SaDE: A Modified p-Median Problem and Self-Adaptive Differential Evolution Algorithm for Text Summarization

    Directory of Open Access Journals (Sweden)

    Rasim M. Alguliev

    2011-01-01

    Full Text Available Extractive multidocument summarization is modeled as a modified p-median problem. The problem is formulated taking into account four basic requirements that summaries should satisfy, namely relevance, information coverage, diversity, and length limit. To solve the optimization problem a self-adaptive differential evolution algorithm is created. Differential evolution has been proven to be an efficient and robust algorithm for many real optimization problems. However, it still may converge toward local optimum solutions, it needs manual adjustment of its parameters, and finding the best values for the control parameters is a time-consuming task. In this paper a self-adaptive scaling factor is introduced into the original DE to increase its exploration and exploitation ability. This paper finds that self-adaptive differential evolution can efficiently find the best solution in comparison with the canonical differential evolution. We implemented our model on the multi-document summarization task. Experiments have shown that the proposed model is competitive on the DUC2006 dataset.
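
    A compact sketch of a differential evolution loop with a jDE-style self-adaptive scaling factor is given below (a generic illustration on a toy function; the p-median summarization model and the exact pSum-SaDE adaptation rule are not reproduced).

      import numpy as np

      def self_adaptive_de(f, bounds, pop=20, gens=200, seed=0):
          """DE/rand/1/bin where each individual carries its own scaling factor F,
          occasionally resampled and kept only when it produces a better trial."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, float).T
          d = lo.size
          X = rng.uniform(lo, hi, (pop, d))
          F = np.full(pop, 0.5)
          fit = np.array([f(x) for x in X])
          for _ in range(gens):
              for i in range(pop):
                  Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]   # self-adaptation
                  a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
                  trial = np.where(rng.random(d) < 0.9, np.clip(a + Fi * (b - c), lo, hi), X[i])
                  ft = f(trial)
                  if ft <= fit[i]:                       # greedy selection keeps successful F
                      X[i], fit[i], F[i] = trial, ft, Fi
          return X[np.argmin(fit)], fit.min()

      best, val = self_adaptive_de(lambda x: np.sum(x**2), bounds=[(-5.0, 5.0)] * 5)
      print(best, val)   # converges toward the global optimum at the origin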

  1. Introduction to the spectral distribution method. Application example to the subspaces with a large number of quasi particles

    International Nuclear Information System (INIS)

    Arvieu, R.

    The assumptions and principles of the spectral distribution method are reviewed. The object of the method is to deduce information on the nuclear spectra by constructing a frequency function which has the same first few moments as the exact frequency function, these moments being exactly calculated. The method is applied to subspaces containing a large number of quasi particles [fr

  2. Estimating absolute configurational entropies of macromolecules: the minimally coupled subspace approach.

    Directory of Open Access Journals (Sweden)

    Ulf Hensen

    Full Text Available We develop a general minimally coupled subspace approach (MCSA to compute absolute entropies of macromolecules, such as proteins, from computer generated canonical ensembles. Our approach overcomes limitations of current estimates such as the quasi-harmonic approximation which neglects non-linear and higher-order correlations as well as multi-minima characteristics of protein energy landscapes. Here, Full Correlation Analysis, adaptive kernel density estimation, and mutual information expansions are combined and high accuracy is demonstrated for a number of test systems ranging from alkanes to a 14 residue peptide. We further computed the configurational entropy for the full 67-residue cofactor of the TATA box binding protein illustrating that MCSA yields improved results also for large macromolecular systems.

  3. Adaptive Detectors for Two Types of Subspace Targets in an Inverse Gamma Textured Background

    Directory of Open Access Journals (Sweden)

    Ding Hao

    2017-06-01

    Full Text Available Considering an inverse Gamma prior distribution model for the texture, the adaptive detection problems for both first-order Gaussian and second-order Gaussian subspace targets are investigated in compound-Gaussian sea clutter. Test statistics are derived on the basis of the two-step generalized likelihood ratio test. From these tests, new adaptive detectors are proposed by substituting the covariance matrix with estimation results from the Sample Covariance Matrix (SCM), the normalized SCM, and the fixed point estimator. The proposed detectors consider the prior distribution model for sea clutter during the design stage, and their model parameters match the working environment during the detection stage. Analysis and validation results indicate that the detection performance of the proposed detectors outperforms that of the existing AMF (Adaptive Matched Filter) and ANMF (Adaptive Normalized Matched Filter) detectors.
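
    For context, the classical AMF statistic that such detectors are benchmarked against can be written in a few lines (a generic sketch with invented numbers; the inverse Gamma texture model and the detectors proposed in the paper are not implemented here).

      import numpy as np

      def amf_statistic(x, s, secondary):
          """Adaptive Matched Filter statistic, with the covariance estimated by the
          Sample Covariance Matrix (SCM) of signal-free secondary data (columns)."""
          R = secondary @ secondary.conj().T / secondary.shape[1]
          Ri = np.linalg.inv(R)
          return float(np.abs(s.conj() @ Ri @ x) ** 2 / np.real(s.conj() @ Ri @ s))

      rng = np.random.default_rng(0)
      N, K = 8, 32
      s = np.exp(1j * np.pi * 0.3 * np.arange(N))                       # known steering vector
      cn = lambda *shape: (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
      secondary = cn(N, K)                                              # clutter-only training data
      x0 = cn(N)                                                        # target-absent cell
      x1 = x0 + 3.0 * s                                                 # target-present cell
      print(amf_statistic(x0, s, secondary), amf_statistic(x1, s, secondary))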

  4. Fault Tolerant Flight Control Using Sliding Modes and Subspace Identification-Based Predictive Control

    KAUST Repository

    Siddiqui, Bilal A.; El-Ferik, Sami; Abdelkader, Mohamed

    2016-01-01

    In this work, a cascade structure of a time-scale separated integral sliding mode and model predictive control is proposed as a viable alternative for fault-tolerant control. A multi-variable sliding mode control law is designed as the inner loop of the flight control system. Subspace identification is carried out on the aircraft in closed loop. The identified plant is then used for model predictive controllers in the outer loop. The overall control law demonstrates improved robustness to measurement noise, modeling uncertainties, multiple faults and severe wind turbulence and gusts. In addition, the flight control system employs filters and dead-zone nonlinear elements to reduce chattering and improve handling quality. Simulation results demonstrate the efficiency of the proposed controller using conventional fighter aircraft without control redundancy.

  5. Prewhitening for Rank-Deficient Noise in Subspace Methods for Noise Reduction

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Søren Holdt

    2005-01-01

    A fundamental issue in connection with subspace methods for noise reduction is that the covariance matrix for the noise is required to have full rank, in order for the prewhitening step to be defined. However, there are important cases where this requirement is not fulfilled, e.g., when the noise...... has narrow-band characteristics, or in the case of tonal noise. We extend the concept of prewhitening to include the case when the noise covariance matrix is rank deficient, using a weighted pseudoinverse and the quotient SVD, and we show how to formulate a general rank-reduction algorithm that works...... also for rank deficient noise. We also demonstrate how to formulate this algorithm by means of a quotient ULV decomposition, which allows for faster computation and updating. Finally we apply our algorithm to a problem involving a speech signal contaminated by narrow-band noise....

  6. Fault Tolerant Flight Control Using Sliding Modes and Subspace Identification-Based Predictive Control

    KAUST Repository

    Siddiqui, Bilal A.

    2016-07-26

    In this work, a cascade structure of a time-scale separated integral sliding mode and model predictive control is proposed as a viable alternative for fault-tolerant control. A multi-variable sliding mode control law is designed as the inner loop of the flight control system. Subspace identification is carried out on the aircraft in closed loop. The identified plant is then used for model predictive controllers in the outer loop. The overall control law demonstrates improved robustness to measurement noise, modeling uncertainties, multiple faults and severe wind turbulence and gusts. In addition, the flight control system employs filters and dead-zone nonlinear elements to reduce chattering and improve handling quality. Simulation results demonstrate the efficiency of the proposed controller using conventional fighter aircraft without control redundancy.

  7. Acoustic Source Localization via Subspace Based Method Using Small Aperture MEMS Arrays

    Directory of Open Access Journals (Sweden)

    Xin Zhang

    2014-01-01

    Full Text Available Small aperture microphone arrays provide many advantages for portable devices and hearing aid equipment. In this paper, a subspace-based localization method is proposed for acoustic sources using small aperture arrays. The effects of array aperture on localization are analyzed by using the array response (array manifold). Besides array aperture, the frequency of the acoustic source and the variance of the signal power are simulated to demonstrate how to optimize localization performance, which is carried out by introducing frequency error with the proposed method. The proposed method for a 5 mm array aperture is validated by simulations and experiments with MEMS microphone arrays. Different types of acoustic sources can be localized with the highest precision of 6 degrees even in the presence of wind noise and other noises. Furthermore, the proposed method reduces the computational complexity compared with other methods.
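
    A bare-bones MUSIC sketch for a small-aperture uniform linear array is shown below (our illustration with made-up numbers; the authors' treatment of aperture, frequency error and the MEMS hardware is not reproduced): even with 5 mm element spacing the subspace spectrum peaks close to the true direction when the SNR is high.

      import numpy as np

      def music_spectrum(X, d, wavelength, angles):
          """Narrowband MUSIC pseudo-spectrum for a uniform linear array
          (X: sensors x snapshots; a single source is assumed)."""
          M = X.shape[0]
          R = X @ X.conj().T / X.shape[1]
          _, V = np.linalg.eigh(R)
          En = V[:, :-1]                           # noise subspace: all but the largest eigenvector
          k = 2 * np.pi / wavelength
          spec = []
          for th in angles:
              a = np.exp(1j * k * d * np.arange(M) * np.sin(th))
              spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
          return np.array(spec)

      rng = np.random.default_rng(1)
      M, snaps = 4, 200
      wl, d = 0.17, 0.005                          # ~2 kHz sound in air, 5 mm element spacing
      theta = np.deg2rad(25.0)
      a_true = np.exp(1j * 2 * np.pi / wl * d * np.arange(M) * np.sin(theta))
      src = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
      noise = 0.05 * (rng.standard_normal((M, snaps)) + 1j * rng.standard_normal((M, snaps)))
      X = np.outer(a_true, src) + noise
      angles = np.deg2rad(np.arange(-90.0, 90.5, 1.0))
      print(np.rad2deg(angles[np.argmax(music_spectrum(X, d, wl, angles))]))   # close to 25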

  8. s-Step Krylov Subspace Methods as Bottom Solvers for Geometric Multigrid

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lijewski, Mike [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Almgren, Ann [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Straalen, Brian Van [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Carson, Erin [Univ. of California, Berkeley, CA (United States); Knight, Nicholas [Univ. of California, Berkeley, CA (United States); Demmel, James [Univ. of California, Berkeley, CA (United States)

    2014-08-14

    Geometric multigrid solvers within adaptive mesh refinement (AMR) applications often reach a point where further coarsening of the grid becomes impractical as individual subdomain sizes approach unity. At this point the most common solution is to use a bottom solver, such as BiCGStab, to reduce the residual by a fixed factor at the coarsest level. Each iteration of BiCGStab requires multiple global reductions (MPI collectives). As the number of BiCGStab iterations required for convergence grows with problem size, and the time for each collective operation increases with machine scale, bottom solves in large-scale applications can constitute a significant fraction of the overall multigrid solve time. In this paper, we implement, evaluate, and optimize a communication-avoiding s-step formulation of BiCGStab (CABiCGStab for short) as a high-performance, distributed-memory bottom solver for geometric multigrid solvers. This is the first time s-step Krylov subspace methods have been leveraged to improve multigrid bottom solver performance. We use a synthetic benchmark for detailed analysis and integrate the best implementation into BoxLib in order to evaluate the benefit of an s-step Krylov subspace method on the multigrid solves found in the applications LMC and Nyx on up to 32,768 cores on the Cray XE6 at NERSC. Overall, we see bottom solver improvements of up to 4.2x on synthetic problems and up to 2.7x in real applications. This results in as much as a 1.5x improvement in solver performance in real applications.
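
    As a point of reference, a classical (non-s-step) BiCGStab bottom solve on a small coarse-grid system looks as follows with SciPy; the communication-avoiding CABiCGStab formulation evaluated in the paper is not part of SciPy and is not shown here.

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import bicgstab

      # A coarse 1-D Poisson system standing in for the multigrid bottom-level problem.
      n = 32
      A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
      b = np.ones(n)

      x, info = bicgstab(A, b)          # the tolerance arguments set the residual-reduction target
      print(info, np.linalg.norm(b - A @ x) / np.linalg.norm(b))   # info == 0 means convergence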

  9. Rank-defective millimeter-wave channel estimation based on subspace-compressive sensing

    Directory of Open Access Journals (Sweden)

    Majid Shakhsi Dastgahian

    2016-11-01

    Full Text Available Millimeter-wave communication (mmWC) is considered one of the pioneer candidates for 5G indoor and outdoor systems in the E-band. To cope with the channel propagation characteristics in this band, high dimensional antenna arrays need to be deployed at both the base station (BS) and mobile sets (MS). Unlike conventional MIMO systems, millimeter-wave (mmW) systems avoid employing power-hungry equipment such as an ADC or RF chain in each branch of the MIMO system because of hardware constraints. Such systems rely on a hybrid precoding (combining) architecture for downlink deployment. Because there is a large array at the transceiver, it is impossible to estimate the channel by conventional methods. This paper develops a new algorithm to estimate the mmW channel by exploiting the sparse nature of the channel. The main contribution is the representation of a sparse channel model and the exploitation of a modified approach based on the Multiple Measurement Vector (MMV) greedy sparse framework and the subspace method of Multiple Signal Classification (MUSIC), which work together to recover the indices of the non-zero elements of an unknown channel matrix when the rank of the channel matrix is deficient. In practical rank-deficient channels, MUSIC fails, and we need to propose new extended MUSIC approaches based on subspace enhancement to compensate for the limitation of MUSIC. Simulation results indicate that our proposed extended MUSIC algorithms have proper performance and moderate computational speed, and that they are even able to work in channels with an unknown sparsity level.

  10. Random Deep Belief Networks for Recognizing Emotions from Speech Signals.

    Science.gov (United States)

    Wen, Guihua; Li, Huihui; Huang, Jubing; Li, Danyang; Xun, Eryang

    2017-01-01

    Now human emotions can be recognized from speech signals using machine learning methods; however, these methods are challenged by lower recognition accuracies in real applications due to a lack of rich representation ability. Deep belief networks (DBN) can automatically discover multiple levels of representation in speech signals. To make full use of its advantages, this paper presents an ensemble of random deep belief networks (RDBN) method for speech emotion recognition. It first extracts the low level features of the input speech signal and then uses them to construct many random subspaces. Each random subspace is then provided to a DBN to yield the higher level features that are used as the input of the classifier to output an emotion label. All output emotion labels are then fused through majority voting to decide the final emotion label for the input speech signal. Experimental results on benchmark speech emotion databases show that RDBN has better accuracy than the compared methods for speech emotion recognition.
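
    The random-subspace-plus-voting construction can be sketched with stock scikit-learn components (handwritten-digit features and plain decision trees stand in for the speech features and DBNs of the paper; this is only an illustration of the ensemble idea).

      from sklearn.datasets import load_digits
      from sklearn.ensemble import BaggingClassifier
      from sklearn.model_selection import train_test_split

      X, y = load_digits(return_X_y=True)
      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

      ensemble = BaggingClassifier(
          n_estimators=20,
          max_features=0.5,    # each base learner sees a random 50% feature subspace
          bootstrap=False,     # keep all samples; only the feature subspace is randomized
          random_state=0,
      )
      ensemble.fit(Xtr, ytr)
      print(ensemble.score(Xte, yte))   # base-learner predictions are fused by (soft) voting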

  11. Free probability and random matrices

    CERN Document Server

    Mingo, James A

    2017-01-01

    This volume opens the world of free probability to a wide variety of readers. From its roots in the theory of operator algebras, free probability has intertwined with non-crossing partitions, random matrices, applications in wireless communications, representation theory of large groups, quantum groups, the invariant subspace problem, large deviations, subfactors, and beyond. This book puts a special emphasis on the relation of free probability to random matrices, but also touches upon the operator algebraic, combinatorial, and analytic aspects of the theory. The book serves as a combination textbook/research monograph, with self-contained chapters, exercises scattered throughout the text, and coverage of important ongoing progress of the theory. It will appeal to graduate students and all mathematicians interested in random matrices and free probability from the point of view of operator algebras, combinatorics, analytic functions, or applications in engineering and statistical physics.

  12. REVIEW APPROACHES ECONOMIC DEVELOPMENT OF THE TERRITORY OF THE ARCTIC ZONE OF THE RUSSIAN FEDERATION, PRESENTED IN THE FORM OF TARGET SUBSPACE

    Directory of Open Access Journals (Sweden)

    N. I. Didenko

    2015-01-01

    Full Text Available This paper presents a conceptual idea for organizing the management of the development of the Arctic zone of the Russian Federation as a set of target subspaces. Among the possible types of target subspace comprising the Arctic zone of the Russian Federation, seven subspaces are identified: basic cities, mobile camps, mineral resource production sites, recreational areas, fishing areas, the Northern Sea Route, and infrastructure for protecting safe existence in the Arctic. The task is to determine the most appropriate theoretical approach for the development of each target subspace. To this end, the theoretical approaches to economic growth and development are reviewed: the theory of the economic base (Economic Base Theory); resource theory (Staple Theory); sector theory (Sector Theory); the theory of growth poles (Growth Pole Theory); neoclassical theory (Neoclassical Growth Theory); the theory of inter-regional trade (Interregional Trade Theory); the theory of the commodity cycle; and entrepreneurial theories (Entrepreneurship Theories).

  13. Development of a Burnup Module DECBURN Based on the Krylov Subspace Method

    Energy Technology Data Exchange (ETDEWEB)

    Cho, J. Y.; Kim, K. S.; Shim, H. J.; Song, J. S

    2008-05-15

    This report is to develop a burnup module DECBURN that is essential for the reactor analysis and assembly homogenization codes to trace the fuel composition change during core burnup. The developed burnup module solves the burnup equation by the matrix exponential method based on the Krylov subspace method. The final solution of the matrix exponential is obtained by the matrix scaling and squaring method. To develop the DECBURN module, this report includes the following: (1) Krylov Subspace Method for the Burnup Equation, (2) Manufacturing of the DECBURN module, (3) Library Structure Setup and Library Manufacturing, (4) Examination of the DECBURN module, (5) Implementation in the DeCART code and Verification. The DECBURN library includes the decay constants, one-group cross sections and the fission yields. Examination of the DECBURN module is performed by manufacturing a driver program, and the results of the DECBURN module are compared with those of the ORIGEN program. Also, the DECBURN module implemented in the DeCART code is applied to the LWR depletion benchmark and an OPR-1000 pin cell problem, and the solutions are compared with the HELIOS code to verify the computational soundness and accuracy. In this process, the criticality calculation method and the predictor-corrector scheme are introduced into the DeCART code to serve the function of a homogenization code. The examination by a driver program shows that the DECBURN module produces exactly the same solution as the ORIGEN program. The DeCART code equipped with the DECBURN module produces a solution compatible with the other codes for the LWR depletion benchmark. Also, the multiplication factors of the DeCART code for the OPR-1000 pin cell problem agree with the HELIOS code within 100 pcm over the whole range of burnup steps. The multiplication factors with the criticality calculation are also compatible with the HELIOS code. These results mean that the developed DECBURN module works soundly and produces an accurate solution
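
    The final step the report describes, evaluating the matrix exponential of the burnup matrix by scaling and squaring, can be sketched on a toy transmutation chain (the decay constants and the three-nuclide chain below are invented; the Krylov subspace reduction and the DECBURN library are not reproduced).

      import numpy as np
      from scipy.linalg import expm

      lam1, lam2 = 1e-4, 5e-5                 # hypothetical decay constants [1/s]
      A = np.array([[-lam1,   0.0, 0.0],      # burnup equation dN/dt = A N
                    [ lam1, -lam2, 0.0],
                    [ 0.0,   lam2, 0.0]])
      N0 = np.array([1.0e20, 0.0, 0.0])       # initial number densities
      dt = 30 * 24 * 3600.0                   # one-month burnup step

      N = expm(A * dt) @ N0                   # scaling-and-squaring matrix exponential
      print(N, N.sum())                       # total inventory is conserved in this chain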

  14. Curve Evolution in Subspaces and Exploring the Metameric Class of Histogram of Gradient Orientation based Features using Nonlinear Projection Methods

    DEFF Research Database (Denmark)

    Tatu, Aditya Jayant

    This thesis deals with two unrelated issues, restricting curve evolution to subspaces and computing image patches in the equivalence class of Histogram of Gradient orientation based features using nonlinear projection methods. Curve evolution is a well known method used in various applications like...... tracking interfaces, active contour based segmentation methods and others. It can also be used to study shape spaces, as deforming a shape can be thought of as evolving its boundary curve. During curve evolution a curve traces out a path in the infinite dimensional space of curves. Due to application...... specific requirements like shape priors or a given data model, and due to limitations of the computer, the computed curve evolution forms a path in some finite dimensional subspace of the space of curves. We give methods to restrict the curve evolution to a finite dimensional linear or implicitly defined...

  15. Perturbed invariant subspaces and approximate generalized functional variable separation solution for nonlinear diffusion-convection equations with weak source

    Science.gov (United States)

    Xia, Ya-Rong; Zhang, Shun-Li; Xin, Xiang-Peng

    2018-03-01

    In this paper, we propose the concept of the perturbed invariant subspaces (PISs), and study the approximate generalized functional variable separation solution for the nonlinear diffusion-convection equation with weak source by the approximate generalized conditional symmetries (AGCSs) related to the PISs. Complete classification of the perturbed equations which admit the approximate generalized functional separable solutions (AGFSSs) is obtained. As a consequence, some AGFSSs to the resulting equations are explicitly constructed by way of examples.

  16. An additive subspace preconditioning method for the iterative solution of some problems with extreme contrasts in coefficients

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe

    2014-01-01

    Roč. 22, č. 4 (2014), s. 289-310 ISSN 1570-2820 R&D Projects: GA MŠk ED1.1.00/02.0070 Institutional support: RVO:68145535 Keywords: preconditioning * additive subspace * small eigenvalues Subject RIV: BA - General Mathematics Impact factor: 2.310, year: 2014 http://www.degruyter.com/view/j/jnma.2014.22.issue-4/jnma-2014-0013/jnma-2014-0013.xml

  17. Two-Level Chebyshev Filter Based Complementary Subspace Method: Pushing the Envelope of Large-Scale Electronic Structure Calculations.

    Science.gov (United States)

    Banerjee, Amartya S; Lin, Lin; Suryanarayana, Phanish; Yang, Chao; Pask, John E

    2018-06-12

    We describe a novel iterative strategy for Kohn-Sham density functional theory calculations aimed at large systems (>1,000 electrons), applicable to metals and insulators alike. In lieu of explicit diagonalization of the Kohn-Sham Hamiltonian on every self-consistent field (SCF) iteration, we employ a two-level Chebyshev polynomial filter based complementary subspace strategy to (1) compute a set of vectors that span the occupied subspace of the Hamiltonian; (2) reduce subspace diagonalization to just partially occupied states; and (3) obtain those states in an efficient, scalable manner via an inner Chebyshev filter iteration. By reducing the necessary computation to just partially occupied states and obtaining these through an inner Chebyshev iteration, our approach reduces the cost of large metallic calculations significantly, while eliminating subspace diagonalization for insulating systems altogether. We describe the implementation of the method within the framework of the discontinuous Galerkin (DG) electronic structure method and show that this results in a computational scheme that can effectively tackle bulk and nano systems containing tens of thousands of electrons, with chemical accuracy, within a few minutes or less of wall clock time per SCF iteration on large-scale computing platforms. We anticipate that our method will be instrumental in pushing the envelope of large-scale ab initio molecular dynamics. As a demonstration of this, we simulate a bulk silicon system containing 8,000 atoms at finite temperature, and obtain an average SCF step wall time of 51 s on 34,560 processors; thus allowing us to carry out 1.0 ps of ab initio molecular dynamics in approximately 28 h (of wall time).
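
    The basic building block, a Chebyshev polynomial filter that amplifies the occupied end of the spectrum, can be sketched on a random symmetric matrix (our generic illustration: the exact spectrum is computed only to set the filter bounds and check the answer, and the two-level complementary subspace scheme and DG discretization of the paper are not reproduced).

      import numpy as np

      def chebyshev_filter(H, X, m, a, b):
          """Apply a degree-m Chebyshev polynomial of H to the block X; the filter
          damps eigencomponents inside [a, b] and amplifies those below a."""
          e, c = (b - a) / 2.0, (b + a) / 2.0
          Y = (H @ X - c * X) / e
          for _ in range(2, m + 1):
              Y, X = 2.0 * (H @ Y - c * Y) / e - X, Y      # three-term Chebyshev recurrence
          return Y

      rng = np.random.default_rng(0)
      n, k = 200, 10
      M = rng.standard_normal((n, n))
      H = (M + M.T) / 2.0                                  # symmetric toy "Hamiltonian"
      evals = np.linalg.eigvalsh(H)                        # used only for bounds and checking

      X = rng.standard_normal((n, k))
      for _ in range(5):                                   # a few filtered subspace iterations
          X, _ = np.linalg.qr(chebyshev_filter(H, X, m=30, a=evals[k], b=evals[-1]))
      ritz = np.sort(np.linalg.eigvalsh(X.T @ H @ X))
      print(np.max(np.abs(ritz - evals[:k])))              # ~0: the block spans the occupied subspace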

  18. Data-driven modeling and predictive control for boiler-turbine unit using fuzzy clustering and subspace methods.

    Science.gov (United States)

    Wu, Xiao; Shen, Jiong; Li, Yiguo; Lee, Kwang Y

    2014-05-01

    This paper develops a novel data-driven fuzzy modeling strategy and predictive controller for a boiler-turbine unit using fuzzy clustering and subspace identification (SID) methods. To deal with the nonlinear behavior of the boiler-turbine unit, fuzzy clustering is used to provide an appropriate division of the operation region and develop the structure of the fuzzy model. Then by combining the input data with the corresponding fuzzy membership functions, the SID method is extended to extract the local state-space model parameters. Owing to the advantages of both methods, the resulting fuzzy model can represent the boiler-turbine unit very closely, and a fuzzy model predictive controller is designed based on this model. As an alternative approach, a direct data-driven fuzzy predictive control is also developed following the same clustering and subspace methods, where intermediate subspace matrices developed during the identification procedure are utilized directly as the predictor. Simulation results show the advantages and effectiveness of the proposed approach. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  19. A Poisson nonnegative matrix factorization method with parameter subspace clustering constraint for endmember extraction in hyperspectral imagery

    Science.gov (United States)

    Sun, Weiwei; Ma, Jun; Yang, Gang; Du, Bo; Zhang, Liangpei

    2017-06-01

    A new Bayesian method named Poisson Nonnegative Matrix Factorization with Parameter Subspace Clustering Constraint (PNMF-PSCC) has been presented to extract endmembers from Hyperspectral Imagery (HSI). First, the method integrates the linear spectral mixture model with the Bayesian framework and formulates endmember extraction into a Bayesian inference problem. Second, the Parameter Subspace Clustering Constraint (PSCC) is incorporated into the statistical program to consider the clustering of all pixels in the parameter subspace. The PSCC could enlarge differences among ground objects and helps find endmembers with smaller spectrum divergences. Meanwhile, the PNMF-PSCC method utilizes the Poisson distribution as the prior knowledge of spectral signals to better explain the quantum nature of light in the imaging spectrometer. Third, the optimization problem of PNMF-PSCC is formulated into maximizing the joint density via the Maximum A Posteriori (MAP) estimator. The program is finally solved by iteratively optimizing two sub-problems via the Alternating Direction Method of Multipliers (ADMM) framework and the FURTHESTSUM initialization scheme. Five state-of-the-art methods are implemented to make comparisons with the performance of PNMF-PSCC on both synthetic and real HSI datasets. Experimental results show that the PNMF-PSCC outperforms all five methods in Spectral Angle Distance (SAD) and Root-Mean-Square-Error (RMSE), and especially it could identify good endmembers for ground objects with smaller spectrum divergences.

  20. Modal–Physical Hybrid System Identification of High-rise Building via Subspace and Inverse-Mode Methods

    Directory of Open Access Journals (Sweden)

    Kohei Fujita

    2017-08-01

    Full Text Available A system identification (SI problem of high-rise buildings is investigated under restricted data environments. The shear and bending stiffnesses of a shear-bending model (SB model representing the high-rise buildings are identified via the smart combination of the subspace and inverse-mode methods. Since the shear and bending stiffnesses of the SB model can be identified in the inverse-mode method by using the lowest mode of horizontal displacements and floor rotation angles, the lowest mode of the objective building is identified first by using the subspace method. Identification of the lowest mode is performed by using the amplitude of transfer functions derived in the subspace method. Considering the resolution in measuring the floor rotation angles in lower stories, floor rotation angles in most stories are predicted from the floor rotation angle at the top floor. An empirical equation of floor rotation angles is proposed by investigating those for various building models. From the viewpoint of application of the present SI method to practical situations, a non-simultaneous measurement system is also proposed. In order to investigate the reliability and accuracy of the proposed SI method, a 10-story building frame subjected to micro-tremor is examined.

  1. Bio-inspired varying subspace based computational framework for a class of nonlinear constrained optimal trajectory planning problems.

    Science.gov (United States)

    Xu, Y; Li, N

    2014-09-01

    Biological species have produced many simple but efficient rules in their complex and critical survival activities such as hunting and mating. A common feature observed in several biological motion strategies is that the predator only moves along paths in a carefully selected or iteratively refined subspace (or manifold), which might be able to explain why these motion strategies are effective. In this paper, a unified linear algebraic formulation representing such a predator-prey relationship is developed to simplify the construction and refinement process of the subspace (or manifold). Specifically, the following three motion strategies are studied and modified: motion camouflage, constant absolute target direction and local pursuit. The framework constructed based on this varying subspace concept could significantly reduce the computational cost in solving a class of nonlinear constrained optimal trajectory planning problems, particularly for the case with severe constraints. Two non-trivial examples, a ground robot and a hypersonic aircraft trajectory optimization problem, are used to show the capabilities of the algorithms in this new computational framework.

  2. Bio-inspired varying subspace based computational framework for a class of nonlinear constrained optimal trajectory planning problems

    International Nuclear Information System (INIS)

    Xu, Y; Li, N

    2014-01-01

    Biological species have produced many simple but efficient rules in their complex and critical survival activities such as hunting and mating. A common feature observed in several biological motion strategies is that the predator only moves along paths in a carefully selected or iteratively refined subspace (or manifold), which might be able to explain why these motion strategies are effective. In this paper, a unified linear algebraic formulation representing such a predator–prey relationship is developed to simplify the construction and refinement process of the subspace (or manifold). Specifically, the following three motion strategies are studied and modified: motion camouflage, constant absolute target direction and local pursuit. The framework constructed based on this varying subspace concept could significantly reduce the computational cost in solving a class of nonlinear constrained optimal trajectory planning problems, particularly for the case with severe constraints. Two non-trivial examples, a ground robot and a hypersonic aircraft trajectory optimization problem, are used to show the capabilities of the algorithms in this new computational framework. (paper)

  3. Discrete-State Stochastic Models of Calcium-Regulated Calcium Influx and Subspace Dynamics Are Not Well-Approximated by ODEs That Neglect Concentration Fluctuations

    Science.gov (United States)

    Weinberg, Seth H.; Smith, Gregory D.

    2012-01-01

    Cardiac myocyte calcium signaling is often modeled using deterministic ordinary differential equations (ODEs) and mass-action kinetics. However, spatially restricted “domains” associated with calcium influx are small enough (e.g., 10⁻¹⁷ liters) that local signaling may involve 1–100 calcium ions. Is it appropriate to model the dynamics of subspace calcium using deterministic ODEs or, alternatively, do we require stochastic descriptions that account for the fundamentally discrete nature of these local calcium signals? To address this question, we constructed a minimal Markov model of a calcium-regulated calcium channel and associated subspace. We compared the expected value of fluctuating subspace calcium concentration (a result that accounts for the small subspace volume) with the corresponding deterministic model (an approximation that assumes large system size). When subspace calcium did not regulate calcium influx, the deterministic and stochastic descriptions agreed. However, when calcium binding altered channel activity in the model, the continuous deterministic description often deviated significantly from the discrete stochastic model, unless the subspace volume is unrealistically large and/or the kinetics of the calcium binding are sufficiently fast. This principle was also demonstrated using a physiologically realistic model of calmodulin regulation of L-type calcium channels introduced by Yue and coworkers. PMID:23509597

  4. Improved neutron-gamma discrimination for a 3He neutron detector using subspace learning methods

    Science.gov (United States)

    Wang, C. L.; Funk, L. L.; Riedel, R. A.; Berry, K. D.

    2017-05-01

    3He gas based neutron Linear-Position-Sensitive Detectors (LPSDs) have been used for many neutron scattering instruments. Traditional Pulse-height Analysis (PHA) for Neutron-Gamma Discrimination (NGD) resulted in the neutron-gamma efficiency ratio (NGD ratio) on the order of 10⁵-10⁶. The NGD ratios of 3He detectors need to be improved for even better scientific results from neutron scattering. Digital Signal Processing (DSP) analyses of waveforms were proposed for obtaining better NGD ratios, based on features extracted from rise-time, pulse amplitude, charge integration, a simplified Wiener filter, and the cross-correlation between individual and template waveforms of neutron and gamma events. Fisher Linear Discriminant Analysis (FLDA) and three Multivariate Analyses (MVAs) of the features were performed. The NGD ratios are improved by about 10²-10³ times compared with the traditional PHA method. Our results indicate the NGD capabilities of 3He tube detectors can be significantly improved with subspace-learning based methods, which may result in a reduced data-collection time and better data quality for further data reduction.
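
    A minimal illustration of the subspace-learning step with synthetic pulse-shape features (the three feature dimensions and all numbers are invented; the real waveform features and the multivariate analyses of the paper are not reproduced): Fisher LDA projects each event onto the one-dimensional subspace that best separates neutrons from gammas.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      n = 5000
      neutrons = rng.multivariate_normal([2.0, 1.0, 5.0], 0.2 * np.eye(3), n)   # e.g. rise time,
      gammas   = rng.multivariate_normal([1.2, 1.1, 3.5], 0.2 * np.eye(3), n)   # amplitude, charge
      X = np.vstack([neutrons, gammas])
      y = np.r_[np.ones(n), np.zeros(n)]

      lda = LinearDiscriminantAnalysis().fit(X, y)
      proj = X @ lda.coef_.ravel()                     # 1-D discriminant score for each event
      print(lda.score(X, y), proj[:n].mean() - proj[n:].mean())   # accuracy and class separation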

  5. Quasistatic Seismic Damage Indicators for RC Structures from Dissipating Energies in Tangential Subspaces

    Directory of Open Access Journals (Sweden)

    Wilfried B. Krätzig

    2014-01-01

    Full Text Available This paper applies recent research on structural damage description to earthquake-resistant design concepts. Based on the primary design aim of life safety, this work adopts the necessity of additional protection aims for property, installation, and equipment. This requires the definition of damage indicators, which are able to quantify the arising structural damage. As in present design, it applies nonlinear quasistatic (pushover) concepts due to code provisions as simplified dynamic design tools. Substituting for nonlinear time-history analyses in this way, seismic low-cycle fatigue of RC structures is approximated in a similar manner. The treatment is embedded into a finite element environment, and the tangential stiffness matrix K_T in tangential subspaces is then identified as the most general entry point for structural damage information. Its spectra of eigenvalues λ_i or natural frequencies ω_i of the structure serve to derive damage indicators D_i, applicable to the quasistatic evaluation of seismic damage. Because det K_T = 0 denotes structural failure, such damage indicators range from the virgin state D_i = 0 to failure D_i = 1 and thus correspond with FEMA proposals on performance-based seismic design. Finally, the developed concept is checked by reanalyses of two experimentally investigated RC frames.

  6. Study of ill-conditioning of Linac stereotactic irradiation subspaces using singular values decomposition analysis

    International Nuclear Information System (INIS)

    Platoni, K.; Lefkopoulos, D.; Grandjean, P.; Schlienger, M.

    1999-01-01

    A Linac stereotactic irradiation space is characterized by different angular separations of beams because of the geometry of the stereotactic irradiation. The regions of the stereotactic space characterized by low angular separations are one of the causes of ill-conditioning of the stereotactic irradiation inverse problem. The singular value decomposition (SVD) is a powerful mathematical analysis that permits the measurement of the ill-conditioning of the stereotactic irradiation problem. This study examines the ill-conditioning of the stereotactic irradiation space, provoked by the different angular separations of beams, using the SVD analysis. We subdivided the maximum irradiation space (MIS: (AA)_AP x (AA)_RL = 180° x 180°) into irradiation subspaces (ISSs), each characterized by its own angular separation. We studied the influence of ISSs on the SVD analysis and the evolution of the reconstruction quality of well defined three-dimensional dose matrices in each configuration. The more an ISS is characterized by low angular separation, the more the condition number and the reconstruction inaccuracy are increased. Based on the above results we created two reduced irradiation spaces (RIS: (AA)_AP x (AA)_RL = 180° x 140° and (AA)_AP x (AA)_RL = 180° x 120°) and compared the reconstruction quality of the RISs with respect to the MIS. The more an irradiation space is free of low angular separations, the more the irradiation space contains useful singular components. (orig.)
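
    The role of angular separation can be caricatured with a toy dose-deposition matrix whose columns are one-dimensional beam profiles (entirely schematic; the Linac geometry and the dose model of the study are not reproduced): nearly parallel beams give almost collinear columns and a much larger condition number, i.e. a more ill-conditioned inverse problem.

      import numpy as np

      def dose_column(angle, x):
          # crude 1-D "dose profile" of a beam arriving from a given angle
          return np.exp(-((x - np.cos(angle)) ** 2) / 0.1)

      x = np.linspace(-1.0, 1.0, 100)
      well_separated = np.column_stack([dose_column(a, x) for a in np.deg2rad([0, 60, 120])])
      poorly_separated = np.column_stack([dose_column(a, x) for a in np.deg2rad([0, 5, 10])])

      for D in (well_separated, poorly_separated):
          s = np.linalg.svd(D, compute_uv=False)
          print(s[0] / s[-1])        # the condition number explodes for low angular separation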

  7. Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods

    KAUST Repository

    Loizou, Nicolas

    2017-12-27

    In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonasymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesàro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
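
    A bare-bones version of one of the methods listed above, stochastic gradient descent with heavy-ball momentum on a consistent least-squares problem, is sketched below (step size, momentum parameter and problem sizes are arbitrary; none of the paper's convergence analysis or the stochastic-momentum variant is reproduced).

      import numpy as np

      def sgd_heavy_ball(A, b, x0, lr=0.02, beta=0.5, iters=2000, seed=0):
          """SGD with heavy-ball momentum for min_x 0.5*||Ax - b||^2, one row per step."""
          rng = np.random.default_rng(seed)
          x, v = x0.copy(), np.zeros_like(x0)
          for _ in range(iters):
              i = rng.integers(A.shape[0])
              g = (A[i] @ x - b[i]) * A[i]        # stochastic gradient from one sampled row
              v = beta * v - lr * g               # heavy-ball (momentum) update
              x = x + v
          return x

      rng = np.random.default_rng(1)
      A = rng.standard_normal((200, 10))
      x_true = rng.standard_normal(10)
      b = A @ x_true                              # consistent system
      print(np.linalg.norm(sgd_heavy_ball(A, b, np.zeros(10)) - x_true))   # small: linear convergence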

  8. Structural damage detection based on stochastic subspace identification and statistical pattern recognition: I. Theory

    Science.gov (United States)

    Ren, W. X.; Lin, Y. Q.; Fang, S. E.

    2011-11-01

    One of the key issues in vibration-based structural health monitoring is to extract the damage-sensitive but environment-insensitive features from sampled dynamic response measurements and to carry out the statistical analysis of these features for structural damage detection. A new damage feature is proposed in this paper by using the system matrices of the forward innovation model based on the covariance-driven stochastic subspace identification of a vibrating system. To overcome the variations of the system matrices, a non-singular transformation matrix is introduced so that the system matrices are normalized to their standard forms. For reducing the effects of modeling errors, noise and environmental variations on measured structural responses, a statistical pattern recognition paradigm is incorporated into the proposed method. The Mahalanobis and Euclidean distance decision functions of the damage feature vector are adopted by defining a statistics-based damage index. The proposed structural damage detection method is verified against one numerical signal and two numerical beams. It is demonstrated that the proposed statistics-based damage index is sensitive to damage and shows some robustness to the noise and false estimation of the system ranks. The method is capable of locating damage of the beam structures under different types of excitations. The robustness of the proposed damage detection method to the variations in environmental temperature is further validated in a companion paper by a reinforced concrete beam tested in the laboratory and a full-scale arch bridge tested in the field.
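
    The statistics-based damage index itself is easy to illustrate in isolation (the feature vectors below are random placeholders; extracting the actual features from the covariance-driven stochastic subspace model is not shown): the Mahalanobis distance of a new feature vector from the healthy baseline population serves as the damage index.

      import numpy as np

      def mahalanobis_index(feature, baseline):
          """Mahalanobis distance of a feature vector from the baseline (healthy) population."""
          mu = baseline.mean(axis=0)
          Sinv = np.linalg.inv(np.cov(baseline, rowvar=False))
          d = feature - mu
          return float(np.sqrt(d @ Sinv @ d))

      rng = np.random.default_rng(0)
      healthy = rng.multivariate_normal([1.0, 2.0, 3.0], 0.01 * np.eye(3), 500)   # baseline features
      test_ok = rng.multivariate_normal([1.0, 2.0, 3.0], 0.01 * np.eye(3))
      test_damaged = test_ok + np.array([0.0, -0.5, 0.4])                          # shifted feature

      print(mahalanobis_index(test_ok, healthy), mahalanobis_index(test_damaged, healthy))
      # the damaged case yields a clearly larger index, flagging a change in the system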

  9. Constitutive relations in multidimensional isotropic elasticity and their restrictions to subspaces of lower dimensions

    Science.gov (United States)

    Georgievskii, D. V.

    2017-07-01

    The mechanical meaning and the relationships among material constants in an n-dimensional isotropic elastic medium are discussed. The restrictions of the constitutive relations (Hooke's law) to subspaces of lower dimension, caused by the condition that an m-dimensional strain state or an m-dimensional stress state (1 ≤ m < n) is realized in the medium, are considered. Both the terminology and the general idea of the mathematical construction are chosen by analogy with the case n = 3 and m = 2, which is well known in the classical plane problem of elasticity theory. The quintuples of elastic constants of the same medium that enter both the n-dimensional relations and the relations written out for any m-dimensional restriction are expressed in terms of one another. These expressions in terms of the known constants, for example, of a three-dimensional medium, i.e., the classical elastic constants, enable us to judge the material properties of this medium immersed in a space of larger dimension.

  10. Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods

    KAUST Repository

    Loizou, Nicolas; Richtarik, Peter

    2017-01-01

    In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonasymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesàro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.

  11. Energy Landscape of Pentapeptides in a Higher-Order (ϕ,ψ) Conformational Subspace

    Directory of Open Access Journals (Sweden)

    Karim M. ElSawy

    2016-01-01

    Full Text Available The potential energy landscape of pentapeptides was mapped in a collective coordinate principal conformational subspace derived from principal component analysis of a nonredundant representative set of protein structures from the PDB. Three pentapeptide sequences that are known to be distinct in terms of their secondary structure characteristics, (Ala)5, (Gly)5, and Val.Asn.Thr.Phe.Val, were considered. Partitioning the landscapes into different energy valleys allowed for calculation of the relative propensities of the peptide secondary structures in a statistical mechanical framework. The distribution of the observed conformations of pentapeptide data showed good correspondence to the topology of the energy landscape of the (Ala)5 sequence where, in accord with reported trends, the α-helix showed a predominant propensity at 298 K. The topography of the landscapes indicates that the stabilization of the α-helix in the (Ala)5 sequence is enthalpic in nature while entropic factors are important for stabilization of the β-sheet in the Val.Asn.Thr.Phe.Val sequence. The results indicate that local interactions within small pentapeptide segments can lead to conformational preference of one secondary structure over the other where account of conformational entropy is important in order to reveal such preference. The method, therefore, can provide critical structural information for ab initio protein folding methods.

  12. Model reduction and frequency residuals for a robust estimation of nonlinearities in subspace identification

    Science.gov (United States)

    De Filippis, G.; Noël, J. P.; Kerschen, G.; Soria, L.; Stephan, C.

    2017-09-01

    The introduction of the frequency-domain nonlinear subspace identification (FNSI) method in 2013 constitutes one in a series of recent attempts toward developing a realistic, first-generation framework applicable to complex structures. While this method showed promising capabilities when applied to academic structures, it is still confronted with a number of limitations which need to be addressed. In particular, the removal of nonphysical poles in the identified nonlinear models is a distinct challenge. In the present paper, it is proposed as a first contribution to operate directly on the identified state-space matrices to carry out spurious pole removal. A modal-space decomposition of the state and output matrices is examined to discriminate genuine from numerical poles, prior to estimating the extended input and feedthrough matrices. The final state-space model thus contains physical information only and naturally leads to nonlinear coefficients free of spurious variations. Besides spurious variations due to nonphysical poles, vibration modes lying outside the frequency band of interest may also produce drifts of the nonlinear coefficients. The second contribution of the paper is to include residual terms accounting for the existence of these modes. The proposed improved FNSI methodology is validated numerically and experimentally using a full-scale structure, the Morane-Saulnier Paris aircraft.

  13. Similarity measurement method of high-dimensional data based on normalized net lattice subspace

    Institute of Scientific and Technical Information of China (English)

    Li Wenfa; Wang Gongming; Li Ke; Huang Su

    2017-01-01

    The performance of conventional similarity measurement methods is affected seriously by the curse of dimensionality of high-dimensional data. The reason is that data differences between sparse and noisy dimensionalities occupy a large proportion of the similarity, leading to the dissimilarity between any results. A similarity measurement method of high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only the components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude higher than that of the conventional method. In addition, the similarity range of this method in different dimensions is [0, 1], which is suitable for similarity analysis after dimensionality reduction.

  14. Damage location and quantification of a pretensioned concrete beam using stochastic subspace identification

    Science.gov (United States)

    Cancelli, Alessandro; Micheli, Laura; Laflamme, Simon; Alipour, Alice; Sritharan, Sri; Ubertini, Filippo

    2017-04-01

    Stochastic subspace identification (SSID) is a first-order linear system identification technique enabling modal analysis through the time domain. Research in the field of structural health monitoring has demonstrated that SSID can be used to successfully retrieve modal properties, including modal damping ratios, using output-only measurements. In this paper, the utilization of SSID for indirectly retrieving structures' stiffness matrix was investigated, through the study of a simply supported reinforced concrete beam subjected to dynamic loads. Hence, by introducing a physical model of the structure, a second-order identification method is achieved. The reconstruction is based on system condensation methods, which enables calculation of reduced order stiffness, damping, and mass matrices for the structural system. The methods compute the reduced order matrices directly from the modal properties, obtained through the use of SSID. Lastly, the reduced properties of the system are used to reconstruct the stiffness matrix of the beam. The proposed approach is first verified through numerical simulations and then validated using experimental data obtained from a full-scale reinforced concrete beam that experienced progressive damage. Results show that the SSID technique can be used to diagnose, locate, and quantify damage through the reconstruction of the stiffness matrix.

  15. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    Science.gov (United States)

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

    Face recognition, being the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution because it may work very well on one set of images with, say, illumination changes but may not work properly on another set of image variations like expression variations. This study is motivated by the fact that any single classifier cannot claim to show generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also addressing the question of the suitability of any classifier for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, which perform better on one task or the other. These classifiers are then combined into an ensemble classifier by two different strategies of weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging and facial expressions.

  16. Stable and self-adaptive performance of mechanically pumped CO2 two-phase loops for AMS-02 tracker thermal control in vacuum

    International Nuclear Information System (INIS)

    Zhang, Z.; Sun, X.-H.; Tong, G.-N.; Huang, Z.-C.; He, Z.-H.; Pauw, A.; Es, J. van; Battiston, R.; Borsini, S.; Laudi, E.; Verlaat, B.; Gargiulo, C.

    2011-01-01

    A mechanically pumped CO2 two-phase loop cooling system was developed for the temperature control of the silicon tracker of AMS-02, a cosmic particle detector to operate on the International Space Station. The cooling system (called TTCS, or Tracker Thermal Control System) consists of two evaporators in parallel to collect heat from the tracker's front-end electronics, two radiators in parallel to emit the heat into space, a centrifugal pump that circulates the CO2 fluid carrying the heat to the radiators, and an accumulator that controls the pressure, and thus the temperature, of the evaporators. Thermal vacuum tests were performed to check and qualify the system operation in a simulated space thermal environment. In this paper, we report the test results, which show that the TTCS exhibited excellent temperature control ability, including temperature homogeneity and stability, and self-adaptive ability to the varying external heat fluxes to the radiators. Highlights: → The active-pumped CO2 two-phase cooling loop passed the thermal vacuum test. → It provides highly homogeneous and stable thermal boundaries. → Its working temperature is controllable in a vacuum environment. → It possesses self-adaptive ability to imbalanced external heat fluxes.

  17. Distributed and self-adaptive vehicle speed estimation in the composite braking case for four-wheel drive hybrid electric car

    Science.gov (United States)

    Zhao, Z.-G.; Zhou, L.-J.; Zhang, J.-T.; Zhu, Q.; Hedrick, J.-K.

    2017-05-01

    Considering the controllability and observability of the braking torques of the hub motor, Integrated Starter Generator (ISG), and hydraulic brake for four-wheel drive (4WD) hybrid electric cars, a distributed and self-adaptive vehicle speed estimation algorithm for different braking situations is proposed that fully utilises the Electronic Stability Program (ESP) sensor signals and multiple power-source signals. Firstly, the simulation platform of a 4WD hybrid electric car was established, which integrates an electronic-hydraulic composite braking system model and its control strategy, a nonlinear seven degrees-of-freedom vehicle dynamics model, and the Burckhardt tyre model. Secondly, combining the braking torque signals with the ESP signals, a self-adaptive unscented Kalman sub-filter and main filter adaptable to the observation noise were designed. Thirdly, fusion rules for the sub-filters and master filter are proposed, and the estimation results were compared with the simulated value of the real vehicle speed. Finally, based on a hardware-in-the-loop platform and by picking up the regenerative motor torque signals and wheel cylinder pressure signals, the proposed speed estimation algorithm was tested in the case of moderate braking on a highly adhesive road, the case of Antilock Braking System (ABS) action on a slippery road, and the case of ABS action on an icy road. Test results show that the presented vehicle speed estimation algorithm has not only high precision but also strong adaptability in the composite braking case.

  18. Imaging of heart acoustic based on the sub-space methods using a microphone array.

    Science.gov (United States)

    Moghaddasi, Hanie; Almasganj, Farshad; Zoroufian, Arezoo

    2017-07-01

    Heart disease is one of the leading causes of death around the world. The phonocardiogram (PCG) is an important bio-signal which represents the acoustic activity of the heart, typically without any spatiotemporal information about the involved acoustic sources. The aim of this study is to analyze the PCG by employing a microphone array with which the heart's internal sound sources can also be localized. In this paper, we propose a modality by which the locations of the active sources in the heart can be investigated during a cardiac cycle. To this end, a microphone array with six microphones is employed as the recording setup placed on the human chest. The Group Delay MUSIC algorithm, a subspace-based localization method, is then used to estimate the location of the heart sources in different phases of the PCG. We achieved a 0.14 cm mean error for the sources of the first heart sound (S1) simulator and a 0.21 cm mean error for the sources of the second heart sound (S2) simulator with the Group Delay MUSIC algorithm. The acoustic diagrams created for human subjects show distinct patterns in various phases of the cardiac cycle, such as the first and second heart sounds. Moreover, the evaluated source locations for the heart valves match those obtained via 4-dimensional (4D) echocardiography applied to a real human case. Imaging of the heart acoustic map presents a new outlook on the acoustic properties of the cardiovascular system and disorders of the valves and thereby, in the future, could be used as a new diagnostic tool. Copyright © 2017. Published by Elsevier B.V.
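
    For readers unfamiliar with subspace localization, the sketch below implements plain narrowband MUSIC for a six-microphone line array; it is not the Group Delay MUSIC variant used in the paper, and the array geometry, analysis frequency and noise level are illustrative assumptions.

```python
import numpy as np

def music_spectrum(X, array_pos, freq, c=343.0, n_sources=1, grid=None):
    """Narrowband MUSIC pseudospectrum for a small line array of microphones.

    X         : (n_mics, n_snapshots) complex snapshots at one frequency bin
    array_pos : (n_mics,) microphone positions along a line [m]
    freq      : analysis frequency [Hz]
    """
    if grid is None:
        grid = np.linspace(-np.pi / 2, np.pi / 2, 181)        # candidate angles [rad]
    R = X @ X.conj().T / X.shape[1]                           # sample covariance
    _, eigvec = np.linalg.eigh(R)                             # eigenvalues ascending
    En = eigvec[:, : X.shape[0] - n_sources]                  # noise subspace
    k = 2 * np.pi * freq / c
    p = np.empty_like(grid)
    for i, theta in enumerate(grid):
        a = np.exp(-1j * k * array_pos * np.sin(theta))       # steering vector
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a) # pseudospectrum peaks at DOAs
    return grid, p

# Toy usage: one synthetic source at +20 degrees, 4 cm microphone spacing.
rng = np.random.default_rng(0)
pos = np.arange(6) * 0.04
theta0, f0 = np.deg2rad(20.0), 2000.0
a0 = np.exp(-1j * 2 * np.pi * f0 / 343.0 * pos * np.sin(theta0))
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
noise = 0.1 * (rng.standard_normal((6, 200)) + 1j * rng.standard_normal((6, 200)))
X = np.outer(a0, s) + noise
angles, spec = music_spectrum(X, pos, f0)
print("estimated DOA [deg]:", np.rad2deg(angles[np.argmax(spec)]))
```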

  19. Synergistic Instance-Level Subspace Alignment for Fine-Grained Sketch-Based Image Retrieval.

    Science.gov (United States)

    Li, Ke; Pang, Kaiyue; Song, Yi-Zhe; Hospedales, Timothy M; Xiang, Tao; Zhang, Honggang

    2017-08-25

    We study the problem of fine-grained sketch-based image retrieval. By performing instance-level (rather than category-level) retrieval, it embodies a timely and practical application, particularly with the ubiquitous availability of touchscreens. Three factors contribute to the challenging nature of the problem: (i) free-hand sketches are inherently abstract and iconic, making visual comparisons with photos difficult, (ii) sketches and photos are in two different visual domains, i.e. black and white lines vs. color pixels, and (iii) fine-grained distinctions are especially challenging when executed across domain and abstraction-level. To address these challenges, we propose to bridge the image-sketch gap both at the high-level via parts and attributes, as well as at the low-level, via introducing a new domain alignment method. More specifically, (i) we contribute a dataset with 304 photos and 912 sketches, where each sketch and image is annotated with its semantic parts and associated part-level attributes. With the help of this dataset, we investigate (ii) how strongly-supervised deformable part-based models can be learned that subsequently enable automatic detection of part-level attributes, and provide pose-aligned sketch-image comparisons. To reduce the sketch-image gap when comparing low-level features, we also (iii) propose a novel method for instance-level domain-alignment, that exploits both subspace and instance-level cues to better align the domains. Finally (iv) these are combined in a matching framework integrating aligned low-level features, mid-level geometric structure and high-level semantic attributes. Extensive experiments conducted on our new dataset demonstrate effectiveness of the proposed method.

  20. An acceleration technique for 2D MOC based on Krylov subspace and domain decomposition methods

    International Nuclear Information System (INIS)

    Zhang Hongbo; Wu Hongchun; Cao Liangzhi

    2011-01-01

    Highlights: → We convert MOC into a linear system solved by GMRES as an acceleration method. → We use a domain decomposition method to overcome the inefficiency on large matrices. → Parallel technology is applied and a matched ray tracing system is developed. → Results show good efficiency even in large-scale and strong scattering problems. → The emphasis is that the technique is geometry-flexible. - Abstract: The method of characteristics (MOC) has great geometrical flexibility but poor computational efficiency in neutron transport calculations. The generalized minimal residual (GMRES) method, a type of Krylov subspace method, is utilized to accelerate the 2D generalized-geometry characteristics solver AutoMOC. In this technique, a linear algebraic equation system for angular flux moments and boundary fluxes is derived to replace the conventional characteristics sweep (i.e. inner iteration) scheme, and the GMRES method is then implemented as an efficient linear system solver. This acceleration method is proved to be reliable in theory and simple to implement. Furthermore, as it introduces no restriction on geometry treatment, it is suitable for accelerating an arbitrary-geometry MOC solver. However, it is observed that the speedup decreases when the matrix becomes larger. The spatial domain decomposition method and multiprocessing parallel technology are then employed to overcome the problem. The calculation domain is partitioned into several sub-domains. For each of them, a smaller matrix is established and solved by GMRES, and the adjacent sub-domains are coupled by 'inner-edges', where the trajectory mismatches are considered adequately. Moreover, a matched ray tracing system is developed on the basis of AutoCAD, which allows a user to define the sub-domains on demand conveniently. Numerical results demonstrate that the acceleration techniques are efficient without loss of accuracy, even in the case of large-scale and strong scattering problems.
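
    The core of the acceleration, handing a sparse non-symmetric linear system to a preconditioned GMRES solver instead of sweeping, can be sketched with SciPy as below; the matrix is a made-up example and the ILU preconditioner only loosely stands in for the sub-domain coupling, so none of this reflects the actual MOC operator.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy sparse, non-symmetric system standing in for the flux-moment equations.
n = 2000
main = 4.0 + 0.1 * np.arange(n)
A = sp.diags([main, -np.ones(n - 1), -1.2 * np.ones(n - 1)], [0, -1, 1], format="csc")
b = np.ones(n)

# Incomplete-LU factorization used as a preconditioner for restarted GMRES.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M, restart=30, maxiter=200)
print("converged" if info == 0 else f"gmres flag {info}",
      "| residual norm:", np.linalg.norm(b - A @ x))
```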

  1. Comparative analysis of different weight matrices in subspace system identification for structural health monitoring

    Science.gov (United States)

    Shokravi, H.; Bakhary, NH

    2017-11-01

    Subspace System Identification (SSI) is considered one of the most reliable tools for identification of system parameters. The performance of an SSI scheme is considerably affected by the structure of the associated identification algorithm. The weight matrix is a variable in SSI that is used to reduce the dimensionality of the state-space equation. Generally, one of the weight matrices of Principal Component (PC), Unweighted Principal Component (UPC) and Canonical Variate Analysis (CVA) is used in the structure of an SSI algorithm. An increasing number of studies in the field of structural health monitoring are using SSI for damage identification. However, studies that evaluate the performance of the weight matrices, particularly with regard to accuracy, noise resistance, and time complexity, are very limited. In this study, the accuracy, noise-robustness, and time-efficiency of the weight matrices are compared using different qualitative and quantitative metrics. Three evaluation metrics of pole analysis, fit values and elapsed time are used in the assessment process. A numerical model of a mass-spring-dashpot and operational data are used in this research. It is observed that the principal components obtained using the PC algorithm are more robust against noise uncertainty and give more stable results for the pole distribution. Furthermore, higher estimation accuracy is achieved using the UPC algorithm. CVA had the worst performance for pole analysis and time efficiency. The superior performance of the UPC algorithm in elapsed time is attributed to its use of unit weight matrices. The obtained results demonstrate that the process of reducing dimensionality in CVA and PC does not enhance time efficiency but yields improved modal identification in PC.

  2. An efficient scenario-based and fuzzy self-adaptive learning particle swarm optimization approach for dynamic economic emission dispatch considering load and wind power uncertainties

    International Nuclear Information System (INIS)

    Bahmani-Firouzi, Bahman; Farjah, Ebrahim; Azizipanah-Abarghooee, Rasoul

    2013-01-01

    Renewable energy resources such as wind power plants are playing an ever-increasing role in power generation. This paper extends the dynamic economic emission dispatch problem by incorporating wind power plant. This problem is a multi-objective optimization approach in which total electrical power generation costs and combustion emissions are simultaneously minimized over a short-term time span. A stochastic approach based on scenarios is suggested to model the uncertainty associated with hourly load and wind power forecasts. A roulette wheel technique on the basis of probability distribution functions of load and wind power is implemented to generate scenarios. As a result, the stochastic nature of the suggested problem is emancipated by decomposing it into a set of equivalent deterministic problem. An improved multi-objective particle swarm optimization algorithm is applied to obtain the best expected solutions for the proposed stochastic programming framework. To enhance the overall performance and effectiveness of the particle swarm optimization, a fuzzy adaptive technique, θ-search and self-adaptive learning strategy for velocity updating are used to tune the inertia weight factor and to escape from local optima, respectively. The suggested algorithm goes through the search space in the polar coordinates instead of the Cartesian one; whereby the feasible space is more compact. In order to evaluate the efficiency and feasibility of the suggested framework, it is applied to two test systems with small and large scale characteristics. - Highlights: ► Formulates multi-objective DEED problem under a stochastic programming framework. ► Considers uncertainties related to forecasted values of load demand and wind power. ► Proposes an interactive fuzzy satisfying method based on the novel FSALPSO. ► Presents a new self-adaptive learning strategy to improve original PSO algorithm

  3. Margin-Wide Earthquake Subspace Scanning Along the Cascadia Subduction Zone Using the Cascadia Initiative Amphibious Dataset

    Science.gov (United States)

    Morton, E.; Bilek, S. L.; Rowe, C. A.

    2017-12-01

    Understanding the spatial extent and behavior of the interplate contact in the Cascadia Subduction Zone (CSZ) may prove pivotal to preparation for future great earthquakes, such as the M9 event of 1700. Current and historic seismic catalogs are limited in their integrity by their short duration, given the recurrence rate of great earthquakes, and by their rather high magnitude of completeness for the interplate seismic zone, due to its offshore distance from these land-based networks. This issue is addressed via the 2011-2015 Cascadia Initiative (CI) amphibious seismic array deployment, which combined coastal land seismometers with more than 60 ocean-bottom seismometers (OBS) situated directly above the presumed plate interface. We search the CI dataset for small, previously undetected interplate earthquakes to identify seismic patches on the megathrust. Using the automated subspace detection method, we search for previously undetected events. Our subspace comprises eigenvectors derived from CI OBS and on-land waveforms extracted for existing catalog events that appear to have occurred on the plate interface. Previous work focused on analysis of two repeating event clusters off the coast of Oregon spanning all 4 years of deployment. Here we expand earlier results to include detection and location analysis to the entire CSZ margin during the first year of CI deployment, with more than 200 new events detected for the central portion of the margin. Template events used for subspace scanning primarily occurred beneath the land surface along the coast, at the downdip edge of modeled high slip patches for the 1700 event, with most concentrated at the northwestern edge of the Olympic Peninsula.
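
    A generic subspace detector of the kind referred to above can be sketched as follows: stack normalized template waveforms, take an SVD to form an orthonormal basis, then slide a window over the continuous record and compute the fraction of window energy captured by the basis. The synthetic wavelet, subspace rank and threshold are illustrative, not the authors' processing chain.

```python
import numpy as np

def build_subspace(templates, rank=2):
    """Orthonormal basis (columns) spanning the dominant template directions."""
    T = np.stack([t / np.linalg.norm(t) for t in templates], axis=1)
    U, _, _ = np.linalg.svd(T, full_matrices=False)
    return U[:, :rank]

def subspace_detector(data, U, threshold=0.7):
    """Sliding statistic c(i) = ||U.T x_i||^2 / ||x_i||^2, bounded in [0, 1]."""
    n = U.shape[0]
    stats = np.zeros(len(data) - n + 1)
    for i in range(len(stats)):
        x = data[i:i + n]
        energy = x @ x
        if energy > 0:
            proj = U.T @ x
            stats[i] = (proj @ proj) / energy
    return stats, np.flatnonzero(stats >= threshold)

# Toy usage: noisy copies of a synthetic wavelet define the subspace; one more
# copy is buried in a longer noise record and recovered by the detector.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
wavelet = np.sin(2 * np.pi * 8 * t) * np.exp(-6 * t)
templates = [wavelet + 0.05 * rng.standard_normal(t.size) for _ in range(4)]
stream = 0.2 * rng.standard_normal(5000)
stream[1200:1400] += wavelet
stats, picks = subspace_detector(stream, build_subspace(templates))
print("max statistic %.2f near sample %d" % (stats.max(), stats.argmax()))
```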

  4. Time-domain simulations for metallic nano-structures - a Krylov-subspace approach beyond the limitations of FDTD

    Energy Technology Data Exchange (ETDEWEB)

    Koenig, Michael [Institut fuer Theoretische Festkoerperphysik, Universitaet Karlsruhe (Germany); Karlsruhe School of Optics and Photonics (KSOP), Universitaet Karlsruhe (Germany); Niegemann, Jens; Tkeshelashvili, Lasha; Busch, Kurt [Institut fuer Theoretische Festkoerperphysik, Universitaet Karlsruhe (Germany); DFG Forschungszentrum Center for Functional Nanostructures (CFN), Universitaet Karlsruhe (Germany); Karlsruhe School of Optics and Photonics (KSOP), Universitaet Karlsruhe (Germany)

    2008-07-01

    Numerical simulations of metallic nano-structures are crucial for the efficient design of plasmonic devices. Conventional time-domain solvers such as FDTD introduce large numerical errors especially at metallic surfaces. Our approach combines a discontinuous Galerkin method on an adaptive mesh for the spatial discretisation with a Krylov-subspace technique for the time-stepping procedure. Thus, the higher-order accuracy in both time and space is supported by unconditional stability. As illustrative examples, we compare numerical results obtained with our method against analytical reference solutions and results from FDTD calculations.
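
    A minimal illustration of Krylov-subspace time stepping for a semi-discrete system du/dt = A u is given below: an Arnoldi basis is built and the exponential of the small Hessenberg matrix propagates the state over one large step. The operator A and step size are toy assumptions, not the authors' discontinuous Galerkin discretisation.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm_step(A, u, dt, m=20):
    """Approximate exp(dt*A) @ u in an m-dimensional Krylov subspace (Arnoldi)."""
    n = u.size
    beta = np.linalg.norm(u)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = u / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                 # happy breakdown: exact subspace found
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(dt * H[:m, :m]) @ e1)

# Toy usage: a damped, weakly skew operator advanced by one large time step.
rng = np.random.default_rng(0)
n = 400
A = -np.eye(n) + 0.5 * np.diag(np.ones(n - 1), 1) - 0.5 * np.diag(np.ones(n - 1), -1)
u0 = rng.standard_normal(n)
u1 = krylov_expm_step(A, u0, dt=0.5)
print("relative error vs dense expm:",
      np.linalg.norm(u1 - expm(0.5 * A) @ u0) / np.linalg.norm(u0))
```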

  5. Fast image interpolation via random forests.

    Science.gov (United States)

    Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui

    2015-10-01

    This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring low computation. The underlying idea of this work is to apply random forests to classify the natural image patch space into numerous subspaces and learn a linear regression model for each subspace that maps a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while taking only 0.3% of its computation time.
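
    A toy rendering of the FIRF idea is sketched below, under the assumption that it suffices to partition patch space with a small forest and fit one ridge regression per leaf; the synthetic "patches" and all sizes are illustrative, not the paper's training pipeline.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.linear_model import Ridge

# Synthetic stand-in data: "low-res" patch vectors X mapped to "high-res" targets Y.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 16))                           # e.g. flattened 4x4 patches
W = rng.standard_normal((16, 25))
Y = np.tanh(X) @ W + 0.05 * rng.standard_normal((2000, 25))   # nonlinear patch mapping

# The forest partitions patch space into subspaces (its leaves) ...
forest = ExtraTreesRegressor(n_estimators=8, max_leaf_nodes=64, random_state=0).fit(X, Y)

# ... and each (tree, leaf) subspace gets its own linear LR -> HR mapping.
leaf_models = []
for tree in forest.estimators_:
    leaves = tree.apply(X)
    models = {leaf: Ridge(alpha=1e-2).fit(X[leaves == leaf], Y[leaves == leaf])
              for leaf in np.unique(leaves)}
    leaf_models.append(models)

def predict(Xq):
    """Average the per-leaf linear predictions over all trees."""
    out = np.zeros((Xq.shape[0], Y.shape[1]))
    for tree, models in zip(forest.estimators_, leaf_models):
        leaves = tree.apply(Xq)
        for leaf in np.unique(leaves):
            idx = leaves == leaf
            out[idx] += models[leaf].predict(Xq[idx])
    return out / len(forest.estimators_)

print(predict(rng.standard_normal((5, 16))).shape)   # (5, 25): one HR patch per query
```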

  6. Contractions without non-trivial invariant subspaces satisfying a positivity condition

    Directory of Open Access Journals (Sweden)

    Bhaggy Duggal

    2016-04-01

    Full Text Available Abstract An operator $A \in B(\mathcal{H})$, the algebra of bounded linear transformations on a complex infinite-dimensional Hilbert space $\mathcal{H}$, belongs to class $\mathcal{A}(n)$ (resp. $\mathcal{A}(*-n)$) if $\vert A\vert^{2}\leq\vert A^{n+1}\vert^{\frac{2}{n+1}}$ (resp. $\vert A^{*}\vert^{2}\leq\vert A^{n+1}\vert^{\frac{2}{n+1}}$) for some integer $n\geq 1$, and an operator $A \in B(\mathcal{H})$ is called $n$-paranormal, denoted $A\in\mathcal{P}(n)$ (resp. $*-n$-paranormal, denoted $A\in\mathcal{P}(*-n)$), if $\Vert Ax\Vert^{n+1}\leq\Vert A^{n+1}x\Vert\,\Vert x\Vert^{n}$ (resp. $\Vert A^{*}x\Vert^{n+1}\leq\Vert A^{n+1}x\Vert\,\Vert x\Vert^{n}$) for some integer $n\geq 1$ and all $x\in\mathcal{H}$. In this paper, we prove that if $A\in\{\mathcal{A}(n)\cup\mathcal{P}(n)\}$ (resp. $A\in\{\mathcal{A}(*-n)\cup\mathcal{P}(*-n)\}$) is a contraction without a non-trivial invariant subspace, then $A$, $\vert A^{n+1}\vert^{\frac{2}{n+1}}-\vert A\vert^{2}$ and $\vert A^{n+1}\vert^{2}-\frac{n+1}{n}\vert A\vert^{2}+1$ (resp. $A$, $\vert A^{n+1}\vert^{\frac{2}{n+1}}-\vert A^{*}\vert^{2}$ and $\vert A^{n+2}\vert^{2}-\frac{n+1}{n}\vert A\vert^{2}+1$) are proper contractions.

  7. Random ensemble learning for EEG classification.

    Science.gov (United States)

    Hosseini, Mohammad-Parsa; Pompili, Dario; Elisevich, Kost; Soltanian-Zadeh, Hamid

    2018-01-01

    Real-time detection of seizure activity in epilepsy patients is critical in averting seizure activity and improving patients' quality of life. Accurate evaluation, presurgical assessment, seizure prevention, and emergency alerts all depend on the rapid detection of seizure onset. A new method of feature selection and classification for rapid and precise seizure detection is discussed wherein informative components of electroencephalogram (EEG)-derived data are extracted and an automatic method is presented using infinite independent component analysis (I-ICA) to select independent features. The feature space is divided into subspaces via random selection and multichannel support vector machines (SVMs) are used to classify these subspaces. The result of each classifier is then combined by majority voting to establish the final output. In addition, a random subspace ensemble using a combination of SVM, multilayer perceptron (MLP) neural network and an extended k-nearest neighbors (k-NN), called extended nearest neighbor (ENN), is developed for the EEG and electrocorticography (ECoG) big data problem. To evaluate the solution, a benchmark ECoG of eight patients with temporal and extratemporal epilepsy was implemented in a distributed computing framework as a multitier cloud-computing architecture. Using leave-one-out cross-validation, the accuracy, sensitivity, specificity, and both false positive and false negative ratios of the proposed method were found to be 0.97, 0.98, 0.96, 0.04, and 0.02, respectively. Application of the solution to cases under investigation with ECoG has also been effected to demonstrate its utility. Copyright © 2017 Elsevier B.V. All rights reserved.
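
    The random-subspace-plus-majority-vote construction can be sketched as below, with synthetic features standing in for the I-ICA components and SVMs as the only base classifier; the subspace size and ensemble size are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for extracted EEG/ECoG features (not real patient data).
X, y = make_classification(n_samples=600, n_features=120, n_informative=30, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
n_classifiers, subspace_dim = 15, 25
members = []
for _ in range(n_classifiers):
    feats = rng.choice(X.shape[1], size=subspace_dim, replace=False)   # random subspace
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Xtr[:, feats], ytr)
    members.append((feats, clf))

# Majority vote of the subspace classifiers (binary labels 0/1).
votes = np.stack([clf.predict(Xte[:, feats]) for feats, clf in members])
y_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy: %.3f" % (y_pred == yte).mean())
```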

  8. Estimating the number of components and detecting outliers using Angle Distribution of Loading Subspaces (ADLS) in PCA analysis.

    Science.gov (United States)

    Liu, Y J; Tran, T; Postma, G; Buydens, L M C; Jansen, J

    2018-08-22

    Principal Component Analysis (PCA) is widely used in analytical chemistry, to reduce the dimensionality of a multivariate data set in a few Principal Components (PCs) that summarize the predominant patterns in the data. An accurate estimate of the number of PCs is indispensable to provide meaningful interpretations and extract useful information. We show how existing estimates for the number of PCs may fall short for datasets with considerable coherence, noise or outlier presence. We present here how Angle Distribution of the Loading Subspaces (ADLS) can be used to estimate the number of PCs based on the variability of loading subspace across bootstrap resamples. Based on comprehensive comparisons with other well-known methods applied on simulated dataset, we show that ADLS (1) may quantify the stability of a PCA model with several numbers of PCs simultaneously; (2) better estimate the appropriate number of PCs when compared with the cross-validation and scree plot methods, specifically for coherent data, and (3) facilitate integrated outlier detection, which we introduce in this manuscript. We, in addition, demonstrate how the analysis of different types of real-life spectroscopic datasets may benefit from these advantages of ADLS. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
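
    The bootstrap idea behind ADLS can be sketched as follows: refit the PCA loadings on bootstrap resamples and inspect the distribution of principal angles to the full-data loading subspace. The summary used here (median largest angle) and the toy data are illustrative; the paper's specific indices and outlier handling are not reproduced.

```python
import numpy as np
from scipy.linalg import subspace_angles
from sklearn.decomposition import PCA

def loading_subspace_angles(X, n_components, n_boot=200, seed=0):
    """Largest principal angle between the full-data loading subspace and each
    bootstrap-resampled loading subspace, for a candidate number of PCs."""
    rng = np.random.default_rng(seed)
    ref = PCA(n_components).fit(X).components_.T            # (n_features, n_components)
    angles = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, X.shape[0], X.shape[0])       # bootstrap resample of rows
        boot = PCA(n_components).fit(X[idx]).components_.T
        angles[b] = subspace_angles(ref, boot).max()        # radians; 0 = identical span
    return angles

# Toy data with 3 strong components plus noise: the angle distribution stays tight
# up to 3 PCs and broadens once noise directions are included.
rng = np.random.default_rng(1)
scores = rng.standard_normal((300, 3)) * [5.0, 3.0, 2.0]
X = scores @ rng.standard_normal((3, 40)) + 0.3 * rng.standard_normal((300, 40))
for k in (2, 3, 5, 8):
    a = loading_subspace_angles(X, k)
    print(f"{k} PCs: median largest angle = {np.degrees(np.median(a)):.1f} deg")
```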

  9. Single and multiple object tracking using log-euclidean Riemannian subspace and block-division appearance model.

    Science.gov (United States)

    Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei

    2012-12-01

    Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
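
    The key step, taking SPD covariance descriptors into a vector space through the matrix logarithm so that ordinary subspace learning applies, can be sketched as below; batch PCA is used in place of the authors' incremental subspace update, and the "patch features" are random stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA

def log_euclidean_vector(C):
    """Map an SPD covariance matrix to a Euclidean vector via the matrix logarithm,
    keeping the diagonal and the upper triangle (scaled by sqrt(2) to preserve norms)."""
    w, Q = np.linalg.eigh(C)
    L = (Q * np.log(w)) @ Q.T                     # matrix logarithm of an SPD matrix
    iu = np.triu_indices(C.shape[0], k=1)
    return np.concatenate([np.diag(L), np.sqrt(2.0) * L[iu]])

# Toy appearance descriptors: covariance matrices of random "image patch" features.
rng = np.random.default_rng(0)
covs = []
for _ in range(100):
    F = rng.standard_normal((200, 5))             # 200 pixels x 5 features per patch
    covs.append(np.cov(F, rowvar=False) + 1e-6 * np.eye(5))

V = np.array([log_euclidean_vector(C) for C in covs])   # ordinary vectors now
subspace = PCA(n_components=4).fit(V)                   # linear subspace model
print("explained variance ratios:", subspace.explained_variance_ratio_.round(3))
```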

  10. Random unitary maps for quantum state reconstruction

    International Nuclear Information System (INIS)

    Merkel, Seth T.; Riofrio, Carlos A.; Deutsch, Ivan H.; Flammia, Steven T.

    2010-01-01

    We study the possibility of performing quantum state reconstruction from a measurement record that is obtained as a sequence of expectation values of a Hermitian operator evolving under repeated application of a single random unitary map, U0. We show that while this single-parameter orbit in operator space is not informationally complete, it can be used to yield surprisingly high-fidelity reconstruction. For a d-dimensional Hilbert space with the initial observable in su(d), the measurement record lacks information about a matrix subspace of dimension ≥ d - 2 out of the total dimension d² - 1. We determine the conditions on U0 such that the bound is saturated, and show they are achieved by almost all pseudorandom unitary matrices. When we further impose the constraint that the physical density matrix must be positive, we obtain even higher fidelity than that predicted from the missing subspace. With prior knowledge that the state is pure, the reconstruction will be perfect (in the limit of vanishing noise) and for arbitrary mixed states, the fidelity is over 0.96, even for small d, and reaching F>0.99 for d>9. We also study the implementation of this protocol based on the relationship between random matrices and quantum chaos. We show that the Floquet operator of the quantum kicked top provides a means of generating the required type of measurement record, with implications on the relationship between quantum chaos and information gain.

  11. A self-adaption compensation control for hysteresis nonlinearity in piezo-actuated stages based on Pi-sigma fuzzy neural network

    Science.gov (United States)

    Xu, Rui; Zhou, Miaolei

    2018-04-01

    Piezo-actuated stages are widely applied in the high-precision positioning field nowadays. However, the inherent hysteresis nonlinearity in piezo-actuated stages greatly deteriorates the positioning accuracy of piezo-actuated stages. This paper first utilizes a nonlinear autoregressive moving average with exogenous inputs (NARMAX) model based on the Pi-sigma fuzzy neural network (PSFNN) to construct an online rate-dependent hysteresis model for describing the hysteresis nonlinearity in piezo-actuated stages. In order to improve the convergence rate of PSFNN and modeling precision, we adopt the gradient descent algorithm featuring three different learning factors to update the model parameters. The convergence of the NARMAX model based on the PSFNN is analyzed effectively. To ensure that the parameters can converge to the true values, the persistent excitation condition is considered. Then, a self-adaption compensation controller is designed for eliminating the hysteresis nonlinearity in piezo-actuated stages. A merit of the proposed controller is that it can directly eliminate the complex hysteresis nonlinearity in piezo-actuated stages without any inverse dynamic models. To demonstrate the effectiveness of the proposed model and control methods, a set of comparative experiments are performed on piezo-actuated stages. Experimental results show that the proposed modeling and control methods have excellent performance.

  12. [The use of self-adapting system files (SAF) for controlling microbial biofilms of root canals in the treatment of apical periodontitis].

    Science.gov (United States)

    Tsarev, V N; Mamedova, L A; Siukaeva, T N; Podporin, M S

    The aim of this study was to conduct a clinical and laboratory study and evaluate the effectiveness of endodontic root canal treatment using a self-adapting file (SAF) system in the complex treatment of patients with chronic apical periodontitis. A 3% sodium hypochlorite solution was used as the irrigation agent in all groups, which included 20 patients treated with conventional manual tools, 21 patients receiving treatment with ultrasonic activation of the irrigant, and 26 patients treated with the SAF system. The root canal biofilm structure was studied by scanning electron microscopy (SEM) using a Quantum 3D microscope (USA). The clinical efficiency of the root canal treatment was assessed by the frequency of complications one year after treatment. SEM revealed high levels of microbial contamination of the dentine tubules in the apical portion of the tooth. In the standard method group, the percentage of re-treatment and surgery was higher than in the other studied groups. Use of the SAF irrigation system was associated with a decrease in the number of identified pathogens. However, the study revealed high resistance of Enterococcus spp., Porphyromonas gingivalis and Candida albicans to all types of endodontic treatment, so methods for removing root canal microbial biofilms need further improvement.

  13. Classification of breast masses in ultrasound images using self-adaptive differential evolution extreme learning machine and rough set feature selection.

    Science.gov (United States)

    Prabusankarlal, Kadayanallur Mahadevan; Thirumoorthy, Palanisamy; Manavalan, Radhakrishnan

    2017-04-01

    A method using rough set feature selection and extreme learning machine (ELM) whose learning strategy and hidden node parameters are optimized by self-adaptive differential evolution (SaDE) algorithm for classification of breast masses is investigated. A pathologically proven database of 140 breast ultrasound images, including 80 benign and 60 malignant, is used for this study. A fast nonlocal means algorithm is applied for speckle noise removal, and multiresolution analysis of undecimated discrete wavelet transform is used for accurate segmentation of breast lesions. A total of 34 features, including 29 textural and five morphological, are applied to a [Formula: see text]-fold cross-validation scheme, in which more relevant features are selected by quick-reduct algorithm, and the breast masses are discriminated into benign or malignant using SaDE-ELM classifier. The diagnosis accuracy of the system is assessed using parameters, such as accuracy (Ac), sensitivity (Se), specificity (Sp), positive predictive value (PPV), negative predictive value (NPV), Matthew's correlation coefficient (MCC), and area ([Formula: see text]) under receiver operating characteristics curve. The performance of the proposed system is also compared with other classifiers, such as support vector machine and ELM. The results indicated that the proposed SaDE algorithm has superior performance with [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] compared to other classifiers.
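
    The ELM core used above (random hidden layer, output weights by regularised least squares) can be sketched in a few lines; the SaDE optimisation of hidden-node parameters and the rough-set feature selection are omitted, and the synthetic data merely stands in for the ultrasound features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=34, n_informative=15, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

n_hidden = 80
W = rng.standard_normal((Xtr.shape[1], n_hidden))   # random input weights (kept fixed)
b = rng.standard_normal(n_hidden)                   # random biases (kept fixed)

def hidden(X):
    """Hidden-layer activation matrix H."""
    return np.tanh(X @ W + b)

# Output weights beta solve H beta ~ t in the least-squares sense (ridge-stabilised).
H = hidden(Xtr)
t = 2.0 * ytr - 1.0                                 # targets in {-1, +1}
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ t)

y_pred = (hidden(Xte) @ beta > 0).astype(int)
print("ELM test accuracy: %.3f" % (y_pred == yte).mean())
```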

  14. Optimal Throughput and Self-adaptability of Robust Real-Time IEEE 802.15.4 MAC for AMI Mesh Network

    International Nuclear Information System (INIS)

    Shabani, Hikma; Ahmed, Musse Mohamud; Khan, Sheroz; Hameed, Shahab Ahmed; Habaebi, Mohamed Hadi

    2013-01-01

    A smart grid refers to a modernization of the electricity system that brings intelligence, reliability, efficiency and optimality to the power grid. To provide an automated and widely distributed energy delivery, the smart grid will be branded by a two-way flow of electricity and information system between energy suppliers and their customers. Thus, the smart grid is a power grid that integrates data communication networks which provide the collected and analysed data at all levels in real time. Therefore, the performance of communication systems is so vital for the success of smart grid. Merit to the ZigBee/IEEE802.15.4std low cost, low power, low data rate, short range, simplicity and free licensed spectrum that makes wireless sensor networks (WSNs) the most suitable wireless technology for smart grid applications. Unfortunately, almost all ZigBee channels overlap with wireless local area network (WLAN) channels, resulting in severe performance degradation due to interference. In order to improve the performance of communication systems, this paper proposes an optimal throughput and self-adaptability of ZigBee/IEEE802.15.4std for smart grid

  15. Image reconstruction for an electrical capacitance tomography system based on a least-squares support vector machine and a self-adaptive particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Chen, Xia; Hu, Hong-li; Liu, Fei; Gao, Xiang Xiang

    2011-01-01

    The task of image reconstruction for an electrical capacitance tomography (ECT) system is to determine the permittivity distribution, and hence the phase distribution, in a pipeline by measuring the electrical capacitances between sets of electrodes placed around its periphery. In view of the nonlinear relationship between the permittivity distribution and the capacitances and the limited number of independent capacitance measurements, image reconstruction for ECT is a nonlinear and ill-posed inverse problem. To solve this problem, a new image reconstruction method for ECT based on a least-squares support vector machine (LS-SVM) combined with a self-adaptive particle swarm optimization (PSO) algorithm is presented. Regarded as a special small-sample theory, the SVM avoids the issues appearing in artificial neural network methods, such as difficult determination of a network structure, over-learning and under-learning. However, the SVM performs differently with different parameters. As a relatively new population-based evolutionary optimization technique, PSO is adopted to realize effective parameter selection, with the advantages of global optimization and rapid convergence. This paper builds a 12-electrode ECT system and a pneumatic conveying platform to verify this image reconstruction algorithm. Experimental results indicate that the algorithm has good generalization ability and high image reconstruction quality.
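
    The role of PSO here, searching kernel-model hyperparameters by cross-validated error, can be illustrated with a plain (non-self-adaptive) swarm tuning a kernel ridge regressor, which is closely related to an LS-SVM; the data, search bounds and swarm settings below are assumptions for the sketch, not the ECT setup.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=12, noise=5.0, random_state=0)

def fitness(p):
    """Cross-validated MSE of a kernel ridge model with (log10 alpha, log10 gamma) = p."""
    alpha, gamma = 10.0 ** p
    model = KernelRidge(kernel="rbf", alpha=alpha, gamma=gamma)
    return -cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

rng = np.random.default_rng(0)
n_particles, n_iter, dim = 12, 25, 2
lo, hi = np.array([-4.0, -4.0]), np.array([2.0, 1.0])
pos = rng.uniform(lo, hi, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                  # fixed inertia and acceleration weights
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best (alpha, gamma):", 10.0 ** gbest, "| CV MSE:", round(pbest_f.min(), 3))
```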

  16. Visual characterization and diversity quantification of chemical libraries: 2. Analysis and selection of size-independent, subspace-specific diversity indices.

    Science.gov (United States)

    Colliandre, Lionel; Le Guilloux, Vincent; Bourg, Stephane; Morin-Allory, Luc

    2012-02-27

    High Throughput Screening (HTS) is a standard technique widely used to find hit compounds in drug discovery projects. The high costs associated with such experiments have highlighted the need to carefully design screening libraries in order to avoid wasting resources. Molecular diversity is an established concept that has been used to this end for many years. In this article, a new approach to quantify the molecular diversity of screening libraries is presented. The approach is based on the Delimited Reference Chemical Subspace (DRCS) methodology, a new method that can be used to delimit the densest subspace spanned by a reference library in a reduced 2D continuous space. A total of 22 diversity indices were implemented or adapted to this methodology, which is used here to remove outliers and obtain a relevant cell-based partition of the subspace. The behavior of these indices was assessed and compared in various extreme situations and with respect to a set of theoretical rules that a diversity function should satisfy when libraries of different sizes have to be compared. Some gold standard indices are found inappropriate in such a context, while none of the tested indices behave perfectly in all cases. Five DRCS-based indices accounting for different aspects of diversity were finally selected, and a simple framework is proposed to use them effectively. Various libraries have been profiled with respect to more specific subspaces, which further illustrate the interest of the method.
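
    One simple cell-based index of the kind discussed, the fraction of occupied cells in a regular partition of a delimited 2D subspace, is sketched below; it merely illustrates the size-independence issue and is not one of the five DRCS indices selected by the authors.

```python
import numpy as np

def cell_coverage(points, bounds, n_cells=20):
    """Fraction of occupied cells in an n_cells x n_cells partition of a 2D subspace."""
    (xmin, xmax), (ymin, ymax) = bounds
    ix = np.clip(((points[:, 0] - xmin) / (xmax - xmin) * n_cells).astype(int), 0, n_cells - 1)
    iy = np.clip(((points[:, 1] - ymin) / (ymax - ymin) * n_cells).astype(int), 0, n_cells - 1)
    occupied = len(set(zip(ix.tolist(), iy.tolist())))
    return occupied / float(n_cells * n_cells)

# Toy comparison: a broad library covers more of the reference subspace than a much
# larger but tightly focused analogue series, i.e. the index is not driven by size.
rng = np.random.default_rng(0)
bounds = ((-1.0, 1.0), (-1.0, 1.0))
diverse = rng.uniform(-1.0, 1.0, (500, 2))
focused = 0.1 * rng.standard_normal((5000, 2))
print("diverse library coverage:", cell_coverage(diverse, bounds))
print("focused library coverage:", cell_coverage(focused, bounds))
```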

  17. Spatio-temporal evolution of the 2011 Prague, Oklahoma aftershock sequence revealed using subspace detection and relocation

    Science.gov (United States)

    McMahon, Nicole D; Aster, Richard C.; Yeck, William; McNamara, Daniel E.; Benz, Harley M.

    2017-01-01

    The 6 November 2011 Mw 5.7 earthquake near Prague, Oklahoma is the second largest earthquake ever recorded in the state. A Mw 4.8 foreshock and the Mw 5.7 mainshock triggered a prolific aftershock sequence. Utilizing a subspace detection method, we increase by fivefold the number of precisely located events between 4 November and 5 December 2011. We find that while most aftershock energy is released in the crystalline basement, a significant number of the events occur in the overlying Arbuckle Group, indicating that active Meeker-Prague faulting extends into the sedimentary zone of wastewater disposal. Although the number of aftershocks in the Arbuckle Group is large, comprising ~40% of the aftershock catalog, the moment contribution of Arbuckle Group earthquakes is much less than 1% of the total aftershock moment budget. Aftershock locations are sparse in patches that experienced large slip during the mainshock.

  18. Measurements of the vacuum-plasma response in EXTRAP T2R using generic closed-loop subspace system identification

    Energy Technology Data Exchange (ETDEWEB)

    Olofsson, K. Erik J., E-mail: erik.olofsson@ee.kth.se [School of Electrical Engineering (EES), Royal Institute of Technology (KTH), Stockholm (Sweden); Brunsell, Per R.; Drake, James R. [School of Electrical Engineering (EES), Royal Institute of Technology (KTH), Stockholm (Sweden)

    2012-12-15

    Highlights: ► Unstable plasma response safely measured using special signal processing techniques. ► Prediction-capable MIMO models obtained. ► Computational statistics employed to show physical content of these models. ► Multifold cross-validation applied for the supervised learning problem. - Abstract: A multibatch formulation of a multi-input multi-output closed-loop subspace system identification method is employed for the purpose of obtaining control-relevant models of the vacuum-plasma response in the magnetic confinement fusion experiment EXTRAP T2R. The accuracy of the estimate of the plant dynamics is estimated by computing bootstrap replication statistics of the dataset. It is seen that the thus identified models exhibit both predictive capabilities and physical spectral properties.

  19. Performance Analysis of Blind Subspace-Based Signature Estimation Algorithms for DS-CDMA Systems with Unknown Correlated Noise

    Science.gov (United States)

    Zarifi, Keyvan; Gershman, Alex B.

    2006-12-01

    We analyze the performance of two popular blind subspace-based signature waveform estimation techniques proposed by Wang and Poor and Buzzi and Poor for direct-sequence code division multiple-access (DS-CDMA) systems with unknown correlated noise. Using the first-order perturbation theory, analytical expressions for the mean-square error (MSE) of these algorithms are derived. We also obtain simple high SNR approximations of the MSE expressions which explicitly clarify how the performance of these techniques depends on the environmental parameters and how it is related to that of the conventional techniques that are based on the standard white noise assumption. Numerical examples further verify the consistency of the obtained analytical results with simulation results.

  20. Semiclassical analysis of the weak-coupling limit of SU(2) lattice gauge theory: The subspace of constant fields

    International Nuclear Information System (INIS)

    Bartels, J.; Wu, T.T.

    1988-01-01

    This paper contains the first part of a systematic semiclassical analysis of the weak-coupling limit of lattice gauge theories, using the Hamiltonian formulation. The model consists of an N 3 cubic lattice of pure SU(2) Yang-Mills theory, and in this first part we limit ourselves to the subspace of constant field configurations. We investigate the flow of classical trajectories, with a particular emphasis on the existence and location of caustics. There the ground-state wave function is expected to peak. It is found that regions densely filled with caustics are very close to the origin, i.e., in the domain of weak field configurations. This strongly supports the expectation that caustics are essential for quantities of physical interest

  1. Dissatisfaction of Compact Picard Condition (CPC) with GRACE satellite data and its treatment by Generalized Tikhonov in Sobolev subspace

    Science.gov (United States)

    AllahTavakoli, Y.; Bagheri, H.; Safari, A.; Sharifi, M.

    2012-04-01

    This paper aims mainly to prove that the stripy noise in the map of the earth's surface mass-density changes derived from GRACE satellite gravimetry is due to a dissatisfaction of the Compact Picard Condition (CPC) by the GRACE data in the inversion of the Newton integral equation over the thin layer of the earth; the paper therefore proposes regularization strategies as efficient tools to treat the ill-posedness and consequently to de-strip the data. First of all, we slightly modify the mathematical model of the earth's surface mass-density changes first developed by J. Wahr et al. (1998), retaining all their previous assumptions while additionally taking into consideration the effect of the earth's topography. Through this modification we expect that some uncertainties in the prior model have been reduced to some extent. We then analyze the CPC for the model and demonstrate how to perform generalized Tikhonov regularization in a Sobolev subspace to overcome the instability of the problem. We apply the strategy in simulations and case studies to validate our ideas. The simulations confirm that the stripy noise in the GRACE-derived map of the mass-density changes is due to the CPC dissatisfaction, and the case studies furthermore show that generalized Tikhonov regularization in a Sobolev subspace is an effective filtering tool for de-striping the noisy data. The case studies also show, interestingly, that the effect of the topography is comparable to the effect of the load Love numbers on Wahr's model; hence it may be taken into consideration when the load Love numbers are taken into account.
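
    A finite-dimensional sketch of generalized Tikhonov regularization is given below: the ill-posed Newton integral is replaced by a toy smoothing operator, and the Sobolev-type penalty is mimicked with a first-order difference operator, so only the mechanics of the regularized least-squares problem carry over.

```python
import numpy as np

def generalized_tikhonov(A, b, L, lam):
    """Solve min ||A x - b||^2 + lam ||L x||^2 via the stacked least-squares system."""
    A_aug = np.vstack([A, np.sqrt(lam) * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# Toy ill-posed problem: a smoothing (severely ill-conditioned) forward operator.
rng = np.random.default_rng(0)
n = 80
s = np.linspace(0.0, 1.0, n)
A = np.exp(-80.0 * (s[:, None] - s[None, :]) ** 2)     # blurring kernel matrix
x_true = np.sin(2.0 * np.pi * s)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

# First-order difference operator: penalising ||L x|| favours smooth solutions,
# a crude finite-dimensional stand-in for regularising in a Sobolev-type norm.
L = np.eye(n) - np.eye(n, k=1)

for lam in (1e-9, 1e-5, 1e-1):
    x = generalized_tikhonov(A, b, L, lam)
    print(f"lambda = {lam:g}: reconstruction error = {np.linalg.norm(x - x_true):.2e}")
```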

  2. Recovering task fMRI signals from highly under-sampled data with low-rank and temporal subspace constraints.

    Science.gov (United States)

    Chiew, Mark; Graedel, Nadine N; Miller, Karla L

    2018-07-01

    Recent developments in highly accelerated fMRI data acquisition have employed low-rank and/or sparsity constraints for image reconstruction, as an alternative to conventional, time-independent parallel imaging. When under-sampling factors are high or the signals of interest are low-variance, however, functional data recovery can be poor or incomplete. We introduce a method for improving reconstruction fidelity using external constraints, like an experimental design matrix, to partially orient the estimated fMRI temporal subspace. Combining these external constraints with low-rank constraints introduces a new image reconstruction model that is analogous to using a mixture of subspace-decomposition (PCA/ICA) and regression (GLM) models in fMRI analysis. We show that this approach improves fMRI reconstruction quality in simulations and experimental data, focusing on the model problem of detecting subtle 1-s latency shifts between brain regions in a block-design task-fMRI experiment. Successful latency discrimination is shown at acceleration factors up to R = 16 in a radial-Cartesian acquisition. We show that this approach works with approximate, or not perfectly informative constraints, where the derived benefit is commensurate with the information content contained in the constraints. The proposed method extends low-rank approximation methods for under-sampled fMRI data acquisition by leveraging knowledge of expected task-based variance in the data, enabling improvements in the speed and efficiency of fMRI data acquisition without the loss of subtle features. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Developing a Shuffled Complex-Self Adaptive Hybrid Evolution (SC-SAHEL) Framework for Water Resources Management and Water-Energy System Optimization

    Science.gov (United States)

    Rahnamay Naeini, M.; Sadegh, M.; AghaKouchak, A.; Hsu, K. L.; Sorooshian, S.; Yang, T.

    2017-12-01

    Meta-heuristic optimization algorithms have gained a great deal of attention in a wide variety of fields. The simplicity and flexibility of these algorithms, along with their robustness, make them attractive tools for solving optimization problems. Different optimization methods, however, have algorithm-specific strengths and limitations. The performance of each individual algorithm obeys the "No-Free-Lunch" theorem, which means a single algorithm cannot consistently outperform all others over a variety of optimization problems. From a user's perspective, it is a tedious process to compare, validate, and select the best-performing algorithm for a specific problem or a set of test cases. In this study, we introduce a new hybrid optimization framework, entitled Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL), which combines the strengths of different evolutionary algorithms (EAs) in a parallel computing scheme, and allows users to select the most suitable algorithm tailored to the problem at hand. The concept of SC-SAHEL is to execute different EAs as separate parallel search cores, and to let all participating EAs compete during the course of the search. The newly developed SC-SAHEL algorithm is designed to automatically select the best-performing algorithm for the given optimization problem. This algorithm is rigorously effective in finding the global optimum for several strenuous benchmark test functions, and computationally efficient compared to individual EAs. We benchmark the proposed SC-SAHEL algorithm over 29 conceptual test functions, and two real-world case studies - one hydropower reservoir model and one hydrological model (SAC-SMA). Results show that the proposed framework outperforms individual EAs in an absolute majority of the test problems, and can provide results competitive with the fittest EA algorithm along with more comprehensive information during the search. The proposed framework is also flexible for merging additional EAs, boundary

  4. A hybrid self-adaptive Particle Swarm Optimization–Genetic Algorithm–Radial Basis Function model for annual electricity demand prediction

    International Nuclear Information System (INIS)

    Yu, Shiwei; Wang, Ke; Wei, Yi-Ming

    2015-01-01

    Highlights: • A hybrid self-adaptive PSO–GA-RBF model is proposed for electricity demand prediction. • Each mixed-coding particle is composed of two coding parts, binary and real. • Five independent variables have been selected to predict future electricity consumption in Wuhan. • The proposed model has a simpler structure or higher estimation precision than other ANN models. • No matter the scenario, the electricity consumption of Wuhan will grow rapidly. - Abstract: The present study proposes a hybrid Particle Swarm Optimization and Genetic Algorithm optimized Radial Basis Function (PSO–GA-RBF) neural network for prediction of annual electricity demand. In the model, each mixed-coding particle (or chromosome) is composed of two coding parts, binary and real, which optimize the structure of the RBF by the GA operation and the parameters of the basis functions and weights by a PSO–GA implementation. Five independent variables have been selected to predict future electricity consumption in Wuhan by using the optimized networks. The results show that (1) the proposed PSO–GA-RBF model has a simpler network structure (fewer hidden neurons) or higher estimation precision than the other selected ANN models; and (2) regardless of the scenario, the electricity consumption of Wuhan will grow rapidly at average annual growth rates of about 9.7–11.5%. By 2020, the electricity demand in the planning scenario, the highest among the scenarios, will be 95.85 billion kW h. The lowest demand is estimated for the business-as-usual scenario, and will be 88.45 billion kW h.

  5. A multistage framework for reliability-based distribution expansion planning considering distributed generations by a self-adaptive global-based harmony search algorithm

    International Nuclear Information System (INIS)

    Shivaie, Mojtaba; Ameli, Mohammad T.; Sepasian, Mohammad S.; Weinsier, Philip D.; Vahidinasab, Vahid

    2015-01-01

    In this paper, the authors present a new multistage framework for reliability-based Distribution Expansion Planning (DEP) in which expansion options are a reinforcement and/or installation of substations, feeders, and Distributed Generations (DGs). The proposed framework takes into account not only costs associated with investment, maintenance, and operation, but also expected customer interruption cost in the optimization as four problem objectives. At the same time, operational restrictions, Kirchhoff's laws, radial structure limitation, voltage limits, and capital expenditure budget restriction are considered as problem constraints. The proposed model is a non-convex optimization problem having a non-linear, mixed-integer nature. Hence, a hybrid Self-adaptive Global-based Harmony Search Algorithm (SGHSA) and Optimal Power Flow (OPF) were used and followed by a fuzzy satisfying method in order to obtain the final optimal solution. The SGHSA is a recently developed optimization algorithm which imitates the music improvisation process. In this process, the harmonists improvise their instrument pitches, searching for the perfect state of harmony. The planning methodology was demonstrated on the 27-node, 13.8-kV test system in order to demonstrate the feasibility and capability of the proposed model. Simulation results illustrated the sufficiency and profitableness of the newly developed framework, when compared with other methods. - Highlights: • A new multistage framework is presented for reliability-based DEP problem. • In this paper, DGs are considered as an expansion option to increase the flexibility of the proposed model. • In this paper, effective factors of DEP problem are incorporated as a multi-objective model. • In this paper, three new algorithms HSA, IHSA and SGHSA are proposed. • Results obtained by the proposed SGHSA algorithm are better than others

  6. Self-adaptive treatment of time dependent nonlinear nonhomogeneous radial heat flow in reactor components with boundary element method; Samoadaptivno obravnanje spemenljivega nelinearnega nehomogenoga radialnega topltnega toka v reaktorskih komponentah z metodo robnih elementov

    Energy Technology Data Exchange (ETDEWEB)

    Sarler, B; Alujevic, A [Univerza B. Kardelja, Institut ' Jozef Stefan' , Ljubljana (Yugoslavia)

    1988-07-01

    The basic principles of a self-adaptive algorithm for the treatment of transient nonlinear nonhomogeneous radial heat flow, based on a direct Boundary Element Method formulation, are presented. Indicators of the discretization error are developed, together with a binary-tree strategy for manipulating the time-domain mesh, assuring automatic optimisation of the calculation procedure with respect to a predetermined error. The developed method is particularly suitable for use in a spectrum of extremely nonlinear cases occurring in thermal analyses of reactor components. (author)

  7. Random SU(2) invariant tensors

    Science.gov (United States)

    Li, Youning; Han, Muxin; Ruan, Dong; Zeng, Bei

    2018-04-01

    SU(2) invariant tensors are states in the (local) SU(2) tensor product representation but invariant under the global group action. They are of importance in the study of loop quantum gravity. A random tensor is an ensemble of tensor states. An average over the ensemble is carried out when computing any physical quantities. The random tensor exhibits a phenomenon known as ‘concentration of measure’, which states that for any bipartition the average value of entanglement entropy of its reduced density matrix is asymptotically the maximal possible as the local dimensions go to infinity. We show that this phenomenon is also true when the average is over the SU(2) invariant subspace instead of the entire space for rank-n tensors in general. It is shown in our earlier work Li et al (2017 New J. Phys. 19 063029) that the subleading correction of the entanglement entropy has a mild logarithmic divergence when n  =  4. In this paper, we show that for n  >  4 the subleading correction is not divergent but a finite number. In some special situation, the number could be even smaller than 1/2, which is the subleading correction of random state over the entire Hilbert space of tensors.

  8. Random Deep Belief Networks for Recognizing Emotions from Speech Signals

    Directory of Open Access Journals (Sweden)

    Guihua Wen

    2017-01-01

    Full Text Available Human emotions can now be recognized from speech signals using machine learning methods; however, these methods are challenged by lower recognition accuracies in real applications due to a lack of rich representation ability. Deep belief networks (DBN) can automatically discover multiple levels of representation in speech signals. To make full use of their advantages, this paper presents an ensemble of random deep belief networks (RDBN) for speech emotion recognition. It first extracts the low-level features of the input speech signal and then uses them to construct many random subspaces. Each random subspace is then provided to a DBN to yield higher-level features, which are used as the input of a classifier to output an emotion label. All outputted emotion labels are then fused through majority voting to decide the final emotion label for the input speech signal. Experimental results on benchmark speech emotion databases show that RDBN has better accuracy than the compared methods for speech emotion recognition.

  9. Random projections and the optimization of an algorithm for phase retrieval

    International Nuclear Information System (INIS)

    Elser, Veit

    2003-01-01

    Iterative phase retrieval algorithms typically employ projections onto constraint subspaces to recover the unknown phases in the Fourier transform of an image, or, in the case of x-ray crystallography, the electron density of a molecule. For a general class of algorithms, where the basic iteration is specified by the difference map, solutions are associated with fixed points of the map, the attractive character of which determines the effectiveness of the algorithm. The behaviour of the difference map near fixed points is controlled by the relative orientation of the tangent spaces of the two constraint subspaces employed by the map. Since the dimensionalities involved are always large in practical applications, it is appropriate to use random matrix theory ideas to analyse the average-case convergence at fixed points. Optimal values of the γ parameters of the difference map are found which differ somewhat from the values previously obtained on the assumption of orthogonal tangent spaces
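
    A small 1-D sketch of the difference map for phase retrieval follows, using Fourier-magnitude and support projections. The pairing of the f maps and the choice gamma_A = -1/beta, gamma_B = 1/beta follow one common presentation of the algorithm (beta = 1 recovers the familiar Douglas-Rachford-type update); because 1-D phase retrieval is inherently ambiguous, the sketch only checks how well the constraints are satisfied.

```python
import numpy as np

def P_fourier(x, magnitudes):
    """Project onto the set of signals having the measured Fourier magnitudes."""
    X = np.fft.fft(x)
    return np.real(np.fft.ifft(magnitudes * np.exp(1j * np.angle(X))))

def P_support(x, support):
    """Project onto the set of signals vanishing outside the known support."""
    return x * support

def difference_map(x, magnitudes, support, beta=0.85, n_iter=500):
    """Difference map with constraint A = support, B = Fourier magnitude, using
    f_A = (1 - 1/beta) P_A + (1/beta) I and f_B = (1 + 1/beta) P_B - (1/beta) I."""
    for _ in range(n_iter):
        fA = (1.0 - 1.0 / beta) * P_support(x, support) + x / beta
        fB = (1.0 + 1.0 / beta) * P_fourier(x, magnitudes) - x / beta
        x = x + beta * (P_support(fB, support) - P_fourier(fA, magnitudes))
    return P_support(P_fourier(x, magnitudes), support)    # read out an estimate

# Toy problem: recover a compactly supported 1-D signal from |FFT| alone.
rng = np.random.default_rng(0)
n = 128
support = np.zeros(n)
support[:40] = 1.0
x_true = support * rng.standard_normal(n)
mags = np.abs(np.fft.fft(x_true))

x_rec = difference_map(rng.standard_normal(n), mags, support)
mag_err = np.linalg.norm(np.abs(np.fft.fft(x_rec)) - mags) / np.linalg.norm(mags)
print("relative Fourier-magnitude mismatch of reconstruction: %.2e" % mag_err)
```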

  10. Data-Driven Nonlinear Subspace Modeling for Prediction and Control of Molten Iron Quality Indices in Blast Furnace Ironmaking

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ping; Song, Heda; Wang, Hong; Chai, Tianyou

    2017-09-01

    Blast furnace (BF) in ironmaking is a nonlinear dynamic process with complicated physical-chemical reactions, where multi-phase and multi-field coupling and large time delay occur during its operation. In BF operation, the molten iron temperature (MIT) as well as Si, P and S contents of molten iron are the most essential molten iron quality (MIQ) indices, whose measurement, modeling and control have always been important issues in metallurgic engineering and automation field. This paper develops a novel data-driven nonlinear state space modeling for the prediction and control of multivariate MIQ indices by integrating hybrid modeling and control techniques. First, to improve modeling efficiency, a data-driven hybrid method combining canonical correlation analysis and correlation analysis is proposed to identify the most influential controllable variables as the modeling inputs from multitudinous factors would affect the MIQ indices. Then, a Hammerstein model for the prediction of MIQ indices is established using the LS-SVM based nonlinear subspace identification method. Such a model is further simplified by using piecewise cubic Hermite interpolating polynomial method to fit the complex nonlinear kernel function. Compared to the original Hammerstein model, this simplified model can not only significantly reduce the computational complexity, but also has almost the same reliability and accuracy for a stable prediction of MIQ indices. Last, in order to verify the practicability of the developed model, it is applied in designing a genetic algorithm based nonlinear predictive controller for multivariate MIQ indices by directly taking the established model as a predictor. Industrial experiments show the advantages and effectiveness of the proposed approach.

  11. Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

    Science.gov (United States)

    Carvajal, Gonzalo; Figueroa, Miguel

    2014-07-01

    Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology to achieve compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For the feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For the classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate for the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. The results are also directly extensible to multiple application domains using linear subspace methods.

  12. Diomres (k,m): An efficient method based on Krylov subspaces to solve big, dispersed, unsymmetrical linear systems

    Energy Technology Data Exchange (ETDEWEB)

    de la Torre Vega, E. [Instituto de Investigaciones Electricas, Cuernavaca (Mexico); Cesar Suarez Arriaga, M. [Universidad Michoacana SNH, Michoacan (Mexico)

    1995-03-01

    In geothermal simulation processes, MULKOM uses Integrated Finite Differences to solve the corresponding partial differential equations. This method requires the efficient solution of large, sparse, nonsymmetric linear systems at each time step. The order of the system is usually greater than one thousand, and its solution can represent around 80% of the total CPU time, so reducing the time spent on this class of linear systems notably shortens the numerical simulation. When the matrix is large (N ≥ 500) and sparse, it is inefficient to handle all of its entries, because the system is fully described by its nonzero elements, whose number is far smaller than N². In this setting, iterative methods have advantages over Gaussian elimination, because elimination fills in matrices whose nonzero elements have no special distribution and because it makes no use of available solution estimates. Iterative methods of the conjugate gradient family, based on Krylov subspaces, have the further advantage that their convergence speed can be improved by preconditioning techniques. The DIOMRES(k,m) method guarantees a continuous descent of the residual norm without incurring division by zero. It converges in at most N iterations if the system matrix is symmetric, does not require much memory, and updates the approximation immediately by using incomplete orthogonalization and adequate restarting. A preconditioned version of DIOMRES was applied to problems involving nonsymmetric systems with 1000 unknowns and fewer than five terms per equation. We found that this technique can notably reduce the time needed to find the solution without requiring additional memory. The coupling of this method to geothermal versions of MULKOM is in progress.
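
    DIOMRES itself is not a publicly available code, but the same general idea (a restarted, preconditioned Krylov solve of a large sparse nonsymmetric system) can be sketched with SciPy's GMRES and an incomplete LU preconditioner standing in for the method described above; the test matrix is an arbitrary illustration.

```python
# Sketch: restarted, ILU-preconditioned Krylov solve of a sparse
# nonsymmetric system, in the spirit of the experiments described above.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Build a sparse nonsymmetric test matrix with a few terms per row
main = 4.0 * np.ones(n)
upper = -1.0 * np.ones(n - 1)
lower = -2.0 * np.ones(n - 1)
A = sp.diags([lower, main, upper], offsets=[-1, 0, 1], format="csc")
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M, restart=30)
print("converged" if info == 0 else f"gmres info = {info}",
      "residual:", np.linalg.norm(b - A @ x))
```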

  13. Randomization tests

    CERN Document Server

    Edgington, Eugene

    2007-01-01

    Statistical Tests That Do Not Require Random Sampling Randomization Tests Numerical Examples Randomization Tests and Nonrandom Samples The Prevalence of Nonrandom Samples in Experiments The Irrelevance of Random Samples for the Typical Experiment Generalizing from Nonrandom Samples Intelligibility Respect for the Validity of Randomization Tests Versatility Practicality Precursors of Randomization Tests Other Applications of Permutation Tests Questions and Exercises Notes References Randomized Experiments Unique Benefits of Experiments Experimentation without Mani

  14. Generalized subspace correction methods

    Energy Technology Data Exchange (ETDEWEB)

    Kolm, P. [Royal Institute of Technology, Stockholm (Sweden); Arbenz, P.; Gander, W. [Eidgenoessiche Technische Hochschule, Zuerich (Switzerland)

    1996-12-31

    A fundamental problem in scientific computing is the solution of large sparse systems of linear equations. Often these systems arise from the discretization of differential equations by finite difference, finite volume or finite element methods. Iterative methods exploiting these sparse structures have proven to be very effective on conventional computers for a wide range of applications. Due to the rapid development of parallel computers and the increasing demand for their large computing power, it has become important to design iterative methods specialized for these new architectures.

  15. Random walk on random walks

    NARCIS (Netherlands)

    Hilário, M.; Hollander, den W.Th.F.; Sidoravicius, V.; Soares dos Santos, R.; Teixeira, A.

    2014-01-01

    In this paper we study a random walk in a one-dimensional dynamic random environment consisting of a collection of independent particles performing simple symmetric random walks in a Poisson equilibrium with density ρ ∈ (0, ∞). At each step the random walk performs a nearest-neighbour jump, moving to

  16. Reduced Wiener Chaos representation of random fields via basis adaptation and projection

    Energy Technology Data Exchange (ETDEWEB)

    Tsilifis, Panagiotis, E-mail: tsilifis@usc.edu [Department of Mathematics, University of Southern California, Los Angeles, CA 90089 (United States); Department of Civil Engineering, University of Southern California, Los Angeles, CA 90089 (United States); Ghanem, Roger G., E-mail: ghanem@usc.edu [Department of Civil Engineering, University of Southern California, Los Angeles, CA 90089 (United States)

    2017-07-15

    A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.

  17. Output-only cyclo-stationary linear-parameter time-varying stochastic subspace identification method for rotating machinery and spinning structures

    Science.gov (United States)

    Velazquez, Antonio; Swartz, R. Andrew

    2015-02-01

    Economical maintenance and operation are critical issues for rotating machinery and spinning structures containing blade elements, especially large slender dynamic beams (e.g., wind turbines). Structural health monitoring systems are promising instruments for assuring reliability and good performance based on the dynamics of these mechanical systems; however, such devices have not been completely perfected for spinning structures. These sensing technologies are typically informed by mechanistic models coupled with data-driven identification techniques in the time and/or frequency domain. Frequency response functions are popular but are difficult to realize autonomously for structures of higher order, especially when overlapping frequency content is present. Time-domain techniques, in contrast, have been shown to possess powerful practical advantages (i.e., low-order computational effort suitable for real-time or embedded algorithms) and are also better suited to differentiating closely related modes. Time-varying effects are customarily neglected or dismissed to simplify the analysis, but this cannot be done for sinusoidally loaded structures containing spinning multi-bodies. An even more complex scenario arises when dealing with both the periodic mechanisms responsible for the vibration of the rotor-blade shaft system and the interaction with the supporting substructure. Transformations of the cyclic effects in the vibration data can be applied to isolate inertial quantities from rotation-generated forces, which are typically non-stationary in nature. After applying these transformations, structural identification can be carried out with stationary techniques via data-correlated eigensystem realizations. In this paper, a periodic-stationary, or cyclo-stationary, subspace identification technique is explored for spinning multi-blade systems by means of a modified Eigensystem Realization Algorithm (ERA) via

  18. A description of the location and structure of the essential spectrum of a model operator in a subspace of a Fock space

    Energy Technology Data Exchange (ETDEWEB)

    Yodgorov, G R [Navoi State Pedagogical Institute, Navoi (Uzbekistan); Ismail, F [Universiti Putra Malaysia, Selangor (Malaysia); Muminov, Z I [Malaysia – Japan International Institute of Technology, Kuala Lumpur (Malaysia)

    2014-12-31

    We consider a certain model operator acting in a subspace of a fermionic Fock space. We obtain an analogue of Faddeev's equation. We describe the location of the essential spectrum of the operator under consideration and show that the essential spectrum consists of the union of at most four segments. Bibliography: 19 titles.

  19. Random magnetism

    International Nuclear Information System (INIS)

    Tahir-Kheli, R.A.

    1975-01-01

    A few simple problems relating to random magnetic systems are presented. Translational symmetry, only on the macroscopic scale, is assumed for these systems. A random set of parameters, on the microscopic scale, for the various regions of these systems is also assumed. A probability distribution for randomness is obeyed. Knowledge of the form of these probability distributions is assumed in all cases [pt

  20. Randomized random walk on a random walk

    International Nuclear Information System (INIS)

    Lee, P.A.

    1983-06-01

    This paper discusses generalizations of the model introduced by Kehr and Kunter of the random walk of a particle on a one-dimensional chain which in turn has been constructed by a random walk procedure. The superimposed random walk is randomised in time according to the occurrences of a stochastic point process. The probability of finding the particle in a particular position at a certain instant is obtained explicitly in the transform domain. It is found that the asymptotic behaviour for large time of the mean-square displacement of the particle depends critically on the assumed structure of the basic random walk, giving a diffusion-like term for an asymmetric walk or a square root law if the walk is symmetric. Many results are obtained in closed form for the Poisson process case, and these agree with those given previously by Kehr and Kunter. (author)

  1. A Strongly Self-Adaptive Localization Algorithm Based on a Gray Prediction Model for Mobile Nodes

    Institute of Scientific and Technical Information of China (English)

    单志龙; 刘兰辉; 张迎胜; 黄广雄

    2014-01-01

    Localization is a key technology in wireless sensor networks (WSNs), and localization of mobile nodes is one of its main difficulties. To address this issue, a strongly self-adaptive localization algorithm based on a gray prediction model for mobile nodes (GPLA) is proposed. Building on the Monte Carlo localization idea, GPLA uses a gray prediction model to predict node motion and refine the sampling area, filters samples using estimated distances to improve the validity of the sampled particles, and generates new particles through a restrictive linear crossover operation, which accelerates sample generation, reduces the number of sampling rounds and improves the efficiency of the algorithm. Simulation results show that the algorithm achieves good performance and strong self-adaptivity under varying communication radius, anchor node density, sample size and other conditions.

  2. Random Fields

    Science.gov (United States)

    Vanmarcke, Erik

    1983-03-01

    Random variation over space and time is one of the few attributes that might safely be predicted as characterizing almost any given complex system. Random fields or "distributed disorder systems" confront astronomers, physicists, geologists, meteorologists, biologists, and other natural scientists. They appear in the artifacts developed by electrical, mechanical, civil, and other engineers. They even underlie the processes of social and economic change. The purpose of this book is to bring together existing and new methodologies of random field theory and indicate how they can be applied to these diverse areas where a "deterministic treatment is inefficient and conventional statistics insufficient." Many new results and methods are included. After outlining the extent and characteristics of the random field approach, the book reviews the classical theory of multidimensional random processes and introduces basic probability concepts and methods in the random field context. It next gives a concise account of the second-order analysis of homogeneous random fields, in both the space-time domain and the wave number-frequency domain. This is followed by a chapter on spectral moments and related measures of disorder and on level excursions and extremes of Gaussian and related random fields. After developing a new framework of analysis based on local averages of one-, two-, and n-dimensional processes, the book concludes with a chapter discussing ramifications in the important areas of estimation, prediction, and control. The mathematical prerequisite has been held to basic college-level calculus.

  3. Optimal image alignment with random projections of manifolds: algorithm and geometric analysis.

    Science.gov (United States)

    Kokiopoulou, Effrosyni; Kressner, Daniel; Frossard, Pascal

    2011-06-01

    This paper addresses the problem of image alignment based on random measurements. Image alignment consists of estimating the relative transformation between a query image and a reference image. We consider the specific problem where the query image is provided in compressed form in terms of linear measurements captured by a vision sensor. We cast the alignment problem as a manifold distance minimization problem in the linear subspace defined by the measurements. The transformation manifold that represents synthesis of shift, rotation, and isotropic scaling of the reference image can be given in closed form when the reference pattern is sparsely represented over a parametric dictionary. We show that the objective function can then be decomposed as the difference of two convex functions (DC) in the particular case where the dictionary is built on Gaussian functions. Thus, the optimization problem becomes a DC program, which in turn can be solved globally by a cutting plane method. The quality of the solution is typically affected by the number of random measurements and the condition number of the manifold that describes the transformations of the reference image. We show that the curvature, which is closely related to the condition number, remains bounded in our image alignment problem, which means that the relative transformation between two images can be determined optimally in a reduced subspace.

  4. Damage Detection in Bridge Structure Using Vibration Data under Random Travelling Vehicle Loads

    International Nuclear Information System (INIS)

    Loh, C H; Hung, T Y; Chen, S F; Hsu, W T

    2015-01-01

    Due to the random nature of the road excitation and the inherent uncertainties in the bridge-vehicle system, damage identification of bridge structures through continuous monitoring under operating conditions becomes a challenging problem. Methods for system identification and damage detection of a continuous two-span concrete bridge structure in the time domain are presented, using interaction forces from random moving vehicles as excitation. The signals recorded at different locations of the instrumented bridge are mixed with signals from different internal and external (road roughness) vibration sources. The damaged structure is modelled as a stiffness reduction in one of the beam elements. For the purpose of system identification and damage detection, three different output-only modal analysis techniques are proposed: covariance-driven stochastic subspace identification (SSI-COV), a blind source separation algorithm (Second Order Blind Identification) and the multivariate AR model. The advantages and disadvantages of the three algorithms are discussed. Finally, the null-space damage index, subspace damage indices and mode shape slope change are used to detect and locate the damage. The proposed approaches have been tested in simulation and proved to be effective for structural health monitoring. (paper)

  5. Analysis of unbounded operators and random motion

    International Nuclear Information System (INIS)

    Jorgensen, Palle E. T.

    2009-01-01

    We study infinite weighted graphs with a view to 'limits at infinity' or boundaries at infinity. Examples of such weighted graphs arise in infinite (in practice, that means 'very' large) networks of resistors or in statistical mechanics models for classical or quantum systems. However, more generally, our analysis includes reproducing kernel Hilbert spaces and associated operators on them. If X is some infinite set of vertices or nodes, in applications the essential ingredient going into the definition is a reproducing kernel Hilbert space; it measures the differences of functions on X evaluated on pairs of points in X. Moreover, the Hilbert norm-squared in H(X) will represent a suitable measure of energy. Associated unbounded operators will define a notion of dissipation; it can be a graph Laplacian or a more abstract unbounded Hermitian operator defined from the reproducing kernel Hilbert space under study. We prove that there are two closed subspaces in reproducing kernel Hilbert space H(X) that measure quantitative notions of limits at infinity in X: one generalizes finite-energy harmonic functions in H(X) and the other a deficiency index of a natural operator in H(X) associated directly with the diffusion. We establish these results in the abstract, and we offer examples and applications. Our results are related to, but different from, potential theoretic notions of 'boundaries' in more standard random walk models. Comparisons are made.

  6. Numerical Control Machine Tool Fault Diagnosis Using Hybrid Stationary Subspace Analysis and Least Squares Support Vector Machine with a Single Sensor

    Directory of Open Access Journals (Sweden)

    Chen Gao

    2017-03-01

    Full Text Available Tool fault diagnosis in numerical control (NC) machines plays a significant role in ensuring manufacturing quality. However, current methods of tool fault diagnosis lack accuracy. Therefore, in the present paper, a fault diagnosis method is proposed based on stationary subspace analysis (SSA) and least squares support vector machine (LS-SVM) using only a single sensor. First, after the dimensionality of the vibration signal observed by a single sensor is expanded by the phase space reconstruction technique, SSA is used to extract stationary and non-stationary sources from the multi-dimensional signals without the need for independence and without prior information about the source signals. Subsequently, 10 dimensionless parameters in the time-frequency domain are calculated for the non-stationary sources to generate samples to train the LS-SVM. Finally, the measured vibration signals from tools of unknown state and their non-stationary sources separated by SSA serve as test samples for the trained SVM. The experimental validation demonstrated that the proposed method has better diagnosis accuracy than three previous methods based on LS-SVM alone, on principal component analysis with LS-SVM, or on SSA with linear discriminant analysis.
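
    The phase space reconstruction step mentioned above expands a single-sensor signal into a multi-dimensional observation via time-delay embedding. The following minimal sketch shows that step only; SSA, feature extraction and the LS-SVM classifier are omitted, and the embedding dimension and delay are illustrative choices rather than values from the record.

```python
# Sketch: time-delay (phase-space) embedding of a single-sensor signal.
import numpy as np

def delay_embed(signal, dim, tau):
    """Return an array of shape (N - (dim-1)*tau, dim) whose rows are
    [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(signal) - (dim - 1) * tau
    return np.column_stack([signal[i * tau: i * tau + n] for i in range(dim)])

# Toy vibration-like signal from a single sensor
t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
X = delay_embed(x, dim=8, tau=5)
print(X.shape)   # multi-dimensional observations that SSA would then separate
```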

  7. Random magnetism

    International Nuclear Information System (INIS)

    Tsallis, C.

    1980-03-01

    The 'ingredients' which control a phase transition in well defined systems as well as in random ones (e.g. random magnetic systems) are listed and discussed within a somewhat unifying perspective. Among these 'ingredients' we find the couplings and elements responsible for the cooperative phenomenon, the topological connectivity as well as possible topological incompatibilities, the influence of new degrees of freedom, the order parameter dimensionality, the ground state degeneracy and finally the 'quanticity' of the system. The general trends, though illustrated in magnetic systems, essentially hold for all phase transitions, and give a basis for connection of this area with Field theory, Theory of dynamical systems, etc. (Author) [pt

  8. Random magnetism

    International Nuclear Information System (INIS)

    Tsallis, C.

    1981-01-01

    The 'ingredients' which control a phase transition in well defined systems as well as in random ones (e.g. random magnetic systems) are listed and discussed within a somewhat unifying perspective. Among these 'ingredients' we find the couplings and elements responsible for the cooperative phenomenon, the topological connectivity as well as possible topological incompatibilities, the influence of new degrees of freedom, the order parameter dimensionality, the ground state degeneracy and finally the 'quanticity' of the system. The general trends, though illustrated in magnetic systems, essentially hold for all phase transitions, and give a basis for connection of this area with Field theory, Theory of dynamical systems, etc. (Author) [pt

  9. Random Decrement

    DEFF Research Database (Denmark)

    Asmussen, J.C.; Ibrahim, S.R.; Brincker, Rune

    Abstract This paper demonstrates how to use the Random Decrement (RD) technique for identification of linear structures subjected to ambient excitation. The theory behind the technique will be presented and guidelines how to choose the different variables will be given. This is done by introducing...

  10. Random Decrement

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Ibrahim, S. R.; Brincker, Rune

    This paper demonstrates how to use the Random Decrement (RD) technique for identification of linear structures subjected to ambient excitation. The theory behind the technique will be presented and guidelines how to choose the different variables will be given. This is done by introducing a new...

  11. Random Decrement

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Ibrahim, R.; Brincker, Rune

    1998-01-01

    This paper demonstrates how to use the Random Decrement (RD) technique for identification of linear structures subjected to ambient excitation. The theory behind the technique will be presented and guidelines how to choose the different variables will be given. This is done by introducing a new...

  12. Random dynamics

    International Nuclear Information System (INIS)

    Bennett, D.L.; Brene, N.; Nielsen, H.B.

    1986-06-01

    The goal of random dynamics is the derivation of the laws of Nature as we know them (standard model) from inessential assumptions. The inessential assumptions made here are expressed as sets of general models at extremely high energies: gauge glass and spacetime foam. Both sets of models lead tentatively to the standard model. (orig.)

  13. Random dynamics

    International Nuclear Information System (INIS)

    Bennett, D.L.

    1987-01-01

    The goal of random dynamics is the derivation of the laws of Nature as we know them (standard model) from inessential assumptions. The inessential assumptions made here are expressed as sets of general models at extremely high energies: Gauge glass and spacetime foam. Both sets of models lead tentatively to the standard model. (orig.)

  14. Random Dynamics

    Science.gov (United States)

    Bennett, D. L.; Brene, N.; Nielsen, H. B.

    1987-01-01

    The goal of random dynamics is the derivation of the laws of Nature as we know them (standard model) from inessential assumptions. The inessential assumptions made here are expressed as sets of general models at extremely high energies: gauge glass and spacetime foam. Both sets of models lead tentatively to the standard model.

  15. Dissecting random and systematic differences between noisy composite data sets.

    Science.gov (United States)

    Diederichs, Kay

    2017-04-01

    Composite data sets measured on different objects are usually affected by random errors, but may also be influenced by systematic (genuine) differences in the objects themselves, or the experimental conditions. If the individual measurements forming each data set are quantitative and approximately normally distributed, a correlation coefficient is often used to compare data sets. However, the relations between data sets are not obvious from the matrix of pairwise correlations since the numerical value of the correlation coefficient is lowered by both random and systematic differences between the data sets. This work presents a multidimensional scaling analysis of the pairwise correlation coefficients which places data sets into a unit sphere within low-dimensional space, at a position given by their CC* values [as defined by Karplus & Diederichs (2012), Science, 336, 1030-1033] in the radial direction and by their systematic differences in one or more angular directions. This dimensionality reduction can not only be used for classification purposes, but also to derive data-set relations on a continuous scale. Projecting the arrangement of data sets onto the subspace spanned by systematic differences (the surface of a unit sphere) allows, irrespective of the random-error levels, the identification of clusters of closely related data sets. The method gains power with increasing numbers of data sets. It is illustrated with an example from low signal-to-noise ratio image processing, and an application in macromolecular crystallography is shown, but the approach is completely general and thus should be widely applicable.
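
    A generic version of the embedding idea can be sketched as follows: pairwise correlation coefficients between noisy composite data sets are converted to dissimilarities and placed in a low-dimensional space with multidimensional scaling. This is an assumption-laden illustration with synthetic data, not the specific CC*-based unit-sphere construction described above.

```python
# Sketch: low-dimensional placement of data sets from their pairwise correlations.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
truth_a, truth_b = rng.normal(size=200), rng.normal(size=200)

# Six noisy composite data sets: four measure signal A, two measure signal B
datasets = [truth_a + 0.5 * rng.normal(size=200) for _ in range(4)] + \
           [truth_b + 0.5 * rng.normal(size=200) for _ in range(2)]

cc = np.corrcoef(datasets)                 # pairwise correlation coefficients
dissimilarity = np.sqrt(2.0 * (1.0 - cc))  # turn correlations into distances

embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = embedding.fit_transform(dissimilarity)
print(np.round(coords, 2))   # data sets measuring the same object cluster together
```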

  16. Random tensors

    CERN Document Server

    Gurau, Razvan

    2017-01-01

    Written by the creator of the modern theory of random tensors, this book is the first self-contained introductory text to this rapidly developing theory. Starting from notions familiar to the average researcher or PhD student in mathematical or theoretical physics, the book presents in detail the theory and its applications to physics. The recent detections of the Higgs boson at the LHC and gravitational waves at LIGO mark new milestones in Physics confirming long standing predictions of Quantum Field Theory and General Relativity. These two experimental results only reinforce today the need to find an underlying common framework of the two: the elusive theory of Quantum Gravity. Over the past thirty years, several alternatives have been proposed as theories of Quantum Gravity, chief among them String Theory. While these theories are yet to be tested experimentally, key lessons have already been learned. Whatever the theory of Quantum Gravity may be, it must incorporate random geometry in one form or another....

  17. Explaining outliers by subspace separability

    DEFF Research Database (Denmark)

    Micenková, Barbora; Ng, Raymond T.; Dang, Xuan-Hong

    2013-01-01

    Outliers are extraordinary objects in a data collection. Depending on the domain, they may represent errors, fraudulent activities or rare events that are subject of our interest. Existing approaches focus on detection of outliers or degrees of outlierness (ranking), but do not provide a possible...... with any existing outlier detection algorithm and it also includes a heuristic that gives a substantial speedup over the baseline strategy....

  18. Shape analysis with subspace symmetries

    KAUST Repository

    Berner, Alexander; Wand, Michael D.; Mitra, Niloy J.; Mewes, Daniel; Seidel, Hans Peter

    2011-01-01

    We address the problem of partial symmetry detection, i.e., the identification of building blocks a complex shape is composed of. Previous techniques identify parts that relate to each other by simple rigid mappings, similarity transforms, or, more

  19. Stochastic Subspace Modelling of Turbulence

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Pedersen, B. J.; Nielsen, Søren R.K.

    2009-01-01

    Turbulence of the incoming wind field is of paramount importance to the dynamic response of civil engineering structures. Hence reliable stochastic models of the turbulence should be available from which time series can be generated for dynamic response and structural safety analysis. In the paper, starting from a positive definite cross-spectral density matrix, a frequency response matrix is constructed which determines the turbulence vector as a linear filtration of Gaussian white noise. Finally, an accurate state space modelling method is proposed which allows selection of an appropriate model order, and estimation of a state space model for the vector turbulence process incorporating its phase spectrum in one stage, and its results are compared with a conventional ARMA modelling method.

  20. A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments

    KAUST Repository

    Harman, Radoslav

    2018-01-17

    We propose a class of subspace ascent methods for computing optimal approximate designs that covers both existing as well as new and more efficient algorithms. Within this class of methods, we construct a simple, randomized exchange algorithm (REX). Numerical comparisons suggest that the performance of REX is comparable or superior to the performance of state-of-the-art methods across a broad range of problem structures and sizes. We focus on the most commonly used criterion of D-optimality that also has applications beyond experimental design, such as the construction of the minimum volume ellipsoid containing a given set of data-points. For D-optimality, we prove that the proposed algorithm converges to the optimum. We also provide formulas for the optimal exchange of weights in the case of the criterion of A-optimality. These formulas enable one to use REX for computing A-optimal and I-optimal designs.
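
    The following sketch is not the authors' REX algorithm; it is a much simpler weight-exchange heuristic for D-optimality in the same spirit, shifting design weight from a low-variance support point to the highest-variance candidate and accepting the transfer that maximizes the log-determinant of the information matrix. The regression model and grid are illustrative.

```python
# Sketch: a simple weight-exchange ascent for D-optimal approximate designs.
import numpy as np

def log_det_info(X, w):
    """log det of the information matrix M(w) = sum_i w_i x_i x_i^T."""
    M = (X * w[:, None]).T @ X
    sign, logdet = np.linalg.slogdet(M)
    return logdet if sign > 0 else -np.inf

def exchange_step(X, w, grid=np.linspace(0.0, 1.0, 21)):
    """Move weight from the 'worst' support point to the 'best' candidate point."""
    M_inv = np.linalg.inv((X * w[:, None]).T @ X)
    d = np.einsum("ij,jk,ik->i", X, M_inv, X)      # variance function d_i(w)
    j = int(np.argmax(d))                          # candidate gaining weight
    support = np.where(w > 1e-12)[0]
    k = support[int(np.argmin(d[support]))]        # support point losing weight
    best_w, best_val = w, log_det_info(X, w)
    for frac in grid:                              # line search over the transfer
        trial = w.copy()
        delta = frac * w[k]
        trial[k] -= delta
        trial[j] += delta
        val = log_det_info(X, trial)
        if val > best_val:
            best_w, best_val = trial, val
    return best_w

# Toy design space: quadratic regression on 21 points in [-1, 1]
x = np.linspace(-1, 1, 21)
X = np.column_stack([np.ones_like(x), x, x**2])
w = np.full(len(x), 1.0 / len(x))
for _ in range(200):
    w = exchange_step(X, w)
print(np.round(w[w > 1e-3], 3), x[w > 1e-3])  # weight concentrates near x = -1, 0, 1
```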

  1. A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments

    KAUST Repository

    Harman, Radoslav; Filová , Lenka; Richtarik, Peter

    2018-01-01

    We propose a class of subspace ascent methods for computing optimal approximate designs that covers both existing as well as new and more efficient algorithms. Within this class of methods, we construct a simple, randomized exchange algorithm (REX). Numerical comparisons suggest that the performance of REX is comparable or superior to the performance of state-of-the-art methods across a broad range of problem structures and sizes. We focus on the most commonly used criterion of D-optimality that also has applications beyond experimental design, such as the construction of the minimum volume ellipsoid containing a given set of data-points. For D-optimality, we prove that the proposed algorithm converges to the optimum. We also provide formulas for the optimal exchange of weights in the case of the criterion of A-optimality. These formulas enable one to use REX for computing A-optimal and I-optimal designs.

  2. Random pulse generator

    International Nuclear Information System (INIS)

    Guo Ya'nan; Jin Dapeng; Zhao Dixin; Liu Zhen'an; Qiao Qiao; Chinese Academy of Sciences, Beijing

    2007-01-01

    Due to the randomness of radioactive decay and nuclear reactions, the signals from detectors are random in time, whereas a normal pulse generator produces periodic pulses. To measure the performance of nuclear electronic devices under random inputs, a random pulse generator is necessary. Types of random pulse generators are reviewed, and two digital random pulse generators are introduced. (authors)
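
    For a software stand-in, pulse trains with the timing statistics of radioactive decay can be produced by drawing exponentially distributed inter-arrival times (a homogeneous Poisson process); the sketch below is illustrative and unrelated to the hardware generators described in the record.

```python
# Sketch: Poisson pulse train with exponentially distributed inter-arrival times.
import numpy as np

def random_pulse_times(rate_hz, duration_s, seed=0):
    rng = np.random.default_rng(seed)
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_hz)   # waiting time to the next pulse
        if t > duration_s:
            return np.array(times)
        times.append(t)

pulses = random_pulse_times(rate_hz=1000.0, duration_s=1.0)
print(len(pulses), "pulses; mean interval",
      np.diff(pulses).mean() if len(pulses) > 1 else None)
```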

  3. Random matrices and random difference equations

    International Nuclear Information System (INIS)

    Uppuluri, V.R.R.

    1975-01-01

    Mathematical models leading to products of random matrices and random difference equations are discussed. A one-compartment model with random behavior is introduced, and it is shown how the average concentration in the discrete time model converges to the exponential function. This is of relevance to understanding how radioactivity gets trapped in bone structure in blood--bone systems. The ideas are then generalized to two-compartment models and mammillary systems, where products of random matrices appear in a natural way. The appearance of products of random matrices in applications in demography and control theory is considered. Then random sequences motivated from the following problems are studied: constant pulsing and random decay models, random pulsing and constant decay models, and random pulsing and random decay models

  4. Topics in random walks in random environment

    International Nuclear Information System (INIS)

    Sznitman, A.-S.

    2004-01-01

    Over the last twenty-five years random motions in random media have been intensively investigated and some new general methods and paradigms have by now emerged. Random walks in random environment constitute one of the canonical models of the field. However in dimension bigger than one they are still poorly understood and many of the basic issues remain to this day unresolved. The present series of lectures attempt to give an account of the progresses which have been made over the last few years, especially in the study of multi-dimensional random walks in random environment with ballistic behavior. (author)

  5. Genome-wide association data classification and SNPs selection using two-stage quality-based Random Forests.

    Science.gov (United States)

    Nguyen, Thanh-Tung; Huang, Joshua; Wu, Qingyao; Nguyen, Thuy; Li, Mark

    2015-01-01

    Single-nucleotide polymorphism (SNP) selection and identification are the most important tasks in genome-wide association data analysis. The problem is difficult because genome-wide association data are very high dimensional and a large portion of the SNPs in the data are irrelevant to the disease. Advanced machine learning methods have been successfully used in genome-wide association studies (GWAS) to identify genetic variants that have relatively large effects in some common, complex diseases. Among them, the most successful is Random Forests (RF). Despite performing well in terms of prediction accuracy on some data sets of moderate size, RF still struggles in GWAS with selecting informative SNPs and building accurate prediction models. In this paper, we propose a new two-stage quality-based sampling method in random forests, named ts-RF, for SNP subspace selection in GWAS. The method first applies a p-value assessment to find a cut-off point that separates the informative and irrelevant SNPs into two groups. The informative SNP group is further divided into two sub-groups: highly informative and weakly informative SNPs. When sampling the SNP subspace for building trees of the forest, only SNPs from these two sub-groups are considered, so the feature subspaces used to split a node always contain highly informative SNPs. This approach yields more accurate trees with lower prediction error while helping to avoid overfitting. It allows the detection of interactions of multiple SNPs with the diseases, and reduces the dimensionality and the amount of genome-wide association data needed to learn the RF model. Extensive experiments on two genome-wide SNP data sets (Parkinson case-control data comprising 408,803 SNPs and Alzheimer case-control data comprising 380,157 SNPs) and 10 gene data sets have demonstrated that the proposed model significantly reduced prediction errors and outperformed
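
    The two-stage subspace sampling idea can be sketched as follows. This is an illustrative reading of the description above, not the authors' ts-RF code: features are split by a p-value threshold, the informative group is divided into highly and weakly informative subsets, and each sampled subspace mixes the two; the threshold, split fraction and subspace size are assumed parameters.

```python
# Sketch: two-stage, quality-based feature subspace sampling.
import numpy as np

def two_stage_subspace(pvalues, mtry, alpha=0.05, strong_frac=0.3, rng=None):
    """Sample one feature subspace of (up to) mtry features.

    Features with p-value >= alpha are treated as irrelevant and never sampled.
    The informative features are split into 'highly informative' (smallest
    p-values) and 'weakly informative'; each subspace mixes both groups.
    """
    rng = np.random.default_rng(rng)
    informative = np.where(pvalues < alpha)[0]
    order = informative[np.argsort(pvalues[informative])]
    n_strong = max(1, int(strong_frac * len(order)))
    strong, weak = order[:n_strong], order[n_strong:]
    k_strong = min(len(strong), max(1, mtry // 2))
    k_weak = min(len(weak), mtry - k_strong)
    pick = np.concatenate([
        rng.choice(strong, size=k_strong, replace=False),
        rng.choice(weak, size=k_weak, replace=False) if k_weak > 0 else [],
    ])
    return pick.astype(int)

# Toy usage: 1000 features with uniform p-values, subspace of size 30
pvals = np.random.default_rng(0).uniform(size=1000)
print(two_stage_subspace(pvals, mtry=30, rng=1))
```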

  6. Random broadcast on random geometric graphs

    Energy Technology Data Exchange (ETDEWEB)

    Bradonjic, Milan [Los Alamos National Laboratory; Elsasser, Robert [UNIV OF PADERBORN; Friedrich, Tobias [ICSI/BERKELEY; Sauerwald, Tomas [ICSI/BERKELEY

    2009-01-01

    In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs when, with high probability, (i) the RGG is connected, or (ii) there exists a giant component in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph, or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
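
    A small simulation of the push protocol on a random geometric graph can be written with NetworkX; the sketch below simply counts rounds until the source's connected component is informed and makes no claim about the asymptotic bounds derived in the record. Graph size and radius are arbitrary illustrative values.

```python
# Sketch: push broadcast on a random geometric graph.
import random
import networkx as nx

def push_broadcast_rounds(G, source, seed=0):
    """Rounds of the push protocol until the source's component is informed."""
    rng = random.Random(seed)
    informed = {source}
    target = set(nx.node_connected_component(G, source))
    rounds = 0
    while informed < target:
        pushed = set()
        for u in informed:
            neighbours = list(G.neighbors(u))
            if neighbours:
                pushed.add(rng.choice(neighbours))   # inform one random neighbour
        informed |= pushed
        rounds += 1
    return rounds

G = nx.random_geometric_graph(200, 0.15, seed=42)
giant = max(nx.connected_components(G), key=len)
source = next(iter(giant))
print("rounds to inform the giant component:", push_broadcast_rounds(G, source))
```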

  7. Quantumness, Randomness and Computability

    International Nuclear Information System (INIS)

    Solis, Aldo; Hirsch, Jorge G

    2015-01-01

    Randomness plays a central role in the quantum mechanical description of our interactions. We review the relationship between the violation of Bell inequalities, non-signaling and randomness. We discuss the challenge in defining a random string, and show that algorithmic information theory provides a necessary condition for randomness using Borel normality. We close with a view on incomputability and its implications in physics. (paper)

  8. How random is a random vector?

    Science.gov (United States)

    Eliazar, Iddo

    2015-12-01

    Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation" -the square root of the generalized variance-is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index" -a derivative of the Wilks standard deviation-is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams"-tangible planar visualizations that answer the question: How random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vectors empirical data.

  9. How random is a random vector?

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2015-01-01

    Over 80 years ago Samuel Wilks proposed that the “generalized variance” of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the “Wilks standard deviation” –the square root of the generalized variance–is indeed the standard deviation of a random vector. We further establish that the “uncorrelation index” –a derivative of the Wilks standard deviation–is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: “randomness measures” and “independence indices” of random vectors. In turn, these general notions give rise to “randomness diagrams”—tangible planar visualizations that answer the question: How random is a random vector? The notion of “independence indices” yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vectors empirical data.

  10. A special covariance structure for random coefficient models with both between and within covariates

    International Nuclear Information System (INIS)

    Riedel, K.S.

    1990-07-01

    We review random coefficient (RC) models in linear regression and propose a bias correction to the maximum likelihood (ML) estimator. Asymptotic expansions of the ML equations are given when the between-individual variance is much larger or smaller than the variance from within-individual fluctuations. The standard model assumes that all but one covariate vary within each individual (we denote the within covariates by the vector χ₁). We consider random coefficient models where some of the covariates do not vary in any single individual (we denote the between covariates by the vector χ₀). The regression coefficients βₖ can only be estimated in the subspace Xₖ of X. Thus the number of individuals necessary to estimate β and the covariance matrix Δ of β increases significantly in the presence of more than one between covariate. When the number of individuals is sufficient to estimate β but not the entire matrix Δ, additional assumptions must be imposed on the structure of Δ. A simple reduced model is that the between component of β is fixed and only the within component varies randomly. This model fails because it is not invariant under linear coordinate transformations and it can significantly overestimate the variance of new observations. We propose a covariance structure for Δ without these difficulties by first projecting the within covariates onto the space perpendicular to the between covariates. (orig.)
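
    The final projection step described above (removing the span of the between covariates from the within covariates) can be sketched in a few lines; the variable names and sizes below are illustrative only.

```python
# Sketch: project within covariates onto the orthogonal complement
# of the between covariates.
import numpy as np

def project_out_between(X_within, X_between):
    Q, _ = np.linalg.qr(X_between)          # orthonormal basis of span(X_between)
    return X_within - Q @ (Q.T @ X_within)  # residual after removing that span

rng = np.random.default_rng(0)
X0 = rng.normal(size=(50, 2))   # between covariates (constant within individual)
X1 = rng.normal(size=(50, 3))   # within covariates
X1_perp = project_out_between(X1, X0)
print(np.allclose(X0.T @ X1_perp, 0.0, atol=1e-10))  # orthogonality check
```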

  11. On a randomly imperfect spherical cap pressurized by a random ...

    African Journals Online (AJOL)

    On a randomly imperfect spherical cap pressurized by a random dynamic load. ... In this paper, we investigate a dynamical system in a random setting of dual ... characterization of the random process for determining the dynamic buckling load ...

  12. Blocked Randomization with Randomly Selected Block Sizes

    Directory of Open Access Journals (Sweden)

    Jimmy Efird

    2010-12-01

    Full Text Available When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
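
    A minimal sketch of blocked randomization with randomly selected block sizes follows; the arm names, candidate block sizes and seed are illustrative choices, not a prescription for an actual trial.

```python
# Sketch: permuted-block randomization with randomly chosen block sizes.
import random

def blocked_randomization(n_participants, arms=("treatment", "control"),
                          block_sizes=(2, 4, 6), seed=0):
    """Each block contains every arm equally often, so allocation stays
    balanced, while the random block size keeps assignments unpredictable."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        size = rng.choice([s for s in block_sizes if s % len(arms) == 0])
        block = list(arms) * (size // len(arms))
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]

print(blocked_randomization(10))
```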

  13. Random walks, random fields, and disordered systems

    CERN Document Server

    Černý, Jiří; Kotecký, Roman

    2015-01-01

    Focusing on the mathematics that lies at the intersection of probability theory, statistical physics, combinatorics and computer science, this volume collects together lecture notes on recent developments in the area. The common ground of these subjects is perhaps best described by the three terms in the title: Random Walks, Random Fields and Disordered Systems. The specific topics covered include a study of Branching Brownian Motion from the perspective of disordered (spin-glass) systems, a detailed analysis of weakly self-avoiding random walks in four spatial dimensions via methods of field theory and the renormalization group, a study of phase transitions in disordered discrete structures using a rigorous version of the cavity method, a survey of recent work on interacting polymers in the ballisticity regime and, finally, a treatise on two-dimensional loop-soup models and their connection to conformally invariant systems and the Gaussian Free Field. The notes are aimed at early graduate students with a mod...

  14. Misuse of randomization

    DEFF Research Database (Denmark)

    Liu, Jianping; Kjaergard, Lise Lotte; Gluud, Christian

    2002-01-01

    The quality of randomization of Chinese randomized trials on herbal medicines for hepatitis B was assessed. Search strategy and inclusion criteria were based on the published protocol. One hundred and seventy-six randomized clinical trials (RCTs) involving 20,452 patients with chronic hepatitis B...... virus (HBV) infection were identified that tested Chinese medicinal herbs. They were published in 49 Chinese journals. Only 10% (18/176) of the studies reported the method by which they randomized patients. Only two reported allocation concealment and were considered as adequate. Twenty percent (30...

  15. Random surfaces and strings

    International Nuclear Information System (INIS)

    Ambjoern, J.

    1987-08-01

    The theory of strings is the theory of random surfaces. I review the present attempts to regularize the world sheet of the string by triangulation. The corresponding statistical theory of triangulated random surfaces has a surprisingly rich structure, but the connection to conventional string theory seems non-trivial. (orig.)

  16. Derandomizing from random strings

    NARCIS (Netherlands)

    Buhrman, H.; Fortnow, L.; Koucký, M.; Loff, B.

    2010-01-01

    In this paper we show that BPP is truth-table reducible to the set of Kolmogorov random strings R(K). It was previously known that PSPACE, and hence BPP is Turing-reducible to R(K). The earlier proof relied on the adaptivity of the Turing-reduction to find a Kolmogorov-random string of polynomial

  17. Quantum random number generator

    Science.gov (United States)

    Soubusta, Jan; Haderka, Ondrej; Hendrych, Martin

    2001-03-01

    Since reflection or transmission of a quantum particle on a beamsplitter is an inherently random quantum process, a device built on this principle does not suffer from the drawbacks of either pseudo-random computer generators or classical noise sources. Nevertheless, a number of physical conditions necessary for high quality random number generation must be satisfied. Luckily, in a quantum optics realization they can be well controlled. We present a simple random number generator based on the division of weak light pulses on a beamsplitter. The randomness of the generated bit stream is supported by passing the data through a series of 15 statistical tests. The device generates at a rate of 109.7 kbit/s.

  18. Quantum random number generator

    Science.gov (United States)

    Pooser, Raphael C.

    2016-05-10

    A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.

  19. Autonomous Byte Stream Randomizer

    Science.gov (United States)

    Paloulian, George K.; Woo, Simon S.; Chow, Edward T.

    2013-01-01

    Net-centric networking environments are often faced with limited resources and must utilize bandwidth as efficiently as possible. In networking environments that span wide areas, the data transmission has to be efficient without any redundant or exuberant metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data encryption methods. Randomizing the data's byte stream adds an extra layer to existing data protection methods, thus making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in the data using the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security requiring the byte stream to be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software enables the data to be distributed to N nodes in an environment. Each piece of the data in randomized and distributed form is a separate entity, unreadable in its own right, but when combined with all N pieces it can be reconstructed back into one. Reconstruction requires possession of the key used for randomizing the bytes, leading to the generation of the same cryptographically secure random sequence of numbers used to randomize the data. This software is a cornerstone capability possessing the ability to generate the same cryptographically secure sequence on different machines and at different times, thus allowing this software to be used more heavily in net-centric environments where data transfer bandwidth is limited.
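
    The core shuffle/restore idea can be sketched as below. This is only an illustration of a seeded Fisher-Yates permutation of a byte stream and its inversion; it uses Python's standard PRNG rather than the cryptographically secure generator the described software would require, and it omits the distribution to N nodes.

```python
# Sketch: seeded Fisher-Yates shuffle of a byte stream and its inversion.
import random

def randomize_bytes(data: bytes, seed: int) -> bytes:
    """In-place Fisher-Yates shuffle driven by a seeded RNG."""
    buf = bytearray(data)
    rng = random.Random(seed)
    for i in range(len(buf) - 1, 0, -1):
        j = rng.randrange(i + 1)
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)

def restore_bytes(shuffled: bytes, seed: int) -> bytes:
    """Invert the shuffle by replaying the same swap sequence in reverse."""
    rng = random.Random(seed)
    swaps = [(i, rng.randrange(i + 1)) for i in range(len(shuffled) - 1, 0, -1)]
    buf = bytearray(shuffled)
    for i, j in reversed(swaps):
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)

msg = b"confidential payload"
seed = 20130101  # stands in for a cryptographically secure shared seed
scrambled = randomize_bytes(msg, seed)
assert restore_bytes(scrambled, seed) == msg
```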

  20. A Study on the Self-Adaption Incentive Performance Salary

    Science.gov (United States)

    Zhang, Chuanming; Wang, Yang

    In project management, performance-based salary is often used to motivate project managers and similar staff to improve performance or reduce cost. However, engineering activities involve many internal and external uncertainties that the principal cannot fully observe, which makes it difficult to set suitable incentive targets for project managers. Since the manager possesses the most complete information about the engineering activities, this paper sets up an incentive model in which the project managers themselves report their performance objectives, and the owner rewards or penalizes them based on both the reported and the actual performance. The model ensures that a project manager obtains the maximum profit only by reporting results accurately, while at the same time motivating managers to improve performance or reduce cost. The paper focuses on constructing the model, analyzing its parameters, and illustrating the analysis with an example.

  1. Self Adaptive Safe Provisioning of Wireless Power Using DCOPs

    NARCIS (Netherlands)

    Leeuwen, C.J. van; Yildirim, K.S.; Pawelczak, P.

    2017-01-01

    Wireless Power Transfer (WPT) technologies aim at getting rid of cables used by consumer devices for energy provision. As long distance WPT is becoming mature, the health impact of WPT becomes increasingly important to consider. In this paper we look at how to maximize the wireless power transfer to

  2. Towards A Self Adaptive System for Social Wellness

    Directory of Open Access Journals (Sweden)

    Asad Masood Khattak

    2016-04-01

    Full Text Available Advancements in science and technology have highlighted the importance of robust healthcare services, lifestyle services and personalized recommendations. For this purpose, patient daily life activity recognition, profile information, and patient personal experience are required. In this research work we focus on improving the general health and life status of the elderly through the use of innovative services that align dietary intake with daily life and health activity information. Dynamic provisioning of personalized healthcare and life-care services is based on the patient's daily life activities recognized using a smartphone. To achieve this, an ontology-based approach is proposed, where all the daily life activities and patient profile information are modeled in an ontology. Then the semantic context is exploited with an inference mechanism that enables fine-grained situation analysis for personalized service recommendations. A generic system architecture is proposed that facilitates context information storage and exchange, profile information, and the newly recognized activities. The system exploits the patient's situation using semantic inference and provides recommendations for appropriate nutrition and activity related services. The proposed system is extensively evaluated to validate its claims and its dynamic nature. The experimental results are very encouraging and show better accuracy than the existing system. The proposed system also performs better in terms of system support for a dynamic knowledge base and personalized recommendations.

  3. Robust object tracking based on self-adaptive search area

    Science.gov (United States)

    Dong, Taihang; Zhong, Sheng

    2018-02-01

    Discriminative correlation filter (DCF) based trackers have recently achieved excellent performance with great computational efficiency. However, DCF based trackers suffer from boundary effects, which result in unstable performance in challenging situations exhibiting fast motion. In this paper, we propose a novel method to mitigate this side effect in DCF based trackers. We change the search area according to the prediction of target motion. When the object moves fast, a broad search area alleviates boundary effects and preserves the probability of locating the object. When the object moves slowly, a narrow search area suppresses the effect of useless background information and improves computational efficiency to attain real-time performance. This strategy can substantially mitigate boundary effects in situations exhibiting fast motion and motion blur, and it can be used in almost all DCF based trackers. Experiments on the OTB benchmark show that the proposed framework improves the performance compared with the baseline trackers.

  4. Self-adaptive numerical integrator for analytic functions

    International Nuclear Information System (INIS)

    Garribba, S.; Quartapelle, L.; Reina, G.

    1978-01-01

    A new adaptive algorithm for the integration of analytic functions is presented. The algorithm processes the integration interval by generating local subintervals whose length is controlled through a feedback loop. The control is obtained by means of a relation derived on an analytical basis and valid for an arbitrary integration rule: two different estimates of an integral are used to compute the interval length necessary to obtain an integral estimate with accuracy within the assigned error bounds. The implied method for local generation of subintervals and an effective assumption of error partition among subintervals yield an adaptive algorithm that is both highly accurate and very efficient. The particular algorithm obtained by choosing the 6-point Gauss-Legendre integration rule is considered, and extensive comparisons are made with other outstanding integration algorithms.
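
    As a concrete illustration of adaptive quadrature built on the same 6-point Gauss-Legendre rule, here is a short sketch; it is not the feedback-controlled algorithm of the record (which derives the next subinterval length analytically from two estimates) but a simpler recursive variant that compares a whole-interval estimate with two half-interval estimates and splits the error budget on subdivision.

```python
import numpy as np

# 6-point Gauss-Legendre nodes and weights on [-1, 1].
_NODES, _WEIGHTS = np.polynomial.legendre.leggauss(6)

def gauss6(f, a, b):
    """6-point Gauss-Legendre estimate of the integral of f over [a, b]."""
    x = 0.5 * (b - a) * _NODES + 0.5 * (a + b)
    return 0.5 * (b - a) * np.dot(_WEIGHTS, f(x))

def adaptive_gauss(f, a, b, tol=1e-12):
    """Compare a whole-interval estimate with the sum of two half-interval
    estimates; where they disagree, subdivide and split the error budget."""
    whole = gauss6(f, a, b)
    mid = 0.5 * (a + b)
    halves = gauss6(f, a, mid) + gauss6(f, mid, b)
    if abs(halves - whole) < tol:
        return halves
    return adaptive_gauss(f, a, mid, 0.5 * tol) + adaptive_gauss(f, mid, b, 0.5 * tol)

if __name__ == "__main__":
    from math import erf, sqrt, pi
    # Integral of exp(-x^2) over [0, 3]; the exact value involves erf(3).
    approx = adaptive_gauss(lambda x: np.exp(-x ** 2), 0.0, 3.0)
    print(approx, 0.5 * sqrt(pi) * erf(3.0))
```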

  5. Self-adaptation of Ontologies to Folksonomies in Semantic Web

    OpenAIRE

    Francisco Echarte; José Javier Astrain; Alberto Córdoba; Jesús Villadangos

    2008-01-01

    Ontologies and tagging systems are two different ways to organize the knowledge present in the current Web. In this paper we propose a simple method to model folksonomies, as tagging systems, with ontologies. We show the scalability of the method using real data sets. The modeling method is composed of a generic ontology that represents any folksonomy and an algorithm to transform the information contained in folksonomies to the generic ontology. The method allows representing folksonomies at...

  6. Self-adaptive Bioinspired Hummingbird-wing Stimulated Triboelectric Nanogenerators.

    Science.gov (United States)

    Ahmed, Abdelsalam; Hassan, Islam; Song, Peiyi; Gamaleldin, Mohamed; Radhi, Ali; Panwar, Nishtha; Tjin, Swee Chuan; Desoky, Ahmed Y; Sinton, David; Yong, Ken-Tye; Zu, Jean

    2017-12-07

    Bio-inspired technologies have remarkable potential for energy harvesting from clean and sustainable energy sources. Inspired by the hummingbird-wing structure, we propose a shape-adaptive, lightweight triboelectric nanogenerator (TENG) designed to exploit the unique flutter mechanics of the hummingbird for small-scale wind energy harvesting. The flutter is confined between two surfaces for contact electrification upon oscillation. We investigate the flutter mechanics on multiple contact surfaces with several free-standing and lightweight electrification designs. The flutter-driven TENGs are deposited on simplified wing designs to match the electrical performance with variations in wind speed. The hummingbird TENG (H-TENG) device weighed 10 g, making it one of the lightest TENG harvesters in the literature. With a six-TENG network, the hybrid design attained a 1.5 W m⁻² peak electrical output at a 7.5 m/s wind speed, with an approximately linear increase in charge rate with the number of TENG harvesters. We demonstrate the ability of the H-TENG networks to operate Internet of Things (IoT) devices from sustainable and renewable energy sources.

  7. Self-adapting the success rate when practicing math

    NARCIS (Netherlands)

    Jansen, B.R.J.; Hofman, A.D.; Savi, A.; Visser, I.; van der Maas, H.L.J.

    2016-01-01

    The use and benefits of the option to choose a success rate are studied in a math-practice application that is used by a considerable percentage of Dutch primary school children. Study 1 uses data that were collected with the application, using children's practice data (N = 40,329; grades 1–6).

  8. Towards A Self Adaptive System for Social Wellness.

    Science.gov (United States)

    Khattak, Asad Masood; Khan, Wajahat Ali; Pervez, Zeeshan; Iqbal, Farkhund; Lee, Sungyoung

    2016-04-13

    Advancements in science and technology have highlighted the importance of robust healthcare services, lifestyle services and personalized recommendations. For this purpose, patient daily life activity recognition, profile information, and patient personal experience are required. In this research work we focus on improving the general health and life status of the elderly through innovative services that align dietary intake with daily life and health activity information. Dynamic provisioning of personalized healthcare and life-care services is based on the patient's daily life activities recognized using a smartphone. To achieve this, an ontology-based approach is proposed, where all the daily life activities and patient profile information are modeled in an ontology. The semantic context is then exploited with an inference mechanism that enables fine-grained situation analysis for personalized service recommendations. A generic system architecture is proposed that facilitates storage and exchange of context information, profile information, and newly recognized activities. The system exploits the patient's situation using semantic inference and provides recommendations for appropriate nutrition- and activity-related services. The proposed system is extensively evaluated against its claims and for its dynamic nature. The experimental results are very encouraging and show better accuracy than the existing system. The proposed system also performs better in terms of support for a dynamic knowledge base and personalized recommendations.

  9. Random number generation

    International Nuclear Information System (INIS)

    Coveyou, R.R.

    1974-01-01

    The subject of random number generation is currently controversial. Differing opinions on this subject seem to stem from implicit or explicit differences in philosophy; in particular, from differing ideas concerning the role of probability in the real world of physical processes, electronic computers, and Monte Carlo calculations. An attempt is made here to reconcile these views. The role of stochastic ideas in mathematical models is discussed. In illustration of these ideas, a mathematical model of the use of random number generators in Monte Carlo calculations is constructed. This model is used to set up criteria for the comparison and evaluation of random number generators. (U.S.)

  10. Quantum random access memory

    OpenAIRE

    Giovannetti, Vittorio; Lloyd, Seth; Maccone, Lorenzo

    2007-01-01

    A random access memory (RAM) uses n bits to randomly address N=2^n distinct memory cells. A quantum random access memory (qRAM) uses n qubits to address any quantum superposition of N memory cells. We present an architecture that exponentially reduces the requirements for a memory call: O(log N) switches need be thrown instead of the N used in conventional (classical or quantum) RAM designs. This yields a more robust qRAM algorithm, as it in general requires entanglement among exponentially l...

  11. Randomization of inspections

    International Nuclear Information System (INIS)

    Markin, J.T.

    1989-01-01

    As the numbers and complexity of nuclear facilities increase, limitations on resources for international safeguards may restrict attainment of safeguards goals. One option for improving the efficiency of limited resources is to expand the current inspection regime to include random allocation of the amount and frequency of inspection effort to material strata or to facilities. This paper identifies the changes in safeguards policy, administrative procedures, and operational procedures that would be necessary to accommodate randomized inspections and identifies those situations where randomization can improve inspection efficiency and those situations where the current nonrandom inspections should be maintained. 9 refs., 1 tab

  12. Random phenomena; Phenomenes aleatoires

    Energy Technology Data Exchange (ETDEWEB)

    Bonnet, G. [Commissariat a l' energie atomique et aux energies alternatives - CEA, C.E.N.G., Service d' Electronique, Section d' Electronique, Grenoble (France)

    1963-07-01

    This document gathers a set of conferences presented in 1962. The first proposes a mathematical introduction to the analysis of random phenomena. The second presents an axiomatic treatment of probability calculus. The third gives an overview of one-dimensional random variables. The fourth addresses random pairs and presents basic theorems regarding the algebra of mathematical expectations. The fifth discusses some probability laws: the binomial distribution, the Poisson distribution, and the Laplace-Gauss distribution. The last one deals with the issues of stochastic convergence and asymptotic distributions.

  13. Tunable random packings

    International Nuclear Information System (INIS)

    Lumay, G; Vandewalle, N

    2007-01-01

    We present an experimental protocol that allows one to tune the packing fraction η of a random pile of ferromagnetic spheres from a value close to the lower limit of random loose packing η_RLP ≅ 0.56 to the upper limit of random close packing η_RCP ≅ 0.64. This broad range of packing fraction values is obtained under normal gravity in air, by adjusting a magnetic cohesion between the grains during the formation of the pile. Attractive and repulsive magnetic interactions are found to strongly affect the internal structure and the stability of the sphere packing. After the formation of the pile, the induced cohesion is decreased continuously along a linearly decreasing ramp. The controlled collapse of the pile is found to generate various, reproducible values of the random packing fraction η.

  14. Random maintenance policies

    CERN Document Server

    Nakagawa, Toshio

    2014-01-01

    Exploring random maintenance models, this book provides an introduction to the implementation of random maintenance, and it is one of the first books to be written on this subject.  It aims to help readers learn new techniques for applying random policies to actual reliability models, and it provides new theoretical analyses of various models including classical replacement, preventive maintenance and inspection policies. These policies are applied to scheduling problems, backup policies of database systems, maintenance policies of cumulative damage models, and reliability of random redundant systems. Reliability theory is a major concern for engineers and managers, and in light of Japan’s recent earthquake, the reliability of large-scale systems has increased in importance. This also highlights the need for a new notion of maintenance and reliability theory, and how this can practically be applied to systems. Providing an essential guide for engineers and managers specializing in reliability maintenance a...

  15. Theory of random sets

    CERN Document Server

    Molchanov, Ilya

    2017-01-01

    This monograph, now in a thoroughly revised second edition, offers the latest research on random sets. It has been extended to include substantial developments achieved since 2005, some of them motivated by applications of random sets to econometrics and finance. The present volume builds on the foundations laid by Matheron and others, including the vast advances in stochastic geometry, probability theory, set-valued analysis, and statistical inference. It shows the various interdisciplinary relationships of random set theory within other parts of mathematics, and at the same time fixes terminology and notation that often vary in the literature, establishing it as a natural part of modern probability theory and providing a platform for future development. It is completely self-contained, systematic and exhaustive, with the full proofs that are necessary to gain insight. Aimed at research level, Theory of Random Sets will be an invaluable reference for probabilists; mathematicians working in convex and integ...

  16. Quantum randomness and unpredictability

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, Gregg [Quantum Communication and Measurement Laboratory, Department of Electrical and Computer Engineering and Division of Natural Science and Mathematics, Boston University, Boston, MA (United States)

    2017-06-15

    Quantum mechanics is a physical theory supplying probabilities corresponding to expectation values for measurement outcomes. Indeed, its formalism can be constructed with measurement as a fundamental process, as was done by Schwinger, provided that individual measurement outcomes occur in a random way. The randomness appearing in quantum mechanics, as with other forms of randomness, has often been considered equivalent to a form of indeterminism. Here, it is argued that quantum randomness should instead be understood as a form of unpredictability because, amongst other things, indeterminism is not a necessary condition for randomness. For concreteness, an explication of the randomness of quantum mechanics as the unpredictability of quantum measurement outcomes is provided. Finally, it is shown how this view can be combined with the recently introduced view that the very appearance of individual quantum measurement outcomes can be grounded in the Plenitude principle of Leibniz, a principle variants of which have been utilized in physics by Dirac and Gell-Mann in relation to the fundamental processes. This move provides further support to Schwinger's "symbolic" derivation of quantum mechanics from measurement. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  17. Reconstructing random media

    International Nuclear Information System (INIS)

    Yeong, C.L.; Torquato, S.

    1998-01-01

    We formulate a procedure to reconstruct the structure of general random heterogeneous media from limited morphological information by extending the methodology of Rintoul and Torquato [J. Colloid Interface Sci. 186, 467 (1997)] developed for dispersions. The procedure has the advantages that it is simple to implement and generally applicable to multidimensional, multiphase, and anisotropic structures. Furthermore, an extremely useful feature is that it can incorporate any type and number of correlation functions in order to provide as much morphological information as is necessary for accurate reconstruction. We consider a variety of one- and two-dimensional reconstructions, including periodic and random arrays of rods, various distributions of disks, Debye random media, and a Fontainebleau sandstone sample. We also use our algorithm to construct heterogeneous media from specified hypothetical correlation functions, including an exponentially damped, oscillating function as well as physically unrealizable ones. copyright 1998 The American Physical Society
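
    A minimal sketch of this style of stochastic reconstruction is shown below; it is an illustration of the general idea, not the authors' code. It matches a single two-point correlation function S2 measured along one axis of a small binary image, using pixel swaps accepted by a simulated-annealing rule; the grid size, cooling schedule and correlation range are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def s2_x(img, rmax):
    """Two-point probability S2(r) along x (periodic boundaries): probability
    that two pixels separated by r both belong to phase 1."""
    return np.array([np.mean(img * np.roll(img, r, axis=1)) for r in range(rmax)])

def reconstruct(target_s2, shape, phase_fraction, rmax, steps=3000, t0=1e-3):
    """Stochastic reconstruction: swap a phase-1 pixel with a phase-0 pixel and
    accept the swap with a simulated-annealing rule whenever it brings the
    image's S2 closer to the target correlation function."""
    n_ones = int(phase_fraction * shape[0] * shape[1])
    img = np.zeros(shape, dtype=int)
    img.flat[rng.choice(shape[0] * shape[1], size=n_ones, replace=False)] = 1

    energy = np.sum((s2_x(img, rmax) - target_s2) ** 2)
    for step in range(steps):
        temp = t0 * (1.0 - step / steps)              # linear cooling schedule
        i = rng.choice(np.flatnonzero(img))           # a phase-1 pixel
        j = rng.choice(np.flatnonzero(img == 0))      # a phase-0 pixel
        img.flat[i], img.flat[j] = 0, 1               # trial swap (keeps volume fraction)
        new_energy = np.sum((s2_x(img, rmax) - target_s2) ** 2)
        if new_energy < energy or rng.random() < np.exp(-(new_energy - energy) / max(temp, 1e-12)):
            energy = new_energy
        else:
            img.flat[i], img.flat[j] = 1, 0           # reject: undo the swap
    return img, energy

if __name__ == "__main__":
    # "Measured" statistics: S2 of a reference random image.
    ref = (rng.random((32, 32)) < 0.3).astype(int)
    target = s2_x(ref, rmax=8)
    recon, err = reconstruct(target, (32, 32), phase_fraction=0.3, rmax=8)
    print("final S2 mismatch:", err)
```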

  18. Subspace Methods for Massive and Messy Data

    Science.gov (United States)

    2017-07-12

    We also use four datasets from the 20 newsgroups corpus: atheism-religion, autos-motorcycle, cryptography-electronics and mac-windows. We compared ... with rank R, indicating the data has few degrees of freedom. The key observation in our work studying the variety model is that despite the data ...

  19. Dominant Taylor Spectrum and Invariant Subspaces

    Czech Academy of Sciences Publication Activity Database

    Ambrozie, Calin-Grigore; Müller, Vladimír

    2009-01-01

    Roč. 61, č. 1 (2009), s. 101-111 ISSN 0379-4024 R&D Projects: GA ČR(CZ) GA201/06/0128 Institutional research plan: CEZ:AV0Z10190503 Keywords : Taylor spectrum * Scott-Brown technique * dominant spectrum Subject RIV: BA - General Mathematics Impact factor: 0.580, year: 2009

  20. Subspace Analysis of Indoor UWB Channels

    Directory of Open Access Journals (Sweden)

    Rachid Saadane

    2005-03-01

    This work aims at characterizing the second-order statistics of indoor ultra-wideband (UWB) channels using channel sounding techniques. We present measurement results for different scenarios conducted in a laboratory setting at Institut Eurécom. These are based on an eigendecomposition of the channel autocovariance matrix, which allows for the analysis of the growth in the number of significant degrees of freedom of the channel process as a function of the signaling bandwidth as well as the statistical correlation between different propagation paths. We show empirical eigenvalue distributions as a function of the signal bandwidth for both line-of-sight and non-line-of-sight situations. Furthermore, we give examples where paths from different propagation clusters (possibly arising from reflection or diffraction) show strong statistical dependence.
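
    The analysis described above can be illustrated with a short sketch: eigendecompose the sample autocovariance matrix of a set of channel impulse responses and count how many eigenvalues are needed to capture most of the energy. The synthetic multipath data below is a stand-in assumption used only to make the example self-contained; it is not the Eurécom measurement data.

```python
import numpy as np

rng = np.random.default_rng(1)

def significant_dof(cirs, energy=0.99):
    """Eigendecompose the sample autocovariance of channel impulse responses
    and count the eigenvalues needed to capture a given fraction of the energy."""
    h = cirs - cirs.mean(axis=0, keepdims=True)       # (realizations, delay bins)
    cov = h.conj().T @ h / h.shape[0]                 # sample autocovariance matrix
    eigvals = np.linalg.eigvalsh(cov)[::-1]           # real, sorted descending
    cum = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(cum, energy) + 1), eigvals

if __name__ == "__main__":
    # Synthetic stand-in for measured CIRs (a few dominant paths plus noise).
    n_real, n_taps, n_paths = 200, 64, 5
    gains = (rng.standard_normal((n_real, n_paths))
             + 1j * rng.standard_normal((n_real, n_paths))) / np.sqrt(2)
    paths = rng.standard_normal((n_paths, n_taps)) + 1j * rng.standard_normal((n_paths, n_taps))
    noise = 0.05 * (rng.standard_normal((n_real, n_taps)) + 1j * rng.standard_normal((n_real, n_taps)))
    cirs = gains @ paths + noise
    dof, _ = significant_dof(cirs)
    print("significant degrees of freedom:", dof)
```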

  1. Subspace Signal Processing in Structured Noise

    Science.gov (United States)

    1990-12-01

    We do not require that the interference subspace H be orthogonal to the signal subspace S. The linear model is quite versatile in terms of the types of signals it can represent. ... Setting the cross terms to zero leads to the coefficient choice (S^H S)^(-1) S^H u (3.69), which in turn yields the orthogonal projection P_S onto the signal subspace (3.70)-(3.71). The last step is to maximize ...
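
    To make the subspace model concrete, the sketch below builds the orthogonal projector P_S = S (S^H S)^{-1} S^H and fits a noisy observation that contains both a signal component in S and structured interference in a subspace H that is not orthogonal to S. The dimensions and noise level are arbitrary assumptions for illustration; this is not the report's estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

def subspace_projector(S):
    """Orthogonal projector P_S = S (S^H S)^{-1} S^H onto the column span of S."""
    return S @ np.linalg.solve(S.conj().T @ S, S.conj().T)

if __name__ == "__main__":
    # Signal subspace S and a structured-interference subspace H that is
    # deliberately NOT orthogonal to S.
    n, p, q = 32, 3, 2
    S = rng.standard_normal((n, p))
    H = rng.standard_normal((n, q)) + 0.3 * S[:, :q]
    theta, phi = rng.standard_normal(p), rng.standard_normal(q)
    u = S @ theta + H @ phi + 0.01 * rng.standard_normal(n)

    # Least squares over the combined basis [S H] separates the signal
    # coefficients from the structured interference; P_S alone would not.
    coef, *_ = np.linalg.lstsq(np.hstack([S, H]), u, rcond=None)
    print("recovered signal coefficients:", np.round(coef[:p], 3))
    print("true signal coefficients:     ", np.round(theta, 3))
    print("projector check P_S @ S == S :", np.allclose(subspace_projector(S) @ S, S))
```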

  2. Krylov subspace acceleration of waveform relaxation

    Energy Technology Data Exchange (ETDEWEB)

    Lumsdaine, A.; Wu, Deyun [Univ. of Notre Dame, IN (United States)

    1996-12-31

    Standard solution methods for numerically solving time-dependent problems typically begin by discretizing the problem on a uniform time grid and then sequentially solving for successive time points. The initial time discretization imposes a serialization to the solution process and limits parallel speedup to the speedup available from parallelizing the problem at any given time point. This bottleneck can be circumvented by the use of waveform methods in which multiple time-points of the different components of the solution are computed independently. With the waveform approach, a problem is first spatially decomposed and distributed among the processors of a parallel machine. Each processor then solves its own time-dependent subsystem over the entire interval of interest using previous iterates from other processors as inputs. Synchronization and communication between processors take place infrequently, and communication consists of large packets of information - discretized functions of time (i.e., waveforms).
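
    A minimal sketch of the waveform idea (Gauss-Jacobi waveform relaxation for a small linear ODE system, with explicit Euler as the time integrator) is given below; it illustrates the iteration structure, not the Krylov-accelerated method of the record. The matrix, interval and sweep count are arbitrary choices.

```python
import numpy as np

def waveform_relaxation(A, y0, t, sweeps=20):
    """Gauss-Jacobi waveform relaxation for y' = A y.

    Each component is integrated over the WHOLE interval with explicit Euler,
    treating the other components as known waveforms from the previous sweep;
    sweeps exchange entire trajectories rather than single time points."""
    n, m = len(y0), len(t)
    Y = np.tile(np.asarray(y0, dtype=float)[:, None], (1, m))   # initial guess: constant waveforms
    dt = np.diff(t)
    for _ in range(sweeps):
        Y_old = Y.copy()
        for i in range(n):                       # conceptually: one processor per subsystem
            for k in range(m - 1):
                coupling = sum(A[i, j] * Y_old[j, k] for j in range(n) if j != i)
                Y[i, k + 1] = Y[i, k] + dt[k] * (A[i, i] * Y[i, k] + coupling)
    return Y

if __name__ == "__main__":
    A = np.array([[-2.0, 1.0], [1.0, -2.0]])
    t = np.linspace(0.0, 1.0, 201)
    Y = waveform_relaxation(A, [1.0, 0.0], t)

    # Reference: the usual sequential explicit Euler sweep on the coupled system.
    y = np.array([1.0, 0.0])
    for k in range(len(t) - 1):
        y = y + (t[k + 1] - t[k]) * (A @ y)
    print("waveform relaxation:", Y[:, -1])
    print("sequential Euler:   ", y)
```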

  3. Preconditioned Krylov subspace methods for eigenvalue problems

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Saad, Y.; Stathopoulos, A. [Univ. of Minnesota, Minneapolis, MN (United States)

    1996-12-31

    The Lanczos algorithm is a commonly used method for finding a few extreme eigenvalues of symmetric matrices. It is effective if the wanted eigenvalues have large relative separations. If the separations are small, several alternatives are often used, including the shift-invert Lanczos method, the preconditioned Lanczos method, and the Davidson method. The shift-invert Lanczos method requires direct factorization of the matrix, which is often impractical if the matrix is large. In these cases preconditioned schemes are preferred. Many applications require the solution of hundreds or thousands of eigenvalues of large sparse matrices, which poses serious challenges for both the iterative eigenvalue solver and the preconditioner. In this paper we explore several preconditioned eigenvalue solvers and identify the ones suited for finding a large number of eigenvalues. The methods discussed in this paper make up the core of a preconditioned eigenvalue toolkit under construction.
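
    The contrast between plain and shift-invert Lanczos can be reproduced with standard library calls; the sketch below uses SciPy's ARPACK wrapper on a 1-D Laplacian, whose smallest eigenvalues cluster near zero. It illustrates the point about separations rather than the preconditioned toolkit described in the record.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Sparse symmetric test matrix: the 1-D Laplacian.  Its smallest eigenvalues
# cluster near zero (tiny relative separations), which is the hard case above.
n = 2000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")

# Largest eigenvalues: plain (implicitly restarted) Lanczos handles these easily.
vals_large = eigsh(A, k=4, which="LM", return_eigenvectors=False)

# Smallest eigenvalues: shift-invert Lanczos factorizes (A - sigma*I) once and
# converges quickly to the eigenvalues nearest sigma = 0.
vals_small = eigsh(A, k=4, sigma=0.0, which="LM", return_eigenvectors=False)

exact_small = 2.0 - 2.0 * np.cos(np.pi * np.arange(1, 5) / (n + 1))   # analytic values
print("largest: ", np.sort(vals_large))
print("smallest:", np.sort(vals_small))
print("exact:   ", exact_small)
```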

  4. Intermittency and random matrices

    Science.gov (United States)

    Sokoloff, Dmitry; Illarionov, E. A.

    2015-08-01

    A spectacular phenomenon of intermittency, i.e. a progressive growth of higher statistical moments of a physical field excited by an instability in a random medium, attracted the attention of Zeldovich in the last years of his life. At that time, the mathematical aspects underlying the physical description of this phenomenon were still under development and relations between various findings in the field remained obscure. Contemporary results from the theory of the product of independent random matrices (the Furstenberg theory) allowed the elaboration of the phenomenon of intermittency in a systematic way. We consider applications of the Furstenberg theory to some problems in cosmology and dynamo theory.
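
    The growth rates discussed here can be estimated numerically; the sketch below uses products of i.i.d. 2x2 Gaussian matrices (an illustrative ensemble, not one taken from the record) and compares the typical (Lyapunov) growth rate of the product norm with the growth rate of its first moment, the gap between the two being the signature of intermittency.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_matrix():
    """An i.i.d. 2x2 random matrix with standard normal entries (illustrative ensemble)."""
    return rng.standard_normal((2, 2))

def log_norm_of_product(n_factors):
    """log || M_n ... M_1 v || for a fixed unit vector v, with renormalization
    at every step to avoid overflow."""
    v = np.array([1.0, 0.0])
    total = 0.0
    for _ in range(n_factors):
        v = sample_matrix() @ v
        norm = np.linalg.norm(v)
        total += np.log(norm)
        v /= norm
    return total

if __name__ == "__main__":
    # Typical (Lyapunov) growth rate: the mean of the log.
    n, trials = 30, 5000
    logs = np.array([log_norm_of_product(n) for _ in range(trials)])
    lyapunov = logs.mean() / n

    # Growth rate of the first moment: the log of the mean (computed with a
    # log-sum-exp shift for stability).  Intermittency shows up as this rate
    # being strictly larger than the typical rate.
    moment_rate = (np.log(np.mean(np.exp(logs - logs.max()))) + logs.max()) / n
    print("typical growth rate (Lyapunov exponent):", lyapunov)
    print("growth rate of the first moment:        ", moment_rate)
```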

  5. Random quantum operations

    International Nuclear Information System (INIS)

    Bruzda, Wojciech; Cappellini, Valerio; Sommers, Hans-Juergen; Zyczkowski, Karol

    2009-01-01

    We define a natural ensemble of trace preserving, completely positive quantum maps and present algorithms to generate them at random. Spectral properties of the superoperator Φ associated with a given quantum map are investigated and a quantum analogue of the Frobenius-Perron theorem is proved. We derive a general formula for the density of eigenvalues of Φ and show the connection with the Ginibre ensemble of real non-symmetric random matrices. Numerical investigations of the spectral gap imply that a generic state of the system iterated several times by a fixed generic map converges exponentially to an invariant state
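
    A hedged sketch of one common way to generate such maps is shown below: Kraus operators are obtained by orthonormalizing a complex Ginibre matrix into an isometry, the superoperator Φ is assembled in the column-stacking convention, and trace preservation and the unit leading eigenvalue are checked numerically. This illustrates the construction, not the specific ensemble or algorithms defined in the record.

```python
import numpy as np

rng = np.random.default_rng(4)

def random_cptp_kraus(d, k):
    """k Kraus operators of a random trace-preserving, completely positive map
    on d-dimensional states, obtained by QR-orthonormalizing a complex Ginibre
    matrix into an isometry and cutting it into d x d blocks."""
    G = rng.standard_normal((d * k, d)) + 1j * rng.standard_normal((d * k, d))
    V, _ = np.linalg.qr(G)                                   # isometry: V^dagger V = I_d
    return [V[i * d:(i + 1) * d, :] for i in range(k)]

def superoperator(kraus):
    """Matrix of rho -> sum_i K_i rho K_i^dagger acting on column-stacked vec(rho)."""
    return sum(np.kron(K, K.conj()) for K in kraus)

if __name__ == "__main__":
    d, k = 4, 3
    kraus = random_cptp_kraus(d, k)

    # Trace preservation: sum_i K_i^dagger K_i = I.
    completeness = sum(K.conj().T @ K for K in kraus)
    print("trace preserving:", np.allclose(completeness, np.eye(d)))

    # Spectrum of the superoperator: the leading eigenvalue has modulus 1
    # (the invariant state); the gap below it controls how fast repeated
    # application of the map converges.
    eigvals = np.linalg.eigvals(superoperator(kraus))
    print("largest |eigenvalue|: ", np.max(np.abs(eigvals)))
    print("second largest |eig.|:", np.sort(np.abs(eigvals))[-2])
```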

  6. Random a-adic groups and random net fractals

    Energy Technology Data Exchange (ETDEWEB)

    Li Yin [Department of Mathematics, Nanjing University, Nanjing 210093 (China)], E-mail: Lyjerry7788@hotmail.com; Su Weiyi [Department of Mathematics, Nanjing University, Nanjing 210093 (China)], E-mail: suqiu@nju.edu.cn

    2008-08-15

    Based on random a-adic groups, this paper investigates the relationship between the existence conditions of a positive flow in a random network and the estimation of the Hausdorff dimension of a proper random net fractal. Subsequently we describe some particular random fractals to which our results can be applied. Finally, the Mauldin and Williams theorem is shown to be a very important example for a random Cantor set, with applications in physics as shown in E-infinity theory.

  7. [Intel random number generator-based true random number generator].

    Science.gov (United States)

    Huang, Feng; Shen, Hong

    2004-09-01

    To establish a true random number generator on the basis of certain Intel chips, random numbers were acquired programmatically using Microsoft Visual C++ 6.0 via register reads from the random number generator (RNG) unit of an Intel 815 chipset-based computer with the Intel Security Driver (ISD). We tested the generator with 500 random numbers using the NIST FIPS 140-1 and χ² R-squared tests, and the results showed that the generated random numbers satisfied the requirements of independence and uniform distribution. We also compared, statistically, the random numbers generated by the Intel RNG-based true random number generator with those from a random number table, using the same amount of 7500 random numbers in the same value domain; the SD, SE and CV of the Intel RNG-based generator were less than those of the random number table. A u test of the two CVs revealed no significant difference between the two methods. The Intel RNG-based random number generator can produce high-quality random numbers with good independence and uniform distribution, and it solves some problems with random number tables in the acquisition of random numbers.
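
    The testing step can be illustrated with modern, generic tools; the sketch below draws hardware-seeded bytes from the operating system (a stand-in assumption, since the chipset-register interface described above is not reproduced here) and applies a chi-square uniformity test over the 256 byte values.

```python
import os

import numpy as np
from scipy.stats import chisquare

# Random bytes from the operating system's entropy pool; this stands in for
# the chipset register reads described above and is NOT the Intel 815 /
# Intel Security Driver interface itself.
n_bytes = 500_000
data = np.frombuffer(os.urandom(n_bytes), dtype=np.uint8)

# Chi-square test of uniformity over the 256 possible byte values.
counts = np.bincount(data, minlength=256)
stat, p_value = chisquare(counts)          # null hypothesis: uniform distribution
print(f"chi-square statistic = {stat:.1f}, p-value = {p_value:.3f}")
print("uniformity not rejected at the 5% level" if p_value > 0.05 else "uniformity rejected")
```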

  8. On Random Numbers and Design

    Science.gov (United States)

    Ben-Ari, Morechai

    2004-01-01

    The term "random" is frequently used in discussion of the theory of evolution, even though the mathematical concept of randomness is problematic and of little relevance in the theory. Therefore, since the core concept of the theory of evolution is the non-random process of natural selection, the term random should not be used in teaching the…

  9. Uniform random number generators

    Science.gov (United States)

    Farr, W. R.

    1971-01-01

    Methods are presented for the generation of random numbers with uniform and normal distributions. Subprogram listings of Fortran generators for the Univac 1108, SDS 930, and CDC 3200 digital computers are also included. The generators are of the mixed multiplicative type, and the mathematical method employed is that of Marsaglia and Bray.
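
    A minimal sketch of both ingredients mentioned above is given below: a mixed congruential generator and the Marsaglia-Bray polar method for normal variates. The multiplier, increment and modulus are common illustrative 32-bit constants, not the parameters used in the cited Fortran generators.

```python
import math

class MixedLCG:
    """Mixed (multiplicative + additive) congruential generator:
    x_{k+1} = (a * x_k + c) mod m.  The constants are common 32-bit choices
    used here for illustration, not those of the machines cited above."""

    def __init__(self, seed=12345, a=1664525, c=1013904223, m=2 ** 32):
        self.x, self.a, self.c, self.m = seed, a, c, m

    def uniform(self):
        """Next pseudo-random number, uniform on [0, 1)."""
        self.x = (self.a * self.x + self.c) % self.m
        return self.x / self.m

    def normal(self):
        """Standard normal variate via the Marsaglia-Bray polar method:
        rejection-sample a point in the unit disc, then transform it."""
        while True:
            u = 2.0 * self.uniform() - 1.0
            v = 2.0 * self.uniform() - 1.0
            s = u * u + v * v
            if 0.0 < s < 1.0:
                return u * math.sqrt(-2.0 * math.log(s) / s)

if __name__ == "__main__":
    g = MixedLCG()
    us = [g.uniform() for _ in range(100_000)]
    ns = [g.normal() for _ in range(100_000)]
    print("uniform mean (expect ~0.5):", sum(us) / len(us))
    print("normal mean  (expect ~0.0):", sum(ns) / len(ns))
```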

  10. On randomly interrupted diffusion

    International Nuclear Information System (INIS)

    Luczka, J.

    1993-01-01

    Processes driven by randomly interrupted Gaussian white noise are considered. An evolution equation for single-event probability distributions is presented. Stationary states are considered as solutions of a second-order ordinary differential equation with two imposed conditions. A linear model is analyzed and its stationary distributions are given explicitly. (author). 10 refs

  11. Coded Random Access

    DEFF Research Database (Denmark)

    Paolini, Enrico; Stefanovic, Cedomir; Liva, Gianluigi

    2015-01-01

    The rise of machine-to-machine communications has rekindled the interest in random access protocols as a support for a massive number of uncoordinatedly transmitting devices. The legacy ALOHA approach is developed under a collision model, where slots containing collided packets are considered as ...
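
    The collision-model baseline can be simulated in a few lines; the sketch below reproduces the classical slotted-ALOHA behaviour in which only singleton slots are useful, as a point of reference for the coded schemes surveyed in the record (which it does not implement). The user count and offered loads are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def slotted_aloha_throughput(num_users, tx_prob, num_slots=100_000):
    """Legacy slotted ALOHA under the collision model: a slot delivers a packet
    only if exactly one user transmits in it; collided slots are simply lost
    (no processing of collisions, unlike coded random access with SIC)."""
    transmissions = rng.random((num_slots, num_users)) < tx_prob
    per_slot = transmissions.sum(axis=1)
    return np.mean(per_slot == 1)              # fraction of useful slots

if __name__ == "__main__":
    n = 100
    for g in (0.5, 1.0, 2.0):                  # offered load G = n * tx_prob
        sim = slotted_aloha_throughput(n, g / n)
        print(f"G = {g:.1f}: simulated throughput = {sim:.3f}, "
              f"G*exp(-G) = {g * np.exp(-g):.3f}")
```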

  12. Random eigenvalue problems revisited

    Indian Academy of Sciences (India)

    statistical distributions; linear stochastic systems. ... an m-dimensional multivariate Gaussian random vector with mean µ ∈ R^m and covariance ... the proposed analytical methods are applied to a three-degree-of-freedom system and the ... The joint pdf of ω1 and ω3 is, however, close to a bivariate Gaussian density function.

  13. Alzheimer random walk

    Science.gov (United States)

    Odagaki, Takashi; Kasuya, Keisuke

    2017-09-01

    Using Monte Carlo simulation, we investigate a memory-impaired self-avoiding walk on a square lattice in which a random walker marks each visited site with a given probability p and makes a random walk avoiding the marked sites. Namely, p = 0 and p = 1 correspond to the simple random walk and the self-avoiding walk, respectively. When p > 0, there is a finite probability that the walker is trapped. We show that the trap time distribution can be fitted well by Stacy's Weibull distribution b (a/b)^((a+1)/b) [Γ((a+1)/b)]^(-1) x^a exp(-(a/b) x^b), where a and b are fitting parameters depending on p. We also find that the mean trap time diverges at p = 0 as p^(-α) with α = 1.89. In order to produce a sufficient number of long walks, we exploit the pivot algorithm and obtain the mean square displacement and its Flory exponent ν(p) as functions of p. We find that the exponent determined for 1000-step walks interpolates between both limits, ν(0) for the simple random walk and ν(1) for the self-avoiding walk, as [ν(p) - ν(0)] / [ν(1) - ν(0)] = p^β with β = 0.388 when p ≪ 0.1 and β = 0.0822 when p ≫ 0.1. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
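
    The walk itself is easy to reproduce; the sketch below is a plain Monte Carlo version of the marking rule (each visited site is marked with probability p and marked sites are forbidden) and reports capped mean trap times. It does not include the pivot algorithm or the fits described in the record.

```python
import random

rng = random.Random(6)

def memory_impaired_walk(p, max_steps=10_000):
    """Memory-impaired self-avoiding walk on the square lattice: each visited
    site is marked with probability p, and the walker never steps onto a
    marked site.  Returns the number of steps before the walker is trapped
    (or max_steps if it never gets trapped within the budget)."""
    x, y = 0, 0
    marked = set()
    for step in range(max_steps):
        if rng.random() < p:
            marked.add((x, y))
        moves = [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if (x + dx, y + dy) not in marked]
        if not moves:                          # all four neighbours marked: trapped
            return step
        x, y = rng.choice(moves)
    return max_steps

if __name__ == "__main__":
    # p = 0 recovers the simple random walk (never traps); p = 1 the self-avoiding walk.
    for p in (0.2, 0.5, 1.0):
        traps = [memory_impaired_walk(p) for _ in range(200)]
        print(f"p = {p}: mean trap time (capped) = {sum(traps) / len(traps):.1f}")
```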

  14. Random vibrations theory and practice

    CERN Document Server

    Wirsching, Paul H; Ortiz, Keith

    1995-01-01

    Random Vibrations: Theory and Practice covers the theory and analysis of mechanical and structural systems undergoing random oscillations due to any number of phenomena— from engine noise, turbulent flow, and acoustic noise to wind, ocean waves, earthquakes, and rough pavement. For systems operating in such environments, a random vibration analysis is essential to the safety and reliability of the system. By far the most comprehensive text available on random vibrations, Random Vibrations: Theory and Practice is designed for readers who are new to the subject as well as those who are familiar with the fundamentals and wish to study a particular topic or use the text as an authoritative reference. It is divided into three major sections: fundamental background, random vibration development and applications to design, and random signal analysis. Introductory chapters cover topics in probability, statistics, and random processes that prepare the reader for the development of the theory of random vibrations a...

  15. Free random variables

    CERN Document Server

    Voiculescu, Dan; Nica, Alexandru

    1992-01-01

    This book presents the first comprehensive introduction to free probability theory, a highly noncommutative probability theory with independence based on free products instead of tensor products. Basic examples of this kind of theory are provided by convolution operators on free groups and by the asymptotic behavior of large Gaussian random matrices. The probabilistic approach to free products has led to a recent surge of new results on the von Neumann algebras of free groups. The book is ideally suited as a textbook for an advanced graduate course and could also provide material for a seminar. In addition to researchers and graduate students in mathematics, this book will be of interest to physicists and others who use random matrices.

  16. Independent random sampling methods

    CERN Document Server

    Martino, Luca; Míguez, Joaquín

    2018-01-01

    This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the li...
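
    As one textbook instance of the general-purpose approaches the book covers, the sketch below implements a generic rejection sampler with a uniform proposal; the target density, proposal and bound are illustrative assumptions, not examples taken from the book.

```python
import math
import random

rng = random.Random(7)

def rejection_sample(target, proposal_sampler, proposal_pdf, bound, n):
    """Generic rejection sampler: draw x from the proposal and accept it with
    probability target(x) / (bound * proposal_pdf(x)).  Accepted draws are
    independent samples from the (possibly unnormalized) target density."""
    samples = []
    while len(samples) < n:
        x = proposal_sampler()
        if rng.random() * bound * proposal_pdf(x) <= target(x):
            samples.append(x)
    return samples

if __name__ == "__main__":
    # Target: unnormalized half-Gaussian on [0, 4]; proposal: uniform on [0, 4].
    target = lambda x: math.exp(-0.5 * x * x)
    proposal_sampler = lambda: 4.0 * rng.random()
    proposal_pdf = lambda x: 0.25
    bound = 4.0                      # must satisfy target(x) <= bound * proposal_pdf(x)
    xs = rejection_sample(target, proposal_sampler, proposal_pdf, bound, 20_000)
    print("sample mean (expect ~0.80 for the half-normal):", sum(xs) / len(xs))
```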

  17. On Complex Random Variables

    Directory of Open Access Journals (Sweden)

    Anwer Khurshid

    2012-07-01

    In this paper, it is shown that a complex multivariate random variable is a complex multivariate normal random variable of a given dimensionality if and only if all nondegenerate complex linear combinations of it have a complex univariate normal distribution. Its characteristic function has been derived, and simpler forms of some theorems have been given using this characterization theorem without assuming that the variance-covariance matrix of the vector is Hermitian positive definite. Its marginal distributions have been given. In addition, a complex multivariate t-distribution has been defined and the density derived. A characterization of the complex multivariate t-distribution is given. A few possible uses of this distribution have been suggested.

  18. A Campbell random process

    International Nuclear Information System (INIS)

    Reuss, J.D.; Misguich, J.H.

    1993-02-01

    The Campbell process is a stationary random process which can have various correlation functions, according to the choice of an elementary response function. The statistical properties of this process are presented. A numerical algorithm and a subroutine for generating such a process are built up and tested for the physically interesting case of a Campbell process with Gaussian correlations. The (non-Gaussian) probability distribution appears to be similar to the Gamma distribution.
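
    A minimal sketch of such a generator is given below: event times are drawn from a Poisson process and an elementary response is superposed at each of them, with a Gaussian response giving the Gaussian-correlated case mentioned above. A check against Campbell's theorem (mean = rate times the integral of the response) is included; the parameters are arbitrary and this is not the paper's subroutine.

```python
import numpy as np

rng = np.random.default_rng(8)

def campbell_process(t, rate, response, t_pad=10.0):
    """Shot-noise (Campbell) process: a superposition of copies of an elementary
    response function centred at Poisson-distributed event times of intensity `rate`."""
    t0, t1 = t[0] - t_pad, t[-1] + t_pad            # pad so edge effects are negligible
    n_events = rng.poisson(rate * (t1 - t0))
    event_times = rng.uniform(t0, t1, size=n_events)
    x = np.zeros_like(t)
    for tau in event_times:
        x += response(t - tau)                      # add one elementary response per event
    return x

if __name__ == "__main__":
    # A Gaussian elementary response yields a Campbell process with Gaussian correlations.
    sigma, rate = 1.0, 2.0
    response = lambda u: np.exp(-0.5 * (u / sigma) ** 2)
    t = np.linspace(0.0, 200.0, 4001)
    x = campbell_process(t, rate, response)
    # Campbell's theorem: the mean equals rate * integral of the response function.
    print("sample mean:", x.mean())
    print("theory:     ", rate * sigma * np.sqrt(2.0 * np.pi))
```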

  19. Certified randomness in quantum physics.

    Science.gov (United States)

    Acín, Antonio; Masanes, Lluis

    2016-12-07

    The concept of randomness plays an important part in many disciplines. On the one hand, the question of whether random processes exist is fundamental for our understanding of nature. On the other, randomness is a resource for cryptography, algorithms and simulations. Standard methods for generating randomness rely on assumptions about the devices that are often not valid in practice. However, quantum technologies enable new methods for generating certified randomness, based on the violation of Bell inequalities. These methods are referred to as device-independent because they do not rely on any modelling of the devices. Here we review efforts to design device-independent randomness generators and the associated challenges.

  20. Cross over of recurrence networks to random graphs and random ...

    Indian Academy of Sciences (India)

    2017-01-27

    Jan 27, 2017 ... that all recurrence networks can cross over to random geometric graphs by adding a sufficient amount of noise ... communicative [19] or social [20], deviate from the random ... He has shown that the spatial effects become ...