Method of dimensionality reduction in contact mechanics and friction
Popov, Valentin L
2015-01-01
This book describes, for the first time in a complete form, a simulation method for the fast calculation of contact properties and friction between rough surfaces. In contrast to existing simulation methods, the method of dimensionality reduction (MDR) is based on the exact mapping of various types of three-dimensional contact problems onto contacts with one-dimensional foundations. Within the confines of MDR, not only are three-dimensional systems reduced to one-dimensional ones, but the resulting degrees of freedom are also independent of one another. Therefore, MDR results in an enormous reduction of the development time for the numerical implementation of contact problems as well as of the direct computation time, and can ultimately assume a similar role in tribology as FEM has in structural mechanics or CFD methods have in hydrodynamics. Furthermore, it substantially simplifies analytical calculations and presents a sort of “pocket book edition” of the entirety of contact mechanics. Measurements of the rheology of bodies in...
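The MDR mapping described above can be illustrated for the classical Hertz contact: an axisymmetric parabolic indenter f(r) = r²/(2R) maps to the one-dimensional profile g(x) = x²/R, pressed into a foundation of independent springs of stiffness E* dx. The sketch below (all parameter values are illustrative, not taken from the book) checks that the summed spring forces reproduce the analytic Hertz result F = (4/3) E* √R d^(3/2):

```python
import numpy as np

# Illustrative parameters (not from the book)
E_star = 1.0e9   # effective elastic modulus, Pa
R = 0.01         # radius of curvature of the parabolic indenter, m
d = 1.0e-5       # indentation depth, m

# MDR rule: the 3D profile f(r) = r^2/(2R) maps to the 1D profile g(x) = x^2/R.
a = np.sqrt(R * d)                      # contact radius, where g(a) = d

# Independent 1D springs of stiffness E* dx; midpoint-rule sum of their forces.
N = 100_000
dx = 2 * a / N
x = -a + (np.arange(N) + 0.5) * dx
u = d - x**2 / R                        # spring displacements inside the contact
F_mdr = E_star * np.sum(u) * dx

# Analytic Hertz result for comparison
F_hertz = (4.0 / 3.0) * E_star * np.sqrt(R) * d**1.5
```

Because each spring is independent, the force is a plain sum, which is what makes the one-dimensional computation so cheap.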
Nonlinear Dimensionality Reduction Methods in Climate Data Analysis
Ross, Ian
2008-01-01
Linear dimensionality reduction techniques, notably principal component analysis, are widely used in climate data analysis as a means to aid in the interpretation of datasets of high dimensionality. These linear methods may not be appropriate for the analysis of data arising from nonlinear processes occurring in the climate system. Numerous techniques for nonlinear dimensionality reduction have been developed recently that may provide a potentially useful tool for the identification of low-dimensional manifolds in climate data sets arising from nonlinear dynamics. In this thesis I apply three such techniques to the study of El Nino/Southern Oscillation variability in tropical Pacific sea surface temperatures and thermocline depth, comparing observational data with simulations from coupled atmosphere-ocean general circulation models from the CMIP3 multi-model ensemble. The three methods used here are a nonlinear principal component analysis (NLPCA) approach based on neural networks, the Isomap isometric mappin...
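The linear baseline referred to above, principal component analysis of a space-time field, can be sketched in a few lines of NumPy; the "climate field" here is synthetic and all names are illustrative:

```python
import numpy as np

# Toy "climate field": 200 time steps of a 50-point spatial grid dominated by one mode.
rng = np.random.default_rng(5)
grid = np.linspace(0, np.pi, 50)
pattern = np.sin(grid)                        # a single large-scale spatial pattern
amplitude = rng.normal(size=200)              # its time-varying amplitude
X = np.outer(amplitude, pattern) + 0.05 * rng.normal(size=(200, 50))

Xc = X - X.mean(axis=0)                       # remove the time mean at each grid point
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)               # fraction of variance per component
```

For a field driven by a single linear mode, the leading component captures almost all the variance; nonlinear methods such as Isomap or NLPCA become interesting precisely when no single linear pattern does.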
Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.
Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil
2017-01-19
Visualizing data by dimensionality reduction is an important strategy in bioinformatics, which can help to discover hidden data properties and detect data quality issues, e.g., data noise and inappropriately labeled data. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods cannot be directly applied to biobrick datasets. We therefore use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. Moreover, Laplacian Eigenmaps is 5 times faster than Isomap, and its visualization graph is more concentrated, discriminating biobricks better. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks can be discriminated and inappropriately labeled biobricks can be identified, which helps to assess the quality of crowdsourcing-based synthetic biology databases and to guide biobrick selection.
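The key ingredient above is a normalized edit distance between biobrick sequences, which can then be fed to distance-based methods such as Isomap. A minimal sketch (the paper does not specify its exact normalization; dividing by the longer string length is one common choice, assumed here):

```python
def edit_distance(a: str, b: str) -> int:
    # Classic Levenshtein dynamic programme, one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def normalized_edit_distance(a: str, b: str) -> float:
    # One common normalization: distance divided by the longer length.
    return edit_distance(a, b) / max(len(a), len(b), 1)
```

A pairwise matrix of these values can serve as a precomputed dissimilarity input to Isomap-style embeddings.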
A sparse grid based method for generative dimensionality reduction of high-dimensional data
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
Ni, Shengqiao; Lv, Jiancheng; Cheng, Zhehao; Li, Mao
2015-01-01
This paper presents improvements to the conventional Topology Representing Network to build more appropriate topology relationships. Based on this improved Topology Representing Network, we propose a novel method for online dimensionality reduction that integrates the improved Topology Representing Network and Radial Basis Function Network. This method can find meaningful low-dimensional feature structures embedded in high-dimensional original data space, process nonlinear embedded manifolds, and map the new data online. Furthermore, this method can deal with large datasets for the benefit of improved Topology Representing Network. Experiments illustrate the effectiveness of the proposed method.
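The ability to "map the new data online" comes from the Radial Basis Function Network component: once training points have low-dimensional coordinates, an RBF interpolant extends the mapping to unseen points. A minimal Gaussian RBF sketch (data, kernel width, and the pretend embedding are all illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(30, 3))      # high-dimensional training inputs
Y = X[:, :2] ** 2                         # stand-in 2-D coordinates from some DR method

sigma = 0.4                               # Gaussian kernel width (assumed)

def kernel(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

K = kernel(X, X) + 1e-10 * np.eye(len(X))  # tiny ridge for numerical stability
Wt = np.linalg.solve(K, Y)                 # weights so that K @ Wt ≈ Y

def map_online(Xnew):
    # Map new high-dimensional points into the learned 2-D space.
    return kernel(Xnew, X) @ Wt
```

New samples are embedded with a single kernel evaluation against the stored centers, which is what makes the mapping cheap to apply online.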
Dimensionality Reduction Mappings
Bunte, Kerstin; Biehl, Michael; Hammer, Barbara
2011-01-01
A wealth of powerful dimensionality reduction methods has been established which can be used for data visualization and preprocessing. These are accompanied by formal evaluation schemes, which allow a quantitative evaluation along general principles and which even lead to further visualization schem...
A finite-dimensional reduction method for slightly supercritical elliptic problems
Riccardo Molle
2004-01-01
We describe a finite-dimensional reduction method to find solutions for a class of slightly supercritical elliptic problems. A suitable truncation argument allows us to work in the usual Sobolev space even in the presence of supercritical nonlinearities: we modify the supercritical term in such a way to have subcritical approximating problems; for these problems, the finite-dimensional reduction can be obtained applying the methods already developed in the subcritical case; finally, we show that, if the truncation is realized at a sufficiently large level, then the solutions of the approximating problems, given by these methods, also solve the supercritical problems when the parameter is small enough.
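One standard way to realize the truncation described above is to cut the supercritical power off at a level K so that the modified nonlinearity has subcritical growth; the notation below is illustrative, not the authors' exact construction:

```latex
% Replace the slightly supercritical power u^{p}, p = \frac{N+2}{N-2} + \varepsilon,
% by a continuous nonlinearity with subcritical growth beyond the level K:
f_K(u) =
\begin{cases}
  u^{p}, & 0 \le u \le K,\\[2pt]
  K^{\,p-q}\, u^{q}, & u > K,
\end{cases}
\qquad 1 < q < \frac{N+2}{N-2}.
```

Since f_K grows subcritically, the truncated problem fits the usual Sobolev framework; the point of the paper is that for K large enough the solutions stay below K and hence solve the original supercritical problem.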
Williamson, Ross S; Sahani, Maneesh; Pillow, Jonathan W
2015-04-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
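The central equivalence above, MID as maximum-likelihood estimation in an LNP model, can be made concrete with the Poisson log-likelihood of spike counts given a filtered stimulus. A toy sketch with an exponential nonlinearity (the filter, nonlinearity, and sizes are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)
T, D = 5000, 8
X = rng.normal(size=(T, D))              # white-noise stimulus, one row per time bin
k_true = np.zeros(D)
k_true[0] = 1.0                          # assumed "true" linear filter

# LNP model: rate = exp(k . x - 1), spikes ~ Poisson(rate)
rate = np.exp(X @ k_true - 1.0)
spikes = rng.poisson(rate)

def poisson_loglik(k):
    # Log-likelihood of the observed spike counts under filter k
    # (the spike-count factorial term is constant in k and omitted).
    lam = np.exp(X @ k - 1.0)
    return np.sum(spikes * np.log(lam) - lam)

k_rand = rng.normal(size=D)              # an arbitrary competitor filter
```

Maximizing this log-likelihood over k recovers the informative stimulus dimension, which is the model-based reading of MID; when spiking is not Poisson, the paper shows this objective no longer coincides with single-spike information.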
Kunju Shi
2014-01-01
Dimensionality reduction is a crucial task in machinery fault diagnosis. Recently, as a popular dimensionality reduction technology, manifold learning has been successfully used in many fields. However, most of these technologies are not suitable for the task, because they are unsupervised in nature and fail to discover the discriminant structure in the data. To overcome these weaknesses, the kernel local linear discriminant (KLLD) algorithm is proposed. KLLD is a novel algorithm which combines the advantages of neighborhood preserving projections (NPP), the Floyd algorithm, the maximum margin criterion (MMC), and the kernel trick. KLLD has four advantages. First of all, KLLD is a supervised dimension reduction method that can overcome the out-of-sample problem. Secondly, the short-circuit problem can be avoided. Thirdly, the KLLD algorithm can use the between-class scatter matrix and within-class scatter matrix more efficiently. Lastly, the kernel trick is included in the KLLD algorithm to find a more precise solution. The main feature of the proposed method is that it attempts both to preserve the intrinsic neighborhood geometry of the data and to extract the discriminant information. Experiments have been performed to evaluate the new method. The results show that KLLD has more benefits than traditional methods.
Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.
2014-03-01
The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion weighted imaging (PWI) methods. Each of these parameters has distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow, both are critical measures during the evolution of stroke. In order to integrate these data and give an estimate of the tissue at risk or damaged; we have developed advanced machine learning methods based on unsupervised non-linear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that uses mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy and they can generate a two- or three-dimensional map, which represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in their high-dimensional observations. In this manuscript, we develop NLDR methods on high dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed that there was a high of similarity between multiparametric embedded images from NLDR methods and the ADC map and perfusion map. It was also observed that embedded scattergram of abnormal (infarcted or at risk) tissue can be visualized and provides a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
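A representative NLDR method of the kind used above is Laplacian Eigenmaps: build a nearest-neighbour graph over the high-dimensional samples and embed them with the low-order eigenvectors of the graph Laplacian. A self-contained toy sketch (the data are synthetic points on a circle, standing in for multiparametric voxels; all sizes are illustrative):

```python
import numpy as np

# Toy data: points on a noisy circle embedded in 5-D.
rng = np.random.default_rng(3)
n = 60
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
X = np.zeros((n, 5))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)
X += 0.01 * rng.normal(size=X.shape)

# k-nearest-neighbour graph with binary weights, symmetrized.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
k = 4
W = np.zeros((n, n))
for i in range(n):
    for j in np.argsort(D2[i])[1:k + 1]:   # skip self (distance 0)
        W[i, j] = W[j, i] = 1.0

L = np.diag(W.sum(1)) - W                  # unnormalized graph Laplacian
vals, vecs = np.linalg.eigh(L)             # eigenvalues in ascending order
embedding = vecs[:, 1:3]                   # skip the constant eigenvector
```

The two-dimensional `embedding` is the kind of map onto which tissue classes can be projected and visually separated.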
Douglas, Amber M.
Graphene is a two-dimensional (2D) sp2-hybridized carbon-based material possessing properties which include high electrical conductivity, ballistic thermal conductivity, tensile strength exceeding that of steel, high flexural strength, optical transparency, and the ability to adsorb and desorb atoms and molecules. Due to these characteristics, graphene is a candidate for applications in integrated circuits, electrochromic devices, transparent conducting electrodes, desalination, solar cells, thermal management materials, polymer nanocomposites, and biosensors. Despite the above-mentioned properties and possible applications, very few technologies utilizing graphene have been commercialized, owing to the high cost associated with the production of graphene. Therefore, a great deal of effort and research has gone into producing a material that provides similar properties, reduced graphene oxide (RGO), because its production processes are easily scaled commercially. This material is typically prepared through the oxidation of graphite in an aqueous medium to graphene oxide (GO), followed by reduction to yield RGO. Although this material has been extensively studied, there is a lack of consistency in the scientific community regarding the analysis of the resulting RGO material. In this dissertation, a study of the reduction methods for GO and an alternate 2D carbon-based material, humic acid (HA), is presented, followed by analysis of the materials using Raman spectroscopy and energy dispersive X-ray spectroscopy (EDS). Means of reduction include chemical and thermal methods. Characterization of the material has been carried out both before and after reduction.
GU Yanfeng; ZHANG Ye; QUAN Taifan
2003-01-01
A challenging problem in using hyperspectral data is to eliminate redundancy and preserve useful spectral information for applications. In this paper, a kernel-based nonlinear subspace projection (KNSP) method is proposed for feature extraction and dimensionality reduction in hyperspectral images. The proposed method includes three key steps: subspace partition of hyperspectral data, feature extraction using kernel-based principal component analysis (KPCA), and feature selection based on class separability in the subspaces. According to the strong correlation between neighboring bands, the whole data space is partitioned into the requested subspaces. In each subspace, the KPCA method is used to effectively extract spectral features and eliminate redundancies. A criterion function based on class discrimination and separability is used for the transformed feature selection. To verify its effectiveness, the proposed method is compared with classical principal component analysis (PCA) and segmented principal component transformation (SPCT). A hyperspectral image classification is performed on AVIRIS data, which have 224 spectral bands. Experimental results show that KNSP is very effective for feature extraction and dimensionality reduction of hyperspectral data and provides significant improvement over classical PCA and the current SPCT technique.
Spectral Methods for Linear and Non-Linear Semi-Supervised Dimensionality Reduction
Chatpatanasiri, Ratthachat
2008-01-01
We present a general framework of spectral methods for semi-supervised dimensionality reduction. Applying an approach called manifold regularization, our framework naturally generalizes existent supervised frameworks. Furthermore, by our two semi-supervised versions of the representer theorem, our framework can be kernelized as well. Using our framework, we give three examples of semi-supervised algorithms which are extended from three recent supervised algorithms, namely, ``discriminant neighborhood embedding'', ``marginal Fisher analysis'' and ``local Fisher discriminant analysis''. We also give three more semi-supervised examples of the kernel versions of these algorithms. Numerical results of the six semi-supervised algorithms compared to their supervised versions are presented.
An Ant Colony Optimization Based Dimension Reduction Method for High-Dimensional Datasets
Ying Li; Gang Wang; Huiling Chen; Lian Shi; Lei Qin
2013-01-01
In this paper, a bionic optimization algorithm based dimension reduction method named Ant Colony Optimization-Selection (ACO-S) is proposed for high-dimensional datasets. Because microarray datasets comprise tens of thousands of features (genes), they are usually used to test dimension reduction techniques. ACO-S consists of two stages in which two well-known ACO algorithms, namely the ant system and the ant colony system, are utilized to search for genes. In the first stage, a modified ant system is used to filter nonsignificant genes from the high-dimensional space, and a number of promising genes are reserved for the next step. In the second stage, an improved ant colony system is applied to gene selection. In order to enhance the search ability of the ACOs, we propose a method for calculating a priori heuristic information and design a fuzzy logic controller to dynamically adjust the number of ants in the ant colony system. Furthermore, we devise another fuzzy logic controller to tune the parameter q0 in the ant colony system. We evaluate the performance of ACO-S on five microarray datasets, which have dimensions varying from 7129 to 12000. We also compare the performance of ACO-S with the results obtained from four existing well-known bionic optimization algorithms. The comparison results show that ACO-S has a notable ability to generate a gene subset with the smallest size and salient features while yielding high classification accuracy. The comparative results generated by ACO-S adopting different classifiers are also given. The proposed method is shown to be a promising and effective tool for mining high-dimensional data and mobile robot navigation.
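The ant-system stage above amounts to pheromone-weighted sampling of feature subsets with quality-proportional reinforcement and evaporation. A deliberately tiny sketch of that loop (the data, subset size, deposit rule, and scoring function are all illustrative simplifications, not the paper's ACO-S):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 features, only feature 3 carries the class signal.
n, p = 200, 20
X = rng.normal(size=(n, p))
y = (X[:, 3] > 0).astype(int)

def subset_score(subset):
    # Crude relevance score: best absolute correlation with the label.
    return max(abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset)

tau = np.ones(p)                          # pheromone level per feature
for it in range(30):                      # ant-system iterations
    for ant in range(10):
        probs = tau / tau.sum()
        subset = rng.choice(p, size=3, replace=False, p=probs)
        s = subset_score(subset)
        tau[subset] += s                  # deposit proportional to subset quality
    tau *= 0.9                            # evaporation

best = int(np.argmax(tau))                # most reinforced feature
```

Features that keep appearing in good subsets accumulate pheromone and are sampled ever more often, which is the positive-feedback mechanism the two-stage ACO-S exploits at scale.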
A comparison of dimensionality reduction methods for retrieval of similar objects in simulation data
Cantu-Paz, E; Cheung, S S; Kamath, C
2003-09-23
High-resolution computer simulations produce large volumes of data. As a first step in the analysis of these data, supervised machine learning techniques can be used to retrieve objects similar to a query that the user finds interesting. These objects may be characterized by a large number of features, some of which may be redundant or irrelevant to the similarity retrieval problem. This paper presents a comparison of six dimensionality reduction algorithms on data from a fluid mixing simulation. The objective is to identify methods that efficiently find feature subsets that result in high accuracy rates. Our experimental results with single- and multi-resolution data suggest that standard forward feature selection produces the smallest feature subsets in the shortest time.
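Standard forward feature selection, the winner in the comparison above, greedily adds the feature that most improves accuracy until no candidate helps. A minimal sketch with a nearest-centroid classifier on synthetic data (classifier choice, stopping rule, and data are illustrative assumptions; accuracy is evaluated on the training set for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
# Synthetic retrieval task: features 0 and 1 carry the class signal, the rest are noise.
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 6))
X[:, 0] += 2.0 * y
X[:, 1] -= 1.5 * y

def train_accuracy(cols):
    # Nearest-centroid classifier on the chosen feature subset.
    Xs = X[:, cols]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean()

selected, remaining = [], list(range(6))
while remaining:
    best_acc, best_j = max((train_accuracy(selected + [j]), j) for j in remaining)
    if selected and best_acc <= train_accuracy(selected):
        break                              # stop when accuracy no longer improves
    selected.append(best_j)
    remaining.remove(best_j)
```

The greedy loop tends to pick the informative features first and stop early, which is why forward selection often yields the small subsets reported in the paper.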
Visualizing the quality of dimensionality reduction
Mokbel, Bassam; Lueks, Wouter; Gisbrecht, Andrej; Hammer, Barbara
2013-01-01
The growing number of dimensionality reduction methods available for data visualization has recently inspired the development of formal measures to evaluate the resulting low-dimensional representation independently from the methods' inherent criteria. Many evaluation measures can be summarized base...
Khoudeir, A.; Montemayor, R.; Urrutia, Luis F.
2008-09-01
Using the parent Lagrangian method together with a dimensional reduction from D to (D-1) dimensions, we construct dual theories for massive spin two fields in arbitrary dimensions in terms of a mixed symmetry tensor TA[A1A2…AD-2]. Our starting point is the well-studied massless parent action in dimension D. The resulting massive Stueckelberg-like parent actions in (D-1) dimensions inherit all the gauge symmetries of the original massless action and can be gauge fixed in two alternative ways, yielding the possibility of having a parent action with either a symmetric or a nonsymmetric Fierz-Pauli field eAB. Even though the dual sector in terms of the standard spin two field includes only the symmetrical part e{AB} in both cases, these two possibilities yield different results in terms of the alternative dual field TA[A1A2…AD-2]. In particular, the nonsymmetric case reproduces the Freund-Curtright action as the dual to the massive spin two field action in four dimensions.
Jung, Hye-Young; Leem, Sangseob; Lee, Sungyoung; Park, Taesung
2016-12-01
Gene-gene interaction (GGI) is one of the most popular approaches for finding the missing heritability of common complex traits in genetic association studies. The multifactor dimensionality reduction (MDR) method has been widely studied for detecting GGIs. In order to identify the best interaction model associated with disease susceptibility, MDR compares all possible genotype combinations in terms of their predictability of disease status from a simple binary high (H) and low (L) risk classification. However, this simple binary classification does not reflect the uncertainty of H/L classification. We regard classifying H/L as equivalent to defining the degree of membership of two risk groups H/L. By adopting fuzzy set theory, we propose Fuzzy MDR, which takes into account the uncertainty of H/L classification. Fuzzy MDR allows the possibility of partial membership of H/L through a membership function which transforms the degree of uncertainty into a [0,1] scale. The best genotype combinations can be selected by maximizing a new fuzzy-set-based accuracy measure. Two simulation studies are conducted to compare the power of the proposed Fuzzy MDR with that of MDR. Our results show that Fuzzy MDR has higher power than MDR. We illustrate the proposed Fuzzy MDR by analysing the bipolar disorder (BD) trait of the WTCCC dataset to detect GGI associated with BD. We propose a novel Fuzzy MDR method to detect gene-gene interaction by taking into account the uncertainty of H/L classification and show that it has higher power than MDR. Fuzzy MDR can be easily extended to handle continuous phenotypes as well. The program written in R for the proposed Fuzzy MDR is available at https://statgen.snu.ac.kr/software/FuzzyMDR.
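The crisp H/L classification that Fuzzy MDR generalizes can be shown in a few lines: each genotype cell is labelled high-risk if its case/control ratio exceeds the overall ratio, and the labelling is scored by balanced accuracy. The counts below are made up for illustration, not from the WTCCC data:

```python
# Minimal MDR-style H/L classification for one two-SNP genotype table.
# cases[i][j], controls[i][j]: counts for genotype combination (i, j).
cases    = [[20,  5, 10], [ 4, 30,  6], [ 8,  7, 25]]
controls = [[ 5, 18,  9], [22,  6, 20], [ 7, 21,  5]]

n_cases = sum(map(sum, cases))
n_controls = sum(map(sum, controls))
threshold = n_cases / n_controls          # overall case/control ratio

tp = fp = 0
for i in range(3):
    for j in range(3):
        # Cell is high-risk (H) if its case/control ratio meets the threshold.
        if controls[i][j] == 0 or cases[i][j] / controls[i][j] >= threshold:
            tp += cases[i][j]             # cases correctly predicted "case"
            fp += controls[i][j]          # controls wrongly predicted "case"

sensitivity = tp / n_cases
specificity = 1 - fp / n_controls
balanced_accuracy = (sensitivity + specificity) / 2
```

Fuzzy MDR replaces the hard threshold test with a graded membership in [0,1], so cells whose ratio sits near the threshold contribute with appropriately reduced weight.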
The cohomological reduction method for computing n-dimensional cocyclic matrices
Álvarez, Víctor; Frau, María-Dolores; Real, Pedro
2012-01-01
Provided that a cohomological model for G is known, we describe a method for constructing a basis for n-cocycles over G, from which the whole set of n-dimensional cocyclic matrices over G may be straightforwardly calculated. Focusing on the case n=2 (which is of special interest, e.g. for looking for cocyclic Hadamard matrices), our method provides a basis for 2-cocycles in such a way that representative 2-cocycles are calculated all at once, so that there is no need to distinguish between inflation and transgression 2-cocycles (as has traditionally been the case until now). When n>2, this method provides, for the first time, a uniform way of looking for higher-dimensional cocyclic Hadamard matrices. We illustrate the method with some examples, for n=2,3. In particular, we give some examples of improper 3-dimensional cocyclic Hadamard matrices.
An empirical fuzzy multifactor dimensionality reduction method for detecting gene-gene interactions.
Leem, Sangseob; Park, Taesung
2017-03-14
Detection of gene-gene interaction (GGI) is a key challenge towards solving the problem of missing heritability in genetics. The multifactor dimensionality reduction (MDR) method has been widely studied for detecting GGIs. MDR reduces the dimensionality of multiple factors by means of binary classification into high-risk (H) or low-risk (L) groups. Unfortunately, this simple binary classification does not reflect the uncertainty of H/L classification. Thus, we previously proposed Fuzzy MDR to overcome the limitations of binary classification by introducing the degree of membership of two fuzzy sets H/L. While Fuzzy MDR demonstrated higher power than that of MDR, its performance is highly dependent on several tuning parameters. In real applications, it is not easy to choose appropriate tuning parameter values. In this work, we propose an empirical fuzzy MDR (EF-MDR) which does not require specifying tuning parameter values. Here, we propose an empirical approach in which the membership degree is estimated directly from the data: the membership degree is estimated by the maximum likelihood estimator of the proportion of cases (controls) in each genotype combination. We also show that the balanced accuracy measure derived from this new membership function is a linear function of the standard chi-square statistic. This relationship allows us to perform the standard significance test using p-values in the MDR framework without permutation. Through two simulation studies, the power of the proposed EF-MDR is shown to be higher than those of MDR and Fuzzy MDR. We illustrate the proposed EF-MDR by analyzing Crohn's disease (CD) and bipolar disorder (BD) in the Wellcome Trust Case Control Consortium (WTCCC) dataset. We propose an empirical Fuzzy MDR for detecting GGI using the maximum likelihood estimator of the proportion of cases (controls) as the membership degree of the genotype combination.
The program written in R for EF-MDR is available at http://statgen.snu.ac.kr/software/EF-MDR .
Liu, Jie
2014-12-01
This study investigates the effect of feature dimensionality reduction strategies on the classification of surface electromyography (EMG) signals toward developing a practical myoelectric control system. Two dimensionality reduction strategies, feature selection and feature projection, were each tested on both EMG feature sets. A feature selection based myoelectric pattern recognition system was introduced to select the features by eliminating the redundant features of EMG recordings instead of directly choosing a subset of EMG channels. The Markov random field (MRF) method and a forward orthogonal search algorithm were each employed to evaluate the contribution of each individual feature to the classification. Our results from 15 healthy subjects indicate that, with a feature selection analysis, independent of the type of feature set, high overall accuracies can be achieved across all subjects in classification of seven different forearm motions with a small number of top-ranked original EMG features obtained from the forearm muscles (average overall classification accuracy >95% with 12 selected EMG features). Compared to various feature dimensionality reduction techniques in myoelectric pattern recognition, the proposed filter-based feature selection approach is independent of the type of classification algorithms and features, and can effectively reduce the redundant information not only across different channels, but also across different features in the same channel. This may enable robust EMG feature dimensionality reduction without needing to change ongoing, practical use of classification algorithms, an important step toward clinical utility.
Deriving Shape-Based Features for C. elegans Locomotion Using Dimensionality Reduction Methods
Gyenes, Bertalan; Brown, André E. X.
2016-01-01
High-throughput analysis of animal behavior is increasingly common following the advances of recording technology, leading to large high-dimensional data sets. This dimensionality can sometimes be reduced while still retaining relevant information. In the case of the nematode worm Caenorhabditis elegans, more than 90% of the shape variance can be captured using just four principal components. However, it remains unclear if other methods can achieve a more compact representation or contribute further biological insight to worm locomotion. Here we take a data-driven approach to worm shape analysis using independent component analysis (ICA), non-negative matrix factorization (NMF), a cosine series, and jPCA (a dynamic variant of principal component analysis [PCA]) and confirm that the dimensionality of worm shape space is close to four. Projecting worm shapes onto the bases derived using each method gives interpretable features ranging from head movements to tail oscillation. We use these as a comparison method to find differences between the wild type N2 worms and various mutants. For example, we find that the neuropeptide mutant nlp-1(ok1469) has an exaggerated head movement suggesting a mode of action for the previously described increased turning rate. The different bases provide complementary views of worm behavior and we expect that closer examination of the time series of projected amplitudes will lead to new results in the future. PMID:27582697
Moiré-reduction method for slanted-lenticular-based quasi-three-dimensional displays
Zhuang, Zhenfeng; Surman, Phil; Zhang, Lei; Rawat, Rahul; Wang, Shizheng; Zheng, Yuanjin; Sun, Xiao Wei
2016-12-01
In this paper we present a method for determining the preferred slant angle of a lenticular film that minimizes moiré patterns in quasi-three-dimensional (Q3D) displays. We evaluate the preferred slant angles of the lenticular film for a liquid crystal display (LCD) panel with a stripe-type sub-pixel structure. Additionally, a sub-pixel mapping algorithm for the specific angle is proposed to assign the images to either the right or left eye channel. A Q3D display prototype is built. Compared with a conventional slanted lenticular film (SLF), this newly implemented Q3D display not only eliminates moiré patterns but also provides 3D images in both portrait and landscape orientations. It is demonstrated that the developed SLF provides satisfactory 3D images by employing a compact structure, minimal moiré patterns and stabilized 3D contrast.
Fermion masses from dimensional reduction
Kapetanakis, D. (National Research Centre for the Physical Sciences Democritos, Athens (Greece)); Zoupanos, G. (European Organization for Nuclear Research, Geneva (Switzerland))
1990-10-11
We consider the fermion masses in gauge theories obtained from ten dimensions through dimensional reduction on coset spaces. We calculate the general fermion mass matrix and we apply the mass formula in illustrative examples.
Dimensional Reduction for Conformal Blocks
Hogervorst, Matthijs
2016-01-01
We consider the dimensional reduction of a CFT, breaking multiplets of the d-dimensional conformal group SO(d+1,1) up into multiplets of SO(d,1). This leads to an expansion of d-dimensional conformal blocks in terms of blocks in d-1 dimensions. In particular, we obtain a formula for 3d conformal blocks as an infinite sum over 2F1 hypergeometric functions with closed-form coefficients.
Three-dimensional metal artifact reduction method for dental conebeam CT scanners
Kobayashi, Koji; Katsumata, Atsushi; Ito, Koichi; Aoki, Takafumi
2009-02-01
In dental treatments, where metal is an indispensable material and dental implants require precise structural measurements of teeth and bones, the ability of CT scanners to perform Metal Artifact Reduction (MAR) is a very important yet unsolved problem. The increasing need for dental implants is raising the demand for conebeam CT. In this paper, an MAR method, the Metal Erasing Method (MEM), is extended to three dimensions. Assuming that metals are completely opaque to X-rays, MEM reconstructs metals and other materials separately, then combines them afterward. 3D-MEM is not only more efficient but also performs better than repeated application of MEM, because it identifies metals more precisely by utilizing the continuity of metals in the third dimension. Another important contribution of the research is the application of advanced binarization techniques for identifying metal-corrupted areas on projection images. Differential histogram techniques are applied to find an adequate threshold value. Whereas MEM needs to identify metals on a sinogram that covers all rotation angles with a single threshold value, identifying metals on each projection image with an individual value is an important benefit of 3D-MEM. The threshold value varies per projection angle, especially due to the influence of the spine and skull, which are objects outside of the field of view. The performance of 3D-MEM is examined using a subject who has as many as 12 pieces of complex metalwork in his teeth. It is shown that the metals are successfully identified and the severity of metal artifacts is considerably reduced.
Chen, Y. M.; Lin, P.; He, J. Q.; He, Y.; Li, X. L.
2016-01-01
This study was carried out for rapid and noninvasive determination of the class of sorghum species by using manifold dimensionality reduction (MDR) methods and the nonlinear regression method of least squares support vector machines (LS-SVM), combined with mid-infrared spectroscopy (MIRS) techniques. The Durbin and run tests of the augmented partial residual plot (APaRP) were performed to diagnose the nonlinearity of the raw spectral data. The nonlinear MDR methods of isometric feature mapping (ISOMAP), locally linear embedding, Laplacian eigenmaps and local tangent space alignment, as well as the linear MDR methods of principal component analysis and metric multidimensional scaling, were employed to extract the feature variables. The extracted characteristic variables were used as the input to LS-SVM to establish the relationship between the spectra and the target attributes. The mean average precision (MAP) scores and prediction accuracy were used to evaluate the performance of the models. The prediction results showed that the ISOMAP-LS-SVM model obtained the best classification performance, with MAP scores and prediction accuracy of 0.947 and 92.86%, respectively. It can be concluded that the ISOMAP-LS-SVM model combined with the MIRS technique has the potential to classify the species of sorghum with reasonable accuracy.
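ISOMAP, the best performer above, is straightforward to sketch from scratch: build a k-nearest-neighbor graph, compute geodesic (shortest-path) distances along it, and embed them with classical multidimensional scaling. The toy data below (a curve in 3-D, not spectra) and all parameter choices are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def isomap(X, n_neighbors=8, n_components=2):
    """Minimal Isomap: kNN graph -> geodesic distances -> classical MDS."""
    n = len(X)
    D = cdist(X, X)
    G = np.full((n, n), np.inf)            # inf marks "no edge"
    nbrs = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):
        G[i, nbrs[i]] = D[i, nbrs[i]]
    G = np.minimum(G, G.T)                 # symmetrize the graph
    geo = shortest_path(G, method="D")     # all-pairs geodesic distances
    # classical MDS on the squared geodesic distances
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (geo ** 2) @ J
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:n_components]
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

# a 1-D curve embedded in 3-D: the single Isomap coordinate should
# recover the arc-length parameter t up to sign and scale
t = np.linspace(0, 3 * np.pi, 120)
X = np.c_[np.cos(t), np.sin(t), 0.1 * t]
Y = isomap(X, n_neighbors=6, n_components=1)
```

In the abstract's pipeline, the Isomap coordinates would then be fed to LS-SVM as the reduced feature variables.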
Zhang, Lianbin
2012-01-01
In this study, three-dimensional (3D) graphene assemblies are prepared from graphene oxide (GO) by a facile in situ reduction-assembly method, using a novel, low-cost, and environment-friendly reducing medium which is a combination of oxalic acid (OA) and sodium iodide (NaI). It is demonstrated that the combination of a reducing acid, OA, and NaI is indispensable for effective reduction of GO in the current study and this unique combination (1) allows for tunable control over the volume of the thus-prepared graphene assemblies and (2) enables 3D graphene assemblies to be prepared from the GO suspension with a wide range of concentrations (0.1 to 4.5 mg mL-1). To the best of our knowledge, the GO concentration of 0.1 mg mL-1 is the lowest GO concentration ever reported for preparation of 3D graphene assemblies. The thus-prepared 3D graphene assemblies exhibit low density, highly porous structures, and electrically conducting properties. As a proof of concept, we show that by infiltrating a responsive polymer of polydimethylsiloxane (PDMS) into the as-resulted 3D conducting network of graphene, a conducting composite is obtained, which can be used as a sensing device for differentiating organic solvents with different polarity. © 2012 The Royal Society of Chemistry.
Robust methods for data reduction
Farcomeni, Alessio
2015-01-01
Robust Methods for Data Reduction gives a non-technical overview of robust data reduction techniques, encouraging the use of these important and useful methods in practical applications. The main areas covered include principal components analysis, sparse principal component analysis, canonical correlation analysis, factor analysis, clustering, double clustering, and discriminant analysis. The first part of the book illustrates how dimension reduction techniques synthesize available information by reducing the dimensionality of the data. The second part focuses on cluster and discriminant analy
Multichannel transfer function with dimensionality reduction
Kim, Han Suk
2010-01-17
The design of transfer functions for volume rendering is a difficult task. This is particularly true for multi-channel data sets, where multiple data values exist for each voxel. In this paper, we propose a new method for transfer function design. Our new method provides a framework to combine multiple approaches and pushes the boundary of gradient-based transfer functions to multiple channels, while still keeping the dimensionality of transfer functions at a manageable level, i.e., a maximum of three dimensions, which can be displayed visually in a straightforward way. Our approach utilizes channel intensity, gradient, curvature and texture properties of each voxel. The high-dimensional data of the domain is reduced by applying recently developed nonlinear dimensionality reduction algorithms. In this paper, we used Isomap as well as a traditional algorithm, Principal Component Analysis (PCA). Our results show that these dimensionality reduction algorithms significantly improve the transfer function design process without compromising visualization accuracy. In this publication we report on the impact of the dimensionality reduction algorithms on transfer function design for confocal microscopy data.
Dimensionality reduction with unsupervised nearest neighbors
Kramer, Oliver
2013-01-01
This book is devoted to a novel approach to dimensionality reduction based on the famous nearest neighbor method, which is a powerful classification and regression approach. It starts with an introduction to machine learning concepts and a real-world application from the energy domain. Then, unsupervised nearest neighbors (UNN) is introduced as an efficient iterative method for dimensionality reduction. Various UNN models are developed step by step, reaching from a simple iterative strategy for discrete latent spaces to a stochastic kernel-based algorithm for learning submanifolds with independent parameterizations. Extensions that allow the embedding of incomplete and noisy patterns are introduced. Various optimization approaches are compared, from evolutionary to swarm-based heuristics. Experimental comparisons to related methodologies taking into account artificial test data sets and also real-world data demonstrate the behavior of UNN in practical scenarios. The book contains numerous color figures to illustr...
Lee, Jeayoung; Jin, Mehyun; Lee, Yoonseok; Ha, Jaejung; Yeo, Jungsou; Oh, Dongyep
2014-01-01
We examined the gene-gene interactions of five exonic single nucleotide polymorphisms (SNPs) in the gene encoding fatty acid synthase using 513 Korean cattle, analyzed with the model-free, non-parametric multifactor dimensionality reduction method. The five SNPs of g.12870 T>C, g.13126 T>C, g.15532 C>A, g.16907 T>C and g.17924 G>A, associated with a variety of fatty acid compositions and marbling score, were used in this study. The two-factor interaction between g.13126 T>C and g.15532 C>A had the highest training-balanced accuracy among the five-factor models and a testing-balanced accuracy of 70.18 % on C18:1 with a cross-validation consistency of 10 out of 10. Also, the two-factor interaction between g.13126 T>C and g.15532 C>A had the highest testing-balanced accuracy of any model on MUFA, 68.59 % with a 10 out of 10 cross-validation consistency. In MS, the single SNP g.15532 C>A had the best accuracy at 58.85 % and the two-factor interaction model of g.12870 T>C and g.15532 C>A had the highest testing-balanced accuracy at 64.00 %. The three-factor interaction model of g.12870 T>C, g.13126 T>C and g.15532 C>A recorded a high testing-balanced accuracy of 63.24 %, but it was lower than that of the two-factor interaction model. We used likelihood ratio tests for interaction, and chi-square tests to validate our results, with all tests showing statistical significance. We also compared mean scores between the high-risk trait group and the low-risk trait group. The genotypes of TTCA, TTAA and TCAA at g.15532 and g.13126 on C18:1, genotypes TTCC, TTCA, TTAA, TCAA, CCAA at g.15532 and g.13126 on MUFA, and genotypes CCCC, TCCA, CCCA, TTAA, TCAA and CCAA at g.15532 and g.12870 on MS were recommended for the genetic improvement of beef quality.
Foist, Rod B; Schulze, H Georg; Ivanov, Andre; Turner, Robin F B
2011-05-01
Two-dimensional correlation spectroscopy (2D-COS) is a powerful spectral analysis technique widely used in many fields of spectroscopy because it can reveal spectral information in complex systems that is not readily evident in the original spectral data alone. However, noise may severely distort the information and thus limit the technique's usefulness. Consequently, noise reduction is often performed before implementing 2D-COS. In general, this is implemented using one-dimensional (1D) methods applied to the individual input spectra, but, because 2D-COS is based on sets of successive spectra and produces 2D outputs, there is also scope for the utilization of 2D noise-reduction methods. Furthermore, 2D noise reduction can be applied either to the original set of spectra before performing 2D-COS ("pretreatment") or on the 2D-COS output ("post-treatment"). Very little work has been done on post-treatment; hence, the relative advantages of these two approaches are unclear. In this work we compare the noise-reduction performance on 2D-COS of pretreatment and post-treatment using 1D (wavelets) and 2D algorithms (wavelets, matrix maximum entropy). The 2D methods generally outperformed the 1D method in pretreatment noise reduction. 2D post-treatment in some cases was superior to pretreatment and, unexpectedly, also provided correlation coefficient maps that were similar to 2D correlation spectroscopy maps but with apparent better contrast.
Andersson, Pher G
2008-01-01
With its comprehensive overview of modern reduction methods, this book features high-quality contributions allowing readers to find reliable solutions quickly and easily. The monograph treats the reduction of carbonyls, alkenes, imines and alkynes, as well as reductive aminations and cross- and Heck couplings, before finishing off with sections on kinetic resolutions and hydrogenolysis. An indispensable lab companion for every chemist.
Dimensional Reduction for Generalized Continuum Polymers
Helmuth, Tyler
2016-10-01
The Brydges-Imbrie dimensional reduction formula relates the pressure of a d-dimensional gas of hard spheres to a model of (d+2)-dimensional branched polymers. Brydges and Imbrie's proof was non-constructive and relied on a supersymmetric localization lemma. The main result of this article is a constructive proof of a more general dimensional reduction formula that contains the Brydges-Imbrie formula as a special case. Central to the proof are invariance lemmas, which were first introduced by Kenyon and Winkler for branched polymers. The new dimensional reduction formulas rely on invariance lemmas for central hyperplane arrangements that are due to Mészáros and Postnikov. Several applications are presented, notably dimensional reduction formulas (i) for non-spherical bodies and (ii) for corrections to the pressure due to symmetry effects.
What is dimensional reduction really telling us?
Coumbe, Daniel
2015-01-01
Numerous approaches to quantum gravity report a reduction in the number of spacetime dimensions at the Planck scale. However, accepting the reality of dimensional reduction also means accepting its consequences, including a variable speed of light. We provide numerical evidence for a variable speed of light in the causal dynamical triangulation (CDT) approach to quantum gravity, showing that it closely matches the superluminality implied by dimensional reduction. We argue that reconciling the appearance of dimensional reduction with a constant speed of light may require modifying our understanding of time, an idea originally proposed in Ref. 1.
Cascade Support Vector Machines with Dimensionality Reduction
Oliver Kramer
2015-01-01
Cascade support vector machines have been introduced as an extension of classic support vector machines that allows fast training on large data sets. In this work, we combine cascade support vector machines with dimensionality-reduction-based preprocessing. The cascade principle allows fast learning based on the division of the training set into subsets and the union of cascade learning results based on the support vectors in each cascade level. The combination with dimensionality reduction as preprocessing results in a significant speedup, often without loss of classification accuracy, while considering the high-dimensional pendants of the low-dimensional support vectors in each new cascade level. We analyze and compare various instantiations of dimensionality reduction preprocessing and cascade SVMs with principal component analysis, locally linear embedding, and isometric mapping. The experimental analysis on various artificial and real-world benchmark problems includes various cascade-specific parameters like intermediate training set sizes and dimensionalities.
Dimensionality reduction in Bayesian estimation algorithms
G. W. Petty
2013-03-01
An idealized synthetic database loosely resembling 3-channel passive microwave observations of precipitation against a variable background is employed to examine the performance of a conventional Bayesian retrieval algorithm. For this dataset, algorithm performance is found to be poor owing to an irreconcilable conflict between the need to find matches in the dependent database versus the need to exclude inappropriate matches. It is argued that the likelihood of such conflicts increases sharply with the dimensionality of the observation space of real satellite sensors, which may utilize 9 to 13 channels to retrieve precipitation, for example. An objective method is described for distilling the relevant information content from N real channels into a much smaller number (M) of pseudochannels while also regularizing the background (geophysical plus instrument) noise component. The pseudochannels are linear combinations of the original N channels obtained via a two-stage principal component analysis of the dependent dataset. Bayesian retrievals based on a single pseudochannel applied to the independent dataset yield striking improvements in overall performance. The differences between the conventional Bayesian retrieval and the reduced-dimensional Bayesian retrieval suggest that a major potential problem with conventional multichannel retrievals – whether Bayesian or not – lies in the common but often inappropriate assumption of diagonal error covariance. The dimensional reduction technique described herein avoids this problem by, in effect, recasting the retrieval problem in a coordinate system in which the desired covariance is lower-dimensional, diagonal, and of unit magnitude.
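The pseudochannel construction, noise whitening followed by PCA of the dependent data, can be sketched as follows. Everything here is a synthetic mock-up: the nine "channels", the one-dimensional precipitation signal, and the randomly drawn background covariance are assumptions, not the paper's dataset or its exact two-stage procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
# mock satellite data: N = 9 channels, a 1-D "precipitation" signal,
# plus correlated background (geophysical + instrument) noise
N, n_obs = 9, 2000
sig_dir = rng.normal(size=N)
sig_dir /= np.linalg.norm(sig_dir)
A = rng.normal(size=(N, N))
bg_cov = 0.1 * (A @ A.T) / N                       # background covariance
signal = rng.gamma(2.0, 1.0, size=n_obs)           # non-negative intensities
noise = rng.multivariate_normal(np.zeros(N), bg_cov, size=n_obs)
X = np.outer(signal, sig_dir) + noise

# stage 1: whiten with respect to the background covariance
w, V = np.linalg.eigh(bg_cov)
W = V @ np.diag(w ** -0.5) @ V.T                   # symmetric whitening
Xw = (X - X.mean(axis=0)) @ W

# stage 2: PCA of the whitened data; leading components = pseudochannels
_, s, Vt = np.linalg.svd(Xw, full_matrices=False)
pseudo = Xw @ Vt[0]                                # single pseudochannel

corr_pseudo = abs(np.corrcoef(pseudo, signal)[0, 1])
```

After whitening, the background contributes unit, diagonal covariance in every direction, so the leading principal component concentrates the signal; this is the effect the paper exploits to justify a diagonal error covariance in the reduced space.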
Dimensional reduction of nonlinear time delay systems
M. S. Fofana
2005-01-01
infinite-dimensional problem without the assumption of small time delay. This dimensional reduction is illustrated in this paper with the delay versions of the Duffing and van der Pol equations. For both nonlinear delay equations, transcendental characteristic equations of linearized stability are examined through Hopf bifurcation. The infinite-dimensional nonlinear solutions of the delay equations are decomposed into stable and centre subspaces, whose respective dimensions are determined by the linearized stability of the transcendental equations. Linear semigroups, infinitesimal generators, and their adjoint forms with bilinear pairings are the additional candidates for the infinite-dimensional reduction.
Reduction of infinite dimensional equations
Zhongding Li
2006-02-01
In this paper, we use the general Legendre transformation to show that infinite-dimensional integrable equations can be reduced to a finite-dimensional integrable Hamiltonian system on an invariant set under the flow of the integrable equations. Then we obtain the periodic or quasi-periodic solution of the equation. This generalizes the results of Lax and Novikov regarding the periodic or quasi-periodic solutions of the KdV equation to the general case of isospectral Hamiltonian integrable equations. Finally, we discuss the AKNS hierarchy as a special example.
Dimensionality reduction of collective motion by principal manifolds
Gajamannage, Kelum; Butail, Sachit; Porfiri, Maurizio; Bollt, Erik M.
2015-01-01
While the existence of low-dimensional embedding manifolds has been shown in patterns of collective motion, the current battery of nonlinear dimensionality reduction methods is not amenable to the analysis of such manifolds. This is mainly due to the necessary spectral decomposition step, which limits control over the mapping from the original high-dimensional space to the embedding space. Here, we propose an alternative approach that demands a two-dimensional embedding which topologically summarizes the high-dimensional data. In this sense, our approach is closely related to the construction of one-dimensional principal curves that minimize orthogonal error to data points subject to smoothness constraints. Specifically, we construct a two-dimensional principal manifold directly in the high-dimensional space using cubic smoothing splines, and define the embedding coordinates in terms of geodesic distances. Thus, the mapping from the high-dimensional data to the manifold is defined in terms of local coordinates. Through representative examples, we show that compared to existing nonlinear dimensionality reduction methods, the principal manifold retains the original structure even in noisy and sparse datasets. The principal manifold finding algorithm is applied to configurations obtained from a dynamical system of multiple agents simulating a complex maneuver called predator mobbing, and the resulting two-dimensional embedding is compared with that of a well-established nonlinear dimensionality reduction method.
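The idea of fitting a principal manifold with cubic smoothing splines can be illustrated in its simplest setting: a one-dimensional principal curve through noisy planar data. The sketch below performs a single projection-and-smoothing pass (in the spirit of Hastie-Stuetzle principal curves) rather than the authors' full two-dimensional geodesic construction; the data and the smoothing parameter are assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(5)
# noisy samples around the smooth planar curve y = sin(x)
t = np.sort(rng.uniform(0, 2 * np.pi, 300))
X = np.c_[t, np.sin(t)] + 0.1 * rng.normal(size=(300, 2))

# order the points by projection onto the first principal component,
# giving an initial arc-length-like parameterization
Xc = X - X.mean(axis=0)
u = np.linalg.svd(Xc, full_matrices=False)[2][0]
order = np.argsort(Xc @ u)
lam = np.linspace(0.0, 1.0, len(X))

# cubic smoothing splines of each coordinate against the parameter;
# s is set near the expected total squared noise (300 * 0.1**2)
sx = UnivariateSpline(lam, X[order, 0], s=3.0)
sy = UnivariateSpline(lam, X[order, 1], s=3.0)
curve = np.c_[sx(lam), sy(lam)]

# mean distance of the fitted curve from the true generating curve
err = np.abs(curve[:, 1] - np.sin(curve[:, 0])).mean()
```

The paper's two-dimensional principal manifold generalizes this by smoothing over a surface parameterization and measuring embedding coordinates as geodesic distances along the fitted manifold.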
Relations between two-dimensional models from dimensional reduction
Amaral, R.L.P.G.; Natividade, C.P. [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Inst. de Fisica
1998-12-31
In this work we explore the consequences of dimensional reduction of the 3D Maxwell-Chern-Simons and some related models. A connection between topological mass generation in 3D and mass generation according to the Schwinger mechanism in 2D is obtained. Besides, a series of relationships is established by resorting to dimensional reduction and duality interpolating transformations. Nonabelian generalizations are also pointed out.
Dimensional reduction over fuzzy coset spaces
Aschieri, P. E-mail: aschieri@theorie.physik.uni-muenchen.de; Madore, J.; Manousselis, P.; Zoupanos, G
2004-04-01
We examine gauge theories on Minkowski space-time times fuzzy coset spaces. This means that the extra space dimensions, instead of being a continuous coset space S/R, are a corresponding finite matrix approximation. The gauge theory defined on this non-commutative setup is reduced to four dimensions and the rules of the corresponding dimensional reduction are established. We investigate in particular the case of the fuzzy sphere, including the dimensional reduction of fermion fields.
Dimension and dimensional reduction in quantum gravity
Carlip, S.
2017-10-01
A number of very different approaches to quantum gravity contain a common thread, a hint that spacetime at very short distances becomes effectively two dimensional. I review this evidence, starting with a discussion of the physical meaning of ‘dimension’ and concluding with some speculative ideas of what dimensional reduction might mean for physics.
Coset space dimensional reduction of gauge theories
Kapetanakis, D. (Physik Dept., Technische Univ. Muenchen, Garching (Germany)); Zoupanos, G. (CERN, Geneva (Switzerland))
1992-10-01
We review the attempts to construct unified theories defined in higher dimensions which are dimensionally reduced over coset spaces. We employ the coset space dimensional reduction scheme, which permits the detailed study of the resulting four-dimensional gauge theories. In the context of this scheme we present the difficulties, and the suggested ways out, in the attempts to describe the observed interactions in a realistic way.
Dimensional reduction and the Higgs potential
Farakos, K.; Koutsoumbas, G.; Surridge, M.; Zoupanos, G.
1987-08-17
Dimensional reduction of pure gauge theories over a compact coset space S/R leads to 4-dimensional gauge theories, where Higgs fields and the corresponding potential appear naturally. We derive and examine the Higgs potential in certain classes of dimensionally reduced models. In some of these models with Higgs potential of geometrical origin, the spontaneous symmetry breaking takes us a step closer towards the observed low energy gauge theory.
Parallel Framework for Dimensionality Reduction of Large-Scale Datasets
Sai Kiranmayee Samudrala
2015-01-01
Dimensionality reduction refers to a set of mathematical techniques used to reduce the complexity of high-dimensional data while preserving selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate the applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
A Difference Criterion for Dimensionality Reduction
Aved, A. J.; Blasch, E.; Peng, J.
2015-12-01
A dynamic data-driven geoscience application includes hyperspectral scene classification, which has shown promising potential in many remote-sensing applications. A hyperspectral image of a scene's spectral radiance is typically measured by hundreds of contiguous spectral bands or features, ranging from visible/near-infrared (VNIR) to shortwave infrared (SWIR). Spectral-reflectance measurements provide rich information for object detection and classification. On the other hand, they generate a large number of features, resulting in a high-dimensional measurement space. However, a large number of features often poses challenges and can result in poor classification performance. This is due to the curse of dimensionality, which requires model reduction, uncertainty quantification and optimization for real-world applications. In such situations, feature extraction or selection methods play an important role by significantly reducing the number of features for building classifiers. In this work, we focus on efficient feature extraction using the dynamic data-driven applications systems (DDDAS) paradigm. Many dimension reduction techniques have been proposed in the literature. A well-known technique is Fisher's linear discriminant analysis (LDA). LDA finds the projection matrix that simultaneously maximizes a between-class scatter matrix and minimizes a within-class scatter matrix. However, LDA requires a matrix inverse, which can be a major issue when the within-class scatter matrix is singular. We propose a difference criterion for dimension reduction that does not require a matrix inverse for software implementation. We show how to solve the optimization problem with semi-definite programming. In addition, we establish an error bound for the proposed algorithm. We demonstrate the connection between Relief feature selection and a two-class formulation of multi-class problems, thereby providing a sound basis for observed benefits associated with this formulation. Finally, we provide
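A difference criterion of this kind, maximizing tr(W^T (Sb - lam*Sw) W) over orthonormal W, reduces to an ordinary symmetric eigendecomposition, so no inverse of the within-class scatter is ever formed. The sketch below is a minimal illustration on synthetic two-class data; the paper's semi-definite-programming formulation and error bound are not reproduced, and lam = 1 is an assumed setting.

```python
import numpy as np

def scatter_matrices(X, y):
    """Between-class (Sb) and within-class (Sw) scatter matrices."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    return Sb, Sw

def difference_lda(X, y, n_components=1, lam=1.0):
    """Maximize tr(W^T (Sb - lam*Sw) W) over orthonormal W: take the top
    eigenvectors of the symmetric matrix Sb - lam*Sw. No matrix inverse
    is needed, so a singular Sw causes no trouble."""
    Sb, Sw = scatter_matrices(X, y)
    w, V = np.linalg.eigh(Sb - lam * Sw)
    return V[:, np.argsort(w)[::-1][:n_components]]

rng = np.random.default_rng(3)
# two classes separated along axis 0; axis 1 is uninformative noise
X = np.vstack([rng.normal([0.0, 0.0], 0.3, size=(100, 2)),
               rng.normal([2.0, 0.0], 0.3, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)
W = difference_lda(X, y)
# the learned direction should align with the separating axis [1, 0]
```

Classical ratio-trace LDA would instead solve the generalized eigenproblem for inv(Sw) @ Sb, which is exactly the inverse this criterion avoids.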
Dimensional reduction and dynamical symmetry breaking
Forgacs, P.; Zoupanos, G.
1984-11-22
We present a model in which the electroweak gauge group is broken according to a dynamical scenario based on the chiral symmetry breaking of high colour representations. The dynamical scenario requires also the existence of elementary Higgs fields, which in the present scheme come from the dimensional reduction of a pure gauge theory.
Outlier Preservation by Dimensionality Reduction Techniques
Onderwater, M.
2015-01-01
Sensors are increasingly part of our daily lives: motion detection, lighting control, and energy consumption all rely on sensors. Combining this information into, for instance, simple and comprehensive graphs can be quite challenging. Dimensionality reduction is often used to address this problem, b
Multiloop Integrand Reduction for Dimensionally Regulated Amplitudes
Mastrolia, P; Ossola, G; Peraro, T
2013-01-01
We present the integrand reduction via multivariate polynomial division as a natural technique to encode the unitarity conditions of Feynman amplitudes. We derive a recursive formula for the integrand reduction, valid for arbitrary dimensionally regulated loop integrals with any number of loops and external legs, which can be used to obtain the decomposition of any integrand analytically with a finite number of algebraic operations. The general results are illustrated by applications to two-loop Feynman diagrams in QED and QCD, showing that the proposed reduction algorithm can also be seamlessly applied to integrands with denominators appearing with arbitrary powers.
Effective Image Database Search via Dimensionality Reduction
Dahl, Anders Bjorholm; Aanæs, Henrik
2008-01-01
Image search using the bag-of-words image representation is investigated further in this paper. This approach has shown promising results for large-scale image collections, making it relevant for Internet applications. The steps involved in the bag-of-words approach are feature extraction, vocabulary building, and searching with a query image. It is important to keep the computational cost low through all steps. In this paper we focus on the efficiency of the technique. To do that we substantially reduce the dimensionality of the features by the use of PCA and addition of color. Building… In the query step, features from the query image are assigned to the visual vocabulary. The dimensionality reduction enables us to do exact feature labeling using a kD-tree, instead of the approximate approaches normally used. Despite the dimensionality reduction to between 6 and 15 dimensions we obtain improved…
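The combination of aggressive PCA reduction with exact kD-tree labeling is easy to sketch. The 128-dimensional random vectors below are a hypothetical stand-in for local image descriptors, and 8 reduced dimensions is an assumed value inside the paper's 6-15 range.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
# mock descriptors: 5000 "visual words" in 128 dimensions
train = rng.normal(size=(5000, 128))
# queries: slightly perturbed copies of the first ten training descriptors
query = train[:10] + 0.01 * rng.normal(size=(10, 128))

# PCA projection to k dimensions
k = 8
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
P = Vt[:k].T
train_low = (train - mean) @ P
query_low = (query - mean) @ P

# exact nearest-neighbor assignment is cheap in the reduced space
tree = cKDTree(train_low)
_, idx = tree.query(query_low)   # each query should map to its original
```

kD-trees degrade toward brute force as dimensionality grows, which is why the reduction to well under 20 dimensions is what makes exact (rather than approximate) labeling practical here.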
Nonlinear Dimensionality Reduction via Path-Based Isometric Mapping
2013-01-01
Nonlinear dimensionality reduction methods have demonstrated top-notch performance in many pattern recognition and image classification tasks. Despite their popularity, they suffer from highly expensive time and memory requirements, which render them inapplicable to large-scale datasets. To leverage such cases we propose a new method called "Path-Based Isomap". Similar to Isomap, we exploit geodesic paths to find the low-dimensional embedding. However, instead of preserving pairwise geodesic ...
Local coordinates alignment with global preservation for dimensionality reduction.
Chen, Jing; Ma, Zhengming; Liu, Yang
2013-01-01
Dimensionality reduction is vital in many fields, and alignment-based methods for nonlinear dimensionality reduction have become popular recently because they can map the high-dimensional data into a low-dimensional subspace with the property of local isometry. However, the relationships between patches in original high-dimensional space cannot be ensured to be fully preserved during the alignment process. In this paper, we propose a novel method for nonlinear dimensionality reduction called local coordinates alignment with global preservation. We first introduce a reasonable definition of topology-preserving landmarks (TPLs), which not only contribute to preserving the global structure of datasets and constructing a collection of overlapping linear patches, but they also ensure that the right landmark is allocated to the new test point. Then, an existing method for dimensionality reduction that has good performance in preserving the global structure is used to derive the low-dimensional coordinates of TPLs. Local coordinates of each patch are derived using tangent space of the manifold at the corresponding landmark, and then these local coordinates are aligned into a global coordinate space with the set of landmarks in low-dimensional space as reference points. The proposed alignment method, called landmarks-based alignment, can produce a closed-form solution without any constraints, while most previous alignment-based methods impose the unit covariance constraint, which will result in the deficiency of global metrics and undesired rescaling of the manifold. Experiments on both synthetic and real-world datasets demonstrate the effectiveness of the proposed algorithm.
A Fourier dimensionality reduction model for big data interferometric imaging
Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves
2017-06-01
Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator, and hence its sampling properties are also preserved in light of compressed sensing theory. We show that this embedding can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justifying the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction-dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. MATLAB code implementing the ...
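The "dirty image then weighted subsampled DFT" step described in the abstract above can be caricatured in a few lines of NumPy. Everything here is an illustrative assumption (toy image size, a random keep-mask, uniform weights); the actual model derives its weights from the singular values of the measurement operator:

```python
import numpy as np

def reduce_visibilities(dirty_image, keep_mask, weights):
    """Weighted, subsampled DFT of the dirty image (toy embedding)."""
    coeffs = np.fft.fft2(dirty_image)   # full Fourier plane of the dirty image
    return weights * coeffs[keep_mask]  # keep a weighted subset of coefficients

rng = np.random.default_rng(1)
dirty = rng.standard_normal((64, 64))   # stand-in for a gridded dirty image
mask = rng.random((64, 64)) < 0.25      # keep roughly a quarter of the plane
w = np.ones(mask.sum())                 # placeholder for singular-value weights
reduced = reduce_visibilities(dirty, mask, w)
print(reduced.size < dirty.size)        # True: embedding below image size
```

The reduced vector is complex-valued and strictly smaller than the image, matching the "dimensionality reduction below image size" claim in spirit only.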
Linear low-rank approximation and nonlinear dimensionality reduction
ZHANG Zhenyue; ZHA Hongyuan
2004-01-01
We present our recent work on both linear and nonlinear data reduction methods and algorithms: for the linear case we discuss results on structure analysis of the SVD of column-partitioned matrices and sparse low-rank approximation; for the nonlinear case we investigate methods for nonlinear dimensionality reduction and manifold learning. The problems we address have attracted a great deal of interest in data mining and machine learning.
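As a concrete reminder of the linear workhorse behind such methods, the best rank-k approximation of a matrix in the Frobenius norm is the truncated SVD (the Eckart-Young theorem). A minimal sketch with illustrative sizes:

```python
import numpy as np

def low_rank_approx(A, k):
    """Best rank-k approximation of A in the Frobenius norm (Eckart-Young),
    via the truncated singular value decomposition."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 30))  # rank <= 8
print(np.linalg.norm(A - low_rank_approx(A, 8)) < 1e-8)  # True: rank-8 is exact
```

Truncating further (k < 8) trades reconstruction error for dimensionality, which is the basic bargain all the linear methods in this list make.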
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption may be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models may be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
Margin Based Dimensionality Reduction and Generalization
2010-01-01
[Abstract not available; only extraction fragments of the report documentation page survive. Recoverable metadata: author affiliations include IBM T.J. Watson Research, Hawthorne, NY 10532, USA, and the Computing Technology Applications Branch, Air Force Research Laboratory, Wright-Patterson AFB, OH 45433.]
Dimensional reduction without continuous extra dimensions
Chamseddine, Ali H. [American University of Beirut, Physics Department, Beirut, Lebanon and I.H.E.S. F-91440 Bures-sur-Yvette (France); Froehlich, J.; Schubnel, B. [ETHZ, Mathematics and Physics Departments, Zuerich (Switzerland); Wyler, D. [Institute of Theoretical Physics, University of Zuerich (Switzerland)
2013-01-15
We describe a novel approach to dimensional reduction in classical field theory. Inspired by ideas from noncommutative geometry, we introduce extended algebras of differential forms over space-time, generalized exterior derivatives, and generalized connections associated with the 'geometry' of space-times with discrete extra dimensions. We apply our formalism to theories of gauge and gravitational fields and find natural geometrical origins for an axion and a dilaton field, as well as a Higgs field.
Recursive support vector machines for dimensionality reduction.
Tao, Qing; Chu, Dejun; Wang, Jue
2008-01-01
The usual dimensionality reduction technique in supervised learning is mainly based on linear discriminant analysis (LDA), but it suffers from singularity or undersampled problems. On the other hand, a regular support vector machine (SVM) separates the data only in terms of one single direction of maximum margin, and the classification accuracy may not be good enough. In this letter, a recursive SVM (RSVM) is presented, in which several orthogonal directions that best separate the data with the maximum margin are obtained. Theoretical analysis shows that a completely orthogonal basis can be derived in the feature subspace spanned by the training samples and that the margin is decreasing along the recursive components in linearly separable cases. As a result, a new dimensionality reduction technique based on multilevel maximum margin components, and then a classifier with high accuracy, are achieved. Experiments on synthetic and several real data sets show that RSVM using multilevel maximum margin features can perform efficient dimensionality reduction and outperform regular SVM in binary classification problems.
A Tannakian approach to dimensional reduction of principal bundles
Álvarez-Cónsul, Luis; García-Prada, Oscar
2016-01-01
Let $P$ be a parabolic subgroup of a connected simply connected complex semisimple Lie group $G$. Given a compact Kähler manifold $X$, the dimensional reduction of $G$-equivariant holomorphic vector bundles over $X\times G/P$ was carried out by the first and third authors. This raises the question of dimensional reduction of holomorphic principal bundles over $X\times G/P$. The method used for equivariant vector bundles does not generalize to principal bundles. In this paper, we adapt to equivariant principal bundles the Tannakian approach of Nori, to describe the dimensional reduction of $G$-equivariant principal bundles over $X\times G/P$, and to establish a Hitchin-Kobayashi type correspondence. In order to be able to apply the Tannakian theory, we need to assume that $X$ is a complex projective manifold.
Dimensionality Reduction on Multi-Dimensional Transfer Functions for Multi-Channel Volume Data Sets
Kim, Han Suk; Schulze, Jürgen P.; Cone, Angela C.; Sosinsky, Gina E.; Martone, Maryann E.
2011-01-01
The design of transfer functions for volume rendering is a non-trivial task. This is particularly true for multi-channel data sets, where multiple data values exist for each voxel, which requires multi-dimensional transfer functions. In this paper, we propose a new method for multi-dimensional transfer function design. Our new method provides a framework to combine multiple computational approaches and pushes the boundary of gradient-based multi-dimensional transfer functions to multiple channels, while keeping the dimensionality of transfer functions at a manageable level, i.e., a maximum of three dimensions, which can be displayed visually in a straightforward way. Our approach utilizes channel intensity, gradient, curvature and texture properties of each voxel. Applying recently developed nonlinear dimensionality reduction algorithms reduces the high-dimensional data of the domain. In this paper, we use Isomap and Locally Linear Embedding as well as a traditional algorithm, Principal Component Analysis. Our results show that these dimensionality reduction algorithms significantly improve the transfer function design process without compromising visualization accuracy. We demonstrate the effectiveness of our new dimensionality reduction algorithms with two volumetric confocal microscopy data sets. PMID:21841914
Dimensionality Reduction of Laplacian Embedding for 3D Mesh Reconstruction
Mardhiyah, I.; Madenda, S.; Salim, R. A.; Wiryana, I. M.
2016-06-01
Laplacian eigenbases are a key object to compute from 3D mesh information. The geometric information of a 3D mesh includes the vertex locations and the connectivity of the graph. For spectral analysis of large, sparse graphs with thousands of vertices, it is not practical to compute all eigenvalues and eigenvectors. Because of that, in this paper we discuss how to perform 3D mesh reconstruction by reducing dimensionality: the null eigenvalue is discarded while the corresponding eigenvectors of the Laplacian embedding are retained, which simplifies mesh processing. The reduced information must still retain the connectivity of the graph. The advantages of dimensionality reduction are computational efficiency and problem simplification. The Laplacian eigenbasis is the starting point of dimensionality reduction for 3D mesh reconstruction. In this paper, we show how to reconstruct the geometric 3D mesh after the approximation step, where the dimensionality reduction is expressed by the Laplacian embedding matrix. Furthermore, the effectiveness of the 3D mesh reconstruction method is evaluated by geometric error, differential error, and final error. The numerical approximation errors of our results are small, with low computational complexity.
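A minimal sketch of the spectral machinery involved (illustrative, not the paper's reconstruction pipeline): build the graph Laplacian L = D - A of a small mesh graph, check that the smallest eigenvalue is the null one, and keep the next eigenvectors as low-dimensional spectral coordinates:

```python
import numpy as np

def laplacian_embedding(adj, dim):
    """Spectral coordinates from the graph Laplacian L = D - A, dropping
    the null eigenvalue (constant eigenvector) of a connected graph."""
    L = np.diag(adj.sum(axis=1)) - adj
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    return vals, vecs[:, 1:dim + 1]         # keep `dim` nontrivial coordinates

# 4-cycle graph as a stand-in for a tiny mesh
adj = np.array([[0., 1., 0., 1.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [1., 0., 1., 0.]])
vals, coords = laplacian_embedding(adj, 2)
print(abs(vals[0]) < 1e-12, coords.shape)  # True (4, 2)
```

For meshes with thousands of vertices one would compute only a few extremal eigenpairs with a sparse solver rather than the full dense eigendecomposition shown here.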
Nonlinear Dimensionality Reduction Method Based on Manifold Learning
段志臣; 芮小平; 张立媛
2012-01-01
As a new kind of nonlinear dimensionality reduction method, manifold learning is attracting increasing interest from researchers in visualization and related fields. To aid understanding of manifold learning, its basic principle is first introduced, then its development and a classification of its methods are summarized, and finally the basic ideas, algorithmic steps, and respective advantages and disadvantages of several common manifold learning methods are described. Experiments on the synthetic Swiss-Roll dataset compare the methods with respect to the choice of neighborhood size and the effect of noise. The results show that, compared with traditional linear dimensionality reduction methods, manifold learning can effectively discover the low-dimensional structure of the observed samples. Future research directions for manifold learning are also discussed, in the hope of further progress in this field.
Multi-Channel Transfer Function with Dimensionality Reduction
Kim, Han Suk; Schulze, Jürgen P.; Cone, Angela C.; Sosinsky, Gina E.; Martone, Maryann E.
2010-01-01
The design of transfer functions for volume rendering is a difficult task. This is particularly true for multi-channel data sets, where multiple data values exist for each voxel. In this paper, we propose a new method for transfer function design. Our new method provides a framework to combine multiple approaches and pushes the boundary of gradient-based transfer functions to multiple channels, while still keeping the dimensionality of transfer functions at a manageable level, i.e., a maximum of three dimensions, which can be displayed visually in a straightforward way. Our approach utilizes channel intensity, gradient, curvature and texture properties of each voxel. The high-dimensional data of the domain is reduced by applying recently developed nonlinear dimensionality reduction algorithms. In this paper, we used Isomap as well as a traditional algorithm, Principal Component Analysis (PCA). Our results show that these dimensionality reduction algorithms significantly improve the transfer function design process without compromising visualization accuracy. In this publication we report on the impact of the dimensionality reduction algorithms on transfer function design for confocal microscopy data. PMID:20582228
On the consistency of coset space dimensional reduction
Chatzistavrakidis, A. [Institute of Nuclear Physics, NCSR DEMOKRITOS, GR-15310 Athens (Greece); Physics Department, National Technical University of Athens, GR-15780 Zografou Campus, Athens (Greece)], E-mail: cthan@mail.ntua.gr; Manousselis, P. [Physics Department, National Technical University of Athens, GR-15780 Zografou Campus, Athens (Greece); Department of Engineering Sciences, University of Patras, GR-26110 Patras (Greece)], E-mail: pman@central.ntua.gr; Prezas, N. [CERN PH-TH, 1211 Geneva (Switzerland)], E-mail: nikolaos.prezas@cern.ch; Zoupanos, G. [Physics Department, National Technical University of Athens, GR-15780 Zografou Campus, Athens (Greece)], E-mail: george.zoupanos@cern.ch
2007-11-15
In this Letter we consider higher-dimensional Yang-Mills theories and examine their consistent coset space dimensional reduction. Utilizing a suitable ansatz and imposing a simple set of constraints we determine the four-dimensional gauge theory obtained from the reduction of both the higher-dimensional Lagrangian and the corresponding equations of motion. The two reductions yield equivalent results and hence they constitute an example of a consistent truncation.
Improved Nonlinear Data Dimensionality Reduction Method and Its Application
吴晓婷; 闫德勤
2011-01-01
Locally Linear Embedding (LLE) is one of the nonlinear dimensionality reduction methods based on manifold learning. In LLE, each sample point is reconstructed from a linear combination of its nearest neighbors. However, different numbers of neighbors produce different reconstruction errors, which directly affect the result. This paper constructs approximate reconstruction coefficients using category information obtained by clustering, and proposes an improved algorithm. The proposed algorithm effectively reduces the influence of the number of neighbors and preserves the manifold structure of the high-dimensional data. This is confirmed by comparative experiments on both synthetic and real-world data.
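For concreteness, the baseline LLE that such work improves upon can be sketched as follows. This is a generic textbook version with illustrative choices of neighborhood size and regularization, not the improved algorithm of the paper:

```python
import numpy as np

def lle(X, n_neighbors, dim, reg=1e-3):
    """Textbook LLE: reconstruct each point from its neighbors, then embed
    via the bottom eigenvectors of M = (I - W)^T (I - W)."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)                  # exclude self from neighbors
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])[:n_neighbors]     # indices of nearest neighbors
        Z = X[idx] - X[i]                        # neighbors centered at x_i
        C = Z @ Z.T                              # local Gram matrix
        C = C + reg * np.trace(C) * np.eye(n_neighbors)  # regularization
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, idx] = w / w.sum()                  # reconstruction weights sum to 1
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:dim + 1]                    # skip the constant eigenvector

rng = np.random.default_rng(3)
t = rng.uniform(0.0, 3.0, 30)
X = np.c_[np.cos(t), np.sin(t), 0.1 * t]         # points on a curve in 3D
Y = lle(X, n_neighbors=6, dim=2)
print(Y.shape)  # (30, 2)
```

The sensitivity to `n_neighbors` that the abstract discusses is visible here: changing it alters the weight matrix W and hence the embedding.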
Dimensional reduction for D3-brane moduli
Cownden, Brad; Frey, Andrew R.; Marsh, M. C. David; Underwood, Bret
2016-12-01
Warped string compactifications are central to many attempts to stabilize moduli and connect string theory with cosmology and particle phenomenology. We present a first-principles derivation of the low-energy 4D effective theory from dimensional reduction of a D3-brane in a warped Calabi-Yau compactification of type IIB string theory with imaginary self-dual 3-form flux, including effects of D3-brane motion beyond the probe approximation, and find the metric on the moduli space of brane positions, the universal volume modulus, and axions descending from the 4-form potential. As D3-branes may be considered as carrying either electric or magnetic charges for the self-dual 5-form field strength, we present calculations in both duality frames. Our results are consistent with, but extend significantly, earlier results on the low-energy effective theory arising from D3-branes in string compactifications.
Multiple Kernel Spectral Regression for Dimensionality Reduction
Bing Liu
2013-01-01
Traditional manifold learning algorithms, such as locally linear embedding, Isomap, and Laplacian eigenmap, only provide the embedding results of the training samples. To solve the out-of-sample extension problem, spectral regression (SR) solves the problem of learning an embedding function by establishing a regression framework, which can avoid eigen-decomposition of dense matrices. Motivated by the effectiveness of SR, we incorporate multiple kernel learning (MKL) into SR for dimensionality reduction. The proposed approach (termed MKL-SR) seeks an embedding function in the Reproducing Kernel Hilbert Space (RKHS) induced by the multiple base kernels. An MKL-SR algorithm is proposed to further improve the performance of kernel-based SR (KSR). Furthermore, the proposed MKL-SR algorithm can be performed in supervised, unsupervised, and semi-supervised situations. Experimental results on supervised and semi-supervised classification demonstrate the effectiveness and efficiency of our algorithm.
Taşkin, Gülşen
2016-05-01
Recently, information extraction from hyperspectral images (HI) has become an attractive research area for many practical applications in earth observation, since HI provides valuable information across a huge number of spectral bands. To process such a huge amount of data effectively, traditional methods may not provide satisfactory performance, because they mostly do not account for the high dimensionality of the data, which causes the curse of dimensionality, also known as the Hughes phenomenon. In supervised classification, the limited availability of training samples consequently leads to poor generalization performance. Therefore, advanced methods that account for the high dimensionality need to be developed in order to achieve good generalization capability. In this work, High Dimensional Model Representation (HDMR) was utilized for dimensionality reduction, and a novel feature selection method based on global sensitivity analysis was introduced. Several experiments were conducted on hyperspectral images, comparing against state-of-the-art feature selection algorithms in terms of classification accuracy, and the results showed that the proposed method outperforms the other feature selection methods for all considered classifiers, namely support vector machines, Bayes, and the J48 decision tree.
Coset space dimensional reduction of Einstein-Yang-Mills theory
Chatzistavrakidis, A. [Institute of Nuclear Physics, NCSR Demokritos, 15310 Athens (Greece); Physics Department, National Technical University of Athens, 15780 Zografou Campus, Athens (Greece); Manousselis, P. [Physics Department, National Technical University of Athens, 15780 Zografou Campus, Athens (Greece); Department of Engineering Sciences, University of Patras, 26110 Patras (Greece); Prezas, N. [Theory Unit, Physics Department, 1211 Geneva (Switzerland); Zoupanos, G.
2008-04-15
In the present contribution we extend our previous work by considering the coset space dimensional reduction of higher-dimensional Einstein-Yang-Mills theories including scalar fluctuations as well as Kaluza-Klein excitations of the compactification metric and we describe the gravity-modified rules for the reduction of non-abelian gauge theories. (Abstract Copyright [2008], Wiley Periodicals, Inc.)
Adaptive sampling for nonlinear dimensionality reduction based on manifold learning
Franz, Thomas; Zimmermann, Ralf; Goertz, Stefan
2017-01-01
We make use of the non-intrusive dimensionality reduction method Isomap in order to emulate nonlinear parametric flow problems that are governed by the Reynolds-averaged Navier-Stokes equations. Isomap is a manifold learning approach that provides a low-dimensional embedding space that is approximately isometric to the manifold that is assumed to be formed by the high-fidelity Navier-Stokes flow solutions under smooth variations of the inflow conditions. The focus of the work at hand is the adaptive construction and refinement of the Isomap emulator: We exploit the non-Euclidean Isomap metric to detect and fill up gaps in the sampling in the embedding space. The performance of the proposed manifold filling method will be illustrated by numerical experiments, where we consider nonlinear parameter-dependent steady-state Navier-Stokes flows in the transonic regime.
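A bare-bones Isomap (k-nearest-neighbor graph, Floyd-Warshall geodesic distances, classical MDS) can be sketched as follows; the function name and parameters are illustrative, and a production emulator would use a scalable shortest-path solver rather than the O(n^3) loop shown here:

```python
import numpy as np

def isomap(X, n_neighbors, dim):
    """Bare-bones Isomap: kNN graph -> Floyd-Warshall geodesic distances
    -> classical multidimensional scaling (MDS)."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        idx = np.argsort(D[i])[1:n_neighbors + 1]   # skip self (distance 0)
        G[i, idx] = D[i, idx]                       # symmetrized kNN edges
        G[idx, i] = D[i, idx]
    for k in range(n):                              # all-pairs shortest paths
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    J = np.eye(n) - np.ones((n, n)) / n             # centering matrix
    B = -0.5 * J @ (G ** 2) @ J                     # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]            # top eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

t = np.linspace(0.0, 1.0, 20)
X = np.c_[t, t ** 2]                                # points on a parabola
Z = isomap(X, n_neighbors=3, dim=2)
print(Z.shape)  # (20, 2)
```

The geodesic matrix G computed here is exactly the non-Euclidean Isomap metric that the adaptive sampling strategy above exploits to locate gaps in the embedding.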
Hierarchical discriminant manifold learning for dimensionality reduction and image classification
Chen, Weihai; Zhao, Changchen; Ding, Kai; Wu, Xingming; Chen, Peter C. Y.
2015-09-01
In the field of image classification, there has been a trend that, in order to deliver reliable classification performance, the feature extraction model becomes increasingly more complicated, leading to a high dimensionality of image representations. This, in turn, demands greater computational resources for image classification. Thus, it is desirable to apply dimensionality reduction (DR) methods, both to relieve the computational burden and to improve the classification accuracy. However, traditional DR methods are not compatible with modern feature extraction methods. A framework that combines manifold learning based DR and feature extraction in a deeper way for image classification is proposed. A multiscale cell representation is extracted from the spatial pyramid to satisfy the locality constraints for a manifold learning method. A spectral weighted mean filtering is proposed to eliminate noise in the feature space. A hierarchical discriminant manifold learning is proposed which incorporates both category label and image scale information to guide the DR process. Finally, the image representation is generated by concatenating the dimensionality-reduced cell representations from the same image. Extensive experiments are conducted to test the proposed algorithm on both scene and object recognition datasets in comparison with several well-established and state-of-the-art methods with respect to classification precision and computational time. The results verify the effectiveness of incorporating manifold learning in the feature extraction procedure and imply that the multiscale cell representations may be distributed on a manifold.
Shao, Zhenfeng; Zhang, Lei
2014-09-01
This paper presents a novel sparse dimensionality reduction method for hyperspectral images based on semi-supervised local Fisher discriminant analysis (SELF). The proposed method is designed to be especially effective for out-of-sample extrapolation, realizing advantageous complementarities between SELF and sparsity preserving projections (SPP). Compared to SELF and SPP, the proposed method offers a highly discriminative ability and produces an explicit nonlinear feature mapping for out-of-sample extrapolation. This is because the proposed method obtains an explicit feature mapping for dimensionality reduction and improves the classification performance of subsequent classifiers. Experimental analysis of the sparsity and efficacy of the low-dimensional outputs shows that sparse dimensionality reduction based on SELF can yield good classification results and interpretability in the field of hyperspectral remote sensing.
Supervised linear dimensionality reduction with robust margins for object recognition
Dornaika, F.; Assoum, A.
2013-01-01
Linear Dimensionality Reduction (LDR) techniques have become increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower-dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques rely on the use of local margins in order to achieve good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit is crucial for obtaining robust performance in the presence of outliers.
Dimensional Reduction for the General Markov Model on Phylogenetic Trees.
Sumner, Jeremy G
2017-03-01
We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.
APPLICATION OF RADON REDUCTION METHODS
The document is intended to aid homeowners and contractors in diagnosing and solving indoor radon problems. It will also be useful to State and Federal regulatory officials and many other persons who provide advice on the selection, design and operation of radon reduction methods...
Nearly-Kaehler dimensional reduction of the heterotic string
Chatzistavrakidis, A. [Institute of Nuclear Physics, NCSR Demokritos, 15310 Athens (Greece); Zoupanos, G. [Physics Department, National Technical University of Athens, 15780 Zografou Campus, Athens (Greece); Theory Group, Physics Department, CERN, Geneva (Switzerland)
2010-07-15
The effective action in four dimensions resulting from the ten-dimensional N = 1 heterotic supergravity coupled to N = 1 supersymmetric Yang-Mills upon dimensional reduction over nearly-Kaehler manifolds is discussed. Nearly-Kaehler manifolds are an interesting class of manifolds admitting an SU(3)-structure and in six dimensions all homogeneous nearly-Kaehler manifolds are included in the class of the corresponding non-symmetric coset spaces plus a group manifold. Therefore it is natural to apply the Coset Space Dimensional Reduction scheme using these coset spaces as internal manifolds in order to determine the four-dimensional theory. (Abstract Copyright [2010], Wiley Periodicals, Inc.)
Kernel Based Nonlinear Dimensionality Reduction and Classification for Genomic Microarray
Lan Shu
2008-07-01
Genomic microarrays are powerful research tools in bioinformatics and modern medicinal research because they enable massively parallel assays and simultaneous monitoring of thousands of gene expressions in biological samples. However, a simple microarray experiment often produces very high-dimensional data and a huge amount of information; this vast amount of data challenges researchers to extract the important features and reduce the high dimensionality. In this paper, a nonlinear dimensionality reduction kernel method based on locally linear embedding (LLE) is proposed, and a fuzzy k-nearest neighbors algorithm, which denoises datasets, is introduced as a replacement for classical LLE's KNN algorithm. In addition, a kernel-based support vector machine (SVM) is used to classify genomic microarray data sets. We demonstrate the application of the techniques to two published DNA microarray data sets. The experimental results confirm the superiority and high success rates of the presented method.
Dimensional reduction in causal set gravity
Carlip, S
2015-01-01
Results from a number of different approaches to quantum gravity suggest that the effective dimension of spacetime may drop to $d=2$ at small scales. I show that two different dimensional estimators in causal set theory display the same behavior, and argue that a third, the spectral dimension, may exhibit a related phenomenon of "asymptotic silence."
Aspects of dynamical dimensional reduction in multigraph ensembles of CDT
Giasemidis, Georgios; Zohren, Stefan
2012-01-01
We study the continuum limit of a "radially reduced" approximation of Causal Dynamical Triangulations (CDT), so-called multigraph ensembles, and explain why they serve as realistic toy models to study the dimensional reduction observed in numerical simulations of four-dimensional CDT. We present properties of this approximation in two, three and four dimensions comparing them with the numerical simulations and pointing out some common features with 2+1 dimensional Horava-Lifshitz gravity.
Multimodal Biometrics Recognition by Dimensionality Diminution Method
Suvarnsing Bhable
2015-12-01
A multimodal biometric system utilizes two or more biometric modalities, e.g., face, ear, fingerprint, signature, and palmprint, to improve the recognition accuracy of conventional unimodal methods. We propose a new dimensionality reduction method called Dimension Diminish Projection (DDP) in this paper. DDP can not only preserve local information by capturing the intra-modal geometry, but also effectively extract between-class structures relevant for classification. Experimental results show that our proposed method performs better than other algorithms, including PCA, LDA and MFA.
Accelerating high-dimensional clustering with lossless data reduction.
Qaqish, Bahjat F; O'Brien, Jonathon J; Hibbard, Jonathan C; Clowers, Katie J
2017-09-15
For cluster analysis, high-dimensional data are associated with instability, decreased classification accuracy and a high computational burden. The latter challenge can be eliminated as a serious concern. For applications where dimension reduction techniques are not implemented, we propose a temporary transformation which accelerates computations with no loss of information. The algorithm can be applied to any statistical procedure depending only on Euclidean distances and can be implemented sequentially to enable analyses of data that would otherwise exceed memory limitations. The method is easily implemented in common statistical software as a standard pre-processing step. The benefit of our algorithm grows with the dimensionality of the problem and the complexity of the analysis. Consequently, our simple algorithm not only decreases the computation time for routine analyses, it opens the door to performing calculations that may have otherwise been too burdensome to attempt. R, Matlab and SAS/IML code for implementing lossless data reduction is freely available in the Appendix. obrienj@hms.harvard.edu.
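One concrete way to realize such a distance-preserving transformation (an assumed construction for illustration, not necessarily the authors' exact algorithm): n points living in p ≫ n dimensions span a subspace of dimension at most n, so an SVD of the centered data rotates them into an n-dimensional representation with every pairwise Euclidean distance intact.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5000))        # 30 samples in 5000 dimensions
Xc = X - X.mean(axis=0)                # centering preserves pairwise distances
# The 30 centered points span a subspace of dimension at most 30; the SVD
# rotates the data into that subspace without distorting geometry.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Y = U * s                              # 30 x 30 lossless representation
print(np.allclose(pdist(X), pdist(Y)))  # True: distances are unchanged
```

Any distance-based procedure (hierarchical clustering, k-means, k-NN) run on Y then gives identical results at a fraction of the cost.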
One-dimensional reduction of viscous jets
Pitrou, Cyril
2015-01-01
We build a general formalism to describe thin viscous jets as one-dimensional objects with an internal structure. We present in full generality the steps needed to describe the viscous jets around their central line, and we argue that the Taylor expansion of all fields around that line is conveniently expressed in terms of symmetric trace-free tensors living in the two dimensions of the fiber sections. We recover the standard results of axisymmetric jets and we report the first and second corrections to the lowest order description, also allowing for a rotational component around the axis of symmetry. When applied to generally curved fibers, the lowest order description corresponds to a viscous string model whose sections are circular. However, when including the first corrections we find that curved jets generically develop elliptic sections. Several subtle effects imply that the first corrections cannot be described by a rod model, since it amounts to selectively discard some corrections. However, in a fast...
Wilson flux breaking and coset space dimensional reduction
Zoupanos, G.
1988-02-11
Higher dimensional gauge theories lead, after dimensional reduction on coset spaces, to four-dimensional gauge theories usually with the natural emergence of a Higgs sector which is completely determined. However, the Higgs fields never appear in the adjoint representation which in many GUTs could lead to a successful spontaneous symmetry breaking towards the low energy gauge group. As an alternative we suggest that the breaking of the four-dimensional GUTs obtained from CSDR could be provided by the Wilson flux breaking and we discuss some semirealistic examples. We also speculate on the possibility that the breaking of the electroweak sector has dynamical origin.
Hasei, Tomohiro; Nakanishi, Haruka; Toda, Yumiko; Watanabe, Tetsushi
2012-08-31
3-Nitrobenzanthrone (3-NBA) is an extremely strong mutagen and carcinogen in rats inducing squamous cell carcinoma and adenocarcinoma. We developed a new sensitive analytical method, a two-dimensional HPLC system coupled with on-line reduction, to quantify non-fluorescent 3-NBA as fluorescent 3-aminobenzanthrone (3-ABA). The two-dimensional HPLC system consisted of reversed-phase HPLC and normal-phase HPLC, which were connected with a switch valve. 3-NBA was purified by reversed-phase HPLC and reduced to 3-ABA with a catalyst column, packed with alumina coated with platinum, in ethanol. An alcoholic solvent is necessary for reduction of 3-NBA, but 3-ABA is not fluorescent in the alcoholic solvent. Therefore, 3-ABA was separated from alcohol and impurities by normal-phase HPLC and detected with a fluorescence detector. Extracts from surface soil, airborne particles, classified airborne particles, and incinerator dust were applied to the two-dimensional HPLC system after clean-up with a silica gel column. 3-NBA, detected as 3-ABA, in the extracts was found as a single peak on the chromatograms without any interfering peaks. 3-NBA was detected in 4 incinerator dust samples (n=5). When classified airborne particles, that is, those 7.0 μm in size, were applied to the two-dimensional HPLC system after purified using a silica gel column, 3-NBA was detected in those particles with particle sizes NBA in airborne particles and the detection of 3-NBA in incinerator dust. Copyright © 2012 Elsevier B.V. All rights reserved.
Nonlinear Dimensionality Reduction and Data Visualization: A Review
Hujun Yin
2007-01-01
Dimensionality reduction and data visualization are useful and important processes in pattern recognition. Many techniques have been developed in recent years. The self-organizing map (SOM) can be an efficient method for this purpose. This paper reviews recent advances in this area and related approaches such as multidimensional scaling (MDS), nonlinear PCA, and principal manifolds, as well as the connections of the SOM and its recent variant, the visualization induced SOM (ViSOM), with these approaches. The SOM is shown to produce a quantized, qualitative scaling, while the ViSOM produces a quantitative or metric scaling and approximates the principal curve/surface. The SOM can also be regarded as a generalized MDS relating two metric spaces by forming a topological mapping between them. The relationships among various recently proposed techniques such as ViSOM, Isomap, LLE, and eigenmap are discussed and compared.
Coupling running through the Looking-Glass of dimensional Reduction
Shirkov, D V
2010-01-01
The dimensional reduction, in the form of a transition from four to two dimensions, was used in the 1990s in the context of high-energy Regge scattering. Recently, it has received new impetus in quantum gravity, where it opens the way to renormalizability and asymptotic safety. We consider a QFT model $g\varphi^4$ with running coupling defined in both domains of different dimensionality, the $\bar{g}(Q^2)$ evolutions being duly conjugated at the reduction scale $Q \sim M$. Beyond this scale, in the deep-UV two-dimensional region, the running coupling no longer increases but tends to a finite value $\bar{g}_2(\infty) < \bar{g}_2(M^2)$ from above. As a result, the global evolution picture looks quite peculiar and can provide a basis for a modified Grand Unification scenario with dimensional reduction and asymptotic safety, instead of leptoquarks, for unification.
The Spatial String Tension and Dimensional Reduction in QCD
Cheng, M; Van der Heide, J; Huebner, K; Karsch, F; Kaczmarek, O; Laermann, E; Liddle, J; Mawhinney, R D; Miao, C; Petreczky, P; Petrov, K; Schmidt, C; Söldner, W; Umeda, T
2008-01-01
We calculate the spatial string tension in (2+1) flavor QCD with physical strange quark mass and almost physical light quark masses, using lattices with temporal extent N_tau = 4, 6 and 8. We compare our results on the spatial string tension with predictions of dimensionally reduced QCD. This suggests that, also in the presence of light dynamical quarks, dimensional reduction works well down to temperatures of about 1.5 T_c.
DRACULA: Dimensionality Reduction And Clustering for Unsupervised Learning in Astronomy
Aguena, Michel; Busti, Vinicius C.; Camacho, Hugo; Sasdelli, Michele; Ishida, Emille E. O.; Vilalta, Ricardo; Trindade, Arlindo M. M.; Gieseke, Fabien; de Souza, Rafael S.; Fantaye, Yabebal T.; Mazzali, Paolo A.
2015-12-01
DRACULA classifies objects using dimensionality reduction and clustering. The code has an easy interface and can be applied to separate several types of objects. It is based on tools developed in scikit-learn, with some usage also requiring the H2O package.
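A generic reduction-then-clustering pipeline of the kind DRACULA wraps, sketched with the scikit-learn tools the code is based on (the dataset and parameter choices are illustrative, not DRACULA's defaults):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA

# Synthetic stand-in for a catalog of astronomical objects.
X, _ = make_blobs(n_samples=300, n_features=20, centers=4, random_state=0)
# Step 1: reduce dimensionality (here linear PCA to 2 components).
X_low = PCA(n_components=2, random_state=0).fit_transform(X)
# Step 2: cluster in the reduced space.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_low)
print(len(set(labels)))  # 4 distinct clusters
```

Swapping PCA for a nonlinear embedding, or KMeans for another scikit-learn clusterer, changes nothing structurally in this two-step recipe.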
Dimensionality Reduction and Uncertainty Quantification for Inverse Problems
van Leeuwen, Tristan
2015-01-01
Many inverse problems in science and engineering involve multi-experiment data and thus require a large number of forward simulations. Dimensionality reduction techniques aim at reducing the number of forward solves by (randomly) subsampling the data. In the special case of non-linear least-squares
Model reduction for controller design for infinite-dimensional systems
Opmeer, Mark Robertus
2006-01-01
The main aim of this thesis is, as the title suggests, the presentation of results on model reduction for controller design for infinite-dimensional systems. The obtained results are presented for both discrete-time systems and continuous-time systems. They are perfect generalizations of the corresp
Generalized Time-Limited Balanced Reduction Method
Shaker, Hamid Reza; Shaker, Fatemeh
2013-01-01
In this paper, a new method for model reduction of bilinear systems is presented. The proposed technique is from the family of gramian-based model reduction methods. The method uses time-interval generalized gramians in the reduction procedure rather than the ordinary generalized gramians...
Cao, Qian; Tan, Kun; Du, Peijun; Xia, Junshi
2012-01-01
This paper employed a novel method based on the Nyström algorithm to reduce the dimensionality of hyperspectral remote sensing images. First, part of the samples are extracted randomly to form a sub-kernel matrix, whose eigenvectors are computed. This process is then iterated to compute the new kernel and update the eigenvectors. Finally, the reduced-dimensionality image is produced from the last set of eigenvectors. The method was compared with KPCA in computation time, the amount of extracted feature information, and classification performance, using the OMIS and ROSIS datasets. Experimental results show that, compared with KPCA (the unsimplified kernel principal component analysis), the simplified KPCA (SKPCA) extracts a comparable amount of feature information and achieves similar classification results, but computes at least several hundred times faster.
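The standard scikit-learn route to Nyström-accelerated kernel PCA gives a rough feel for the approach. This is a generic sketch, not the iterative sub-kernel update described above; the dataset, kernel parameters, and landmark count are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA
from sklearn.kernel_approximation import Nystroem

X, _ = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)
# Exact kernel PCA: cubic in the number of samples.
Z_exact = KernelPCA(n_components=2, kernel="rbf", gamma=2.0).fit_transform(X)
# Nystrom approximation: build the kernel feature map from only 50 landmark
# samples, then run ordinary linear PCA in that approximate feature space.
feat = Nystroem(kernel="rbf", gamma=2.0, n_components=50, random_state=0)
Z_approx = PCA(n_components=2).fit_transform(feat.fit_transform(X))
print(Z_exact.shape, Z_approx.shape)
```

The speedup comes from never forming the full 500 x 500 kernel matrix; only the 500 x 50 landmark block is computed.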
Intrusion Detection System Using Hierarchical GMM and Dimensionality Reduction
L. Maria Michael
2012-07-01
The focus of this chapter is to provide an effective intrusion detection technique to protect web servers. An IDS protects a server from malicious attacks from the Internet: if someone breaks through the firewall and tries to access any system on the trusted side, it alerts the system administrator to the breach in security. Gaussian Mixture Models (GMMs) are among the most statistically mature methods for clustering data. Intrusion detection can be divided into anomaly detection and misuse detection; a misuse detection model collects behavioral features of abnormal operation and establishes a related feature library. In existing anomaly-based Intrusion Detection Systems, the work is based on the number of attacks on the network, using decision tree analysis for rule matching and grading. We propose an IDS approach that uses both signature-based and anomaly-based identification schemes, together with a GMM-based rule pruning scheme that facilitates efficient handling of large rule sets, and we plan to compare the performance of the IDS across different models. The dimensionality reduction uses information from the KDD Cup 99 data set to select attributes that identify the type of attack: the 41 attributes are reduced to 14 and to 7 attributes with the Best First Search method, and the two classification algorithms ID3 and J48 are then applied. Keywords: intrusion detection, reliable networks, malicious routers, internet dependability, tolerance.
Non-linear dimensionality reduction of signaling networks
Ivakhno Sergii
2007-06-01
Background: Systems-wide modeling and analysis of signaling networks is essential for understanding complex cellular behaviors, such as the biphasic responses to different combinations of cytokines and growth factors. For example, tumor necrosis factor (TNF) can act as a proapoptotic or prosurvival factor depending on its concentration, the current state of the signaling network, and the presence of other cytokines. To understand combinatorial regulation in such systems, new computational approaches are required that can take into account non-linear interactions in signaling networks and provide tools for clustering, visualization, and predictive modeling. Results: Here we extended and applied an unsupervised non-linear dimensionality reduction approach, Isomap, to find clusters of similar treatment conditions in two cell signaling networks: (I) the apoptosis signaling network in human epithelial cancer cells treated with different combinations of TNF, epidermal growth factor (EGF), and insulin, and (II) the combination of signal transduction pathways stimulated by 21 different ligands, based on AfCS double-ligand screen data. For the analysis of the apoptosis signaling network we used the Cytokine compendium dataset, in which the activity and concentration of 19 intracellular signaling molecules were measured to characterize the apoptotic response to TNF, EGF, and insulin. By projecting the original 19-dimensional space of intracellular signals into a low-dimensional space, Isomap was able to reconstruct clusters corresponding to different cytokine treatments that were identified with graph-based clustering. In comparison, Principal Component Analysis (PCA) and Partial Least Squares Discriminant Analysis (PLS-DA) were unable to find biologically meaningful clusters. We also showed that by using Isomap components for supervised classification with k-nearest neighbor (k-NN) and quadratic discriminant analysis (QDA), apoptosis intensity can be predicted for different...
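A small sketch of the Isomap-then-classify workflow described above, with the Iris dataset standing in for the signaling measurements and illustrative parameters; this is not the authors' analysis.

```python
import numpy as np
from sklearn.datasets import load_iris  # stand-in for the signaling data
from sklearn.manifold import Isomap
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
# Project the measurements onto a 2-dimensional Isomap embedding.
Z = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
# Supervised classification (k-NN) on the Isomap components.
acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), Z, y, cv=5).mean()
print(round(acc, 2))
```

The paper's QDA variant would substitute `QuadraticDiscriminantAnalysis` for the k-NN classifier in the final step.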
Dimensional Reduction, Hard Thermal Loops and the Renormalization Group
Stephens, C R; Hess, P O; Astorga, F; Weber, Axel; Hess, Peter O.; Astorga, Francisco
2004-01-01
We study the realization of dimensional reduction and the validity of the hard thermal loop expansion for lambda phi^4 theory at finite temperature, using an environmentally friendly finite-temperature renormalization group with a fiducial temperature as flow parameter. The one-loop renormalization group allows for a consistent description of the system at low and high temperatures, and in particular of the phase transition. The main results are that dimensional reduction applies, apart from a range of temperatures around the phase transition, at high temperatures (compared to the zero temperature mass) only for sufficiently small coupling constants, while the HTL expansion is valid below (and rather far from) the phase transition, and, again, at high temperatures only in the case of sufficiently small coupling constants. We emphasize that close to the critical temperature, physics is completely dominated by thermal fluctuations that are not resummed in the hard thermal loop approach and where universal quant...
Eckardt, Henrik; Lind, Dennis; Toendevold, Erik
2015-01-01
Background and purpose - During acetabular fracture surgery, the acetabular roof is difficult to visualize with 2-dimensional fluoroscopic views. We assessed whether intraoperative 3-dimensional (3D) imaging can aid the surgeon in achieving better articular reduction and improving implant fixation. Patients and methods - We operated on 72 acetabular fractures using intraoperative 3D imaging and compared the operative results, duration of surgery, and complications with those for 42 consecutive acetabular fracture operations conducted using conventional fluoroscopic imaging. Postoperative reduction was evaluated on reconstructed coronal and sagittal images of the acetabulum. Results - The fracture severity and patient characteristics were similar in the 2 groups. In the 3D group, 46 of 72 patients (0.6) had a perfect result after open reduction and internal fixation, and in the control group, 17 of 42 (0...
Tripathy, Rohit, E-mail: rtripath@purdue.edu; Bilionis, Ilias, E-mail: ibilion@purdue.edu; Gonzalez, Marcial, E-mail: marcial-gonzalez@purdue.edu
2016-09-15
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
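The "classic approach" the abstract contrasts with can be sketched in a few lines: estimate the matrix C = E[grad f grad fᵀ] by Monte Carlo over gradient samples, then take its leading eigenvectors as the active subspace. The test function f(x) = sin(w·x), which varies only along a single direction w, is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w = rng.normal(size=d)                 # f varies only along direction w
f_grad = lambda x: np.cos(x @ w) * w   # gradient of f(x) = sin(w . x)

# Monte Carlo estimate of C = E[grad f grad f^T] over the input distribution.
X = rng.normal(size=(2000, d))
G = np.array([f_grad(x) for x in X])
C = G.T @ G / len(X)
# The leading eigenvectors of C span the active subspace.
eigvals, eigvecs = np.linalg.eigh(C)
w_hat = eigvecs[:, -1]                 # top eigenvector (1-dim AS here)
alignment = abs(w_hat @ w) / np.linalg.norm(w)
print(round(alignment, 2))  # close to 1: the AS direction recovers w
```

The paper's contribution is precisely to remove the `f_grad` requirement, recovering the projection as a Gaussian process hyper-parameter instead.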
Breakdown of unitarity in the dimensional reduction scheme
Hooft, G. 't; Van Damme, R.
1985-01-01
β-functions of any field theory computed using different regularization schemes should obey the physical rule that they can be transformed into one another by a finite transformation of the renormalized coupling constants of the theory. The dimensional reduction scheme does not obey this rule. The cause is that unacceptable counterterms had to be used where overlapping divergences occur, so that unitarity is violated. Supersymmetry (or at least the N = 2 and N = 4 supersymmetric gauge theories and all...
On fermion masses in a dimensional reduction scheme
Barnes, K.J.; Forgacs, P.; Surridge, M.; Zoupanos, G.
1987-01-01
A candidate model for Grand Unification, arising from a Coset Space Dimensional Reduction scheme based on an E(7) gauge theory, is found to have a promising set of fermionic quantum numbers. Unfortunately, these fermions all develop large (geometric) masses. We derive formulae for the square of the Dirac operator and for fermion masses for a large class of CSDR schemes, revealing this as a general feature.
Scaling Properties of Dimensionality Reduction for Neural Populations and Network Models.
Ryan C Williamson
2016-12-01
Recent studies have applied dimensionality reduction methods to understand how the multi-dimensional structure of neural population activity gives rise to brain function. It is unclear, however, how the results obtained from dimensionality reduction generalize to recordings with larger numbers of neurons and trials, or how these results relate to the underlying network structure. We address these questions by applying factor analysis to recordings in the visual cortex of non-human primates and to spiking network models that self-generate irregular activity through a balance of excitation and inhibition. We compared the scaling trends of two key outputs of dimensionality reduction, shared dimensionality and percent shared variance, with neuron and trial count. We found that the scaling properties of networks with non-clustered and clustered connectivity differed, and that the in vivo recordings were more consistent with the clustered network. Furthermore, recordings from tens of neurons were sufficient to identify the dominant modes of shared variability that generalize to larger portions of the network. These findings can help guide the interpretation of dimensionality reduction outputs in regimes of limited neuron and trial sampling, and help relate these outputs to the underlying network structure.
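A hedged sketch of how the two outputs named above can be computed with factor analysis, using scikit-learn's `FactorAnalysis` as a stand-in for the authors' implementation; the toy data, the 95% cutoff for shared dimensionality, and the component count are assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Toy spike-count matrix: trials x neurons, driven by low-dim shared input.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 3))          # 3 shared latent modes
loading = rng.normal(size=(3, 40))          # 40 "neurons"
X = latent @ loading + rng.normal(scale=2.0, size=(500, 40))

fa = FactorAnalysis(n_components=10, random_state=0).fit(X)
# Percent shared variance per neuron: shared / (shared + private noise).
shared = (fa.components_ ** 2).sum(axis=0)
percent_shared = shared / (shared + fa.noise_variance_)
# Shared dimensionality: modes capturing 95% of the shared covariance.
eigvals = np.linalg.eigvalsh(fa.components_.T @ fa.components_)[::-1]
d_shared = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95) + 1)
print(d_shared)
```

Repeating this while subsampling trials and neurons reproduces the kind of scaling curves the study compares across network models.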
Kireeva, Natalia V; Ovchinnikova, Svetlana I; Tetko, Igor V; Asiri, Abdullah M; Balakin, Konstantin V; Tsivadze, Aslan Yu
2014-05-01
Over the years, a number of dimensionality reduction techniques have been proposed and used in chemoinformatics to perform nonlinear mappings. In this study, four representatives of nonlinear dimensionality reduction methods from two different families were analyzed: distance-based approaches (Isomap and Diffusion Maps) and topology-based approaches (Generative Topographic Mapping (GTM) and Laplacian Eigenmaps). The considered methods were applied to the visualization of three toxicity datasets using four sets of descriptors. Two methods, GTM and Diffusion Maps, were identified as the best approaches, which thus made it impossible to prioritize a single family of the considered dimensionality reduction methods. The intrinsic dimensionality of the data was assessed using Maximum Likelihood Estimation. It was observed that descriptor sets with a higher intrinsic dimensionality produced maps of lower quality. A new statistical coefficient, which combines two previously known ones, was proposed to automatically rank the maps. Instead of relying on one of the best methods, we propose to automatically generate maps with different parameter values for different descriptor sets. By following this procedure, the maps with the highest values of the introduced statistical coefficient can be automatically selected and used as a starting point for visual inspection by the user.
S. Szopa
2005-01-01
The objective of this work was to develop and assess an automatic procedure to generate reduced chemical schemes for the atmospheric photooxidation of volatile organic carbon (VOC) compounds. The procedure is based on (i) the development of a tool for writing the fully explicit schemes for VOC oxidation (see companion paper, Aumont et al., 2005), (ii) the application of several commonly used reduction methods to the fully explicit scheme, and (iii) the assessment of the resulting errors based on direct comparison between the reduced and full schemes. The reference scheme included seventy emitted VOCs chosen to be representative of both anthropogenic and biogenic emissions, and their atmospheric degradation chemistry required more than two million reactions among 350,000 species. Three methods were applied to reduce the size of the reference chemical scheme: (i) use of operators, based on the redundancy of the reaction sequences involved in VOC oxidation, (ii) grouping of primary species having similar reactivities into surrogate species, and (iii) grouping of some secondary products into surrogate species. The number of species in the final reduced scheme is 147, small enough for practical inclusion in current three-dimensional models. Comparisons between the fully explicit and reduced schemes, carried out with a box model for several typical tropospheric conditions, showed that the reduced chemical scheme accurately predicts ozone concentrations and some other aspects of oxidant chemistry for both polluted and clean tropospheric conditions.
Quantum discriminant analysis for dimensionality reduction and classification
Cong, Iris; Duan, Luming
2016-07-01
We present quantum algorithms to efficiently perform discriminant analysis for dimensionality reduction and classification over an exponentially large input data set. Compared with the best-known classical algorithms, the quantum algorithms show an exponential speedup in both the number of training vectors M and the feature space dimension N. We generalize the previous quantum algorithm for solving systems of linear equations (2009 Phys. Rev. Lett. 103 150502) to efficiently implement a Hermitian chain product of k trace-normalized N × N Hermitian positive-semidefinite matrices with time complexity O(log N). Using this result, we perform linear as well as nonlinear Fisher discriminant analysis for dimensionality reduction over M vectors, each in an N-dimensional feature space, in time O(p · polylog(MN)/ε³), where ε denotes the tolerance error and p is the number of principal projection directions desired. We also present a quantum discriminant analysis algorithm for data classification with time complexity O(log(MN)/ε³).
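For orientation, the classical routine being accelerated, Fisher discriminant analysis projecting M vectors from N dimensions onto p discriminant directions, looks like this in scikit-learn. The dataset is an illustrative stand-in; this is the classical baseline, not the quantum algorithm itself.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# M = 178 vectors in an N = 13 dimensional feature space, 3 classes.
X, y = load_wine(return_X_y=True)
# Project onto p = 2 discriminant directions and classify.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
Z = lda.transform(X)
print(Z.shape, round(lda.score(X, y), 2))
```

Classically, fitting scales polynomially in M and N; the abstract's quantum algorithms claim polylogarithmic dependence on both.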
Linear low-rank approximation and nonlinear dimensionality reduction
(no author listed)
2004-01-01
Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment
张振跃; 查宏远
2004-01-01
We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local geometry of the manifold is learned by constructing an approximation for the tangent space at each point, and those tangent spaces are then aligned to give the global coordinates of the data points with respect to the underlying manifold. We also present an error analysis of our algorithm showing that reconstruction errors can be quite small in some cases. We illustrate our algorithm using curves and surfaces both in 2D/3D Euclidean spaces and higher-dimensional Euclidean spaces. We also address several theoretical and algorithmic issues for further research and improvements.
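A tangent-space-alignment algorithm of this kind is available in scikit-learn as the "ltsa" variant of LocallyLinearEmbedding (which cites this line of work); a minimal usage sketch on a synthetic manifold, with illustrative parameters:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Noisy samples from a 2D manifold embedded in 3D.
X, _ = make_swiss_roll(n_samples=800, random_state=0)
# Local tangent space alignment: fit a tangent-space approximation at each
# point, then align them into global 2D coordinates.
ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                              method="ltsa", random_state=0)
Z = ltsa.fit_transform(X)
print(Z.shape)  # (800, 2)
```

The `n_neighbors` parameter controls the size of each local tangent-space fit and is the main knob trading locality against noise robustness.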
Qin Luo; Zheng Tian; Zhixiang Zhao
2008-01-01
Existing manifold learning algorithms use Euclidean distance to measure the proximity of data points. However, in high-dimensional space, Minkowski metrics are no longer stable because the ratio of the distances from a given query to its nearest and farthest neighbors approaches one. This degrades the performance of manifold learning algorithms when they are applied to dimensionality reduction of high-dimensional data. We introduce a new distance function, named shrinkage-divergence-proximity (SDP), to manifold learning; it remains meaningful in any high-dimensional space. An improved locally linear embedding (LLE) algorithm named SDP-LLE is proposed in light of this theoretical result. Experiments are conducted on a hyperspectral data set and an image segmentation data set. Experimental results show that the proposed method can efficiently reduce the dimensionality while achieving higher classification accuracy.
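The instability referred to above is easy to reproduce numerically. A small sketch (our own toy illustration, assuming points drawn uniformly from a unit cube) shows the nearest-to-farthest distance ratio approaching one as the dimension grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def near_far_ratio(dim, n_points=500):
    """Ratio of nearest to farthest distance from a random query
    to a cloud of uniform points in `dim` dimensions."""
    pts = rng.random((n_points, dim))
    query = rng.random(dim)
    d = np.linalg.norm(pts - query, axis=1)
    return d.min() / d.max()

r_low = near_far_ratio(2)       # small: neighbours are well separated
r_high = near_far_ratio(1000)   # close to 1: distances concentrate
```

In low dimensions the ratio is near zero; in very high dimensions all points look almost equidistant from the query.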
Applicabilities of ship emission reduction methods
Guleryuz, Adem [ARGEMAN Research Group, Marine Division (Turkey)], email: ademg@argeman.org; Kilic, Alper [Istanbul Technical University, Maritime Faculty, Marine Engineering Department (Turkey)], email: enviromarineacademic@yahoo.com
2011-07-01
Ships, with their high consumption of fossil fuels to power their engines, are significant air polluters. Emission reduction methods therefore need to be implemented and the aim of this paper is to assess the advantages and disadvantages of each emissions reduction method. Benefits of the different methods are compared, with their disadvantages and requirements, to determine the applicability of such solutions. The methods studied herein are direct water injection, humid air motor, sea water scrubbing, diesel particulate filter, selective catalytic reduction, design of engine components, exhaust gas recirculation and engine replacement. Results of the study showed that the usefulness of each emissions reduction method depends on the particular case and that an evaluation should be carried out for each ship. This study pointed out that methods to reduce ship emissions are available but that their applicability depends on each case.
Dimensionality Reduction by Mutual Information for Text Classification
LIU Li-zhen; SONG Han-tao; LU Yu-chang
2005-01-01
The framework of a text classification system is presented, and the problem of high dimensionality in the feature space for text classification is studied. Mutual information is a widely used information-theoretic measure of the stochastic dependency between discrete random variables. This measure is used as a criterion to reduce the high dimensionality of feature vectors in text classification on the Web. Feature selection or conversion is performed by maximizing mutual information, including linear and nonlinear feature conversions. Entropy is used and extended to find suitable features in pattern recognition systems. This establishes a favorable foundation for text classification mining.
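As a sketch of the criterion described above, the mutual information between a binary term-presence feature and the class label can be computed directly from co-occurrence counts (the toy corpus and function name are our own illustration):

```python
import math

def mutual_information(feature, labels):
    """I(term; class) in bits for a binary term-presence feature."""
    n = len(labels)
    mi = 0.0
    for f in (0, 1):
        for c in set(labels):
            p_fc = sum(1 for x, y in zip(feature, labels)
                       if x == f and y == c) / n
            p_f = feature.count(f) / n
            p_c = labels.count(c) / n
            if p_fc > 0:
                mi += p_fc * math.log2(p_fc / (p_f * p_c))
    return mi

# toy corpus: per-document term presence and class labels
labels = ["sport", "sport", "politics", "politics"]
ball = [1, 1, 0, 0]   # perfectly class-indicative term -> 1.0 bit
the = [1, 1, 1, 1]    # uninformative term -> 0.0 bits
```

Ranking terms by this score and keeping the top ones is the basic feature-selection step the abstract refers to.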
Reduced order methods for modeling and computational reduction
Rozza, Gianluigi
2014-01-01
This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics. Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...
Sarhadi, Ali; Burn, Donald H.; Yang, Ge; Ghodsi, Ali
2017-02-01
One of the main challenges in climate change studies is accurate projection of the global warming impacts on the probabilistic behaviour of hydro-climate processes. Due to the complexity of climate-associated processes, identification of predictor variables from high-dimensional atmospheric variables is considered a key factor for improvement of climate change projections in statistical downscaling approaches. For this purpose, the present paper adopts a new approach of supervised dimensionality reduction, called "Supervised Principal Component Analysis (Supervised PCA)", for regression-based statistical downscaling. This method is a generalization of PCA, extracting a sequence of principal components of atmospheric variables which have maximal dependence on the response hydro-climate variable. To capture the nonlinear variability between hydro-climatic response variables and predictors, a kernelized version of Supervised PCA is also applied for nonlinear dimensionality reduction. The effectiveness of the Supervised PCA methods in comparison with some state-of-the-art algorithms for dimensionality reduction is evaluated in the statistical downscaling process of precipitation at a specific site using two soft computing nonlinear machine learning methods, Support Vector Regression and Relevance Vector Machine. The results demonstrate a significant improvement achieved by the Supervised PCA methods in terms of performance accuracy.
Object-based Dimensionality Reduction in Land Surface Phenology Classification
Brian E. Bunker
2016-11-01
Unsupervised classification or clustering of multi-decadal land surface phenology provides a spatio-temporal synopsis of natural and agricultural vegetation response to environmental variability and anthropogenic activities. Notwithstanding the detailed temporal information available in calibrated bi-monthly normalized difference vegetation index (NDVI) and comparable time series, typical pre-classification workflows average a pixel’s bi-monthly index within the larger multi-decadal time series. While this process is one practical way to reduce the dimensionality of time series with many hundreds of image epochs, it effectively dampens temporal variation from both intra- and inter-annual observations related to land surface phenology. Through a novel application of object-based segmentation aimed at spatial (not temporal) dimensionality reduction, all 294 image epochs from a Moderate Resolution Imaging Spectroradiometer (MODIS) bi-monthly NDVI time series covering the northern Fertile Crescent were retained (in homogeneous landscape units) as unsupervised classification inputs. Given the inherent challenges of in situ or manual image interpretation of land surface phenology classes, a cluster validation approach based on transformed divergence enabled comparison between traditional and novel techniques. Improved intra-annual contrast was clearly manifest in rain-fed agriculture and inter-annual trajectories showed increased cluster cohesion, reducing the overall number of classes identified in the Fertile Crescent study area from 24 to 10. Given careful segmentation parameters, this spatial dimensionality reduction technique augments the value of unsupervised learning to generate homogeneous land surface phenology units. By combining recent scalable computational approaches to image segmentation, future work can pursue new global land surface phenology products based on the high temporal resolution signatures of vegetation index time series.
Symmetry Reductions of the (2 + 1)-Dimensional CDGKS Equation and Its Reduced Lax Pairs
Na Lv
2014-01-01
With the aid of symbolic computation, we obtain the symmetry transformations of the (2 + 1)-dimensional Caudrey-Dodd-Gibbon-Kotera-Sawada (CDGKS) equation by Lou’s direct method, which is based on Lax pairs. Moreover, we use the classical Lie group method to seek the symmetry groups of both the CDGKS equation and its Lax pair and then reduce them by the obtained symmetries. In particular, we consider the reductions of the Lax pair completely. As a result, three reduced (1 + 1)-dimensional equations with their new Lax pairs are presented and some group-invariant solutions of the equation are given.
Bilionis, Ilias; Gonzalez, Marcial
2016-01-01
The prohibitive cost of performing Uncertainty Quantification (UQ) tasks with a very large number of input parameters can be addressed, if the response exhibits some special structure that can be discovered and exploited. Several physical responses exhibit a special structure known as an active subspace (AS), a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction with the AS represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the model, we design a two-step maximum likelihood optimization procedure that ensures the ...
Dimensional reduction of ten-dimensional E₈ gauge theory over a compact coset space S/R
Luest, D.; Zoupanos, G.
1985-12-26
Dimensional reduction of pure gauge theories over a compact coset space S/R leads to four-dimensional Yang-Mills-Higgs theories. We present a complete analysis of the four-dimensional unified models obtained by dimensionally reducing an E₈ gauge theory in ten dimensions over all possible six-dimensional homogeneous spaces S/R when S is a subgroup of E₈ and simple. (orig.).
Eigenanatomy: sparse dimensionality reduction for multi-modal medical image analysis.
Kandel, Benjamin M; Wang, Danny J J; Gee, James C; Avants, Brian B
2015-02-01
Rigorous statistical analysis of multimodal imaging datasets is challenging. Mass-univariate methods for extracting correlations between image voxels and outcome measurements are not ideal for multimodal datasets, as they do not account for interactions between the different modalities. The extremely high dimensionality of medical images necessitates dimensionality reduction, such as principal component analysis (PCA) or independent component analysis (ICA). These dimensionality reduction techniques, however, consist of contributions from every region in the brain and are therefore difficult to interpret. Recent advances in sparse dimensionality reduction have enabled construction of a set of image regions that explain the variance of the images while still maintaining anatomical interpretability. The projections of the original data on the sparse eigenvectors, however, are highly collinear and therefore difficult to incorporate into multi-modal image analysis pipelines. We propose here a method for clustering sparse eigenvectors and selecting a subset of the eigenvectors to make interpretable predictions from a multi-modal dataset. Evaluation on a publicly available dataset shows that the proposed method outperforms PCA and ICA-based regressions while still maintaining anatomical meaning. To facilitate reproducibility, the complete dataset used and all source code is publicly available.
Dimensionality reduction in conic section function neural network
Tulay Yildirim; Lale Ozyilmaz
2002-12-01
This paper details how dimensionality can be reduced in conic section function neural networks (CSFNN). This is particularly important for hardware implementation of networks. One of the main problems to be solved when considering the hardware design is the high connectivity requirement. If the effect that each of the network inputs has on the network output after training a neural network is known, then some inputs can be removed from the network. Consequently, the dimensionality of the network, and hence, the connectivity and the training time can be reduced. Sensitivity analysis, which extracts the cause and effect relationship between the inputs and outputs of the network, has been proposed as a method to achieve this and is investigated for Iris plant, thyroid disease and ionosphere databases. Simulations demonstrate the validity of the method used.
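The pruning idea described above can be sketched as a simple perturbation-based sensitivity analysis (the stand-in "network" below is our own toy, not the CSFNN of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for a trained network: the weights never use input 2
W1 = rng.standard_normal((3, 4)) * [[1.0], [1.0], [0.0]]
W2 = rng.standard_normal((4, 1))

def net(x):
    return np.tanh(x @ W1) @ W2

def input_sensitivity(f, X, eps=1e-3):
    """Mean absolute output change per small perturbation of each input."""
    s = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += eps
        s[j] = np.abs(f(Xp) - f(X)).mean() / eps
    return s

X = rng.standard_normal((200, 3))
s = input_sensitivity(net, X)   # input 2 has no effect and can be pruned
```

Inputs whose sensitivity falls below a chosen threshold are removed, shrinking the network's connectivity as the abstract describes.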
Pesenson, Meyer; Pesenson, I. Z.; McCollum, B.
2009-05-01
The complexity of multitemporal/multispectral astronomical data sets together with the approaching petascale of such datasets and large astronomical surveys require automated or semi-automated methods for knowledge discovery. Traditional statistical methods of analysis may break down not only because of the amount of data, but mostly because of the increase of the dimensionality of data. Image fusion (combining information from multiple sensors in order to create a composite enhanced image) and dimension reduction (finding lower-dimensional representation of high-dimensional data) are effective approaches to "the curse of dimensionality," thus facilitating automated feature selection, classification and data segmentation. Dimension reduction methods greatly increase computational efficiency of machine learning algorithms, improve statistical inference and together with image fusion enable effective scientific visualization (as opposed to mere illustrative visualization). The main approach of this work utilizes recent advances in multidimensional image processing, as well as representation of essential structure of a data set in terms of its fundamental eigenfunctions, which are used as an orthonormal basis for the data visualization and analysis. We consider multidimensional data sets and images as manifolds or combinatorial graphs and construct variational splines that minimize certain Sobolev norms. These splines allow us to reconstruct the eigenfunctions of the combinatorial Laplace operator by using only a small portion of the graph. We use the first two or three eigenfunctions for embedding large data sets into two- or three-dimensional Euclidean space. Such reduced data sets allow efficient data organization, retrieval, analysis and visualization. We demonstrate applications of the algorithms to test cases from the Spitzer Space Telescope. This work was carried out with funding from the National Geospatial-Intelligence Agency University Research Initiative.
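The eigenfunction-embedding idea can be illustrated with a dense graph Laplacian (a simplification we assume for the sketch: the paper reconstructs eigenfunctions from variational splines on a sub-sampled graph, which this code does not attempt):

```python
import numpy as np

def laplacian_embedding(X, sigma=1.0, dim=2):
    """Embed data with the first non-constant eigenvectors of the
    combinatorial graph Laplacian built from Gaussian affinities."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / (2.0 * sigma ** 2))         # Gaussian affinities
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W               # combinatorial Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1], vals              # skip the constant vector

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 10))
Y, vals = laplacian_embedding(X)
```

The first eigenvalue is zero (constant eigenfunction); the next two or three eigenvectors give the low-dimensional coordinates used for visualization.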
A trace ratio maximization approach to multiple kernel-based dimensionality reduction.
Jiang, Wenhao; Chung, Fu-lai
2014-01-01
Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels which are seen as different descriptions of the data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions, and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. The solutions for the proposed framework can be found by trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings.
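The trace-ratio maximization at the heart of the framework can be sketched with the standard iterative scheme (a generic sketch of trace-ratio optimization under our own assumptions, not the full MKL-TR algorithm):

```python
import numpy as np

def trace_ratio(A, B, dim, iters=30):
    """Iterative trace-ratio maximisation: find orthonormal V maximising
    tr(V'AV) / tr(V'BV) by alternating an eigenproblem on A - lam*B with
    the ratio update lam = tr(V'AV) / tr(V'BV). B must be positive definite."""
    lam = 0.0
    for _ in range(iters):
        _, vecs = np.linalg.eigh(A - lam * B)
        V = vecs[:, -dim:]                       # top-dim eigenvectors
        lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
    return V, lam

# toy check: with B = I the best 1-D ratio is the largest eigenvalue of A
A = np.diag([5.0, 1.0, 0.1])
B = np.eye(3)
V, lam = trace_ratio(A, B, dim=1)
```

In MKL-TR, A and B would be scatter-like matrices built from the combined kernels, with a regularizer added to B.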
Uni-Vector-Sensor Dimensionality Reduction MUSIC Algorithm for DOA and Polarization Estimation
Lanmei Wang
2014-01-01
This paper addresses the problem of multiple signal classification (MUSIC) based direction of arrival (DOA) and polarization estimation and proposes a new dimensionality reduction MUSIC (DR-MUSIC) algorithm. The uni-vector-sensor MUSIC algorithm provides estimates of DOA and polarization; accordingly, a four-dimensional peak search is required, which incurs a vast amount of computation. In the proposed DR-MUSIC method, the signal steering vector is expressed as the product of an arrival-angle function matrix and a polarization function vector. The MUSIC joint spectrum is converted to the form of a Rayleigh-Ritz ratio by using the fact that the 2-norm of the polarization function vector is constant. The four-dimensional MUSIC search is thus reduced to two two-dimensional searches, and the amount of computation is greatly decreased. Theoretical analysis and simulation results have verified the effectiveness of the proposed algorithm.
Breakdown of unitarity in the dimensional reduction scheme
Damme, R. van; Hooft, G. 't (Rijksuniversiteit Utrecht (Netherlands). Inst. voor Theoretische Fysica)
1985-01-03
The β-functions of any field theory computed using different regularization schemes should obey the physical rule that they can be transformed into each other by a finite transformation of the renormalized coupling constants of the theory. The dimensional reduction scheme does not obey this rule. The cause is that unacceptable counterterms had to be used where overlapping divergences occur, so that unitarity is violated. Supersymmetry (or at least the N = 2 and N = 4 supersymmetric gauge theories and all supersymmetric theories not containing a vector field) turns out to be insensitive to this discrepancy, because the so-called "epsilon-scalar" renormalizes the same way as the scalar, fermion and vector fields.
Methods of torque ripple reduction for flux reversal motor
Vakil, Gaurang; Sheth, N. K.; Miller, David
2009-04-01
This paper presents two-dimensional finite element based results for various methods of torque ripple reduction in flux-reversal motors. The effects of variations in magnet and rotor pole heights, rotor pole skewing, and multiple teeth per rotor pole on the cogging torque, developed torque, torque ripple, and phase inductance are presented, along with the optimum values of the magnet and rotor pole heights, skew angle, and choice of teeth per rotor pole with tooth depth that result in torque ripple reduction.
A new method combining LDA and PLS for dimension reduction.
Tang, Liang; Peng, Silong; Bi, Yiming; Shan, Peng; Hu, Xiyuan
2014-01-01
Linear discriminant analysis (LDA) is a classical statistical approach for dimensionality reduction and classification. In many cases, the projection direction of the classical and extended LDA methods is not considered optimal for special applications. Herein we combine the Partial Least Squares (PLS) method with LDA algorithm, and then propose two improved methods, named LDA-PLS and ex-LDA-PLS, respectively. The LDA-PLS amends the projection direction of LDA by using the information of PLS, while ex-LDA-PLS is an extension of LDA-PLS by combining the result of LDA-PLS and LDA, making the result closer to the optimal direction by an adjusting parameter. Comparative studies are provided between the proposed methods and other traditional dimension reduction methods such as Principal component analysis (PCA), LDA and PLS-LDA on two data sets. Experimental results show that the proposed method can achieve better classification performance.
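A hedged sketch of the combination idea: compute an LDA direction and a first PLS direction separately, then blend them with an adjusting parameter (the blend below is our own stand-in for the paper's LDA-PLS update, which we have not reproduced exactly):

```python
import numpy as np

def lda_direction(X, y):
    """Two-class Fisher LDA direction: Sw^{-1} (m1 - m0), normalised."""
    X0, X1 = X[y == 0], X[y == 1]
    Sw = (X0 - X0.mean(0)).T @ (X0 - X0.mean(0)) \
       + (X1 - X1.mean(0)).T @ (X1 - X1.mean(0))
    w = np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))
    return w / np.linalg.norm(w)

def pls_direction(X, y):
    """First PLS weight vector: direction of maximal covariance with y."""
    w = (X - X.mean(0)).T @ (y - y.mean())
    return w / np.linalg.norm(w)

def lda_pls_direction(X, y, alpha=0.5):
    """Blend of the two directions; alpha plays the role of the
    adjusting parameter mentioned in the abstract."""
    w = alpha * lda_direction(X, y) + (1 - alpha) * pls_direction(X, y)
    return w / np.linalg.norm(w)

# toy two-class data separated along the first coordinate
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((50, 3)),
               rng.standard_normal((50, 3)) + [2.0, 0.0, 0.0]])
y = np.repeat([0, 1], 50)
w = lda_pls_direction(X, y)
```

Projecting onto the blended direction should still separate the class means, while the PLS term stabilises the direction when the within-class scatter is ill-conditioned.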
Lyapunov Computational Method for Two-Dimensional Boussinesq Equation
Mabrouk, Anouar Ben
2010-01-01
A numerical method is developed leading to Lyapunov operators to approximate the solution of two-dimensional Boussinesq equation. It consists of an order reduction method and a finite difference discretization. It is proved to be uniquely solvable and analyzed for local truncation error for consistency. The stability is checked by using Lyapunov criterion and the convergence is studied. Some numerical implementations are provided at the end of the paper to validate the theoretical results.
Clustering and dimensionality reduction for image retrieval in high-dimensional spaces
Soumia Benkrama
2014-12-01
The scalability of indexing techniques for image retrieval poses many problems; indeed, their performance degrades rapidly as the database size increases. In this paper, we propose an efficient indexing method for high-dimensional spaces. We investigate how high-dimensional indexing methods can be used on a space partitioned into clusters to help the design of an efficient and robust CBIR scheme. We develop a new efficient clustering method for structuring objects in the feature space; it divides the database into groups of similar data as a function of a threshold parameter and the vocabulary size. A comparative study is presented between the proposed method and a set of classification methods. Experiments on the Pascal Visual Object Classes (VOC) 2007 challenge and the Caltech-256 dataset show that our method significantly improves performance. Retrieval results based on precision/recall measures are interesting.
Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio
2015-12-01
This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.
How to Evaluate Dimensionality Reduction? - Improving the Co-ranking Matrix
Lueks, Wouter; Biehl, Michael; Hammer, Barbara
2011-01-01
The growing number of dimensionality reduction methods available for data visualization has recently inspired the development of quality assessment measures, in order to evaluate the resulting low-dimensional representation independently from a method's inherent criteria. Several (existing) quality measures can be (re)formulated based on the so-called co-ranking matrix, which subsumes all rank errors (i.e. differences between the ranking of distances from every point to all others, comparing the low-dimensional representation to the original data). The measures are often based on the partitioning of the co-ranking matrix into 4 submatrices, divided at the K-th row and column, calculating a weighted combination of the sums of each submatrix. Hence, the evaluation process typically involves plotting a graph over several (or even all possible) settings of the parameter K. Considering simple artificial examples, we argue that this parameter controls two notions at once, that need not necessarily be combined, and th...
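The co-ranking matrix and a block-sum quality measure can be sketched directly (variable names and the toy check are our own; the measure computed is the standard neighbourhood-preservation fraction Q_NX):

```python
import numpy as np

def rank_matrix(D):
    """ranks[i, j] = rank of point j among the neighbours of point i
    (rank 0 is the point itself, since D[i, i] = 0)."""
    order = np.argsort(D, axis=1)
    ranks = np.empty_like(order)
    rows = np.arange(D.shape[0])[:, None]
    ranks[rows, order] = np.arange(D.shape[1])
    return ranks

def coranking_matrix(X_high, X_low):
    """Q[k-1, l-1] counts pairs ranked k in the high-dimensional space
    and l in the low-dimensional representation."""
    Rh = rank_matrix(np.linalg.norm(X_high[:, None] - X_high[None], axis=2))
    Rl = rank_matrix(np.linalg.norm(X_low[:, None] - X_low[None], axis=2))
    n = len(X_high)
    Q = np.zeros((n - 1, n - 1), dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j:
                Q[Rh[i, j] - 1, Rl[i, j] - 1] += 1
    return Q

def q_nx(Q, K):
    """Mass in the upper-left K x K block: fraction of preserved ranks."""
    n = Q.shape[0] + 1
    return Q[:K, :K].sum() / (K * n)

rng = np.random.default_rng(0)
X = rng.standard_normal((25, 5))
perfect = q_nx(coranking_matrix(X, X.copy()), K=5)       # identical ranks
scrambled = q_nx(coranking_matrix(X, rng.permutation(X)), K=5)
```

A perfect embedding concentrates all mass on the diagonal of Q, giving Q_NX = 1 for every K; errors move mass off the K x K block.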
Manousselis, Pantelis [Department of Engineering Sciences, University of Patras, 26110 Patras (Greece) and Physics Department, National Technical University, Zografou Campus, 15780 Athens (Greece)]. E-mail: pman@central.ntua.gr; Zoupanos, George [Department of Engineering Sciences, University of Patras, 26110 Patras (Greece); Physics Department, National Technical University, Zografou Campus, 15780 Athens (Greece)
2004-11-01
A ten-dimensional supersymmetric gauge theory is written in terms of N=1, D=4 superfields. The theory is dimensionally reduced over six-dimensional coset spaces. We find that the resulting four-dimensional theory is either a softly broken N = 1 supersymmetric gauge theory or a non-supersymmetric gauge theory depending on whether the coset spaces used in the reduction are non-symmetric or symmetric. In both cases examples susceptible to yield realistic models are presented. (author)
Duality and Dimensional Reduction of 5D BF Theory
Amoretti, Andrea; Caruso, Giacomo; Maggiore, Nicola; Magnoli, Nicodemo
2013-01-01
A planar boundary introduced à la Symanzik in the 5D topological BF theory, with the only requirements of locality and power counting, allows one to uniquely determine a gauge-invariant, non-topological 4D Lagrangian. The boundary condition on the bulk fields is interpreted as a duality relation for the boundary fields, in analogy with the fermionization duality which holds in the 3D case. This suggests that the 4D degrees of freedom might be fermionic, although one starts from a bosonic bulk theory. The method we propose to dimensionally reduce a Quantum Field Theory and to identify the resulting degrees of freedom can be applied to a generic spacetime dimension.
Kusratmoko, Eko; Wibowo, Adi; Cholid, Sofyan; Pin, Tjiong Giok
2017-07-01
This paper presents the results of applying the participatory three-dimensional mapping (P3DM) method to facilitate the people of Cibanteng village in compiling a landslide disaster risk reduction program. Physical factors such as high rainfall, topography, geology and land use, coupled with demographic and socio-economic conditions, make the Cibanteng region highly susceptible to landslides. During 2013-2014, two landslides occurred, causing economic losses as a result of damage to homes and farmland. Participatory mapping is one part of the activities of community-based disaster risk reduction (CBDRR), because the involvement of local communities is a prerequisite for sustainable disaster risk reduction. In this activity, participatory mapping was done in two ways, namely participatory two-dimensional mapping (P2DM), with a focus on mapping the disaster areas, and participatory three-dimensional mapping (P3DM), with a focus on the entire territory of the village. Based on the results of P3DM, the communities' ability to understand their village environment spatially was well tested and honed, facilitating the preparation of the CBDRR programs. Furthermore, the P3DM method can be applied to other disaster areas, as it becomes a medium of effective dialogue between all levels of the involved communities.
Dimension reduction methods for microarray data: a review
Rabia Aziz
2017-03-01
Dimension reduction has become inevitable for pre-processing of high-dimensional data. "Gene expression microarray data" is an instance of such high-dimensional data. Gene expression microarray data displays the maximum number of genes (features) simultaneously at a molecular level with a very small number of samples. The copious numbers of genes are usually provided to a learning algorithm for producing a complete characterization of the classification task. However, most of the time the majority of the genes are irrelevant or redundant to the learning task. This deteriorates the learning accuracy and training speed and leads to the problem of overfitting. Thus, dimension reduction of microarray data is a crucial preprocessing step for prediction and classification of disease. Various feature selection and feature extraction techniques have been proposed in the literature to identify the genes that have a direct impact on the various machine learning algorithms for classification and to eliminate the remaining ones. This paper describes the taxonomy of dimension reduction methods with their characteristics, evaluation criteria, advantages and disadvantages. It also presents a review of numerous dimension reduction approaches for microarray data, mainly those methods that have been proposed over the past few years.
A WEIGHTED FEATURE REDUCTION METHOD FOR POWER SPECTRA OF RADAR HRRPS
Anonymous
2006-01-01
Feature reduction is a key process in pattern recognition. This paper deals with feature reduction methods for a time-shift invariant feature, the power spectrum, in Radar Automatic Target Recognition (RATR) using High-Resolution Range Profiles (HRRPs). Several existing feature reduction methods in pattern recognition are analyzed, and a weighted feature reduction method based on Fisher's Discriminant Ratio (FDR) is proposed. According to the characteristics of radar HRRP target recognition, the proposed method searches for the optimal weight vector for the power spectra of HRRPs by means of an iterative algorithm, and thus reduces feature dimensionality. Compared with the use of raw power spectra and some existing feature reduction methods, the weighted feature reduction method can not only reduce feature dimensionality, but also improve recognition performance with low computational complexity. In recognition experiments based on measured data, the proposed method is robust to different test data and achieves good recognition results.
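The FDR criterion underlying the method can be sketched in its plain, closed-form version (the paper's iterative weight search is not reproduced; the toy "power spectra" are our own):

```python
import numpy as np

def fdr_weights(X, y):
    """Per-feature Fisher Discriminant Ratio for a two-class problem:
    (m0 - m1)^2 / (v0 + v1), used here directly as feature weights."""
    X0, X1 = X[y == 0], X[y == 1]
    return (X0.mean(0) - X1.mean(0)) ** 2 / (X0.var(0) + X1.var(0))

# toy spectra: feature 0 separates the two classes, feature 1 is noise
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((40, 2)),
               rng.standard_normal((40, 2)) + [3.0, 0.0]])
y = np.repeat([0, 1], 40)
w = fdr_weights(X, y)
weighted = X * w          # down-weights the uninformative feature
```

High-FDR features are emphasised and low-FDR ones suppressed; keeping only the top-weighted features then reduces the dimensionality.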
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the intr
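As one concrete variance reduction technique, antithetic variates pair each uniform draw u with its mirror 1 - u. The toy integrand below (estimating E[e^U] = e - 1, our own example rather than one from the monograph) shows the per-sample variance dropping:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# crude Monte Carlo estimate of E[exp(U)], U ~ Uniform(0, 1)
u = rng.random(n)
plain = np.exp(u)

# antithetic variates: average each draw with its mirrored counterpart;
# exp is monotone, so the pair members are negatively correlated
v = rng.random(n // 2)
anti = (np.exp(v) + np.exp(1.0 - v)) / 2.0

est = anti.mean()         # both estimators target e - 1
```

Because the two halves of each pair move in opposite directions, the pair average has far lower variance than a single draw, so fewer samples reach a given accuracy.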
Reduction Methods for Total Reaction Cross Sections
Gomes, P. R. S.; Mendes Junior, D. R.; Canto, L. F.; Lubian, J.; de Faria, P. N.
2016-03-01
The most frequently used methods to reduce fusion and total reaction excitation functions were investigated in a recent paper by Canto et al. (Phys. Rev. C 92:014626, 2015). These methods are widely used to eliminate the influence of masses and charges when comparing cross sections for weakly bound and tightly bound systems. That study reached two main conclusions. The first is that the fusion function method is the most successful procedure for reducing fusion cross sections: applying it to theoretical cross sections from single-channel calculations yields a system-independent curve (the fusion function) that can be used as a benchmark for fusion data. The second conclusion was that none of the reduction methods available in the literature provides a universal curve for total reaction cross sections; the reduced single-channel cross sections retain a strong dependence on the atomic and mass numbers of the collision partners, except for systems in the same mass range. In the present work we pursue this problem further, applying the reduction methods to systems within a limited mass range. We show that, under these circumstances, the reduction of reaction data may be very useful.
Shereena V. B
2015-04-01
Full Text Available The aim of this paper is to present a comparative study of two linear dimension reduction methods, namely PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to transform the high-dimensional input space onto a feature space in which the maximal variance is displayed. Feature selection in traditional LDA is obtained by maximizing the difference between classes and minimizing the distance within classes. PCA finds the axes with maximum variance for the whole data set, whereas LDA tries to find the axes for best class separability. The methods are evaluated on a general image database using Matlab, and the performance of these systems is measured by precision and recall. Experimental results show that the PCA-based dimension reduction method gives better performance, in terms of higher precision and recall values, with lower computational complexity than the LDA-based method.
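The "direction of maximal variance" idea that PCA is described by here can be shown concretely for 2-D data, where the leading eigenvector of the 2x2 covariance matrix has a closed form. The data points below are invented for the sketch:

```python
# Minimal PCA sketch: find the first principal axis of 2-D points as the
# top eigenvector of their covariance (scatter) matrix.
import math
from statistics import mean

def first_principal_axis(points):
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    mx, my = mean(xs), mean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]] via the 2x2
    # closed form, then its (unnormalized) eigenvector (sxy, lam - sxx).
    lam = ((sxx + syy) + math.hypot(sxx - syy, 2 * sxy)) / 2
    vx, vy = sxy, lam - sxx
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)

# Points scattered along the line y ~ x: the axis comes out near
# (0.707, 0.707), the direction of maximal variance.
pts = [(0, 0.1), (1, 0.9), (2, 2.1), (3, 2.9), (4, 4.0)]
ax = first_principal_axis(pts)
```

LDA differs in that it maximizes a ratio of between-class to within-class scatter rather than total variance, which is why the two methods pick different axes on labelled data.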
Ficuciello, Fanny; Siciliano, Bruno
2016-07-01
A question that often arises, among researchers working on artificial hands and robotic manipulation, concerns the real meaning of synergies. Namely, are they a realistic representation of the central nervous system control of manipulation activities at different levels and of the sensory-motor manipulation apparatus of the human being, or do they constitute just a theoretical framework exploiting analytical methods to simplify the representation of grasping and manipulation activities? Apparently, this is not a simple question to answer and, in this regard, many minds from the fields of neuroscience and robotics are addressing the issue [1]. The interest of robotics is definitely oriented towards the adoption of synergies to tackle the control problem of devices with a high number of degrees of freedom (DoFs), which are required to achieve motor and learning skills comparable to those of humans. The synergy concept is useful for innovative underactuated design of anthropomorphic hands [2], while the resulting dimensionality reduction simplifies the control of biomedical devices such as myoelectric hand prostheses [3]. Synergies might also be useful in conjunction with the learning process [4]. This aspect is less explored, since few works on synergy-based learning have been realized in robotics. In learning new tasks through trial-and-error, physical interaction is important. On the other hand, advanced mechanical designs such as tendon-driven actuation, underactuated compliant mechanisms and hyper-redundant/continuum robots might exhibit enhanced capabilities of adapting to changing environments and learning from exploration. In particular, high DoFs and compliance increase the complexity of modelling and control of these devices. An analytical approach to manipulation planning requires a precise model of the object, an accurate description of the task, and an evaluation of the object affordance, which all make the process rather time consuming. The integration of
Sathya Kumar Devireddy
2014-01-01
Full Text Available Objective: The aim was to assess the accuracy of three-dimensional anatomical reductions achieved by open treatment of displaced unilateral mandibular subcondylar fractures, using preoperative (pre-op) and postoperative (post-op) computed tomography (CT) scans. Materials and Methods: In this prospective study, 10 patients with unilateral subcondylar fractures confirmed by an orthopantomogram were included. A pre-op CT and a post-op CT taken 1 week after the surgical procedure were acquired in the axial, coronal and sagittal planes, along with three-dimensional reconstruction. Standard anatomical parameters that change with fractures of the mandibular condyle were measured in the pre-op and post-op CT scans in three planes and statistically analysed for the accuracy of the reduction, comparing the following variables: (a) pre-op fractured and non-fractured side, (b) post-op fractured and non-fractured side, (c) pre-op fractured and post-op fractured side. P < 0.05 was considered significant. Results: Three-dimensional anatomical reduction was achieved in 9 out of 10 cases (90%). The statistical analysis of each parameter for the three variables revealed (P < 0.05) a gross change in the dimensions of the parameters between the pre-op fractured and non-fractured sides. When the same parameters were assessed in the post-op CT, there was no statistical difference between the post-op fractured and non-fractured sides. Comparing the pre-op fractured and post-op fractured sides showed a significant statistical difference, indicating a considerable post-operative change in the dimensions of the fractured side. Conclusion: The statistical and clinical results of our study emphasise that it is possible to fix the condyle in its three-dimensional anatomical position with open treatment and avoid post-op degenerative joint changes. CT is the ideal imaging tool and should be used on
Microorganism Reduction Methods in Meat Products
ZÁHOROVÁ, Jana
2011-01-01
This Bachelor thesis deals with influences on the reduction of microorganisms in meat products. First, I focus on the characteristics of the individual organisms, the factors affecting their growth, the incidence of microorganisms in meat, forms of microbial degradation, and the contamination of meat with microorganisms in slaughterhouses. The next section deals with the means of fighting microorganisms and the methods that can reduce their presence in meat products. In the end there is menti...
Efficient EMD-based Similarity Search in Multimedia Databases via Flexible Dimensionality Reduction
Wichterich, Marc; Assent, Ira; Kranen, Philipp
2008-01-01
dimensionality reduction techniques for the EMD in a filter-and-refine architecture for efficient lossless retrieval. Thorough experimental evaluation on real world data sets demonstrates a substantial reduction of the number of expensive high-dimensional EMD computations and thus remarkably faster response...
Douzas, George; Grammatikopoulos, Theodoros; Zoupanos, George [National Technical University, Physics Department, Athens (Greece)
2009-02-15
We consider an N=1 supersymmetric E{sub 8} gauge theory, defined in ten dimensions, and determine all four-dimensional gauge theories resulting from generalized dimensional reduction a la Forgacs-Manton over coset spaces, followed by a subsequent application of the Wilson flux spontaneous symmetry-breaking mechanism. Our investigation is constrained only by the requirements that (i) the dimensional reduction leads to the potentially phenomenologically interesting, anomaly-free, four-dimensional E{sub 6}, SO{sub 10} and SU{sub 5} GUTs and (ii) the Wilson flux mechanism makes use only of the freely acting discrete symmetries of all possible six-dimensional coset spaces. (orig.)
Calvini, Rosalba; Foca, Giorgia; Ulrici, Alessandro
2016-10-01
Hyperspectral sensors represent a powerful tool for chemical mapping of solid-state samples, since they provide spectral information localized in the image domain in very short times and without the need of sample pretreatment. However, due to the large data size of each hyperspectral image, data dimensionality reduction (DR) is necessary in order to develop hyperspectral sensors for real-time monitoring of large sets of samples with different characteristics. In particular, in this work, we focused on DR methods to convert the three-dimensional data array corresponding to each hyperspectral image into a one-dimensional signal (1D-DR), which retains spectral and/or spatial information. In this way, large datasets of hyperspectral images can be converted into matrices of signals, which in turn can be easily processed using suitable multivariate statistical methods. Obviously, different 1D-DR methods highlight different aspects of the hyperspectral image dataset. Therefore, in order to investigate their advantages and disadvantages, in this work, we compared three different 1D-DR methods: average spectrum (AS), single space hyperspectrogram (SSH) and common space hyperspectrogram (CSH). In particular, we considered 370 NIR-hyperspectral images of a set of green coffee samples, and the three 1D-DR methods were tested for their effectiveness in sensor fault detection, data structure exploration and sample classification according to coffee variety and to coffee processing method. Principal component analysis and partial least squares-discriminant analysis were used to compare the three separate DR methods. Furthermore, low-level and mid-level data fusion was also employed to test the advantages of using AS, SSH and CSH altogether. Graphical Abstract: Key steps in hyperspectral data dimensionality reduction.
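The simplest of the three 1D-DR methods named here, the average spectrum (AS), can be sketched directly: collapse a rows x cols x bands hypercube into one spectrum by averaging over pixels. The data layout and names below are illustrative assumptions, not the paper's implementation:

```python
# Average-spectrum (AS) reduction of a hyperspectral cube to a 1-D signal.
# cube is nested lists indexed [row][col][band]; real data would be a
# NumPy array, but plain lists keep the sketch dependency-free.

def average_spectrum(cube):
    """Return the per-band mean over all pixels: a list of length n_bands."""
    rows, cols, bands = len(cube), len(cube[0]), len(cube[0][0])
    n_pixels = rows * cols
    return [sum(cube[r][c][b] for r in range(rows) for c in range(cols)) / n_pixels
            for b in range(bands)]

# Tiny 2x2 "image" with 3 spectral bands.
cube = [[[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]],
        [[3.0, 2.0, 1.0], [3.0, 2.0, 1.0]]]
print(average_spectrum(cube))  # [2.0, 2.0, 2.0]
```

AS discards all spatial information, which is exactly the trade-off the paper probes by comparing it against the hyperspectrogram-based reductions.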
Three-dimensional patterning methods and related devices
Putnam, Morgan C.; Kelzenberg, Michael D.; Atwater, Harry A.; Boettcher, Shannon W.; Lewis, Nathan S.; Spurgeon, Joshua M.; Turner-Evans, Daniel B.; Warren, Emily L.
2016-12-27
Three-dimensional patterning methods of a three-dimensional microstructure, such as a semiconductor wire array, are described, in conjunction with etching and/or deposition steps to pattern the three-dimensional microstructure.
O. Ye. Hentosh
2016-01-01
Full Text Available The possibility of applying the method of reduction upon finite-dimensional invariant subspaces, generated by the eigenvalues of the associated spectral problem, to a two-dimensional generalization of the relativistic Toda lattice with a triple-matrix Lax-type linearization is investigated. The Hamiltonian property and Lax-Liouville integrability of the vector fields given by this system on the invariant subspace related to the Bargmann-type reduction are established.
Methods in Model Order Reduction (MOR) field
刘志超
2014-01-01
Nowadays, models of systems may be quite large, up to tens of thousands of orders. In spite of increasing computational power, direct simulation of these large-scale systems may be impractical. Thus, to meet industry requirements, analytically tractable and computationally cheap models must be designed. This is the essential task of Model Order Reduction (MOR). This article describes the basics of MOR optimization and various ways of designing MOR, and draws conclusions about existing methods. In addition, it suggests some heuristic paths forward.
Xiaofang Li; Qionghua Wang; Yuhong Tao; Dahai Li; Aihong Wang
2011-01-01
A method to reduce crosstalk in multi-view autostereoscopic three-dimensional (3D) displays based on the lenticular sheet is proposed, in which the luminance values of each parallax image displayed on the display screen are corrected. We analyze the causes of crosstalk and deduce formulas for crosstalk reduction from the relationship between the crosstalk coefficients of each parallax image observed through the lenticular sheet, the luminance values of each parallax image displayed on the display screen, and the luminance values of each parallax image observed through the lenticular sheet at each viewing position. Experimental results verify the effectiveness of the proposed method.
QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.
Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy
We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method, named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization), for analyzing high-dimensional data. Unlike in the linear setting, where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
Anonymous
2007-01-01
In this paper, we present an object reduction for nonlinear partial differential equations. As a concrete example of its applications in physical problems, the method is applied to the (2+1)-dimensional Boiti-Leon-Pempinelli system, which has an extensive physics background, and an abundance of exact solutions is derived from some reduction equations. Based on the derived solutions, localized structures under a periodic wave background are obtained.
Dai Hongying
2013-01-01
Full Text Available Background: Multifactor Dimensionality Reduction (MDR) has been widely applied to detect gene-gene (GxG) interactions associated with complex diseases. Existing MDR methods summarize disease risk by a dichotomous predisposing model (high-risk/low-risk) from one optimal GxG interaction, which does not take the accumulated effects from multiple GxG interactions into account. Results: We propose an Aggregated-Multifactor Dimensionality Reduction (A-MDR) method that exhaustively searches for and detects significant GxG interactions to generate an epistasis-enriched gene network. An aggregated epistasis-enriched risk score, which takes into account multiple GxG interactions simultaneously, replaces the dichotomous predisposing risk variable and provides higher resolution in the quantification of disease susceptibility. We evaluate this new A-MDR approach in a broad range of simulations. Also, we present the results of an application of the A-MDR method to a data set derived from Juvenile Idiopathic Arthritis patients treated with methotrexate (MTX) that revealed several GxG interactions in the folate pathway that were associated with treatment response. The epistasis-enriched risk score that pooled information from 82 significant GxG interactions distinguished MTX responders from non-responders with 82% accuracy. Conclusions: The proposed A-MDR is innovative in the MDR framework to investigate aggregated effects among GxG interactions. New measures (pOR, pRR and pChi) are proposed to detect multiple GxG interactions.
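The dichotomous high-risk/low-risk step that A-MDR builds on can be sketched as follows: each two-locus genotype cell is labelled by its case/control ratio. The threshold T = 1 mirrors a balanced design; the data, names and tie-handling below are illustrative assumptions, not the A-MDR algorithm itself:

```python
# Core MDR step (sketch): label each two-SNP genotype cell high- or
# low-risk by the ratio of cases to controls falling into that cell.
from collections import Counter

def mdr_cells(genotypes, status, threshold=1.0):
    """genotypes: list of (snp1, snp2) pairs; status: 1=case, 0=control.
    Returns {cell: 'high' or 'low'}."""
    cases, controls = Counter(), Counter()
    for cell, s in zip(genotypes, status):
        (cases if s else controls)[cell] += 1
    labels = {}
    for cell in set(cases) | set(controls):
        # Guard against empty control cells with max(..., 1) -- a
        # simplification; real MDR handles sparse cells more carefully.
        ratio = cases[cell] / max(controls[cell], 1)
        labels[cell] = 'high' if ratio > threshold else 'low'
    return labels

geno = [(0, 0), (0, 0), (0, 0), (1, 1), (1, 1), (1, 1)]
stat = [1, 1, 0, 0, 0, 1]
labels = mdr_cells(geno, stat)  # (0, 0) -> 'high', (1, 1) -> 'low'
```

A-MDR's contribution is to aggregate such labels over many significant interactions into a single risk score instead of keeping one dichotomous model.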
THEORETICAL STUDY OF THREE-DIMENSIONAL NUMERICAL MANIFOLD METHOD
LUO Shao-ming; ZHANG Xiang-wei; LÜ Wen-ge; JIANG Dong-ru
2005-01-01
The three-dimensional numerical manifold method (NMM) is studied on the basis of the two-dimensional numerical manifold method. The three-dimensional cover displacement function is studied. The mechanical analysis and the Hammer integral method of the three-dimensional NMM are put forward. The stiffness matrix of the three-dimensional manifold element is derived, and the dissection rules are given. The theoretical system and the numerical realization of the three-dimensional NMM are systematically studied. As an example, a cantilever with a load on the end is calculated, and the results show agreeable precision and efficiency.
Three-dimensional image signals: processing methods
Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru
2010-11-01
Over the years extensive studies have been carried out to apply coherent optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature investigation of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. Considerable processing power and memory are needed to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present some research methods to process digital holograms for Internet transmission, and results.
Primordial black hole evaporation and spontaneous dimensional reduction
Mureika, J.R., E-mail: jmureika@lmu.edu [Department of Physics, Loyola Marymount University, Los Angeles, CA 90045 (United States)
2012-09-17
Several different approaches to quantum gravity suggest the effective dimension of spacetime reduces from four to two near the Planck scale. In light of such evidence, this Letter re-examines the thermodynamics of primordial black holes (PBHs) in specific lower-dimensional gravitational models. Unlike in four dimensions, (1+1)-D black holes radiate with power P{approx}M{sub BH}{sup 2}, while it is known no (2+1)-D (BTZ) black holes can exist in a non-anti-de Sitter universe. This has important relevance to the PBH population size and distribution, and consequently on cosmological evolution scenarios. The number of PBHs that have evaporated to present day is estimated, assuming they account for all dark matter. Entropy conservation during dimensional transition imposes additional constraints. If the cosmological constant is non-negative, no black holes can exist in the (2+1)-dimensional epoch, and consequently a (1+1)-dimensional black hole will evolve to become a new type of remnant. Although these results are conjectural and likely model-dependent, they open new questions about the viability of PBHs as dark matter candidates.
Wen-zhi ZHANG; Pei-yan HUANG
2014-01-01
Based on the precise integration method (PIM), a coupling technique of the high order multiplication perturbation method (HOMPM) and the reduction method is proposed to solve variable coefficient singularly perturbed two-point boundary value problems (TPBVPs) with one boundary layer. First, the inhomogeneous ordinary differential equations (ODEs) are transformed into the homogeneous ODEs by variable coefficient dimensional expansion. Then, the whole interval is divided evenly, and the transfer matrix in each sub-interval is worked out through the HOMPM. Finally, a group of algebraic equations are given based on the relationship between the neighboring sub-intervals, which are solved by the reduction method. Numerical results show that the present method is highly efficient.
Computational analysis of methods for reduction of induced drag
Janus, J. M.; Chatterjee, Animesh; Cave, Chris
1993-01-01
The purpose of this effort was to perform a computational flow analysis of a design concept centered around induced drag reduction and tip-vortex energy recovery. The flow model solves the unsteady three-dimensional Euler equations, discretized as a finite-volume method, utilizing a high-resolution approximate Riemann solver for cell interface flux definitions. The numerical scheme is an approximately-factored block LU implicit Newton iterative-refinement method. Multiblock domain decomposition is used to partition the field into an ordered arrangement of blocks. Three configurations are analyzed: a baseline fuselage-wing, a fuselage-wing-nacelle, and a fuselage-wing-nacelle-propfan. Aerodynamic force coefficients, propfan performance coefficients, and flowfield maps are used to qualitatively assess design efficacy. Where appropriate, comparisons are made with available experimental data.
New Similarity Reduction Solutions for the (2+1)-Dimensional Nizhnik-Novikov-Veselov Equation
ZHI Hong-Yan
2013-01-01
In this paper, some new formal similarity reduction solutions for the (2+1)-dimensional Nizhnik-Novikov-Veselov (NNV) equation are derived. First, we derive the similarity reduction of the NNV equation with the optimal system of the admitted one-dimensional subalgebras. Second, by analyzing the reduced equation, three types of similarity solutions are derived, such as multi-soliton-like solutions, variable separation solutions, and KdV-type solutions.
Dimensional reduction of Markov state models from renormalization group theory
Orioli, S.; Faccioli, P.
2016-09-01
Renormalization Group (RG) theory provides the theoretical framework to define rigorous effective theories, i.e., systematic low-resolution approximations of arbitrary microscopic models. Markov state models are shown to be rigorous effective theories for Molecular Dynamics (MD). Based on this fact, we use real space RG to vary the resolution of the stochastic model and define an algorithm for clustering microstates into macrostates. The result is a lower dimensional stochastic model which, by construction, provides the optimal coarse-grained Markovian representation of the system's relaxation kinetics. To illustrate and validate our theory, we analyze a number of test systems of increasing complexity, ranging from synthetic toy models to two realistic applications, built from all-atom MD simulations. The computational cost of computing the low-dimensional model remains affordable on a desktop computer even for thousands of microstates.
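What "clustering microstates into macrostates" produces can be made concrete with a toy sketch: given a lumping map, estimate the lower-dimensional macrostate transition matrix from a discrete trajectory. This illustrates the output of such a coarse-graining, not the paper's RG-based algorithm for choosing the lumping:

```python
# Estimate a macrostate transition matrix from a microstate trajectory,
# given a micro -> macro lumping. Trajectory and lumping are invented.

def macro_transition_matrix(trajectory, lumping, n_macro):
    """trajectory: list of microstate indices over time;
    lumping: dict mapping each microstate to a macrostate index.
    Returns a row-stochastic n_macro x n_macro matrix."""
    counts = [[0] * n_macro for _ in range(n_macro)]
    for a, b in zip(trajectory, trajectory[1:]):
        counts[lumping[a]][lumping[b]] += 1
    # Normalize each row into a probability distribution.
    return [[c / max(sum(row), 1) for c in row] for row in counts]

# 4 microstates lumped into 2 macrostates: {0, 1} -> 0 and {2, 3} -> 1.
lump = {0: 0, 1: 0, 2: 1, 3: 1}
traj = [0, 1, 0, 2, 3, 2, 3, 1, 0, 1]
T = macro_transition_matrix(traj, lump, 2)
# Each row of T is a probability distribution over macrostates.
```

The RG construction in the paper chooses the lumping so that this reduced matrix best preserves the slow relaxation kinetics.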
Localization of a mobile laser scanner via dimensional reduction
Lehtola, Ville V.; Virtanen, Juho-Pekka; Vaaja, Matti T.; Hyyppä, Hannu; Nüchter, Andreas
2016-11-01
We extend the concept of intrinsic localization from a theoretical one-dimensional (1D) solution onto a 2D manifold that is embedded in a 3D space, and then recover the full six degrees of freedom for a mobile laser scanner with a simultaneous localization and mapping (SLAM) algorithm. By intrinsic localization, we mean that no reference coordinate system, such as a global navigation satellite system (GNSS), and no inertial measurement unit (IMU) are used. Experiments are conducted with a 2D laser scanner mounted on a rolling prototype platform, VILMA. The concept offers the potential to be extended to other wheeled platforms.
Verified reduction of dimensionality for an all-vanadium redox flow battery model
Sharma, A. K.; Ling, C. Y.; Birgersson, E.; Vynnycky, M.; Han, M.
2015-04-01
The computational cost for all-vanadium redox flow batteries (VRFB) models that seek to capture the transport phenomena usually increases with the number of spatial dimensions considered. In this context, we carry out scale analysis to derive a reduced zero-dimensional model. Two nondimensional numbers and their limits to support the model reduction are identified. We verify the reduced model by comparing its charge-discharge curve predictions with that of a full two-dimensional model. The proposed analysis leading to reduction in dimensionality is generic and can be employed for other types of redox flow batteries.
Kastrin, Andrej
2010-01-01
Class prediction is an important application of microarray gene expression data analysis. The high dimensionality of microarray data, where the number of genes (variables) is very large compared to the number of samples (observations), makes the application of many prediction techniques (e.g., logistic regression, discriminant analysis) difficult. An efficient way to solve this problem is by using dimension reduction statistical techniques. Increasingly used in psychology-related applications, the Rasch model (RM) provides an appealing framework for handling high-dimensional microarray data. In this paper, we study the potential of RM-based modeling in dimensionality reduction with binarized microarray gene expression data and investigate its prediction accuracy in the context of class prediction using linear discriminant analysis. Two different publicly available microarray data sets are used to illustrate a general framework of the approach. Performance of the proposed method is assessed by re-randomization s...
D-Theory: Field Quantization by Dimensional Reduction of Discrete Variables
Brower, R; Riederer, S; Wiese, U J
2003-01-01
D-theory is an alternative non-perturbative approach to quantum field theory formulated in terms of discrete quantized variables instead of classical fields. Classical scalar fields are replaced by generalized quantum spins and classical gauge fields are replaced by quantum links. The classical fields of a d-dimensional quantum field theory reappear as low-energy effective degrees of freedom of the discrete variables, provided the (d+1)-dimensional D-theory is massless. When the extent of the extra Euclidean dimension becomes small in units of the correlation length, an ordinary d-dimensional quantum field theory emerges by dimensional reduction. The D-theory formulation of scalar field theories with various global symmetries and of gauge theories with various gauge groups is constructed explicitly and the mechanism of dimensional reduction is investigated.
Model and Controller Order Reduction for Infinite Dimensional Systems
Fatmawati
2010-05-01
Full Text Available This paper presents a reduced-order model problem using reciprocal transformation and balanced truncation, followed by low-order controller design, for infinite dimensional systems. The class of systems considered is that of exponentially stable state linear systems (A, B, C), where the operator A has a bounded inverse and the operators B and C are of finite rank and bounded. We can connect the system (A, B, C) with its reciprocal system via the solutions of the Lyapunov equations. The realization of the reciprocal system is reduced by balanced truncation. This result is further translated, using the reciprocal transformation, into the reduced-order model for the system (A, B, C). The low-order controller is then designed based on the reduced-order model. Numerical examples are studied using simulations of an Euler-Bernoulli beam to show the closed-loop performance.
Assessment of metal artifact reduction methods in pelvic CT
Abdoli, Mehrsima [Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, Amsterdam 1066 CX (Netherlands); Mehranian, Abolfazl [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva CH-1211 (Switzerland); Ailianou, Angeliki; Becker, Minerva [Division of Radiology, Geneva University Hospital, Geneva CH-1211 (Switzerland); Zaidi, Habib, E-mail: habib.zaidi@hcuge.ch [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva CH-1211 (Switzerland); Geneva Neuroscience Center, Geneva University, Geneva CH-1205 (Switzerland); Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Hanzeplein 1, Groningen 9700 RB (Netherlands)
2016-04-15
Purpose: Metal artifact reduction (MAR) produces images with improved quality potentially leading to confident and reliable clinical diagnosis and therapy planning. In this work, the authors evaluate the performance of five MAR techniques for the assessment of computed tomography images of patients with hip prostheses. Methods: Five MAR algorithms were evaluated using simulation and clinical studies. The algorithms included one-dimensional linear interpolation (LI) of the corrupted projection bins in the sinogram, two-dimensional interpolation (2D), a normalized metal artifact reduction (NMAR) technique, a metal deletion technique, and a maximum a posteriori completion (MAPC) approach. The algorithms were applied to ten simulated datasets as well as 30 clinical studies of patients with metallic hip implants. Qualitative evaluations were performed by two blinded experienced radiologists who ranked overall artifact severity and pelvic organ recognition for each algorithm by assigning scores from zero to five (zero indicating totally obscured organs with no structures identifiable and five indicating recognition with high confidence). Results: Simulation studies revealed that 2D, NMAR, and MAPC techniques performed almost equally well in all regions. LI falls behind the other approaches in terms of reducing dark streaking artifacts as well as preserving unaffected regions (p < 0.05). Visual assessment of clinical datasets revealed the superiority of NMAR and MAPC in the evaluated pelvic organs and in terms of overall image quality. Conclusions: Overall, all methods, except LI, performed equally well in artifact-free regions. Considering both clinical and simulation studies, 2D, NMAR, and MAPC seem to outperform the other techniques.
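The one-dimensional linear interpolation (LI) baseline evaluated above is simple enough to sketch. The snippet below is an illustrative NumPy version, not the authors' implementation: for each projection angle (sinogram row), metal-corrupted detector bins are replaced by linear interpolation from the nearest clean bins.

```python
import numpy as np

def li_mar(sinogram, metal_mask):
    """LI metal artifact reduction: interpolate corrupted bins row by row.

    sinogram   : 2D array, shape (n_angles, n_bins)
    metal_mask : boolean array of the same shape, True where a bin
                 passes through metal
    """
    out = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i]
        if bad.any() and not bad.all():
            # fill corrupted bins from the surrounding clean bins
            out[i, bad] = np.interp(bins[bad], bins[~bad], sinogram[i, ~bad])
    return out

# Tiny example: one projection with a single corrupted bin
sino = np.array([[0.0, 1.0, 100.0, 3.0]])
mask = np.array([[False, False, True, False]])
corrected = li_mar(sino, mask)
```

The corrected sinogram is then reconstructed as usual; the dark streaks mentioned in the results arise because this interpolation discards all structural information inside the metal trace.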
UV dimensional reduction to two from group valued momenta
Arzano, Michele
2016-01-01
We describe a new model of deformed relativistic kinematics based on the group manifold $U(1) \times SU(2)$ as a four-momentum space. We discuss the action of the Lorentz group on such a space and illustrate the deformed composition law for the group-valued momenta. Due to the geometric structure of the group, the deformed kinematics is governed by {\it two} energy scales $\lambda$ and $\kappa$. A relevant feature of the model is that it exhibits a running spectral dimension $d_s$ with the characteristic short-distance reduction to $d_s = 2$ found in most quantum gravity scenarios.
Reduction of Volume-preserving Flows on an n-dimensional Manifold
Yong-ai Zheng; De-bin Huang; Zeng-rong Liu
2003-01-01
A geometric reduction procedure for volume-preserving flows with a volume-preserving symmetry on an n-dimensional manifold is obtained. Instead of the coordinate-dependent theory and the concrete coordinate transformation, we show that a volume-preserving flow with a one-parameter volume-preserving symmetry on an n-dimensional manifold can be reduced to a volume-preserving flow on the corresponding (n - 1)-dimensional quotient space. More generally, if it admits an r-parameter volume-preserving commutable symmetry, then the reduced flow preserves the corresponding (n - r)-dimensional volume form.
A Dimensionality Reduction Framework for Detection of Multiscale Structure in Heterogeneous Networks
Hua-Wei Shen; Xue-Qi Cheng; Yuan-Zhuo Wang; Yi-xin Chen
2012-01-01
Graph clustering has been widely applied in exploring regularities emerging in relational data. Recently, the rapid development of network theory has correlated graph clustering with the detection of community structure, a common and important topological characteristic of networks. Most existing methods investigate the community structure at a single topological scale. However, as shown by empirical studies, the community structure of real-world networks often exhibits multiple topological descriptions, corresponding to clustering at different resolutions. Furthermore, the detection of multiscale community structure is heavily affected by the heterogeneous distribution of node degree, making it very challenging to detect multiscale community structure in heterogeneous networks. In this paper, we propose a novel, unified framework for detecting community structure from the perspective of dimensionality reduction. Based on the framework, we first prove that the well-known Laplacian matrix for network partition and the widely used modularity matrix for community detection are two kinds of covariance matrices used in dimensionality reduction. We then propose a novel method to detect communities at multiple topological scales within our framework. We further show that existing algorithms fail to deal with heterogeneous node degrees, and we develop a novel method to handle network heterogeneity by introducing a rescaling transformation into the covariance matrices in our framework. Extensive tests on real-world and artificial networks demonstrate that the proposed correlation matrices significantly outperform Laplacian and modularity matrices in terms of their ability to identify multiscale community structure in heterogeneous networks.
Holographic dimensional reduction: Center manifold theorem and E-infinity
El Naschie, M.S. [Department of Physics, University of Alexandria (Egypt); Department of Astrophysics, Cairo University (Egypt); Department of Physics, Mansura University (Egypt)
2006-08-15
The Klein modular curve is shown to be the holographic boundary of E-infinity Cantorian spacetime. The conformal relation between the full-dimensional and the reduced space is explored. We show that both spaces, analyzed in the appropriate manner, give the same results for certain aspects of high energy particle physics and quantum gravity. Similarity with the center manifold theorem of nonlinear dynamics and the theory of bifurcating vector fields is discussed. In particular, it is found that the transfinite version of the $E_8 \bar{E}_8$ theory corresponds to a fuzzy Kähler manifold with $b_2^- = 19 - \phi^6$ and $b_2^+ = 5 + \phi^3$, while the boundary theory of the $\Gamma_c(7)$ Klein modular space corresponds to another fuzzy Kähler manifold with $b_2^- = 13 - \phi^6$ and $b_2^+ = 3 - \phi^6$. Based on these results, we conclude that the $\varepsilon^{(\infty)}$-$\Gamma_c(7)$ theory represents a worked-out example of the correctness of the holographic principle first proposed by G. 't Hooft.
Sharpening the weak gravity conjecture with dimensional reduction
Heidenreich, Ben; Reece, Matthew; Rudelius, Tom
2016-02-01
We investigate the behavior of the Weak Gravity Conjecture (WGC) under toroidal compactification and RG flows, finding evidence that WGC bounds for single photons become weaker in the infrared. By contrast, we find that a photon satisfying the WGC will not necessarily satisfy it after toroidal compactification when black holes charged under the Kaluza-Klein photons are considered. Doing so either requires an infinite number of states of different charges to satisfy the WGC in the original theory or a restriction on allowed compactification radii. These subtleties suggest that if the Weak Gravity Conjecture is true, we must seek a stronger form of the conjecture that is robust under compactification. We propose a "Lattice Weak Gravity Conjecture" that meets this requirement: a superextremal particle should exist for every charge in the charge lattice. The perturbative heterotic string satisfies this conjecture. We also use compactification to explore the extent to which the WGC applies to axions. We argue that gravitational instanton solutions in theories of axions coupled to dilaton-like fields are analogous to extremal black holes, motivating a WGC for axions. This is further supported by a match between the instanton action and that of wrapped black branes in a higher-dimensional UV completion.
Chaotic oscillator containing memcapacitor and meminductor and its dimensionality reduction analysis
Yuan, Fang; Wang, Guangyi; Wang, Xiaowei
2017-03-01
In this paper, smooth curve models of a meminductor and a memcapacitor are designed, generalized from the memristor. Based on these models, a new five-dimensional chaotic oscillator containing a meminductor and a memcapacitor is proposed. Through dimensionality reduction, this five-dimensional system can be transformed into a three-dimensional system. The main work of this paper is to compare the five-dimensional system with its dimensionality-reduced model. To investigate the dynamical behaviors of the two systems, equilibrium points and their stabilities are analyzed, and bifurcation diagrams and Lyapunov exponent spectra are used to explore their properties. In addition, digital signal processing technologies are used to realize this chaotic oscillator, and chaotic sequences are generated by the experimental device, which can be used in encryption applications.
Collins, Ryan L; Hu, Ting; Wejse, Christian;
2013-01-01
Background: Identifying high-order genetic associations with non-additive (i.e. epistatic) effects in population-based studies of common human diseases is a computational challenge. Multifactor dimensionality reduction (MDR) is a machine learning method that was designed specifically for this problem. The goal of the present study was to apply MDR to mining high-order epistatic interactions in a population-based genetic study of tuberculosis (TB). Results: The study used a previously published data set consisting of 19 candidate single-nucleotide polymorphisms (SNPs) in 321 pulmonary TB cases and 347 healthy controls from Guinea-Bissau in Africa. The ReliefF algorithm was applied first to generate a smaller set of the five most informative SNPs. MDR with 10-fold cross-validation was then applied to look at all possible combinations of two, three, four and five SNPs. The MDR model with the best...
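The core MDR step, which pools multi-locus genotype cells into a single high-risk/low-risk attribute by comparing each cell's case:control ratio to a threshold, can be sketched as follows. This is an illustrative simplification (function and variable names are ours, and the cross-validation loop is omitted):

```python
import numpy as np

def mdr_risk_labels(genotypes, status, threshold=1.0):
    """Collapse multi-SNP genotype combinations into one binary attribute.

    genotypes : array of shape (n_samples, n_snps), genotype codes
    status    : array of 0 (control) / 1 (case)
    A genotype cell is labeled 'high risk' (1) when its case count
    exceeds threshold * control count; otherwise 'low risk' (0).
    """
    cells = {}
    for g, s in zip(map(tuple, genotypes), status):
        c = cells.setdefault(g, [0, 0])   # c = [controls, cases]
        c[s] += 1
    high = {g for g, (ctrl, case) in cells.items()
            if case > threshold * ctrl}
    return np.array([1 if tuple(g) in high else 0 for g in genotypes])

# Tiny two-SNP example
genos = np.array([[0, 1], [0, 1], [0, 1], [2, 2], [2, 2]])
status = np.array([1, 1, 0, 0, 0])
labels = mdr_risk_labels(genos, status)
```

In the full method this collapsed attribute is scored by classification error under 10-fold cross-validation, and the SNP combination with the best score is reported.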
Choo, Jaegul; Lee, Hanseung; Liu, Zhicheng; Stasko, John; Park, Haesun
2013-01-01
Many modern data sets, such as text and image data, can be represented in high-dimensional vector spaces and have benefited from advanced computational methods. Visual analytics approaches have contributed greatly to data understanding and analysis, owing to their capability of leveraging humans' ability for quick visual perception. However, visual analytics targeting large-scale data such as text and image data has been challenging due to the limited screen space, in terms of both the number of data points and the number of features to represent. Among the various computational methods supporting visual analytics, dimension reduction and clustering have played essential roles by reducing these numbers in an intelligent way to visually manageable sizes. Given the numerous dimension reduction and clustering methods available, however, the choice of algorithms and their parameters becomes difficult. In this paper, we present an interactive visual testbed system for dimension reduction and clustering in large-scale, high-dimensional data analysis. The testbed system enables users to apply various dimension reduction and clustering methods with different settings, visually compare the results from different algorithmic methods to obtain rich knowledge of the data and tasks at hand, and eventually choose the most appropriate path for a collection of algorithms and parameters. Using various data sets, such as documents, images, and others already encoded as vectors, we demonstrate how the testbed system can support these tasks.
Hyperspectral image classification based on volumetric texture and dimensionality reduction
Su, Hongjun; Sheng, Yehua; Du, Peijun; Chen, Chen; Liu, Kui
2015-06-01
A novel approach using volumetric texture and reduced spectral features is presented for hyperspectral image classification. In this approach, volumetric textural features are extracted by volumetric gray-level co-occurrence matrices (VGLCM). Spectral features are extracted by minimum estimated abundance covariance (MEAC) and linear prediction (LP)-based band selection, and by a semi-supervised k-means (SKM) band-clustering algorithm with deletion of the worst cluster (SKMd). Moreover, four feature-combination schemes are designed for hyperspectral image classification using spectral and textural features. The proposed method using VGLCM is shown to outperform the gray-level co-occurrence matrices (GLCM) method, and the experimental results indicate that combining spectral information with volumetric textural features leads to improved classification performance in hyperspectral imagery.
Joint Statistics of Strongly Correlated Neurons via Dimensional Reduction
Deniz, Taskin
2016-01-01
The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.
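Empirical cross-correlation functions of the kind the authors fit their theory to can be estimated from binned spike trains. The following is a generic illustrative estimator (our own code, not the paper's):

```python
import numpy as np

def spike_cross_correlation(s1, s2, max_lag):
    """Empirical cross-correlation of two binned spike trains.

    s1, s2  : 1D arrays of spike counts per time bin
    max_lag : largest lag (in bins) to evaluate
    Returns integer lags and the mean product of the mean-subtracted
    trains at each lag.
    """
    x = s1 - s1.mean()
    y = s2 - s2.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([
        np.mean(x[max(0, -l): len(x) - max(0, l)] *
                y[max(0, l): len(y) - max(0, -l)])
        for l in lags
    ])
    return lags, cc

# Sanity check: a train correlated with itself peaks at zero lag
s = np.array([0, 1, 0, 0, 1, 0, 1, 0], dtype=float)
lags, cc = spike_cross_correlation(s, s, 2)
```

For two neurons sharing synaptic input, this function would show a central peak whose width and height reflect the correlation transfer the paper computes analytically.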
Dimensionality Reduction for Optimal Clustering In Data Mining
Ch. Raja Ramesh
2011-10-01
Spectral clustering and the Leader's algorithm have both been used to identify clusters that are nonlinearly separable in input space. Despite significant research, these methods have remained only loosely related. The sigmoid kernel and the polynomial kernel are popular for support vector machines due to their origin in clustering. In this paper we present a comparison of the above kernel methods after reducing the dimensions using feature functions. For this we used handwriting data sets to create and compare the clusters.
Motion Planning for Robots with Topological Dimension Reduction Method
[Anonymous]
1990-01-01
This paper explores the realization of robotic motion planning, especially the Findpath problem, a basic motion planning problem that arises in the development of robotics. Findpath means: given the initial and desired final configurations of a robotic arm in 3-dimensional space, and given descriptions of the obstacles in the space, determine whether there is a continuous collision-free motion of the robotic arm from one configuration to the other, and find such a motion if it exists. There are several branches of approach in the motion planning area, but in practice the important considerations are the feasibility, efficiency and accuracy of the method. In this paper, according to the concepts of Configuration Space (C-Space) and the Rotation Mapping Graph (RMG) discussed in [1], a topological method named the Dimension Reduction Method (DRM) for investigating the connectivity (topological structure) of the RMG is presented using topological techniques. With this approach the Findpath problem is transformed into that of finding a connected way in a finite Characteristic Network (CN). The method has shown great potential in practice. A simulation system is designed to embody DRM [1-2], and it is expected that DRM can be adopted in the first overall planning of real robot systems in the near future.
Tissue cartography: compressing bio-image data by dimensional reduction.
Heemskerk, Idse; Streichan, Sebastian J
2015-12-01
The high volumes of data produced by state-of-the-art optical microscopes encumber research. We developed a method that reduces data size and processing time by orders of magnitude while disentangling signal by taking advantage of the laminar structure of many biological specimens. Our Image Surface Analysis Environment automatically constructs an atlas of 2D images for arbitrarily shaped, dynamic and possibly multilayered surfaces of interest. Built-in correction for cartographic distortion ensures that no information on the surface is lost, making the method suitable for quantitative analysis. We applied our approach to 4D imaging of a range of samples, including a Drosophila melanogaster embryo and a Danio rerio beating heart.
Multi-label dimensionality reduction and classification with extreme learning machines
Lin Feng; Jing Wang; Shenglan Liu; Yao Xiao
2014-01-01
Driven by the needs of real applications such as text categorization and image classification, multi-label learning has gradually become a hot research topic in recent years, and much attention has been paid to multi-label classification algorithms. Considering that the high dimensionality of multi-label datasets may cause the curse of dimensionality and hamper the classification process, a dimensionality reduction algorithm named multi-label kernel discriminant analysis (MLKDA) is proposed to reduce the dimensionality of multi-label datasets. MLKDA, with the kernel trick, processes the multi-label data integrally and realizes nonlinear dimensionality reduction with an idea similar to linear discriminant analysis (LDA). For the classification of multi-label data, the extreme learning machine (ELM) is an efficient algorithm with good accuracy. MLKDA, combined with ELM, shows good performance in multi-label learning experiments with several datasets. The experiments on both static data and data streams show that MLKDA outperforms multi-label dimensionality reduction via dependence maximization (MDDM) and multi-label linear discriminant analysis (MLDA) in cases of balanced datasets and stronger correlation between tags, and that ELM is also a good choice for multi-label classification.
A Method of Attribute Reduction Based on Rough Set
LI Chang-biao; SONG Jian-ping
2005-01-01
Logging attribute optimization is an important task in well-logging interpretation. A method of attribute reduction based on rough sets is presented. Firstly, the core information of the sample is determined by a general reduction method. Then, the significance of each dispensable attribute in the reduction table is calculated. Finally, the minimum relative reduction set is obtained. Typical calculations and quantitative computation of reservoir parameters in oil logging show that this method of attribute reduction is effective and feasible in logging interpretation.
Anisotropic Inflation in a 5D Standing Wave Braneworld and Dimensional Reduction
Gogberashvili, Merab; Malagon-Morejon, Dagoberto; Mora-Luna, Refugio Rigel
2012-01-01
We investigate a cosmological solution within the framework of a 5D standing wave braneworld model generated by gravity coupled to a massless scalar phantom-like field. By obtaining a full exact solution of the model, we find a novel dynamical mechanism in which the anisotropic nature of the primordial metric gives rise to (i) inflation along certain spatial dimensions, and (ii) deflation and a shrinking reduction of the number of spatial dimensions along other directions. This dynamical mechanism can be relevant for dimensional reduction in string and other higher-dimensional theories in the attempt to obtain a 4D isotropic expanding space-time.
Berezhkovskii, A. M.; Pustovoit, M. A.; Bezrukov, S. M.
2007-04-01
Brownian dynamics simulations of a particle diffusing in a long conical tube (the length of the tube is much greater than its smallest radius) are used to study the reduction of three-dimensional diffusion in tubes of varying cross section to an effective one-dimensional description. The authors find that the one-dimensional description in the form of the Fick-Jacobs equation with a position-dependent diffusion coefficient, D(x), suggested by Zwanzig [J. Phys. Chem. 96, 3926 (1992)], with D(x) given by the Reguera-Rubí formula [Phys. Rev. E 64, 061106 (2001)], D(x) = D/√(1 + R'(x)²), where D is the particle diffusion coefficient in the absence of constraints and R(x) is the tube radius at x, is valid when |R'(x)| ≤ 1. When |R'(x)| > 1, higher spatial derivatives of the one-dimensional concentration in the effective diffusion equation can no longer be neglected, as indicated by Kalinay and Percus [J. Chem. Phys. 122, 204701 (2005)]. Thus the reduction to the effective one-dimensional description is a useful tool only when |R'(x)| ≤ 1, since in this case one can apply the powerful standard methods to analyze the resulting diffusion equation.
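The Reguera-Rubí coefficient quoted above is straightforward to evaluate; a minimal sketch (illustrative function name, not from the paper):

```python
import numpy as np

def effective_diffusion(D, Rprime):
    """Reguera-Rubí position-dependent diffusion coefficient for the
    Fick-Jacobs description:  D(x) = D / sqrt(1 + R'(x)**2).

    D      : free diffusion coefficient (no constraints)
    Rprime : local slope R'(x) of the tube radius (scalar or array)
    """
    Rprime = np.asarray(Rprime, dtype=float)
    return D / np.sqrt(1.0 + Rprime ** 2)

# For a straight cylinder (R' = 0) the free coefficient is recovered;
# at the validity boundary |R'| = 1 it is reduced by a factor 1/sqrt(2).
d_cyl = effective_diffusion(1.0, 0.0)
d_edge = effective_diffusion(2.0, 1.0)
```

Per the abstract, values computed for |R'(x)| > 1 should not be trusted, since the one-term Fick-Jacobs reduction itself breaks down there.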
Simple self-reduction method for anterior shoulder dislocation
Reiner Wirbel
2014-01-01
Conclusion: The presented Boss-Holzach-Matter method for reduction of anterior shoulder dislocation is a simple method that does not require anaesthesia, but patient cooperation is crucial. Its success rate is comparable with other established methods.
Numerical Improvement of The Three-dimensional Boundary Element Method
Ortiz-Aleman, C.; Gil-Zepeda, A.; Sánchez-Sesma, F. J.; Luzon-Martinez, F.
2001-12-01
Boundary element methods have been applied to calculate the seismic response of various types of geological structures. Dimensionality reduction and a relatively easy fulfillment of radiation conditions at infinity are recognized advantages over domain approaches. Indirect Boundary Element Method (IBEM) formulations give rise to large systems of equations, and the considerable amount of operations required for solving them suggests the possibility of getting some benefit from exploitation of sparsity patterns. In this article, a brief study on the structure of the linear systems derived from the IBEM method is carried out. Applicability of a matrix static condensation algorithm to the inversion of the IBEM coefficient matrix is explored, in order to optimize the numerical burden of the method. The seismic response of a 3-D alluvial valley of irregular shape, as originally proposed by Sánchez-Sesma and Luzon (1995), was computed, and comparisons of time consumption and memory allocation are established. An alternative way to deal with those linear systems is the use of threshold criteria for the truncation of the coefficient matrix, which implies the solution of sparse approximations instead of the original full IBEM systems (Ortiz-Aleman et al., 1998). Performance of this optimized approach is evaluated on its application to the case of a three-dimensional alluvial basin with irregular shape. Transfer functions were calculated for the frequency range from 0 to 1.25 Hz. Inversion of linear systems by using this algorithm led to significant savings in computer time and memory allocation relative to the original IBEM formulation. Results represent an extension in the range of application of the IBEM method.
Reddy, M Babu
2010-01-01
The recent increase in the dimensionality of data has posed a great challenge to existing dimensionality reduction methods in terms of their effectiveness. Dimensionality reduction has emerged as one of the significant preprocessing steps in machine learning applications and has been effective in removing inappropriate data, increasing learning accuracy, and improving comprehensibility. Feature redundancy exerts great influence on the performance of the classification process. Toward better classification performance, this paper addresses the usefulness of truncating highly correlated and redundant attributes. Here, an effort has been made to verify the utility of dimensionality reduction by applying the LVQ (Learning Vector Quantization) method on two benchmark datasets, 'Pima Indian Diabetic patients' and 'Lung cancer patients'.
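Learning Vector Quantization as used in this study can be sketched with a minimal LVQ1 training loop. This is an illustrative NumPy implementation under our own naming (one prototype per class, linearly decaying learning rate), not the study's code:

```python
import numpy as np

def train_lvq1(X, y, n_epochs=30, lr=0.1, seed=0):
    """Minimal LVQ1: the winning prototype is pulled toward same-class
    samples and pushed away from other-class samples."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    # initialize one prototype per class from a random member of that class
    protos = np.array([X[y == c][rng.integers((y == c).sum())]
                       for c in classes], dtype=float)
    for epoch in range(n_epochs):
        a = lr * (1.0 - epoch / n_epochs)    # decaying learning rate
        for i in rng.permutation(len(X)):
            j = int(np.argmin(np.linalg.norm(protos - X[i], axis=1)))
            step = a * (X[i] - protos[j])
            protos[j] += step if classes[j] == y[i] else -step
    return classes, protos

def predict_lvq(classes, protos, X):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Two well-separated 2D clusters as a smoke test
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),
               rng.normal(5.0, 0.3, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)
classes, protos = train_lvq1(X, y)
pred = predict_lvq(classes, protos, X)
```

Because classification cost depends on the number of features entering the distance computation, dropping redundant attributes before training, as the paper proposes, directly reduces both runtime and noise.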
Methods of reduction of cisplatin nephrotoxicity
Walker, E.M. Jr.; Gale, G.R.
Cisplatin, an agent widely used in the chemotherapy of a variety of human malignancies, is often dose-limited owing to its nephrotoxicity. Some of the approaches under consideration, regarding the reduction of cisplatin nephrotoxicity, include the use of hydration and osmotic diuresis, pharmacological diuretics, chelating agents or agents which otherwise react with cisplatin or reverse cisplatin-induced deoxyribonucleic acid cross-links, and antioxidants to destroy free radicals, especially superoxide radicals, produced by cisplatin. The effects of each of these and other interventions on cisplatin-induced nephrotoxicity are delineated, along with their proposed mechanisms and effects on therapeutic efficacy. The current status of development of organoplatinum analogs yielding congeners with less nephrotoxicity and greater efficacy is discussed briefly. Finally, a possible role of endogenous and/or exogenous prostaglandins in protecting against or reversing heavy metal nephrotoxicity is suggested.
Kwon, Min-Seok; Kim, Kyunga; Lee, Sungyoung; Park, Taesung
2012-01-01
Multifactor dimensionality reduction (MDR) has been widely applied to detect gene-gene interactions, which are well recognized as playing an important role in understanding complex traits. However, because MDR performs an exhaustive analysis, current MDR software has limitations in scaling to genome-wide association studies (GWAS) with a large number of genetic markers, up to approximately 1 million. To overcome this computational problem, we developed CUDA (Compute Unified Device Architecture)-based genome-wide association MDR (cuGWAM) software using efficient hardware accelerators. cuGWAM has better performance than CPU-based MDR methods and other GPU-based methods.
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features.
Dark energy in five-dimensional Brans-Dicke cosmology with dimensional reduction
Ahmad Rami E1-Nabulsi
2011-01-01
We explore a 5D Brans-Dicke scalar cosmology by conjecturing that the four-dimensional Hubble parameter varies as H = εφ^s, where ε ∈ ℝ and s is some unknown power index, and that the extra dimensions compactify as the visible dimensions expand, b(t) ≈ a(t)^x with x ∈ ℝ⁻. We mainly discuss the case x = -1. For critical values of ε close to unity, it is observed that the acceleration of the universe occurs at a redshift close to z = 0.8, which indicates that in our model the accelerated expansion of the universe began only recently. Several interesting points are revealed and discussed in some detail.
Reduction Method for Active Distribution Networks
Raboni, Pietro; Chen, Zhe
2013-01-01
On-line security assessment is traditionally performed by Transmission System Operators at the transmission level, ignoring the effective response of distributed generators and small loads. On the other hand, the computation time and amount of real-time data required for including Distribution Networks would be too large. In this paper an adaptive aggregation method for subsystems with power-electronic-interfaced generators and voltage-dependent loads is proposed. With this tool it may be relatively easier to include distribution networks in security assessment. The method is validated...
Robust three dimensional surface contouring method with digital holography
YUAN Cao-jin; ZHAI Hong-chen; WANG Xiao-lei; WU Lan
2006-01-01
In this paper, a digital holography system with a short-coherence light source is used to record a series of holograms of a micro-object. The three-dimensional reconstruction is completed by least-square polynomial fitting of a series of two-dimensional intensity images obtained through holographic reconstruction. This three-dimensional reconstruction method can be applied to micro-objects with strong laser speckle noise, for which the conventional method fails.
UPWIND DISCONTINUOUS GALERKIN METHODS FOR TWO DIMENSIONAL NEUTRON TRANSPORT EQUATIONS
袁光伟; 沈智军; 闫伟
2003-01-01
In this paper the upwind discontinuous Galerkin methods with triangular meshes for two-dimensional neutron transport equations are studied. The stability of both the semi-discrete and the fully discrete method is proved.
Noise Reduction Methods for Weighing Lysimeters
Mechanical vibration of the grass and crop weighing lysimeters, located at the University of California West Side Field Research and Extension Station at Five Points, CA generated noise in lysimeter mass measurements and reduced the quality of evapotranspiration (ET) data. Two filtering methods for ...
Radon Reduction Methods: A Homeowner's Guide.
Environmental Protection Agency, Washington, DC.
The U.S. Environmental Protection Agency (EPA) is studying the effectiveness of various ways to reduce high concentrations of radon in houses. This booklet was produced to share what has been learned with those whose radon problems demand immediate action. The booklet describes nine methods that have been tested successfully--by EPA and/or other…
Data Reduction Method for Categorical Data Clustering
Sánchez Garreta, José Salvador; Rendón, Eréndira; García, Rene A.; Abundez, Itzel; Gutiérrez, Citlalih; Gasca, Eduardo
2008-01-01
Categorical data clustering constitutes an important part of data mining; its relevance has recently drawn attention from several researchers. As a step in data mining, however, clustering encounters the problem of large amount of data to be processed. This article offers a solution for categorical clustering algorithms when working with high volumes of data by means of a method that summarizes the database. This is done using a structure called CM-tree. In order to test our metho...
Winham, Stacey J; Motsinger-Reif, Alison A
2011-01-01
The standard in genetic association studies of complex diseases is replication and validation of positive results, with an emphasis on assessing the predictive value of associations. In response to this need, a number of analytical approaches have been developed to identify predictive models that account for complex genetic etiologies. Multifactor Dimensionality Reduction (MDR) is a commonly used, highly successful method designed to evaluate potential gene-gene interactions. MDR relies on classification error in a cross-validation framework to rank and evaluate potentially predictive models. Previous work has demonstrated the high power of MDR, but has not considered the accuracy and variance of the MDR prediction error estimate. Currently, we evaluate the bias and variance of the MDR error estimate as both a retrospective and prospective estimator and show that MDR can both underestimate and overestimate error. We argue that a prospective error estimate is necessary if MDR models are used for prediction, and propose a bootstrap resampling estimate, integrating population prevalence, to accurately estimate prospective error. We demonstrate that this bootstrap estimate is preferable for prediction to the error estimate currently produced by MDR. While demonstrated with MDR, the proposed estimation is applicable to all data-mining methods that use similar estimates.
Nicolini, Paolo; Frezzato, Diego
2013-06-01
Simplification of chemical kinetics description through dimensional reduction is particularly important for achieving an accurate numerical treatment of complex reacting systems, especially when stiff kinetics are considered and a comprehensive picture of the evolving system is required. To this aim several tools have been proposed in the past decades, such as sensitivity analysis, lumping approaches, and exploitation of time-scale separation. In addition, there are methods based on the existence of so-called slow manifolds, which are hyper-surfaces of lower dimension than the whole phase-space, in whose neighborhood the slow evolution occurs after an initial fast transient. On the other hand, all tools contain to some extent a degree of subjectivity which seems to be irremovable. With reference to macroscopic and spatially homogeneous reacting systems under isothermal conditions, in this work we adopt a phenomenological approach to let the dimensional reduction self-emerge from the mathematical structure of the evolution law. By transforming the original system of polynomial differential equations, which describes the chemical evolution, into a universal quadratic format, and making a direct inspection of the high-order time-derivatives of the new dynamic variables, we formulate a conjecture which leads to the concept of an "attractiveness" region in the phase-space where a well-defined state-dependent rate function ω has the simple evolution dω/dt = -ω² along any trajectory up to the stationary state. This constitutes, by itself, a drastic dimensional reduction from a system of N-dimensional equations (N being the number of chemical species) to a one-dimensional and universal evolution law for such a characteristic rate. Step-by-step numerical inspections on model kinetic schemes are presented. In the companion paper [P. Nicolini and D. Frezzato, J. Chem. Phys. 138, 234102 (2013)], 10.1063/1.4809593 this outcome will be naturally
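The one-dimensional law dω/dt = -ω² has the closed-form solution ω(t) = ω₀/(1 + ω₀t), which can be checked against a direct numerical integration. This is a generic sketch of the stated evolution law, independent of any specific kinetic scheme:

```python
def omega_exact(t, w0):
    """Closed-form solution of d(omega)/dt = -omega**2 with omega(0) = w0."""
    return w0 / (1.0 + w0 * t)

def omega_euler(t, w0, n=100000):
    """Forward-Euler integration of the same one-dimensional law."""
    w, dt = w0, t / n
    for _ in range(n):
        w -= w * w * dt
    return w
```

Separation of variables gives -dω/ω² = -dt, hence 1/ω(t) = 1/ω₀ + t, which is the closed form above; the Euler integration should agree to within its O(dt) discretization error.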
Szopa, S.; Aumont, B.; Madronich, S.
2005-02-01
The objective of this work was to develop and assess an automatic procedure to write reduced chemical schemes for modeling gaseous photooxidant pollution at different scales. The method is based on (i) the development of a tool for writing the fully explicit schemes for VOC oxidation and (ii) the assessment of reduced schemes using the fully explicit scheme as a reference. The reference scheme contained ca. seventy emitted VOCs chosen to be representative of both anthropogenic and biogenic emissions, and their atmospheric degradation chemistry involving more than two million reactions and 350 000 species was written using an expert system generator approach. Three methods were applied to reduce the size of chemical schemes: (i) use of operators, based on the redundancy of the reaction sequences involved in the VOC oxidation, (ii) lumping of primary species having similar reactivities and (iii) lumping of secondary products into surrogate species. The number of species in the final reduced scheme is 150, i.e. low enough for 3-D modeling purposes using CTMs. Comparisons between the fully explicit and reduced schemes, carried out with a box model for several typical tropospheric conditions, showed that the reduced chemical scheme accurately predicts ozone concentrations and some other aspects of oxidant chemistry for both polluted and clean tropospheric conditions.
Coset space dimensional reduction and classification of semi-realistic particle physics models
Douzas, G.; Grammatikopoulos, T. [National Technical University of Athens, Zografou Campus, 157 80 Zografou, Athens (Greece); Madore, J. [Laboratoire de Physique Theorique, Universite de Paris-Sud, Batiment 211, 91405 Orsay (France); Zoupanos, G.
2008-04-15
Starting from a Yang-Mills-Dirac theory defined in ten dimensions, we classify the semi-realistic particle physics models resulting from its Forgacs-Manton dimensional reduction. The higher-dimensional gauge group is chosen to be E_8. This choice, as well as the dimensionality of the space-time, is suggested by the heterotic string theory. Furthermore, we assume that the space-time on which the theory is defined can be written in the compactified form M^4 x B, with M^4 the ordinary Minkowski spacetime and B=S/R a 6-dim homogeneous coset space. We constrain our investigation to those cases where the dimensional reduction leads in four dimensions to phenomenologically interesting and anomaly-free GUTs such as E_6, SO(10) and SU(5). However, the surviving four-dimensional scalars, which transform in the fundamental of the resulting gauge group, are not suitable for the superstrong symmetry breaking of the Standard Model. The main objective of our work is to investigate to what extent the latter can be achieved by employing the Wilson flux breaking mechanism. (Abstract Copyright [2008], Wiley Periodicals, Inc.)
Hügli, R V; Duff, G; O'Conchuir, B; Mengotti, E; Rodríguez, A Fraile; Nolting, F; Heyderman, L J; Braun, H B
2012-12-28
Artificial spin-ice systems consisting of nanolithographic arrays of isolated nanomagnets are model systems for the study of frustration-induced phenomena. We have recently demonstrated that monopoles and Dirac strings can be directly observed via synchrotron-based photoemission electron microscopy, where the magnetic state of individual nanoislands can be imaged in real space. These experimental results of Dirac string formation are in excellent agreement with Monte Carlo simulations of the hysteresis of an array of dipoles situated on a kagome lattice with randomized switching fields. This formation of one-dimensional avalanches in a two-dimensional system is in sharp contrast to disordered thin films, where avalanches associated with magnetization reversal are two-dimensional. The self-organized restriction of avalanches to one dimension provides an example of dimensional reduction due to frustration. We give simple explanations for the origin of this dimensional reduction and discuss the disorder dependence of these avalanches. We conclude with the explicit demonstration of how these avalanches can be controlled via locally modified anisotropies. Such a controlled start and stop of avalanches will have potential applications in data storage and information processing.
Extrudate Expansion Modelling through Dimensional Analysis Method
A new model framework is proposed to correlate extrudate expansion and extrusion operation parameters for a food extrusion cooking process through the dimensional analysis principle, i.e. the Buckingham pi theorem. Three dimensionless groups, i.e. energy, water content and temperature, are suggested ... to describe the extrudate expansion. From the three dimensionless groups, an equation with three experimentally determined parameters is derived to express the extrudate expansion. The model is evaluated with whole wheat flour and aquatic feed extrusion experimental data. The average deviations...
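A correlation of this kind is commonly fitted as a power law in the dimensionless groups; the sketch below fits E = C·Π₁^a·Π₂^b·Π₃^c by linear least squares in log space. The functional form and all numbers here are illustrative assumptions, not the paper's fitted model or parameter values:

```python
import numpy as np

def fit_power_law(pi, e):
    """Fit E = C * Pi1**a * Pi2**b * Pi3**c by least squares in log space.
    `pi` is an (n, 3) array of dimensionless-group values, `e` the measured
    expansion. (Generic form; the paper's exact correlation may differ.)"""
    A = np.column_stack([np.ones(len(e)), np.log(pi)])
    coef, *_ = np.linalg.lstsq(A, np.log(e), rcond=None)
    return np.exp(coef[0]), coef[1:]

# synthetic demonstration: recover known exponents from noise-free data
rng = np.random.default_rng(1)
pi = rng.uniform(0.5, 2.0, size=(30, 3))   # three dimensionless groups
e = 2.0 * pi[:, 0] ** 0.5 * pi[:, 1] ** -0.3 * pi[:, 2] ** 1.2
C, exponents = fit_power_law(pi, e)
```

Taking logs turns the three-parameter power law into an ordinary linear regression, which is why three experimentally determined parameters (plus a prefactor) suffice.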
Fetal magnetocardiography: Methods for rapid data reduction
Mosher, John C.; Flynn, Edward R.; Quinn, A.; Weir, A.; Shahani, U.; Bain, R. J. P.; Maas, P.; Donaldson, G. B.
1997-03-01
Fetal magnetocardiography (fMCG) provides a unique method for noninvasive observation of the fetal heart. Electrical currents generated by excitable tissues within the fetal heart yield measurable external magnetic fields. Measurements are performed with superconducting quantum interference devices inductively coupled to magnetometer or gradiometer coils, and the resulting signals are converted to digital form in the data acquisition system. The measured fields are usually contaminated by fetal and maternal movements (usually respiration), other physiological fields such as skeletal muscle contraction, the maternal cardiac signal, and environmental electromagnetic fields. Sensitivity to relatively distant sources, both physiological and environmental, is substantially reduced by the use of magnetic gradiometers. Other contaminants may be removed by proper signal conditioning which may be automatically applied using "black box" algorithms that are transparent to the user and highly efficient. These procedures can rapidly reduce the complex signal plus noise waveforms to the desired fMCG with minimal operator interference.
Simple noise-reduction method based on nonlinear forecasting
Tan, James P. L.
2017-03-01
Nonparametric detrending or noise reduction methods are often employed to separate trends from noisy time series when no satisfactory models exist to fit the data. However, conventional noise reduction methods depend on subjective choices of smoothing parameters. Here we present a simple multivariate noise reduction method based on available nonlinear forecasting techniques. These are in turn based on state-space reconstruction for which a strong theoretical justification exists for their use in nonparametric forecasting. The noise reduction method presented here is conceptually similar to Schreiber's noise reduction method using state-space reconstruction. However, we show that Schreiber's method has a minor flaw that can be overcome with forecasting. Furthermore, our method contains a simple but nontrivial extension to multivariate time series. We apply the method to multivariate time series generated from the Van der Pol oscillator, the Lorenz equations, the Hindmarsh-Rose model of neuronal spiking activity, and to two other univariate real-world data sets. It is demonstrated that noise reduction heuristics can be objectively optimized with in-sample forecasting errors that correlate well with actual noise reduction errors.
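A toy analogue of the state-space approach: embed the series in delay coordinates and replace each value by an average over its nearest neighbours in the reconstructed space. This is plain nearest-neighbour smoothing rather than the paper's forecasting-based scheme, and all parameters and data below are illustrative:

```python
import math
import random

def embed(series, dim, tau=1):
    """Delay-coordinate (state-space) reconstruction of a scalar series."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

def nn_denoise(series, dim=3, k=8):
    """Replace each value by the mean over its k nearest neighbours in the
    reconstructed state space (a toy analogue of forecasting-based smoothing)."""
    pts = embed(series, dim)
    out = list(series)
    for i, p in enumerate(pts):
        order = sorted(range(len(pts)),
                       key=lambda j: sum((a - b) ** 2 for a, b in zip(p, pts[j])))
        nbrs = order[:k]
        # smooth the last coordinate of the embedding window
        out[i + dim - 1] = sum(pts[j][-1] for j in nbrs) / k
    return out

# demonstration on a noisy sine wave
rng = random.Random(0)
t = [0.05 * i for i in range(200)]
clean = [math.sin(x) for x in t]
noisy = [c + rng.uniform(-0.2, 0.2) for c in clean]
den = nn_denoise(noisy)
mse_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean)) / len(clean)
mse_den = sum((a - b) ** 2 for a, b in zip(den, clean)) / len(clean)
```

Neighbours in delay-coordinate space share a similar dynamical state, so averaging over them suppresses noise without assuming any parametric model of the trend.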
Dimensional Reduction for Filters of Nonlinear Systems with Time-Scale Separation
2013-03-01
Namachchivaya, N. Sri. Report AFRL-OSR-VA-TR-2013-0009. [Only reference-list fragments were extracted for this entry, e.g. Rapp, Edwin Kreuzer and N. Sri Namachchivaya, "Reduced Normal Forms for Nonlinear Control of Underactuated Hoisting Systems," Archive of Applied Mechanics, Vol. 82, 2012, pp. 297-315; Lee DeVille, N. Sri Namachchivaya and Zoi Rapti, "Noisy Two Dimensional Non-Hamiltonian System."]
Three-dimensional decomposition method of global atmospheric circulation
LIU HaiTao; HU ShuJuan; XU Ming; CHOU JiFan
2008-01-01
By adopting the idea of three-dimensional Walker, Hadley and Rossby stream functions, the global atmospheric circulation can be considered as the sum of three stream functions from a global perspective. A mathematical model of the three-dimensional decomposition of the global atmospheric circulation is therefore proposed, and the existence and uniqueness of the decomposition are proved. The model also includes a numerical method with no truncation error at the discrete three-dimensional grid points. Results show that the three-dimensional stream functions exist and are unique for a given velocity field. The generalized form of the three-dimensional stream functions is equivalent to the velocity field in representing the features of atmospheric motion, and the vertical velocity calculated through the model can represent the main characteristics of the vertical motion. In sum, the three-dimensional decomposition of the atmospheric circulation is convenient for further investigation of the features of global atmospheric motions.
Holbrook, Andrew; Vandenberg-Rodes, Alexander; Fortin, Norbert; Shahbaba, Babak
2017-01-01
Neuroscientists are increasingly collecting multimodal data during experiments and observational studies. Different data modalities, such as EEG, fMRI, LFP, and spike trains, offer different views of the complex systems contributing to neural phenomena. Here, we focus on joint modeling of LFP and spike train data, and present a novel Bayesian method for neural decoding to infer behavioral and experimental conditions. This model performs supervised dual-dimensionality reduction: it learns low-dimensional representations of two different sources of information that not only explain variation in the input data itself, but also predict extra-neuronal outcomes. Despite being one probabilistic unit, the model consists of multiple modules: exponential PCA and wavelet PCA are used for dimensionality reduction in the spike train and LFP modules, respectively; these modules simultaneously interface with a Bayesian binary regression module. We demonstrate how this model may be used for prediction, parametric inference, and identification of influential predictors. In prediction, the hierarchical model outperforms other models trained on LFP alone, spike train alone, and combined LFP and spike train data. We compare two methods for modeling the loading matrix and find them to perform similarly. Finally, model parameters and their posterior distributions yield scientific insights.
Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga-Koontz Transform
Binol, Hamidullah; Ochilov, Shuhrat; Alam, Mohammad S.; Bal, Abdullah
2017-02-01
Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving features belonging to the target is crucial to the performance of target detection/recognition techniques. The Fukunaga-Koontz Transform (FKT), a supervised band reduction technique, can meet this requirement. FKT achieves feature selection by transforming into a new space in which the feature classes have complementary eigenvectors. Analysis of these eigenvectors under two classes, target and background clutter, can be utilized for target-oriented band reduction, since each basis function best represents the target class while carrying the least information about the background class. By selecting the few eigenvectors most relevant to the target class, the dimension of hyperspectral data can be reduced, which presents significant advantages for near-real-time target detection applications. The nonlinear properties of the data can be extracted by a kernel approach, which provides better target features. We therefore propose constructing a kernel FKT (KFKT) for target-oriented band reduction. The performance of the proposed KFKT-based target-oriented dimensionality reduction algorithm has been tested on two real-world hyperspectral datasets, and the results are reported.
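The linear FKT underlying the kernel variant can be sketched as follows: whiten the summed class covariances, then eigendecompose the whitened target covariance; the clutter class then shares the same eigenvectors with complementary eigenvalues 1 - λ. This is a generic sketch on synthetic data, not the paper's kernelized algorithm:

```python
import numpy as np

def fkt(X_target, X_clutter):
    """Minimal linear Fukunaga-Koontz Transform. Rows are samples, columns
    are spectral bands. Returns the shared basis and the target eigenvalues."""
    S1 = np.cov(X_target, rowvar=False)
    S2 = np.cov(X_clutter, rowvar=False)
    # whitening operator P such that P.T @ (S1 + S2) @ P = I
    vals, vecs = np.linalg.eigh(S1 + S2)
    P = vecs @ np.diag(vals ** -0.5)
    # eigenvectors of the whitened target covariance; since the whitened
    # covariances sum to the identity, the clutter covariance has the same
    # eigenvectors with eigenvalues 1 - lam
    lam, V = np.linalg.eigh(P.T @ S1 @ P)
    return P @ V, lam

# synthetic data: an anisotropic "target" class and isotropic "clutter"
rng = np.random.default_rng(0)
X1 = rng.standard_normal((100, 5)) * np.array([3.0, 1.0, 1.0, 1.0, 0.5])
X2 = rng.standard_normal((100, 5))
W, lam = fkt(X1, X2)
```

Bands (basis vectors) with λ near 1 are dominated by target energy and near 0 by clutter, which is what makes the transform useful for target-oriented band selection.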
Three-dimensional PtNi Hollow Nanochains as Enhanced Electrocatalyst for Oxygen Reduction Reaction
Fu, Shaofang; Zhu, Chengzhou; Song, Junhua; Engelhard, Mark H.; He, Yang; Du, Dan; Wang, Chong M.; Lin, Yuehe
2016-05-05
Three-dimensional porous PtNi hollow nanochains are successfully synthesized via a galvanic replacement method using Ni nanosponges as sacrificial templates in an aqueous solution. It is found that the composition and shell thickness of the 3D PtNi hollow nanochains can be easily controlled by tuning the concentration of Pt precursors. The as-prepared PtNi hollow nanochains with optimized composition present a high electrochemical surface area (70.8 m^2 g^-1), which is close to that of commercial Pt/C (83 m^2 g^-1). Moreover, the PtNi catalyst with a Pt content of ~77% presents superior electrocatalytic performance for the oxygen reduction reaction compared to commercial Pt/C. It shows a mass activity of 0.58 A mg_Pt^-1, which is around 3 times higher than that of Pt/C. This strategy may be extended to the preparation of other multimetallic nanocrystals with 3D hollow nanostructures, which are expected to present high catalytic properties.
丘宏龙; 陈海鹏; 林桦楠; 陈凯
2014-01-01
Objective: To observe the clinical effects of treating 34 cases of lumbar facet joint disorders with a three-dimensional reduction method combined with lumbar lateral recess injection of Xiangdan and Huangqi injecta. Methods: 68 cases of lumbar facet joint disorders were randomly divided into a treatment group and a control group, 34 cases in each group. The control group was treated with the three-dimensional reduction method, with manipulation therapy once a day, while the treatment group additionally received lumbar lateral recess injection of Xiangdan and Huangqi injecta on the first, fourth, and seventh days. Clinical observations of the two groups were made on the tenth day. Results: In the treatment group, 10 cases were cured, 20 cases were markedly effective, and 4 cases were ineffective, for an efficiency of 88.24%; in the control group, 6 cases were cured, 16 cases were markedly effective, and 12 cases were ineffective, for an efficiency of 64.71%. The difference between the two groups was statistically significant (P < 0.01). The PRI, VAS and PPI scores of the treatment group were significantly lower than those of the control group, the difference being statistically significant (P < 0.01). Conclusion: The effect of the three-dimensional reduction method combined with lumbar lateral recess injection is obvious in the treatment of lumbar facet joint disorders and worthy of clinical promotion.
SU(2) Reduction of Six-dimensional (1,0) Supergravity
Lü, H; Sezgin, E
2003-01-01
We obtain a gauged supergravity theory in three dimensions with eight real supersymmetries by means of a Scherk-Schwarz reduction of pure N=(1,0) supergravity in six dimensions on the SU(2) group manifold. The SU(2) Yang-Mills fields in the model propagate, since they have an ordinary kinetic term in addition to Chern-Simons couplings. The other propagating degrees of freedom consist of a dilaton, five scalars which parameterise the coset SL(3,R)/SO(3), three vector fields in the adjoint of SU(2), and twelve spin 1/2 fermions. The model admits an AdS_3 vacuum solution. We also show how a charged black hole solution can be obtained, by performing a dimensional reduction of the rotating self-dual string of six-dimensional (1,0) supergravity.
N=2-Maxwell-Chern-Simons Model with Anomalous Magnetic Moment Coupling via Dimensional Reduction
Christiansen, H R; Helayël-Neto, José A; Mansur, L R; Nogueira, A L M A
1999-01-01
An N=1 supersymmetric version of the Cremmer-Scherk-Kalb-Ramond model with non-minimal coupling to matter is built up both in terms of superfields and in a component-field formalism. By adopting a dimensional reduction procedure, the N=2, D=3 counterpart of the model comes out, with two main features: a genuine (diagonal) Chern-Simons term and an anomalous magnetic moment coupling between matter and the gauge potential.
A 2+1-Dimensional Non-Isothermal Magnetogasdynamic System. Hamiltonian-Ermakov Integrable Reduction
Hongli An
2012-08-01
A 2+1-dimensional anisentropic magnetogasdynamic system with a polytropic gas law is shown to admit, when γ=2, an integrable elliptic vortex reduction to a nonlinear dynamical subsystem with underlying integrable Hamiltonian-Ermakov structure. Exact solutions of the magnetogasdynamic system are thereby obtained which describe a rotating elliptic plasma cylinder. The semi-axes of the elliptical cross-section, remarkably, satisfy an Ermakov-Ray-Reid system.
Use of dimensionality reduction for structural mapping of hip joint osteoarthritis data
Theoharatos, C.; Boniatis, I.; Panagiotopoulos, E.; Panayiotakis, G.; Fotopoulos, S.
2009-10-01
A visualization-based, computer-oriented classification scheme is proposed for assessing the severity of hip osteoarthritis (OA) using dimensionality reduction techniques. The introduced methodology addresses the limited ability of physicians to structurally organize the entire available set of medical data into semantically similar categories, and provides the capability to make visual observations among the ensemble of data using low-dimensional biplots. In this work, 18 pelvic radiographs of patients with verified unilateral hip OA are evaluated by experienced physicians and assessed as Normal, Mild or Severe following the Kellgren and Lawrence scale. Two regions of interest corresponding to radiographic hip joint spaces are determined and representative features are extracted using a typical texture analysis technique. The structural organization of all hip OA data is accomplished using distance- and topology-preservation-based dimensionality reduction techniques. The resulting map is a low-dimensional biplot that reflects the intrinsic organization of the ensemble of available data and can be directly accessed by the physician. This visualization scheme can potentially reveal critical data similarities and help the operator to visually refine the initial diagnosis. In addition, it can be used to detect putative clustering tendencies, examine the presence of data similarities and indicate the existence of possible false alarms in the initial perceptual evaluation.
Direct Linear Transformation Method for Three-Dimensional Cinematography
Shapiro, Robert
1978-01-01
The ability of the Direct Linear Transformation Method for three-dimensional cinematography to locate points in space was shown to meet the accuracy requirements associated with research on human movement. (JD)
Preliminary comparison of different reduction methods of graphene oxide
Yu Shang; Dong Zhang; Yanyun Liu; Chao Guo
2015-02-01
The reduction of graphene oxide (GO) is a promising route to bulk-produce graphene-based sheets. Different reduction processes result in reduced graphene oxide (RGO) with different properties. In this paper three reduction methods, chemical, thermal and electrochemical reduction, were compared on three aspects, morphology and structure, reduction degree and electrical conductivity, by means of scanning electron microscopy (SEM), X-ray diffraction (XRD), Fourier transform infrared (FT-IR) spectroscopy, X-ray photoelectron spectroscopy (XPS) and four-point probe conductivity measurement. Understanding the different characteristics of different RGO by preliminary comparison is helpful in tailoring the characteristics of graphene materials for diverse applications and developing a simple, green, and efficient method for the mass production of graphene.
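The four-point probe measurement mentioned above reduces to a standard formula: for a thin film probed with a collinear head, sheet resistance is R_s = (π/ln 2)·V/I and bulk conductivity is σ = 1/(R_s·t) for film thickness t. A minimal sketch (geometry correction factors for finite sample size are omitted):

```python
import math

def four_point_sheet_resistance(V, I):
    """Sheet resistance from a collinear four-point probe measurement on a
    thin film: R_s = (pi / ln 2) * V / I (infinite-sheet geometric factor)."""
    return math.pi / math.log(2) * V / I

def conductivity(V, I, thickness):
    """Bulk conductivity sigma = 1 / (R_s * t), thickness t in metres."""
    return 1.0 / (four_point_sheet_resistance(V, I) * thickness)
```

The π/ln 2 ≈ 4.53 factor comes from the current-spreading geometry of an infinite thin sheet and is the usual starting point before sample-size corrections.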
Uniform Deterministic Discrete Method for Three Dimensional Systems
[Anonymous]
1997-01-01
For radiative direct exchange areas in three-dimensional systems, the Uniform Deterministic Discrete Method (UDDM) was adopted. The spherical-surface dividing method for a sending area element and the regular icosahedron for a sending volume element can handle the direct exchange area computation of any kind of zone pair. Numerical examples of direct exchange areas in three-dimensional systems with nonhomogeneous attenuation coefficients indicated that the UDDM can give very high numerical accuracy.
Propensity score modelling in observational studies using dimension reduction methods.
Ghosh, Debashis
2011-07-01
Conditional independence assumptions are very important in causal inference modelling as well as in dimension reduction methodologies. These are two strikingly different statistical literatures, and we study links between them in this article. The concept of covariate sufficiency plays an important role, and we provide theoretical justification for when dimension reduction and partial least squares methods will allow valid causal inference to be performed. The methods are illustrated with application to a medical study and to simulated data.
Methods for two-dimensional cell confinement.
Le Berre, Maël; Zlotek-Zlotkiewicz, Ewa; Bonazzi, Daria; Lautenschlaeger, Franziska; Piel, Matthieu
2014-01-01
Protocols described in this chapter relate to a method to dynamically confine cells in two dimensions with various microenvironments. It can be used to impose on cells a given height, with an accuracy of less than 100 nm on large surfaces (cm^2). The method is based on the gentle application of a modified glass coverslip onto a standard cell culture. Depending on the preparation, this confinement slide can impose on the cells a given geometry but also an environment of controlled stiffness, controlled adhesion, or a more complex environment. An advantage is that the method is compatible with most optical microscopy technologies and molecular biology protocols allowing advanced analysis of confined cells. In this chapter, we first explain the principle and issues of using these slides to confine cells in a controlled geometry and describe their fabrication. Finally, we discuss how the nature of the confinement slide can vary and provide an alternative method to confine cells with gels of controlled rigidity.
Simple self-reduction method for anterior shoulder dislocation
Reiner Wirbel; Martin Ruppert; Elmar Schwarz; Bernhard Zapp
2014-01-01
Objective: To demonstrate and evaluate a modified simple method for self-reduction of anterior shoulder dislocation and its significance in the emergency room. Methods: The Boss-Holzach-Matter method for self-reduction of anterior shoulder dislocation is described. Patients with an anterior shoulder dislocation were retrospectively analysed concerning age, gender, type of anterior shoulder dislocation, occurrence of associated fractures, time between injury and reduction, reduction time, and method of reduction with its respective success rate. Results: Eighty-six patients (52 men, 34 women, mean age 49 years) were treated from January 2010 to June 2014. The reduction time ranged between 20 seconds and 6 min (mean 1.5 min). The subcoracoid type of shoulder dislocation was seen in 72 cases (84%), the subglenoid type in 14 cases (16%). Associated fractures were seen in 20 cases, proportionally more often in subglenoid dislocations: 12 at the greater tuberosity, 6 at the inferior rim of the glenoid fossa, and 2 at both localizations. The Boss-Holzach-Matter method was used in 35 cases with a success rate of 71.5%; the Kocher method and the traction/countertraction method with premedication were used in 14 and 17 cases, with success rates of 64% and 70%, respectively. All other cases and the failed primary attempts required hypnotic medication. All patients older than 70 (n=16) were not able to perform the self-reduction procedure. Conclusion: The presented Boss-Holzach-Matter method for reduction of anterior shoulder dislocation is a simple method without the need for anaesthesia, but cooperation from patients is crucial. Its success rate is comparable with other established methods.
A mixed model reduction method for preserving selected physical information
Zhang, Jing; Zheng, Gangtie
2017-03-01
A new model reduction method in the frequency domain is presented. By combining model reduction techniques from the time domain and the frequency domain, the dynamic model is condensed to selected physical coordinates, and the contribution of slave degrees of freedom is taken as a modification to the model in the form of the effective modal mass of virtually constrained modes. The reduced model preserves the physical information related to the selected physical coordinates, such as physical parameters and the physical space positions of the corresponding structure components. For cases of non-classical damping, the method is extended to model reduction in the state space while still containing only the selected physical coordinates. Numerical results are presented to validate the method and show the effectiveness of the model reduction.
Measurement reduction method for the Millikan oil-drop experiment
Li, Yingzi; Zhang, Liwen; Shan, Guanqiao; Li, Jin; Cui, Huaiyang; Chen, Ziyu
2015-09-01
To overcome the shortcomings of the measurement procedure used in the Millikan oil-drop experiment course, this paper suggests a measurement reduction method based on simplification of the conventional formula. In this method, only the voltage and the fall time need to be recorded. The method also simplifies the analysis of measurement error and gives proper parameter intervals, which results in a small measurement error. A calculation of the value of the elementary charge is carried out, which verifies the measurement reduction method.
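The reduction to voltage and fall time can be sketched by folding the apparatus constants into the standard balancing-voltage analysis: Stokes' law gives the droplet radius from the fall velocity, and the balance condition qU/d = mg then yields the charge. The constants below are typical assumed values for such an apparatus, not those from the paper:

```python
import math

# Assumed apparatus constants (illustrative values): oil density rho (kg/m^3),
# air viscosity eta (Pa s), plate spacing d (m), fall distance l (m), g (m/s^2)
RHO, ETA, D, L, G = 981.0, 1.83e-5, 5.0e-3, 2.0e-3, 9.8

def charge_from_voltage_and_fall_time(U, t_fall):
    """Droplet charge from the two recorded quantities only: balancing
    voltage U (V) and fall time t_fall (s) over the fixed distance L."""
    v_f = L / t_fall                                  # terminal fall velocity
    a = math.sqrt(9 * ETA * v_f / (2 * RHO * G))      # radius via Stokes' law
    m = 4.0 / 3.0 * math.pi * a ** 3 * RHO            # droplet mass
    return m * G * D / U                              # q E = m g, E = U / D

q = charge_from_voltage_and_fall_time(200.0, 20.0)
```

With everything else held fixed by the apparatus, the charge is indeed a function of U and t_fall alone, which is the essence of the measurement reduction.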
Coupled computation method of physics fields in aluminum reduction cells
周乃君; 梅炽; 姜昌伟; 周萍; 李劼
2003-01-01
Given the importance of studying the physics fields of aluminum reduction cells and of computer simulation for optimizing cell design and developing new cell types, and based on an analysis of the coupled relations among the physics fields in aluminum reduction cells, mathematical and physical models were established and a coupled computation method for the distribution of electric current and magnetic field, temperature profile and metal velocity in cells was developed. The computational results for 82 kA prebaked cells agree well with the measured results, and the errors of the maximum values calculated for the three main physics fields are less than 10%, which proves that the model and algorithm are valid. The software developed can therefore be applied not only to the optimization of traditional aluminum reduction cells, but also to establishing a better technological basis for developing new drained aluminum reduction cells.
Romanov, Dmitri; Smith, Stanley; Brady, John; Levis, Robert J.
2008-02-01
We have studied the application of the diffusion mapping technique to dimensionality reduction and clustering in multidimensional optical datasets. The combinational (input-output) data were obtained by sampling search spaces related to optimization of a nonlinear physical process, short-pulse second harmonic generation. The diffusion mapping technique hierarchically reduces the dimensionality of the data set and unifies the statistics of input (the pulse shape) and output (the integral output intensity) parameters. The information content of the emerging clustered pattern can be optimized by modifying the parameters of the mapping procedure. The low-dimensional pattern captures essential features of the nonlinear process, based on a finite sampling set. In particular, the apparently parabolic two-dimensional projection of this pattern exhibits regular evolution with the increase of higher-intensity data in the sampling set. The basic shape of the pattern and the evolution are relatively insensitive to the size of the sampling set, as well as to the details of the mapping procedure. Moreover, the experimental data sets and the sets produced numerically on the basis of a theoretical model are mapped into patterns of remarkable similarity (as quantified by the similarity of the related quadratic-form coefficients). The diffusion mapping method is robust and capable of predicting higher-intensity points from a set of low-intensity points. With these attractive features, diffusion mapping stands poised to become a helpful statistical tool for preprocessing analysis of vast and multidimensional combinational optical datasets.
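A bare-bones diffusion map, sketched for intuition: Gaussian affinities are row-normalized into a Markov matrix whose leading nontrivial eigenvectors serve as low-dimensional coordinates. The kernel width, data, and number of coordinates below are illustrative; this is not the authors' exact procedure for the pulse-shape datasets:

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2):
    """Minimal diffusion map: Gaussian affinities, Markov normalisation,
    top nontrivial eigenvectors scaled by their eigenvalues as coordinates."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    K = np.exp(-D2 / eps)
    P = K / K.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:1 + n_coords]            # skip trivial constant eigenvector
    return vecs[:, idx].real * vals[idx].real

# demonstration: points sampled on a circle embedded in the plane
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])
Y = diffusion_map(X, eps=0.5)
```

Because P is similar to a symmetric matrix, its spectrum is real; the eigenvalue scaling makes Euclidean distances in the new coordinates approximate diffusion distances on the data.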
Rydzewski, J; Nowak, W
2016-04-12
In this work we propose an application of a nonlinear dimensionality reduction method to represent the high-dimensional configuration space of the ligand-protein dissociation process in a manner facilitating interpretation. Rugged ligand expulsion paths are mapped into 2-dimensional space. The mapping retains the main structural changes occurring during the dissociation. The topological similarity of the reduced paths may be easily studied using the Fréchet distances, and we show that this measure facilitates machine learning classification of the diffusion pathways. Further, the low-dimensional configuration space allows for identification of residues active in transport during ligand diffusion from a protein. The utility of this approach is illustrated by examination of the configuration space of cytochrome P450cam involved in expelling camphor by means of enhanced all-atom molecular dynamics simulations. The expulsion trajectories are sampled and constructed on-the-fly during molecular dynamics simulations using the recently developed memetic algorithms [Rydzewski, J.; Nowak, W. J. Chem. Phys. 2015, 143(12), 124101]. We show that the memetic algorithms are effective for enforcing ligand diffusion and cavity exploration in the P450cam-camphor complex. Furthermore, we demonstrate that machine learning techniques are helpful in inspecting ligand diffusion landscapes and provide useful tools to examine structural changes accompanying rare events.
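The topological comparison of reduced paths mentioned above relies on the Fréchet distance. A minimal sketch of its discrete variant, computed with the classic dynamic program, might look as follows (the input paths here are hypothetical; the paper applies the measure to 2D-reduced expulsion trajectories):

```python
import numpy as np
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polygonal paths
    P (n, d) and Q (m, d), via the standard dynamic program."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)

    @lru_cache(maxsize=None)
    def c(i, j):
        # Coupling measure up to points P[i], Q[j]
        d = np.linalg.norm(P[i] - Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)
```

The resulting pairwise distance matrix between paths can then feed any standard classifier or clustering routine, which is the machine-learning step the abstract refers to.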
Many-body basis-set reduction applied to the two-dimensional t-Jz model
Riera, J.; Dagotto, E.
1993-06-01
A simple variation of the Lanczos method is discussed. The technique is based on a systematic reduction of the size of the Hilbert space of the model under consideration, and it has many similarities with the basis-set-reduction approach recently introduced by Wenzel and Wilson in the context of quantum chemistry. As an example, the two-dimensional t-Jz model of strongly correlated electrons is studied. Accurate results for the ground-state energy can be obtained on clusters of up to 50 sites, which are unreachable by conventional Lanczos approaches. In particular, the energy of one and two holes is analyzed as a function of Jz/t. In the bulk limit, the numerical results suggest that a finite coupling (Jz/t)c ≈ 0.18 is necessary to induce "binding" of holes in the model.
DD-HDS: A method for visualization and exploration of high-dimensional data.
Lespinats, Sylvain; Verleysen, Michel; Giron, Alain; Fertil, Bernard
2007-09-01
Mapping high-dimensional data in a low-dimensional space, for example for visualization, is a problem of major concern in data analysis. This paper presents data-driven high-dimensional scaling (DD-HDS), a nonlinear mapping method that follows the multidimensional scaling (MDS) approach, based on the preservation of distances between pairs of data. It improves on existing competitors in the representation of high-dimensional data in two ways. It introduces (1) a specific weighting of distances between data points, taking into account the concentration-of-measure phenomenon, and (2) a symmetric handling of short distances in the original and output spaces, avoiding false neighbor representations while still allowing some necessary tears in the original distribution. More precisely, the weighting is set according to the effective distribution of distances in the data set, with the exception of a single user-defined parameter setting the tradeoff between local neighborhood preservation and global mapping. The optimization of the stress criterion designed for the mapping is realized by "force-directed placement" (FDP). Mappings of low- and high-dimensional data sets are presented as illustrations of the features and advantages of the proposed algorithm. The weighting function specific to high-dimensional data and the symmetric handling of short distances can easily be incorporated in most distance-preservation-based nonlinear dimensionality reduction methods.
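The core idea of DD-HDS, weighting an MDS-type stress so that short distances in either space dominate, can be sketched as below. The symmetric min-based Gaussian weighting here is an illustrative stand-in, not the paper's exact weighting function:

```python
import numpy as np

def weighted_stress(D_high, D_low, sigma=1.0):
    """MDS-type stress between a high-dimensional distance matrix
    D_high and a low-dimensional one D_low. Taking the minimum of
    the two distances inside the weight makes the handling symmetric:
    a pair that is close in EITHER space gets a large weight, which
    penalizes false neighbors as well as torn true neighbors."""
    W = np.exp(-np.minimum(D_high, D_low) ** 2 / (2 * sigma ** 2))
    return float((W * (D_high - D_low) ** 2).sum())
```

An FDP-style optimizer would then move the low-dimensional points to reduce this stress, recomputing `D_low` at each step.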
An Improved Peak Sidelobe Reduction Method for Subarrayed Beam Scanning
Hang Hu
2015-01-01
This paper focuses on PSL (peak sidelobe level) reduction for subarrayed beam scanning in phased-array radars. The desired GSPs (Gaussian subarray patterns) are achieved by creating a subarray weighting network. The GSP-based method reduces the PSL of the array pattern; compared with the method based on a desired subarray pattern defined by an ideal space-domain filter, the PSL reduction performance is improved remarkably. Further, based on the concept of using superelement patterns to approximate the original subarray patterns, a simplified GSP-based method is proposed. The dimension of each matrix required for creating the weighting network, originally equal to the number of elements, is thereby reduced to the number of subarrays. Consequently, the computational burden is reduced remarkably, while the PSL mitigation performance is degraded only slightly. Simulation results demonstrate the validity of the introduced methods.
Lifetime of rho meson in correlation with magnetic-dimensional reduction
Kawaguchi, Mamiya [Nagoya University, Department of Physics, Nagoya (Japan); Matsuzaki, Shinya [Nagoya University, Department of Physics, Nagoya (Japan); Nagoya University, Institute for Advanced Research, Nagoya (Japan)
2017-04-15
It is naively expected that in a strong magnetic configuration, the Landau quantization prevents the neutral rho meson from decaying to the charged pion pair, so the neutral rho meson will be long-lived. To examine this naive expectation closely, we explicitly compute the charged pion loop in the magnetic field at the one-loop level, to evaluate the magnetic dependence of the lifetime for the neutral rho meson as well as its mass. Due to the dimensional reduction induced by the magnetic field (violation of the Lorentz invariance), the polarization (spin s_z = 0, ±1) modes of the rho meson, as well as the corresponding pole mass and width, are decomposed in a nontrivial manner compared to the vacuum case. To see the significance of the reduction effect, we simply take the lowest Landau level approximation to analyze the spin-dependent rho masses and widths. We find that the "fate" of the rho meson may be more complicated because of the magnetic-dimensional reduction: as the magnetic field increases, the rho width for the spin s_z = 0 starts to develop, reaches a peak, then vanishes at the critical magnetic field to which the folklore refers. On the other hand, the decay rates of the other rho modes, for s_z = ±1, monotonically increase as the magnetic field develops. The correlation between the polarization dependence and the Landau level truncation is also addressed. (orig.)
Lifetime and mass of rho meson in correlation with magnetic-dimensional reduction
Kawaguchi, Mamiya
2016-01-01
It is simply anticipated that in a strong magnetic configuration, the Landau quantization prevents the neutral rho meson from decaying to the charged pion pair, so the neutral rho meson will be long-lived. To examine this naive expectation closely, we explicitly compute the charged pion loop in the magnetic field at the one-loop level, to evaluate the magnetic dependence of the lifetime for the neutral rho meson as well as its mass. Due to the dimensional reduction induced by the magnetic field (violation of the Lorentz invariance), the polarization (spin $s_z=0,\pm 1$) modes of the rho meson, as well as the corresponding pole mass and width, are decomposed in a nontrivial manner compared to the vacuum case. To see the significance of the reduction effect, we simply take the lowest-Landau-level approximation to analyze the spin-dependent rho masses and widths. We find that the "fate" of the rho meson may be more complicated because of the magnetic-dimensional reduction: as the magnetic field increases, the rho...
Noise reduction method based on weighted manifold decomposition
Gan Jian-Chao; Xiao Xian-Ci
2004-01-01
A noise reduction method based on weighted manifold decomposition is proposed in this paper, which requires neither knowledge of the chaotic dynamics nor a choice of the number of eigenvalues. Simulations indicate that the method can increase the signal-to-noise ratio of noisy chaotic time series.
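For orientation, a related and widely used projective noise-reduction scheme for time series (delay embedding, truncated SVD of the trajectory matrix, and diagonal averaging) is sketched below. Unlike the weighted manifold decomposition of the paper, this baseline does require choosing the number of retained components (`rank`):

```python
import numpy as np

def svd_denoise(x, window=20, rank=2):
    """Denoise a scalar time series x by delay embedding, projecting
    the trajectory matrix onto its leading singular directions, and
    averaging the anti-diagonals back into a series."""
    n = len(x) - window + 1
    X = np.stack([x[i:i + window] for i in range(n)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # rank-`rank` projection
    # Diagonal averaging (Hankelization) back to a 1D series
    y = np.zeros(len(x))
    cnt = np.zeros(len(x))
    for i in range(n):
        y[i:i + window] += Xr[i]
        cnt[i:i + window] += 1
    return y / cnt
```

The signal's attractor is assumed to occupy a low-dimensional subspace of the embedding space, so truncating the SVD removes mostly noise.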
TreePM Method for Two-Dimensional Cosmological Simulations
Suryadeep Ray
2004-09-01
We describe the two-dimensional TreePM method in this paper. The 2d TreePM code is an accurate and efficient technique to carry out large two-dimensional N-body simulations in cosmology. This hybrid code combines the 2d Barnes and Hut Tree method and the 2d Particle–Mesh method. We describe the splitting of force between the PM and the Tree parts. We also estimate error in force for a realistic configuration. Finally, we discuss some tests of the code.
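The force splitting at the heart of TreePM codes assigns the rapidly varying short-range part of the force to the tree and the smooth long-range remainder to the PM grid. A sketch using the standard erfc split of the three-dimensional 1/r² force is given below; the 2D code splits the 2D force analogously, and the scale radius `r_s` is an assumed parameter:

```python
import math

def split_newtonian_force(r, r_s=1.0):
    """Split the magnitude of the 1/r^2 force at separation r into a
    short-range part (computed pairwise by the tree) and a long-range
    part (computed on the PM mesh), using the erfc splitting common
    in 3D TreePM codes. Returns (short, long)."""
    total = 1.0 / r ** 2
    short = total * (math.erfc(r / (2 * r_s))
                     + r / (r_s * math.sqrt(math.pi))
                     * math.exp(-r ** 2 / (4 * r_s ** 2)))
    return short, total - short
```

By construction the two parts sum to the full force; the short-range part dies off within a few `r_s`, which is what keeps the tree walk local and cheap.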
Design of a 3-dimensional visual illusion speed reduction marking scheme.
Liang, Guohua; Qian, Guomin; Wang, Ye; Yi, Zige; Ru, Xiaolei; Ye, Wei
2017-03-01
To determine which graphic and color combination for a 3-dimensional visual illusion speed reduction marking scheme presents the best visual stimulus, five parameters were designed. According to the Balanced Incomplete Blocks-Law of Comparative Judgment, three schemes, which produce strong stereoscopic impressions, were screened from the 25 initial design schemes of different combinations of graphics and colors. Three-dimensional experimental simulation scenes of the three screened schemes were created to evaluate four different effects according to a semantic analysis. The following conclusions were drawn: schemes with a red color are more effective than those without; the combination of red, yellow and blue produces the best visual stimulus; a larger area from the top surface and the front surface should be colored red; and a triangular prism should be painted as the graphic of the marking according to the stereoscopic impression and the coordination of graphics with the road.
Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.
Joshua I Glaser
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
Template-free Synthesis of One-dimensional Cobalt Nanostructures by Hydrazine Reduction Route
Lan Tianmin
2011-01-01
One-dimensional cobalt nanostructures with aspect ratios up to 450 have been prepared via a template-free hydrazine reduction route with external magnetic field assistance. The morphology and properties of the cobalt nanostructures were characterized by scanning electron microscopy, X-ray diffractometry, and vibrating sample magnetometry. The roles of reaction conditions such as temperature, concentration, and pH value on the morphology and magnetic properties of the fabricated Co nanostructures were investigated. This work presents a simple, low-cost, environment-friendly, and large-scale production approach to fabricating one-dimensional magnetic Co materials. The resulting materials may have potential applications in nanodevices, catalysis, and magnetic recording.
Hai-Ming Xu
The elusive but ubiquitous multifactor interactions represent a stumbling block that urgently needs to be removed in the search for determinants involved in human complex diseases. Dimensionality reduction approaches are a promising tool for this task. Many complex diseases exhibit composite syndromes that must be measured in a cluster of clinical traits with varying correlations and/or are inherently longitudinal in nature (changing over time and measured dynamically at multiple time points). A multivariate approach for detecting interactions is thus greatly needed, for the purposes of handling a multifaceted phenotype and longitudinal data, as well as improving statistical power for multiple significance testing via a two-stage testing procedure that involves a multivariate analysis for grouped phenotypes followed by univariate analysis for the phenotypes in the significant group(s). In this article, we propose a multivariate extension of generalized multifactor dimensionality reduction (GMDR) based on multivariate generalized linear, multivariate quasi-likelihood and generalized estimating equations models. Simulations and real data analysis for the cohort from the Study of Addiction: Genetics and Environment are performed to investigate the properties and performance of the proposed method, as compared with the univariate method. The results suggest that the proposed multivariate GMDR substantially boosts statistical power.
Non perturbative methods in two dimensional quantum field theory
Abdalla, Elcio; Rothe, Klaus D
1991-01-01
This book is a survey of methods used in the study of two-dimensional models in quantum field theory as well as applications of these theories in physics. It covers the subject since the first model, studied in the fifties, up to modern developments in string theories, and includes exact solutions, non-perturbative methods of study, and nonlinear sigma models.
METHODS OF REDUCTION OF FREE PHENOL CONTENT IN PHENOLIC FOAM
Bruyako Mikhail Gerasimovich
2012-12-01
A method aimed at reducing the toxicity of phenolic foams consists in the introduction of a composite mixture of chelate compounds. Raw materials applied in the production of phenolic foams include polymers FRB-1A and VAG-3; these materials are used to produce foams FRP-1. Introduction of 1% aluminum fluoride leads to a 40% reduction of the free phenol content in the foam. Introduction of crystalline zinc chloride accelerates the foaming and curing of the phenolic foams. The technology that contemplates the introduction of zeolites into the mixture includes pre-mixing with FRB-1A and subsequent mixing with VAG-3; thereafter, the composition is poured into the form, in which the process of foaming is initiated. The content of free phenol was identified using UV spectroscopy. The objective of the research was to develop methods of reducing the free phenol content in phenolic foam.
Clustering and Dimensionality Reduction to Discover Interesting Patterns in Binary Data
Palumbo, Francesco; D'Enza, Alfonso Iodice
Attention to binary data coding has increased considerably over the last decade for several reasons. The analysis of binary data characterizes several fields of application, such as market basket analysis, DNA microarray data, image mining, text mining and web-clickstream mining. The paper illustrates two different approaches exploiting a profitable combination of clustering and dimensionality reduction for the identification of non-trivial association structures in binary data. An application in the Association Rules framework supports the theory with empirical evidence.
Three-Dimensional Graphene-Based Nanomaterials as Electrocatalysts for Oxygen Reduction Reaction
Xuan Ji
2015-01-01
In recent years, three-dimensional (3D) graphene-based nanomaterials have been demonstrated to be efficient and promising electrocatalysts for the oxygen reduction reaction (ORR) in fuel cell applications. This review summarizes and categorizes recent progress on the preparation and performance of these novel materials as ORR catalysts, including heteroatom-doped 3D graphene networks, metal-free 3D graphene-based nanocomposites, nonprecious-metal-containing 3D graphene-based nanocomposites, and precious-metal-containing 3D graphene-based nanocomposites. The challenges and future perspectives of this field are also discussed.
Dimensionality reduction for click-through rate prediction: Dense versus sparse representation
Fruergaard, Bjarne Ørum; Hansen, Toke Jansen; Hansen, Lars Kai
2013-01-01
In online advertising, display ads are increasingly being placed based on real-time auctions where the advertiser who wins gets to serve the ad. This is called real-time bidding (RTB). In RTB, auctions have very tight time constraints, on the order of 100 ms. Therefore, mechanisms for bidding intelligently, such as click-through rate prediction, need to be sufficiently fast. In this work, we propose to use dimensionality reduction of the user-website interaction graph in order to produce simplified features of users and websites that can be used as predictors of click-through rate. We demonstrate ...
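A minimal sketch of the kind of dense dimensionality reduction discussed here: factor the user-website interaction matrix with a truncated SVD and use the resulting low-dimensional rows as user and website features. The matrix shapes and the rank `k` below are hypothetical, and the paper also considers sparse representations:

```python
import numpy as np

def svd_features(A, k=8):
    """Given a (users x websites) interaction matrix A, return dense
    rank-k user features (rows of U * s) and website features (rows
    of V), usable as inputs to a click-through-rate predictor."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k].T
```

Because the features are precomputed offline, scoring a (user, website) pair at auction time reduces to a k-dimensional dot product, which fits the ~100 ms RTB budget.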
Exactly Embedded Wavefunction Methods for Characterizing Nitrogen Reduction Catalysis
2015-01-15
AFRL-OSR-VA-TR-2015-0038. Final report: Exactly Embedded Wavefunction Methods for Characterizing Nitrogen Reduction Catalysis. Thomas Miller, California Institute of Technology (grant FA9550...). The project developed and applied exactly embedded density functional and wavefunction theory methods for the investigation of small-molecule activation.
Numerical methods for high-dimensional probability density function equations
Cho, H.; Venturi, D.; Karniadakis, G. E.
2016-01-01
In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker-Planck and Dostupov-Pugachev equations), random wave theory (Malakhov-Saichev equations) and coarse-grained stochastic systems (Mori-Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interaction in low-orders that resembles the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) framework of kinetic gas theory and it yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.
Hollerbach, K.; Van Vorhis, R.L. [Lawrence Livermore National Lab., CA (United States); Hollister, A. [Louisiana State Univ., Shreveport, LA (United States)
1996-03-01
Wrist posture and rapid wrist movements are risk factors for work-related musculoskeletal disorders. Measurement studies frequently involve optoelectronic methods in which markers are placed on the subject's hand and wrist and the trajectories of the markers are tracked in three-dimensional space. A goal of wrist posture measurement is to quantitatively establish wrist posture orientation. Accuracy and fidelity of the measurement data with respect to the kinematic mechanism are essential in wrist motion studies. Fidelity to the physical kinematic mechanism can be limited by the choice of kinematic modeling techniques and the representation of motion. Frequently, ergonomic studies involving wrist kinematics make use of two-dimensional measurement and analysis techniques. Two-dimensional measurement of human joint motion involves the analysis of three-dimensional displacements in an observer-selected measurement plane. Accurate marker placement and alignment of the joint motion plane with the observer plane are difficult. In nature, joint axes can exist at any orientation and location relative to an arbitrarily chosen global reference frame. An arbitrary axis is any axis that is not coincident with a reference coordinate. We calculate the errors that result from measuring joint motion about an arbitrary axis using two-dimensional methods.
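The projection error described here can be illustrated numerically: rotate a marker about an axis tilted out of the viewing direction and measure the angle swept by its projection in the observer plane. The viewing axis, marker start position, and tilt geometry below are assumptions for illustration, not the authors' protocol:

```python
import numpy as np

def apparent_angle(theta_deg, tilt_deg):
    """Apparent (projected) rotation angle, in degrees, when a marker
    rotates by theta_deg about an axis tilted tilt_deg away from the
    viewing direction (the z-axis). With zero tilt the 2D measurement
    is exact; with tilt, the in-plane angle deviates from theta_deg."""
    t, a = np.radians(theta_deg), np.radians(tilt_deg)
    n = np.array([np.sin(a), 0.0, np.cos(a)])   # tilted rotation axis
    p = np.array([0.0, 1.0, 0.0])               # marker start position
    # Rodrigues rotation of p about n by angle t
    q = (p * np.cos(t) + np.cross(n, p) * np.sin(t)
         + n * np.dot(n, p) * (1 - np.cos(t)))
    # Angle swept by the projection onto the observer (x-y) plane
    ang = np.degrees(np.arctan2(q[1], q[0]) - np.arctan2(p[1], p[0]))
    return float(ang % 360)
```

Even modest misalignment between the joint axis and the observer plane produces a measurable discrepancy, which is the error source the study quantifies.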
Three-dimensional decomposition method of global atmospheric circulation
2008-01-01
By adopting the idea of three-dimensional Walker, Hadley and Rossby stream functions, the global atmospheric circulation can be considered as the sum of three stream functions from a global perspective. A mathematical model of the three-dimensional decomposition of the global atmospheric circulation is therefore proposed, and the existence and uniqueness of its solution are proved. The model also admits a numerical method that introduces no truncation error on the discrete three-dimensional grid points. Results show that the three-dimensional stream functions exist and are unique for a given velocity field. The mathematical model shows that the generalized form of the three-dimensional stream functions equals the velocity field in representing the features of atmospheric motion, and the vertical velocity calculated through the model can represent the main characteristics of vertical motion. In sum, the three-dimensional decomposition of the atmospheric circulation is convenient for further investigation of the features of global atmospheric motions.
MRFD Method for Scattering From Three Dimensional Dielectric Bodies
A. F. Yagli
2011-09-01
A three-dimensional multiresolution frequency domain (MRFD) method is established to compute bistatic radar cross sections of arbitrarily shaped dielectric objects. The proposed formulation is successfully verified by computing the bistatic radar cross sections of a dielectric sphere and a dielectric cube. Comparing the results to those obtained from finite difference frequency domain (FDFD) simulations and analytic calculations, we demonstrate the computational time and memory advantages of the MRFD method.
Low dimensional gyrokinetic PIC simulation by δf method
Chen, C. M.; Nishimura, Yasutaro; Cheng, C. Z.
2015-11-01
A step-by-step development of our low dimensional gyrokinetic Particle-in-Cell (PIC) simulation is reported. One-dimensional PIC simulation of Langmuir wave dynamics is benchmarked. We then take the temporal plasma echo as a test problem to incorporate the δf method. Electrostatic driftwave simulation in one-dimensional slab geometry is then performed in the presence of finite density gradients. By carefully diagnosing contour plots of the δf values in phase space, we discuss the saturation mechanism of the driftwave instabilities. A v∥ formulation is employed in our new electromagnetic gyrokinetic method by solving the Helmholtz equation for the time derivative of the vector potential. Electron and ion momentum balance equations are employed in the time derivative of Ampere's law. This work is supported by the Ministry of Science and Technology of Taiwan, MOST 103-2112-M-006-007 and MOST 104-2112-M-006-019.
Computational methods for three-dimensional microscopy reconstruction
Frank, Joachim
2014-01-01
Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology. Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.
A comparative study of two stochastic mode reduction methods
Stinis, Panagiotis
2005-09-01
We present a comparative study of two methods for the reduction of the dimensionality of a system of ordinary differential equations that exhibits time-scale separation. Both methods lead to a reduced system of stochastic differential equations. The novel feature of these methods is that they allow the use, in the reduced system, of higher order terms in the resolved variables. The first method, proposed by Majda, Timofeyev and Vanden-Eijnden, is based on an asymptotic strategy developed by Kurtz. The second method is a short-memory approximation of the Mori-Zwanzig projection formalism of irreversible statistical mechanics, as proposed by Chorin, Hald and Kupferman. We present conditions under which the reduced models arising from the two methods should have similar predictive ability. We apply the two methods to test cases that satisfy these conditions. The form of the reduced models and the numerical simulations show that the two methods have similar predictive ability, as expected.
Coelho, Flávio S
2016-01-01
We analyse the causal structure of the two dimensional (2D) reduced background used in the perturbative treatment of a head-on collision of two $D$-dimensional Aichelburg-Sexl gravitational shock waves. After defining all causal boundaries, namely the future light-cone of the collision and the past light-cone of a future observer, we obtain characteristic coordinates using two independent methods. The first is a geometrical construction of the null rays which define the various light cones, using a parametric representation. The second is a transformation of the 2D reduced wave operator for the problem into a hyperbolic form. The characteristic coordinates are then compactified allowing us to represent all causal light rays in a conformal Carter-Penrose diagram. Our construction holds to all orders in perturbation theory. In particular, we can easily identify the singularities of the source functions and of the Green's functions appearing in the perturbative expansion, at each order, which is crucial for a su...
A simple and efficient electrochemical reductive method for graphene oxide
Yanyun Liu; Dong Zhang; Yu Shang; Chao Guo
2014-10-01
The electrochemical reduction of graphene oxide typically involves complicated procedures, such as modification of electrodes and preparation of electrolytes, as often required in previous reports. In this paper, a simple and efficient electrochemical process is described for the synthesis of high-quality reduced graphene oxide. The main procedure involves the electrophoretic deposition of graphene oxide onto the positive electrode and subsequent in situ electrochemical reduction when the electrode polarity is switched from positive to negative. This approach opens up a new, practical and green reducing method to prepare large-scale graphene.
Two-Dimensional Change Detection Methods Remote Sensing Applications
Ilsever, Murat
2012-01-01
Change detection using remotely sensed images has many applications, such as urban monitoring, land-cover change analysis, and disaster management. This work investigates two-dimensional change detection methods. The existing methods in the literature are grouped into four categories: pixel-based, transformation-based, texture analysis-based, and structure-based. In addition to testing existing methods, four new change detection methods are introduced: fuzzy logic-based, shadow detection-based, local feature-based, and bipartite graph matching-based. The latter two methods form the basis for a
Maier, Andreas; Wigstroem, Lars; Hofmann, Hannes G.; Hornegger, Joachim; Zhu Lei; Strobel, Norbert; Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States); Department of Radiology, Stanford University, Stanford, California 94305 (United States) and Center for Medical Image Science and Visualization, Linkoeping University, Linkoeping (Sweden); Pattern Recognition Laboratory, Department of Computer Science, Friedrich-Alexander University of Erlangen-Nuremberg, 91054, Erlangen (Germany); Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Siemens AG Healthcare, Forchheim 91301 (Germany); Department of Radiology, Stanford University, Stanford, California 94305 (United States)
2011-11-15
Purpose: The combination of a quickly rotating C-arm gantry with a digital flat panel has enabled the acquisition of three-dimensional (3D) data in the interventional suite. However, image quality is still somewhat limited since the hardware has not been optimized for CT imaging. Adaptive anisotropic filtering has the ability to improve image quality by reducing the noise level, and thereby the radiation dose, without introducing noticeable blurring. By applying the filtering prior to 3D reconstruction, noise-induced streak artifacts are reduced as compared to processing in the image domain. Methods: 3D anisotropic adaptive filtering was used to process an ensemble of 2D x-ray views acquired along a circular trajectory around an object. After arranging the input data into a 3D space (2D projections + angle), the orientation of structures was estimated using a set of differently oriented filters. The resulting tensor representation of local orientation was utilized to control the anisotropic filtering. Low-pass filtering is applied only along structures to maintain high spatial frequency components perpendicular to them. The evaluation of the proposed algorithm includes numerical simulations, phantom experiments, and in vivo data which were acquired using an AXIOM Artis dTA C-arm system (Siemens AG, Healthcare Sector, Forchheim, Germany). Spatial resolution and noise levels were compared with and without adaptive filtering. A human observer study was carried out to evaluate low-contrast detectability. Results: The adaptive anisotropic filtering algorithm was found to significantly improve low-contrast detectability by reducing the noise level by half (reduction of the standard deviation in certain areas from 74 to 30 HU). Virtually no degradation of high-contrast spatial resolution was observed in the modulation transfer function (MTF) analysis. Although the algorithm is computationally intensive, hardware acceleration using Nvidia's CUDA interface provided an 8
(Author not listed)
2010-01-01
A new noise reduction method for nonlinear signals based on maximum variance unfolding (MVU) is proposed. The noisy signal is first embedded into a high-dimensional phase space based on phase space reconstruction theory, and then the manifold learning algorithm MVU is used to perform nonlinear dimensionality reduction on the phase-space data in order to separate the low-dimensional manifold representing the attractor from the noise subspace. Finally, the noise-reduced signal is obtained by reconstructing the low-dimensional manifold. Simulation results for the Lorenz system show that the proposed MVU-based noise reduction method outperforms the KPCA-based method and has the advantages of simple parameter estimation and low parameter sensitivity. The proposed method is applied to fault detection in a vibration signal from the rotor-stator of an aero engine with a slight rubbing fault. The denoised results show that the slight rubbing features overwhelmed by noise can be effectively extracted by the proposed noise reduction method.
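As an illustrative sketch (not code from the paper), the phase-space reconstruction step can be written as a delay embedding; the MVU reduction itself requires a semidefinite-programming solver and is omitted here. The signal, embedding dimension and delay below are arbitrary stand-ins:

```python
import numpy as np

def delay_embed(signal, dim, tau):
    """Embed a 1-D signal into a `dim`-dimensional phase space
    using delay coordinates (Takens-style reconstruction)."""
    n = len(signal) - (dim - 1) * tau
    return np.column_stack([signal[i * tau : i * tau + n] for i in range(dim)])

# A noisy oscillatory test signal standing in for the measured data
t = np.linspace(0, 20, 2000)
x = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

X = delay_embed(x, dim=5, tau=3)
print(X.shape)  # (1988, 5): 1988 phase-space points in 5 dimensions
```

The MVU step would then operate on the rows of `X` to recover the low-dimensional attractor manifold.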
Multi-dimensional blind separation method for STBC systems
Minggang Luo; Liping Li; Guobing Qian; Huaguo Zhang
2013-01-01
Blind separation of intercepted signals is a research topic of high importance for both military and civilian communication systems. A blind separation method for space-time block code (STBC) systems based on ordinary independent component analysis (ICA) cannot work when specific complex modulations are employed, since the assumption of mutual independence is not satisfied. The analysis shows that the source signals are group-wise independent, so multi-dimensional ICA (MICA) can be applied instead of ordinary ICA in this case. Utilizing the block-diagonal structure of the cumulant matrices, the JADE algorithm is generalized to the multi-dimensional case to separate the received data into mutually independent groups. Compared with ordinary ICA algorithms, the proposed method does not introduce additional ambiguities. Simulations show that the proposed method overcomes the drawback and achieves better performance than channel-estimation-based algorithms without utilizing coding information.
Extension of modified power method to two-dimensional problems
Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung
2016-09-01
In this study, the generalized modified power method was extended to two-dimensional problems. A direct application of the method to two-dimensional problems was shown to be unstable when the number of requested eigenmodes is larger than a certain problem-dependent number. The root cause of this instability has been identified as the degeneracy of the transfer matrix. In order to resolve this instability, the number of sub-regions for the transfer matrix was increased to be larger than the number of requested eigenmodes, and a new transfer matrix, which can be calculated by the least squares method, was introduced accordingly. The stability of the new method has been successfully demonstrated with a neutron diffusion eigenvalue problem and the 2D C5G7 benchmark problem.
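For orientation, the ordinary power iteration that the modified power method generalizes can be sketched as follows (a minimal illustration, not the authors' implementation; the test matrix is an arbitrary example):

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=10000):
    """Basic power iteration: returns the dominant eigenvalue (in modulus)
    and the corresponding eigenvector of A."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)
        x = y / lam_new
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, x

# Arbitrary positive matrix with eigenvalues 5 and 2
A = np.array([[4.0, 1.0], [2.0, 3.0]])
lam, v = power_method(A)
print(round(lam, 6))  # 5.0
```

The modified method builds on this iteration to extract several eigenmodes at once, which is where the transfer-matrix construction described above enters.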
Routh Order Reduction Method of Relativistic Birkhoffian Systems
LUO Shao-Kai; GUO Yong-Xin
2007-01-01
The Routh order reduction method of the relativistic Birkhoffian equations is studied. For a relativistic Birkhoffian system, the cyclic integrals can be found by using the perfect differential method. Through these cyclic integrals, the order of the system can be reduced. If the relativistic Birkhoffian system has a cyclic integral, then the Birkhoffian equations can be reduced by at least two degrees and the Birkhoffian form can be kept. The relations among relativistic Birkhoffian mechanics, relativistic Hamiltonian mechanics, and relativistic Lagrangian mechanics are discussed, and the Routh order reduction method of the relativistic Lagrangian system is obtained. An example is given to illustrate the application of the result.
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
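A minimal illustration of one such variance reduction method, importance sampling applied to a toy slab-transmission problem, is sketched below (this is not from the course codes; the cross-section, slab thickness and biasing parameter are arbitrary illustrative values):

```python
import math
import random

random.seed(1)
mu, d, n = 1.0, 10.0, 100_000          # cross-section, slab thickness, histories
exact = math.exp(-mu * d)              # transmission probability, about 4.54e-05

# Analog game: sample a free path from Exp(mu); almost no particle ever
# crosses the thick slab, so the analog estimate is extremely noisy
analog = sum(random.expovariate(mu) > d for _ in range(n)) / n

# Importance sampling: stretch the path distribution to Exp(mu_b), mu_b < mu,
# and weight each crossing particle by the ratio of true to biased densities
mu_b = 0.2
total = 0.0
for _ in range(n):
    s = random.expovariate(mu_b)
    if s > d:
        total += (mu / mu_b) * math.exp(-(mu - mu_b) * s)
biased = total / n

print(f"importance-sampling estimate: {biased:.3e}")  # close to 4.54e-05
```

With the biased distribution, a sizeable fraction of histories score, so the estimator reaches a few-percent relative error where the analog game gets essentially no counts.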
GENETIC ALGORITHM IN REDUCTION OF NUMERICAL DISPERSION OF 3-D ADI-FDTD METHOD
Zhang Yan; Lü Shanwei; Gao Wenjun
2007-01-01
A new method to reduce the numerical dispersion of the three-dimensional Alternating Direction Implicit Finite-Difference Time-Domain (3-D ADI-FDTD) method is proposed. First, the numerical formulations of the 3-D ADI-FDTD method are modified with artificial anisotropy, and the new numerical dispersion relation is derived. Second, the relative permittivity tensor of the artificial anisotropy is obtained by an Adaptive Genetic Algorithm (AGA). To demonstrate the accuracy and efficiency of the new method, a monopole antenna is simulated as an example, and the numerical results and computational requirements of the proposed method are compared with those of the conventional ADI-FDTD method and with measured data. In addition, the reduction of the numerical dispersion is investigated as the objective function of the AGA. It is found that the new method is accurate and efficient when a proper objective function is chosen.
Fermentation, fractionation and purification of streptokinase by chemical reduction method
M Niakan
2011-05-01
Background and Objectives: Streptokinase is used clinically as an intravenous thrombolytic agent for the treatment of acute myocardial infarction and is commonly prepared from cultures of Streptococcus equisimilis strain H46A. The objective of the present study was the production of streptokinase from strain H46A and its purification by a chemical reduction method. Materials and Methods: The rate of streptokinase production was evaluated under changes in several fermentation factors. Moreover, owing to the specific structure of streptokinase, a chemical reduction method was employed for its purification from the fermentation broth. The H46A strain of group C streptococcus was grown in a fermentor. The proper pH was adjusted with NaOH under glucose feeding at an optimum temperature. The supernatant of the fermentation product was sterilized by filtration and concentrated by ultrafiltration. The pH of the concentrate was adjusted, cooled, and precipitated by methanol. The protein solution was reduced with dithiothreitol (DTT). Impurities were settled down by aldrithiol-2, and the biological activity of the supernatant containing streptokinase was determined. Results: In the fed-batch culture, the rate of streptokinase production increased more than two-fold compared with the batch culture, and the impurities were effectively separated from streptokinase by the reduction method. Conclusion: Improvements in SK production are due to a decrease in the lag-phase period and an increase in the growth rate of the logarithmic phase. Methods of purification often result in unacceptable losses of streptokinase, but the chemical reduction method gives a high yield of streptokinase and is easy to perform.
50 CFR 600.1011 - Reduction methods and other conditions.
2010-10-01
... reduction loan balance that results from all reduction payments that NMFS actually makes and does not...' tender of the reduction payment for the reduction permit, forever revoked. Each reduction permit holder shall, upon NMFS' tender of the reduction payment, surrender the original reduction permit to NMFS....
Metal artifact reduction method using metal streaks image subtraction
Pua, Rizza D.; Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)
2014-04-15
Many studies have been dedicated to metal artifact reduction (MAR); however, the methods are successful to varying degrees depending on the situation. Sinogram in-painting, filtering, and iterative methods are some of the major categories of MAR. Each has its own merits and weaknesses. Combinations of these methods, or hybrid methods, have also been developed to make use of the different benefits of two techniques and minimize the unfavorable results. Our method focuses on the in-painting approach and a hybrid MAR described by Xia et al. Although the in-painting scheme is an effective technique for reducing the primary metal artifacts, a major drawback is the reintroduction of new artifacts that can be caused by an inaccurate interpolation process. Furthermore, combining the segmented metal image with the corrected nonmetal image in the final step of a conventional in-painting approach causes an issue of incorrect metal pixel values. Our proposed method begins with a sinogram in-painting approach and ends with an image-based metal artifact reduction scheme. This work provides a simple, yet effective solution for reducing metal artifacts and acquiring the original metal pixel information. The proposed method demonstrated its effectiveness in a simulation setting. The proposed method showed image quality that is comparable to the standard MAR, yet quantitatively more accurate than the standard MAR.
Tom Cattaert
We propose a novel multifactor dimensionality reduction method for epistasis detection in small or extended pedigrees, FAM-MDR. It combines features of the Genome-wide Rapid Association using Mixed Model And Regression approach (GRAMMAR) with Model-Based MDR (MB-MDR). We focus on continuous traits, although the method is general and can be used for outcomes of any type, including binary and censored traits. When comparing FAM-MDR with Pedigree-based Generalized MDR (PGMDR), which is a generalization of Multifactor Dimensionality Reduction (MDR) to continuous traits and related individuals, FAM-MDR was found to outperform PGMDR in terms of power, in most of the considered simulated scenarios. Additional simulations revealed that PGMDR does not appropriately deal with multiple testing and consequently gives rise to overly optimistic results. FAM-MDR adequately deals with multiple testing in epistasis screens and is in contrast rather conservative, by construction. Furthermore, simulations show that correcting for lower-order (main) effects is of utmost importance when claiming epistasis. As Type 2 Diabetes Mellitus (T2DM) is a complex phenotype likely influenced by gene-gene interactions, we applied FAM-MDR to examine data on glucose area-under-the-curve (GAUC), an endophenotype of T2DM for which multiple independent genetic associations have been observed, in the Amish Family Diabetes Study (AFDS). This application reveals that FAM-MDR makes more efficient use of the available data than PGMDR and can deal with multi-generational pedigrees more easily. In conclusion, we have validated FAM-MDR and compared it to PGMDR, the current state-of-the-art MDR method for family data, using both simulations and a practical dataset. FAM-MDR is found to outperform PGMDR in that it handles the multiple testing issue more correctly, has increased power, and efficiently uses all available information.
Constellation Modification Method for OFDM Peak Power Reduction
R V. Orishko
2011-10-01
A constellation modification method for OFDM peak power reduction is considered. The main idea of the method is to choose a function whose sum with the OFDM symbol gives a new symbol with lower peak power. The signal modification by means of this function is carried out in the subcarrier modulation constellation domain: the Fourier transform of the chosen function is computed, the values obtained for its spectral components are added to the spectral components of the signal, and the modified OFDM symbol is then re-formed. The main properties and capabilities of the modification function are analyzed for different subcarrier modulation methods. Plots of the peak factor versus the coefficients of the algorithm, and of the algorithm's efficiency, are obtained.
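For context, the peak-to-average power ratio (PAPR) that such methods aim to lower can be computed for a plain OFDM symbol as follows (an illustrative sketch; the subcarrier count and QPSK mapping are arbitrary choices, and the modification step itself is not implemented):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of subcarriers (arbitrary)

# Random QPSK constellation points on each subcarrier
bits = rng.integers(0, 4, N)
X = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

# Time-domain OFDM symbol via IFFT, then its peak-to-average power ratio
x = np.fft.ifft(X)
power = np.abs(x) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR = {papr_db:.2f} dB")
```

The constellation-modification step described in the abstract would add a correction function to the spectral components `X` before the IFFT, chosen so that `papr_db` of the re-formed symbol decreases.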
A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem
Zekić-Sušac Marijana
2014-09-01
Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour on the same dataset in order to compare their efficiency in the sense of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure in order to assess computing sensitivity and specificity of each model. Results: The artificial neural network model based on multilayer perceptron yielded a higher classification rate than the models produced by other methods. The pairwise t-test showed a statistical significance between the artificial neural network and the k-nearest neighbour model, while the difference among other methods was not statistically significant. Conclusions: Tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further advancement can be assured by testing a few additional methodological refinements in machine learning methods.
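The 10-fold cross-validation protocol used for the comparison can be sketched in a minimal form (illustrative only: synthetic Gaussian data and a 1-nearest-neighbour classifier stand in for the real dataset and models):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two Gaussian classes as a stand-in for the survey data
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)), rng.normal(1.5, 1.0, (100, 5))])
y = np.repeat([0, 1], 100)

def one_nn_accuracy(Xtr, ytr, Xte, yte):
    """1-nearest-neighbour: predict the label of the closest training point."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    return float(np.mean(ytr[d.argmin(axis=1)] == yte))

# 10-fold cross-validation: shuffle, split into 10 folds, hold each one out
idx = rng.permutation(len(y))
folds = np.array_split(idx, 10)
scores = []
for i in range(10):
    te = folds[i]
    tr = np.concatenate(folds[:i] + folds[i + 1:])
    scores.append(one_nn_accuracy(X[tr], y[tr], X[te], y[te]))

print(f"mean accuracy over 10 folds: {np.mean(scores):.2f}")
```

Each model in the paper would be scored on the same ten folds, after which paired t-tests compare the per-fold accuracies.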
Two Dimensional Lattice Boltzmann Method for Cavity Flow Simulation
Panjit MUSIK
2004-01-01
This paper presents a simulation of incompressible viscous flow within a two-dimensional square cavity. The objective is to develop a method originating from Lattice Gas Automata (LGA), which utilises a discrete lattice as well as discrete time and can be parallelised easily. The Lattice Boltzmann Method (LBM), a discrete lattice kinetics approach which provides an alternative for solving the Navier–Stokes equations and is generally used for fluid simulation, is chosen for the study. A specific two-dimensional nine-velocity square lattice model (the D2Q9 model) is used in the simulation, with the velocity at the top of the cavity kept fixed. LBM is an efficient method for reproducing the dynamics of cavity flow, and the results are comparable to those of previous work.
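The collide-and-stream core of a D2Q9 LBM solver can be sketched as below. This is an illustrative kernel with periodic boundaries only; the cavity walls and moving lid would require bounce-back boundary conditions, which are omitted here for brevity:

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their quadrature weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
N, tau = 32, 0.8                       # grid size, BGK relaxation time

# Initial state: uniform fluid at rest with a small density bump
rho0 = np.ones((N, N))
rho0[N // 2, N // 2] = 1.1
f = w[:, None, None] * rho0            # distributions, shape (9, N, N)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium for D2Q9."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    return rho * w[:, None, None] * (1 + 3 * cu + 4.5 * cu**2
                                     - 1.5 * (ux**2 + uy**2))

mass0 = f.sum()
for step in range(100):
    rho = f.sum(axis=0)
    ux = np.einsum('i,ixy->xy', c[:, 0], f) / rho
    uy = np.einsum('i,ixy->xy', c[:, 1], f) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau          # BGK collision
    for i in range(9):                                  # streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=1), c[i, 1], axis=0)

print(abs(f.sum() - mass0) < 1e-9)  # mass is conserved: True
```

Both collision and streaming conserve mass exactly, which the final check confirms; the cavity problem adds fixed-wall bounce-back and a momentum-corrected moving lid on top of this kernel.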
Estimating sufficient reductions of the predictors in abundant high-dimensional regressions
Cook, R Dennis; Rothman, Adam J; 10.1214/11-AOS962
2012-01-01
We study the asymptotic behavior of a class of methods for sufficient dimension reduction in high-dimensional regressions, as the sample size and number of predictors grow in various alignments. It is demonstrated that these methods are consistent in a variety of settings, particularly in abundant regressions where most predictors contribute some information on the response, and oracle rates are possible. Simulation results are presented to support the theoretical conclusion.
Whittaker Order Reduction Method of Relativistic Birkhoffian Systems
LUO Shao-Kai; HUANG Fei-Jiang; LU Yi-Bing
2004-01-01
The order reduction method of the relativistic Birkhoffian equations is studied. For a relativistic autonomous Birkhoffian system, if the conservative law of the Birkhoffian holds, the conservative quantity can be called the generalized energy integral. Through the generalized energy integral, the order of the system can be reduced. If the relativistic Birkhoffian system has a generalized energy integral, then the Birkhoffian equations can be reduced by at least two degrees and the Birkhoffian form can be kept. The relations among the relativistic Birkhoffian mechanics, the relativistic Hamiltonian mechanics and the relativistic Lagrangian mechanics are discussed, and the Whittaker order reduction method of the relativistic Lagrangian system is obtained. And an example is given to illustrate the application of the result.
Gell-Mann, M.; Zwiebach, B.
1985-10-28
We discuss general aspects of dimensional reduction induced by nonlinear scalar dynamics, including the small fluctuation expansion of the action. The case of compact positively curved scalar manifolds described by symmetric spaces G/H is shown to be free of tachyonic instabilities; the spectrum consists of a graviton, a massless scalar and towers of massive spin-two, spin-one, and spin-zero fields. These towers are worked out explicitly for the case of a two-sphere. The case of noncompact negatively curved scalar manifolds inducing a noncompact nonhomogeneous space for the extra dimensions is studied in the particular example of SU(1,1)/U(1). The massless spectrum consists of a graviton and a scalar and suitable boundary conditions are seen to give a discrete spectrum, actual conservation of formally conserved quantities, and no problems of interpretation. We discuss positive energy. (orig.).
Kinkhabwala, Ali
2013-01-01
The connection between network topology and stability remains unclear. General approaches that clarify this relationship and allow for more efficient stability analysis would be desirable. In this manuscript, I examine the mathematical notion of influence topology, which is fundamentally based on the network reaction stoichiometries and the first derivatives of the reactions with respect to each species at the steady state solution(s). The influence topology is naturally represented as a signed directed bipartite graph with arrows or blunt arrows connecting a species node to a reaction node (positive/negative derivative) or a reaction node to a species node (positive/negative stoichiometry). The set of all such graphs is denumerable. A significant reduction in dimensionality is possible through stoichiometric scaling, cycle compaction, and temporal scaling. All cycles in a network can be read directly from the graph of its influence topology, enabling efficient and intuitive computation of the principal minor...
Prescott, Aaron M.; Abel, Steven M.
2016-12-01
The rational design of network behavior is a central goal of synthetic biology. Here, we combine in silico evolution with nonlinear dimensionality reduction to redesign the responses of fixed-topology signaling networks and to characterize sets of kinetic parameters that underlie various input-output relations. We first consider the earliest part of the T cell receptor (TCR) signaling network and demonstrate that it can produce a variety of input-output relations (quantified as the level of TCR phosphorylation as a function of the characteristic TCR binding time). We utilize an evolutionary algorithm (EA) to identify sets of kinetic parameters that give rise to: (i) sigmoidal responses with the activation threshold varied over 6 orders of magnitude, (ii) a graded response, and (iii) an inverted response in which short TCR binding times lead to activation. We also consider a network with both positive and negative feedback and use the EA to evolve oscillatory responses with different periods in response to a change in input. For each targeted input-output relation, we conduct many independent runs of the EA and use nonlinear dimensionality reduction to embed the resulting data for each network in two dimensions. We then partition the results into groups and characterize constraints placed on the parameters by the different targeted response curves. Our approach provides a way (i) to guide the design of kinetic parameters of fixed-topology networks to generate novel input-output relations and (ii) to constrain ranges of biological parameters using experimental data. In the cases considered, the network topologies exhibit significant flexibility in generating alternative responses, with distinct patterns of kinetic rates emerging for different targeted responses.
Dramatic reduction of dimensionality in large biochemical networks owing to strong pair correlations
Dworkin, Michael; Mukherjee, Sayak; Jayaprakash, Ciriyam; Das, Jayajit
2012-01-01
Large multi-dimensionality of high-throughput datasets pertaining to cell signalling and gene regulation renders it difficult to extract mechanisms underlying the complex kinetics involving various biochemical compounds (e.g. proteins and lipids). Data-driven models often circumvent this difficulty by using pair correlations of the protein expression levels to produce a small number (fewer than 10) of principal components, each a linear combination of the concentrations, to successfully model how cells respond to different stimuli. However, it is not understood if this reduction is specific to a particular biological system or to nature of the stimuli used in these experiments. We study temporal changes in pair correlations, described by the covariance matrix, between concentrations of different molecular species that evolve following deterministic mass-action kinetics in large biologically relevant reaction networks and show that this dramatic reduction of dimensions (from hundreds to less than five) arises from the strong correlations between different species at any time and is insensitive to the form of the nonlinear interactions, network architecture, and to a wide range of values of rate constants and concentrations. We relate temporal changes in the eigenvalue spectrum of the covariance matrix to low-dimensional, local changes in directions of the system trajectory embedded in much larger dimensions using elementary differential geometry. We illustrate how to extract biologically relevant insights such as identifying significant timescales and groups of correlated chemical species from our analysis. Our work provides for the first time, to our knowledge, a theoretical underpinning for the successful experimental analysis and points to a way to extract mechanisms from large-scale high-throughput datasets. PMID:22378749
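The dimensionality reduction described here can be illustrated with the eigenvalue spectrum of the covariance matrix of synthetic, strongly pair-correlated data (a sketch under assumed toy dynamics, not the paper's reaction networks):

```python
import numpy as np

rng = np.random.default_rng(0)
n_species, n_samples = 100, 500

# Strongly pair-correlated "concentrations": 100 species driven by only
# three latent factors, plus small independent noise
latent = rng.normal(size=(n_samples, 3))
loadings = rng.normal(size=(3, n_species))
X = latent @ loadings + 0.05 * rng.normal(size=(n_samples, n_species))

# Eigenvalue spectrum of the covariance matrix, largest first
eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]
explained = eigvals.cumsum() / eigvals.sum()

# Number of principal components needed to capture 99% of the variance
k = int(np.searchsorted(explained, 0.99) + 1)
print(k)  # a handful of components suffice despite 100 species
```

The covariance eigenvalue spectrum collapses onto a few dominant directions, mirroring the reduction from hundreds of species to fewer than five effective dimensions reported in the abstract.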
In vitro method for prediction of plaque reduction by dentifrice.
Tepper, Bruce; Howard, Brian; Schnell, Daniel; Mills, Lisa; Xu, Jian
2015-11-01
An in vitro Particle Based Biofilm (PBB) model was developed to enable high throughput screening tests to predict clinical plaque reduction. Multi-species oral biofilms were cultured from pooled stimulated human saliva on continuously-colliding hydroxyapatite particles. After three days PBBs were saline washed prior to use in screening tests. Testing involved dosing PBBs for 1min followed by neutralization of test materials and rinsing. PBBs were then assayed for intact biofilm activity measured as ATP. The ranking of commercial dentifrices from most to least reduction of intact biofilm activity was Crest ProHealth Clinical Gum Protection, Crest ProHealth, Colgate Total and Crest Cavity Protection. We demonstrated five advantages of the PBB model: 1) the ATP metric had a linear response over ≥1000-fold dynamic range, 2) potential interference with the ATP assay by treatments was easily eliminated by rinsing PBBs with saline, 3) discriminating power was statistically excellent between all treatment comparisons with the negative controls, 4) screening test results were reproducible across four tests, and 5) the screening test produced the same rank order for dentifrices as clinical studies that measured plaque reduction. In addition, 454 pyrosequencing of the PBBs indicated an oral microbial consortium was present. The most prevalent genera were Neisseria, Rothia, Streptococcus, Porphyromonas, Prevotella, Actinomyces, Fusobacterium, Veillonella and Haemophilus. We conclude these in vitro methods offer an efficient, effective and relevant screening tool for reduction of intact biofilm activity by dentifrices. Moreover, dentifrice rankings by the in vitro test method are expected to predict clinical results for plaque reduction. Copyright © 2015 Elsevier B.V. All rights reserved.
One-Dimensional Optimal System and Similarity Reductions of Wu-Zhang Equation
Xiong, Na; Li, Yu-Qi; Chen, Jun-Chao; Chen, Yong
2016-07-01
The one-dimensional optimal system for the Lie symmetry group of the (2+1)-dimensional Wu-Zhang equation is constructed by the general and systematic approach. Based on the optimal system, the complete and inequivalent symmetry reduction systems are presented in the form of a table. It is noteworthy that a new Painlevé integrable equation with constant coefficients is in the table, besides the classic Boussinesq equation and the steady case of the Wu-Zhang equation. Supported by the Global Change Research Program of China under Grant No. 2015CB953904, National Natural Science Foundation of China under Grant Nos. 11375090, 11275072 and 11435005, Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20120076110024, the Network Information Physics Calculation of Basic Research Innovation Research Group of China under Grant No. 61321064, Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things under Grant No. ZF1213, and the Zhejiang Provincial Natural Science Foundation of China under Grant No. LY14A010005
Reductions in finite-dimensional integrable systems and special points of classical r-matrices
Skrypnyk, T.
2016-12-01
For a given 𝔤 ⊗ 𝔤-valued non-skew-symmetric non-dynamical classical r-matrices r(u, v) with spectral parameters, we construct the general form of 𝔤-valued Lax matrices of finite-dimensional integrable systems satisfying linear r-matrix algebra. We show that the reduction in the corresponding finite-dimensional integrable systems is connected with "the special points" of the classical r-matrices in which they become degenerated. We also propose a systematic way of the construction of additional integrals of the Lax-integrable systems associated with the symmetries of the corresponding r-matrices. We consider examples of the Lax matrices and integrable systems that are obtained in the framework of the general scheme. Among them there are such physically important systems as generalized Gaudin systems in an external magnetic field, ultimate integrable generalization of Toda-type chains (including "modified" or "deformed" Toda chains), generalized integrable Jaynes-Cummings-Dicke models, integrable boson models generalizing Bose-Hubbard dimer models, etc.
Addition to the method of dimensional analysis in hydraulic problems
A.M. Kalyakin
2013-03-01
Modern engineering design of structures, and especially of machines implementing new technologies, sets engineers problems that require immediate solution. The importance of the method of dimensional analysis as a tool for the ordinary engineer is therefore increasing, as it allows developers to obtain quick and quite simple solutions of even very complex tasks. The method of dimensional analysis can be applied to almost any field of physics and engineering, but it is especially effective at solving problems of mechanics and applied mechanics: hydraulics, fluid mechanics, structural mechanics, etc. Until now, the main obstacle to the application of the method of dimensional analysis in its classic form was multifactorial problems (with many arguments), whose solution was rather difficult and sometimes impossible. In order to overcome these difficulties, the authors of this study propose a simple approach, a combined option that avoids them. The main result of the study is a simple algorithm whose application makes it possible to solve a large class of previously unsolvable problems.
Size reduction of the transfer matrix of two-dimensional Ising and Potts models
M. Ghaemi
2003-12-01
A new algebraic method is developed to reduce the size of the transfer matrix of Ising and three-state Potts ferromagnets on strips of width r sites of square and triangular lattices. The size reduction is set up in such a way that the maximum eigenvalues of the reduced and the original transfer matrices are exactly the same. In this method we write the original transfer matrix in a special blocked form such that the row sums within each block of the original transfer matrix are the same. The reduced matrix is obtained by replacing each block of the original transfer matrix with the sum of the elements of one of its rows. Our method results in a significant matrix size reduction, which is a crucial factor in determining the maximum eigenvalue.
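The construction can be illustrated on a small matrix with the required equal-block-row-sum structure (an arbitrary toy example, not an actual Ising/Potts transfer matrix):

```python
import numpy as np

# A 4x4 positive "transfer matrix" whose 2x2 blocks have equal row sums:
# rows 0,1 sum to 3.0 over columns 0-1 and to 1.0 over columns 2-3;
# rows 2,3 sum to 1.0 over columns 0-1 and to 4.0 over columns 2-3.
T = np.array([[2.0, 1.0, 0.5, 0.5],
              [1.5, 1.5, 0.7, 0.3],
              [0.2, 0.8, 3.0, 1.0],
              [0.6, 0.4, 2.5, 1.5]])

# Reduced matrix: each block replaced by the row sum of one of its rows
R = np.array([[3.0, 1.0],
              [1.0, 4.0]])

lam_full = max(np.linalg.eigvals(T).real)
lam_red = max(np.linalg.eigvals(R).real)
print(np.isclose(lam_full, lam_red))  # True: the maximum eigenvalue survives
```

Because the block row sums are constant, the subspace of vectors that are constant on each block is invariant under `T`, and the positive Perron eigenvector of `R` lifts to a positive eigenvector of `T`, so the maximum eigenvalue is preserved exactly.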
Deepthi, Dasika Ratna; Eswaran, K
2007-01-01
In this paper, we present a Mirroring Neural Network architecture to perform non-linear dimensionality reduction and object recognition using a reduced low-dimensional characteristic vector. In addition to dimensionality reduction, the network also reconstructs (mirrors) the original high-dimensional input vector from the reduced low-dimensional data. The Mirroring Neural Network has more processing elements (adalines) in the outer layers and the fewest in the central layer, giving its configuration a converging-diverging shape. Since this network is able to reconstruct the original image from the output of the innermost layer (which contains all the information about the input pattern), these outputs can be used as an object signature to classify patterns. The network is trained to minimize the discrepancy between actual output and input by back-propagating the mean squared error from the output layer to the input layer. After successfully training the network, it ...
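The converging-diverging architecture described here is essentially an autoencoder trained on reconstruction error. A minimal numpy sketch (layer sizes, data, and training schedule are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

# Tiny "mirroring" (autoencoder) network, 8 -> 3 -> 8, trained to
# reconstruct its input; the 3-unit bottleneck is the reduced signature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))

W1 = rng.normal(scale=0.1, size=(8, 3))
W2 = rng.normal(scale=0.1, size=(3, 8))
lr = 0.01

def forward(X):
    H = np.tanh(X @ W1)      # bottleneck activations (the "signature")
    return H, H @ W2         # reconstruction of the input

_, X0 = forward(X)
loss0 = np.mean((X0 - X) ** 2)

for _ in range(500):
    H, Xr = forward(X)
    err = Xr - X                                   # reconstruction error
    gW2 = H.T @ err / len(X)                       # gradient w.r.t. W2
    gW1 = X.T @ ((err @ W2.T) * (1 - H ** 2)) / len(X)  # backprop through tanh
    W1 -= lr * gW1
    W2 -= lr * gW2

_, Xr = forward(X)
loss = np.mean((Xr - X) ** 2)
assert loss < loss0  # reconstruction error decreases with training
```

The bottleneck activations H would then be fed to a classifier, as the abstract suggests for object recognition.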
A mixed finite difference/Galerkin method for three-dimensional Rayleigh-Benard convection
Buell, Jeffrey C.
1988-01-01
A fast and accurate numerical method for nonlinear conservation-equation systems whose solutions are periodic in two of the three spatial dimensions is implemented here for Rayleigh-Benard convection between two rigid parallel plates, in the parameter region where steady, three-dimensional convection is known to be stable. High-order streamfunctions enable the reduction of the system of five partial differential equations to a system of only three. Numerical experiments are presented which verify both the expected convergence rates and the absolute accuracy of the method.
Ren, Zhuyin; Pope, Stephen B.; Vladimirsky, Alexander; Guckenheimer, John M.
2006-03-01
This work addresses the construction and use of low-dimensional invariant manifolds to simplify complex chemical kinetics. Typically, chemical kinetic systems have a wide range of time scales. As a consequence, reaction trajectories rapidly approach a hierarchy of attracting manifolds of decreasing dimension in the full composition space. In previous research, several different methods have been proposed to identify these low-dimensional attracting manifolds. Here we propose a new method based on an invariant constrained equilibrium edge (ICE) manifold. This manifold (of dimension nr) is generated by the reaction trajectories emanating from its (nr-1)-dimensional edge, on which the composition is in a constrained equilibrium state. A reasonable choice of the nr represented variables (e.g., nr "major" species) ensures that there exists a unique point on the ICE manifold corresponding to each realizable value of the represented variables. The process of identifying this point is referred to as species reconstruction. A second contribution of this work is a local method of species reconstruction, called ICE-PIC, which is based on the ICE manifold and uses preimage curves (PICs). The ICE-PIC method is local in the sense that species reconstruction can be performed without generating the whole of the manifold (or a significant portion thereof). The ICE-PIC method is the first approach that locally determines points on a low-dimensional invariant manifold, and its application to high-dimensional chemical systems is straightforward. The "inputs" to the method are the detailed kinetic mechanism and the chosen reduced representation (e.g., some major species). The ICE-PIC method is illustrated and demonstrated using an idealized H2/O system with six chemical species. It is then tested and compared to three other dimension-reduction methods for the test case of a one-dimensional premixed laminar flame of stoichiometric hydrogen/air, which is described by a detailed mechanism
A bio-inspired device for drag reduction on a three-dimensional model vehicle.
Kim, Dongri; Lee, Hoon; Yi, Wook; Choi, Haecheon
2016-03-10
In this paper, we introduce a bio-mimetic device for reducing the drag force on a three-dimensional model vehicle, the Ahmed body (Ahmed et al 1984 SAE Technical Paper 840300). The device, called the automatic moving deflector (AMD), is inspired by the secondary feathers on a bird's wing: they pop up when massive separation occurs on the wing's suction surface at high angles of attack, which increases the lift force at landing. The AMD is applied to the rear slanted surface of the Ahmed body to control the flow separation there. The angle of the slanted surface considered is 25°, at which the drag coefficient of the Ahmed body is highest. The wind tunnel experiment is conducted at Re_H = 1.0 × 10^5 to 3.8 × 10^5, based on the height of the Ahmed body (H) and the free-stream velocity (U∞). Several AMDs of different sizes and materials are tested by measuring the drag force on the Ahmed body, showing drag reductions of up to 19%. The velocity and surface-pressure measurements show that the AMD starts to pop up when the pressure in the thin gap between the slanted surface and the AMD is much larger than that on the upper surface of the AMD. We also derive an empirical formula that predicts the critical free-stream velocity at which the AMD starts to operate. Finally, it is shown that the drag reduction by the AMD is mainly attributed to a pressure recovery on the slanted surface, achieved by delaying the flow separation and suppressing the strength of the longitudinal vortices emanating from the lateral edges of the slanted surface.
A potential implicit particle method for high-dimensional systems
Weir, B.; Miller, R. N.; Spitz, Y. H.
2013-11-01
This paper presents a particle method designed for high-dimensional state estimation. Instead of weighing random forecasts by their distance to given observations, the method samples an ensemble of particles around an optimal solution based on the observations (i.e., it is implicit). It differs from other implicit methods because it includes the state at the previous assimilation time as part of the optimal solution (i.e., it is a lag-1 smoother). This is accomplished through the use of a mixture model for the background distribution of the previous state. In a high-dimensional, linear, Gaussian example, the mixture-based implicit particle smoother does not collapse. Furthermore, using only a small number of particles, the implicit approach is able to detect transitions in two nonlinear, multi-dimensional generalizations of a double-well. Adding a step that trains the sampled distribution to the target distribution prevents collapse during the transitions, which are strongly nonlinear events. To produce similar estimates, other approaches require many more particles.
New Cogging Torque Reduction Methods for Permanent Magnet Machine
Bahrim, F. S.; Sulaiman, E.; Kumar, R.; Jusoh, L. I.
2017-08-01
Permanent magnet (PM) motors, especially the permanent magnet synchronous motor (PMSM), are increasingly common in industrial systems and are widely used in various applications. The key features of these machines include high power and torque density, extended speed range, high efficiency, good dynamic performance and good flux-weakening capability. Nevertheless, high cogging torque, which may cause noise and vibration, is one of the threats to machine performance. Therefore, with the aid of 3-D finite element analysis (FEA) and simulation using JMAG Designer, this paper proposes new methods for cogging torque reduction. Based on the simulation, combining skewing with radial pole pairing and skewing with axial pole pairing reduces the cogging torque by up to 71.86% and 65.69%, respectively.
Algorithms for the nonclassical method of symmetry reductions
Clarkson, P A; Peter A Clarkson; Elizabeth L Mansfield
1994-01-01
In this article we first present an algorithm for calculating the determining equations associated with the so-called "nonclassical method" of symmetry reductions (à la Bluman and Cole) for systems of partial differential equations. This algorithm requires significantly less computation time than the one standardly used and avoids many of the difficulties commonly encountered. The proof of correctness of the algorithm is a simple application of the theory of Gröbner bases. In the second part we demonstrate some algorithms which may be used to analyse, and often to solve, the resulting systems of overdetermined nonlinear PDEs. We take as our principal example a generalised Boussinesq equation, which arises in shallow water theory. Although the equation appears to be non-integrable, we obtain an exact "two-soliton" solution from a nonclassical reduction.
Three-dimensional protein structure prediction: Methods and computational strategies.
Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C
2014-10-12
A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only its sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as solutions to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first-principle methods without database information; (b) first-principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions to the PSP problem. Our main goal in this work is to review the methods and computational strategies that are currently used in 3-D protein structure prediction.
Two-Dimensional Impact Reconstruction Method for Rail Defect Inspection
Jie Zhao
2014-01-01
The safety of train operation is seriously threatened by rail defects, so it is of great significance to inspect for rail defects dynamically while the train is operating. This paper presents a two-dimensional impact reconstruction method to realize on-line inspection of rail defects. The proposed method uses preprocessing technology to convert time-domain vertical vibration signals, acquired by a wireless sensor network, into space signals. An improved modern time-frequency analysis method is used to reconstruct the obtained multisensor information. Then, an image fusion processing technology based on spectrum thresholding and node color labeling is proposed to reduce the noise and blank out the periodic impact signals caused by rail joints and the locomotive running gear. This method converts the aperiodic impact signals caused by rail defects into partially periodic impact signals, and locates the rail defects. An application shows that the two-dimensional impact reconstruction method displays the impacts caused by rail defects clearly, and is an effective on-line rail defect inspection method.
Davoudi, Alireza; Shiry Ghidary, Saeed; Sadatnejad, Khadijeh
2017-06-01
Objective. In this paper, we propose a nonlinear dimensionality reduction algorithm for the manifold of symmetric positive definite (SPD) matrices that considers the geometry of SPD matrices and provides a low-dimensional representation of the manifold with high class discrimination in a supervised or unsupervised manner. Approach. The proposed algorithm tries to preserve the local structure of the data by preserving distances to local means (DPLM) and also provides an implicit projection matrix. DPLM is linear in the number of training samples. Main results. We performed several experiments on the multi-class dataset IIa from BCI competition IV and two other datasets from BCI competition III, namely datasets IIIa and IVa. The results show that our approach, as a dimensionality reduction technique, leads to superior results in comparison with other competitors in the related literature because of its robustness against outliers and the way it preserves the local geometry of the data. Significance. The experiments confirm that the combination of DPLM with filter geodesic minimum distance to mean as the classifier leads to superior performance compared with the state of the art on brain-computer interface competition IV dataset IIa. The statistical analysis also shows that our dimensionality reduction method performs significantly better than its competitors.
Recent advancements in mechanical reduction methods: particulate systems.
Leleux, Jardin; Williams, Robert O
2014-03-01
The screening of new active pharmaceutical ingredients (APIs) has become more streamlined, and as a result the number of new drugs in the pipeline is steadily increasing. However, a major factor limiting new API approval and market introduction is the low solubility associated with a large percentage of these new drugs. While many modification strategies have been studied to improve solubility, such as salt formation and the addition of cosolvents, most provide only marginal success and have severe disadvantages. One of the most successful methods to date is the mechanical reduction of drug particle size, which inherently increases the surface area of the particles and, as described by the Noyes-Whitney equation, the dissolution rate. Drug micronization has been the gold standard for achieving these improvements; however, the extremely low solubility of some new chemical entities is not significantly affected by size reduction in this range, and a reduction in size to the nanometric scale is necessary. Both bottom-up and top-down techniques are used to produce drug crystals in this size range; however, as discussed in this review, top-down approaches have provided greater enhancements in drug usability on the industrial scale, as confirmed by the six FDA-approved products, all of which exploit top-down approaches. In this review, the advantages and disadvantages of both approaches are discussed, in addition to specific top-down techniques and the improvements they contribute to the pharmaceutical field.
A Geometric Method for Model Reduction of Biochemical Networks with Polynomial Rate Functions.
Samal, Satya Swarup; Grigoriev, Dima; Fröhlich, Holger; Weber, Andreas; Radulescu, Ovidiu
2015-12-01
Model reduction of biochemical networks relies on the knowledge of slow and fast variables. We provide a geometric method, based on the Newton polytope, to identify slow variables of a biochemical network with polynomial rate functions. The gist of the method is the notion of tropical equilibration that provides approximate descriptions of slow invariant manifolds. Compared to extant numerical algorithms such as the intrinsic low-dimensional manifold method, our approach is symbolic and utilizes orders of magnitude instead of precise values of the model parameters. Application of this method to a large collection of biochemical network models supports the idea that the number of dynamical variables in minimal models of cell physiology can be small, in spite of the large number of molecular regulatory actors.
An MPCA/LDA Based Dimensionality Reduction Algorithm for Face Recognition
Jun Huang
2014-01-01
We propose a face recognition algorithm based on multilinear principal component analysis (MPCA) and linear discriminant analysis (LDA). In contrast to existing face recognition methods, our approach treats face images as multidimensional tensors in order to find the optimal tensor subspace for dimension reduction. LDA is used to project samples onto a new discriminant feature space, while a K-nearest-neighbor (KNN) classifier is adopted for sample set classification. The developed algorithm is validated on the ORL, FERET, and YALE face databases and compared with the PCA, MPCA, and PCA + LDA methods, demonstrating an improvement in face recognition accuracy.
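The projection-then-classify pipeline behind such methods can be sketched on synthetic data (plain PCA in place of multilinear MPCA, a two-class Fisher discriminant, and nearest-mean classification in place of KNN; all sizes and data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class data standing in for vectorized face images.
X0 = rng.normal(loc=0.0, size=(50, 20))
X1 = rng.normal(loc=1.5, size=(50, 20))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# 1) PCA: project onto the top-5 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

# 2) Fisher LDA (two classes): w = Sw^{-1} (m1 - m0).
m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)   # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)
F = Z @ w                       # 1-D discriminant feature

# 3) Nearest-mean classification in the discriminant space.
pred = (np.abs(F - F[y == 1].mean()) < np.abs(F - F[y == 0].mean())).astype(int)
accuracy = np.mean(pred == y)
assert accuracy > 0.9  # classes separate well after PCA + LDA
```

MPCA differs by operating on the tensor modes of the image directly rather than on a flattened vector, but the reduce-discriminate-classify structure is the same.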
Microtubules accelerate the kinase activity of Aurora-B by a reduction in dimensionality.
Noujaim, Michael; Bechstedt, Susanne; Wieczorek, Michal; Brouhard, Gary J
2014-01-01
Aurora-B is the kinase subunit of the Chromosome Passenger Complex (CPC), a key regulator of mitotic progression that corrects improper kinetochore attachments and establishes the spindle midzone. Recent work has demonstrated that the CPC is a microtubule-associated protein complex and that microtubules are able to activate the CPC by contributing to Aurora-B auto-phosphorylation in trans. Aurora-B activation is thought to occur when the local concentration of Aurora-B is high, as occurs when Aurora-B is enriched at centromeres. It is not clear, however, whether distributed binding to large structures such as microtubules would increase the local concentration of Aurora-B. Here we show that microtubules accelerate the kinase activity of Aurora-B by a "reduction in dimensionality." We find that microtubules increase the kinase activity of Aurora-B toward microtubule-associated substrates while reducing the phosphorylation levels of substrates not associated to microtubules. Using the single molecule assay for microtubule-associated proteins, we show that a minimal CPC construct binds to microtubules and diffuses in a one-dimensional (1D) random walk. The binding of Aurora-B to microtubules is salt-dependent and requires the C-terminal tails of tubulin, indicating that the interaction is electrostatic. We show that the rate of Aurora-B auto-activation is faster with increasing concentrations of microtubules. Finally, we demonstrate that microtubules lose their ability to stimulate Aurora-B when their C-terminal tails are removed by proteolysis. We propose a model in which microtubules act as scaffolds for the enzymatic activity of Aurora-B. The scaffolding activity of microtubules enables rapid Aurora-B activation and efficient phosphorylation of microtubule-associated substrates.
Kas, Recep; Hummadi, Khalid Khazzal; Kortlever, Ruud; de Wit, Patrick; Milbrat, Alexander; Luiten-Olieman, Mieke W J; Benes, Nieck E; Koper, Marc T M; Mul, Guido
2016-01-01
Aqueous-phase electrochemical reduction of carbon dioxide requires an active, earth-abundant electrocatalyst, as well as highly efficient mass transport. Here we report the design of a porous hollow fibre copper electrode with a compact three-dimensional geometry, which provides a large area, three-phase boundary for gas-liquid reactions. The performance of the copper electrode is significantly enhanced; at overpotentials between 200 and 400 mV, faradaic efficiencies for carbon dioxide reduction up to 85% are obtained. Moreover, the carbon monoxide formation rate is at least one order of magnitude larger when compared with state-of-the-art nanocrystalline copper electrodes. Copper hollow fibre electrodes can be prepared via a facile method that is compatible with existing large-scale production processes. The results of this study may inspire the development of new types of microtubular electrodes for electrochemical processes in which at least one gas-phase reactant is involved, such as in fuel cell technology.
A novel model reduction method based on balanced truncation
Anonymous
2009-01-01
The main goal of this paper is to construct an efficient reduced-order model (ROM) for unsteady aerodynamic force modeling. Balanced truncation (BT) is presented to address the problem. The conventional BT method requires computing the exact controllability and observability Gramians. Although it is relatively straightforward to compute these matrices in a control setting where the system order is moderate, the technique does not extend easily to high-order systems. In response to this challenge, the snapshots-BT (S-BT) method is introduced for constructing ROMs of high-order systems. The basic idea of the S-BT method is that snapshots of the primary and dual systems approximate the controllability and observability matrices in the frequency domain. The method is demonstrated on three high-order systems: (1) unsteady motion of a two-dimensional airfoil in response to a gust, (2) the AGARD 445.6 wing aeroelastic system, and (3) the BACT (benchmark active control technology) standard aeroservoelastic system. All the results indicate that the S-BT-based ROM is efficient and accurate enough to provide a powerful tool for unsteady aerodynamic force modeling.
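For a moderate-order system, conventional BT with exact Gramians takes only a few lines; the snapshot variant approximates these same Gramians. A pure-numpy sketch of standard BT (the system matrices are illustrative, and the Lyapunov equations are solved by a dense Kronecker formulation that only scales to small systems):

```python
import numpy as np

def lyap(A, Q):
    """Solve A X + X A^T + Q = 0 via the Kronecker/vec identity."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(K, -Q.reshape(-1)).reshape(n, n)

# Stable test system (illustrative values).
A = np.array([[-1., 0.5, 0.], [0., -2., 1.], [0., 0., -5.]])
B = np.array([[1.], [0.5], [0.2]])
C = np.array([[1., 1., 0.]])

Wc = lyap(A, B @ B.T)        # controllability Gramian
Wo = lyap(A.T, C.T @ C)      # observability Gramian

# Balancing transformation via Cholesky factors and an SVD.
Lc, Lo = np.linalg.cholesky(Wc), np.linalg.cholesky(Wo)
U, s, Vt = np.linalg.svd(Lo.T @ Lc)      # s = Hankel singular values
T = Lc @ Vt.T / np.sqrt(s)               # balancing transformation
Tinv = np.linalg.inv(T)

r = 2                                    # truncate to order r
Ar = (Tinv @ A @ T)[:r, :r]
assert np.all(np.diff(s) <= 0)                  # HSVs sorted descending
assert np.all(np.linalg.eigvals(Ar).real < 0)   # reduced model is stable
```

The Hankel singular values s indicate how many states matter; S-BT replaces Wc and Wo by snapshot-based low-rank approximations so the SVD step remains tractable for high-order aeroelastic models.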
A fast rank-reduction algorithm for three-dimensional seismic data interpolation
Jia, Yongna; Yu, Siwei; Liu, Lina; Ma, Jianwei
2016-09-01
Rank-reduction methods have been successfully used for seismic data interpolation and noise attenuation. However, the singular value decomposition (SVD) in most rank-reduction methods is computationally intensive. In this paper, we propose a simple yet efficient interpolation algorithm, based on the Hankel matrix, for randomly missing traces. Following the multichannel singular spectrum analysis (MSSA) technique, we first transform the seismic data into a low-rank block Hankel matrix for each frequency slice. Then, a fast orthogonal rank-one matrix pursuit (OR1MP) algorithm is employed to enforce the low-rank constraint on the block Hankel matrix. In the new algorithm, only the top left and right singular vectors need to be computed, avoiding the full SVD and improving the calculation efficiency significantly. Finally, we anti-average the rank-reduced block Hankel matrix and obtain the reconstructed data in the frequency domain. Numerical experiments on 3D seismic data show that the proposed interpolation algorithm performs much better than the traditional MSSA algorithm in computational speed, especially for large-scale data processing.
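The Hankel-plus-truncation core of MSSA-style processing can be sketched for a single 1-D trace (illustrative numpy only, using a full SVD rather than the paper's OR1MP solver, and denoising rather than trace interpolation):

```python
import numpy as np

def hankel_denoise(x, rank, L=None):
    """Rank-reduce the Hankel matrix of x, then anti-diagonal average."""
    n = len(x)
    L = L or n // 2 + 1
    K = n - L + 1
    H = np.array([x[i:i + K] for i in range(L)])       # Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # low-rank truncation
    # Anti-diagonal averaging maps the matrix back to a signal.
    out, cnt = np.zeros(n), np.zeros(n)
    for i in range(L):
        out[i:i + K] += Hr[i]
        cnt[i:i + K] += 1
    return out / cnt

t = np.arange(64)
clean = np.sin(2 * np.pi * 0.05 * t)       # one sinusoid -> Hankel rank 2
noisy = clean + 0.3 * np.random.default_rng(2).normal(size=64)
rec = hankel_denoise(noisy, rank=2)
assert np.mean((rec - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

Replacing the full SVD by a method that finds only the leading singular vectors (as OR1MP does) is what delivers the speedup the abstract reports.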
Jeanneret, Fabienne; Boccard, Julien; Badoud, Flavia; Sorg, Olivier; Tonoli, David; Pelclova, Daniela; Vlckova, Stepanka; Rutledge, Douglas N; Samer, Caroline F; Hochstrasser, Denis; Saurat, Jean-Hilaire; Rudaz, Serge
2014-10-15
Untargeted metabolomic approaches offer new opportunities for a deeper understanding of the molecular events related to toxic exposure. This study proposes a metabolomic investigation of the biochemical alterations occurring in urine as a result of dioxin toxicity. Urine samples were collected from Czech chemical workers subjected to severe occupational dioxin exposure in a herbicide production plant in the late 1960s. Experiments were carried out with ultra-high pressure liquid chromatography (UHPLC) coupled to high-resolution quadrupole time-of-flight (QTOF) mass spectrometry. A chemistry-driven feature selection was applied to focus on steroid-related metabolites. Supervised multivariate data analysis allowed biomarkers, mainly related to bile acids, to be highlighted. These results supported the hypothesis of liver damage and oxidative stress in long-term dioxin toxicity. As a second step of data analysis, the information gained from the urine analysis of Victor Yushchenko after his poisoning was examined. A subset of relevant urinary markers of acute dioxin toxicity from this extreme phenotype, including glucuro- and sulfo-conjugated endogenous steroid metabolites and bile acids, was assessed for its ability to detect long-term effects of exposure. The metabolomic strategy presented in this work allowed the determination of metabolic patterns related to dioxin effects in humans and the discovery of highly predictive subsets of biologically meaningful and clinically relevant compounds. These results are expected to provide valuable information for a deeper understanding of the molecular events related to dioxin toxicity. Furthermore, this work presents an original methodology of data dimensionality reduction that uses an extreme phenotype as a guide to select relevant features prior to data modeling (biologically driven data reduction).
Improved method for phase wraps reduction in profilometry
Du, Guangliang; Zhou, Canlin; Si, Shuchun; Li, Hui; Lei, Zhenkun; Li, Yanjie
2016-01-01
In order to completely eliminate, or greatly reduce, the number of phase wraps in a 2D wrapped phase map, Gdeisat et al proposed an algorithm that shifts the spectrum towards the origin. However, the spectrum can be shifted only by an integer number of samples, so the phase wraps reduction is often not optimal. In addition, Gdeisat's method takes considerable time to perform the Fourier transform, the inverse Fourier transform, and the selection and shifting of the spectral components. In view of these problems, we propose an improved method for phase wraps elimination or reduction. First, the wrapped phase map is padded with zeros and the carrier frequency of the projected fringe is determined at high resolution, which is then used as the shift distance of the spectrum. The frequency shift is then realized in the spatial domain. This not only enables the spectrum to be shifted by a rational number when the carrier frequency is not an integer, but also reduces the execution time. Finally, the experimental results demonstrated that ...
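The key step, removing the carrier in the spatial domain so the shift need not be an integer number of spectral samples, amounts to multiplying by a complex exponential. A 1-D illustrative sketch with an assumed synthetic signal (not the authors' data or exact procedure):

```python
import numpy as np

# Wrapped phase of a signal with a non-integer carrier frequency plus a
# slowly varying term: a 1-D analogue of the fringe problem.
N = 256
x = np.arange(N)
f0 = 10.3 / N                                  # carrier (non-integer bin)
phase = 2 * np.pi * f0 * x + 0.5 * np.sin(2 * np.pi * x / N)
wrapped = np.angle(np.exp(1j * phase))

# Estimate the carrier by zero-padding the FFT (higher frequency
# resolution), then demodulate in the spatial domain: a rational shift.
pad = 8
spec = np.fft.fft(np.exp(1j * wrapped), pad * N)
f_est = np.argmax(np.abs(spec)) / (pad * N)
demod = np.angle(np.exp(1j * wrapped) * np.exp(-2j * np.pi * f_est * x))

def count_wraps(p):
    return np.sum(np.abs(np.diff(p)) > np.pi)

assert count_wraps(demod) < count_wraps(wrapped)
```

Because the demodulation happens pointwise in the spatial domain, no inverse FFT of a shifted spectrum is needed, which is where the claimed execution-time saving comes from.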
Dimensional reduction of the CPT-even electromagnetic sector of the standard model extension
Casana, Rodolfo; Carvalho, Eduardo S.; Ferreira, Manoel M., Jr.
2011-08-01
The CPT-even Abelian gauge sector of the standard model extension is represented by the Maxwell term supplemented by (KF)μνρσFμνFρσ, where the Lorentz-violating background tensor, (KF)μνρσ, possesses the symmetries of the Riemann tensor. In the present work, we examine the planar version of this theory, obtained by means of a typical dimensional reduction procedure to (1+2) dimensions. The resulting planar electrodynamics is composed of a gauge sector containing six Lorentz-violating coefficients, a scalar field endowed with a noncanonical kinetic term, and a coupling term that links the scalar and gauge sectors. The dispersion relation is exactly determined, revealing that the six parameters related to the pure electromagnetic sector do not yield birefringence at any order. In this model, the birefringence may appear only as a second order effect associated with the coupling tensor linking the gauge and scalar sectors. The equations of motion are written and solved in the stationary regime. The Lorentz-violating parameters do not alter the asymptotic behavior of the fields but induce an angular dependence not observed in the Maxwell planar theory.
Phase transitions at finite temperature and dimensional reduction for fermions and bosons
Kocic, Aleksandar
1995-01-01
In a recent Letter we discussed the fact that large-N expansions and computer simulations indicate that the universality class of the finite-temperature chiral symmetry restoration transition in the 3D Gross-Neveu model is mean field theory. This was seen to be a counterexample to the standard 'sigma model' scenario, which predicts the 2D Ising model universality class. In this article we present more evidence, both theoretical and numerical, that this result is correct. We develop a physical picture for our results and discuss the width of the scaling region (Ginzburg criterion), 1/N corrections, and differences between the dynamics of BCS superconductors and Gross-Neveu models. Lattices as large as 12 \times 72^2 are simulated for both the N=12 and N=4 cases, and the numerical evidence for mean field scaling is quite compelling. We point out that the amplitude ratio for the model's susceptibility is a particularly good observable for distinguishing between the dimensional reduction and the mean field sceneri...
Yi Long
2015-06-01
This paper describes a novel strategy for the visualization of hyperspectral imagery based on the analysis of pairwise distances between image pixels. The goal of this approach is to generate a final color image with excellent interpretability and high contrast at the cost of distorting a few pairwise distances. Specifically, the principle of equal variance is introduced to divide all hyperspectral bands into three subgroups and to ensure the energy is distributed uniformly between them, as in natural color images. Then, after detecting both normal and outlier pixels, these three subgroups are mapped into the three color components of the output visualization using two different mapping (i.e., dimensionality reduction) schemes for the two types of pixels. The widely used multidimensional scaling (MDS) is applied to normal pixels, and a new objective function, taking into account the weighting of pairwise distances, is presented for the outlier pixels. The pairwise distance weighting is designed such that small pairwise distances between the outliers and their respective neighbors are emphasized and large deviations are suppressed. This produces an image with high contrast and good interpretability while retaining the detailed information content. The proposed algorithm is compared with several state-of-the-art visualization techniques and evaluated on the well-known AVIRIS hyperspectral images. The effectiveness of the proposed strategy is substantiated both visually and quantitatively.
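The equal-variance band grouping can be sketched as follows (a synthetic cube and simple group means standing in for the paper's MDS mapping; everything here is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic hyperspectral cube: 32x32 pixels, 30 bands of growing variance.
cube = rng.normal(size=(32, 32, 30)) * np.linspace(0.5, 2.0, 30)

# Split the 30 bands into three contiguous groups carrying approximately
# equal cumulative variance: the "principle of equal variance".
var = cube.reshape(-1, 30).var(axis=0)
cum = np.cumsum(var) / var.sum()
b1 = int(np.searchsorted(cum, 1 / 3)) + 1
b2 = int(np.searchsorted(cum, 2 / 3)) + 1
groups = [slice(0, b1), slice(b1, b2), slice(b2, 30)]

# Map each group to one colour channel (a stand-in for the MDS step).
rgb = np.stack([cube[:, :, g].mean(axis=2) for g in groups], axis=-1)

assert rgb.shape == (32, 32, 3)
gv = [var[g].sum() for g in groups]
assert max(gv) / min(gv) < 2.0  # groups carry comparable energy
```

In the paper, each subgroup is reduced by MDS (or the weighted objective for outlier pixels) rather than by a plain mean, but the balanced three-way split of spectral energy is the same.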
Dimensional Reduction of a Lorentz- and CPT-violating Chern-Simons Model
Belich, H; Orlando, M T D
2003-01-01
Taking as starting point a Lorentz and CPT non-invariant Chern-Simons-like model defined in 1+3 dimensions, we realize its dimensional reduction to D=1+2. One then obtains a new planar model, composed of the Maxwell-Chern-Simons (MCS) sector, a massless Klein-Gordon scalar field, and a coupling term that mixes the gauge field with the external vector $v^{\mu}$. In spite of breaking Lorentz invariance in the particle frame, this model may preserve the CPT symmetry for a single particular choice of $v^{\mu}$. The solution of the wave equations shows a behavior similar to, but deviating from, the usual MCS electrodynamics by some correction terms (dependent on the background field). These solutions also indicate the existence of spatial anisotropy when $v^{\mu}$ is purely space-like, which is consistent with the determination of a privileged direction in space, v. The reduced model exhibits stability, but causality can be jeopardized by some modes. PACS numbers: 11.10.Kk; 11.30.Cp; 11.30.Er; 1...
Dimensional reduction as a tool for mesh refinement and trackingsingularities of PDEs
Stinis, Panagiotis
2007-06-10
We present a collection of algorithms which utilize dimensional reduction to perform mesh refinement and study possibly singular solutions of time-dependent partial differential equations. The algorithms are inspired by constructions used in statistical mechanics to evaluate the properties of a system near a critical point. The first algorithm allows the accurate determination of the time of occurrence of a possible singularity. The second algorithm is an adaptive mesh refinement scheme which can be used to approach the possible singularity efficiently. Finally, the third algorithm uses the second algorithm until the available resolution is exhausted (as we approach the possible singularity) and then switches to a dimensionally reduced model which, when accurate, can follow the solution faithfully beyond the time of occurrence of the purported singularity. An accurate dimensionally reduced model should dissipate energy at the right rate. We construct two variants of each algorithm. The first variant assumes that we have actual knowledge of the reduced model. The second variant assumes that we know the form of the reduced model, i.e., the terms appearing in the reduced model, but not necessarily their coefficients. In this case, we also provide a way of determining the coefficients. We present numerical results for the Burgers equation with zero and nonzero viscosity to illustrate the use of the algorithms.
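The first algorithm's goal, pinning down the blow-up time, can be illustrated on the inviscid Burgers equation, for which the slope along characteristics is known in closed form. The sketch below is our own toy construction, not the paper's implementation: for u0(x) = sin(x) the maximum slope grows like 1/(1 - t), so the reciprocal 1/max|u_x| decays linearly and its extrapolated zero crossing estimates the singularity time.

```python
import math

def max_slope(t, n=2000):
    # exact slope along characteristics for u0(x) = sin(x):
    # u_x = u0'(x0) / (1 + t*u0'(x0)); steepest where u0'(x0) = -1
    best = 0.0
    for i in range(n):
        d = math.cos(2*math.pi*i/n)
        denom = 1.0 + t*d
        if denom > 1e-12:
            best = max(best, abs(d)/denom)
    return best

# 1/max|u_x| decays linearly toward the blow-up time;
# extrapolate its zero crossing from early-time samples
ts = [0.2, 0.4, 0.6]
ys = [1.0/max_slope(t) for t in ts]
slope = (ys[-1] - ys[0])/(ts[-1] - ts[0])
t_star = ts[0] - ys[0]/slope
print(t_star)  # ~1.0, the exact blow-up time for u0 = sin(x)
```

In a real computation the slope would come from the numerical solution rather than from characteristics, but the extrapolation step is the same.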
New Approach for Error Reduction in the Volume Penalization Method
Iwakami-Nakano, Wakana; Hatakeyama, Nozomu; Hattori, Yuji
2012-01-01
The volume penalization method offers an efficient way to numerically simulate flows around complex-shaped bodies which move and/or deform in general. In this method a penalization term, which has permeability eta and a mask function, is added to a governing equation as a forcing term in order to impose different dynamics in solid and fluid regions. In this paper we investigate the accuracy of the volume penalization method in detail. We choose the one-dimensional Burgers' equation as a governing equation since it enables extensive study and has a nonlinear term similar to the Navier-Stokes equations. It is confirmed that the error, which consists of the discretization/truncation error, the penalization error, the round-off error, and others, has the same features as those in previous results when we use the standard definition of the mask function. As the number of grid points increases, the error converges to a non-zero constant which is equal to the penalization error. We propose a new approach for reduc...
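A minimal sketch of the penalization idea for the 1D Burgers equation (our own toy discretization; the paper's scheme, parameters, and error analysis differ). The mask chi is 1 inside the solid and 0 in the fluid, and the stiff penalization term -(chi/eta)u is treated implicitly so the time step is not restricted by eta:

```python
import math

# explicit FD for u_t + u u_x = nu u_xx - (chi(x)/eta) u
# small eta forces u -> 0 inside the solid region
n, nu, eta = 200, 0.1, 1e-3
dx = 2*math.pi/n
dt = 0.2*dx*dx/nu                    # diffusive stability limit
chi = [1.0 if n//2 <= i < n//2 + n//8 else 0.0 for i in range(n)]
u = [math.sin(i*dx) for i in range(n)]

for _ in range(200):
    un = u[:]
    for i in range(n):
        l, r = u[i-1], u[(i+1) % n]  # periodic neighbors
        rhs = -u[i]*(r - l)/(2*dx) + nu*(r - 2*u[i] + l)/(dx*dx)
        # implicit treatment of the stiff penalization term
        un[i] = (u[i] + dt*rhs)/(1.0 + dt*chi[i]/eta)
    u = un

solid_max = max(abs(u[i]) for i in range(n) if chi[i])
print(solid_max)  # small: velocity suppressed inside the solid
```

The residual velocity inside the solid scales with eta, which is exactly the penalization error the paper quantifies.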
Ji-ming Wu; De-hao Yu
2000-01-01
In this paper, the overlapping domain decomposition method, which is based on the natural boundary reduction [1] and was first suggested in [2], is applied to solve the exterior boundary value problem of the harmonic equation over a three-dimensional domain. The convergence and error estimates for both the continuous case and the discrete case are given. The contraction factor for the exterior spherical domain is also discussed. Moreover, numerical results are given which show that the accuracy and the convergence are in accord with the theoretical analyses.
Standard Test Method for Dimensional Stability of Sandwich Core Materials
American Society for Testing and Materials. Philadelphia
2002-01-01
1.1 This test method covers the determination of the sandwich core dimensional stability in the two plan dimensions. 1.2 The values stated in SI units are to be regarded as the standard. The inch-pound units given may be approximate. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Smoothed Particle Hydrodynamics Method for Two-dimensional Stefan Problem
Tarwidi, Dede
2016-01-01
Smoothed particle hydrodynamics (SPH) is developed for modelling melting and solidification. The enthalpy method is used to solve the heat conduction equations, which involve a moving interface between phases. First, we study the melting of floating ice in water for a two-dimensional system. The ice objects are treated as solid particles floating among fluid particles. The fluid and solid motions are governed by the Navier-Stokes equations and the basic rigid-body dynamics equations, respectively. We also propose a strategy to separate solid particles due to melting and solidification. Numerical results are obtained and plotted for several initial conditions.
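The enthalpy method's core idea can be shown in one dimension on a grid (an illustrative sketch, not the paper's SPH discretization; material constants are arbitrary). The enthalpy H carries both sensible and latent heat, temperature follows from H, and the melt front is wherever H crosses the latent heat L:

```python
# 1D two-phase Stefan problem via the enthalpy method
n, dx, dt = 100, 0.01, 2e-5
k, c, L = 1.0, 1.0, 1.0            # conductivity, heat capacity, latent heat

def temp(h):
    if h < 0.0:
        return h/c                  # solid, below the melting point
    if h < L:
        return 0.0                  # mushy zone: melting at T = 0
    return (h - L)/c                # liquid

H = [0.0]*n                         # solid, initially at the melting point
for _ in range(5000):
    T = [temp(h) for h in H]
    Tb = [1.0] + T + [T[-1]]        # hot wall on the left, insulated right
    for i in range(n):
        H[i] += dt*k*(Tb[i] - 2*Tb[i+1] + Tb[i+2])/(dx*dx)

front = sum(1 for h in H if h >= L)*dx  # depth of fully melted material
print(front)
```

No explicit interface tracking is needed; the front position is recovered from the enthalpy field, which is what makes the method attractive for particle discretizations as well.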
Numerical method of characteristics for one-dimensional blood flow
Acosta, Sebastian; Riviere, Beatrice; Penny, Daniel J; Rusin, Craig G
2014-01-01
Mathematical modeling at the level of the full cardiovascular system requires the numerical approximation of solutions to a one-dimensional nonlinear hyperbolic system describing flow in a single vessel. This model is often simulated by computationally intensive methods like finite elements and discontinuous Galerkin, while some recent applications require more efficient approaches (e.g. for real-time clinical decision support, phenomena occurring over multiple cardiac cycles, iterative solutions to optimization/inverse problems, and uncertainty quantification). Further, the high speed of pressure waves in blood vessels greatly restricts the time-step needed for stability in explicit schemes. We address both cost and stability by presenting an efficient and unconditionally stable method for approximating solutions to diagonal nonlinear hyperbolic systems. Theoretical analysis of the algorithm is given along with a comparison of our method to a discontinuous Galerkin implementation. Lastly, we demonstrate the ...
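The stability property described above can be illustrated with a generic semi-Lagrangian method of characteristics for a single diagonal equation w_t + lam(w) w_x = 0: trace the foot of the characteristic backward and interpolate, which remains stable at CFL numbers above 1. This is our own generic sketch; the paper's scheme for the coupled nonlinear blood-flow system is more elaborate.

```python
import math

def step(w, lam, dt, dx):
    # one semi-Lagrangian step: follow the characteristic back from
    # each node and linearly interpolate the old solution at its foot
    n = len(w)
    out = [0.0]*n
    for i in range(n):
        x = i*dx - lam(w[i])*dt          # foot of the characteristic
        j = math.floor(x/dx)
        th = x/dx - j
        out[i] = (1 - th)*w[j % n] + th*w[(j+1) % n]  # periodic grid
    return out

n = 100
dx = 1.0/n
w = [math.sin(2*math.pi*i*dx) for i in range(n)]
dt = 3.0*dx                              # CFL = 3: explicit schemes fail here
for _ in range(10):
    w = step(w, lambda v: 1.0, dt, dx)   # constant wave speed 1
# after t = 30*dx the profile has shifted by exactly 30 cells
print(w[55])  # ~1.0 = sin(pi/2)
```

For blood flow the large pressure-wave speed makes exactly this freedom from the CFL restriction the source of the speedup.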
Jiang, Jun
This dissertation summarizes a procedure to design blades with finite thickness in three dimensions. In this inverse method, the prescribed quantities are the blade pressure loading shape, the inlet and outlet spanwise distributions of swirl, and the blade thickness distributions, and the primary calculated quantity is the blade geometry. The method is formulated in the fully inverse mode for the design of three-dimensional blades in rotational and compressible flows, whereby the blade shape is determined iteratively using the flow tangency condition along the blade surfaces. This technique is demonstrated here in the first instance for the design of two-dimensional cascaded and three-dimensional blades with finite thickness in inviscid and incompressible flows. In addition, the incoming flow is assumed irrotational so that the only vorticity present in the flowfield is the blade bound and shed vorticities. Design calculations presented for two-dimensional cascaded blades include an inlet guide vane, an impulse turbine blade, and a compressor blade. A consistency check is carried out for these cascaded blade design calculations using a panel analysis method and the analytical solution for the Gostelow profile. Free-vortex design results are also shown for fully three-dimensional blades with finite thickness such as an inlet guide vane, a rotor of axial-flow pumps, and a high-flow-coefficient pump inducer with design parameters typically found in industrial applications. These three-dimensional inverse design results are verified using Adamczyk's inviscid code.
New method of 2-dimensional metrology using mask contouring
Matsuoka, Ryoichi; Yamagata, Yoshikazu; Sugiyama, Akiyuki; Toyoda, Yasutaka
2008-10-01
We have developed a new method of accurately profiling and measuring a mask shape by utilizing a Mask CD-SEM. The method is intended to realize the high accuracy, stability and reproducibility of the Mask CD-SEM, adopting an edge detection algorithm, the key technology used in CD-SEM for high-accuracy CD measurement. In comparison with a conventional image processing method for contour profiling, this edge detection method can create profiles with much higher accuracy, comparable with CD-SEM for semiconductor device CD measurement. The method realizes two-dimensional metrology for refined patterns that were previously difficult to measure, by utilizing high-precision contour profiles. In this report, we introduce the algorithm in general, the experimental results and the application in practice. As shrinkage of the design rule for semiconductor devices has advanced further, aggressive OPC (Optical Proximity Correction) is indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase of data processing cost for advanced MDP (Mask Data Preparation), for instance, and the surge of mask making cost have become a big concern to device manufacturers. That is to say, demands on quality are becoming stringent because of the enormous growth in data quantity accompanying increasingly refined patterns in photomask manufacture. As a result, a massive number of simulated errors occur in mask inspection, which causes lengthening of mask production and inspection periods, cost increases, and long delivery times. In a sense, there is a trade-off between high-accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered around the mask business. To cope with this problem, we propose a DFM solution using two-dimensional metrology for refined patterns.
An explicit four-dimensional variational data assimilation method
QIU ChongJian; ZHANG Lei; SHAO AiMei
2007-01-01
A new data assimilation method called the explicit four-dimensional variational (4DVAR) method is proposed. In this method, the singular value decomposition (SVD) is used to construct the orthogonal basis vectors from a forecast ensemble in a 4D space. The basis vectors represent not only the spatial structure of the analysis variables but also the temporal evolution. After the analysis variables are expressed by a truncated expansion of the basis vectors in the 4D space, the control variables in the cost function appear explicitly, so that the adjoint model, which is used to derive the gradient of cost function with respect to the control variables, is no longer needed. The new technique significantly simplifies the data assimilation process. The advantage of the proposed method is demonstrated by several experiments using a shallow water numerical model and the results are compared with those of the conventional 4DVAR. It is shown that when the observation points are very dense, the conventional 4DVAR is better than the proposed method. However, when the observation points are sparse, the proposed method performs better. The sensitivity of the proposed method with respect to errors in the observations and the numerical model is lower than that of the conventional method.
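The adjoint-free structure of the explicit 4DVAR can be sketched in a few lines (a toy illustration with an identity observation operator and synthetic ensemble; the paper works in a 4D space-time setting with a shallow-water model): build an orthogonal basis from the forecast ensemble by SVD, then solve for the expansion coefficients directly by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
m, ndim, kdim = 20, 50, 5               # ensemble size, state dim, truncation
ens = rng.standard_normal((m, ndim))    # forecast ensemble (members x state)
xbar = ens.mean(axis=0)
U, s, Vt = np.linalg.svd(ens - xbar, full_matrices=False)
basis = Vt[:kdim]                       # leading right singular vectors

# synthetic truth lying in the span of the basis, observed at every
# other state component
truth = xbar + np.array([1.0, -2.0, 0.5, 0.0, 3.0]) @ basis
obs_idx = np.arange(0, ndim, 2)
H = np.eye(ndim)[obs_idx]
y = truth[obs_idx]

# the coefficients minimize |y - H(xbar + basis^T a)|: an explicit
# linear least-squares problem, so no adjoint model is needed
A = H @ basis.T
a, *_ = np.linalg.lstsq(A, y - H @ xbar, rcond=None)
analysis = xbar + a @ basis
err = float(np.max(np.abs(analysis - truth)))
print(err)  # ~0: the truth lies in the span of the basis
```

When the truth does not lie in the ensemble span, the residual of this least-squares fit is exactly the representation error the abstract alludes to for dense observations.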
Inefficacy of cooking methods on mercury reduction from shark.
Chicourel, E L; Sakuma, A M; Zenebon, O; Tenuta-Filho, A
2001-09-01
Shark and other carnivorous fishes present high potential risk of excessive contamination by mercury. The distribution of mercury throughout the body of blue shark--Prionace glauca--was analysed, and the effects on mercury levels by frying and baking in a laboratory oven, and in a microwave oven, were measured. There was no significant statistical difference in mercury levels in the samples taken from regions near the head, or from central and tail parts, indicating homogeneous distribution of the metal in muscles throughout the body. Frying and baking did not affect original mercury levels present in blue shark. This study indicates that specific studies are needed to define the efficacy or inefficacy of the cooking methods on mercury reduction from fish, in order to clearly resolve divergent opinions in the literature.
Ojha VK
2015-02-01
Varun Kumar Ojha,1,2 Konrad Jackowski,3 Ajith Abraham,1,4 Václav Snášel1,2 1IT4Innovations, VŠB – Technical University of Ostrava, Ostrava, Czech Republic; 2Department of Computer Science, VŠB – Technical University of Ostrava, Ostrava, Czech Republic; 3Department of Systems and Computer Networks, Wroclaw University of Technology, Wroclaw, Poland; 4Machine Intelligence Research Labs, Auburn, WA, USA Abstract: Prediction of poly(lactic-co-glycolic acid) (PLGA) micro- and nanoparticles’ dissolution rates plays a significant role in the pharmaceutical and medical industries. The prediction of the PLGA dissolution rate is crucial for drug manufacturing. Therefore, a model that predicts the PLGA dissolution rate could be beneficial. PLGA dissolution is influenced by numerous factors (features), and counting the known features leads to a dataset with 300 features. This large number of features and high redundancy within the dataset makes the prediction task very difficult and inaccurate. In this study, dimensionality reduction techniques were applied in order to simplify the task and eliminate irrelevant and redundant features. A heterogeneous pool of several regression algorithms was independently tested and evaluated. In addition, several ensemble methods were tested in order to improve the accuracy of prediction. The empirical results revealed that the proposed evolutionary weighted ensemble method offered the lowest margin of error and significantly outperformed the individual algorithms and the other ensemble techniques. Keywords: feature selection, regression models, ensemble, protein dissolution
Three-dimensional beam propagation method based on the variable transformed Galerkin's method
XIAO Jinbiao; SUN Xiaohan; ZHANG Mingde
2004-01-01
A novel three-dimensional beam propagation method (BPM) based on the variable transformed Galerkin's method is introduced for simulating optical field propagation in three-dimensional dielectric structures. The infinite Cartesian x-y plane is mapped into a unit square by a tangent-type function transformation. Consequently, the infinite-region problem is converted into a finite-region problem; the boundary truncation is thus eliminated and the calculation accuracy is improved. The three-dimensional BPM basic equation is reduced to a set of first-order ordinary differential equations through sinusoidal basis functions, which fit optical waveguides with arbitrary cladding; the resulting equations are then solved directly by means of the Runge-Kutta method. In addition, the calculation is efficient due to the small matrix derived from the present technique. Both z-invariant and z-variant examples are considered to test both the accuracy and utility of this approach.
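A tangent-type transformation of the kind described maps each infinite transverse coordinate onto a unit interval (assumed form with an arbitrary scale parameter a; the paper may use a different normalization):

```python
import math

# x = a*tan(pi*(xi - 1/2)) maps xi in (0, 1) onto x in (-inf, inf),
# so the infinite x-y plane folds into a unit square coordinate (xi, eta)
a = 1.0

def to_infinite(xi):
    return a*math.tan(math.pi*(xi - 0.5))

def to_unit(x):
    return 0.5 + math.atan(x/a)/math.pi

print(to_infinite(0.5))             # 0.0: centre of the square maps to x = 0
print(to_unit(to_infinite(0.9)))    # round trip returns 0.9
```

Because the mapped domain is finite, no artificial absorbing boundary is needed; the price is a coordinate-dependent metric factor in the transformed Galerkin equations.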
Parallel processing method for two-dimensional Sn transport code DOT3.5
Uematsu, Mikio [Toshiba Corp., Kawasaki, Kanagawa (Japan)
1998-03-01
A parallel processing method for the two-dimensional Sn transport code DOT3.5 has been developed to achieve a drastic reduction of computation time. In the proposed method, parallelization is made with angular domain decomposition and/or space domain decomposition. Calculational speedup for parallel processing by angular domain decomposition is achieved by minimizing the frequency of communications between processing elements. As for parallel processing by space domain decomposition, a two-step rescaling method consisting of segmentwise rescaling and the ordinary pointwise rescaling has been developed to accelerate convergence, which would otherwise be degraded because of discontinuity at the segment boundaries. The developed method was examined on a Sun workstation using the PVM message-passing library, and sufficient speedup was observed. (author)
A new noise reduction method for airborne gravity gradient data
Jirigalatu; Ebbing, Jörg; Sebera, Josef
2016-09-01
Airborne gravity gradient (AGG) measurements offer increased resolution and accuracy compared to terrestrial measurements. But interpretation and processing of AGG data are often challenging, as levelling errors and survey noise affect the data and these effects are not easily recognised in the gradient components. We adapted the classic method of upward continuation for noise reduction, using the noise-level estimates provided by the AGG system. By iteratively projecting the survey data to a lower level and upward continuing the data back to the survey height, parts of the high-frequency signal are suppressed. The filter defined by this approach depends directly on the noise level of the AGG data, the maximum number of iterations and the iterative step. We demonstrate the method by applying it to both synthetic data and real AGG data over Karasjok, Norway, and compare the results to the directional filtering method. The results show that the iterative filter can effectively reduce high-frequency noise in the data.
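The low-pass character of upward continuation is easy to see in the wavenumber domain: continuing harmonic data up by dz multiplies each Fourier component by exp(-|k|dz), so short wavelengths die off exponentially faster than long ones. A 1D single-pass sketch (our own toy profile; the paper's filter is 2D and iterative):

```python
import numpy as np

n, dx, dz = 256, 10.0, 50.0                  # samples, spacing (m), height (m)
x = np.arange(n)*dx
signal = np.sin(2*np.pi*x/1280.0)            # long-wavelength "geology"
noise = 0.3*np.sin(2*np.pi*x/40.0)           # short-wavelength survey noise

k = np.abs(np.fft.fftfreq(n, d=dx))*2*np.pi  # angular wavenumber |k|
up = np.fft.ifft(np.fft.fft(signal + noise)*np.exp(-k*dz)).real

# the signal survives, damped by a known factor, while the noise is
# attenuated by exp(-2*pi*50/40) ~ 4e-4 of its 0.3 amplitude
damped_signal = signal*np.exp(-2*np.pi/1280.0*dz)
resid = float(np.max(np.abs(up - damped_signal)))
print(resid)  # ~1e-4, essentially just the surviving noise
```

The iterative downward/upward scheme in the paper effectively sharpens this filter so the wanted band is less attenuated while the noise band is still suppressed.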
The dimensionality reduction at surfaces as a playground for many-body and correlation effects
Tejeda, A.; Michel, E. G.; Mascaraque, A.
2013-03-01
antiferromagnetic surfaces. Ortega reports on the gap of molecular layers on metal systems, where the metal-organic interaction affects the organic gap through correlation effects. Finally, Cazalilla presents a study of the phase diagram of one-dimensional atoms or molecules displaying a Kondo-exchange interaction with the substrate. Acknowledgments The editors are grateful to all the invited contributors to this special section of Journal of Physics: Condensed Matter. We also thank the IOP Publishing staff for handling the administrative matters and the refereeing process.
Correlation and many-body effects at surfaces: contents
The dimensionality reduction at surfaces as a playground for many-body and correlation effects (A Tejeda, E G Michel and A Mascaraque)
Electron-phonon coupling in quasi-free-standing graphene (Jens Christian Johannsen, Søren Ulstrup, Marco Bianchi, Richard Hatch, Dandan Guan, Federico Mazzola, Liv Hornekær, Felix Fromm, Christian Raidel, Thomas Seyller and Philip Hofmann)
Exploring highly correlated materials via electron pair emission: the case of NiO/Ag(100) (F O Schumann, L Behnke, C H Li and J Kirschner)
Coherent excitations and electron-phonon coupling in Ba/EuFe2As2 compounds investigated by femtosecond time- and angle-resolved photoemission spectroscopy (I Avigo, R Cortés, L Rettig, S Thirupathaiah, H S Jeevan, P Gegenwart, T Wolf, M Ligges, M Wolf, J Fink and U Bovensiepen)
Understanding the insulating nature of alkali-metal/Si(111):B interfaces (Y Fagot-Revurat, C Tournier-Colletta, L Chaput, A Tejeda, L Cardenas, B Kierren, D Malterre, P Le Fèvre, F Bertran and A Taleb-Ibrahimi)
What about U on surfaces? Extended Hubbard models for adatom systems from first principles (Philipp Hansmann, Loïg Vaugier, Hong Jiang and Silke Biermann)
Influence of on-site Coulomb interaction U on properties of MnO(001)2 × 1 and NiO(001)2 × 1 surfaces (A Schrön, M Granovskij and F Bechstedt)
On the organic energy gap problem (F Flores, E Abad, J I Martínez, B Pieczyrak and J Ortega)
Computational Methods for Multi-dimensional Neutron Diffusion Problems
Song Han
2009-10-15
Lead-cooled fast reactor (LFR) has potential for becoming one of the advanced reactor types in the future. Innovative computational tools for system design and safety analysis of such NPP systems are needed. One of the most popular trends is coupling multi-dimensional neutron kinetics (NK) with thermal-hydraulics (T-H) to enhance the capability of simulating NPP systems under abnormal conditions or during rare severe accidents. Therefore, various numerical methods applied in the NK module should be reevaluated to adapt to the scheme of the coupled code system. In the present work, a neutronic module for the solution of two-dimensional steady-state multigroup diffusion problems in nuclear reactor cores is developed. The module can produce both direct fluxes as well as adjoints, i.e. neutron importances. Different numerical schemes are employed. A standard finite-difference (FD) approach is firstly implemented, mainly to serve as a reference for less computationally challenging schemes, such as transverse-integrated nodal methods (TINM) and boundary element methods (BEM), which are considered in the second part of the work. The validation of the proposed methods is carried out by comparison of results for some reference structures. In particular, a critical problem for a homogeneous reactor, for which an analytical solution exists, is considered as a benchmark. The computational module is then applied to a fast-spectrum system having physical characteristics similar to the proposed European Lead-cooled System (ELSY) project. The results show the effectiveness of the numerical techniques presented. The flexibility and the possibility to obtain neutron importances allow the use of the module for parametric studies, design assessments and integral parameter evaluations, as well as for future sensitivity and perturbation analyses and as a shape solver for time-dependent procedures.
Method of local pointed function reduction of original shape in Fourier transformation
Dosch, H
2002-01-01
A method for the analytical reduction of the original shape in the one-dimensional Fourier transformation from the Fourier image modulus is proposed. The basic concept of the method is to represent the model shape as a sum of local peak functions. Eigenfunctions generated by linear differential equations with polynomial coefficients are selected as the latter. This makes it possible to handle the Fourier transformation without numerical integration. This reduces the inverse problem to a nonlinear regression with a small number of estimated parameters and to a numerical or asymptotic study of the model peak functions, i.e. the eigenfunctions of the differential problems and their Fourier images.
Histogram Bins Matching Approach for CBIR Based on Linear grouping for Dimensionality Reduction
H. B. Kekre
2013-11-01
This paper describes the histogram bins matching approach for CBIR. Histogram bins are reduced from 256 to 32 and 16 by linear grouping, and the effect of this dimensionality reduction is analyzed, compared, and evaluated. The work presented in this paper contributes to all three main phases of CBIR: feature extraction, similarity matching and performance evaluation. Feature extraction explores the idea of histogram bins matching for the three colors R, G and B. Histogram bin contents are used to represent the feature vector in three forms. The first form of feature is the count of pixels; the other forms are obtained by computing the total and the mean of intensities for the pixels falling in each of the histogram bins. Initially the size of the feature vector is 256 components, as a histogram with all 256 bins. Further, the size of the feature vector is reduced to 32 bins and then 16 bins by simple linear grouping of the bins. The feature extraction process for each size and type of feature vector is executed over a database of 2000 BMP images from 20 different classes. It prepares the feature vector databases as the preprocessing part of this work. Similarity matching between query and database image feature vectors is carried out by means of the first five orders of the Minkowski distance and also with the cosine correlation distance. The same set of 200 query images is executed for all types of feature vector and for all similarity measures. The performance of all aspects addressed in this paper is evaluated using three parameters: PRCP (Precision Recall Crossover Point), LS (Longest String), and LSRR (Length of String to Retrieve all Relevant images).
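The linear grouping step and the Minkowski comparison are both one-liners; a sketch with a stand-in histogram (the paper's feature vectors come from real image channels):

```python
# reduce a 256-bin channel histogram to 32 (or 16) bins by summing each
# run of 8 (or 16) consecutive bins, then compare with Minkowski distance
def group_bins(hist, target):
    step = len(hist)//target
    return [sum(hist[i:i+step]) for i in range(0, len(hist), step)]

def minkowski(a, b, p):
    return sum(abs(x - y)**p for x, y in zip(a, b))**(1.0/p)

hist256 = [i % 7 for i in range(256)]   # stand-in pixel counts
h32 = group_bins(hist256, 32)
h16 = group_bins(hist256, 16)
print(len(h32), len(h16), sum(h32) == sum(hist256))  # 32 16 True
```

Grouping preserves the total pixel count, so the reduced vectors remain valid (coarser) histograms; only the within-group intensity resolution is sacrificed.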
Arehart Eric
2009-03-01
Background: The fidelity of DNA replication serves as the nidus for both genetic evolution and genomic instability fostering disease. Single nucleotide polymorphisms (SNPs) constitute greater than 80% of the genetic variation between individuals. A new theory regarding DNA replication fidelity has emerged in which selectivity is governed by base-pair geometry through interactions between the selected nucleotide, the complementary strand, and the polymerase active site. We hypothesize that specific nucleotide combinations in the flanking regions of SNP fragments are associated with mutation. Results: We modeled the relationship between DNA sequence and observed polymorphisms using the novel multifactor dimensionality reduction (MDR) approach. MDR was originally developed to detect synergistic interactions between multiple SNPs that are predictive of disease susceptibility. We initially assembled data from the Broad Institute as a pilot test for the hypothesis that flanking region patterns associate with mutagenesis (n = 2194). We then confirmed and expanded our inquiry with human SNPs within coding regions and their flanking sequences collected from the National Center for Biotechnology Information (NCBI) database (n = 29967) and a control set of sequences (coding regions not associated with SNP sites) randomly selected from the NCBI database (n = 29967). We discovered seven flanking region pattern associations in the Broad dataset which reached a minimum significance level of p ≤ 0.05. Significant models (p Conclusion: The present study represents the first use of this computational methodology for modeling nonlinear patterns in molecular genetics. MDR was able to identify distinct nucleotide patterning around sites of mutations dependent upon the observed nucleotide change. We discovered one flanking region set that included five nucleotides clustered around a specific type of SNP site. Based on the strongly associated patterns identified in
Enrico eChiovetto
2013-02-01
A long-standing hypothesis in the neuroscience community is that the CNS generates muscle activities to accomplish movements by combining a relatively small number of stereotyped patterns of muscle activations, often referred to as muscle synergies. Different definitions of synergies have been given in the literature. The most well known are those of synchronous, time-varying and temporal muscle synergies. Each of them is based on a different mathematical model used to factor EMG array recordings, collected during the execution of a variety of motor tasks, into a well-determined spatial, temporal or spatio-temporal organization. This plurality of definitions and their separate application to complex tasks have so far complicated the comparison and interpretation of results obtained across studies, and it has always remained unclear why and when one synergistic decomposition should be preferred to another. By using well-understood motor tasks such as elbow flexions and extensions, we aimed in this study to clarify which motor features are characterized by each kind of decomposition and to assess whether, when and why one of them should be preferred to the others. We found that three temporal synergies, each accounting for specific temporal phases of the movements, could account for the majority of the data variation. Similar performance could be achieved by two synchronous synergies, encoding the agonist-antagonist nature of the two muscles considered, and by two time-varying muscle synergies, each encoding a task-related feature of the elbow movements, specifically their direction. Our findings support the notion that each EMG decomposition provides a set of well-interpretable muscle synergies, identifying reductions of dimensionality in different aspects of the movements. Taken together, our findings suggest that the decompositions are not equivalent and may imply different neurophysiological substrates.
Prabhakar, Sunil Kumar; Rajaguru, Harikumar
2015-12-01
The most common and frequently occurring neurological disorder is epilepsy, and the main method for its diagnosis is electroencephalogram (EEG) signal analysis. Due to the length of EEG recordings, EEG signal analysis is quite time-consuming when processed manually by an expert. This paper proposes the application of the Linear Graph Embedding (LGE) concept as a dimensionality reduction technique for processing epileptic encephalographic signals, which are then classified using Sparse Representation Classifiers (SRC). SRC is used to analyze the classification of epilepsy risk levels from EEG signals, and parameters such as Sensitivity, Specificity, Time Delay, Quality Value, Performance Index and Accuracy are analyzed.
Exact rebinning methods for three-dimensional PET.
Liu, X; Defrise, M; Michel, C; Sibomana, M; Comtat, C; Kinahan, P; Townsend, D
1999-08-01
The high computational cost of data processing in volume PET imaging is still hindering the routine application of this successful technique, especially in the case of dynamic studies. This paper describes two new algorithms based on an exact rebinning equation, which can be applied to accelerate the processing of three-dimensional (3-D) PET data. The first algorithm, FOREPROJ, is a fast forward-projection algorithm that allows calculation of the 3-D attenuation correction factors (ACF's) directly from a two-dimensional (2-D) transmission scan, without first reconstructing the attenuation map and then performing a 3-D forward projection. The use of FOREPROJ speeds up the estimation of the 3-D ACF's by more than a factor of five. The second algorithm, FOREX, is a rebinning algorithm that is also more than five times faster, compared to the standard reprojection algorithm (3DRP), and does not suffer from the image distortions generated by the even faster approximate Fourier rebinning (FORE) method at large axial apertures. However, FOREX is probably not required by most existing scanners, as their axial apertures are not large enough to show improvements over FORE with clinical data. Both algorithms have been implemented and applied to data simulated for a scanner with a large axial aperture (30 degrees), and also to data acquired with the ECAT HR and the ECAT HR+ scanners. Results demonstrate the excellent accuracy achieved by these algorithms and the important speedup when the sinogram sizes are powers of two.
Real-Time Active Cosmic Neutron Background Reduction Methods
Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ronald; Mitchell, Stephen; Guss, Paul
2013-09-01
Neutron counting using large arrays of pressurized 3He proportional counters from an aerial system or in a maritime environment suffers from background counts from the primary cosmic neutrons and secondary neutrons caused by cosmic ray-induced mechanisms like spallation and charge-exchange reactions. This paper reports the work performed at the Remote Sensing Laboratory–Andrews (RSL-A) and results obtained when using two different methods to reduce the cosmic neutron background in real time. Both methods used shielding materials with a high concentration (up to 30% by weight) of neutron-absorbing materials, such as natural boron, to remove the low-energy neutron flux from the cosmic background as the first step of the background reduction process. Our first method was to design, prototype, and test an up-looking plastic scintillator (BC-400, manufactured by Saint Gobain Corporation) to tag the cosmic neutrons and then create a logic pulse of a fixed time duration (~120 μs) to block the data taken by the neutron counter (pressurized 3He tubes running in proportional counter mode). The second method examined the time correlation between the arrival of two successive neutron signals at the counting array and calculated the excess of variance (Feynman variance Y2F)1 of the neutron count distribution over the Poisson distribution. The dilution of this variance from cosmic background values would ideally signal the presence of man-made neutrons.2 The first method has been technically successful in tagging the neutrons in the cosmic-ray flux and preventing them from being counted in the 3He tube array by an electronic veto; field measurement work shows the efficiency of the electronic veto counter to be about 87%. The second method has successfully derived an empirical relationship between the percentile non-cosmic component in a neutron flux and the Y2F of the measured neutron count distribution. By using shielding materials alone, approximately 55% of the neutron flux
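The Feynman variance statistic behind the second method is simple to compute: bin the counts into fixed gates and take var/mean - 1, which vanishes for Poisson (cosmic-like) arrivals and is positive when neutrons arrive in correlated bursts. A synthetic sketch (our own toy count model, not the paper's detector data):

```python
import random
random.seed(1)

def feynman_y(counts):
    # excess of variance over the Poisson value (var/mean = 1)
    m = sum(counts)/len(counts)
    var = sum((c - m)**2 for c in counts)/(len(counts) - 1)
    return var/m - 1.0

gates = 5000
# near-Poisson background: many independent low-probability arrivals
bg = [sum(1 for _ in range(300) if random.random() < 0.01)
      for _ in range(gates)]
# correlated source: every detected event deposits a pair of counts,
# doubling the variance relative to the mean
src = [2*sum(1 for _ in range(150) if random.random() < 0.01)
       for _ in range(gates)]

print(round(feynman_y(bg), 2), round(feynman_y(src), 2))
```

The background value stays near zero while the paired-arrival source gives a clearly positive Y, which is the signature used to separate man-made neutrons from the cosmic flux.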
Index-aware model order reduction methods: applications to differential-algebraic equations
Banagaaya, N; Schilders, W H A
2016-01-01
The main aim of this book is to discuss model order reduction (MOR) methods for differential-algebraic equations (DAEs) with linear coefficients, making use of splitting techniques before applying model order reduction. The splitting produces a system of ordinary differential equations (ODEs) and a system of algebraic equations, which are then reduced separately. For the reduction of the ODE system, conventional MOR methods can be used, whereas for the reduction of the algebraic systems new methods are discussed. The discussion focuses on the index-aware model order reduction (IMOR) method and its variations, methods for which the so-called index of the original model is automatically preserved after reduction.
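The projection step applied to the ODE part can be sketched with a generic POD truncation; the index-aware splitting itself is more involved, and the matrix sizes, snapshot integrator, and truncation rank here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 5                                 # full and reduced dimensions
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # stable linear ODE x' = A x

# Collect solution snapshots by explicit Euler time stepping.
x = rng.standard_normal(n)
snaps = []
for _ in range(200):
    x = x + 0.01 * (A @ x)
    snaps.append(x.copy())

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(np.array(snaps).T, full_matrices=False)
V = U[:, :r]                  # orthonormal projection basis
A_r = V.T @ A @ V             # reduced r x r system matrix
```

The reduced model evolves r = 5 coordinates instead of 100; the book's point is that for DAEs this projection may only be applied after the algebraic part has been split off.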
Simulation of Thermal Stratification in BWR Suppression Pools with One Dimensional Modeling Method
Haihua Zhao; Ling Zou; Hongbin Zhang
2014-01-01
The suppression pool in a boiling water reactor (BWR) plant is not only the major heat sink within the containment system but also the source of the major emergency cooling water for the reactor core. In several accident scenarios, such as a loss-of-coolant accident and an extended station blackout, thermal stratification tends to form in the pool after the initial rapid venting stage. Accurately predicting the pool stratification phenomenon is important because it affects the peak containment pressure; the pool temperature distribution also affects the NPSHa (available net positive suction head) and therefore the performance of the Emergency Core Cooling System and Reactor Core Isolation Cooling System pumps that draw cooling water back to the core. Current safety analysis codes use zero-dimensional (0-D) lumped-parameter models to calculate the energy and mass balance in the pool; therefore, they have large uncertainties in the prediction of scenarios in which stratification and mixing are important. While three-dimensional (3-D) computational fluid dynamics (CFD) methods can be used to analyze realistic 3-D configurations, these methods normally require very fine grid resolution to resolve thin substructures such as jets and wall boundaries, resulting in long simulation times. For mixing in stably stratified large enclosures, the BMIX++ code (Berkeley mechanistic MIXing code in C++) has been developed to implement a highly efficient analysis method for stratification in which the ambient fluid volume is represented by one-dimensional (1-D) transient partial differential equations and substructures (such as free or wall jets) are modeled with 1-D integral models. This allows very large reductions in computational effort compared to multi-dimensional CFD modeling. One heat-up experiment performed at the Finnish POOLEX facility, which was designed to study phenomena relevant to the Nordic-design BWR suppression pool, including thermal stratification and mixing, is used for
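The 1-D ambient-fluid idea can be illustrated with a toy explicit-diffusion step on a stratified vertical temperature profile; the jet and wall-boundary submodels are omitted, and the grid, diffusivity, and time step are made-up values, not BMIX++ parameters.

```python
import numpy as np

nz, dz, dt, alpha = 50, 0.02, 1e-4, 0.15   # cells, spacing [m], step [s], diffusivity
# Stratified start: cold lower layer (30 C), hot upper layer (50 C).
T = np.where(np.arange(nz) < nz // 2, 30.0, 50.0)

for _ in range(2000):
    # Explicit second-order diffusion on interior cells; end cells held fixed.
    T[1:-1] = T[1:-1] + dt * alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2

# The sharp interface smears out but the profile stays monotone (stable
# stratification), which is the regime the 1-D representation targets.
stratified = bool(np.all(np.diff(T) >= -1e-9))
```

The explicit scheme is stable here because alpha*dt/dz**2 = 0.0375 < 0.5; a production code would use an implicit solver and add the jet integral models as source terms.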
A MODEL AND CONTROLLER REDUCTION METHOD FOR ROBUST CONTROL DESIGN.
YUE,M.; SCHLUETER,R.
2003-10-20
A bifurcation-subsystem-based model and controller reduction approach is presented. Using this approach, a robust μ-synthesis SVC control is designed for interarea oscillation and voltage control based on a small reduced-order bifurcation subsystem model of the full system. The control synthesis problem is posed by structured uncertainty modeling and control configuration formulation using the bifurcation subsystem knowledge of the nature of the interarea oscillation caused by a specific uncertainty parameter. The bifurcation subsystem method plays a key role in this paper because it provides (1) a bifurcation parameter for uncertainty modeling; (2) a criterion to reduce the order of the resulting MSVC control; and (3) a low-order model for a bifurcation-subsystem-based SVC (BMSVC) design. The use of the model of the bifurcation subsystem to produce a low-order controller simplifies the control design and reduces the computational effort so significantly that the robust μ-synthesis control can be applied to large systems where the computational burden would otherwise make robust control design impractical. The RGA analysis and time simulation show that the reduced BMSVC control design captures the center manifold dynamics and uncertainty structure of the full-system model and is capable of stabilizing the full system and achieving satisfactory control performance.
A dual-cable noise reduction method for Langmuir probes
Yang, T. F.; Zu, Q. X.; Liu, Ping
1995-07-01
To obtain fast-time-response plasma properties (electron density and electron temperature) with a Langmuir probe, the applied probe voltage has to be swept at high frequency. Due to the RC characteristics of coaxial cables, an induced noise of square-wave form appears when a sawtooth voltage is applied to the probe. Such noise is very troublesome and difficult to remove, particularly when the probe signal is weak. This paper discusses a noise reduction method using a dual-cable circuit. One of the cables is active and the other is a dummy. Both are of equal length and are laid parallel to each other. The active cable carries the applied probe voltage and the probe current signal. The dummy one is not connected to the probe. After careful tuning, the induced noises from both cables are nearly identical and can therefore be effectively eliminated with a differential amplifier. A clean I-V characteristic curve can thus be obtained. This greatly improves the accuracy and the time resolution of the measured values of ne and Te.
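The differential cancellation can be sketched in a few lines: after tuning, the dummy cable carries (nearly) the same induced square-wave noise as the active cable, and subtracting the two channels recovers the weak probe signal. The waveforms and amplitudes below are illustrative, not measured values from the paper.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2000)
probe_signal = 1e-3 * np.exp(5.0 * (t - 1.0))        # weak I-V-like probe trace
sweep = (10.0 * t) % 1.0                             # swept sawtooth probe voltage
induced_noise = 0.05 * np.sign(np.gradient(sweep))   # square-wave cable pickup

active = probe_signal + induced_noise   # active cable: signal buried in noise
dummy = induced_noise                   # dummy cable, after careful tuning

recovered = active - dummy              # differential amplifier output
residual = np.abs(recovered - probe_signal).max()
```

The noise amplitude here (0.05) is fifty times the signal, which is why the raw active channel alone is unusable and the matched dummy channel is needed.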
Zulhan, Zulfiadi; Himawan, David Mangatur; Dimyati, Arbi
2017-01-01
In this study, an isothermal-temperature-gradient method was used as an alternative technique to separate iron and alumina in lateritic iron ore. The lateritic iron ore was ground to a grain size of less than 200 mesh and agglomerated into cylindrical briquettes using a press machine. The iron oxide in the briquette was reduced by the addition of coal, with the entire surface of the briquette covered by the coal. The temperature profile for the reduction process of the briquette was divided into three stages: the first stage was isothermal at 1000°C, the second stage applied a temperature gradient at varying heating rates of 5, 6.67, and 8.33°C/minute from 1000 to 1400°C, and the final stage was isothermal at 1400°C. The effect of dehydroxylation of the lateritic iron ore was studied as well. The aluminum distribution inside and outside the briquette was analyzed by scanning electron microscopy with energy-dispersive spectroscopy (SEM-EDS). The analysis results showed that the aluminum content increased from 8.01% at the outside of the briquette to 13.12% in the inside. On the contrary, the iron content was higher at the outside of the briquette than in the inside. These phenomena indicate that aluminum tends to migrate toward the center of the briquette while iron moves outward to the surface. Furthermore, iron metallization of 91.03% could be achieved without the dehydroxylation treatment. With the dehydroxylation treatment, the degree of iron metallization increased to 95.27%.
Maier, Andreas; Wigstrom, Lars; Hofmann, Hannes G; Hornegger, Joachim; Zhu, Lei; Strobel, Norbert; Fahrig, Rebecca
2011-11-01
The combination of a quickly rotating C-arm gantry with a digital flat panel has enabled the acquisition of three-dimensional (3D) data in the interventional suite. However, image quality is still somewhat limited, since the hardware has not been optimized for CT imaging. Adaptive anisotropic filtering has the ability to improve image quality by reducing the noise level, and therewith the radiation dose, without introducing noticeable blurring. By applying the filtering prior to 3D reconstruction, noise-induced streak artifacts are reduced as compared to processing in the image domain. 3D anisotropic adaptive filtering was used to process an ensemble of 2D x-ray views acquired along a circular trajectory around an object. After arranging the input data into a 3D space (2D projections + angle), the orientation of structures was estimated using a set of differently oriented filters. The resulting tensor representation of local orientation was utilized to control the anisotropic filtering. Low-pass filtering is applied only along structures, to maintain the high spatial frequency components perpendicular to them. The evaluation of the proposed algorithm includes numerical simulations, phantom experiments, and in-vivo data acquired using an AXIOM Artis dTA C-arm system (Siemens AG, Healthcare Sector, Forchheim, Germany). Spatial resolution and noise levels were compared with and without adaptive filtering. A human observer study was carried out to evaluate low-contrast detectability. The adaptive anisotropic filtering algorithm was found to significantly improve low-contrast detectability by reducing the noise level by half (reduction of the standard deviation in certain areas from 74 to 30 HU). Virtually no degradation of high-contrast spatial resolution was observed in the modulation transfer function (MTF) analysis. Although the algorithm is computationally intensive, hardware acceleration using Nvidia's CUDA interface provided an 8.9-fold speed-up of the
A vertical parallax reduction method for stereoscopic video based on adaptive interpolation
Li, Qingyu; Zhao, Yan
2016-10-01
The existence of vertical parallax is the main factor affecting the viewing comfort of stereo video. Visual fatigue is gaining widespread attention with the booming development of 3D stereoscopic video technology. In order to reduce the vertical parallax without affecting the horizontal parallax, a self-adaptive image scaling algorithm is proposed, which uses the edge characteristics efficiently. In addition, the nonlinear Levenberg-Marquardt (L-M) algorithm is introduced in this paper to improve the accuracy of the transformation matrix. Firstly, the self-adaptive scaling algorithm is used for the original image interpolation. When a pixel of the original image is in an edge area, the interpolation is implemented adaptively along the edge direction obtained by the Sobel operator. Secondly, the SIFT algorithm, which is invariant to scaling, rotation, and affine transformation, is used to detect the matching feature points in the binocular images. Then, according to the coordinate positions of the matching points, the transformation matrix that reduces the vertical parallax is calculated using the Levenberg-Marquardt algorithm. Finally, the transformation matrix is applied to the target image to calculate the new coordinate position of each pixel of that view. The experimental results show that, compared with the method that reduces the vertical parallax by using a linear algorithm to calculate the two-dimensional projective transformation, the proposed method improves the vertical parallax reduction noticeably. At the same time, in terms of the impact on horizontal parallax, the proposed method leaves the horizontal parallax closer to that of the original image after vertical parallax reduction. Therefore, the proposed method can optimize the vertical parallax reduction.
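The Levenberg-Marquardt step can be sketched with SciPy on synthetic matched points: fit a projective transform of the row coordinate that maps the right-view vertical coordinate onto the left-view one, leaving the horizontal coordinate (and hence the horizontal parallax) untouched. The transform parameterization and point data are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
# Matched feature points: (x, y) in the left view, (u, v) in the right view.
x = rng.uniform(0, 640, 50)
y = rng.uniform(0, 480, 50)
u = x - rng.uniform(5, 20, 50)          # horizontal parallax (to be preserved)
v = 1.02 * y + 0.01 * x + 3.0           # small vertical misalignment

def residual(p):
    # Projective row transform v' = (a*u + b*v + c) / (g*u + h*v + 1),
    # compared against the left-view vertical coordinate.
    a, b, c, g, h = p
    return (a * u + b * v + c) / (g * u + h * v + 1.0) - y

fit = least_squares(residual, x0=[0.0, 1.0, 0.0, 0.0, 0.0], method='lm')
a, b, c, g, h = fit.x
v_corrected = (a * u + b * v + c) / (g * u + h * v + 1.0)
vertical_error = np.abs(v_corrected - y).max()
```

Only the row coordinate is warped, so u (and the horizontal parallax x - u) is unchanged, which mirrors the paper's goal of reducing vertical parallax without disturbing depth.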
Manifold learning for dimensionality reduction and clustering of skin spectroscopy data
Safi, Asad; Castañeda, Victor; Lasser, Tobias; Mateus, Diana C.; Navab, Nassir
2011-03-01
Diagnosis of benign and malignant skin lesions is currently done mostly by visual assessment and frequent biopsies performed by dermatologists. As the timely and correct diagnosis of these skin lesions is one of the most important factors in the therapeutic outcome, leveraging new technologies to assist the dermatologist seems natural. Optical spectroscopy is a technology being established to aid skin lesion diagnosis, as the multi-spectral nature of this imaging method allows the detection of multiple physiological changes, such as those associated with increased vasculature, cellular structure, oxygen consumption, or edema in tumors. However, spectroscopy data is typically very high dimensional (on the order of thousands of dimensions), which causes difficulties in visualization and classification. In this work we apply different manifold learning techniques to reduce the dimensionality of the input data and obtain clustering results. Spectroscopic data of 48 patients with suspicious and actually malignant lesions was analyzed using ISOMAP, Laplacian Eigenmaps, and Diffusion Maps with varying parameters, and the results were compared to those obtained using PCA. Using optimal parameters, both ISOMAP and Laplacian Eigenmaps could cluster the data into suspicious and malignant groups with 96% accuracy, compared to the diagnosis of the treating physicians.
Kwok, Ka-Wai; Tsoi, Kuen Hung; Vitiello, Valentina; Clark, James; Chow, Gary C. T.; Luk, Wayne; Yang, Guang-Zhong
2014-01-01
This paper presents a real-time control framework for a snake robot with hyper-kinematic redundancy under dynamic active constraints for minimally invasive surgery. A proximity query (PQ) formulation is proposed to compute the deviation of the robot motion from predefined anatomical constraints. The proposed method is generic and can be applied to any snake robot represented as a set of control vertices. The proposed PQ formulation is implemented on a graphics processing unit, allowing for fast updates at over 1 kHz. We also demonstrate that the robot joint space can be characterized in a lower-dimensional space for smooth articulation. A novel motion parameterization scheme in polar coordinates is proposed to describe the transition of motion, thus allowing for direct manual control of the robot using standard interface devices with limited degrees of freedom. Under the proposed framework, the correct alignment between the visual and motor axes is ensured, and haptic guidance is provided to prevent excessive force being applied to the tissue by the robot body. A resistance force is further incorporated to enhance smooth pursuit movement matched to the dynamic response and actuation limit of the robot. To demonstrate the practical value of the proposed platform with enhanced ergonomic control, a detailed quantitative performance evaluation was conducted on a group of subjects performing simulated intraluminal and intracavity endoscopic tasks. PMID:24741371
Aviles, Angelica I.; Alsaleh, Samar; Sobrevilla, Pilar; Casals, Alicia
2016-03-01
The robotic-assisted surgery approach overcomes the limitations of traditional laparoscopic and open surgeries. However, one of its major limitations is the lack of force feedback. Since there is no direct interaction between the surgeon and the tissue, there is no way of knowing how much force the surgeon is applying, which can result in irreversible injuries. The use of force sensors is not practical since they impose various constraints. Thus, we make use of a neuro-visual approach to estimate the applied forces, in which the 3D shape recovery together with the geometry of motion are used as input to a deep network based on an LSTM-RNN architecture. When deep networks are used in real time, pre-processing of the data is a key factor in reducing complexity and improving network performance. A common pre-processing step is dimensionality reduction, which attempts to eliminate redundant and insignificant information by selecting a subset of relevant features to use in model construction. In this work, we show the effects of dimensionality reduction in a real-time application: estimating the applied force in robotic-assisted surgeries. According to the results, we demonstrate positive effects of dimensionality reduction on deep networks, including faster training, improved network performance, and overfitting prevention. We also show a significant accuracy improvement, ranging from about 33% to 86%, over existing approaches to force estimation.
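The pre-processing step can be sketched generically: project the inputs onto their leading principal components before training the model. A logistic classifier on a stock dataset stands in for the paper's LSTM-RNN and surgical features; everything below is an illustrative assumption.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Dimensionality reduction as pre-processing: 64 features -> 16 components,
# fitted on the training split only to avoid information leakage.
pca = PCA(n_components=16).fit(Xtr)
clf = LogisticRegression(max_iter=2000).fit(pca.transform(Xtr), ytr)
acc = clf.score(pca.transform(Xte), yte)
```

The reduced model trains on a quarter of the original input width, which is the "faster training, less overfitting" effect the abstract reports for deep networks.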
Systems and methods to reduce reductant consumption in exhaust aftertreatment systems
Gupta, Aniket; Cunningham, Michael J.
2017-02-14
Systems, apparatus and methods are provided for reducing reductant consumption in an exhaust aftertreatment system that includes a first SCR device and a downstream second SCR device, a first reductant injector upstream of the first SCR device, and a second reductant injector between the first and second SCR devices. NOx conversion occurs with reductant injection by the first reductant injector to the first SCR device in a first temperature range and with reductant injection by the second reductant injector to the second SCR device when the temperature of the first SCR device is above a reductant oxidation conversion threshold.
Modern methods of analysis for three-dimensional orientational data
Davis, Joshua R.; Titus, Sarah J.
2017-03-01
Structural geology studies commonly include data about orientations of objects in space. By "orientation" we mean not just a single direction, such as a foliation pole or the long axis of an ellipsoid, but a complete three-dimensional orientation of a body such as a foliation-lineation pair, a fold, an ellipsoid, etc. Over the past four decades, researchers in various fields have developed theory and algorithms for dealing with such data. In this paper, we explain how to apply orientation statistics to common geologic data types. We review plotting systems, measures of location and dispersion, inference (confidence/credible regions and hypothesis tests) for population means, and regression. We pay special attention to methods that work for small sample sizes and widely dispersed data. Our original contributions include a concept of Kamb contouring for orientations, a technique for handling anisotropy in confidence/credible regions, and large-scale numerical experiments on the performance of various inference methods. We conclude with a detailed study of foliation-lineations from the western Idaho shear zone, using statistical results to argue that the data are not consistent with a published model for them.
A three-dimensional measurement method for medical electric endoscope
Zhou, Tingting; Tao, Pei; Yuan, Bo; Wang, Liqiang
2017-01-01
A method for three-dimensional (3D) measurement based on structured light is proposed for the medical electric endoscope in the present study. The structured light of black and white strips is generated by point sources illuminating a grating mask plate. Four point sources are aligned linearly at a fixed spacing and are lighted sequentially. Four images of fringes, modulated by the height of the object and carrying different phase shifts, can then be obtained. The algorithm proposed by Wang Z is employed to extract the accurate phase shift from the fringe images, since the phase shift cannot be set exactly to π/2 by the hardware. An experimental prototype endoscope was built according to the proposed method. A high-definition CMOS camera module developed in-house was used to acquire the endoscopic images, and the structured light was generated by four fiber LEDs and a transmission grating with a pitch of 0.1 mm. A C# program was designed to light up the LEDs in turn, acquire the phase-shifted images, and calculate the 3D information. The experimental results indicate that the depth measurement precision at a working distance of 40 mm is better than 0.5 mm and that the 3D depth calculation takes less than 0.5 s.
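The classical four-step variant with exact π/2 shifts illustrates the phase recovery (the paper's algorithm instead estimates the actual, imperfect hardware shifts first). The fringe pattern below is synthetic.

```python
import numpy as np

x = np.linspace(0, 4 * np.pi, 500)
phase = 0.8 * np.sin(x)                  # "height-modulated" phase term
carrier = 2 * np.pi * 3 * x / x[-1]      # background fringe carrier

# Four frames I_k = A + B*cos(phi + k*pi/2), k = 0..3.
I1, I2, I3, I4 = [1.0 + 0.5 * np.cos(carrier + phase + k * np.pi / 2)
                  for k in range(4)]

# Four-step formula: I1 - I3 = 2B*cos(phi), I4 - I2 = 2B*sin(phi).
wrapped = np.arctan2(I4 - I2, I1 - I3)   # recovered (wrapped) phase

expected = np.angle(np.exp(1j * (carrier + phase)))
error = np.abs(np.angle(np.exp(1j * (wrapped - expected)))).max()
```

The result is the wrapped phase; turning it into depth additionally requires phase unwrapping and the triangulation geometry of the endoscope tip, which are outside this sketch.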
Generalized non-separable two-dimensional Dammann encoding method
Yu, Junjie; Zhou, Changhe; Zhu, Linwei; Lu, Yancong; Wu, Jun; Jia, Wei
2017-01-01
We generalize for the first time, to the best of our knowledge, the Dammann encoding method to non-separable two-dimensional (2D) structures for designing various pure-phase Dammann encoding gratings (DEGs). As examples, three types of non-separable 2D DEGs, including non-separable binary Dammann vortex gratings, non-separable binary distorted Dammann gratings, and non-separable continuous-phase cubic gratings, are designed theoretically and demonstrated experimentally. Correspondingly, it is shown that 2D square arrays of optical vortices with topological charges proportional to the diffraction orders, focus spots shifting along both the transversal and axial directions with equal spacings, and Airy-like beams with controllable orientation for each beam are generated, in symmetric or asymmetric form, by these three DEGs, respectively. It is also shown that a more complex-shaped array of modulated beams can be achieved by this non-separable 2D Dammann encoding method, which would be a significant challenge for conventional separable 2D Dammann encoding gratings. Furthermore, the diffraction efficiency of the gratings is improved by around 10% when the non-separable structure is applied, compared with their conventional separable counterparts. Such an improvement in efficiency should be highly significant for some specific applications.
Efficient computation method for two-dimensional nonlinear waves
[No author listed]
2001-01-01
The theory and simulation of fully nonlinear waves in a truncated two-dimensional wave tank in the time domain are presented. A piston-type wave-maker is used to generate gravity waves in the tank at finite water depth. A damping zone is added in front of the wave-maker, which makes it a kind of absorbing wave-maker and ensures the prescribed Neumann condition. The efficiency of the numerical tank is further enhanced by the installation of a sponge-layer beach (SLB) at the downstream end of the tank to absorb the longer weak waves that leak through the front of the wave train. Assuming potential flow, the space-periodic irrotational surface waves can be represented by mixed Eulerian-Lagrangian particles. Solving the integral equation at each time step for the new normal velocities, the instantaneous free surface is integrated in time by the fourth-order Runge-Kutta method. The double-node technique is used to deal with the geometric discontinuity at the wave-body intersections. Several precise smoothing methods have been introduced to treat surface points with high curvature. No saw-tooth-like instability is observed during the entire simulation. The advantages of the proposed wave tank have been verified by comparison with the linear theoretical solution and other nonlinear results; excellent agreement has been obtained over the whole range of frequencies of interest.
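The fourth-order Runge-Kutta update used to march the free surface can be sketched generically; the boundary-integral solve that supplies the surface velocities is replaced here by a known right-hand side y' = y with exact solution e^t.

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# March y' = y from y(0) = 1 over [0, 1]; exact answer is e.
y, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: y, t, y, dt)
    t += dt
error = abs(y - np.exp(1.0))
```

In the wave-tank setting the state y would be the vector of surface particle positions and potentials, and f would involve solving the boundary integral equation at each stage.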
A Three Dimensional Simulation Method of the Gantry Crane
Jingsong LI
2013-04-01
Until now, many companies have developed port machinery remote monitoring systems. However, these monitoring systems usually display the operating status of the port machinery through schematic diagrams, legends, and data. This presentation of information cannot adequately describe the status of a large number of port machines. In order to solve this problem, a three-dimensional simulation method for the gantry crane based on WPF is proposed. This paper studies WPF technology and 3D modeling techniques and, on this basis, proposes a WPF-based 3D simulation method for the gantry crane, establishing a new-generation monitoring system built on a 3D, immersive, and interactive real-time simulation environment. This system can simulate the real-time 3D virtual scene of the gantry crane and display, in real time, the running posture and operating environment of the port machinery. Experiments show that CPU and memory usage remain low while the system is running.
Dimension Reduction and Discretization in Stochastic Problems by Regression Method
Ditlevsen, Ove Dalager
1996-01-01
The chapter mainly deals with dimension reduction and field discretizations based directly on the concept of linear regression. Several examples of interesting applications in stochastic mechanics are also given. Keywords: Random field discretization, Linear regression, Stochastic interpolation, Slepian models, Stochastic finite elements.
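The regression idea can be sketched as Gaussian conditional-mean interpolation of a random field from a few retained nodal values; the covariance model, correlation length, and node layout below are illustrative assumptions, not the chapter's examples.

```python
import numpy as np

def cov(s, t, ell=0.5):
    """Exponential covariance of a zero-mean stationary field."""
    return np.exp(-np.abs(np.subtract.outer(s, t)) / ell)

nodes = np.array([0.0, 0.5, 1.0])      # retained "dimensions" of the field
x_nodes = np.array([1.0, -0.5, 0.3])   # observed nodal values
grid = np.linspace(0.0, 1.0, 101)

# Linear regression (conditional mean of a Gaussian field):
#   E[X(t) | X(nodes)] = C_gn C_nn^{-1} x_nodes
weights = cov(grid, nodes) @ np.linalg.inv(cov(nodes, nodes))
field = weights @ x_nodes
```

The interpolated field reproduces the nodal values exactly, which is the defining property of regression-based (stochastic-interpolation) discretizations: the field is reduced to the finite vector of nodal random variables.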
S.S. Aleshin
2017-01-01
At the three-loop level we analyze how the NSVZ relation appears for N=1 SQED regularized by dimensional reduction. This is done by a method analogous to the one which was earlier used for the theories regularized by higher derivatives. Within the dimensional technique, the loop integrals cannot be written as integrals of double total derivatives. However, similar structures can be written in the considered approximation and are taken as a starting point. Then we demonstrate that, unlike the higher derivative regularization, the NSVZ relation is not valid for the renormalization group functions defined in terms of the bare coupling constant. However, for the renormalization group functions defined in terms of the renormalized coupling constant, it is possible to impose boundary conditions on the renormalization constants giving the NSVZ scheme in the three-loop order. They are similar to the all-loop ones defining the NSVZ scheme obtained with the higher derivative regularization, but are more complicated. The NSVZ schemes constructed with the dimensional reduction and with the higher derivative regularization are related by a finite renormalization in the considered approximation.
Aleshin, S. S.; Goriachuk, I. O.; Kataev, A. L.; Stepanyantz, K. V.
2017-01-01
At the three-loop level we analyze how the NSVZ relation appears for N = 1 SQED regularized by dimensional reduction. This is done by a method analogous to the one which was earlier used for the theories regularized by higher derivatives. Within the dimensional technique, the loop integrals cannot be written as integrals of double total derivatives. However, similar structures can be written in the considered approximation and are taken as a starting point. Then we demonstrate that, unlike the higher derivative regularization, the NSVZ relation is not valid for the renormalization group functions defined in terms of the bare coupling constant. However, for the renormalization group functions defined in terms of the renormalized coupling constant, it is possible to impose boundary conditions on the renormalization constants giving the NSVZ scheme in the three-loop order. They are similar to the all-loop ones defining the NSVZ scheme obtained with the higher derivative regularization, but are more complicated. The NSVZ schemes constructed with the dimensional reduction and with the higher derivative regularization are related by a finite renormalization in the considered approximation.
Aleshin, S S; Kataev, A L; Stepanyantz, K V
2016-01-01
At the three-loop level we analyze how the NSVZ relation appears for ${\cal N}=1$ SQED regularized by dimensional reduction. This is done by a method analogous to the one which was earlier used for the theories regularized by higher derivatives. Within the dimensional technique, the loop integrals cannot be written as integrals of double total derivatives. However, similar structures can be written in the considered approximation and are taken as a starting point. Then we demonstrate that, unlike the higher derivative regularization, the NSVZ relation is not valid for the renormalization group functions defined in terms of the bare coupling constant. However, for the renormalization group functions defined in terms of the renormalized coupling constant, it is possible to impose boundary conditions on the renormalization constants giving the NSVZ scheme in the three-loop order. They are similar to the all-loop ones defining the NSVZ scheme obtained with the higher derivative regularization, but are more...
Synthetic Spectrum Methods for Three-Dimensional Supernova Models
Thomas, R C
2003-01-01
Current observations stimulate the production of fully three-dimensional explosion models, which in turn motivates three-dimensional spectrum synthesis for supernova atmospheres. We briefly discuss techniques adapted to address the latter problem, and consider some fundamentals of line formation in supernovae without recourse to spherical symmetry. Direct and detailed extensions of the technique are discussed, and future work is outlined.
The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification
Jason L. Wright; Milos Manic
2010-05-01
This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
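The comparison protocol can be sketched with scikit-learn, with digit images standing in for object-code features: score a simple classifier as the number of retained dimensions grows, once for PCA and once for variance-sorted feature selection. The dataset, classifier, and dimension grid are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

def accuracy(tr, te):
    """Score a simple classifier on reduced train/test features."""
    return GaussianNB().fit(tr, ytr).score(te, yte)

results = {}
for k in (4, 8, 16, 32):
    pca = PCA(n_components=k).fit(Xtr)
    top = np.argsort(Xtr.var(axis=0))[::-1][:k]   # highest-variance features
    results[k] = (accuracy(pca.transform(Xtr), pca.transform(Xte)),
                  accuracy(Xtr[:, top], Xte[:, top]))
```

Plotting `results` against k reproduces the kind of accuracy-versus-dimension curve the paper analyzes for its three reduction techniques.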
Method and system for determining a volume of an object from two-dimensional images
Abercrombie, Robert K [Knoxville, TN]; Schlicher, Bob G [Portsmouth, NH]
2010-08-10
The invention provides a method and a computer program stored in a tangible medium for automatically determining the volume of three-dimensional objects represented in two-dimensional images: by acquiring at least two two-dimensional digitized images, by analyzing the two-dimensional images to identify reference points and geometric patterns, by determining distances between the reference points and the component objects utilizing reference data provided for the three-dimensional object, and by calculating a volume for the three-dimensional object.
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
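One of the classical devices mentioned above can be sketched on a scalar toy problem, antithetic variates for estimating E[f(U)] with f(u) = e^u and U ~ Uniform(0,1), standing in for the far more expensive corrector problems.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
u = rng.uniform(size=n)

# Crude Monte Carlo samples of f(U) = exp(U); true mean is e - 1.
plain = np.exp(u)

# Antithetic variates: pair each U with 1 - U and average; for a
# monotone f the pair is negatively correlated, shrinking the variance.
antithetic = 0.5 * (np.exp(u) + np.exp(1.0 - u))

est_plain, est_anti = plain.mean(), antithetic.mean()
var_plain, var_anti = plain.var(), antithetic.var()
```

Both estimators are unbiased, but the per-sample variance of the antithetic estimator is far smaller, so far fewer "corrector solves" are needed for the same accuracy, which is exactly the trade the homogenization setting is after.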
Comparative study of synthesis and reduction methods for graphene oxide
Alazmi, Amira
2016-05-14
Graphene oxide (GO) and reduced graphene oxide (rGO) have attracted much interest as promising active materials for a variety of applications, such as electrodes for supercapacitors. Yet, partly owing to the absence of comparative studies of synthesis methodologies, a lack of understanding persists on how best to tailor these materials. In this work, the effect of using different graphene oxidation-reduction strategies on the structure and chemistry of rGOs is systematically discussed. Two of the most popular oxidation routes in the literature were used to obtain GO. Subsequently, two sets of rGO powders were synthesised employing three different reduction routes, totalling six separate products. It is shown that the extent of structural rearrangement in rGOs depends not just on the reduction step but also on the approach followed for the initial graphite oxidation.
A superior method for the reduction of secondary phosphine oxides.
Busacca, Carl A; Lorenz, Jon C; Grinberg, Nelu; Haddad, Nizar; Hrapchak, Matt; Latli, Bachir; Lee, Heewon; Sabila, Paul; Saha, Anjan; Sarvestani, Max; Shen, Sherry; Varsolona, Richard; Wei, Xudong; Senanayake, Chris H
2005-09-15
Diisobutylaluminum hydride (DIBAL-H) and triisobutylaluminum have been found to be outstanding reductants for secondary phosphine oxides (SPOs). All classes of SPOs can be readily reduced, including diaryl, arylalkyl, and dialkyl members. Many SPOs can now be reduced at cryogenic temperatures, and conditions for the preservation of reducible functional groups have been found. Even the most electron-rich and sterically hindered phosphine oxides can be reduced in a few hours at 50-70 degrees C. This new reduction has distinct advantages over existing technologies.
Bedani, F.; Schoenmakers, P.J.; Janssen, H.-G.
2012-01-01
On-line comprehensive two-dimensional liquid chromatography techniques promise to resolve samples that current one-dimensional liquid chromatography methods cannot adequately deal with. To make full use of the potential of two-dimensional liquid chromatography, optimization is required. Optimization
Glazoff, Michael V.; Gering, Kevin L.; Garnier, John E.; Rashkeev, Sergey N.; Pyt'ev, Yuri Petrovich
2016-05-17
Embodiments discussed herein in the form of methods, systems, and computer-readable media deal with the application of advanced "projectional" morphological algorithms for solving a broad range of problems. In a method of performing projectional morphological analysis, an N-dimensional input signal is supplied. At least one N-dimensional form indicative of at least one feature in the N-dimensional input signal is identified. The N-dimensional input signal is filtered relative to the at least one N-dimensional form and an N-dimensional output signal is generated indicating results of the filtering at least as differences in the N-dimensional input signal relative to the at least one N-dimensional form.
Eckardt, Henrik; Lind, Marianne
2015-01-01
BACKGROUND: Operative treatment of displaced calcaneal fractures should restore joint congruence, but conventional fluoroscopy is unable to fully visualize the subtalar joint. We questioned whether intraoperative 3-dimensional (3D) imaging would aid in the reduction of calcaneal fractures, resulting in improved articular congruence and implant positioning. METHOD: Sixty-two displaced calcaneal fractures were operated on using standard fluoroscopic views. When the surgeon had achieved a satisfactory reduction, an intraoperative 3D scan was conducted, and malreductions or implant imperfections were... Articular displacement was 0 mm in 69% of the Sanders type 2 fractures and 57% of the Sanders type 3 fractures. Operation duration averaged 118 minutes, and there were no reoperations due to misplaced screws or plates. The average absorbed radiation dose per patient was 288 mGy·cm. CONCLUSION: ...
Component reduction for regularity criteria of the three-dimensional magnetohydrodynamics systems
Kazuo Yamazaki
2014-04-01
We study the regularity of the three-dimensional magnetohydrodynamics system, and obtain criteria in terms of one velocity field component and two magnetic field components. In contrast to the previous results such as [22], we have eliminated the condition on the third component of the magnetic field completely while preserving the same upper bound on the integrability condition.
Bais, F.A.; Barnes, K.J.; Forgacs, P.; Zoupanos, G.
1986-01-27
By dimensional reduction of pure gauge theories (with gauge group G) over a compact coset space S/R, one obtains four-dimensional theories where scalar fields and a symmetry breaking potential appear naturally. We present a complete analysis (including the fermion sector) of all unified models with simple G which are spontaneously broken to SU(3)×U(1), and which can be obtained by this technique with the added restriction that S is contained in G. Such models only exist when G is an exceptional group; however, the surviving fermions do not have the correct quantum numbers. The paper also provides an exhaustive list of SU(3) embeddings in the exceptional groups.
Lattice Methods for Pricing American Strangles with Two-Dimensional Stochastic Volatility Models
Xuemei Gao
2014-01-01
The aim of this paper is to extend the lattice method proposed by Ritchken and Trevor (1999) for pricing American options with one-dimensional stochastic volatility models to the two-dimensional case with strangle payoff. The proposed method is compared with the least-squares Monte Carlo method via numerical examples.
Method of glitch reduction in DAC with weight redundancy
Azarov, Olexiy D.; Murashchenko, Olexander G.; Chernyak, Olexander I.; Smolarz, Andrzej; Kashaganova, Gulzhan
2015-12-01
The appearance of glitches in digital-to-analog converters (DACs) significantly limits conversion accuracy and speed, which is critical for DACs and restricts their usage. This paper investigates the possibility of using a redundant positional number system to reduce glitches in DACs. The use of number systems with fractional bit weights as well as with integer bit weights is described. An algorithm for glitch reduction in a DAC generating a continuous analogue signal is then proposed. The efficiency of applying weight redundancy is estimated, and the most efficient number-system parameters are presented. The paper describes a block diagram of a low-glitch DAC based on Fibonacci codes. The simulation results prove the feasibility of applying weight redundancy and show a significant reduction of glitches in the DAC in comparison with the classical binary system.
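The DAC itself is hardware, but the Fibonacci-weighted code it relies on is easy to illustrate in software. A minimal sketch (my own illustration, not the authors' implementation) of the greedy Zeckendorf decomposition that underlies Fibonacci bit weights:

```python
def fibonacci_weights(limit):
    # Bit weights of a Fibonacci code: 1, 2, 3, 5, 8, ... up to `limit`.
    weights = [1, 2]
    while weights[-1] + weights[-2] <= limit:
        weights.append(weights[-1] + weights[-2])
    return weights

def zeckendorf(n):
    # Greedy decomposition of n into non-consecutive Fibonacci weights
    # (Zeckendorf's theorem guarantees this exists and is unique).
    # The redundancy of such weights is what a low-glitch DAC exploits:
    # a value can be re-encoded so that fewer bits switch at once.
    remaining, used = n, []
    for w in reversed(fibonacci_weights(n)):
        if w <= remaining:
            used.append(w)
            remaining -= w
    return used

zeckendorf(100)  # → [89, 8, 3]
```

The comment about glitch reduction paraphrases the abstract's motivation; the mapping from code redundancy to switching behavior in the actual converter is hardware-specific and not reproduced here.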
Method of selective reduction of polyhalosilanes with alkyltin hydrides
Sharp, Kenneth G.; D'Errico, John J.
1989-01-01
The invention relates to the selective and stepwise reduction of polyhalosilanes by reacting at room temperature or below with alkyltin hydrides without the use of free radical intermediates. Alkyltin hydrides selectively and stepwise reduce the Si--Br, Si--Cl, or Si--I bonds while leaving intact any Si--F bonds. When two or more different halogens are present on the polyhalosilane, the halogen with the highest atomic weight is preferentially reduced.
Method of selective reduction of halodisilanes with alkyltin hydrides
D'Errico, John J.; Sharp, Kenneth G.
1989-01-01
The invention relates to the selective and sequential reduction of halodisilanes by reacting these compounds at room temperature or below with trialkyltin hydrides or dialkyltin dihydrides without the use of free radical intermediates. The alkyltin hydrides selectively and sequentially reduce the Si-Cl, Si-Br or Si-I bonds while leaving intact the Si-Si and Si-F bonds present.
On some method of the space elevator maximum stress reduction
Ambartsumian S. A.
2007-03-01
The possibility of realizing and exploiting the space elevator project is connected with a number of complicated problems. One of them is the large elastic stresses arising in the body of the space elevator ribbon, which are considerably bigger than the strength limit of modern materials. This note is devoted to solving the problem of maximum stress reduction in the ribbon by modifying the ribbon's cross-section area.
Monodispersive CoPt Nanoparticles Synthesized Using Chemical Reduction Method
SHEN Cheng-Min; HUI Chao; YANG Tian-Zhong; XIAO Cong-Wen; CHEN Shu-Tang; DING Hao; GAO Hong-Jun
2008-01-01
Monodispersive CoPt nanoparticles with sizes of about 2.2 nm are synthesized by superhydride reduction of CoCl2 and PtCl2 in diphenyl ether. The as-prepared nanoparticles show a chemically disordered A1 structure and are superparamagnetic. Thermal annealing transforms the A1 structure into the chemically ordered L1₀ structure, and the particles are then ferromagnetic at room temperature.
Three-dimensional nanoelectronic device simulation using spectral element methods
Cheng, Candong
The purpose of this thesis is to develop an efficient 3-Dimensional (3-D) nanoelectronic device simulator. Specifically, the self-consistent Schrodinger-Poisson model was implemented in this simulator to simulate band structures and quantum transport properties. Also, an efficient fast algorithm, spectral element method (SEM), was used in this simulator to achieve spectral accuracy where the error decreases exponentially with the increase of sampling densities and the basis order of the polynomial functions, thus significantly reducing the CPU time and memory usage. Moreover, within this simulator, a perfectly matched layer (PML) boundary condition method was used for the Schrodinger solver, which significantly simplifies the problem and reduces the computational time. Furthermore, the effective mass in semiconductor devices was treated as a full anisotropic mass tensor, which provides an excellent tool to study the anisotropy characteristics along arbitrary orientation of the device. Nanoelectronic devices usually involve the simulations of energy band and quantum transport properties. One of the models to perform these simulations is by solving a self-consistent Schrodinger-Poisson system. Two efficient fast algorithms, spectral grid method (SGM) and SEM, are investigated and implemented in this thesis. The spectral accuracy is achieved in both algorithms, whose errors decrease exponentially with the increase of the sampling density and basis orders. The spectral grid method is a pseudospectral method to achieve a high-accuracy result by choosing special nonuniform grid set and high-order Lagrange interpolants for a partial differential equation. Spectral element method is a high-order finite element method which uses the Gauss-Lobatto-Legendre (GLL) polynomials to represent the field variables in the Schrodinger-Poisson system and, therefore, to achieve spectral accuracy. We have implemented the SGM in the Schrodinger equation to solve the energy band structures
Prell, Daniel; Kyriakou, Yiannis; Beister, Marcel; Kalender, Willi A
2009-11-07
Metallic implants generate streak-like artifacts in flat-detector computed tomography (FD-CT) reconstructed volumetric images. This study presents a novel method for reducing these disturbing artifacts by inserting discarded information into the original rawdata using a three-step correction procedure and working directly with each detector element. Computation times are minimized by completely implementing the correction process on graphics processing units (GPUs). First, the original volume is corrected using a three-dimensional interpolation scheme in the rawdata domain, followed by a second reconstruction. This metal artifact-reduced volume is then segmented into three materials, i.e. air, soft-tissue and bone, using a threshold-based algorithm. Subsequently, a forward projection of the obtained tissue-class model substitutes the missing or corrupted attenuation values directly for each flat detector element that contains attenuation values corresponding to metal parts, followed by a final reconstruction. Experiments using tissue-equivalent phantoms showed a significant reduction of metal artifacts (deviations of CT values after correction compared to measurements without metallic inserts reduced typically to below 20 HU, differences in image noise to below 5 HU) caused by the implants and no significant resolution losses even in areas close to the inserts. To cover a variety of different cases, cadaver measurements and clinical images in the knee, head and spine region were used to investigate the effectiveness and applicability of our method. A comparison to a three-dimensional interpolation correction showed that the new approach outperformed interpolation schemes. Correction times are minimized, and initial and corrected images are made available at almost the same time (12.7 s for the initial reconstruction, 46.2 s for the final corrected image compared to 114.1 s and 355.1 s on central processing units (CPUs)).
A New Method for Measurement and Reduction of Software Complexity
SHI Yindun; XU Shiyi
2007-01-01
This paper develops an improved structural software complexity metric, named information flow complexity, which is closely related to the reliability of software. Together with three other software complexity metrics, the total software complexity is measured, and some rules to reduce the complexity are presented. To illustrate and explain the process of measuring and reducing software complexity, several examples and experiments are given. It is proposed that software complexity metrics can be measured early in software development and can provide substantial information about software systems, whose reliability can be modeled and used in the determination of initial parameter estimates.
Polyuga, Rostyslav V.; Schaft, Arjan J. van der
2012-01-01
The geometric formulation of general port-Hamiltonian systems is used in order to obtain two structure preserving reduction methods. The main idea is to construct a reduced-order Dirac structure corresponding to zero power flow in some of the energy-storage ports. This can be performed in two canoni
Yehorchenko, Irina
2010-01-01
We study possible Lie and non-classical reductions of multidimensional wave equations and special classes of possible reduced equations, namely their symmetries and equivalence classes. Such an investigation allows us to find many new conditional and hidden symmetries of the original equations.
Rumpler, Romain; Deü, Jean-François; Göransson, Peter
2012-11-01
Structural-acoustic finite element models including three-dimensional (3D) modeling of porous media are generally computationally costly. While being the most commonly used predictive tool in the context of noise reduction applications, efficient solution strategies are required. In this work, an original modal reduction technique, involving real-valued modes computed from a classical eigenvalue solver is proposed to reduce the size of the problem associated with the porous media. In the form presented in this contribution, the method is suited for homogeneous porous layers. It is validated on a 1D poro-acoustic academic problem and tested for its performance on a 3D application, using a subdomain decomposition strategy. The performance of the proposed method is estimated in terms of degrees of freedom downsizing, computational time enhancement, as well as matrix sparsity of the reduced system.
Bowes, M. A.
1978-01-01
Analytical methods were developed and/or adopted for calculating helicopter component noise, and these methods were incorporated into a unified total vehicle noise calculation model. Analytical methods were also developed for calculating the effects of noise reduction methodology on helicopter design, performance, and cost. These methods were used to calculate changes in noise, design, performance, and cost due to the incorporation of engine and main rotor noise reduction methods. All noise reduction techniques were evaluated in the context of an established mission performance criterion which included consideration of hovering ceiling, forward flight range/speed/payload, and rotor stall margin. The results indicate that small, but meaningful, reductions in helicopter noise can be obtained by treating the turbine engine exhaust duct. Furthermore, these reductions do not result in excessive life cycle cost penalties. Currently available main rotor noise reduction methodology, however, is shown to be inadequate and excessively costly.
Holttinen, Hannele; Kiviluoma, Juha; McCann, John; Clancy, Matthew; Millgan, Michael; Pineda, Ivan; Eriksen, Peter Borre; Orths, Antje; Wolfgang, Ove
2015-10-05
This paper presents ways of estimating the CO2 reductions due to wind power using different methodologies. Estimates based on historical data have more methodological pitfalls than estimates based on dispatch simulations. Taking into account the exchange of electricity with neighboring regions is challenging for all methods. Results for CO2 emission reductions are shown for several countries. Wind power reduces emissions by about 0.3-0.4 tCO2/MWh when replacing mainly gas, and up to 0.7 tCO2/MWh when replacing mainly coal-powered generation. The paper focuses on CO2 emissions from the power system operation phase, but long-term impacts are briefly discussed.
Algorithm for statistical noise reduction in three-dimensional ion implant simulations
Hernandez-Mangas, J.M. E-mail: jesman@ele.uva.es; Arias, J.; Jaraiz, M.; Bailon, L.; Barbolla, J
2001-05-01
As integrated circuit devices scale into the deep sub-micron regime, ion implantation will continue to be the primary means of introducing dopant atoms into silicon. Different types of impurity profiles such as ultra-shallow profiles and retrograde profiles are necessary for deep submicron devices in order to realize the desired device performance. A new algorithm to reduce the statistical noise in three-dimensional ion implant simulations both in the lateral and shallow/deep regions of the profile is presented. The computational effort in BCA Monte Carlo ion implant simulation is also reduced.
Dimensional reduction of the Standard Model coupled to a new singlet scalar field
Brauner, Tomáš; Tranberg, Anders; Vuorinen, Aleksi; Weir, David J
2016-01-01
We derive an effective dimensionally reduced theory for the Standard Model augmented by a real singlet scalar. We treat the singlet as a superheavy field and integrate it out, leaving an effective theory involving only the Higgs and $\\mathrm{SU}(2)_\\mathrm{L} \\times \\mathrm{U}(1)_Y$ gauge fields, identical to the one studied previously for the Standard Model. This opens up the possibility of efficiently computing the order and strength of the electroweak phase transition, numerically and nonperturbatively, in this extension of the Standard Model. Understanding the phase diagram is crucial for models of electroweak baryogenesis and for studying the production of gravitational waves at thermal phase transitions.
Vázquez, Marco-Vinicio; Dagdug, Leonardo
2010-12-01
Computer simulations of the diffusion of a Brownian particle in a hemispherically shaped tube were carried out to assess the range of applicability of the reduction of three-dimensional diffusion to an effective one-dimensional description. Previously, Berezhkovskii et al. [21] found that the one-dimensional description centered on the Fick-Jacobs equation with a position-dependent diffusion coefficient D(x) (one due to R. Zwanzig [14], and another by Reguera-Rubí [15]) has a restricted range of applicability for a conical tube. Remarkably, our results show that applying Zwanzig's formula one can predict the variation of τ over the whole range of a/R in the n→w direction, while Reguera-Rubí's formula fits the simulation data in the w→n direction. This is an important result, since it is known that Reguera-Rubí's formula better predicts the mean first-passage time behavior regardless of direction in other geometries, and this is our principal result.
A novel TOA estimation method with effective NLOS error reduction
ZHANG Yi-heng; CUI Qi-mei; LI Yu-xiang; ZHANG Ping
2008-01-01
It is well known that non-line-of-sight (NLOS) error has been the major factor impeding the enhancement of accuracy for time of arrival (TOA) estimation and wireless positioning. This article proposes a novel method of TOA estimation that effectively reduces the NLOS error by 60%, compared with the traditional timing and synchronization method. By constructing orthogonal training sequences, this method converts the traditional TOA estimation to the detection of the first arrival path (FAP) in the NLOS multipath environment, and then estimates the TOA by the round-trip transmission (RTT) technology. Both theoretical analysis and numerical simulations prove that the method proposed in this article achieves better performance than the traditional methods.
Catalyst and method for reduction of nitrogen oxides
Ott, Kevin C.
2008-05-27
A Selective Catalytic Reduction (SCR) catalyst was prepared by slurry coating ZSM-5 zeolite onto a cordierite monolith, then subliming an iron salt onto the zeolite, calcining the monolith, and then dipping the monolith either into an aqueous solution of manganese nitrate and cerium nitrate and then calcining, or by similar treatment with separate solutions of manganese nitrate and cerium nitrate. The supported catalyst containing iron, manganese, and cerium showed 80 percent conversion at 113 degrees Celsius of a feed gas containing nitrogen oxides having 4 parts NO to one part NO₂, about one equivalent ammonia, and excess oxygen; conversion improved to 94 percent at 147 degrees Celsius. N₂O was not detected (detection limit: 0.6 percent N₂O).
Methods for communication-network reliability analysis - Probabilistic graph reduction
Shooman, Andrew M.; Kershenbaum, Aaron
The authors have designed and implemented a graph-reduction algorithm for computing the k-terminal reliability of an arbitrary network with possibly unreliable nodes. The two contributions of the present work are a version of the delta-y transformation for k-terminal reliability and an extension of Satyanarayana and Wood's polygon-to-chain transformations to handle graphs with imperfect vertices. The exact algorithm is at least as fast as that of Satyanarayana and Wood, and as the simple algorithm without delta-y and polygon-to-chain transformations, for every problem considered. The exact algorithm runs in linear time on series-parallel graphs and is faster than the above-stated algorithms for large problems on which they run in exponential time. The approximate algorithms reduce the computation time for the network reliability problem by two to three orders of magnitude for large problems, while providing reasonably accurate answers in most cases.
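The delta-y and polygon-to-chain transformations of the paper are involved, but the elementary series and parallel reductions they build on are simple to state. A hedged sketch, with made-up edge reliabilities, a tiny illustrative network, and perfect nodes assumed (the paper's contribution is precisely handling imperfect nodes, which this sketch does not):

```python
def series(p_a, p_b):
    # Two edges in series: the path works only if both edges work.
    return p_a * p_b

def parallel(p_a, p_b):
    # Two edges in parallel: the connection fails only if both fail.
    return 1.0 - (1.0 - p_a) * (1.0 - p_b)

# Source -> sink through two parallel branches, each branch consisting
# of two 0.9-reliable edges in series:
branch = series(0.9, 0.9)               # 0.81
reliability = parallel(branch, branch)  # 1 - 0.19**2 ≈ 0.9639
```

Repeatedly applying these two rules is what makes series-parallel graphs solvable in linear time, as the abstract notes; general graphs additionally require the delta-y and polygon-to-chain steps.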
Dmitri A. Viattchenin
2009-06-01
This paper describes a modification of a possibilistic clustering method based on the concept of allotment among fuzzy clusters. The basic ideas of the method are considered, and the concept of a principal allotment among fuzzy clusters is introduced. The paper describes the plan of the algorithm for detecting the principal allotment. An analysis of experimental results of the proposed algorithm's application to Tamura's portrait data is carried out, in comparison with the basic version of the algorithm and with the NERFCM algorithm. A methodology for applying the algorithm to the dimensionality reduction problem is outlined, and its application is illustrated on the example of Anderson's Iris data, in comparison with the result of principal component analysis. Preliminary conclusions are also formulated.
Comparisons of SCR and Active-set Methods for PAPR Reduction in OFDM Systems
Qihui Liang
2010-04-01
Signal-to-clipping-noise ratio (SCR) and active-set methods are two existing methods for peak-to-average power ratio (PAPR) reduction based on tone reservation. In this paper, the computational complexities of these two methods are analyzed, and simulations are carried out to compare their PAPR-reducing performance. The simulation results show that the active-set method requires less computational complexity than the SCR method while achieving similar PAPR reduction performance.
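Neither reduction method is described in enough detail here to reproduce, but the PAPR metric both methods target is straightforward to compute. A sketch under the assumption of a small, illustrative subcarrier count (the function name and example symbols are my own, not from the paper):

```python
import cmath
import math

def papr_db(symbols):
    # The OFDM time-domain signal is the IDFT of the subcarrier symbols;
    # PAPR is the peak instantaneous power over the mean power, in dB.
    n = len(symbols)
    time_domain = [
        sum(s * cmath.exp(2j * cmath.pi * k * t / n)
            for k, s in enumerate(symbols)) / n
        for t in range(n)
    ]
    powers = [abs(x) ** 2 for x in time_domain]
    return 10.0 * math.log10(max(powers) / (sum(powers) / n))

# Worst case: identical symbols on every carrier add coherently at t = 0.
papr_db([1, 1, 1, 1])  # → 10*log10(4) ≈ 6.02 dB
```

Tone reservation, the approach both compared methods share, would dedicate a few of these subcarriers to correction signals chosen to pull the peak down; that optimization step is what the SCR and active-set formulations differ on.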
Chu, Yi-Zen
2015-12-01
This work was mainly driven by the desire to explore to what extent embedding some given geometry in a higher-dimensional flat one is useful for understanding the causal structure of classical fields traveling in the former, in terms of that in the latter. We point out, in the four-dimensional (4D) spatially flat Friedmann-Lemaître-Robertson-Walker universe, that the causal structure of transverse-traceless (TT) gravitational waves can be elucidated by first reducing the problem to a two-dimensional (2D) Minkowski wave equation with a time-dependent potential, where the relevant Green's function is a pure tail: waves produced by a physical source propagate strictly within the null cone. By viewing this 2D world as embedded in a 4D one, the 2D Green's function can also be seen to be sourced by a cylindrically symmetric scalar field in three dimensions (3D). From both the 2D wave equation and the 3D scalar perspective, we recover the exact solution of the 4D graviton tail for the case where the scale factor written in conformal time is a power law. There are no TT gravitational-wave tails when the universe is radiation dominated because the background Ricci scalar is zero. In a matter-dominated one, we estimate the amplitude of the tail to be suppressed relative to its null counterpart by both the ratio of the duration of the (isolated) source to the age of the universe η_0 and the ratio of the observer-source spatial distance (at the observer's time) to the same η_0. In a universe driven primarily by a cosmological constant, the tail contribution to the background geometry a[η]^2 η_{μν} after the source has ceased is the conformal factor a^2 times a spacetime-constant symmetric matrix proportional to the spacetime volume integral of the TT part of the source's stress-energy-momentum tensor. In other words, massless spin-2 gravitational waves exhibit a tail-induced memory effect in 4D de Sitter spacetime.
Reduction of scour around bridge piers using a modified method for vortex reduction
Entesar A.S. EL-Ghorab
2013-09-01
The current study presents a modified method to reduce the scour depth in front of bridge piers. The idea of this method is based on reducing the stagnation of the flow and vortex formation in front of the pier. The pressure difference around the pier is therefore used to drive the flow through an arrangement of openings in the front of the pier connected to openings along the pier's sides. A test program was planned using an experimental flume at the Hydraulics Research Institute (HRI), and 336 runs were conducted. Three different pier shapes (circular, square, and rectangular), provided with different opening arrangements and vertical spacings, were tested. This method showed that the scour depth is reduced by 45% and the volume of the scoured material is decreased by up to 64%. These results were obtained using an opening diameter of 20% of the pier width (w) and a vertical spacing equal to the pier width (w). A dimensionless regression equation was also developed based on the obtained results. These findings, when implemented in the field, can easily safeguard bridge piers and dramatically reduce maintenance efforts and costs, as well as improve the hydraulic performance of the water structure.
Dimensional reduction of a Lorentz and CPT-violating Maxwell-Chern-Simons model
Belich, H. Jr.; Helayel Neto, J.A. [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil). Coordenacao de Teoria de Campos e Particulas; Grupo de Fisica Teorica Jose Leite Lopes, Petropolis, RJ (Brazil); E-mails: belich@cbpf.br; helayel@cbpf.br; Ferreira, M.M. Jr. [Grupo de Fisica Teorica Jose Leite Lopes, Petropolis, RJ (Brazil); Maranhao Univ., Sao Luiz, MA (Brazil). Dept. de Fisica]. E-mail: manojr@cbpf.br; Orlando, M.T.D. [Grupo de Fisica Teorica Jose Leite Lopes, Petropolis, RJ (Brazil); Espirito Santo Univ., Vitoria, ES (Brazil). Dept. de Fisica e Quimica; E-mail: orlando@cce.ufes.br
2003-01-01
Taking as starting point a Lorentz and CPT non-invariant Chern-Simons-like model defined in 1+3 dimensions, we carry out its dimensional reduction to D = 1+2. One then obtains a new planar model, composed of the Maxwell-Chern-Simons (MCS) sector, a Klein-Gordon massless scalar field, and a coupling term that mixes the gauge field to the external vector ν^μ. In spite of breaking Lorentz invariance in the particle frame, this model may preserve the CPT symmetry for a single particular choice of ν^μ. Analyzing the dispersion relations, one verifies that the reduced model exhibits stability, but causality can be jeopardized by some modes. The unitarity of the gauge sector is assured without any restriction, while the scalar sector is unitary only in the space-like case.
Zacharuk, Matthias; Stamen, Dolaptchiev; Ulrich, Achatz; Ilya, Timofeyev
2016-04-01
Due to the finite spatial resolution of numerical atmospheric models, subgrid-scale (SGS) processes are excluded. An SGS parameterization of these excluded processes might improve the model on all scales. To parameterize the SGS processes we choose the MTV stochastic mode reduction (Majda, Timofeyev, Vanden-Eijnden 2001, A mathematical framework for stochastic climate models. Commun. Pure Appl. Math., 54:891-974). For this, the model is separated into fast and slow processes. Using the statistics of the fast processes, an SGS parameterization is found. To identify fast processes, the state vector of the model is separated into two state vectors. One vector is the average of the full model state vector in a coarse grid cell. The other describes the SGS processes, which are defined as the deviation of the full state vector from the coarse cell average. If the SGS vector decorrelates faster in time than the coarse grid vector, the interactions of SGS processes in the equation of the SGS processes are replaced by a local Ornstein-Uhlenbeck process. Afterwards the MTV SGS parameterization can be derived. This method was successfully applied to the Burgers equation (Dolaptchiev et al. 2013, Stochastic closure for local averages in the finite-difference discretization of the forced Burgers equation. Theor. Comp. Fluid Dyn., 27:297-317). In this study we consider a more atmosphere-like model and choose a model of the one-dimensional shallow water equations (SWe). It will be shown that the fine state vector decorrelates faster than the coarse state vector. Due to the non-polynomial form of the SWe in flux formulation, all 1/h terms (h = fluid depth) must be approximated, except for the interactions between the coarse state vector and itself. It will be shown that this approximation has only minor impact on the model results. In the following, the model with the local Ornstein-Uhlenbeck process approximation of SGS interactions is analyzed and compared to the
Nilpotent action on the KdV variables and 2-dimensional Drinfeld-Sokolov reduction
Enriquez, B
1994-01-01
We note that a version ``with spectral parameter'' of the Drinfeld-Sokolov reduction gives a natural mapping from the KdV phase space to the group of loops with values in $\widehat N_{+}/A$, where $\widehat N_{+}$ is the affine nilpotent subgroup and $A$ the principal commutative (or anisotropic Cartan) subgroup; this mapping is connected to the conserved densities of the hierarchy. We compute the Feigin-Frenkel action of $\widehat n_{+}$ (defined in terms of screening operators) on the conserved densities in the $sl_2$ case.
Solution of two-dimensional Fredholm integral equation via RBF-triangular method
Amir Fallahzadeh
2012-04-01
In this paper, a new method is introduced to solve a two-dimensional Fredholm integral equation. The method is based on approximation by Gaussian radial basis functions with triangular nodes and weights. A new quadrature, called the triangular method, is also introduced to approximate the two-dimensional integrals. The results of the example illustrate the accuracy of the proposed method.
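The collocation idea behind such methods can be illustrated in one dimension with a Nystrom-type scheme (the trapezoidal rule stands in for the paper's triangular quadrature, and the RBF basis is omitted; this is a sketch of the shared structure, not the authors' algorithm):

```python
import numpy as np

def solve_fredholm_nystrom(kernel, f, a, b, n, lam=1.0):
    """Nystrom collocation for u(x) = f(x) + lam * int_a^b K(x,t) u(t) dt.

    The integral is replaced by a quadrature rule, turning the integral
    equation into the linear system (I - lam*K*W) u = f at the nodes.
    """
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))   # trapezoidal weights
    w[0] *= 0.5
    w[-1] *= 0.5
    K = kernel(x[:, None], x[None, :])
    A = np.eye(n) - lam * K * w[None, :]
    return x, np.linalg.solve(A, f(x))

# Known solution u(x) = x for u(x) = f(x) + int_0^1 x*t*u(t) dt:
# int_0^1 x*t*t dt = x/3, so f(x) = x - x/3 = 2x/3.
x, u = solve_fredholm_nystrom(lambda s, t: s * t, lambda s: 2 * s / 3, 0.0, 1.0, 201)
print(np.max(np.abs(u - x)))  # small discretization error
```

Swapping the trapezoidal weights for a two-dimensional triangular rule and the nodal values for RBF coefficients recovers the flavor of the paper's approach.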
Six-dimensional Methods for Four-dimensional Conformal Field Theories II: Irreducible Fields
Weinberg, Steven
2012-01-01
This note supplements an earlier paper on conformal field theories. There it was shown how to construct tensor, spinor, and spinor-tensor primary fields in four dimensions from their counterparts in six dimensions, where conformal transformations act simply as SO(4,2) Lorentz transformations. Here we show how to constrain fields in six dimensions so that the corresponding primary fields in four dimensions transform according to irreducible representations of the four-dimensional Lorentz group, even when the irreducibility conditions on these representations involve the four-component Levi-Civita tensor $\epsilon_{\mu
Reduction Methods for Real-time Simulations in Hybrid Testing
Andersen, Sebastian
2016-01-01
Hybrid testing constitutes a cost-effective experimental full-scale testing method. The method was introduced in the 1960s by Japanese researchers as an alternative to conventional full-scale testing and small-scale material testing, such as shake-table tests. The principle of the method ... is to divide a structure into a physical substructure and a numerical substructure, and couple these in a test. If the test is conducted in real time it is referred to as real-time hybrid testing. The hybrid testing concept has developed significantly since its introduction in the 1960s, both with respect ... without introducing further unknowns into the system. The basis formulation is shown to exhibit high precision and to reduce the computational cost significantly. Furthermore, the basis formulation exhibits significantly higher stability than standard nonlinear algorithms. A real-time hybrid test ...
Methods of body mass reduction by combat sport athletes.
Brito, Ciro José; Roas A, Fernanda Castro Martins; Brito I, Surian Souza; Marins J, Carlos Bouzas; Córdova, Claudio; Franchini, Emerson
2012-04-01
The aim of this study was to investigate the methods adopted to reduce body mass (BM) in competitive athletes from the grappling (judo, jujitsu) and striking (karate and tae kwon do) combat sports in the state of Minas Gerais, Brazil. An exploratory methodology was employed through descriptive research, using a standardized questionnaire with objective questions self-administered to 580 athletes (25.0 ± 3.7 yr, 74.5 ± 9.7 kg, and 16.4% ± 5.1% body fat). Regardless of the sport, 60% of the athletes reported using a method of rapid weight loss (RWL) through increased energy expenditure. Strikers tend to begin reducing BM during adolescence. Furthermore, 50% of the sample used saunas and plastic clothing, and only 26.1% received advice from a nutritionist. The authors conclude that a high percentage of athletes uses RWL methods. In addition, a high percentage of athletes uses unapproved or prohibited methods such as diuretics, saunas, and plastic clothing. The age at which combat sport athletes reduce BM for the first time is also worrying, especially among strikers.
Coatings and methods for corrosion detection and/or reduction
Calle, Luz M. (Inventor); Li, Wenyan (Inventor)
2010-01-01
Coatings and methods are provided. An embodiment of the coating includes microcapsules that contain at least one of a corrosion inhibitor, a film-forming compound, and an indicator. The microcapsules are dispersed in a coating vehicle. A shell of each microcapsule breaks down in the presence of an alkaline condition, resulting from corrosion.
Method to monitor HC-SCR catalyst NOx reduction performance for lean exhaust applications
Viola, Michael B.; Schmieg, Steven J.; Sloane, Thompson M.; Hilden, David L.; Mulawa, Patricia A.; Lee, Jong H.; Cheng, Shi-Wai S.
2012-05-29
A method for initiating a regeneration mode in selective catalytic reduction device utilizing hydrocarbons as a reductant includes monitoring a temperature within the aftertreatment system, monitoring a fuel dosing rate to the selective catalytic reduction device, monitoring an initial conversion efficiency, selecting a determined equation to estimate changes in a conversion efficiency of the selective catalytic reduction device based upon the monitored temperature and the monitored fuel dosing rate, estimating changes in the conversion efficiency based upon the determined equation and the initial conversion efficiency, and initiating a regeneration mode for the selective catalytic reduction device based upon the estimated changes in conversion efficiency.
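The patent's claim chains several monitored quantities into a regeneration decision. A hedged sketch of that control flow (the decay equations, coefficients, and thresholds below are invented for illustration and are not from the patent):

```python
def should_regenerate(initial_eff, temp_c, dosing_rate, hours, min_eff=0.7):
    """Pick a decay-rate equation based on the monitored temperature and
    fuel dosing rate, project the conversion efficiency forward from its
    monitored initial value, and flag regeneration when the projection
    falls below a threshold. All numbers here are illustrative."""
    if temp_c < 250:
        # low temperature: dosing accelerates hydrocarbon fouling
        decay = 0.02 + 0.01 * dosing_rate
    else:
        # high temperature: milder degradation
        decay = 0.005 * dosing_rate
    projected_eff = initial_eff * (1.0 - decay) ** hours
    return projected_eff < min_eff, projected_eff

flag, eff = should_regenerate(initial_eff=0.9, temp_c=220, dosing_rate=1.0, hours=10)
print(flag)  # True: projected efficiency falls below the 0.7 threshold
```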
A Speckle Reduction Filter Using Wavelet-Based Methods for Medical Imaging Application
2001-10-25
Parallax scanning methods for stereoscopic three-dimensional imaging
Mayhew, Christopher A.; Mayhew, Craig M.
2012-03-01
Under certain circumstances, conventional stereoscopic imagery is subject to being misinterpreted. Stereo perception created from two static horizontally separated views can create a "cut out" 2D appearance for objects at various planes of depth. The subject volume looks three-dimensional, but the objects themselves appear flat. This is especially true if the images are captured using small disparities. One potential explanation for this effect is that, although three-dimensional perception comes primarily from binocular vision, a human's gaze (the direction and orientation of a person's eyes with respect to their environment) and head motion also contribute additional sub-process information. The absence of this information may be the reason that certain stereoscopic imagery appears "odd" and unrealistic. Another contributing factor may be the absence of vertical disparity information in a traditional stereoscopy display. Recently, Parallax Scanning technologies have been introduced, which provide (1) a scanning methodology, (2) incorporate vertical disparity, and (3) produce stereo images with substantially smaller disparities than the human interocular distances.1 To test whether these three features would improve the realism and reduce the cardboard cutout effect of stereo images, we have applied Parallax Scanning (PS) technologies to commercial stereoscopic digital cinema productions and have tested the results with a panel of stereo experts. These informal experiments show that the addition of PS information into the left and right image capture improves the overall perception of three-dimensionality for most viewers. Parallax scanning significantly increases the set of tools available for 3D storytelling while at the same time presenting imagery that is easy and pleasant to view.
Three dimensional stress vector sensor array and method therefor
Pfeifer, Kent Bryant; Rudnick, Thomas Jeffery
2005-07-05
A sensor array is configured based upon capacitive sensor techniques to measure stresses at various positions in a sheet simultaneously and allow a stress map to be obtained in near real-time. The device consists of single capacitive elements applied in a one or two dimensional array to measure the distribution of stresses across a mat surface in real-time as a function of position for manufacturing and test applications. In-plane and normal stresses in rolling bodies such as tires may thus be monitored.
Methods for preparation of three-dimensional bodies
Mulligan, Anthony C [Tucson, AZ]; Rigali, Mark J [Carlsbad, NM]; Sutaria, Manish P [Malden, MA]; Artz, Gregory J [Tucson, AZ]; Gafner, Felix H [Tucson, AZ]; Vaidyanathan, K Ranji [Tucson, AZ]
2008-06-17
Processes for mechanically fabricating two and three-dimensional fibrous monolith composites include preparing a fibrous monolith filament from a core composition of a first powder material and a boundary material of a second powder material. The filament includes a first portion of the core composition surrounded by a second portion of the boundary composition. One or more filaments are extruded through a mechanically-controlled deposition nozzle onto a working surface to create a fibrous monolith composite object. The objects may be formed directly from computer models and have complex geometries.
General methods for alarm reduction; Larmsanering med generella metoder
Ahnlund, Jonas; Bergquist, Tord; Raaberg, Martin [Lund Univ. (Sweden). Dept. of Information Technology]
2003-10-01
The information in control rooms has increased due to technological advances in process control. Large industries produce large data quantities, where some information is unnecessary or even incorrect. The operator needs support from an advanced and well-adjusted alarm system to be able to separate a real event from a minor disturbance. The alarms must be of assistance and not a nuisance. An improved alarm situation enables increased efficiency, with fewer production disturbances and improved safety. Yet it is still unusual that actions are taken to improve the situation. An alarm cleanup with general methods can briefly be described as taking advantage of the control system's built-in functions, the possibility to modify or create function blocks, and fine-tuning of the settings in the alarm system. In this project, we make use of an intelligent software tool, Alarm Cleanup Toolbox, that simulates different signal processing methods and tries to find improved settings for all the signals in the process. This is a fast and cost-efficient way to improve the overall alarm situation, and it lays a foundation for more advanced alarm systems. An alarm cleanup has been carried out at the Flintraennan district heating plant in Malmoe, where various signal processing methods have been implemented in a parallel alarm system. This made it possible to compare the two systems under the same conditions. The result is very promising and shows that many improvements can be achieved with very little effort. An analysis of the alarm system at Vattenreningen (the water purification process) at Heleneholmsverket in Malmoe has also been carried out. Alarm Cleanup Toolbox has, besides suggesting improved settings, also found logical errors in the alarm system. Here, no implementation was carried out and therefore the results are analytical, but they validate the efficiency of the general methods. The project has shown that an alarm cleanup with general methods is cost-efficient, and that the
Gross, W.; Boehler, J.; Twizer, K.; Kedem, B.; Lenz, A.; Kneubuehler, M.; Wellig, P.; Oechslin, R.; Schilling, H.; Rotman, S.; Middelmann, W.
2016-10-01
Hyperspectral remote sensing data can be used for civil and military applications to robustly detect and classify target objects. The high spectral resolution of hyperspectral data can compensate for the comparatively low spatial resolution, which allows for detection and classification of small targets, even below image resolution. Hyperspectral data sets are prone to considerable spectral redundancy, affecting and limiting data processing and algorithm performance. As a consequence, data reduction strategies become increasingly important, especially in view of near-real-time data analysis. The goal of this paper is to analyze different strategies for hyperspectral band selection algorithms and their effect on subpixel classification for different target and background materials. Airborne hyperspectral data is used in combination with linear target simulation procedures to create a representative amount of target-to-background ratios for evaluation of detection limits. Data from two different airborne hyperspectral sensors, AISA Eagle and Hawk, are used to evaluate transferability of band selection when using different sensors. The same target objects were recorded to compare the calculated detection limits. To determine subpixel classification results, pure pixels from the target materials are extracted and used to simulate mixed pixels with selected background materials. Target signatures are linearly combined with different background materials in varying ratios. The commonly used classification algorithm Adaptive Coherence Estimator (ACE) is used to compare the detection limit for the original data with several band selection and data reduction strategies. The evaluation of the classification results is done by assuming a fixed false alarm ratio and calculating the mean target-to-background ratio of correctly detected pixels. The results allow drawing conclusions about specific band combinations for certain target and background combinations. Additionally
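The ACE statistic used for the subpixel evaluation has a standard closed form, ACE(x) = (s^T C^{-1} x)^2 / ((s^T C^{-1} s)(x^T C^{-1} x)), with s the demeaned target signature and C the background covariance. A small sketch on synthetic data, with linear target-background mixing as described above (the band count, mixing ratio, and Gaussian background are illustrative):

```python
import numpy as np

def ace(pixels, target, background):
    """Adaptive Coherence Estimator scores for an (N, bands) pixel array."""
    mu = background.mean(axis=0)
    cov = np.cov(background, rowvar=False)
    cov_inv = np.linalg.inv(cov)
    s = target - mu
    x = pixels - mu
    num = (x @ cov_inv @ s) ** 2
    # row-wise quadratic forms x_i^T cov_inv x_i
    den = (s @ cov_inv @ s) * np.einsum('ij,jk,ik->i', x, cov_inv, x)
    return num / den  # scores in [0, 1]

rng = np.random.default_rng(1)
bands = 20
bg = rng.normal(size=(500, bands))          # background training pixels
tgt = np.ones(bands) * 3.0                  # target signature
mixed = 0.5 * tgt + 0.5 * bg[0]             # simulated 50% subpixel target
scores = ace(np.vstack([mixed, bg[1:50]]), tgt, bg)
print(scores[0] > scores[1:].max())  # True: the mixed pixel scores highest
```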
Jorge, Érica Gouveia; Tanomaru-Filho, Mario; Guerreiro-Tanomaru, Juliane Maria; Reis, José Maurício dos Santos Nunes; Spin-Neto, Rubens; Gonçalves, Marcelo
2015-01-01
This study quantitatively assessed the periapical bone repair following endodontic surgery, using planimetric evaluation based on two- (conventional and digital intraoral radiographic images - IRs) and three-dimensional (cone beam computed tomography - CBCT) evaluation. Eleven maxillary anterior teeth (of 11 patients) with periapical bone lesions and indication for surgical endodontic treatment were selected. IRs and CBCT images were acquired before the endodontic surgery, and 48 h, 4, and 8-months after the surgery. In each period of evaluation, the area (mm2) of the bone lesion was measured in the images, and the values for the three methods were compared. The area in the CBCT images was measured in the mesio-distal sections comprising the largest diameter of the lesion. Data were submitted to repeated measures 2-way ANOVA and t-tests with Bonferroni correction. There was significant difference between the periods of evaluation (p=0.002) regarding the assessed periapical bone lesion area. There was no statistically significant difference between the methods of evaluation (p=0.023). In the CBCT images the lesion areas were 10% larger than those observed in the conventional IRs (22.84 mm2) and 15% larger than those observed in the digital IRs (21.48 mm2). From the baseline (40.12 mm2) to 4 (20.06 mm2) and 8-months (9.40 mm2), reductions of 50 and 77% in the lesion area, respectively, were observed (p<0.0001). From 4 to 8-months, this value was 53%. Progressive bone repair could be seen from 48 h to 8-months following endodontic surgery based on two- (conventional and digital IRs) and three-dimensional (CBCT) evaluation. CBCT images provided results similar to those assessed by means of IRs.
Method for catalyzing oxidation/reduction reactions of simple molecules
Bicker, D.; Bonaventura, J.
1988-06-14
A method for oxidizing carbon monoxide to carbon dioxide is described, comprising: (1) contacting together carbon monoxide, a nitrogen-containing chelating agent, and water; wherein the chelating agent is at least one member selected from the group consisting of methemoglobin bound to a support, ferric hemoglobin bound to a support, iron-containing porphyrins bound to a support, and sperm whale myoglobin bound to a support, wherein the support is glass, a natural fiber, a synthetic fiber, a gel, charcoal, carbon ceramic material, a metal oxide, a synthetic polymer, a zeolite, a silica compound, or an alumina compound; and (2) obtaining carbon dioxide.
Matsumoto, Hisanori; Tokiwano, Kazuo; Hosoi, Hirotaka; Sueoka, Kazuhisa; Mukasa, Koichi
2002-05-01
We present a new technique for the restoration of scanning tunneling microscopy (STM) images, which is a two-dimensional extension of a recently developed statistical approach based on the one-dimensional least-squares method (LSM). An STM image is regarded as a realization of a stochastic process and assumed to be a composition of an underlying image and noise. We express the underlying image in terms of a two-dimensional generalized trigonometric polynomial suitable for representing the atomic protrusions in STM images. The optimization of the polynomial is performed by the two-dimensional LSM combined with the power spectral density function estimated by means of the maximum entropy method (MEM) iterative algorithm for two-dimensional signals. The restored images are obtained as the optimum least-squares fitting polynomial which is a continuous surface. We apply this technique to modeled and actual STM data. Results show that the present method yields a reasonable restoration of STM images.
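A one-dimensional analogue of the least-squares step may make the approach concrete: fit a low-order trigonometric polynomial to a noisy periodic profile (the paper's two-dimensional version additionally uses MEM-estimated power spectra to guide the optimization; that part is omitted in this sketch):

```python
import numpy as np

def fit_trig_polynomial(y, n_harmonics):
    """Least-squares fit of a 1-D trigonometric polynomial; the result is
    a smooth, continuous restoration of the underlying periodic signal."""
    n = y.size
    t = np.arange(n)
    cols = [np.ones(n)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * t / n))
        cols.append(np.sin(2 * np.pi * k * t / n))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

rng = np.random.default_rng(0)
t = np.arange(256)
underlying = np.cos(2 * np.pi * 3 * t / 256)   # periodic "atomic corrugation"
noisy = underlying + rng.normal(scale=0.3, size=256)
restored = fit_trig_polynomial(noisy, n_harmonics=5)
print(np.std(restored - underlying) < np.std(noisy - underlying))  # True
```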
Principal component analysis method and reduction of seismicity parameters
WANG Wei; MA Qin-zhong; LIN Ming-zhou; WU Geng-feng; WU Shao-chun
2005-01-01
In this paper, principal component analysis is applied to 8 seismicity parameters: earthquake frequency N (ML≥3.0), b-value, η-value, A(b)-value, Mf-value, Ac-value, C-value, and D-value, which reflect the characteristics of the magnitude, time, and space distribution of seismicity from different respects. By using the principal component analysis method, a synthesis parameter W reflecting the anomalous features of earthquake magnitude, time, and space distribution can be obtained. Generally, there is some correlation among the 8 parameters, but their variations differ in different periods. Earthquake prediction based on these parameters individually is not very good. However, the synthesis parameter W showed obvious anomalies before 13 earthquakes (MS>5.8) that occurred in North China, which indicates that the synthesis parameter W can better reflect the anomalous characteristics of the magnitude, time, and space distribution of seismicity. Other problems related to the conclusions drawn by the principal component analysis method are also discussed.
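The construction of a synthesis parameter W as the leading principal-component score can be sketched as follows (synthetic data stands in for the 8 seismicity parameters; the common-factor loading structure is illustrative):

```python
import numpy as np

def synthesis_parameter(X):
    """First principal-component score of standardized parameters
    (rows: time windows, columns: the seismicity parameters)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    corr = np.corrcoef(Z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)   # ascending eigenvalues
    w1 = eigvecs[:, -1]                       # leading eigenvector
    explained = eigvals[-1] / eigvals.sum()   # fraction of variance captured
    return Z @ w1, explained

rng = np.random.default_rng(0)
common = rng.normal(size=(300, 1))            # shared anomaly signal
# 8 correlated parameters: common signal with random loadings plus noise
X = common @ rng.normal(size=(1, 8)) + 0.3 * rng.normal(size=(300, 8))
W, explained = synthesis_parameter(X)
print(explained)  # the leading component dominates when parameters co-vary
```

When the individual parameters share a common anomaly signal, as argued above, the single score W summarizes it far better than any one parameter.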
Block-Krylov component synthesis method for structural model reduction
Craig, Roy R., Jr.; Hale, Arthur L.
1988-01-01
A new analytical method is presented for generating component shape vectors, or Ritz vectors, for use in component synthesis. Based on the concept of a block-Krylov subspace, easily derived recurrence relations generate blocks of Ritz vectors for each component. The subspace spanned by the Ritz vectors is called a block-Krylov subspace. The synthesis uses the new Ritz vectors rather than component normal modes to reduce the order of large, finite-element component models. An advantage of the Ritz vectors is that they involve significantly less computation than component normal modes. Both 'free-interface' and 'fixed-interface' component models are derived. They yield block-Krylov formulations paralleling the concepts of free-interface and fixed-interface component modal synthesis. Additionally, block-Krylov reduced-order component models are shown to have special disturbability/observability properties. Consequently, the method is attractive in active structural control applications, such as large space structures. The new fixed-interface methodology is demonstrated by a numerical example. The accuracy is found to be comparable to that of fixed-interface component modal synthesis.
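The recurrence is easy to state: starting from the static response to a load/interface block, each new block is the preceding one multiplied by K^{-1}M. A simplified sketch on a spring-mass chain (plain QR orthonormalization stands in for M-orthonormalization; with M = I the two coincide):

```python
import numpy as np

def block_krylov_basis(K, M, F, n_blocks):
    """Blocks of Ritz vectors: R0 = K^-1 F, R_{j+1} = K^-1 M R_j,
    orthonormalized via QR on the stacked blocks (a simplified sketch
    of the block-Krylov recurrence, not the paper's exact algorithm)."""
    blocks = [np.linalg.solve(K, F)]
    for _ in range(n_blocks - 1):
        blocks.append(np.linalg.solve(K, M @ blocks[-1]))
    Q, _ = np.linalg.qr(np.hstack(blocks))
    return Q

# Toy spring-mass chain: tridiagonal stiffness, identity mass matrix
n = 50
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
F = np.zeros((n, 1)); F[-1, 0] = 1.0     # single load/interface block
Q = block_krylov_basis(K, M, F, n_blocks=6)
Kr, Mr = Q.T @ K @ Q, Q.T @ M @ Q        # 6-DOF reduced-order model
w_full = np.linalg.eigvalsh(K).min()      # lowest eigenvalue (M = I)
w_red = np.linalg.eigvalsh(np.linalg.solve(Mr, Kr)).min()
print(abs(w_red - w_full) / w_full)       # small relative error
```

The 6-vector basis captures the lowest mode of the 50-DOF chain accurately, and generating it costs only linear solves, not an eigensolution, which is the computational advantage claimed above.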
Method and system for manipulating a digital representation of a three-dimensional object
2010-01-01
A method of manipulating a three-dimensional virtual building block model by means of two-dimensional cursor movements, the virtual building block model including a plurality of virtual building blocks each including a number of connection elements for connecting the virtual building block...
Xiong, Jie L; Atkins, Phillip; Chew, Weng Cho
2010-01-01
In this paper, we generalize the surface integral equation method for the evaluation of the Casimir force to arbitrary three-dimensional geometries. Similar to the two-dimensional case, the evaluation of the mean Maxwell stress tensor is cast into solving a series of three-dimensional scattering problems. The formulation and solution of the three-dimensional scattering problem are well studied in classical computational electromagnetics. This paper demonstrates that this quantum electrodynamic phenomenon can be studied using the knowledge and techniques of classical electrodynamics.
Natural position of the head: review of two-dimensional and three-dimensional methods of recording.
Cassi, D; De Biase, C; Tonni, I; Gandolfini, M; Di Blasio, A; Piancino, M G
2016-04-01
Both the correct position of the patient's head and a standard system for the acquisition of images are essential for objective evaluation of the facial profile and the skull, and for longitudinal superimposition. The natural position of the head was introduced into orthodontics in the late 1950s, and is used as a postural basis for craniocervical and craniofacial morphological analysis. It can also have a role in the planning of the surgical correction of craniomaxillofacial deformities. The relatively recent transition in orthodontics from 2-dimensional to 3-dimensional imaging, and from analogue to digital technology, has renewed attention in finding a versatile method for the establishment of an accurate and reliable head position during the acquisition of serial records. In this review we discuss definition, clinical applications, and procedures to establish the natural head position and their reproducibility. We also consider methods to reproduce and record the position in two and three planes.
New Blocking Artifacts Reduction Method Based on Wavelet Transform
SHI Min; YI Qing-ming
2007-01-01
It is well known that a block discrete cosine transform compressed image exhibits visually annoying blocking artifacts at low bit rates. A new post-processing deblocking algorithm in the wavelet domain is proposed. The algorithm exploits the features that blocking artifacts show in the wavelet domain: after the wavelet transform, the energy of blocking artifacts is concentrated into certain lines, forming annoying visual effects. The aim of reducing blocking artifacts is to capture the excessive energy on the block boundary effectively and reduce it below the visible range. Adaptive operators for the different subbands are computed based on the wavelet coefficients. The operators adapt to different images and to the characteristics of the blocking artifacts. Experimental results show that the proposed method can significantly improve visual quality and also increase the peak signal-to-noise ratio (PSNR) of the output image.
Adaptive Subband Filtering Method for MEMS Accelerometer Noise Reduction
Piotr PIETRZAK
2008-12-01
Silicon microaccelerometers can be considered as an alternative to high-priced piezoelectric sensors. Unfortunately, the relatively high noise floor of commercially available MEMS (Micro-Electro-Mechanical Systems) sensors limits the possibility of their use in condition monitoring systems for rotating machines. The solution to this problem is the method of signal filtering described in the paper. It is based on adaptive subband filtering employing an Adaptive Line Enhancer. For filter weight adaptation, two novel algorithms have been developed, based on the NLMS algorithm. Both of them significantly simplify its software and hardware implementation and accelerate the adaptation process. The paper also presents the software (Matlab) and hardware (FPGA) implementations of the proposed noise filter. In addition, the results of the performed tests are reported. They confirm the high efficiency of the solution.
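An Adaptive Line Enhancer with NLMS weight updates can be sketched in a few lines (tap count, delay, and step size below are illustrative; the paper's two simplified adaptation algorithms are not reproduced here):

```python
import numpy as np

def ale_nlms(x, n_taps=32, delay=1, mu=0.2, eps=1e-6):
    """Adaptive Line Enhancer: predict x[n] from the delayed input with
    NLMS weight updates. The prediction retains narrowband (periodic)
    content, which stays correlated across the delay, and rejects
    broadband noise, which does not."""
    w = np.zeros(n_taps)
    y = np.zeros_like(x)
    for n in range(delay + n_taps, len(x)):
        u = x[n - delay - n_taps:n - delay][::-1]  # delayed tap vector
        y[n] = w @ u
        e = x[n] - y[n]
        w += mu * e * u / (u @ u + eps)            # normalized LMS update
    return y

rng = np.random.default_rng(0)
t = np.arange(20_000)
clean = np.sin(2 * np.pi * 0.05 * t)               # machine vibration line
noisy = clean + rng.normal(scale=1.0, size=t.size)  # MEMS noise floor
enhanced = ale_nlms(noisy)
# Compare residual power in the converged, steady-state portion
err_in = np.mean((noisy[10_000:] - clean[10_000:]) ** 2)
err_out = np.mean((enhanced[10_000:] - clean[10_000:]) ** 2)
print(err_out < err_in)  # True: the noise floor is reduced
```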
Singular perturbations introduction to system order reduction methods with applications
Shchepakina, Elena; Mortell, Michael P
2014-01-01
These lecture notes provide a fresh approach to investigating singularly perturbed systems using asymptotic and geometrical techniques. It gives many examples and step-by-step techniques, which will help beginners move to a more advanced level. Singularly perturbed systems appear naturally in the modelling of many processes that are characterized by slow and fast motions simultaneously, for example, in fluid dynamics and nonlinear mechanics. This book’s approach consists in separating out the slow motions of the system under investigation. The result is a reduced differential system of lesser order. However, it inherits the essential elements of the qualitative behaviour of the original system. Singular Perturbations differs from other literature on the subject due to its methods and wide range of applications. It is a valuable reference for specialists in the areas of applied mathematics, engineering, physics, biology, as well as advanced undergraduates for the earlier parts of the book, and graduate stude...
Microbial copper reduction method to scavenge anthropogenic radioiodine
Lee, Seung Yeop; Lee, Ji Young; Min, Je Ho; Kim, Seung Soo; Baik, Min Hoon; Chung, Sang Yong; Lee, Minhee; Lee, Yongjae
2016-06-01
Unexpected reactor accidents and radioisotope production and consumption have led to a continuous increase in the global-scale contamination of radionuclides. In particular, anthropogenic radioiodine has become critical due to its highly volatile mobilization and recycling in global environments, resulting in widespread, negative impact on nature. We report a novel biostimulant method to effectively scavenge radioiodine that exhibits remarkable selectivity for the highly difficult-to-capture radioiodine of >500-fold over other anions, even under circumneutral pH. We discovered a useful mechanism by which microbially reducible copper (i.e., Cu2+ to Cu+) acts as a strong binder for iodide-iodide anions to form a crystalline halide salt of CuI that is highly insoluble in wastewater. The biocatalytic crystallization of radioiodine is a promising way to remove radioiodine in a great capacity with robust growth momentum, further ensuring its long-term stability through nuclear I- fixation via microcrystal formation.
Gao Chao; Zhou Shanxue
2010-01-01
This letter investigates the wavelet transform, as well as the principle and method of noise reduction based on it. Threshold noise reduction is chosen, and the principles, features, and design steps of the threshold method are discussed in detail. Four threshold selection methods (rigrsure, heursure, sqtwolog, and minimax) are compared qualitatively and quantitatively. The wavelet analysis toolbox of MATLAB is used to realize the computer simulation of the signal noise reduction. The graphics and the calculated standard deviations of the various threshold noise reductions show that, when dealing with the actual pressure signal of an oil pipeline leak, the sqtwolog threshold selection method can effectively remove the noise. For the pressure signal of an oil pipeline leak, the best choice is wavelet threshold noise reduction with the sqtwolog threshold. The detected leakage point is close to the actual position, with a relative error of less than 1%.
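The sqtwolog rule is the universal threshold T = sigma*sqrt(2 ln N). A single-level Haar sketch of the procedure (MATLAB's multi-level toolbox pipeline is reduced here to one decomposition level, with the standard MAD-based noise estimate):

```python
import numpy as np

def haar_denoise_sqtwolog(x):
    """One-level Haar wavelet denoising with the universal ('sqtwolog')
    threshold T = sigma * sqrt(2*ln N). The noise level sigma is
    estimated from the detail coefficients via the median absolute
    deviation; soft thresholding shrinks the details toward zero."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    sigma = np.median(np.abs(d)) / 0.6745
    T = sigma * np.sqrt(2 * np.log(x.size))
    d = np.sign(d) * np.maximum(np.abs(d) - T, 0.0)  # soft threshold
    out = np.empty_like(x)                 # inverse Haar transform
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4096)
clean = np.sin(2 * np.pi * 5 * t)          # slowly varying pressure signal
noisy = clean + rng.normal(scale=0.2, size=t.size)
denoised = haar_denoise_sqtwolog(noisy)
print(np.std(denoised - clean) < np.std(noisy - clean))  # True
```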
Ying, Yulong; Liu, Yu; Wang, Xinyu; Mao, Yiyin; Cao, Wei; Hu, Pan; Peng, Xinsheng
2015-01-28
Two-dimensional (2-D) Ti3C2Tx nanosheets are obtained by etching bulk Ti3C2Tx powders in HF solution and delaminating them ultrasonically; they exhibit excellent removal capacity for toxic Cr(VI) from water due to their high surface area, good dispersibility, and reducing ability. The Ti3C2Tx nanosheets delaminated by 10% HF solution show the most efficient Cr(VI) removal performance, with a capacity of 250 mg g(-1), and the residual concentration of Cr(VI) in treated water is less than 5 ppb, far below the 0.05 ppm limit for Cr(VI) in the drinking water standard recommended by the World Health Organization. This kind of 2-D Ti3C2Tx nanosheet can not only remove Cr(VI) rapidly and effectively in one step from aqueous solution by reducing Cr(VI) to Cr(III), but also adsorb the reduced Cr(III) simultaneously. Furthermore, these reductive 2-D Ti3C2Tx nanosheets are shown more generally to remove other oxidizing agents, such as K3[Fe(CN)6], KMnO4, and NaAuCl4 solutions, by converting them to low oxidation states. These results significantly expand the potential applications of 2-D Ti3C2Tx nanosheets in water treatment.
Cai, Kai; Liu, Jiawei; Zhang, Huan; Huang, Zhao; Lu, Zhicheng; Foda, Mohamed F; Li, Tingting; Han, Heyou
2015-05-11
An intermediate-template-directed method has been developed for the synthesis of quasi-one-dimensional Au/PtAu heterojunction nanotubes by the heterogeneous nucleation and growth of Au on Te/Pt core-shell nanostructures in aqueous solution. The synthesized porous Au/PtAu bimetallic nanotubes (PABNTs) consist of porous tubular framework and attached Au nanoparticles (AuNPs). The reaction intermediates played an important role in the preparation, which fabricated the framework and provided a localized reducing agent for the reduction of the Au and Pt precursors. The Pt7 Au PABNTs showed higher electrocatalytic activity and durability in the oxygen-reduction reaction (ORR) in 0.1 M HClO4 than porous Pt nanotubes (PtNTs) and commercially available Pt/C. The mass activity of PABNTs was 218 % that of commercial Pt/C after an accelerated durability test. This study demonstrates the potential of PABNTs as highly efficient electrocatalysts. In addition, this method provides a facile strategy for the synthesis of desirable hetero-nanostructures with controlled size and shape by utilizing an intermediate template. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Supervised dimensionality reduction and contextual pattern recognition in medical image processing
Loog, Marco
2004-01-01
The past few years have witnessed a significant increase in the number of supervised methods employed in diverse image processing tasks. Especially in medical image analysis the use of, for example, supervised shape and appearance modelling has increased considerably and has proven to be successful.
Xian-Qian Wu; Xi Wang; Yan-Peng Wei; Hong-Wei Song; Chen-Guang Huang
2012-01-01
Shot peening is a widely used surface treatment method that generates compressive residual stress near the surface of metallic materials to increase fatigue life and resistance to corrosion fatigue, cracking, etc. Compressive residual stress and dent profile are important factors in evaluating the effectiveness of the shot peening process. In this paper, the influence of dimensionless parameters on the maximum compressive residual stress and the maximum depth of the dent was investigated. First, dimensionless relations of the processing parameters that affect the maximum compressive residual stress and the maximum depth of the dent were deduced by the dimensional analysis method. Second, the influence of each dimensionless parameter on the dimensionless variables was investigated by the finite element method. Furthermore, related empirical formulas were given for each dimensionless parameter based on the simulation results. Finally, a comparison was made and good agreement was found between the simulation results and the empirical formulas, which shows that a useful approach is provided in this paper for analyzing the influence of each individual parameter.
Hunziker, Jürg; Laloy, Eric; Linde, Niklas
2016-04-01
an improved formulation of the dimensionality reduction, and numerically show how it reduces artifacts in the generated models and provides better posterior estimation of the subsurface geostatistical structure. We next show that the results of the method compare very favorably against previous deterministic and stochastic inversion results obtained at the South Oyster Bacterial Transport Site in Virginia, USA. The long-term goal of this work is to enable MCMC-based full waveform inversion of crosshole GPR data.
Prell, D; Kalender, W A; Kyriakou, Y
2010-12-01
The purpose of this study was to develop, implement and evaluate a dedicated metal artefact reduction (MAR) method for flat-detector CT (FDCT). The algorithm uses the multidimensional raw data space to calculate surrogate attenuation values for the original metal traces in the raw data domain. The metal traces are detected automatically by a three-dimensional, threshold-based segmentation algorithm in an initial reconstructed image volume, based on twofold histogram information for calculating appropriate metal thresholds. These thresholds are combined with constrained morphological operations in the projection domain. A subsequent reconstruction of the modified raw data yields an artefact-reduced image volume that is further processed by a combining procedure that reinserts the missing metal information. For image quality assessment, measurements on semi-anthropomorphic phantoms containing metallic inserts were evaluated in terms of CT value accuracy, image noise and spatial resolution before and after correction. Measurements of the same phantoms without prostheses were used as ground truth for comparison. Cadaver measurements were performed on complex and realistic cases and to determine the influences of our correction method on the tissue surrounding the prostheses. The results showed a significant reduction of metal-induced streak artefacts (CT value differences were reduced to below 22 HU and image noise reduction of up to 200%). The cadaver measurements showed excellent results for imaging areas close to the implant and exceptional artefact suppression in these areas. Furthermore, measurements in the knee and spine regions confirmed the superiority of our method to standard one-dimensional, linear interpolation.
A three dimensional implicit immersed boundary method with application
(no author listed)
2011-01-01
Most algorithms of the immersed boundary method originated by Peskin are explicit when it comes to the computation of the elastic forces exerted by the immersed boundary to the fluid. A drawback of such an explicit approach is a severe restriction on the time step size for maintaining numerical stability. An implicit immersed boundary method in two dimensions using the lattice Boltzmann approach has been proposed. This paper reports an extension of the method to three dimensions and its application to simul...
Rozhkov, Mikhail; Bobrov, Dmitry; Kitov, Ivan
2014-05-01
The Master Event technique is a powerful tool for Expert Technical Analysis within the CTBT framework as well as for real-time monitoring with the waveform cross-correlation (CC) (match filter) approach. The primary goal of CTBT monitoring is the detection and location of nuclear explosions. Therefore, cross-correlation monitoring should be focused on finding such events. The use of physically adequate waveform templates may significantly increase the number of valid, both natural and manmade, events in the Reviewed Event Bulletin (REB) of the International Data Centre. Inadequate templates for master events may increase the number of CTBT-irrelevant events in the REB and reduce the sensitivity of the CC technique to valid events. In order to cover the entire earth, including vast aseismic territories, with CC-based nuclear test monitoring, we conducted thorough research and defined the most appropriate real and synthetic master events representing underground explosion sources. A procedure was developed for optimizing the master event template simulation and narrowing the classes of CC templates used in the detection and location process, based on principal and independent component analysis (PCA and ICA). Actual waveforms and metadata from the DTRA Verification Database were used to validate our approach. The detection and location results based on real and synthetic master events were compared. The prototype of the CC-based Global Grid monitoring system developed in the IDC during the last year was populated with different hybrid waveform templates (synthetics, synthetics components, and real components) and its performance was assessed with the world seismicity data flow, including the DPRK-2013 event. The specific features revealed in this study for the P-waves from the DPRK underground nuclear explosions (UNEs) can reduce the global detection threshold of seismic monitoring under the CTBT by 0.5 units of magnitude. This corresponds to the reduction in the test yield by a
A simple method for the determination of reduction potentials in heme proteins.
Efimov, Igor; Parkin, Gary; Millett, Elizabeth S; Glenday, Jennifer; Chan, Cheuk K; Weedon, Holly; Randhawa, Harpreet; Basran, Jaswir; Raven, Emma L
2014-03-03
We describe a simple method for the determination of heme protein reduction potentials. We use the method to determine the reduction potentials for the PAS-A domains of the regulatory heme proteins human NPAS2 (Em=-115 mV ± 2 mV, pH 7.0) and human CLOCK (Em=-111 mV ± 2 mV, pH 7.0). We suggest that the method can be easily and routinely applied to the determination of reduction potentials across the family of heme proteins.
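The equilibrium underlying such a measurement is the Nernst equation. As a hedged illustration only (the titration data below are synthetic and the function names are invented, not the paper's protocol), a midpoint potential Em can be recovered from fraction-reduced versus solution-potential data by a simple grid search:

```python
import math

def fraction_reduced(E, Em, n=1, T=298.15):
    """Nernst equation: fraction of protein reduced at solution potential E (volts)."""
    F, R = 96485.0, 8.314
    return 1.0 / (1.0 + math.exp(n * F * (E - Em) / (R * T)))

def fit_em(potentials, fractions, n=1):
    """Grid-search the midpoint potential Em (volts) minimizing squared error."""
    grid = [em / 1000.0 for em in range(-300, 101)]  # -300 mV .. +100 mV, 1 mV steps
    def sse(em):
        return sum((fraction_reduced(E, em, n) - f) ** 2
                   for E, f in zip(potentials, fractions))
    return min(grid, key=sse)

# synthetic, noiseless titration data generated at Em = -115 mV (illustration only)
true_em = -0.115
Es = [-0.215, -0.165, -0.140, -0.115, -0.090, -0.065, -0.015]
fs = [fraction_reduced(E, true_em) for E in Es]
print(round(fit_em(Es, fs) * 1000))  # recovers -115 (mV)
```

With noisy experimental data one would of course fit by proper nonlinear regression rather than a grid, but the sketch shows the shape of the calculation.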
2013-07-01
information within time-critical environments (1, 2). Innovative methods are required that allow the efficient and effective transformation of data...
DONG Zhong-Zhou; LIU Xi-Qiang; BAI Cheng-Lin
2006-01-01
Using the classical Lie method of infinitesimals, we first obtain the symmetry of the (2+1)-dimensional Burgers-Korteweg-de-Vries (3D-BKdV) equation. Then we reduce the 3D-BKdV equation using the symmetry and give some exact solutions of the 3D-BKdV equation. When using the direct method, we impose a condition and obtain a relationship between the new solutions and the old ones. Given a solution of the 3D-BKdV equation, we can get a new one from the relationship. The relationship between the symmetry obtained by using the classical Lie method and that obtained by using the direct method is also discussed. Finally, we give the conservation laws of the 3D-BKdV equation.
Xiaoni Dong
2016-01-01
Full Text Available Process models and parameters are two critical steps for fault prognosis in the operation of rotating machinery. Due to the requirement for a short and rapid response, it is important to study robust sensor data representation schemes. However, the conventional holospectrum defined by one-dimensional or two-dimensional methods does not sufficiently present this information in both the frequency and time domains. To supply a complete holospectrum model, a new three-dimensional spatial representation method is proposed. This method integrates improved three-dimensional (3D holospectra and 3D filtered orbits, leading to the integration of radial and axial vibration features in one bearing section. The results from simulation and experimental analysis on a complex compressor show that the proposed method can present the real operational status and clearly reveal early faults, thus demonstrating great potential for condition-based maintenance prediction in industrial machinery.
NONLINEAR GALERKIN METHODS FOR SOLVING TWO DIMENSIONAL NEWTON-BOUSSINESQ EQUATIONS
GUO Boling
1995-01-01
Nonlinear Galerkin methods for solving two-dimensional Newton-Boussinesq equations are proposed. The existence and uniqueness of the global generalized solution of these equations, and the convergence of the approximate solutions, are also obtained.
Publishing nutrition research: a review of multivariate techniques--part 3: data reduction methods.
Gleason, Philip M; Boushey, Carol J; Harris, Jeffrey E; Zoellner, Jamie
2015-07-01
This is the ninth in a series of monographs on research design and analysis, and the third in a set of these monographs devoted to multivariate methods. The purpose of this article is to provide an overview of data reduction methods, including principal components analysis, factor analysis, reduced rank regression, and cluster analysis. In the field of nutrition, data reduction methods can be used for three general purposes: for descriptive analysis in which large sets of variables are efficiently summarized, to create variables to be used in subsequent analysis and hypothesis testing, and in questionnaire development. The article describes the situations in which these data reduction methods can be most useful, briefly describes how the underlying statistical analyses are performed, and summarizes how the results of these data reduction methods should be interpreted.
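The first of the data reduction methods surveyed above, principal components analysis, can be sketched in a few lines of NumPy. This is a minimal illustration only; the toy data matrix and variable names are invented and have nothing to do with the monograph's examples:

```python
import numpy as np

def pca(X, k):
    """Reduce an n-samples x p-variables matrix X to its first k principal components."""
    Xc = X - X.mean(axis=0)               # center each variable
    cov = np.cov(Xc, rowvar=False)        # p x p covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]     # largest variance first
    components = eigvecs[:, order[:k]]
    scores = Xc @ components              # projected (reduced) data
    explained = eigvals[order[:k]] / eigvals.sum()
    return scores, explained

# toy data: 6 "subjects" x 4 highly correlated variables (illustrative only)
rng = np.random.default_rng(0)
base = rng.normal(size=(6, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(6, 1)) for _ in range(4)])
scores, explained = pca(X, 2)
print(scores.shape)   # (6, 2)
print(explained[0])   # first component captures most of the shared variance
```

Because the four columns share one underlying signal, almost all the variance loads on the first component, which is exactly the "large sets of variables efficiently summarized" use case the abstract describes.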
Balawejder Maciej
2014-12-01
Full Text Available A method for the reduction of pesticide residues in soft fruits based on the utilization of ozone is proposed. The procedure allows for an effective reduction of boscalid residues by 38% in raspberries and of thiram residues by about 58% in blackcurrants. Furthermore, it can be used on an industrial scale.
Three-dimensional velocity obstacle method for UAV deconflicting maneuvers
Jenie, Y.I.; Van Kampen, E.J.; De Visser, C.C.; Ellerbroek, J.; Hoekstra, J.M.
2015-01-01
Autonomous systems are required in order to enable UAVs to conduct self-separation and collision avoidance, especially for flights within the civil airspace system. A method called the Velocity Obstacle Method can provide the necessary situational awareness for UAVs in a dynamic environment, and can
Knowledge Reduction Based on Divide and Conquer Method in Rough Set Theory
Feng Hu
2012-01-01
Full Text Available The divide and conquer method is a typical granular computing method using multiple levels of abstraction and granulation. So far, although some achievements based on the divide and conquer method in rough set theory have been acquired, systematic methods for knowledge reduction based on the divide and conquer method are still absent. In this paper, knowledge reduction approaches based on the divide and conquer method, under an equivalence relation and under a tolerance relation, are presented, respectively. After that, a systematic approach, named the abstract process for knowledge reduction based on the divide and conquer method in rough set theory, is proposed. Based on the presented approach, two algorithms for knowledge reduction, including an algorithm for attribute reduction and an algorithm for attribute value reduction, are presented. Experimental evaluations are done to test the methods on UCI data sets and KDDCUP99 data sets. The experimental results illustrate that the proposed approaches are efficient at processing large data sets with a good recognition rate, compared with KNN, SVM, C4.5, Naive Bayes, and CART.
Wang Zhen-Li; Liu Qiang
2015-07-01
In this paper, the classical Lie group method is employed to obtain exact travelling wave solutions of the generalized Camassa–Holm Kadomtsev–Petviashvili (g-CH–KP) equation. We give the conservation laws of the g-CH–KP equation. Using the symmetries, we find six classical similarity reductions of g-CH–KP equation. Many types of exact solutions of the g-CH–KP equation are derived by solving the reduced equations.
Correlation analysis of PCB and comparison of test-analysis model reduction methods
Xu Fei; Li Chuanri; Jiang Tongmin; Rong Shuanglong
2014-01-01
The validity of correlation analysis between finite element model (FEM) and modal test data is strongly affected by three factors, i.e., quality of excitation and measurement points in modal test, FEM reduction methods, and correlation check techniques. A new criterion based on modified mode participation (MMP) for choosing the best excitation point is presented. Comparison between this new criterion and the mode participation (MP) criterion is made by using Case 1 with a simple printed circuit board (PCB). The result indicates that this new criterion produces better results. In Case 2, 35 measurement points are selected to perform modal test and correlation analysis, while 9 are selected in Case 3. System equivalent reduction expansion process (SEREP), modal assurance criteria (MAC), coordinate modal assurance criteria (CoMAC), pseudo orthogonality check (POC) and coordinate orthogonality check (CORTHOG) are used to show the error introduced by modal test in Cases 2 and 3. Case 2 shows that additional errors which cannot be identified by using CoMAC can be found by using CORTHOG. In both Cases 2 and 3, Guyan reduction, improved reduced system (IRS) method, SEREP and Hybrid reduction are compared for accuracy and robustness. The results suggest that the quality of the reduction process is problem dependent. However, the IRS method is an improvement over the Guyan reduction, and the Hybrid reduction is an improvement over the SEREP reduction.
Wang, Xiuhong; Mao, Xingpeng; Wang, Yiming; Zhang, Naitong; Li, Bo
2016-09-15
Based on sparse representations, the problem of two-dimensional (2-D) direction of arrival (DOA) estimation is addressed in this paper. A novel sparse 2-D DOA estimation method, called Dimension Reduction Sparse Reconstruction (DRSR), is proposed with pairing by Spatial Spectrum Reconstruction of Sub-Dictionary (SSRSD). By utilizing the angle decoupling method, which transforms a 2-D estimation into two independent one-dimensional (1-D) estimations, the high computational complexity induced by a large 2-D redundant dictionary is greatly reduced. Furthermore, a new angle matching scheme, SSRSD, which is less sensitive to sparse reconstruction error and has a higher pair-matching probability, is introduced. The proposed method can be applied to any type of orthogonal array without requiring a large number of snapshots or a priori knowledge of the number of signals. The theoretical analyses and simulation results show that the DRSR-SSRSD method performs well for coherent signals, with performance approaching the Cramer-Rao bound (CRB), even under single-snapshot and low signal-to-noise ratio (SNR) conditions.
Using Data Reduction Methods To Predict Quality Of Life In Breast ...
But usually there exist many factors that cause difficulty in fitting the models and predicting. ... a method of data reduction was used for reducing the number of predictors. ... regression showed that only role function, social function and diarrhea were ...
Method of reduction of nitroaromatics by enzymatic reaction with redox enzymes
Shah, Manish M.
2000-01-01
A method for the controlled reduction of nitroaromatic compounds such as nitrobenzene and 2,4,6-trinitrotoluene by enzymatic reaction with redox enzymes, such as Oxyrase (Trademark of Oxyrase, Inc., Mansfield, Ohio).
Shah, Manish M.; Campbell, James A.
1998-01-01
A method for the controlled reduction of nitroaromatic compounds such as nitrobenzene and 2,4,6-trinitrotoluene by enzymatic reaction with oxygen sensitive nitroreductase enzymes, such as ferredoxin NADP oxidoreductase.
Zamora, A.; Gutierrez, A. E.; Velasco, A. A.
2014-12-01
2- and 3-Dimensional models obtained from the inversion of geophysical data are widely used to represent the structural composition of the Earth and to constrain independent models obtained from other geological data (e.g. core samples, seismic surveys, etc.). However, inverse modeling of gravity data presents a very unstable and ill-posed mathematical problem, given that solutions are non-unique and small changes in parameters (position and density contrast of an anomalous body) can highly impact the resulting model. Through the implementation of an interior-point method constrained optimization technique, we improve the 2-D and 3-D models of Earth structures representing known density contrasts mapping anomalous bodies in uniform regions and boundaries between layers in layered environments. The proposed techniques are applied to synthetic data and gravitational data obtained from the Rio Grande Rift and the Cooper Flat Mine region located in Sierra County, New Mexico. Specifically, we improve the 2- and 3-D Earth models by getting rid of unacceptable solutions (those that do not satisfy the required constraints or are geologically unfeasible) given the reduction of the solution space.
FAN En-Gui
2001-01-01
Two new applications of the homogeneous balance (HB) method are presented. It is shown that the HB method can be extended to search for the Backlund transformations and similarity reductions of nonlinear partial differential equations. The close relations among the HB method, the Weiss-Tabor-Carnevale method and the Clarkson-Kruskal direct reduction method are also found. The KdV-MKdV equation is considered as an illustrative example, and one kind of Backlund transformation, three kinds of similarity reductions and several kinds of travelling wave solutions are obtained by using the extended HB method.
A PDE based Method for Speckle Reduction of Log-compressed Ultrasound Image
Jie Huang
2011-04-01
Full Text Available Speckle noise is widely present in coherent imaging systems, such as synthetic aperture radar, sonar, ultrasound and laser imaging, and is commonly described as signal-correlated. In this paper, we focus on the speckle reduction problem in real ultrasound images. Unlike traditional anisotropic diffusion methods, which usually take the image gradient as the diffusion index, we present a new texture-based anisotropic diffusion method for speckle reduction in real ultrasound images. Results comparing our new method with other well-known methods on both synthetic images and real ultrasound images are reported to show the superiority of our method in keeping important features of real ultrasound images.
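For context, the classic gradient-driven anisotropic diffusion baseline that such texture-based methods are positioned against (Perona-Malik) can be sketched in NumPy. This is a minimal illustration with periodic borders and arbitrary parameter values, not the paper's texture-based scheme:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Classic gradient-driven anisotropic diffusion (Perona-Malik), for comparison only."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (periodic borders via np.roll, fine for a sketch)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping function: diffuse less across strong gradients
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# noisy constant patch: diffusion should shrink the noise variance
rng = np.random.default_rng(1)
noisy = 0.5 + 0.05 * rng.normal(size=(32, 32))
smoothed = perona_malik(noisy)
print(smoothed.std() < noisy.std())  # True: noise is suppressed
```

Because the edge-stopping function here depends only on the local gradient, it cannot distinguish speckle from genuine tissue texture, which is the weakness the texture-based diffusion index addresses.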
Mehmet Z. Baykara
2012-09-01
Full Text Available Noncontact atomic force microscopy (NC-AFM) is being increasingly used to measure the interaction force between an atomically sharp probe tip and surfaces of interest, as a function of the three spatial dimensions, with picometer and piconewton accuracy. Since the results of such measurements may be affected by piezo nonlinearities, thermal and electronic drift, tip asymmetries, and elastic deformation of the tip apex, these effects need to be considered during image interpretation. In this paper, we analyze their impact on the acquired data, compare different methods to record atomic-resolution surface force fields, and determine the approaches that suffer the least from the associated artifacts. The related discussion underscores the idea that since force fields recorded by using NC-AFM always reflect the properties of both the sample and the probe tip, efforts to reduce unwanted effects of the tip on recorded data are indispensable for the extraction of detailed information about the atomic-scale properties of the surface.
Method of selecting vibro-isolation properties of vibration reduction systems
Maciejewski, Igor; Krzyzynski, Tomasz [Koszalin University of Technology, Koszalin (Poland)
2016-04-15
An effective optimization method is proposed in this paper to determine the basic characteristics of non-linear visco-elastic elements used in passive vibration reduction systems. The developed method is examined by performing experimental investigations on the exemplary vibration reduction system, i.e., the passive seat suspension. The shaping of vibro-isolation properties is analyzed for different spectral classes of excitation signals.
Methods and devices for fabricating three-dimensional nanoscale structures
Rogers, John A.; Jeon, Seokwoo; Park, Jangung
2010-04-27
The present invention provides methods and devices for fabricating 3D structures and patterns of 3D structures on substrate surfaces, including symmetrical and asymmetrical patterns of 3D structures. Methods of the present invention provide a means of fabricating 3D structures having accurately selected physical dimensions, including lateral and vertical dimensions ranging from tens of nanometers to thousands of nanometers. In one aspect, methods are provided using a mask element comprising a conformable, elastomeric phase mask capable of establishing conformal contact with a radiation-sensitive material undergoing photoprocessing. In another aspect, the temporal and/or spatial coherence of the electromagnetic radiation used for photoprocessing is selected to fabricate complex structures having nanoscale features that do not extend entirely through the thickness of the fabricated structure.
Frequency-domain generalized singular perturbation method for relative error model order reduction
Hamid Reza SHAKER
2009-01-01
A new mixed method for relative error model order reduction is proposed. In the proposed method, the frequency-domain balanced stochastic truncation method is improved by applying the generalized singular perturbation method to the frequency-domain balanced system in the reduction procedure. The frequency-domain balanced stochastic truncation method, which was proposed in [15] and [17] by the author, is based on two recently developed methods, namely frequency-domain balanced truncation within a desired frequency bound and inner-outer factorization techniques. The method proposed in this paper is a carry-over of frequency-domain balanced stochastic truncation and is of interest for practical model order reduction because in this context it keeps the accuracy of the approximation as high as possible without sacrificing computational efficiency or important system properties. It is shown that some important properties of the frequency-domain stochastic balanced reduction technique extend to the proposed reduction method through the concept and properties of reciprocal systems. Numerical results show the accuracy, simplicity and flexibility of the method.
NOISE REDUCTION SCHEDULING METHOD IN A SHOP FLOOR AND ITS CASE STUDY
Liu Fei; Cao Huajun; Zhang Hua; Yuan Chuanping
2003-01-01
Noise reduction in a shop floor is an important part of green manufacturing. In a shop floor, machine tools are the main noise sources. Investigation revealed that shop floor noise can be reduced appreciably by optimizing the scheduling of work pieces to machine tools. Based on this discovery, a new method of noise reduction is proposed. A noise reduction scheduling model for a shop floor is established, and the application of the model is also discussed. A case study shows that the method and model are practical.
Xu, Congcong; Su, Yan; Liu, Dajun; He, Xingquan
2015-10-14
Here, a novel N,B-doped graphene aerogel, abbreviated as N,B-GA, was obtained via a two-step approach and served as a metal-free catalyst for the oxygen reduction reaction (ORR). This two-step method involved a hydrothermal reaction and a pyrolysis procedure, guaranteeing the efficient insertion of the heteroatoms. The resulting three-dimensional (3D) N,B-GA obtained at pyrolysis temperature of 1000 °C exhibited outstanding catalytic activity for the oxygen reduction reaction (ORR), comparable to that of Pt/C. In addition, the catalytic activity of this 3D N,B-GA was obviously better than that of the nitrogen-doped graphene aerogel (N-GA) and boron-doped graphene aerogel (B-GA) in terms of the onset potential, half-wave potential and diffusion limiting current density. The superior catalytic reactivity arises from the synergistic coupling of the B and N dopants within the graphene domains.
Two-Dimensional Rectangular Stock Cutting Problem and Solution Methods
Zhao Hui; Yu Liang; Ning Tao; Xi Ping
2001-01-01
Optimal layout of rectangular stock cutting is still in great demand from industry for diversified applications. This paper introduces four basic solution methods to the problem: linear programming, dynamic programming, tree search and heuristic approach. A prototype of application software is developed to verify the pros and cons of various approaches.
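The dynamic programming approach listed above can be sketched for the guillotine-cut variant of rectangular stock cutting. This is a hedged illustration: the piece set is invented, and the assumptions that pieces may be reused and may not be rotated are choices made for the example, not claims about the paper's software:

```python
from functools import lru_cache

def max_cutting_value(W, H, pieces):
    """Best total value obtainable from a W x H sheet using guillotine cuts.
    pieces: list of (w, h, value); pieces reusable, no rotation (illustrative assumptions)."""
    @lru_cache(maxsize=None)
    def best(w, h):
        # value of placing a single piece that fits this region, if any
        v = max([pv for pw, ph, pv in pieces if pw <= w and ph <= h], default=0)
        # try every vertical and horizontal guillotine cut of the region
        for x in range(1, w // 2 + 1):
            v = max(v, best(x, h) + best(w - x, h))
        for y in range(1, h // 2 + 1):
            v = max(v, best(w, y) + best(w, h - y))
        return v
    return best(W, H)

# toy instance: 4x4 sheet, two rectangle types
print(max_cutting_value(4, 4, [(2, 2, 3), (4, 1, 2)]))  # 12: four 2x2 pieces
```

The recursion enumerates only edge-to-edge (guillotine) cuts, which keeps the state space to distinct (w, h) pairs; non-guillotine layouts need the tree search or heuristic approaches the paper also surveys.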
Kernel Principal Component Analysis for dimensionality reduction in fMRI-based diagnosis of ADHD
Gagan S Sidhu
2012-11-01
Full Text Available This article explores various preprocessing tools that select/create features to help a learner produce a classifier that can use fMRI data to effectively discriminate Attention-Deficit Hyperactivity Disorder (ADHD) patients from healthy controls. We consider four different learning tasks: predicting either two (ADHD vs control) or three classes (ADHD-1 vs ADHD-3 vs control), each using either the imaging data only or the phenotypic and imaging data together. After averaging, BOLD-signal normalization, and masking of the fMRI images, we considered applying the Fast Fourier Transform (FFT), possibly followed by some Principal Component Analysis (PCA) variant (over time: PCA-t; over space and time: PCA-st) or the kernelized variant, kPCA-st, to produce inputs to a learner, to determine which learned classifier performs best, or at least better than the baseline of 64.2%, which is the proportion of the majority class (here, controls). In the two-class setting, PCA-t and PCA-st did not perform statistically better than baseline, whereas FFT and kPCA-st did (FFT, 68.4%; kPCA-st, 70.3%); when combined with the phenotypic data, which by itself produces 72.9% accuracy, all methods performed statistically better than the baseline, but none did better than using the phenotypic data alone. In the three-class setting, neither the PCA variants nor the phenotypic data classifiers performed statistically better than the baseline. We next used the FFT output as input to the PCA variants. In the two-class setting, the PCA variants performed statistically better than the baseline using either the FFTed waveforms only (FFT+PCA-t, 69.6%; FFT+PCA-st, 69.3%; FFT+kPCA-st, 68.7%) or combining them with the phenotypic data (FFT+PCA-t, 70.6%; FFT+PCA-st, 70.6%; FFT+kPCA-st, 76%). In both settings, combining FFT+kPCA-st's features with the phenotypic data was better than using only the phenotypic data, with the result in the two-class setting being statistically better.
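The FFT-then-PCA feature construction described above can be sketched roughly as follows. This is an illustrative pipeline on random data with invented array shapes and names; it is not the study's actual preprocessing code:

```python
import numpy as np

def fft_pca_features(voxels, k):
    """voxels: n_subjects x n_voxels x n_timepoints array of BOLD time series.
    Returns an n_subjects x k feature matrix: FFT amplitude spectra followed by PCA."""
    n, v, t = voxels.shape
    # amplitude spectrum of each voxel time series (phase discarded)
    spectra = np.abs(np.fft.rfft(voxels, axis=2))
    X = spectra.reshape(n, -1)             # flatten voxel x frequency into one vector
    Xc = X - X.mean(axis=0)                # center across subjects
    # PCA via SVD of the centered matrix; rows of Vt are principal directions
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                   # project onto the top-k components

# toy data: 8 "subjects", 10 voxels, 64 time points
rng = np.random.default_rng(2)
data = rng.normal(size=(8, 10, 64))
feats = fft_pca_features(data, 3)
print(feats.shape)  # (8, 3)
```

The k-dimensional features would then be handed to whatever learner is being evaluated; the kernelized variant (kPCA-st) replaces the SVD step with an eigendecomposition of a kernel matrix.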
Discontinuous Galerkin Method for Total Variation Minimization on one-dimensional Inpainting Problem
Wang, Xijian
2011-01-01
This paper is concerned with the numerical minimization of energy functionals in $BV(\Omega)$ (the space of bounded variation functions) involving total variation, for the gray-scale one-dimensional inpainting problem. Applications are shown using the finite element method and the discontinuous Galerkin method for total variation minimization. We include numerical examples which show the different recovered images produced by these two methods.
Fusong Yuan
Full Text Available To develop a real-time recording system based on computer binocular vision and two-dimensional image feature extraction to accurately record mandibular movement in three dimensions. A computer-based binocular vision device with two digital cameras was used in conjunction with a fixed head retention bracket to track occlusal movement. Software was developed for extracting target spatial coordinates in real time based on two-dimensional image feature recognition. A plaster model of a subject's upper and lower dentition was made using conventional methods. A mandibular occlusal splint was made on the plaster model, and then the occlusal surface was removed. Temporary denture base resin was used to make a 3-cm handle extending outside the mouth, connecting the anterior labial surface of the occlusal splint with a detection target with intersecting lines designed for spatial coordinate extraction. The subject's head was firmly fixed in place, and the occlusal splint was fully seated on the mandibular dentition. The subject was then asked to make various mouth movements while the mandibular movement target locus point set was recorded. Comparisons between the coordinate values and the actual values of the 30 intersections on the detection target were then analyzed using paired t-tests. The three-dimensional trajectory curve shapes of the mandibular movements were consistent with the respective subject movements. Mean XYZ coordinate values and paired t-test results were as follows: X axis: -0.0037 ± 0.02953, P = 0.502; Y axis: 0.0037 ± 0.05242, P = 0.704; and Z axis: 0.0007 ± 0.06040, P = 0.952. The t-test results showed that the coordinate values of the 30 cross points were not statistically significantly different from the actual values (P > 0.05). Use of a real-time recording system of three-dimensional mandibular movement based on computer binocular vision and two-dimensional image feature recognition technology produced a recording accuracy of approximately ± 0.1 mm, and is
A coupled Eulerian/Lagrangian method for the solution of three-dimensional vortical flows
Felici, Helene Marie
1992-06-01
A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of three-dimensional rotational flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method using particle markers is added to the Eulerian time-marching procedure and provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state vector along the particle trajectories. The Lagrangian correction technique does not require any a priori information on the structure or position of the vortical regions. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers, used as 'accuracy boosters,' take advantage of the accurate convection description of the Lagrangian solution and enhance the vorticity and entropy capturing capabilities of standard Eulerian finite-volume methods. The combined solution procedure is tested in several applications. The convection of a Lamb vortex in a straight channel is used as an unsteady compressible flow preservation test case. The other test cases concern steady incompressible flow calculations and include the preservation of a turbulent inlet velocity profile, the swirling flow in a pipe, and the constant stagnation pressure flow and secondary flow calculations in bends. The last application deals with the external flow past a wing, with emphasis on the trailing vortex solution. The improvement due to the addition of the Lagrangian correction technique is measured by comparison with analytical solutions when available or with Eulerian solutions on finer grids. The use of the combined Eulerian/Lagrangian scheme results in substantially lower grid resolution requirements than the standard Eulerian scheme for a given solution accuracy.
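The Lagrangian marker-tracking half of such a scheme can be sketched in isolation. This is a minimal illustration that advects markers through a prescribed steady velocity field with a midpoint (RK2) step; in the actual method the velocity would come from the Eulerian solver, and the markers would feed a correction back:

```python
import numpy as np

def advect_markers(positions, velocity, dt, n_steps):
    """Advect Lagrangian marker particles through a steady velocity field.
    velocity: callable mapping an (n, 2) position array to an (n, 2) velocity array."""
    x = positions.astype(float).copy()
    for _ in range(n_steps):
        k1 = velocity(x)
        k2 = velocity(x + 0.5 * dt * k1)   # midpoint (RK2) evaluation
        x += dt * k2
    return x

# solid-body rotation about the origin: a marker should stay on its circle,
# whereas a diffusive Eulerian scheme would smear the vortex it samples
omega = 1.0
vel = lambda p: omega * np.stack([-p[:, 1], p[:, 0]], axis=1)
start = np.array([[1.0, 0.0]])
end = advect_markers(start, vel, dt=0.01, n_steps=628)  # roughly one revolution
print(np.hypot(end[0, 0], end[0, 1]))  # radius stays very close to 1
```

The near-perfect radius preservation over a full revolution is the property the markers contribute: Lagrangian transport carries convected quantities without the grid-induced numerical diffusion of the Eulerian update.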
Lorquet, J. C.
2017-04-01
energies, these characteristics persist, but to a lesser degree. Recrossings of the dividing surface then become much more frequent and the phase space volumes of initial conditions that generate recrossing-free trajectories decrease. Altogether, one ends up with an additional illustration of the concept of a reactive cylinder (or conduit) in phase space that reactive trajectories must follow. Reactivity is associated with dynamical regularity and dimensionality reduction, whatever the shape of the potential energy surface, no matter how strong its anharmonicity, and whatever the curvature of its reaction path. Both simplifying features persist during the entire reactive process, up to complete separation of fragments. The ergodicity assumption commonly made in statistical theories is inappropriate for reactive trajectories.
VARIATION METHOD FOR ACOUSTIC WAVE IMAGING OF TWO DIMENSIONAL TARGETS
冯文杰; 邹振祝
2003-01-01
A new approach to acoustic wave imaging was investigated. Using Green function theory, a system of integral equations linking the wave number perturbation function with the wave field was first deduced. By taking the variation of these integral equations, an inversion equation reflecting the relation between a small variation of the wave number perturbation function and that of the scattered field was then obtained. Finally, the perturbation functions of some identical targets were reconstructed, and some properties of the novel method, including convergence speed, inversion accuracy, and the abilities to resist random noise and identify complex targets, were discussed. Results of numerical simulation show that the method based on the variation principle has great theoretical and applicable value for quantitative nondestructive evaluation.
Application of Symmetry Methods to Low-Dimensional Heisenberg Magnets
Irene G. Bostrem
2010-04-01
Taking account of symmetry is very fruitful in studies of quantum spin systems. In the present paper we demonstrate how to use the spin SU(2) and point symmetries to optimize the theoretical condensed matter tools: exact diagonalization, the renormalization group approach, and cluster perturbation theory. We apply the methods to the study of Bose-Einstein condensation in dimerized antiferromagnets and to investigations of magnetization processes and the magnetocaloric effect in a quantum ferrimagnetic chain.
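The exact-diagonalization workhorse mentioned here is easy to illustrate. A minimal sketch (not from the paper, and without the symmetry optimizations it discusses) builds the spin-1/2 Heisenberg Hamiltonian for a 4-site periodic chain from Kronecker products and recovers the known ground-state energy E0 = -2J:

```python
import numpy as np

# Spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2.0
sy = np.array([[0, -1j], [1j, 0]]) / 2.0
sz = np.array([[1, 0], [0, -1]]) / 2.0
I2 = np.eye(2)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    mats = [op if j == i else I2 for j in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg_ring(n, J=1.0):
    """H = J * sum_i S_i . S_{i+1} with periodic boundary conditions."""
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(n):
        j = (i + 1) % n
        for op in (sx, sy, sz):
            H += J * site_op(op, i, n) @ site_op(op, j, n)
    return H

E = np.linalg.eigvalsh(heisenberg_ring(4))
# For the 4-site ring, H = J*(S1+S3).(S2+S4), so the ground-state
# energy is exactly -2J; the dense matrix here is only 16 x 16
```

Symmetry (total Sz, translation, point group) block-diagonalizes H, which is exactly what makes larger chains tractable.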
Solution of (3+1)-Dimensional Nonlinear Cubic Schrodinger Equation by Differential Transform Method
Hassan A. Zedan
2012-01-01
A four-dimensional differential transform method has been introduced and fundamental theorems have been defined for the first time. Moreover, as an application of the four-dimensional differential transform, exact solutions of a nonlinear system of partial differential equations have been investigated. The results of the present method agree very well with the analytical solution of the system. The differential transform method can easily be applied to linear or nonlinear problems and reduces the size of the computational work. With this method, exact solutions may be obtained without any cumbersome work, and it is a useful tool for analytical and numerical solutions.
刘保国; 殷学纲; 蹇开林; 吴永
2003-01-01
A general method based on the Riccati transfer matrix is presented to calculate the 2nd-order perturbations of eigendata for one-dimensional structural systems with parameter uncertainties. The method is applicable to both real and complex eigendata of any one-dimensional structural system. The formulas for calculating the sensitivity derivatives of eigendata based on this method are also presented. The method is applied to the perturbation analysis of the eigendata of a rotor with gyroscopic moment, and the differences between the perturbation results and the accurate calculation results are small.
Tsuneo Yamashiro
To assess the advantages of Adaptive Iterative Dose Reduction using Three Dimensional Processing (AIDR3D) for image quality improvement and dose reduction in chest computed tomography (CT). Institutional Review Boards approved this study and informed consent was obtained. Eighty-eight subjects underwent chest CT at five institutions using identical scanners and protocols. During a single visit, each subject was scanned using different tube currents: 240, 120, and 60 mA. Scan data were converted to images using AIDR3D and a conventional reconstruction mode (without AIDR3D). Using a 5-point scale from 1 (non-diagnostic) to 5 (excellent), three blinded observers independently evaluated image quality for three lung zones, four patterns of lung disease (nodule/mass, emphysema, bronchiolitis, and diffuse lung disease), and three mediastinal measurements (small structure visibility, streak artifacts, and shoulder artifacts). Differences in these scores were assessed by Scheffe's test. At each tube current, scans using AIDR3D had higher scores than those without AIDR3D, which were significant for lung zones (p<0.0001) and all mediastinal measurements (p<0.01). For lung diseases, significant improvements with AIDR3D were frequently observed at 120 and 60 mA. Scans with AIDR3D at 120 mA had significantly higher scores than those without AIDR3D at 240 mA for lung zones and mediastinal streak artifacts (p<0.0001), and slightly higher or equal scores for all other measurements. Scans with AIDR3D at 60 mA were also judged superior or equivalent to those without AIDR3D at 120 mA. For chest CT, AIDR3D provides better image quality and can reduce radiation exposure by 50%.
A new three-dimensional topology optimization method based on moving morphable components (MMCs)
Zhang, Weisheng; Li, Dong; Yuan, Jie; Song, Junfu; Guo, Xu
2017-04-01
In the present paper, a new method for solving three-dimensional topology optimization problems is proposed. This method is constructed under the so-called moving morphable components based solution framework. The novel aspect of the proposed method is that a set of structural components is introduced to describe the topology of a three-dimensional structure, and the optimal structural topology is found by optimizing the layout of the components explicitly. The standard finite element method with ersatz material is adopted for structural response analysis, and the shape sensitivity analysis only needs to be carried out along the structural boundary. Compared to existing methods, the description of structural topology is totally independent of the finite element/finite difference resolution in the proposed solution framework, and therefore the number of design variables can be reduced substantially. Some widely investigated benchmark examples in three-dimensional topology optimization design are presented to demonstrate the effectiveness of the proposed approach.
Real eigenvalue analysis in NASTRAN by the tridiagonal reduction (FEER) method
Newman, M.; Flanagen, P. F.; Rogers, J. L., Jr.
1976-01-01
Implementation of the tridiagonal reduction method for real eigenvalue extraction in structural vibration and buckling problems is described. The basic concepts underlying the method are summarized, and special features, such as the computation of error bounds and default modes of operation, are discussed. In addition, the new user information, error messages, and optional diagnostic output relating to the tridiagonal reduction method are presented. Some numerical results and initial experiences relating to usage in the NASTRAN environment are provided, including comparisons with other existing NASTRAN eigenvalue methods.
Dimensional analysis and self-similarity methods for engineers and scientists
Zohuri, Bahman
2015-01-01
This ground-breaking reference provides an overview of key concepts in dimensional analysis, and then pushes well beyond traditional applications in fluid mechanics to demonstrate how powerful this tool can be in solving complex problems across many diverse fields. Of particular interest is the book's coverage of dimensional analysis and self-similarity methods in nuclear and energy engineering. Numerous practical examples of dimensional problems are presented throughout, allowing readers to link the book's theoretical explanations and step-by-step mathematical solutions to practical impleme...
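The core mechanics of dimensional analysis can be sketched in a few lines: the exponent vectors of dimensionless products lie in the null space of the dimension matrix (the Buckingham pi theorem). An illustrative example (not from the book) recovering the Reynolds number from the variables density, velocity, length, and viscosity:

```python
import numpy as np

# Rows: M, L, T exponents; columns: rho, v, d, mu
# rho = M L^-3, v = L T^-1, d = L, mu = M L^-1 T^-1
D = np.array([
    [1,  0, 0,  1],   # mass
    [-3, 1, 1, -1],   # length
    [0, -1, 0, -1],   # time
], dtype=float)

# The null space of D contains the exponent vectors of all
# dimensionless products of the chosen variables
_, s, Vt = np.linalg.svd(D)
null = Vt[np.sum(s > 1e-10):]      # rows spanning the null space
pi = null[0] / null[0][0]          # normalize the rho exponent to 1
# pi == [1, 1, 1, -1]  ->  rho * v * d / mu, the Reynolds number
```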
Generalized Kudryashov method for solving some (3+1)-dimensional nonlinear evolution equations
Md. Shafiqul Islam
2015-06-01
In this work, we have applied the generalized Kudryashov method to obtain exact travelling wave solutions for the (3+1)-dimensional Jimbo-Miwa (JM) equation, the (3+1)-dimensional Kadomtsev-Petviashvili (KP) equation and the (3+1)-dimensional Zakharov-Kuznetsov (ZK) equation. The attained solutions show distinct physical configurations. The constraints that guarantee the existence of specific solutions are investigated. These solutions may be useful and desirable for elucidating specific nonlinear physical phenomena in genuinely nonlinear dynamical systems.
Computational method for the quantum Hamilton-Jacobi equation: one-dimensional scattering problems.
Chou, Chia-Chun; Wyatt, Robert E
2006-12-01
One-dimensional scattering problems are investigated in the framework of the quantum Hamilton-Jacobi formalism. First, the pole structure of the quantum momentum function for scattering wave functions is analyzed. The significant differences of the pole structure of this function between scattering wave functions and bound state wave functions are pointed out. An accurate computational method for the quantum Hamilton-Jacobi equation for general one-dimensional scattering problems is presented to obtain the scattering wave function and the reflection and transmission coefficients. The computational approach is demonstrated by analysis of scattering from a one-dimensional potential barrier. We not only present an alternative approach to the numerical solution of the wave function and the reflection and transmission coefficients but also provide a computational aspect within the quantum Hamilton-Jacobi formalism. The method proposed here should be useful for general one-dimensional scattering problems.
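For reference, the reflection and transmission coefficients of a rectangular barrier can also be obtained with the standard transfer-matrix method, a conventional alternative to the quantum Hamilton-Jacobi scheme of the paper. A self-contained sketch in units hbar = 2m = 1 (the barrier parameters below are arbitrary illustration values):

```python
import cmath

def interface(x, k1, k2):
    """Transfer matrix relating plane-wave amplitudes on either side of a
    potential step at x (k = sqrt(E - V) in each region)."""
    p, m = 1 + k2 / k1, 1 - k2 / k1
    return [
        [0.5 * p * cmath.exp(1j * (k2 - k1) * x), 0.5 * m * cmath.exp(-1j * (k2 + k1) * x)],
        [0.5 * m * cmath.exp(1j * (k2 + k1) * x), 0.5 * p * cmath.exp(-1j * (k2 - k1) * x)],
    ]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def barrier_T_R(E, V0, a):
    """Transmission/reflection for a rectangular barrier of height V0, width a."""
    k = cmath.sqrt(E)
    q = cmath.sqrt(E - V0)          # imaginary under the barrier when E < V0
    M = matmul(interface(0.0, k, q), interface(a, q, k))
    t = 1 / M[0][0]                 # transmitted amplitude (incident amplitude 1)
    r = M[1][0] / M[0][0]           # reflected amplitude
    return abs(t) ** 2, abs(r) ** 2

T, R = barrier_T_R(E=1.0, V0=2.0, a=1.0)
# Flux conservation requires T + R == 1 (same wavenumber on both sides)
```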
FORMULATIONS OF THE THREE-DIMENSIONAL DISCONTINUOUS DEFORMATION ANALYSIS METHOD
LIU Jun; KONG Xianjing; LIN Gao
2004-01-01
This paper extends the original 2D discontinuous deformation analysis (DDA) method proposed by Shi to 3D cases and presents the formulations of the 3D DDA. The formulations maintain the characteristics of the original 2D DDA approach. Contacts between the blocks are detected by using the Common-Plane (C-P) approach, and non-smooth contacts, such as those of vertex-to-vertex, vertex-to-edge and edge-to-edge types, can be handled easily based on the C-P method. The matrices of the equilibrium equations are given in detail for programming purposes. C program codes for the 3D DDA are developed. The ability and accuracy of the formulations and the program are verified by the analytical solutions of several dynamic examples. The robustness and versatility of the algorithms presented in this paper are demonstrated with the aid of an example of scattering of densely packed cubes. Finally, implications and future extensions are discussed.
Two-Dimensional Correlation Method for Polymer Analysis
Herman, Matthew Joseph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-06-08
Since its introduction by Noda in 1986, two-dimensional correlation spectroscopy has been offering polymer scientists an opportunity to look more deeply into collected spectroscopic data. When the spectra are recorded in response to an external perturbation, it is possible to correlate the spectra and expand the information over a separate spectral axis, allowing for enhancement of spectral resolution, the ability to determine synchronous change, and a unique way to organize observed changes in the spectra into sequential order following a set of three simple rules. By organizing the 2D spectra into synchronous and asynchronous change plots, it is possible to correlate changes between spectral regions and develop their temporal relationships to one another. With the introduction of moving-window correlation spectroscopy by Thomas and Richardson in 2000, a method of binning and processing data, it became possible to directly correlate relationships generated in the spectra from the change in the perturbation variable. This method takes advantage of the added resolution of two-dimensional spectroscopy and has been applied to study very weak transitions found in polymer materials. Applying both of these techniques, we are beginning to develop an understanding of how polymers decay under radiolytic aging, to develop a stronger understanding of changes in mechanical properties and the service capabilities of materials.
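The synchronous and asynchronous plots follow directly from Noda's formulas: with mean-centered dynamic spectra stacked in a matrix, the synchronous spectrum is a covariance and the asynchronous spectrum inserts the Hilbert-Noda transformation matrix. A minimal numpy sketch (illustrative, not the report's code; the two quadrature bands are invented test data):

```python
import numpy as np

def correlation_2d(Y):
    """Noda's generalized 2D correlation from dynamic spectra.
    Y: (m, n) array, m perturbation increments x n spectral channels,
    with the mean spectrum already subtracted."""
    m = Y.shape[0]
    # Hilbert-Noda transformation matrix: 0 on the diagonal, 1/(pi*(k-j)) off it
    j, k = np.indices((m, m))
    N = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j + (j == k))))
    sync = Y.T @ Y / (m - 1)            # in-phase (synchronous) correlations
    async_ = Y.T @ (N @ Y) / (m - 1)    # out-of-phase (asynchronous) correlations
    return sync, async_

# Two bands responding in quadrature under the perturbation variable t
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
Y = np.column_stack([np.sin(t), np.cos(t)])
sync, async_ = correlation_2d(Y)
# The diagonal of sync holds the autopower spectra; the sign pattern of
# async_ is what Noda's sequential-order rules are read from
```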
Multi-Symplectic Splitting Method for Two-Dimensional Nonlinear Schrödinger Equation
陈亚铭; 朱华君; 宋松和
2011-01-01
Using the idea of splitting numerical methods and multi-symplectic methods, we propose a multi-symplectic splitting (MSS) method to solve the two-dimensional nonlinear Schrodinger equation (2D-NLSE) in this paper. It is further shown that the method constructed in this way preserves the global symplecticity exactly. Numerical experiments for the plane wave solution and singular solution of the 2D-NLSE show the accuracy and effectiveness of the proposed method.
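A splitting integrator of this general kind, alternating an exact nonlinear phase rotation with an exact linear step in Fourier space, can be sketched for i u_t + Lap(u) + |u|^2 u = 0 on a periodic box (a common sign convention; this is the classic split-step Fourier scheme, not the paper's multi-symplectic discretization):

```python
import numpy as np

def split_step_nlse_2d(u0, L, dt, steps):
    """Strang split-step Fourier integrator for i u_t + Lap(u) + |u|^2 u = 0
    on a periodic [0, L)^2 grid."""
    n = u0.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    lin = np.exp(-1j * (kx**2 + ky**2) * dt)       # full linear step in Fourier space
    u = u0.copy()
    for _ in range(steps):
        u *= np.exp(0.5j * np.abs(u) ** 2 * dt)    # half nonlinear step (|u| invariant)
        u = np.fft.ifft2(lin * np.fft.fft2(u))     # full linear step
        u *= np.exp(0.5j * np.abs(u) ** 2 * dt)    # half nonlinear step
    return u

n, L = 64, 20.0
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u0 = np.exp(-((X - L / 2) ** 2 + (Y - L / 2) ** 2)).astype(complex)
u = split_step_nlse_2d(u0, L, dt=0.01, steps=100)
# Both substeps are unitary, so the discrete L2 norm is conserved to roundoff
```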
Vacuum solutions of five dimensional Einstein equations generated by inverse scattering method
Tomizawa, S; Yasui, Y; Morisawa, Yoshiyuki; Tomizawa, Shinya; Yasui, Yukinori
2006-01-01
We study stationary and axially symmetric two-soliton solutions of the five dimensional vacuum Einstein equations using the inverse scattering method developed by Belinski and Zakharov. In generating the solutions, we use five dimensional Minkowski spacetime as a seed. It is shown that if we restrict ourselves to the case of one angular momentum component, the generated solution coincides with the black ring solution with a rotating two-sphere found recently by Mishima and Iguchi.
Investigation of Three-Dimensional Flow Structure in a Transonic Diffuser by the LIF Method
小野, 大輔; 半田, 太郎; 青木, 俊之; 益田, 光治
2007-01-01
The three-dimensional flow structure induced by normal shock-wave/boundary-layer interaction in a transonic diffuser is investigated by a laser-induced fluorescence (LIF) method. This diagnostic system uses an argon-ion laser as a light source, and the target gas is dry nitrogen seeded with iodine as a fluorescent material. The Mach-number distributions in the diffuser are obtained from the measured fluorescence intensity, and the three-dimensional shape of the boundary layers is obtained immedi...
Bergmann, Tommy; Heinke, Florian; Labudde, Dirk
2017-09-01
The age determination of blood traces provides important hints for the chronological assessment of criminal events and their reconstruction. Current methods are often expensive, involve significant experimental complexity and often fail to perform when applied to aged blood samples taken from different substrates. In this work an absorption-spectroscopy-based blood stain age estimation method is presented, which utilizes 400-640 nm absorption spectra in computation. Spectral data from 72 differently aged pig blood stains (2 h to three weeks) dried on three different substrate surfaces (cotton, polyester and glass) were acquired, and the turnover-time correlations were utilized to develop a straightforward age estimation scheme. More precisely, data processing includes dimensionality reduction, upon which classic k-nearest neighbor classifiers are employed. This strategy shows good agreement between observed and predicted blood stain age (r>0.9) in cross-validation. The presented estimation strategy utilizes spectral data from dissolved blood samples to bypass spectral artifacts which are well known to interfere with other spectral methods such as reflection spectroscopy. Results indicate that age estimates can be drawn from such absorbance spectroscopic data independent of the substrate the blood dried on. Since the data in this study were acquired under laboratory conditions, future work has to consider perturbing environmental conditions in order to assess real-life applicability. Copyright © 2017 Elsevier B.V. All rights reserved.
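The processing pipeline described, dimensionality reduction followed by k-nearest-neighbor classification, can be sketched on synthetic spectra. The band positions, noise level and class layout below are invented for illustration and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_fit_transform(X, n_components):
    """Project spectra onto their leading principal components (via SVD)."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (X - mean) @ Vt[:n_components].T, mean, Vt[:n_components]

def knn_predict(Xtr, ytr, Xte, k=3):
    """Classic k-nearest-neighbor classification by majority vote."""
    preds = []
    for x in Xte:
        idx = np.argsort(np.linalg.norm(Xtr - x, axis=1))[:k]
        preds.append(np.bincount(ytr[idx]).argmax())
    return np.array(preds)

# Synthetic stand-in for absorption spectra of three age classes:
# each class is a Gaussian band whose center channel drifts with age
channels = np.arange(120)
def spectrum(centre):
    return np.exp(-0.5 * ((channels - centre) / 8.0) ** 2) + 0.02 * rng.standard_normal(120)

X = np.array([spectrum(c) for c in (40, 60, 80) for _ in range(20)])
y = np.repeat([0, 1, 2], 20)
Z, mean, comps = pca_fit_transform(X, n_components=3)
pred = knn_predict(Z[::2], y[::2], Z[1::2], k=3)
accuracy = (pred == y[1::2]).mean()
```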
Spectral (Finite) Volume Method for One Dimensional Euler Equations
Wang, Z. J.; Liu, Yen; Kwak, Dochan (Technical Monitor)
2002-01-01
Consider a mesh of unstructured triangular cells. Each cell is called a Spectral Volume (SV), denoted by S_i, which is further partitioned into subcells named Control Volumes (CVs), indicated by C_{i,j}. To represent the solution as a polynomial of degree m in two dimensions (2D) we need N = (m+1)(m+2)/2 pieces of independent information, or degrees of freedom (DOFs). The DOFs in a SV method are the volume-averaged mean variables at the N CVs. For example, to build a quadratic reconstruction in 2D, we need at least (2+1)(2+2)/2 = 6 DOFs. There are numerous ways of partitioning a SV, and not every partition is admissible in the sense that the partition may not be capable of producing a degree m polynomial. Once N mean solutions in the CVs of a SV are given, a unique polynomial reconstruction can be obtained.
Method and apparatus for two-dimensional spectroscopy
DeCamp, Matthew F.; Tokmakoff, Andrei
2010-10-12
Preferred embodiments of the invention provide for methods and systems of 2D spectroscopy using ultrafast, first light and second light beams and a CCD array detector. A cylindrically-focused second light beam interrogates a target that is optically interactive with a frequency-dispersed excitation (first light) pulse, whereupon the second light beam is frequency-dispersed at right angle orientation to its line of focus, so that the horizontal dimension encodes the spatial location of the second light pulse and the first light frequency, while the vertical dimension encodes the second light frequency. Differential spectra of the first and second light pulses result in a 2D frequency-frequency surface equivalent to double-resonance spectroscopy. Because the first light frequency is spatially encoded in the sample, an entire surface can be acquired in a single interaction of the first and second light pulses.
On High Dimensional Searching Spaces and Learning Methods
Yazdani, Hossein; Ortiz-Arroyo, Daniel; Choros, Kazimierz
2017-01-01
In data science, there are important parameters that affect the accuracy of the algorithms used. Some of these parameters are: the type of data objects, the membership assignments, and distance or similarity functions. In this chapter we describe different data types, membership functions, and similarity functions and discuss the pros and cons of using each of them. Conventional similarity functions evaluate objects in the vector space. Contrarily, Weighted Feature Distance (WFD) functions compare data objects in both feature and vector spaces, preventing the system from being affected by some dominant features. Traditional membership functions assign membership values to data objects but impose some restrictions. Bounded Fuzzy Possibilistic Method (BFPM) makes it possible for data objects to participate fully or partially in several clusters or even in all clusters. BFPM introduces intervals...
Kawata, Y.; Niki, N.; Ohmatsu, H.; Aokage, K.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.
2015-03-01
The advantages of high-resolution CT scanners have allowed improved detection of lung cancers. Recently released positive results from the National Lung Screening Trial (NLST) in the US show that CT screening does in fact have a positive impact on the reduction of lung cancer related mortality. While this study does show the efficacy of CT based screening, physicians often face the problems of deciding appropriate management strategies for maximizing patient survival and for preserving lung function. Several key manifold-learning approaches efficiently reveal intrinsic low-dimensional structures latent in high-dimensional data spaces. This study was performed to investigate whether dimensionality reduction can identify embedded structures in the CT histogram feature space of non-small-cell lung cancer (NSCLC) to improve the performance of predicting the likelihood of RFS for patients with NSCLC.
Numerical comparison of robustness of some reduction methods in rough grids
Hou, Jiangyong
2014-04-09
In this article, we present three nonsymmetric mixed hybrid RT1/2 methods and compare them with some recently developed reduction methods which are suitable for the single-phase Darcy flow problem with fully anisotropic and highly heterogeneous permeability on general quadrilateral grids. The methods reviewed are the multipoint flux approximation (MPFA), the multipoint flux mixed finite element method, the mixed finite element method with broken RT1/2 elements, the MPFA-type mimetic finite difference method, and the symmetric mixed-hybrid finite element method. The numerical experiments of these methods on different distorted meshes are compared, and their differences in flux performance are discussed. © 2014 Wiley Periodicals, Inc.
Sato, T.; Matsuoka, T. [Japan Petroleum Exploration Corp., Tokyo (Japan); Saeki, T. [Japan National Oil Corp., Tokyo (Japan). Technology Research Center
1997-05-27
Discussed in this report is wavefield simulation in the 3-dimensional seismic survey. With the objects of exploration growing deeper and more complicated in structure, survey methods are now turning 3-dimensional. There are several modelling methods for numerical calculation of 3-dimensional wavefields, such as the difference method, the pseudospectral method, and the like, all of which demand an exorbitantly large memory and long calculation time, and are costly. Such methods have of late become feasible, however, thanks to the advent of the parallel computer. Compared with the difference method, the pseudospectral method requires a smaller computer memory and shorter computation time, and is more flexible in accepting models. It outputs the result in full wave just like the difference method, and does not suffer from numerical dispersion of the wavefield. As the computation platform, the parallel computer nCUBE-2S is used. The object domain is divided among the processors, each of which takes care only of its own share, so that the parallel computation as a whole achieves a very high speed. By use of the pseudospectral method, a 3-dimensional simulation is completed within a tolerable computation time. 7 refs., 3 figs., 1 tab.
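The memory and accuracy advantage of the pseudospectral method comes from computing spatial derivatives in Fourier space, which is exact for resolved periodic modes, while a finite-difference stencil of fixed order is not. A small illustration (not from the report) comparing the two on a sine wave:

```python
import numpy as np

def spectral_derivative(f, L):
    """Pseudospectral x-derivative of a periodic sample f on [0, L)."""
    n = f.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

n, L = 64, 2 * np.pi
x = np.arange(n) * L / n
f = np.sin(x)
df_spec = spectral_derivative(f, L)
# 2nd-order central difference on the same periodic grid, for comparison
df_fd = (np.roll(f, -1) - np.roll(f, 1)) / (2 * L / n)
err_spec = np.max(np.abs(df_spec - np.cos(x)))   # near machine precision
err_fd = np.max(np.abs(df_fd - np.cos(x)))       # ~ h^2, orders of magnitude larger
```

This is why, for a given accuracy, the pseudospectral method needs far fewer grid points per wavelength than a difference scheme, and why it is free of grid dispersion for resolved modes.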
High-dimensional Sparse Inverse Covariance Estimation using Greedy Methods
Johnson, Christopher C; Ravikumar, Pradeep
2011-01-01
In this paper we consider the task of estimating the non-zero pattern of the sparse inverse covariance matrix of a zero-mean Gaussian random vector from a set of iid samples. Note that this is also equivalent to recovering the underlying graph structure of a sparse Gaussian Markov Random Field (GMRF). We present two novel greedy approaches to solving this problem. The first estimates the non-zero covariates of the overall inverse covariance matrix using a series of global forward and backward greedy steps. The second estimates the neighborhood of each node in the graph separately, again using greedy forward and backward steps, and combines the intermediate neighborhoods to form an overall estimate. The principal contribution of this paper is a rigorous analysis of the sparsistency, or consistency in recovering the sparsity pattern of the inverse covariance matrix. Surprisingly, we show that both the local and global greedy methods learn the full structure of the model with high probability given just $O(d\\log...
Yang Xiu-Li; Dai Bao-Dong; Zhang Wei-Wei
2012-01-01
Based on the complex variable moving least-square (CVMLS) approximation and a local symmetric weak form, the complex variable meshless local Petrov-Galerkin (CVMLPG) method of solving two-dimensional potential problems is presented in this paper. In the present formulation, the trial function of a two-dimensional problem is formed with a one-dimensional basis function. The number of unknown coefficients in the trial function of the CVMLS approximation is less than that in the trial function of the moving least-square (MLS) approximation. The essential boundary conditions are imposed by the penalty method. The main advantage of this approach over the conventional meshless local Petrov-Galerkin (MLPG) method is its computational efficiency. Several numerical examples are presented to illustrate the implementation and performance of the present CVMLPG method.
Development of a three dimensional circulation model based on fractional step method
Abualtayef, Mazen; Kuroiwa, Masamitsu; Seif, Ahmed Khaled; Matsubara, Yuhei; Aly, Ahmed M.; Sayed, Ahmed A.; Sambe, Alioune Nar
2010-03-01
A numerical model was developed for simulating three-dimensional multilayer hydrodynamics and thermodynamics in domains with irregular bottom topography. The model was designed for examining the interactions between flow and topography. The model was based on the three-dimensional Navier-Stokes equations and was solved using the fractional step method, which combines the finite difference method in the horizontal plane and the finite element method in the vertical plane. The numerical techniques are described, and the model test and application are presented. For the model application to the northern part of the Ariake Sea, the hydrodynamic...
Nakayama, Katsuyuki; Mizushima, Lucas Dias; Murata, Junsuke; Maeda, Takao
2016-06-01
A numerical method is presented to extract the three-dimensional vortical structure of a spiral vortex (wing tip vortex) in a wind turbine from two-dimensional velocity data at several azimuthal angles. This numerical method helps analyze a vortex observed in experiments where the three-dimensional velocity field is difficult to measure. The analysis needs two-dimensional velocity data in parallel planes at different azimuthal angles of a rotating blade, which facilitates the experiment since the angle of the plane does not change. The vortical structure is specified in terms of the invariant flow topology derived from the eigenvalues and eigenvectors of the three-dimensional velocity gradient tensor and the corresponding physical properties. In addition, the analysis enables investigation not only of vortical flow topology but also of important vortical features such as pressure minima and vortex stretching that are derived from the three-dimensional velocity gradient tensor.
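Invariants of the velocity gradient tensor of this sort are cheap to compute once the tensor is assembled. A sketch (illustrative, not the authors' topology classification) of the widely used Q-criterion, evaluated on two canonical gradient tensors:

```python
import numpy as np

def q_criterion(grad_u):
    """Q invariant of a 3x3 velocity gradient tensor:
    Q = 0.5 * (||Omega||^2 - ||S||^2) (Frobenius norms), positive where
    rotation dominates strain -- a common vortex-core indicator."""
    S = 0.5 * (grad_u + grad_u.T)       # strain-rate tensor (symmetric part)
    Omega = 0.5 * (grad_u - grad_u.T)   # rotation-rate tensor (antisymmetric part)
    return 0.5 * (np.sum(Omega**2) - np.sum(S**2))

# Solid-body rotation about z (a pure vortex): Q > 0
rotation = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 0.0]])
# Plane shear: rotation and strain exactly balance, so Q == 0
shear = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
```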
PSO type-reduction method for geometric interval type-2 fuzzy logic systems
ZHAO Xian-zhang; GAO Yi-bo; ZENG Jun-fang; YANG Yi-ping
2008-01-01
In a special case of type-2 fuzzy logic systems (FLS), i.e. geometric interval type-2 fuzzy logic systems (GIT-2FLS), the crisp output is obtained by computing the geometric center of the footprint of uncertainty (FOU) without type-reduction, but the defuzzifying method acts against the corner concepts of type-2 fuzzy sets in some cases. In this paper, a PSO type-reduction method for GIT-2FLS based on the particle swarm optimization (PSO) algorithm is presented. With the PSO type-reduction, the inference principle of a geometric interval FLS operating on the continuous domain is consistent with that of a traditional interval type-2 FLS operating on the discrete domain. With comparative experiments, it is shown that the PSO type-reduction exhibits good performance and is a satisfactory complement to the theory of GIT-2FLS.
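The PSO engine underlying such a type-reducer is itself compact. A minimal global-best PSO sketch (generic, minimizing a toy sphere function rather than the paper's GIT-2FLS objective; all parameter values are conventional defaults, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimization."""
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest, pbest_val = pos.copy(), np.array([f(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive pull to personal best + social pull to global best
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, f(gbest)

best, best_val = pso_minimize(lambda x: float(np.sum(x**2)), dim=2)
```

In the paper's setting, the objective would instead score candidate type-reduced outputs of the GIT-2FLS.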
A New Method for the Reduction of Methemoglobin and Methemoglobin Derivatives
1991-09-03
the protein and lead to modifications of the protein and deterioration of the pigment [8,9]. The reduction with other reducing agents, such as... AND METHODS: Riboflavin mononucleotide, riboflavin, DL-methionine, and other chemicals were purchased from Sigma Chemical Co., St. Louis, MO. FMN... reduced riboflavin be done in the absolute absence of oxygen. Rate Profile of Methemoglobin Reduction: Fig. 2 shows a typical set of data obtained when 0.1
A new method to the(2+1)-dimensional modified KP equation
Anonymous
2011-01-01
By means of the auxiliary ordinary differential equation method, we have obtained many solitary wave solutions, periodic wave solutions and variable separation solutions for the (2+1)-dimensional KP equation. Using a mixed method, many exact solutions have been obtained.
Kruyt, N.P.; Esch, van B.P.M.; Jonker, J.B.
1999-01-01
A numerical method is presented for the computation of unsteady, three-dimensional potential flows in hydraulic pumps and turbines. The superelement method has been extended in order to eliminate slave degrees of freedom not only from the governing Laplace equation, but also from the Kutta condition
CHEN Jiang; HE Hong-Sheng; YANG Kong-Qing
2005-01-01
A generalized F-expansion method is introduced and applied to the (3+1)-dimensional Kadomtsev-Petviashvili (KP) equation. As a result, some new Jacobi elliptic function solutions of the equation are found, from which trigonometric function solutions and solitary wave solutions can be obtained. The method can also be extended to other types of nonlinear evolution equations in mathematical physics.
A description of Lax type integrable dynamical systems via the Marsden-Weinstein reduction method
Prykarpatsky, Yarema A.
2013-09-01
A Lie-algebraic approach to constructing nonlinear Lax type integrable dynamical systems of modern mathematical and theoretical physics, based on the Marsden-Weinstein reduction method on canonically symplectic manifolds with group symmetry, is proposed. Its natural relationship with the well known Adler-Kostant-Souriau-Berezin-Kirillov method and the associated R-matrix approach is analyzed.
New Generalized Transformation Method and Its Application in Higher-Dimensional Soliton Equation
Anonymous
2006-01-01
A new generalized transformation method is presented to find more exact solutions of nonlinear partial differential equations. As an application of the method, we choose the (3+1)-dimensional breaking soliton equation to illustrate the method. As a result, many types of explicit and exact traveling wave solutions, which contain solitary wave solutions, trigonometric function solutions, Jacobian elliptic function solutions, and rational solutions, are obtained. The new method can be extended to other nonlinear partial differential equations in mathematical physics.