He, Jingjing; Zhou, Yibin; Guan, Xuefei; Zhang, Wei; Zhang, Weifang; Liu, Yongming
2016-08-16
Structural health monitoring has been studied by a number of researchers as well as various industries to keep up with the increasing demand for preventive maintenance routines. This work presents a novel method for reconstructing prompt, informed strain/stress responses at the hot spots of structures based on strain measurements at remote locations. The structural responses measured by a usage monitoring system at available locations are decomposed into modal responses using empirical mode decomposition. Transformation equations based on finite element modeling are derived to extrapolate the modal responses from the measured locations to critical locations where direct sensor measurements are not available. Then, two numerical examples (a two-span beam and a 19956-degree-of-freedom simplified airfoil) are used to demonstrate the overall reconstruction method. Finally, the present work investigates the effectiveness and accuracy of the method through a set of experiments conducted on an aluminium alloy cantilever beam of a type commonly used in air vehicles and spacecraft. The experiments collect the vibration strain signals of the beam via optical fiber sensors. Reconstruction results are compared with theoretical solutions, and a detailed error analysis is also provided.
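The modal-transformation step described above can be sketched in a few lines: strains measured at remote sensors are inverted to modal coordinates and re-expanded at the unmeasured hot spot through the FEM strain mode shapes. All names, shapes, and data below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes, n_sensors = 3, 5

Phi_meas = rng.normal(size=(n_sensors, n_modes))  # strain mode shapes at sensor locations
Phi_hot = rng.normal(size=(1, n_modes))           # strain mode shapes at the hot spot

# Transformation matrix: least-squares modal inversion at the sensors,
# followed by re-expansion at the critical location.
T = Phi_hot @ np.linalg.pinv(Phi_meas)

# Simulated modal responses (in the paper these come from empirical mode decomposition).
t = np.linspace(0.0, 1.0, 200)
q = np.vstack([np.sin(2 * np.pi * (k + 1) * t) for k in range(n_modes)])

eps_meas = Phi_meas @ q      # strains at measured locations
eps_hot_true = Phi_hot @ q   # ground truth at the hot spot
eps_hot_rec = T @ eps_meas   # reconstructed hot-spot strain

print(np.allclose(eps_hot_rec, eps_hot_true))  # True: exact when the response lies in the modal subspace
```

With measurement noise or unmodeled modes the reconstruction is only approximate, which is why the paper pairs this step with a detailed error analysis.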
Domain Decomposition Based High Performance Parallel Computing
Raju, Mandhapati P
2009-01-01
The study deals with the parallelization of finite element based Navier-Stokes codes using domain decomposition and state-of-the-art sparse direct solvers. There has been significant improvement in the performance of sparse direct solvers; however, parallel sparse direct solvers are not found to exhibit good scalability. Hence, the parallelization of sparse direct solvers is done using domain decomposition techniques. A highly efficient sparse direct solver, PARDISO, is used in this study. The scalability of both the Newton and modified Newton algorithms is tested.
Image decomposition as a tool for validating stress analysis models
Mottershead J.
2010-06-01
Full Text Available It is good practice to validate analytical and numerical models used in stress analysis for engineering design by comparison with measurements obtained from real components either in-service or in the laboratory. In reality, this critical step is often neglected or reduced to placing a single strain gage at the predicted hot-spot of stress. Modern techniques of optical analysis allow full-field maps of displacement, strain and, or stress to be obtained from real components with relative ease and at modest cost. However, validations continued to be performed only at predicted and, or observed hot-spots and most of the wealth of data is ignored. It is proposed that image decomposition methods, commonly employed in techniques such as fingerprinting and iris recognition, can be employed to validate stress analysis models by comparing all of the key features in the data from the experiment and the model. Image decomposition techniques such as Zernike moments and Fourier transforms have been used to decompose full-field distributions for strain generated from optical techniques such as digital image correlation and thermoelastic stress analysis as well as from analytical and numerical models by treating the strain distributions as images. The result of the decomposition is 101 to 102 image descriptors instead of the 105 or 106 pixels in the original data. As a consequence, it is relatively easy to make a statistical comparison of the image descriptors from the experiment and from the analytical/numerical model and to provide a quantitative assessment of the stress analysis.
Distributed Prognostics based on Structural Model Decomposition
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.
2014-01-01
Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index terms: model-based prognostics, distributed prognostics, structural model decomposition.
Stochastic Plane Stress Analysis with Elementary Stiffness Matrix Decomposition Method
Er, G. K.; Wang, M. C.; Iu, V. P.; Kou, K. P.
2010-05-01
In this study, the efficient analytical method named the elementary stiffness matrix decomposition (ESMD) method is further investigated and utilized for the moment evaluation of stochastic plane stress problems, in comparison with the conventional perturbation method in stochastic finite element analysis. In order to evaluate the performance of this method, computer programs are written and some numerical results for stochastic plane stress problems are obtained. The numerical analysis shows that the computational efficiency is much increased and the EMS memory requirement of the computer can be much reduced by using the ESMD method.
Eigenvalue Decomposition-Based Modified Newton Algorithm
Wen-jun Wang
2013-01-01
When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method named the eigenvalue decomposition-based modified Newton algorithm is presented, which first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and the conclusion on the convergence rate is presented qualitatively. Finally, a numerical experiment is given comparing the convergence domains of the modified algorithm and the classical algorithm.
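The eigenvalue modification described above is easy to sketch with numpy. The indefinite test Hessian and gradient below are illustrative assumptions; the core rule (replace negative eigenvalues by their absolute values, then form the Newton step) follows the abstract.

```python
import numpy as np

def modified_newton_direction(grad, hess):
    """Return -H_mod^{-1} grad, where H_mod has the |eigenvalues| of hess."""
    w, V = np.linalg.eigh(hess)      # hess assumed symmetric
    w_mod = np.abs(w)
    w_mod[w_mod < 1e-12] = 1e-12     # guard against near-singular directions
    h_mod_inv = V @ np.diag(1.0 / w_mod) @ V.T
    return -h_mod_inv @ grad

# Indefinite Hessian: plain Newton could move uphill along the negative eigenvalue.
H = np.array([[2.0, 0.0], [0.0, -1.0]])
g = np.array([1.0, 1.0])

d = modified_newton_direction(g, H)
print(d @ g < 0)  # True: the modified direction is always a descent direction
```

Because every eigenvalue of the modified Hessian is positive, d.T @ g = -g.T @ H_mod^{-1} @ g is negative for any nonzero gradient, which is the property the paper proves convergence from.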
Rank-based decompositions of morphological templates.
Sussner, P; Ritter, G X
2000-01-01
Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
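A rank-1 decomposition in minimax (max-plus) algebra is the simplest instance of the idea above: a template t is separable if t[i][j] = r[i] + c[j], so a large morphological dilation by t can be replaced by two cheaper dilations by r and c. The recovery rule below is a minimal illustration of the rank-1 case only, not the paper's arbitrary-rank heuristic.

```python
def rank1_decompose(t):
    """Try to split t into row and column templates; return None if t is not rank 1."""
    r = [row[0] for row in t]                 # candidate row template
    c = [v - t[0][0] for v in t[0]]           # candidate column template
    ok = all(t[i][j] == r[i] + c[j]
             for i in range(len(t)) for j in range(len(t[0])))
    return (r, c) if ok else None

# A separable 3x3 template built as the max-plus outer product of r and c.
t = [[ri + cj for cj in (0, 10, 20)] for ri in (0, 1, 2)]
print(rank1_decompose(t))  # ([0, 1, 2], [0, 10, 20])
```

For templates of higher minimax rank, the outer product expansion needs several such (r, c) pairs combined with pointwise maximum, which is where the paper's heuristic algorithm comes in.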
Overlapping Community Detection based on Network Decomposition
Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin
2016-04-01
Community detection in complex networks has become a vital step toward understanding the structure and dynamics of networks in various fields. However, traditional node clustering and the relatively recently proposed link clustering methods have inherent drawbacks in discovering overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized for its high computational cost and ambiguous definition of communities. Overlapping community detection therefore remains a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing a node clustering technique. The network decomposition contributes to reducing the computation time, and noise link elimination contributes to improving the quality of the obtained communities. Besides, we employ a node clustering technique rather than a link similarity measure to discover link communities, so NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach in both computation time and accuracy compared to state-of-the-art algorithms.
QBF-Based Boolean Function Bi-Decomposition
Chen, Huan; Marques-Silva, Joao
2011-01-01
Boolean function bi-decomposition is ubiquitous in logic synthesis. It entails the decomposition of a Boolean function using two-input simple logic gates. Existing solutions for bi-decomposition are often based on BDDs and, more recently, on Boolean Satisfiability. In addition, the partition of the input set of variables is either assumed, or heuristic solutions are considered for finding good partitions. In contrast to earlier work, this paper proposes the use of Quantified Boolean Formulas (QBF) for computing bi-decompositions. These bi-decompositions are optimal in terms of the achieved disjointness and balancedness of the input set of variables. Experimental results, obtained on representative benchmarks, demonstrate not only clear improvements in the quality of computed decompositions, but also the practical feasibility of QBF-based bi-decomposition.
Gate-based decomposition of index generation functions
Łuba, Tadeusz; Borowik, Grzegorz; Jankowski, Cezary
2016-09-01
Index Generation Functions may be useful in the distribution of IP addresses, virus scanning, or undesired data detection. The traditional approach leads to decomposition based on universal cells. In this paper, an original method is proposed: a multilevel logic synthesis method based on functional decomposition that uses gates instead of cells. Furthermore, it preserves the advantages of functional decomposition and is well suited for ROM-based synthesis of Index Generation Functions.
IMAGE ENCRYPTION BASED ON SINGULAR VALUE DECOMPOSITION
Nidhal K. El Abbadi
2014-01-01
Image encryption is one of the most common methods of information hiding. A novel secure encryption method for images is presented in this study. The proposed algorithm is based on singular value decomposition (SVD). We start by scrambling the image data according to suggested keys (two sequential scrambling processes with two different keys) to create two different matrices. The diagonal matrix from the SVD is then interchanged with the resulting matrices. Another scrambling and diagonal-matrix interchange is applied to increase the complexity. The resulting two matrices are combined into one matrix according to a predefined procedure. The encrypted image is a meaningful image. The suggested method was tested on many images and gives promising results.
Central-force decomposition of spline-based modified embedded atom method potential
Winczewski, S.; Dziedzic, J.; Rybicki, J.
2016-10-01
Central-force decompositions are fundamental to the calculation of stress fields in atomic systems by means of Hardy stress. We derive expressions for a central-force decomposition of the spline-based modified embedded atom method (s-MEAM) potential. The expressions are subsequently simplified to a form that can be readily used in molecular-dynamics simulations, enabling the calculation of the spatial distribution of stress in systems treated with this novel class of empirical potentials. We briefly discuss the properties of the obtained decomposition and highlight further computational techniques that can be expected to benefit from the results of this work. To demonstrate the practicability of the derived expressions, we apply them to calculate stress fields due to an edge dislocation in bcc Mo, comparing their predictions to those of linear elasticity theory.
Pitfalls in VAR based return decompositions: A clarification
Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten
Based on Chen and Zhao's (2009) criticism of VAR based return decompositions, we explain in detail the various limitations and pitfalls involved in such decompositions. First, we show that Chen and Zhao's interpretation of their excess bond return decomposition is wrong: the residual component in their analysis is not "cashflow news" but "interest rate news", which should not be zero. Consequently, in contrast to what Chen and Zhao claim, their decomposition does not serve as a valid caution against VAR based decompositions. Second, we point out that in order for VAR based decompositions to be valid ... In a properly specified VAR, it makes no difference whether return news and dividend news are both computed directly or one of them is backed out as a residual.
Multiplying decomposition of stress/strain, constitutive/compliance relations, and strain energy
Lee, HyunSuk
2012-01-01
To account for phenomenological theories and a set of invariants, stress and strain are usually decomposed into a pair of pressure and deviatoric stress and a pair of volumetric strain and deviatoric strain. However, the conventional decomposition method focuses only on individual stress and strain, and thus cannot be directly applied to formulations in either the Finite Element Method (FEM) or the Boundary Element Method (BEM). In this paper, a simpler, more general, and widely applicable decomposition is suggested. The new decomposition method applies multiplying decomposition tensors or matrices not only to stress and strain but also to the constitutive and compliance relations. With this, we also show its practical usage in FEM and BEM in terms of tensors and matrices.
Subspace decomposition-based correlation matrix multiplication
Cheng Hao; Guo Wei; Yu Jingdong
2008-01-01
The correlation matrix, which is widely used in eigenvalue decomposition (EVD) or singular value decomposition (SVD), can usually be denoted by R = E[y_i y_i']. A novel method for constructing the correlation matrix R is proposed. The proposed algorithm can improve the resolving power of the signal eigenvalues and overcomes the shortcomings of the traditional subspace methods, which cannot be applied at low SNR. The proposed method is then applied to signature sequence estimation for direct sequence spread spectrum (DSSS) signals. The performance of the proposed algorithm is analyzed, and some illustrative simulation results are presented.
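The classical construction the paper builds on can be sketched as follows: estimate R = E[y_i y_i'] from snapshots, then take its eigenvalue decomposition and read the signature off the dominant (signal-subspace) eigenvector. The +/-1 signature model and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 4000                        # snapshot length, number of snapshots

code = rng.choice([-1.0, 1.0], size=n)   # hidden signature sequence
s = rng.choice([-1.0, 1.0], size=m)      # per-snapshot data symbols

Y = np.outer(code, s) + 0.5 * rng.normal(size=(n, m))  # noisy snapshots y_i
R = (Y @ Y.T) / m                        # sample estimate of E[y_i y_i']

w, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
signal_axis = V[:, -1]                   # dominant eigenvector spans the signal subspace

# Up to sign, the dominant eigenvector recovers the signature direction.
corr = abs(signal_axis @ code) / np.linalg.norm(code)
print(corr > 0.95)  # True
```

The gap between the largest eigenvalue (signal plus noise power) and the remaining near-equal noise eigenvalues is what "resolving power of the signal eigenvalues" refers to; at low SNR that gap shrinks, which motivates the paper's modified construction of R.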
Modal Decomposition of Synthetic Jet Flow Based on CFD Computation
Hyhlík Tomáš
2015-01-01
The article analyzes results of numerical simulation of synthetic jet flow using modal decomposition. The analyses are based on numerical simulation of axisymmetric unsteady laminar flow obtained using the ANSYS Fluent CFD code. Three typical laminar regimes are compared from the point of view of modal decomposition. The first regime, without synthetic jet creation, has Reynolds number Re = 76 and Stokes number S = 19.7. The second studied regime is defined by Re = 145 and S = 19.7. The third regime of synthetic jet operation is defined by Re = 329 and S = 19.7. Modal decomposition of the obtained flow fields is done using proper orthogonal decomposition (POD), where the energetically most important modes are identified. The structure of the POD modes is discussed together with the classical approach based on phase-averaged velocities.
Kinetic energy decomposition scheme based on information theory.
Imamura, Yutaka; Suzuki, Jun; Nakai, Hiromi
2013-12-15
We propose a novel kinetic energy decomposition analysis based on information theory. Since the Hirshfeld partitioning for electron densities can be formulated in terms of Kullback-Leibler information deficiency in information theory, a similar partitioning for kinetic energy densities is newly proposed. The numerical assessments confirm that the current kinetic energy decomposition scheme provides reasonable chemical pictures for ionic and covalent molecules, and can also estimate atomic energies using a correction with virial ratios.
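The Hirshfeld (stockholder) partitioning underlying this scheme can be illustrated on a toy 1-D grid: each atom's share of a density at a point is its promolecule density divided by the sum over all atoms, and the same weights can partition a kinetic-energy density. The Gaussian "atoms" and grid are assumptions for illustration only.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

def atom_density(center, width=1.0):
    """Toy promolecule atomic density centered on an atom."""
    return np.exp(-((x - center) / width) ** 2)

rho_pro = np.stack([atom_density(-1.0), atom_density(1.5)])  # promolecule densities
weights = rho_pro / rho_pro.sum(axis=0)                      # stockholder weights w_A

total = rho_pro.sum(axis=0)                 # stand-in for the molecular density
atomic = (weights * total).sum(axis=1) * dx  # per-atom integrated shares

print(np.isclose(atomic.sum(), total.sum() * dx))  # True: the partition is exact
```

Because the weights sum to one at every grid point, the atomic contributions always recombine to the total, which is the additivity property that makes the decomposition chemically interpretable.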
Decomposition Techniques and Effective Algorithms in Reliability-Based Optimization
Enevoldsen, I.; Sørensen, John Dalsgaard
1995-01-01
The common problem of an extensive number of limit state function calculations in the various formulations and applications of reliability-based optimization is treated. It is suggested to use a formulation based on decomposition techniques so the nested two-level optimization problem can be solved...
A decomposition method based on a model of continuous change.
Horiuchi, Shiro; Wilmoth, John R; Pletcher, Scott D
2008-11-01
A demographic measure is often expressed as a deterministic or stochastic function of multiple variables (covariates), and a general problem (the decomposition problem) is to assess contributions of individual covariates to a difference in the demographic measure (dependent variable) between two populations. We propose a method of decomposition analysis based on an assumption that covariates change continuously along an actual or hypothetical dimension. This assumption leads to a general model that logically justifies the additivity of covariate effects and the elimination of interaction terms, even if the dependent variable itself is a nonadditive function. A comparison with earlier methods illustrates other practical advantages of the method: in addition to an absence of residuals or interaction terms, the method can easily handle a large number of covariates and does not require a logically meaningful ordering of covariates. Two empirical examples show that the method can be applied flexibly to a wide variety of decomposition problems. This study also suggests that when data are available at multiple time points over a long interval, it is more accurate to compute an aggregated decomposition based on multiple subintervals than to compute a single decomposition for the entire study period.
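The continuous-change idea above can be sketched numerically: move the covariates from one population's values to the other's in many small steps, and credit each covariate with the change in the dependent variable produced by its own sub-step. The function f below is an illustrative assumption; the method applies to any smooth measure.

```python
import numpy as np

def decompose(f, x1, x2, n_steps=1000):
    """Split f(x2) - f(x1) into per-covariate contributions along a line."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    step = (x2 - x1) / n_steps
    contrib = np.zeros_like(x1)
    x = x1.copy()
    for _ in range(n_steps):
        for i in range(len(x)):
            x_hi = x.copy()
            x_hi[i] += step[i]
            contrib[i] += f(x_hi) - f(x)   # effect of covariate i alone
        x += step
    return contrib

f = lambda x: x[0] * x[1] ** 2             # a nonadditive dependent variable
c = decompose(f, [1.0, 1.0], [2.0, 3.0])

# Contributions are additive up to O(1/n_steps) even though f is nonadditive.
total = f(np.array([2.0, 3.0])) - f(np.array([1.0, 1.0]))
print(np.isclose(c.sum(), total, rtol=1e-2))  # True
```

This is the practical advantage noted in the abstract: the cross-covariate interaction terms vanish in the continuous limit, so no residual term has to be allocated by convention.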
Splitting extrapolation based on domain decomposition for finite element approximations
吕涛; 冯勇
1997-01-01
Splitting extrapolation based on domain decomposition for finite element approximations is a new technique for solving large scale scientific and engineering problems in parallel. By means of domain decomposition, a large scale multidimensional problem is turned into many discrete problems involving several grid parameters. The multi-variate asymptotic expansions of finite element errors on independent grid parameters are proved for linear and nonlinear second order elliptic equations as well as eigenvalue problems. Therefore, after solving smaller problems of similar size in parallel, a global fine grid approximation with higher accuracy is computed by the splitting extrapolation method.
Noise reduction method based on weighted manifold decomposition
Gan Jian-Chao; Xiao Xian-Ci
2004-01-01
A noise reduction method based on weighted manifold decomposition is proposed in this paper, which requires neither knowledge of the chaotic dynamics nor a choice of the number of eigenvalues. The simulation indicates that this method can increase the signal-to-noise ratio of noisy chaotic time series.
Asynchronous Task-Based Polar Decomposition on Manycore Architectures
Sukkari, Dalal
2016-10-25
This paper introduces the first asynchronous, task-based implementation of the polar decomposition on manycore architectures. Based on a new formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original and hostile LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is also capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been severely weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations (i.e., Intel MKL and Elemental) for the polar decomposition on latest shared-memory vendors' systems (i.e., Intel Haswell/Broadwell/Knights Landing, NVIDIA K80/P100 GPUs and IBM Power8), while maintaining high numerical accuracy.
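The iteration underlying QDWH can be illustrated with the plain (unweighted) Halley map: after scaling A by its largest singular value, X_{k+1} = X_k (3I + X_k^T X_k)(I + 3 X_k^T X_k)^{-1} drives all singular values to 1, so X converges to the orthogonal polar factor U of A = U H. QDWH adds dynamic weights and QR-based steps for speed and stability; this explicit-inverse variant is only an illustration, and the test matrix is an assumption.

```python
import numpy as np

def polar_halley(A, iters=30):
    """Polar decomposition A = U H via the basic (unweighted) Halley iteration."""
    X = A / np.linalg.norm(A, 2)          # scale so all singular values are <= 1
    I = np.eye(A.shape[1])
    for _ in range(iters):
        G = X.T @ X
        X = X @ (3.0 * I + G) @ np.linalg.inv(I + 3.0 * G)
    U = X
    H = U.T @ A
    return U, 0.5 * (H + H.T)             # symmetrize H against round-off

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5)) + 3.0 * np.eye(5)  # keep the test matrix well conditioned

U, H = polar_halley(A)
print(np.allclose(U.T @ U, np.eye(5), atol=1e-8),
      np.allclose(U @ H, A, atol=1e-8))   # both True
```

Each singular value evolves independently under x -> x(3 + x^2)/(1 + 3x^2), which converges cubically to 1; the dynamic weighting in QDWH accelerates exactly this scalar map for ill-conditioned matrices.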
Distributed Prognostics Based on Structural Model Decomposition
National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...
Gesture Based Control and EMG Decomposition
Wheeler, Kevin R.; Chang, Mindy H.; Knuth, Kevin H.
2005-01-01
This paper presents two probabilistic developments for use with Electromyograms (EMG). First described is a neuro-electric interface for virtual device control based on gesture recognition. The second development is a Bayesian method for decomposing EMG into individual motor unit action potentials. This more complex technique will then allow for higher resolution in separating muscle groups for gesture recognition. All examples presented rely upon sampling EMG data from a subject's forearm. The gesture based recognition uses pattern recognition software that has been trained to identify gestures from among a given set of gestures. The pattern recognition software consists of hidden Markov models which are used to recognize the gestures as they are being performed in real-time from moving averages of EMG. Two experiments were conducted to examine the feasibility of this interface technology. The first replicated a virtual joystick interface, and the second replicated a keyboard. Moving averages of EMG do not provide easy distinction between fine muscle groups. To better distinguish between different fine motor skill muscle groups we present a Bayesian algorithm to separate surface EMG into representative motor unit action potentials. The algorithm is based upon differential Variable Component Analysis (dVCA) [1], [2], which was originally developed for Electroencephalograms. The algorithm uses a simple forward model representing a mixture of motor unit action potentials as seen across multiple channels. The parameters of this model are iteratively optimized for each component. Results are presented on both synthetic and experimental EMG data. The synthetic case has additive white noise and is compared with known components. The experimental EMG data was obtained using a custom linear electrode array designed for this study.
Ab initio modeling of decomposition in iron based alloys
Gorbatov, O. I.; Gornostyrev, Yu. N.; Korzhavyi, P. A.; Ruban, A. V.
2016-12-01
This paper reviews recent progress in the field of ab initio based simulations of structure and properties of Fe-based alloys. We focus on the thermodynamics of these alloys, their decomposition kinetics, and microstructure formation, taking into account the disorder of magnetic moments with temperature. We review modern theoretical tools which allow a consistent description of the electronic structure and energetics of random alloys with local magnetic moments that become totally or partially disordered as temperature increases. This approach gives a basis for an accurate finite-temperature description of alloys by calculating all the relevant contributions to the Gibbs energy from first principles, including a configurational part as well as terms due to electronic, vibrational, and magnetic excitations. Applications of these theoretical approaches to the calculation of thermodynamic parameters at elevated temperatures (solution energies and effective interatomic interactions) are discussed, including atomistic modeling of decomposition/clustering in Fe-based alloys. This provides a solid basis for understanding experimental data and for developing new steels for modern applications. The precipitation in Fe-Cu based alloys, the decomposition in Fe-Cr, and the short-range order formation in iron alloys with s-p elements are considered as examples.
Optimal (Solvent) Mixture Design through a Decomposition Based CAMD methodology
Achenie, L.; Karunanithi, Arunprakash T.; Gani, Rafiqul
2004-01-01
Computer Aided Molecular/Mixture design (CAMD) is one of the most promising techniques for solvent design and selection. A decomposition based CAMD methodology has been formulated where the mixture design problem is solved as a series of molecular and mixture design sub-problems. This approach is able to overcome most of the difficulties associated with the solution of mixture design problems. The new methodology has been illustrated with the help of a case study involving the design of solvent-anti solvent binary mixtures for crystallization of Ibuprofen.
Training for Retrieval of Knowledge under Stress through Algorithmic Decomposition
1986-10-01
explained the base- rate problem and the way of solution. This natural language mediation is a verbal strategy for learning process. Imagery can be used...Light Bulb and Dyslexia problems used by Lichtenstein & MacGregor (1985). The problems are presented in Appendix D. All aspects of the problems were...algorithm was composed for the Dyslexia version. The algorithm and the tutorial are presented in Appendix E. Problems’ type. As in the original study
LTE/MVNO NETWORKS STRUCTURE OPTIMIZATION BASED ON TENSOR DECOMPOSITION
Strelkovskaya, Iryna; Solovskaya, Iryna
2015-01-01
Tensor methods based on decomposition are proposed for solving structure optimization tasks in LTE/MVNO mobile communication networks. The problem of choosing an optimum topology for the e-Node B base station connections in the E-UTRAN/LTE radio access network is solved, as is the problem of assessing the QoS characteristics of the complex LTE/MVNO network architecture.
Chaos-Based Image Encryption Algorithm Using Decomposition
Xiuli Song
2013-07-01
The proposed chaos-based image encryption algorithm consists of four stages: decomposition, shuffle, diffusion, and combination. In decomposition, the original image is decomposed into components according to some rule. The purpose of the shuffle is to mask the original organization of the pixels of the image, and that of the diffusion is to change their values. Combination is not necessary at the sender. To improve efficiency, a parallel architecture is adopted to process the shuffle and diffusion. To enhance the security of the algorithm, firstly, a permutation of the labels is designed. Secondly, two Logistic maps are used in the diffusion stage to encrypt the components: one map encrypts the odd rows of the component and the other encrypts the even rows. Experimental results and security analysis demonstrate that the encryption algorithm is not only robust and flexible but can also withstand common attacks such as statistical attacks and differential attacks.
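A minimal sketch of the diffusion stage described above: two logistic maps generate keystreams, one seeding the odd rows and one the even rows, and pixel values are XOR-ed with the keystream. The keys, the per-row seed perturbation, and the tiny image are assumptions for illustration; the paper's full scheme also includes the shuffle and permutation stages.

```python
def logistic_stream(x0, n, r=3.99):
    """Chaotic keystream of n bytes from the logistic map x <- r x (1 - x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out

def diffuse(image, key_odd, key_even):
    """XOR each row with a map-dependent keystream; self-inverse, so it also decrypts."""
    result = []
    for i, row in enumerate(image):
        x0 = key_odd if i % 2 else key_even      # odd rows use one map, even rows the other
        stream = logistic_stream(x0 + i / 1000.0, len(row))
        result.append([p ^ s for p, s in zip(row, stream)])
    return result

image = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
enc = diffuse(image, 0.31, 0.62)
dec = diffuse(enc, 0.31, 0.62)
print(dec == image)  # True: XOR diffusion round-trips with the same keys
```

Because rows are processed independently, this stage parallelizes naturally, which matches the parallel architecture mentioned in the abstract.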
Signal overcomplete representation and sparse decomposition based on redundant dictionaries
ZHANG Chunmei; YIN Zhongke; CHEN Xiangdong; XIAO Mingxia
2005-01-01
Decomposing a signal over a redundant dictionary is a new method of data representation in signal processing. It approximates a signal with an overcomplete system instead of an orthonormal basis, providing sufficient choice for adaptive sparse decompositions. Replacing the original data with a sparse approximation can result in not only a higher compression ratio, but also greater flexibility in capturing the inherent structure of natural signals thanks to the redundancy of dictionaries. This paper gives an overview of a series of recent results in this field, and deals with the relationship between the sparsity of signal decompositions and the incoherence of dictionaries under the basis pursuit (BP) and matching pursuit (MP) algorithms. The current and future challenges of dictionary construction are discussed.
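Matching pursuit, one of the two algorithms discussed above, can be sketched in a few lines: at each step pick the dictionary atom most correlated with the residual and subtract its contribution. The random dictionary and the 2-sparse test signal are illustrative assumptions.

```python
import numpy as np

def matching_pursuit(signal, D, n_atoms):
    """Greedy sparse decomposition; D must have unit-norm columns (atoms)."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = np.argmax(np.abs(corr))       # best-matching atom for this residual
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]     # remove its contribution
    return coeffs, residual

rng = np.random.default_rng(3)
D = rng.normal(size=(16, 64))             # redundant dictionary: 64 atoms in R^16
D /= np.linalg.norm(D, axis=0)            # normalize atoms

signal = 2.0 * D[:, 5] - 1.5 * D[:, 40]   # signal that is sparse in the dictionary
coeffs, residual = matching_pursuit(signal, D, n_atoms=50)

print(np.linalg.norm(residual) < 0.1 * np.linalg.norm(signal))  # True
```

The residual norm shrinks geometrically, but how fast depends on the coherence of the dictionary, which is exactly the sparsity/incoherence trade-off the overview discusses.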
Image Fakery Detection Based on Singular Value Decomposition
T. Basaruddin
2009-11-01
The growth of image processing technology nowadays makes it easier for users to modify and fake images. Image fakery is a process that manipulates part or the whole area of an image, either in its content or context, with the help of digital image processing techniques. Image fakery is barely recognizable because the fake image looks so natural. Yet numerical computation techniques are able to detect the evidence of a fake image. This research successfully applies the singular value decomposition method to detect image fakery. The image preprocessing algorithm prior to the detection process yields two vectors orthogonal to the singular value vector which are important for detecting fake images. Experiments on images in several conditions successfully detect the fake images with a threshold value of 0.2. Singular value decomposition-based detection of image fakery can be used to investigate accurately whether a fake image was modified from an original image.
Shape classification based on singular value decomposition transform
SHAABAN Zyad; ARIF Thawar; BABA Sami; KREKOR Lala
2009-01-01
In this paper, a new shape classification system based on the singular value decomposition (SVD) transform and a nearest-neighbour classifier is proposed. The grey-scale image of the shape object is converted into a black-and-white image, and the squared Euclidean distance transform is applied to the binary image to extract the boundary of the shape. SVD transform features are then extracted from the boundary of the object shape. The proposed SVD-based classification system is compared with a classifier based on moment invariants using the same nearest-neighbour classifier, and the experimental results show the advantage of the proposed system.
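The SVD-feature-plus-nearest-neighbour pipeline can be sketched as below. The toy shapes, feature length `k` and normalization are illustrative assumptions, not the paper's exact preprocessing (the distance-transform boundary extraction step is omitted):

```python
import numpy as np

def svd_features(img, k=5):
    """Top-k singular values of a shape image, normalized by the
    largest, as a simple scale-ordered shape descriptor."""
    s = np.linalg.svd(img.astype(float), compute_uv=False)[:k]
    return s / (s[0] + 1e-12)

def nearest_neighbour(query, gallery):
    """Index of the gallery feature closest in Euclidean distance."""
    return int(np.argmin([np.linalg.norm(query - g) for g in gallery]))

# Toy binary shapes (16x16): a filled square vs. a diagonal line.
square = np.zeros((16, 16)); square[4:12, 4:12] = 1
diag = np.eye(16)
gallery = [svd_features(square), svd_features(diag)]
probe = np.zeros((16, 16)); probe[3:13, 3:13] = 1   # slightly bigger square
print(nearest_neighbour(svd_features(probe), gallery))  # -> 0 (the square)
```

The filled blocks are rank-1 (one dominant singular value) while the diagonal line has a flat spectrum, so the probe matches the square regardless of its size, illustrating why singular values make usable shape features.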
Satellite Image Time Series Decomposition Based on EEMD
Yun-long Kong
2015-11-01
Satellite Image Time Series (SITS) have recently been of great interest due to the emerging remote sensing capabilities for Earth observation. Trend and seasonal components are two crucial elements of SITS. In this paper, a novel framework for SITS decomposition based on Ensemble Empirical Mode Decomposition (EEMD) is proposed. EEMD is achieved by sifting an ensemble of adaptive orthogonal components called Intrinsic Mode Functions (IMFs); it is noise-assisted and overcomes the mode-mixing drawback of conventional Empirical Mode Decomposition (EMD). Inspired by these advantages, this work employs EEMD to decompose SITS into IMFs and to choose the relevant IMFs for separating the seasonal and trend components. In a series of simulations, the IMFs extracted by EEMD achieved a clear representation with physical meaning. Experimental results on 16-day composites of Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) and Global Environment Monitoring Index (GEMI) time series with disturbance illustrate the effectiveness and stability of the proposed approach for monitoring tasks, such as the detection of abrupt changes.
Decomposition During Search for Propagation-Based Constraint Solvers
Mann, Martin; Will, Sebastian
2007-01-01
We describe decomposition during search (DDS), an integration of and/or tree search into propagation-based constraint solvers. The presented search algorithm dynamically decomposes sub-problems of a constraint satisfaction problem into independent partial problems, avoiding redundant work. The paper discusses how DDS interacts with key features that make propagation-based solvers successful: constraint propagation, especially for global constraints, and dynamic search heuristics. We have implemented DDS for the Gecode constraint programming library. Two applications, solution counting in graph coloring and protein structure prediction, exemplify the benefits of DDS in practice.
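The core benefit of decomposition, solving independent partial problems separately, can be illustrated by finding the connected components of a constraint graph. This standalone sketch is far simpler than DDS's dynamic decomposition during propagation-based search, and all names are hypothetical:

```python
def independent_subproblems(variables, constraints):
    """Split a CSP into independent partial problems: two variables
    belong to the same sub-problem iff they are connected through
    shared constraints (each constraint is a tuple of variable names)."""
    adj = {v: set() for v in variables}
    for scope in constraints:
        for a in scope:
            for b in scope:
                if a != b:
                    adj[a].add(b)
    seen, components = set(), []
    for v in variables:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:                      # depth-first traversal
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        components.append(sorted(comp))
    return components

print(independent_subproblems(
    ["x", "y", "z", "w"],
    [("x", "y"), ("z", "w")]))   # -> [['x', 'y'], ['w', 'z']]
```

In DDS this decomposition happens dynamically: propagation during search can delete constraints' supports and thereby disconnect the graph, at which point the partial problems can be searched (or counted) independently.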
Content Based Image Retrieval Using Singular Value Decomposition
K. Harshini
2012-10-01
Face recognition software automatically identifies or verifies a person from a digital image or a video frame, for example by comparing selected facial features from the image against a facial database. Content-based image retrieval (CBIR) is a technique for retrieving images on the basis of automatically derived features. This paper focuses on a low-dimensional, feature-based indexing technique for achieving efficient and effective retrieval performance. An appearance-based face recognition method using the singular value decomposition (SVD) is proposed. It differs from principal component analysis (PCA), which considers only the Euclidean structure of the face space and consequently classifies poorly under large facial variations such as expression, lighting and occlusion, because the grey-value matrices on which it operates are very sensitive to these variations. We use the fact that every image matrix admits the well-known singular value decomposition and can be regarded as a composition of a set of base images generated by the SVD, and we further point out that these base images are sensitive to the composition of the face image. Our experimental results show that SVD provides a better representation and achieves lower error rates in face recognition, but at the cost of degraded evaluation performance. To overcome this, we introduce a controlling parameter α, ranging from 0 to 1, and achieve the best results for α = 0.4 compared with the other values of α. Keywords: singular value decomposition (SVD), Euclidean distance, original gray value matrix (OGVM).
Adaptive Fourier Decomposition Based Time-Frequency Analysis
Li-Ming Zhang
2014-01-01
Representing a signal simultaneously in the time and frequency domains is full of challenges. The recently proposed adaptive Fourier decomposition (AFD) offers a practical approach to this problem. This paper presents the principles of AFD-based time-frequency analysis in three aspects: instantaneous frequency analysis, frequency spectrum analysis, and spectrogram analysis. An experiment is conducted and compared with the Fourier transform in convergence rate and with the short-time Fourier transform in time-frequency distribution. The proposed approach performs better than both the Fourier transform and the short-time Fourier transform.
Geometric derivation of the microscopic stress: A covariant central force decomposition
Torres-Sánchez, Alejandro; Vanegas, Juan M.; Arroyo, Marino
2016-08-01
We revisit the derivation of the microscopic stress, linking the statistical mechanics of particle systems and continuum mechanics. The starting point in our geometric derivation is the Doyle-Ericksen formula, which states that the Cauchy stress tensor is the derivative of the free-energy with respect to the ambient metric tensor and which follows from a covariance argument. Thus, our approach to define the microscopic stress tensor does not rely on the statement of balance of linear momentum as in the classical Irving-Kirkwood-Noll approach. Nevertheless, the resulting stress tensor satisfies balance of linear and angular momentum. Furthermore, our approach removes the ambiguity in the definition of the microscopic stress in the presence of multibody interactions by naturally suggesting a canonical and physically motivated force decomposition into pairwise terms, a key ingredient in this theory. As a result, our approach provides objective expressions to compute a microscopic stress for a system in equilibrium and for force-fields expanded into multibody interactions of arbitrarily high order. We illustrate the proposed methodology with molecular dynamics simulations of a fibrous protein using a force-field involving up to 5-body interactions.
Convex Decomposition Based Cluster Labeling Method for Support Vector Clustering
Yuan Ping; Ying-Jie Tian; Ya-Jian Zhou; Yi-Xian Yang
2012-01-01
Support vector clustering (SVC) is an important boundary-based clustering algorithm with multiple applications, owing to its capability of handling arbitrary cluster shapes. However, SVC's popularity is limited by its highly intensive time complexity and poor labelling performance. To overcome these problems, we present a novel, efficient and robust convex decomposition based cluster labeling (CDCL) method built on the topological properties of the dataset. CDCL decomposes each implicit cluster into convex hulls, each comprising a subset of the support vectors (SVs). Using a robust algorithm applied to the nearest neighbouring convex hulls, the adjacency matrix of the convex hulls is built up to find the connected components, and the remaining data points are then assigned the label of the nearest convex hull. The approach's validity is guaranteed by geometric proofs. Time complexity analysis and comparative experiments suggest that CDCL significantly improves both efficiency and clustering quality.
Edge-Preserving Decomposition-Based Single Image Haze Removal.
Li, Zhengguo; Zheng, Jinghong
2015-12-01
Single image haze removal is under-constrained, because the number of degrees of freedom is larger than the number of observations. In this paper, a novel edge-preserving decomposition-based method is introduced to estimate the transmission map of a hazy image, so as to design a single-image haze removal algorithm from Koschmieder's law without using any prior. In particular, a weighted guided image filter is adopted to decompose the simplified dark channel of the hazy image into a base layer and a detail layer. The transmission map is estimated from the base layer and applied to restore the haze-free image. Experimental results on different types of images, including hazy images, underwater images, and normal images without haze, demonstrate the performance of the proposed algorithm.
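The dark channel the method builds on can be sketched as a per-pixel minimum over colour channels followed by a local minimum filter. The patch size and test images below are illustrative, and the paper's weighted-guided-filter decomposition into base and detail layers is omitted:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel of an RGB image (H, W, 3) with values in [0, 1]:
    per-pixel minimum over colour channels, then a local min filter."""
    mins = img.min(axis=2)
    H, W = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# Haze-free saturated patches have a near-zero dark channel;
# airlight mixed in by haze lifts it.
img = np.zeros((8, 8, 3)); img[..., 0] = 0.9    # pure red scene
print(dark_channel(img).max())                   # -> 0.0
hazy = img * 0.5 + 0.5                           # 50% airlight
print(dark_channel(hazy).min())                  # -> 0.5
```

The lifted dark channel of the hazy image is exactly what transmission estimation exploits: the brighter the dark channel, the more airlight and the lower the transmission.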
Problem decomposition by mutual information and force-based clustering
Otero, Richard Edward
The scale of engineering problems has sharply increased over the last twenty years. Larger coupled systems, increasing complexity, and limited resources create a need for methods that automatically decompose problems into manageable sub-problems by discovering and leveraging problem structure. The ability to learn the coupling (inter-dependence) structure and reorganize the original problem could lead to large reductions in the time to analyze complex problems. Such decomposition methods could also provide engineering insight on the fundamental physics driving problem solution. This work forwards the current state of the art in engineering decomposition through the application of techniques originally developed within computer science and information theory. The work describes the current state of automatic problem decomposition in engineering and utilizes several promising ideas to advance the state of the practice. Mutual information is a novel metric for data dependence and works on both continuous and discrete data. Mutual information can measure both the linear and non-linear dependence between variables without the limitations of linear dependence measured through covariance. Mutual information is also able to handle data that does not have derivative information, unlike other metrics that require it. The value of mutual information to engineering design work is demonstrated on a planetary entry problem. This study utilizes a novel tool developed in this work for planetary entry system synthesis. A graphical method, force-based clustering, is used to discover related sub-graph structure as a function of problem structure and links ranked by their mutual information. This method does not require the stochastic use of neural networks and could be used with any link ranking method currently utilized in the field. Application of this method is demonstrated on a large, coupled low-thrust trajectory problem. Mutual information also serves as the basis for an
Tensor decomposition and nonlocal means based spectral CT reconstruction
Zhang, Yanbo; Yu, Hengyong
2016-10-01
As one of the state-of-the-art detectors, photon counting detector is used in spectral CT to classify the received photons into several energy channels and generate multichannel projection simultaneously. However, the projection always contains severe noise due to the low counts in each energy channel. How to reconstruct high-quality images from photon counting detector based spectral CT is a challenging problem. It is widely accepted that there exists self-similarity over the spatial domain in a CT image. Moreover, because a multichannel CT image is obtained from the same object at different energy, images among channels are highly correlated. Motivated by these two characteristics of the spectral CT, we employ tensor decomposition and nonlocal means methods for spectral CT iterative reconstruction. Our method includes three basic steps. First, each channel image is updated by using the OS-SART. Second, small 3D volumetric patches (tensor) are extracted from the multichannel image, and higher-order singular value decomposition (HOSVD) is performed on each tensor, which can help to enhance the spatial sparsity and spectral correlation. Third, in order to employ the self-similarity in CT images, similar patches are grouped to reduce noise using the nonlocal means method. These three steps are repeated alternatively till the stopping criteria are met. The effectiveness of the developed algorithm is validated on both numerically simulated and realistic preclinical datasets. Our results show that the proposed method achieves promising performance in terms of noise reduction and fine structures preservation.
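The HOSVD step applied to each extracted volumetric patch can be sketched as below: a generic higher-order SVD of a 3D tensor via mode unfoldings, not the paper's full OS-SART/nonlocal-means pipeline:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher-order SVD: the factor matrix of each mode holds the left
    singular vectors of that mode's unfolding; the core tensor is T
    multiplied by U_n^T along every mode n."""
    Us = [np.linalg.svd(unfold(T, n))[0] for n in range(T.ndim)]
    core = T
    for n, U in enumerate(Us):
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, n)), 0, n)
    return core, Us

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 5, 6))
core, Us = hosvd(T)

# Reconstruct: multiply the core by U_n along every mode.
R = core
for n, U in enumerate(Us):
    R = np.moveaxis(np.tensordot(U, R, axes=(1, n)), 0, n)
print(np.allclose(R, T))   # -> True
```

In the reconstruction method, sparsity is enhanced by truncating small core coefficients before this inverse transform, which suppresses noise while preserving the spatial-spectral correlation captured by the factor matrices.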
Imperceptible of Watermarking in Digital Image Based Singular Value Decomposition
Cahyana
2006-11-01
Watermarking is a commonly used technique to protect digital images from unintended uses such as counterfeiting. This paper addresses a technique for embedding a watermark in a digital image based on the singular value decomposition. The primary targets of a good watermarking technique are that the watermarked image be imperceptible and that the inserted image can still be retrieved even after various transformations are applied to the watermarked image. Our work shows that SVD-based watermarking achieves both imperceptibility and robustness, as indicated by the significantly high correlation between the inserted and retrieved logo after transformations such as compression, assessed with measures including PSNR and RML.
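A minimal sketch of singular-value watermark embedding and extraction, under the common additive scheme S' = S + αW; the paper's exact embedding may differ, and the helper names and α value are illustrative assumptions:

```python
import numpy as np

def embed_watermark(img, wm, alpha=0.05):
    """Embed watermark `wm` by perturbing the singular values of `img`:
    S' = S + alpha * wm. Returns the watermarked image and the side
    information (U, S, Vt) needed for extraction."""
    U, S, Vt = np.linalg.svd(img, full_matrices=False)
    Sw = S + alpha * wm
    return U @ np.diag(Sw) @ Vt, (U, S, Vt)

def extract_watermark(img_w, key, alpha=0.05):
    """Recover the watermark by projecting the (possibly attacked)
    image back onto the original singular bases."""
    U, S, Vt = key
    Sw = np.diag(U.T @ img_w @ Vt.T)
    return (Sw - S) / alpha

rng = np.random.default_rng(2)
host = rng.random((8, 8))
wm = rng.random(8)
marked, key = embed_watermark(host, wm)
print(np.allclose(extract_watermark(marked, key), wm))  # -> True
```

Small α keeps the perturbation of the singular values, and hence the watermarked image, imperceptible; robustness comes from the stability of singular values under mild distortions.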
Quantitative Analysis of Polarimetric Model-Based Decomposition Methods
Qinghua Xie
2016-11-01
In this paper, we analyze the robustness of the parameter inversion provided by general polarimetric model-based decomposition methods from the perspective of a quantitative application. The general model and algorithm we study is the method proposed recently by Chen et al., which makes use of the complete polarimetric information and outperforms traditional decomposition methods in terms of feature extraction from land covers. Nevertheless, a quantitative analysis of the parameters retrieved by that approach suggests that further investigation is required in order to fully confirm the links between a physically-based model (i.e., approaches derived from the Freeman–Durden concept) and its outputs as intermediate products, before any biophysical parameter retrieval is addressed. To this end, we propose some modifications to the optimization algorithm employed for model inversion, including redefined boundary conditions, a transformation of variables, and a different strategy for value initialization. A number of Monte Carlo simulation tests for typical scenarios show that the parameter estimation accuracy of the proposed method is significantly increased with respect to the original implementation. Fully polarimetric airborne datasets at L-band acquired by the German Aerospace Center's (DLR's) experimental synthetic aperture radar (E-SAR) system were also used for testing purposes. The results show different qualitative descriptions of the same cover from six different model-based methods. According to the Bragg coefficient ratio (i.e., β), they are prone to provide wrong numerical inversion results, which could prevent any subsequent quantitative characterization of specific areas in the scene. Besides the particular improvements proposed over an existing polarimetric inversion method, this paper is aimed at pointing out the necessity of checking quantitatively the accuracy of model-based PolSAR techniques for a
Lossless Join Decomposition for Extended Possibility-Based Fuzzy Relational Databases
Liu, Julie Yu-Chih
2014-01-01
.... However, the problem of achieving lossless join decomposition occurs when employing the fuzzy functional dependencies to database normalization in an extended possibility-based fuzzy data models...
Lubineau, Gilles
2015-03-01
We propose a domain decomposition formalism specifically designed for the identification of local elastic parameters based on full-field measurements. This technique is made possible by a multi-scale implementation of the constitutive compatibility method. Contrary to classical approaches, the constitutive compatibility method first resolves some eigenmodes of the stress field over the structure rather than directly trying to recover the material properties. A two-step micro/macro reconstruction of the stress field is performed: a Dirichlet identification problem is first solved over every subdomain, and the macroscopic equilibrium is then ensured between the subdomains in a second step. We apply the method to large linear elastic 2D identification problems to efficiently produce estimates of the material properties at a much lower computational cost than classical approaches.
Railway Wheel Flat Detection Based on Improved Empirical Mode Decomposition
Yifan Li
2016-01-01
This study explores the capability of an improved empirical mode decomposition (EMD) for railway wheel flat detection. Aiming at the mode mixing problem of EMD, an EMD energy conservation theory and an intrinsic mode function (IMF) superposition theory are presented and derived, respectively. Based on these two theories, an improved EMD method is proposed. The advantage of the improved EMD is evaluated on a simulated vibration signal. The method is then applied to the axle-box vibration response caused by wheel flats, considering the influence of both track irregularity and vehicle running speed on the diagnosis results. Finally, the effectiveness of the proposed method is verified in a test-rig experiment. The results demonstrate that the improved EMD can inhibit the mode mixing phenomenon and effectively extract the wheel fault characteristics.
Three-Dimensional Vector Field Visualization Based on Tensor Decomposition
梁训东; 李斌; et al.
1996-01-01
This paper presents a visualization method called the deformed cube for visualizing a 3D velocity vector field. Based on the decomposition of the tensor which describes the changes of the velocity, it provides a technique for visualizing local flow. A deformed cube, a cube transformed by a tensor in a local coordinate frame, shows the local stretch, shear and rigid-body rotation of the local flow corresponding to the decomposed components of the tensor. Users can interactively view the local deformation or any component of the changes. Animating the deformed cube along a streamline gives a more global impression of the flow field. This method is intended as a complement to global visualization methods.
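The tensor decomposition underlying the deformed cube is the classical split of the local velocity-gradient tensor into a symmetric part (stretch and shear, which deform the cube) and an antisymmetric part (rigid-body rotation). The numeric tensor below is an arbitrary example, not from the paper:

```python
import numpy as np

# Local velocity-gradient tensor J (arbitrary example values).
J = np.array([[0.0, 1.0, 0.0],
              [3.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])

# Symmetric part: stretch and shear applied to the cube.
stretch_shear = 0.5 * (J + J.T)
# Antisymmetric part: rigid-body rotation of the cube.
rotation = 0.5 * (J - J.T)

print(stretch_shear + rotation)            # recovers J exactly
print(np.allclose(rotation, -rotation.T))  # -> True (antisymmetric)
```

Rendering a unit cube transformed by each part separately is what lets the user view the local stretch, shear, or rotation in isolation.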
Earth Observation Satellites Scheduling Based on Decomposition Optimization Algorithm
Feng Yao
2010-11-01
A decomposition-based optimization algorithm is proposed for the Earth observation satellite scheduling problem. The problem is decomposed into a task-assignment main problem and single-satellite scheduling sub-problems. In the task-assignment phase, tasks are allocated to the satellites; each satellite then schedules its tasks in the single-satellite scheduling phase. An adaptive ant colony optimization algorithm is adopted to search for the optimal task-assignment scheme, with an adaptive parameter-adjusting strategy and a pheromone-trail smoothing strategy introduced to balance exploration and exploitation of the search process. A heuristic algorithm and a very fast simulated annealing algorithm are proposed to solve the single-satellite scheduling problem. The task-assignment scheme is evaluated by integrating the observation scheduling results of the multiple satellites, and this evaluation is fed back to the ant colony optimization algorithm to guide its search. Computational results show that the approach is effective on the satellite observation scheduling problem.
InfTucker: t-Process based Infinite Tensor Decomposition
Xu, Zenglin; Yuan,; Qi,
2011-01-01
Tensor decomposition is a powerful tool for multiway data analysis. Many popular tensor decomposition approaches---such as the Tucker decomposition and CANDECOMP/PARAFAC (CP)---conduct multi-linear factorization. They are insufficient to model (i) complex interactions between data entities, (ii) various data types (e.g. missing data and binary data), and (iii) noisy observations and outliers. To address these issues, we propose a tensor-variate latent $t$ process model, InfTucker, for robust multiway data analysis: it conducts robust Tucker decomposition in an infinite feature space. Unlike classical tensor decomposition models, it handles both continuous and binary data in a probabilistic framework. Unlike previous nonparametric Bayesian models on matrices and tensors, our latent $t$-process model focuses on multiway analysis and uses nonlinear covariance functions. To efficiently learn InfTucker from data, we develop a novel variational inference technique on tensors. Compared with classical implementation,...
Vision-Based Faint Vibration Extraction Using Singular Value Decomposition
Xiujun Lei
2015-01-01
Vibration measurement is important for understanding the behavior of engineering structures. Unlike conventional contact-type measurements, vision-based methodologies have attracted a great deal of attention because of the advantages of remote measurement, their nonintrusive character, and the absence of added mass; they constitute a new type of displacement sensor that is convenient and reliable. This study introduces singular value decomposition (SVD) methods for video image processing and presents a vibration-extraction algorithm, which realizes noncontact displacement measurement without any undesirable influence on the structure's behavior. The SVD-based algorithm decomposes a matrix assembled from the preceding frames to obtain a set of orthonormal image bases; the projections of all video frames onto these bases describe the vibration information. The parameter selection of the SVD-based algorithm is discussed in detail by means of simulation. To validate the algorithm's performance in practice, sinusoidal motion tests are performed, and the results indicate that the proposed technique provides fairly accurate displacement measurements. Moreover, a sound-barrier experiment is carried out to show how high-speed trains affect a nearby sound barrier, a measurement realized here for the first time owing to the challenging measurement environment.
Identifying key nodes in multilayer networks based on tensor decomposition
Wang, Dingjie; Wang, Haitao; Zou, Xiufen
2017-06-01
The identification of essential agents in multilayer networks characterized by different types of interactions is a crucial and challenging topic, one that is essential for understanding the topological structure and dynamic processes of multilayer networks. In this paper, we use the fourth-order tensor to represent multilayer networks and propose a novel method to identify essential nodes based on CANDECOMP/PARAFAC (CP) tensor decomposition, referred to as the EDCPTD centrality. This method is based on the perspective of multilayer networked structures, which integrate the information of edges among nodes and links between different layers to quantify the importance of nodes in multilayer networks. Three real-world multilayer biological networks are used to evaluate the performance of the EDCPTD centrality. The bar chart and ROC curves of these multilayer networks indicate that the proposed approach is a good alternative index to identify real important nodes. Meanwhile, by comparing the behavior of both the proposed method and the aggregated single-layer methods, we demonstrate that neglecting the multiple relationships between nodes may lead to incorrect identification of the most versatile nodes. Furthermore, the Gene Ontology functional annotation demonstrates that the identified top nodes based on the proposed approach play a significant role in many vital biological processes. Finally, we have implemented many centrality methods of multilayer networks (including our method and the published methods) and created a visual software based on the MATLAB GUI, called ENMNFinder, which can be used by other researchers.
Mutually Unbiased Bases and Orthogonal Decompositions of Lie Algebras
Boykin, P O; Tiep, P H; Wocjan, P; Sitharam, Meera; Tiep, Pham Huu; Wocjan, Pawel
2005-01-01
We establish a connection between the problem of constructing maximal collections of mutually unbiased bases (MUBs) and an open problem in the theory of Lie algebras. More precisely, we show that a collection of m MUBs in K^n gives rise to a collection of m Cartan subalgebras of the special linear Lie algebra sl_n(K) that are pairwise orthogonal with respect to the Killing form, where K=R or K=C. In particular, a complete collection of MUBs in C^n gives rise to a so-called orthogonal decomposition (OD) of sl_n(C). The converse holds if the Cartan subalgebras in the OD are also *-closed, i.e., closed under the adjoint operation. In this case, the Cartan subalgebras have unitary bases, and the above correspondence becomes equivalent to a result relating collections of MUBs to collections of maximal commuting classes of unitary error bases, i.e., orthogonal unitary matrices. It is a longstanding conjecture that ODs of sl_n(C) can only exist if n is a prime power. This corroborates further the general belief that...
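The defining property of mutual unbiasedness used throughout this correspondence can be stated compactly:

```latex
\text{Two orthonormal bases } \{\lvert e_i\rangle\}_{i=1}^{n},\;
\{\lvert f_j\rangle\}_{j=1}^{n} \subset \mathbb{C}^{n}
\text{ are mutually unbiased iff }
\bigl|\langle e_i \mid f_j \rangle\bigr|^{2} = \frac{1}{n}
\quad \text{for all } i, j.
```

Measuring a state drawn from one basis in any mutually unbiased basis yields a uniform outcome distribution, which is why complete collections of MUBs (of size n + 1, known to exist when n is a prime power) are the natural object to relate to orthogonal decompositions of sl_n(C).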
Kernel based pattern analysis methods using eigen-decompositions for reading Icelandic sagas
Christiansen, Asger Nyman; Carstensen, Jens Michael
We want to test the applicability of kernel-based eigen-decomposition methods compared to traditional eigen-decomposition methods. We have implemented and tested three kernel-based methods, namely PCA, MAF and MNF, all using a Gaussian kernel. We tested the methods on a multispectral image of a page in the book 'hauksbok', which contains Icelandic sagas.
Block Cipher Involving Key Based Random Interlacing and Key Based Random Decomposition
K. A. Kumar
2010-01-01
Problem statement: The strength of a block cipher depends on the degree of confusion and diffusion induced in the cipher. Most of the transformations used for this purpose are well known and can be broken by a cryptanalyst; to counter this, better transformations are needed in addition to the existing ones. Approach: We use key-based random interlacing and key-based random decomposition, so that a cryptanalyst cannot determine how interlacing and decomposition are done in each round unless the key is known. Results: The strength of the cipher is assessed by the avalanche effect, which proves satisfactory. Conclusion/Recommendations: Key-based random interlacing and decomposition can be used to introduce confusion and diffusion in block ciphers. The cryptanalysis carried out in this regard shows that the cipher is not broken by the cryptanalytic attacks considered.
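A hedged sketch of what a key-based interlacing step might look like: a byte permutation derived deterministically from the key, so the transformation differs per key yet remains invertible. This illustrates the idea only; the paper's actual round transformations are not specified here, and all function names are hypothetical:

```python
import random

def key_based_interlace(block, key):
    """Permute (interlace) the bytes of a block with a permutation
    derived deterministically from the key."""
    order = list(range(len(block)))
    random.Random(key).shuffle(order)       # key-seeded permutation
    return bytes(block[i] for i in order), order

def inverse_interlace(scrambled, order):
    """Undo the interlacing given the same key-derived permutation."""
    out = bytearray(len(scrambled))
    for pos, i in enumerate(order):
        out[i] = scrambled[pos]
    return bytes(out)

msg = b"BLOCKCIPHERDEMO!"
scrambled, order = key_based_interlace(msg, key="secret")
print(inverse_interlace(scrambled, order) == msg)   # -> True
```

Because the permutation is recomputed from the key each round, an attacker who lacks the key cannot tell how the bytes were interlaced, which is the source of the extra confusion the abstract describes.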
Decomposition-based transfer distance metric learning for image classification.
Luo, Yong; Liu, Tongliang; Tao, Dacheng; Xu, Chao
2014-09-01
Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting, exploiting the large quantity of side information from certain related, but different, source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source and target tasks are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method over existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric; far fewer variables therefore need to be learned, so we obtain more reliable solutions given the limited side information, and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenging natural image annotation tasks demonstrate the effectiveness of the proposed method.
Reverse time migration based on normalized wavefield decomposition imaging condition
YU Jianglong; HAN Liguo; ZHOU Yan; ZHANG Yongsheng
2016-01-01
With the increasing complexity of prospecting objectives, reverse time migration (RTM) has attracted more and more attention due to its outstanding imaging quality. RTM is based on the two-way wave equation, so it avoids the dip limitations of traditional one-way wave-equation migration, precisely images reverse branches, prism waves and multiply-reflected waves, and obtains accurate dynamic information. However, its huge demands for storage and computation, as well as low-frequency noise, restrict its wide application. The normalized cross-correlation imaging conditions based on wavefield decomposition are derived from the traditional cross-correlation imaging condition; they eliminate the low-frequency noise effectively and improve the imaging resolution. The practical procedure is to separate the source and receiver wavefields into one-way components and then apply the cross-correlation imaging condition to the separated wavefields. In this way, the resolution and precision of the imaging result are greatly improved.
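The source-normalized cross-correlation imaging condition can be sketched pointwise as I(x) = Σ_t S(x,t) R(x,t) / Σ_t S(x,t)², where S is the source wavefield and R the back-propagated receiver wavefield. The arrays below are synthetic stand-ins for the (post-separation) one-way wavefields, not output of an actual wave-equation solver:

```python
import numpy as np

rng = np.random.default_rng(3)
nx, nt = 5, 200                      # image points x time samples
S = rng.standard_normal((nx, nt))    # toy source wavefield
# Receiver wavefield correlated with the source (reflectivity ~0.7)
# plus weak incoherent noise.
R = 0.7 * S + 0.1 * rng.standard_normal((nx, nt))

num = np.sum(S * R, axis=1)          # zero-lag cross-correlation
den = np.sum(S * S, axis=1)          # source illumination (normalizer)
image = num / den
print(image.round(2))                # each value close to 0.7
```

Dividing by the source illumination compensates for uneven energy at each image point, which is what lets the normalized condition suppress the low-frequency artifacts of the plain cross-correlation.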
WANG Bing; SHU Jiwu; ZHENG Weimin; WANG Jinzhao; CHEN Min
2005-01-01
A hybrid decomposition method for molecular dynamics simulations was presented, using simultaneously spatial decomposition and force decomposition to fit the architecture of a cluster of symmetric multi-processor (SMP) nodes. The method distributes particles between nodes based on the spatial decomposition strategy to reduce inter-node communication costs. The method also partitions particle pairs within each node using the force decomposition strategy to improve the load balance for each node. Simulation results for a nucleation process with 4 000 000 particles show that the hybrid method achieves better parallel performance than either spatial or force decomposition alone, especially when applied to a large scale particle system with non-uniform spatial density.
Zhao, Weichen; Sun, Zhuo; Kong, Song
2016-10-01
Wireless devices can be identified by a fingerprint extracted from the transmitted signal, which is useful in wireless communication security and other fields. This paper presents a method that extracts a fingerprint based on the phase noise of the signal and multiple-level wavelet decomposition. The phase of the signal is extracted first and then decomposed by multiple-level wavelet decomposition; a statistic of each wavelet coefficient vector is used to construct the fingerprint. In addition, the relationship between the wavelet decomposition level and recognition accuracy is simulated, and an advisable decomposition level is identified. Compared with previous methods, ours is simpler, and the recognition accuracy remains high when the signal-to-noise ratio (SNR) is low.
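A minimal sketch of the fingerprinting idea, using a hand-rolled multilevel Haar decomposition in place of whatever wavelet family the authors used (a library such as PyWavelets would offer richer choices), with per-level variance as the statistic; all parameter choices here are illustrative assumptions:

```python
import numpy as np

def haar_multilevel(x, levels):
    """Multiple-level Haar wavelet decomposition: at each level, split
    the approximation into detail (differences) and a coarser
    approximation (averages), both scaled by 1/sqrt(2)."""
    details, approx = [], np.asarray(x, float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))
        approx = (even + odd) / np.sqrt(2)
    return details, approx

def fingerprint(phase, levels=3):
    """Fingerprint = one statistic (variance here) per wavelet
    coefficient vector: each detail level plus the final approximation."""
    details, approx = haar_multilevel(phase, levels)
    return np.array([d.var() for d in details] + [approx.var()])

rng = np.random.default_rng(4)
phase = rng.standard_normal(64)      # stand-in for the extracted phase
print(fingerprint(phase).shape)      # -> (4,)
```

Two devices with different phase-noise characteristics would yield different per-level statistics, and the choice of `levels` is exactly the decomposition-level/accuracy trade-off the paper simulates.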
Behrens, R.; Minier, L.
1998-03-24
The thermal decomposition of ammonium perchlorate (AP) and ammonium-perchlorate-based composite propellants is studied using the simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) technique. The main objective of the present work is to evaluate whether the STMBMS can provide new data on these materials that will have sufficient detail on the reaction mechanisms and associated reaction kinetics to permit creation of a detailed model of the thermal decomposition process. Such a model is a necessary ingredient to engineering models of ignition and slow-cookoff for these AP-based composite propellants. Results show that the decomposition of pure AP is controlled by two processes. One occurs at lower temperatures (240 to 270 C), produces mainly H2O, O2, Cl2, N2O and HCl, and is shown to occur in the solid phase within the AP particles. 200 µm diameter AP particles undergo 25% decomposition in the solid phase, whereas 20 µm diameter AP particles undergo only 13% decomposition. The second process is dissociative sublimation of AP to NH3 + HClO4 followed by the decomposition of, and reaction between, these two products in the gas phase. The dissociative sublimation process occurs over the entire temperature range of AP decomposition, but only becomes dominant at temperatures above those for the solid-phase decomposition. AP-based composite propellants are used extensively in both small tactical rocket motors and large strategic rocket systems.
Statistical Analysis of the Ionosphere based on Singular Value Decomposition
Demir, Uygar; Arikan, Feza; Necat Deviren, M.; Toker, Cenk
2016-07-01
The ionosphere is made up of a spatio-temporally varying trend structure and secondary variations due to solar, geomagnetic, gravitational and seismic activities. Hence, it is important to monitor the ionosphere and acquire up-to-date information about its state, both to better understand the physical phenomena that cause the variability and to predict the effect of the ionosphere on HF and satellite communications and satellite-based positioning systems. To characterise the behaviour of the ionosphere, we propose to apply Singular Value Decomposition (SVD) to Total Electron Content (TEC) maps obtained from the TNPGN-Active (Turkish National Permanent GPS Network) CORS network, which consists of 146 GNSS receivers spread over Turkey. IONOLAB-TEC values estimated from each station are spatio-temporally interpolated using a Universal Kriging based algorithm with linear trend, namely IONOLAB-MAP, with very high spatial resolution. It is observed that the dominant singular value of the TEC maps is an indicator of the trend structure of the ionosphere, and the diurnal, seasonal and annual variability of this dominant value represents the solar effect on the midlatitude ionosphere. The secondary, smaller singular values are indicators of secondary variations, which can be significant especially during geomagnetic storms or seismic disturbances. The dominant singular values are related to physical basis vectors from which the ionosphere can be fully reconstructed. Therefore, the proposed method can be used both for monitoring the current state of a region and for predicting and tracking future states of the ionosphere using singular values and singular basis vectors. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR14/001 projects.
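As a toy illustration of the idea (not the IONOLAB processing chain; the map, grid sizes, and noise level below are invented), the dominant singular triplet of a matrix whose rows and columns index latitude and longitude captures a smooth rank-one trend:

```python
import numpy as np

# Synthetic stand-in for a TEC map: a smooth rank-1 "trend" plus small
# secondary variability (the roles the abstract assigns to the dominant
# and the smaller singular values, respectively).
rng = np.random.default_rng(1)
lat = np.linspace(0, 1, 40)
lon = np.linspace(0, 1, 60)
trend = 10.0 * np.outer(np.sin(np.pi * lat), np.cos(0.5 * np.pi * lon))
tec_map = trend + 0.1 * rng.normal(size=(40, 60))

U, s, Vt = np.linalg.svd(tec_map, full_matrices=False)

# Rank-1 reconstruction from the dominant singular triplet recovers the trend.
rank1 = s[0] * np.outer(U[:, 0], Vt[0, :])
rel_err = np.linalg.norm(rank1 - trend) / np.linalg.norm(trend)
```

The gap between the first and second singular values indicates how strongly the map is dominated by its trend component.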
Decomposition Criterion-based Redundancy Removal in Mechanical Structures
A. N. Bozhko
2014-01-01
Full Text Available The most important production-engineering design decisions for the assembly operation are the assembly sequence and the assembly chart; both are closely linked and are therefore recorded in a single process flow sheet, the assembly chart. The capability for ordered assembling and splitting into assembly units depends on a set of product design properties, chief among which are the positional mechanical connections used to locate parts within a product. An adequate mathematical model of the mechanical connections of a technical system is a hypergraph, which gives a correct description of location relations of variable locality. Analysis of an array of drawings shows that many designs contain redundant mechanical connections. An inequality relating |X|, the number of vertices of the hypergraph (parts), and |R|, the number of hyperedges (complete assembly bases), serves as the redundancy criterion. Excess mutual coordination is a harmful phenomenon which shows up at the design stage as unsolvable dimension chains and at the assembly stage as relocation. Redundant connections should be removed from a design at the earliest design-for-manufacturing stages. Removing connections generates mechanical structures with different assembly properties. The work offers some important criteria for generating irredundant mechanical structures. The paper considers in detail a maximum-decomposition criterion, which yields structures with the greatest capability to split into assembly units. It shows that such structures exhibit high flexibility in assembling and are adaptable to various specifications and production processes.
Decomposition-Based Decision Making for Aerospace Vehicle Design
Borer, Nicholas K.; Mavris, DImitri N.
2005-01-01
reader to observe how this technique can be applied to aerospace systems design and compare the results of this so-called Decomposition-Based Decision Making to more traditional design approaches.
Salloum, Maher N.; Gharagozloo, Patricia E.
2013-10-01
Metal particle beds have recently become a major technique for hydrogen storage. In order to extract hydrogen from such beds, it is crucial to understand the decomposition kinetics of the metal hydride. We are interested in obtaining a better understanding of the uranium hydride (UH3) decomposition kinetics. We first developed an empirical model by fitting data compiled from different experimental studies in the literature and quantified the uncertainty resulting from the scattered data. We found that the decomposition time range predicted by the obtained kinetics was in good agreement with published experimental results. Secondly, we developed a physics-based mathematical model to simulate the rate of hydrogen diffusion in a hydride particle during decomposition. We used this model to simulate the decomposition of the particles for temperatures ranging from 300 K to 1000 K while propagating parametric uncertainty, and evaluated the kinetics from the results. We compared the kinetics parameters derived from the empirical and physics-based models and found that the uncertainty in the kinetics predicted by the physics-based model covers the scattered experimental data. Finally, we used the physics-based kinetics parameters to simulate the effects of boundary resistances and powder morphological changes during decomposition in a continuum-level model. We found that the species change within the bed during decomposition accelerates the hydrogen flow by increasing the bed permeability, while the pressure buildup and the thermal barrier forming at the wall significantly impede the hydrogen extraction.
Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions
Hansen, Per Christian; Jensen, Søren Holdt
2007-01-01
We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV and ULLIV). In addition we show how the subspace-based algorithms can be evaluated and compared by means of simple FIR filter interpretations. The algorithms are illustrated with working Matlab code and applications in speech processing.
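The paper's algorithms are illustrated with Matlab code; a minimal Python sketch of the underlying rank-reduction idea (window length, rank, and test signal are arbitrary choices here, and the triangular ULV/URV variants are not reproduced) might look like:

```python
import numpy as np

def hankel_matrix(x, m):
    """Stack length-m sliding windows of x as rows."""
    n = len(x) - m + 1
    return np.array([x[i:i + m] for i in range(n)])

def subspace_denoise(x, m=32, rank=2):
    """Rank-reduction denoising: truncate the SVD of a Hankel data matrix,
    then average along anti-diagonals to map back to a signal."""
    H = hankel_matrix(x, m)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hk = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    y = np.zeros(len(x))
    counts = np.zeros(len(x))
    for i in range(Hk.shape[0]):
        y[i:i + m] += Hk[i]
        counts[i:i + m] += 1
    return y / counts

rng = np.random.default_rng(2)
t = np.arange(512)
clean = np.sin(2 * np.pi * 0.03 * t)   # narrowband "speech-like" tone
noisy = clean + 0.5 * rng.normal(size=512)
denoised = subspace_denoise(noisy)

err_noisy = np.linalg.norm(noisy - clean)
err_den = np.linalg.norm(denoised - clean)
```

A single sinusoid spans a two-dimensional signal subspace, so a rank-2 truncation keeps the tone and discards most of the broadband noise.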
Q-bosonization of the quantum group GL$_{q}$(2) based on the Gauss decomposition
Damaskinsky, E V; Damaskinsky, E V; Sokolov, M A
1995-01-01
A new method of q-bosonization for quantum groups, based on the Gauss decomposition of the transfer matrix of generators, is suggested. The simplest example, the quantum group GL_q(2), is considered in some detail.
Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes
Grigoryan, Artyom M.
2015-03-01
In this paper, we describe a new perspective on the application of Givens rotations to the QR-decomposition problem, similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which was introduced by Grigoryan (2006) and used in signal and image processing. Both real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotations, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap-transform method of QR-decomposition are given, the algorithms are described in detail, and MATLAB-based codes are included.
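The paper provides MATLAB codes; as a rough Python sketch of the classical real-valued Givens-rotation QR that the paper takes as its starting point (not the heap-transform variant it proposes), one might write:

```python
import numpy as np

def givens_qr(A):
    """QR decomposition of a real m-by-n matrix by Givens rotations.

    Each rotation zeroes one subdiagonal entry; accumulating the
    transposed rotations into Q gives A = Q @ R with R upper triangular.
    """
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    R = A.copy()
    for j in range(n):
        for i in range(m - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])       # 2x2 Givens rotation
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
    return Q, R

A = np.array([[4.0, 1.0], [2.0, 3.0], [0.0, 5.0]])
Q, R = givens_qr(A)
```

Because each step touches only two rows, Givens-based QR is well suited to systolic hardware and sparse matrices.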
Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng
2017-08-01
Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
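A minimal sketch of the decomposition idea, assuming the classical single-relaxation absorption form and invented relaxation times and strengths (the paper's exact parameterization via the effective specific heat may differ):

```python
import numpy as np

def single_relaxation_absorption(f, tau, strength):
    """Classical single-process relaxational absorption (per wavelength),
    peaking at f = 1 / (2*pi*tau)."""
    x = 2 * np.pi * f * tau
    return strength * x / (1 + x ** 2)

def multi_relaxation_absorption(f, taus, strengths):
    """The decomposition in the abstract: a multi-relaxation spectrum is
    the sum of its interior single-relaxation spectra."""
    return sum(single_relaxation_absorption(f, t, s)
               for t, s in zip(taus, strengths))

f = np.logspace(2, 7, 200)        # frequency grid, Hz
taus = [1e-4, 1e-6]               # two hypothetical relaxation times
strengths = [1.0, 0.4]
spectrum = multi_relaxation_absorption(f, taus, strengths)
```

With this additive model, measurements at 2N frequencies constrain the N (tau, strength) pairs, which is the reconstruction step the abstract describes.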
2012-01-01
This paper describes details of an automatic matrix decomposition approach for a reaction-based stream water quality model. The method yields a set of equilibrium equations, a set of kinetic-variable transport equations involving kinetic reactions only, and a set of component transport equations involving no reactions. Partial decomposition of the system of water quality constituent transport equations is performed via Gauss-Jordan column reduction of the reaction network by pivoting on equil...
Formation Pattern Based on Modified Cell Decomposition Algorithm
Iswanto Iswanto
2017-06-01
The purpose of this paper is to present shortest-path algorithms that let quadrotors form up quickly and avoid obstacles in an unknown area. Three algorithms are combined, namely fuzzy, cell decomposition, and potential field algorithms. The cell decomposition algorithm, derived from graph theory, is used to create maps of robot formations, and the fuzzy algorithm is an artificial-intelligence control algorithm used for robot navigation. The combination of these two algorithms alone cannot produce an optimal formation, because quadrotors that are already hovering must wait for others that cannot find the shortest distance to the formation; the longer the quadrotors take to make a formation, the more energy they use. This is overcome by adding the potential field algorithm, which assigns weights to the paths planned for the quadrotors. The proposed algorithms show that multiple quadrotors can make a formation quickly because they avoid obstacles and find the shortest path, so the time required to reach the goal position is short.
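As a toy illustration of shortest-path search over a decomposed free-space map (the grid, 4-connectivity, and function name are invented; the paper's fuzzy and potential-field components are not reproduced), breadth-first search over free cells finds a shortest path:

```python
from collections import deque

def cell_decomposition_path(grid, start, goal):
    """BFS over free cells: returns a shortest 4-connected path from
    start to goal as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# 0 = free cell, 1 = obstacle.
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = cell_decomposition_path(grid, (0, 0), (3, 3))
```

A potential-field term could then be layered on top by weighting cells instead of treating them as uniformly traversable.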
A weighted polynomial based material decomposition method for spectral x-ray CT imaging
Wu, Dufan; Zhang, Li; Zhu, Xiaohua; Xu, Xiaofei; Wang, Sen
2016-05-01
Currently in photon-counting-based spectral x-ray computed tomography (CT) imaging, pre-reconstruction basis material decomposition is an effective way to reconstruct the densities of various materials, but the iterative maximum-likelihood method requires precise spectrum information and is time-costly. In this paper, a novel non-iterative decomposition method based on polynomials is proposed for spectral CT, which optimizes the noise performance when there are more energy bins than basis materials. Several subsets are taken from all the energy bins and conventional polynomials are established for each of them. The decomposition results from the polynomials are summed with pre-calculated weighting factors, designed to minimize the overall noise. Numerical studies showed that the decomposition noise of the proposed method was close to the Cramer-Rao lower bound under Poisson noise. Furthermore, experiments were carried out with an XCounter Filte X1 photon counting detector for two-material and three-material decomposition for validation.
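The weighted summation step resembles classical inverse-variance weighting; the following sketch (with invented variances and a single basis material, not the paper's actual weighting derivation) shows how weights proportional to inverse variance minimize the noise of the combined estimate:

```python
import numpy as np

rng = np.random.default_rng(3)
true_density = 2.0

# Hypothetical decomposition results for one basis material from three
# energy-bin subsets, each with a different (known) noise variance.
variances = np.array([0.04, 0.09, 0.25])
estimates = true_density + rng.normal(0.0, np.sqrt(variances),
                                      size=(10000, 3))

# Weights minimizing the variance of the combined estimate (sum to 1).
w = (1.0 / variances) / np.sum(1.0 / variances)
combined = estimates @ w

var_combined = combined.var()
var_best_single = estimates[:, 0].var()
```

The combined variance 1 / sum(1/sigma_i^2) is below even the best single subset's variance, which is the motivation for weighting multiple polynomial decompositions.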
Kohei Arai
2013-06-01
A category decomposition method based on matched filtering is proposed for un-mixing of mixed pixels (mixels) acquired with spaceborne hyperspectral radiometers. Simulation studies with mixed pixels synthesized from spectral reflectance data in the USGS spectral library, as well as actual airborne hyperspectral radiometer imagery, show that the proposed method works well, with acceptable decomposition accuracy.
GEARBOX FAULT DIAGNOSIS BASED ON EMPIRICAL MODE DECOMPOSITION
Shen Guoji; Tao Limin; Chen Zhongsheng
2004-01-01
Time synchronous averaging of vibration data is a fundamental technique for gearbox diagnosis. Currently, this technique relies on a hardware tachometer to provide phase-synchronous information. Empirical mode decomposition (EMD) is introduced to replace time synchronous averaging of the gearbox vibration signal. With it, any complicated dataset can be decomposed into a finite and often small number of intrinsic mode functions (IMFs). The key problem is how to ensure that vibration signals induced by gear defects can be sifted out by EMD. The characteristic vibration signals of gear defects are proved to be IMFs, which makes it possible to use EMD for the diagnosis of gearbox faults. The method is validated on recordings of the vibration of a single-stage spiral bevel gearbox with fatigue pitting. The results show that EMD is a powerful tool for extracting characteristic information from noisy vibration signals.
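A bare-bones EMD sketch (linear envelope interpolation instead of the standard cubic splines, fixed sifting counts, and an invented two-tone test signal) can illustrate how a fast component is sifted out as the first IMF:

```python
import numpy as np

def sift_once(x):
    """One sifting step: subtract the mean of the upper and lower envelopes
    (linear interpolation between extrema for simplicity; the standard
    algorithm uses cubic splines)."""
    t = np.arange(len(x))
    maxima = [i for i in range(1, len(x) - 1)
              if x[i] >= x[i - 1] and x[i] >= x[i + 1]]
    minima = [i for i in range(1, len(x) - 1)
              if x[i] <= x[i - 1] and x[i] <= x[i + 1]]
    if len(maxima) < 2 or len(minima) < 2:
        return x, False           # too few extrema: sifting must stop
    upper = np.interp(t, maxima, x[maxima])
    lower = np.interp(t, minima, x[minima])
    return x - (upper + lower) / 2.0, True

def emd(x, max_imfs=4, n_sifts=8):
    """Decompose x into intrinsic mode functions plus a residual."""
    imfs, residual = [], np.asarray(x, dtype=float)
    for _ in range(max_imfs):
        h, ok = residual.copy(), True
        for _ in range(n_sifts):
            h, ok = sift_once(h)
            if not ok:
                break
        if not ok:
            break
        imfs.append(h)
        residual = residual - h
    return imfs, residual

t = np.linspace(0, 1, 1000)
fast = np.sin(2 * np.pi * 30 * t)   # stands in for a gear-defect component
slow = np.sin(2 * np.pi * 3 * t)
imfs, residual = emd(fast + slow)
```

The first IMF tracks the high-frequency tone, mirroring how defect-induced oscillations are separated from the slower meshing components.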
Asymmetric color image encryption based on singular value decomposition
Yao, Lili; Yuan, Caojin; Qiang, Junjie; Feng, Shaotong; Nie, Shouping
2017-02-01
A novel asymmetric color image encryption approach using singular value decomposition (SVD) is proposed, in which the original color image is encrypted into a ciphertext shown as an indexed image. The red, green and blue components of the color image are encoded into a complex function, which is then separated into U, S and V parts by SVD. The data matrix of the ciphertext is obtained by multiplying the orthogonal matrices U and V while implementing phase truncation. The diagonal entries of the three diagonal matrices of the SVD results are extracted, scrambled, and combined to construct the colormap of the ciphertext; thus, the encrypted indexed image occupies less space than the original image. For decryption, the original color image cannot be recovered without the private keys, which are obtained from phase truncation and the orthogonality of V. Computer simulations are presented to evaluate the performance of the proposed algorithm, and the security of the proposed system is analyzed.
Non-linear scalable TFETI domain decomposition based contact algorithm
Dobiáš, J.; Pták, S.; Dostál, Z.; Vondrák, V.; Kozubek, T.
2010-06-01
The paper is concerned with the application of our original variant of the Finite Element Tearing and Interconnecting (FETI) domain decomposition method, called the Total FETI (TFETI), to solid mechanics problems exhibiting geometric, material, and contact non-linearities. The TFETI enforces the prescribed displacements by Lagrange multipliers, so that all the subdomains are 'floating', the kernels of their stiffness matrices are known a priori, and the projector to the natural coarse grid is more effective. The basic theory and relationships of both FETI and TFETI are briefly reviewed and a new version of the solution algorithm is presented. It is shown that applying the TFETI methodology to contact problems converts the original problem to a strictly convex quadratic programming problem with bound and equality constraints, so that effective, in a sense optimal, algorithms can be applied. Numerical experiments show that the method exhibits both numerical and parallel scalability.
Advances in audio watermarking based on singular value decomposition
Dhar, Pranab Kumar
2015-01-01
This book introduces audio watermarking methods for copyright protection, which has drawn extensive attention for securing digital data from unauthorized copying. The book is divided into two parts. First, an audio watermarking method in discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains using singular value decomposition (SVD) and quantization is introduced. This method is robust against various attacks and provides good imperceptible watermarked sounds. Then, an audio watermarking method in fast Fourier transform (FFT) domain using SVD and Cartesian-polar transformation (CPT) is presented. This method has high imperceptibility and high data payload and it provides good robustness against various attacks. These techniques allow media owners to protect copyright and to show authenticity and ownership of their material in a variety of applications. · Features new methods of audio watermarking for copyright protection and ownership protection · Outl...
Chen, Linfei; Gao, Xiong; Chen, Xudong; He, Bingyu; Liu, Jingyu; Li, Dan
2016-04-01
In this paper, a new optical image cryptosystem is proposed based on two-beam coherent superposition and unequal modulus decomposition. Different from equal modulus decomposition or unit vector decomposition, the proposed method uses common vector decomposition to accomplish the encryption process. The original image is first Fourier transformed to obtain a complex function in the spectral domain. This complex distribution is decomposed into two vector components with unequal amplitude and phase by the common vector decomposition method. The two components are then modulated by two random phases and transformed from the spectral domain to the spatial domain; the amplitude parts are extracted as encryption results and the phase parts as private keys. The advantages of the proposed cryptosystem are that the four different pieces of phase and amplitude information created by common vector decomposition strengthen its security, and that it fully solves the silhouette problem. Simulation results are presented to show the feasibility and the security of the proposed cryptosystem.
Decomposition and aromatization of ethanol on ZSM-based catalysts.
Barthos, R; Széchenyi, A; Solymosi, F
2006-11-01
The adsorption, desorption, and reactions of ethanol have been investigated on pure and promoted ZSM-5 catalysts. FTIR spectroscopy indicated the formation of a strongly bonded ethoxy species on ZSM-5(80) at 300 K. TPD experiments following the adsorption of ethanol on both ZSM-5 and Mo2C/ZSM-5 have shown desorption profiles corresponding to unreacted ethanol and decomposition products (H2O, H2, CH3CHO, C4H10O, and C2H4). The main reaction pathway of ethanol on pure ZSM-5 is the dehydration reaction yielding ethylene, small amounts of hydrocarbons, and aromatics. Deposition of different additives, such as Mo2C, ZnO, and Ga2O3 on zeolite, greatly promoted the formation of benzene and toluene at 773-973 K, very likely by catalyzing the aromatization of ethylene formed in the dehydration process of ethanol. Separate studies of the reaction of ethylene revealed that the previous additives markedly enhanced the selectivity and the yield of aromatics on ZSM-5.
Sparse time-frequency decomposition based on dictionary adaptation.
Hou, Thomas Y; Shi, Zuoqiang
2016-04-13
In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adaptive to one signal rather than a training set in dictionary learning. This dictionary adaptation problem is solved by using the augmented Lagrangian multiplier (ALM) method iteratively. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers and polluted by noise and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions.
Implementation of QR-decomposition based on CORDIC for unitary MUSIC algorithm
Lounici, Merwan; Luan, Xiaoming; Saadi, Wahab
2013-07-01
DOA (Direction Of Arrival) estimation with subspace methods such as MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique) is based on an accurate estimation of the eigenvalues and eigenvectors of the covariance matrix. Here, QR decomposition is implemented with the COordinate Rotation DIgital Computer (CORDIC) algorithm; QRD requires only additions and shifts [6], so it is faster and more regular than other methods. In this article the hardware architecture of an EVD (EigenValue Decomposition) processor based on a TSA (triangular systolic array) for QR decomposition is proposed. Using Xilinx System Generator (XSG), the design is implemented and the estimated logic device resource values are presented for different matrix sizes.
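A small software model of CORDIC in vectoring mode (floating-point here for clarity; a hardware QRD would use fixed-point adds and shifts) shows how a vector is rotated onto the x-axis, yielding the magnitude and angle a Givens rotation needs:

```python
import math

def cordic_vectoring(x, y, iterations=32):
    """CORDIC in vectoring mode: drive y to zero using only additions and
    binary shifts; returns (magnitude, angle) of the input vector."""
    # Scaling constant K = prod(1/sqrt(1 + 2^-2i)) for this many iterations.
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    angle = 0.0
    for i in range(iterations):
        d = -1.0 if y >= 0 else 1.0         # rotate toward the x-axis
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        angle -= d * math.atan(2.0 ** -i)   # accumulate the applied angle
    return K * x, angle

r, theta = cordic_vectoring(3.0, 4.0)
```

In a triangular systolic array, the boundary cells run exactly this vectoring mode to compute rotation angles, which the internal cells then apply in rotation mode.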
Surface stress-based biosensors.
Sang, Shengbo; Zhao, Yuan; Zhang, Wendong; Li, Pengwei; Hu, Jie; Li, Gang
2014-01-15
Surface stress-based biosensors, as one kind of label-free biosensor, have attracted much attention for information gathering and measurement in biological, chemical and medical applications with the development of technology and society. This kind of biosensor offers many advantages, such as short response time (less than milliseconds) and typical sensitivity at the nanogram, picoliter, femtojoule and attomolar level; furthermore, it simplifies sample preparation and testing procedures. In this work, progress made towards the use of surface stress-based biosensors for achieving better performance is critically reviewed, including our recent achievement, the optimally circular membrane-based biosensors and biosensor array. The remaining scientific and technological challenges in this field are summarized, and critical remarks and future steps towards the ultimate surface stress-based biosensors are addressed.
Decomposition method of complex optimization model based on global sensitivity analysis
Qiu, Qingying; Li, Bing; Feng, Peien; Gao, Yu
2014-07-01
Current research on decomposition methods for complex optimization models is mostly based on the principle of disciplines, problems or components. However, numerous coupling variables appear among the decomposed sub-models, making decomposed optimization inefficient and ineffective. Although some collaborative optimization methods have been proposed to handle the coupling variables, there is no strategy for reducing the coupling degree among the sub-models at the moment a complex optimization model is first decomposed. Therefore, this paper proposes a decomposition method based on global sensitivity information. In this method, the complex optimization model is decomposed so as to minimize the sum of sensitivities between design functions and design variables across different sub-models. Design functions and design variables that are sensitive to each other are assigned to the same sub-model as much as possible, to reduce the impact on other sub-models caused by changes of coupling variables in one sub-model. Two different collaborative optimization models of a gear reducer were built in the multidisciplinary design optimization software iSIGHT; the results show that the proposed decomposition method requires fewer analyses and increases computational efficiency by 29.6%. The method is also successfully applied to the complex optimization problem of hydraulic excavator working devices, showing that it can reduce the mutual coupling degree between sub-models. In summary, this research proposes a decomposition method based on global sensitivity information that minimizes the linkages among sub-models after decomposition, provides a reference for decomposing complex optimization models, and has practical engineering significance.
Batakliev Todor
2014-06-01
Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis, and catalytic decomposition of ozone, supplemented by a review of kinetics and catalyst characterization that ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals has stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. The kinetics of ozone decomposition has been determined to be first order. A mechanism for the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates.
Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques
2012-09-01
The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes the approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method compared to the original EMD algorithmic version was illustrated in a recent paper. Several 2-D extensions of the EMD method have recently been proposed, but despite some effort, they appear to perform poorly and are very time consuming. So in this paper, an extension of the PDE-based approach to 2-D space is extensively described. The approach has been applied to both signal and image decomposition, and the obtained results, including those provided for image decomposition, confirm the usefulness of the new PDE-based sifting process for decomposing various kinds of data. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
Logic synthesis strategy based on BDD decomposition and PAL-oriented optimization
Opara, Adam; Kania, Dariusz
2015-12-01
A new logic synthesis strategy for PAL-based CPLDs is presented in the paper. The approach consists of an original method of two-stage BDD-based decomposition and a two-level PAL-oriented optimization, and aims at balanced (speed/area) optimization. The first element of the strategy is an original PAL-oriented decomposition, which sequentially searches for an input partition that makes the free block implementable in a single PAL-based logic block containing a predefined number of product terms. This non-standard decomposition minimizes the area of the implemented circuit and reduces the number of logic blocks needed in the programmable structure. The second element of the strategy is oriented towards speed optimization and is based on utilizing tri-state buffers. Results of experiments prove that the presented synthesis strategy is especially effective for CPLD structures consisting of PAL-based logic blocks with a low number of product terms.
González, Paula Fernández; Presno, Mª José
2014-01-01
This book addresses several index decomposition analysis methods to assess progress made by EU countries in the last decade in relation to energy and climate change concerns. Several applications of these techniques are carried out in order to decompose changes in both energy and environmental aggregates. In addition to this, a new methodology based on classical spline approximations is introduced, which provides useful mathematical and statistical properties. Once a suitable set of determinant factors has been identified, these decomposition methods allow the researcher to quantify the respec
Ammonia synthesis and decomposition on a Ru-based catalyst modeled by first-principles
Hellman, A.; Honkala, Johanna Karoliina; Remediakis, Ioannis
2009-01-01
A recently published first-principles model for ammonia synthesis on an unpromoted Ru-based catalyst is extended to also describe ammonia decomposition. In addition, further analysis concerning trends in ammonia productivity, surface conditions during the reaction, and macro-properties, such as apparent activation energies and reaction orders, is provided. All observed trends in activity are captured by the model, and the absolute value of ammonia synthesis/decomposition productivity is predicted to within a factor of 1-100 depending on the experimental conditions. Moreover it is shown: (i...
First-principle based modeling of urea decomposition kinetics in aqueous solutions
Nicolle, André; Cagnina, Stefania; de Bruin, Theodorus
2016-11-01
This study aims at validating a multi-scale modeling methodology based on an implicit solvent model for urea thermal decomposition pathways in aqueous solutions. The influence of the number of cooperative water molecules on kinetics was highlighted. The obtained kinetic model is able to accurately reproduce urea decomposition in aqueous phase under a variety of experimental conditions from different research groups. The model also highlights the competition between HNCO desorption to gas phase and hydrolysis in aqueous phase, which may influence SCR depollution process operation.
ECG baseline wander correction based on mean-median filter and empirical mode decomposition.
Xin, Yi; Chen, Yu; Hao, Wei Tuo
2014-01-01
A novel approach to ECG baseline wander correction based on a mean-median filter and empirical mode decomposition is presented in this paper. The low-frequency parts of the original signals were removed by the mean-median filter in a nonlinear way to obtain the baseline wander estimate; the resulting series of IMFs was then sifted by t-test after empirical mode decomposition. The proposed method, tested on the ECG signals in the MIT-BIH Arrhythmia database and the European ST-T database, is more effective than other baseline wander removal methods.
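The baseline-estimation step described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the window size, and the exact way the running mean and running median are combined are all assumptions.

```python
import numpy as np

def mean_median_baseline(signal, window):
    """Estimate slow baseline wander as the average of a running median
    and a running mean over an odd-length window (illustrative sketch)."""
    half = window // 2
    padded = np.pad(signal, half, mode="edge")
    med = np.array([np.median(padded[i:i + window]) for i in range(len(signal))])
    avg = np.array([padded[i:i + window].mean() for i in range(len(signal))])
    return 0.5 * (med + avg)

# Usage: remove a slow linear drift from a synthetic fast oscillation.
t = np.linspace(0.0, 1.0, 500)
drift = 0.5 * t                       # slow baseline wander
beats = np.sin(2 * np.pi * 30 * t)   # fast "ECG-like" component
sig = drift + beats
corrected = sig - mean_median_baseline(sig, 101)
```

Because the median is insensitive to the fast oscillation, the estimated baseline tracks the drift and the corrected signal is close to the fast component alone.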
Sang-Eun Park
2012-05-01
In this paper, a three-component power decomposition for polarimetric SAR (PolSAR) data with an adaptive volume scattering model is proposed. The volume scattering model is assumed to be reflection-symmetric but parameterized. For each image pixel, the decomposition first determines the adaptive parameter based on a matrix similarity metric; a respective scattering power component is then retrieved with the established procedure. It is shown that the proposed method completely eliminates negative powers as a result of the adaptive volume scattering model. Experiments with PolSAR data from both the NASA/JPL (National Aeronautics and Space Administration/Jet Propulsion Laboratory) Airborne SAR (AIRSAR) and the JAXA (Japan Aerospace Exploration Agency) ALOS-PALSAR also demonstrate that the proposed method not only obtains similar or better results in vegetated areas compared to the existing Freeman-Durden decomposition but also helps to improve discrimination of urban regions.
A Nitsche-based domain decomposition method for hypersingular integral equations
Chouly, Franz
2011-01-01
We introduce and analyze a Nitsche-based domain decomposition method for the solution of hypersingular integral equations. This method allows for discretizations with non-matching grids without the necessity of a Lagrangian multiplier, as opposed to the traditional mortar method. We prove its almost quasi-optimal convergence and underline the theory by a numerical experiment.
HARMONIC COMPONENT EXTRACTION FROM A CHAOTIC SIGNAL BASED ON EMPIRICAL MODE DECOMPOSITION METHOD
LI Hong-guang; MENG Guang
2006-01-01
A novel approach to extracting a harmonic component from a chaotic signal generated by a Duffing oscillator was proposed. Based on empirical mode decomposition (EMD) and the concept that any signal is composed of a series of simple intrinsic modes, the harmonic components were extracted from the chaotic signals. Simulation results show the approach is satisfactory.
Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions
Hansen, Per Christian; Jensen, Søren Holdt
2007-01-01
We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using bo...... with working Matlab code and applications in speech processing....
Efficient GOCE satellite gravity field recovery based on least-squares using QR decomposition
Baur, O.; Austen, G.; Kusche, J.
2007-01-01
We develop and apply an efficient strategy for Earth gravity field recovery from satellite gravity gradiometry data. Our approach is based upon the Paige-Saunders iterative least-squares method using QR decomposition (LSQR). We modify the original algorithm for space-geodetic applications: firstly,
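As context for the abstract above: the direct (non-iterative) way to solve a least-squares problem via QR decomposition can be sketched in a few lines. Note this is the plain thin-QR route, not the Paige-Saunders iterative LSQR algorithm the authors use, which avoids forming the factorization explicitly for very large gradiometry systems; the toy system below is an assumption for illustration.

```python
import numpy as np

def lstsq_qr(A, b):
    """Solve min ||Ax - b||_2 via the thin QR factorization A = QR,
    assuming A has full column rank: x = R^{-1} (Q^T b)."""
    Q, R = np.linalg.qr(A)              # Q: m x n with orthonormal columns, R: n x n upper triangular
    return np.linalg.solve(R, Q.T @ b)  # back-substitution on the triangular system

# Usage: a small overdetermined system standing in for a huge one.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_true = np.arange(1.0, 6.0)
b = A @ x_true + 1e-8 * rng.standard_normal(200)
x = lstsq_qr(A, b)
```

The QR route is numerically preferable to forming the normal equations A^T A x = A^T b, since it avoids squaring the condition number of A.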
Distributed Damage Estimation for Prognostics based on Structural Model Decomposition
National Aeronautics and Space Administration — Model-based prognostics approaches capture system knowledge in the form of physics-based models of components that include how they fail. These methods consist of...
Li, Biyuan; Tang, Chen; Zhu, Xinjun; Chen, Xia; Su, Yonggang; Cai, Yuanxue
2016-11-01
The orthogonal fringe projection technique has wide practical application nowadays. In this paper, we propose a 3D shape retrieval method for orthogonal composite fringe projection based on a combination of variational image decomposition (VID) and variational mode decomposition (VMD). We propose a new image decomposition model to extract the orthogonal fringe, and then introduce the VMD method to separate the horizontal and vertical fringes from the orthogonal fringe. Lastly, the 3D shape information is obtained by the differential 3D shape retrieval method (D3D). We test the proposed method on a simulated pattern and on two actual objects with edges or abrupt changes in height, and compare it with the recent, related and advanced D3D method in terms of both quantitative evaluation and visual quality. The experimental results demonstrate the validity of the proposed method.
Michelson interferometer based interleaver design using classic IIR filter decomposition.
Cheng, Chi-Hao; Tang, Shasha
2013-12-16
An elegant method to design a Michelson interferometer based interleaver using a classic infinite impulse response (IIR) filter, such as a Butterworth, Chebyshev, or elliptic filter, as a starting point is presented. The proposed design method allows engineers to design a Michelson interferometer based interleaver from specifications seamlessly. Simulation results are presented to demonstrate the validity of the proposed design method.
Morphology of residually stressed tubular tissues: Beyond the elastic multiplicative decomposition
Ciarletta, P.; Destrade, M.; Gower, A. L.; Taffetani, M.
2016-05-01
Many interesting shapes appearing in the biological world are formed by the onset of mechanical instability. In this work we consider how the build-up of residual stress can cause a solid to buckle. In all past studies a fictitious (virtual) stress-free state was required to calculate the residual stress. In contrast, we use a model which is simple and allows the prescription of any residual stress field. We specialize the analysis to an elastic tube subject to a two-dimensional residual stress, and find that incremental wrinkles can appear on its inner or its outer face, depending on the location of the highest value of the residual hoop stress. We further validate the predictions of the incremental theory with finite element simulations, which allow us to go beyond this threshold and predict the shape, number and amplitude of the resulting creases.
Kernel based eigenvalue-decomposition methods for analysing ham
Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming
2010-01-01
conditions and finding useful additives to keep the color from changing rapidly. To be able to prove which methods of storing and which additives work, Danisco wants to monitor the development of the color of meat in a slice of ham as a function of time, environment and ingredients. We have chosen to use multi...... methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel-based versions of these transformations. This meant implementing the kernel-based methods and developing new theory, since kernel-based MAF and MNF are not yet described in the literature. The traditional methods only...... have two factors that are useful for segmentation, and none of them can be used to segment the two types of meat. The kernel-based methods have many useful factors and are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most...
Performance of tensor decomposition-based modal identification under nonstationary vibration
Friesen, P.; Sadhu, A.
2017-03-01
Health monitoring of civil engineering structures is of paramount importance when they are subjected to natural hazards or extreme climatic events such as earthquakes, strong wind gusts or man-made excitations. Most traditional modal identification methods rely on a stationarity assumption for the vibration response and pose difficulties when analyzing nonstationary vibration (e.g. earthquake- or human-induced vibration). Recently, tensor decomposition based methods have emerged as a powerful yet generic blind (i.e. not requiring knowledge of input characteristics) signal decomposition tool for structural modal identification. In this paper, a tensor decomposition based system identification method is further explored to estimate modal parameters using nonstationary vibration generated by either earthquake or pedestrian-induced excitation in a structure. The effects of lag parameters and sensor densities on tensor decomposition are studied with respect to the extent of nonstationarity of the responses, characterized by the stationary duration and peak ground acceleration of the earthquake. A suite of more than 1400 earthquakes is used to investigate the performance of the proposed method under a wide variety of ground motions, utilizing both complete and partial measurements of a high-rise building model. Apart from earthquakes, human-induced nonstationary vibration of a real-life pedestrian bridge is also used to verify the accuracy of the proposed method.
Primal Decomposition-Based Method for Weighted Sum-Rate Maximization in Downlink OFDMA Systems
Weeraddana Chathuranga
2010-01-01
We consider the weighted sum-rate maximization problem in downlink Orthogonal Frequency Division Multiple Access (OFDMA) systems. Motivated by the increasing popularity of OFDMA in future wireless technologies, a low-complexity suboptimal resource allocation algorithm is obtained for joint optimization of multiuser subcarrier assignment and power allocation. The algorithm is based on an approximated primal decomposition method, inspired by exact primal decomposition techniques. The original nonconvex optimization problem is divided into two subproblems which can be solved independently. Numerical results are provided to compare the performance of the proposed algorithm to Lagrange relaxation based suboptimal methods as well as to an optimal exhaustive-search-based method. Despite its reduced computational complexity, the proposed algorithm provides close-to-optimal performance.
Tree Decomposition based Steiner Tree Computation over Large Graphs
2013-01-01
In this paper, we present an exact algorithm for the Steiner tree problem. The algorithm is based on certain pre-computed index structures. Our algorithm offers a practical solution for the Steiner tree problem on graphs of large size with a bounded number of terminals.
Classification of Underwater Signals Using Wavelet-Based Decompositions
1998-06-01
The first method, proposed by Learned and Willsky [21], uses the SVD information obtained from the power mapping; the second one selects the most within-a-class...
Decomposition-based recovery of absorbers in turbid media
Campbell, S. D.; Goodin, I. L.; Grobe, S. D.; Su, Q.; Grobe, R.
2007-12-01
We suggest that the concept of the point-spread function traditionally used to predict the blurred image pattern of various light sources embedded inside turbid media can be generalized under certain conditions to predict also the presence and location of spatially localized absorbing inhomogeneities based on shadow point-spread functions associated with each localized absorber in the medium. The combined image obtained from several absorbers can then be decomposed approximately into the arithmetic sums of these individual shadow point-spread functions with suitable weights that can be obtained from multiple-regression analysis. This technique permits the reconstruction of the location of absorbers.
Decomposition based recovery of absorbers in turbid media
Goodin, Isaac; Rogers, Ben; Su, Q.; Grobe, R.
2009-11-01
We suggest that the concept of the point-spread function traditionally used to predict the blurred image pattern of various light sources embedded inside turbid media can be generalized under certain conditions to predict also the presence and location of spatially localized absorbing inhomogeneities based on shadow point spread functions associated with each localized absorber in the medium. The combined image obtained from several absorbers can then be decomposed approximately into the arithmetic sums of these individual shadow point spread functions with suitable weights that can be obtained from multiple regression analysis. This technique permits the reconstruction of the location of absorbers.
Hua, Wei; Qi, Ji; Jia, Meng
2017-05-01
Switched reluctance machines (SRMs) have attracted extensive attention due to their inherent advantages, including a simple and robust structure, low cost, excellent fault tolerance and a wide speed range. However, one of the bottlenecks limiting SRMs in further applications is their unfavorable torque ripple, and consequently noise and vibration, due to the unique doubly-salient structure and pulse-current-based power supply method. In this paper, an inductance Fourier decomposition based current-hysteresis-control (IFD-CHC) strategy is proposed to reduce the torque ripple of SRMs. After obtaining a nonlinear inductance-current-position model based on Fourier decomposition, reference currents can be calculated from the reference torque and the derived inductance model. Both simulations and experimental results confirm the effectiveness of the proposed strategy.
Process-based network decomposition reveals backbone motif structure.
Wang, Guanyu; Du, Chenghang; Chen, Hao; Simha, Rahul; Rong, Yongwu; Xiao, Yi; Zeng, Chen
2010-06-08
A central challenge in systems biology today is to understand the network of interactions among biomolecules and, especially, the organizing principles underlying such networks. Recent analysis of known networks has identified small motifs that occur ubiquitously, suggesting that larger networks might be constructed in the manner of electronic circuits by assembling groups of these smaller modules. Using a unique process-based approach to analyzing such networks, we show for two cell-cycle networks that each of these networks contains a giant backbone motif spanning all the network nodes that provides the main functional response. The backbone is in fact the smallest network capable of providing the desired functionality. Furthermore, the remaining edges in the network form smaller motifs whose role is to confer stability properties rather than provide function. The process-based approach used in the above analysis has additional benefits: It is scalable, analytic (resulting in a single analyzable expression that describes the behavior), and computationally efficient (all possible minimal networks for a biological process can be identified and enumerated).
Communication-Based Decomposition Mechanisms for Decentralized MDPs
Goldman, Claudia V; 10.1613/jair.2466
2011-01-01
Multi-agent planning in stochastic environments can be framed formally as a decentralized Markov decision problem. Many real-life distributed problems that arise in manufacturing, multi-robot coordination and information gathering scenarios can be formalized using this framework. However, finding the optimal solution in the general case is hard, limiting the applicability of recently developed algorithms. This paper provides a practical approach for solving decentralized control problems when communication among the decision makers is possible, but costly. We develop the notion of communication-based mechanism that allows us to decompose a decentralized MDP into multiple single-agent problems. In this framework, referred to as decentralized semi-Markov decision process with direct communication (Dec-SMDP-Com), agents operate separately between communications. We show that finding an optimal mechanism is equivalent to solving optimally a Dec-SMDP-Com. We also provide a heuristic search algorithm that converges...
CamShift Tracking Method Based on Target Decomposition
Chunbo Xiu
2015-01-01
In order to avoid inaccurate location or tracking failure caused by occlusion or pose variation, a novel tracking method is proposed based on the CamShift algorithm by decomposing the target into multiple subtargets that are located separately. Distance correlation matrices are constructed from the subtarget sets in the template image and the scene image to evaluate the correctness of the location results. The erroneous locations of subtargets can be corrected by solving an optimization function constructed according to the relative positions among the subtargets. The directions and sizes of the correctly located subtargets are updated with the CamShift algorithm to reduce background disturbance during tracking. Simulation results show that the method can locate and track the target and adapts well to scaling, translation, rotation, and occlusion. Furthermore, the computational cost of the method increases only slightly, and its average tracking time per frame is less than 25 ms, which meets the real-time requirement of TV tracking systems.
Predicting the reference evapotranspiration based on tensor decomposition
Misaghian, Negin; Shamshirband, Shahaboddin; Petković, Dalibor; Gocic, Milan; Mohammadi, Kasra
2016-09-01
Most of the available models for reference evapotranspiration (ET0) estimation are based on a single empirical equation for ET0. Thus, one of the main issues in ET0 estimation is the appropriate integration of time information and different empirical ET0 equations to determine ET0 and boost precision. The FAO-56 Penman-Monteith, adjusted Hargreaves, Blaney-Criddle, Priestley-Taylor, and Jensen-Haise equations were utilized in this study for estimating ET0 at two stations, Belgrade and Nis in Serbia, using data collected for the period 1980 to 2010. A third-order tensor is used to capture three-way correlations among months, years, and ET0 information. Afterward, the latent correlations among ET0 parameters were found by multiway analysis to enhance the quality of the prediction. The suggested method is valuable as it takes into account simultaneous relations between elements, boosts the prediction precision, and determines latent associations. Models are compared with respect to the coefficient of determination (R2), mean absolute error (MAE), and root-mean-square error (RMSE). The proposed tensor approach has an R2 value greater than 0.9 for all selected ET0 methods at both stations, which is acceptable for ET0 prediction. RMSE ranges between 0.247 and 0.485 mm day-1 at Nis station and between 0.277 and 0.451 mm day-1 at Belgrade station, while MAE is between 0.140 and 0.337 mm day-1 at Nis and between 0.208 and 0.360 mm day-1 at Belgrade. The best performances are achieved by the Priestley-Taylor model at Nis station (R2 = 0.985, MAE = 0.140 mm day-1, RMSE = 0.247 mm day-1) and the FAO-56 Penman-Monteith model at Belgrade station (MAE = 0.208 mm day-1, RMSE = 0.277 mm day-1, R2 = 0.975).
Nguyen van Ye, Romain; Del-Castillo-Negrete, Diego; Spong, D.; Hirshman, S.; Farge, M.
2008-11-01
A limitation of particle-based transport calculations is the noise due to limited statistical sampling; a key element for the success of these calculations is therefore the development of efficient denoising methods. Here we discuss denoising techniques based on Proper Orthogonal Decomposition (POD) and Wavelet Decomposition (WD). The goal is the reconstruction of smooth (denoised) particle distribution functions from discrete particle data obtained from Monte Carlo simulations. In 2-D, the POD method is based on low-rank truncations of the singular value decomposition of the data. For 3-D we propose the use of a generalized low-rank approximation of matrices technique. The WD denoising is based on the thresholding of empirical wavelet coefficients [Donoho et al., 1996]. The methods are illustrated and tested with Monte Carlo particle simulation data of plasma collisional relaxation including pitch angle and energy scattering. As an application we consider guiding-center transport with collisions in a magnetically confined plasma in toroidal geometry. The proposed noise reduction methods make it possible to achieve high levels of smoothness in the particle distribution function using significantly fewer particles in the computations.
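The 2-D POD step described above, low-rank truncation of the SVD, can be sketched in a few lines. The rank choice and the synthetic data below are illustrative assumptions; the paper's criterion for selecting the truncation rank is not reproduced here.

```python
import numpy as np

def pod_denoise(data, rank):
    """POD-style denoising: keep only the leading `rank` singular triplets
    of a 2-D data array (low-rank SVD truncation), discarding the noisy tail."""
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Usage: a rank-1 "distribution function" corrupted with sampling noise.
rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 3, 40)), np.cos(np.linspace(0, 2, 30)))
noisy = clean + 0.01 * rng.standard_normal(clean.shape)
denoised = pod_denoise(noisy, rank=1)
```

By the Eckart-Young theorem, the truncation is the best rank-`rank` approximation of the noisy data in the Frobenius norm, which is why it suppresses isotropic sampling noise while retaining the coherent structure.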
Enhanced Singular Value Decomposition based Fusion for Super Resolution Image Reconstruction
K. Joseph Abraham Sundar
2015-11-01
The singular value decomposition (SVD) plays a very important role in the field of image processing for applications such as feature extraction and image compression. The main objective here is to enhance the resolution of an image based on singular value decomposition. The original image and a subsequent sub-pixel-shifted image, subjected to image registration, are transferred to the SVD domain. An enhanced method of choosing the singular values from the SVD-domain images to reconstruct a high-resolution image using fusion techniques is proposed. This technique is called enhanced SVD-based fusion. Significant improvement in performance is observed by applying the enhanced SVD method before the various interpolation methods that are incorporated. The technique is highly advantageous and computationally fast, which is much needed for satellite imaging, high-definition television broadcasting, medical imaging diagnosis, military surveillance, remote sensing, etc.
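A minimal sketch of SVD-domain fusion of two registered images follows. The selection rule here (keep the larger singular value at each index, reconstruct with the first image's singular vectors) is a simplifying assumption; the paper's "enhanced" choice of singular values and its registration step are more involved.

```python
import numpy as np

def svd_fusion(img_a, img_b):
    """Fuse two same-size grayscale images in the SVD domain by keeping,
    at each index, the larger of the two singular values (illustrative)."""
    Ua, sa, Va = np.linalg.svd(img_a, full_matrices=False)
    sb = np.linalg.svd(img_b, compute_uv=False)
    # Reconstruct using img_a's singular vectors and the dominant spectrum.
    return (Ua * np.maximum(sa, sb)) @ Va

# Usage: fusing an image with itself must reproduce it exactly.
rng = np.random.default_rng(3)
A = rng.random((8, 8))
fused = svd_fusion(A, A)
```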
Adaptive Aggregation-based Domain Decomposition Multigrid for Twisted Mass Fermions
Alexandrou, Constantia; Finkenrath, Jacob; Frommer, Andreas; Kahl, Karsten; Rottmann, Matthias
2016-01-01
The Adaptive Aggregation-based Domain Decomposition Multigrid method (arXiv:1303.1377) is extended to two degenerate flavors of twisted mass fermions. By fine-tuning the parameters we achieve a speed-up of the order of a hundred times compared to the conjugate gradient algorithm at the physical value of the pion mass. A thorough analysis of the aggregation parameters is presented, which provides novel insight into multigrid methods for lattice QCD independently of the fermion discretization.
Steering laws analysis of SGCMGs based on singular value decomposition theory
ZHANG Jing-rui
2008-01-01
The steering laws of single gimbal control moment gyros (SGCMGs) are analyzed and compared in this paper for a spacecraft attitude control system based on singular value decomposition (SVD) theory. The mechanism by which steering laws escape singularity, and especially how they affect the singularity of the gimbal configuration and the output torque error, is studied using SVD theory. The performance of various steering laws is analyzed and compared quantitatively by simulation. The obtained results can be used as a reference by designers.
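The SVD-based view of CMG singularity described above boils down to monitoring the smallest singular value of the gimbal-configuration Jacobian. A hedged sketch (the example Jacobian is a made-up 3 x 4 array, not a real pyramid-cluster geometry):

```python
import numpy as np

def singularity_measure(jacobian):
    """Smallest singular value of the 3 x n CMG output Jacobian.
    It drops to zero at a singular gimbal configuration, where the
    cluster cannot produce torque in some direction."""
    return np.linalg.svd(jacobian, compute_uv=False).min()

# Usage: a well-conditioned configuration vs. a singular (rank-deficient) one.
J = np.array([[1.0, 0.0, 0.0, 0.5],
              [0.0, 1.0, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.5]])
m = singularity_measure(J)
```

Steering laws typically track a closely related scalar, sqrt(det(J J^T)), and add null-motion or torque error to steer gimbals away from configurations where this measure approaches zero.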
Layer Decomposition: An Effective Structure-based Approach for Scientific Workflow Similarity
Starlinger, Johannes; Cohen-Boulakia, Sarah; Khanna, Sanjeev; Davidson, Susan; Leser, Ulf
2014-01-01
Scientific workflows have become a valuable tool for large-scale data processing and analysis. This has led to the creation of specialized online repositories to facilitate workflow sharing and reuse. Over time, these repositories have grown to sizes that call for advanced methods to support workflow discovery, in particular for effective similarity search. Here, we present a novel and intuitive workflow similarity measure that is based on layer decomposition. Layer de...
Adaptive Hybrid Visual Servo Regulation of Mobile Robots Based on Fast Homography Decomposition
Chunfu Wu
2015-01-01
For a monocular camera-based mobile robot system, an adaptive hybrid visual servo regulation algorithm based on a fast homography decomposition method is proposed to drive the mobile robot to its desired position and orientation, even when the object's imaging depth and the camera's extrinsic position parameters are unknown. Firstly, the particular properties of the homography caused by the mobile robot's 2-DOF motion are exploited to derive a fast homography decomposition method. Secondly, the homography matrix and the extracted orientation error, together with a single feature point in the desired view, are used to form an error vector and its open-loop error function. Finally, Lyapunov-based techniques are exploited to construct an adaptive regulation control law, followed by experimental verification. The experimental results show that the proposed fast homography decomposition method is not only simple and efficient but also highly precise. Meanwhile, the designed control law enables mobile robot position and orientation regulation despite the lack of depth information and the camera's extrinsic position parameters.
Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta
2016-01-01
This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and the visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low and high frequency sub-bands. The low frequency sub-band is further divided into blocks of same size after shuffling it and then the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images, and is robust to withstand several image processing attacks. Comparison with the other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
CPUF - a chemical-structure-based polyurethane foam decomposition and foam response model.
Fletcher, Thomas H. (Brigham Young University, Provo, UT); Thompson, Kyle Richard; Erickson, Kenneth L.; Dowding, Kevin J.; Clayton, Daniel (Brigham Young University, Provo, UT); Chu, Tze Yao; Hobbs, Michael L.; Borek, Theodore Thaddeus III
2003-07-01
A Chemical-structure-based PolyUrethane Foam (CPUF) decomposition model has been developed to predict the fire-induced response of rigid, closed-cell polyurethane foam-filled systems. The model, developed for the B-61 and W-80 fireset foam, is based on a cascade of bond-breaking reactions that produce CO2. Percolation theory is used to dynamically quantify polymer fragment populations of the thermally degrading foam. The partition between condensed-phase polymer fragments and gas-phase polymer fragments (i.e. the vapor-liquid split) was determined using a vapor-liquid equilibrium model. The CPUF decomposition model was implemented into the finite element (FE) heat conduction codes COYOTE and CALORE, which support chemical kinetics and enclosure radiation. Elements were removed from the computational domain when the calculated solid mass fractions within the individual finite element decreased below a set criterion. Element removal, referred to as "element death," creates a radiation enclosure (assumed to be non-participating) as well as a decomposition front, which separates the condensed-phase encapsulant from the gas-filled enclosure. All of the chemistry parameters as well as thermophysical properties for the CPUF model were obtained from small-scale laboratory experiments. The CPUF model was evaluated by comparing predictions to measurements. The validation experiments included several thermogravimetric experiments at pressures ranging from ambient pressure to 30 bars. Larger, component-scale experiments were also used to validate the foam response model. The effects of heat flux, bulk density, orientation, embedded components, confinement and pressure were measured and compared to model predictions. Uncertainties in the model results were evaluated using a mean value approach. The measured mass loss in the TGA experiments and the measured location of the decomposition front were within the 95% prediction limit determined using the CPUF model for all of the
Khaled Loukhaoukha
2013-01-01
We present a new optimal watermarking scheme based on the discrete wavelet transform (DWT) and singular value decomposition (SVD) using multiobjective ant colony optimization (MOACO). A binary watermark is decomposed using a singular value decomposition. Then, the singular values are embedded in a detail subband of the host image. The trade-off between watermark transparency and robustness is controlled by multiple scaling factors (MSFs) instead of a single scaling factor (SSF). Determining the optimal values of the MSFs is a difficult problem; a multiobjective ant colony optimization is therefore used to determine these values. Experimental results show much improved performance of the proposed scheme in terms of transparency and robustness compared to other watermarking schemes. Furthermore, it does not suffer from a high probability of false positive detection of the watermarks.
Hu, Yiqun; Xie, Xinwen; Liu, Xingbin; Zhou, Nanrun
2017-07-01
A novel quantum multi-image encryption algorithm based on the iteration Arnold transform with parameters and image correlation decomposition is proposed, and a quantum realization of the iteration Arnold transform with parameters is designed. The corresponding low-frequency images are obtained by performing a 2-D discrete wavelet transform on each image, and these low-frequency images are then spliced randomly into one image. The new image is scrambled by the iteration Arnold transform with parameters, and the gray-level information of the scrambled image is encoded by quantum image correlation decomposition. For the encryption algorithm, the keys are the number of iterations, the added parameters, and the classical binary and orthonormal basis states. The key space, security and computational complexity are analyzed, and all of the analyses show that the proposed algorithm can encrypt multiple images simultaneously with lower computational complexity than its classical counterparts.
Benders' Decomposition Based Heuristics for Large-Scale Dynamic Quadratic Assignment Problems
Sirirat Muenvanichakul
2009-01-01
Problem statement: The Dynamic Quadratic Assignment Problem (DQAP) is NP-hard. A Benders decomposition based heuristic method is applied to the equivalent mixed-integer linear programming problem of the original DQAP. Approach: Approximate Benders Decomposition (ABD) generates an ensemble of a subset of feasible layouts for Approximate Dynamic Programming (ADP) to determine the sub-optimal solution. A Trust-Region Constraint (TRC) for the master problem in ABD and a Successive Adaptation Procedure (SAP) were implemented to accelerate the convergence rate of the method. Results: The sub-optimal solutions of large-scale DQAPs from the method and its variants compared well with other metaheuristic methods. Conclusion: The overall performance of the method is comparable to other metaheuristic methods for large-scale DQAPs.
Design of tailor-made chemical blend using a decomposition-based computer-aided approach
Yunus, Nor Alafiza; Gernaey, Krist; Manan, Z.A.
2011-01-01
…method reduces the search space in a systematic manner, and the general blend design problem is decomposed into two stages. The first stage investigates the mixture stability, where all unstable mixtures are eliminated and the stable blend candidates are retained for further testing (note that all blends… attributes (properties). The systematic computer-aided technique first establishes the search space, and then narrows it down in subsequent steps until a small number of feasible and promising candidates remain. At this point, experimental work may be conducted to verify if any or all the candidates satisfy… and their compositions and a set of desired target properties of the blended product as design constraints. This blend design problem is solved using a decomposition approach, which eliminates infeasible and/or redundant candidates gradually through a hierarchy of (property) model based constraints. This decomposition…
Jesmin F. Khan
2008-08-01
A novel approach for bidimensional empirical mode decomposition (BEMD) is proposed in this paper. BEMD decomposes an image into multiple hierarchical components known as bidimensional intrinsic mode functions (BIMFs). In each iteration of the process, two-dimensional (2D) interpolation is applied to a set of local maxima (minima) points to form the upper (lower) envelope. However, 2D scattered data interpolation methods incur long computation times and introduce other artifacts into the decomposition. This paper suggests a simple but effective method of envelope estimation that replaces the surface interpolation. In this method, order-statistics filters are used to obtain the upper and lower envelopes, where the filter size is derived from the data. Based on these properties, the proposed approach is termed fast and adaptive BEMD (FABEMD). Simulation results demonstrate that FABEMD is not only faster and adaptive, but also outperforms the original BEMD in terms of the quality of the BIMFs.
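The envelope-estimation idea is easy to see in one dimension, with running max/min (order-statistics) filters standing in for the 2-D version described above. This is a sketch under simplifying assumptions (1-D signal, fixed window size; the paper derives the window size from the data):

```python
# 1-D analogue of FABEMD's order-statistics envelope estimation.
def order_stat_envelopes(signal, window):
    """Upper/lower envelopes via running max/min (order-statistics) filters."""
    n, half = len(signal), window // 2
    upper, lower = [], []
    for i in range(n):
        seg = signal[max(0, i - half):min(n, i + half + 1)]
        upper.append(max(seg))   # MAX filter -> upper envelope
        lower.append(min(seg))   # MIN filter -> lower envelope
    return upper, lower

sig = [0, 3, 1, 4, 1, 5, 2, 6, 2, 7]
up, lo = order_stat_envelopes(sig, window=3)
mean_env = [(u + l) / 2 for u, l in zip(up, lo)]  # first step toward a (B)IMF
```

Subtracting `mean_env` from the signal and iterating is the sifting step that produces an intrinsic mode function.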
YI Jian-hua; ZHAO Feng-qi; XU Si-yu; GAO Hong-xu; HU Rong-zu
2008-01-01
The thermal decomposition behavior and non-isothermal reaction kinetics of double-base gun propellants containing the mixed ester of triethyleneglycol dinitrate (TEGDN) and nitroglycerin (NG) were investigated by thermogravimetry (TG), differential thermogravimetry (DTG), and differential scanning calorimetry (DSC) under a high-pressure dynamic atmosphere. The results show that the thermal decomposition of the mixed nitric ester gun propellants proceeds in two mass-loss stages: the nitric ester evaporates and decomposes in the first stage, and nitrocellulose and centralite II (C2) decompose in the second stage. The mass loss, the DTG peak points, and the termination temperatures of the two stages vary with the mass ratio of TEGDN to NG. There is only one obvious exothermic peak in the DSC curves at the different pressures. With increasing furnace pressure, the peak temperature decreases and the decomposition heat increases. With increasing TEGDN content, the decomposition heat decreases at 0.1 MPa and rises at high pressure. The mass ratio of TEGDN to NG has little effect on the exothermic peak temperatures in the DSC curves at the different pressures. The kinetic equation of the main exothermic decomposition reaction of the gun propellant TG0601 was determined as dα/dt = 10^21.59 (1 − α)^3 e^(−2.60×10^4/T), and the reaction mechanism of the process can be classified as a chemical reaction. The critical temperatures of thermal explosion (Tbe and Tbp) obtained from the onset temperature (Te) and the peak temperature (Tp) are 456.46 K and 473.40 K, respectively. ΔS≠, ΔH≠, and ΔG≠ of the decomposition reaction are 163.57 J·mol⁻¹·K⁻¹, 209.54 kJ·mol⁻¹, and 133.55 kJ·mol⁻¹, respectively.
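As a numeric sanity check, the fitted rate law quoted above can be evaluated directly; the conversion value α chosen here is an arbitrary illustration, not from the paper:

```python
import math

# Evaluate the fitted rate law for propellant TG0601 from the abstract:
#   dα/dt = 10^21.59 · (1 − α)^3 · exp(−2.60e4 / T)   (T in kelvin)
def rate(alpha, T):
    return 10 ** 21.59 * (1 - alpha) ** 3 * math.exp(-2.60e4 / T)

r_onset = rate(0.05, 456.46)  # at the critical onset temperature Tbe
r_peak = rate(0.05, 473.40)   # at the critical peak temperature Tbp
ratio = r_peak / r_onset      # the rate grows steeply with temperature
```

Over the ~17 K between the two critical temperatures, the rate rises by roughly a factor of 7 to 8, illustrating the strong Arrhenius sensitivity behind the thermal-explosion analysis.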
Søren Holdt Jensen
2007-01-01
We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV, and ULLIV). In addition, we show how the subspace-based algorithms can be analyzed and compared by means of simple FIR filter interpretations. The algorithms are illustrated with working Matlab code and applications in speech processing.
Abuturab, Muhammad Rafiq
2014-06-01
A new color image security system based on singular value decomposition (SVD) in gyrator transform (GT) domains is proposed. In the encryption process, a color image is decomposed into red, green and blue channels. Each channel is independently modulated by random phase masks and then separately gyrator transformed at different parameters. The three gyrator spectra are joined by multiplication to obtain one gray ciphertext. The ciphertext is separated into U, S, and V parts by SVD. All three parts are individually gyrator transformed at different transformation angles. The three encoded pieces of information can be assigned to different authorized users for highly secure verification. Only when all the authorized users place the U, S, and V parts in the correct multiplication order in the verification system can the correct information be obtained with all the right keys. In the proposed method, SVD offers a one-way asymmetrical decomposition algorithm and is an optimal matrix decomposition in a least-squares sense. The transformation angles of the GT provide very sensitive additional keys. The pre-generated keys for the red, green and blue channels serve as decryption (private) keys. As all three encrypted parts are gray-scale ciphertexts with stationary white-noise distributions, they have a camouflage property to some extent. These advantages enhance the security and robustness. Numerical simulations are presented to support the viability of the proposed verification system.
Fan Zhang
2012-01-01
This paper describes details of an automatic matrix decomposition approach for a reaction-based stream water quality model. The method yields a set of equilibrium equations, a set of kinetic-variable transport equations involving kinetic reactions only, and a set of component transport equations involving no reactions. Partial decomposition of the system of water quality constituent transport equations is performed via Gauss-Jordan column reduction of the reaction network by pivoting on equilibrium reactions to decouple equilibrium and kinetic reactions. This approach minimizes the number of partial differential advective-dispersive transport equations and enables robust numerical integration. Complete matrix decomposition by further pivoting on linearly independent kinetic reactions allows some rate equations to be formulated individually and explicitly enforces conservation of component species when the component transport equations are solved. The methodology is demonstrated for a case study involving eutrophication reactions in the Des Moines River in Iowa, USA, and for two hypothetical examples that illustrate the ability of the model to simulate sediment and chemical transport with both mobile and immobile water phases and with complex reaction networks involving both kinetic and equilibrium reactions.
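The Gauss-Jordan column reduction at the heart of the approach can be sketched on a hypothetical 3-species, 2-reaction stoichiometric matrix (the actual eutrophication network is far larger):

```python
from fractions import Fraction

# Gauss-Jordan reduction pivoting on selected columns (e.g. the equilibrium
# reactions), sketched on a hypothetical network: A -> B (equilibrium),
# B -> C (kinetic). Rows are species, columns are reactions.
def gauss_jordan_columns(matrix, pivot_cols):
    """Reduce `matrix` by pivoting on the given columns, in exact arithmetic."""
    A = [[Fraction(v) for v in row] for row in matrix]
    pivot_row = 0
    for col in pivot_cols:
        # find a usable pivot in this column and move it into place
        r = next(i for i in range(pivot_row, len(A)) if A[i][col] != 0)
        A[pivot_row], A[r] = A[r], A[pivot_row]
        piv = A[pivot_row][col]
        A[pivot_row] = [v / piv for v in A[pivot_row]]
        for i in range(len(A)):
            if i != pivot_row and A[i][col] != 0:
                f = A[i][col]
                A[i] = [v - f * p for v, p in zip(A[i], A[pivot_row])]
        pivot_row += 1
    return A

S = [[-1, 0], [1, -1], [0, 1]]          # stoichiometric matrix
R = gauss_jordan_columns(S, pivot_cols=[0])
# After pivoting on the equilibrium column, the remaining rows no longer
# involve that reaction: their transport equations are decoupled from it.
```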
Moving object detection based on on-line block-robust principal component analysis decomposition
Yang, Biao; Cao, Jinmeng; Zou, Ling
2017-07-01
Robust principal component analysis (RPCA) decomposition is widely applied in moving object detection due to its ability to suppress environmental noise while separating the sparse foreground from the low-rank background. However, it may suffer from constant punishing parameters (resulting in confusion between foreground and background) and holistic processing of all input frames (leading to poor real-time performance). Improvements to these issues are studied in this paper. A block-RPCA decomposition approach is proposed to handle the confusion while separating foreground from background. The input frame is initially separated into blocks using three-frame difference. Then, the punishing parameter of each block is computed from its motion saliency, acquired from selective spatio-temporal interest points. To improve the real-time performance of the proposed method, an on-line solution to block-RPCA decomposition is utilized. Both qualitative and quantitative tests were implemented, and the results indicate the superiority of our method over some state-of-the-art approaches in detection accuracy, real-time performance, or both.
Low-Rank Decomposition Based Restoration of Compressed Images via Adaptive Noise Estimation.
Zhang, Xinfeng; Lin, Weisi; Xiong, Ruiqin; Liu, Xianming; Ma, Siwei; Gao, Wen
2016-07-07
Images coded at low bit rates in real-world applications usually suffer from significant compression noise, which severely degrades visual quality. Traditional denoising methods, which usually assume that noise is independent and identically distributed, are not suitable for content-dependent compression noise. In this paper, we propose a unified framework for content-adaptive estimation and reduction of compression noise via low-rank decomposition of similar image patches. We first formulate the framework of compression noise reduction based upon low-rank decomposition. Compression noise is removed by soft-thresholding the singular values in the singular value decomposition (SVD) of every group of similar image patches. For each group of similar patches, the thresholds are adaptively determined according to the compression noise level and the singular values. We analyze the relationship of image statistical characteristics in the spatial and transform domains, and estimate the compression noise level for every group of similar patches from statistics in both domains jointly with the quantization steps. Finally, a quantization constraint is applied to the estimated images to avoid over-smoothing. Extensive experimental results show that the proposed method not only noticeably improves the quality of compressed images in post-processing, but is also helpful for computer vision tasks as a pre-processing method.
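The core shrinkage step, soft-thresholding the singular values of a patch-group matrix, can be sketched as follows; the singular values and the threshold are hypothetical stand-ins for what the paper estimates from noise statistics and quantization steps:

```python
# Soft-threshold the singular values of a patch-group matrix (the values
# themselves would come from an SVD of a group of similar patches).
def soft_threshold_singular_values(sigmas, tau):
    """Shrink each singular value toward zero by tau, clipping at zero."""
    return [max(s - tau, 0.0) for s in sigmas]

sigmas = [10.0, 4.0, 1.5, 0.3]  # hypothetical spectrum of a patch group
tau = 1.0                       # threshold set from the estimated noise level
shrunk = soft_threshold_singular_values(sigmas, tau)
# Small (noise-dominated) singular values are zeroed; large ones survive,
# which is what makes the reconstruction low-rank.
```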
Duro Moreno, Juan Antonio; Teixidó-Figueras, Jordi; Padilla, Emilio
2014-01-01
This paper uses the possibilities provided by the regression-based inequality decomposition (Fields, 2003) to explore the contribution of different explanatory factors to international inequality in CO2 emissions per capita. In contrast to previous emissions inequality decompositions, which were based on identity relationships (Duro and Padilla, 2006), this methodology does not impose any a priori specific relationship. Thus, it allows an assessment of the contribution to inequality of differ...
An improved convergence bound for aggregation-based domain decomposition preconditioners.
Shadid, John Nicolas; Sala, Marzio; Tuminaro, Raymond Stephen
2005-06-01
In this paper we present a two-level overlapping domain decomposition preconditioner for the finite-element discretization of elliptic problems in two and three dimensions. The computational domain is partitioned into overlapping subdomains, and a coarse space correction, based on aggregation techniques, is added. Our definition of the coarse space does not require the introduction of a coarse grid. We consider a set of assumptions on the coarse basis functions to bound the condition number of the resulting preconditioned system. These assumptions involve only geometrical quantities associated with the aggregates and the subdomains. We prove that the condition number using the two-level additive Schwarz preconditioner is O(H/δ + H₀/δ), where H and H₀ are the diameters of the subdomains and the aggregates, respectively, and δ is the overlap among the subdomains and the aggregates. This extends the bounds presented in [C. Lasser and A. Toselli, Convergence of some two-level overlapping domain decomposition preconditioners with smoothed aggregation coarse spaces, in Recent Developments in Domain Decomposition Methods, Lecture Notes in Comput. Sci. Engrg. 23, L. Pavarino and A. Toselli, eds., Springer-Verlag, Berlin, 2002, pp. 95-117; M. Sala, Domain Decomposition Preconditioners: Theoretical Properties, Application to the Compressible Euler Equations, Parallel Aspects, Ph.D. thesis, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland, 2003; M. Sala, Math. Model. Numer. Anal., 38 (2004), pp. 765-780]. Numerical experiments on a model problem are reported to illustrate the performance of the proposed preconditioner.
A route-based decomposition for the Multi-Commodity k-splittable Maximum Flow Problem
Gamst, Mette
2012-01-01
The Multi-Commodity k-splittable Maximum Flow Problem routes flow through a capacitated graph such that each commodity uses at most k paths and such that the total amount of routed flow is maximized. This paper proposes a branch-and-price algorithm based on a route-based Dantzig-Wolfe decomposition, where a route consists of up to k paths. Computational results show that the new algorithm has the best performance on seven benchmark instances and is capable of solving two previously unsolved instances.
A solution approach to the ROADEF/EURO 2010 challenge based on Benders' Decomposition
Lusby, Richard Martin; Muller, Laurent Flindt; Petersen, Bjørn
We present a Benders' decomposition based framework for solving a large-scale energy management problem with varied constraints, posed as the ROADEF/EURO 2010 challenge. Because of the nature of the problem, not all constraints can be modeled satisfactorily as linear constraints, and the approach is therefore divided into two stages: in the first stage, Benders feasibility and optimality cuts are added based on the linear programming relaxation of the Benders master problem; in the second stage, feasible integer solutions are enumerated and a procedure is applied to each solution in an attempt to make…
Arjun Singh
2017-02-01
This work describes the thermal decomposition behaviour of plastic bonded explosives (PBXs) based on a mixture of 1,3,5,7-tetranitro-1,3,5,7-tetrazocane (HMX) and 2,4,6-triamino-1,3,5-trinitrobenzene (TATB) with Viton A as the polymer binder. Thermal decomposition of the PBXs was studied by simultaneous thermal analysis (STA) and differential scanning calorimetry (DSC) to investigate the influence of the HMX content on the thermal behaviour and its kinetics. Thermogravimetric analysis (TGA) indicated that the thermal decomposition of PBXs based on the HMX/TATB mixture occurred in three steps: the first step was mainly due to decomposition of HMX, the second was ascribed to decomposition of TATB, and the third to decomposition of the polymer matrix. The extent of thermal decomposition increased with increasing HMX content. The kinetics of thermal decomposition were first investigated under non-isothermal conditions for a single heating-rate measurement. The activation energy of the PBXs was observed to vary with the HMX content. Kinetic parameters were also calculated from the TGA data at various heating rates under non-isothermal conditions by the Flynn–Wall–Ozawa (FWO) and Kissinger–Akahira–Sunose (KAS) methods. The activation energies calculated by the FWO method were very close to those obtained by the KAS method, and the mean activation energy from both methods was in good agreement with the activation energy obtained from the single heating-rate measurement for the first decomposition step.
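The Kissinger-type evaluation underlying the KAS method can be sketched with synthetic data: peak temperatures are generated to be consistent with an assumed activation energy, then recovered from the slope of ln(β/Tp²) versus 1/Tp. All numbers here are illustrative assumptions, not the paper's measurements:

```python
import math

R_GAS = 8.314  # gas constant, J/(mol·K)

def kissinger_ea(betas, peak_temps):
    """Least-squares slope of ln(beta/Tp^2) vs 1/Tp gives -Ea/R."""
    xs = [1.0 / T for T in peak_temps]
    ys = [math.log(b / T ** 2) for b, T in zip(betas, peak_temps)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * R_GAS  # activation energy in J/mol

def tp_for(beta, Ea, lnA=20.0):
    """Peak temperature consistent with the Kissinger relation (hypothetical lnA)."""
    T = 500.0
    for _ in range(200):  # contractive fixed-point iteration
        T = Ea / (R_GAS * (lnA - math.log(beta / T ** 2)))
    return T

Ea_true = 150e3                    # assumed activation energy, 150 kJ/mol
betas = [5.0, 10.0, 15.0, 20.0]    # heating rates, K/min
temps = [tp_for(b, Ea_true) for b in betas]
Ea_est = kissinger_ea(betas, temps)  # recovers Ea_true from the slope
```

Since the synthetic points lie exactly on the Kissinger line, the fit recovers the assumed activation energy; with real TGA data the FWO and KAS variants apply the same slope idea at fixed conversion levels.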
Hua-Qing Wang
2014-01-01
Vibration signals of rolling element bearing faults are usually immersed in background noise, which makes the faults difficult to detect. Commonly used wavelet-based methods can reduce some types of noise, but there is still plenty of room for improvement due to the insufficient sparseness of vibration signals in the wavelet domain. In this work, in order to eliminate noise and enhance weak fault detection, a new peak-based approach combining multiscale decomposition and envelope demodulation is developed. First, to preserve effective middle-low-frequency signals while making high-frequency noise more significant, a peak-based piecewise recombination is used to convert middle-frequency components into low-frequency ones. The newly generated signal becomes smoother and therefore has a sparser representation in the wavelet domain. Then a noise threshold is applied after wavelet multiscale decomposition, followed by the inverse wavelet transform and the backward peak-based piecewise transform. Finally, the amplitude at the fault characteristic frequency is enhanced by means of envelope demodulation. The effectiveness of the proposed method is validated by rolling bearing fault experiments. Compared with traditional wavelet-based analysis, experimental results show that fault features can be enhanced significantly and detected easily by the proposed method.
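The generic decompose-threshold-reconstruct stage can be sketched with a one-level Haar wavelet standing in for the full pipeline (the peak-based piecewise recombination and envelope demodulation are omitted here):

```python
# One-level Haar wavelet decomposition, hard thresholding of the detail
# coefficients, and reconstruction -- a minimal stand-in for the paper's
# multiscale denoising stage.
def haar_forward(x):
    approx = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def denoise(x, threshold):
    approx, detail = haar_forward(x)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]  # hard threshold
    return haar_inverse(approx, detail)

noisy = [1.0, 1.1, 2.0, 1.9, 5.0, 5.2, 1.0, 0.9]
clean = denoise(noisy, threshold=0.2)  # small oscillations are smoothed out
```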
Improved EEMD Denoising Method Based on Singular Value Decomposition for the Chaotic Signal
Xiulei Wei
2016-01-01
Chaotic data analysis is important in many areas of science and engineering. However, chaotic signals are inevitably contaminated by complicated noise during collection, which greatly interferes with chaos identification. Chaotic vibration is extremely nonlinear and has a broad range of frequencies, so linear filtering methods are not effective for chaotic signal noise reduction. An improved ensemble empirical mode decomposition (EEMD) method based on singular value decomposition (SVD) and Savitzky-Golay (SG) filtering is therefore proposed. First, the noise energy of the first-level intrinsic mode function (IMF) is estimated by the "3σ" criterion; SVD is then used to extract the signal details from the first IMF, and the singular values are selected to reconstruct the IMF according to the noise energy of the first IMF. Second, the remaining IMFs are divided into high-frequency and low-frequency components based on the consecutive mean square error (CMSE), and the useful signals of the high-frequency and low-frequency components are extracted by the SVD and SG filtering methods, respectively. The superiority of the proposed method is demonstrated with a simulated signal, two-degree-of-freedom chaotic vibration signals, and experimental signals based on double-potential-well theory.
Shah, Ghafoor; Koch, Peter; Papadias, Constantinos B.
2014-01-01
…A novel method based on hierarchical decomposition of the single-channel mixture using various nonnegative matrix factorization techniques is proposed, which provides unsupervised clustering of the underlying component signals. HRV is determined over the recovered normal cardiac acoustic signals… This novel decomposition technique is compared against state-of-the-art techniques; experiments are performed using real-world clinical data, which show the potential significance of the proposed technique.
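The nonnegative matrix factorization at the core of the decomposition can be sketched with the standard Lee-Seung multiplicative updates; this is a generic illustration on a toy matrix, not the paper's hierarchical clinical pipeline:

```python
import random

# Minimal NMF via Lee-Seung multiplicative updates (Frobenius cost),
# factorizing a nonnegative matrix V ≈ W·H with W, H >= 0.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, rank, iters=200, eps=1e-9, seed=0):
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(rank)]
    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)      # update H
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(rank)]
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)      # update W
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(m)]
    return W, H

V = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]   # exactly rank-1 and nonnegative
W, H = nmf(V, rank=1)
approx = matmul(W, H)
err = sum(abs(V[i][j] - approx[i][j]) for i in range(2) for j in range(3))
```

In the source-separation setting, the columns of W act as spectral bases and the rows of H as activations, which is what makes unsupervised clustering of component signals possible.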
Adaptive aggregation-based domain decomposition multigrid for twisted mass fermions
Alexandrou, Constantia; Bacchio, Simone; Finkenrath, Jacob; Frommer, Andreas; Kahl, Karsten; Rottmann, Matthias
2016-12-01
The adaptive aggregation-based domain decomposition multigrid method [A. Frommer et al., SIAM J. Sci. Comput. 36, A1581 (2014)] is extended to two degenerate flavors of twisted mass fermions. By fine-tuning the parameters we achieve a speed-up of the order of a hundred times compared to the conjugate gradient algorithm at the physical value of the pion mass. A thorough analysis of the aggregation parameters is presented, which provides novel insight into multigrid methods for lattice quantum chromodynamics independently of the fermion discretization.
An improved proximal-based decomposition method for structured monotone variational inequalities
Anonymous
2007-01-01
The proximal-based decomposition method was originally proposed by Chen and Teboulle (Math. Programming, 1994, 64:81-101) for solving convex minimization problems. This paper extends it to solving monotone variational inequalities associated with separable structures, with the improvement that the restrictive assumptions on the involved parameters are much relaxed, which makes it practical to solve the subproblems easily. Without additional assumptions, global convergence of the new method is proved under the same mild assumptions on the problem's data as for the original method.
Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots
Ching-Long Shih
2012-08-01
This paper aims to demonstrate a clear relationship between Lagrange equations and Newton-Euler equations regarding computational methods for robot dynamics, from which we derive a systematic method suited to either symbolic or on-line numerical computation. Based on the decomposition approach and the cross-product operation, a computing method for robot dynamics can be easily developed. The advantages of this computing framework are that it can be used for both symbolic and on-line numeric computation purposes, and that it can also be applied to biped systems as well as some simple closed-chain robot systems.
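The cross-product operation the framework is built on can be shown applied to a single link; this is a minimal numeric illustration (hypothetical lever arm and mass), not the full recursive dynamics:

```python
# Cross-product helper and a single Newton-Euler-style torque computation:
# the torque about a joint from a force F applied at position r on the link.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

r = (1.0, 0.0, 0.0)          # lever arm from the joint to the mass [m]
F = (0.0, -9.81 * 2.0, 0.0)  # gravity on a 2 kg point mass [N]
tau = cross(r, F)            # joint torque tau = r x F [N·m]
```

Chaining such cross products link by link (plus the inertial terms) is exactly the recursive Newton-Euler scheme the paper relates back to the Lagrange formulation.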
Penenko, Alexey; Antokhin, Pavel
2016-11-01
The performance of a variational data assimilation algorithm for a transport and transformation model of atmospheric chemical composition is studied numerically in the case where the emission inventories are missing while there are additional in situ indirect concentration measurements. The algorithm is based on decomposition and splitting methods with a direct solution of the data assimilation problems at the splitting stages. This design allows avoiding iterative processes and working in real-time. In numerical experiments we study the sensitivity of data assimilation to measurement data quantity and quality.
Wu Zhi-jian; Tang Zhi-long; Kang Li-shan
2003-01-01
This paper presents a parallel two-level evolutionary algorithm based on domain decomposition for solving function optimization problems containing multiple solutions. It combines the characteristics of global search and local search in each sub-domain: the former enables individuals to draw closer to each optimum and keeps the diversity of individuals, while the latter selects the local optimal solutions, known as latent solutions, in each sub-domain. In the end, by selecting the global optimal solutions from the latent solutions of each sub-domain, we can discover all the optimal solutions easily and quickly.
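The two-level idea can be shown in miniature: decompose the domain, run a cheap local search in each sub-domain to find its latent solution, then collect the optima. Plain hill-climbing stands in for the evolutionary search here, and the test function and all parameters are illustrative assumptions:

```python
import random

def f(x):                       # multimodal test function with maxima at x=1 and x=3
    return -(x - 1) ** 2 * (x - 3) ** 2

def local_search(lo, hi, steps=2000, seed=0):
    """Hill-climb within one sub-domain; returns its latent solution."""
    rng = random.Random(seed)
    best_x = rng.uniform(lo, hi)
    for _ in range(steps):
        cand = min(hi, max(lo, best_x + rng.gauss(0, 0.05)))
        if f(cand) > f(best_x):  # accept only improvements
            best_x = cand
    return best_x

subdomains = [(0.0, 2.0), (2.0, 4.0)]                     # domain decomposition
latent = [local_search(lo, hi) for lo, hi in subdomains]  # one optimum per sub-domain
```

Because each sub-domain contains exactly one optimum of `f`, the per-sub-domain searches recover both solutions, which a single global search could easily miss.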
Ling, Zhao; Yeling, Wang; Guijun, Hu; Yunpeng, Cui; Jian, Shi; Li, Li
2013-07-01
A recursive least squares constant modulus algorithm based on QR decomposition (QR-RLS-CMA) is proposed as a polarization demultiplexing method. We compare its performance with the stochastic gradient descent constant modulus algorithm (SGD-CMA) and the recursive least squares constant modulus algorithm (RLS-CMA) in a polarization-division-multiplexing system with coherent detection. It is demonstrated that QR-RLS-CMA is an efficient demultiplexing algorithm that avoids the step-length choice problem of SGD-CMA. Meanwhile, it also has better symbol error rate (SER) performance and a more stable convergence property.
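The constant-modulus error that all three algorithms minimize can be shown with a single-tap toy equalizer: a scalar channel and the plain SGD update. The paper's algorithms are multi-tap, dual-polarization versions, and the RLS variants replace the fixed step size assumed here:

```python
import cmath

h = 0.5 + 0.0j                 # toy scalar "channel" attenuating the signal
# 200 unit-modulus QPSK symbols (constellation points at odd multiples of pi/4)
symbols = [cmath.exp(1j * cmath.pi / 4 * (2 * k + 1)) for k in (0, 1, 2, 3)] * 50
R = 1.0                        # constant-modulus target |y|^2
mu = 0.1                       # fixed SGD step size (the choice RLS variants avoid)
w = 1.0 + 0.0j                 # equalizer tap, to be adapted

for s in symbols:
    x = h * s                  # received sample
    y = w * x                  # equalizer output
    err = abs(y) ** 2 - R      # constant-modulus error
    w -= mu * err * y * x.conjugate()  # SGD-CMA update

gain = abs(w * h)              # combined channel+equalizer modulus, approaches 1
```

The update needs no training symbols, only the known modulus of the constellation, which is what makes CMA attractive for blind polarization demultiplexing.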
Single-Facility Scheduling over Long Time Horizons by Logic-Based Benders Decomposition
Coban, Elvin; Hooker, John N.
Logic-based Benders decomposition can combine mixed integer programming and constraint programming to solve planning and scheduling problems much faster than either method alone. We find that a similar technique can be beneficial for solving pure scheduling problems as the problem size scales up. We solve single-facility non-preemptive scheduling problems with time windows and long time horizons that are divided into segments separated by shutdown times (such as weekends). The objective is to find feasible solutions, minimize makespan, or minimize total tardiness.
Direct decomposition of methane over SBA-15 supported Ni, Co and Fe based bimetallic catalysts
Pudukudy, Manoj; Yaakob, Zahira; Akmal, Zubair Shamsul
2015-03-01
Thermocatalytic decomposition of methane is an alternative route for the production of COx-free hydrogen and carbon nanomaterials. In this work, a set of novel Ni, Co and Fe based bimetallic catalysts supported over mesoporous SBA-15 was synthesized by a facile wet impregnation route, characterized for their structural, textural and reduction properties, and successfully used for methane decomposition. The fine dispersion of metal oxide particles on the surface of SBA-15, without affecting its mesoporous texture, was clearly shown in the low angle X-ray diffraction patterns and the transmission electron microscopy (TEM) images. The nitrogen sorption analysis showed the reduced specific surface area and pore volume of SBA-15 after metal loading, due to the partial filling of hexagonal mesopores by metal species. The results of the methane decomposition experiments indicated that all of the bimetallic catalysts were highly active and stable for the reaction at 700 °C even after 300 min of time on stream (TOS). However, a maximum hydrogen yield of ∼56% was observed for the NiCo/SBA-15 catalyst within 30 min of TOS. A high catalytic stability was shown by the CoFe/SBA-15 catalyst, with 51% hydrogen yield during the course of the reaction. The catalytic stability of the bimetallic catalysts was attributed to the formation of bimetallic alloys. Moreover, the deposited carbons were found to be in the form of a new set of hollow multi-walled nanotubes with open tips, indicating a base growth mechanism, which confirms the selectivity of SBA-15 supported bimetallic catalysts for the formation of open-tip carbon nanotubes. The Raman spectroscopic and thermogravimetric analyses of the deposited carbon nanotubes over the bimetallic catalysts indicated their high graphitization degree and oxidation stability.
Direct decomposition of methane over SBA-15 supported Ni, Co and Fe based bimetallic catalysts
Pudukudy, Manoj, E-mail: manojpudukudy@gmail.com [Fuel Cell Institute, Universiti Kebangsaan Malaysia, UKM, Bangi 43600, Selangor (Malaysia); Department of Chemical and Process Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, UKM, Bangi 43600, Selangor (Malaysia); Yaakob, Zahira, E-mail: zahirayaakob65@gmail.com [Department of Chemical and Process Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, UKM, Bangi 43600, Selangor (Malaysia); Akmal, Zubair Shamsul [Department of Chemical and Process Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, UKM, Bangi 43600, Selangor (Malaysia)
2015-03-01
Graphical abstract: - Highlights: • Synthesis and characterization of Ni, Co and Fe based bimetallic catalysts supported over SBA-15. • Thermocatalytic decomposition of methane over the SBA-15 supported bimetallic catalysts. • Enhanced catalytic efficiency of the bimetallic catalysts for the production of COx-free hydrogen and nanocarbon. • Production of value added open tip hollow multi-walled carbon nanotubes. • Crystalline characterization of carbon nanotubes by XRD, Raman and thermogravimetric analysis. - Abstract: Thermocatalytic decomposition of methane is an alternative route for the production of COx-free hydrogen and carbon nanomaterials. In this work, a set of novel Ni, Co and Fe based bimetallic catalysts supported over mesoporous SBA-15 was synthesized by a facile wet impregnation route, characterized for their structural, textural and reduction properties and successfully used for methane decomposition. The fine dispersion of metal oxide particles on the surface of SBA-15, without affecting its mesoporous texture, was clearly shown in the low angle X-ray diffraction patterns and the transmission electron microscopy (TEM) images. The nitrogen sorption analysis showed the reduced specific surface area and pore volume of SBA-15 after metal loading, due to the partial filling of hexagonal mesopores by metal species. The results of methane decomposition experiments indicated that all of the bimetallic catalysts were highly active and stable for the reaction at 700 °C even after 300 min of time on stream (TOS). However, a maximum hydrogen yield of ∼56% was observed for the NiCo/SBA-15 catalyst within 30 min of TOS. A high catalytic stability was shown by the CoFe/SBA-15 catalyst with 51% hydrogen yield during the course of the reaction. The catalytic stability of the bimetallic catalysts was attributed to the formation of bimetallic alloys. Moreover, the deposited carbons were found to be in the form of hollow…
S. Sakthivel
2011-01-01
Problem statement: Recognizing a face based on attributes is an easy task for a human to perform; it is nearly automatic and requires little mental effort. A computer, on the other hand, has no innate ability to recognize a face or a facial feature and must be programmed with an algorithm to do so. Generally, to recognize a face, different kinds of facial features are used separately or in combination. In previous work, we developed a machine learning based multi-attribute face recognition algorithm and evaluated it with different sets of weights for each input attribute; its performance is low compared with the proposed wavelet decomposition technique. Approach: In this study, a wavelet decomposition technique was applied as a preprocessing step to enhance the input face images in order to reduce the loss of classification performance due to changes in facial appearance. The experiment was specifically designed to investigate the gain in robustness against illumination and facial expression changes. Results: The proposed wavelet-based image decomposition technique enhanced the performance of the previously designed system by 8.54 percent. Conclusion: The proposed model was tested on face images with differences in expression and illumination, using a dataset obtained from the face image databases of the Olivetti Research Laboratory.
Hui Lu
2014-01-01
The test task scheduling problem (TTSP) is a complex optimization problem with many local optima. In this paper, a hybrid chaotic multiobjective evolutionary algorithm based on decomposition (CMOEA/D) is presented to avoid becoming trapped in local optima and to obtain high-quality solutions. First, we propose an improved integrated encoding scheme (IES) to increase efficiency. Then ten chaotic maps are applied to the multiobjective evolutionary algorithm based on decomposition (MOEA/D) in three phases: the initial population, and the crossover and mutation operators. To identify a good approach for hybridizing MOEA/D and chaos, and to indicate the effectiveness of the improved IES, several experiments are performed. The Pareto fronts and the statistical results demonstrate that different chaotic maps in different phases have different effects on solving the TTSP, especially the circle map and the ICMIC map. The degree of similarity between the distribution of the chaotic map and that of the problem is an essential factor in the application of chaotic maps. In addition, comparison experiments between CMOEA/D and variable neighborhood MOEA/D (VNM) indicate that our algorithm has the best performance in solving the TTSP.
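Two of the chaotic maps mentioned, the logistic map and the circle map, can be used to generate an initial population in [0, 1] as follows; the map parameters are common textbook choices, not necessarily the paper's:

```python
import math

# Chaotic sequences as a stand-in for a uniform RNG when seeding a population.
def logistic_map(x0, n, r=4.0):
    """x_{k+1} = r·x_k·(1 − x_k); fully chaotic on [0, 1] for r = 4."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def circle_map(x0, n, omega=0.5, k=2.2):
    """x_{k+1} = x_k + Ω − (K/2π)·sin(2π·x_k) mod 1."""
    xs, x = [], x0
    for _ in range(n):
        x = (x + omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * x)) % 1.0
        xs.append(x)
    return xs

pop_logistic = logistic_map(0.3, 20)  # 20 individuals in [0, 1]
pop_circle = circle_map(0.3, 20)
```

Each individual would then be scaled to the decision-variable ranges of the scheduling problem; the appeal is that chaotic sequences are deterministic yet cover the interval with a structure different from uniform sampling.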
Li, Ning; Yang, Jianguo; Zhou, Rui; Liang, Caiping
2016-04-01
Knock is one of the major constraints to improving the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on the ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determine the knock characteristics in SI engines. By adding a uniformly distributed and finite white Gaussian noise, the EEMD can preserve signal continuity in different scales and therefore alleviates the mode-mixing problem occurring in the classic empirical mode decomposition (EMD). The feasibility of applying the EEMD to detect the knock signatures of a test SI engine via the pressure signal measured in the combustion chamber and the vibration signal measured on the cylinder head is investigated. Experimental results show that the EEMD-based method is able to detect the knock signatures from both the pressure signal and the vibration signal, even in the initial stage of knock. Finally, by comparing the application results with those obtained by the short-time Fourier transform (STFT), the Wigner-Ville distribution (WVD) and the discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated.
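The EEMD idea, adding white noise and averaging IMFs over an ensemble to alleviate mode mixing, can be sketched as below. This is a minimal illustration: it extracts only the first IMF and uses linear interpolation for the envelopes, whereas production EMD implementations use cubic splines and decompose the full signal.

```python
import numpy as np

def _envelope(x, idx):
    # envelope through the given extrema; linear interpolation keeps the
    # sketch dependency-free (real EMD implementations use cubic splines)
    return np.interp(np.arange(len(x)), idx, x[idx])

def sift_first_imf(x, n_sifts=10):
    """Extract the first (highest-frequency) IMF by repeated sifting."""
    h = np.asarray(x, dtype=float).copy()
    for _ in range(n_sifts):
        maxima = np.flatnonzero((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:])) + 1
        minima = np.flatnonzero((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:])) + 1
        if len(maxima) < 2 or len(minima) < 2:
            break
        local_mean = 0.5 * (_envelope(h, maxima) + _envelope(h, minima))
        h = h - local_mean
    return h

def eemd_first_imf(x, n_ensemble=50, noise_frac=0.2, seed=0):
    """Average the first IMF over noise-perturbed copies of the signal:
    the added white noise populates all scales uniformly, which is what
    alleviates the mode mixing of plain EMD."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(len(x))
    for _ in range(n_ensemble):
        noisy = x + noise_frac * np.std(x) * rng.standard_normal(len(x))
        acc += sift_first_imf(noisy)
    return acc / n_ensemble

t = np.linspace(0, 1, 512, endpoint=False)
sig = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)
imf1 = eemd_first_imf(sig)
```

For knock detection, the same procedure would be applied to the cylinder pressure or vibration signal, and the knock signature sought in the high-frequency IMFs.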
Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng
2014-01-01
Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms.
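The snapshot-quality objective, Newman modularity, can be computed directly from its definition. The sketch below uses a hypothetical adjacency matrix (two triangles joined by one edge), not the paper's networks, and shows that the natural split scores higher than the trivial single-community partition.

```python
import itertools

def modularity(adj, communities):
    """Newman modularity Q of a partition: adj is a symmetric 0/1 matrix,
    communities is a list of node sets. This is the snapshot-quality
    objective; the NMI temporal objective is computed analogously."""
    degree = [sum(row) for row in adj]
    two_m = sum(degree)                      # twice the edge count
    q = 0.0
    for comm in communities:
        for i, j in itertools.product(comm, comm):
            q += adj[i][j] - degree[i] * degree[j] / two_m
    return q / two_m

# two triangles joined by a single edge (2-3)
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
q = modularity(adj, [{0, 1, 2}, {3, 4, 5}])   # the natural split
```

A multiobjective EA would evaluate candidate partitions with this Q alongside the temporal-cost objective and evolve the Pareto set of trade-offs.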
Chen, Yangkang
2016-07-01
The seislet transform has been demonstrated to have better compression performance for seismic data than other well-known sparsity-promoting transforms, so it can be used to remove random noise by simply applying a thresholding operator in the seislet domain. Since the seislet transform compresses the seismic data along the local structures, seislet thresholding can be viewed as a simple structural filtering approach. Because of its dependence on a precise local slope estimation, the seislet transform usually suffers from low compression ratio and high reconstruction error for seismic profiles that have dip conflicts. In order to remove the limitation of seislet thresholding in dealing with conflicting-dip data, I propose a dip-separated filtering strategy. In this method, I first use an adaptive empirical mode decomposition based dip filter to separate the seismic data into several dip bands (5 or 6). Next, I apply seislet thresholding to each separated dip component to remove random noise. Then I combine all the denoised components to form the final denoised data. Compared with other dip filters, the empirical mode decomposition based dip filter is data-adaptive; one only needs to specify the number of dip components to be separated. Both complicated synthetic and field data examples show the superior performance of the proposed approach over traditional alternatives. The dip-separated structural filtering is not limited to seislet thresholding, and can also be extended to all methods that require slope information.
Chenhua, Shen; Yani, Yan
2017-02-01
We present a new tool for spatiotemporal pattern decomposition and utilize it to decompose spatiotemporal patterns of monthly mean precipitation from January 1957 to May 2015 in Taihu Lake Basin, China. Our goal is to show that this new tool can mine more hidden information than empirical orthogonal function (EOF) analysis. First, based on EOF and empirical mode decomposition (EMD), the time series, which is an average over the study region, is decomposed into a set of intrinsic mode functions (IMFs) and a residue by means of EMD. Then, these IMFs are taken as explanatory variables, and the precipitation time series at each station is taken as the dependent variable. Next, a linear multivariate regression equation is derived and the corresponding coefficients are estimated. These estimated coefficients are interpreted physically as spatial coefficients; their meaning is an orthogonal projection between each IMF and the precipitation time series at each station. Spatial patterns are presented based on the spatial coefficients. The spatiotemporal patterns include temporal patterns and spatial patterns at various timescales: the temporal pattern is obtained by means of EMD, and based on it, spatial patterns at various timescales are then obtained. The proposed tool has been applied to the decomposition of spatiotemporal patterns of monthly mean precipitation in Taihu Lake Basin, China. Since spatial patterns are associated with intrinsic frequency, new and individual spatial patterns are detected and explained physically. Our analysis shows that this new tool is reliable and applicable for geophysical data in the presence of nonstationarity and long-range correlation, can handle nonstationary spatiotemporal series, and has the capacity to extract more hidden time-frequency information on spatiotemporal patterns.
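The spatial-coefficient step, regressing each station's series on the IMFs of the region-average series, is ordinary least squares. In this sketch the "IMFs" are synthetic sinusoids standing in for EMD output and the station series is simulated; only the regression mechanics match the description above.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(240)                                       # 20 years of months

# synthetic stand-ins for IMFs at annual, 5-year, and 20-year timescales
imfs = np.stack([np.sin(2 * np.pi * t / p) for p in (12, 60, 240)])

# a simulated station series: a known mix of the timescales plus noise
true_coef = np.array([1.5, -0.8, 0.3])
station = true_coef @ imfs + 0.05 * rng.standard_normal(t.size)

# least-squares estimate of the spatial coefficients at this station;
# mapping these over all stations gives one spatial pattern per timescale
X = imfs.T
coef, *_ = np.linalg.lstsq(X, station, rcond=None)
```

Each fitted coefficient measures how strongly that timescale's temporal pattern projects onto the station, which is exactly what the spatial maps display.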
Hui, Xiaonan; Zhang, Weite; Jin, Xiaofeng; Chi, Hao; Zhang, Xianmin
2015-01-01
The topological charge of an electromagnetic vortex beam depends on its wavefront helicity. For mixed vortex beams composed of several different coaxial vortices, the topological charge spectrum can be obtained by Fourier transform. However, vortex beams are generally divergent and imperfect, which makes it important to investigate the local topological charges, especially in the radio frequency regime. Fourier transform based methods are constrained by the uncertainty principle and cannot achieve high angular resolution and mode resolution simultaneously. In this letter, an analysis method for the local topological charges of vortex beams is presented based on the empirical mode decomposition (EMD). From the EMD, the intrinsic mode functions (IMFs) can be obtained to construct the bases of the electromagnetic wave, and each local topological charge can be defined respectively. With this method, the local value achieves high resolution in both azimuth angle and topological charge, while the amplitudes of each OAM mode...
Alsharoa, Ahmad M.
2015-05-01
In this paper, the problem of radio and power resource management in long term evolution heterogeneous networks (LTE HetNets) is investigated. The goal is to minimize the total power consumption of the network while satisfying the user quality of service determined by each target data rate. We study the model where one macrocell base station is placed in the cell center, and multiple small cell base stations and femtocell access points are distributed around it. The dual decomposition technique is adopted to jointly optimize the power and carrier allocation in the downlink direction in addition to the selection of turned off small cell base stations. Our numerical results investigate the performance of the proposed scheme versus different system parameters and show an important saving in terms of total power consumption. © 2015 IEEE.
On the decomposition mechanisms of new imidazole-based energetic materials.
Yu, Zijun; Bernstein, Elliot R
2013-02-28
New imidazole-based energetic molecules (1,4-dinitroimidazole, 2,4-dinitroimidazole, 1-methyl-2,4-dinitroimidazole, and 1-methyl-2,4,5-trinitroimidazole) are studied both experimentally and theoretically. The NO molecule is observed as a main decomposition product from the above nitroimidazole energetic molecules excited at three UV wavelengths (226, 236, and 248 nm). Resolved rotational spectra related to three vibronic bands (0-0), (0-1), and (0-2) of the NO (A (2)Σ(+) ← X (2)Π) electronic transition have been obtained. A unique excitation wavelength independent dissociation channel is characterized for these four nitroimidazole energetic molecules: this pathway generates the NO product with a rotationally cold (10-60 K) and vibrationally hot (1300-1600 K) internal energy distribution. The predicted reaction mechanism for the nitroimidazole energetic molecule decomposition subsequent to electronic excitation is the following: electronically excited nitroimidazole energetic molecules descend to their ground electronic states through a series of conical intersections, dissociate on their ground electronic states subsequent to a nitro-nitrite isomerization, and produce NO molecules. Different from PETN, HMX, and RDX, the thermal dissociation process (ground electronic state decomposition from the Franck-Condon equilibrium point) of multinitroimidazoles is predicted to be a competition between NO(2) elimination and nitro-nitrite isomerization followed by NO elimination for all multinitroimidazoles except 1,4-dinitroimidazole. In this latter instance, N-NO(2) homolysis becomes the dominant decomposition channel on the ground electronic state, as found for HMX and RDX. Comparison of the stability of nitro-containing energetic materials with R-NO(2) (R = C, N, O) moieties is also discussed. Energetic materials with C-NO(2) are usually more thermally stable and impact/shock insensitive than are other energetic materials with N-NO(2) and O-NO(2) moieties. The
Jianchang Lu
2015-04-01
Based on the international community's analysis of the present CO2 emissions situation, a Log Mean Divisia Index (LMDI) decomposition model is proposed in this paper, aiming to reflect the decomposition of carbon productivity. The model is designed by analyzing the factors that affect carbon productivity. China's contribution to carbon productivity is analyzed along the dimensions of influencing factors, regional structure, and industrial structure. The conclusions are: (a) economic output, provincial carbon productivity, and energy structure are the most influential factors, which is consistent with China's current policy; (b) the distribution patterns of economic output, carbon productivity, and energy structure across regions have little to do with the traditional view of China's regional economic development patterns; (c) given regional protectionism, the actual situation of each region also needs to be considered; (d) within the industrial structure, the contribution of industry is the most prominent factor for China's carbon productivity, while industrial restructuring has not yet been carried out well enough.
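The additive LMDI identity behind such models can be sketched in a few lines: the change in a product C = x1·x2·…·xn between two periods splits exactly into one log-mean-weighted contribution per factor. The factor names and numbers below are illustrative, not the paper's data.

```python
import math

def lmdi_contributions(x0, xT):
    """Additive LMDI: split the change in C = prod(x_i) between a base
    period (x0) and a target period (xT) into one contribution per factor.
    The contributions sum exactly to C_T - C_0."""
    def logmean(a, b):
        return a if a == b else (a - b) / (math.log(a) - math.log(b))
    c0 = math.prod(x0.values())
    cT = math.prod(xT.values())
    weight = logmean(cT, c0)
    return {k: weight * math.log(xT[k] / x0[k]) for k in x0}

# illustrative Kaya-style identity: carbon = output * energy intensity
# * emission factor (values are made up for the example)
base   = {"output": 100.0, "intensity": 0.8, "emission_factor": 2.5}
target = {"output": 130.0, "intensity": 0.7, "emission_factor": 2.4}
effects = lmdi_contributions(base, target)
```

The defining property, that the factor effects sum to the total change (218.4 − 200 here), is what makes LMDI decompositions free of unexplained residuals.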
Wavelet decomposition based principal component analysis for face recognition using MATLAB
Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish
2016-03-01
For the realization of face recognition systems, in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks, and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms for its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from facial gestures; it resembles factor analysis in some sense, i.e., the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial gestures in the space and time domains. The experimental results show that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
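The pipeline, one level of wavelet decomposition followed by PCA and nearest-neighbour matching, can be sketched with a Haar approximation subband and an SVD-based PCA. The synthetic 16×16 "faces" and the choice of five components are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def haar_approx(img):
    """One level of 2-D Haar decomposition, keeping only the LL
    (approximation) subband: this shrinks the image and suppresses the
    high-frequency appearance changes PCA is sensitive to."""
    rows = (img[0::2] + img[1::2]) / 2.0       # average pairs of rows
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0

rng = np.random.default_rng(3)
faces = rng.random((10, 16, 16))               # 10 synthetic "faces"
X = np.stack([haar_approx(f).ravel() for f in faces])

# PCA on the subband vectors: principal directions via SVD
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:5]                            # top 5 eigen-directions
features = (X - mean) @ components.T           # gallery face features

def nearest(face):
    """Classify by nearest neighbour in the PCA feature space."""
    f = (haar_approx(face).ravel() - mean) @ components.T
    return int(np.argmin(np.linalg.norm(features - f, axis=1)))
```

Working in the 64-dimensional LL subband rather than the full 256-pixel image cuts the eigenvector computation by a factor of four in dimension, which is the computational saving the paper points to.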
CT image sequence restoration based on sparse and low-rank decomposition.
Shuiping Gou
Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function for the Wiener filter is employed to efficiently remove blur in the sparse component; Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. The recovered CT image sequence is then obtained by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries, and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP), and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model is the most suitable for CT images with small noise, whereas the GoDec model is the best for CT images with large noise.
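The sparse + low-rank split can be illustrated with a simple alternating scheme in the spirit of GoDec: hard-truncate the SVD for the low-rank part, soft-threshold the remainder for the sparse part. This is not the paper's exact solver, and the rank and threshold values are illustrative.

```python
import numpy as np

def soft(x, tau):
    """Element-wise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def sparse_lowrank_split(M, rank=1, sparse_tau=0.5, iters=50):
    """Alternate between a hard-truncated SVD (low-rank part L) and
    soft thresholding of the residual (sparse part S)."""
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        S = soft(M - L, sparse_tau)
    return L, S

rng = np.random.default_rng(0)
low = rng.standard_normal((20, 1)) @ rng.standard_normal((1, 30))  # rank-1 "background"
spikes = np.zeros((20, 30))
spikes[rng.integers(0, 20, 15), rng.integers(0, 30, 15)] = 5.0     # sparse "detail"
L, S = sparse_lowrank_split(low + spikes, rank=1)
```

In the paper's setting, the low-rank part would carry the slowly varying anatomy shared across the sequence and the sparse part the frame-specific detail, each deblurred with its own Wiener filter.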
Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition
Naveed ur Rehman
2015-05-01
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode-mixing and mode-misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or identically indexed IMFs from multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales across multiple channels, thus enabling their comparison at the pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including principal component analysis (PCA), the discrete wavelet transform (DWT), and the non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically significant performance differences.
De-jie YU; Jie-si LUO; Mei-li SHI
2010-01-01
An approach based on multi-scale chirplet sparse signal decomposition is proposed to separate multi-component polynomial phase signals and estimate their instantaneous frequencies. We generate a family of multi-scale chirplet functions which provide good local correlations with chirps over short time intervals. At every decomposition stage, we build the so-called family of chirplets, and our idea is to use a structured algorithm which exploits information in the family to chain chirplets together adaptively so as to form the polynomial phase signal component whose correlation with the current residue signal is largest. Simultaneously, the polynomial instantaneous frequency is estimated by connecting the linear frequencies of the chirplets adopted in the current separation. Simulation experiments demonstrate that this method can effectively separate the components of multi-component polynomial phase signals even at low signal-to-noise ratios, and estimate their instantaneous frequencies accurately.
Characterization and Thermal Decomposition Kinetics of Kapok (Ceiba pentandra L.)–Based Cellulose
Sarifah Fauziah Syed Draman
2013-11-01
Interest in using kapok (Ceiba pentandra L.)–based cellulose in composite preparation is growing due to its advantages, including cost-effectiveness, light weight, non-toxicity, and biodegradability. In this study, chloroform, sodium chlorite, and sodium hydroxide were used for wax removal, delignification, and hemicellulose removal, respectively. It was observed that the air entrapped inside the kapok fiber disappeared after alkali treatment. The structure became completely flattened, similar to a flat ribbon, when examined using a vapour pressure scanning electron microscope (VPSEM). Fourier transform infrared (FTIR) spectroscopy was used to characterize the untreated and treated kapok fibers. The peak at 898 cm−1, attributed to glucose ring stretching in cellulose, was observed for the obtained cellulose samples. Peaks corresponding to lignin (1505 and 1597 cm−1) and hemicellulose (1737 and 1248 cm−1) disappeared. Differential scanning calorimetry (DSC) indicated that the degradation of cellulose appears as an exothermic peak at about 300 to 350 °C. The activation energies for thermal decomposition of kapok cellulose and its hemicelluloses were 185 kJ/mol and 110 kJ/mol, respectively. The activation energy for thermal decomposition can be used as an alternative approach to determining the purity of cellulose.
Huang, X. Y.; Zhou, J. Q.; Wang, Z.; Deng, L. C.; Hong, S.
2017-05-01
China is now at a stage of accelerated industrialization and urbanization, with energy-intensive industries contributing a large proportion of economic growth. In this study, we examined industrial energy consumption by decomposition analysis to describe the driving factors of energy consumption in China. Based on input-output (I-O) tables from the World Input-Output Database (WIOD) website and China's energy use data from 1995 to 2011, we studied the sectoral changes in energy efficiency during the examined period. The results showed that all industries increased their energy efficiency. Energy consumption was decomposed into three factors by the logarithmic mean Divisia index (LMDI) method. The increase in production output was the leading factor driving up China's energy consumption. World Trade Organization accession and financial crises had a great impact on energy consumption. Based on these results, a series of energy policy suggestions for decision-makers is proposed.
An infrared target detection algorithm based on lateral inhibition and singular value decomposition
Li, Yun; Song, Yong; Zhao, Yufei; Zhao, Shangnan; Li, Xu; Li, Lin; Tang, Songyuan
2017-09-01
This paper proposes an infrared target detection algorithm based on lateral inhibition (LI) and singular value decomposition (SVD). First, a local structure descriptor based on the SVD of the gradient domain is constructed, which reflects the basic structures of local regions of an infrared image. Then, the LI network is modified by combining LI with the local structure descriptor to enhance the target and suppress the background. Meanwhile, to calculate the lateral inhibition coefficients adaptively, the direction parameters are determined from the dominant orientations obtained by SVD. Experimental results show that, compared with typical algorithms, the proposed algorithm not only can detect small targets or area targets under complex backgrounds, but also has excellent abilities of background suppression and target enhancement.
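A local structure descriptor from the SVD of a patch's gradient field can be sketched as below: one dominant singular value signals a single edge orientation, while near-equal singular values indicate a flat or isotropic region. The exact descriptor form here is an assumption for illustration, not necessarily the paper's.

```python
import numpy as np

def local_svd_descriptor(img, i, j, half=2):
    """Anisotropy of the gradient field in a (2*half+1)^2 patch, from the
    singular values of the n x 2 gradient matrix: ~1 means one dominant
    edge orientation, ~0 a flat or isotropic region."""
    patch = img[i - half:i + half + 1, j - half:j + half + 1].astype(float)
    gy, gx = np.gradient(patch)
    G = np.column_stack([gx.ravel(), gy.ravel()])
    s = np.linalg.svd(G, compute_uv=False)          # s[0] >= s[1] >= 0
    return (s[0] - s[1]) / (s[0] + s[1] + 1e-12)

img = np.zeros((11, 11))
img[:, 6:] = 1.0                        # a vertical step edge
edge_score = local_svd_descriptor(img, 5, 5)   # patch straddles the edge
flat_score = local_svd_descriptor(img, 5, 2)   # patch in the flat region
```

The right singular vector associated with `s[0]` gives the dominant gradient orientation, which is what the adaptive LI coefficients would be steered by.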
Thiele, Uwe; Frastia, Lubor
2007-01-01
A dynamical model is proposed to describe the coupled decomposition and profile evolution of a free surface film of a binary mixture. An example is a thin film of a polymer blend on a solid substrate undergoing simultaneous phase separation and dewetting. The model is based on model-H describing the coupled transport of the mass of one component (convective Cahn-Hilliard equation) and momentum (Navier-Stokes-Korteweg equations) supplemented by appropriate boundary conditions at the solid substrate and the free surface. General transport equations are derived using phenomenological non-equilibrium thermodynamics for a general non-isothermal setting taking into account Soret and Dufour effects and interfacial viscosity for the internal diffuse interface between the two components. Focusing on an isothermal setting the resulting model is compared to literature results and its base states corresponding to homogeneous or vertically stratified flat layers are analysed.
Imaouchen Yacine
2015-01-01
To detect rolling element bearing defects, much research has focused on Motor Current Signal Analysis (MCSA) using spectral analysis and the wavelet transform. This paper presents a new approach for rolling element bearing diagnosis without slip estimation, based on wavelet packet decomposition (WPD) and the Hilbert transform. Specifically, the Hilbert transform first extracts the envelope of the motor current signal, which contains the bearing fault-related frequency information. Subsequently, the envelope signal is adaptively decomposed into a number of frequency bands by the WPD algorithm. Two criteria, based on energy and correlation analyses, have been investigated to automate the frequency band selection. Experimental studies have confirmed that the proposed approach is effective in diagnosing rolling element bearing faults for improved induction motor condition monitoring and damage assessment.
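The two-stage idea, envelope extraction via the Hilbert transform followed by frequency-band energy screening, can be sketched as follows. The FFT-based analytic signal is standard; the equal-width band split is a crude stand-in for the wavelet packet decomposition, and the 30 Hz "fault" modulation on a 200 Hz carrier is synthetic.

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the FFT-based analytic signal (Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0        # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0            # keep the Nyquist bin for even lengths
    return np.abs(np.fft.ifft(X * h))

def band_energies(x, n_bands):
    """Energy in equal-width frequency bands -- a crude stand-in for the
    wavelet packet split used to select the fault-related band."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    return [float(b.sum()) for b in np.array_split(spec, n_bands)]

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# a 200 Hz "current" carrier amplitude-modulated at a 30 Hz fault rate
sig = (1 + 0.5 * np.cos(2 * np.pi * 30 * t)) * np.sin(2 * np.pi * 200 * t)
env = envelope(sig)
energies = band_energies(env - env.mean(), 4)
```

Demodulation moves the fault signature from sidebands around the carrier down to the low-frequency band of the envelope, where an energy criterion can pick it out automatically.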
Elharrouss, Omar; Moujahid, Driss; Elkah, Samah; Tairi, Hamid
2016-11-01
A particular algorithm for moving object detection using a background subtraction approach is proposed. We generate the background model by combining quad-tree decomposition with entropy theory. In general, many background subtraction approaches are sensitive to sudden illumination changes in the scene and cannot update the background image accordingly. The proposed background modeling approach addresses the illumination change problem. After performing background subtraction based on the proposed background model, the moving targets can be accurately detected at each frame of the image sequence. To achieve high accuracy in motion detection, a binary motion mask is computed by the proposed threshold function. The experimental analysis, based on statistical measurements, proves the efficiency of the proposed method in terms of quality and quantity, and it even substantially outperforms existing methods in perceptual evaluation.
Nuclear power plant sensor fault detection using singular value decomposition-based method
SHYAMAPADA MANDAL; N SAIRAM; S SRIDHAR; P SWAMINATHAN
2017-09-01
In a nuclear power plant, periodic sensor calibration is necessary to ensure the correctness of measurements. Sensors which have gone out of calibration can lead to malfunction of the plant, possibly causing a loss in revenue or damage to equipment. Continuous sensor status monitoring is desirable to assure smooth running of the plant and reduce maintenance costs associated with unnecessary manual sensor calibrations. In this paper, a method is proposed to detect and identify any degradation of sensor performance. The validation process consists of two steps: (i) residual generation and (ii) fault detection by residual evaluation. Singular value decomposition (SVD) and Euclidean distance (ED) methods are used to generate the residual and to evaluate the fault on the residual space, respectively. This paper claims that the SVD-based fault detection method is better than the well-known principal component analysis based method. The method is validated using data from a fast breeder test reactor.
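The two-step validation (residual generation, then residual evaluation) can be sketched with a plain SVD model: learn the normal-behaviour subspace from healthy data, then flag samples whose Euclidean distance from their projection onto that subspace is large. The three-sensor data and the threshold are illustrative assumptions, not plant data.

```python
import numpy as np

rng = np.random.default_rng(2)

# healthy training data: three correlated "sensors" driven by one factor
factor = rng.standard_normal(200)
train = np.column_stack([factor, 2 * factor, -factor])
train += 0.01 * rng.standard_normal(train.shape)

# SVD of the healthy data gives the normal-behaviour subspace
_, _, Vt = np.linalg.svd(train, full_matrices=False)
basis = Vt[:1]                           # dominant right singular vector

def residual_distance(sample):
    """Residual generation + evaluation: Euclidean distance between a
    sample and its projection onto the normal subspace."""
    projection = sample @ basis.T @ basis
    return float(np.linalg.norm(sample - projection))

healthy = np.array([1.0, 2.0, -1.0])
faulty = np.array([1.0, 3.5, -1.0])      # sensor 2 drifted out of calibration
```

A drifting sensor breaks the learned inter-sensor correlation, so its sample leaves the subspace and the residual distance jumps, which is the detection signal.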
Yu Dejie; Cheng Junsheng; Yang Yu
2005-01-01
Based upon the empirical mode decomposition (EMD) method and the Hilbert spectrum, a method for fault diagnosis of roller bearings is proposed. Orthogonal wavelet bases are used to translate the vibration signals of a roller bearing into a time-scale representation; then an envelope signal can be obtained by envelope spectrum analysis of the wavelet coefficients at high scales. By applying the EMD method and the Hilbert transform to the envelope signal, we obtain the local Hilbert marginal spectrum, from which faults in a roller bearing can be diagnosed and fault patterns identified. Practical vibration signals measured from roller bearings with outer-race or inner-race faults are analyzed by the proposed method. The results show that the proposed method is superior to the traditional envelope spectrum method in extracting the fault characteristics of roller bearings.
L. Soriano-Equigua
2011-12-01
Coordinated beamforming based on singular value decomposition is an iterative method to jointly optimize the transmit beamformers and receive combiners, achieving high sum rates in the downlink of multiuser systems by exploiting the multi-dimensional wireless channel created by multiple transmit and receive antennas. The optimization is done at the base station, and the quantized beamformers are sent to the users through a low-rate link. In this work, we propose to optimize this algorithm by reducing the number of iterations and improving its uncoded bit error rate performance. Simulation results show that our proposal achieves a better bit error rate with fewer iterations than the original algorithm.
蔡跃洲
2009-01-01
Based on a decomposition of China's rural household income, we made quantitative analyses of the factors affecting rural consumption using co-integration and other econometric tools. By comparing the results with the ongoing economic stimulus package rolled out by the central government, we analyzed the effects of different policies on rural consumption. The empirical study and policy analysis show that: (1) income from household business operation, wages, and fiscal relief funds are the three main factors affecting rural household consumption; (2) the ongoing stimulus package, which includes both short-term measures like consumption subsidies and long-term policies aiming to increase rural household income and improve the rural consumption environment, is effective in promoting rural consumption; (3) in boosting rural consumption, emphasis should be put on various long-term policies, and fiscal expenditure should put more weight on consumption than on agriculture, forestry, and irrigation; and (4) intra-county economies are crucial in kicking off rural consumption, so policies should stress integrating rural consumption with the development of local economies.
A decomposition based on path sets for the Multi-Commodity k-splittable Maximum Flow Problem
Gamst, Mette
Switching. In the literature, the problem is solved to optimality using branch-and-price algorithms built on path-based Dantzig-Wolfe decompositions. This paper proposes a new branch-and-price algorithm built on a path set-based Dantzig-Wolfe decomposition. A path set consists of up to k paths, each...... carrying a certain amount of flow. The new branch-and-price algorithm is implemented and compared to the leading algorithms in the literature. Results for the proposed method are competitive and the method even has best performance on some instances. However, the results also indicate some scaling issues....
Hall, R. B.; Rao, I. J.; Qi, H. J.
2014-05-01
The present effort provides a 3-D thermodynamic framework generalizing the 1-D modeling of 2-way shape memory materials described by Westbrook et al. (J. Eng. Mater. Technol. 312:041010, 2010) and Chung et al. (Macromolecules 41:184-192, 2008), while extending the strain-induced crystallization and shape memory approaches of Rao and Rajagopal (Interfaces Free Bound. 2:73-94, 2000; Int. J. Solids Struct. 38:1149-1167, 2001), Barot and Rao (Z. Angew. Math. Phys. 57:652-681, 2006), and Barot et al. (Int. J. Eng. Sci. 46:325-351, 2008) to include finite thermal expansion within a logarithmic strain basis. The free energy of newly-formed orthotropic crystallites is assumed additive, with no strains in their respective configurations of formation. A multiplicative decomposition is assumed for the assumed thermoelastic orthotropic expansional strains of the respective crystallites. The properties of the crystallites are allowed to depend both on current temperature and their respective temperatures of formation. The entropy production rate relation is written in the frame rotating with the logarithmic spin and produces stress and entropy relations incorporating the integrated configurational free energies, and a driving term for the crystallization analogous to that obtained by the previous studies of Rao et al. The salient attributes of the 1-D modeling of Westbrook et al. are recovered, and applications are discussed.
Analysis of Human's Motions Based on Local Mean Decomposition in Through-wall Radar Detection
Lu, Qi; Liu, Cai; Zeng, Zhaofa; Li, Jing; Zhang, Xuebing
2016-04-01
Observation of human motions through a wall is an important issue in security applications and search and rescue. Radar has advantages in looking through walls where other sensors give low performance or cannot be used at all. Ultrawideband (UWB) radar has high spatial resolution as a result of employing ultranarrow pulses. It can distinguish closely positioned targets and provide time-lapse information about targets. Moreover, UWB radar shows good performance in wall penetration because the inherently short pulses spread their energy over a broad frequency range. Human motions show periodic features, including respiration, swinging arms and legs, and fluctuations of the torso. Detection of human targets is based on the fact that there is always periodic motion due to breathing or other body movements like walking. The radar gains reflections from each human body part and adds the reflections at each time sample. The periodic movements cause micro-Doppler modulation in the reflected radar signals. Time-frequency analysis methods are considered effective tools to analyze and extract the micro-Doppler effects caused by the periodic movements in the reflected radar signal, such as the short-time Fourier transform (STFT), the wavelet transform (WT), and the Hilbert-Huang transform (HHT). The local mean decomposition (LMD), initially developed by Smith (2005), decomposes amplitude- and frequency-modulated signals into a small set of product functions (PFs), each of which is the product of an envelope signal and a frequency-modulated signal from which a time-varying instantaneous phase and instantaneous frequency can be derived. By bypassing the Hilbert transform, the LMD has no demodulation error coming from the window effect and involves no negative frequencies without physical sense. Also, the instantaneous attributes obtained by LMD are more stable and precise than those obtained by the empirical mode decomposition (EMD) because LMD uses smoothed local
Optical image encryption based on multi-beam interference and common vector decomposition
Chen, Linfei; He, Bingyu; Chen, Xudong; Gao, Xiong; Liu, Jingyu
2016-02-01
Based on multi-beam interference and common vector decomposition, we propose a new method for optical image encryption. In the encryption process, the information of an original image is encoded into n amplitude masks and n phase masks, which serve as the ciphertext and a set of keys. In the decryption process, parallel light irradiates the amplitude masks and phase masks, then passes through a lens that performs a Fourier transform, and finally the original image is obtained at the output plane after interference. The security of the encryption system is also discussed in the paper, and we find that only when all the keys are correct can the information of the original image be recovered. Computer simulation results are presented to verify the validity and the security of the proposed method.
Wu, Zhizhang; Huang, Zhongyi
2016-07-01
In this paper, we consider the numerical solution of the one-dimensional Schrödinger equation with a periodic lattice potential and a random external potential. This is an important model in solid state physics where the randomness results from complicated phenomena that are not exactly known. Here we generalize the Bloch decomposition-based time-splitting pseudospectral method to the stochastic setting using the generalized polynomial chaos with a Galerkin procedure so that the main effects of dispersion and periodic potential are still computed together. We prove that our method is unconditionally stable and numerical examples show that it has other nice properties and is more efficient than the traditional method. Finally, we give some numerical evidence for the well-known phenomenon of Anderson localization.
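The time-splitting (Strang) idea underlying the method can be illustrated for a deterministic 1D Schrödinger equation; the harmonic stand-in potential and grid parameters below are assumptions for the sketch, and the Bloch-decomposition and polynomial-chaos machinery of the paper is not reproduced:

```python
import numpy as np

# Strang-splitting pseudospectral step for i u_t = -0.5 u_xx + V(x) u:
# half potential step, full kinetic step in Fourier space, half potential step.
N, L, dt = 256, 2 * np.pi, 1e-3
x = L * np.arange(N) / N - L / 2
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
V = 0.5 * x**2                               # stand-in external potential

u = np.exp(-x**2)                            # initial wave packet
u = u / np.sqrt(np.sum(np.abs(u)**2) * (L / N))

for _ in range(1000):
    u = np.exp(-0.5j * dt * V) * u                               # half potential
    u = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(u))   # kinetic
    u = np.exp(-0.5j * dt * V) * u                               # half potential

norm = np.sum(np.abs(u)**2) * (L / N)        # conserved up to roundoff
```

Each factor is unitary, so the discrete L2 norm is conserved regardless of step size, which is the source of the unconditional stability claimed in the abstract.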
T. Lukas
2014-12-01
Full Text Available The combined finite–discrete element method (FDEM) belongs to a family of methods of computational mechanics of discontinua. The method is suitable for problems of discontinua, where particles are deformable and can fracture or fragment. The applications of FDEM have spread over a number of disciplines including rock mechanics, where problems like mining, mineral processing or rock blasting can be solved by employing FDEM. In this work, a novel approach for the parallelization of two-dimensional (2D) FDEM aiming at clusters and desktop computers is developed. Dynamic domain decomposition based parallelization solvers covering all aspects of FDEM have been developed. These have been implemented into the open source Y2D software package and have been tested on a PC cluster. The overall performance and scalability of the parallel code have been studied using numerical examples. The results obtained confirm the suitability of the parallel implementation for solving large scale problems.
Junjun Yin; Jian Yang
2014-01-01
An improved algorithm for multi-polarization reconstruction from compact polarimetry (CP) is proposed. According to two fundamental assumptions in compact polarimetric reconstruction, two improvements are proposed. Firstly, the four-component model-based decomposition algorithm is modified with a new volume scattering model. The decomposed helix scattering component is then used to deal with the non-reflection-symmetry condition in compact polarimetric measurements. Using the decomposed power and considering the scattering mechanism of each component, an average relationship between co-polarized and cross-polarized channels is developed over the original polarization state extrapolation model. E-SAR polarimetric data acquired over the Oberpfaffenhofen area and JPL/AIRSAR polarimetric data acquired over San Francisco are used for verification, and good reconstruction results are obtained, demonstrating the effectiveness of the proposed algorithm.
Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.
Hendler, R W; Shrager, R I
1994-01-01
Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
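The SVD-based pseudoinverse deconvolution described here can be sketched for a noiseless three-component mixture; the "spectra" below are random stand-ins, not the paper's pH-indicator data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Columns of S: pure "spectra" of three hypothetical components
# (illustrative numbers only).
S = rng.random((50, 3))
c_true = np.array([0.2, 0.5, 0.3])       # true mixture concentrations
mix = S @ c_true                          # observed mixed spectrum

# Deconvolution via SVD: pinv(S) = V diag(1/s) U^T
U, s, Vt = np.linalg.svd(S, full_matrices=False)
S_pinv = Vt.T @ np.diag(1.0 / s) @ U.T
c_est = S_pinv @ mix                      # recovered concentrations
```

In the noiseless, full-rank case the pseudoinverse recovers the concentrations exactly; with noise, truncating small singular values regularizes the solution, which is the situation the paper's later examples address.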
New method for signal encryption using blind source separation based on subband decomposition
Zuyuan Yang; Guoxu Zhou; Zongze Wu; Jinlong Zhang
2008-01-01
A novel cryptosystem based on subband decomposition independent component analysis (SDICA) is proposed in this work, where no assumption of independence between the ciphers and the plaintexts is required. In the proposed cryptosystem, the encryption is asynchronous, i.e., the plaintexts are first mixed with each other and then mixed with the ciphers. In addition, the decryption is asynchronous, so that the decryption accuracy of the plaintexts can be enhanced. Some special information about the original mixing matrix is used to resolve the indeterminacy of the permutation and scale of the columns of the recovered mixing matrix in SDICA, instead of the characteristics of the plaintexts. Simulations are given to illustrate the security and availability of our cryptosystem.
A Human ECG Identification System Based on Ensemble Empirical Mode Decomposition
Yi Luo
2013-05-01
Full Text Available In this paper, a human electrocardiogram (ECG) identification system based on ensemble empirical mode decomposition (EEMD) is designed. A robust preprocessing method comprising noise elimination, heartbeat normalization and quality measurement is proposed to eliminate the effects of noise and heart rate variability. The system is independent of the heart rate. The ECG signal is decomposed into a number of intrinsic mode functions (IMFs) and Welch spectral analysis is used to extract the significant heartbeat signal features. Principal component analysis is used to reduce the dimensionality of the feature space, and the K-nearest neighbors (K-NN) method is applied as the classifier tool. The proposed human ECG identification system was tested on standard MIT-BIH ECG databases: the ST change database, the long-term ST database, and the PTB database. The system achieved an identification accuracy of 95% for 90 subjects, demonstrating the effectiveness of the proposed method in terms of accuracy and robustness.
Micro-motion Signature Extraction Method for Wideband Radar Based on Complex Image OMP Decomposition
Luo Ying
2012-12-01
Full Text Available In order to extract the micro-motion signatures under conditions of Migration Through Range Cells (MTRC) of micro-motional scatterers and azimuthal undersampling in wideband radar, a method based on the Orthogonal Matching Pursuit (OMP) decomposition of the complex image is proposed. By making use of the amplitude and phase information of the “range-slow-time image”, a set of micro-Doppler signal atoms is constructed in the complex image space. The OMP algorithm in vector space is then extended to the complex image space to obtain the micro-motion parameters. Simulations demonstrate that the proposed method can extract the micro-motion signatures when MTRC of micro-motional scatterers occurs, and can also work well when the sampling rate is lower than the Nyquist sampling rate.
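The standard vector-space OMP that the paper extends to the complex image space can be sketched as follows; the random dictionary and sparsity level are illustrative assumptions, not the micro-Doppler atoms of the paper:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedily select atoms of dictionary D
    (columns, assumed unit norm) and re-fit the coefficients by least
    squares over the selected support at each step."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.0, -2.0, 0.5]          # 3-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, 3)
```

Because the least-squares refit makes the residual orthogonal to all selected atoms, no atom is chosen twice, and a sufficiently incoherent dictionary yields exact recovery of the sparse coefficients.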
A novel moving mesh method based on the domain decomposition for traveling singular sources problems
Zhou, Xiaoyan; Liang, Keiwei
2012-01-01
This paper studies the numerical solution of traveling singular source problems. A major challenge is that the sources move with different speeds. Our work focuses on a moving mesh method based on domain decomposition. A predictor-corrector algorithm is derived to simulate the positions of the singular sources, which are described by ordinary differential equations. The whole domain is split into several subdomains according to the positions of the sources, with the endpoints of each subdomain being two adjacent sources. In each subdomain, the moving mesh method is applied separately. Moreover, the computation of the jump $[\\dot{u}]$ is avoided and only two different cases need to be discussed in the discretization of the PDE. Furthermore, the new method achieves the desired second order of spatial convergence. Numerical examples are presented to illustrate the convergence rates and the efficiency of the method. Blow-up phenomena are also investigated for various motions of the sources.
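A minimal predictor-corrector scheme (Euler predictor, trapezoidal corrector) for source-position ODEs can be sketched as follows; the two-source velocity field is a made-up example, not the paper's test problem:

```python
import numpy as np

def heun(f, x0, t0, t1, n_steps):
    """Predictor-corrector integration of dx/dt = f(t, x):
    explicit Euler predictor followed by a trapezoidal corrector."""
    t, x = t0, np.asarray(x0, dtype=float)
    h = (t1 - t0) / n_steps
    for _ in range(n_steps):
        x_pred = x + h * f(t, x)                          # predictor
        x = x + 0.5 * h * (f(t, x) + f(t + h, x_pred))    # corrector
        t += h
    return x

# Two sources moving with different speeds: dx/dt = [1, 2] * cos(t)
v = lambda t, x: np.array([1.0, 2.0]) * np.cos(t)
x_end = heun(v, [0.0, 0.0], 0.0, 1.0, 1000)
# exact positions at t = 1: [sin(1), 2*sin(1)]
```

The scheme is second-order accurate, matching the spatial convergence order the abstract targets for the overall method.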
Li, Hongguang; Li, Ming; Li, Cheng; Li, Fucai; Meng, Guang
2017-09-01
This paper addresses the decoupling of multiple faults in a turbo-expander rotor system using Differential-based Ensemble Empirical Mode Decomposition (DEEMD). DEEMD is an improved version of DEMD that resolves the problem of mode mixing. The nonlinear behaviors of the turbo-expander considering a temperature gradient with crack, rub-impact and pedestal looseness faults are investigated respectively, so that a baseline for the multi-fault decoupling can be established. DEEMD is then applied to the vibration signals of the rotor system with coupled faults acquired by numerical simulation, and the results indicate that DEEMD can successfully decouple the coupled faults and is more efficient than EEMD. DEEMD is also applied to the vibration signal of the misalignment fault coupled with a rub-impact fault obtained during the adjustment of the experimental system. The conclusion shows that DEEMD can decompose the practical multi-fault signal, and the industrial prospects of DEEMD are verified as well.
Trading strategy based on dynamic mode decomposition: Tested in Chinese stock market
Cui, Ling-xiao; Long, Wen
2016-11-01
Dynamic mode decomposition (DMD) is an effective method to capture the intrinsic dynamical modes of a complex system. In this work, we adopt the DMD method to discover evolutionary patterns in the stock market and apply it to the Chinese A-share stock market. We design two strategies based on the DMD algorithm. The strategy that considers only the timing problem makes reliable profits in a choppy market with no prominent trend, but fails to beat the benchmark moving-average strategy in a bull market. After incorporating the spatial information from the spatial-temporal coherent structure of the DMD modes, we improved the trading strategy remarkably. The profitability of the DMD strategies is then quantitatively evaluated by performing the SPA test to correct for the data-snooping effect. The results further prove that the DMD algorithm can model market patterns well in a sideways market.
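The core of the exact DMD algorithm can be sketched in NumPy; the 2x2 linear system below is a toy stand-in for market snapshot data:

```python
import numpy as np

def dmd_eigs(X, r):
    """Exact DMD: return the r leading DMD eigenvalues of snapshot data X,
    where columns of X are successive state snapshots."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    # Low-rank representation of the best-fit linear operator A: X2 ≈ A X1
    A_tilde = U.conj().T @ X2 @ Vt.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)

# Snapshots of a known linear system x_{k+1} = A x_k
A = np.array([[0.9, 0.2], [0.0, 0.8]])
x = np.array([1.0, 1.0])
snaps = []
for _ in range(20):
    snaps.append(x.copy())
    x = A @ x
X = np.array(snaps).T
eigs = np.sort(dmd_eigs(X, 2).real)
```

On noiseless data from a linear system, the DMD eigenvalues coincide with those of the underlying operator (here 0.8 and 0.9); on market data they instead characterize the growth/decay and oscillation of empirically identified modes.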
Method for signal decomposition and denoising based on nonuniform cosine-modulated filter banks
Xuemei Xie; Li Li; Guangming Shi; Bin Peng
2008-01-01
In this paper, a novel method for signal decomposition and denoising is proposed based on a nonuniform filter bank (NUFB), which is derived from a uniform filter bank. With this method, the signal is first decomposed into M subbands using a uniform filter bank. Then, according to their energy distribution, the corresponding consecutive filters are merged to compose the nonuniform filters. With the resulting NUFB, the signal can be readily matched and flexibly decomposed according to its power spectrum distribution. As another advantage, this method can be used to detect and remove narrow-band noise from a corrupted signal. To verify the proposed method, a simulation of extracting the main information of an audio signal and removing its glitch is given.
Huan Zhao
2011-06-01
Full Text Available According to the distribution characteristics of noise and clean speech signals in the frequency domain, a new speech enhancement method based on the Teager energy operator (TEO) and perceptual wavelet packet decomposition (PWPD) is proposed. Firstly, a modified mask construction method is introduced to protect the acoustic cues at low frequencies. Then a level-dependent parameter is introduced to further adjust the thresholds in light of the noise distribution. Finally, the sub-bands which have very little influence are set directly to 0 to improve the signal-to-noise ratio (SNR) and reduce the computation load. Simulation results show that, under different kinds of noise environments, this new method not only enhances the signal-to-noise ratio (SNR) and perceptual evaluation of speech quality (PESQ) scores, but also reduces the computation load, which is very advantageous for real-time implementation.
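The discrete Teager energy operator used in this method is a three-sample formula; the tone parameters below are illustrative:

```python
import numpy as np

def teager(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure tone A*sin(omega*n), the TEO output is the exact constant
# A^2 * sin(omega)^2, i.e. it tracks both amplitude and frequency.
n = np.arange(1000)
tone = 0.5 * np.sin(0.2 * n)
psi = teager(tone)
```

This amplitude-and-frequency sensitivity is what makes the TEO useful for weighting wavelet-packet coefficients before thresholding.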
Bi Caifeng; Xiao Yan; Fan Yuhua; Xie Sitan; Xu Jiakun
2007-01-01
A new unsymmetrical solid Schiff base (LLi) was synthesized using L-lysine, o-vanillin and 2-hydroxy-1-naphthaldehyde. The solid La(Ⅲ) complex of this ligand, [LaL(NO3)](NO3)·2H2O, was prepared and characterized by elemental analyses, IR, UV and molar conductance. The thermal decomposition kinetics of the complex for the second stage were studied under non-isothermal conditions by TG and DTG methods. The kinetic equation may be expressed as: dα/dt = A·e^(-E/RT)·(1-α)^2. The kinetic parameters (E, A), activation entropy ΔS≠ and activation free energy ΔG≠ were also obtained.
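The stated second-order rate law can be integrated numerically at a fixed temperature; the pre-exponential factor and activation energy below are placeholders, not the values fitted for this complex:

```python
import numpy as np

# Second-order decomposition kinetics dα/dt = A·exp(-E/RT)·(1-α)^2,
# integrated isothermally with explicit Euler steps.
A_pre, E, R, T = 1.0e8, 1.0e5, 8.314, 600.0   # 1/s, J/mol, J/(mol·K), K
k = A_pre * np.exp(-E / (R * T))              # rate constant at T

t = np.linspace(0.0, 200.0, 20001)
dt = t[1] - t[0]
alpha = np.empty_like(t)
alpha[0] = 0.0                                # fraction decomposed
for i in range(1, len(t)):
    alpha[i] = alpha[i - 1] + dt * k * (1.0 - alpha[i - 1]) ** 2

# The isothermal closed form for this rate law is α(t) = k·t / (1 + k·t),
# which the numerical solution should track closely.
```

In the paper the TG/DTG analysis is non-isothermal, so T varies with time; the isothermal case above just makes the rate law concrete.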
Huang, Zhongyi; Markowich, Peter; Sparber, Christof
2012-01-01
We present a new numerical method for accurate computations of solutions to (linear) one-dimensional Schrödinger equations with periodic potentials. This is a prominent model in solid state physics where we also allow for perturbations by non-periodic potentials describing external electric fields. Our approach is based on the classical Bloch decomposition method, which allows us to diagonalize the periodic part of the Hamiltonian operator. Hence, the dominant effects from dispersion and the periodic lattice potential are computed together, while the non-periodic potential acts only as a perturbation. Because the split-step commutator error between the periodic and non-periodic parts is relatively small, the step size can be chosen substantially larger than for the traditional splitting of the dispersion and potential operators. Indeed, the given examples show that our method is unconditionally stable and more efficient than the traditional split-step pseudospectral schemes. To this end a particular fo...
T. Lukas; G.G. Schiava D’Albano; A. Munjiza
2014-01-01
The combined finite-discrete element method (FDEM) belongs to a family of methods of computational mechanics of discontinua. The method is suitable for problems of discontinua, where particles are deformable and can fracture or fragment. The applications of FDEM have spread over a number of disciplines including rock mechanics, where problems like mining, mineral processing or rock blasting can be solved by employing FDEM. In this work, a novel approach for the parallelization of two-dimensional (2D) FDEM aiming at clusters and desktop computers is developed. Dynamic domain decomposition based parallelization solvers covering all aspects of FDEM have been developed. These have been implemented into the open source Y2D software package and have been tested on a PC cluster. The overall performance and scalability of the parallel code have been studied using numerical examples. The results obtained confirm the suitability of the parallel implementation for solving large scale problems.
Opinion Mining Classification Using Key Word Summarization Based on Singular Value Decomposition
B Valarmathi
2011-01-01
Full Text Available With the popularity of online shopping, it is increasingly important for manufacturers and service providers to ask customers to review their products and associated services. Typically, the number of customer reviews that a product receives grows rapidly and can be in the hundreds or even thousands. This makes it difficult for a potential customer to decide whether to buy the product or not. It is also difficult for the manufacturer of the product to keep track of and manage customer opinions. Opinion mining is an emerging field that classifies user opinions into positive and negative reviews. In this paper, we propose a methodology using word scores based on Singular Value Decomposition, modeling a custom corpus for the topic on which opinion mining is to be performed. Bayes Net and decision tree induction algorithms are used to classify the opinions.
Riemann-Liouville Fractional Integral based Empirical Mode Decomposition for ECG Denoising.
Jain, Shweta; Bajaj, Varun; Kumar, Anil
2017-09-18
Electrocardiograph (ECG) denoising is the most important step in the diagnosis of heart-related diseases, as noise influences the diagnosis. In this paper, a new method for ECG denoising is proposed, which incorporates the empirical mode decomposition algorithm and Riemann-Liouville (RL) fractional integral filtering. In the proposed method, the noisy ECG signal is decomposed into its intrinsic mode functions (IMFs), from which noisy IMFs are identified by the proposed noisy-IMF identification methodology. RL fractional integral filtering is applied to the noisy IMFs to get denoised IMFs; the ECG signal is reconstructed with the denoised IMFs and the remaining signal-dominant IMFs to obtain a noise-free ECG signal. The proposed methodology is tested with the MIT-BIH arrhythmia database. Its performance, in terms of signal-to-noise ratio (SNR) and mean square error (MSE), is compared with other related fractional integral and EMD based ECG denoising methods. The obtained results show that the proposed method gives efficient noise-removal performance.
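The Riemann-Liouville fractional integral can be approximated on a uniform grid by integrating the singular kernel exactly over each subinterval; this is a generic numerical sketch, not the paper's filtering pipeline:

```python
import numpy as np
from math import gamma

def rl_fractional_integral(f, t, alpha):
    """Riemann-Liouville fractional integral I^alpha f on a uniform grid t:
    I^alpha f(t) = 1/Gamma(alpha) * ∫_0^t (t - s)^(alpha - 1) f(s) ds,
    using a product rule that integrates the singular kernel in closed form."""
    out = np.zeros_like(f)
    for n in range(1, len(t)):
        s_left, s_right = t[:n], t[1:n + 1]
        # ∫ (t_n - s)^(alpha-1) ds over [s_k, s_{k+1}], exactly:
        w = ((t[n] - s_left) ** alpha - (t[n] - s_right) ** alpha) / alpha
        f_mid = 0.5 * (f[:n] + f[1:n + 1])     # midpoint value of f
        out[n] = (w * f_mid).sum() / gamma(alpha)
    return out

t = np.linspace(0.0, 1.0, 501)
I_half = rl_fractional_integral(t, t, 0.5)      # I^0.5 applied to f(s) = s
exact = t ** 1.5 / gamma(2.5)                   # known closed form
```

A fractional integral of order alpha in (0, 1) acts as a low-pass operation with a gently sloped frequency response, which is why it is attractive for smoothing noisy IMFs.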
Quantum Game Theory Based on the Schmidt Decomposition: Can Entanglement Resolve Dilemmas?
Ichikawa, T; Tsutsui, I; Cheon, Taksu; Ichikawa, Tsubasa; Tsutsui, Izumi
2007-01-01
We present a novel formulation of quantum game theory based on the Schmidt decomposition, which has the merit that the entanglement of quantum strategies is manifestly quantified. We apply this formulation to 2-player, 2-strategy symmetric games and obtain a complete set of quantum Nash equilibria. Apart from those available with maximal entanglement, these quantum Nash equilibria are extensions of the Nash equilibria in classical game theory. The phase structure of the equilibria is determined for all values of entanglement, and thereby the possibility of resolving the dilemmas by entanglement in the games of Chicken, the Battle of the Sexes, the Prisoners' Dilemma, and the Stag Hunt is examined. We find that entanglement transforms these dilemmas into each other but cannot resolve them, except in the Stag Hunt game, where the dilemma can be alleviated to a certain degree.
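The Schmidt coefficients that quantify strategy entanglement are simply the singular values of the reshaped state vector, which a short sketch makes concrete:

```python
import numpy as np

def schmidt_coefficients(state, dim_a, dim_b):
    """Schmidt coefficients of a bipartite pure state: the singular values
    of the state vector reshaped into a dim_a x dim_b matrix."""
    return np.linalg.svd(state.reshape(dim_a, dim_b), compute_uv=False)

# Bell state (|00> + |11>)/sqrt(2): a maximally entangled 2-qubit state,
# the extreme case of the entangled strategies considered in the paper.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
coeffs = schmidt_coefficients(bell, 2, 2)
# both coefficients equal 1/sqrt(2); a product state would give [1, 0]
```

Equal Schmidt coefficients signal maximal entanglement, while a single nonzero coefficient means the joint strategy factorizes into independent classical-like strategies.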
Dong, Lieqian; Wang, Deying; Zhang, Yimeng; Zhou, Datong
2017-09-01
Signal enhancement is a necessary step in seismic data processing. In this paper we utilize the complementary ensemble empirical mode decomposition (CEEMD) and complex curvelet transform (CCT) methods to separate signal from random noise and thereby improve the signal-to-noise (S/N) ratio. Firstly, the original noisy data is decomposed into a series of intrinsic mode function (IMF) profiles with the aid of CEEMD. Then the IMFs with noise are transformed into the CCT domain. By choosing different thresholds based on the noise level difference of each IMF profile, the noise in the original data can be suppressed. Finally, we illustrate the effectiveness of the approach on simulated and field datasets.
A Comparative Study of Empirical Mode Decomposition-Based Filtering for Impact Signal
Liwei Zhan
2016-12-01
Full Text Available The Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) has been used to propose a new method for filtering time series originating from nonlinear systems. The filtering method is based on fuzzy entropy and a new waveform. The new waveform is defined as follows: the Intrinsic Mode Functions (IMFs) obtained by the CEEMDAN algorithm are first sorted in ascending order (the sorted IMFs are symmetric about the center point, because at any point the mean value of the envelope defined by the local maxima and the local minima is zero), and the energies of the sorted IMFs are calculated; finally, the new waveform with axial symmetry is obtained. The complexity of the new waveform can be quantified by fuzzy entropy. The relevant modes (noisy-signal modes and useful-signal modes) can be identified by the difference between the fuzzy entropy of each new waveform and that of the next adjacent new waveform. To evaluate the filter performance, CEEMDAN with sample entropy, Ensemble Empirical Mode Decomposition (EEMD) with fuzzy entropy, and EEMD with sample entropy were used to filter synthesized signals with various levels of input signal-to-noise ratio (SNRin). In particular, this approach is successful in filtering impact signals. The filtering results are evaluated by a de-trended fluctuation analysis (DFA) algorithm, revised mean square error (RMSE), and revised signal-to-noise ratio (RSNR), respectively. The filtering results for simulated and impact signals show that the filtering method based on CEEMDAN and fuzzy entropy outperforms the other signal filtering methods.
Bhutani, Hemant; Singh, Saranjit; Vir, Sanjay; Bhutani, K K; Kumar, Raj; Chakraborti, Asit K; Jindal, K C
2007-03-12
Isoniazid was subjected to different ICH prescribed stress conditions of thermal stress, hydrolysis, oxidation and photolysis. The drug was stable to dry heat (50 and 60 degrees C). It showed extensive decomposition under hydrolytic conditions, while it was only moderately sensitive to oxidation stress. The solid drug turned intense yellow on exposure to light under accelerated conditions of temperature (40 degrees C) and humidity (75% RH). In total, three major degradation products were detected by LC. For establishment of a stability-indicating assay, the reaction solutions in which different degradation products were formed were mixed, and the separation was optimized by varying the LC conditions. An acceptable separation was achieved using a C-18 column and a mobile phase comprising water:acetonitrile (96:4, v/v), with flow rate and detection wavelength being 0.5 ml min(-1) and 254 nm, respectively. The degradation products appeared at relative retention times (RR(T)) of 0.71, 1.34 and 4.22. The validation studies established a linear response of the drug at concentrations between 50 and 1000 microg ml(-1). The mean values (+/-R.S.D.) of slope, intercept and correlation coefficient were 35,199 (+/-0.88), 114,310 (+/-4.70) and 0.9998 (+/-0.01), respectively. The mean R.S.D. values for intra- and inter-day precision were 0.24 and 0.90, respectively. The recovery of the drug ranged between 99.42 and 100.58%, when it was spiked to a mixture of solutions in which sufficient degradation was observed. The specificity was established through peak purity testing using a photodiode array detector. The method worked well on application to a marketed formulation of isoniazid, and a fixed-dose combination containing isoniazid and ethambutol HCl. It was even extendable to LC-MS studies, which were carried out to identify the three degradation products. The m/z values of the peaks at RR(T) 0.71 and RR(T) 1.34 matched those of isonicotinic acid and isonicotinamide, respectively
Performance-Based Rewards and Work Stress
Ganster, Daniel C.; Kiersch, Christa E.; Marsh, Rachel E.; Bowen, Angela
2011-01-01
Even though reward systems play a central role in the management of organizations, their impact on stress and the well-being of workers is not well understood. We review the literature linking performance-based reward systems to various indicators of employee stress and well-being. Well-controlled experiments in field settings suggest that certain…
Kawabe, Yutaka; Yoshikawa, Toshio; Chida, Toshifumi; Tada, Kazuhiro; Kawamoto, Masuki; Fujihara, Takashi; Sassa, Takafumi; Tsutsumi, Naoto
2015-10-01
In order to analyze the spectra of inseparable chemical mixtures, many mathematical methods have been developed to decompose them into the components relevant to each species from series of spectral data obtained under different conditions. We formulated a method based on singular value decomposition (SVD) of linear algebra and applied it to two example systems of organic dyes, successfully reproducing absorption spectra assignable to cis/trans azocarbazole dyes from the spectral data after photoisomerization and to the monomer/dimer of cyanine dyes from those during the photodegradation process. For the photoisomerization example, polymer films containing the azocarbazole dyes were prepared, which have shown updatable holographic stereograms for real images with high performance. We continuously monitored the absorption spectrum after optical excitation and found that its spectral shape varied slightly after the excitation and during the recovery process, which suggested a contribution from a generated photoisomer. Application of the method was successful in identifying two spectral components due to the trans and cis forms of azocarbazoles. The temporal evolution of their weight factors suggested important roles of long-lived cis states in azocarbazole derivatives. We also applied the method to the photodegradation of cyanine dyes doped in DNA-lipid complexes, which have shown efficient and durable optical amplification and/or lasing under optical pumping. The same SVD method was successful in the extraction of two spectral components presumably due to the monomer and H-type dimer. During the photodegradation process, the absorption magnitude gradually decreased due to decomposition of the molecules, and the decay rates strongly depended on the spectral components, suggesting that the long persistence of the dyes in the DNA complex is related to a weak tendency toward aggregate formation.
Chandranath R. N. Athaudage
2003-09-01
Full Text Available A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance of the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology for incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%–60% compression of speech spectral information with negligible degradation in the decoded speech quality.
Qinghua Xie
2017-01-01
Full Text Available Recently, a general polarimetric model-based decomposition framework was proposed by Chen et al., which addresses several well-known limitations in previous decomposition methods and implements a simultaneous full-parameter inversion by using complete polarimetric information. However, it only employs four typical models to characterize the volume scattering component, which limits the parameter inversion performance. To overcome this issue, this paper presents two general polarimetric model-based decomposition methods by incorporating the generalized volume scattering model (GVSM) or the simplified adaptive volume scattering model (SAVSM), proposed by Antropov et al. and Huang et al., respectively, into the general decomposition framework proposed by Chen et al. By doing so, the final volume coherency matrix structure is selected from a wide range of volume scattering models within a continuous interval according to the data itself without adding unknowns. Moreover, the new approaches rely on one nonlinear optimization stage instead of four as in the previous method proposed by Chen et al. In addition, the parameter inversion procedure adopts the modified algorithm proposed by Xie et al., which leads to higher accuracy and more physically reliable output parameters. A number of Monte Carlo simulations of polarimetric synthetic aperture radar (PolSAR) data are carried out and show that the proposed method with GVSM yields an overall improvement in the final accuracy of estimated parameters and outperforms both the version using SAVSM and the original approach. In addition, C-band Radarsat-2 and L-band AIRSAR fully polarimetric images over the San Francisco region are also used for testing purposes. A detailed comparison and analysis of decomposition results over different land-cover types are conducted. According to this study, the use of general decomposition models leads to a more accurate quantitative retrieval of target parameters. However, there
Incerti, Guido; Bonanomi, Giuliano; Sarker, Tushar Chandra; Giannino, Francesco; Cartenì, Fabrizio; Peressotti, Alessandro; Spaccini, Riccardo; Piccolo, Alessandro; Mazzoleni, Stefano
2017-04-01
Modelling organic matter decomposition is fundamental to predict biogeochemical cycling in terrestrial ecosystems. Current models use C/N or Lignin/N ratios to describe susceptibility to decomposition, or implement separate C pools decaying with different rates, disregarding biomolecular transformations and interactions and their effect on decomposition dynamics. We present a new process-based model of decomposition that includes a description of biomolecular dynamics obtained by 13C-CPMAS NMR spectroscopy. Baseline decay rates for relevant molecular classes and intermolecular protection were calibrated by best fitting of experimental data from leaves of 20 plant species decomposing for 180 days in controlled optimal conditions. The model was validated against field data from leaves of 32 plant species decomposing for 1-year at four sites in Mediterranean ecosystems. Our innovative approach accurately predicted decomposition of a wide range of litters across different climates. Simulations correctly reproduced mass loss data and variations of selected molecular classes both in controlled conditions and in the field, across different plant molecular compositions and environmental conditions. Prediction accuracy emerged from the species-specific partitioning of molecular types and from the representation of intermolecular interactions. The ongoing model implementation and calibration are oriented at representing organic matter dynamics in soil, including processes of interaction between mineral and organic soil fractions as a function of soil texture, physical aggregation of soil organic particles, and physical protection of soil organic matter as a function of aggregate size and abundance. Prospectively, our model shall satisfactorily reproduce C sequestration as resulting from experimental data of soil amended with a range of organic materials with different biomolecular quality, ranging from biochar to crop residues. Further application is also planned based on
Snippe, E.; Dziak, J.J.; Lanza, S.T.; Nyklicek, I.; Wichers, M.
2017-01-01
Both daily stress and the tendency to react to stress with heightened levels of negative affect (i.e., stress sensitivity) are important vulnerability factors for adverse mental health outcomes. Mindfulness-based stress reduction (MBSR) may help to reduce perceived daily stress and stress
Kaijian He
2016-11-01
The electricity market has experienced an increasing level of deregulation and reform over the years, and there is a growing level of electricity price fluctuation, uncertainty, and risk exposure in the marketplace. Traditional risk measurement models based on the homogeneous and efficient market assumption no longer suffice to meet the increasing accuracy and reliability requirements. In this paper, we propose a new Empirical Mode Decomposition (EMD)-based Value at Risk (VaR) model to estimate the downside risk measure in the electricity market. The proposed model investigates and models the inherent multiscale market risk structure. The EMD model is introduced to decompose the electricity time series into several Intrinsic Mode Functions (IMFs) with distinct multiscale characteristics. The Exponential Weighted Moving Average (EWMA) model is used to model the individual risk factors across different scales. Experimental results using different models in the Australian electricity markets show that the EMD-EWMA model based on Student's t distribution achieves the best performance and outperforms the benchmark EWMA model significantly in terms of model reliability and predictive accuracy.
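The risk-aggregation step can be sketched as follows. The EMD stage itself is omitted (the synthetic IMFs below are stand-ins for its output), the decay factor 0.94 is the conventional RiskMetrics choice rather than the paper's calibration, and a normal quantile is used where the paper reports better results with Student's t:

```python
import numpy as np
from statistics import NormalDist

def ewma_variance(x, lam=0.94):
    """RiskMetrics-style EWMA variance recursion for one series/IMF."""
    var = np.var(x[:20])           # seed with early-sample variance
    for r in x[20:]:
        var = lam * var + (1 - lam) * r**2
    return var

def emd_ewma_var(imfs, alpha=0.95):
    """Aggregate per-IMF EWMA variances into a one-step VaR estimate.
    IMFs are near-orthogonal, so variances are summed across scales."""
    total_var = sum(ewma_variance(imf) for imf in imfs)
    z = NormalDist().inv_cdf(alpha)   # the paper uses Student's t here
    return z * np.sqrt(total_var)

rng = np.random.default_rng(0)
imfs = [rng.normal(0, s, 500) for s in (1.0, 0.5, 0.2)]  # stand-in for EMD output
print(round(emd_ewma_var(imfs), 3))
```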
Honglu Zhu
2015-12-01
The power prediction of photovoltaic (PV) power plants is of significant importance for their grid connection. Due to the periodicity and non-stationary characteristics of PV power, traditional power prediction methods based on linear or time series models are no longer applicable. This paper presents a method combining the advantages of wavelet decomposition (WD) and artificial neural networks (ANN) to solve this problem. With the ability of ANNs to address nonlinear relationships, theoretical solar irradiance and meteorological variables are chosen as the input of the hybrid model based on WD and ANN. The output power of the PV plant is decomposed using WD to separate useful information from disturbances. ANNs are used to build models of the decomposed PV output power. Finally, the outputs of the ANN models are reconstructed into the forecasted PV plant power. The presented method is compared with a traditional forecasting method based on ANN alone. The results show that the method described in this paper needs less calculation time and has better forecasting precision.
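The decomposition step can be illustrated with a one-level Haar transform, a minimal sketch (the paper does not specify the wavelet basis, and the toy PV series is invented):

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet decomposition: approximation + detail."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass: slow trend
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass: disturbances
    return a, d

def haar_idwt(a, d):
    """Reconstruct the signal from its wavelet components."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

power = np.array([0.0, 0.4, 1.1, 2.3, 3.0, 2.2, 0.9, 0.1])  # toy PV output
a, d = haar_dwt(power)
assert np.allclose(haar_idwt(a, d), power)  # lossless split
```

In the hybrid scheme, an ANN would be trained on each component (`a`, `d`) separately and the forecasts recombined with the inverse transform.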
Understanding wealth-based inequalities in child health in India: a decomposition approach.
Chalasani, Satvika
2012-12-01
India has experienced tremendous economic growth since the mid-1980s, but this growth was paralleled by sharp rises in economic inequality. Urban areas experienced greater economic growth as well as greater increases in economic inequality than rural areas. During the same period, child health improved on average, but socioeconomic differentials in child health persisted. This paper attempts to explain wealth-based inequalities in child mortality and malnutrition using a regression-based decomposition approach. Data for the analysis come from the 1992/93, 1998/99, and 2005/06 Indian National Family Health Surveys. Inequalities in child health are measured using the concentration index. The concentration index for each outcome is then decomposed into the contributions of wealth-based inequality in the observed determinants of child health. Results indicate that mortality inequality declined in urban areas but remained unchanged or increased in rural areas. Malnutrition inequality increased dramatically in both urban and rural areas. The two largest individual/household-level sources of disparities in child health are (i) inequality in the distribution of wealth itself, and (ii) inequality in maternal education. The contributions of observed determinants (i) to neonatal mortality inequality remained unchanged, (ii) to child mortality inequality increased, and (iii) to malnutrition inequality increased. It is possible that the increases in child health inequality reflect urban biases in economic growth and the mixed performance of public programs that could have otherwise offset the impacts of unequal growth.
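The concentration index underlying the analysis can be computed as in the minimal sketch below; the toy household data are invented for illustration:

```python
import numpy as np

def concentration_index(health, wealth):
    """Wagstaff concentration index: 2*cov(h, r)/mean(h), where r is
    the fractional rank of households ordered by wealth."""
    order = np.argsort(wealth)
    h = np.asarray(health, float)[order]
    n = len(h)
    r = (np.arange(1, n + 1) - 0.5) / n        # fractional wealth rank
    return 2 * np.cov(h, r, bias=True)[0, 1] / h.mean()

# Toy data: malnutrition concentrated among the poor gives CI < 0
wealth = np.array([1, 2, 3, 4, 5, 6, 7, 8], float)
malnourished = np.array([1, 1, 1, 0, 1, 0, 0, 0], float)
print(round(concentration_index(malnourished, wealth), 3))  # → -0.438
```

The decomposition step then expresses this index as a weighted sum of concentration indices of each regression determinant.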
Proper orthogonal decomposition-based spectral higher-order stochastic estimation
Baars, Woutijn J., E-mail: wbaars@unimelb.edu.au [Department of Mechanical Engineering, The University of Melbourne, Melbourne, Victoria 3010 (Australia); Tinney, Charles E. [Center for Aeromechanics Research, The University of Texas at Austin, Austin, Texas 78712 (United States)
2014-05-15
A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter is based on known methods for characterizing nonlinear systems by way of Volterra series, in which both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This essentially reduces to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off of seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this restriction, since POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.
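The POD step that supplies the low-dimensional coefficients can be sketched via an SVD of the snapshot matrix; the random data below merely stand in for measured fields:

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """POD via SVD of the snapshot matrix (space x time).
    Columns of U are spatial modes; rows of coeffs are their
    time-varying coefficients, the low-dimensional input to HOSE."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :n_modes]
    coeffs = s[:n_modes, None] * Vt[:n_modes]   # a_k(t) = s_k v_k(t)
    return modes, coeffs

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 200))        # toy pressure-field snapshots
modes, coeffs = pod_modes(X, 3)
recon = modes @ coeffs                    # rank-3 approximation of X
```

The HOSE kernels would then be estimated between these coefficient time series rather than the full field.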
Yeh, Jia-Rong; Lin, Tzu-Yu; Chen, Yun; Sun, Wei-Zen; Abbod, Maysam F; Shieh, Jiann-Shing
2012-01-01
The cardiovascular system is known to be nonlinear and nonstationary. Traditional linear algorithms for assessing arterial stiffness and systemic resistance suffer from nonstationarity or are inconvenient in practical applications. In this pilot study, two new assessment methods were developed: the first is an ensemble empirical mode decomposition based reflection index (EEMD-RI), while the second is based on the phase shift between the ECG and BP on cardiac oscillation. Both methods utilise the EEMD algorithm, which is suitable for nonlinear and nonstationary systems. These methods were used to investigate the properties of arterial stiffness and systemic resistance of a pig's cardiovascular system via the ECG and blood pressure (BP). The experiment simulated a sequence of continuous blood pressure changes, from a steady condition to high blood pressure by clamping the artery and the inverse by relaxing it. As hypothesized, the arterial stiffness and systemic resistance should vary with the blood pressure due to clamping and relaxing the artery. The results show statistically significant correlations between BP, the EEMD-based RI, and the phase shift between the ECG and BP on cardiac oscillation. Both assessment results demonstrate the merits of EEMD for signal analysis.
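A phase-shift measure of this kind can be sketched with a cross-correlation estimate at the dominant cardiac frequency; the signals, sampling rate, and imposed delay below are synthetic stand-ins for the ECG and BP recordings, not the study's data:

```python
import numpy as np

def phase_shift(x, y, fs, f0):
    """Phase lag of y relative to x at the dominant cardiac frequency
    f0, estimated from the cross-correlation peak."""
    x = x - x.mean()
    y = y - y.mean()
    xc = np.correlate(y, x, mode="full")
    lag = np.argmax(xc) - (len(x) - 1)        # samples y lags behind x
    return 2 * np.pi * f0 * lag / fs          # radians

fs, f0 = 500.0, 1.2                           # Hz; ~72 bpm "heart rate"
t = np.arange(0, 10, 1 / fs)
ecg_like = np.sin(2 * np.pi * f0 * t)
bp_like = np.sin(2 * np.pi * f0 * t - 0.6)    # BP delayed by 0.6 rad
shift = phase_shift(ecg_like, bp_like, fs, f0)  # close to the imposed lag
```

In the EEMD-based method, `ecg_like` and `bp_like` would be the cardiac-band IMFs extracted from the raw recordings.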
Minei, A. J.; Cooke, S. A.
2013-06-01
A singular value decomposition (SVD) signal processing method is newly applied to molecular free induction decays (FIDs) obtained using a time-domain, broadband rotational spectrometer. It is demonstrated that, for the strongest spectral transitions, the SVD method can determine transition frequencies with a precision matching that of the fast Fourier transform method. Furthermore, the SVD-based analysis produces information concerning transition phase, amplitude, damping, and frequency for the strongest molecular signals. These parameters are shown to be useful for time-domain signal filtering. The computational expense of the SVD method is high, so this approach has the disadvantage that, with our present computers, the full molecular FID must be considerably truncated. The effects of FID truncation on the determined transition frequencies have been examined. Conversely, this truncation illustrates that broadband spectra may be recovered from fragments as small as 1% of the complete FID. The success of the SVD-based method is further examined with regard to weak-signal detection and frequency-dependent detection. The pure rotational spectrum of 1H,1H,2H-perfluorocyclobutane is used for illustrative purposes in this study.
Fengrong Bi
2016-01-01
In spark ignition (SI) engines, knock onset limits the maximum spark advance, and inaccurate identification of this limit penalises the fuel conversion efficiency. Knock feature extraction is therefore key to closed-loop ignition control in SI engines. This paper reports an investigation of knock detection in SI engines using a CEEMD-Hilbert transform approach based on engine cylinder pressure signals and cylinder block vibration signals. Complementary Ensemble Empirical Mode Decomposition (CEEMD) was used to decompose the signals and detect the knock characteristic, and the Hilbert transform was used to analyze the frequency content of the knock characteristic. The results show that, for both cylinder pressure and vibration signals, the CEEMD algorithm can extract the knock characteristic, and the Hilbert transform shows that the energy of the knock impact regions is concentrated in frequency in both signals. The knock window is then determined, based on which a new knock intensity evaluation factor K is proposed that can accurately distinguish among three states: heavy knock, light knock, and normal combustion.
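The Hilbert-transform step can be sketched with the standard FFT-based analytic signal, whose magnitude gives the envelope used to localize the knock impact; the knock-like burst below is simulated, not engine data:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: zero the negative frequencies to
    obtain the analytic signal, whose magnitude is the envelope."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:n // 2] = 2
    h[n // 2] = 1          # even-length input assumed in this sketch
    return np.fft.ifft(X * h)

t = np.linspace(0, 1, 1024, endpoint=False)
burst = np.exp(-30 * (t - 0.5) ** 2) * np.sin(2 * np.pi * 120 * t)  # knock-like ring
envelope = np.abs(analytic_signal(burst))
# The envelope peaks where the simulated knock impact is strongest
```

In the full method this would be applied to each CEEMD mode, and the knock window set where the envelope energy concentrates.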
Jiang, Shouyong; Yang, Shengxiang
2016-02-01
The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been shown to be very efficient in solving multiobjective optimization problems (MOPs). In practice, the Pareto-optimal front (POF) of many MOPs has complex characteristics. For example, the POF may have a long tail, a sharp peak, and disconnected regions, which significantly degrades the performance of MOEA/D. This paper proposes an improved MOEA/D for handling such complex problems. In the proposed algorithm, a two-phase strategy (TP) is employed to divide the whole optimization procedure into two phases. Based on the crowdedness of solutions found in the first phase, the algorithm decides whether or not to dedicate computational resources to handle unsolved subproblems in the second phase. In addition, a new niche scheme is introduced into the improved MOEA/D to guide the selection of mating parents and avoid producing duplicate solutions, which is very helpful for maintaining population diversity when the POF of the MOP being optimized is discontinuous. The performance of the proposed algorithm is investigated on some existing benchmark MOPs and newly designed MOPs with complex POF shapes, in comparison with several MOEA/D variants and other approaches. The experimental results show that the proposed algorithm produces promising performance on these complex problems.
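The decomposition at the heart of MOEA/D can be illustrated with the standard Tchebycheff scalarization; the objective values and weight vector below are invented for the example:

```python
import numpy as np

def tchebycheff(f, weight, z_star):
    """MOEA/D scalarization: g(x|w,z*) = max_i w_i * |f_i(x) - z*_i|.
    Each weight vector turns the MOP into one scalar subproblem."""
    return np.max(weight * np.abs(f - z_star))

# Hypothetical two-objective values for two candidate solutions
z_star = np.array([0.0, 0.0])            # ideal point found so far
w = np.array([0.8, 0.2])                 # one subproblem's weight vector
f_a = np.array([0.3, 0.9])
f_b = np.array([0.5, 0.4])
# Solution a wins this subproblem: max(0.24, 0.18) < max(0.40, 0.08)
better = f_a if tchebycheff(f_a, w, z_star) < tchebycheff(f_b, w, z_star) else f_b
```

The paper's two-phase and niche strategies then govern which of these subproblems receive effort and which parents mate.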
Wen, Qiaonong; Wan, Suiren
2013-01-01
Ultrasound image deconvolution involves noise reduction and image feature enhancement: denoising is essentially low-pass filtering, while feature enhancement strengthens the high-frequency parts, so the two requirements conflict and must be reasonably balanced. The partial differential equation model of image deconvolution is based on diffusion theory, whereas sparse-decomposition deconvolution is an image-representation-based method. The mechanisms of these two methods differ, and each has its own characteristics. In the contourlet transform domain, we combine the strengths of the two deconvolution methods through image fusion and introduce the entropy of the local orientation energy ratio into the fusion decision, treating the low-frequency and high-frequency coefficients differently according to the actual situation. Because the deconvolution process inevitably blurs image edge information, we fuse edge gray-scale information into the deconvolution result to compensate for the missing edges. Experiments show that our method outperforms either deconvolution method used separately and restores part of the image edge information.
Schmidt, A.J.; Freeman, H.D.; Brown, M.D.; Zacher, A.H.; Neuenschwander, G.N.; Wilcox, W.A.; Gano, S.R. [Pacific Northwest National Lab., Richland, WA (United States); Kim, B.C.; Gavaskar, A.R. [Battelle Columbus Div., OH (United States)
1996-02-01
Base Catalyzed Decomposition (BCD) is a chemical dehalogenation process designed for treating soils and other substrates contaminated with polychlorinated biphenyls (PCBs), pesticides, dioxins, furans, and other hazardous organic substances. PCBs are heavy organic liquids once widely used in industry as lubricants, heat transfer oils, and transformer dielectric fluids. In 1976, production was banned when PCBs were recognized as carcinogenic substances. It was estimated that significant quantities (one billion tons) of U.S. soils, including areas on U.S. military bases outside the country, were contaminated by PCB leaks and spills, and cleanup activities began. The BCD technology was developed in response to these activities. This report details the evolution of the process, from inception to deployment in Guam, and describes the process and system components provided to the Navy to meet the remediation requirements. The report is divided into several sections to cover the range of development and demonstration activities. Section 2.0 gives an overview of the project history. Section 3.0 describes the process chemistry and remediation steps involved. Section 4.0 provides a detailed description of each component and specific development activities. Section 5.0 details the testing and deployment operations and provides the results of the individual demonstration campaigns. Section 6.0 gives an economic assessment of the process. Section 7.0 presents the conclusions and recommendations from this project. The appendices contain equipment and instrument lists, equipment drawings, and detailed run and analytical data.
Region quad-tree decomposition based edge detection for medical images.
Dua, Sumeet; Kandiraju, Naveen; Chowriappa, Pradeep
2010-05-28
Edge detection in medical images has generated significant interest in the medical informatics community, especially in recent years. With the advent of imaging technology in biomedical and clinical domains, the growth in medical digital images has exceeded our capacity to analyze and store them for efficient representation and retrieval, especially for data mining applications. Medical decision support applications frequently demand the ability to identify and locate sharp discontinuities in an image for feature extraction and interpretation of image content, which can then be exploited for decision support analysis. However, due to the inherently high-dimensional nature of the image content and the presence of ill-defined edges, edge detection using classical procedures is difficult, if not impossible, for sensitive and specific medical informatics-based discovery. In this paper, we propose a new edge detection technique based on regional recursive hierarchical decomposition using a quadtree and post-filtration of edges using a finite difference operator. We show that in medical images of common origin, focal and/or penumbral blurred edges can be characterized by an estimable intensity gradient, which can further be used for dismissing false alarms. A detailed validation and comparison with related works on diabetic retinopathy images and CT scan images show that the proposed approach is efficient and accurate.
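The quadtree stage can be sketched as a recursive homogeneity split; the intensity-range criterion and threshold below are illustrative assumptions (the paper's criterion and the finite-difference post-filtration are not reproduced here):

```python
import numpy as np

def quadtree(img, y, x, size, thresh, leaves):
    """Recursively split a square block until its intensity range is
    homogeneous; small leaf blocks cluster along candidate edges."""
    block = img[y:y + size, x:x + size]
    if size == 1 or block.max() - block.min() <= thresh:
        leaves.append((y, x, size))
        return
    h = size // 2
    for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
        quadtree(img, y + dy, x + dx, h, thresh, leaves)

img = np.zeros((8, 8))
img[:, 3:] = 1.0                 # vertical step edge off the quadtree grid
leaves = []
quadtree(img, 0, 0, 8, 0.1, leaves)
# Flat regions stay as large blocks; unit-size leaves trace the edge
```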
Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.
Park, Chulhee; Kang, Moon Gi
2016-05-18
A multispectral filter array (MSFA) image sensor with red, green, blue, and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible-band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible-band component and the NIR-band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
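The core correction can be sketched as subtracting a per-channel NIR leakage term. The leakage weights below are invented stand-ins for the sensor's spectral-decomposition estimates, and the toy images are random:

```python
import numpy as np

def restore_rgb(raw_rgb, nir, weights=(0.30, 0.25, 0.35)):
    """Remove the NIR contribution leaked into each RGB channel.
    The per-channel weights stand in for the spectral-decomposition
    estimates; real values depend on the MSFA sensor's responses."""
    w = np.asarray(weights).reshape(1, 1, 3)
    return np.clip(raw_rgb - w * nir[..., None], 0.0, 1.0)

rng = np.random.default_rng(7)
true_rgb = rng.random((4, 4, 3))
nir = rng.random((4, 4))
# Simulate IRCF-free capture: NIR leaks into every color channel
raw = true_rgb + np.array([0.30, 0.25, 0.35]) * nir[..., None]
restored = restore_rgb(raw, nir)   # desaturation undone
```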
Iris identification system based on Fourier coefficients and singular value decomposition
Somnugpong, Sawet; Phimoltares, Suphakant; Maneeroj, Saranya
2011-12-01
Nowadays, both personal identification and classification are very important. To identify a person for security applications, physical or behavioral characteristics of individuals with high uniqueness may be analyzed, and biometrics has become the most widely used approach for personal identification. Many types of biometric information are currently used. In this work, the iris is considered because of its uniqueness and collectability. A common problem of iris recognition systems is the limited space available to store data in a variety of environments. This work proposes an iris recognition system with a small feature vector, reducing the space complexity. In this system, each iris is represented in the frequency domain and classified with a neural network model. First, the Fast Fourier Transform (FFT) is used to compute the discrete Fourier coefficients of the iris data. Once the iris data are transformed into a frequency-domain matrix, Singular Value Decomposition (SVD) is used to reduce the complex matrix to a single vector. These vectors are then input to neural networks for the classification step. The merit of our technique is that the feature vector is smaller than those of other techniques while maintaining an acceptable level of accuracy compared with existing techniques.
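The FFT-then-SVD compression can be sketched in a few lines; the image size and the use of the spectrum magnitude are illustrative assumptions, and the random array stands in for an unwrapped iris:

```python
import numpy as np

def iris_feature_vector(iris_img):
    """FFT magnitude of the iris image, then SVD: the singular values
    form a compact feature vector for the neural network classifier."""
    spectrum = np.abs(np.fft.fft2(iris_img))
    return np.linalg.svd(spectrum, compute_uv=False)

rng = np.random.default_rng(3)
iris = rng.random((32, 128))            # stand-in for an unwrapped iris strip
features = iris_feature_vector(iris)
# 32 values instead of 32*128 pixels: far smaller storage per template
```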
Multidisciplinary Product Decomposition and Analysis Based on Design Structure Matrix Modeling
Habib, Tufail
2014-01-01
Design structure matrix (DSM) modeling in complex system design supports defining the physical and logical configuration of subsystems, components, and their relationships. This modeling includes product decomposition, identification of interfaces, and structure analysis to increase the architectural...
Sharma, K. K.; Jain, Heena
2013-01-01
The security of digital data including images has attracted more attention recently, and many different image encryption methods have been proposed in the literature for this purpose. In this paper, a new image encryption method using wavelet packet decomposition and discrete linear canonical transform is proposed. The use of wavelet packet decomposition and DLCT increases the key size significantly making the encryption more robust. Simulation results of the proposed technique are also presented.
Dyson, Mark
2003-01-01
This PhD is based on constructing and resolving a set of modular problems. Each problem exists as a separate entity; each has its own characteristics, yet when combined with other, related problems, provides a dimension to a story. The relationships and order between problems have priority over... Not only have design tools changed character, but also the processes associated with them. Today, the composition of problems and their decomposition into parcels of information calls for a new paradigm. This paradigm builds on the networking of agents and specialisations, and the paths of communication... that are necessary to make sense out of any design situation. The hypothesis of this project is that design organisation, communication, and CAD-information processes must be jointly reengineered to create the dynamic structures needed for the forward projection of design knowledge into this expanding design network.
骆勇
2015-01-01
In human sports training, human motion force decomposition modeling is used to quantitatively analyze the characteristics of component forces, and it provides a basis for recognizing human force actions through multi-sensor information fusion. The traditional approach, an entropy-clustering functional analysis of motor-neuron force behavior, cannot simulate and analyze missing force data in complex motion states. An improved decomposition method for human motion force behavior is therefore proposed based on an RFID missing-data probability functional. Acceleration sensors are combined with RFID, the missing-data probability functional algorithm is used to decompose the force behavior at the sensed moving joint nodes, and the correlation features of the different sensor output signals are extracted, yielding the improved decomposition algorithm and the model of characteristic human motion behavior. The experimental results show that the model can accurately capture the exercise force data of the upper body and lower limbs, that the force behavior decomposition is accurate and concrete, and that a quantitative description of the force characteristics of the human body in motion is obtained, which can guide training; the method therefore has good application value.
Oil spill detection by a support vector machine based on polarization decomposition characteristics
ZOU Yarong; SHI Lijian; ZHANG Shengli; LIANG Chao; ZENG Tao
2016-01-01
Marine oil spills have posed major threats to the marine environment over the past few years. Early detection of an oil spill is of great significance for the prevention and control of marine disasters. At present, remote sensing is one of the major approaches for monitoring oil spills. Fully polarimetric synthetic aperture radar (SAR) data are employed to extract polarization decomposition parameters, including the entropy (H) and the reflection entropy (A). The characteristic spectrum of the entropy and reflection entropy combination has been analyzed, and a polarization characteristic spectrum of the oil spill has been developed to support remote sensing of oil spills. The findings show that the information extracted from the (1–A)×(1–H) and (1–H)×A parameters shows relatively clear effects, while the results of extracting oil spill information based on the H×A parameter are relatively poor. The behaviour of the combined parameters depends on the H and A values; in general, when H > 0.7, the A value is relatively small, and the extraction of oil spill information using the (1–A)×(1–H) and (1–H)×A parameters then gives clear effects. Whichever combined parameter is adopted, oil-well data cause some false alarms in the extraction of oil spill information. In particular, the false alarm rate of the oil spill information extracted based on (1–A)×(1–H) is relatively high, while the false alarm rate based on the (1–A)×H and (1–H)×A parameters is relatively small, although the image noise is relatively high. Oil spill detection employing a support vector machine on the polarization characteristic spectrum can identify oil spill information more accurately than detection based on a single polarization feature.
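The H and A parameters come from the eigenvalues of the polarimetric coherency matrix (A is usually called the anisotropy in the Cloude-Pottier decomposition, which the abstract appears to refer to as reflection entropy). A minimal sketch, with invented diagonal coherency matrices standing in for sea and slick pixels:

```python
import numpy as np

def h_a_from_coherency(T):
    """Entropy H and anisotropy A from a 3x3 polarimetric coherency
    matrix (Cloude-Pottier decomposition)."""
    lam = np.sort(np.linalg.eigvalsh(T))[::-1]        # λ1 >= λ2 >= λ3 >= 0
    p = lam / lam.sum()                                # pseudo-probabilities
    H = -np.sum(p * np.log(p + 1e-12)) / np.log(3)     # entropy in [0, 1]
    A = (lam[1] - lam[2]) / (lam[1] + lam[2] + 1e-12)  # anisotropy
    return H, A

# A slick damps Bragg scattering: no single dominant eigenvalue remains
T_sea = np.diag([5.0, 2.0, 1.0])
T_oil = np.diag([1.2, 1.0, 0.9])
H_sea, A_sea = h_a_from_coherency(T_sea)
H_oil, A_oil = h_a_from_coherency(T_oil)
# Combined features such as (1-A)*(1-H) then separate oil from clean sea
assert H_oil > H_sea
```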
Dong Cui
2015-09-01
EEG characteristics that correlate with cognitive functions are important in detecting mild cognitive impairment (MCI) in type 2 diabetes mellitus (T2DM). To investigate the complexity differences between an aMCI group and an age-matched non-aMCI control group in T2DM, six entropies combined with empirical mode decomposition (EMD) were used in the study: approximate entropy (ApEn), sample entropy (SaEn), fuzzy entropy (FEn), permutation entropy (PEn), power spectrum entropy (PsEn), and wavelet entropy (WEn). A feature extraction technique based on maximization of the area under the curve (AUC) and a support vector machine (SVM) were subsequently used for feature selection and classification. Finally, Pearson's linear correlation was employed to study associations between these entropies and cognitive functions. Compared to the other entropies, FEn had a higher classification accuracy, sensitivity, and specificity of 68%, 67.1%, and 71.9%, respectively. The top 43 salient features achieved classification accuracy, sensitivity, and specificity of 73.8%, 72.3%, and 77.9%, respectively. P4, T4, and C4 were the highest-ranking salient electrodes. Correlation analysis showed that FEn based on EMD was positively correlated with memory at electrodes F7, F8, and P4, and PsEn based on EMD was positively correlated with the Montreal Cognitive Assessment (MoCA) and memory at electrode T4. In sum, FEn based on EMD in the right-temporal and occipital regions may be more suitable for early diagnosis of MCI in T2DM.
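One of the listed measures, sample entropy, can be sketched compactly. This is a simplified, unoptimized implementation with the conventional parameters m = 2 and r = 0.2·SD, not necessarily the study's exact settings, applied to synthetic signals rather than EEG:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r): -ln of the conditional probability
    that sequences matching for m points also match for m+1 points."""
    x = np.asarray(x, float)
    r = r_factor * x.std()

    def count_matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=-1)
        n = len(templ)
        return (np.sum(d <= r) - n) / 2          # exclude self-matches

    return -np.log(count_matches(m + 1) / count_matches(m))

rng = np.random.default_rng(4)
regular = np.sin(np.arange(300) * 0.5)
random_sig = rng.standard_normal(300)
# More irregular signals yield higher entropy
assert sample_entropy(regular) < sample_entropy(random_sig)
```

In the EMD-combined variants, such an entropy would be computed per intrinsic mode function rather than on the raw channel.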
He, Zhi; Liu, Lin
2016-11-01
Empirical mode decomposition (EMD) and its variants have recently been applied to hyperspectral image (HSI) classification due to their ability to extract useful features from the original HSI. However, it remains a challenging task to effectively exploit the spectral-spatial information with traditional vector- or image-based methods. In this paper, a three-dimensional (3D) extension of EMD (3D-EMD) is proposed to naturally treat the HSI as a cube and decompose it into varying oscillations (i.e. 3D intrinsic mode functions (3D-IMFs)). To achieve fast 3D-EMD implementation, 3D Delaunay triangulation (3D-DT) is utilized to determine the distances of extrema, while separable filters are adopted to generate the envelopes. Taking the extracted 3D-IMFs as features of different tasks, robust multitask learning (RMTL) is further proposed for HSI classification. In RMTL, pairs of low-rank and sparse structures are formulated with the trace norm and the l1,2-norm to capture task relatedness and specificity, respectively. Moreover, the optimization problems of RMTL can be efficiently solved by the inexact augmented Lagrangian method (IALM). Experimental results on three benchmark data sets demonstrate the superiority of the proposed methods over several state-of-the-art feature extraction and classification methods.
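The trace-norm (low-rank) piece of such an objective is typically handled with singular value thresholding inside the augmented Lagrangian iterations. A minimal sketch of that operator, with an invented weight matrix and threshold:

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding: the proximal operator of the trace
    norm, used for the low-rank structure in objectives like RMTL's."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(8)
W = rng.standard_normal((10, 6))   # toy task-weight matrix
L = svt(W, tau=2.0)
# Small singular directions are zeroed out, shrinking the rank
```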
Singular value decomposition-based 2D image reconstruction for computed tomography.
Liu, Rui; He, Lu; Luo, Yan; Yu, Hengyong
2017-01-01
Singular value decomposition (SVD)-based 2D image reconstruction methods are developed and evaluated for a broad class of inverse problems for which there are no analytical solutions. The proposed methods are fast and accurate, reconstructing images in a non-iterative fashion. A multi-resolution strategy is adopted to reduce the size of the system matrix so that large images can be reconstructed with limited memory capacity. A modified high-contrast Shepp-Logan phantom, a low-contrast FORBILD head phantom, and a physical phantom are employed to evaluate the proposed methods with different system configurations. The results show that the SVD methods can accurately reconstruct images from standard-scan and interior-scan projections and that they outperform other benchmark methods. The general SVD method outperforms the other SVD methods. The truncated SVD and Tikhonov-regularized SVD methods accurately reconstruct a region-of-interest (ROI) from an interior scan with a known sub-region inside the ROI. Furthermore, the SVD methods are much faster and more flexible than the benchmark algorithms, especially in the ROI reconstructions in our experiments.
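The truncated-SVD variant can be sketched as a pseudo-inverse with small singular values discarded. The tiny random "system matrix" below is a stand-in for a real CT projector, and the rank and noise level are illustrative:

```python
import numpy as np

def tsvd_reconstruct(A, b, rank):
    """Truncated-SVD pseudo-inverse solution of A x = b: discard small
    singular values that would amplify measurement noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    inv_s = np.where(np.arange(len(s)) < rank, 1.0 / s, 0.0)
    return Vt.T @ (inv_s * (U.T @ b))

# Toy "system matrix": 4x4 image (16 unknowns), 24 ray-sum measurements
rng = np.random.default_rng(5)
A = rng.random((24, 16))
x_true = rng.random(16)                  # ground-truth image (flattened)
b = A @ x_true + 0.001 * rng.standard_normal(24)
x_rec = tsvd_reconstruct(A, b, rank=16)  # full rank: near-exact recovery
```

Because the SVD of `A` can be precomputed once per system geometry, each subsequent reconstruction is non-iterative, which is the speed advantage the abstract describes.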
E, Jianwei; Bao, Yanling; Ye, Jimin
2017-10-01
As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of the crude oil price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models fail to predict it accurately. Therefore, a hybrid method is proposed in this paper that combines variational mode decomposition (VMD), independent component analysis (ICA), and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the influence factors of the crude oil price and predict its future values. The major steps are as follows. First, the VMD model is applied to the original signal (the crude oil price) to decompose it adaptively into mode functions. Second, independent components are separated by ICA, and how they affect the crude oil price is analyzed. Finally, the crude oil price is forecast with the ARIMA model; the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA models, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
Methane Decomposition into Carbon Fibers over Coprecipitated Nickel-Based Catalysts
Yan Ju; Fengyi Li; Renzhong Wei
2005-01-01
Decomposition of methane in the presence of coprecipitated nickel-based catalysts to produce carbon fibers was investigated in the temperature range of 773 K to 1073 K. At 1023 K, the catalytic activities of the three catalysts remained high during the initial period and then decreased with reaction time. The lifetimes of the Ni-Cu-Al and Ni-La-Al catalysts were longer than that of the Ni-Al catalyst. With all three catalysts, the yield of carbon fibers was very low at 773 K. The yield of carbon fibers over the Ni-La-Al catalyst exceeded those over the Ni-Al and Ni-Cu-Al catalysts, and for Ni-La-Al, raising the temperature from 873 K to 1073 K led to a gradual increase in the yield of carbon fibers. XRD studies on the Ni-La-Al catalyst indicate that La2NiO4 was formed; its formation is responsible for the increase in the catalytic lifetime and in the yield of carbon fibers synthesized on Ni-La-Al at 773-1073 K. Carbon fibers synthesized on the Ni-Al catalyst are thin, long carbon nanotubes; those on the Ni-Cu-Al catalyst are bamboo-shaped; and those on the Ni-La-Al catalyst have a large hollow core, a thin wall, and good graphitization.
A novel image watermarking method based on singular value decomposition and digital holography
Cai, Zhishan
2016-10-01
According to information optics theory, a novel watermarking method based on Fourier-transform digital holography and singular value decomposition (SVD) is proposed in this paper. First, a watermark image is converted into a digital hologram using the Fourier transform. The original image is then divided into non-overlapping blocks, and all the blocks as well as the hologram are decomposed using SVD. The singular value components of the hologram are embedded into the singular value components of each block by an addition rule. Finally, the inverse SVD transformation is carried out on the blocks and the hologram to generate the watermarked image. During extraction, the watermark information embedded in each block is recovered first; an averaging operation is then applied to the extracted information to produce the final watermark. The algorithm is simulated, and various attack tests are carried out to assess the watermarked image's resistance to attacks. The results show that the proposed algorithm is highly robust against noise interference, image cropping, compression, brightness stretching, etc. In particular, the watermark information can still be extracted correctly even when the image is rotated by a large angle.
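The additive SVD-domain embedding described above can be sketched as follows (the 8×8 block size and embedding strength alpha are hypothetical, and a plain random image stands in for the hologram for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
host = rng.random((8, 8))    # one block of the cover image
mark = rng.random((8, 8))    # stand-in for the watermark hologram
alpha = 0.05                 # embedding strength (illustrative)

Uh, sh, Vht = np.linalg.svd(host)
Uw, sw, Vwt = np.linalg.svd(mark)

# Embed: add the watermark's singular values, scaled by alpha, to the host's.
s_marked = sh + alpha * sw
watermarked = (Uh * s_marked) @ Vht   # inverse SVD of the modified spectrum

# Extraction (non-blind): recover the watermark singular values
# given the original host's singular values.
s_rec = np.linalg.svd(watermarked, compute_uv=False)
sw_rec = (s_rec - sh) / alpha
```

Because both singular-value sequences are non-negative and sorted in descending order, their scaled sum is again a valid singular spectrum, so the embedded values are recovered exactly in the noise-free case.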
Ruoyang Chen
2016-01-01
Since the CSI 300 index futures officially began trading on April 15, 2010, analysis and prediction of the price fluctuations of Chinese stock index futures have become a popular area of active research. In this paper, the Complementary Ensemble Empirical Mode Decomposition (CEEMD) method is used to decompose the sequences of Chinese stock index futures prices into residue terms, low-frequency terms, and high-frequency terms to reveal the fluctuation characteristics of the sequences over different time scales. The CEEMD method is then combined with a Particle Swarm Optimization (PSO) algorithm-based Support Vector Machine (SVM) model to forecast Chinese stock index futures prices. The empirical results show that the residue term determines the long-term trend of stock index futures prices. The low-frequency term, which represents medium-term price fluctuations, is mainly affected by policy regulations under the analysis of the Iterated Cumulative Sums of Squares (ICSS) algorithm, whereas short-term market disequilibrium, represented by the high-frequency term, plays an important local role in stock index futures price fluctuations. In addition, in forecasting the daily or even intraday price data of Chinese stock index futures, the combined prediction model is superior to the single SVM model, which implies that the accuracy of predicting Chinese stock index futures prices can be improved by considering fluctuation characteristics at different time scales.
Mode decomposition based on crystallographic symmetry in the band-unfolding method
Ikeda, Yuji; Carreras, Abel; Seko, Atsuto; Togo, Atsushi; Tanaka, Isao
2017-01-01
The band-unfolding method is widely used to calculate the effective band structures of a disordered system from its supercell model. The unfolded band structures show the crystallographic symmetry of the underlying structure, where the difference of chemical components and the local atomic relaxation are ignored. However, it has still been difficult to decompose the unfolded band structures into the modes based on the crystallographic symmetry of the underlying structure, and therefore detailed analyses of the unfolded band structures have been restricted. In this study, a procedure to decompose the unfolded band structures according to the small representations (SRs) of the little groups is developed. The decomposition is performed using the projection operators for SRs derived from the group representation theory. The current method is employed to investigate the phonon band structure of disordered face-centered-cubic Cu0.75Au0.25 , which has large variations of atomic masses and force constants among the atomic sites due to the chemical disorder. In the unfolded phonon band structure, several peculiar behaviors such as discontinuous and split branches are found in the decomposed modes corresponding to specific SRs. They are found to occur because different combinations of the chemical elements contribute to different regions of frequency.
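The projection operators referred to above have the standard group-theoretic form (textbook notation, not reproduced verbatim from the paper): for a small representation α of the little group G_k, with dimension d_α and characters χ^(α),

```latex
\hat{P}^{(\alpha)} \;=\; \frac{d_\alpha}{\lvert G_{\mathbf{k}} \rvert}
\sum_{g \in G_{\mathbf{k}}} \chi^{(\alpha)}(g)^{*}\, \hat{D}(g)
```

where D̂(g) is the action of the symmetry operation g on the supercell eigenvectors. Applying P̂^(α) before unfolding resolves each unfolded spectral weight into the contributions of the individual small representations.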
Behrens Jr., Richard; Minier, Leanna M. G.
1999-10-25
A study to characterize the low-temperature reactive processes for o-AP and an AP/HTPB-based propellant (class 1.3) is being conducted in the laboratory using simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) and scanning electron microscopy (SEM). The results presented in this paper follow up on previous work that showed the overall decomposition to be complex and controlled by both physical and chemical processes. The decomposition is characterized by one major event that consumes up to ≈35% of the AP, depending on particle size, and leaves behind a porous agglomerate of AP. The major gaseous products released during this event include H₂O, O₂, Cl₂, N₂O, and HCl. The recent efforts provide further insight into the decomposition processes of o-AP. The temporal behaviors of the gas formation rates (GFRs) for the products indicate that the major decomposition event consists of three chemical channels. The first and third channels are affected by the pressure in the reaction cell and occur at the surface, or in the gas phase above the surface, of the AP particles. The second channel is not affected by pressure and accounts for the solid-phase reactions characteristic of o-AP. The third channel involves the interactions of the decomposition products with the surface of the AP. SEM images of partially decomposed o-AP provide insight into how the morphology changes as the decomposition progresses. A conceptual model, based on the STMBMS and SEM results, has been developed that provides a basic description of the processes. The thermal decomposition characteristics of the propellant are evaluated from the identities of the products and the temporal behaviors of their GFRs. First, the volatile components in the propellant evolve as it is heated. Second, the hot AP (and HClO₄) at the AP-binder interface oxidize the binder through reactions that
Yuan, Bing; Bernstein, Elliot R
2017-01-07
Unimolecular decomposition of the energetic molecules 3,3'-diamino-4,4'-bisfuroxan (labeled A) and 4,4'-diamino-3,3'-bisfuroxan (labeled B) has been explored via 226/236 nm single-photon laser excitation/decomposition. Subsequent to UV excitation at nanosecond excitation energies (5.0-5.5 eV), both molecules create NO as an initial decomposition product with a warm vibrational temperature (1170 ± 50 K for A, 1400 ± 50 K for B) and a cold rotational temperature (…). The energetic barrier is that for which the furoxan ring opens on the S1 state via breaking of the N1-O1 bond. Subsequently, the molecule moves to the ground S0 state through related ring-opening conical intersections, and an NO product is formed on the ground-state surface with little rotational excitation at the final NO dissociation step. For the ground-state ring-opening decomposition mechanism, the N-O bond and C-N bond break together to generate dissociated NO. With the MP2 correction to the CASSCF(12,12) surface, the potential energies of molecules with a dissociated NO product range from 2.04 to 3.14 eV, close to the theoretical results from density functional theory (B3LYP) and MP2 methods. The CASMP2(12,12)-corrected approach is essential for obtaining a reasonable potential energy surface that corresponds to the observed decomposition behavior of these molecules. Apparently, highly excited states are essential for an accurate representation of the kinetics and dynamics of excited-state decomposition of both bisfuroxan energetic molecules. The experimental vibrational temperatures of the NO products of A and B are about 800-1000 K lower than those of previously studied energetic molecules with NO as a decomposition product.
Robust x-ray based material identification using multi-energy sinogram decomposition
Yuan, Yaoshen; Tracey, Brian; Miller, Eric
2016-05-01
There is growing interest in developing X-ray computed tomography (CT) imaging systems with an improved ability to discriminate material types, going beyond the attenuation imaging provided by most current systems. Dual-energy CT (DECT) systems can partially address this problem by estimating the Compton and photoelectric (PE) coefficients of the materials being imaged, but DECT is greatly degraded by the presence of metal or other materials with high attenuation. Here we explore the advantages of multi-energy CT (MECT) systems based on photon-counting detectors. The utility of MECT has been demonstrated in medical applications, where photon-counting detectors allow for the resolution of absorption K-edges. Our primary concern is aviation security applications, where K-edges are rare. We simulate phantoms with differing amounts of metal (high, medium, and low attenuation), both for switched-source DECT and for MECT systems, and include a realistic model of detector energy resolution. We extend the DECT sinogram decomposition method of Ying et al. to MECT, allowing estimation of separate Compton and photoelectric sinograms. We furthermore introduce a weighting, based on a quadratic approximation to the Poisson likelihood function, that deemphasizes energy bins with low signal. Simulation results show that the proposed approach succeeds in estimating material properties even in high-attenuation scenarios where the DECT method fails, improving the signal-to-noise ratio of reconstructions by over 20 dB for the high-attenuation phantom. Our work demonstrates the potential of photon-counting detectors for stably recovering material properties even when high attenuation is present, thus enabling the development of improved scanning systems.
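The Poisson-motivated weighting idea can be sketched for a single ray in the linearized log domain (basis functions, bin energies, and line-integral values below are illustrative assumptions; the paper's method operates on full multi-energy sinograms):

```python
import numpy as np

# Per-ray Compton/photoelectric decomposition from multi-energy counts,
# weighting each energy bin by its counts (var[log I] ~ 1/I), which
# deemphasizes low-signal bins.
E = np.array([40.0, 60.0, 80.0, 100.0, 120.0])  # bin energies, keV
f_pe = 1.0 / E**3                               # photoelectric ~ 1/E^3
f_kn = np.ones_like(E)                          # Compton (flat stand-in)
B = np.column_stack([f_kn, f_pe])

a_true = np.array([2.0, 4.0e4])                 # (Compton, PE) line integrals
I0 = 1.0e5                                      # incident counts per bin
rng = np.random.default_rng(5)
counts = rng.poisson(I0 * np.exp(-B @ a_true))  # Beer-Lambert + Poisson noise

y = -np.log(counts / I0)                        # log-domain measurements
w = counts.astype(float)                        # Poisson-motivated weights
W = np.diag(w)
a_hat = np.linalg.solve(B.T @ W @ B, B.T @ W @ y)
```

With all bins well exposed the weighting changes little; its benefit appears when metal suppresses the counts in some bins, whose noisy log-measurements would otherwise dominate the unweighted fit.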
Afra, Sardar; Gildin, Eduardo
2016-09-01
Parameter estimation through robust parameterization techniques has been addressed in many works on history matching and inverse problems. Reservoir models are in general complex, nonlinear, and large-scale with respect to the large number of states and unknown parameters. Thus, a practical approach for replacing the original set of highly correlated unknown parameters with a non-correlated set of lower dimensionality, one that captures the most significant features of the original set, is of high importance. Furthermore, de-correlating the system's parameters while keeping the geological description intact is critical for controlling the ill-posed nature of such problems. We introduce the advantages of a new low-dimensional parameterization approach for reservoir characterization that utilizes multilinear-algebra-based techniques such as higher-order singular value decomposition (HOSVD). In tensor-based approaches such as HOSVD, 2D permeability images are treated as they are, i.e., the data structure is preserved, whereas in conventional dimensionality-reduction algorithms such as SVD the data must be vectorized. Hence, compared to classical methods, greater redundancy reduction with less information loss can be achieved by decreasing the redundancies present in all dimensions. In other words, the HOSVD approximation yields a more compact data representation, in the least-squares sense, and better geological consistency than classical algorithms. We examined the performance of the proposed parameterization technique against the SVD approach on the SPE10 benchmark reservoir model as well as on synthetic channelized permeability maps to demonstrate the capability of the proposed method. Moreover, to acquire statistical consistency, we repeat all experiments for a set of 1000 unknown geological samples and provide a comparison using RMSE analysis. Results prove that, for a fixed compression ratio, the performance of the proposed approach
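A minimal HOSVD sketch via mode-n unfoldings (toy tensor sizes; keeping all singular vectors here, so the reconstruction is exact, whereas compression would truncate each factor matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
# A stack of 2D "permeability images" treated as a 3rd-order tensor,
# without vectorization: realizations x rows x cols.
T = rng.random((10, 12, 14))

def unfold(T, mode):
    """Mode-n unfolding: the chosen mode becomes rows, the rest are flattened."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Factor matrices: left singular vectors of each mode-n unfolding.
U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(T.ndim)]

# Core tensor: project T onto every factor matrix (mode-n products).
core = T
for m, Um in enumerate(U):
    core = np.moveaxis(np.tensordot(Um.T, np.moveaxis(core, m, 0), axes=1), 0, m)

# Reconstruction: multiply the core back by each factor matrix.
T_rec = core
for m, Um in enumerate(U):
    T_rec = np.moveaxis(np.tensordot(Um, np.moveaxis(T_rec, m, 0), axes=1), 0, m)
```

Truncating each `U[m]` to its leading columns gives the low-dimensional, geology-preserving parameterization the abstract describes, with the core tensor playing the role of the reduced parameter set.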
Optimization of dual-energy CT acquisitions for proton therapy using projection-based decomposition.
Vilches-Freixas, Gloria; Létang, Jean Michel; Ducros, Nicolas; Rit, Simon
2017-09-01
Dual-energy computed tomography (DECT) has been presented as a valid alternative to single-energy CT to reduce the uncertainty of the conversion of patient CT numbers to proton stopping power ratio (SPR) of tissues relative to water. The aim of this work was to optimize DECT acquisition protocols from simulations of X-ray images for the treatment planning of proton therapy using a projection-based dual-energy decomposition algorithm. We have investigated the effect of various voltages and tin filtration combinations on the SPR map accuracy and precision, and the influence of the dose allocation between the low-energy (LE) and the high-energy (HE) acquisitions. For all spectra combinations, virtual CT projections of the Gammex phantom were simulated with a realistic energy-integrating detector response model. Two situations were simulated: an ideal case without noise (infinite dose) and a realistic situation with Poisson noise corresponding to a 20 mGy total central dose. To determine the optimal dose balance, the proportion of LE-dose with respect to the total dose was varied from 10% to 90% while keeping the central dose constant, for four dual-energy spectra. SPR images were derived using a two-step projection-based decomposition approach. The ranges of 70 MeV, 90 MeV, and 100 MeV proton beams onto the adult female (AF) reference computational phantom of the ICRP were analytically determined from the reconstructed SPR maps. The energy separation between the incident spectra had a strong impact on the SPR precision. Maximizing the incident energy gap reduced image noise. However, the energy gap was not a good metric to evaluate the accuracy of the SPR. In terms of SPR accuracy, a large variability of the optimal spectra was observed when studying each phantom material separately. The SPR accuracy was almost flat in the 30-70% LE-dose range, while the precision showed a minimum slightly shifted in favor of lower LE-dose. Photon noise in the SPR images (20 mGy dose
Adaptive Aggregation Based Domain Decomposition Multigrid for the Lattice Wilson Dirac Operator
Frommer, Andreas; Krieg, Stefan; Leder, Björn; Rottmann, Matthias
2013-01-01
In lattice QCD computations a substantial amount of work is spent in solving discretized versions of the Dirac equation. Conventional Krylov solvers show critical slowing down for large system sizes and physically interesting parameter regions. We present a domain decomposition adaptive algebraic multigrid method used as a preconditioner to solve the "clover improved" Wilson discretization of the Dirac equation. This approach combines and improves two approaches, namely domain decomposition and adaptive algebraic multigrid, that have previously been used separately in lattice QCD. We show, in extensive numerical tests conducted with a parallel production-code implementation, that considerable speed-up over conventional Krylov subspace methods, domain decomposition methods, and other hierarchical approaches can be achieved for realistic system sizes.
BP-Neural-Network-Based Tool Wear Monitoring by Using Wavelet Decomposition of the Power Spectrum
ZHENG Jian-ming; XI Chang-qing; LI Yan; XIAO Ji-ming
2004-01-01
In a drilling process, the power spectrum of the drilling force is related to tool wear and is widely applied in tool-wear monitoring. However, feature extraction and identification from the power spectrum have remained an unresolved, difficult problem. This paper solves it by decomposing the power spectrum into multiple layers using the wavelet transform and extracting the low-frequency decomposition coefficients as the envelope information of the power spectrum. Intelligent identification of the tool-wear status in the drilling process is achieved by fusing the wavelet decomposition coefficients of the power spectrum with a BP (back-propagation) neural network. The experimental results show that the features of the power spectrum can be extracted efficiently by this method, and the trained neural networks show high identification precision and good generalization ability.
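A minimal sketch of the multilayer decomposition using the Haar wavelet as a stand-in (the paper does not specify its wavelet basis; the signal, length, and level count here are illustrative):

```python
import numpy as np

def haar_step(x):
    """One Haar level: (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wavedec(x, levels):
    """Multilevel decomposition: returns [a_L, d_L, ..., d_1]."""
    coeffs, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_step(a)
        coeffs.insert(0, d)
    coeffs.insert(0, a)
    return coeffs

# A power-spectrum-like test signal of length 128 (power of two).
signal = np.sin(np.linspace(0.0, 20.0 * np.pi, 256))
spectrum = (np.abs(np.fft.rfft(signal)) ** 2)[:128]

coeffs = wavedec(spectrum, 3)
envelope_feature = coeffs[0]   # low-frequency approximation: the "envelope"
```

The low-frequency approximation (`coeffs[0]`) is the compact envelope-style feature vector that would be fed to the BP network; the Haar transform is orthonormal, so no spectral energy is lost in the decomposition itself.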
Djamil, John; Segler, Stefan A W; Bensch, Wolfgang; Schürmann, Ulrich; Deng, Mao; Kienle, Lorenz; Hansen, Sven; Beweries, Torsten; von Wüllen, Leo; Rosenfeldt, Sabine; Förster, Stephan; Reinsch, Helge
2015-06-08
Nanocomposites based on molybdenum disulfide (MoS₂) and different carbon modifications are intensively investigated for several areas of application owing to their intriguing optical and electrical properties. Addition of a third element may enhance the functionality and application areas of such nanocomposites. Herein, we present a facile synthetic approach based on the directed thermal decomposition of (Ph₄P)₂MoS₄, generating MoS₂ nanocomposites containing carbon and phosphorus. Decomposition at 250 °C yields a composite material with significantly enlarged MoS₂ interlayer distances, caused by the in situ formation of Ph₃PS bonded to the MoS₂ slabs through Mo-S bonds and of (Ph₄P)₂S molecules in the van der Waals gap, as evidenced by ³¹P solid-state NMR spectroscopy. Visible-light-driven hydrogen generation demonstrates the high catalytic performance of the materials.
Yukun Bao
2012-01-01
With regard to the nonlinearity and irregularity, along with the implicit seasonality and trend, in air passenger traffic forecasting, this study proposes an ensemble empirical mode decomposition (EEMD)-based support vector machines (SVMs) modeling framework incorporating a slope-based method to restrain the end effect that occurs during the sifting process of EEMD, abbreviated as EEMD-Slope-SVMs. Real monthly air passenger traffic series for six selected airlines in the USA and UK were collected to test the effectiveness of the proposed approach. Empirical results demonstrate that the proposed decomposition-and-ensemble modeling framework outperforms the selected counterparts, such as single SVMs (a straightforward application of SVMs), Holt-Winters, and ARIMA, in terms of RMSE, MAPE, GMRAE, and DS. Additional evidence also highlights the improved performance compared with an EEMD-SVM model that does not restrain the end effect.
Mishra Vinod
2016-01-01
The numerical Laplace transform method, combined with the Adomian decomposition method, is applied to approximate the solution of nonlinear (quadratic) Riccati differential equations. A new technique is proposed in this work by reintroducing the unknown function in the Adomian polynomials via the well-known Newton-Raphson formula. The solutions obtained by the iterative algorithm are expressed as an infinite series. The simplicity and efficacy of the method are demonstrated with examples in which comparisons are made among the exact solutions, the ADM (Adomian decomposition method), the HPM (homotopy perturbation method), the Taylor series method, and the proposed scheme.
Chunan Tang; Tianhui Ma; Xiaoli Ding
2009-01-01
Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR), used for monitoring crust deformation, are found to be very promising for earthquake prediction based on stress-forecasting. However, it is recognized that unless we can give reasonable explanations of the curious precursory phenomena that continue to be serendipitously observed from time to time, such high technology as GPS or InSAR is difficult to use efficiently. Therefore, a proper model revealing the relation between earthquake evolution and stress variation, such as the phenomena of stress buildup, stress shadow, and stress transfer (SSS), is crucial to GPS- or InSAR-based earthquake prediction. Here we address this question through a numerical approach to earthquake development using an intuitive physical model with a map-like configuration of a discontinuous fault system. The simulation provides a physical basis for the principle of stress-forecasting of earthquakes based on SSS and for the application of GPS or InSAR in earthquake prediction. The observed SSS-associated phenomena, with images of stress distribution during the failure process, can be continuously simulated. It is shown that the SSS are better indicators of earthquake precursors than seismic foreshocks, suggesting the predictability of earthquakes based on a stress-forecasting strategy.
Cai Yi; Jianhui Lin; Tengda Ruan; Yanping Li
2015-01-01
Due to the special location and structure of the transmission system on the CRH5 high-speed train, a dynamically unbalanced state of the cardan shaft poses a threat to the operational safety of the train, so effective methods that capture the cardan shaft's operating information and estimate its performance state in real time are needed. In this study, a useful estimation method based on ensemble empirical mode decomposition (EEMD) is presented. Using this method, the time-frequency characteristics of the cardan shaft...
Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.
Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa
2017-02-01
Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS) smoothing (STL). The data series (daily Poaceae pollen concentrations over the period 2006-2014) was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
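A simplified stand-in for the STL idea, using a centered moving average for the trend and per-phase means for the seasonal component (real STL is an iterative, robust LOESS-based procedure; the synthetic "daily pollen" series below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
period = 365
t = np.arange(4 * period)   # four years of daily observations

# Synthetic series: a slow trend plus a pollen-season-like seasonal pulse.
season = 50.0 * np.maximum(0.0, np.sin(2 * np.pi * t / period)) ** 3
trend = 0.01 * t
y = trend + season + rng.normal(0.0, 2.0, t.size)

# Trend estimate: centered moving average over one full period.
kernel = np.ones(period) / period
trend_hat = np.convolve(y, kernel, mode="same")

# Seasonal estimate: average the detrended series over each day-of-cycle.
detrended = y - trend_hat
phase_means = np.array([detrended[t % period == k].mean() for k in range(period)])
seasonal_hat = phase_means[t % period]

residual = y - trend_hat - seasonal_hat   # the stochastic component
```

As in the abstract, the residual component is what would then be regressed on daily temperature and rainfall (e.g. with PLSR), while the seasonal component captures the recurring phenological signal.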
Hu, Xintao; Zhu, Jianxin; Ding, Qiong
2011-07-15
Remediation action is critical for the management of polychlorinated biphenyl (PCB) contaminated sites. Dozens of remediation technologies developed internationally can be divided into two general categories: incineration and non-incineration. In this paper, life cycle assessment (LCA) was carried out to study the environmental impacts of these two kinds of remediation technologies at selected PCB-contaminated sites, where Infrared High Temperature Incineration (IHTI) and Base Catalyzed Decomposition (BCD) were selected as representatives of incineration and non-incineration, respectively. A combined midpoint/damage approach was adopted, using SimaPro 7.2 and IMPACT 2002+, to assess the human toxicity, ecotoxicity, climate change impact, and resource consumption of the five subsystems of the IHTI and BCD technologies. It was found that the major environmental impacts over the whole life cycle arose from energy consumption in both the IHTI and BCD processes. For IHTI, the primary and secondary combustion subsystem contributes more than 50% of the midpoint impacts for carcinogens, respiratory inorganics, respiratory organics, terrestrial ecotoxicity, terrestrial acidification/eutrophication, and global warming. In the BCD process, the rotary kiln reactor subsystem makes the highest contribution to almost all the midpoint impacts, including global warming, non-renewable energy, non-carcinogens, terrestrial ecotoxicity, and respiratory inorganics. In terms of midpoint impacts, the characterization values for global warming from IHTI and BCD were about 432.35 and 38.5 kg CO₂-eq per ton of PCB-containing soil, respectively. The LCA results showed that the single score of the BCD environmental impact was 1468.97 Pt while IHTI's score was 2785.15 Pt, indicating that BCD potentially has a lower environmental impact than IHTI in PCB-contaminated soil remediation.
Daryl L Moorhead
2013-08-01
We re-examined data from a recent litter decay study to determine whether additional insights could be gained to inform decomposition modeling. Rinkes et al. (2013) conducted 14-day laboratory incubations of sugar maple (Acer saccharum) or white oak (Quercus alba) leaves mixed with sand (0.4% organic C content) or loam (4.1% organic C). They measured microbial biomass C, carbon dioxide efflux, soil ammonium, nitrate, and phosphate concentrations, and β-glucosidase (BG), β-N-acetyl-glucosaminidase (NAG), and acid phosphatase (AP) activities on days 1, 3, and 14. Analyses of relationships among variables yielded different insights than the original analyses of individual variables. For example, although respiration rates per g soil were higher for loam than sand, rates per g soil C were actually higher for sand than loam, and rates per g microbial C showed little difference between treatments. Microbial biomass C peaked on day 3, when biomass-specific enzyme activities were lowest, suggesting uptake of litter C without extracellular hydrolysis. This result refuted a common model assumption that all enzyme production is constitutive and thus proportional to biomass, and/or indicated that part of litter decay is independent of enzyme activity. The length and angle of vectors defined by ratios of enzyme activities (BG/NAG versus BG/AP) represent relative microbial investments in C-acquiring (length) and in N- and P-acquiring (angle) enzymes. Shorter lengths on day 3 suggested low C limitation, whereas greater lengths on day 14 suggested an increase in C limitation with decay. The soils and litter in this study generally showed stronger P limitation (angles > 45°). Reductions in vector angles to < 45° for sand by day 14 suggested a shift to N limitation. These relational variables inform enzyme-based models, and they are usually much less ambiguous when obtained from a single study in which measurements were made on the same samples than when extrapolated from separate studies.
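The vector length and angle can be computed as in the following sketch (the normalization convention shown is one common published choice and is treated here as an assumption, not necessarily the exact formula of this study):

```python
import math

def enzyme_vector(BG, NAG, AP):
    """Length/angle of the enzyme-ratio vector from activities BG, NAG, AP."""
    x = BG / (BG + AP)                      # relative C- vs P-acquisition
    y = BG / (BG + NAG)                     # relative C- vs N-acquisition
    length = math.hypot(x, y)               # overall investment in C acquisition
    angle = math.degrees(math.atan2(x, y))  # > 45 deg: P limitation; < 45 deg: N limitation
    return length, angle

# Equal activities give a 45-degree angle (balanced N vs P limitation).
length, angle = enzyme_vector(10.0, 10.0, 10.0)
```

Shorter vectors then indicate weaker C limitation, and the angle's position relative to 45° separates P-limited from N-limited conditions, matching the interpretation in the abstract.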
Plonka, Anna M.; Wang, Qi; Gordon, Wesley O.; Balboa, Alex; Troya, Diego; Guo, Weiwei; Sharp, Conor H.; Senanayake, Sanjaya D.; Morris, John R.; Hill, Craig L.; Frenkel, Anatoly I. (BNL); (Virginia Tech); (ECBC); (Emory); (SBU)
2017-01-18
Zr-based metal organic frameworks (MOFs) have been recently shown to be among the fastest catalysts of nerve-agent hydrolysis in solution. We report a detailed study of the adsorption and decomposition of a nerve-agent simulant, dimethyl methylphosphonate (DMMP), on UiO-66, UiO-67, MOF-808, and NU-1000 using synchrotron-based X-ray powder diffraction, X-ray absorption, and infrared spectroscopy, which reveals key aspects of the reaction mechanism. The diffraction measurements indicate that all four MOFs adsorb DMMP (introduced at atmospheric pressures through a flow of helium or air) within the pore space. In addition, the combination of X-ray absorption and infrared spectra suggests direct coordination of DMMP to the Zr6 cores of all MOFs, which ultimately leads to decomposition to phosphonate products. These experimental probes into the mechanism of adsorption and decomposition of chemical warfare agent simulants on Zr-based MOFs open new opportunities in rational design of new and superior decontamination materials.
Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan
2017-08-01
Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEP) in electroencephalogram (EEG) is essential for the related brain-computer interface (BCI) applications. Approach. Although canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate the recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and the temporally local multivariate synchronization index (TMSI). The results suggest that MEMD-CCA achieved significantly higher accuracy than standard CCA and TMSI. It gave improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s and 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method outperformed the filter-based decomposition (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) based CCA for SSVEP recognition. Significance. The results demonstrate the ability of the proposed MEMD-CCA to improve the performance of SSVEP-based BCI.
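The CCA scoring step that MEMD-CCA builds on can be illustrated in a few lines. This is a hedged sketch of standard CCA-based SSVEP detection on simulated data, not the MEMD stage of the paper's method; the channel gains, noise level, and candidate frequencies are made up.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

fs, dur = 250.0, 2.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(1)

# Synthetic 4-channel "EEG" containing a 10 Hz SSVEP plus channel noise.
ssvep = np.sin(2 * np.pi * 10.0 * t)
X = np.column_stack([g * ssvep + rng.normal(0.0, 1.0, t.size)
                     for g in (0.8, 0.6, 0.5, 0.4)])

def score(f):
    """CCA score against sin/cos references at f and its 2nd harmonic."""
    Y = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t),
                         np.sin(4 * np.pi * f * t), np.cos(4 * np.pi * f * t)])
    return max_canonical_corr(X, Y)

candidates = [8.0, 10.0, 12.0, 15.0]
detected = max(candidates, key=score)
```

The stimulus frequency is the candidate whose reference set yields the largest canonical correlation; MEMD-CCA applies this same scoring after first isolating SSVEP-related sub-bands.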
Spectral proper orthogonal decomposition
Sieber, Moritz; Paschereit, Christian Oliver
2015-01-01
The identification of coherent structures from experimental or numerical data is an essential task when conducting research in fluid dynamics. This typically involves the construction of an empirical mode base that appropriately captures the dominant flow structures. The most prominent candidates are the energy-ranked proper orthogonal decomposition (POD) and the frequency-ranked Fourier decomposition and dynamic mode decomposition (DMD). However, these methods fail when the relevant coherent structures occur at low energies or at multiple frequencies, which is often the case. To overcome the deficit of these "rigid" approaches, we propose a new method termed Spectral Proper Orthogonal Decomposition (SPOD). It is based on classical POD and it can be applied to spatially and temporally resolved data. The new method involves an additional temporal constraint that enables a clear separation of phenomena that occur at multiple frequencies and energies. SPOD allows for a continuous shifting from the energetically ...
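For context, the classical energy-ranked POD that SPOD extends can be computed directly from a snapshot matrix via the SVD. Below is a minimal sketch on a synthetic two-structure "flow"; it is not the authors' SPOD, which adds the temporal constraint described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Snapshot matrix: each column is one time sample of a 1-D "flow" field
# composed of two coherent structures plus measurement noise.
x = np.linspace(0.0, 1.0, 64)
t = np.linspace(0.0, 4.0, 200)
mode1 = np.sin(np.pi * x)[:, None] * np.cos(2 * np.pi * 1.0 * t)[None, :]
mode2 = 0.3 * np.sin(3 * np.pi * x)[:, None] * np.cos(2 * np.pi * 3.0 * t)[None, :]
D = mode1 + mode2 + 0.01 * rng.normal(size=(64, 200))

# Classical (energy-ranked) POD: left singular vectors are spatial modes,
# squared singular values rank their energy content.
U, s, Vt = np.linalg.svd(D - D.mean(axis=1, keepdims=True), full_matrices=False)
energy = s**2 / np.sum(s**2)

# The two coherent structures should capture almost all of the energy.
captured = energy[:2].sum()
```

The weakness SPOD targets is visible here: the ranking is purely by energy, so a dynamically important but low-energy structure would be buried far down the mode list.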
Md. Noor-E-Alam
2012-01-01
Full Text Available p-cycle networks have attracted considerable interest in the network survivability literature in recent years. However, most existing work assumes a known network topology upon which to apply p-cycle restoration. In the present work, we develop an incremental topology optimization ILP for p-cycle network design, where a known topology can be amended with new fibre links selected from a set of eligible spans. The ILP proves relatively easy to solve for small test-case instances but becomes computationally intensive on larger networks. We then follow with a relaxation-based decomposition approach to overcome this challenge. The decomposition approach significantly reduces the computational complexity of the problem, allowing the ILP to be solved in reasonable time with no statistically significant impact on solution optimality.
Bhattacharya, A.; Guo, Y. Q.; Bernstein, E. R.
2009-11-01
Unimolecular excited electronic state decomposition of novel high nitrogen content energetic molecules, such as 3,3'-azobis(6-amino-1,2,4,5-tetrazine)-mixed N-oxides (DAATO3.5), 3-amino-6-chloro-1,2,4,5-tetrazine-2,4-dioxide (ACTO), and 3,6-diamino-1,2,4,5-tetrazine-1,4-dioxide (DATO), is investigated. Although these molecules are based on N-oxides of a tetrazine aromatic heterocyclic ring, their decomposition behavior distinctly differs from that of bare tetrazine, in which N2 and HCN are produced as decomposition products through a concerted dissociation mechanism. NO is observed to be an initial decomposition product from all tetrazine-N-oxide based molecules from their low-lying excited electronic states. The NO product from DAATO3.5 and ACTO is rotationally cold (20 K) and vibrationally hot (1200 K), while the NO product from DATO is rotationally hot (50 K) and vibrationally cold [only the (0-0) vibronic transition of NO is observed]. DAATO3.5 and ACTO primarily differ from DATO with regard to molecular structure, by the relative position of oxygen atom attachment to the tetrazine ring. Therefore, the relative position of oxygen in tetrazine-N-oxides is proposed to play an important role in their energetic behavior. N2O is ruled out as an intermediate precursor of the NO product observed from all three molecules. Theoretical calculations at the CASMP2/CASSCF level of theory predict a ring contraction mechanism for generation of the initial NO product from these molecules. The ring contraction occurs through an (S1/S0)CI conical intersection.
Duan, Yabo; Song, Chengtian
2016-10-01
Empirical mode decomposition (EMD) is a recently proposed nonlinear and nonstationary laser signal denoising method. A noisy signal is broken down using EMD into oscillatory components that are called intrinsic mode functions (IMFs). Thresholding-based denoising and correlation-based partial reconstruction of IMFs are the two main research directions for EMD-based denoising. Similar to other decomposition-based denoising approaches, EMD-based denoising methods require a reliable threshold to determine which IMFs are noise components and which IMFs are noise-free components. In this work, we propose a new approach in which each IMF is first denoised using EMD interval thresholding (EMD-IT), and then a robust thresholding process based on Spearman correlation coefficient is used for relevant modes selection. The proposed method tackles the problem using a thresholding-based denoising approach coupled with partial reconstruction of the relevant IMFs. Other traditional denoising methods, including correlation-based EMD partial reconstruction (EMD-Correlation), discrete Fourier transform and wavelet-based methods, are investigated to provide a comparison with the proposed technique. Simulation and test results demonstrate the superior performance of the proposed method when compared with the other methods.
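The relevant-mode selection idea (threshold a Spearman rank correlation between each mode and the observed signal, then partially reconstruct) can be sketched without a full EMD implementation. In the sketch below, crude FFT band-splitting stands in for the sifted IMFs, and the 0.6 threshold is illustrative, not the paper's robust threshold.

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation via Pearson correlation of the ranks."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(3)
n = 2048
t = np.arange(n) / n
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.4 * rng.normal(size=n)

# Stand-in for EMD: split the signal into three "modes" by crude FFT bands
# (real EMD-IT would operate on sifted IMFs instead).
F = np.fft.rfft(noisy)
bands = [(0, 20), (20, 200), (200, F.size)]
modes = []
for lo, hi in bands:
    G = np.zeros_like(F)
    G[lo:hi] = F[lo:hi]
    modes.append(np.fft.irfft(G, n))

# Keep modes whose rank correlation with the observation is high, then
# partially reconstruct from the retained modes only.
rho = np.array([abs(spearman(m, noisy)) for m in modes])
keep = rho > 0.6
denoised = np.sum([m for m, kept in zip(modes, keep) if kept], axis=0)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

Only the low-frequency mode carrying the tone survives the threshold, so the partial reconstruction discards most of the broadband noise.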
Dominant modal decomposition method
Dombovari, Zoltan
2017-03-01
The paper deals with the automatic decomposition of experimental frequency response functions (FRFs) of mechanical structures. The decomposition of FRFs is based on the Green function representation of free vibratory systems. After the determination of the impulse dynamic subspace, the system matrix is formulated and the poles are calculated directly. By means of the corresponding eigenvectors, the contribution of each element of the impulse dynamic subspace is determined and a sufficient decomposition of the corresponding FRF is carried out. With the presented dominant modal decomposition (DMD) method, the mode shapes, the modal participation vectors and the modal scaling factors are identified using the decomposed FRFs. An analytical example is presented along with experimental case studies taken from the machine tool industry.
Fichtner, S; Hofmann, J; Möller, A; Schrage, C; Giebelhausen, J M; Böhringer, B; Gläser, R
2013-11-15
For the decomposition of chemical warfare agents, a hybrid material concept was applied. This consists of a copper oxide-containing phase as a component with reactive functionality supported on polymer-based spherical activated carbon (PBSAC) as a component with adsorptive functionality. A corresponding hybrid material was prepared by impregnation of PBSAC with copper(II) nitrate and subsequent calcination at 673 K. The copper phase exists predominantly as copper(I) oxide, which is homogeneously distributed over the PBSAC particles. The hybrid material containing 16 wt.% copper on PBSAC is capable of self-detoxifying the mustard gas surrogate 2-chloroethylethylsulfide (CEES) at room temperature. The decomposition is related to the breakthrough behavior of the reactant CEES, which displaces the reaction product ethylvinylsulfide (EVS). This leads to a combined breakthrough of CEES and EVS. The decomposition of CEES is shown to occur catalytically over the copper-containing PBSAC material. Thus, the hybrid material can even be considered to be self-cleaning.
LI Li; ZHANG Mi-lin; YUAN Fu-long; SHI Ke-ying; ZHANG Guo; ZHANG Dan
2006-01-01
Iron-based perovskite-type compounds modified by Ru were prepared through a sol-gel process to study their catalytic activity for direct NOx decomposition at low temperature and to evaluate the conversion of NO under the experimental conditions. The catalytic activity of the La0.9Ce0.1Fe0.8-nCo0.2RunO3 (n = 0.01, 0.03, 0.05, 0.07, 0.09) series for NO alone, the NO-CO two-component mixture, and the CO-HC-NO three-component mixture was also analyzed. The catalytic investigation showed that the presence of Ru is necessary for high activity in the decomposition of nitric oxide even at low temperature (400 °C), and La0.9Ce0.1Fe0.75Co0.2Ru0.05O3 (n = 0.05) showed the best activity among all the samples, with an NO conversion of 58.5%. With reducing gases (CO, C3H6) added to the feed, the catalyst displayed very high activity in the decomposition of NO, with conversions of 80% and 92.5%, respectively.
Dong, Feng; Long, Ruyin; Chen, Hong; Li, Xiaohui; Yang, Qingliang
2013-01-01
China is considered to be the main carbon producer in the world. The per-capita carbon emissions indicator is an important measure of the regional carbon emissions situation. This study used the LMDI factor decomposition model-panel co-integration test two-step method to analyze the factors that affect per-capita carbon emissions. The main results are as follows. (1) During 1997, Eastern China, Central China, and Western China ranked first, second, and third in the per-capita carbon emissions, while in 2009 the pecking order changed to Eastern China, Western China, and Central China. (2) According to the LMDI decomposition results, the key driver boosting the per-capita carbon emissions in the three economic regions of China between 1997 and 2009 was economic development, and the energy efficiency was much greater than the energy structure after considering their effect on restraining increased per-capita carbon emissions. (3) Based on the decomposition, the factors that affected per-capita carbon emissions in the panel co-integration test showed that Central China had the best energy structure elasticity in its regional per-capita carbon emissions. Thus, Central China was ranked first for energy efficiency elasticity, while Western China was ranked first for economic development elasticity.
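The arithmetic behind an additive LMDI factor decomposition can be sketched with a three-factor Kaya-style identity for per-capita emissions. The numbers below are illustrative, not the paper's data, and the mapping of factors to "energy structure", "energy efficiency", and "economic development" is an assumed simplification of the study's index set.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean, L(a, b) = (a - b) / (ln a - ln b)."""
    return a if a == b else (a - b) / (np.log(a) - np.log(b))

# Kaya-style identity for per-capita emissions (illustrative numbers):
#   C/P = (C/E) * (E/GDP) * (GDP/P)
#         carbon factor, energy intensity (efficiency), economic development
f0 = np.array([2.5, 0.8, 3.0])        # base-year factors
fT = np.array([2.3, 0.6, 5.0])        # end-year factors
c0, cT = f0.prod(), fT.prod()

# Additive LMDI: each factor's contribution to the change in per-capita
# emissions; the contributions sum exactly to the total change.
L = logmean(cT, c0)
effects = L * np.log(fT / f0)
total = cT - c0
```

The exact-additivity property (no residual term) is what makes LMDI attractive for attributing emission changes to drivers such as economic development versus efficiency gains.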
A fast approach for overcomplete sparse decomposition based on smoothed L0 norm
Mohimani, Hossein; Jutten, Christian
2008-01-01
In this paper, a fast algorithm for overcomplete sparse decomposition, called SL0, is proposed. The algorithm is essentially a method for obtaining sparse solutions of underdetermined systems of linear equations, and its applications include underdetermined Sparse Component Analysis (SCA), atomic decomposition on overcomplete dictionaries, compressed sensing, and decoding real field codes. Contrary to previous methods, which usually solve this problem by minimizing the L1 norm using Linear Programming (LP) techniques, our algorithm tries to directly minimize the L0 norm. It is experimentally shown that the proposed algorithm is about two to three orders of magnitude faster than the state-of-the-art interior-point LP solvers, while providing the same (or better) accuracy.
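The abstract's description is enough to sketch the SL0 iteration: a gradient step that shrinks small entries of the smoothed-L0 objective, followed by projection back onto the constraint set, with a geometrically decreasing sigma schedule. The parameter values below are commonly cited defaults, not necessarily the paper's tuned settings.

```python
import numpy as np

def sl0(A, b, sigma_min=1e-4, sigma_decrease=0.5, mu=2.0, inner=3):
    """Smoothed-L0 recovery of a sparse solution of underdetermined A x = b.
    Minimal sketch following the published algorithm outline."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                        # minimum-L2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            # Gradient step on F_sigma(x) = sum_i exp(-x_i^2 / (2 sigma^2)),
            # which shrinks entries that are small relative to sigma ...
            x = x - mu * x * np.exp(-x**2 / (2.0 * sigma**2))
            # ... then project back onto the feasible set {x : A x = b}.
            x = x - A_pinv @ (A @ x - b)
        sigma *= sigma_decrease
    return x

rng = np.random.default_rng(4)
n, m, k = 100, 40, 5                      # ambient dim, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))
b = A @ x_true

x_hat = sl0(A, b)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The speed advantage claimed over interior-point LP comes from the fact that each iteration is only a few matrix-vector products.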
Synthesis, Optical Characterization, and Thermal Decomposition of Complexes Based on Biuret Ligand
Mei-Ling Wang
2016-01-01
Full Text Available Four complexes were synthesized in methanol solution using nickel acetate or nickel chloride, manganese acetate, manganese chloride, and biuret as raw materials. The complexes were characterized by elemental analyses, UV, FTIR, Raman spectra, X-ray powder diffraction, and thermogravimetric analysis. The compositions of the complexes were [Ni(bi)2(H2O)2](Ac)2·H2O (1), [Ni(bi)2Cl2] (2), [Mn(bi)2(Ac)2]·1.5H2O (3), and [Mn(bi)2Cl2] (4) (bi = NH2CONHCONH2), respectively. In the complexes, every metal ion was coordinated by oxygen atoms or chlorine ions or both. The nickel and manganese ions were all hexacoordinated. The thermal decomposition processes of the complexes under air included the loss of water molecules, the pyrolysis of ligands, and the decomposition of inorganic salts, and the final residues were nickel oxide and manganese oxide, respectively.
Aijun Liu; Michele Pfund; John Fowler
2016-01-01
How to deal with the collaboration between task decomposition and task scheduling is the key problem of the integrated manufacturing system for complex products. With the development of manufacturing technology, we can explore a new way to solve this problem. Firstly, a new method for quantitative analysis of task granularity is put forward, which can precisely evaluate the task granularity of complex product cooperation workflow in the integrated manufacturing system; on this basis, the method is used to guide the coarse-grained task decomposition and to recombine the sub-tasks with low cohesion coefficients. Then, a multi-objective optimization model and an algorithm are set up for task scheduling optimization. Finally, the application feasibility of the model and algorithm is validated through an application case study.
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
Fattebert, J.-L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Richards, D.F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glosli, J.N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440×10^6 particles on 65,536 MPI tasks.
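The idea of displacing Voronoi sites along steepest-descent directions of a load-imbalance cost can be sketched in 2-D. This toy version is an assumption-laden stand-in for the paper's parallel algorithm: it uses softened (differentiable) particle counts instead of hard Voronoi counts, finite-difference gradients, and a backtracking step size.

```python
import numpy as np

rng = np.random.default_rng(5)

# Particles distributed non-uniformly: a dense blob plus a uniform background.
P = np.vstack([rng.normal([0.3, 0.3], 0.08, size=(300, 2)),
               rng.uniform(0.0, 1.0, size=(100, 2))])

def soft_counts(sites, tau=0.01):
    """Softened per-site particle counts: a smooth stand-in for hard
    Voronoi-cell counts, so the cost below is differentiable."""
    d2 = ((P[:, None, :] - sites[None, :, :]) ** 2).sum(axis=-1)
    d2 = d2 - d2.min(axis=1, keepdims=True)   # for numerical stability
    w = np.exp(-d2 / tau)
    w = w / w.sum(axis=1, keepdims=True)
    return w.sum(axis=0)

def cost(sites):
    n = soft_counts(sites)
    return ((n - n.mean()) ** 2).sum()

def num_grad(sites, eps=1e-5):
    g = np.zeros_like(sites)
    for j in range(sites.shape[0]):
        for k in range(2):
            sp = sites.copy(); sp[j, k] += eps
            sm = sites.copy(); sm[j, k] -= eps
            g[j, k] = (cost(sp) - cost(sm)) / (2 * eps)
    return g

# 3x3 grid of initial Voronoi sites (one per "processor").
sites = np.array([[a, b] for a in (0.17, 0.5, 0.83) for b in (0.17, 0.5, 0.83)])
c0 = cost(sites)

# Displace sites along steepest-descent directions, with backtracking.
for _ in range(40):
    g = num_grad(sites)
    direction = g / (np.linalg.norm(g) + 1e-12)
    step = 0.02
    while step > 1e-7 and cost(sites - step * direction) >= cost(sites):
        step *= 0.5
    sites = sites - step * direction
c1 = cost(sites)
```

Sites migrate toward the overloaded blob, shrinking its cells and evening out the per-processor particle counts, which is the qualitative behavior the paper reports at scale.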
Java-Based Coupling for Parallel Predictive-Adaptive Domain Decomposition
Cécile Germain‐Renaud
1999-01-01
Full Text Available Adaptive domain decomposition exemplifies the problem of integrating heterogeneous software components with intermediate coupling granularity. This paper describes an experiment where a data-parallel (HPF) client interfaces with a sequential computation server through Java. We show that seamless integration of data-parallelism is possible, but requires most of the tools from the Java palette: Java Native Interface (JNI), Remote Method Invocation (RMI), callbacks and threads.
Finite element formulation based on proper orthogonal decomposition for parabolic equations
Anonymous
2009-01-01
A proper orthogonal decomposition (POD) method is applied to a usual finite element (FE) formulation for parabolic equations so that it is reduced into a POD FE formulation with lower dimensions and sufficiently high accuracy. The errors between the reduced POD FE solution and the usual FE solution are analyzed. It is shown by numerical examples that the results of numerical computations are consistent with the theoretical conclusions, which validates the feasibility and efficiency of the POD method.
Reweighted Low-Rank Tensor Decomposition based on t-SVD and its Applications in Video Denoising
Baburaj, M.; George, Sudhish N.
2016-01-01
The t-SVD based Tensor Robust Principal Component Analysis (TRPCA) decomposes a low-rank multi-linear signal corrupted by gross errors into low multi-rank and sparse components by simultaneously minimizing the tensor nuclear norm and the l1 norm. But if the multi-rank of the signal is considerably large and/or a large amount of noise is present, the performance of TRPCA deteriorates. To overcome this problem, this paper proposes a new efficient iterative reweighted tensor decomposition scheme based on t-...
Dynamic formant extraction of Wa language based on adaptive variational mode decomposition
Fu, Meijun; Dong, Huazhen; Pan, Wenlin
2017-08-01
Wa is one of the Chinese minority languages, spoken by the Wa people of Yunnan Province, China. Until now, it has not been studied from the perspective of engineering phonetics. In this paper, for that reason, we investigate the dynamic formant characteristics of Wa using adaptive variational mode decomposition (AVMD). First, the synthetic dimension is used to split isolated Wa words into voiceless and voiced segments, initials, and finals. Second, linear prediction coding is used to roughly estimate the first three formant frequencies and their bandwidths. Third, the equilibrium constraint parameter and the number of decomposition layers are selected so that AVMD decomposes the signal into intrinsic mode functions (IMFs) without mode aliasing. Fourth, the estimated formant frequencies and bandwidths are used to precisely determine the required IMFs. Fifth, the Hilbert transform is applied to calculate the instantaneous frequencies of these IMFs, which are then weight-averaged to obtain the first three formant frequencies for each frame. Finally, the first three formant frequencies obtained by AVMD were compared with those obtained by the Praat software; the relative correct rate of the former with respect to the latter reaches 86% on average for the selected isolated words, which shows that our method is effective for Wa.
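The Hilbert-transform step (instantaneous frequency of a selected mode) can be sketched with numpy alone. The signal below is a synthetic single tone, not Wa speech, and the sampling rate is an assumption; in the paper this step would run on an IMF selected by the formant estimates.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (one-sided spectrum doubling),
    equivalent to scipy.signal.hilbert."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

fs = 8000.0                        # assumed sampling rate
n = 2000                           # window with an integer number of cycles
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 700.0 * t)  # one synthetic "formant-like" component

z = analytic_signal(x)
phase = np.unwrap(np.angle(z))
inst_f = np.diff(phase) * fs / (2 * np.pi)

# Average over the interior of the window (edges suffer most from leakage).
f_est = inst_f[100:-100].mean()
```

Averaging the instantaneous frequency over a frame is the same weight-average-per-frame idea the abstract describes for recovering formant tracks.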
Huang, Shaoguang; Tian, Lan; Ma, Xiaojie; Wei, Ying
2016-04-01
Hearing impaired people have their own hearing loss characteristics and listening preferences. Therefore hearing aid systems should become more natural, humanized and personalized, which requires that the filterbank in hearing aids provide flexible sound wave decomposition schemes, so that patients are likely to use the most suitable scheme for their own hearing compensation strategy. In this paper, a reconfigurable sound wave decomposition filterbank is proposed. The prototype filter is first cosine modulated to generate uniform subbands. Then by non-linear transformation the uniform subbands are mapped to nonuniform subbands. By changing the control parameters, the nonlinear transformation changes, which leads to different subband allocations. It provides four different sound wave decomposition schemes without changing the structure of the filterbank. The performance of the proposed reconfigurable filterbank was compared with that of fixed filterbanks, fully customizable filterbanks and other existing reconfigurable filterbanks. It is shown that the proposed filterbank provides satisfactory matching performance as well as low complexity and delay, which make it suitable for real hearing aid applications.
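The first stage described here (cosine-modulating a lowpass prototype into uniform subbands) can be sketched directly. The nonlinear frequency-warping stage that makes the bank non-uniform and reconfigurable is omitted, and the prototype below is a generic windowed sinc, not the paper's design.

```python
import numpy as np

M = 8                                   # number of uniform subbands
Lp = 8 * M                              # prototype length (illustrative)
n = np.arange(Lp)
c = (Lp - 1) / 2.0
# Generic windowed-sinc lowpass prototype with cutoff ~pi/(2M).
proto = np.hamming(Lp) * np.sinc((n - c) / (2.0 * M))

# Cosine modulation shifts the prototype to M uniform subband centres
# (2k+1) * pi / (2M), k = 0 .. M-1.
bank = np.array([2.0 * proto * np.cos((2 * k + 1) * np.pi / (2.0 * M) * (n - c))
                 for k in range(M)])

# Locate each subband's peak response (frequencies in units of pi).
nfft = 4096
H = np.abs(np.fft.rfft(bank, nfft, axis=1))
peaks = H.argmax(axis=1) / (nfft / 2.0)
```

Each filter's magnitude response peaks inside its own uniform band; the paper's contribution is the parameterized mapping that then redistributes these bands non-uniformly without redesigning the structure.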
Chiu, Chun-Huo; Chao, Anne
2014-01-01
Hill numbers (or the "effective number of species") are increasingly used to characterize species diversity of an assemblage. This work extends Hill numbers to incorporate species pairwise functional distances calculated from species traits. We derive a parametric class of functional Hill numbers, which quantify "the effective number of equally abundant and (functionally) equally distinct species" in an assemblage. We also propose a class of mean functional diversity (per species), which quantifies the effective sum of functional distances between a fixed species to all other species. The product of the functional Hill number and the mean functional diversity thus quantifies the (total) functional diversity, i.e., the effective total distance between species of the assemblage. The three measures (functional Hill numbers, mean functional diversity and total functional diversity) quantify different aspects of species trait space, and all are based on species abundance and species pairwise functional distances. When all species are equally distinct, our functional Hill numbers reduce to ordinary Hill numbers. When species abundances are not considered or species are equally abundant, our total functional diversity reduces to the sum of all pairwise distances between species of an assemblage. The functional Hill numbers and the mean functional diversity both satisfy a replication principle, implying the total functional diversity satisfies a quadratic replication principle. When there are multiple assemblages defined by the investigator, each of the three measures of the pooled assemblage (gamma) can be multiplicatively decomposed into alpha and beta components, and the two components are independent. The resulting beta component measures pure functional differentiation among assemblages and can be further transformed to obtain several classes of normalized functional similarity (or differentiation) measures, including N-assemblage functional generalizations of the
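Under the formulas as we read them from the related literature (an assumption, since the abstract gives no equations), the functional Hill number and the total functional diversity can be computed from abundances and a pairwise distance matrix. The sanity checks reproduce two reductions stated in the abstract: equally distinct, equally abundant species recover the species count, and the q = 0 total functional diversity recovers the sum of all pairwise distances.

```python
import numpy as np

def functional_hill(p, d, q):
    """Functional Hill number of order q (q != 1), assumed form:
    qD(Q) = [sum_ij (d_ij/Q) (p_i p_j)^q]^(1/(2(1-q))),
    with Q = sum_ij d_ij p_i p_j (Rao's quadratic entropy)."""
    p = np.asarray(p, dtype=float)
    d = np.asarray(d, dtype=float)
    Q = p @ d @ p
    s = np.sum((d / Q) * np.outer(p, p) ** q)
    return s ** (1.0 / (2.0 * (1.0 - q)))

# Equally distinct (all pairwise distances 1) and equally abundant species.
S = 6
p = np.full(S, 1.0 / S)
d = 1.0 - np.eye(S)

D0 = functional_hill(p, d, 0)          # should recover S
D2 = functional_hill(p, d, 2)          # should recover S
Q = p @ d @ p
FD0 = Q * D0**2                        # assumed total functional diversity, q = 0
```

The product structure (total diversity = Q times the squared functional Hill number) is what gives the quadratic replication principle mentioned in the abstract.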
Liu, Leili; Li, Jie; Zhang, Lingyao; Tian, Siyu
2017-08-24
MgH2, Mg2NiH4, and Mg2CuH3 were prepared, and their structure and hydrogen storage properties were determined through X-ray photoelectron spectroscopy and thermal analyzer. The effects of MgH2, Mg2NiH4, and Mg2CuH3 on the thermal decomposition, burning rate, and explosive heat of ammonium perchlorate-based composite solid propellant were subsequently studied. Results indicated that MgH2, Mg2NiH4, and Mg2CuH3 can decrease the thermal decomposition peak temperature and increase the total released heat of decomposition. These compounds can improve the effect of thermal decomposition of the propellant. The burning rates of the propellant increased using Mg-based hydrogen storage materials as promoter. The burning rates of the propellant also increased using MgH2 instead of Al in the propellant, but its explosive heat was not enlarged. Nonetheless, the combustion heat of MgH2 was higher than that of Al. A possible mechanism was thus proposed. Copyright © 2017. Published by Elsevier B.V.
Bae, Jung-Eun; Cho, Kwang Soo
2017-09-01
The shear stress in Large Amplitude Oscillatory Shear (LAOS) is known to be decomposable into elastic and viscous stresses. According to the parity of the normal stress with respect to shear strain and shear rate, it can also be mathematically decomposed into two parts: NEE (the part with even symmetry in both strain and strain rate) and NOO (the part with odd symmetry in both strain and strain rate). However, the physical meaning of the decomposed normal stress is questionable. This paper proves the conjecture that NEE is elastic and NOO is viscous under the condition of time-strain separability. For the purpose of the proof, we developed mathematical tools for the analytical solutions of LAOS. We applied the mathematical methods to some popularly used constitutive equations such as the convected Maxwell models, the separable Kaye-Bernstein-Kearsley-Zapas (K-BKZ) model, the Giesekus model, and the Phan-Thien and Tanner model.
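The parity decomposition can be demonstrated numerically: with gamma = gamma0 sin(wt), the map t -> -t flips the strain while keeping the strain rate, and t -> T/2 - t flips the strain rate while keeping the strain, so averaging over these symmetries isolates the NEE and NOO parts. The stress below is synthetic with known parts; this illustrates the parity bookkeeping only, not the paper's constitutive-model proofs.

```python
import numpy as np

# One full cycle of gamma = gamma0 sin(w t) on a uniform grid, so that the
# symmetry maps t -> -t and t -> T/2 - t land exactly on grid points.
n = 1024
w, gamma0 = 1.0, 1.0
t = np.arange(n) * (2.0 * np.pi / w) / n
gamma = gamma0 * np.sin(w * t)
gdot = gamma0 * w * np.cos(w * t)

# Synthetic normal stress with known even-even and odd-odd parts.
N_EE_true = 2.0 * gamma**2 + 0.5 * gdot**2
N_OO_true = 3.0 * gamma * gdot
N = N_EE_true + N_OO_true

i = np.arange(n)
half = n // 2
def at(idx):
    return N[np.mod(idx, n)]

# Symmetry averaging: at(-i) realizes t -> -t (strain flip), at(half - i)
# realizes t -> T/2 - t (strain-rate flip), at(half + i) flips both.
NEE = (at(i) + at(-i) + at(half - i) + at(half + i)) / 4.0
NOO = (at(i) - at(-i) - at(half - i) + at(half + i)) / 4.0

err_EE = np.max(np.abs(NEE - N_EE_true))
err_OO = np.max(np.abs(NOO - N_OO_true))
```

The averaging recovers each planted part to machine precision, confirming that the four time-shifted copies suffice to separate the two symmetry classes.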
Qiang Guo
2017-04-01
Full Text Available In the coexistence of multiple types of interfering signals, the performance of interference suppression methods based on time and frequency domains is degraded seriously, and the technique using an antenna array requires a large enough size and huge hardware costs. To combat multi-type interferences better for GNSS receivers, this paper proposes a cascaded multi-type interferences mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that the multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in the over-complete dictionary. In order to improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) and the MP algorithm, and the DCQGMP algorithm is extended to handle multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residuary interferences (e.g., wideband Gaussian noise interferences). Several simulation results show that the proposed method can not only improve the interference mitigation degree of freedom (DoF) of the array antenna, but also effectively deal with the interference arriving from the same direction as the GNSS signal, which can be sparsely represented in the over-complete dictionary. Moreover, it does not bring serious distortions into the navigation signal.
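The first-stage sparse-decomposition idea can be illustrated with plain matching pursuit on an over-complete sinusoid dictionary; the quantum-genetic atom search of DCQGMP is replaced here by exhaustive inner products, and the signal and dictionary are made up.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 256
t = np.arange(n) / n

# Over-complete dictionary of unit-norm sine/cosine atoms.
freqs = np.arange(1, 64)
atoms = []
for f in freqs:
    for g in (np.sin, np.cos):
        a = g(2 * np.pi * f * t)
        atoms.append(a / np.linalg.norm(a))
D = np.array(atoms).T                  # n x K

# Two "interference" tones buried in noise: 10 Hz (sin) and 25 Hz (cos).
idx_a, idx_b = 2 * (10 - 1), 2 * (25 - 1) + 1
sig = 4.0 * D[:, idx_a] + 3.0 * D[:, idx_b] + 0.1 * rng.normal(size=n)

# Plain matching pursuit: greedily subtract the best-matching atom.
residual = sig.copy()
picked = []
for _ in range(2):
    corr = D.T @ residual
    k = int(np.argmax(np.abs(corr)))
    picked.append(k)
    residual = residual - corr[k] * D[:, k]
```

After subtracting the two identified atoms, the residual contains essentially only the broadband noise, which in the paper's cascade is then handed to the MPDR beamformer.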
Zhang, Xiaoxing; Huang, Rong; Gui, Yingang; Zeng, Hong
2016-01-01
Detection of decomposition products of sulfur hexafluoride (SF6) is one of the best ways to diagnose early latent insulation faults in gas-insulated equipment, and the occurrence of sudden accidents can be avoided effectively by finding early latent faults. Recently, functionalized graphene, a kind of gas sensing material, has been reported to show good application prospects in the gas sensor field. Therefore, calculations were performed to analyze the gas sensing properties of intrinsic graphene (Int-graphene) and functionalized graphene-based material, Ag-decorated graphene (Ag-graphene), for decomposition products of SF6, including SO2F2, SOF2, and SO2, based on density functional theory (DFT). We thoroughly investigated a series of parameters presenting gas-sensing properties of adsorbing process about gas molecule (SO2F2, SOF2, SO2) and double gas molecules (2SO2F2, 2SOF2, 2SO2) on Ag-graphene, including adsorption energy, net charge transfer, electronic state density, and the highest and lowest unoccupied molecular orbital. The results showed that the Ag atom significantly enhances the electrochemical reactivity of graphene, reflected in the change of conductivity during the adsorption process. SO2F2 and SO2 gas molecules on Ag-graphene presented chemisorption, and the adsorption strength was SO2F2 > SO2, while SOF2 absorption on Ag-graphene was physical adsorption. Thus, we concluded that Ag-graphene showed good selectivity and high sensitivity to SO2F2. The results can provide a helpful guide in exploring Ag-graphene material in experiments for monitoring the insulation status of SF6-insulated equipment based on detecting decomposition products of SF6.
Men, Kuo; Quan, Hong; Yang, Peipei; Cao, Ting; Li, Weihao
2010-04-01
A frequency-domain magnetic resonance spectroscopy (MRS) spectrum is obtained by applying the Fast Fourier Transform (FFT) to the time-domain signal. Usually only the portion of the whole spectrum lying in a particular frequency band is of interest. A method based on singular value decomposition (SVD) and frequency selection is presented in this article. The method quantifies the part of the spectrum lying in the frequency band of interest and reduces the interference from components lying outside the band in a computationally efficient way. Comparative experiments with the standard time-domain SVD method indicate that the method introduced in this article is accurate and time-saving in practical situations.
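The time-domain SVD quantification that this record compares against can be sketched as a Hankel-SVD frequency estimator: build a Hankel matrix from the signal, truncate its SVD, and recover the signal poles from the shift invariance of the left singular vectors. The frequency-selection step of the article is omitted, and the test signal is a synthetic damped exponential.

```python
import numpy as np

def hsvd_freqs(signal, n_poles):
    """Estimate complex-exponential frequencies (cycles/sample) from a
    time-domain signal via Hankel-matrix SVD and shift invariance."""
    N = len(signal)
    L = N // 2
    # Hankel data matrix: H[i, j] = signal[i + j].
    H = np.array([signal[i:i + L] for i in range(N - L + 1)])
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    Uk = U[:, :n_poles]
    # Shift invariance: Uk[1:] ~= Uk[:-1] @ Z; eigenvalues of Z are the poles.
    Z = np.linalg.lstsq(Uk[:-1], Uk[1:], rcond=None)[0]
    poles = np.linalg.eigvals(Z)
    return np.angle(poles) / (2 * np.pi)

# One noiseless damped exponential at 0.1 cycles/sample.
n = np.arange(128)
sig = np.exp((-0.01 + 2j * np.pi * 0.1) * n)
freqs = hsvd_freqs(sig, n_poles=1)
```

In the full method, only the poles whose frequencies fall inside the band of interest would be kept for quantification.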
Anonymous
2010-01-01
The Unit Vector Method (UVM) is a widely applied orbit-determination method. In this paper, the UVM and classical Differential Orbit Improvement (DOI) are compared, and a fusion method is given for orbit determination with different kinds of data. Based on a non-orthogonal decomposition of the position and velocity vectors, an approximation scheme is constructed to calculate the state transition matrix. This method simplifies the calculation of the approximate state transition matrix, clarifies the convergence mechanism of the UVM, and exposes the defect of the weighting strategy in the UVM. Results of orbit determination with simulated and real data show that this method has good numerical stability and a rational weight distribution.
Higham, J. E.; Brevis, W.; Keylock, C. J.
2016-12-01
The present work proposes a novel method of detection and estimation of outliers in particle image velocimetry measurements by the modification of the temporal coefficients associated with a proper orthogonal decomposition of an experimental time series. Using synthetic outliers applied to two sequences of vector fields, the method is benchmarked against state-of-the-art approaches recently proposed to remove the influence of outliers. Compared with these methods, the proposed approach offers an increase in accuracy and robustness for the detection of outliers and comparable accuracy for their estimation.
Hong-Juan Li
2013-04-01
Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto-regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed for comparing the forecasting performances of different models. The results confirm that the proposed model can simultaneously provide forecasts with good accuracy and interpretability.
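Of the three ingredients (EMD, SVR, AR), the auto-regressive component is the easiest to make concrete. A minimal least-squares AR fit and iterated forecast in numpy, assuming a decomposed component series is already in hand (the EMD and SVR stages are not shown):

```python
import numpy as np

def fit_ar(series, order):
    """Least-squares fit of an AR(order) model:
    x[t] ~= sum_i a[i] * x[t-1-i]."""
    X = np.column_stack([series[order - 1 - i:len(series) - 1 - i]
                         for i in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def ar_forecast(series, coeffs, steps):
    """Iterated one-step-ahead forecasts from fitted AR coefficients."""
    hist = list(series)
    out = []
    for _ in range(steps):
        nxt = sum(c * hist[-1 - i] for i, c in enumerate(coeffs))
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

# Noise-free AR(1) toy series x[t] = 0.8 * x[t-1]; the fit recovers 0.8.
x = 0.8 ** np.arange(50)
coeffs = fit_ar(x, order=1)
pred = ar_forecast(x, coeffs, steps=3)
```

In the hybrid scheme, a model of this kind would forecast the low-complexity IMFs, the SVR the harder ones, and the component forecasts are summed.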
Progressivity of personal income tax in Croatia: decomposition of tax base and rate effects
Ivica Urban
2006-09-01
This paper presents progressivity breakdowns for the Croatian personal income tax (henceforth PIT) in 1997 and 2004. The decompositions reveal how the elements of the system – tax schedule, allowances, deductions and credits – contribute to the achievement of progressivity over the quantiles of the pre-tax income distribution. Through the use of ‘single parameter’ Gini indices, the social decision maker’s (henceforth SDM) relatively more or less favorable inclination toward taxpayers in the lower tails of the pre-tax income distribution is accounted for. Simulations are undertaken to show how the introduction of a flat-rate system would affect progressivity.
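Progressivity measurement of this kind commonly starts from the Gini index of pre-tax income and the concentration coefficient of tax payments; their difference is the Kakwani progressivity index (zero for a proportional tax, positive for a progressive one). A small numpy sketch with made-up income figures; the paper's Croatian data and its single-parameter Gini extensions are not reproduced here.

```python
import numpy as np

def gini(x):
    """Gini index of a non-negative distribution (0 = perfect equality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

def concentration(amount, ranker):
    """Concentration coefficient of `amount` with units ranked by `ranker`."""
    order = np.argsort(ranker, kind="stable")
    a = np.asarray(amount, dtype=float)[order]
    n = len(a)
    return (2 * np.arange(1, n + 1) - n - 1) @ a / (n * a.sum())

def kakwani(pre_tax, tax):
    """Kakwani progressivity index: concentration of tax payments minus
    the Gini of pre-tax income."""
    return concentration(tax, pre_tax) - gini(pre_tax)

income = np.array([10.0, 20.0, 30.0, 40.0])   # hypothetical pre-tax incomes
flat_tax = 0.2 * income                        # proportional: zero progressivity
prog_tax = np.array([0.0, 2.0, 6.0, 12.0])     # rising average rate: progressive
```

Decomposing such an index over schedule, allowances, deductions and credits is then a matter of computing concentration coefficients for each element separately.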
Tissue decomposition from dual energy CT data for MC based dose calculation in particle therapy
Hünemohr, Nora, E-mail: n.huenemohr@dkfz.de; Greilich, Steffen [Medical Physics in Radiation Oncology, German Cancer Research Center, 69120 Heidelberg (Germany); Paganetti, Harald; Seco, Joao [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Jäkel, Oliver [Medical Physics in Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany and Department of Radiation Oncology and Radiation Therapy, University Hospital of Heidelberg, 69120 Heidelberg (Germany)
2014-06-15
Purpose: The authors describe a novel method of predicting mass density and elemental mass fractions of tissues from dual energy CT (DECT) data for Monte Carlo (MC) based dose planning. Methods: The relative electron density ϱ_e and effective atomic number Z_eff are calculated for 71 tabulated tissue compositions. For MC simulations, the mass density is derived via one linear fit in ϱ_e that covers the entire range of tissue compositions (except lung tissue). Elemental mass fractions are predicted from ϱ_e and Z_eff in combination. Since particle therapy dose planning and verification is especially sensitive to accurate material assignment, differences to the ground truth are further analyzed for mass density, I-value predictions, and stopping power ratios (SPR) for ions. Dose studies with monoenergetic protons and carbon ions in 12 tissues which showed the largest differences of single energy CT (SECT) to DECT are presented with respect to range uncertainties. The standard approach (SECT) and the new DECT approach are compared to reference Bragg peak positions. Results: Mean deviations to ground truth in mass density predictions could be reduced for soft tissue from (0.5±0.6)% (SECT) to (0.2±0.2)% with the DECT method. Maximum SPR deviations could be reduced significantly for soft tissue from 3.1% (SECT) to 0.7% (DECT) and for bone tissue from 0.8% to 0.1%. Mean I-value deviations could be reduced for soft tissue from (1.1±1.4)% (SECT) to (0.4±0.3)% with the presented method. Predictions of elemental composition were improved for every element. Mean and maximum deviations from ground truth of all elemental mass fractions could be reduced by at least a half with DECT compared to SECT (except soft tissue hydrogen and nitrogen where the reduction was slightly smaller). The carbon and oxygen mass fraction predictions profit especially from the DECT information. Dose studies showed that most of the 12 selected tissues would
Baturin, Pavlo
2015-03-01
Material decomposition in absorption-based X-ray CT imaging suffers certain inefficiencies when differentiating among soft tissue materials. To address this problem, decomposition techniques turn to spectral CT, which has gained popularity over the last few years. Although proven to be more effective, such techniques are primarily limited to the identification of contrast agents and soft and bone-like materials. In this work, we introduce a novel conditional-likelihood material-decomposition method capable of identifying any type of material object scanned by spectral CT. The method takes advantage of the statistical independence of spectral data to assign likelihood values to each of the materials on a pixel-by-pixel basis. It results in likelihood images for each material, which can be further processed, by setting certain conditions or thresholds, to yield a final material-diagnostic image. The method can also utilize phase-contrast CT (PCI) data, where measured absorption and phase-shift information can be treated as statistically independent datasets. In this method, the following cases were simulated: (i) single-scan PCI CT, (ii) spectral PCI CT, (iii) absorption-based spectral CT, and (iv) single-scan PCI CT with an added tumor mass. All cases were analyzed using a digital breast phantom, although any other objects or materials could be used instead. As a result, all materials were identified, as expected, according to their assignment in the digital phantom. Materials with similar attenuation or phase-shift values (e.g., glandular tissue, skin, and tumor masses) were differentiated especially successfully by the likelihood approach.
Irene Lock Sow Mei
2016-08-01
Hydrogen production from the direct thermo-catalytic decomposition of methane is a promising alternative for clean fuel production. However, thermal decomposition of methane can hardly be of any practical interest in industry unless highly efficient and effective catalysts, in terms of both catalytic activity and operational lifetime, are developed. In this study, the effect of palladium (Pd) as a promoter on a Ni catalyst supported on alumina has been investigated using a co-precipitation technique. The introduction of Pd promotes better catalytic activity, operational lifetime and thermal stability of the catalyst. As expected, the highest methane conversion was achieved at a reaction temperature of 800 °C, where the bimetallic catalyst (1 wt.% Ni - 1 wt.% Pd/Al2O3) gave the highest methane conversion of 70% over 15 min of time-on-stream (TOS). Interestingly, the introduction of Pd as a promoter on the Ni-based catalyst also had a positive effect on the operational lifetime and thermal stability of the catalyst, as the methane conversion improved significantly over 240 min of TOS. Copyright © 2016 BCREC GROUP. All rights reserved. Received: 21st January 2016; Revised: 6th February 2016; Accepted: 6th March 2016. How to Cite: Mei, I.L.S., Lock, S.S.M., Vo, D.V.N., Abdullah, B. (2016). Thermo-Catalytic Methane Decomposition for Hydrogen Production: Effect of Palladium Promoter on Ni-based Catalysts. Bulletin of Chemical Reaction Engineering & Catalysis, 11(2): 191-199 (doi:10.9767/bcrec.11.2.550.191-199). Permalink/DOI: http://dx.doi.org/10.9767/bcrec.11.2.550.191-199
Fu, Mao-Jing; Zhuang, Jian-Jun; Hou, Feng-Zhen; Zhan, Qing-Bo; Shao, Yi; Ning, Xin-Bao
2010-05-01
In this paper, ensemble empirical mode decomposition (EEMD) is applied to analyse accelerometer signals collected during normal human walking. First, the self-adaptive feature of EEMD is utilised to decompose the accelerometer signals, sifting out several intrinsic mode functions (IMFs) at disparate scales. Then, gait series can be extracted through peak detection from the eigen IMF that best represents gait rhythmicity. Compared with the method based on empirical mode decomposition (EMD), the EEMD-based method has the following advantages: it remarkably improves the detection rate of peak values hidden in the original accelerometer signal, even when the signal is severely contaminated by intermittent noise, and it effectively prevents the mode mixing observed with EMD. A reasonable selection of parameters for the stop-filtering criteria can also improve the calculation speed of the EEMD-based method. Meanwhile, the endpoint effect can be suppressed by using an auto-regressive moving-average model to extend a short time series in both directions. The results suggest that EEMD is a powerful tool for the extraction of gait rhythmicity, and it also provides valuable clues for extracting the eigen rhythm of other physiological signals.
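The peak-detection step that turns the selected IMF into a gait series can be sketched with a plain local-maximum picker enforcing a minimum height and a minimum spacing between peaks. The EEMD sifting itself is not reproduced here; the "IMF" below is a synthetic 1 Hz rhythm.

```python
import numpy as np

def detect_peaks(x, min_height, min_gap):
    """Simple peak picker: local maxima above min_height, at least
    min_gap samples apart (keeping the larger of two close peaks)."""
    cand = [i for i in range(1, len(x) - 1)
            if x[i] > x[i - 1] and x[i] >= x[i + 1] and x[i] >= min_height]
    peaks = []
    for i in sorted(cand, key=lambda i: -x[i]):   # strongest first
        if all(abs(i - j) >= min_gap for j in peaks):
            peaks.append(i)
    return sorted(peaks)

# Synthetic gait-rhythm IMF: a 1 Hz oscillation sampled at 100 Hz for 5 s.
fs = 100
t = np.arange(0, 5, 1 / fs)
imf = np.sin(2 * np.pi * 1.0 * t)
steps = detect_peaks(imf, min_height=0.5, min_gap=fs // 2)
```

The spacing constraint plays the role of a refractory period: two detections closer than half a stride are assumed to be the same step.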
Path planning of decentralized multi-quadrotor based on fuzzy-cell decomposition algorithm
Iswanto, Wahyunggoro, Oyas; Cahyadi, Adha Imam
2017-04-01
The paper presents a path-planning algorithm for multiple quadrotors so that they can move toward the goal quickly while avoiding obstacles in an obstacle-filled area. Path planning poses several problems, including how to reach the goal position quickly while avoiding static and dynamic obstacles. To overcome these problems, the paper combines a fuzzy logic algorithm with a fuzzy-cell decomposition algorithm. Fuzzy logic is an artificial intelligence technique applicable to robot path planning that can handle static and dynamic obstacles. Cell decomposition is a graph-theoretic algorithm used to build a map of robot paths. Using these two algorithms, a robot can reach the goal position and avoid obstacles, but it takes considerable time because the resulting path is not the shortest one. Therefore, this paper describes a modification of the algorithms by adding a potential-field algorithm that assigns weight values to the map for each quadrotor under decentralized control, so that each quadrotor can move to the goal position quickly along the shortest path. The simulations conducted have shown that multiple quadrotors can avoid various obstacles and find the shortest path by using the proposed algorithms.
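Once the workspace is decomposed into cells, finding the shortest obstacle-free route reduces to graph search over the free cells. A minimal sketch using breadth-first search on an occupancy grid; the fuzzy-logic and potential-field weighting stages of the paper are omitted, and the grid is invented for illustration.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over the free cells of an occupancy grid
    (a minimal stand-in for search on a cell-decomposed map).
    grid[r][c] == 1 marks an obstacle. Returns the list of cells from
    start to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk predecessors back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 1],
        [0, 0, 0, 0]]
path = shortest_path(grid, (0, 0), (4, 3))
```

Replacing the uniform step cost with per-cell potential-field weights (and running one search per quadrotor) is the direction the paper's modification takes.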
Functionalization of Tactile Sensation for Robot Based on Haptograph and Modal Decomposition
Yokokura, Yuki; Katsura, Seiichiro; Ohishi, Kiyoshi
In the real world, robots should be able to recognize the environment in order to be of help to humans. A video camera and a laser range finder are devices that can help robots recognize the environment; however, these devices cannot obtain tactile information from the environment. Future human-assisting robots should have the ability to recognize haptic signals, and a disturbance observer can possibly be used to provide the robot with this ability. In this study, a disturbance observer is employed in a mobile robot to functionalize the tactile sensation. This paper proposes a method that involves the use of the haptograph and modal decomposition for the haptic recognition of road environments. The haptograph presents a graphic view of the tactile information, making it possible to classify road conditions intuitively. The robot controller is designed by considering the decoupled modal coordinate system, which consists of translational and rotational modes. Modal decomposition is performed by using a quarry matrix. Once the robot is provided with the ability to recognize tactile sensations, its usefulness to humans will increase.
Research on rice acreage estimation in fragmented area based on decomposition of mixed pixels
Zhang, H.; Li, Q. Z.; Lei, F.; Du, X.; Wei, J. D.
2015-04-01
Rice acreage estimation is key to guaranteeing food security and also important for supporting government agricultural subsidy systems. In this paper, we explore a method to improve rice-acreage estimation accuracy at the county scale, developing our approach with China Environment Satellite HJ-1A/B data in Hunan Province, a fragmented area with complex rice cropping patterns. Our approach improves the estimation accuracy by combining supervised and unsupervised classification upon a decomposition-of-mixed-pixels model. The rice estimation results, validated by ground survey data, showed a close relationship (RMSE ~3.40) with survey figures; the estimation accuracy (EA) reached 83.74% at the county level according to the sub-pixel method, about 12% higher than that of the pure-pixel method. The results suggest that the decomposition of mixed pixels is of great significance for improving rice-acreage estimation accuracy and can be used in mountainous and fragmented planting areas.
Wang Jiajun
2010-05-01
Background: The inverse problem of fluorescence molecular tomography (FMT) often involves complex large-scale matrix operations, which may lead to unacceptable computational errors and complexity. In this research, a tree-structured Schur complement decomposition strategy is proposed to accelerate the reconstruction process and reduce the computational complexity. Additionally, an adaptive regularization scheme is developed to mitigate the ill-posedness of the inverse problem. Methods: The global system is decomposed level by level with the Schur complement system along two paths in the tree structure. The resultant subsystems are solved in combination with the biconjugate gradient method. The mesh for the inverse problem is generated incorporating the prior information. During the reconstruction, the regularization parameters adapt not only to the spatial variations but also to the variations of the objective function, to tackle the ill-posed nature of the inverse problem. Results: Simulation results demonstrate that the tree-structured Schur complement decomposition strategy clearly outperforms previous methods, such as the conventional conjugate gradient (CG) and Schur CG methods, in both reconstruction accuracy and speed. Compared with the Tikhonov regularization method, the adaptive regularization scheme can significantly mitigate the ill-posedness of the inverse problem. Conclusions: The methods proposed in this paper can significantly improve the reconstructed image quality of FMT and accelerate the reconstruction process.
Adaptive multi-step Full Waveform Inversion based on Waveform Mode Decomposition
Hu, Yong; Han, Liguo; Xu, Zhuo; Zhang, Fengjiao; Zeng, Jingwen
2017-04-01
Full Waveform Inversion (FWI) can be used to build high-resolution velocity models, but many challenges remain in seismic field-data processing. The most difficult problem is how to recover the long-wavelength components of subsurface velocity models when the seismic data lack low-frequency information and long offsets. To solve this problem, we propose to use the Waveform Mode Decomposition (WMD) method to reconstruct low-frequency information for FWI so as to obtain a smooth model, thereby reducing the initial-model dependence of FWI. In this paper, we use the adjoint-state method to calculate the gradient for Waveform Mode Decomposition Full Waveform Inversion (WMDFWI). Through illustrative numerical examples, we show that the low-frequency information reconstructed by the WMD method is reliable. WMDFWI, in combination with the adaptive multi-step inversion strategy, obtains more faithful and accurate final inversion results. Numerical examples show that even if the initial velocity model is far from the true model and lacks low-frequency information, we can still obtain good inversion results with the WMD method. Numerical anti-noise tests show that the adaptive multi-step inversion strategy for WMDFWI has a strong ability to resist Gaussian noise. The WMD method is promising for land seismic FWI, because it can reconstruct the low-frequency information, lower the dominant frequency in the adjoint source, and resist noise strongly.
Spectral Tensor-Train Decomposition
Bigoni, Daniele; Engsig-Karup, Allan Peter; Marzouk, Youssef M.
2016-01-01
The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT decomposition and characterize the regularity of the univariate functions (i.e., the “cores”) comprising the functional TT decomposition. This result motivates an approximation scheme employing polynomial approximations of the cores. For functions with appropriate regularity, the resulting spectral tensor-train decomposition combines the favorable dimension-scaling of the TT decomposition with the spectral convergence rate of polynomial approximations, yielding efficient and accurate surrogates for high-dimensional functions. To construct these decompositions, we use the sampling algorithm TT-DMRG-cross to obtain the TT decomposition of tensors resulting from suitable discretizations of the target function.
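The discrete TT decomposition that the spectral scheme extends can be computed by the classic TT-SVD sweep: repeatedly matricize, truncate an SVD, fold the left factor into a 3-way core, and carry the rest forward. A minimal numpy sketch on a small separable tensor; this is not the TT-DMRG-cross sampling algorithm the abstract mentions.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """TT-SVD: sequential truncated SVDs turn a d-way tensor into a train
    of 3-way cores of shape (r_prev, n_k, r_next)."""
    dims = tensor.shape
    cores = []
    r_prev = 1
    mat = tensor.reshape(dims[0], -1)
    for n in dims[:-1]:
        mat = mat.reshape(r_prev * n, -1)
        U, s, Vh = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, n, r))
        mat = s[:r, None] * Vh[:r]           # remainder carried to next core
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the train of cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([out.ndim - 1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# A separable tensor f(i,j,k) = a_i * b_j * c_k has exact TT ranks (1, 1).
a, b, c = np.arange(1, 4.0), np.arange(1, 5.0), np.arange(1, 6.0)
T = np.einsum('i,j,k->ijk', a, b, c)
cores = tt_svd(T, max_rank=2)
T_hat = tt_reconstruct(cores)
```

The spectral variant replaces each discrete core by a polynomial approximation of the corresponding univariate core function.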
Feng Qi ZHAO; Hong Xu GAO; Rong Zu HU; Gui E LU; Jin Yong JIANG
2006-01-01
A method of estimating the safe storage life (τ), self-accelerating decomposition temperature (TSADT) and critical temperature of thermal explosion (Tb) of a double-base propellant from its isothermal and non-isothermal decomposition behaviour is presented. For a double-base propellant composed of 56±1 wt% nitrocellulose (NC), 27±0.5 wt% nitroglycerine (NG), 8.15±0.15 wt% dinitrotoluene (DNT), 2.5±0.1 wt% methyl centralite, 5.0±0.15 wt% catalyst and 1.0±0.1 wt% other components, the values obtained were τ = 49.4 years at 40℃, TSADT = 151.35℃ and Tb = 163.01℃.
ZhiYong Lv
2016-09-01
Very high resolution (VHR) remote sensing images are widely used for land cover classification. However, to the best of our knowledge, few approaches have been shown to improve classification accuracy through image scene decomposition. In this paper, a simple yet powerful observational scene scale decomposition (OSSD)-based system is proposed for the classification of VHR images. Different from traditional methods, the OSSD-based system aims to improve classification performance by decomposing the complexity of an image's content. First, an image scene is divided into sub-image blocks through segmentation to decompose the image content. Subsequently, each sub-image block is classified separately, or each block is first processed through an image filter or a spectral-spatial feature extraction method, and each processed segment is then taken as the feature input of a classifier. Finally, the classified sub-maps are fused together for accuracy evaluation. The effectiveness of our proposed approach was investigated through experiments performed on different images with different supervised classifiers, namely, support vector machine, k-nearest neighbor, naive Bayes classifier, and maximum likelihood classifier. Compared with the accuracy achieved without OSSD processing, the accuracy of each classifier improved significantly, and our proposed approach shows outstanding performance in terms of classification accuracy.
TIAN XiangJun; XIE ZhengHui
2009-01-01
The proper orthogonal decomposition (POD) method is used to construct a set of basis functions for spanning the ensemble of data in a certain least-squares optimal sense. Compared with the singular value decomposition (SVD), the POD basis functions can capture more energy in the forecast ensemble space and can represent its spatial structure and temporal evolution more effectively. After the analysis variables are expressed by a truncated expansion of the POD basis vectors in the ensemble space, the control variables appear explicitly in the cost function, so that the adjoint model, which is used to derive the gradient of the cost function with respect to the control variables, is no longer needed. The application of this new technique significantly simplifies the data assimilation process. Several assimilation experiments show that this POD-based explicit four-dimensional variational data assimilation method performs much better than the usual ensemble Kalman filter method in both enhancing the assimilation precision and reducing the computational cost. It is also better than the SVD-based explicit four-dimensional assimilation method, especially when the forecast model is not perfect and the forecast error comes from both the noise of the initial field and the uncertainty of the forecast model.
Li, Xibing; Shang, Xueyi; Morales-Esteban, A.; Wang, Zewei
2017-03-01
Seismic P-phase arrival picking of weak events is a difficult problem in seismology. The algorithm proposed in this research is based on Empirical Mode Decomposition (EMD) and on the Akaike Information Criterion (AIC) picker, and has been called the EMD-AIC picker. EMD is a self-adaptive signal decomposition method that not only improves the Signal-to-Noise Ratio (SNR) but also retains P-phase arrival information. P-phase arrivals are then picked by applying the AIC picker to the selected main Intrinsic Mode Functions (IMFs). The performance of the EMD-AIC picker has been evaluated on 1938 micro-seismic signals from the Yongshaba mine (China). The P phases identified by this algorithm have been compared with manual picks. The evaluation results confirm that the EMD-AIC picks are highly accurate for the majority of the micro-seismograms, and that the picks are largely independent of the kind of noise. Finally, the results of this algorithm have been compared to Discrete Wavelet Transform (DWT)-based AIC picks. This comparison demonstrates that the EMD-AIC picking method achieves better picking accuracy than the DWT-AIC method, showing the method's reliability and potential.
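The AIC picker applied to each IMF can be sketched directly: the two-segment AIC statistic reaches its minimum where the variance of the trace changes, i.e. at the arrival. A numpy sketch on a synthetic trace; the EMD preprocessing stage and IMF selection are omitted.

```python
import numpy as np

def aic_pick(x):
    """Variance-based AIC picker:
    AIC(k) = k*log(var(x[:k])) + (N-k)*log(var(x[k:]));
    the global minimum marks the noise-to-signal transition."""
    N = len(x)
    aic = np.full(N, np.inf)
    for k in range(2, N - 2):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (N - k) * np.log(v2)
    return int(np.argmin(aic))

# Synthetic trace: low-amplitude noise, then a stronger arrival at sample 300.
rng = np.random.default_rng(2)
trace = 0.05 * rng.standard_normal(600)
n = np.arange(300)
trace[300:] += np.sin(2 * np.pi * 0.05 * n) * np.exp(-n / 150)
pick = aic_pick(trace)
```

On real micro-seismograms the statistic is usually evaluated in a window around a coarse trigger rather than over the whole trace.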
Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi
2014-01-01
The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by ground topography variations, whereas most previous studies modeling the microwave backscattering signatures of forest areas have been carried out over relatively flat terrain. Therefore, a new algorithm for forest height estimation over mountain forest areas using a general model-based decomposition (GMBD) of the PolInSAR image is proposed. This algorithm enables the retrieval of not only the forest parameters, but also the magnitude associated with each scattering mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, which remained unachieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan area, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.
Golestan, Saeed; Ebrahimzadeh, Esmaeil; Guerrero, Josep M.
2017-01-01
Without any doubt, phase-locked loops (PLLs) are the most popular and widely used technique for synchronization purposes in the power and energy areas. They are also popular for the selective extraction of fundamental and harmonic/disturbance components of the grid voltage and current. Like most control algorithms, designing PLLs involves a tradeoff between accuracy and dynamic response, and improving this tradeoff is what recent research efforts have focused on. These efforts are often based on designing advanced filters and using them as a preprocessing tool before the PLL input. A filtering technique that has received little attention for this purpose is the least-error squares (LES)-based filter. In this paper, an adaptive LES filter-based PLL, briefly called the LES-PLL, for synchronization and signal decomposition purposes is presented. The proposed LES filter...
Shen, Xuehua; Xiong, Qingyu; Shi, Xin; Wang, Kai; Liang, Shan; Gao, Min
2015-09-01
Temperature distribution reconstruction is of critical importance for circular areas, and an ultrasonic technique is investigated to meet this demand in this paper. Considering the particularity of a circular area, an algorithm based on Markov radial basis approximation and singular value decomposition is proposed, with a properly designed ultrasonic transducer layout and division of the measured area. The reconstruction performance is validated via numerical experiments using different temperature distribution models and is compared with an algorithm based on the least-squares method. To study the anti-interference capability, various noises are added to the theoretical time-of-flight values. Experimental results indicate that the proposed algorithm reconstructs temperature distributions with higher accuracy and stronger anti-interference, and it avoids the problem of the least-squares-based algorithm, whose reconstructions lose much temperature information near the edge of the measured area.
张韧; 周林; 董兆俊; 李训强
2002-01-01
Methods and approaches are discussed that identify and filter out affecting factors (noise) above primary signals, based on the Adaptive-Network-Based Fuzzy Inference System. The influences of the zonal winds in the equatorial eastern and middle/western Pacific on the SSTA in the equatorial region, and their contribution to the latter, are diagnosed and verified against observations of a number of significant El Nino and La Nina episodes, and new viewpoints are proposed. The method of wavelet decomposition and reconstruction is used to build a predictive model based on independent frequency domains, which shows some advantages in composite prediction and prediction validity. The methods presented above are non-linear, error-tolerant and auto-adaptive/learning, in addition to offering rapid and easy access and illustrative, quantitative presentation, and their analyzed results agree generally with the facts. They are useful in diagnosing and predicting the El Nino and La Nina problems that are only roughly described in dynamics.
Bearing fault detection based on hybrid ensemble detector and empirical mode decomposition
Georgoulas, George; Loutas, Theodore; Stylios, Chrysostomos D.; Kostopoulos, Vassilis
2013-12-01
Aiming at more efficient fault diagnosis, this research work presents an integrated anomaly detection approach for seeded bearing faults. Vibration signals from normal bearings and from bearings with three different fault locations, as well as different fault sizes and loading conditions, are examined. The Empirical Mode Decomposition and the Hilbert–Huang transform are employed for the extraction of a compact feature set. Then, a hybrid ensemble detector is trained using data coming only from the normal bearings, and it is successfully applied for the detection of any deviation from the normal condition. The results prove the potential of the proposed scheme as the first stage of an alarm signalling system for the detection of bearing faults irrespective of their loading condition.
A non-orthogonal SVD-based decomposition for phase invariant error-related potential estimation.
Phlypo, Ronald; Jrad, Nisrine; Rousseau, Sandra; Congedo, Marco
2011-01-01
The estimation of the Error Related Potential from a set of trials is a challenging problem. Indeed, the Error Related Potential is of low amplitude compared to the ongoing electroencephalographic activity. In addition, simple summing over the different trials is prone to errors, since the waveform does not appear at an exact latency with respect to the trigger. In this work, we propose a method to cope with the discrepancy of these latencies of the Error Related Potential waveform and offer a framework in which the estimation of the Error Related Potential waveform reduces to a simple Singular Value Decomposition of an analytic waveform representation of the observed signal. The proposed approach is promising, since we are able to explain a higher portion of the variance of the observed signal with fewer components in the expansion.
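A minimal sketch of the core linear-algebra step described above: once trials are stacked as rows of a matrix, the dominant singular component explains the largest share of variance, and for a latency-aligned, noise-free template it explains all of it. The power-iteration routine and the toy "trials" below are illustrative assumptions, not the authors' analytic-waveform construction:

```python
import math, random

def top_singular(A, iters=200, seed=0):
    """Dominant singular triple (s, u, v) of matrix A via power iteration on A^T A."""
    rnd = random.Random(seed)
    n = len(A[0])
    v = [rnd.random() for _ in range(n)]
    for _ in range(iters):
        # w = A v, then v <- A^T w, renormalized
        w = [sum(a[j] * v[j] for j in range(n)) for a in A]
        v = [sum(A[i][j] * w[i] for i in range(len(A))) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    w = [sum(a[j] * v[j] for j in range(n)) for a in A]
    s = math.sqrt(sum(x * x for x in w))
    u = [x / s for x in w]
    return s, u, v

# Toy "trials": the same template waveform scaled per trial, with no noise,
# so a rank-1 model should capture the signal exactly.
template = [0.0, 1.0, 2.0, 1.0, 0.0]
trials = [[c * t for t in template] for c in (0.8, 1.0, 1.2)]
s, u, v = top_singular(trials)
energy = sum(x * x for row in trials for x in row)
print(abs(s * s / energy - 1.0) < 1e-9)  # rank-1 model captures all of the variance -> True
```

With real EEG the ongoing activity adds full-rank noise, so the leading component explains only part of the variance; the paper's contribution is the latency-tolerant analytic representation that makes the leading components explain more of it.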
Huang, Yong; Wang, Kehong; Zhou, Zhilan; Zhou, Xiaoxiao; Fang, Jimi
2017-03-01
The arc of gas metal arc welding (GMAW) contains abundant information about its stability and droplet transition, which can be effectively characterized by extracting the arc electrical signals. In this study, ensemble empirical mode decomposition (EEMD) was used to evaluate the stability of electrical current signals. The welding electrical signals were first decomposed by EEMD, and then transformed to a Hilbert–Huang spectrum and a marginal spectrum. The marginal spectrum is an approximate distribution of amplitude with frequency of signals, and can be described by a marginal index. Analysis of various welding process parameters showed that the marginal index of current signals increased when the welding process was more stable, and vice versa. Thus EEMD combined with the marginal index can effectively uncover the stability and droplet transition of GMAW.
Zhao, Tao; Hwang, Feng-Nan; Cai, Xiao-Chuan
2016-07-01
We consider a quintic polynomial eigenvalue problem arising from the finite volume discretization of a quantum dot simulation problem. The problem is solved by the Jacobi-Davidson (JD) algorithm. Our focus is on how to achieve the quadratic convergence of JD in a way that is not only efficient but also scalable when the number of processor cores is large. For this purpose, we develop a projected two-level Schwarz preconditioned JD algorithm that exploits multilevel domain decomposition techniques. The pyramidal quantum dot calculation is carefully studied to illustrate the efficiency of the proposed method. Numerical experiments confirm that the proposed method has a good scalability for problems with hundreds of millions of unknowns on a parallel computer with more than 10,000 processor cores.
A stabilized explicit Lagrange multiplier based domain decomposition method for parabolic problems
Zheng, Zheming; Simeon, Bernd; Petzold, Linda
2008-05-01
A fully explicit, stabilized domain decomposition method for solving moderately stiff parabolic partial differential equations (PDEs) is presented. Writing the semi-discretized equations as a differential-algebraic equation (DAE) system where the interface continuity constraints between subdomains are enforced by Lagrange multipliers, the method uses the Runge-Kutta-Chebyshev projection scheme to integrate the DAE explicitly and to enforce the constraints by a projection. With mass lumping techniques and node-to-node matching grids, the method is fully explicit without solving any linear system. A stability analysis is presented to show the extended stability property of the method. The method is straightforward to implement and to parallelize. Numerical results demonstrate that it has excellent performance.
Tamellini, L.
2014-01-01
In this paper we consider a proper generalized decomposition method to solve the steady incompressible Navier-Stokes equations with random Reynolds number and forcing term. The aim of such a technique is to compute a low-cost reduced basis approximation of the full stochastic Galerkin solution of the problem at hand. A particular algorithm, inspired by the Arnoldi method for solving eigenproblems, is proposed for an efficient greedy construction of a deterministic reduced basis approximation. This algorithm decouples the computation of the deterministic and stochastic components of the solution, thus allowing reuse of preexisting deterministic Navier-Stokes solvers. It has the remarkable property of only requiring the solution of m uncoupled deterministic problems for the construction of an m-dimensional reduced basis rather than M coupled problems of the full stochastic Galerkin approximation space, with m ≪ M (up to one order of magnitude for the problem at hand in this work). © 2014 Society for Industrial and Applied Mathematics.
Vision-Based Mobile Robot Navigation Using Image Processing and Cell Decomposition
Shojaeipour, Shahed; Mohamed Haris, Sallehuddin; Khairir, Muhammad Ihsan
In this paper, we present a method to navigate a mobile robot using a webcam. This method determines the shortest path for the robot to traverse to its target location, while avoiding obstacles along the way. The environment is first captured as an image using a webcam. Image processing methods are then performed to identify the existence of obstacles within the environment. Using the Cell Decomposition method, locations with obstacles are identified and the corresponding cells are eliminated. From the remaining cells, the shortest path to the goal is identified. The program is written in MATLAB with the Image Processing toolbox. The proposed method does not make use of any sensor other than the webcam.
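The cell-decomposition step described above reduces to a shortest-path search over the free cells once the obstacle cells are eliminated. A minimal sketch (the original work uses MATLAB; the toy grid, breadth-first search and 4-connectivity here are illustrative assumptions):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS over the free cells of an occupancy grid (1 = obstacle cell, eliminated)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # doubles as the visited set
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:          # reconstruct the path by walking predecessors
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None                   # goal unreachable

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [1, 0, 0, 0]]
path = shortest_path(grid, (0, 0), (3, 3))
print(len(path))  # -> 7 cells (6 moves)
```

Because BFS explores cells in order of distance from the start, the first time the goal is dequeued the reconstructed path is guaranteed shortest in number of cell transitions.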
Entropy-Based Method of Choosing the Decomposition Level in Wavelet Threshold De-noising
Yan-Fang Sang
2010-06-01
In this paper, the energy distributions of various noises following normal, log-normal and Pearson-III distributions are first described quantitatively using the wavelet energy entropy (WEE), and the results are compared and discussed. Then, on the basis of these analytic results, a method for choosing the decomposition level (DL) in wavelet threshold de-noising (WTD) is put forward. Finally, the performance of the proposed method is verified by analysis of both synthetic and observed series. Analytic results indicate that the proposed method is easy to operate and suitable for various signals. Moreover, contrary to traditional white noise testing, which depends on autocorrelations, the proposed method uses energy distributions to distinguish real signals from noise in noisy series; therefore the chosen DL is reliable, and the WTD results for time series can be improved.
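A toy version of the quantity at the heart of the abstract above: the energy per detail sub-band of a multi-level wavelet decomposition, summarized by a Shannon entropy. The Haar wavelet and the white-noise input below are assumptions for illustration (the paper treats normal, log-normal and Pearson-III noises and uses the entropy to pick the decomposition level):

```python
import math, random

def haar_level(x):
    """One Haar analysis step: (approximation, detail) coefficients."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def wavelet_energy_entropy(x, levels):
    """Shannon entropy of the relative energies of the detail sub-bands."""
    energies = []
    for _ in range(levels):
        x, d = haar_level(x)              # descend one level; keep detail energy
        energies.append(sum(c * c for c in d))
    total = sum(energies)
    probs = [e / total for e in energies]
    return -sum(p * math.log(p) for p in probs if p > 0)

rnd = random.Random(42)
noise = [rnd.gauss(0, 1) for _ in range(1024)]
print(round(wavelet_energy_entropy(noise, 4), 3))
```

For white noise the energy spreads across sub-bands roughly in proportion to their coefficient counts, giving a characteristically high entropy; a structured signal concentrates energy in few sub-bands and yields a lower one, which is what makes the entropy usable as a level-selection criterion.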
GUO Qintao; ZHANG Lingmi; TAO Zheng
2008-01-01
Thin-walled components are utilized to absorb the impact energy of a structure. However, the dynamic behavior of such thin-walled structures is highly non-linear, with material, geometry and boundary non-linearities. A model updating and validation procedure is proposed to build an accurate finite element model of a frame structure with a non-linear thin-walled component for dynamic analysis. Design of experiments (DOE) and principal component decomposition (PCD) approaches are applied to extract dynamic features from the non-linear impact response for correlation of the impact test results with the FE model of the non-linear structure. A strain-rate-dependent non-linear model updating method is then developed to build an accurate FE model of the structure. Computer simulation and a real frame structure with a highly non-linear thin-walled component are employed to demonstrate the feasibility and effectiveness of the proposed approach.
Duality-based domain decomposition with natural coarse-space for variational inequalities
Dostál, Zdenek; Neto, Francisco A. M. Gomes; Santos, Sandra A.
2000-12-01
An efficient non-overlapping domain decomposition algorithm of Neumann-Neumann type is presented for solving variational inequalities arising from elliptic boundary value problems with inequality boundary conditions. The discretized problem is first turned by the duality theory of convex programming into a quadratic programming problem with bound and equality constraints, and the latter is further modified by means of orthogonal projectors onto the natural coarse space introduced recently by Farhat and Roux. The resulting problem is then solved by an augmented Lagrangian type algorithm with an outer loop for the Lagrange multipliers of the equality constraints and an inner loop for the solution of the bound-constrained quadratic programming problems. The projectors are shown to guarantee an optimal rate of convergence of the iterative solution of the auxiliary linear problems. The reported theoretical results and numerical experiments indicate high numerical and parallel scalability of the algorithm.
Xian, Lu; He, Kaijian; Lai, Kin Keung
2016-07-01
In recent years, the increasing volatility of the gold price has received growing attention from academia and industry alike. Due to the complexity and significant fluctuations observed in the gold market, however, most current approaches have failed to produce robust and consistent modeling and forecasting results. Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA) are novel data analysis methods that can deal with nonlinear and non-stationary time series. This study introduces a new methodology that combines the two methods and applies it to gold price analysis. It involves three steps: first, the original gold price series is decomposed into several Intrinsic Mode Functions (IMFs) by EEMD. Second, the IMFs are further processed, with unimportant ones re-grouped, and a new set of data called Virtual Intrinsic Mode Functions (VIMFs) is reconstructed. Finally, ICA is used to decompose the VIMFs into statistically Independent Components (ICs). The decomposition results reveal that the gold price series can be represented by a linear combination of the ICs. Furthermore, the economic meanings of the ICs are analyzed and discussed in detail, according to their trends and transformation coefficients. The analyses not only explain the inner driving factors and their impacts but also examine in depth how these factors affect the gold price. Regression analysis has also been conducted to verify the findings. Results from the empirical studies in the gold markets show that EEMD-ICA serves as an effective technique for gold price analysis from a new perspective.
Surface EMG decomposition based on K-means clustering and convolution kernel compensation.
Ning, Yong; Zhu, Xiangjun; Zhu, Shanan; Zhang, Yingchun
2015-03-01
A new approach has been developed by combining the K-means clustering (KMC) method and a modified convolution kernel compensation (CKC) method for multichannel surface electromyogram (EMG) decomposition. The KMC method was first utilized to cluster vectors of observations at different time instants and then estimate the initial innervation pulse train (IPT). The CKC method, modified with a novel multistep iterative process, was conducted to update the estimated IPT. The performance of the proposed K-means clustering-Modified CKC (KmCKC) approach was evaluated by reconstructing IPTs from both simulated and experimental surface EMG signals. The KmCKC approach successfully reconstructed all 10 IPTs from the simulated surface EMG signals with true positive rates (TPR) of over 90% at a low signal-to-noise ratio (SNR) of -10 dB. More than 10 motor units were also successfully extracted from the 64-channel experimental surface EMG signals of the first dorsal interosseous (FDI) muscles when a contraction force was held at 8 N by using the KmCKC approach. A "two-source" test was further conducted with 64-channel surface EMG signals. The high percentage of common MUs and common pulses (over 92% at all force levels) between the IPTs reconstructed from the two independent groups of surface EMG signals demonstrates the reliability and capability of the proposed KmCKC approach in multichannel surface EMG decomposition. Results from both simulated and experimental data are consistent and confirm that the proposed KmCKC approach can successfully reconstruct IPTs with high accuracy at different levels of contraction.
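The first stage described above is ordinary K-means over observation vectors. A minimal, self-contained sketch of that clustering step (the deterministic seeding from the first k points and the toy 2-D vectors are assumptions; the paper clusters high-dimensional EMG observation vectors and then refines the result with CKC):

```python
def kmeans(points, k, iters=20):
    """Plain K-means: assign each vector to its nearest centroid, then update
    centroids as cluster means. Centroids are seeded from the first k points
    for determinism in this sketch."""
    centroids = [tuple(p) for p in points[:k]]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        centroids = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

# Toy "observation vectors" from two separated sources; the first two points
# seed one centroid in each cluster
pts = [(0.0, 0.1), (5.0, 5.1), (0.1, -0.1), (-0.1, 0.0),
       (5.1, 4.9), (4.9, 5.0)]
cents, clusters = kmeans(pts, 2)
print(sorted(len(cl) for cl in clusters))  # -> [3, 3]
```

In the paper's pipeline, each resulting cluster of observation vectors corresponds to time instants dominated by one motor unit, from which the initial innervation pulse train is estimated before the CKC refinement.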
Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu
2016-06-01
To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
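The "decomposition and ensemble" principle above can be caricatured in a few lines: split the series into components, predict each component with its own model, and sum the predictions. Here a trailing moving average stands in for CEEMD and two trivial predictors stand in for the GWO-optimized SVRs, purely to illustrate the shape of the pipeline, not the paper's method:

```python
def trailing_mean(x, w):
    """Trailing moving average; a stand-in for one decomposition component (the trend)."""
    return [sum(x[max(0, i - w + 1):i + 1]) / len(x[max(0, i - w + 1):i + 1])
            for i in range(len(x))]

def forecast_next(x, w=5):
    """'Decompose, predict each part, ensemble': trend + residual -> next value."""
    trend = trailing_mean(x, w)
    resid = [xi - ti for xi, ti in zip(x, trend)]
    trend_next = 2 * trend[-1] - trend[-2]   # linear extrapolation of the trend
    resid_next = resid[-1]                   # naive persistence for the residual
    return trend_next + resid_next

series = [float(t) for t in range(1, 21)]    # a perfectly linear toy series
print(round(forecast_next(series), 6))       # -> 21.0
```

The payoff of the real method is the same as in this caricature: each component is simpler than the raw series, so a well-tuned per-component predictor (SVR optimized by GWO in the paper) beats a single model fitted to the noisy original.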
Kako, Tetsuya, E-mail: kako.tetsuya@nims.go.jp [Environmental Remediation Materials Unit, National Institute for Materials Science (NIMS), 1-1 Namiki, Tsukuba, Ibaraki 305-0044 (Japan); Meng, Xianguang; Ye, Jinhua [Environmental Remediation Materials Unit, National Institute for Materials Science (NIMS), 1-1 Namiki, Tsukuba, Ibaraki 305-0044 (Japan); Graduate School of Chemical Science and Engineering, Hokkaido University, Sapporo 060-0814 (Japan)
2015-10-01
A composite of NaBiO₃-loaded WO₃ with a mixing ratio of 10:100 was prepared for photocatalytic decomposition of harmful organic contaminants. The composite properties were measured using X-ray diffraction, ultraviolet-visible spectrophotometry (UV-Vis), and valence-band X-ray photoelectron spectroscopy (VB-XPS). The results showed that the potentials of the top of the valence band and the bottom of the conduction band of NaBiO₃ can be estimated as +2.5 V and -0.1 to 0 V, respectively. Furthermore, WO₃, NaBiO₃, and the composite showed 2-propanol (IPA) oxidation activity under visible-light irradiation. The composite exhibited much higher photocatalytic activity for IPA decomposition into CO₂ than individual WO₃ or NaBiO₃, because of promoted charge separation and the base effect of NaBiO₃.
Mueller matrix differential decomposition.
Ortega-Quijano, Noé; Arce-Diego, José Luis
2011-05-15
We present a Mueller matrix decomposition based on the differential formulation of the Mueller calculus. The differential Mueller matrix is obtained from the macroscopic matrix through an eigenanalysis. It is subsequently resolved into the complete set of 16 differential matrices that correspond to the basic types of optical behavior for depolarizing anisotropic media. The method is successfully applied to the polarimetric analysis of several samples. The differential parameters enable one to perform an exhaustive characterization of anisotropy and depolarization. This decomposition is particularly appropriate for studying media in which several polarization effects take place simultaneously. © 2011 Optical Society of America
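The differential Mueller matrix is the matrix logarithm of the macroscopic matrix (per unit path length); the abstract obtains it via an eigenanalysis. As a rough stand-in, the sketch below uses a truncated Mercator series for log(M), which is only valid for matrices near the identity (weak polarization effects); the diagonal "Mueller-like" example is an assumption for illustration:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_log(M, terms=60):
    """Truncated series log(M) = sum_k (-1)^(k+1) (M - I)^k / k, valid near identity."""
    n = len(M)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    X = [[M[i][j] - I[i][j] for j in range(n)] for i in range(n)]
    P = [row[:] for row in X]             # current power (M - I)^k
    L = [[0.0] * n for _ in range(n)]
    for k in range(1, terms + 1):
        sign = 1.0 if k % 2 == 1 else -1.0
        for i in range(n):
            for j in range(n):
                L[i][j] += sign * P[i][j] / k
        P = mat_mul(P, X)
    return L

# A weak homogeneous element: diagonal, near-identity 4x4 matrix M = exp(diag(a))
M = [[math.exp(a) if i == j else 0.0 for j in range(4)]
     for i, a in enumerate([0.0, 0.1, -0.1, 0.05])]
L = mat_log(M)
print(round(L[1][1], 6))  # log recovers the exponent -> 0.1
```

For media with simultaneous effects of non-negligible strength, the eigenanalysis route of the paper (or a scaling-and-squaring logarithm) is needed; the series here diverges once the eigenvalues of M - I leave the unit disk.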
Symmetric tensor decomposition
Brachat, Jerome; Mourrain, Bernard; Tsigaridas, Elias
2009-01-01
We present an algorithm for decomposing a symmetric tensor, of dimension n and order d as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables of total degree d as a sum of powers of linear forms (Waring's problem), incidence properties on secant varieties of the Veronese Variety and the representation of linear forms as a linear combination of evaluations at distinct points. Then we reformulate Sylvester's approach from the dual point of view. Exploiting this duality, we propose necessary and sufficient conditions for the existence of such a decomposition of a given rank, using the properties of Hankel (and quasi-Hankel) matrices, derived from multivariate polynomials and normal form computations. This leads to the resolution of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on th...
Residual Stress Determination from a Laser-Based Curvature Measurement
Swank, William David; Gavalya, Rick Allen; Wright, Julie Knibloe; Wright, Richard Neil
2000-05-01
Thermally sprayed coating characteristics and mechanical properties are in part a result of the residual stress developed during the fabrication process. The total stress state in a coating/substrate is comprised of the quench stress and the coefficient of thermal expansion (CTE) mismatch stress. The quench stress is developed when molten particles impact the substrate and rapidly cool and solidify. The CTE mismatch stress results from a large difference in the thermal expansion coefficients of the coating and substrate material. It comes into effect when the substrate/coating combination cools from the equilibrated deposit temperature to room temperature. This paper describes a laser-based technique for measuring the curvature of a coated substrate and the analysis required to determine residual stress from curvature measurements. Quench stresses were determined by heating the specimen back to the deposit temperature thus removing the CTE mismatch stress. By subtracting the quench stress from the total residual stress at room temperature, the CTE mismatch stress was estimated. Residual stress measurements for thick (>1mm) spinel coatings with a Ni-Al bond coat on 304 stainless steel substrates were made. It was determined that a significant portion of the residual stress results from the quenching stress of the bond coat and that the spinel coating produces a larger CTE mismatch stress than quench stress.
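The abstract does not state the curvature-to-stress conversion; for a thin coating on a much thicker substrate, a commonly used relation is the Stoney formula, given here as background (an assumption, not necessarily the exact analysis the authors used):

```latex
\sigma_c \;=\; \frac{E_s\, h_s^{2}}{6\,(1-\nu_s)\, h_c}\left(\frac{1}{R}-\frac{1}{R_0}\right)
```

where \(E_s\), \(\nu_s\) and \(h_s\) are the substrate's Young's modulus, Poisson's ratio and thickness, \(h_c\) is the coating thickness, and \(R_0\) and \(R\) are the radii of curvature before and after deposition. Measuring the curvature at room temperature and again at the deposit temperature, as described in the abstract, separates the CTE-mismatch contribution from the quench stress.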
Zhongxiao Jia; Yuquan Sun
2007-01-01
Based on the generalized minimal residual (GMRES) principle, Hu and Reichel proposed a minimal residual algorithm for the Sylvester equation. The algorithm requires the solution of a structured least squares problem. They form the normal equations of the least squares problem and then solve them by a direct solver, so the approach is susceptible to instability. In this paper, by exploiting the special structure of the least squares problem and working on the problem directly, a numerically stable QR-decomposition-based algorithm is presented for the problem. The new algorithm is more stable than the normal equations algorithm of Hu and Reichel. Numerical experiments are reported that confirm the superior stability of the new algorithm.
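The point of the abstract above is to solve the least squares problem directly by a QR decomposition instead of through the normal equations, which square the condition number. A generic illustration of the QR route (modified Gram-Schmidt on a small dense problem; the Sylvester-specific structure the paper exploits is not reproduced here):

```python
import math

def qr_gram_schmidt(A):
    """Thin QR via modified Gram-Schmidt: A = Q R with orthonormal columns in Q."""
    m, n = len(A), len(A[0])
    Q = [[A[i][j] for j in range(n)] for i in range(m)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j):
            R[i][j] = sum(Q[r][i] * Q[r][j] for r in range(m))
            for r in range(m):
                Q[r][j] -= R[i][j] * Q[r][i]
        R[j][j] = math.sqrt(sum(Q[r][j] ** 2 for r in range(m)))
        for r in range(m):
            Q[r][j] /= R[j][j]
    return Q, R

def lstsq_qr(A, b):
    """Least squares min ||Ax - b|| via QR: solve R x = Q^T b, no normal equations."""
    Q, R = qr_gram_schmidt(A)
    n = len(R)
    qtb = [sum(Q[r][i] * b[r] for r in range(len(b))) for i in range(n)]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):       # back substitution on the triangular R
        x[i] = (qtb[i] - sum(R[i][j] * x[j] for j in range(i + 1, n))) / R[i][i]
    return x

# Overdetermined toy problem: fit y = 2t + 1 through exact samples
A = [[1.0, t] for t in (0.0, 1.0, 2.0, 3.0)]
b = [1.0, 3.0, 5.0, 7.0]
x = lstsq_qr(A, b)
print([round(v, 6) for v in x])  # -> [1.0, 2.0]
```

Forming AᵀA instead would square the condition number of A before solving, which is exactly the instability the abstract attributes to the normal equations approach.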
Nyklicek, I.; Mommersteeg, P.M.; Beugen, S. van; Ramakers, C.; Boxtel, G.J. Van
2013-01-01
OBJECTIVE: The aim was to examine the effects of a Mindfulness-Based Stress Reduction (MBSR) intervention on cardiovascular and cortisol activity during acute stress. METHOD: Eighty-eight healthy community-dwelling individuals reporting elevated stress levels were randomly assigned to the MBSR proto
Qian, Xi-Yuan; Gu, Gao-Feng; Zhou, Wei-Xing
2011-11-01
Detrended fluctuation analysis (DFA) is a simple but very efficient method for investigating the power-law long-term correlations of non-stationary time series, in which a detrending step is necessary to obtain the local fluctuations at different timescales. We propose to determine the local trends through empirical mode decomposition (EMD) and perform the detrending operation by removing the EMD-based local trends, which gives an EMD-based DFA method. Similarly, we also propose a modified multifractal DFA algorithm, called an EMD-based MFDFA. The performance of the EMD-based DFA and MFDFA methods is assessed with extensive numerical experiments based on fractional Brownian motion and multiplicative cascading process. We find that the EMD-based DFA method performs better than the classic DFA method in the determination of the Hurst index when the time series is strongly anticorrelated and the EMD-based MFDFA method outperforms the traditional MFDFA method when the moment order q of the detrended fluctuations is positive. We apply the EMD-based MFDFA to the 1 min data of Shanghai Stock Exchange Composite index, and the presence of multifractality is confirmed. We also analyze the daily Austrian electricity prices and confirm its anti-persistence.
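For reference, the classic DFA that the abstract modifies works as follows: integrate the mean-subtracted series into a profile, detrend the profile piecewise (here with local linear fits, which the EMD-based variant replaces with EMD local trends), and read the Hurst index off the log-log slope of the fluctuation function. A compact sketch under those assumptions, with an assumed set of scales:

```python
import math, random

def dfa_hurst(x, scales=(8, 16, 32, 64)):
    """Classic DFA with linear detrending; the slope of log F(s) vs log s estimates H."""
    # Profile: cumulative sum of the mean-subtracted series
    mean = sum(x) / len(x)
    y, acc = [], 0.0
    for v in x:
        acc += v - mean
        y.append(acc)
    logs, logF = [], []
    for w in scales:
        n_seg = len(y) // w
        sq = 0.0
        for seg in range(n_seg):
            ys = y[seg * w:(seg + 1) * w]
            ts = list(range(w))
            # Closed-form least-squares line for the local trend
            tm, ym = sum(ts) / w, sum(ys) / w
            beta = (sum((t - tm) * (v - ym) for t, v in zip(ts, ys))
                    / sum((t - tm) ** 2 for t in ts))
            alpha = ym - beta * tm
            sq += sum((v - (alpha + beta * t)) ** 2 for t, v in zip(ts, ys))
        logs.append(math.log(w))
        logF.append(0.5 * math.log(sq / (n_seg * w)))
    # Least-squares slope of log F against log s
    lm, fm = sum(logs) / len(logs), sum(logF) / len(logF)
    return (sum((l - lm) * (f - fm) for l, f in zip(logs, logF))
            / sum((l - lm) ** 2 for l in logs))

rnd = random.Random(7)
white = [rnd.gauss(0, 1) for _ in range(4096)]
print(0.35 < dfa_hurst(white) < 0.65)  # -> True (white noise gives H near 0.5)
```

The EMD-based variant of the paper swaps the piecewise polynomial fit for the local trend obtained from the last intrinsic mode functions, which is what improves the estimate for strongly anticorrelated series.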
Muammar Sadrawi
2016-01-01
Good quality cardiopulmonary resuscitation (CPR) is the mainstay of treatment for managing patients with out-of-hospital cardiac arrest (OHCA). Assessment of the quality of the CPR delivered is now possible through the electrocardiography (ECG) signal that can be collected by an automated external defibrillator (AED). This study evaluates a nonlinear approximation of the CPR given to asystole patients. The raw ECG signal is filtered using ensemble empirical mode decomposition (EEMD), and the CPR-related intrinsic mode functions (IMFs) are chosen for evaluation. In addition, sample entropy (SE), complexity index (CI), and the detrended fluctuation algorithm (DFA) are collated, and statistical analysis is performed using ANOVA. The primary outcome measure assessed is the patient survival rate after two hours. The CPR patterns of 951 asystole patients were analyzed for the quality of CPR delivered. There was no significant difference observed in the CPR-related IMF peak-to-peak interval analysis for patients younger or older than 60 years of age, and similarly for the amplitude-difference evaluation with SE and DFA. However, there is a difference for the CI (p < 0.05). The results show that the patient group younger than 60 years has a higher survival rate, with high complexity of the CPR-IMF amplitude differences.
Liu, Q.; Jing, L.; Li, Y.; Tang, Y.; Li, H.; Lin, Q.
2016-04-01
For forest management purposes, high-resolution LiDAR and optical remote sensing imagery are used for treetop detection, tree crown delineation, and classification. The purpose of this study is to develop a self-adjusted dominant-scale calculation method and a new horizontal crown-cutting method for the tree canopy height model (CHM) to detect and delineate tree crowns from LiDAR, under the hypothesis that a treetop is a radiometric or altitudinal maximum and that tree crowns consist of multi-scale branches. The core of the method is an automatic feature-scale selection strategy on the CHM, together with a multi-scale morphological reconstruction-open crown decomposition (MRCD) that extracts morphological multi-scale features of the CHM by: cutting the CHM from treetop to ground; analysing and refining the dominant multiple scales with differential horizontal profiles to obtain treetops; and segmenting the LiDAR CHM using a watershed segmentation approach marked with the MRCD treetops. This method solves the false-detection problems of CHM side surfaces extracted by the traditional morphological opening canopy segment (MOCS) method. The novel MRCD delineates more accurate and quantitative multi-scale features of the CHM, and enables more accurate detection and segmentation of treetops and crowns. The MRCD method can also be extended to tree crown extraction from high-resolution optical remote sensing imagery. In an experiment on an aerial LiDAR CHM of a forest with multi-scale tree crowns, the proposed method yielded high-quality tree crown maps.
Edward Jero, S; Ramu, Palaniappan; Ramakrishnan, S
2014-10-01
ECG steganography provides secured transmission of secret information, such as patient personal information, through ECG signals. This paper proposes an approach that uses the discrete wavelet transform to decompose signals and singular value decomposition (SVD) to embed the secret information into the decomposed ECG signal. The novelty of the proposed method is to embed the watermark using SVD into the two-dimensional (2D) ECG image. The embedding of secret information in a selected sub-band of the decomposed ECG is achieved by replacing the singular values of the decomposed cover image with the singular values of the secret data. The performance assessment of the proposed approach allows one to identify the sub-band suitable for hiding secret data and the signal degradation that would affect diagnosability. Performance is measured using metrics such as Kullback-Leibler divergence (KL), percentage residual difference (PRD), peak signal-to-noise ratio (PSNR) and bit error rate (BER). A dynamic location selection approach for embedding the singular values is also discussed. The proposed approach is demonstrated on the MIT-BIH database, and the observations validate that HH is the ideal sub-band to hide data. It is also observed that the signal degradation (less than 0.6%) is very low in the proposed approach, even with secret data as large as the sub-band size. Thus it does not affect diagnosability and is reliable for transmitting patient information.
(no author listed)
2010-01-01
In order to improve the quality of remote sensing image fusion, a new method combining the nonsubsampled Laplacian pyramid (NLP) and bidimensional empirical mode decomposition (BEMD) is proposed. First, the high-resolution panchromatic image (PAN) is decomposed using the NLP until the approximation component and the low-resolution multispectral image (MS) contain features of a similar scale. Then, the approximation component and the MS image are decomposed by BEMD, resulting in a number of bidimensional intrinsic mode functions (BIMFs) and a residue, respectively. The instantaneous frequency is computed in four directions of the BIMFs. Considering the positive or negative coefficients at corresponding positions, a weighted algorithm is designed for fusing the high-frequency details, using the instantaneous frequency and the absolute coefficient values of the BIMFs as fusion features. The fused image is then obtained through inverse BEMD and NLP. Experimental results illustrate the advantage of this method over the IHS, DWT and à-trous wavelet methods in both spectral and spatial detail quality.
Torregrosa, A. J.; Broatch, A.; Margot, X.; García-Tíscar, J.
2016-08-01
An experimental methodology is proposed to assess the noise emission of centrifugal turbocompressors like those of automotive turbochargers. A step-by-step procedure is detailed, starting from the theoretical considerations of sound measurement in flow ducts and examining specific experimental setup guidelines and signal processing routines. Special care is taken regarding some limiting factors that adversely affect the measuring of sound intensity in ducts, namely calibration, sensor placement and frequency ranges and restrictions. In order to provide illustrative examples of the proposed techniques and results, the methodology has been applied to the acoustic evaluation of a small automotive turbocharger in a flow bench. Samples of raw pressure spectra, decomposed pressure waves, calibration results, accurate surge characterization and final compressor noise maps and estimated spectrograms are provided. The analysis of selected frequency bands successfully shows how different, known noise phenomena of particular interest such as mid-frequency "whoosh noise" and low-frequency surge onset are correlated with operating conditions of the turbocharger. Comparison against external inlet orifice intensity measurements shows good correlation and improvement with respect to alternative wave decomposition techniques.
An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation
Litt, Jonathan S.
2007-01-01
A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
Schmitt, J Eric; Lenroot, Rhoshel K; Ordaz, Sarah E; Wallace, Gregory L; Lerch, Jason P; Evans, Alan C; Prom, Elizabeth C; Kendler, Kenneth S; Neale, Michael C; Giedd, Jay N
2009-08-01
The role of genetics in driving intracortical relationships is an important question that has rarely been studied in humans. In particular, there are no extant high-resolution imaging studies on genetic covariance. In this article, we describe a novel method that combines classical quantitative genetic methodologies for variance decomposition with recently developed semi-multivariate algorithms for high-resolution measurement of phenotypic covariance. Using these tools, we produced correlational maps of genetic and environmental (i.e. nongenetic) relationships between several regions of interest and the cortical surface in a large pediatric sample of 600 twins, siblings, and singletons. These analyses demonstrated high, fairly uniform, statistically significant genetic correlations between the entire cortex and global mean cortical thickness. In agreement with prior reports on phenotypic covariance using similar methods, we found that mean cortical thickness was most strongly correlated with association cortices. However, the present study suggests that genetics plays a large role in global brain patterning of cortical thickness in this manner. Further, using specific gyri with known high heritabilities as seed regions, we found a consistent pattern of high bilateral genetic correlations between structural homologues, with environmental correlations more restricted to the same hemisphere as the seed region, suggesting that interhemispheric covariance is largely genetically mediated. These findings are consistent with the limited existing knowledge on the genetics of cortical variability as well as our prior multivariate studies on cortical gyri.
Rakitskaya, Tatyana; Truba, Alla; Ennan, Alim; Volkova, Vitaliya
2015-12-01
Samples of the solid component of welding aerosols (SCWAs) were obtained as a result of steel welding by ANO-4, TsL-11, and UONI13/55 electrodes of Ukrainian manufacture. The phase compositions of the samples, both freshly prepared (FP) and modified (M) by water treatment at 60 °C, were studied by X-ray phase analysis and IR spectroscopy. All samples contain magnetite, demonstrating its reflex at 2θ ≈ 35° characteristic of cubic spinel, as well as manganochromite and iron oxides. FP SCWA-TsL and FP SCWA-UONI contain such phases as CaF2, water-soluble fluorides, chromates, and carbonates of alkali metals. After modification of the SCWA samples, water-soluble phases in their composition are undetectable. The size of the magnetite nanoparticles varies from 15 to 68 nm depending on the chemical composition of the electrodes under study. IR spectral investigations confirm the polyphase composition of the SCWAs. In the IR spectra, the biggest differences are apparent in the regions of deformation vibrations of M-O-H bonds and stretching vibrations of M-O bonds (M = Fe, Cr). The catalytic activity of the SCWAs in the reaction of ozone decomposition decreases in the order SCWA-ANO > SCWA-UONI > SCWA-TsL, corresponding to the decrease in the content of catalytically active phases in their compositions.
Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong
2017-03-01
Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for the subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are initial-model dependent, while this problem can be avoided by introducing the transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is required in order to estimate velocity perturbations, which is very time consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large scale models. However, the ITSP method is currently designed for smooth media, so the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of the velocity estimation for models with large perturbations, as well as to guarantee the estimation accuracy. Numerical examples for smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM and the FTIM. The estimated velocity distributions, the relative errors and the elapsed time all demonstrate the validity of the proposed DDM.
Zhang, Yan; Bhamber, Ranjeet; Riba-Garcia, Isabel; Liao, Hanqing; Unwin, Richard D; Dowsey, Andrew W
2015-04-01
As data rates rise, there is a danger that informatics for high-throughput LC-MS becomes more opaque and inaccessible to practitioners. It is therefore critical that efficient visualisation tools are available to facilitate quality control, verification, validation, interpretation, and sharing of raw MS data and the results of MS analyses. Currently, MS data is stored as contiguous spectra. Recall of individual spectra is quick but panoramas, zooming and panning across whole datasets necessitates processing/memory overheads impractical for interactive use. Moreover, visualisation is challenging if significant quantification data is missing due to data-dependent acquisition of MS/MS spectra. In order to tackle these issues, we leverage our seaMass technique for novel signal decomposition. LC-MS data is modelled as a 2D surface through selection of a sparse set of weighted B-spline basis functions from an over-complete dictionary. By ordering and spatially partitioning the weights with an R-tree data model, efficient streaming visualisations are achieved. In this paper, we describe the core MS1 visualisation engine and overlay of MS/MS annotations. This enables the mass spectrometrist to quickly inspect whole runs for ionisation/chromatographic issues, MS/MS precursors for coverage problems, or putative biomarkers for interferences, for example. The open-source software is available from http://seamass.net/viz/. © 2015 The Authors. PROTEOMICS published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
STRATEGY OF SOLUTION FOR THE INVENTORY ROUTING PROBLEM BASED ON SEPARABLE CROSS DECOMPOSITION
M. Elizondo-Cortés
2005-08-01
Full Text Available The Inventory-Routing Problem (IRP) involves a central warehouse, a fleet of trucks with finite capacity, a set of customers, and a known storage capacity. The objective is to determine when to serve each customer, as well as what route each truck should take, at the lowest expense. The IRP is NP-hard, which means that searching for solutions can take a very long time. A three-phase strategy is used to solve the problem. This strategy is constructed by answering the key questions: Which customers should be served in a planned period? What volume of products should be delivered to each customer? And which route should be followed by each truck? The second phase uses Cross Separable Decomposition to solve an allocation problem, in order to answer questions two and three, solving a location problem. The result is a very efficient ranking algorithm, O(n³) for large cases of the IRP.
Hofmann, Philipp; Sedlmair, Martin; Krauss, Bernhard; Wichmann, Julian L.; Bauer, Ralf W.; Flohr, Thomas G.; Mahnken, Andreas H.
2016-03-01
Osteoporosis is a degenerative bone disease usually diagnosed at the manifestation of fragility fractures, which severely endanger the health of especially the elderly. To ensure timely therapeutic countermeasures, noninvasive and widely applicable diagnostic methods are required. Currently the primary quantifiable indicator for bone stability, bone mineral density (BMD), is obtained either by DEXA (Dual-energy X-ray absorptiometry) or qCT (quantitative CT). Both have respective advantages and disadvantages, with DEXA being considered as gold standard. For timely diagnosis of osteoporosis, another CT-based method is presented. A Dual Energy CT reconstruction workflow is being developed to evaluate BMD by evaluating lumbar spine (L1-L4) DE-CT images. The workflow is ROI-based and automated for practical use. A dual energy 3-material decomposition algorithm is used to differentiate bone from soft tissue and fat attenuation. The algorithm uses material attenuation coefficients on different beam energy levels. The bone fraction of the three different tissues is used to calculate the amount of hydroxylapatite in the trabecular bone of the corpus vertebrae inside a predefined ROI. Calibrations have been performed to obtain volumetric bone mineral density (vBMD) without having to add a calibration phantom or to use special scan protocols or hardware. Accuracy and precision are dependent on image noise and comparable to qCT images. Clinical indications are in accordance with the DEXA gold standard. The decomposition-based workflow shows bone degradation effects normally not visible on standard CT images which would induce errors in normal qCT results.
Carlos Reyes-Garcia
2013-08-01
Full Text Available This paper presents a project on the development of a cursor control emulating the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals which are obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple and quick computational tool, yet effective, aimed at artifact reduction from head movements as well as a method to detect blinking signals for mouse control. A Kalman filter is used as state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. Experimental setup and some obtained results are presented.
Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos
2013-01-01
This paper presents a project on the development of a cursor control emulating the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals which are obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple and quick computational tool, yet effective, aimed at artifact reduction from head movements as well as a method to detect blinking signals for mouse control. A Kalman filter is used as state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. Experimental setup and some obtained results are presented. PMID:23948873
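The sifting step at the heart of EMD can be sketched minimally as below. This sketch uses linear envelopes for simplicity (standard EMD uses cubic-spline envelopes) and a synthetic two-tone signal in place of real headset data; the function name and parameters are illustrative. The first IMF should recover the fast, blink-like oscillation while the slow head-movement drift is rejected into the residue.

```python
import numpy as np

def sift_imf(x, t, n_sift=10):
    """First intrinsic mode function by basic sifting with linear envelopes."""
    h = x.copy()
    for _ in range(n_sift):
        # interior local maxima / minima
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(maxima) < 4 or len(minima) < 4:
            break
        upper = np.interp(t, t[maxima], h[maxima])
        lower = np.interp(t, t[minima], h[minima])
        h = h - 0.5 * (upper + lower)   # remove the mean envelope
    return h

t = np.linspace(0.0, 1.0, 2000)
fast = np.sin(2 * np.pi * 40 * t)        # blink-like fast oscillation
slow = 0.8 * np.sin(2 * np.pi * 3 * t)   # head-movement drift
imf1 = sift_imf(fast + slow, t)
```

Away from the edges (where envelope interpolation is unreliable), `imf1` tracks the fast component closely; a blink detector would then threshold this IMF rather than the raw signal.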
Kang, Seung Hee; Kim, Sung Jun; Shin, Eui Sup [Chonbuk National University, Jeonju (Korea, Republic of)
2012-03-15
A smart interfacing method based on domain/boundary decomposition is presented for the non-linear analysis of thermo-elastoviscoplastic damage and contact. The smart interfacing method provides adaptive reinterfacing of the subdomains and the interface as a result of changes in the viscoplasticity and damage level. Since the whole domain is divided into subdomains, interface, and contact interfaces, non-linear analyses of the problems can be localized within a few subdomains and on the contact interfaces. For the continuity constraints on the interface and the contact interfaces, a penalty method is applied to the variational formulations and finite element approximations. By applying suitable solution algorithms and adopting the smart interfacing method, the computational efficiency can be considerably improved. The important features of the proposed method were also evaluated through numerical experiments.
D.Torres; S.de Llobet; J.L.Pinilla; M.J.Lázaro; I.Suelves; R.Moliner
2012-01-01
Catalytic decomposition of methane using a Fe-based catalyst for hydrogen production has been studied in this work. A Fe/Al2O3 catalyst previously developed by our research group has been tested in a fluidized bed reactor (FBR). A parametric study of the effects of some process variables, including reaction temperature and space velocity, is undertaken. The operating conditions strongly affect the catalyst performance. Methane conversion was increased by increasing the temperature and lowering the space velocity. Using temperatures between 700 and 900 °C and space velocities between 3 and 6 LN/(gcat·h), a methane conversion in the range of 25%-40% for the gas exiting the reactor could be obtained during a 6 h run. In addition, carbon was deposited in the form of nanofilaments (chain-like nanofibers and multiwall nanotubes) with properties similar to those obtained in a fixed bed reactor.
Zhongliang Lv
2016-01-01
Full Text Available A novel fault diagnosis method based on variational mode decomposition (VMD) and a multikernel support vector machine (MKSVM) optimized by the Immune Genetic Algorithm (IGA) is proposed to accurately and adaptively diagnose mechanical faults. First, mechanical fault vibration signals are decomposed into multiple intrinsic mode functions (IMFs) by VMD. Then features in the time-frequency domain are extracted from the IMFs to construct mixed-domain feature sets. Next, Semisupervised Locally Linear Embedding (SS-LLE) is adopted for fusion and dimension reduction. The dimension-reduced feature sets are input to the IGA-optimized MKSVM for failure mode identification. Theoretical analysis demonstrates that MKSVM can approximate any multivariable function. The global optimal parameter vector of MKSVM can be rapidly identified by IGA parameter optimization. Experiments on mechanical faults show that, compared to traditional fault diagnosis models, the proposed method significantly increases the diagnosis accuracy of mechanical faults and enhances the generalization of its application.
M. Baghdadi
2011-01-01
Full Text Available This paper presents a comprehensive framework model of a distribution company with security and reliability considerations. A probabilistic wind farm, a renewable energy resource, is modeled in this work. The energy requirement of the distribution company can be either provided by the company's own distributed generation or purchased from the power market. Two reliability indices as well as DC load flow equations are also considered in order to satisfy reliability and security constraints, respectively. Since allocating proper spinning reserve improves the reliability level, the amount of spinning reserve is calculated iteratively. In this work, all equations are expressed in a linear fashion, in which the unit commitment formulation depends only on the binary on/off variables of the units. The Benders decomposition method is used to solve the security-based unit commitment.
Jianfeng Zhang
2015-01-01
Full Text Available During the operation of a high voltage circuit breaker, changes in vibration signals can reflect the machinery state of the circuit breaker. The extraction of vibration signal features directly influences the accuracy and practicability of fault diagnosis. This paper presents an extraction method based on ensemble empirical mode decomposition (EEMD). Firstly, the original vibration signals are decomposed into a finite number of stationary intrinsic mode functions (IMFs). Secondly, the envelope of each IMF is calculated and separated into equal-time segments, forming an equal-time segment energy entropy that reflects the change of the vibration signal. At last, the energy entropies serve as input vectors of a support vector machine (SVM) to identify the working state and fault pattern of the circuit breaker. Practical examples show that this diagnosis approach can effectively identify fault patterns of HV circuit breakers.
Xie, Wen-Jie; Li, Ming-Xia; Xu, Hai-Chuan; Chen, Wei; Zhou, Wei-Xing; Stanley, H. Eugene
2016-10-01
Traders in a stock market exchange stock shares and form a stock trading network. Trades at different positions of the stock trading network may contain different information. We construct stock trading networks based on the limit order book data and classify traders into k classes using the k-shell decomposition method. We investigate the influences of trading behaviors on the price impact by comparing a closed national market (A-shares) with an international market (B-shares), individuals and institutions, partially filled and filled trades, buyer-initiated and seller-initiated trades, and trades at different positions of a trading network. Institutional traders professionally use some trading strategies to reduce the price impact and individuals at the same positions in the trading network have a higher price impact than institutions. We also find that trades in the core have higher price impacts than those in the peripheral shell.
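The k-shell decomposition used to classify traders can be sketched as a pure-Python peeling routine; the toy graph below is illustrative and stands in for the limit-order-book trading network. Nodes are repeatedly removed at the current minimum degree, and each node's shell index is the degree threshold at which it was peeled.

```python
def k_shell(adj):
    """k-shell index of every node by iterative degree pruning."""
    deg = {v: len(adj[v]) for v in adj}
    shell = {}
    remaining = set(adj)
    while remaining:
        k = min(deg[v] for v in remaining)
        while True:  # peel all nodes of degree <= k before raising k
            peel = [v for v in remaining if deg[v] <= k]
            if not peel:
                break
            for v in peel:
                shell[v] = k
                remaining.remove(v)
                for u in adj[v]:
                    if u in remaining:
                        deg[u] -= 1
    return shell

# Toy trading network: core triangle a-b-c plus a peripheral trader d
adj = {'a': {'b', 'c', 'd'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}, 'd': {'a'}}
shell = k_shell(adj)  # core nodes get shell index 2, the pendant node gets 1
```

In the paper's setting, traders in high-index shells (the core) are those whose trades carry the larger price impact.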
Chundawat, Shishir P S; Vismeh, Ramin; Sharma, Lekh N; Humpula, James F; da Costa Sousa, Leonardo; Chambliss, C Kevin; Jones, A Daniel; Balan, Venkatesh; Dale, Bruce E
2010-11-01
Decomposition products formed or released during ammonia fiber expansion (AFEX) and dilute acid (DA) pretreatment of corn stover (CS) were quantified using robust mass spectrometry based analytical platforms. Ammonolytic cleavage of cell wall ester linkages during AFEX resulted in the formation of acetamide (25 mg/g AFEX CS) and various phenolic amides (15 mg/g AFEX CS) that are effective nutrients for downstream fermentation. After ammonolysis, Maillard reactions with carbonyl-containing intermediates represent the second largest sink for ammonia during AFEX. On the other hand, several carboxylic acids were formed (e.g. 35 mg acetic acid/g DA CS) during DA pretreatment. Formation of furans was 36-fold lower for AFEX compared to DA treatment, while yields of carboxylic acids (e.g. lactic and succinic acids) were 100-1000-fold lower during AFEX compared to previous reports using sodium hydroxide as the pretreatment reagent. Copyright 2010 Elsevier Ltd. All rights reserved.
Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos
2013-01-01
This paper presents a project on the development of a cursor control emulating the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals which are obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple and quick computational tool, yet effective, aimed at artifact reduction from head movements as well as a method to detect blinking signals for mouse control. A Kalman filter is used as state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. Experimental setup and some obtained results are presented.
Asadi, Mozaffar; Asadi, Zahra; Savaripoor, Nooshin; Dusek, Michal; Eigner, Vaclav; Shorkaei, Mohammad Ranjkesh; Sedaghat, Moslem
2015-02-05
A series of new VO(IV) complexes of tetradentate N2O2 Schiff base ligands (L(1)-L(4)) were synthesized and characterized by FT-IR, UV-vis and elemental analysis. The structure of the complex VOL(1)⋅DMF was also investigated by X-ray crystallography, which revealed a vanadyl center with distorted octahedral coordination where the 2-aza and 2-oxo coordinating sites of the ligand were perpendicular to the "-yl" oxygen. The electrochemical properties of the vanadyl complexes were investigated by cyclic voltammetry. A good correlation was observed between the oxidation potentials and the electron-withdrawing character of the substituents on the Schiff base ligands, showing the following trend: 5-MeO > 5-H > 5-Br > 5-Cl. Furthermore, the kinetic parameters of thermal decomposition were calculated using the Coats-Redfern equation. According to the Coats-Redfern plots, the kinetics of thermal decomposition of the studied complexes is first-order in all stages, the free energy of activation for each stage is larger than that of the previous one, and the complexes have good thermal stability. The preparation of VOL(1)⋅DMF also yielded another compound, one kind of vanadium oxide [VO]X, with a different crystal habit (platelet instead of prism) and without the L(1) ligand, consisting of a V10O28 cage, a diaminium moiety and dimethylammonium as counterions. Because its crystal structure was also new, we report it along with the targeted complex. Copyright © 2014 Elsevier B.V. All rights reserved.
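The Coats-Redfern estimate of the activation energy for first-order decomposition, ln[-ln(1-α)/T²] = ln(AR/(βE)) - E/(RT), can be sketched as a straight-line fit. The rate parameters below are invented for illustration, not the paper's measured values; the synthetic conversion data are generated from the same linearised model, so the fit recovers the assumed activation energy.

```python
import numpy as np

R = 8.314            # gas constant, J/(mol K)
E_true = 120e3       # assumed activation energy, J/mol (illustrative)
A, beta = 1e9, 10.0  # hypothetical pre-exponential factor and heating rate

T = np.linspace(500.0, 600.0, 50)                                  # temperature, K
g = (A * R * T**2) / (beta * E_true) * np.exp(-E_true / (R * T))   # g(alpha) = -ln(1-alpha)
alpha = 1.0 - np.exp(-g)                                           # conversion fraction

# Coats-Redfern linearisation: plot ln[-ln(1-alpha)/T^2] against 1/T
y = np.log(-np.log(1.0 - alpha) / T**2)
slope, intercept = np.polyfit(1.0 / T, y, 1)
E_fit = -slope * R   # slope = -E/R for first-order kinetics
```

A linear Coats-Redfern plot over a decomposition stage is what justifies the first-order assignment; the slope then yields E and the intercept yields the pre-exponential factor.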
Real-time tumor ablation simulation based on the dynamic mode decomposition method
Bourantas, George C.
2014-05-01
Purpose: The dynamic mode decomposition (DMD) method is used to provide reliable forecasting of tumor ablation treatment simulation in real time, which is much needed in medical practice. To achieve this, an extended Pennes bioheat model must be employed, taking into account both the water evaporation phenomenon and the tissue damage during tumor ablation. Methods: A meshless point collocation solver is used for the numerical solution of the governing equations. The results obtained are used by the DMD method for forecasting the numerical solution faster than the meshless solver. The procedure is first validated against analytical and numerical predictions for simple problems. The DMD method is then applied to three-dimensional simulations that involve modeling of tumor ablation and account for metabolic heat generation, blood perfusion, and heat ablation using realistic values for the various parameters. Results: The present method offers a very fast numerical solution to bioheat transfer, which is of clinical significance in medical practice. It also sidesteps the mathematical treatment of boundaries between tumor and healthy tissue, which is usually a tedious procedure with some inevitable degree of approximation. The DMD method provides excellent predictions of the temperature profile in tumors and in the healthy parts of the tissue, for linear and nonlinear thermal properties of the tissue. Conclusions: The low computational cost renders the use of DMD suitable for in situ real-time tumor ablation simulations without sacrificing accuracy. In such a way, tumor ablation treatment planning is feasible using just a personal computer thanks to the simplicity of the numerical procedure used. © 2014 American Association of Physicists in Medicine.
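The forecasting core of DMD can be sketched on synthetic snapshot pairs. The linear dynamics below stand in for the meshless solver output, and all names and sizes are illustrative assumptions: the reduced operator is extracted from the SVD of the snapshot matrix, and a new state is advanced one step without re-running the solver.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 60                      # state dimension, number of snapshot pairs
# Hypothetical linear dynamics standing in for the bioheat solver:
A_true = 0.95 * np.linalg.qr(rng.standard_normal((n, n)))[0]  # all |eigenvalues| = 0.95

X1 = rng.standard_normal((n, m))   # snapshots at time k
X2 = A_true @ X1                   # snapshots at time k+1

# Exact DMD: reduced operator U^T A U from the SVD of the first snapshot matrix
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
A_tilde = U.T @ X2 @ Vt.T / s      # column-wise division implements S^{-1}
eigvals, modes = np.linalg.eig(A_tilde)

# One-step forecast of a new state using only the reduced operator
x0 = rng.standard_normal(n)
x_pred = U @ (A_tilde @ (U.T @ x0))
```

The DMD eigenvalues govern the decay/oscillation of each mode, so the forecast can be extrapolated many steps ahead at negligible cost, which is the source of the real-time speed-up reported above.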
The Research of Welding Residual Stress Based Finite Element Method
Qinghua Bai
2013-06-01
Full Text Available Welding residual stress is caused by local heating during the welding process. Tensile residual stress reduces fatigue strength and corrosion resistance, while compressive residual stress decreases the stability limit; together they can produce brittle fracture and reduce the working life and strength of the workpiece. Based on a simulation of the welding process with the finite element method, the welding temperature field and residual stress are calculated, and the residual stress is then measured in experiments, so as to obtain the best welding technology and welding parameters and to effectively reduce the welding residual stress, which is of very important significance.
Functionality Decomposition by Compositional Correctness Preserving Transformation
Brinksma, Hendrik; Langerak, Romanus
1995-01-01
We present an algorithm for the decomposition of processes in a process algebraic framework. Decomposition, or the refinement of process substructure, is an important design principle in the top-down development of concurrent systems. In the approach that we follow the decomposition is based on a
Stress-based Variable-inductor for Electronic Ballasts
Zhang, Lihui; Xia, Yongming; Lu, Kaiyuan
2015-01-01
presents a new stress-based variable inductor to control inductance using the inverse magnetostrictive effect of a magnetostrictive material. The stress can be applied by a piezoelectrical material, and thus a voltage-controlled variable inductor can be realized with zero-power consumption. The new stress...
A yield criterion based on mean shear stress
Emmens, W.C.; Boogaard, van den A.H.
2014-01-01
This work investigates the relation between shear stress and plastic yield, considering that a crystal can only deform in a limited set of directions. The shear stress in arbitrary directions is mapped for some cases, showing relevant differences. Yield loci based on mean shear stress are constructed
Bergmann, R C; Ralebitso-Senior, T K; Thompson, T J U
2014-08-01
Despite emergent research initiatives, significant knowledge gaps remain of soil microbiology-associated cadaver decomposition. Nevertheless, preliminary studies have shown that the vast diversity and complex interactions of soil microbial communities have great potential for forensic applications such as clandestine grave location and postmortem interval estimation. This study investigated changes in soil bacterial communities during pig (Sus scrofa domesticus) leg decomposition. 16S rRNA, instead of the usually applied 16S rDNA marker, was used to compare the metabolically active bacteria. Total bacterial RNA was extracted from soil samples of three different layers on day 3, 28 and 77 after the shallow burial of a pig leg. The V3 region of the 16S rRNA was amplified, analysed by RT-PCR DGGE, and compared with control soil bacterial community profiles. Statistically significant differences in soil bacterial biodiversity were observed. For the control, bacterial diversity (H') and species richness (S) of the three layers averaged 2.48±0.14 (H') and 18.8±2.5 (S), respectively, while for the test soil increases (p=0.027) were recorded between day 3 (H'=2.71±0.02; S=21.3±2.0) and 28 (H'=3.46±0.32; S=60.3±16.9), particularly in the middle (10-20 cm) and bottom (20-30 cm) soil layers. Between day 28 and 77 the diversity and richness then decreased on average for all three layers (H'=3.43±0.20; S=60.0±17.3) but remained higher than on day 3. Thus, responses in soil bacterial profiles and activity to carcass decomposition, detected and characterised by RNA-based DGGE, could be used together with RNA sequencing data, changes in physico-chemical variables (carbon, nitrogen, phosphorus, temperature, redox potential, water activity and pH) and conventional macroecology markers (e.g. insects and vegetation), to develop a suite of analytical protocols for different forensic scenarios.
GAO Liang; CHEN Wenhua; QIAN Ping; PAN Jun; HE Qingchuan
2014-01-01
For planning optimum multiple-stress accelerated life test plans, a commonly followed guiding principle is that all parameters of the life-stress relationship should be estimated, and the number of stress level combinations must be no less than the number of parameters of the life-stress relationship. However, the general objective of an accelerated life test (ALT) is to assess the p-th quantile of the product life distribution under normal stress. For this objective, estimating all model parameters is not necessary, and doing so increases the cost of the test. Based on the theoretical conclusion that the stress level combinations of the optimum multiple-stress ALT plan lie on a straight line through the origin of coordinates, a design idea is proposed: transform the problem of designing an optimum multiple-stress ALT plan into that of designing an optimum single-stress ALT plan. Moreover, a method of planning the optimum multiple-stress ALT plan which avoids estimating all model parameters is established. An example shows that the proposed plan, which has only two stress level combinations, achieves an accuracy no less than the traditional plan and saves the test time and cost of at least one stress level combination; when the actual product life is less than the design value, even if the deviation of the initial model parameter values is up to 20%, the variance of the estimate of the p-th quantile of the proposed plan is still approximately 25% smaller than that of the traditional plans. A design method is provided for planning the optimum multiple-stress ALT which uses the statistically optimum degenerate test plan as the optimum multiple-stress accelerated life test plan.
Shi, Feifei
2014-07-10
The formation of passive films on electrodes due to electrolyte decomposition significantly affects the reversibility of Li-ion batteries (LIBs); however, understanding of the electrolyte decomposition process is still lacking. The decomposition products of ethylene carbonate (EC)-based electrolytes on Sn and Ni electrodes are investigated in this study by Fourier transform infrared (FTIR) spectroscopy. The reference compounds, diethyl 2,5-dioxahexane dicarboxylate (DEDOHC) and polyethylene carbonate (poly-EC), were synthesized, and their chemical structures were characterized by FTIR spectroscopy and nuclear magnetic resonance (NMR). Assignment of the vibration frequencies of these compounds was assisted by quantum chemical (Hartree-Fock) calculations. The effect of Li-ion solvation on the FTIR spectra was studied by introducing the synthesized reference compounds into the electrolyte. EC decomposition products formed on Sn and Ni electrodes were identified as DEDOHC and poly-EC by matching the features of surface species formed on the electrodes with reference spectra. The results of this study demonstrate the importance of accounting for the solvation effect in FTIR analysis of the decomposition products forming on LIB electrodes. © 2014 American Chemical Society.
Ando, T.; Ueyama, M.
2015-12-01
The urban heat island has received much attention as an important environmental problem. Urban parks can mitigate the urban heat island, because surface and surrounding air temperatures are often lower in urban parks than in built-up areas. In this study, we conducted comparative measurements of the surface energy balance at an urban built-up site and a large urban park. We clarify the important factors mitigating surface temperatures by creating the urban park, based on a methodology called the temperature decomposition (Lee et al., 2011). Two observation sites were set up at the urban built-up area and the large urban park, Oizumiryokuchi, in Sakai, Japan. Sensible and latent heat fluxes were measured by the eddy covariance method. Ground heat flux was estimated using the Objective Hysteresis Model (Grimmond et al., 1991). Anthropogenic heat flux was estimated from inventory data for the urban built-up area. To decompose the change in surface temperatures due to a land use change from built-up to urban park, the temperature decomposition method was applied for daytime and nighttime. We separated the contributions to the surface temperature changes of changes in albedo, surface roughness, Bowen ratio, ground heat flux, and anthropogenic heat. The results indicate that increased surface roughness in the urban park was the most effective factor in cooling daytime surface temperature. For nighttime, decreased ground heat flux due to increased vegetation cover was the most effective factor in cooling surface temperature. Comparative measurement in the urban area is a powerful tool for developing mitigation strategies for the urban heat island. References: Grimmond et al., 1991: Atmos. Environ., 25, 311-326. Lee et al., 2011: Nature, 479, 384-387.
Li, Qian; Di, Bangrang; Wei, Jianxin; Yuan, Sanyi; Si, Wenpeng
2016-12-01
Sparsity constraint inverse spectral decomposition (SCISD) is a time-frequency analysis method based on the convolution model, in which minimizing the l1 norm of the time-frequency spectrum of the seismic signal is adopted as a sparsity constraint term. The SCISD method has higher time-frequency resolution and more concentrated time-frequency distribution than the conventional spectral decomposition methods, such as short-time Fourier transformation (STFT), continuous-wavelet transform (CWT) and S-transform. Due to these good features, the SCISD method has gradually been used in low-frequency anomaly detection, horizon identification and random noise reduction for sandstone and shale reservoirs. However, it has not yet been used in carbonate reservoir prediction. The carbonate fractured-vuggy reservoir is the major hydrocarbon reservoir in the Halahatang area of the Tarim Basin, north-west China. If reasonable predictions for the type of multi-cave combinations are not made, it may lead to an incorrect explanation for seismic responses of the multi-cave combinations. Furthermore, it will result in large errors in reserves estimation of the carbonate reservoir. In this paper, the energy and phase spectra of the SCISD are applied to identify the multi-cave combinations in carbonate reservoirs. The examples of physical model data and real seismic data illustrate that the SCISD method can detect the combination types and the number of caves of multi-cave combinations and can provide a favourable basis for the subsequent reservoir prediction and quantitative estimation of the cave-type carbonate reservoir volume.
Yin, Yi; Shang, Pengjian
2015-04-01
In this paper, we propose multiscale detrended cross-correlation analysis (MSDCCA) to detect long-range power-law cross-correlation of signals in the presence of nonstationarity. To improve performance and robustness, we further introduce empirical mode decomposition (EMD) to eliminate noise effects, combining MSDCCA with EMD into what we call the MS-EDXA method, and then systematically investigate the multiscale cross-correlation structure of real traffic signals. We apply the MSDCCA and MS-EDXA methods to study cross-correlations in three situations: velocity and volume on one lane, velocities at the present and the next moment, and velocities on adjacent lanes, and compare their spectra. When the difference between the MSDCCA and MS-EDXA spectra becomes negligible, there is a crossover denoting the turning point of the difference. The crossover results from competition between the noise effects in the original signals and the intrinsic fluctuation of the traffic signals, and it divides the spectrum plot into two regions. In all three cases, the MS-EDXA method increases the average of the local scaling exponents and decreases their standard deviation, providing relatively stable, persistent cross-correlated scaling behavior, which makes the analysis more precise and robust once the noise is removed. Applying the MS-EDXA method avoids inaccurate characterization of the multiscale cross-correlation structure at short scales (the spectrum minimum, the range of spectrum fluctuation and the general trend) caused by noise in the original signals. We conclude that traffic velocity and volume are long-range cross-correlated, in accordance with their actual evolution, while velocities at the present and the next moment and velocities on adjacent lanes show strong cross-correlations both in temporal and
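The detrending-and-covariance step at the core of DCCA-style methods can be sketched in a few lines. The function below is our own minimal illustration (a single scale, linear detrending, non-overlapping windows), not the authors' MSDCCA implementation, and it omits the EMD pre-processing stage:

```python
import numpy as np

def dcca_coefficient(x, y, scale):
    """Detrended cross-correlation of two equal-length series at one scale.

    Profiles are built from the mean-centred series, split into
    non-overlapping windows of length `scale`, and a linear trend is
    removed from each window before computing the covariance.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    px = np.cumsum(x - x.mean())          # integrated profile of x
    py = np.cumsum(y - y.mean())          # integrated profile of y
    n_win = len(px) // scale
    f_xy = f_xx = f_yy = 0.0
    t = np.arange(scale)
    for w in range(n_win):
        sx = px[w * scale:(w + 1) * scale]
        sy = py[w * scale:(w + 1) * scale]
        # least-squares linear detrend inside the window
        cx = np.polyfit(t, sx, 1)
        cy = np.polyfit(t, sy, 1)
        rx = sx - np.polyval(cx, t)
        ry = sy - np.polyval(cy, t)
        f_xy += np.mean(rx * ry)
        f_xx += np.mean(rx ** 2)
        f_yy += np.mean(ry ** 2)
    return f_xy / np.sqrt(f_xx * f_yy)    # DCCA rho in [-1, 1]
```

Repeating this over a range of scales and fitting the log-log slope of the fluctuation functions yields the scaling exponents discussed in the abstract.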
Layered image inpainting based on image decomposition
Qin, Chuan; Wang, Shuozhong
2007-01-01
We propose a layered image inpainting scheme based on image decomposition. The damaged image is first decomposed into three layers: cartoon, edge, and texture. The cartoon and edge layers are repaired using an adaptive offset operator that can fill in damaged image blocks while preserving the sharpness of edges. The missing information in the texture layer is generated with a texture synthesis method. By using the discrete cosine transform (DCT) in image decomposition and trading off resolution against computational complexity in texture synthesis, the processing time is kept at a reasonable level.
2002-01-01
A decomposition methodology based on the concept of "thermoeconomic isolation" applied to the synthesis/design and operational optimization of a stationary cogeneration proton exchange membrane fuel cell (PEMFC) based total energy system (TES) for residential/commercial applications is the focus of this work. A number of different configurations for the fuel cell based TES were considered. The most promising set based on an energy integration analysis of candidate configurations was devel...
A direct sum decomposition theorem based on signal analysis
Li, Shanshan
2009-01-01
Based on the classical results on the direct sum decomposition of L2(R2) in Fourier analysis, and using a series of recent results in signal analysis induced by the Möbius transform, we obtain a more general direct sum decomposition theorem that remains invariant under the Fourier transform. Furthermore, we deduce the explicit Fourier transform formula for any function in the subspaces of this generalized decomposition.
Lusby, Richard Martin; Muller, Laurent Flindt; Petersen, Bjørn
2013-01-01
This paper describes a Benders decomposition-based framework for solving the large-scale energy management problem that was posed for the ROADEF 2010 challenge. The problem was taken from the power industry and entailed scheduling the outage dates for a set of nuclear power plants, which need to be regularly taken down for refueling and maintenance, in such a way that the expected cost of meeting the power demand in a number of potential scenarios is minimized. We show that the problem structure naturally lends itself to Benders decomposition; however, not all constraints can be included in the mixed integer programming model. We present a two-phase approach that first uses Benders decomposition to solve the linear programming relaxation of a relaxed version of the problem. In the second phase, integer solutions are enumerated and a procedure is applied to make them satisfy constraints not included...
Shin, Dong-Hun; Woo, Seunghee; Yem, Hyesuk; Cha, Minjeong; Cho, Sanghun; Kang, Mingyu; Jeong, Sooncheol; Kim, Yoonhyun; Kang, Kyungtae; Piao, Yuanzhe
2014-03-12
We report a novel method for the synthesis of a self-reducible (thermally reducible without a reducing atmosphere) and alcohol-soluble copper-based metal-organic decomposition (MOD) ink for printed electronics. Alcohol-solvent-based conductive inks are necessary for commercial printing processes such as reverse offset printing. We selected copper(II) formate as a precursor and alkanolamine (2-amino-2-methyl-1-propanol) as a ligand to make an alcohol-solvent-based conductive ink and to assist in the reduction reaction of copper(II) formate. In addition, a co-complexing agent (octylamine) and a sintering helper (hexanoic acid) were introduced to improve the metallic copper film. The specific resistivity of copper-based MOD ink (Cuf-AMP-OH ink) after heat treatment at 350 °C is 9.46 μΩ·cm, which is 5.5 times higher than the specific resistivity of bulk copper. A simple stamping transfer was conducted to demonstrate the potential of our ink for commercial printing processes.
Study on the decomposition of trace benzene over V2O5-WO3/TiO2-based catalysts in simulated flue gas
Commercial and laboratory-prepared V2O5-WO3/TiO2-based catalysts with different compositions were tested for catalytic decomposition of chlorobenzene (ClBz) in simulated flue gas. Resonance-enhanced multiphoton ionization time-of-flight mass spectrometry (REMPI-TOFMS) was employe...
Dong Wang
2015-01-01
The traditional polarity-comparison-based travelling wave protection, which uses the initial wave information, is affected by the initial fault angle, the bus structure, and external faults, and it ignores the relationship between the magnitude and polarity of the travelling wave. The resulting protection tripping failures and malfunctions limit further application of this protection principle. Therefore, this paper presents an ultra-high-speed travelling wave protection scheme using an integral-based polarity comparison principle. After empirical mode decomposition of the original travelling wave, the first-order intrinsic mode function is used as the protection object. Based on the relationship between the magnitude and polarity of the travelling wave, the paper demonstrates the feasibility of using the travelling wave magnitude, which contains polarity information, as the direction criterion, and it integrates the direction criterion over a period after the fault to avoid wave-head detection failure. PSCAD simulations of a typical 500 kV transmission system verify the reliability and sensitivity of the proposed travelling wave protection under different influencing factors.
Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu
2016-01-01
This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), the Laplacian score (LS) and an improved support vector machine based binary tree (ISVM-BT). When a fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal; hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, is utilized to quantify the complexity and self-similarity of the time series over a range of scales. In addition, the LS approach is introduced to refine the fault features by ranking the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically identify the fault patterns. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can recognize different categories and severities of rolling bearing faults.
Decomposition methods for unsupervised learning
Mørup, Morten
2008-01-01
This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor-analysis-based decomposition and its variants, such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding, to their generalizations... The relation between decomposition methods and clustering problems is derived both in terms of classical point clustering and in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given, ranging from multimedia analysis of image and sound data to analysis of biomedical data such as electroencephalography...
Chen, Jun; Li, Guoxiu; Zhang, Tao; Wang, Meng; Yu, Yusong
2016-12-01
Low-toxicity ammonium dinitramide (ADN)-based aerospace propulsion systems currently show promise for applications such as satellite attitude control. In the present work, the decomposition and combustion processes of an ADN-based monopropellant thruster were systematically studied, using a thermally stable catalyst to promote the decomposition reaction. The performance of the ADN propulsion system was investigated using a ground test system under vacuum, and the physical properties of the ADN-based propellant were also examined. Using this system, the effects of the preheating temperature and feed pressure on the combustion characteristics and thruster performance during steady-state operation were observed. The results indicate that the propellant and catalyst employed in this work, as well as the design and manufacture of the thruster, met the performance requirements. Moreover, the 1 N ADN thruster generated a specific impulse of 223 s, demonstrating the efficacy of the new catalyst. The thruster operational parameters (specifically, the preheating temperature and feed pressure) were found to have a significant effect on the decomposition and combustion processes within the thruster, and thruster performance improved at higher feed pressures and elevated preheating temperatures. A preheating temperature of 140 °C was found to activate the catalytic decomposition and combustion processes most effectively among the conditions tested. The data obtained in this study should benefit future systematic and in-depth investigations of the combustion mechanism and characteristics within an ADN thruster.
Application of the Decomposition Method to the Design Complexity of Computer-based Display
Kim, Hyoung Ju; Lee, Seung Woo; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2012-05-15
The importance of the design of human-machine interfaces (HMIs) for human performance and safety has long been recognized in the process industries. In nuclear power plants (NPPs), HMIs have significant safety implications, since poor implementation can impair the operators' information-searching ability, which is considered one of the important aspects of human behavior. To support and increase the efficiency of the operators' information-searching behavior, advanced HMIs based on computer technology are provided. Operators in an advanced main control room (MCR) acquire information through the video display units (VDUs) and the large display panel (LDP) required for the operation of NPPs. These computer-based displays contain a very large quantity of information and present it in a greater variety of formats than a conventional MCR; for example, they contain more elements such as abbreviations, labels, icons, symbols, coding, and highlighting. As computer-based displays contain more information, the complexity of the elements becomes greater because each element is less distinctive. A greater understanding is emerging about the effectiveness of computer-based display designs, including how distinctively display elements should be designed. According to Gestalt theory, people tend to group similar elements based on attributes such as shape, color, or pattern (the principle of similarity). Therefore, it is necessary to consider not only the human operator's perception but also the number of elements comprising a computer-based display
Hongchao Fan
2014-04-01
This paper presents a new approach for roof facet segmentation based on ridge detection and hierarchical decomposition along ridges. The proposed approach exploits the fact that every roof can be composed of a set of gabled roofs and of single facets separated by the gabled roofs. In this work, building footprints stored in OpenStreetMap are first used to extract 3D points on roofs. Then, roofs are segmented into roof facets. The algorithm starts by detecting roof ridges using RANSAC, since they are parallel to the horizon and situated at the top of the roof. The roof ridges indicate the location and direction of the gabled roof, so points on the two roof facets along a roof ridge can be identified based on their connectivity and coplanarity. The segmentation results benefit the subsequent roof reconstruction: many parameters, including the position, angle and size of the gabled roof, can be calculated and used as a priori knowledge for the model-driven approach, and the topologies among the point segments are made known for the data-driven approach. The algorithm has been validated on test sites in two towns next to the Bavarian Forest National Park. The experimental results show that building roofs can be segmented with both high correctness and high completeness simultaneously.
Ye, Linlin; Yang, Dan; Wang, Xu
2014-06-01
A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed. Noisy ECG signals are decomposed with EEMD to obtain a series of intrinsic mode functions (IMFs); selected IMFs are then reconstructed to de-noise the ECG. The reconstructed ECG signals are filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used to evaluate the performance of the proposed method against de-noising based on EEMD alone and on the wavelet transform with the improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that ECG waveforms de-noised with the proposed method were smooth and the amplitudes of the ECG features did not attenuate. In conclusion, the method can de-noise the ECG while preserving the characteristics of the original signal.
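The abstract does not give the improved threshold function itself, so the sketch below shows the classical soft and hard wavelet thresholds together with one common "compromise" form that improved threshold functions are typically built from; the function names and the parameter `a` are our own illustrative choices:

```python
import numpy as np

def soft_threshold(w, t):
    """Classical soft threshold: shrink coefficients toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def hard_threshold(w, t):
    """Classical hard threshold: zero out coefficients below t."""
    return np.where(np.abs(w) >= t, w, 0.0)

def compromise_threshold(w, t, a=0.5):
    """A compromise between hard and soft thresholding, one common form
    of an 'improved' threshold function (the exact function used in the
    paper is not specified in the abstract).
    a=0 reduces to hard thresholding, a=1 to soft thresholding."""
    return np.where(np.abs(w) >= t,
                    np.sign(w) * (np.abs(w) - a * t), 0.0)
```

In a full pipeline, such a function would be applied to the detail coefficients of a wavelet decomposition before reconstruction.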
Jing Xu
2015-10-01
To guarantee the stable operation of shearers and promote construction of an automatic coal-mining working face, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, overcoming the disadvantages of large size, contact measurement and low identification rate of traditional detectors. To avoid end-point effects and discard undesirable intrinsic mode function (IMF) components in the initial signal, IEEMD is applied to the sound. End-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select the essential IMFs. The energy and standard deviation of the remaining IMFs are then extracted as features, and the PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
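The PNN classification stage can be illustrated with a minimal Parzen-window classifier. This is a generic sketch of a probabilistic neural network, not the authors' implementation; the smoothing parameter `sigma` and the class interface are our assumptions:

```python
import numpy as np

class PNN:
    """Minimal probabilistic neural network (Parzen-window classifier).
    Each class score is the mean of Gaussian kernels centred on that
    class's training samples; sigma is the smoothing parameter."""

    def __init__(self, sigma=1.0):
        self.sigma = sigma

    def fit(self, X, y):
        self.X = np.asarray(X, float)
        self.y = np.asarray(y)
        self.classes = np.unique(self.y)
        return self

    def predict(self, X):
        X = np.atleast_2d(np.asarray(X, float))
        out = []
        for x in X:
            d2 = np.sum((self.X - x) ** 2, axis=1)        # squared distances
            k = np.exp(-d2 / (2.0 * self.sigma ** 2))      # Gaussian kernels
            scores = [k[self.y == c].mean() for c in self.classes]
            out.append(self.classes[int(np.argmax(scores))])
        return np.array(out)
```

In the method above, the feature vectors fed to `fit` would be the IMF energies and standard deviations extracted from the cutting sound.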
Segmentation of knee joints in x-ray images using decomposition-based sweeping and graph search
Mu, Jian; Liu, Xiaomin; Luan, Shuang; Heintz, Philip H.; Mlady, Gary W.; Chen, Danny Z.
2011-03-01
Plain radiography (i.e., X-ray imaging) provides an effective and economical imaging modality for diagnosing knee illnesses and injuries. Automatically segmenting and analyzing knee radiographs is a challenging problem. In this paper, we present a new approach for accurately segmenting the knee joint in X-ray images. We first use the Gaussian high-pass filter to remove homogeneous regions which are unlikely to appear on bone contours. We then presegment the bones and develop a novel decomposition-based sweeping algorithm for extracting bone contour topology from the filtered skeletonized images. Our sweeping algorithm decomposes the bone structures into several relatively simple components and deals with each component separately based on its geometric characteristics using a sweeping strategy. Utilizing the presegmentation, we construct a graph to model the bone topology and apply an optimal graph search algorithm to optimize the segmentation results (with respect to our cost function defined on the bone boundaries). Our segmented results match well with the manual tracing results by radiologists. Our segmentation approach can be a valuable tool for assisting radiologists and X-ray technologists in clinical practice and training.
Cai Yi
2015-01-01
Due to the special location and structure of the transmission system on the CRH5 high-speed train, a dynamically unbalanced cardan shaft poses a threat to train service safety, so effective methods are needed to monitor the cardan shaft's operating information and estimate its performance state in real time. In this study, an estimation method based on ensemble empirical mode decomposition (EEMD) is presented. Using this method, the time-frequency characteristics of the cardan shaft can be extracted effectively by separating the gearbox vibration acceleration data. Preliminary analysis suggests that the pinion rotating vibration separated from the gearbox vibration by EEMD can serve as an important basis for estimating the cardan shaft state. With two sets of gearbox vibration signals collected from an in-service train at different running speeds, comparative analysis verifies that the proposed method is highly effective for cardan shaft state estimation. Further research is needed to quantify the performance state of the cardan shaft based on this method.
Glascoe, E A; Hsu, P C; Springer, H K; DeHaven, M R; Tan, N; Turner, H C
2010-12-10
PBXN-9, an HMX formulation, is thermally damaged and thermally decomposed in order to determine the morphological changes and decomposition kinetics that occur in the material after mild to moderate heating. The material and its constituents were decomposed using standard thermal analysis techniques (DSC and TGA), and the decomposition kinetics are reported using different kinetic models. Pressed parts and prill were thermally damaged, i.e. heated to temperatures that resulted in material changes but did not result in significant decomposition or explosion, and analyzed. In general, the thermally damaged samples showed a significant increase in porosity, a decrease in density and a small amount of weight loss. These PBXN-9 samples appear to sustain more thermal damage than similar HMX-Viton A formulations, most likely because of decomposition/evaporation of a volatile plasticizer and a polymorphic transition of the HMX from the β to the δ phase.
Wu, Chunyan; Liu, Jian; Peng, Fuqiang; Yu, Dejie; Li, Rong
2013-07-01
When used for separating multi-component non-stationary signals, the adaptive time-varying filter (ATF) based on multi-scale chirplet sparse signal decomposition (MCSSD) generates phase shift and signal distortion. To overcome this drawback, a zero-phase filter is introduced into the mentioned filter, and a fault diagnosis method for speed-changing gearboxes is proposed. Firstly, the gear meshing frequency of each gearbox is estimated by chirplet path pursuit. Then, according to the estimated gear meshing frequencies, an adaptive zero-phase time-varying filter (AZPTF) is designed to filter the original signal. Finally, the basis for fault diagnosis is acquired by envelope order analysis of the filtered signal. A signal consisting of two time-varying amplitude-modulated and frequency-modulated (AM-FM) signals is analyzed with both the ATF and the AZPTF based on MCSSD. The simulation results show that the variances between the original signals and the signals filtered by the AZPTF based on MCSSD are 13.67 and 41.14, far less than the variances (323.45 and 482.86) between the original signals and the signals obtained by the ATF based on MCSSD. Experiments on the vibration signals of gearboxes indicate that the vibration signals of two speed-changing gearboxes installed on one foundation bed can be separated effectively by the AZPTF. Based on the demodulation information of each gearbox's vibration signal, fault diagnosis can be implemented. Both the simulation and the experimental examples prove that the proposed filter can extract a mono-component time-varying AM-FM signal from a multi-component time-varying AM-FM signal without distortion.
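The zero-phase idea used in the AZPTF can be demonstrated with forward-backward filtering: running a filter once forward and once over the time-reversed signal cancels its phase response. The FIR helper below is a simplified, time-invariant stand-in for the paper's time-varying filter:

```python
import numpy as np

def fir_filter(b, x):
    """Causal FIR filtering: y[n] = sum_k b[k] * x[n-k] (zero-padded)."""
    return np.convolve(x, b)[:len(x)]

def zero_phase_filter(b, x):
    """Forward-backward filtering (the filtfilt idea): applying the
    filter forward and then to the time-reversed output cancels the
    phase shift, leaving a squared-magnitude response."""
    y = fir_filter(b, x)
    return fir_filter(b, y[::-1])[::-1]
```

With an 11-tap moving average, the causal output of a sinusoid is delayed by the 5-sample group delay, while the forward-backward output stays aligned with the input peaks.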
Web-Based and Mobile Stress Management Intervention for Employees
Heber, E.; Lehr, D.; Ebert, D. D.
2016-01-01
Background: Work-related stress is highly prevalent among employees and is associated with adverse mental health consequences. Web-based interventions offer the opportunity to deliver effective solutions on a large scale; however, the evidence is limited and the results conflicting. Objective: This randomized controlled trial evaluated the efficacy of guided Web- and mobile-based stress management training for employees. Methods: A total of 264 employees with elevated symptoms of stress (Perceived Stress Scale-10, PSS-10 >= 22) were recruited from the general working population and randomly assigned... (F1,261 = 58.08, P < .001) ... stress in employees in the long term...
Li, Xiao-Song; Gao, Zhi-Hai; Li, Zeng-Yuan; Bai, Li-Na; Wang, Beng-Yu
2010-01-01
Based on Hyperion hyperspectral image data, the image-derived shifting-sand and false-Gobi spectra and field-measured sparse vegetation spectra were taken as endmembers, and the sparse vegetation coverage was estimated with the fully constrained linear spectral mixture model (LSMM) and the non-constrained LSMM, respectively. The results showed that the sparse vegetation fraction based on the fully constrained LSMM described the actual sparse vegetation distribution: the differences between the sparse vegetation fraction and the field-measured vegetation coverage were less than 5% for all samples, and the RMSE was 3.0681. However, the sparse vegetation fraction based on the non-constrained LSMM was obviously lower than the field-measured vegetation coverage, and the correlation between them was poor, with a low R2 of 0.5855. Compared with McGwire's corresponding research, the sparse vegetation coverage estimation in this study was more accurate and reliable, giving it broad prospects for future application.
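The sum-to-one constraint that distinguishes the constrained LSMM from the non-constrained one has a closed form via a Lagrange multiplier. The sketch below implements only that constraint (the fully constrained model additionally enforces non-negative abundances, which is omitted here); the function name is ours:

```python
import numpy as np

def scls_unmix(E, s):
    """Sum-to-one constrained linear unmixing (closed form via a
    Lagrange multiplier). E: (bands x endmembers) endmember matrix,
    s: observed spectrum. Non-negativity, the other condition of the
    fully constrained model, is not enforced in this sketch."""
    G_inv = np.linalg.inv(E.T @ E)
    a_ls = G_inv @ E.T @ s                    # unconstrained LS abundances
    ones = np.ones(E.shape[1])
    lam = (1.0 - ones @ a_ls) / (ones @ G_inv @ ones)
    return a_ls + lam * (G_inv @ ones)        # abundances summing to one
```

For a noise-free mixture whose true abundances already sum to one, the constrained solution recovers them exactly; for noisy spectra it still returns fractions that sum to one.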
Goldsmith, Rachel E; Gerhart, James I; Chesney, Samantha A; Burns, John W; Kleinman, Brighid; Hood, Megan M
2014-10-01
Mindfulness-based psychotherapies are associated with reductions in depression and anxiety. However, few studies address whether mindfulness-based approaches may benefit individuals with posttraumatic stress symptoms. The current pilot study explored whether group mindfulness-based stress reduction therapy reduced posttraumatic stress symptoms, depression, and negative trauma-related appraisals in 9 adult participants who reported trauma exposure and posttraumatic stress or depression. Participants completed 8 sessions of mindfulness-based stress reduction treatment, as well as pretreatment, midtreatment, and posttreatment assessments of psychological symptoms, acceptance of emotional experiences, and trauma appraisals. Posttraumatic stress symptoms, depression, and shame-based trauma appraisals were reduced over the 8-week period, whereas acceptance of emotional experiences increased. Participants' self-reported amount of weekly mindfulness practice was related to increased acceptance of emotional experiences from pretreatment to posttreatment. Results support the utility of mindfulness-based therapies for posttraumatic stress symptoms and reinforce studies that highlight reducing shame and increasing acceptance as important elements of recovery from trauma.
de Almeida, Andre L. F.; Luciani, Xavier; Stegeman, Alwin; Comon, Pierre
2012-01-01
This work proposes a new tensor-based approach to solve the problem of blind identification of underdetermined mixtures of complex-valued sources exploiting the cumulant generating function (CGF) of the observations. We show that a collection of second-order derivatives of the CGF of the observation
Malusek, Alexandr; Magnusson, Maria; Sandborg, Michael; Westin, Robin; Alm Carlsson, Gudrun
2014-03-01
Better knowledge of the elemental composition of patient tissues may improve the accuracy of absorbed dose delivery in brachytherapy. Deficiencies of water-based protocols have been recognized, and work is ongoing to implement patient-specific radiation treatment protocols. A model-based iterative image reconstruction algorithm, DIRA, has been developed by the authors to automatically decompose patient tissues into two or three base components via dual-energy computed tomography. The performance of an updated version of DIRA was evaluated for the determination of prostate calcification. A computer simulation using an anthropomorphic phantom showed that the mass fraction of calcium in the prostate tissue was determined with an accuracy better than 9%. The calculated mass fraction was little affected by the choice of the material triplet for the surrounding soft tissue. Relative differences between the true and approximated values of the linear attenuation coefficient and the mass energy absorption coefficient for the prostate tissue were less than 6% for photon energies from 1 keV to 2 MeV. The results indicate that DIRA has the potential to improve the accuracy of dose delivery in brachytherapy despite the fact that base material triplets only approximate the surrounding soft tissues.
A Benders Decomposition-Based Matheuristic for the Cardinality Constrained Shift Design Problem
Lusby, Richard Martin; Range, Troels Martin; Larsen, Jesper
2016-01-01
The Shift Design Problem is an important optimization problem which arises when scheduling personnel in industries that require continuous operation. Based on the forecast required staffing levels for a set of time periods, a set of shift types that best covers the demand must be determined...
Bloch, Søren; Christiansen, Christian Holk
versa. This, however, is often neglected in the existing literature. We solve the TSLAP simultaneously for the reserve area and the forward area. Based on randomly generated test instances we show that the solutions of TSLAP compare favorably to solutions found by other algorithms proposed...
Schaller, Matthieu; Chalk, Aidan B G; Draper, Peter W
2016-01-01
We present a new open-source cosmological code, called SWIFT, designed to solve the equations of hydrodynamics using a particle-based approach (Smooth Particle Hydrodynamics) on hybrid shared/distributed-memory architectures. SWIFT was designed from the bottom up to provide excellent strong scaling on both commodity clusters (Tier-2 systems) and Top100-supercomputers (Tier-0 systems), without relying on architecture-specific features or specialized accelerator hardware. This performance is due to three main computational approaches: (1) Task-based parallelism for shared-memory parallelism, which provides fine-grained load balancing and thus strong scaling on large numbers of cores. (2) Graph-based domain decomposition, which uses the task graph to decompose the simulation domain such that the work, as opposed to just the data, as is the case with most partitioning schemes, is equally distributed across all nodes. (3) Fully dynamic and asynchronous communication, in which communication is modelled as just anot...
Rai, Akhand; Upadhyay, S. H.
2017-09-01
Bearings are the most failure-prone components in rotating machinery, so monitoring bearing degradation is of great concern for averting sudden machinery breakdown. In this study, a novel method for bearing performance degradation assessment (PDA) based on a combination of empirical mode decomposition (EMD) and k-medoids clustering is proposed. Fault features are extracted from the bearing signals using EMD and then subjected to k-medoids-based clustering to obtain the normal-state and failure-state cluster centres. A confidence value (CV) curve, based on the dissimilarity of each test data object to the normal state, is employed as the degradation indicator for assessing bearing health. The proposed approach is applied to vibration signals collected in run-to-failure tests of bearings to assess its effectiveness in bearing PDA. To validate its superiority, it is compared with the commonly used time-domain features RMS and kurtosis, the well-known fault diagnosis method envelope analysis (EA), and existing PDA classifiers, i.e., self-organizing maps (SOM) and fuzzy c-means (FCM). The results demonstrate that the recommended method outperforms the time-domain features and the SOM- and FCM-based PDA in detecting early-stage degradation more precisely. Moreover, EA can be used as an accompanying method to confirm the early-stage defect detected by the proposed bearing PDA approach. The study shows the potential of k-medoids clustering as an effective tool for bearing PDA.
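The clustering stage can be sketched with a naive k-medoids routine and a simple exponential confidence value. Both are generic illustrations, since the abstract does not specify the exact CV formula or medoid-update scheme used by the authors:

```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Naive alternating k-medoids: assign each point to its nearest
    medoid, then move each medoid to the cluster member minimising the
    total within-cluster distance."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    labels = np.argmin(D[:, medoids], axis=1)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members):
                costs = D[np.ix_(members, members)].sum(axis=0)
                new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

def confidence_value(x, normal_medoid, scale=1.0):
    """Degradation indicator: 1 for a sample identical to the normal
    state, decaying toward 0 as the dissimilarity grows (one common
    choice; the paper's exact CV formula is not given in the abstract)."""
    return float(np.exp(-np.linalg.norm(x - normal_medoid) / scale))
```

A falling CV curve over the run-to-failure test then signals progressing degradation relative to the normal-state centre.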
Lucido, Mario; Panariello, Gaetano; Schettino, Fulvio
2017-01-01
The aim of this paper is the introduction of a new analytically regularizing procedure, based on Helmholtz decomposition and the Galerkin method, successfully employed to analyze the electromagnetic scattering by a zero-thickness, perfectly electrically conducting circular disk. After expanding the fields in cylindrical harmonics, the problem is formulated as an electric field integral equation in the vector Hankel transform domain. Assuming as unknowns the surface curl-free and divergence-free contributions of the surface current density, a second-kind Fredholm infinite matrix-operator equation is obtained by means of the Galerkin method, with expansion functions reconstructing the expected physical behavior of the surface current density and with closed-form spectral domain counterparts, which form a complete set of orthogonal eigenfunctions of the most singular part of the integral operator. The coefficients of the scattering matrix are single improper integrals which can be quickly computed by means of an analytical asymptotic acceleration technique. Comparisons with the literature are provided to show the accuracy and efficiency of the presented technique.
Gholamreza Anbarjafari
2015-01-01
Recently, many computer vision applications have been inspired by human behavior, or the human visual system. It is also known that illumination issues have always been an important problem in many image processing applications. In this work we propose a new image illumination enhancement technique which is inspired by the behavior of the human visual system in illumination correction. The proposed technique uses local singular value decomposition (SVD) and the discrete wavelet transform (DWT), inspired by the fact that the human visual system equalizes a scene by disregarding extremely illuminated areas. In other words, the human brain uses local illumination enhancement, and this localization is based on the extreme illuminations, e.g. the existence or absence of too much light. In this technique, after dividing the image into several local regions, each region is converted into the DWT domain and, after updating the singular value matrix of the respective low-low subband, is reconstructed by using the inverse DWT (IDWT). The combination of the local regions results in the equalized image. The technique is compared with the standard general histogram equalization (GHE) and local histogram equalization (LHE). The experimental results show the superiority of the proposed method over the aforementioned techniques.
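The core singular-value manipulation can be illustrated without the wavelet machinery. This numpy sketch rescales a block so its dominant singular value matches a target, a simplified stand-in for the low-low-subband update described above; the DWT/IDWT step around it is omitted.

```python
import numpy as np

def scale_dominant_sv(block, target_sv):
    """Rescale all singular values of `block` uniformly so that the
    largest one equals target_sv, then reconstruct the block.

    Simplified stand-in for the paper's singular-value-matrix update;
    in the actual method this is applied to the LL subband of each
    local region's DWT."""
    u, s, vt = np.linalg.svd(np.asarray(block, dtype=float), full_matrices=False)
    if s[0] > 0:
        s = s * (target_sv / s[0])
    return u @ np.diag(s) @ vt
```

A suitable `target_sv` per region (e.g. derived from the global illumination statistics) then brightens dark regions and tames overexposed ones.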
张军; 欧建平; 占荣辉
2015-01-01
In order to improve the measurement accuracy of moving target signals, an automatic target recognition model for moving target signals was established based on empirical mode decomposition (EMD) and support vector machine (SVM). The automatic target recognition process for the nonlinear and non-stationary Doppler signals of military targets can be expressed as follows. Firstly, the nonlinear and non-stationary Doppler signals were decomposed into a set of intrinsic mode functions (IMFs) using EMD. After the Hilbert transform of each IMF, the energy ratio of each IMF to the total of the IMFs was extracted as a feature of the military target. Then, the SVM was trained using the energy ratios to classify the military targets, and a genetic algorithm (GA) was used to optimize the SVM parameters in the solution space. The experimental results show that this algorithm can achieve recognition accuracies of 86.15%, 87.93%, and 82.28% for tank, vehicle and soldier, respectively.
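The energy-ratio feature described above is straightforward to compute once the IMFs are available. This sketch assumes the EMD step has already been performed by some routine and takes its output matrix directly.

```python
import numpy as np

def imf_energy_ratios(imfs):
    """Energy of each IMF as a fraction of total IMF energy.

    `imfs` is an (n_imfs, n_samples) array assumed to come from an
    EMD routine (the EMD step itself is not reproduced here); the
    resulting vector is the feature fed to the SVM classifier."""
    e = np.sum(np.asarray(imfs, dtype=float) ** 2, axis=1)
    return e / e.sum()
```

Because the ratios sum to one, the feature vector is scale-invariant with respect to overall signal amplitude, which is convenient for classification.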
Wang, Wen-chuan; Chau, Kwok-wing; Qiu, Lin; Chen, Yang-bo
2015-05-01
Hydrological time series forecasting is one of the most important applications in modern hydrology, especially for effective reservoir management. In this research, an artificial neural network (ANN) model coupled with ensemble empirical mode decomposition (EEMD) is presented for forecasting medium- and long-term runoff time series. First, the original runoff time series is decomposed into a finite and often small number of intrinsic mode functions (IMFs) and a residual series using the EEMD technique, for attaining deeper insight into the data characteristics. Then all IMF components and the residue are predicted, respectively, through appropriate ANN models. Finally, the forecasted results of the modeled IMFs and residual series are summed to formulate an ensemble forecast for the original annual runoff series. Two annual reservoir runoff time series, from Biuliuhe and Mopanshan in China, are investigated using the developed model based on four performance evaluation measures (RMSE, MAPE, R and NSEC). The results obtained in this work indicate that EEMD can effectively enhance forecasting accuracy and the proposed EEMD-ANN model can attain significant improvement over the ANN approach in medium- and long-term runoff time series forecasting.
Torres, Ana M; Lopez, Jose J; Pueo, Basilio; Cobos, Maximo
2013-04-01
Plane-wave decomposition (PWD) methods using microphone arrays have been shown to be a very useful tool within the applied acoustics community for their multiple applications in room acoustics analysis and synthesis. While many theoretical aspects of PWD have been previously addressed in the literature, the practical advantages of the PWD method to assess the acoustic behavior of real rooms have been barely explored so far. In this paper, the PWD method is employed to analyze the sound field inside a selected set of real rooms having a well-defined purpose. To this end, a circular microphone array is used to capture and process a number of impulse responses at different spatial positions, providing angle-dependent data for both direct and reflected wavefronts. The detection of reflected plane waves is performed by means of image processing techniques applied over the raw array response data and over the PWD data, showing the usefulness of image-processing-based methods for room acoustics analysis.
Nasser Najibi
2013-07-01
The establishment and maintenance of marine structures and near-shore constructions require sufficient and accurate information about sea level variations, not only at present but also as a reliable prediction of the near future. It is therefore necessary to analyze and predict Mean Sea Level (MSL) for a specific time, considering all possible effects which may modify the accuracy and precision of the results. This study presents tidal harmonic decomposition solutions, based on the first and second methods of solving the Fourier series, to analyze the hourly tides of January 2010 and predict the tides for all days of 2012, considering the astronomical arguments and nodal corrections, at the Bandar-e-Abbas, Kangan Port and Bushehr Port tide gauge stations located in the Persian Gulf in the south of Iran. Moreover, accurate predictions of Mean Tide Level (MTL) are provided for the whole of 2012 at each tide gauge station by excluding the effects of astronomical arguments and nodal corrections, owing to their adverse effects. The MTL fluctuations derived from the predicted results during 2012 and the different phases of the Moon show very good agreement with each other, in accordance with tide-generating force theories.
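The harmonic-decomposition step underlying such analyses can be sketched as an ordinary least-squares fit of a mean level plus cosine/sine pairs, one pair per tidal constituent. Astronomical arguments and nodal corrections, which the study treats separately, are omitted in this sketch.

```python
import numpy as np

def fit_tidal_harmonics(t, h, omegas):
    """Least-squares estimate of the mean level plus amplitude and
    phase of each tidal constituent.

    t      : observation times (hours)
    h      : observed water levels
    omegas : angular frequencies of the constituents (rad/hour)

    Model: h(t) = MSL + sum_k A_k * cos(omega_k * t - phi_k)."""
    cols = [np.ones_like(t)]
    for w in omegas:
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    msl = coef[0]
    amps = np.hypot(coef[1::2], coef[2::2])
    phases = np.arctan2(coef[2::2], coef[1::2])
    return msl, amps, phases
```

Prediction then amounts to evaluating the fitted model at future times; the fitted mean term plays the role of the MSL/MTL estimate.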
Liu, Zhiwen; He, Zhengjia; Guo, Wei; Tang, Zhangchun
2016-03-01
In order to extract fault features of large-scale power equipment from strong background noise, a hybrid fault diagnosis method based on second generation wavelet de-noising (SGWD) and the local mean decomposition (LMD) is proposed in this paper. In this method, a de-noising algorithm of the second generation wavelet transform (SGWT) using neighboring coefficients is employed as the pretreatment to remove noise from rotating machinery vibration signals, by virtue of its good effect in enhancing the signal-to-noise ratio (SNR). Then, the LMD method is used to decompose the de-noised signals into several product functions (PFs). The PF corresponding to the faulty feature signal is selected according to the correlation coefficient criterion. Finally, the frequency spectrum is analyzed by applying the FFT to the selected PF. The proposed method is applied to analyze the vibration signals collected from an experimental gearbox and a real locomotive rolling bearing. The results demonstrate that the proposed method achieves better performance, such as a higher SNR and faster convergence, than the standard LMD method.
Analysis of Crosstalk Impact on the Cloude-decomposition-based Scattering Characteristic
Hu Dingsheng
2017-04-01
Crosstalk is not only one of the main error sources in a polarimetric SAR system, but also an indicator for evaluating calibration performance. In this paper, to determine the impact of crosstalk on land cover classification, we first derive the mathematical relations between crosstalk and the Cloude-decomposition-based scattering characteristics. Then, we verify our theoretical conclusions in a semi-physical simulation based on Radarsat-2 polarimetric data for different land covers. Finally, we perform H/α/Wishart classification on the experimental data. From the ratio curve of pixels labeled differently under changing crosstalk, we can determine the crosstalk requirement that will meet the needs of specific applications.
Activity-Based Scene Decomposition for Topology Inference of Video Surveillance Network
Hongguang Zhang
2014-01-01
The topology inference is the study of spatial and temporal relationships among cameras within a video surveillance network. We propose a novel approach to understanding activities based on the visual coverage of a video surveillance network. In our approach, an optimal camera placement scheme is first presented using a binary integer programming algorithm in order to maximize the surveillance coverage. Then, each camera view is decomposed into regions based on Histograms of Color Optical Flow (HCOF), according to the spatial-temporal distribution of activity patterns observed in a training set of video sequences. We conduct experiments using hours of video sequences captured at an office building with seven camera views, all of which are sparse scenes with complex activities. The results of the real scene experiment show that the HCOF features offer important contextual information for spatial and temporal topology inference of a camera network.
Zheng, Xiang
2015-03-01
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors. © 2015 Elsevier Inc.
付朝江; 张武
2006-01-01
Parallel finite element method using domain decomposition technique is adapted to a distributed parallel environment of workstation cluster. The algorithm is presented for parallelization of the preconditioned conjugate gradient method based on domain decomposition. Using the developed code, a dam structural analysis problem is solved on workstation cluster and results are given. The parallel performance is analyzed.
Frommer, A; Krieg, S; Leder, B; Rottmann, M
2013-01-01
In lattice QCD computations a substantial amount of work is spent in solving linear systems arising in Wilson's discretization of the Dirac equation. We show first numerical results of the extension of the two-level DD-αAMG method to a true multilevel method, based on our parallel MPI-C implementation. Using additional levels pays off, allowing the core minutes spent on one system solve to be cut by a factor of approximately 700 compared to standard Krylov subspace methods, and yielding another speed-up of a factor of 1.7 over the two-level approach.
Sensorless Monitoring of a Motor-Drive Mechanical System Based on Adaptive Signal Decomposition
MENG Qing-feng; JIAO Li-cheng
2006-01-01
A method for estimating current harmonics of an induction motor is introduced which is used for sensorless monitoring of a mechanical system driven by the motor. The method is based on an adaptive signal representation and is proposed to extract weak harmonics from a noisy current signal, especially in the presence of additive interference caused by transient modulation waves. As an application, a rotor unbalance experiment of rotating machinery driven by an induction motor is carried out. The result shows that the eccentricity harmonic magnitude of a current signal obtained by the method represents the rotor unbalance conditions sensitively. Vibration analysis is used to validate the proposed method.
A Robust Source Coding Watermark Technique Based on Magnitude DFT Decomposition
Sushil Kumar
2012-07-01
Image watermarking is considered a powerful tool for copyright protection, content authentication, fingerprinting and for protecting intellectual property. We present in this paper a watermarking algorithm based on block-wise changing of the magnitude in the DFT domain. This algorithm can be used as an application for copyright protection. To provide multi-level security we have first used best self-synchronizing T-codes to encode the watermark. The encoded watermark is then embedded into the cover image using a stego-key. We have analyzed our algorithm against noise such as salt-and-pepper, Gaussian and speckle.
Shi, Changfa; Cheng, Yuanzhi; Wang, Jinke; Wang, Yadong; Mori, Kensaku; Tamura, Shinichi
2017-02-22
One major limiting factor that prevents the accurate delineation of human organs has been the presence of severe pathology and pathology affecting organ borders. Overcoming these limitations is precisely the concern of this study. We propose an automatic method for accurate and robust pathological organ segmentation from CT images. The method is grounded in the active shape model (ASM) framework. It leverages techniques from low-rank and sparse decomposition (LRSD) theory to robustly recover a subspace from grossly corrupted data. We first present a population-specific LRSD-based shape prior model, called LRSD-SM, to handle non-Gaussian gross errors caused by weak and misleading appearance cues of large lesions, complex shape variations, and poor adaptation to the finer local details, in a unified framework. For the shape model initialization, we introduce a method based on a patient-specific LRSD-based probabilistic atlas (PA), called LRSD-PA, to deal with large errors in atlas-to-target registration and low likelihood of the target organ. Furthermore, to make our segmentation framework more efficient and robust against local minima, we develop a hierarchical ASM search strategy. Our method is tested on the SLIVER07 database for the liver segmentation competition, and ranks third among all published state-of-the-art automatic methods. Our method is also evaluated on pathological organs (pathological liver and right lung) from 95 clinical CT scans and its results are compared with three closely related methods. The applicability of the proposed method to segmentation of various pathological organs (including some highly severe cases) is demonstrated with good results in both quantitative and qualitative experimentation; our segmentation algorithm can delineate organ boundaries that reach a level of accuracy comparable with those of human raters.
Nantian Huang
2016-11-01
Mechanical fault diagnosis of high-voltage circuit breakers (HVCBs) based on vibration signal analysis is one of the most significant issues in improving reliability and reducing outage costs for power systems. The limited training samples and fault types available for HVCBs cause existing mechanical fault diagnostic methods to easily misrecognize new fault types, for which no training samples exist, as either a normal condition or a wrong fault type. A new mechanical fault diagnosis method for HVCBs based on variational mode decomposition (VMD) and a multi-layer classifier (MLC) is proposed to improve the accuracy of fault diagnosis. First, HVCB vibration signals during operation are measured using an acceleration sensor. Second, a VMD algorithm is used to decompose the vibration signals into several intrinsic mode functions (IMFs). The IMF matrix is divided into submatrices to compute the local singular values (LSV). The maximum singular value of each submatrix is selected as a feature for fault diagnosis. Finally, an MLC composed of two one-class support vector machines (OCSVMs) and a support vector machine (SVM) is constructed to identify the fault type. Two layers of independent OCSVMs are adopted to distinguish normal or fault conditions with known or unknown fault types, respectively. On this basis, the SVM recognizes the specific fault type. Real diagnostic experiments are conducted with a real SF6 HVCB in normal and fault states. Three different faults (i.e., jam fault of the iron core, looseness of the base screw, and poor lubrication of the connecting lever) are simulated in a field experiment on a real HVCB to test the feasibility of the proposed method. Results show that the classification accuracy of the new method is superior to that of other traditional methods.
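The LSV feature step can be sketched as follows. The column-wise splitting of the IMF matrix is an assumption where the abstract leaves the exact submatrix layout unspecified.

```python
import numpy as np

def local_singular_values(imf_matrix, n_sub):
    """Split the IMF matrix into n_sub column-wise submatrices and
    return the largest singular value of each as the feature vector.

    `imf_matrix` is an (n_imfs, n_samples) array assumed to come from
    a VMD routine (not reproduced here)."""
    subs = np.array_split(np.asarray(imf_matrix, dtype=float), n_sub, axis=1)
    return np.array([np.linalg.svd(s, compute_uv=False)[0] for s in subs])
```

The resulting fixed-length vector is what the OCSVM/SVM layers described above would consume.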
Thirman, Jonathan, E-mail: thirman@berkeley.edu; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu [Department of Chemistry, Kenneth S. Pitzer Center for Theoretical Chemistry, University of California, Berkeley, Berkeley, California 94720 (United States)
2015-08-28
An energy decomposition analysis (EDA) of intermolecular interactions is proposed for second-order Møller–Plesset perturbation theory (MP2) based on absolutely localized molecular orbitals (ALMOs), as an extension to a previous ALMO-based EDA for self-consistent field methods. It decomposes the canonical MP2 binding energy by dividing the double excitations that contribute to the MP2 wave function into classes based on how the excitations involve different molecules. The MP2 contribution to the binding energy is decomposed into four components: frozen interaction, polarization, charge transfer, and dispersion. Charge transfer is defined by excitations that change the number of electrons on a molecule, dispersion by intermolecular excitations that do not transfer charge, and polarization and frozen interactions by intra-molecular excitations. The final two are separated by evaluations of the frozen, isolated wave functions in the presence of the other molecules, with adjustments for orbital response. Unlike previous EDAs for electron correlation methods, this one includes components for the electrostatics, which is vital as adjustment to the electrostatic behavior of the system is in some cases the dominant effect of the treatment of electron correlation. The proposed EDA is then applied to a variety of different systems to demonstrate that all proposed components behave correctly. This includes systems with one molecule and an external electric perturbation to test the separation between polarization and frozen interactions and various bimolecular systems in the equilibrium range and beyond to test the rest of the EDA. We find that it performs well on these tests. We then apply the EDA to a halogen bonded system to investigate the nature of the halogen bond.
Wei Kong
2014-01-01
Alzheimer's disease (AD) is the most common form of dementia and leads to irreversible neurodegenerative damage of the brain. Finding the dynamic responses of genes, signaling proteins, transcription factor (TF) activities, and regulatory networks over the progressively deteriorative course of AD would represent a significant advance in discovering the pathogenesis of AD. However, high-throughput technologies for measuring TF activities are not yet available on a genome-wide scale. In this study, based on DNA microarray gene expression data and a priori information on TFs, the network component analysis (NCA) algorithm is applied to determine the TF activities and their regulatory influences on target genes (TGs) in incipient, moderate, and severe AD. Based on that, the dynamical gene regulatory networks of the deteriorative courses of AD were reconstructed. To select significant genes which are differentially expressed in different courses of AD, independent component analysis (ICA), which performs better than traditional clustering methods and can successfully group one gene into different meaningful biological processes, was used. The molecular biological analysis showed that the changes of TF activities and the interactions of signaling proteins in mitosis, cell cycle, immune response, and inflammation play an important role in the deterioration of AD.
Adaptation of motor imagery EEG classification model based on tensor decomposition
Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Keng Ang, Kai; Ong, Sim Heng
2014-10-01
Objective. Session-to-session nonstationarity is inherent in brain-computer interfaces based on electroencephalography. The objective of this paper is to quantify the mismatch between the training model and test data caused by nonstationarity and to adapt the model towards minimizing the mismatch. Approach. We employ a tensor model to estimate the mismatch in a semi-supervised manner, and the estimate is regularized in the discriminative objective function. Main results. The performance of the proposed adaptation method was evaluated on a dataset recorded from 16 subjects performing motor imagery tasks on different days. The classification results validated the advantage of the proposed method in comparison with other regularization-based or spatial filter adaptation approaches. Experimental results also showed that there is a significant correlation between the quantified mismatch and the classification accuracy. Significance. The proposed method approached the nonstationarity issue from the perspective of data-model mismatch, which is more direct than data variation measurement. The results also demonstrated that the proposed method is effective in enhancing the performance of the feature extraction model.
Matsubara, Hiroki; Kikugawa, Gota; Ishikiriyama, Mamoru; Yamashita, Seiji; Ohara, Taku
2017-09-01
Thermal conductivity of a material can be comprehended as being composed of microscopic building blocks relevant to the energy transfer due to a specific microscopic process or structure. The building block is called the partial thermal conductivity (PTC). The concept of PTC is essential to evaluate the contributions of various molecular mechanisms to heat conduction and has been providing detailed knowledge of the contribution. The PTC can be evaluated by equilibrium molecular dynamics (EMD) and non-equilibrium molecular dynamics (NEMD) in different manners: the EMD evaluation utilizes the autocorrelation of spontaneous heat fluxes in an equilibrium state whereas the NEMD one is based on stationary heat fluxes in a non-equilibrium state. However, it has not been fully discussed whether the two methods give the same PTC or not. In the present study, we formulate a Green-Kubo relation, which is necessary for EMD to calculate the PTCs equivalent to those by NEMD. Unlike the existing theories, our formulation is based on the local equilibrium hypothesis to describe a clear connection between EMD and NEMD simulations. The equivalence of the two derivations of PTCs is confirmed by the numerical results for liquid methane and butane. The present establishment of the EMD-NEMD correspondence makes the MD analysis of PTCs a robust way to clarify the microscopic origins of thermal conductivity.
Research on Space Target Recognition Algorithm Based on Empirical Mode Decomposition
Shen Yiying
2013-07-01
A space target recognition algorithm based on the time series of radar cross section (RCS) is proposed in this paper to solve the problem of space target recognition in active radar systems. In the algorithm, the EMD method is applied for the first time to extract features of the RCS time series. The normalized instantaneous frequencies of the high-frequency intrinsic mode functions obtained by EMD are used as the feature values for recognition, and an effective target recognition criterion is established. The effectiveness and stability of the algorithm are verified with both simulated and real data. In addition, the algorithm can reduce the estimation bias of RCS caused by inaccurate evaluation, and it is of great significance in promoting the target recognition ability of narrow-band radar in practice.
Stable Gait Generation of a Quasi-Passive Biped Walking Robot Based on Mode Decomposition
Matsumoto, Itaru
A passive walker is a robot which can walk down a shallow slope without active control or energy input, being powered only by gravity. This paper proposes a control law that can stabilize the gait of a quasi-passive walker by manipulating torque at the hip joint. The motion of the quasi-passive walker is divided into two modes: one is a sinusoidal mode and the other a hyperbolic sinusoidal mode. The controller is designed with a servo system which forces the motion of the sinusoidal mode to track the reference input signal obtained from the phase-plane trajectory of the hyperbolic sinusoidal mode. The generated gait is quite natural, because the input of the servo system is made based on the system dynamics. The results of simulations have demonstrated the effectiveness of the proposed control law.
Features of energy distribution for blast vibration signals based on wavelet packet decomposition
LING Tong-hua; LI Xi-bing; DAI Ta-gen; PENG Zhen-bin
2005-01-01
Blast vibration analysis constitutes the foundation for studying the control of blasting vibration damage and provides the precondition for controlling blasting vibration. Based on the characteristics of short-time nonstationary random signals, the laws of energy distribution are investigated for blasting vibration signals under different blasting conditions by means of the wavelet packet analysis technique. The characteristics of the wavelet transform and wavelet packet analysis are introduced. Then, blasting vibration signals from different blasting conditions are analysed by the wavelet packet analysis technique using MATLAB, and the energy distribution over different frequency bands is obtained. It is concluded that the energy distribution of blasting vibration signals varies with maximum decking charge, millisecond delay time and the distance between the explosion and the measuring point. The results show that the wavelet packet analysis method is an effective means for studying the blasting seismic effect in its entirety, especially for constituting velocity-frequency criteria.
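A minimal numpy stand-in for the wavelet-packet energy computation (the study itself uses MATLAB's wavelet toolbox) is a Haar packet tree with relative band energies; Haar is used here only because it keeps the sketch self-contained.

```python
import numpy as np

def haar_packet_energies(x, depth):
    """Relative energy in each terminal node of a Haar wavelet-packet
    tree of the given depth (nodes in natural order).

    Assumes len(x) is divisible by 2**depth. Because the Haar
    transform is orthogonal, total energy is preserved, so the
    returned ratios describe how signal energy splits across bands."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(depth):
        nxt = []
        for b in bands:
            a = (b[0::2] + b[1::2]) / np.sqrt(2.0)  # approximation half
            d = (b[0::2] - b[1::2]) / np.sqrt(2.0)  # detail half
            nxt += [a, d]
        bands = nxt
    e = np.array([np.sum(b ** 2) for b in bands])
    return e / e.sum()
```

Comparing these band-energy vectors across records with different decking charges, delay times and distances reproduces the kind of energy-distribution analysis the abstract describes.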
Effect of iron ion on doxycycline photocatalytic and Fenton-based autocatatalytic decomposition.
Bolobajev, Juri; Trapido, Marina; Goi, Anna
2016-06-01
Doxycycline plays a key role in Fe(III)-to-Fe(II) redox cycling and therefore in controlling the overall reaction rate of the Fenton-based process (H2O2/Fe(III)). This highlights the autocatalytic profile of doxycycline degradation. Ferric iron reduction in the presence of doxycycline relied on doxycycline-to-Fe(III) complex formation with an ensuing reductive release of Fe(II). The lower OH-to-contaminant ratio in the initial H2O2/Fe(III) oxidation step than in that of the classical Fenton process (H2O2/Fe(II)) decreased the doxycycline degradation rate. The quantum yield of doxycycline in direct UV-C photolysis was 3.1 × 10⁻³ M E⁻¹. Although doxycycline-Fe(III) complexes could adversely affect doxycycline degradation in the UV/Fe(III) system, some acceleration of the rate was observed upon irradiation of the Fe(III)-hydroxy complex. Acidic reaction media (pH 3.0) and a molar ratio of DC/Fe(III) = 2/1 favored the complex formation. The similar doxycycline degradation rates and the complete mineralization achieved within 120 min (Table 1) with both UV/H2O2 and UV/H2O2/Fe(III) indicated the insubstantial role of the reduction of Fe(III) to Fe(II) in the efficacy of the UV/H2O2/Fe(III) system. Thus, factors such as doxycycline's ability to form complexes with ferric iron and the ability of the complexes to participate in a reductive pathway should be considered at a technological level in process optimization, with chemistry based on iron ion catalysis to enhance the doxycycline oxidative pathway.
Ceramic design concepts based on stress distribution analysis.
Esquivel-Upshaw, J F; Anusavice, K J
2000-08-01
This article discusses general design concepts involved in fabricating ceramic and metal-ceramic restorations based on scientific stress distribution data. These include the effects of ceramic layer thickness, modulus of elasticity of supporting substrates, direction of applied loads, intraoral stress, and crown geometry on the susceptibility of certain restoration designs to fracture.
A new physics-based method for detecting weak nuclear signals via spectral decomposition
Chan, Kung-Sik, E-mail: kung-sik-chan@uiowa.edu [Department of Statistics and Actuarial Science, University of Iowa, Iowa City, IA 52242 (United States); Li, Jinzheng, E-mail: jinzheng-li@uiowa.edu [Department of Statistics and Actuarial Science, University of Iowa, Iowa City, IA 52242 (United States); Eichinger, William, E-mail: william-eichinger@uiowa.edu [Department of Civil and Environmental Engineering, University of Iowa, Iowa City, IA 52242 (United States); Bai, Erwei, E-mail: er-wei-bai@uiowa.edu [Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242 (United States)
2012-03-01
We propose a new physics-based method to determine the presence of the spectral signature of one or more nuclides from a poorly resolved spectrum with weak signatures. The method differs from traditional methods that rely primarily on peak finding algorithms. The new approach considers each of the signatures in the library to be a linear combination of subspectra. These subspectra are obtained by assuming a signature consisting of just one of the unique gamma rays emitted by the nuclei. We propose a Poisson regression model for deducing which nuclei are present in the observed spectrum. In recognition that a radiation source generally comprises few nuclear materials, the underlying Poisson model is sparse, i.e. most of the regression coefficients are zero (positive coefficients correspond to the presence of nuclear materials). We develop an iterative algorithm for a penalized likelihood estimation that promotes sparsity. We illustrate the efficacy of the proposed method by simulations using a variety of poorly resolved, low signal-to-noise ratio (SNR) situations, which show that the proposed approach enjoys excellent empirical performance even with an SNR as low as −15 dB.
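A minimal sketch of such a penalized Poisson fit is a projected-gradient iteration on the l1-penalized negative log-likelihood. This is an illustrative stand-in, not the authors' exact algorithm; the small background term `b0` is an assumption added here to keep the Poisson rates strictly positive.

```python
import numpy as np

def sparse_poisson_fit(S, y, lam=0.1, lr=2e-3, n_iter=8000):
    """Nonnegative, l1-penalized Poisson regression by projected gradient.

    Model: y_i ~ Poisson((S @ beta)_i + b0), beta >= 0, where the
    columns of S are library subspectra and y is the observed count
    spectrum. Minimizes NLL(beta) + lam * sum(beta) over beta >= 0."""
    b0 = 1e-3  # tiny background keeps the rates positive (assumption)
    beta = np.full(S.shape[1], 0.1)
    for _ in range(n_iter):
        mu = S @ beta + b0
        # Gradient of sum(mu - y*log(mu)) plus the l1 penalty term.
        grad = S.T @ (1.0 - y / mu) + lam
        beta = np.maximum(beta - lr * grad, 0.0)  # project onto beta >= 0
    return beta
```

The nonzero entries of the returned coefficient vector flag which library nuclides the fit considers present, mirroring the sparsity argument in the abstract.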
Residual Stress Analysis Based on Acoustic and Optical Methods
Sanichiro Yoshida
2016-02-01
Co-application of acoustoelasticity and optical interferometry to residual stress analysis is discussed. The underlying idea is to combine the advantages of both methods. Acoustoelasticity is capable of evaluating a residual stress absolutely, but it is a single-point measurement. Optical interferometry is able to measure deformation, yielding two-dimensional, full-field data, but it is not suitable for absolute evaluation of residual stresses. By theoretically relating the deformation data to residual stresses, and calibrating them with the absolute residual stress evaluated at a reference point, it is possible to measure residual stresses quantitatively, nondestructively and two-dimensionally. The feasibility of the idea has been tested with a butt-jointed dissimilar plate specimen. A steel plate 18.5 mm wide, 50 mm long and 3.37 mm thick is braze-jointed to a cemented carbide plate of the same dimensions along the 18.5 mm side. Acoustoelasticity evaluates the elastic modulus at reference points via acoustic velocity measurement. A tensile load is applied to the specimen at a constant pulling rate in a stress range substantially lower than the yield stress. Optical interferometry measures the resulting acceleration field. Based on the theory of harmonic oscillation, the acceleration field is correlated qualitatively to compressive and tensile residual stresses. The acoustic and optical results show reasonable agreement in the compressive and tensile residual stresses, indicating the feasibility of the idea.
Yang, Yang; Ren, R.-C.; Cai, Ming
2016-12-01
The stratosphere has been cooling under global warming, the causes of which are not yet well understood. This study applied a process-based decomposition method (CFRAM; Coupled Surface-Atmosphere Climate Feedback Response Analysis Method) to the simulation results of a Coupled Model Intercomparison Project, phase 5 (CMIP5) model (CCSM4; Community Climate System Model, version 4) to identify the radiative and non-radiative processes responsible for the stratospheric cooling. By focusing on the long-term stratospheric temperature changes between the "historical run" and the 8.5 W m-2 Representative Concentration Pathway (RCP8.5) scenario, this study demonstrates that changes in radiation due to CO2, ozone and water vapor are the main drivers of stratospheric cooling in both winter and summer. They contribute to the cooling by reducing the net radiative energy (mainly downward radiation) received by the stratospheric layer. In terms of the global average, their contributions are around -5, -1.5, and -1 K, respectively. However, the observed stratospheric cooling is much weaker than the cooling by radiative processes. This is because changes in atmospheric dynamic processes act to strongly mitigate the radiative cooling, yielding a roughly 4 K warming on a global-average basis. In particular, the much stronger/weaker dynamic warming in the northern/southern winter extratropics is associated with an increase in planetary-wave activity in the northern winter, but a slight decrease in the southern winter hemisphere, under global warming. More importantly, although radiative processes dominate the stratospheric cooling, the spatial patterns are largely determined by the non-radiative effects of dynamic processes.
Mode decomposition evolution equations.
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2012-03-01
Partial differential equation (PDE) based methods have become some of the most powerful tools for exploring the fundamental problems in signal processing, image processing, computer vision, machine vision and artificial intelligence in the past two decades. The advantages of PDE-based approaches are that they can be made fully automatic and robust for the analysis of images, videos and high-dimensional data. A fundamental question is whether one can use PDEs to perform all the basic tasks in image processing. If one can devise PDEs to perform full-scale mode decomposition for signals and images, the modes thus generated would be very useful for secondary processing to meet the needs of various types of signal and image processing. Despite great progress in PDE-based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis are limited to PDE-based low-pass filters and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE-based methods for full-scale mode decomposition. This limitation of most current PDE-based image/signal processing methods is addressed in the proposed work, in which we introduce a family of mode decomposition evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE-based high-pass filter (Europhys. Lett., 59(6): 814, 2002) by using arbitrarily high-order PDE-based low-pass filters introduced by Wei (IEEE Signal Process. Lett., 6(7): 165, 1999). The use of arbitrarily high-order PDEs is essential to the frequency localization in the mode decomposition. Similar to the wavelet transform, the present MoDEEs have a controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. However, modes generated from the present approach are in the spatial or time domain and can be
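As a minimal illustration of the PDE low-pass/high-pass idea (not the MoDEE construction itself, which relies on arbitrarily high-order PDEs for sharper frequency localization), the sketch below uses the explicit heat equation as a low-pass filter and splits a signal into a slow and a fast mode; the grid size, step count and diffusion coefficient are arbitrary choices for the example.

```python
import numpy as np

def pde_lowpass(x, n_steps, nu=0.2):
    """Explicit heat-equation smoothing (the prototypical PDE low-pass filter).
    Stable for nu <= 0.5; endpoints are held fixed."""
    u = x.astype(float).copy()
    for _ in range(n_steps):
        u[1:-1] += nu * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

def pde_mode_split(x, n_steps=200, nu=0.2):
    """Split a signal into a slow (low-pass) and fast (high-pass) mode."""
    low = pde_lowpass(x, n_steps, nu)
    return low, x - low

n = 256
i = np.arange(n)
slow = np.sin(2 * np.pi * 2 * i / n)           # low-frequency mode
fast = 0.5 * np.sin(2 * np.pi * 40 * i / n)    # high-frequency mode
x = slow + fast
low, high = pde_mode_split(x)
```

The diffusion damps high wavenumbers geometrically per step, so after enough steps the smoothed signal retains essentially only the slow component, and the difference recovers the fast one.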
刘良兵; 陶超; 刘晓峻; 李先利; 张海涛
2015-01-01
Pulse decomposition has proven efficient for analyzing complicated signals, and it is introduced into photoacoustic and thermoacoustic tomography to eliminate reconstruction distortions caused by negative lobes. During image reconstruction, negative lobes introduce errors in the estimation of acoustic pulse amplitude, which is closely related to the distribution of the absorption coefficient. The negative-lobe error degrades imaging quality seriously in limited-view conditions because it cannot be offset as well as in full-view conditions. Therefore, a pulse decomposition formula is derived in detail to eliminate the negative-lobe error and is incorporated into the popular delay-and-sum method to better reconstruct the image without additional complicated computation. Numerical experiments show that pulse decomposition noticeably improves image quality in limited-view conditions, for example by separating adjacent absorbers and revealing a small absorber despite disturbance from a large absorber nearby.
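A bare-bones delay-and-sum (DAS) beamformer, the baseline into which the paper's pulse-decomposition correction plugs, can be sketched as follows. The circular sensor geometry, sound speed, sampling rate and Gaussian pulse model are all assumptions of the example, not details from the paper.

```python
import numpy as np

def delay_and_sum(signals, sensor_pos, pixels, c, fs):
    """For each pixel, sum every sensor's sample at the pixel-to-sensor
    time of flight (the standard DAS beamformer)."""
    n_sensors, n_t = signals.shape
    img = np.zeros(len(pixels))
    for k in range(len(pixels)):
        tof = np.linalg.norm(pixels[k] - sensor_pos, axis=1) / c
        idx = np.round(tof * fs).astype(int)
        valid = idx < n_t
        img[k] = signals[np.arange(n_sensors)[valid], idx[valid]].sum()
    return img

# Synthetic point absorber inside a full-view circular sensor array.
c, fs, n_t = 1500.0, 20e6, 512
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
sensors = 0.02 * np.column_stack([np.cos(angles), np.sin(angles)])
src = np.array([0.003, -0.002])
t = np.arange(n_t) / fs
tof_src = np.linalg.norm(sensors - src, axis=1) / c
signals = np.exp(-((t[None, :] - tof_src[:, None]) * fs / 3.0) ** 2)

xs = np.linspace(-0.005, 0.005, 41)
pixels = np.array([(x, y) for y in xs for x in xs])
img = delay_and_sum(signals, sensors, sensors_pixels := pixels, c, fs)
```

With full-view coverage the beamformed image peaks at the true source; the paper's point is that in limited-view conditions the uncorrected pulses (with negative lobes) degrade this picture, which the decomposition formula repairs.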
Impact of Stress and Glucocorticoids on Schema-Based Learning.
Kluen, Lisa Marieke; Nixon, Patricia; Agorastos, Agorastos; Wiedemann, Klaus; Schwabe, Lars
2016-12-14
Pre-existing knowledge, a 'schema', facilitates the encoding, consolidation, and retrieval of schema-relevant information. Such schema-based memory is key to every form of education and provides intriguing insights into the integration of new information and prior knowledge. Stress is known to have a critical impact on memory processes, mainly through the action of glucocorticoids and catecholamines. However, whether stress and these major stress mediators affect schema-based learning is completely unknown. To address this question, we performed two experiments in which participants acquired a schema on day 1 and learned schema-related as well as schema-unrelated information on day 2. In the first experiment, participants underwent a stress or control manipulation either immediately or about 25 min before schema-based memory testing. The second experiment tested whether glucocorticoid and/or noradrenergic activation is sufficient to modulate schema-based memory. To this end, participants orally received a placebo, hydrocortisone, the α2-adrenoceptor antagonist yohimbine (leading to increased noradrenergic stimulation), or both drugs before completing the schema-based memory test. Our data indicate that stress, irrespective of the exact timing of the stress exposure, impaired schema-based learning while leaving learning of schema-unrelated information intact. A very similar effect was obtained after hydrocortisone, but not yohimbine, administration. These data show that stress disrupts participants' ability to benefit from prior knowledge during learning and that glucocorticoid activation is sufficient to produce this effect. Our findings provide novel insights into the impact of stress and stress hormones on the dynamics of human memory and have important practical implications, specifically for educational contexts. Neuropsychopharmacology advance online publication, 14 December 2016; doi:10.1038/npp.2016.256.
Sónia Cristina
2016-05-01
Full Text Available The European Space Agency has acquired 10 years of data on the temporal and spatial distribution of phytoplankton biomass from the MEdium Resolution Imaging Spectrometer (MERIS) sensor for ocean color. The phytoplankton biomass was estimated with the MERIS product Algal Pigment Index 1 (API 1). Seasonal-Trend decomposition of time series based on Loess (STL) identified the temporal variability of the dynamical features in the MERIS products for water-leaving reflectance (ρw(λ)) and API 1. The advantages of STL are that it can identify seasonal components changing over time, it is responsive to nonlinear trends, and it is robust in the presence of outliers. One of the novelties in this study is the development and implementation of an automatic procedure, stl.fit(), that searches for the best data model by varying the values of the smoothing parameters and selecting the model with the lowest error measure. This procedure was applied to 10 years of monthly time series from Sagres in the southwestern Iberian Peninsula at three stations, 2, 10 and 18 km from the shore. Decomposing the MERIS products into seasonal, trend and irregular components with stl.fit(), the ρw(λ) indicated dominance of the seasonal and irregular components, while API 1 was mainly dominated by the seasonal component, with an increasing effect from inshore to offshore. A comparison of the seasonal components of ρw(λ) and the API 1 product showed that the variations decrease over this time period due to changes in phytoplankton functional types. Furthermore, the inter-annual seasonal variation of API 1 showed the influence of upwelling events and in which month of the year these occur at each of the three Sagres stations. The stl.fit() procedure is a good tool for any remote sensing study of time series, particularly those addressing inter-annual variations. This procedure will be made available in R software.
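The seasonal/trend/irregular split that STL performs can be illustrated with classical moving-average decomposition, a much simpler relative of STL (no Loess smoothing, no robustness weights, no automatic parameter search); `classical_decompose` and the synthetic monthly series are invented for the sketch.

```python
import numpy as np

def classical_decompose(y, period=12):
    """Classical additive decomposition: centered moving-average trend,
    month-averaged seasonal component, remainder. A simple stand-in for STL."""
    n, half = len(y), period // 2
    w = np.r_[0.5, np.ones(period - 1), 0.5] / period  # even-period centered MA
    trend = np.full(n, np.nan)
    for i in range(half, n - half):
        trend[i] = w @ y[i - half:i + half + 1]
    detrended = y - trend
    seasonal = np.array([np.nanmean(detrended[m::period]) for m in range(period)])
    seasonal -= seasonal.mean()                        # center the seasonal cycle
    seas_full = np.tile(seasonal, n // period + 1)[:n]
    return trend, seas_full, y - trend - seas_full

t = np.arange(120)                                     # ten 'years' of monthly data
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12)        # linear trend + annual cycle
trend, seas, rem = classical_decompose(y)
```

On this idealized series the moving average recovers the linear trend exactly away from the edges, so the monthly averages recover the seasonal cycle and the remainder is essentially zero; STL generalizes this by letting the seasonal component evolve over time.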
Zhou, Qingping; Jiang, Haiyan; Wang, Jianzhou; Zhou, Jianling
2014-10-15
Exposure to high concentrations of fine particulate matter (PM₂.₅) can cause serious health problems because PM₂.₅ contains microscopic solid or liquid droplets that are sufficiently small to be ingested deep into human lungs. Thus, daily prediction of PM₂.₅ levels is notably important for regulatory plans that inform the public and restrict social activities in advance when harmful episodes are foreseen. A hybrid EEMD-GRNN (ensemble empirical mode decomposition-general regression neural network) model based on data preprocessing and analysis is proposed in this paper for one-day-ahead prediction of PM₂.₅ concentrations. The EEMD part is utilized to decompose the original PM₂.₅ data into several intrinsic mode functions (IMFs), while the GRNN part is used for the prediction of each IMF. The hybrid EEMD-GRNN model is trained using input variables obtained from a principal component regression (PCR) model to remove redundancy. These input variables accurately and succinctly reflect the relationships between PM₂.₅ and both air quality and meteorological data. The model is trained with data from January 1 to November 1, 2013 and is validated with data from November 2 to November 21, 2013 in Xi'an, China. The experimental results show that the developed hybrid EEMD-GRNN model outperforms a single GRNN model without EEMD, a multiple linear regression (MLR) model, a PCR model, and a traditional autoregressive integrated moving average (ARIMA) model. The hybrid model, with fast and accurate results, can be used to develop rapid air quality warning systems.
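The decomposition half of the pipeline can be illustrated with a toy EMD sifting loop. It uses linear-interpolation envelopes instead of the usual cubic splines and omits the ensemble/noise-injection step that turns EMD into EEMD, so it is far simpler than what the paper uses; its one guaranteed property, exploited below, is that the IMFs plus the residue reconstruct the input exactly.

```python
import numpy as np

def _envelope(h, idx):
    # envelope through the given extrema via linear interpolation
    # (cubic splines are the standard choice)
    return np.interp(np.arange(len(h)), idx, h[idx]) if len(idx) >= 2 else None

def emd(x, max_imfs=4, n_sift=10):
    """Toy empirical mode decomposition: repeatedly sift out the fastest
    oscillation; sum(imfs) + residue == x by construction."""
    imfs, r = [], x.astype(float).copy()
    for _ in range(max_imfs):
        h = r.copy()
        for _ in range(n_sift):
            maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
            minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
            up, lo = _envelope(h, maxima), _envelope(h, minima)
            if up is None or lo is None:
                break
            h = h - (up + lo) / 2.0          # remove the local mean
        imfs.append(h)
        r = r - h
        n_max = np.sum((r[1:-1] > r[:-2]) & (r[1:-1] > r[2:]))
        if n_max < 2:                        # residue has no oscillation left
            break
    return imfs, r

t = np.linspace(0, 1, 512)
x = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * t)
imfs, residue = emd(x)
```

In the EEMD-GRNN scheme, each IMF (and the residue) would then be forecast separately and the per-component predictions summed.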
Sengupta, Tapan K.; Gullapalli, Atchyut
2016-11-01
A spinning cylinder rotating about its axis experiences a transverse force (lift); this basic aerodynamic phenomenon is known in textbooks as the Robins-Magnus effect. Prandtl studied this flow with an inviscid irrotational model and postulated an upper limit on the lift experienced by the cylinder at a critical rotation rate, the non-dimensional ratio of the surface speed due to rotation to the oncoming free-stream speed. Prandtl predicted a maximum lift coefficient of CLmax = 4π at the critical rotation rate of two. In recent times, evidence shows violations of this upper limit, as in the experiments of Tokumaru and Dimotakis ["The lift of a cylinder executing rotary motions in a uniform flow," J. Fluid Mech. 255, 1-10 (1993)] and in the computed solution in Sengupta et al. ["Temporal flow instability for Magnus-robins effect at high rotation rates," J. Fluids Struct. 17, 941-953 (2003)]. In the latter reference, this was explained as a temporal instability affecting the flow at higher Reynolds numbers and rotation rates (>2). Here, we analyze the flow past a rotating cylinder at a super-critical rotation rate (=2.5) by enstrophy-based proper orthogonal decomposition (POD) of direct simulation results. POD identifies the most energetic modes and enables flow-field reconstruction with a reduced number of modes. One of the motivations for the present study is to explain the shedding of puffs of vortices at low Reynolds number (Re = 60) and high rotation rate, due to an instability originating in the vicinity of the cylinder, using the Navier-Stokes equation (NSE) computed from t = 0 to t = 300 following an impulsive start. This instability is also explained through the disturbance mechanical energy equation, which was established earlier in Sengupta et al. ["Temporal flow instability for Magnus-robins effect at high rotation rates," J. Fluids Struct. 17, 941-953 (2003)].
Polyakov Vyacheslav Sergeevich
2012-07-01
The optimal composite additive, which increases the stiffening time of the cement grout and improves the water resistance and compressive strength of concrete, is a composition of polyacrylates and polymethacrylates with products of the thermal decomposition of polyamide-6 and low-molecular-weight polyethylene in a weight ratio of 1:1:0.5.
Davydov, S. Yu.; Lebedev, A. A.; Lebedev, S. P.; Sitnikova, A. A.; Sorokin, L. M.
2016-12-01
The transition region of a 3C-SiC/4H-SiC heterostructure constituted by layers of the 3C and 4H polytypes has been studied. A previously proposed spinodal decomposition model was used to estimate the thickness ratio of the 4H and 3C layers in comparison with the image furnished by transmission electron microscopy.
Siang-Piao Chai; Sharif Hussein Sharif Zein; Abdul Rahman Mohamed
2006-01-01
Direct decomposition of methane was carried out in a fixed-bed reactor at 700 °C for the production of COx-free hydrogen and carbon nanofibers. The catalytic performance of NiO-M/SiO2 catalysts (where M = AgO, CoO, CuO, FeO, MnOx and MoO) in methane decomposition was investigated. The experimental results indicate that among the tested catalysts, NiO/SiO2 promoted with CuO gives the highest hydrogen yield. In addition, the examination of the most suitable catalyst support, including Al2O3, CeO2, La2O3, SiO2, and TiO2, shows that the decomposition of methane over NiO-CuO favors the SiO2 support. Furthermore, the optimum ratio of NiO to CuO on the SiO2 support for methane decomposition was determined. The experimental results show that the optimum weight ratio of NiO to CuO was 8:2 (w/w), since the highest yield of hydrogen was obtained over this catalyst.
Maria Grazia De Giorgi
2014-08-01
Full Text Available A high penetration of wind energy into the electricity market requires a parallel development of efficient wind power forecasting models. Different hybrid forecasting methods were applied to wind power prediction, using historical data and numerical weather predictions (NWP). A comparative study was carried out for the prediction of the power production of a wind farm located in complex terrain. The performances of Least-Squares Support Vector Machine (LS-SVM) with Wavelet Decomposition (WD) were evaluated at different time horizons and compared to hybrid Artificial Neural Network (ANN)-based methods. The results show that hybrid methods based on LS-SVM with WD mostly outperform the other methods. A decomposition of the commonly known root mean square error was beneficial for a better understanding of the origin of the differences between prediction and measurement and for comparing the accuracy of the different models. A sensitivity analysis was also carried out in order to underline the impact of each input on the network training process for the ANN. In the case of ANN with the WD technique, the sensitivity analysis was repeated on each component obtained by the decomposition.
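The wavelet-decomposition front end can be sketched with the Haar wavelet, the simplest possible choice (the paper does not necessarily use Haar). Each level splits the series into a coarse approximation and a detail band, which would then be forecast separately and recombined.

```python
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def decompose(x, levels):
    """Multilevel Haar DWT: one coarse approximation + one detail per level."""
    details, a = [], x
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    return a, details

def reconstruct(a, details):
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a

n = np.arange(64)                          # length must be divisible by 2**levels
x = np.sin(0.3 * n) + 0.1 * np.cos(2.0 * n)
approx, details = decompose(x, 3)
x_rec = reconstruct(approx, details)
```

Perfect reconstruction is what makes the scheme usable for forecasting: per-band predictions can be summed back into a prediction of the original series without systematic bias.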
Clustering via Kernel Decomposition
Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan
2006-01-01
Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be handled using standard cross-validation methods, as is demonstrated on a number of diverse data sets.
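The general flavor of the approach (a kernel affinity matrix followed by a decomposition that yields cluster assignments) can be sketched with a standard normalized-Laplacian spectral bipartition. The NMF step the authors use to obtain posterior probabilities is replaced here by a simple sign split of the second eigenvector, and the data are invented.

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Gaussian-kernel affinity + normalized graph Laplacian; the sign of the
    second-smallest eigenvector (the Fiedler vector) splits the data in two."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))                   # affinity matrix
    deg = W.sum(axis=1)
    Ln = np.eye(len(X)) - W / np.sqrt(np.outer(deg, deg))  # I - D^-1/2 W D^-1/2
    _, vecs = np.linalg.eigh(Ln)                         # ascending eigenvalues
    return (vecs[:, 1] > 0).astype(int)

# Two well-separated 1-D blobs (illustrative data).
X = np.array([0.0, 0.2, 0.4, 3.0, 3.2, 3.4]).reshape(-1, 1)
labels = spectral_bipartition(X)
```

The kernel width `sigma` plays the role of the hyperparameter the abstract proposes to select by cross-validation.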
Symmetric Tensor Decomposition
Brachat, Jerome; Comon, Pierre; Mourrain, Bernard
2010-01-01
… of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First, it permits an efficient computation … of total degree d as a sum of powers of linear forms (Waring's problem), incidence properties on secant varieties of the Veronese variety, and the representation of linear forms as a linear combination of evaluations at distinct points. Then we reformulate Sylvester's approach from the dual point of view …
Base Stress of the Opened Bottom Cylinder Structures
刘建起; 孟晓娟
2004-01-01
The base stress of the opened-bottom cylinder structure differs greatly from that of a structure with a closed bottom. By investigating the inner soil pressure on the cylinder wall and the stress at the cylinder base, obtained from model experiments, the interactions among the filler inside the cylinder, the subsoil and the cylinder are analyzed. The adjusting mechanism of frictional resistance between the inner filler and the cylinder wall during overturning of the cylinder is discussed. Based on the experimental study, a method for calculating the base stress of the opened-bottom cylinder structure is proposed. Meanwhile, formulas for calculating the effective anti-overturning ratio of the opened-bottom cylinder are derived.
Yingni Zhai
2014-10-01
Full Text Available Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job shop scheduling problems (JSP) is proposed.
Design/methodology/approach: In the algorithm, a number of sub-problems are constructed by iteratively decomposing the large-scale JSP according to the process route of each job. The solution of the large-scale JSP can then be obtained by iteratively solving the sub-problems. In order to improve the sub-problems' solving efficiency and the solution quality, a detection method for multi-bottleneck machines based on the critical path is proposed. The unscheduled operations can thereby be decomposed into bottleneck operations and non-bottleneck operations. According to the principle that "the bottleneck leads the performance of the whole manufacturing system" in the Theory of Constraints (TOC), the bottleneck operations are scheduled by a genetic algorithm for high solution quality, and the non-bottleneck operations are scheduled by dispatching rules to improve solving efficiency.
Findings: In the process of sub-problem construction, partial operations in the previously scheduled sub-problem are moved into the successive sub-problem for re-optimization. This strategy improves the solution quality of the algorithm. In the process of solving the sub-problems, evaluating a chromosome's fitness by predicting the global scheduling objective value also improves the solution quality.
Research limitations/implications: In this research, there are some assumptions which reduce the complexity of the large-scale scheduling problem: the processing route of each job is predetermined, the processing time of each operation is fixed, there is no machine breakdown, and no preemption of operations is allowed. These assumptions should be considered if the algorithm is used in an actual job shop.
Originality/value: The research provides an efficient scheduling method for the
Song, Pengfei; Trzasko, Joshua D; Manduca, Armando; Qiang, Bo; Kadirvel, Ramanathan; Kallmes, David F; Chen, Shigao
2017-04-01
Singular value decomposition (SVD)-based ultrasound blood flow clutter filters have recently demonstrated substantial improvement in clutter rejection for ultrafast plane-wave microvessel imaging, and have become the commonly used clutter filtering method for many novel ultrafast imaging applications such as functional ultrasound and super-resolution imaging. At present, however, the computational burden of SVD remains a major hurdle for practical implementation and clinical translation of this method. To address this challenge, in this study we present two blood flow clutter filtering methods based on randomized SVD (rSVD) and randomized spatial downsampling that accelerate SVD clutter filtering with minimal compromise to clutter filter performance. rSVD accelerates SVD computation by approximating the k largest singular values, while random downsampling accelerates both full SVD and rSVD by decomposing the original large data matrix into small matrices that can be processed in parallel. An in vitro blood flow phantom study with heavy tissue clutter showed significantly improved computational performance using the proposed methods with minimal deterioration of clutter filter performance (less than 3-dB reduction in blood-to-clutter ratio, less than 0.2-cm²/s² increase in flow mean squared error, less than 0.1-cm/s increase in the standard deviation of the vessel blood flow signal, and less than 0.3-cm/s increase in tissue clutter velocity for both full SVD and rSVD when the downsampling factor was less than 20×). The maximum acceleration was about threefold from randomized spatial downsampling, and approximately another threefold from rSVD. An in vivo rabbit kidney perfusion study showed that rSVD provided comparable clutter rejection performance to full SVD in vivo (the maximum difference in blood-to-clutter ratio was less than 0.6 dB), and random downsampling provided artifact-free perfusion imaging results when combined with both
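The rSVD building block fits in a few lines (a Halko-style random range finder with oversampling and power iterations); the clutter filter then simply subtracts the k most energetic components, which model slowly varying tissue. The synthetic low-rank "tissue" plus weak "blood" data below are invented for the example and are not the paper's phantom data.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, n_iter=2, seed=0):
    """Approximate the top-k SVD of A via a random range finder."""
    rng = np.random.default_rng(seed)
    Y = A @ rng.standard_normal((A.shape[1], k + oversample))
    for _ in range(n_iter):              # power iterations sharpen the estimate
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

def clutter_filter(A, k):
    """Remove the k most energetic components (the tissue clutter)."""
    U, s, Vt = randomized_svd(A, k)
    return A - U @ (s[:, None] * Vt)

# Synthetic data: rank-3 'tissue' clutter + small 'blood' signal.
rng = np.random.default_rng(1)
U0, _ = np.linalg.qr(rng.standard_normal((200, 3)))
V0, _ = np.linalg.qr(rng.standard_normal((100, 3)))
tissue = U0 @ np.diag([100.0, 50.0, 20.0]) @ V0.T
A = tissue + 0.1 * rng.standard_normal((200, 100))
filtered = clutter_filter(A, 3)
s_fast = randomized_svd(A, 3)[1]
s_full = np.linalg.svd(A, compute_uv=False)
```

Because rSVD only touches k + oversample directions, its cost scales with k rather than with the full matrix rank, which is the source of the speedup the paper reports.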
Piecewise-adaptive decomposition methods
Ramos, J.I. [Room I-320-D, E.T.S. Ingenieros Industriales, Universidad de Malaga, Plaza El Ejido, s/n, 29013 Malaga (Spain)], E-mail: jirs@lcc.uma.es
2009-05-30
Piecewise-adaptive decomposition methods are developed for the solution of nonlinear ordinary differential equations. These methods are based on theorems showing that Adomian's decomposition method is a homotopy perturbation technique and coincides with Taylor's series expansion for autonomous ordinary differential equations. Piecewise-decomposition methods provide series solutions in intervals subject to continuity conditions at the end points of each interval, and their adaptation is based on the use of either a fixed number of approximants and a variable step size, a variable number of approximants and a fixed step size, or a variable number of approximants and a variable step size. It is shown that the appearance of noise terms in the decomposition method is related to both the differential equation and the manner in which the homotopy parameter is introduced, especially for the Lane-Emden equation. It is also shown that, in order to avoid the use of numerical quadrature, there is a simple way of introducing the homotopy parameter in the two first-order ordinary differential equations that correspond to the second-order Thomas-Fermi equation. Finally, it is shown that the piecewise homotopy perturbation methods presented here provide more accurate results than a modified Adomian decomposition technique that makes use of Padé approximants, and than the homotopy analysis method, for the Thomas-Fermi equation.
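The piecewise idea can be illustrated on the simplest possible case, y' = y with y(0) = 1, where the Adomian/Taylor series on each sub-interval is a truncated exponential series restarted from the previous endpoint. The step count and series order below are arbitrary choices for the example, not values from the paper.

```python
import math

def adomian_step(y0, h, terms=6):
    # truncated series solution of y' = y on one sub-interval:
    # y(t0 + h) = y0 * sum_{n < terms} h^n / n!
    return y0 * sum(h ** n / math.factorial(n) for n in range(terms))

def piecewise_adomian(y0, T, steps, terms=6):
    """Restart the truncated series at the end point of each sub-interval."""
    y = y0
    for _ in range(steps):
        y = adomian_step(y, T / steps, terms)
    return y

one_piece = adomian_step(1.0, 2.0)            # single global series on [0, 2]
piecewise = piecewise_adomian(1.0, 2.0, 8)    # eight sub-intervals of width 0.25
```

With the same number of series terms, the piecewise solution lands far closer to e² ≈ 7.389 than the single global series, which is precisely the accuracy gain the piecewise-adaptive strategy exploits.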
Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types
Sang-Hoon Hong
2015-07-01
Full Text Available The Florida Everglades is the largest subtropical wetland system in the United States and, as with subtropical and tropical wetlands elsewhere, has been threatened by severe environmental stresses. It is very important to monitor such wetlands to inform management on the status of these fragile ecosystems. This study aims to examine the applicability of TerraSAR-X quadruple polarimetric (quad-pol) synthetic aperture radar (PolSAR) data for classifying wetland vegetation in the Everglades. We processed the quad-pol data using the Hong & Wdowinski four-component decomposition, which accounts for double-bounce scattering in the cross-polarization signal. The calculated decomposition images consist of four scattering mechanisms (single, co- and cross-pol double-bounce, and volume scattering). We applied an object-oriented image analysis approach to classify vegetation types from the decomposition results. We also used a high-resolution multispectral optical RapidEye image to compare statistics and classification results with the Synthetic Aperture Radar (SAR) observations. The calculated classification accuracy was higher than 85%, suggesting that the TerraSAR-X quad-pol SAR signal has high potential for distinguishing different vegetation types. Scattering components from the SAR acquisition were particularly advantageous for classifying mangroves along tidal channels. We conclude that the typical scattering behaviors from model-based decomposition are useful for discriminating among different wetland vegetation types.
Force transducers based on the stress dependence of coercive force
Garshelis, I. J.
1993-05-01
An alternative measurement regime for magnetoelastic force transducers, based on variations in the coercive field, is described. Hc is shown to be more directly related to the primary magnetic influence of stress, namely the orientation of effective anisotropy, than conventionally used magnetization-related parameters. The stress dependence of Hc is shown to generally reflect opposing factors associated with rotational and wall-displacement magnetization reversal processes. In materials wherein Hc ≪ K/Ms, wall motion dominates, and if the product of λs/K and the yield stress is high enough, large monotonic reductions of Hc with positive (tensile) stress are shown to be possible. A more complex variation of Hc with increasing compression is similarly expected. Experimental results from a transducer having an 18% Ni maraging steel core support these expectations.
Climate fails to predict wood decomposition at regional scales
Mark A. Bradford; Robert J. Warren; Petr Baldrian; Thomas W. Crowther; Daniel S. Maynard; Emily E. Oldfield; William R. Wieder; Stephen A. Wood; Joshua R. King
2014-01-01
Decomposition of organic matter strongly influences ecosystem carbon storage [1]. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter [2-5]. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on...
Effect of mindfulness-based stress reduction on sleep quality
Andersen, Signe; Würtzen, Hanne; Steding-Jessen, Marianne;
2013-01-01
The prevalence of sleep disturbance is high among cancer patients, and the sleep problems tend to last for years after the end of treatment. As part of a large randomized controlled clinical trial (the MICA trial, NCT00990977) of the effect of mindfulness-based stress reduction (MBSR) on psycholo...
Wavefront reconstruction by modal decomposition
Schulze, C
2012-08-01
Full Text Available We propose a new method to determine the wavefront of a laser beam based on modal decomposition by computer-generated holograms. The hologram is encoded with a transmission function suitable for measuring the amplitudes and phases of the modes...
Khalaji, Aliakbar Dehno; Das, Debasis
2014-08-01
To address the need for new precursors in the synthesis of NiO nanoparticles, mononuclear nickel(II) Schiff base complexes, viz. Ni(salbn) and Ni(Me2-salpn), were employed as precursors in solid-state thermal decomposition. The structure, purity and morphology of the resulting nanoparticles were examined by Fourier transform infrared spectroscopy, X-ray powder diffraction, scanning electron microscopy and transmission electron microscopy (TEM). TEM analysis reveals that the synthesized nanoparticles are cubic, with an average diameter of around 5-15 nm. This method is simple, inexpensive, fast and safe for the production of NiO nanoparticles in industrial applications.
Gómez-Núñez, Alberto [University of Barcelona, Department of Electronics, Martí i Franquès 1, E08028-Barcelona (Spain); Roura, Pere [University of Girona, Department of Physics, Campus Montilivi, Edif. PII, E17071-Girona, Catalonia (Spain); López, Concepción [University of Barcelona, Department of Inorganic Chemistry, Martí i Franquès 1, E08028-Barcelona (Spain); Vilà, Anna, E-mail: avila@el.ub.edu [University of Barcelona, Department of Electronics, Martí i Franquès 1, E08028-Barcelona (Spain)
2016-09-15
Highlights:
• Four alternatives to ethanolamine as stabilizer for the chemical synthesis of ZnO with zinc acetate dihydrate are proposed: aminopropanol, aminomethyl butanol, aminophenol and aminobenzyl alcohol.
• Thermal decomposition processes are described; nitrogen-containing cyclic compounds result.
• Molecular flexibility helps decomposition; in particular, aliphatic aminoalcohols (quite flexible) decompose the precursor at lower temperatures than aromatic ones (more rigid).
• Aminopropanol, aminomethyl butanol and aminobenzyl alcohol crystallize ZnO at a lower temperature than ethanolamine.
• Nitrogen-containing cyclic species have been identified and evolve in all cases (ethanolamine included) at temperatures up to 600 °C.
Abstract: Four inks for the production of ZnO semiconducting films have been prepared with zinc acetate dihydrate as the precursor salt and one of the following aminoalcohols: aminopropanol (APr), aminomethyl butanol (AMB), aminophenol (APh) and aminobenzyl alcohol (AB) as the stabilizing agent. Their thermal decomposition process has been analyzed in situ by thermogravimetric analysis (TGA), differential scanning calorimetry (DSC) and evolved gas analysis (EGA), whereas the solid product has been analysed ex situ by X-ray diffraction (XRD) and infrared spectroscopy (IR). Although, except for the APh ink, crystalline ZnO is already obtained at 300 °C, the films contain an organic residue that evolves at higher temperature in the form of a large variety of nitrogen-containing cyclic compounds. The results indicate that APr can be a better stabilizing agent than ethanolamine (EA): it gives larger ZnO crystal sizes with similar carbon content. However, a common drawback of all the amino stabilizers (EA included) is that nitrogen atoms have not been completely removed from the ZnO film at the highest temperature of our experiments (600 °C).
Polymer-based stress sensor with integrated readout
Thaysen, Jacob; Yalcinkaya, Arda Deniz; Vettiger, P.
2002-01-01
We present a polymer-based mechanical sensor with an integrated strain sensor element. Conventionally, silicon has been used as the piezoresistive material due to its high gauge factor and thereby high sensitivity to strain changes in the sensor. By using the fact that the polymer SU-8 [1] is much softer than silicon and that a gold resistor is easily incorporated in SU-8, we have proven that an SU-8-based cantilever sensor is almost as sensitive to stress changes as the silicon piezoresistive cantilever. First, the surface stress sensing principle is discussed, from which it can be shown that the SU-8-based sensor is nearly as sensitive as the silicon-based mechanical sensor. We hereafter demonstrate the chip fabrication technology of such a sensor, which includes multiple SU-8 and gold layer depositions. The SU-8-based mechanical sensor is finally characterized with respect to sensitivity...
Dong, Feng; Long, Ruyin; Chen, Hong; Li, Xiaohui; Yang, Qingliang
2013-01-01
China is considered to be the main carbon producer in the world. The per-capita carbon emissions indicator is an important measure of the regional carbon emissions situation. This study used the LMDI factor decomposition model and panel co-integration test two-step method to analyze the factors that affect per-capita carbon emissions. The main results are as follows. (1) In 1997, Eastern China, Central China, and Western China ranked first, second, and third in per-capita carbon emissions...
Zilong Zhang
2014-11-01
Integrated analysis of socio-economic metabolism could provide a basis for understanding and optimizing regional sustainability. The paper conducted socio-economic metabolism analysis by means of the emergy accounting method coupled with data envelopment analysis and decomposition analysis techniques to assess the sustainability of Qingyang city and its eight sub-region system, as well as to identify the major driving factors of performance change during 2000-2007, to serve as the basis for future policy scenarios. The results indicate that Qingyang depended greatly on non-renewable emergy flows and feedback (purchased) emergy flows, except for the two sub-regions Huanxian and Huachi, which depended highly on renewable emergy flows. Zhenyuan, Huanxian and Qingcheng were identified as relatively emergy efficient, and the other five sub-regions have potential to reduce natural resource inputs and waste output to achieve the goal of efficiency. The results of decomposition analysis show that economic growth, as well as an increased emergy yield ratio and population not accompanied by a sufficient increase in resource utilization efficiency, are the main drivers of the unsustainable economic model in Qingyang, and call for policies to promote the efficiency of resource utilization and to optimize natural resource use.
GUO Guoqiang; YANG Yixin; SUN Chao
2009-01-01
Combining the decomposition of the time reversal operator with time reversal reverberation nulling, a new time reversal processing approach for echo-to-reverberation ratio enhancement is proposed. In this method, a two-dimensional signal subspace for the range of the target and two bottom-focusing weight vectors for the ranges near the target are obtained from the decomposition of the time reversal operator. From the signal subspace and focusing weight vectors, a constrained optimal excitation weight vector for the source-receiver array can be deduced that nulls the acoustic energy on the corresponding bottom and maximizes the energy at the target. This method remedies the shortcomings of conventional time reversal processing, time reversal reverberation nulling and time reversal selective focusing: it focuses sound energy at the target and simultaneously nulls the energy at the bottom near the target range, thereby enhancing the echo-to-reverberation ratio without a probe source or prior knowledge of the relative scattering intensities of the target and bottom. Numerical simulations in typical shallow-water environments show the effectiveness of the proposed method and its improved echo-to-reverberation enhancement over conventional time reversal processing.
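The eigen-decomposition step (often called DORT) can be sketched numerically: for well-resolved point scatterers, the dominant eigenvector of the time reversal operator T = H^H H is the phase conjugate of the strongest scatterer's steering vector, so re-emitting it focuses on that target. The free-space sketch below uses entirely hypothetical array and target parameters, not the paper's shallow-water setup:

```python
import numpy as np

k = 2 * np.pi / 1.5                        # wavenumber for a 1.5 m wavelength
array_x = np.arange(8) * 0.75              # 8-element transmit/receive array (m)
targets = [(30.0, 8.0, 1.0), (30.0, -8.0, 0.4)]   # (range, cross-range, reflectivity)

def steering(rx, ry):
    """Free-space Green's function vector from the array to a point scatterer."""
    d = np.hypot(rx, array_x - ry)
    return np.exp(1j * k * d) / d

# Multistatic response matrix of the two point scatterers (Born approximation)
H = sum(refl * np.outer(steering(rx, ry), steering(rx, ry))
        for rx, ry, refl in targets)

# Time reversal operator and its decomposition (DORT)
T = H.conj().T @ H
w, v = np.linalg.eigh(T)
dominant = v[:, np.argmax(w)]

# In a reciprocal medium the dominant eigenvector is the phase conjugate of the
# strongest scatterer's steering vector, so refocusing it hits that target.
s = steering(*targets[0][:2])
s /= np.linalg.norm(s)
corr = abs(np.vdot(s.conj(), dominant))
print(corr > 0.9)
```

The reverberation-nulling step of the paper would then constrain this excitation to place nulls on bottom-focusing vectors; that constrained optimization is not reproduced here.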
Yizhao, Chen; Jianyang, Xia; Zhengguo, Sun; Jianlong, Li; Yiqi, Luo; Chengcheng, Gang; Zhaoqi, Wang
2015-11-06
As a key factor that determines carbon storage capacity, residence time (τE) is not well constrained in terrestrial biosphere models. This factor is recognized as an important source of model uncertainty. In this study, to understand how τE influences terrestrial carbon storage prediction in diagnostic models, we introduced a model decomposition scheme in the Boreal Ecosystem Productivity Simulator (BEPS) and then compared it with a prognostic model. The result showed that τE ranged from 32.7 to 158.2 years. The baseline residence time (τ'E) was stable for each biome, ranging from 12 to 53.7 years for forest biomes and 4.2 to 5.3 years for non-forest biomes. The spatiotemporal variations in τE were mainly determined by the environmental scalar (ξ). By comparing models, we found that the BEPS uses a more detailed pool construction but rougher parameterization for carbon allocation and decomposition. With respect to ξ comparison, the global difference in the temperature scalar (ξt) averaged 0.045, whereas the moisture scalar (ξw) had a much larger variation, with an average of 0.312. We propose that further evaluations and improvements in τ'E and ξw predictions are essential to reduce the uncertainties in predicting carbon storage by the BEPS and similar diagnostic models.
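The bookkeeping behind these quantities can be illustrated in miniature: in first-order carbon-cycle models of this kind, the realized residence time is the baseline residence time divided by the combined environmental scalar, and steady-state storage capacity is carbon input times residence time. All numbers below are hypothetical, not values from the study:

```python
# Hypothetical numbers: baseline residence time is set by pool and allocation
# parameters; environmental scalars shorten the realized residence time.
tau_base = 40.0              # baseline residence time tau'_E, years
xi_t, xi_w = 0.75, 0.4       # temperature and moisture scalars, in (0, 1]
xi = xi_t * xi_w             # combined environmental scalar
tau_e = tau_base / xi        # realized residence time tau_E
npp = 0.8                    # carbon input, kg C m^-2 yr^-1
capacity = npp * tau_e       # steady-state storage capacity, kg C m^-2
print(round(tau_e, 1), round(capacity, 1))
```

This is why the abstract's finding matters: a small bias in the moisture scalar propagates directly into the predicted residence time and hence into storage capacity.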
Yahyaei, Mohsen; Bashiri, Mahdi
2017-03-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple-allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first is sample average approximation (SAA), which approximates the two-stage stochastic problem via sampling. Then, by applying the multiple-cut Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
Boyer-Provera, E; Rossi, A; Oriol, L; Dumontet, C; Plesa, A; Berguiga, L; Elezgaray, J; Arneodo, A; Argoul, F
2013-03-25
Surface plasmon resonance is conventionally conducted in the visible range and, during the past decades, has proved its efficiency in probing molecular-scale interactions. Here we elaborate on the first implementation of a high-resolution surface plasmon microscope that operates at near-infrared (IR) wavelengths for the specific purpose of living-matter imaging. We analyze the characteristic angular and spatial frequencies of plasmon resonance in visible and near-IR light and how these combined quantities contribute to the V(Z) response of a scanning surface plasmon microscope (SSPM). Using a space-frequency wavelet decomposition, we show that the V(Z) response of the SSPM for red (632.8 nm) and near-IR (1550 nm) light includes the frequential response of plasmon resonance together with additional parasitic frequencies induced by the objective pupil. Because the objective lens pupil profile is often unknown, this space-frequency decomposition turns out to be very useful for deciphering the characteristic frequencies of the experimental V(Z) curves. Comparing the visible and near-IR responses of the SSPM, we show that our objective lens, primarily designed for visible-light microscopy, still operates very efficiently in near-IR light. Despite their loss in resolution, the SSPM images obtained with near-IR light remain contrasted over a wider range of defocus values, from negative to positive Z. We illustrate our theoretical modeling with a preliminary experimental application to blood cell imaging.
Jain, Aadhar; Rey, Elizabeth; Lee, Seoho; O'Dell, Dakota; Erickson, David
2016-03-01
Anxiety disorders are estimated to be the most common mental illness in the US, affecting around 40 million people, and related job stress is estimated to cost US industry up to $300 billion due to lower productivity and absenteeism. A personal diagnostic device that could help identify stressed individuals would therefore be a huge boost for workforce productivity. We are therefore developing a point-of-care diagnostic device that can be integrated with smartphones or tablets for the measurement of cortisol, a stress-related salivary biomarker known to be strongly involved in the body's fight-or-flight response to a stressor (physical or mental). The device is based around a competitive lateral flow assay whose results can be read and quantified through an accessory compatible with the smartphone. In this presentation, we report the development and results of such an assay and the integrated device. We then present the results of a study relating the diurnal patterns of cortisol levels to the alertness of an individual, based on the individual's circadian rhythm and sleep patterns. We hope that combining the information provided by chemical stress biomarkers with physical biomarkers will lead to a better informed and optimized activity schedule for maximized work output.
Esteves, Ph.; Granger, P.; Leclercq, L.; Leclercq, G.; Payen, E. [Universite des Sciences et technologies de Lille, 59 - Villeneuve d' Ascq (France); Kieger, St. [Grande Paroisse S.A., Usine de Rouen, 76 - Grand Quevilly (France); Navascues, L. [Grande Paroisse S.A., 92 - Paris la Defense (France)
2001-07-01
Various preparation procedures of zirconia-based catalysts modified by additives, and their catalytic properties in the decomposition of N2O at high temperature, have been investigated. The most relevant observation was for ZrO2 containing 1% of additive, which showed a synergy effect in comparison with a reference ZrO2 catalyst. For higher additive contents such a synergy effect disappears. (authors)
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Segou, M.; Parsons, T.
2014-06-01
Main shocks are calculated to cast stress shadows across broad areas where aftershocks occur. Thus, a key problem with stress-based operational forecasts is that they can badly underestimate aftershock occurrence in the shadows. We examine the performance of two physics-based earthquake forecast models (Coulomb rate/state (CRS)) based on Coulomb stress changes and a rate-and-state friction law for their predictive power on the 1989 Mw = 6.9 Loma Prieta aftershock sequence. The CRS-1 model considers the stress perturbations associated with the main shock rupture only, whereas CRS-2 uses an updated stress field with stresses imparted by M ≥ 3.5 aftershocks. Including secondary triggering effects slightly improves predictability, but physics-based models still underestimate aftershock rates in locations of initial negative stress changes. Furthermore, CRS-2 does not explain aftershock occurrence where secondary stress changes enhance the initial stress shadow. Predicting earthquake occurrence in calculated stress shadow zones remains a challenge for stress-based forecasts, and additional triggering mechanisms must be invoked.
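The Coulomb failure stress change underlying CRS-type models can be resolved on a receiver fault with a few lines of linear algebra: with tension positive, ΔCFF = Δτ + μ′Δσn, and negative values are the "stress shadows" discussed above. A sketch with a made-up stress-change tensor and receiver fault geometry (not the Loma Prieta calculation):

```python
import numpy as np

# Hypothetical uniform stress-change tensor (MPa), tension positive
d_sigma = np.array([[-1.2, 0.5, 0.0],
                    [ 0.5, 0.3, 0.0],
                    [ 0.0, 0.0, 0.1]])

mu_eff = 0.4                      # effective friction coefficient

# Receiver fault: vertical strike-slip plane with east-pointing unit normal
n = np.array([1.0, 0.0, 0.0])     # unit normal to the fault plane
s = np.array([0.0, -1.0, 0.0])    # unit slip direction on the plane

traction = d_sigma @ n
d_normal = n @ traction           # normal stress change (positive = unclamping)
d_shear = s @ traction            # shear stress change in the slip direction
d_cff = d_shear + mu_eff * d_normal
print(round(d_cff, 3))            # negative: this receiver sits in a stress shadow
```

Rate-and-state models like CRS then convert such ΔCFF values into time-dependent seismicity rate changes rather than using them directly.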
Differentially Private Spatial Decompositions
Cormode, Graham; Shen, Entong; Srivastava, Divesh; Yu, Ting
2011-01-01
Differential privacy has recently emerged as the de facto standard for private data release. It makes it possible to provide strong theoretical guarantees on the privacy and utility of released data. While it is well known how to release data based on counts and simple functions under this guarantee, it remains a challenge to provide general-purpose techniques for releasing other kinds of data. In this paper, we focus on spatial data such as locations and, more generally, any data that can be indexed by a tree structure. Directly applying existing differential privacy methods to this type of data yields mostly noise. Instead, we introduce a new class of "private spatial decompositions": these adapt standard spatial indexing methods such as quadtrees and kd-trees to provide a private description of the data distribution. Equipping such structures with differential privacy requires several steps to ensure that they provide meaningful privacy guarantees. Various primitives, such as choosing splitting points and describi...
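A minimal sketch of the private spatial decomposition idea (not the paper's optimized algorithm): build a fixed-depth quadtree over the points and perturb every cell count with Laplace noise, splitting the privacy budget uniformly across levels. The paper studies better budget splits, splitting-point selection, and post-processing, none of which appear here:

```python
import random

def laplace(scale):
    """Laplace noise drawn as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def noisy_quadtree(points, depth, eps_level, box=(0.0, 0.0, 1.0, 1.0)):
    """Fixed-depth quadtree with Laplace(1/eps_level) noise on every count."""
    x0, y0, x1, y1 = box
    node = {"box": box, "count": len(points) + laplace(1 / eps_level)}
    if depth > 0:
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        node["children"] = [
            noisy_quadtree([p for p in points
                            if bx0 <= p[0] < bx1 and by0 <= p[1] < by1],
                           depth - 1, eps_level, (bx0, by0, bx1, by1))
            for bx0, by0, bx1, by1 in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                                       (x0, ym, xm, y1), (xm, ym, x1, y1)]]
    return node

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(1000)]
depth, eps = 2, 2.0
# Each point is counted once per level, so splitting the budget uniformly over
# the (depth + 1) levels gives eps-differential privacy by basic composition.
tree = noisy_quadtree(pts, depth, eps_level=eps / (depth + 1))
print(abs(tree["count"] - 1000) < 50)
```

Range queries are then answered from the noisy counts of the cells covering the query rectangle, trading off noise accumulation against cell granularity.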
Evaluation of an app-based stress protocol
Noeh Claudius
2016-09-01
Stress is a major influence on quality of life in our fast-moving society. This paper describes a standardized, contemporary protocol that is capable of inducing moderate psychological stress in a laboratory setting, and evaluates its effects on physiological biomarkers. The protocol, called the "THM-Stresstest", mainly consists of a rest period (30 min), an app-based stress test under the surveillance of an audience (4 min) and a regeneration period (32 min). We investigated 12 subjects to evaluate the developed protocol. We could show significant changes in heart rate variability, electromyography, electrodermal activity, and salivary cortisol and α-amylase. From these data we conclude that the THM-Stresstest can serve as a psychobiological tool for provoking responses in the cardiovascular, endocrine and exocrine systems as well as the sympathetic part of the central nervous system.
Study on residual stresses of Ni-based WC coating by laser remelting based on XRD
Chen, Zhigang; Kong, Dejun; Wang, Ling; Zhu, Xiaoron; Zhao, Xiaobing
2007-12-01
The morphologies of Ni-based WC coatings produced by flame spraying and by laser cladding were observed with a scanning electron microscope (SEM), and the residual stresses were measured by X-ray diffraction (XRD). The XRD spectra of the WC coating were also analyzed, and the formation mechanisms of the residual stresses examined. Experimental results show that the residual stresses of the flame-sprayed Ni-based WC coating are all tensile, while those of the laser-clad coating are compressive; chemical-physical reactions in the coating cause a material volume change that turns the residual stress from tensile to compressive. When the residual stress becomes compressive, micro-cracks on the coating surface decrease greatly, which illustrates that the effect of residual stress on micro-cracking is significant. The XRD spectrum of the WC coating contains peaks of only Ni and W, with no impurities or other reaction products.
A DECOMPOSITION METHOD OF STRUCTURAL DECOMPOSITION ANALYSIS
LI Jinghua
2005-01-01
Over the past two decades, structural decomposition analysis (SDA) has developed into a major analytical tool in the field of input-output (IO) techniques, but the method suffers from one or more of the following problems: the decomposition forms used to measure the contribution of a specific determinant are not unique, owing to the existence of a multitude of equivalent forms; they can be irrational, because the weights of different determinants do not match; and they can be inexact, because of large interaction terms. In this paper, a decomposition method is derived that overcomes these deficiencies. We prove that the result of this approach equals the Shapley value in cooperative games, from which several properties of the method are obtained. Beyond that, the two approaches used predominantly in the literature are shown to be approximate solutions of this method.
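The link to the Shapley value can be made concrete: each determinant's contribution to the change in the aggregate is its marginal effect averaged over all orders in which determinants are switched from base-year to end-year values. A small brute-force sketch on a hypothetical three-factor identity (the factor names are illustrative, and enumerating permutations is only practical for a handful of determinants):

```python
from itertools import permutations
from math import factorial

def shapley_decomposition(f, x0, x1):
    """Attribute f(x1) - f(x0) to each determinant by averaging its marginal
    effect over all orders of switching determinants from x0 to x1."""
    n = len(x0)
    contrib = [0.0] * n
    for order in permutations(range(n)):
        x = list(x0)
        for i in order:
            before = f(x)
            x[i] = x1[i]
            contrib[i] += (f(x) - before) / factorial(n)
    return contrib

# Hypothetical identity, e.g. emissions = intensity * structure * activity
f = lambda x: x[0] * x[1] * x[2]
x0, x1 = [2.0, 1.0, 10.0], [1.5, 1.2, 13.0]
parts = shapley_decomposition(f, x0, x1)
# The decomposition is exact: contributions sum to the total change,
# with no interaction-term residual
print([round(p, 3) for p in parts], round(f(x1) - f(x0), 3))
```

The exactness (no residual) and the symmetric treatment of determinants are precisely the properties the abstract claims for the Shapley-equivalent decomposition.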
The Pulp Flow Soft Measurement Based on Sparse Decomposition
吴祎; 周强; 吴文军
2016-01-01
To address the high cost of pulp flow measurement equipment, its demanding measurement conditions and its low measurement accuracy, a new soft-measurement method for pulp flow based on sparse decomposition is proposed. Using sparse decomposition and the characteristics of the inherent noise of the pulp consistency signal, an over-complete atom dictionary is built adaptively; the inherent noise is then sparsely represented with the over-complete dictionary and the best-matching atoms, from which the pulp flow value is obtained. Experiments demonstrate the feasibility, real-time performance and accuracy of the method.
Anonymous
2006-01-01
Three complexes, [Pr(NO3)3(HL)2] (1), [Nd(NO3)3(HL)2] (2) and [Er(NO3)3(HL)2]·0.5H2O (3), were synthesized from the reaction of a Schiff base ligand, 2-[(4-methylphenylimino)methyl]-6-methoxyphenol (C15H15NO2, HL), with the corresponding lanthanide(III) nitrates. Elemental analysis, molar conductance, FT-IR, UV-Vis, 1H NMR and thermal analysis show that the title complexes are neutral molecules in which the central Ln(III) ion is ten-coordinated in a biapical anti-hexahedral prism geometry, with four oxygen atoms from the phenolic hydroxy and methoxy groups of the two bidentate Schiff base ligands and six oxygen atoms provided by the three bidentate NO3- anions. Additionally, the kinetic mechanism of the thermal decomposition of complex 3 was determined from TG-DTG curves by both integral and differential methods. The mechanism functions of the thermal decomposition reaction and the equation of the kinetic compensation effect were obtained.
The nucleon spin decomposition: news and experimental implications
Lorcé, Cédric
2014-01-01
Recently, many nucleon spin decompositions have been proposed in the literature, creating a lot of confusion. This revived, in particular, old controversies regarding the measurability of theoretically defined quantities. We give a brief overview of the different decompositions, discuss sufficient requirements for measurability, and stress the experimental implications.
Preparation of α-Al2O3 base ceramic coating on aluminum alloy via thermo-decomposition of diaspore
2001-01-01
The aim of this work is to describe the possibilities of preparing a corundum coating on aluminum alloy through an in-situ chemical reaction at a relatively low temperature. The transformation conditions of diaspore (β-AlOOH) to corundum (α-Al2O3) are studied using X-ray diffraction analysis. Temperature and heating time are the two main factors influencing the transformation. Suitable heating parameters can lower the transformation temperature. On this basis, a new process is developed to produce a corundum ceramic coating on an aluminum alloy substrate. The phase composition and microstructure of the coating are studied using X-ray diffraction analysis and scanning electron microscopy. Abrasion properties of the coating are evaluated with a ring-block tribotester. The results show that it is feasible to obtain ceramic coatings on aluminum alloy substrates by means of thermo-decomposition of diaspore.
Zhao, Lei; Wu, Meiping; Forsberg, René
2015-01-01
Surveying the Earth's gravity field refers to an important domain of geodesy, involving deep connections with Earth sciences and geo-information. Airborne gravimetry is an effective tool for collecting gravity data with mGal accuracy and a spatial resolution of several kilometers. The main obstacle of airborne gravimetry is extracting the gravity disturbance from measuring data with an extremely low signal-to-noise ratio. In general, the power of the noise concentrates in the higher frequencies of the measuring data, and a low-pass filter can be used to eliminate it. However, the noise can be distributed over a broad range of frequencies, which a low-pass filter cannot remove within its pass band. In order to improve the accuracy of airborne gravimetry, Empirical Mode Decomposition (EMD) is employed to denoise the measuring data of two primary repeated flights of the strapdown airborne gravimetry system SGA...
Javidi, M. [Department of Mathematics, Iran University of Science and Technology, Narmak, Tehran 16844 (Iran, Islamic Republic of)], E-mail: mo_javidi@yahoo.com; Golbabai, A. [Department of Mathematics, Iran University of Science and Technology, Narmak, Tehran 16844 (Iran, Islamic Republic of)], E-mail: golbabai@iust.ac.ir
2009-01-30
In this study, we use the spectral collocation method with Chebyshev polynomials for the spatial derivatives and a fourth-order Runge-Kutta method for time integration to solve the generalized Burgers-Huxley equation (GBHE). To reduce round-off error in the spectral collocation (pseudospectral) method we use preconditioning. First, the theory of applying the Chebyshev spectral collocation method with preconditioning (CSCMP) and domain decomposition to the generalized Burgers-Huxley equation is presented. This method yields a system of differential-algebraic equations (DAEs). Second, we use the fourth-order Runge-Kutta formula for the numerical integration of the system of DAEs. The numerical results obtained in this way have been compared with the exact solution to show the efficiency of the method.
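The spatial-derivative half of such a scheme can be sketched with the standard Chebyshev differentiation matrix on Gauss-Lobatto points (Trefethen's construction); in the paper's setting, applying D to the unknowns turns the PDE into the ODE/DAE system that RK4 then advances in time. A small spectral-accuracy check on a smooth function:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto points (Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                        # diagonal: negative row sums
    return D, x

D, x = cheb(24)
u = np.exp(x) * np.sin(5 * x)
du = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
err = np.max(np.abs(D @ u - du))
# Spectral accuracy: 25 points already give a tiny derivative error
print(err < 1e-6)
```

The preconditioning the paper describes addresses the O(N^4) growth of round-off sensitivity in powers of D; this bare sketch omits it.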
Fangqing Wen
2013-01-01
A low-complexity monostatic cross multiple-input multiple-output (MIMO) radar scheme is proposed in this paper. The minimum-redundancy linear array (MRLA) is introduced into the cross radar to improve the efficiency of the array elements. The two-dimensional direction-of-arrival (DOA) estimation problem is linked to the trilinear model, which automatically pairs the estimated two-dimensional angles and requires neither eigenvalue decomposition of the received signal covariance matrix nor spectral peak searching. The proposed scheme performs better than the uniform linear array (ULA) configuration under the same conditions, and the proposed algorithm has less computational complexity than the multiple signal classification (MUSIC) algorithm. Simulation results show the effectiveness of our scheme.
Zhang, Xiaofei; Zhou, Min; Li, Jianfeng
2013-01-01
In this paper, we combine the acoustic vector-sensor array parameter estimation problem with the parallel profiles with linear dependencies (PARALIND) model, which was originally applied to biology and chemistry. Exploiting the PARALIND decomposition approach, we propose a blind coherent two-dimensional direction of arrival (2D-DOA) estimation algorithm for arbitrarily spaced acoustic vector-sensor arrays subject to unknown locations. The proposed algorithm works well to achieve automatically paired azimuth and elevation angles for coherent and incoherent angle estimation of acoustic vector-sensor arrays, as well as the paired correlated matrix of the sources. Our algorithm, in contrast with conventional coherent angle estimation algorithms such as the forward backward spatial smoothing (FBSS) estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, not only has much better angle estimation performance, even for closely-spaced sources, but is also available for arbitrary arrays. Simulation results verify the effectiveness of our algorithm. PMID:23604030
Qing, Zhou; Weili, Jiao; Tengfei, Long
2014-03-01
The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of sensors to achieve an accuracy compatible with rigorous sensor models. At present, the main method for solving the rational polynomial coefficients (RPCs) is least squares estimation. But when the number of coefficients is large or the distribution of the control points is uneven, the classical least squares method loses its superiority due to the ill-conditioning of the design matrix. The Condition Index and Variance Decomposition Proportion (CIVDP) method is a reliable way to diagnose multicollinearity in the design matrix: it can not only detect the multicollinearity but also locate the parameters involved and show the corresponding columns of the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning of the RFM and to find the multicollinearity in the normal matrix.
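A Belsley-style condition index and variance decomposition proportion computation can be sketched from the SVD of the column-equilibrated design matrix. This is an assumption-laden miniature on synthetic data, not the paper's RFM pipeline: the near-dependency planted in the third column is flagged by a large condition index, and the proportions localize it:

```python
import numpy as np

def civdp(A):
    """Condition indexes and variance-decomposition proportions (Belsley-style)
    from the SVD of the column-equilibrated design matrix."""
    A = A / np.linalg.norm(A, axis=0)         # scale columns to unit length
    _, s, Vt = np.linalg.svd(A, full_matrices=False)
    cond_index = s.max() / s                  # one index per singular value
    phi = (Vt.T / s) ** 2                     # phi[j, k]: coefficient j, value k
    pi = phi / phi.sum(axis=1, keepdims=True)
    return cond_index, pi

rng = np.random.default_rng(0)
x = rng.normal(size=100)
# Third column is nearly a multiple of the second: a planted near-dependency
A = np.column_stack([np.ones(100), x, 2 * x + 1e-4 * rng.normal(size=100)])
ci, pi = civdp(A)
# A large condition index signals ill-conditioning; the proportions associated
# with the smallest singular value locate the involved columns (here 1 and 2).
print(ci.max() > 30, pi[1, -1] > 0.5 and pi[2, -1] > 0.5)
```

Two coefficients sharing high proportions on the same large condition index is exactly the "locate the parameters" diagnostic the abstract refers to.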
王兰勋; 闫姗姗
2013-01-01
Aiming at the problem of modulation recognition of OFDM signals over multipath channels, a new algorithm based on detail features obtained by wavelet decomposition is proposed. Because different signal types have different detail components at the same decomposition level, the detail components of signals can be separated by wavelet decomposition. The detail component contains almost all of a signal's detail information, and so directly reflects how different signal types vary. Theory and simulation demonstrate that the amplitudes of the corresponding detail components change differently when OFDM signals and single-carrier digital signals undergo multi-level wavelet decomposition. The amplitude difference is therefore used as a feature value, and the nearest neighbour decision rule is applied to distinguish OFDM signals from single-carrier digital signals. Computer simulation results show that the feature value is immune to the multipath channel and that the algorithm is feasible and correct in low-SNR environments.
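The detail-feature idea can be sketched with a hand-rolled one-dimensional Haar DWT (a stand-in, since the abstract does not specify which wavelet was used); the mean absolute detail amplitude per level serves as the feature vector. The two waveforms below are simplified stand-ins for single-carrier and OFDM signals, not the paper's simulated channels:

```python
import numpy as np

def haar_dwt(signal, levels):
    """Multi-level Haar DWT (orthonormal): returns the final approximation
    and the detail coefficients of each level."""
    approx, details = np.asarray(signal, dtype=float), []
    for _ in range(levels):
        pairs = approx[: len(approx) // 2 * 2].reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0))
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    return approx, details

rng = np.random.default_rng(3)
t = np.arange(1024)
single_carrier = np.cos(2 * np.pi * 0.02 * t)
# Crude OFDM-like stand-in: many subcarriers with random phases, equal power
ofdm = sum(np.cos(2 * np.pi * (0.01 + 0.001 * kk) * t + rng.uniform(0, 2 * np.pi))
           for kk in range(64)) / np.sqrt(64)

for name, sig in [("single carrier", single_carrier), ("OFDM-like", ofdm)]:
    _, details = haar_dwt(sig, 3)
    # mean absolute detail amplitude per level: the classification feature
    feats = [round(float(np.mean(np.abs(d))), 3) for d in details]
    print(name, feats)
```

A nearest-neighbour classifier, as in the paper, would then compare an unknown signal's feature vector against labelled reference vectors.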
Mindfulness Based Stress Reduction: effect on emotional distress in diabetes
Young, Laura A; Cappola, Anne R; Baime, Michael J
2017-01-01
Psychological distress is common in patients with diabetes. Little is known about the impact of Mindfulness Based Stress Reduction (MBSR), a non-traditional, cognitive behavioural intervention designed to improve stress management skills, in patients with diabetes. The purpose of this retrospective analysis was to evaluate the impact of MBSR training on mood states in 25 individuals with diabetes. All participants completed the Profile of Mood States Short Form (POMS-SF) at baseline and following eight weeks of MBSR. Overall psychological distress measured by the total mood score (TMS) and six subscales – including tension/anxiety, depression/dejection, anger/hostility, fatigue/inertia, confusion/bewilderment and vigour/activity – were assessed. Overall mood, measured by the TMS, as well as all subscale mood measurements improved significantly from baseline following MBSR training. Compared to population means, those with diabetes had higher distress at baseline and similar levels of distress following MBSR training. The primary reason participants reported for enrolling in the MBSR course was to improve stress management skills. It was concluded that MBSR training is a promising, group-based intervention that can be used to decrease psychological distress in individuals with diabetes who perceive a need for training in stress management. PMID:28781569
An Estimation Method of Stress in Soft Rock Based on In-situ Measured Stress in Hard Rock
LI Wen-ping; LI Xiao-qin; SUN Ru-hua
2007-01-01
The law of variation of deep rock stress in gravitational and tectonic stress fields is analyzed based on the Hoek-Brown strength criterion. In the gravitational stress field, the rocks in the shallow area are in an elastic state, while the deep, relatively soft rock may be in a plastic state. In the tectonic stress field, by contrast, the relatively soft rock in the shallow area is in a plastic state and the deep rock in an elastic state. A method is proposed to estimate stress values in coal and soft rock based on in-situ measurements in hard rock. Our estimation method depends on the type of stress field and the stress state. The equations of rock stress are presented for the elastic, plastic and critical states. The critical state is a special stress state that marks the conversion from the elastic to the plastic state in the gravitational stress field, and from the plastic to the elastic state in the tectonic stress field. Two case studies show that the estimation method is feasible.
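The elastic-versus-plastic bookkeeping can be illustrated with the Hoek-Brown peak strength, sigma1 = sigma3 + sigma_ci * sqrt(mb * sigma3 / sigma_ci + s). With made-up parameters (not calibrated to any real rock mass), a soft rock can be elastic at shallow depth yet plastic at depth in a gravitational stress field, as the abstract describes:

```python
import numpy as np

def hoek_brown_sigma1(sigma3, sigma_ci, mb, s):
    """Peak major principal stress sustainable at confinement sigma3 (MPa)."""
    return sigma3 + sigma_ci * np.sqrt(mb * sigma3 / sigma_ci + s)

depth = np.array([200.0, 800.0])      # m
sigma_v = 0.027 * depth               # vertical stress, MPa (27 kN/m^3 unit weight)
sigma_h = 0.5 * sigma_v               # gravitational field: sigma3 = k0 * sigma_v

# Made-up strength parameters for a hard rock and a coal-like soft rock
hard = hoek_brown_sigma1(sigma_h, sigma_ci=100.0, mb=15.0, s=0.1)
soft = hoek_brown_sigma1(sigma_h, sigma_ci=4.0, mb=2.0, s=0.01)

# Elastic where strength exceeds the acting major stress, plastic otherwise:
# the soft rock is elastic at 200 m but turns plastic at 800 m.
print((sigma_v < hard).tolist(), (sigma_v < soft).tolist())
```

The paper's estimation method runs this logic in reverse: given a measured stress in hard (elastic) rock, it infers the stress in adjacent soft rock according to which state, elastic, critical or plastic, that rock occupies.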
Multiresolution signal decomposition schemes
J. Goutsias (John); H.J.A.M. Heijmans (Henk)
1998-01-01
[PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis and synthesis.
Multiresolution signal decomposition schemes
Goutsias, J.; Heijmans, H.J.A.M.
1998-01-01
[PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis and synthesis.
Meng, Yi; Yi, Weijian
2011-06-01
Polyvinylidene fluoride (PVDF) piezoelectric material has been successfully applied in many engineering fields and in scientific research. However, it has rarely been used for the direct measurement of concrete stresses under impact loading. In this paper, a new PVDF-based stress gauge was developed to measure concrete stresses under impact loading. Calibrated on a split Hopkinson pressure bar (SHPB) with a simple resistance strain gauge measurement circuit, the PVDF gauge was then used to establish dynamic stress-strain curves of concrete cylinders from a series of axial impact tests on a drop-hammer test facility. Test results show that the stress curves measured by the PVDF-based stress gauges are more stable and cleaner than the stress curves calculated from the impact force measured by a load cell.
Wei Gong; Jun Li; Feiyue Mao; Jinye Zhang
2011-01-01
Although the empirical mode decomposition (EMD) method is an effective tool for noise reduction in lidar signals, evaluating the effectiveness of the denoising method is difficult. A dual-field-of-view lidar for observing atmospheric aerosols is described. The backscattering signals obtained from the two channels have different signal-to-noise ratios (SNRs), so the performance of noise reduction can be investigated by comparing the high-SNR signal with the denoised low-SNR signal, without a simulation experiment. With this approach, the signal and noise are extracted into one intrinsic mode function (IMF) by EMD-based denoising; the threshold method is then applied to the IMFs. Experimental results show that the improved threshold method can effectively perform noise reduction while preserving useful sudden-change information.
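EMD itself can be sketched in a few lines of sifting: fit cubic-spline envelopes through the local maxima and minima and subtract their mean until the fastest mode is isolated. This is a bare-bones illustration on a synthetic two-tone signal, with an arbitrary fixed sift count rather than a proper stopping criterion, and is not the authors' lidar processing chain:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(x, t, n_sifts=10):
    """Extract one EMD mode with a fixed number of sifting iterations:
    repeatedly subtract the mean of the spline envelopes of the extrema."""
    h = x.copy()
    for _ in range(n_sifts):
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(maxima) < 4 or len(minima) < 4:
            break                      # too few extrema to build envelopes
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - (upper + lower) / 2.0
    return h

t = np.linspace(0.0, 1.0, 1000)
fast = np.sin(2 * np.pi * 40 * t)      # fast oscillation (extracted first)
slow = 2 * np.sin(2 * np.pi * 4 * t)   # slower component (left in the residue)

imf1 = sift(fast + slow, t)            # first IMF approximates the fast tone
residue = fast + slow - imf1
err = np.max(np.abs((imf1 - fast)[100:-100]))   # compare away from the ends
print(err < 0.3)
```

Threshold denoising, as in the abstract, would then shrink small coefficients within the noise-dominated IMFs before summing the modes back together.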
Castillo, D. A. [Department of Geology and Geophysics, University of Adelaide (Australia)]; Younker, L.W. [Lawrence Livermore National Lab., CA (United States)
1997-01-30
Nearly 200 new in-situ determinations of stress directions and stress magnitudes near the Carrizo Plain segment of the San Andreas fault indicate a marked change in stress state occurring within 20 km of this principal transform plate boundary. A natural consequence of this stress transition is that, if the observed near-field "fault-oblique" stress directions are representative of the fault stress state, the Mohr-Coulomb shear stresses resolved on planes sub-parallel to the San Andreas are substantially greater than previously inferred from fault-normal compression. Although the directional stress data and the near-hydrostatic pore pressures that exist within 15 km of the fault support a high-shear-stress environment near the fault, appealing to elevated pore pressures in the fault zone (Byerlee-Rice model) merely enhances the likelihood of shear failure. These near-field stress observations raise important questions about what previous stress observations have actually been measuring. The "fault-normal" stress direction measured out to 70 km from the fault can be interpreted as representing a comparable depth-averaged shear strength of the principal plate boundary. Stress measurements closer to the fault reflect a shallower depth-averaged representation of the fault-zone shear strength. If this is true, only stress observations at fault distances comparable to the seismogenic depth will be representative of the fault-zone shear strength. This is consistent with results from dislocation modeling, where there is pronounced shear stress accumulation out to 20 km from the fault as a result of aseismic slip within the lower crust loading the upper locked section. Beyond about 20 km, the shear stress resolved on planes parallel to the San Andreas fault becomes negligible. 65 refs., 15 figs.
Qiu Yasong; Bai Junqiang
2015-01-01
In this paper a new flow field prediction method, which is independent of the governing equations, is developed to predict stationary flow fields of variable physical domains. Predicted flow fields come from a linear superposition of selected basis modes generated by proper orthogonal decomposition (POD). Instead of traditional projection methods, a kriging surrogate model is used to calculate the superposition coefficients by building approximate functional relationships between the profile geometry parameters of the physical domain and these coefficients. In this context, the problem which troubles the traditional POD-projection method due to viscosity and compressibility has been avoided in the whole process. Moreover, there are no constraints on the inner product form, so two simple forms are applied to improve computational efficiency and to cope with the variable-physical-domain problem. An iterative algorithm is developed to determine how many of the leading basis modes should be used in the prediction. Testing results prove the feasibility of this new method for subsonic flow fields, but also show that it is not suitable for transonic flow fields because of the poorly predicted shock waves.
Gómez-Núñez, Alberto; Roura, Pere; López, Concepción; Vilà, Anna
2016-09-01
Four inks for the production of ZnO semiconducting films have been prepared with zinc acetate dihydrate as precursor salt and one among the following aminoalcohols: aminopropanol (APr), aminomethyl butanol (AMB), aminophenol (APh) and aminobenzyl alcohol (AB) as stabilizing agent. Their thermal decomposition process has been analyzed in situ by thermogravimetric analysis (TGA), differential scanning calorimetry (DSC) and evolved gas analysis (EGA), whereas the solid product has been analysed ex-situ by X-ray diffraction (XRD) and infrared spectroscopy (IR). Although, except for the APh ink, crystalline ZnO is already obtained at 300 °C, the films contain an organic residue that evolves at higher temperature in the form of a large variety of nitrogen-containing cyclic compounds. The results indicate that APr can be a better stabilizing agent than ethanolamine (EA). It gives larger ZnO crystal sizes with similar carbon content. However, a common drawback of all the amino stabilizers (EA included) is that nitrogen atoms have not been completely removed from the ZnO film at the highest temperature of our experiments (600 °C).
Wei Li
2013-01-01
Full Text Available Belt conveyors are equipment widely used in coal mines and other manufacturing factories, whose main components are a number of idlers. The faults of belt conveyors can directly influence daily production. In this paper, a fault diagnosis method combining wavelet packet decomposition (WPD) and support vector machine (SVM) is proposed for monitoring belt conveyors, with a focus on the detection of idler faults. Since the number of idlers can be large, one acceleration sensor is applied to gather the vibration signals of several idlers in order to reduce the number of sensors. The vibration signals are decomposed with WPD, and the energy of each frequency band is extracted as the feature. Then, the features are employed to train an SVM to realize the detection of idler faults. The proposed fault diagnosis method is first tested on a testbed, and then an online monitoring and fault diagnosis system is designed for belt conveyors. An experiment is also carried out on a belt conveyor in service, and it is verified that the proposed system can locate the position of the faulty idlers with a limited number of sensors, which is important for operating belt conveyors in practice.
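A toy version of the WPD-energy-feature pipeline, using a hand-rolled two-level Haar wavelet packet and a nearest-centroid classifier as a simple stand-in for the SVM (the signals and the impulsive fault signature are synthetic assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_split(x):
    """One Haar analysis step: (approximation, detail) at half the length."""
    x = x.reshape(-1, 2)
    return (x[:, 0] + x[:, 1]) / np.sqrt(2), (x[:, 0] - x[:, 1]) / np.sqrt(2)

def wpd_energies(x):
    """Two-level Haar wavelet packet decomposition -> 4 relative band energies."""
    a, d = haar_split(x)
    bands = [*haar_split(a), *haar_split(d)]       # aa, ad, da, dd
    e = np.array([np.sum(b**2) for b in bands])
    return e / e.sum()

n = 256
t = np.arange(n)

def healthy():
    # Smooth low-frequency vibration plus sensor noise.
    return np.sin(2 * np.pi * t / 64) + 0.1 * rng.standard_normal(n)

def faulty():
    # Same baseline plus periodic impacts from a defective idler bearing.
    sig = np.sin(2 * np.pi * t / 64) + 0.1 * rng.standard_normal(n)
    sig[::32] += 3.0
    return sig

train = [(wpd_energies(healthy()), 0) for _ in range(20)] + \
        [(wpd_energies(faulty()), 1) for _ in range(20)]
centroids = [np.mean([f for f, y in train if y == c], axis=0) for c in (0, 1)]

def classify(x):
    """Nearest centroid in band-energy feature space (SVM stand-in)."""
    f = wpd_energies(x)
    return int(np.argmin([np.linalg.norm(f - c) for c in centroids]))
```

Impacts spread energy into the detail bands, so the relative band-energy vector separates the two classes even before a real SVM is trained on it.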
Kadum, Hawwa; Ali, Naseem; Cal, Raúl
2016-11-01
Hot-wire anemometry measurements have been performed on a 3 x 3 wind turbine array to study the multifractality of the turbulent kinetic energy dissipation. A multifractal spectrum and Hurst exponents are determined at nine locations downstream of the hub height and the bottom and top tips. Higher multifractality is found at 0.5D and 1D downstream of the bottom tip and hub height. The second order of the Hurst exponent and the combination factor show an ability to predict the flow state in terms of its development. Snapshot proper orthogonal decomposition (POD) is used to identify the coherent and incoherent structures and to reconstruct the stochastic velocity using a specific number of the POD eigenfunctions. The accumulation of the turbulent kinetic energy at the top tip location exhibits fast convergence compared to the bottom tip and hub height locations. The dissipation of the large and small scales is determined using the reconstructed stochastic velocities. Higher multifractality is shown in the large-scale dissipation than in the small-scale dissipation, consistent with the behavior of the original signals.
Zongxi Qu
2016-01-01
Full Text Available As a type of clean and renewable energy, the superiority of wind power has increasingly captured the world's attention. Reliable and precise wind speed prediction is vital for wind power generation systems. Thus, a more effective and precise prediction model is essentially needed in the field of wind speed forecasting. Most previous forecasting models could adapt to various wind speed series data; however, these models ignored the importance of data preprocessing and model parameter optimization. In view of its importance, a novel hybrid ensemble learning paradigm is proposed. In this model, the original wind speed data is first divided into a finite set of signal components by ensemble empirical mode decomposition, then each signal is predicted by several artificial intelligence models with parameters optimized by the fruit fly optimization algorithm, and the final prediction values are obtained by reconstructing the refined series. To estimate the forecasting ability of the proposed model, 15-min wind speed data from wind farms in the coastal areas of China were used as a case study. The empirical results show that the proposed hybrid model is superior to some existing traditional forecasting models regarding forecast performance.
Qiu Yasong
2015-02-01
Full Text Available In this paper a new flow field prediction method, which is independent of the governing equations, is developed to predict stationary flow fields of variable physical domains. Predicted flow fields come from a linear superposition of selected basis modes generated by proper orthogonal decomposition (POD). Instead of traditional projection methods, a kriging surrogate model is used to calculate the superposition coefficients by building approximate functional relationships between the profile geometry parameters of the physical domain and these coefficients. In this context, the problem which troubles the traditional POD-projection method due to viscosity and compressibility has been avoided in the whole process. Moreover, there are no constraints on the inner product form, so two simple forms are applied to improve computational efficiency and to cope with the variable-physical-domain problem. An iterative algorithm is developed to determine how many of the leading basis modes should be used in the prediction. Testing results prove the feasibility of this new method for subsonic flow fields, but also show that it is not suitable for transonic flow fields because of the poorly predicted shock waves.
Lei Zhao
2015-10-01
Full Text Available Surveying the Earth's gravity field is an important domain of Geodesy, involving deep connections with Earth Sciences and Geo-information. Airborne gravimetry is an effective tool for collecting gravity data with mGal accuracy and a spatial resolution of several kilometers. The main obstacle of airborne gravimetry is extracting the gravity disturbance from measurements with an extremely low signal-to-noise ratio. In general, the power of the noise concentrates in the higher frequencies of the measured data, and a low-pass filter can be used to eliminate it. However, the noise can also be distributed over a broad frequency range, and a low-pass filter cannot remove the part lying in its pass band. In order to improve the accuracy of airborne gravimetry, Empirical Mode Decomposition (EMD) is employed to denoise the measured data of two primary repeated flights of the strapdown airborne gravimetry system SGA-WZ carried out in Greenland. Compared to the solutions obtained with a finite impulse response filter (FIR), the new results are improved by 40% and 10% in root mean square (RMS) of internal consistency and external accuracy, respectively.
Xun Chen
2014-01-01
Full Text Available Electroencephalogram (EEG) recordings are often contaminated with muscle artifacts. This disturbing muscular activity strongly affects the visual analysis of EEG and impairs the results of EEG signal processing such as brain connectivity analysis. If multichannel EEG recordings are available, then there exists a considerable range of methods which can remove or to some extent suppress the distorting effect of such artifacts. Yet to our knowledge, there is no existing means to remove muscle artifacts from single-channel EEG recordings. Moreover, considering the recently increasing need for biomedical signal processing in ambulatory situations, it is crucially important to develop single-channel techniques. In this work, we propose a simple, yet effective method to achieve muscle artifact removal from single-channel EEG, by combining ensemble empirical mode decomposition (EEMD) with multiset canonical correlation analysis (MCCA). We demonstrate the performance of the proposed method through numerical simulations and application to real EEG recordings contaminated with muscle artifacts. The proposed method can successfully remove muscle artifacts without altering the recorded underlying EEG activity. It is a promising tool for real-world biomedical signal processing applications.
Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K
2016-07-12
We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.
Jiani Heng
2016-01-01
Full Text Available Power load forecasting always plays a considerable role in the management of a power system, as accurate forecasting provides a guarantee for the daily operation of the power grid. It has been widely demonstrated in forecasting that hybrid forecasts can improve forecast performance compared with individual forecasts. In this paper, a hybrid forecasting approach, comprising empirical mode decomposition, the Cuckoo Search Algorithm (CSA), and a Wavelet Neural Network (WNN), is proposed. This approach constructs a more valid forecasting structure and more stable results than traditional ANN (Artificial Neural Network) models such as BPNN (Back Propagation Neural Network), GABPNN (Back Propagation Neural Network Optimized by Genetic Algorithm), and WNN. To evaluate the forecasting performance of the proposed model, a half-hourly power load in New South Wales of Australia is used as a case study in this paper. The experimental results demonstrate that the proposed hybrid model is not only simple but also able to satisfactorily approximate the actual power load and can be an effective tool in planning and dispatch for smart grids.
Jana, Prabhas; de la Pena O' Shea, Victor A.; Coronado, Juan M. [Thermochemical Process Unit, Instituto IMDEA Energia, C/Tulipan s/n 28933, Mostoles, Madrid (Spain); Serrano, David P. [Thermochemical Process Unit, Instituto IMDEA Energia, C/Tulipan s/n 28933, Mostoles, Madrid (Spain); Department of Chemical and Environmental Technology, ESCET, Rey Juan Carlos University, c/ Tulipan s/n, 28933 Mostoles, Madrid (Spain)
2010-10-15
A variety of unsupported cobalt catalysts was synthesized using the Pechini method and tested for CO₂-free H₂ production via methane decomposition. In order to study the influence of the synthesis conditions on the properties of the cobalt materials, the cobalt:citric acid (Co:CA) ratio was varied systematically (from 1:2 to 1:20). In addition, a study of the effect of the activation process on the catalyst activity was performed by activating the catalyst with H₂ or CH₄. In both activation processes, metallic cobalt with fcc structure was obtained, but the particle morphology varied with the activation treatment. The catalytic behavior was strongly influenced when the reduction procedure was performed under a methane atmosphere. Among the Co:CA ratios, the best results were obtained with the catalyst prepared with a Co:CA 1:20 ratio reduced in the presence of methane, which produced 6.47 mol of H₂ per mol of cobalt without showing deactivation over the 30-min reaction period. (author)
Yueyue Liu
2015-01-01
Full Text Available This paper studies a production scheduling problem with deteriorating jobs, which frequently arises in contemporary manufacturing environments. The objective is to find an optimal sequence of the set of jobs to minimize the total weighted tardiness, which is an indicator of service quality. The problem is NP-hard, so the computational time required by an optimization algorithm increases exponentially with the number of jobs. To tackle large-scale problems efficiently, a two-stage method is presented in this paper. We partition the set of jobs into a few subsets by applying a neural network approach and thereby transform the large-scale problem into a series of small-scale problems. Then, we employ an improved metaheuristic algorithm (called GTS), which combines a genetic algorithm with tabu search, to find the solution for each subproblem. Finally, we integrate the obtained sequences for each subset of jobs and produce the final complete solution by enumeration. A fair comparison has been made between the two-stage method and GTS without decomposition, and the experimental results show that the solution quality of the two-stage method is much better than that of GTS for large-scale problems.
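The two-stage idea can be sketched as follows. A due-date partition stands in for the paper's neural-network clustering, and a weighted-due-date dispatch rule stands in for the GTS metaheuristic; the job data and the linear-deterioration model are illustrative assumptions:

```python
import random

random.seed(7)

# Job: (base processing time, deterioration rate, due date, weight).
jobs = [(random.uniform(1, 5), random.uniform(0.0, 0.01),
         random.uniform(10, 60), random.uniform(1, 3)) for _ in range(30)]

def total_weighted_tardiness(seq):
    """Linear-deterioration model: actual time = p * (1 + b * start time)."""
    t, twt = 0.0, 0.0
    for j in seq:
        p, b, d, w = jobs[j]
        t += p * (1.0 + b * t)
        twt += w * max(0.0, t - d)
    return twt

# Stage 1: partition jobs into small subsets by due date (a stand-in for
# the paper's neural-network partitioning).
order = sorted(range(len(jobs)), key=lambda j: jobs[j][2])
subsets = [order[i:i + 10] for i in range(0, len(order), 10)]

# Stage 2: sequence each subset with a weighted-due-date dispatch rule
# (a stand-in for the genetic-algorithm/tabu-search hybrid), then
# concatenate the subset sequences into the full schedule.
schedule = []
for sub in subsets:
    schedule += sorted(sub, key=lambda j: jobs[j][2] / jobs[j][3])

twt = total_weighted_tardiness(schedule)
```

Each subproblem has only 10 jobs, so even an expensive metaheuristic stays cheap per subset, which is the efficiency argument of the two-stage method.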
Ambrozinski, Lukasz; Stepinski, Tadeusz; Packo, Pawel; Uhl, Tadeusz
2012-02-01
Active ultrasonic arrays are very useful for structural health monitoring (SHM) of large plate-like structures. Large areas of a plate can be monitored from a fixed position but it normally requires precise information on material properties. Self-focusing methods can perform well without the exact knowledge of a medium and array parameters. In this paper a method for selective focusing of Lamb waves will be presented. The algorithm is an extension of the DORT method (French acronym for decomposition of time-reversal operator) where the continuous wavelet transform (CWT) is used for the time-frequency representation (TFR) of nonstationary signals instead of the discrete Fourier transform. The performance of the methods is compared and verified in the paper using both simulated and experimental data. It is shown that the extension of the DORT method with the use of TFR considerably improved its resolving ability. To experimentally evaluate the performance of the proposed method, a linear array of small piezoelectric transducers attached to an aluminum plate was used to obtain interelement responses, required for beam self-focusing on targets present in the plate. The array was used for the transmission of signals calculated with the DORT-CWT algorithm. To verify the self-focusing effect the backpropagated field generated in the experiment was sensed using laser scanning vibrometer.
Wang, Jun; Meng, Xiaohong; Guo, Lianghui; Chen, Zhaoxi; Li, Fang
2014-10-01
We present a correlation coefficient analysis (CCA) method for obtaining the threshold when using the singular value decomposition (SVD) filtering method to reduce noise in potential field data. Before the computation of correlation coefficients, SVD is performed on the gridded potential field data in order to obtain the singular values of the data. A sliding window is utilized to truncate the acquired singular values, which allows us to obtain different singular value sequences. The lower limit of the sliding window is generally set to zero, and the upper limit of the sliding window is the threshold. Then, we calculate and plot the correlation coefficients associated with the initial sequence and the newly obtained sequences, choosing the inflection point of the plotted correlation coefficients as the threshold. The CCA method offers a quantitative way to determine a threshold, which can be easily implemented by a computer program. We illustrate the method using synthetic datasets and field data from a metallic deposit area in the middle-lower reaches of the Yangtze River in China. The results show that the proposed method is effective and is able to provide an optimal threshold.
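A minimal numerical sketch of SVD truncation with a correlation-coefficient-based threshold choice, on a synthetic grid; the inflection-point rule below is a simple programmatic proxy for reading the plotted curve, not the paper's exact criterion:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic gridded "potential field": a smooth anomaly plus noise.
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
signal = np.exp(-(((x - 32) / 10.0) ** 2 + ((y - 28) / 8.0) ** 2))
data = signal + 0.05 * rng.standard_normal((ny, nx))

U, s, Vt = np.linalg.svd(data, full_matrices=False)

def reconstruct(k):
    """Keep the k largest singular values (window lower limit fixed at 0)."""
    return (U[:, :k] * s[:k]) @ Vt[:k]

# Correlation coefficient between the data and each truncated
# reconstruction; the flattening (inflection) of this curve marks
# where added components contribute mostly noise.
ks = range(1, 21)
corr = [np.corrcoef(data.ravel(), reconstruct(k).ravel())[0, 1] for k in ks]
gains = np.diff(corr)
# Simple inflection proxy: first k whose correlation gain drops below
# 1% of the first step's gain.
k_thr = int(np.argmax(gains < 0.01 * gains[0])) + 2
denoised = reconstruct(k_thr)
```

Because the clean anomaly here is low-rank, a small threshold already captures it, and the truncated reconstruction is closer to the true signal than the raw data.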
Rakytska, Tetyana; Truba, Alla; Radchenko, Evgen; Golub, Alexander
2015-12-01
In this article, we submit the description of synthesis and identification of manganese(II) complexes with pyrogenic nanosilica-immobilized (d_av = 10 nm; S_sp = 290 m²/g) hydroxyaldimine ligands (Mn(L)₂/Si): salicilaldiminopropyl (L1); 5-bromosalicilaldiminopropyl (L2); 2-hydroxynaphtaldiminopropyl (L3); 2-hydroxy-3-methoxybenzaldiminopropyl (L4); 2-hydroxy-3,5-dichloroacetophenoniminopropyl (L5); and 4-hydroxy-3-methoxybenzaldiminopropyl (L6). The ligands and complexes were characterized by UV-VIS and IR spectrometry. Nanocomposites consisting of the Mn(L)₂/Si complexes showed a high catalytic activity in low-temperature ozone decomposition in the range of concentrations between 2.1 × 10⁻⁶ and 8.4 × 10⁻⁶ mol/l. The number of catalytic cycles increased for isostructural pseudotetrahedral complexes Mn(L)₂/Si (L1-L5) in the following order: Mn(L3)₂ >> Mn(L4)₂ > Mn(L1)₂ > Mn(L2)₂ > Mn(L5)₂. In the case of pseudooctahedral complexes with L6, the change of coordination polyhedron does not influence the kinetics and stoichiometric parameters of the reaction.
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
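The Poisson random-time-change reformulation underlying this decomposition can be sketched for the birth-death model. Each reaction channel is driven by its own independent unit-rate Poisson clock (essentially the modified next-reaction method), and it is this per-channel separation of the noise sources that makes variance attribution to individual channels possible:

```python
import numpy as np

# Birth-death network: 0 -> X at rate b, X -> 0 at rate d * X.
# Random-time-change form: X(t) = X0 + Y1(b*t) - Y2(d * int_0^t X ds),
# with Y1, Y2 independent unit-rate Poisson processes, one per channel.
b, d, X0, T = 10.0, 1.0, 0, 10.0

def simulate(stream1, stream2):
    """Next-jump simulation driven by two independent random streams."""
    t, X = 0.0, X0
    tau1 = stream1.exponential()      # next firing point of Y1's internal clock
    tau2 = stream2.exponential()
    c1 = c2 = 0.0                     # elapsed internal time per channel
    while True:
        a1, a2 = b, d * X             # channel propensities
        # Physical wait until each channel's internal clock next fires.
        w1 = (tau1 - c1) / a1 if a1 > 0 else np.inf
        w2 = (tau2 - c2) / a2 if a2 > 0 else np.inf
        w = min(w1, w2)
        if t + w > T:
            return X
        t += w
        c1 += a1 * w
        c2 += a2 * w
        if w1 <= w2:                  # birth channel fires
            X += 1
            tau1 += stream1.exponential()
        else:                         # death channel fires
            X -= 1
            tau2 += stream2.exponential()

samples = [simulate(np.random.default_rng(2 * i), np.random.default_rng(2 * i + 1))
           for i in range(400)]
mean_X = np.mean(samples)             # stationary mean is b/d = 10
```

A Sobol-Hoeffding decomposition would then rerun such simulations with one stream resampled and the other held fixed, attributing portions of the output variance to each channel; that bookkeeping is omitted here.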
Pre-Stressing Timber-Based Plate Tensegrity Structures
Falk, Andreas; Kirkegaard, Poul Henning
2012-01-01
Tensile structures occur in numerous varieties utilising combinations of tension and compression. Introducing structural plates in the basic tensegrity unit and tensegric assemblies varies the range of feasible topologies and provides the structural system with an integrated surface. The present paper considers the concept of plate tensegrity based on CLT (cross-laminated timber) plates. It combines the principles of tensegrity with the principles of plate shells and is characterised by a plate shell stabilised by struts and cables. The paper deals with material aspects and robustness of timber-based plate shells and outlines needs, methods and effects of controlling cable stresses for secured capacity, form and function of plate tensegrity.
EEG sensor based classification for assessing psychological stress.
Begum, Shahina; Barua, Shaibal
2013-01-01
Electroencephalogram (EEG) reflects the brain activity and is widely used in biomedical research. However, analysis of this signal is still a challenging issue. This paper presents a hybrid approach for assessing stress using the EEG signal. It applies Multivariate Multi-scale Entropy Analysis (MMSE) for the data-level fusion. Case-based reasoning is used for the classification tasks. Our preliminary results indicate that EEG sensor based classification could be an efficient technique for evaluating the psychological state of individuals. Thus, the system can be used for personal health monitoring in order to improve users' health.
Daverman, Robert J
2007-01-01
Decomposition theory studies decompositions, or partitions, of manifolds into simple pieces, usually cell-like sets. Since its inception in 1929, the subject has become an important tool in geometric topology. The main goal of the book is to help students interested in geometric topology to bridge the gap between entry-level graduate courses and research at the frontier, as well as to demonstrate interrelations of decomposition theory with other parts of geometric topology. With numerous exercises and problems, many of them quite challenging, the book continues to be strongly recommended to everyone interested in the subject.
Orlando Soriano-Vargas
2016-12-01
Full Text Available Spinodal decomposition was studied during aging of Fe-Cr alloys by means of the numerical solution of the linear and nonlinear Cahn-Hilliard partial differential equations using the explicit finite difference method. Results of the numerical simulation made it possible to describe appropriately the mechanism, morphology and kinetics of phase decomposition during the isothermal aging of these alloys. The growth kinetics of phase decomposition was observed to occur very slowly during the early stages of aging and to increase considerably as the aging progressed. The nonlinear equation was observed to be more suitable than the linear one for describing the early stages of spinodal decomposition.
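A minimal explicit finite-difference integration of the nonlinear Cahn-Hilliard equation in 1D with periodic boundaries, illustrating the spinodal instability; the parameters are illustrative, not those of the Fe-Cr study:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1D Cahn-Hilliard, explicit finite differences on a periodic grid:
#   dc/dt = M * Lap( c^3 - c - kappa * Lap(c) )
n, dx, dt, M, kappa = 128, 1.0, 0.02, 1.0, 1.0
c = 0.01 * rng.standard_normal(n)      # small fluctuation around c = 0
mass0 = c.mean()

def lap(u):
    """Second-order periodic Laplacian."""
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

for _ in range(5000):                   # integrate to t = 100
    mu = c**3 - c - kappa * lap(c)      # chemical potential
    c = c + dt * M * lap(mu)

# The homogeneous state inside the spinodal is unstable, so the tiny
# fluctuation amplifies into near +/-1 domains while the conservative
# form of the update keeps the total mass constant.
```

The explicit scheme needs a small time-step because of the fourth-order (biharmonic) term; dt here sits below the stability limit for this grid spacing.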
Adherence to internet-based mobile-supported stress management
Zarski, A C; Lehr, D.; Berking, M.
2016-01-01
Background: Nonadherence to treatment is a prevalent issue in Internet interventions. Guidance from health care professionals has been found to increase treatment adherence rates in Internet interventions for a range of physical and mental disorders. Evaluating different guidance formats of varying intensity is important, particularly with respect to improvement of effectiveness and cost-effectiveness. Identifying predictors of nonadherence allows for the opportunity to better adapt Internet interventions to the needs of participants especially at risk for discontinuing treatment. Objective: The goal of this study was to investigate the influence of different guidance formats (content-focused guidance, adherence-focused guidance, and administrative guidance) on adherence and to identify predictors of nonadherence in an Internet-based mobile-supported stress management intervention (ie, GET.ON Stress…
Stress Resultant Based Elasto-Viscoplastic Thick Shell Model
Pawel Woelke
2012-01-01
Full Text Available The current paper presents enhancements introduced to the elasto-viscoplastic shell formulation, which serves as a theoretical base for the finite element code EPSA (Elasto-Plastic Shell Analysis) [1–3]. The shell equations used in EPSA are modified to account for transverse shear deformation, which is important in the analysis of thick plates and shells, as well as composite laminates. Transverse shear forces calculated from transverse shear strains are introduced into a rate-dependent yield function, which is similar to Iliushin's yield surface expressed in terms of stress resultants and stress couples [12]. The hardening rule defined by Bieniek and Funaro [4], which allows for representation of the Bauschinger effect on a moment-curvature plane, was previously adopted in EPSA and is used here in the same form. Viscoplastic strain rates are calculated taking into account the transverse shears. Only non-layered shells are considered in this work.
Tang, Christina Y; Downs, Anthony J; Greene, Tim M; Marchant, Sarah; Parsons, Simon
2005-10-03
Thermal decomposition of monochlorogallane, [H2GaCl]n, at ambient temperatures results in the formation of subvalent gallium species. To Ga[HGaCl3], previously reported, has now been added a second mixed-valence solid, Ga4[HGaCl3]2[Ga2Cl6] (1), the crystal structure of which at 150 K shows a number of unusual features. Adducts of monochlorogallane, most readily prepared from the hydrochloride of the base and LiGaH4 in appropriate proportions, include not only the 1:1 molecular complex Me3P.GaH2Cl (2), but also 2:1 amine complexes which prove to be cationic gallane derivatives, [H2Ga(NH2R)2]+Cl-, where R = tBu (3a) or sBu (3b). All three of these complexes have been characterized crystallographically at 150 K.
Lorin, E.; Yang, X.; Antoine, X.
2016-06-01
The paper is devoted to develop efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, which does not carry a small enough rescaled Planck constant for asymptotic methods (e.g. geometric optics) to produce a good accuracy, but which is too computationally expensive if direct methods (e.g. finite difference) are applied. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be directly used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which are seamless integrations of semiclassical approximation to Schwarz Waveform Relaxation methods. Two versions are proposed respectively based on Herman-Kluk propagation and geometric optics, and we prove the convergence and provide numerical evidence of efficiency and accuracy of these methods.
Yu, Yong-Jie; Wu, Hai-Long; Shao, Sheng-Zhi; Kang, Chao; Zhao, Juan; Wang, Yu; Zhu, Shao-Hua; Yu, Ru-Qin
2011-09-15
A novel strategy that combines a second-order calibration method based on trilinear decomposition algorithms with high performance liquid chromatography with diode array detection (HPLC-DAD) was developed to mathematically separate overlapped peaks and to quantify quinolones in honey samples. The HPLC-DAD data were obtained within a short time in isocratic mode. The developed method could be applied to determine 12 quinolones simultaneously, even in the presence of uncalibrated interfering components in a complex background. To assess the performance of the proposed strategy for the determination of quinolones in honey samples, the figures of merit were employed. The limits of quantitation for all analytes were within the range 1.2-56.7 μg kg⁻¹. The work presented in this paper illustrates the suitability and interesting potential of combining a second-order calibration method with a second-order analytical instrument for multi-residue analysis in honey samples.
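The trilinear (PARAFAC-type) decomposition at the heart of such second-order calibration can be sketched with a plain alternating-least-squares loop on synthetic elution x spectrum x sample data; the specific algorithm and data below are stand-ins, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic HPLC-DAD-like cube: elution profiles x spectra x samples,
# generated from a rank-2 trilinear model plus a little noise.
I, J, K, R = 50, 40, 6, 2
t = np.linspace(0, 1, I)[:, None]
A_true = np.exp(-((t - np.array([0.35, 0.55])) / 0.08) ** 2)   # elution peaks
w = np.linspace(0, 1, J)[:, None]
B_true = np.exp(-((w - np.array([0.3, 0.7])) / 0.15) ** 2)     # spectra
C_true = rng.uniform(0.2, 1.0, (K, R))                         # concentrations
X = np.einsum('ir,jr,kr->ijk', A_true, B_true, C_true)
X += 0.001 * rng.standard_normal(X.shape)

def kr(P, Q):
    """Column-wise Khatri-Rao product."""
    return (P[:, None, :] * Q[None, :, :]).reshape(-1, P.shape[1])

# Alternating least squares for the trilinear (PARAFAC) model: update
# each factor matrix in turn from the matching unfolding of X.
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
for _ in range(200):
    A = X.reshape(I, J * K) @ np.linalg.pinv(kr(B, C)).T
    B = np.transpose(X, (1, 0, 2)).reshape(J, I * K) @ np.linalg.pinv(kr(A, C)).T
    C = np.transpose(X, (2, 0, 1)).reshape(K, I * J) @ np.linalg.pinv(kr(A, B)).T

X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

The recovered per-sample loadings in C are what carry the quantitative information; the second-order advantage is that this works even when an uncalibrated interferent contributes an extra component.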
Yujie Wei; Yongheng Jiang; Dexian Huang⁎
2014-01-01
This paper introduces a practical solving scheme for grade-transition trajectory optimization (GTTO) problems under a typical certificate-checking-updating framework. Due to the complicated kinetics of polymerization, differential/algebraic equations (DAEs) always cause a great computational burden, and system nonlinearity usually makes GTTO non-convex with multiple optima. Therefore, coupled with the three-stage decomposition model, a three-section algorithm of dynamic programming (TSDP) is proposed based on the general iteration mechanism of iterative dynamic programming (IDP) and incorporated with an adaptive-grid allocation scheme and heuristic modifications. The algorithm iteratively performs dynamic programming with heuristic modifications under constant calculation loads and adaptively allocates the valued computational resources to the regions that can further improve the optimality under the guidance of local error estimates. TSDP is finally compared with IDP and the interior point method (IP) to verify its computational efficiency.
Su, Xiao-Xing; Wang, Yue-Sheng; Zhang, Chuanzeng
2017-05-01
A time-domain method for calculating the defect states of scalar waves in two-dimensional (2D) periodic structures is proposed. In the time-stepping process of the proposed method, the column vector containing the spatially sampled field values is updated by multiplying it with an iteration matrix, which is written in matrix-exponential form. The matrix exponential is first computed using a fourth-order Suzuki-decomposition-based technique, in which the Floquet-Bloch boundary conditions are incorporated. The obtained iteration matrix is then squared to enlarge the time-step that can be used in the time-stepping process (the squaring technique), and the small nonzero elements in the iteration matrix are finally pruned to improve the sparse structure of the matrix (the pruning technique). Numerical examples of super-cell calculations for 2D defect-containing phononic crystal structures show that the fourth-order decomposition-based technique for the matrix-exponential computation is much more efficient than the frequently used precise integration technique (PIT) if the PIT is of an order greater than 2. Although it is not unconditionally stable, the proposed time-domain method is particularly efficient for super-cell calculations of the defect states in a 2D periodic structure containing a defect with a wave speed much higher than those of the background materials. For this kind of defect-containing structure, the time-stepping process can run stably for a sufficiently large number of time-steps with a time-step much larger than the Courant-Friedrichs-Lewy (CFL) upper limit, and consequently the overall efficiency of the proposed time-domain method can be significantly higher than that of the conventional finite-difference time-domain (FDTD) method. Some physical interpretations of the properties of the band structures and the defect states of the calculated periodic structures are also presented.
A Frequency-Domain Blind Deconvolution Algorithm Based on Parallel Factor Decomposition
李剑; 杨贤
2012-01-01
To address the problem of blind separation of convolutive mixtures, this paper proposes a blind separation algorithm based on tensor parallel factor decomposition (PARAFAC). First, the frequency-domain correlation matrices of the received signals are stacked into a third-order tensor. Parallel factor decomposition is then applied to this tensor. Finally, the mixing matrix is estimated without permutation ambiguity using a K-means-clustering-based full-permutation disambiguation algorithm. Simulation experiments computing the similarity coefficients between the separated signals and the source signals show that the proposed algorithm achieves good separation performance for convolutive mixtures. It is also simple to implement and can meet the requirements of practical engineering applications.
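The central tensor step, a PARAFAC (CP) decomposition by alternating least squares, can be sketched in a few lines. This is a minimal generic CP-ALS sketch on a synthetic tensor, not the paper's algorithm: the frequency-correlation stacking and the K-means permutation disambiguation are omitted, and all sizes and the rank are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a third-order tensor (C-order columns)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product; row index runs as (i * B_rows + j)."""
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def parafac_als(T, rank, n_iter=200, seed=0):
    """Rank-`rank` CP (PARAFAC) decomposition by alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        # Each factor solved in closed form with the other two held fixed
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Demo: recover a rank-2 structure from a noiseless synthetic tensor
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (5, 6, 7))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = parafac_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

CP's essential-uniqueness property (factors recovered up to scaling and a common column permutation) is what lets this kind of algorithm estimate the mixing matrix without the per-frequency permutation ambiguity of classical frequency-domain ICA.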
Litter Decomposition Rates, 2015
U.S. Geological Survey, Department of the Interior — This data set contains decomposition rates for litter of Salicornia pacifica, Distichlis spicata, and Deschampsia cespitosa buried at 7 tidal marsh sites in 2015....
Aluminum sheet-based S-doped TiO2 for photocatalytic decomposition of toxic organic vapors
Wan-Kuen Jo; Hyun-Jung Kang
2014-01-01
S-doped TiO2 (S-TiO2) films were immobilized on flexible low-cost aluminum sheets (S-TiO2-AS) using a sol-gel dipping process and low post-processing temperatures. The photocatalytic degradation of toxic organic vapors using the prepared films was evaluated in a continuous-flow glass tube under visible light exposure. The surface properties of the S-TiO2-AS and TiO2-AS films were examined by scanning electron microscopy, energy-dispersive X-ray spectroscopy, X-ray diffraction, and ultraviolet-visible spectroscopy. The photolysis of benzene, toluene, ethyl benzene, and xylene (BTEX) did not occur on the bare AS. In contrast, the photocatalytic degradation efficiencies of the target pollutants using S-TiO2-AS were higher than those obtained using the reference TiO2-AS photocatalyst. In particular, the average photocatalytic degradation efficiencies of BTEX using S-TiO2-0.8-AS (S/Ti ratio = 0.8) over a 3-h process were 34%, 78%, 91%, and 94%, respectively, whereas those of TiO2-AS were 2%, 11%, 21%, and 36%, respectively. The photocatalytic decomposition efficiencies of BTEX under visible irradiation using S-TiO2-AS increased with increasing S/Ti ratios from 0.2 to 0.8, but decreased when the ratio was further increased to 1.6. Thus, S-TiO2-AS should be prepared using an optimal S/Ti ratio. The degradation of BTEX over S-TiO2-AS depended on the air flow rate and the initial concentration of the target chemical. Overall, under optimal conditions, S-TiO2-AS can be effectively applied for the purification of toxic organic vapors.